Auto Scaling for Amazon DynamoDB
This fully managed cloud database, described as a "fast and flexible NoSQL database service for any scale," supports both document and key-value store models and is designed to deliver consistent, single-digit-millisecond latency at any scale.
The service has been tweaked so users can more easily manage resource consumption; until now, that was done through a provisioned capacity model in which users specified the read and write capacity their applications required.
With the service now being used for more serverless computing jobs, AWS announced DynamoDB Auto Scaling, which automates capacity management for a user's tables and global secondary indexes.
"You simply specify the desired target utilization and provide upper and lower bounds for read and write capacity," spokesperson Jeff Barr said in a recent blog post. "DynamoDB will then monitor throughput consumption using Amazon CloudWatch alarms and then will adjust provisioned capacity up or down as needed. Auto Scaling will be on by default for all new tables and indexes, and you can also configure it for existing ones.
"Even if you're not around, DynamoDB Auto Scaling will be monitoring your tables and indexes to automatically adjust throughput in response to changes in application traffic. This can make it easier to administer your DynamoDB data, help you maximize availability for your applications, and help you reduce your DynamoDB costs."
Developers can manage the feature -- now available in all regions -- through a command-line interface (CLI) or APIs, which provide controls such as enabling and disabling scaling policies. Provisioned capacity is billed at DynamoDB's regular prices.
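Under the hood, DynamoDB Auto Scaling relies on the Application Auto Scaling service, so the bounds and target utilization Barr describes map directly onto API calls. Below is a minimal Python (boto3) sketch of that flow; the table name, capacity bounds and 70 percent target are illustrative assumptions, not values from the announcement.

```
# Minimal sketch: configuring DynamoDB Auto Scaling via Application Auto Scaling.
# Assumes a table named "my-table" already exists; the bounds and the 70%
# target utilization are illustrative choices, not AWS defaults.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target, giving the
# lower and upper bounds for provisioned reads.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/my-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Attach a target-tracking policy: CloudWatch monitors consumption and
# provisioned read capacity is adjusted to hold utilization near the target.
autoscaling.put_scaling_policy(
    PolicyName="read-utilization-target",
    ServiceNamespace="dynamodb",
    ResourceId="table/my-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # desired target utilization, in percent
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

Write capacity follows the same pattern with the dynamodb:table:WriteCapacityUnits dimension and the DynamoDBWriteCapacityUtilization metric, and removing a policy with delete_scaling_policy is one way to disable scaling for a given dimension.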
Schema Conversion Tool Expansion
AWS and other cloud providers take pains to ease the importation of databases and data warehouses from other systems into their platforms, and the Amazon cloud has been at the forefront of such measures (see "AWS Eases Data Warehouse Imports" and "AWS Database Migration Service Now Available").
The AWS Schema Conversion Tool (SCT) "makes heterogeneous database migrations easy by automatically converting the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database," according to AWS.
One of the goals of the tool is to allow for easy migration of proprietary commercial data warehouses and databases into cloud-based, open source solutions, including these options:
Source Database | Target Database on Amazon RDS or Amazon Redshift
--- | ---
Oracle Database | Amazon Aurora, MySQL, PostgreSQL, MariaDB
Oracle Data Warehouse | Amazon Redshift
Microsoft SQL Server | Amazon Aurora, Amazon Redshift, MySQL, PostgreSQL, MariaDB
Teradata | Amazon Redshift
IBM Netezza | Amazon Redshift
Greenplum | Amazon Redshift
HPE Vertica | Amazon Redshift
MySQL and MariaDB | PostgreSQL
PostgreSQL | Amazon Aurora, MySQL, MariaDB
Amazon Aurora | PostgreSQL
That list includes new options enabled by the tool's latest enhancement, which expands support for legacy data warehouses.
"You can convert and extract data from legacy versions of Teradata (version 13 and above) and Oracle Data Warehouse (version 10g and above) for direct import into Amazon Redshift, without first performing an in-place upgrade," AWS said in a blog post.
As part of the tool's upgrade, the company made it easier for users to select the data to be imported by quickly locating target objects within complicated schemas and then filtering the exact data subsets to be migrated.
"In addition, conversion rules have been enhanced to convert more of the source schema to an open source database target automatically," AWS said.
The expansion follows a similar update last month that provided new SQL Server functionality.
The latest update is available for download for the Windows, Mac, Fedora and Ubuntu platforms.