The AWS re:Invent conference gets bigger every year, which is no surprise considering AWS’ $13B annual revenue stream. The 32,000 attendees of re:Invent watched executives announce a flurry of new products, which may have been overwhelming for anyone not completely familiar with the AWS portfolio of services. Don’t worry, we have your back.
Here’s a rundown of the new AWS products and services introduced this year.
New instances, FPGAs, and GPUs for EC2
AWS kept up its tradition of announcing new instance types (virtual machines with various CPU/RAM/storage/network specifications) at the conference, rolling out new options across a number of its instance families (groups of instance types with similar characteristics).
In addition to releasing new instances backed with graphics processing units (GPUs), AWS will also offer elastic GPUs for EC2, a way for people to attach GPU resources to their existing VM instances. There are also new F1-branded VM instances that are accelerated with field-programmable gate arrays (FPGAs). The AWS service lets you write code, package it up as an image, and then run it as custom logic on the FPGAs.
A DigitalOcean Killer
The new service, Amazon Lightsail, offers a way for developers to quickly and easily get access to low-cost virtual private servers (VPS). They don’t need to worry about provisioning storage, security groups, or identity and access management (IAM) when they just want a box to run a simple web application.
AWS’ first AI services, including a conversational app framework called Lex
Following years of mounting interest in a type of artificial intelligence (AI) called deep learning, AWS announced its first Amazon AI services that make use of the technology.
There is the new Rekognition image recognition service — presumably drawing on the talent and technology from deep learning startup Orbeus, whose team Amazon hired in the past year. There is also the new Polly text-to-speech (TTS) service, which supports 47 voices and 24 languages.
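Polly is driven by a single SynthesizeSpeech API call. The sketch below shows how such a request might be assembled; the helper function is purely illustrative (it is not from any AWS sample), and "Joanna" is one of Polly's US English voices.

```python
def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    # Polly's SynthesizeSpeech API takes the text to speak, one of the
    # supported voices, and an audio output format such as mp3 or ogg_vorbis.
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

request = build_polly_request("Hello from re:Invent!")

# With AWS credentials configured, boto3 (the AWS SDK for Python) could
# submit the request; the response's AudioStream holds the synthesized audio:
# import boto3
# polly = boto3.client("polly")
# audio = polly.synthesize_speech(**request)
```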
But the most significant announcement is the launch of Amazon Lex. It’s effectively the technology underlying Alexa, Amazon’s voice-activated virtual assistant. Lex provides deep learning-powered automatic speech recognition and natural-language understanding.
Athena service for querying data in S3
Also announced this year was Athena, a tool for running queries on data stored in AWS’ widely used S3 cloud storage service. People can use standard Structured Query Language (SQL) with the service and don’t need to worry about setting up any infrastructure for it.
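An Athena query over S3-resident data is just ordinary SQL. The sketch below builds a query against a hypothetical web-log table (the table and bucket names are made up for illustration), with the actual submission via boto3 shown in comments since it requires AWS credentials.

```python
def build_athena_query(table, year):
    # Athena reads the underlying files (CSV, JSON, Parquet, ...) directly
    # from S3; there is no cluster to provision before querying.
    return (
        f"SELECT page, COUNT(*) AS hits FROM {table} "
        f"WHERE year = {year} GROUP BY page ORDER BY hits DESC LIMIT 10"
    )

query = build_athena_query("weblogs", 2016)

# Assumed usage with credentials configured (results land in the S3 bucket):
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=query,
#     ResultConfiguration={"OutputLocation": "s3://my-results-bucket/"},
# )
```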
AWS doesn’t believe Athena will overlap with the data processing tools that are available through its Elastic Map Reduce (EMR) service and its Redshift data warehousing service.
PostgreSQL support in the Aurora database engine
With PostgreSQL support being the top request from Aurora customers, AWS has decided to add this option to the managed cloud database engine. Amazon has let developers store and retrieve data using PostgreSQL through its Relational Database Service (RDS) since 2013.
The 100PB Snowmobile truck and 100TB Snowball Edge boxes to efficiently move data to the cloud
The first version of the Snowball data-transfer appliance was announced at last year’s re:Invent with a capacity of 50TB. Then, in April of this year, AWS showed off an 80TB version. Now AWS has added onboard computing resources, leveraging the AWS Lambda code execution platform, which opens up new possibilities for companies that need a Snowball.
Multiple Snowball Edge boxes — each of which has a 100TB capacity and a color touchscreen — can divvy up databases with sharding and sync data to S3. They can also run new AWS software called AWS Greengrass, which effectively brings the serverless event-driven computing model of AWS Lambda outside of AWS and onto other kinds of devices, including embedded devices.
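Sharding itself is a simple idea: route each record to a box based on a stable hash of its key, so the same key always lands on the same device. The pure-Python sketch below illustrates the technique; it is not Snowball Edge code, just a minimal model of how records could be divvied up across three boxes.

```python
import hashlib

def shard_for(key, num_shards):
    # A cryptographic hash gives a stable, well-distributed placement:
    # the same key always maps to the same shard number.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Hypothetical record keys spread across three Snowball Edge boxes.
records = ["user-1", "user-2", "user-3", "user-4"]
placement = {r: shard_for(r, 3) for r in records}
```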
Unit testing and debugging services, and a personal health dashboard
During the second keynote, AWS announced CodeBuild, which is meant to automatically compile developers’ code and then run unit tests on it.
AWS will charge by the minute and automatically scale the service in and out based on the needs of the workload; the service can also be customized.
X-Ray, a service to help developers debug their code, was also introduced. It will surface performance bottlenecks, show which services are causing issues, and show the impact of those issues on users. In addition to X-Ray, AWS is adding OpsWorks for Chef Automate, a fully managed version of a Chef server for automating the management of infrastructure.
AWS is giving customers a new Personal Health Dashboard that complements the existing Service Health Dashboard. This dashboard gives users a personalized view into the performance and availability of the AWS services they use, along with alerts that are automatically triggered by changes in the health of those services.
DDoS mitigation services
AWS said it has turned on distributed denial of service (DDoS) attack mitigation technology, called AWS Shield Standard, for all of its customers, free of charge. It protects against 96 percent of the most common attacks today, including SYN/ACK floods, reflection attacks, and HTTP slow reads.
To help customers prevent more sophisticated attacks, AWS is also introducing a premium tier called AWS Shield Advanced. It lets customers call in a special support team that’s available 24/7, and it includes customer notifications about attacks.
When AWS detects an attack, its DDoS protection teams will work with the customer to create the right level of protection using AWS’ web application firewall (WAF). They will also keep an eye on cost, making sure users don’t incur any additional charges from using the service.
A mobile analytics tool
Amazon Pinpoint, a mobile analytics service, will help developers understand the behavior of people using mobile apps. It lets developers send push notifications and then track their impact. It integrates with AWS’ existing Mobile Hub service, and it supports both iOS (Swift and Objective-C) and Android apps, with optional campaign analytics and A/B testing.
AWS Glue, for automated data integration
AWS also introduced Glue, a tool for automatically running jobs that clean up data from multiple sources and get it ready for analysis in other tools, such as business intelligence (BI) software. This type of work is typically known as extract-transform-load, or ETL. Companies including Informatica and Talend offer these types of software solutions; now AWS does too.
It’s been possible to use AWS infrastructure to do ETL work with services like EMR (Elastic Map Reduce), and the other big public clouds have Hadoop-based tools for this sort of thing, too. AWS Glue will make this easier, and with the help of JDBC connectors, it will be able to connect to data housed on-premises (or in other clouds).
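ETL at its smallest is easy to picture. The pure-Python sketch below (with made-up field names, not Glue code) extracts messy CSV rows, transforms them by dropping incomplete records and normalizing types, and leaves a clean result ready to load into a warehouse or BI tool.

```python
import csv
import io

# Extract: raw CSV with stray whitespace and a missing value.
raw = "name,amount\nalice, 10 \nbob,\ncarol,7\n"

def transform(rows):
    # Transform: skip rows with no amount; strip whitespace, coerce to int.
    for row in rows:
        amount = row["amount"].strip()
        if amount:
            yield {"name": row["name"].strip(), "amount": int(amount)}

# Load-ready records after the cleanup pass.
cleaned = list(transform(csv.DictReader(io.StringIO(raw))))
```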
Several enhancements to the Lambda event-driven computing service
AWS has added support for the C# programming language in its Lambda event-driven computing service, though Werner Vogels, the Amazon.com CTO who made the announcement, seemed reluctant to allow a language created by crosstown archrival Microsoft onto the platform. AWS also revealed a new capability called Lambda@Edge, which makes it possible to run Lambda functions at the edge locations where customers store media content around the world on its CloudFront content distribution network (CDN).
Also new: AWS Step Functions, which will allow developers to build full applications in the form of functions that are hooked up together. A visual editor makes it easy to connect multiple functions.
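Step Functions workflows are declared in the JSON-based Amazon States Language. As a minimal sketch, here is a two-state machine that chains two Lambda functions; the function ARNs are placeholders, not real resources.

```python
import json

# Hypothetical workflow: extract text, then detect its sentiment.
state_machine = {
    "StartAt": "ExtractText",
    "States": {
        "ExtractText": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Next": "DetectSentiment",
        },
        "DetectSentiment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:sentiment",
            "End": True,
        },
    },
}

# Step Functions accepts the definition as a JSON string.
definition = json.dumps(state_machine)
```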
Altogether this is a big refresh to Lambda, which AWS first introduced at re:Invent two years ago. At the time it was viewed as a revolutionary concept because it let developers execute code without needing to set up and manage the underlying compute and storage infrastructure.
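The programming model behind that concept is just a function that receives an event. A minimal Python handler looks like the sketch below; the event shape is invented for illustration, and the function can be invoked locally for testing since the context argument is unused here.

```python
def handler(event, context):
    # Lambda invokes this function with the triggering event (a dict for
    # JSON payloads) and a runtime context object.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation; context is unused in this sketch, so None suffices.
result = handler({"name": "re:Invent"}, None)
```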
AWS Batch for automating batch processing jobs
AWS released a preview of AWS Batch, a service for automating the deployment of batch processing jobs. In the past decade or so, people have relied on the Hadoop open-source big data software to do batch processing, and AWS and other public clouds have come up with managed versions of Hadoop and additional services that cater to batch and streaming workloads. Now AWS is trying to more directly meet the needs of developers who want to process lots of data automatically even if it doesn’t happen instantly.
AWS Batch is designed to work with containers as opposed to the more traditional virtual machines. Customers can provide the exact container images that need to be run on top of the AWS EC2 computing infrastructure. (Shell scripts and Linux executables are also supported, and it will be possible to run Lambda functions in the future.) Batch is also able to take advantage of the cheaper EC2 instances available on the AWS Spot market. Customers can specify the types of instances they’d like, as well as minimum and maximum compute resources.
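A job submission bundles the container job definition, the queue, and any compute overrides. The sketch below assembles such parameters; the names are hypothetical, and in practice the dict would be passed to boto3's batch.submit_job (commented out since it needs AWS credentials).

```python
def build_batch_job(name, queue, job_definition, vcpus=1, memory_mib=512):
    # Compute overrides let a single job definition run at different sizes.
    return {
        "jobName": name,
        "jobQueue": queue,
        "jobDefinition": job_definition,
        "containerOverrides": {
            "vcpus": vcpus,
            "memory": memory_mib,
        },
    }

# Hypothetical job aimed at a queue backed by cheaper Spot instances.
job = build_batch_job("nightly-etl", "spot-queue", "my-container-job:1")

# Assumed usage with credentials configured:
# import boto3
# batch = boto3.client("batch")
# batch.submit_job(**job)
```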
Open-source software for building your own container scheduler
AWS has open-sourced new software called Blox that lets developers create custom schedulers for use inside AWS’ EC2 Container Service (ECS). The first two components of the Blox software, a “reference scheduler” and a service for capturing data on clusters that can then be queried, are available now on GitHub under an Apache 2.0 license.
With this move, AWS is making its container deployment service easier to tinker with, offering an alternative to existing third-party schedulers such as Google-backed Kubernetes, Mesosphere’s Mesos, or Docker’s own Swarm.
Content source: VentureBeat. If you would like to read more about all of these new AWS products and services, see the original VentureBeat post. Enjoy!
For more information on AWS, AWS Managed Services, Cloud Cost Optimization, or hybrid cloud options, contact one of our infrastructure experts here.