
Highlights from the launch of Amazon ElastiCache Serverless

ElastiCache Serverless has true autoscaling for everything—except cost.

Khawaja Shams
Author


One of AWS’s big announcements at re:Invent 2023 was the launch of Amazon ElastiCache Serverless. I sincerely congratulate the team behind the release! It’s an exciting new offering that solves many pain points of the original, non-serverless service. It still falls short of the serverless litmus test, though.

Caching is obviously a topic near and dear to our hearts here at Momento, so I put together a post with some highlights from the ElastiCache Serverless release, along with some questions about its approach. Spoiler: Momento Cache is the answer.

Amazon ElastiCache Serverless launch highlights

Simplicity

As a staunch advocate for simplicity, I love the “radically simple” approach. This fits well with the serverless hypothesis. No more picking instance types, making sure you have enough bandwidth, worrying about the TPS limits on nodes, or deciding the number of replicas.

Native Memcached and Redis API support

ElastiCache Serverless supports the Memcached and Redis APIs natively. This makes migration from existing serverful stacks easier! It really embraces the “radically simple” framing.
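
To give a feel for that migration path, here is a minimal sketch using the open-source redis-py client. The endpoint hostname is a placeholder, and TLS on port 6379 is an assumption about the cache configuration rather than something confirmed in the launch materials.

```python
# Minimal sketch: pointing an existing redis-py client at an
# ElastiCache Serverless endpoint. The hostname below is a placeholder,
# and TLS on port 6379 is assumed.
import redis

cache = redis.Redis(
    host="my-cache-abc123.serverless.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,
    decode_responses=True,
)

cache.set("greeting", "same Redis API, new backend")
print(cache.get("greeting"))
```

The point is that existing application code keeps calling the same commands; only the endpoint changes.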

Multi-AZ by default

You don’t have to worry about setting up high availability or paying more for it. Radically simple again!

Built-in VPC support

It works with VPCs on day one. Unfortunately, a VPC is also required: if your Lambda functions currently run outside a VPC, you have to move them into one to access ElastiCache Serverless.
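
To make that constraint concrete, here is a hedged sketch in AWS CDK for Python of what attaching a Lambda function to a VPC looks like. The construct names, runtime, and asset path are illustrative assumptions, not a prescribed setup.

```python
# Sketch only: a Lambda function placed inside a VPC so it can reach an
# ElastiCache Serverless endpoint. All names and paths are illustrative.
from aws_cdk import Stack, aws_ec2 as ec2, aws_lambda as _lambda
from constructs import Construct

class CacheClientStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Assumed VPC; in practice this would be the VPC where the cache lives.
        vpc = ec2.Vpc(self, "CacheVpc", max_azs=2)

        # The function must run inside the VPC to reach the cache endpoint.
        _lambda.Function(
            self,
            "CacheClientFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("src"),  # hypothetical handler directory
            vpc=vpc,
            vpc_subnets=ec2.SubnetSelection(
                subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS
            ),
        )
```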

No more autoscaling groups

This is awesome! But there are some caveats. The launch benchmark shows capacity doubling roughly every 10 minutes, so a sudden spike is not absorbed instantly. Serverful ElastiCache, by contrast, lets you define auto-scaling rules that proactively provision capacity ahead of anticipated load.

The other caveat is that your spike may not be 2x the baseline load. It may in fact be 10x, in which case it could take up to 40 minutes to scale up. The spike may be over by then…
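
Putting rough numbers on that, here is a back-of-the-envelope sketch assuming the double-every-10-minutes behavior from the launch benchmark holds steadily:

```python
# Back-of-the-envelope sketch: how long a "double every 10 minutes"
# scaling model takes to absorb a traffic spike. Assumes the doubling
# rate from the launch benchmark holds steadily, which real systems may not.
import math

DOUBLING_PERIOD_MINUTES = 10

def minutes_to_absorb(spike_multiplier: float) -> int:
    """Minutes of doubling needed to grow capacity by `spike_multiplier`."""
    doublings = math.ceil(math.log2(spike_multiplier))
    return doublings * DOUBLING_PERIOD_MINUTES

for spike in (2, 4, 10):
    print(f"{spike}x spike: ~{minutes_to_absorb(spike)} minutes to catch up")
# 2x -> ~10 min, 4x -> ~20 min, 10x -> ~40 min
```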

Pay-per-use pricing

Close, but no cigar. The first 0-1 GB of storage costs $90/month. Beyond that 1 GB, you pay a prorated per-GB cost plus 0.34 cents ($0.0034) per GB of data transferred. Storage is rounded up to the nearest GB, so even 0.01 GB of data means a $90/month bill. A truly serverless service should scale to zero in price as well as capacity!
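
Here is a hedged sketch of how that floor plays out, using only the figures quoted above. The assumption that storage beyond 1 GB is billed at roughly the same ~$90 per GB-month rate is mine, not official pricing.

```python
# Hedged sketch of the monthly floor using the figures quoted above:
# ~$90/month for the first GB stored plus 0.34 cents per GB transferred.
# Illustrative assumptions only, not official AWS pricing.
import math

STORAGE_RATE_PER_GB_MONTH = 90.00   # the quoted ~$90/month for the first GB
TRANSFER_RATE_PER_GB = 0.0034       # 0.34 cents per GB of data transferred

def estimated_monthly_bill(gb_stored: float, gb_transferred: float) -> float:
    # Assumption: storage is rounded up to the nearest GB with a 1 GB minimum,
    # billed at roughly the same per-GB rate as the first GB.
    billable_gb = max(1, math.ceil(gb_stored))
    return billable_gb * STORAGE_RATE_PER_GB_MONTH + gb_transferred * TRANSFER_RATE_PER_GB

# A barely-used cache (0.01 GB stored, 1 GB transferred) still costs ~$90/month.
print(f"${estimated_monthly_bill(0.01, 1):.2f}")  # -> $90.00
# A service that truly scales to zero would bill ~$0 here.
```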

Final thoughts on Amazon ElastiCache Serverless

All in all, while it does not pass every criterion in the serverless litmus test, ElastiCache Serverless does a meaningfully better job on serverlessness than Aurora or OpenSearch.

I’m proud to say Momento still comes out ahead even with this huge enhancement for ElastiCache customers. Momento Cache meets every aspect of the serverless litmus test: nothing to provision or manage, usage-based pricing with no minimums, ready with a single API call, no planned downtime, and no instances to speak of. If you’re interested in our truly serverless replacement for ElastiCache Redis, we have compatible clients ready to go.

What do you think? Does Amazon ElastiCache Serverless tick all the boxes for you, or do you see room for improvement? Let @momentohq and me (@ksshams) know on X/Twitter!
