Stop settling for almost serverless

Holding your full stack to the serverless standard is possible right now.

Meera Jindal
Author


The serverless dream is so close to coming true. We have serverless functions (AWS Lambda and Google Cloud Functions), serverless storage (Amazon S3 and Google Cloud Storage), and serverless databases (Amazon DynamoDB, Google Firestore, MongoDB Atlas, etc.). But when it’s time to accelerate with a cache, the momentum stalls: cold starts, hot keys, capacity provisioning, and a pile of caveats get in the way. The dream of fully serverless falls apart when you get to caching.

But it doesn’t have to. There’s a new caching solution that meets every true serverless requirement.

What makes a solution truly serverless?

Configuration: not your problem

Managed services for Redis and Memcached are the most popular caching solutions today. However, they fail the fundamental litmus test for serverless: customers must still handle capacity planning, configuration, management, maintenance, fault tolerance, and scaling. Most of the time, customers run over-provisioned cache clusters because scaling up and down is labor-intensive and risky. Software updates and security patches require careful pre-planning so that cache hit and miss rates are not impacted in the process. In fact, popular services like Amazon ElastiCache impose maintenance windows, forcing customers to either plan for downtime or further complicate their infrastructure by running multiple clusters with staggered maintenance windows.

A truly serverless cache seamlessly supports high scale, maintains a high cache hit rate, and delivers high availability without any work on the customer’s end. A truly serverless cache requires zero configuration and has zero planned downtime. Essentially, a serverless cache should just work, so customers can focus on their application’s core logic rather than spending precious cycles maintaining and tuning caching infrastructure.
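
For a sense of the gap, compare what connecting looks like in each world. The first half of this Python sketch uses the real redis-py cluster client; the `ServerlessCache` class in the second half is a hypothetical, dict-backed stand-in for the zero-configuration ideal, not a real SDK:

```python
# A self-managed Redis cluster pushes topology decisions into the app:
# the code has to know the endpoint the operator provisioned, and someone
# has to keep choosing node sizes and shard counts behind it.
from redis.cluster import RedisCluster

def connect_self_managed() -> RedisCluster:
    # The hostname here is a placeholder for a customer-managed endpoint.
    return RedisCluster(host="my-cluster.example.internal", port=6379)

# The zero-configuration ideal: the client needs a credential and nothing
# else. This class is a hypothetical stand-in (backed by a dict so the
# sketch runs); capacity and topology are the provider's problem.
class ServerlessCache:
    def __init__(self, api_key: str) -> None:
        self._data: dict[str, str] = {}  # provider-side storage, simulated

    def set(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)

cache = ServerlessCache(api_key="...")  # no endpoints, no node sizes, no ops
cache.set("greeting", "hello")
assert cache.get("greeting") == "hello"
```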

Hot keys: not your problem

A hot key is a popular key that receives far more traffic than other keys. In Redis and Memcached clusters, the key determines where in the cluster its data is stored. This means that when a key is hot, all requests for it hit a single node and get bottlenecked, hurting cache hit rates and the application experience. With legacy caches, customers try to work around this by scaling up vertically (which is not an option if they are already on the biggest node) or by refactoring the data model, which can take weeks.

A truly serverless cache detects hot keys and spreads them across additional nodes or shards, all behind the scenes. Customers no longer need to write extra software to monitor for hot keys, or spend weeks remediating one after it is discovered.
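
To see why a hot key bottlenecks on one node, consider Redis Cluster’s documented routing rule: every key maps to one of 16,384 hash slots via CRC16(key) mod 16384, and each slot is owned by exactly one primary. The Python sketch below uses binascii.crc_hqx, which computes the same CRC-16/XMODEM variant Redis uses:

```python
import binascii

NUM_SLOTS = 16384  # Redis Cluster's fixed slot count

def redis_hash_slot(key: str) -> int:
    # Redis Cluster routes keys with CRC16(key) mod 16384 (CRC-16/XMODEM);
    # binascii.crc_hqx with an initial value of 0 computes the same CRC.
    return binascii.crc_hqx(key.encode(), 0) % NUM_SLOTS

# Every request for the same key computes the same slot, so all traffic
# for a hot key lands on whichever single node owns that slot.
hot_key = "user:1234:profile"
print(redis_hash_slot(hot_key))  # always the same slot, request after request
```

No client-side setting changes that mapping; until the data model is refactored or the cluster is resharded, all traffic for that key converges on one node.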

Traffic bursts: not your problem

To handle bursts of traffic, caching clusters need to be scaled up before the traffic hits. This scale-up typically hurts cache hit rates, because new nodes always start cold. Prewarming new nodes requires additional work and complexity that customers simply do not have time to invest in.

A truly serverless cache handles bursts of load seamlessly without overprovisioning. It scales instantly under load without impacting cache hit rates.
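
To see why bursts and cold nodes interact badly, note that a node added mid-burst starts empty, so every early read it serves is a miss against the backing store. Below is a minimal Python sketch of the kind of prewarming a serverless cache has to automate; it is an illustration of the idea, not Momento’s actual mechanism, and all names are hypothetical:

```python
from collections import Counter

class Node:
    """A toy cache node: just a dict of key-value pairs."""
    def __init__(self) -> None:
        self.data: dict[str, str] = {}

def prewarm(new_node: Node, source: Node, access_counts: Counter, top_n: int = 100) -> None:
    # Copy the most-requested keys into the new node before it takes
    # traffic, so its first reads are hits instead of misses.
    for key, _ in access_counts.most_common(top_n):
        if key in source.data:
            new_node.data[key] = source.data[key]

existing = Node()
existing.data["user:1"] = "alice"
counts = Counter({"user:1": 9000, "user:2": 3})  # observed request counts

added = Node()  # a node added mid-burst starts completely cold
prewarm(added, existing, counts)
assert "user:1" in added.data  # the hot key is warm from request one
```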

Portability: not your problem

Many serverless technologies are locked to a single cloud provider. AWS Fargate, for instance, works in AWS, but it does not exist in GCP, Azure, or other clouds.

A truly serverless service works seamlessly across cloud providers, without requiring customers to change their application when they switch from one provider to another.
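
One common way to keep that portability is to hide the cache behind a narrow interface and confine provider specifics to adapters. A minimal Python sketch follows; the RedisCache adapter uses the real redis-py API, while the interface and helper names are illustrative:

```python
from typing import Protocol
import redis

class Cache(Protocol):
    # The only cache surface the application is allowed to depend on.
    def get(self, key: str) -> str | None: ...
    def set(self, key: str, value: str, ttl_seconds: int) -> None: ...

class RedisCache:
    """Adapter for a managed Redis; uses the real redis-py API."""
    def __init__(self, host: str, port: int = 6379) -> None:
        self._r = redis.Redis(host=host, port=port, decode_responses=True)

    def get(self, key: str) -> str | None:
        return self._r.get(key)

    def set(self, key: str, value: str, ttl_seconds: int) -> None:
        self._r.set(key, value, ex=ttl_seconds)

def handle_request(cache: Cache, user_id: str) -> str:
    # Application logic depends only on the Cache protocol; switching
    # providers means writing one new adapter, not touching this function.
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return cached
    value = f"profile-for-{user_id}"  # stand-in for a database read
    cache.set(f"user:{user_id}", value, ttl_seconds=300)
    return value
```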

Continuous testing: not your problem

Continuously testing an application’s integration with a serverless service is challenging unless there is a testing environment in the delivery pipeline. Setting up such an environment is extra work for application developers (e.g., endpoint configuration, SDK tuning), and even if they go to the trouble of doing so, environmental idiosyncrasies often cause unforeseen issues.

A truly serverless cache provides a holistic developer experience where developers can unit test their application code without having to set up a special environment for it.
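
Concretely, that usually means the production client and a local fake satisfy the same small interface, so unit tests run with no endpoints, credentials, or special environment. A self-contained Python sketch, with all names illustrative:

```python
class InMemoryCache:
    """Test double: same get/set surface as the production cache, no network."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> str | None:
        return self._data.get(key)

    def set(self, key: str, value: str, ttl_seconds: int) -> None:
        self._data[key] = value  # TTL ignored; tests control expiry explicitly

def load_profile(cache: InMemoryCache, user_id: str) -> str:
    # The code under test: a plain cache-aside read.
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return cached
    value = f"profile-for-{user_id}"  # stand-in for the real data source
    cache.set(f"user:{user_id}", value, ttl_seconds=300)
    return value

def test_load_profile_populates_the_cache() -> None:
    cache = InMemoryCache()
    assert load_profile(cache, "42") == load_profile(cache, "42")
    assert cache.get("user:42") is not None

test_load_profile_populates_the_cache()  # runs with no endpoints or credentials
```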

Does a truly serverless cache exist?

Yes. Momento Cache finally fills the gap in a fully serverless stack. It’s a cache built from the ground up to be serverless. Momento Cache requires zero configuration to set up, seamlessly handles high scale, maintains a high cache hit rate, and delivers high availability without the leaky abstractions typical of alternative solutions. Momento is a truly serverless cache because it:

  • Automatically detects hot keys and adds partitions to mitigate their impact
  • Warms cache nodes during scale-out, scale-in, and deployments to minimize impact on cache hit rates
  • Has no maintenance windows (planned downtime is so 2000)
  • Supports both AWS and GCP
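
For the curious, getting started looks roughly like the sketch below. It follows the shape of Momento’s Python SDK as documented at the time of writing; treat the exact class and method names as assumptions to verify against the current docs. The only input the client needs is an API key:

```python
from datetime import timedelta

from momento import CacheClient, Configurations, CredentialProvider
from momento.responses import CacheGet

# Names follow the Momento Python SDK's documented examples; verify them
# against the current SDK docs, as they may differ between versions.
client = CacheClient.create(
    Configurations.Laptop.v1(),
    CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
    default_ttl=timedelta(minutes=5),
)
client.create_cache("example-cache")
client.set("example-cache", "greeting", "hello")

response = client.get("example-cache", "greeting")
if isinstance(response, CacheGet.Hit):
    print(response.value_string)  # "hello"
```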

Engineers have spent too long dealing with half-baked serverless caching solutions. If you’re ready for the real serverless experience, get started with Momento for free today.
