Subbu Allamaraju’s Journal

Contemporary Views on Serverless and Implications

We want near-instantaneous elasticity of resources, without having to pre-allocate resources or pay for more than we need. We also want all the operational best practices baked into a runtime, freeing us from worrying about most of the low-level automation, operations, and robustness required to run our code. These have been two of the most fundamental pursuits of cloud computing for nearly a decade, and serverless is the closest we have come to realizing them.

However, looking back at 2018, I found the year confusing for serverless in its message, value, and direction. Despite the potential, the availability of a number of frameworks, the publication of several books, several conferences worldwide on this subject, and, more importantly, continued enterprise adoption, we have been slow to realize the benefits.

What may be holding us back are our mental models and views on serverless. As a recent paper “Serverless Computing: One Step Forward, Two Steps Back” noted, “the notion of serverless computing is vague enough to allow optimists to project any number of possible broad interpretations on what it might mean.” I couldn’t agree more.

In this post, I want to summarize three contemporary views of what counts as serverless, and the implications of these views. My goal is to show that our views determine the outcomes and that unless we refine our views, we may not find a better future for ourselves.

1. Serverless as someone else managing your servers

In this view, a serverless capability shifts the operational responsibilities to a provider, so that you, as the consumer of that capability, do not have to think about managing servers. All the associated responsibilities, such as server provisioning, operating system upgrades, maintenance, and capacity management, therefore shift from the consumer to the provider. This view supposedly frees you from thinking about “ops” and lets you focus on your code.

In the extreme, you can extend this view to classify any service that someone else runs as serverless. Here I’m using Wikipedia’s definition of a service as “a discrete unit of functionality that can be accessed remotely and acted upon and updated independently”. The slide below, from Kelsey Hightower’s tweet, exemplifies this point of view of serverless as an operational construct. At the time of writing, I’m not aware of the original author of this slide.

Per this view, any cloud service qualifies as “serverless”. You can grade each service by its “degree of serverless-ness” based on how easily it delivers agility, elasticity, and cost efficiency. The more easily a service delivers these qualities, the more serverless it is. That is what you see in the slide above, from left to right.

CNCF’s serverless working group also falls into this view when describing Backend-as-a-Service (BaaS):

Backend-as-a-Service (BaaS), which are third-party API-based services that replace core subsets of functionality in an application. Because those APIs are provided as a service that auto-scales and operates transparently, this appears to the developer to be serverless.

BaaS is a jargon word for multi-tenant middleware services.

A key limitation of this view is that it constrains you to outsourcing the heavy lifting to a provider while ignoring another defining property of serverless: it also includes a programming model and runtime environment that lets you write and run your code.

For example, consider S3 and Lambda. Both are multi-tenant, elastic, auto-provisioned, pay-per-use services, yet one offers you an API to store objects while the other gives you an opinionated programming framework and runtime to write and run your code.
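To make this contrast concrete, here is a minimal sketch in Python using boto3. The bucket name, key, and handler body are illustrative assumptions, not anything from a real deployment: with S3 you call an API to store an object, whereas with Lambda you hand the service a function written to its handler contract and it runs that code for you.

```python
import json

import boto3

# S3: an API to store objects. You call the service; you never hand it code.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-bucket",               # illustrative bucket name
    Key="greetings/hello.json",
    Body=json.dumps({"message": "hello"}),
)

# Lambda: an opinionated programming framework and runtime. You write a
# function to the handler contract below, and the service invokes it with an
# event payload and runtime context whenever a configured trigger fires.
def handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"echo": event})}
```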

This view also favors cloud providers like AWS that operate many managed services like those in the slide above, but not the portable open source serverless frameworks that exist in the wild today. Most open source and third-party serverless solutions require you to provision some resources and manage them yourself. For example, running a framework like OpenFaaS or Kubeless on Kubernetes is not serverless per this view, since you still need to provision, manage, upgrade, and secure your Kubernetes clusters.

2. Serverless as functions and events

In this point of view, serverless is a programming model consisting of small units of code written as functions, triggered by events through a declarative configuration. It is the idea of microservices taken to its logical limit. In this view, functions and events are the developer-facing abstractions for writing applications.
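As a rough sketch of this programming model (the event payload shape and the trigger wiring below are illustrative assumptions; each framework has its own declarative syntax), the developer writes only a small function and declares, outside the code, which event should invoke it:

```python
# The developer writes a small function; a declarative configuration, kept
# outside the code (for example a YAML manifest, depending on the framework),
# binds it to an event source such as "object created in a bucket".
def resize_image(event, context):
    """Invoked by the framework whenever the declared event fires."""
    for record in event.get("records", []):   # illustrative payload shape
        bucket, key = record["bucket"], record["key"]
        # ... fetch the object, resize it, write the result back ...
        print(f"would resize {key} from {bucket}")

# Conceptually, the declarative configuration says something like:
#   function: resize_image
#   trigger:  object-created events on bucket "uploads"
# The framework, not the developer, decides where and when this code runs.

if __name__ == "__main__":
    # Simulate one event locally.
    resize_image({"records": [{"bucket": "uploads", "key": "cat.png"}]}, None)
```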

This view focuses entirely on developer-facing abstractions. Per this view, any framework offering functions and events as abstractions is serverless; it does not really matter who operates the runtime or the resources that runtime needs. Below is a tweet by Chad Arimura, who leads Oracle’s Fn Project, that summarizes this preference for developer experience.

I’ve heard other proponents of Kubernetes-based function frameworks express a similar view. Since this view does not concern itself with who runs the servers, it provides the broadest umbrella for several open source projects to offer innovative and fun-to-use function- and event-based programming frameworks.

Though developer experience is important, this view ignores properties like elasticity, cost efficiency, and lower operational overhead. You might also wonder if this view is nothing but a reverse-engineering of AWS Lambda to recreate the developer experience without the cost and operational efficiencies.

3. Serverless as functions as a service (FaaS)

This most commonly used view of serverless describes what AWS Lambda originally offered in 2014, and what a few other cloud providers have since followed. It incorporates both a function- and event-based stateless programming model and a runtime offered as a service. In addition, you have access to a rich set of cloud services for middleware functions.
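A minimal sketch of this combination, assuming a hypothetical orders table and an HTTP-style event (neither is from this post): the function itself stays stateless and ephemeral, while state and other middleware concerns are pushed to managed services.

```python
import json

import boto3

# State lives in a managed service, not in the function. The table name is a
# hypothetical example.
dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")

def handler(event, context):
    """A FaaS-style handler: parse the triggering event, delegate state to a
    managed service, return a response, and disappear."""
    order = json.loads(event["body"])      # assumed HTTP-style event payload
    orders.put_item(Item={"orderId": order["id"], "status": "received"})
    return {"statusCode": 202, "body": json.dumps({"orderId": order["id"]})}
```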

While this view serves certain stateless application patterns very well, it also pigeonholes us into not thinking beyond functions and events. We cannot express every one of today’s and tomorrow’s programming problems as loosely coupled, stateless, ephemeral, event-triggered functions.

No other work describes the consequences of this pigeonholing better than the recently released UC Berkeley paper “Serverless Computing: One Step Forward, Two Steps Back”. Below are some highlights from this paper.

  1. For a model training problem: “Lambda’s limited resources and data-shipping architecture mean that running this algorithm on Lambda is 21× slower and 7.3× more expensive than running on EC2.”
  2. For a low-latency prediction serving via batching problem: “This “serverful” version (that replaces Lambda and SQS with EC2 and ZeroMQ respectively) had a per batch latency of 2.8ms — 127× faster than the optimized Lambda implementation.” Text in parentheses is mine.
  3. For a distributed computing problem: “in the (unachievable) best-case scenario — when each leader is elected immediately after it joins the system — the system will spend 1.9% of its aggregate time simply in the leader election protocol. Even if you think this is tolerable, note that using DynamoDB as a fine-grained communication medium is incredibly expensive: Supporting a cluster of 1,000 nodes costs at minimum $450 per hour.”

There are several other undocumented examples like these. For instance, I would not pigeonhole many Apache Spark-powered large-scale data crunching solutions into event-triggered functions.
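For a sense of why, here is a sketch of a typical Spark aggregation (the input path, output path, and column names are illustrative assumptions): a long-running, shuffle-heavy job like this depends on a cluster holding large intermediate state across stages, and does not decompose naturally into short-lived, stateless, event-triggered functions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# A shuffle-heavy aggregation over a large dataset; paths and column names
# are illustrative assumptions.
spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")
daily_revenue = (
    orders
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")
```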

Cloud-native managed services may fill the void for solving such problems while still offering the elasticity, cost efficiency, and operational efficiency of serverless. I made such an argument in the past, and yet I recognize that waiting for such developments does nothing but stifle experimentation and innovation.

What is next?

Where do these views lead us? Not far from where we are today.

Though I can’t back this up with numbers, it is very likely that only a tiny fraction of a percent of today’s worldwide compute capacity is used to run serverless workloads. The serverless opportunity is nearly infinite, and it is clear that today’s views on serverless won’t get us to the point of providing near-instantaneous elasticity, cost efficiency, and operational efficiency for most programming problems. It is foolish to assume that we have all the serverless primitives necessary to solve all current and future programming problems.

Yet, I’m hopeful about the future. More work like the UC Berkeley paper above will continue to shine a light on the limitations of contemporary views on serverless.

I also look forward to us acknowledging that the event-triggered function as the primary developer-facing abstraction is just one of several possibilities, and that we need new types of frameworks offered as services to solve other kinds of problems.

Tomorrow’s serverless offerings will likely be “frameworks as services”, with “function as a service” being just one possibility to solve a certain class of stateless programming problems.
