Architectural Implications of Function-as-a-Service Computing
Chances are you have heard the rapidly growing term "serverless," or have even worked with serverless offerings such as AWS Lambda. Serverless cloud services are popular because they provide fine-grained provisioning of resources that scale automatically with user demand. Function-as-a-Service (FaaS) applications follow this serverless model: the developer provides the application as a set of functions that are executed in response to user- or system-generated events.
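To make the model concrete, here is a minimal function in the style of AWS Lambda's Python runtime, where a handler receives an event and the platform manages all server provisioning. The event contents are hypothetical and only for illustration.

```python
# A minimal FaaS-style handler, sketched for illustration.
# The (event, context) signature follows AWS Lambda's Python runtime;
# the event fields used here are made up for this example.

def handler(event, context):
    """Runs in response to a single event; the platform provisions the servers."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# The platform invokes the handler once per event; locally we can simulate one:
print(handler({"name": "FaaS"}, None))  # → {'statusCode': 200, 'body': 'Hello, FaaS!'}
```

The developer writes only this function; scheduling, scaling, and the servers themselves are the provider's concern.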
The emerging FaaS paradigm got us thinking about its implications for future cloud servers. Specifically, we were curious whether short-lived, deeply virtualized function executions impose overheads on contemporary cloud processors. So we took the commercial-grade Apache OpenWhisk FaaS platform and carefully ran it on real servers to identify the architectural implications of FaaS serverless computing.
We found that FaaS workloads, along with the way FaaS inherently interleaves short functions from many tenants, frustrate many of the locality-preserving architectural structures common in modern processors. Some of our findings have also been reported by others, including a noticeable slowdown of containerized execution compared to native execution and the significant latency overhead of cold starts. However, we also uncovered deeper architectural overheads. For instance, short functions suffer considerably higher branch misprediction rates, per-invocation memory bandwidth usage is much higher than under native execution, and inter-function interference has a major impact on the server's IPC. I invite you to read our paper for these and more detailed observations.
Given the limitations we faced early on in the project, we developed a testing and profiling tool called FaaSProfiler. We have open-sourced it, as we think it can accelerate other serverless research endeavors. Here are some of its useful components:
- Easy invocation of various functions with various invocation patterns. You can specify synthetic distributions or feed in traces of inter-arrival times to describe complex invocation patterns in a rich JSON format.
- Automated handling of large amounts of performance and profiling data.
- A number of representative benchmarks and microbenchmarks.
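As a sketch of the first component, one might generate a synthetic inter-arrival trace (here, a Poisson process) and embed it in a JSON workload description. The JSON field names below are hypothetical, not FaaSProfiler's actual schema; consult the FaaSProfiler repository for the real configuration format.

```python
import json
import random

random.seed(42)

# Exponential inter-arrival times (a Poisson process) at roughly
# 20 invocations per second, expressed in milliseconds.
rate = 20.0
trace_ms = [random.expovariate(rate) * 1000 for _ in range(100)]

# Hypothetical workload description; field names are illustrative only.
config = {
    "test_name": "example_run",             # hypothetical field
    "instances": [
        {
            "application": "base64_json",   # example function name
            "interarrivals_ms": trace_ms,   # hypothetical field
        }
    ],
}

# Serialize to JSON so a profiling tool could consume it.
print(json.dumps(config)[:60], "...")
```

Feeding in a trace recorded from a production system instead of a synthetic distribution would only change how `trace_ms` is produced; the surrounding JSON stays the same.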
Related talks:
-
Title: Architectural Implications of FaaS Computing [Slides] [Video]
Speaker: Mohammad Shahrad
Venue: OpenWhisk Technical Interchange Call
Date: 10/30/2019
-
Title: Architectural Implications of Function-as-a-Service Computing
Speaker: Jonathan Balkind
Venue: 52nd IEEE/ACM International Symposium on Microarchitecture, Columbus, Ohio
Date: 10/16/2019
-
Title: Serverless Computing, An Architectural Perspective
Speaker: Mohammad Shahrad
Venue: Electrical and Computer Engineering Department, University of Waterloo, Canada
Date: 10/10/2019
You may also access the paper PDF for free through the ACM Author-Izer link below:
MICRO '52 Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019
Related publication:
-
Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider
M. Shahrad, R. Fonseca, I. Goiri, G. Chaudhry, P. Batum, J. Cooke, E. Laureano, C. Tresness, M. Russinovich, and R. Bianchini
2020 USENIX Annual Technical Conference (ATC '20)
[Received the Community Award at USENIX ATC '20.]