Publications
Publications by category, in reverse chronological order.
2023
- Warming Up a Cold Front-End with Ignite. David Schall, Andreas Sandberg, and Boris Grot. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2023.
Serverless computing is a popular software deployment model for the cloud, in which applications are designed as a collection of stateless tasks. Developers are charged for the CPU time and memory footprint during the execution of each serverless function, which incentivizes them to reduce both runtime and memory usage. As a result, functions tend to be short (often on the order of a few milliseconds) and compact (128–256 MB). Cloud providers can pack thousands of such functions on a server, resulting in frequent context switches and a tremendous degree of interleaving. Consequently, when a given memory-resident function is re-invoked, it commonly finds its on-chip microarchitectural state completely cold due to thrashing by other functions, a phenomenon termed lukewarm invocation. Our analysis shows that the cold microarchitectural state due to lukewarm invocations is highly detrimental to performance, which corroborates prior work. The main source of performance degradation is the front-end, composed of instruction delivery, branch identification via the BTB, and conditional branch prediction. State-of-the-art front-end prefetchers show only limited effectiveness on lukewarm invocations, falling considerably short of an ideal front-end. We demonstrate that the reason for this is the cold microarchitectural state of the branch identification and prediction units. In response, we introduce Ignite, a comprehensive restoration mechanism for front-end microarchitectural state targeting instructions, BTB, and branch predictor via unified metadata. Ignite records an invocation’s control flow graph in compressed format and uses that to restore the front-end structures the next time the function is invoked. Ignite outperforms state-of-the-art front-end prefetchers, improving performance by an average of 43% by significantly reducing instruction, BTB, and branch predictor MPKI.
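As a rough illustration of the mechanism the abstract describes, below is a hypothetical, heavily simplified Python model of record-and-replay front-end restoration. Every name in it (FrontEndRestorer, BTB.insert, the 64-byte block granularity) is invented for this sketch; the actual Ignite design is a hardware mechanism whose details are in the paper.

```python
# Hypothetical, heavily simplified software model of record-and-replay
# front-end restoration in the spirit of Ignite. All names are invented
# for illustration; the real design is implemented in hardware.

class BTB(dict):
    def insert(self, pc, target):
        self[pc] = target  # branch identification: pc -> branch target

class BranchPredictor(dict):
    def train(self, pc, taken):
        self[pc] = taken  # direction state for conditional branches

class ICache(set):
    def prefetch(self, block):
        self.add(block)  # pretend the block is now resident

class FrontEndRestorer:
    """Records a function's control flow and replays it before the next
    invocation to warm the instruction cache, BTB, and branch predictor."""

    def __init__(self):
        self.metadata = {}  # func_id -> recorded branch trace

    def record(self, func_id, branch_trace):
        # Record phase: save (pc, target, taken) tuples. A real design
        # would store a compressed control-flow graph to bound metadata.
        self.metadata[func_id] = list(branch_trace)

    def restore(self, func_id, btb, predictor, icache):
        # Replay phase: walk the saved trace and prefill all three
        # front-end structures from the same unified metadata.
        for pc, target, taken in self.metadata.get(func_id, []):
            btb.insert(pc, target)
            predictor.train(pc, taken)
            icache.prefetch(pc & ~63)  # 64-byte blocks (assumption)

# Usage: record one invocation, restore state before the next.
restorer = FrontEndRestorer()
restorer.record("thumbnailer", [(0x400, 0x480, True), (0x4a0, 0x400, False)])
btb, bp, ic = BTB(), BranchPredictor(), ICache()
restorer.restore("thumbnailer", btb, bp, ic)
print(sorted(ic))  # [1024, 1152] -> warmed instruction blocks
```

The design point mirrored here is that a single recorded trace, the unified metadata, warms all three front-end structures at once rather than maintaining separate per-structure prefetch state.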
- When Does Saving Power Save the Planet? Jackson Woodruff, David Schall, Michael F.P. O’Boyle, and 1 more author. In Proceedings of the 2nd Workshop on Sustainable Computer Systems (HotCarbon), 2023.
The computing industry accounts for 2% of the world’s emissions. Power-efficient computing is a frequent topic of research, but saving power does not always save the environment. Jevons’ paradox states that resource savings from increases in efficiency will be more than compensated for by increased demand through a process called rebound, making efficiency gains an ineffective way to decrease emissions. This is not the case for all applications within computing: applications whose demand is inelastic with respect to power consumption can achieve net reductions in power consumption. We analyze several large fields within computer science, including ML, the Internet, and IoT, and provide directions on where power-efficiency savings will help reduce carbon emissions. We present the economic tools needed to decide whether power-efficiency improvements are likely to result in reduced or increased emissions. We conclude that many problems in computer science do have characteristics of rebound, meaning that green energy is the only solution for many fields.
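The elasticity argument can be made concrete with a back-of-the-envelope model. The constant-elasticity demand form below is an assumption made for illustration, not necessarily the paper's formulation: if demand scales as price**(-e) and an efficiency gain lowers the energy cost per unit of work by a factor f < 1, then total energy scales as f**(1 - e), so efficiency only reduces total consumption when demand is inelastic (e < 1).

```python
def total_energy_ratio(f, elasticity):
    """Ratio of total energy after an efficiency gain to before, under a
    constant-elasticity demand model D(p) ~ p**(-elasticity). f < 1 is the
    new energy cost per unit of work relative to the old cost."""
    # Demand grows by f**(-elasticity) while energy per unit shrinks by f,
    # so the product scales as f**(1 - elasticity).
    return f ** (1.0 - elasticity)

# Inelastic demand (e = 0.4): halving per-unit energy yields net savings.
print(total_energy_ratio(0.5, 0.4))  # ~0.66, i.e. ~34% less total energy
# Elastic demand (e = 1.5): the rebound more than offsets the gain.
print(total_energy_ratio(0.5, 1.5))  # ~1.41, i.e. ~41% MORE total energy
```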
2022
- Lukewarm serverless functions: characterization and optimization. David Schall, Artemiy Margaritov, Dmitrii Ustiugov, and 2 more authors. In Proceedings of the 49th Annual International Symposium on Computer Architecture (ISCA), 2022.
IEEE Micro Top Picks Honorable Mention
Serverless computing has emerged as a widely-used paradigm for running services in the cloud. In serverless, developers organize their applications as a set of functions, which are invoked on-demand in response to events, such as an HTTP request. To avoid long start-up delays of launching a new function instance, cloud providers tend to keep recently-triggered instances idle (or warm) for some time after the most recent invocation in anticipation of future invocations. Thus, at any given moment on a server, there may be thousands of warm instances of various functions whose executions are interleaved in time based on incoming invocations. This paper observes that (1) there is a high degree of interleaving among warm instances on a given server; (2) individual warm functions are invoked relatively infrequently, often at the granularity of seconds or minutes; and (3) many function invocations complete within a few milliseconds. Interleaved execution of rarely invoked functions on a server leads to thrashing of each function’s microarchitectural state between invocations. Meanwhile, the short execution time of a function impedes amortization of the warm-up latency of the cache hierarchy, causing a 31–114% increase in CPI compared to execution with warm microarchitectural state. We identify on-chip instruction misses as a major contributor to the performance loss. In response, we propose Jukebox, a record-and-replay instruction prefetcher specifically designed for reducing the start-up latency of warm function instances. Jukebox requires just 32KB of metadata per function instance and boosts performance by an average of 18.7% for a wide range of functions, which translates into a corresponding throughput improvement.
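As with Ignite above, the record-and-replay idea lends itself to a small illustrative model. The Python sketch below is hypothetical and heavily simplified (the class name, the 64-byte block size, and the first-touch-order trace are all assumptions made for illustration); the actual Jukebox is a hardware prefetcher described in the paper.

```python
# Hypothetical software model of a record-and-replay instruction prefetcher
# in the spirit of Jukebox. All names are invented for illustration; the
# real design is a hardware mechanism with ~32KB of metadata per instance.

class RecordReplayPrefetcher:
    BLOCK = 64  # cache block size in bytes (assumption)

    def __init__(self):
        self.traces = {}  # func_id -> dict used as an ordered set of blocks

    def record_miss(self, func_id, pc):
        # Record phase: log each missing instruction block once, in
        # first-touch order, which keeps per-function metadata bounded.
        block = pc & ~(self.BLOCK - 1)
        self.traces.setdefault(func_id, {})[block] = None

    def replay(self, func_id, issue_prefetch):
        # Replay phase: on the next invocation of the same function,
        # stream prefetches for the recorded blocks ahead of execution.
        for block in self.traces.get(func_id, {}):
            issue_prefetch(block)

# Usage: misses recorded during one invocation warm the next one.
pf = RecordReplayPrefetcher()
for pc in (0x1000, 0x1004, 0x1050, 0x2000):  # 0x1004 folds into 0x1000's block
    pf.record_miss("resizer", pc)
pf.replay("resizer", lambda b: print(hex(b)))  # 0x1000, 0x1040, 0x2000
```

Deduplicating at block granularity during recording is what keeps the per-instance metadata small; the replay simply streams the recorded blocks, since a short function tends to touch the same instruction footprint on every invocation.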