
Reading a Top-Conference Paper from Scratch - Kappa: A Programming Framework for Serverless Computing



Code: quick start with Kappa

Before diving in, it may help to first read up on the lambda architecture.

Abstract

This paper presents Kappa, a framework that simplifies serverless development. It uses checkpoints to handle lambda function timeouts and provides concurrency mechanisms for parallel computation and coordination.

1 Introduction

Serverless computing is a new cloud paradigm in which, instead of configuring virtual machines (VMs), tenants register event handlers (e.g., Python functions) with the platform. When an event occurs, the platform invokes the handler on a lambda function, a short-lived, stateless execution environment. A lambda function can execute for a bounded period of time (e.g., 15 minutes on AWS) before terminating.
There are two main challenges: (1) programmers must manually partition their computation to fit within the lambda function time limit; and (2) programmers have no concurrency or synchronization primitives available (e.g., threads, locks, semaphores), so they must either implement such primitives themselves, restrict themselves to share-nothing parallelism, or avoid parallel lambda functions altogether. Kappa was developed to address these challenges and streamline serverless development.

2 Background and Motivation

2.1 Comparison to Existing Frameworks

2.2 Lambda Function Time Limit

The time limit on lambda functions in serverless computing exists mainly to simplify the provider's task placement and resource management: the provider no longer needs to predict how long a task will run or perform complex migrations, and gains more flexibility in task placement and resource utilization.

3 Kappa Design

Kappa has three components: (1) a coordinator, responsible for launching and resuming tasks and for implementing Kappa's concurrency primitives; (2) a compiler, which generates the code needed for checkpointing; and (3) a library that tasks use for checkpointing, concurrency, and synchronization.

3.1 Coordinator

The Kappa coordinator is responsible for scheduling tasks onto lambda functions, enabling synchronization and cross-task communication, tracking task metadata (including checkpoints), and providing fault tolerance; the coordinator itself is made fault tolerant by replicating its state to a backing store (e.g., a Redis cluster). It also serves remote procedure calls (RPCs) from tasks, including operations such as spawning new tasks, checkpointing, enqueueing and dequeueing messages, and retrieving task results.
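To make the coordinator's RPC surface concrete, here is a minimal sketch that lists those operations as a Python enum with a hypothetical request envelope; the names (RpcOp, RpcRequest, payload fields) are illustrative assumptions, not Kappa's actual wire protocol.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any


class RpcOp(Enum):
    """RPC operations the paper attributes to the coordinator."""
    SPAWN = auto()       # start a new task from an initial checkpoint
    CHECKPOINT = auto()  # persist a task's serialized state
    ENQUEUE = auto()     # put a message on a FIFO queue
    DEQUEUE = auto()     # take a message off a FIFO queue
    WAIT = auto()        # retrieve a finished task's result


@dataclass
class RpcRequest:
    """Hypothetical request envelope sent from a task to the coordinator."""
    op: RpcOp
    task_id: str
    payload: dict[str, Any] = field(default_factory=dict)
```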

3.2 Checkpointing

Kappa uses checkpoints to tolerate lambda function timeouts and to avoid re-executing RPCs. At selected points during a task's execution it creates a checkpoint that records the task's current state (live variables, control-flow position, and so on). Checkpoints are built on a technique called continuations, a way of capturing the program's control flow: instead of relying on server-side persistent state, the task's state is serialized and saved to external storage (e.g., Redis or S3). Kappa offers both synchronous and asynchronous checkpointing. A synchronous checkpoint briefly pauses the task while its state is persisted, whereas an asynchronous checkpoint lets the task keep running while the checkpoint data is written in the background. Checkpoint data is distributed across multiple storage nodes, and multiple tasks can take checkpoints concurrently.


In the paper's example figure, (b) shows a continuation function generated by the compiler to preserve the flow of execution after a checkpoint, and (c) shows how execution pauses at a checkpoint via an exception-handling mechanism, which ensures the task can later be resumed from the point of interruption.
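As a rough illustration of this transformation, the sketch below mimics a compiler-generated pause point and its continuation; the CheckpointPause exception, the checkpoint() helper, and the average example are assumptions for illustration, not Kappa's actual generated code.

```python
import pickle


class CheckpointPause(Exception):
    """Raised at a pause point; carries the task's serialized state."""
    def __init__(self, state: bytes):
        self.state = state


def checkpoint(live_vars: dict) -> None:
    # Serialize the live variables; in Kappa the data would be written to
    # external storage (e.g., Redis or S3) before the exception unwinds.
    raise CheckpointPause(pickle.dumps(live_vars))


def average(xs):
    total = sum(xs)                  # code before the pause point
    checkpoint({"xs": xs, "total": total})
    return total / len(xs)           # not reached here; re-run via the continuation


def average_continuation(blob: bytes):
    """Continuation: resumes execution from just after the checkpoint."""
    state = pickle.loads(blob)
    return state["total"] / len(state["xs"])


# A resuming invocation catches the pause and later restores from the blob:
try:
    average([1, 2, 3, 4])
except CheckpointPause as p:
    print(average_continuation(p.state))   # -> 2.5
```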

3.3 Concurrency API

Kappa provides two basic concurrency mechanisms: task spawning and task synchronization.

Task spawning (spawn) for parallelism: the spawn RPC starts a new task that executes a given function call (e.g., f(args)) in parallel and returns a Future object that can be used to track the task's result. It works by creating an initial checkpoint for the call; the coordinator then launches a new lambda function that restores from that checkpoint and executes the task.
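The usage below sketches this spawn-and-Future pattern. Since Kappa's runtime is not shown here, a thread pool stands in for "launch a new lambda function from an initial checkpoint"; Kappa's futures expose a wait operation, mirrored here by Python's result().

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for Kappa's spawn: in Kappa, spawn takes an initial checkpoint of
# f(args) and the coordinator launches a new lambda function that restores
# from it.  Here a thread pool merely mimics "run in parallel, return a Future".
_pool = ThreadPoolExecutor()


def spawn(f, *args):
    return _pool.submit(f, *args)


def count_words(chunk: str) -> int:
    return len(chunk.split())


chunks = ["the quick brown fox", "jumps over", "the lazy dog"]
futures = [spawn(count_words, c) for c in chunks]   # tasks run in parallel
total = sum(fut.result() for fut in futures)        # wait for each result
print(total)                                        # -> 9
```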

First-in, first-out (FIFO) queues: a task blocks if it tries to enqueue onto a full queue or dequeue from an empty queue. This mechanism supports not only inter-task communication but can also serve as a lock or semaphore to control concurrent access to shared resources.
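The following self-contained sketch shows the pattern of using a bounded blocking FIFO as a counting semaphore; it uses Python's standard queue.Queue as a stand-in for Kappa's queue API.

```python
import queue
import threading

# A bounded blocking FIFO used as a counting semaphore: the queue is
# pre-filled with N tokens, so at most N workers hold a token at once.
N_TOKENS = 2
tokens = queue.Queue(maxsize=N_TOKENS)
for _ in range(N_TOKENS):
    tokens.put(None)            # enqueue: deposit a token


def worker(i: int) -> None:
    tokens.get()                # dequeue blocks if no token is available
    try:
        print(f"worker {i} holds the resource")
    finally:
        tokens.put(None)        # return the token, unblocking a waiter


threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```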

4 Implementation

The compiler is written in Python, and task state is serialized with Python's pickle library. In the coordinator, each task is managed by a goroutine, synchronization between tasks uses Go channels, and the atomicity of state updates is ensured through locking and Redis transactions.
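As a rough sketch of this serialization path, the snippet below pickles a task's state and writes it to Redis inside a transaction; the key layout and helper name are assumptions for illustration, not Kappa's actual storage schema, and it requires a reachable Redis server.

```python
import pickle
import redis   # pip install redis; assumes a Redis server is reachable


def persist_checkpoint(r: redis.Redis, task_id: str, seqno: int, state: dict) -> None:
    """Pickle a task's live state and store it atomically in Redis."""
    blob = pickle.dumps(state)
    # A MULTI/EXEC transaction keeps the checkpoint blob and the task's
    # latest-sequence-number pointer consistent with each other.
    pipe = r.pipeline(transaction=True)
    pipe.set(f"task:{task_id}:ckpt:{seqno}", blob)
    pipe.set(f"task:{task_id}:latest", seqno)
    pipe.execute()


if __name__ == "__main__":
    client = redis.Redis()
    persist_checkpoint(client, "task-42", 7, {"xs": [1, 2, 3], "total": 6})
```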

5 Evaluation

Checkpoint overhead test: measure the latency of synchronous and asynchronous checkpoints by having a lambda function create a checkpoint every 100 ms. Synchronous checkpoints suspend application processing until the checkpoint data is persisted; asynchronous checkpoints complete the persistence in the background, allowing foreground computation to continue.

Concurrency primitive performance test: inter-task messaging through multi-producer, multi-consumer FIFO queues to evaluate the latency of task communication.

End-to-end application evaluation: Testing included five Kappa application scenarios: TPC-DS SQL Query, Word Count, Parallel Grep, Streaming, and Web Crawler.

6 Limitations

The Kappa compiler does not yet fully support a number of Python features, including try/except, yield, async/await, nested function definitions, and global variables. In addition, checkpoints can only be taken in code transformed by the compiler, Kappa lacks a garbage collection mechanism for persisted state, and there is no static check that task state is serializable.