
Problems encountered with Locust's multi-process implementation of distributed load testing


Multi-Process Distributed Implementation.

For a distributed Locust run you normally have to start the master and each worker with separate locust commands, which is cumbersome.
The script below uses Python's multiprocessing module to start the master and a configurable number of workers in one go:


import multiprocessing
import os

from locust import FastHttpUser, HttpUser, TaskSet, between


# class WebsiteUser(FastHttpUser):  # incorrect usage -- see below!
class WebsiteUser(HttpUser):  # correct usage
    tasks = [TaskSet]  # replace TaskSet with your own TaskSet subclass containing @task methods
    host = ""
    wait_time = between(0, 0)


def processFun(cmd):
    # each child process simply runs one locust command
    os.system(cmd)


def start_by_process(tst_locust_file, slave_num, master_host='127.0.0.1', locust_web_port=8090, no_web=False,
                     user_num=10, user_rate=1, result_log='locust_result.log', run_log='locust_run.log'):
    p_lst = []
    slave_cmd = f"locust -f {tst_locust_file} --worker --master-host={master_host}"
    if no_web:
        # headless mode: spawn users immediately, but wait until all workers have connected
        master_cmd = f"locust -f {tst_locust_file} --headless -u {user_num} -r {user_rate} --expect-workers {slave_num} --master"
    else:
        # web mode: expose the Locust web UI on master_host:locust_web_port
        master_cmd = f"locust -f {tst_locust_file} --web-host {master_host} --web-port {locust_web_port} --master"
    master_cmd += f' --logfile {result_log} --loglevel INFO 1>{run_log} 2>&1'
    # start the master process
    process_master = multiprocessing.Process(target=processFun, args=(master_cmd,))
    process_master.start()
    p_lst.append(process_master)
    # start the worker processes
    for index_num in range(slave_num):
        process = multiprocessing.Process(target=processFun, args=(slave_cmd,))
        process.start()
        p_lst.append(process)

    # block until every process exits
    for process in p_lst:
        process.join()
        
        
if __name__ == "__main__":
    tst_locust_path = 'wms/wms_test'
    slave_num = 3  # number of workers to start; do not exceed the machine's CPU count
    master_host = '127.0.0.1'
    master_host = '192.168.1.102'  # overrides the value above
    locust_web_port = 8099  # port for the Locust web UI
    no_web = False
    tst_locust_file = os.path.basename(__file__)  # the name of this script
    # change to the project root so the relative locustfile path below resolves
    os.chdir(os.getcwd().replace(tst_locust_path.replace('/', os.sep), ''))
    tst_locust_file = f'{tst_locust_path}/{tst_locust_file}'
    start_by_process(tst_locust_file, slave_num, master_host, locust_web_port, no_web=no_web)
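
The worker count above is hard-coded to 3 and the comment warns not to exceed the machine's CPU count. Below is a minimal sketch of how that cap could be computed automatically before calling start_by_process; capped_worker_count is a hypothetical helper, not part of the original script.

import multiprocessing


def capped_worker_count(requested):
    # leave one core for the master process, but always start at least one worker
    return max(1, min(requested, multiprocessing.cpu_count() - 1))


slave_num = capped_worker_count(3)  # e.g. 3 on an 8-core machine, 1 on a 2-core machine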

The problem.

In the code above I originally wrote class WebsiteUser(FastHttpUser): (the incorrect usage). With that user class, the workers went missing because of high CPU usage while the load test was running, and the test was terminated. After switching to class WebsiteUser(HttpUser): (the correct usage), the Locust load test ran normally.

Introduction to HttpUser and FastHttpUser.

In Locust, HttpUser and FastHttpUser are two different user classes that simulate different HTTP client behaviors. Here are the main differences between them:

HttpUser

  • HttpUser is Locust's basic HTTP user class; it sends HTTP requests with the requests library.
  • Locust runs HttpUser instances as lightweight gevent greenlets inside each process, and you can scale out further by adding worker processes.
  • It offers rich functionality and flexibility, including retries, session management, and everything else the requests library provides (see the sketch after this list).
  • Because requests itself is synchronous, HttpUser can drive CPU usage up in highly concurrent scenarios, especially when there is little or no wait time between requests.
  • HttpUser suits most HTTP load-testing scenarios, especially those that need a high degree of complexity and flexibility.
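
The session management mentioned above comes from the fact that HttpUser's client is a requests.Session subclass. A minimal sketch of using it (the endpoint paths and credentials are placeholders, not from the original article):

from locust import HttpUser, task, between

class SessionHttpUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        # self.client is a requests.Session subclass, so cookies set by this
        # login call are reused automatically by every later request
        self.client.post("/login", data={"user": "tester", "password": "secret"})
        # default headers can be set once on the session as well
        self.client.headers.update({"Accept": "application/json"})

    @task
    def list_orders(self):
        self.client.get("/orders")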

FastHttpUser

  • FastHttpUser is a newer class that sends HTTP requests with the geventhttpclient library, a much lighter and faster HTTP client than requests.
  • FastHttpUser offers higher throughput and lower CPU usage per request, because its non-blocking, gevent-based I/O lets a user do other work while waiting for a network response.
  • It is particularly well suited to highly concurrent scenarios and can noticeably reduce CPU utilization when simulating a large number of concurrent users.
  • Compared with HttpUser, FastHttpUser does not expose all of requests' advanced features, but the basics such as GET and POST requests are supported (see the sketch after this list).
  • If your goal is to run massively concurrent tests while keeping CPU usage low, FastHttpUser is a good choice.
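
A minimal sketch of the basic GET/POST support mentioned above; the paths, payload, and headers are illustrative only:

from locust import FastHttpUser, task, between

class BasicFastHttpUser(FastHttpUser):
    wait_time = between(1, 3)

    @task
    def read_item(self):
        # basic GET; the response object exposes status_code, text and json()
        self.client.get("/items/1")

    @task
    def create_item(self):
        # basic POST with a form-style body and an extra header
        self.client.post("/items", data="name=widget",
                         headers={"Content-Type": "application/x-www-form-urlencoded"})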

Summary

  • If your test scenario needs highly customized requests, or you already depend on advanced features of the requests library, HttpUser is probably the better fit.
  • If you want to keep CPU usage down in high-concurrency scenarios and can live with some feature limitations, FastHttpUser is the better option.

Examples

Below are simple examples of HttpUser and FastHttpUser:

HttpUser Example

from locust import HttpUser, task, between

class MyHttpUser(HttpUser):
    wait_time = between(1, 5)

    @task
    def my_task(self):
        ("/some_endpoint")

FastHttpUser Example

from locust import FastHttpUser, task, between

class MyFastHttpUser(FastHttpUser):
    wait_time = between(1, 5)

    @task
    def my_task(self):
        ("/some_endpoint")

Note that when using FastHttpUser you need to make sure your version of Locust supports this class. If you are not sure, check your Locust version or consult the official documentation.
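
If you are unsure which version is installed, a quick check from Python (locust also prints its version with the --version flag):

import locust
from locust import FastHttpUser  # raises ImportError on versions without FastHttpUser

# FastHttpUser lives in locust.contrib.fasthttp and is re-exported from the top-level package
print(locust.__version__)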

Cause Analysis.

  1. Asynchronous I/O and multi-process interaction:
  • FastHttpUser sends requests through the geventhttpclient library and relies on gevent's cooperative, event-driven I/O rather than threads.
  • In a multi-process environment every process runs its own event loop, so the I/O work in one process cannot be coordinated with or offloaded to the other processes, which can add to the CPU burden of each worker.
  2. Multi-process and asynchronous I/O compatibility:
  • In multi-process mode each process has a separate memory space and event loop, so every process drives its own loop instead of sharing a global one. When each process tries to execute a large number of concurrent tasks at the same time, CPU utilization can rise sharply.
  3. Scheduling of the event loops:
  • Each worker schedules its own event loop, and in multi-process mode these loops are not coordinated with one another, which can push CPU usage up; once a worker's CPU is saturated, it starts missing heartbeats to the master.
  • The event-driven client typically performs best within a single process, where it can fully exploit the event-driven model; in a multi-process environment every process must maintain its own loop, which adds overhead.
  4. Mismatch of concurrency models:
  • FastHttpUser was designed to exploit non-blocking I/O for performance, especially under high concurrency. In multi-process mode, however, that advantage can be offset by process isolation and inter-process communication overhead.

Summary: in this setup, FastHttpUser proved better suited to single-process use, while HttpUser handled the multi-process scenario better.
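
One way to confirm which of the locust processes is actually saturating a core during a run is to watch per-process CPU usage, for example with the third-party psutil package. A rough diagnostic sketch, not part of the original setup:

import time

import psutil  # third-party: pip install psutil

# Sample the CPU usage of every running locust process every few seconds (Ctrl+C to stop).
while True:
    for proc in psutil.process_iter(["pid", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "locust" in cmdline:
            try:
                # cpu_percent() measures usage since the previous call for this process
                print(proc.info["pid"], proc.cpu_percent(interval=None), cmdline[:80])
            except psutil.NoSuchProcess:
                pass
    time.sleep(5)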