Hello everyone, I'm Six! Today let's talk about using a big AI model (such as GPT-3.5) for automated testing. It sounds fancy, but it's not as mysterious as it seems. Follow along with me step by step and you'll get it done easily; once you've learned it, it will save you a lot of effort at work.
This article is aimed at functional testers and complete beginners, so I'll explain everything in plain language with examples, and try to make it understandable for everyone. I suggest bookmarking it first, in case you can't find it later. 😎
Prepare in advance and be steady
First, we have to prepare something, just like we have to prepare the ingredients before cooking.
Installing Python
Your computer needs Python installed; that almost goes without saying these days. If you don't know how, search Baidu for "Python installation tutorial" and follow along.
Installing the OpenAI library
Next, we have to install an OpenAI library, which is used to call the GPT-3.5 model. Open the command line (CMD on Windows, Terminal on Mac and Linux) and enter the following command:
pip install openai
Getting the API key
First we need to sort out the API key. It's like the key to your house: without it, you can't get in.
import openai

api_key = "your_openai_api_key"
openai.api_key = api_key
See, setting up the API key is just these few lines of code. Replace "your_openai_api_key" with your own key and you're done.
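By the way, hard-coding the key in your source file is risky. A safer habit is to read it from an environment variable; here's a minimal sketch (it assumes you've exported OPENAI_API_KEY in your shell first):

```python
import os

def load_api_key() -> str:
    """Fetch the OpenAI API key from the environment instead of hard-coding it."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set OPENAI_API_KEY before running, e.g. "
            "export OPENAI_API_KEY='your_openai_api_key'"
        )
    return key
```

Then pass the returned value to `openai.api_key` as before.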
Make requirements and clarify objectives
Next we need to tell the big model what we want to do. We want test cases for user login, both normal and abnormal, but not more than two. We need to be very clear about what we want, otherwise the big model won't know how to work with us.
prompt = "Generate test cases for user login, only normal and abnormal cases, only 2 cases generated"
Just tell the big model what we want in one sentence. Simple and clear, no fuss.
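One extra suggestion: if you pin down the exact output format in the prompt, the model's answer becomes much easier to parse by machine later. A possible phrasing (the field names username/password/expected here are my own convention, not anything the API requires):

```python
# Spelling out the format makes the later parsing step far more reliable.
prompt = (
    "Generate exactly 2 test cases for user login, one normal and one abnormal. "
    "Output each case on its own line in the format: "
    "username: <value>, password: <value>, expected: <value>"
)
```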
Building messages that come and go
It's like talking to a person: the conversation needs both sides. We tell the big model what its role is and what we want, so it can serve us better.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
This code builds the message list. The first message tells the big model it's a helpful assistant, and the second is the prompt we just wrote. It's like greeting the big model and handing it the task.
Calling the API, firing on all cylinders
This is a very important step. It's like calling someone to do something, we have to get the call out there so that someone will respond to us. And we have to find the right person, the big, awesome GPT-3.5-turbo model.
try:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=512
    )
except Exception as e:
    print(f"Error generating response: {e}")
    exit(1)
This code is calling the big GPT-3.5-turbo model. If something goes wrong, it prints out an error message and the program quits. We have to be careful not to let that go wrong.
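Network calls to the API can also fail transiently (timeouts, rate limits), so in practice it helps to retry a couple of times before giving up. A small generic sketch, where `make_request` stands in for the actual openai call:

```python
import time

def call_with_retry(make_request, retries=3, delay=1.0):
    """Retry a flaky API call a few times before giving up.

    `make_request` is any zero-argument callable, e.g. a lambda wrapping
    openai.ChatCompletion.create(...).
    """
    for attempt in range(1, retries + 1):
        try:
            return make_request()
        except Exception as e:
            if attempt == retries:
                raise  # out of retries, let the caller handle it
            print(f"Attempt {attempt} failed ({e}); retrying in {delay}s")
            time.sleep(delay)
```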
Extract use cases as if they were treasures
The big models are responding to us, and we have to pick out the useful information. It's like looking for treasure in a pile of junk. We have to be careful not to miss the good stuff.
generated_text = response.choices[0].message.content.strip()
test_cases = generated_text.split('\n')
The first line takes the big model's response and strips the surrounding whitespace. The second line splits the response into individual test cases at line breaks. Now we have a batch of test cases, like a chest of treasure.
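In practice the model's reply often mixes in blank lines and prose headers, so splitting on newlines alone leaves junk in the list. A small filtering sketch (it assumes real case lines mention the word "username"; adjust the filter to whatever format your prompt asks for):

```python
def clean_cases(generated_text: str) -> list:
    """Split model output into lines and keep only lines that look like test cases."""
    lines = [line.strip() for line in generated_text.split("\n")]
    # Drop blank lines and prose headers; keep lines containing the expected field.
    return [line for line in lines if "username" in line.lower()]
```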
Print the use case for a first look
We need to see what kind of test cases the big model generates for us. It's like when you get a delivery and you have to open it up and see what it is. We need to take a look at it to get a good idea of what's going on.
print("Generated Test Cases:")
for i, case in enumerate(test_cases):
    print(f"Test Case {i+1}: {case}")
This code prints the test cases generated by the big model one by one, so we can see clearly whether they're any good and actually usable.
Parsing use cases to get to the bottom of the problem
The test cases the big model generates may not be directly usable, so we have to parse them ourselves. It's like receiving a coded letter: we have to crack it and pull out the key information to make it useful.
def parse_test_case(case):
    parts = case.split(',')
    username = parts[0].split(':')[1].strip()
    password = parts[1].split(':')[1].strip()
    expected_result = parts[2].split(':')[1].strip()
    return username, password, expected_result
This function parses a test case: it splits the case on commas, splits each part on a colon, picks out the value, and strips the whitespace on both sides. That gives us the username, password, and expected result, ready for automated testing.
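The comma-and-colon split above throws an exception as soon as the model's wording drifts even slightly. A more defensive sketch uses a regular expression and returns None instead of crashing (it assumes the fields are labeled username/password/expected, which is my own convention):

```python
import re

# Matches lines like: "username: alice, password: 123, expected: success"
CASE_RE = re.compile(
    r"username\s*:\s*(?P<username>[^,]+),\s*"
    r"password\s*:\s*(?P<password>[^,]+),\s*"
    r"expected\s*:\s*(?P<expected>.+)",
    re.IGNORECASE,
)

def parse_test_case_safe(case: str):
    """Return (username, password, expected) or None if the line doesn't match."""
    m = CASE_RE.search(case)
    if not m:
        return None
    return tuple(f.strip() for f in m.group("username", "password", "expected"))
```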
Print again to make sure there are no errors
We need to see what the parsed test case looks like. It's like cracking a password and seeing what it says. We need to make sure we're parsing it right and that there are no problems.
parsed_test_cases = []
for i, case in enumerate(test_cases):
    try:
        username, password, expected_result = parse_test_case(case)
        parsed_test_cases.append((username, password, expected_result))
        print(f"Parsed Test Case {i+1}: Username={username}, Password={password}, Expected Result={expected_result}")
    except Exception as e:
        print(f"Error parsing test case {i+1}: {e}")
This code iterates through the test cases generated by the big model, parses them one by one, and then prints out the parsed results. If something goes wrong, it prints out an error message. This way we can find problems and solve them in a timely manner.
Return to use cases and prepare for battle
Finally, we take the parsed test cases and get them ready to use elsewhere. It's like bringing groceries home: these cases are now ready to go into automated testing.
print("\nParsed Test Cases:")
for i, (username, password, expected_result) in enumerate(parsed_test_cases):
    print(f"Test Case {i+1}: Username={username}, Password={password}, Expected Result={expected_result}")
This code just prints out the parsed test cases again so we can see them more clearly. That way we can use them with confidence.
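To close the loop, here's a sketch of driving an actual check from the parsed tuples. `fake_login` is a hypothetical stand-in; swap in your real Selenium or API login logic:

```python
def fake_login(username: str, password: str) -> str:
    """Stand-in for the real login call; replace with your Selenium/API logic."""
    return "success" if (username, password) == ("test_user", "test_password") else "failure"

def run_cases(parsed_test_cases):
    """Run each parsed case and record whether the actual result matched."""
    results = []
    for username, password, expected_result in parsed_test_cases:
        actual = fake_login(username, password)
        results.append((username, actual == expected_result))
    return results
```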
Summarize and put the finishing touches
Walking through the whole flow, we've put the big AI model to work in a very clear sequence: set the API key, tell the big model what we want, call the API to generate test cases, extract and print the generated cases, parse them, print the parsed results, and finally put the parsed test cases to use.
It's that simple: follow these steps with me and you're guaranteed to pick it up.
After running it, you get the big model's test cases for user login, parsed into a clean structure you can feed straight into automated tests. Pretty good, right?
The effect is as follows
Generated Test Cases:
Test Case 1: Below are some simple user login test cases, including normal cases and some common exceptions:
1. Normal Case:
- Enter the correct username and password.
- You should be allowed to log in successfully.
Sample code (Python):
```python
def test_login_normal(self):
    self.driver.get("/login")
    username = self.driver.find_element_by_name("username")
    password = self.driver.find_element_by_name("password")
    username.send_keys("test_user")
    password.send_keys("test_password")
    login_button = self.driver.find_element_by_css_selector(".login-button")
    login_button.click()
    # Ensure that the page after login displays the correct content
    assert "Home" in self.driver.page_source, "Failed to navigate to Home page after successful login"
```
2. Exception:
- Invalid username or password entered.
- Failed to successfully login.
- The web page failed to load.
- Captcha error.
Sample code (Python):
```python
def test_login_failure(self):
    self.driver.get("/login")
    username = self.driver.find_element_by_name("username")
    password = self.driver.find_element_by_name("password")
    username.send_keys("invalid_username")
    password.send_keys("invalid_password")
    login_button = self.driver.find_element_by_css_selector(".login-button")
    login_button.click()
    # Make sure we have not moved past the login page
    assert self.driver.current_url != "/login", "Failed to navigate to login page"
    # Make sure the captcha prompt does not appear
    assert not self.driver.find_elements_by_class_name("captcha"), "Captcha should be hidden"
    # Ensure that no error message is displayed
    assert not self.driver.find_element_by_class_name("error-message").is_displayed(), "Error message should not be displayed"
    # Ensure that there is no jump to the registration page
    assert self.driver.current_url != "/register", "Failed to navigate to register page"

def test_login_timeout(self):
    self.driver.get("/login")
    username = self.driver.find_element_by_name("username")
    password = self.driver.find_element_by_name("password")
    username.send_keys("test_user")
    password.send_keys("test_password")
    login_button = self.driver.find_element_by_css_selector(".login-button
```
(The output is cut off here because we capped the response at 512 tokens.)
Error parsing test case 1: list index out of range
I dare say this article is packed with practical content. If you don't yet know how to use a big AI model for automated testing, hurry up and follow along with this article. It's guaranteed to make you an automation-testing pro who shines at work.
If you need the full source code, reply "AI automation" (without the quotes) in the official account to get it!