
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (V)


Preamble

Some typos in the previous articles have been corrected.
This article mainly introduces training a model and then using the model to predict data. It uses some numpy/tensor conversions; if you have forgotten those, you can review the basics from Lesson 2 alongside this article.
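
As a quick refresher, here is a minimal sketch of the numpy/tensor conversions used below:

import torch
import numpy as np

a = np.array([1.0, 2.0, 3.0], dtype=np.float32)
t = torch.from_numpy(a)   # numpy array -> tensor (shares memory with a)
back = t.numpy()          # tensor -> numpy array (also shares memory)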

Linear regression

Using PyTorch in conjunction with numpy

First use sklearn's datasets to generate data x and y, then combine this with what we learned previously to compute y_predicted.

# pip install matplotlib
# pip install scikit-learn
import torch
import numpy as np
import torch.nn as nn
from sklearn import datasets
import matplotlib.pyplot as plt

# 0) Prepare the data
# Generate 100 rows and 1 column of data X_numpy and Y_numpy
# noise=20: the larger the value, the noisier (more random) the data
# random_state=1: set the random seed to ensure the same X_numpy and Y_numpy are generated each time
X_numpy, Y_numpy = datasets.make_regression(
    n_samples=100, n_features=1, noise=20, random_state=1)

X = torch.from_numpy(X_numpy.astype(np.float32))
y = torch.from_numpy(Y_numpy.astype(np.float32))
y = y.view(y.shape[0], 1)  # reshape the 100 values into 100 rows by 1 column: shape[0]=100, shape[1]=1
n_samples, n_features = X.shape

# 1) Model
input_size = n_features
output_size = 1
model = nn.Linear(input_size, output_size)

# 2) Loss function and optimizer
learning_rate = 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

n_iter = 100
# The following loop calls forward propagation on the model and iteratively backpropagates
# with the loss function to update w and b; that is, it trains the model
for epoch in range(n_iter):
    # forward pass and loss
    y_predicted = model(X)
    loss = criterion(y_predicted, y)
    # backward pass
    loss.backward()
    # update
    optimizer.step()
    optimizer.zero_grad()
    if (epoch + 1) % 10 == 0:
        print(f'epoch:{epoch+1},loss ={loss.item():.4f}')

# plot
# model(X).detach() detaches the result tensor from the computation graph; the detached tensor
# no longer tracks gradients. Simply put, a new tensor object is produced at a different memory address.
div_tensor = model(X).detach()
predicted = div_tensor.numpy()  # convert the returned tensor to a numpy array
# plt.plot draws the points on the coordinate system
# parameter 1 is the x-axis values, parameter 2 is the y-axis values, parameter 3 is fmt (the format string)
# fmt examples: 'ro' is red dots (r=red, o=circle); 'bo' is blue dots (b=blue); g is green
plt.plot(X_numpy, Y_numpy, 'ro')
plt.plot(X_numpy, predicted, 'bo')
plt.show()

Here we have already introduced the concept of training a model.
Calling the model's forward propagation in a for loop and iteratively backpropagating with the loss function to update w and b is what training the model means.
Once training is complete, we can use the model, which accepts a new input matrix, to predict y.
Here, we simply predict on x again after the loop ends.
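
For example (a minimal sketch, not part of the original code), you could feed the trained model a brand-new x value instead of the training data:

with torch.no_grad():
    x_new = torch.tensor([[1.5]])  # a new, unseen input (hypothetical value)
    y_new = model(x_new)           # the model computes y = w * 1.5 + b
    print('predicted y for x=1.5:', y_new.item())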
Running the code produces the figure below:
[figure: red dots are the original data, blue dots are the predicted values]

As you can see, the predicted y values all lie on a line. This is because the predictions are computed from w and b, so they always fall on a straight line.
Note: the so-called linear relationship between x and y means that y is obtained from the elements of x by multiplying by weights (plus a bias), which can become quite involved when x has many elements.
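
To see this concretely, you can read the learned w and b off the trained model (a small sketch; model is the nn.Linear from the code above):

w = model.weight.item()  # the learned slope w
b = model.bias.item()    # the learned intercept b
print(f'y_predicted = {w:.3f} * x + {b:.3f}')  # every prediction lies on this line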
For a rough understanding of linear regression, you can refer to the following figure.

[figure: an illustration of linear regression]

Full training example

Below is a complete example of training a model and then using it to predict.
First use datasets.load_breast_cancer() to get the data. X can be interpreted as patients' indicator data, and Y as whether each patient has cancer.
Then, after training the model, we can give it one patient's X indicator data and predict whether that patient has cancer.

# pip install matplotlib
# pip install scikit-learn
import torch
import numpy as np
import torch.nn as nn
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# 0) prepare data
bc = datasets.load_breast_cancer()  # load the breast_cancer dataset
print(bc.keys())  # data, target, feature_names, ...
x, y = bc.data, bc.target  # features as input data x (usually a 2D array or DataFrame), labels as target data y (usually a 1D array or Series)
print("x 569*30", x.shape, "y 569*1", y.shape)
n_samples, n_features = x.shape  # n_samples=569 n_features=30
# train_test_split randomly splits the dataset into a training set and a test set
# X_train: the feature part of the split training set, containing most of the data used to train the model.
# X_test: The feature part of the split test set, containing a small portion of the data used to evaluate the model.
# y_train: target value corresponding to X_train, used to train the model.
# y_test: target value corresponding to X_test, used to evaluate the model.
# test_size=0.2 means 20% of the data will be used for testing, and the remaining 80% will be used for training.
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=1234)

print("X_train 455*30",X_train.shape, "X_test 114*1",X_test.shape, "y_train 455*1",y_train.shape, "y_test 114*1",y_test.shape)
print("Type 1: X_train", type(X_train), "X_test", type(X_test), "y_train", type(y_train), "y_test", type(y_test))
# scale
# StandardScaler standardizes (i.e., scales) the data
# Roughly, each x is scaled to (x - column mean) / sqrt(mean((x - column mean)²)), i.e. (x - mean) / standard deviation
# After standardization, the mean of each feature (feature = column) becomes 0 and the standard deviation becomes 1
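# Worked example (illustrative, not in the original): for a column [1, 2, 3], the mean is 2
# and the std is sqrt(((1-2)² + (2-2)² + (3-2)²) / 3) ≈ 0.8165, so the scaled column
# is approximately [-1.2247, 0.0, 1.2247]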
sc = StandardScaler()

# The mean and standard deviation will be calculated in the fit_transform function, and then the following transform will use the mean and variance from fit_transform
X_train = sc.fit_transform(X_train) # normalize X_train with the calculated mean and variance
X_test = sc.transform(X_test)  # normalize X_test with the mean and variance computed by fit_transform
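# Sanity check (illustrative, not in the original): after scaling, each column of X_train
# should have mean ~0 and standard deviation ~1
# print(X_train.mean(axis=0)[:3], X_train.std(axis=0)[:3])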

# convert the x/y numpy arrays to tensors
X_train = torch.from_numpy(X_train.astype(np.float32))
X_test = torch.from_numpy(X_test.astype(np.float32))
y_train = torch.from_numpy(y_train.astype(np.float32))
y_test = torch.from_numpy(y_test.astype(np.float32))

print("Type 2: X_train", type(X_train), "X_test", type(X_test), "y_train", type(y_train), "y_test", type(y_test))

print("X_train 455*30",X_train.shape, "X_test 114*1",X_test.shape, "y_train 455*1",y_train.shape, "y_test 114*1",y_test.shape)

# reshape y_train and y_test from 1D arrays into column vectors (455 rows by 1 column and 114 rows by 1 column)
y_train = y_train.view(y_train.shape[0],1)
y_test = y_test.view(y_test.shape[0], 1)
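# Note (not in the original): the reshape above is needed because the model outputs an
# (n_samples, 1) tensor and nn.BCELoss requires the target y to have the same shape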

# 1) model
# f = wx + b, with a sigmoid at the end


class LogisticRegression(nn.Module):
    def __init__(self, n_input_features):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(n_input_features, 1)  # parameter 1: 30 (columns of x); parameter 2: 1 (columns of the predicted y)

    def forward(self, x):
        # self.linear(x) performs the forward propagation; sigmoid converts the result to a
        # probability value in [0, 1] (a probability value is a percentage, e.g. 0.7 means 70%)
        y_predicted = torch.sigmoid(self.linear(x))
        return y_predicted
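# Illustrative note: sigmoid maps any real number into (0, 1): sigmoid(z) = 1 / (1 + e^(-z)),
# e.g. torch.sigmoid(torch.tensor([-2.0, 0.0, 2.0])) -> tensor([0.1192, 0.5000, 0.8808])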


model = LogisticRegression(n_features)
# 2) loss and optimizer
learning_rate = 0.01
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# 3) training loop
num_epochs = 100
for epoch in range(num_epochs):
    # forward pass and loss
    y_predicted = model(X_train)
    loss = criterion(y_predicted,y_train)
    # backward pass
    loss.backward()
    # updates
    optimizer.step()
    # zero gradients
    optimizer.zero_grad()
    if (epoch + 1) % 10 == 0:
        print(f'epoch:{epoch+1},loss ={loss.item():.4f}')
# Using the model: this part was not in the earlier example
with torch.no_grad():
    y_predicted = model(X_test) # pass X_test into the model to get the expected y for X_test
    y_predicted_cls = y_predicted.round()  # round to 0 or 1; e.g. for y_predicted = tensor([0.3, 0.7, 0.5, 0.2, 0.9]), y_predicted.round() gives tensor([0., 1., 0., 0., 1.])
    #eq(y_test): This method compares each element between y_predicted_cls and y_test and returns True if the two values are equal, otherwise it returns False. the result is a boolean tensor.
    eq = y_predicted_cls.eq(y_test)
    print("equal",eq)
    eqsum = eq.sum()
    print("eqsum",eqsum)
    print("y_test.shape[0]", y_test.shape[0]) # y_test.shape[0] is the row that returns y_test, who turns out to be a matrix of 114 rows and 1 column, so returns 114
    float_y_test= float(y_test.shape[0]) # convert to float to prepare for the following division
    print("float(y_test.shape[0])", float_y_test)
    acc=eqsum/float_y_test
    print(f'Similarity accuracy of y predicted by X_test to y_test = {acc:.4f}')

In this way, we not only trained the model, but also used it to predict on a set of data it had never seen, successfully predicting the value of Y (whether the patient has cancer). Comparing our predictions with the real data gives an accuracy of 91.23 percent.
[figure: program output showing the training loss and the final accuracy]
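
As a follow-up sketch (not in the original post), this is how you could classify one new patient, assuming new_patient is a hypothetical (1, 30) numpy array of that patient's raw indicator values:

with torch.no_grad():
    new_patient_scaled = sc.transform(new_patient)  # reuse the scaler fitted on the training set
    x_new = torch.from_numpy(new_patient_scaled.astype(np.float32))
    prob = model(x_new)  # probability produced by the sigmoid
    print('probability:', prob.item(), 'predicted class:', int(prob.round().item()))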
Portal:
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (I)
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (II)
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (III)
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (IV)
Learning Artificial Intelligence from Zero - Python-Pytorch Learning (V)
That's enough studying for now.


Note: This post is original, please contact the author for authorization and attribution for any form of reproduction!



If you think this article is still good, please click [Recommend] below, thank you very much!
