Preamble
This article describes the use of neural networks for practical purposes.
The code used is from Learning Artificial Intelligence from Scratch - Python-Pytorch Learning (IX).
Code Implementation
Module Definition
First, we define a custom module by creating a torch_test17_Model.py file (the module should be defined in its own .py file), as follows:
import torch.nn as nn
import torch.nn.functional as F

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
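A detail worth checking: the first fully connected layer expects 16*5*5 inputs because the two convolution layers (kernel size 5, no padding) and the two 2x2 poolings shrink the 32x32 CIFAR10 image down to 16 feature maps of size 5x5. The following standalone snippet (an illustrative sketch, not part of the tutorial files) traces the shapes to confirm this:

import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 6, 5)    # 32x32 -> 28x28 (kernel 5, no padding)
pool = nn.MaxPool2d(2, 2)     # halves height and width
conv2 = nn.Conv2d(6, 16, 5)   # 14x14 -> 10x10

x = torch.randn(1, 3, 32, 32)  # one fake CIFAR10-sized image
x = pool(conv1(x))             # -> [1, 6, 14, 14]
x = pool(conv2(x))             # -> [1, 16, 5, 5]
print(x.shape)                 # torch.Size([1, 16, 5, 5]), hence 16*5*5 = 400 inputs to fc1

If you change the input image size or the convolution parameters, this flattened size changes and fc1 must be adjusted to match.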
Module Creation
Next, write the .py file that creates and trains the module, with the following code:
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch_test17_Model as tm

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
input_size = 784
hidden_size = 100
num_classes = 10
batch_size = 100
learning_rate = 0.001
num_epochs = 200  # training for 200-400 epochs gives the best results

transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

train_dataset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset, batch_size=batch_size, shuffle=True)

model = tm.ConvNet().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # Adam assumed here; any torch.optim optimizer works

n_total_steps = len(train_loader)
print("number total epochs (training rounds):", num_epochs)
print("number total steps (iterations per epoch):", n_total_steps)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # images.shape: torch.Size([100, 3, 32, 32])
        # The four dimensions of the images tensor are (B, C, H, W):
        # B is the batch size (i.e. the number of images),
        # C is the number of channels (an RGB image has 3 channels),
        # H and W are the height and width of the image.
        print("images.shape:", images.shape)  # 100 rows; the remaining dimensions 3, 32, 32 are the image data
        # labels holds the tags corresponding to these 100 images
        print("labels.shape:", labels.shape)
        print("labels[0].item():", labels[0].item())  # Example output: labels[0].item() = 6
        images = images.to(device)
        labels = labels.to(device)

        # Forward propagation
        outputs = model(images)
        loss = criterion(outputs, labels)
        print("loss.item():", loss.item())  # Example output: loss.item() = 2.300053596496582

        # Backward propagation and optimization
        optimizer.zero_grad()
        loss.backward()  # Run backpropagation: compute the gradient of the loss (defined by criterion) with respect to every parameter
        optimizer.step()  # Update the parameters using the gradients and the learning rate

        print(f'training round Epoch [{epoch}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
    print('==================')

print('End of training')
filePath = ""  # No directory in the path, so the model is saved to the directory containing the python file
torch.save(model, filePath)
print('Save complete')
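Note that torch.save(model, filePath) here pickles the entire ConvNet object, not just its weights. That is why the evaluation script in the next section both imports torch_test17_Model (so the class can be found when unpickling) and passes weights_only=False to torch.load. A minimal round-trip sketch, using a hypothetical file name model.pth:

import torch
import torch_test17_Model as tm

model = tm.ConvNet()
torch.save(model, "model.pth")  # "model.pth" is a hypothetical name; saves the whole module, architecture included
restored = torch.load("model.pth", weights_only=False)  # tm.ConvNet must be importable for unpickling to succeed
print(type(restored).__name__)  # ConvNet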
The code prints the value of the loss, which is what we want to focus on.
A larger loss value indicates a larger gap between the model's predictions and the true labels, i.e. poorer model performance.
A smaller loss value indicates that the model's predictions are closer to the true labels and performance is gradually improving.
In other words, the model is working well when the loss value is close to 0.
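To make these loss numbers concrete: assuming the criterion is nn.CrossEntropyLoss as in the training code above, the example output of 2.3000 is roughly -log(1/10) ≈ 2.30, i.e. the loss of a model still guessing uniformly among the 10 CIFAR10 classes. The small sketch below (illustrative only, not part of the tutorial files) shows that confidently correct logits give a loss near 0, while confidently wrong logits give a large loss:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
label = torch.tensor([3])           # the true class is index 3

good = torch.full((1, 10), -5.0)    # logits for 10 classes, all very negative...
good[0, 3] = 5.0                    # ...except a high score on the true class
bad = torch.full((1, 10), -5.0)
bad[0, 7] = 5.0                     # high score on the wrong class (index 7)

print(criterion(good, label).item())  # close to 0
print(criterion(bad, label).item())   # large (about 10)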
Module Usage
Write the .py file that uses the module to classify test images; note that it references the torch_test17_Model.py file. The code is as follows:
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
import torch_test17_Model as tm

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
batch_size = 100
transform = transforms.Compose(
    [transforms.Resize((32, 32)),  # If the images being predicted differ in size from the training images, e.g. the evaluation input is [100, 3, 64, 64] while the model was trained on [100, 3, 32, 32], this transformation resizes them
     transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

test_dataset = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(
    dataset=test_dataset, batch_size=batch_size, shuffle=False)

filePath = ""  # No directory in the path, so the model is loaded from the directory containing the python file
model = torch.load(filePath, weights_only=False)
model.eval()  # Switch to evaluation mode

############################ use threshold to judge ######################################
threshold = 0.7  # Confidence threshold: with threshold judgment the model must be more certain; with only two epochs of training, every image may fail the check
with torch.no_grad():
    for images, labels in test_loader:
        print("############################ judging ######################################")
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        print("outputs.shape:", outputs.shape)
        # Calculate softmax probabilities
        probabilities = F.softmax(outputs, dim=1)
        max_probs, predicted = torch.max(probabilities, 1)
        for i in range(len(predicted)):
            if max_probs[i] < threshold:  # If the confidence is below the threshold, the category is considered unknown
                print(f"Image {i} is considered an unknown category with confidence {max_probs[i]:.4f}")
            else:
                print(f"Image {i} is considered category {predicted[i]} with confidence {max_probs[i]:.4f}")
When judging what an image is, use this threshold mode.
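As a concrete illustration of the threshold mode (an illustrative snippet, not part of the tutorial files, with made-up logits), suppose the model produces the following output for a single image; after softmax, the largest probability is compared against the threshold:

import torch
import torch.nn.functional as F

threshold = 0.7
logits = torch.tensor([[0.2, 0.1, 2.5, 0.3, 0.1, 0.0, 0.4, 0.2, 0.1, 0.1]])  # hypothetical model output for one image
probs = F.softmax(logits, dim=1)           # converts logits to probabilities that sum to 1
max_prob, predicted = torch.max(probs, 1)
if max_prob.item() < threshold:
    print(f"unknown category (confidence {max_prob.item():.4f})")  # here the confidence is about 0.53, below 0.7
else:
    print(f"category {predicted.item()} (confidence {max_prob.item():.4f})")

Raising the threshold makes the model reject more images as unknown; lowering it accepts more predictions, but with less certainty.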
Conclusion
At this point, we have gained a basic understanding of neural networks, convolutional neural networks, and deep networks.
Next, we can move on to learning about transformers.
Portal:
Learning Artificial Intelligence from Scratch - Python-Pytorch Learning - Full Series
Note: This post is original; for any form of reproduction, please contact the author for authorization and include attribution!