Previous article: "Artificial Intelligence Model Training Techniques, Regularized!"
PREAMBLE: The real value of AI models lies in their practical applications, not just in theory. This section shows how the model designed and trained in the previous sections can be applied to a real-world problem through a simple, common scenario: using the trained model to classify sentences, specifically, to identify whether user comments on social platforms are ironic. With such a technique, a social platform can analyze user sentiment in real time and respond quickly, for example by mitigating conflicts, improving the user experience, or even tuning its recommendation algorithm.
Next, we'll walk you through the process step-by-step, from sentence coding to interpreting the results of model predictions, showing how to combine theory and practice to unleash the full potential of AI.
Classifying Sentences Using Models
Now that you've created the model, trained it, and addressed many of the causes of overfitting, the next step is to run the model and inspect its results. To do this, create an array containing new sentences. Example:
sentences = [
"granny starting to fear spiders in the garden might be real",
"game of thrones season finale showing this sunday night",
"TensorFlow book will be a best seller"
]
These sentences are then encoded with the same tokenizer that was used to build the vocabulary during training. This is important: only by reusing that tokenizer can you guarantee the sentences are encoded with the same vocabulary and token indices the model was trained on.
sequences = tokenizer.texts_to_sequences(sentences)
print(sequences)
Printing them shows the sequences for the sentences above:
[[1, 816, 1, 691, 1, 1, 1, 1, 300, 1, 90],
[111, 1, 1044, 173, 1, 1, 1, 1463, 181],
[1, 234, 7, 1, 1, 46, 1]]
There are a lot of 1 values here (the out-of-vocabulary token, "<OOV>"): many words in these new sentences, such as "granny" and "spiders", never appeared in the training data, so they are all mapped to the same reserved index.
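To make the OOV behavior concrete, here is a minimal pure-Python sketch of what texts_to_sequences does, assuming the Keras Tokenizer convention of reserving index 1 for the out-of-vocabulary token (oov_token="<OOV>"). The tiny word_index below is hypothetical, chosen only to mirror a few indices from the output above:

```python
# Hypothetical fragment of a trained tokenizer's word_index;
# index 1 is reserved for the "<OOV>" token.
word_index = {"<OOV>": 1, "the": 7, "in": 8, "to": 46, "be": 90}

def texts_to_sequences(sentences, word_index):
    # Words seen during training map to their index;
    # words not in the vocabulary all map to 1 ("<OOV>").
    return [[word_index.get(w, 1) for w in s.lower().split()]
            for s in sentences]

print(texts_to_sequences(["granny starting to fear spiders"], word_index))
# "granny", "starting", "fear", and "spiders" are unknown, so each becomes 1.
```

This is why unfamiliar sentences produce runs of 1s: the tokenizer cannot invent indices for words it never saw.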
Before passing these sequences to the model, you need to make sure that their shape matches the model's expectations - that is, the target length. You can do this by using pad_sequences as you did when training the model:
padded = pad_sequences(sequences, maxlen=max_length,
padding=padding_type, truncating=trunc_type)
print(padded)
This outputs each sentence as a sequence of length 100, so the first one looks like this:
[ 1 816 1 691 1 1 1 1 300 1 90 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 ]
This sentence is very short indeed!
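The zero-filling behavior can be sketched in plain Python. This is a rough equivalent of pad_sequences with padding='post' and truncating='post' (the settings this walkthrough assumes were used during training); the real Keras function additionally returns a NumPy array:

```python
def pad_post(sequences, maxlen):
    # Mimics pad_sequences(..., padding='post', truncating='post'):
    # sequences longer than maxlen are cut from the end,
    # shorter ones are filled with trailing zeros.
    padded = []
    for seq in sequences:
        seq = seq[:maxlen]                              # truncate from the end
        padded.append(seq + [0] * (maxlen - len(seq)))  # pad with zeros at the end
    return padded

print(pad_post([[1, 816, 1, 691]], maxlen=10))
```

Pre-padding (zeros in front) is also common; what matters is using the same choice at inference time as during training.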
Now that the sentences have been tokenized and padded into the shape the model expects, it's time to pass them to the model and get predictions. This takes just one call:
print(model.predict(padded))
The results are returned and printed as a list of scores, where a higher score indicates a higher likelihood of irony. Here are the results for our example sentences:
[[0.7194135 ]
[0.02041999]
[0.13156283]]
The first sentence, "granny starting to fear spiders in the garden might be real," received a high score (0.7194135) despite containing a number of stop words and a large amount of zero padding, indicating a high likelihood of irony. The other two sentences scored much lower, indicating they are less likely to be ironic.
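In practice you usually want discrete labels rather than raw probabilities. A common (assumed) approach is a 0.5 threshold; the cutoff is a design choice you can tune, not something the model dictates:

```python
# Scores as printed above, one probability per sentence.
predictions = [[0.7194135], [0.02041999], [0.13156283]]

# Assumed convention: score > 0.5 means the sentence is classified as ironic.
labels = ["ironic" if p[0] > 0.5 else "not ironic" for p in predictions]
print(labels)
```

Raising the threshold trades recall for precision: fewer sentences get flagged, but those that do are flagged with more confidence.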
To summarize: why do humans need AI models? Because many real-world problems cannot be solved effectively by traditional programs and hand-written logic; such problems could previously only be handled well by humans. It is to improve efficiency and reduce operational costs in exactly these scenarios that major companies invest heavily in developing and designing AI models.