Calculation in the output layer

This article explains the calculation of the output layer in deep learning. As a prerequisite, please read "Calculations in the middle layer-Converting m inputs to n outputs".

Output layer in pattern recognition

This section explains the calculation of the output layer in pattern recognition.

Expected final output in pattern recognition (classification problem)

In pattern recognition (a classification problem), the expected correct final output of the output layer is a vector such as (1, 0) or (0, 1).

For example, consider the question of whether the person in a photo is wearing glasses. The expected correct final output for "wearing glasses" is (1, 0), and the expected correct final output for "not wearing glasses" is (0, 1).

During learning, an output such as (0.3, 1.2) is obtained. The image is one of adjusting the parameters so that the output approaches (1, 0) when the input is wearing glasses and (0, 1) when the input is not wearing glasses. Writing an algorithm that automatically adjusts the parameters means writing a learning algorithm.
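As a minimal sketch of this idea in Python (the function name and squared-error measure are illustrative assumptions, not something the article specifies), learning would adjust parameters so that the distance between the current output and the correct target shrinks:

def squared_error(output, target):
    # Distance between the current output and the correct target;
    # learning should drive this value toward 0.
    return sum((o - t) ** 2 for o, t in zip(output, target))

output = (0.3, 1.2)       # raw output obtained during learning
target = (1, 0)           # correct output for "wearing glasses"
print(squared_error(output, target))  # about 1.93; learning should make this shrink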

There can be more than two classes, such as dogs, cats, and pigs. The correct output for a dog is (1, 0, 0), the correct output for a cat is (0, 1, 0), and the correct output for a pig is (0, 0, 1).
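The mapping from class label to correct output vector can be written in a few lines of Python (the names here are illustrative):

classes = ["dog", "cat", "pig"]

def one_hot(label):
    # 1 at the position of the matching class, 0 everywhere else
    return tuple(1 if c == label else 0 for c in classes)

print(one_hot("dog"))  # (1, 0, 0)
print(one_hot("cat"))  # (0, 1, 0)
print(one_hot("pig"))  # (0, 0, 1)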

Use the output of the ReLU function as it is

It is easy to think that a special output function is needed to compute such a final output, but for pattern recognition, the output of the ReLU function can be used as is. Nothing special is required.

Recall that the output of the ReLU function is 0 when the unit is not activated, and when it is activated, the output is greater than 0 and can rise somewhat above 1.
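For reference, ReLU itself is a one-liner; this Python sketch applies it to a few sample values:

def relu(x):
    # 0 when not activated (x <= 0); otherwise the input passes through unchanged
    return max(0.0, x)

print([relu(v) for v in (-0.5, 0.2, 1.2)])  # [0.0, 0.2, 1.2]; activated values can exceed 1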

By continuing learning, an output such as

(1.99955473915037, 0.979640425704874, 0)

can be brought closer to

(1, 0, 0).
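Even before the output exactly matches the target, the predicted class can be read off as the index of the largest component. This is a plausible way to interpret the raw ReLU output, stated here as an assumption rather than something from the article:

output = (1.99955473915037, 0.979640425704874, 0)
predicted = max(range(len(output)), key=lambda i: output[i])
print(predicted)  # 0, i.e. the first class, matching the target (1, 0, 0)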
