Main Results
Ten types of leaves (Orange, Olive, Tangerine, ...) were used in this experiment. There were twelve images of each of the ten leaf types: nine were used for learning and the remaining three for testing. Each image was 424*248 pixels, with 256 gray levels per pixel.
The architecture of the IRNN for leaf recognition consisted of three layers of neurons. The first layer had 1643 blocks of neurons, each block containing a maximum of 20 neurons, so the bottom layer contained 32,860 neurons in total. Each neuron in this layer took as input a subimage of size 8*8 pixels.
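As a rough arithmetic sketch, the stated block count is consistent with a non-overlapping 8*8 tiling of the 424*248 image, since (424/8) * (248/8) = 53 * 31 = 1643. The tiling and stride are assumptions for illustration; the text does not state them explicitly.

```python
import numpy as np

def extract_patches(image, patch=8):
    """Tile an image into non-overlapping patch x patch subimages
    (assumed stride = patch size; not stated in the text)."""
    h, w = image.shape
    patches = [image[r:r + patch, c:c + patch]
               for r in range(0, h - patch + 1, patch)
               for c in range(0, w - patch + 1, patch)]
    return np.stack(patches)

img = np.zeros((424, 248))        # original image size from the text
p = extract_patches(img)
print(p.shape)                    # → (1643, 8, 8), matching the 1643 blocks
```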
Table 4.1 shows the fixed weights used for localization. The fixed weights did not have to be uniform; the only requirement was that the central point have the strongest connection, with connection weights decreasing as the distance from the center increased. All neurons had the same fixed-weight connections.
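A hypothetical kernel satisfying these two constraints (strongest weight at the center, decay with distance) can be generated as follows. The Gaussian profile and the kernel size are assumptions for illustration; the actual weight values are those of Table 4.1.

```python
import numpy as np

def localization_weights(size=5, sigma=1.5):
    """Build a size x size weight kernel whose central connection is
    strongest and whose weights decay with distance from the center.
    The Gaussian decay profile is an assumed example."""
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    d2 = (y - c) ** 2 + (x - c) ** 2          # squared distance to center
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w[c, c]                        # center weight normalized to 1

w = localization_weights()
print(w[2, 2], w.max())                       # center equals the maximum
```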
There were 31 blocks of neurons in the second layer, again with a maximum of 20 neurons in each block, so this layer contained 620 neurons in total. Each neuron in the second layer took its inputs from 53 blocks of sensory neurons.
The number of outputs in the top layer of the IRNN was equal to the number of classes to be identified; for the leaf images this was ten.
Since the original image size (424*248) was too large for processing by the IRNN, it was reduced to 424*128 by cutting off half of the object from the original image; this was possible because the leaves were symmetrical with respect to their stalks. Fig. 4.22 shows how the IRNN worked on the leaf images. The IRNN classified all categories correctly.
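This preprocessing step can be sketched as a simple crop. The symmetry axis orientation and the exact crop width are assumptions here; the text states only that half of the object was cut off, reducing 424*248 to 424*128.

```python
import numpy as np

def halve_symmetric(image, keep=128):
    """Keep one side of the (assumed vertical) symmetry axis,
    discarding the mirrored half of the leaf."""
    return image[:, :keep]

img = np.zeros((424, 248), dtype=np.uint8)   # original image size
reduced = halve_symmetric(img)
print(reduced.shape)                          # → (424, 128)
```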
Here, we can speed up the training process by changing the parameters of the activation function. For example, if we set the steepness of the sigmoid curve to a large value (say, a = 10) and add a parameter, B, so that the activation function becomes:
the training process will converge faster. Also, if we use a quadratic sigmoidal function of the form
the learning method can be improved [65].
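Since the equations above are not reproduced here, the following sketch uses common parameterizations as stand-ins: a sigmoid with steepness a and an additive offset B, and a quadratic sigmoidal variant that squares the signed argument to sharpen the transition near zero. Both exact forms are assumptions; the actual functions are those of [65].

```python
import numpy as np

def sigmoid(x, a=10.0, B=0.0):
    """Sigmoid with steepness a and an assumed additive offset B."""
    return 1.0 / (1.0 + np.exp(-a * x)) + B

def quadratic_sigmoid(x, a=1.0):
    """An assumed quadratic sigmoidal form: the signed square of the
    argument steepens the curve near the origin."""
    return 1.0 / (1.0 + np.exp(-a * x * np.abs(x)))

x = np.linspace(-1.0, 1.0, 5)
print(sigmoid(x))
print(quadratic_sigmoid(x))
```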