Classification improvement by an extended depth LSA machine

Albrecht, Andreas A. and Lappas, Georgios (2004) Classification improvement by an extended depth LSA machine. WSEAS Transactions on Systems, 3 (3). pp. 1120-1125. ISSN 1109-2777

Full text is not in this repository.

Abstract

Is there any potential benefit when training threshold circuits by adding extra layers (depth)
to the network? This question is investigated for a powerful, recently introduced artificial intelligence system,
called the Logarithmic Simulated Annealing (LSA) machine, which combines the simulated annealing
algorithm under a logarithmic cooling schedule with the classical perceptron algorithm. The first and second
layers are trained with the LSA machine learning algorithm. For the learning procedure, 75% of the available
data are used to train the first layer, which consists of v voting functions of P threshold circuits
each. After the first layer is trained, its weights are fixed and the training data are passed through it again to
produce new samples of length v, which are used to train the second layer. The remaining 25% of the data are used for
testing the entire network. The main idea is for the second layer to smooth out the inaccuracies of the first layer, by
training the second layer to evaluate the significance of each output gate of the first layer. Results of the depth
investigation reveal that adding a second layer can produce consistently better results. In this work, extending the
depth of the network yields a greater improvement than extending the size of the network.
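The two-layer scheme described in the abstract can be sketched in code. The sketch below is only a structural illustration under stated assumptions: the paper trains the threshold circuits with the LSA machine (simulated annealing under a logarithmic cooling schedule), whereas here plain perceptron updates stand in for LSA, and all data, dimensions, and helper names (`train_perceptron`, `voting_unit`, v=5, P=3) are hypothetical.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, seed=0):
    # Classical perceptron on labels in {-1, +1}; returns weights incl. bias.
    # (In the paper this training step is performed by the LSA machine.)
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:            # misclassified or on boundary
                w += y[i] * Xb[i]                  # perceptron update
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xb @ w >= 0, 1, -1)

def voting_unit(X, y, P, seed):
    # One voting function of P threshold circuits: a majority vote of
    # P perceptrons, each trained on a bootstrap resample of the data.
    rng = np.random.default_rng(seed)
    ws = []
    for p in range(P):
        idx = rng.integers(0, len(X), len(X))
        ws.append(train_perceptron(X[idx], y[idx], seed=seed + p))
    def vote(Xq):
        votes = np.stack([predict(w, Xq) for w in ws])
        return np.where(votes.sum(axis=0) >= 0, 1, -1)
    return vote

# Toy data: linearly separable with a margin, labels in {-1, +1}.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 2))
X = X[np.abs(X[:, 0] + X[:, 1]) > 0.3][:200]       # enforce a margin
y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)

# 75% / 25% split, as in the abstract.
n_train = int(0.75 * len(X))
X_tr, y_tr, X_te, y_te = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

# First layer: v voting functions of P threshold circuits each.
v, P = 5, 3
layer1 = [voting_unit(X_tr, y_tr, P, seed=10 * k) for k in range(v)]

# With the first-layer weights fixed, re-apply it to the training data to
# produce new samples of length v, then train the second layer on them.
Z_tr = np.stack([u(X_tr) for u in layer1], axis=1).astype(float)
w2 = train_perceptron(Z_tr, y_tr, seed=99)

# The remaining 25% tests the entire network.
Z_te = np.stack([u(X_te) for u in layer1], axis=1).astype(float)
acc = (predict(w2, Z_te) == y_te).mean()
print(f"test accuracy: {acc:.2f}")
```

The second layer here learns a weighting of the v first-layer output gates, which is the "smoothing" role the abstract assigns to the extra depth.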

Item Type: Article
Keywords (uncontrolled): Simulated annealing, optimisation, perceptron algorithm, threshold circuits, classification, machine learning
Research Areas: A. > School of Science and Technology > Computer Science
Item ID: 12401
Useful Links:
Depositing User: Andreas Albrecht
Date Deposited: 11 Nov 2013 07:13
Last Modified: 13 Oct 2016 14:29
URI: http://eprints.mdx.ac.uk/id/eprint/12401
