
Abstract

Neural networks suffer from interference when they are evolved for different and unrelated tasks. This interference causes the networks to forget previously learned behaviours and worsens the more the network is trained on the new task, preventing networks from engaging in such different, unrelated tasks. This report proposes that one cause may be that the regular ANN architecture is too simple and that additional enhancements are needed. It then proposes that modularity could serve this purpose, protecting the network structures responsible for particular behaviours. Further, a selection scheme is proposed to handle the selection of modules based on the input to the primary network. This module-with-selection scheme is tested and shows that modularity provides benefits such as simplifying the search space and enabling symmetric and repeating structures through module connectivity and reuse. The selection mechanism is also shown to be evolvable, something not readily apparent. It is further suggested that this selection mechanism could in the future enable networks to operate in fractured domains. The scheme also supports module-within-module structures, enabling a bottom-up approach to constructing networks through incremental evolution, with the modules acting as safeguards against interference as the network becomes more complex.
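As a rough illustration of the idea only (not the report's actual implementation, and all names here are hypothetical), the sketch below shows one way a module-with-selection network might be organised: a selector reads the primary network's input and routes it to a single module, so the weights of unused modules are shielded from interference, and modules may themselves contain sub-modules.

```python
# Minimal illustrative sketch of a module-with-selection network.
# Hypothetical structure and names; a sketch of the concept, not the
# report's implementation.
import numpy as np

class Module:
    """A self-contained sub-network; may nest further modules."""
    def __init__(self, n_in, n_out, sub_modules=None, rng=None):
        rng = rng or np.random.default_rng()
        self.w = rng.normal(scale=0.5, size=(n_out, n_in))
        self.sub_modules = sub_modules or []   # module-within-module support

    def forward(self, x):
        h = np.tanh(self.w @ x)
        for sub in self.sub_modules:           # nested modules refine the output
            h = np.tanh(sub.forward(h))
        return h

class ModularNetwork:
    """Primary network: selector picks one module per input."""
    def __init__(self, n_in, n_out, modules, rng=None):
        rng = rng or np.random.default_rng()
        self.modules = modules
        # Selector maps the raw input to a score per module.
        self.selector_w = rng.normal(scale=0.5, size=(len(modules), n_in))
        # Shared output layer applied to the chosen module's output.
        self.out_w = rng.normal(scale=0.5, size=(n_out, modules[0].w.shape[0]))

    def forward(self, x):
        scores = self.selector_w @ x
        chosen = int(np.argmax(scores))        # hard selection: one module per input
        h = self.modules[chosen].forward(x)
        return np.tanh(self.out_w @ h), chosen

rng = np.random.default_rng(0)
net = ModularNetwork(
    n_in=4, n_out=2,
    modules=[Module(4, 6, rng=rng), Module(4, 6, rng=rng)],
    rng=rng,
)
output, which_module = net.forward(np.array([0.1, -0.3, 0.7, 0.2]))
print(which_module, output)
```

Because only the selected module's weights are active for a given input, evolving behaviour for a new task primarily changes the module (and selector scores) associated with that task, which is the mechanism by which interference with previously learned behaviours is intended to be reduced.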