How do neural networks use genetic algorithms?

Posted 2019-02-21 18:36

I came across an interesting video on YouTube about genetic algorithms.

As you can see in the video, the bots learn to fight.
Now, I have been studying neural networks for a while and wanted to start learning genetic algorithms. This somehow combines both.

How do you combine genetic algorithms and neural networks to do this?
Also, how does one know the error in this case, which you would use to back-propagate, update the weights, and train the net? And how do you think the program in the video calculated its fitness function? I guess mutation is definitely happening in the program, but what about crossover?

Thanks!

3 Answers
来,给爷笑一个
#2 · 2019-02-21 19:04

This is a reinforcement-learning-style problem in which the outputs of the neural network are the keyboard keys to be pressed in order to maximize a score given by the fitness function. Starting from an initial neural network architecture, a genetic algorithm (GA) iteratively searches for a better architecture that maximizes that fitness function. The GA generates different architectures by breeding a population of them, uses each one for the task (playing the game), and selects the candidates yielding the highest scores according to the fitness function. In the next iteration the GA uses the best candidates (the parents, in GA terminology) for breeding and repeats the process of generating a new population of architectures. Breeding, of course, includes mutation as well.

This process continues until a termination criterion is met (a target fitness value, or a maximum number of generations). Note that genetic algorithms are very computationally intensive, which is why they tend to be avoided for large-scale problems. Naturally, once an architecture is generated, it is trained using backpropagation or any other applicable optimization technique, including GAs themselves.
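To make the loop concrete, here is a minimal sketch in Python. It assumes a fixed network topology whose flattened weight vector serves as the genome, and a hypothetical play_game(weights) function standing in for the actual game episode; the population size, mutation scale, and selection rule are illustrative choices, not details taken from the video.

```python
# Minimal GA loop: evaluate fitness by playing the game, select the best,
# then breed a new population with crossover and mutation.
# play_game and all hyperparameters are assumptions for illustration.
import numpy as np

N_WEIGHTS = 64          # size of the flattened weight vector (assumed)
POP_SIZE = 50
N_GENERATIONS = 100
MUTATION_STD = 0.1

def play_game(weights: np.ndarray) -> float:
    """Placeholder fitness function: run the bot with these weights and
    return its score. Replace with the real game loop."""
    return -float(np.sum((weights - 0.5) ** 2))   # dummy objective for testing

def crossover(parent_a, parent_b):
    """Uniform crossover: each weight is taken from one of the two parents."""
    mask = np.random.rand(N_WEIGHTS) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(genome):
    """Mutation: add small Gaussian noise to every weight."""
    return genome + np.random.normal(0.0, MUTATION_STD, size=N_WEIGHTS)

population = [np.random.randn(N_WEIGHTS) for _ in range(POP_SIZE)]

for generation in range(N_GENERATIONS):
    # Evaluate fitness of every candidate by letting it play the game.
    fitness = np.array([play_game(genome) for genome in population])

    # Selection: keep the best quarter of the population as parents.
    parents = [population[i] for i in np.argsort(fitness)[-POP_SIZE // 4:]]

    # Breeding: refill the population with mutated crossovers of two parents.
    population = [
        mutate(crossover(*(parents[i] for i in
                           np.random.choice(len(parents), 2, replace=False))))
        for _ in range(POP_SIZE)
    ]

print("best fitness in last evaluated generation:", fitness.max())
```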

For instance, this video shows how genetic algorithms can help select the "best" architecture to play Mario, and it does so very well. Note, however, that if the GA selects an architecture that plays one Mario level very well, that architecture will not necessarily do well on the next levels, as shown in another video. In my opinion, this is because both genetic algorithms and backpropagation tend to find local optima. So there is still a long way to go...

Sources

趁早两清
#3 · 2019-02-21 19:04

You can use genetic algorithms as another way to optimize the neural network. Instead of using backpropagation, which is the default and by far the most widely used algorithm, you can optimize the weights with a genetic algorithm.

Please take a look at this paper. There we propose an algorithm called neural evolution, which combines neural networks with a genetic algorithm called differential evolution. It is used to make a humanoid robot detect human emotions and interact accordingly. The paper also includes an extensive review of the state of the art on the subject. Hope it helps.
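For flavour, here is what a differential-evolution update over a flat weight vector might look like. This is the standard textbook DE/rand/1/bin scheme, not the specific algorithm from the paper, and loss(weights) is a hypothetical placeholder for the network's error on the training data.

```python
# Generic DE/rand/1/bin: mutate with scaled differences of population members,
# apply binomial crossover, keep the trial vector only if it improves the loss.
import numpy as np

DIM = 64            # number of network weights (assumed)
POP_SIZE = 40
F = 0.8             # differential weight
CR = 0.9            # crossover probability
N_GENERATIONS = 200

def loss(weights: np.ndarray) -> float:
    """Placeholder: forward-propagate the data through the network with
    these weights and return the error (lower is better)."""
    return float(np.sum(weights ** 2))   # dummy objective

population = np.random.randn(POP_SIZE, DIM)
scores = np.array([loss(ind) for ind in population])

for _ in range(N_GENERATIONS):
    for i in range(POP_SIZE):
        # Pick three distinct individuals, all different from i.
        a, b, c = population[np.random.choice(
            [j for j in range(POP_SIZE) if j != i], 3, replace=False)]
        # Mutation: perturb a with the scaled difference of b and c.
        mutant = a + F * (b - c)
        # Binomial crossover between the current individual and the mutant.
        mask = np.random.rand(DIM) < CR
        mask[np.random.randint(DIM)] = True   # keep at least one mutant gene
        trial = np.where(mask, mutant, population[i])
        # Greedy selection: keep the trial only if it is at least as good.
        trial_score = loss(trial)
        if trial_score <= scores[i]:
            population[i], scores[i] = trial, trial_score

print("best loss:", scores.min())
```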

Fickle 薄情
#4 · 2019-02-21 19:21

How do you combine genetic algorithms and neural networks to do this?

Neural networks can be trained with a combination of genetic and back-propagation algorithms, or you can train a batch of networks with backpropagation and then use a genetic algorithm to choose the most promising one from the batch.
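As a rough sketch of the second option (train a batch with backpropagation, then let a GA-style step pick and perturb the most promising candidates), something like the loop below could work. train_one_epoch, validation_score, and perturb are hypothetical placeholders, not functions from any particular library.

```python
# Hybrid sketch: gradient-based training inside a select-and-perturb loop.
import copy
import random

def train_one_epoch(net):
    """Placeholder: run one epoch of backpropagation on `net` in place."""
    pass

def validation_score(net) -> float:
    """Placeholder fitness, e.g. accuracy on a held-out set.
    A random number is used here only so the sketch runs as-is."""
    return random.random()

def perturb(net):
    """Placeholder mutation: return a copy of `net`, ideally with noise
    added to its weights or a hyperparameter tweaked."""
    return copy.deepcopy(net)

def evolve(population, n_generations=10, keep=4):
    population = list(population)
    for _ in range(n_generations):
        for net in population:
            train_one_epoch(net)                 # gradient-based learning step
        # Selection: keep the networks with the best validation fitness.
        population.sort(key=validation_score, reverse=True)
        survivors = population[:keep]
        # Breeding: refill the population with perturbed copies of survivors.
        population = survivors + [perturb(random.choice(survivors))
                                  for _ in range(len(population) - keep)]
    return max(population, key=validation_score)
```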

Also, how does one know the error in this case, which you would use to back-propagate, update the weights, and train the net?

Error calculation varies depending on the algorithm, but in general, if you use a supervised learning method, you compute the error as some distance between the network's output and the desired learning target.
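For example, the error could simply be the mean squared distance between the network's outputs and the desired targets, which is the quantity backpropagation then differentiates. The numbers below are purely illustrative.

```python
# Mean squared error between network outputs and desired targets.
import numpy as np

predictions = np.array([0.8, 0.2, 0.6])   # network outputs (example values)
targets     = np.array([1.0, 0.0, 1.0])   # desired outputs

mse = np.mean((predictions - targets) ** 2)
print(mse)   # 0.08
```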

I suggest looking at NEAT, which is currently one of the most advanced neuroevolution (genetic) algorithms.
