I am trying to get a feel for the difference between the various classes of machine-learning algorithms.
I understand that the implementations of evolutionary algorithms are quite different from the implementations of neural networks.
However, they both seem to be geared toward determining a relationship between inputs and outputs from a potentially noisy set of training/historical data.
From a qualitative perspective, are there problem domains that are better targets for neural networks as opposed to evolutionary algorithms?
I've skimmed some articles that suggest using them in a complementary fashion. Is there a decent example of a use case for that?
Evolutionary algorithms (of which genetic algorithms are the best-known variety) and neural networks can both be used for similar objectives, and other answers describe the differences well.
However, there is one specific case where evolutionary algorithms are a better fit than neural networks: when the solution space is non-continuous/discrete.
Indeed, neural networks are trained by gradient descent, with the gradients computed by backpropagation (or a similar algorithm). Calculating a gradient relies on derivatives, which requires a continuous space; in other words, you must be able to shift gradually and progressively from one solution to the next.
If your solution space is discrete (i.e., you can choose solution A, or B, or C, but nothing in between such as 50% A + 50% B), then you are trying to fit a non-continuous function, and neural networks cannot work on it.
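To make this concrete, here is a minimal sketch (a toy example of my own, with made-up scores) of why gradients carry no information on a discrete objective: the function is flat inside each region and jumps between them, so the numerical derivative is zero almost everywhere.

```python
def discrete_score(x: float) -> float:
    # Three discrete "solutions" A, B, C with toy scores; nothing in between.
    if x < 1.0:
        return 3.0   # solution A
    elif x < 2.0:
        return 7.0   # solution B
    return 5.0       # solution C

# Numerical gradient at x = 0.5: both probes land inside region A,
# so the slope is zero and gradient descent has no direction to follow.
eps = 1e-6
x = 0.5
grad = (discrete_score(x + eps) - discrete_score(x - eps)) / (2 * eps)
print(grad)  # 0.0
```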
In this case, evolutionary algorithms are perfect (one could even say a godsend), since they can "jump" from one solution to the next without any issue.
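By contrast, here is a minimal (1+1) evolutionary algorithm on a purely discrete problem (OneMax: maximize the number of 1 bits in a string). The problem and parameters are toy choices of mine, but they show how mutation and selection hop directly between discrete solutions, with no gradient involved:

```python
import random

random.seed(0)
N = 40  # bitstring length

def fitness(bits):
    return sum(bits)  # OneMax: count the 1s

parent = [random.randint(0, 1) for _ in range(N)]

for generation in range(5000):
    # Mutation: flip each bit independently with probability 1/N
    # -- a discrete "jump" to a neighbouring solution.
    child = [b ^ (random.random() < 1.0 / N) for b in parent]
    # Selection: keep the child only if it is at least as fit.
    if fitness(child) >= fitness(parent):
        parent = child
    if fitness(parent) == N:
        break

print(f"best fitness {fitness(parent)}/{N} after {generation + 1} generations")
```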
Also worth mentioning is that evolutionary algorithms suffer less from the curse of dimensionality than most other machine-learning algorithms, including neural networks.
This makes evolutionary algorithms a very versatile and generic tool for naively approaching almost any problem, and one of the very few tools for dealing with either non-continuous functions or astronomically high-dimensional datasets.
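As for the question's last point about using the two in a complementary fashion: a common pattern is to let an evolutionary algorithm search the discrete hyperparameter space of a neural network (layer counts, activation choices, batch sizes), which gradient descent cannot traverse. A rough sketch, where `train_and_score` is a hypothetical stand-in for training a network and returning its validation score:

```python
import random

SEARCH_SPACE = {
    "n_layers": [1, 2, 3, 4],
    "activation": ["relu", "tanh", "sigmoid"],
    "batch_size": [16, 32, 64, 128],
}

def train_and_score(config):
    # Placeholder: in practice, train a network with `config` and return
    # its validation accuracy. Here we fake a deterministic score per config.
    rng = random.Random(str(sorted(config.items())))
    return rng.random()

def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(config):
    # Re-sample one discrete choice -- a jump to a neighbouring configuration.
    child = dict(config)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

population = [random_config() for _ in range(10)]
for _ in range(20):
    ranked = sorted(population, key=train_and_score, reverse=True)
    survivors = ranked[:5]  # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

print(max(population, key=train_and_score))
```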