Would it be possible to add GPU acceleration to the learning process? Right now, if I have more than, say, 50 creatures at once, the simulation lags terribly. Is there a way to make it at least run smoother?
Yeah, that's definitely one of the first things I want to try when I find the time. I'm curious to see how much performance I can gain by offloading all of the network calculations from the CPU to the GPU, since those calculations are naturally parallelizable anyway.
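Just to sketch what I mean (this is hypothetical; the project doesn't actually use JAX, and the tiny two-layer network here is a placeholder), the reason this parallelizes so well is that every creature's network evaluation is independent, so all of them can become a single batched GPU call:

```python
import jax
import jax.numpy as jnp

def forward(params, inputs):
    # Forward pass of one creature's network: alternating matmul + tanh.
    for w, b in params:
        inputs = jnp.tanh(inputs @ w + b)
    return inputs

# Stack each layer's weights along a leading "population" axis, then let
# vmap evaluate every creature's network in parallel in one GPU dispatch.
batched_forward = jax.jit(jax.vmap(forward, in_axes=(0, 0)))

# Example usage with made-up sizes: 200 creatures, 8 sensors, 4 outputs.
pop, n_in, n_hidden, n_out = 200, 8, 16, 4
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = [
    (jax.random.normal(k1, (pop, n_in, n_hidden)), jnp.zeros((pop, n_hidden))),
    (jax.random.normal(k2, (pop, n_hidden, n_out)), jnp.zeros((pop, n_out))),
]
sensors = jax.random.normal(k3, (pop, n_in))
actions = batched_forward(params, sensors)  # shape (pop, n_out)
```

The key point is that one call evaluates all 200 networks at once instead of looping over creatures on the CPU, which is where the current per-creature cost comes from.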
Currently, if you want the benefits of a larger population size and you're okay with each generation taking longer to simulate, your only option is to split the generation into batches that your CPU can handle at once.
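In pseudocode, that workaround looks roughly like this (simulate_batch and the batch size are placeholders, not the app's actual API):

```python
def simulate_generation(population, batch_size=50):
    # Hypothetical sketch of the batching workaround: run the generation in
    # CPU-sized chunks sequentially, then merge the results for selection.
    fitnesses = []
    for i in range(0, len(population), batch_size):
        batch = population[i:i + batch_size]
        fitnesses.extend(simulate_batch(batch))  # one fitness per creature
    return fitnesses
```

Total work stays the same, so a generation takes proportionally longer, but each batch stays small enough to keep the frame rate smooth.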