
Why do you think it is that this evolutionary approach at the circuit level isn't being pursued? It looks as if it could produce cheap and smart devices that no human engineer could ever come up with. Or are you aware of any such projects?

Thank you for sharing this!



I believe it's quite prone to overfitting; I vaguely recall an article where an evolutionary design developed on one FPGA had its bitstream applied to another FPGA of the same model (that is, it should have behaved identically), but the design failed to operate on the new hardware. Something to do with spooky quantum-level (?) tunnelling effects that stopped working given even the tiniest physical variation between individual chips.


Sure. But why not make that into one of the constraints? For instance, run the algorithm on many boards in parallel and require that any found solution work on all of them.
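
A minimal sketch of what that could look like, in Python; program_and_score is a hypothetical hook that loads a candidate bitstream onto one physical board and measures how well it performs the task:

    import random

    def program_and_score(board, bitstream):
        # Hypothetical hardware hook: load `bitstream` onto the physical
        # `board` and return a measured fitness value. Stubbed here.
        raise NotImplementedError

    def mutate(bits, rate=0.01):
        # Flip each configuration bit with a small probability.
        return [b ^ 1 if random.random() < rate else b for b in bits]

    def multi_board_fitness(bits, boards):
        # Score the candidate on every board and take the minimum, so it
        # is only as fit as its worst-performing chip; this penalises
        # designs that exploit the quirks of a single device.
        return min(program_and_score(b, bits) for b in boards)

    def evolve(boards, population, generations=5000):
        for _ in range(generations):
            population.sort(key=lambda c: multi_board_fitness(c, boards),
                            reverse=True)
            elite = population[:len(population) // 2]
            population = elite + [mutate(c) for c in elite]
        return population[0]

The obvious cost is that every evaluation now takes one hardware run per board, so the search gets N times slower.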


You could, but then you would need to make sure you have example boards from several different manufacturing batches to avoid behaviour specific to one run, and hope that going forward the FPGA manufacturer doesn't make any changes that keep the same specification but use a different internal implementation...

I suspect you'd be better served evolving your FPGA design on a simulator that doesn't model any of the lower-level physical phenomena, and then running the design on real hardware to validate it. But I know next to nothing about FPGAs, so I have no idea how useful that would be in practice, or how physically accurate such a simulation would be.
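
Something like this, where sim_fitness stands in for a behavioural simulation (no analog effects modelled) and hw_score for a measurement on the real chip; all the names here are made up:

    def evolve_in_sim(population, sim_fitness, mutate, generations=5000):
        # The search runs entirely against the simulator, so it can't
        # latch onto physical quirks the model doesn't represent.
        for _ in range(generations):
            population.sort(key=sim_fitness, reverse=True)
            elite = population[:len(population) // 2]
            population = elite + [mutate(c) for c in elite]
        return population[0]

    def deploy(population, sim_fitness, hw_score, mutate, threshold):
        best = evolve_in_sim(population, sim_fitness, mutate)
        # Validate on real silicon only at the end; if the simulated
        # winner doesn't transfer, the simulator is missing something
        # the search needed (or exploited).
        if hw_score(best) < threshold:
            raise RuntimeError("evolved design did not transfer to hardware")
        return best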


> ...evolving your FPGA design on a simulator that doesn't model any of the lower-level physical phenomena

Sure, but then you trade away many of the degrees of freedom that the real hardware offers.

Near the end of the paper, Thompson tries porting the evolved circuit to a different area of silicon, and I think he found that a "mini-training" of 100 evolutions restored performance.

He also speculates about a farm of FPGAs sampled from different batches, as you suggest.

I suspect in production the engineering approach we take for granted would change: e.g. maybe you'd load a "species" template that gets some coarsely close results, and then each individual unit is tailored/optimized with a shorter, accelerated training stint. Kind of like how it works in humans (and other creatures who are born with instincts but finish learning as they grow).
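
Purely as a hypothetical sketch of that per-unit commissioning step (simple hill climbing rather than a full GA; program_and_score is an assumed hardware hook):

    import random

    def program_and_score(board, bitstream):
        # Assumed hardware hook: load the bitstream, measure fitness.
        raise NotImplementedError

    def mutate(bits, rate=0.005):
        # Flip each configuration bit with a small probability.
        return [b ^ 1 if random.random() < rate else b for b in bits]

    def commission_unit(board, species_template, steps=100):
        # Start from the shared "species" bitstream, then run a short
        # local search so the design adapts to this particular chip;
        # roughly analogous to Thompson's ~100-step "mini-training"
        # after moving the circuit to a fresh region of silicon.
        best = species_template
        best_score = program_and_score(board, best)
        for _ in range(steps):
            candidate = mutate(best)
            score = program_and_score(board, candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best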


Also, one would-be researcher here shared their frustration that modern FPGAs don't expose such low-level rewiring abilities, which presumably makes it harder to exploit hardware peculiarities:

https://www.reddit.com/r/MachineLearning/comments/2t5ozk/com...


Good Reddit thread! Thank you.



