Hacker News
Evolved Virtual Creatures (1994) (karlsims.com)
71 points by sirnicolaz on March 20, 2022 | 33 comments


This is cool and reminds me of Dr. Adrian Thompson's pioneering experiment with evolutionary techniques on an FPGA [1][2].

He evolved a tone discriminator on real-world hardware, and after a few thousand generations the resulting circuit was one no engineer would ever have imagined - it made use of transistors operating outside their saturation region, and of subtle secondary magnetic or PSU-line effects from nearby gates that weren't even connected to the logic pathways. But it was effective and amazingly space-efficient.

Physics in the real world offers an incredibly rich and vast set of variables for evolution to play with, and I feel like our attempts to simulate it in software constructs may be too limiting to yield results approaching AGI.

[1] Article: http://www.damninteresting.com/on-the-origin-of-circuits/

[2] Paper: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50...


Thanks for the great links! The article's a bit hammy, but the paper is fantastic - papers on evolutionary training always make evolution in biology more understandable (in my own mind anyway).


Yeah I agree the paper is way better than the article (and is refreshingly approachable).


Why do you think it is that this evolutionary approach at the circuit level isn't being pursued? It looks as if it could produce cheap and smart devices that no human engineer could ever come up with. Or are you aware of any such projects?

Thank you for sharing this!


I believe it's quite prone to overfitting; I vaguely recall reading an article where an evolutionary design developed on an FPGA had its bitstream applied to another FPGA of the same model (that is, it should have been identical), but it failed to operate on the new hardware. Something to do with spooky quantum(?)-level tunneling effects that stopped working when there was even the tiniest physical variation in the hardware.


Sure. But why not make that into one of the constraints? For instance, running the algorithm on many boards in parallel, and requiring that the found solution must work on all of them.
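
Roughly something like this, maybe (a toy Python sketch; program() and measure() are hypothetical stand-ins for whatever the real test rig does):

    # Score a candidate bitstream on several physical boards and keep only
    # the worst-case result, so the GA can't latch onto the quirks of one chip.
    def multi_board_fitness(bitstream, boards, program, measure):
        scores = []
        for board in boards:
            program(board, bitstream)      # configure this particular FPGA
            scores.append(measure(board))  # e.g. how cleanly it separates the two tones
        return min(scores)                 # the solution has to work on all of them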


You could, but then you would need to make sure you have examples from several different batches to avoid behaviour specific to one manufacturing run, and hope that going forward the FPGA manufacturer doesn't make any changes that keep the same specification but alter the internal implementation...

I suspect you'd be better served evolving your FPGA design on a simulator that doesn't model any of the lower-level physical phenomena, and then running the design on real hardware to validate it. But, I know next to nothing about FPGA stuff so I have no idea how useful that would be in reality / how physically accurate the simulation is.


> ...evolving your FPGA design on a simulator that doesn't model any of the lower-level physical phenomena

Sure, but then you trade away so many degrees of freedom offered on real hardware.

Near the end of the paper Thompson tries porting the evolved circuit to a different area of silicon, and I think found that a "mini-training" of 100 further generations restored performance.

He also speculates about a farm of FPGAs sampled from different batches, as you suggest.

I suspect in production the engineering approach we take for granted would change - e.g. maybe you'd load a "species" template that gets coarsely close results, then each individual unit is tailored/optimized with a shorter, accelerated training stint. Kind of like how it works in humans (and other creatures who are born with instincts but finish learning as they grow).
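
As a toy sketch of that two-stage idea (all names made up; evaluate_on_device() stands in for whatever self-test each unit would run):

    import random

    # Start from the shared "species" genome, then run a short per-device
    # hill-climb so each unit adapts to its own physical quirks.
    def tailor(species_genome, evaluate_on_device, steps=100, rate=0.02):
        best = list(species_genome)                 # genome as a list of 0/1 bits
        best_score = evaluate_on_device(best)
        for _ in range(steps):                      # the short "accelerated" stint
            candidate = [b ^ 1 if random.random() < rate else b for b in best]
            score = evaluate_on_device(candidate)
            if score > best_score:                  # keep mutations that help this unit
                best, best_score = candidate, score
        return best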


Also, one would-be researcher here shared their frustration that modern FPGAs don't expose such low-level rewiring abilities, which presumably makes it harder to exploit hardware peculiarities:

https://www.reddit.com/r/MachineLearning/comments/2t5ozk/com...


Good Reddit thread! Thank you.


That reminds me of the Golem project - http://demo.cs.brandeis.edu/golem/

I remember running that as a screen saver back in the day.


swangin.mp4 AKA "Flexible Muscle-Based Locomotion for Bipedal Creatures" also has a similar vibe: https://youtu.be/pgaEE27nsQw


Y’all might also enjoy the GOLEM project: http://www.demo.cs.brandeis.edu/golem/

It's of a similar age and has similar goals, but evolved physical creatures with a screensaver.

I ran that screensaver as a middle schooler and happened to work with Prof. Lipson at the Computational Synthesis Lab in college.

Like other posters here, I feel genetic algorithms got a bit overshadowed by neural networks. Circa 2010, GAs were capable of some real feats that still seem cutting-edge today: deriving the full set of differential equations of metabolism for a bacterium, self-modeling through exploration, finding fundamental laws of physics by watching a double-pendulum video, and more.

A ton of good came out of that lab, including a big part of modern open-source 3d printing (which was originally pursued to print the multi-material GOLEM robots!)


“results from a research project involving simulated Darwinian evolutions of virtual block creatures”

It would be interesting to run this software today… it would probably run on a phone now.


The CM-5 it ran on probably could do a few Gflops peak.


https://www.osti.gov/biblio/46248-gflops-molecular-dynamics-...

> Typical production runs show sustained performance (including communication) in the range of 47--50 GFlops on a 1024 node CM-5 with vector units (VUs).

---

I did a comparison of a Raspberry Pi to a Cray-1. It was... very impressive to see how far things have come.


That comparison sounds like it’d be a hit HN post.


                 Cray-1                 RPi           Factor
    Price        $33M (2020 dollars)    $75 (8GB)     440,000x
    Weight       5.5 tons               45 grams       74,510x
    Power        115 kW (@208 V 400Hz)  5.1 W          23,000x
    Memory       8.39 MB                8 GB           1/953x
    Performance  160 MFLOPS             9.69 GFLOPS    1/60.56x
Not too much to it. Just a couple numbers.


Got me to check the Sims paper. "With this approach, an evolution with population size 300, run for 100 generations, might take around three hours to complete on a 32 processor CM-5."

I guess that's 32 nodes in the terms of your quote?
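
If so, scaling the quoted figure linearly, 32/1024 of ~48 GFlops comes out to roughly 1.5 GFlops sustained - in the same ballpark as the "few Gflops peak" guess above.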


Jesus Christ. Just think of what those people back in 1994 must have expected from us here in 2022. They would be utterly disappointed, if only they remembered.


I thought genetic programming was going to be The Way Forward for AI, because it seemed to take an intractable problem (write a strong AI) and turn it into one that might just be tractable (write a fitness function that can recognise a strong AI). Then neural networks came from the back of the pack to dominate the field, and here we are. At least we're really good at pattern matching now.


Nothing wrong with combining genetic programming with neural networks; there are many ways to do it. For example, you can have a genetically evolved network that produces the actual network(s) (weights, locations, connections). To me the biggest issue is escaping local minima as efficiently as other methods do (it doesn't beat deep RL methods): in a simple sense it just stops evolving, and it's really hard to figure out why.
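
For the simplest direct-encoding flavour of that (not the fancier "network that produces a network" scheme), a toy sketch of evolving a weight vector against a fitness function:

    import numpy as np

    # Toy neuroevolution loop: treat the network's weights as the genome,
    # keep the top fifth each generation, and mutate copies with Gaussian noise.
    def evolve(fitness, n_weights, pop_size=50, generations=100, sigma=0.1):
        pop = np.random.randn(pop_size, n_weights)
        for _ in range(generations):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-pop_size // 5:]]             # best 20%
            children = parents[np.random.randint(len(parents), size=pop_size)]
            pop = children + sigma * np.random.randn(pop_size, n_weights)  # mutate
        return pop[np.argmax([fitness(w) for w in pop])]

Here fitness() would wrap running the network in the environment and scoring the result.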


But I don't really understand. If they could create those virtual creatures in 1994... how come we don't have any real-world machines that can do that? Why do our robot lawnmowers still use wheels?


If I had to guess, it's because the physics model that these creatures are being tested against is really simple, and the physics model in your garden is really complex. Also, for this particular application, we shouldn't ignore the fact that wheels are cheaper.


Good point. Though would that same evolutionary algorithm work for adapting to more complex environments?

Thinking about it, why aren't self-driving cars "let loose" in a - real or virtual - playground environment with an ever-increasing amount of complexity, given a bunch of rules and goals ("Do not crash into anything", "Stay within the lane", "Do not incur a fine", etc.)... and then left to figure out the "how" themselves?

Sounds like a much more robust approach than teaching them to emulate human drivers (which is what I think Tesla is doing).
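
As a toy sketch of that loop (make_env() and evolve_until() are hypothetical; all the real work hides inside them):

    # Curriculum idea: only move to a harder playground once the agent
    # reliably satisfies the rules in the current one.
    RULES = ["do not crash into anything", "stay within the lane", "do not incur a fine"]

    def curriculum(agent, make_env, evolve_until, levels=10):
        for level in range(levels):
            env = make_env(level)                    # empty lot -> traffic -> pedestrians...
            agent = evolve_until(agent, env, RULES)  # evolve/train until the rules hold
        return agent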


Leo... we* don't have any.


His name is really Sims?


Yet another example of nominative determinism: https://en.wikipedia.org/wiki/Nominative_determinism


For other Sims, Ben Sims is a techno DJ: https://ra.co/dj/bensims


How were the creatures actually 'evolved' or even 'born' in the first place? What biological rules were enforced over their development? How did they learn to overcome obstacles and opponents?

This is very interesting, and especially intriguing since this is a 1994 video and I haven't seen any modern examples of this, which would be equally interesting.


Wow -- I remember seeing this at the time!

Thanks so much for this. I'm amazed it is still there.


Is the source available? (Assuming it is CM-Lisp)


I don't think so - at least, I've never seen it shared, and this project shows up often.



