At ReGen, we believe robotics can help build a regenerative future: one where products, services and experiences not only fit within planetary boundaries, but where the human experience is mind-blowingly more awesome in every single way.
We’ve partnered with many founders rethinking how robots can interact with the natural world: Burnbot, Aigen, Ulysses, Anthrogen and Differential Bio. These teams aren’t just automating single processes—they’re redesigning systems from the ground up.
Right now, the progress in robotics is real: advances in simulation, foundation models, and embodied AI are colliding with new physical capabilities. And while humanoids remain impressive feats of engineering, they represent a narrow interpretation of what robotics could be.
If we want robotics to play a serious role in regenerating the planet — from forests to oceans to soil — we need robots that can operate autonomously in extreme environments.
We believe robotics is at a structural inflection point — not just because of AI, but because of how we design the bodies of robots themselves. What excites us most is the possibility of moving beyond the human form altogether, toward machines that behave less like us and more like the world around us.
One design philosophy that embodies this approach is morphological computation: the idea that intelligence can be embedded not just in software, but in structure, materials, and motion. Biology has always done this — evolving body and control in tandem. We think robotics will do the same.
What is morphological computation?
Most robots today are still built in what ARIA calls the “Genesis paradigm”: the body is designed first, and software is later ‘breathed’ into it. It mirrors the biblical creation story — form first, cognition second.
Biology doesn’t work this way. Nervous systems and bodies co-evolve. A bird’s wing shape, its joint stiffness, and how it senses air all develop together. The result is a system that offloads much of its intelligence into its morphology — reacting locally, cheaply, and fast.
Morphological computation takes inspiration from this. It’s a robotics design principle that distributes control across a machine’s mechanical form. It means embedding intelligence in the structure itself — in compliant joints, soft actuators, and materials that react to the world without always needing permission from the CPU. Instead of using dense perception stacks to build detailed models of the environment, this approach lets the body itself interpret and respond to the physical world. The body is not just the interface — it’s part of the computation.
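To make this concrete, here is a minimal toy sketch (illustrative parameters, not any real robot): a passive spring-damper “leg” absorbs a step change in terrain height with no sensing and no controller, because the mechanics themselves do the regulating.

```python
# Toy sketch of morphological computation: a passive spring-damper "leg"
# absorbing a 2 cm terrain step with zero control input. All parameters
# are illustrative assumptions.
m, k, c = 1.0, 200.0, 8.0       # mass (kg), stiffness (N/m), damping (N*s/m)
dt, T = 1e-3, 2.0               # timestep (s), horizon (s)
x, v = 0.0, 0.0                 # foot displacement (m) and velocity (m/s)
ground = lambda t: 0.02 if t > 0.5 else 0.0   # terrain steps up at t = 0.5 s

for step in range(int(T / dt)):
    t = step * dt
    # The "computation" happens in the mechanics: spring and damper react
    # to the terrain offset instantly, with no sensing, no CPU, no latency.
    a = (-k * (x - ground(t)) - c * v) / m
    v += a * dt
    x += v * dt

print(f"offset from terrain after {T} s: {abs(x - ground(T)) * 1000:.2f} mm")
```

No perception stack and no model of the terrain: in this sketch, the spring constant is the policy.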
Nature’s blueprint: the robofauna field guide
Biological organisms are masters of morphological intelligence. Evolution has optimised the body for control: pushing complexity to the edge so the brain doesn’t have to do as much. Form performs function.
Octopuses coordinate eight fully soft limbs with a decentralised nervous system. Insects flap their wings at resonant frequencies using asynchronous muscles — they adapt to load mid-flight using passive mechanics. Snakes propagate body waves to slither across uneven terrain without needing a map. A mountain goat’s hoof has been shown to exhibit passive mechanical intelligence — the shape alone helps prevent slippage without needing neural feedback. A kangaroo’s hop stores and returns elastic energy in its tendons, so hopping faster costs it barely any extra effort. A leatherback turtle can compress and expand its lungs to survive immense deep-sea pressure.
In all of these examples, the organism combines sensing, actuation, and feedback into a co-evolved physical system. The body filters signals, stores energy, and adapts in real time. If part of it breaks, the system often still works. It’s a resilient, distributed design.
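One of these examples is easy to quantify. If a spring-loaded wing is driven through a fixed stroke, the peak muscle force is smallest at the resonant frequency, where the spring exactly cancels the wing’s inertia and the muscle only pays for damping losses. The numbers below are made-up illustrative values, not insect measurements:

```python
import numpy as np

# Peak drive force for a spring-mass "wing" moved through a fixed stroke
# x(t) = A*sin(w*t). The drive must supply F = m*x'' + c*x' + k*x, whose
# amplitude is A*sqrt((k - m*w^2)^2 + (c*w)^2). All values illustrative.
m, k, c, A = 1e-4, 4.0, 2e-3, 0.01    # mass (kg), stiffness (N/m), damping, stroke (m)
f_res = np.sqrt(k / m) / (2 * np.pi)  # resonant frequency of the wing-spring pair

for f in (0.5 * f_res, f_res, 2.0 * f_res):
    w = 2 * np.pi * f
    peak = A * np.hypot(k - m * w**2, c * w)   # peak force over one stroke cycle
    print(f"{f:6.1f} Hz -> peak force {peak * 1e3:7.2f} mN")
```

Flapping at resonance cuts the required force by more than an order of magnitude relative to flapping at twice that frequency: intelligence stored in stiffness rather than in neurons.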
What this enables:
Faster response times than centralised control
Lower energy consumption
Slower degradation and greater damage resilience
Biology doesn’t separate body and mind, and we think robotics shouldn’t either.
Unlocking scalable, adaptive robots
Today, most robots are built for structured environments: warehouses, factory floors, labs.
If we want robots to operate in the real world, they need to be robust, adaptable, and computationally lean.
This is because the real world is messier. Unpredictable weather, deformable terrain, variable lighting, unstructured feedback. These environments are non-differentiable — hard to simulate, harder to model, and difficult to generalise across.
And while AI models like Gemini and π0 offer stunning generalisation, they’re compute-heavy. These models require datacenter-scale infrastructure and don’t run efficiently on mobile platforms. Even advanced edge processors like NVIDIA’s Jetson Orin can’t deliver real-time inference for large VLMs under tight power and thermal budgets. In the wild — underwater, underground, off-grid — inference must happen locally. Latency matters. And with Moore’s Law plateauing, we can’t just add more compute. Long-horizon tasks, essential for real-world autonomy, have also proven to be a limitation for current AI models.
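A rough back-of-envelope sketch makes the constraint vivid (assumed round numbers, not vendor benchmarks): autoregressive decoding has to stream the entire weight set through memory for every generated token, so even a modest control rate implies memory traffic far beyond what embedded modules supply.

```python
# Back-of-envelope: why large-VLM inference strains embedded hardware.
# Every number below is an assumption chosen for round figures.
params = 7e9               # assumed model size (parameters)
bytes_per_param = 2        # fp16 weights
tokens_per_decision = 32   # assumed tokens emitted per control decision
control_rate_hz = 10       # assumed control-loop rate

weights_gb = params * bytes_per_param / 1e9
traffic_tb_s = params * bytes_per_param * tokens_per_decision * control_rate_hz / 1e12

print(f"weights alone: {weights_gb:.0f} GB")
print(f"memory traffic: {traffic_tb_s:.1f} TB/s sustained")
# Embedded modules in the Jetson Orin class offer on the order of 0.2 TB/s
# of memory bandwidth: a >20x shortfall, before perception, mapping, or
# safety loops consume their share.
```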
Morphological computation offers a way forward. Instead of making smarter controllers, we make smarter bodies. Bodies that:
Adapt passively to their environment
Offload control via compliant mechanics
Reduce sensor requirements by reacting physically
Enable smaller, interpretable control policies
This radical shift is not just about bio-inspired hardware improvements; it’s about collapsing the mind (software) / body (hardware) divide in robotics entirely.
What’s hard…
Like any new paradigm, morphological computation has required breakthroughs in materials science, hardware design and software. But the momentum is real. We believe meaningful progress has been made across the critical areas of generative design, differentiable simulation and materials:
Design complexity. Bio-inspired robots often employ soft bodies, which are notoriously hard to model from a control perspective: what you win on compliance and flexibility, you have historically lost on responsiveness. Soft structures also have effectively infinite degrees of freedom, far more than conventional rigid kinematic chains, which makes them difficult to simulate at design time; conventional CAD and FEA tools struggle with large deformations and embedded sensors. But novel in-silico tools are being built to level up the existing stack and improve the design of physically embodied soft robots.
Sim-to-real is making real progress. Morphologically complex systems are harder to co-optimise, but tools are emerging that align morphology and control across the whole design-to-deploy pipeline; a toy version of this idea is sketched below.
Materials. Force density and responsiveness have historically been limitations for soft robotics, but there have been leaps and bounds in dielectric elastomer actuators, shape-memory polymers, and soft electrostatics. Low-voltage, embedded-sensing, silent motion is becoming viable, as a quick back-of-envelope pressure calculation below suggests.
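To give a flavour of what “co-optimising morphology and control” means, here is a deliberately minimal sketch with toy dynamics and made-up costs: a passive stiffness (the morphology parameter) and a feedback gain (the control parameter) are tuned jointly by taking gradients through a simulated rollout. Real tools use differentiable simulators and autodiff; finite differences keep this self-contained.

```python
import numpy as np

# Toy co-design: jointly tune passive stiffness k (morphology) and feedback
# gain g (control) on a mass-spring-damper tracking a terrain step.
# Dynamics, costs, and step sizes are all illustrative assumptions.

def rollout_cost(theta, dt=1e-3, T=2.0):
    k, g = theta
    m, c = 1.0, 8.0
    x, v, cost = 0.0, 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        target = 0.02 if t > 0.5 else 0.0        # terrain step to track
        u = g * (target - x)                     # active control force
        a = (-k * (x - target) - c * v + u) / m  # the passive spring works for free
        v += a * dt
        x += v * dt
        cost += ((x - target) ** 2 + 1e-4 * u ** 2) * dt  # error + effort penalty
    return cost

theta0 = np.array([50.0, 0.0])                   # initial stiffness and gain
theta = theta0.copy()
for _ in range(60):
    grad = np.zeros(2)
    for i in range(2):                           # finite-difference gradient
        e = np.zeros(2)
        e[i] = 1e-3
        grad[i] = (rollout_cost(theta + e) - rollout_cost(theta - e)) / 2e-3
    theta -= grad / (np.linalg.norm(grad) + 1e-12)   # normalised descent step

print(f"cost: {rollout_cost(theta0):.2e} -> {rollout_cost(theta):.2e}")
print(f"stiffness k = {theta[0]:.1f}, gain g = {theta[1]:.1f}")
```

Because only the active force is penalised for effort, the optimiser leans on the passive spring: the body absorbs work the controller would otherwise pay for.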
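And for a sense of scale on materials: the workhorse model for dielectric elastomer actuators is the Maxwell-stress relation p = ε0·εr·E², and plugging in ballpark film values (assumptions below, not measured data) already lands in the stress range of skeletal muscle, roughly 0.1 to 0.35 MPa.

```python
# Quick sanity check of dielectric elastomer actuator (DEA) output using
# the Maxwell-stress model p = eps0 * eps_r * E^2. The material values are
# ballpark assumptions for a silicone film, not measurements.
eps0 = 8.854e-12           # vacuum permittivity (F/m)
eps_r = 3.0                # assumed relative permittivity of the film
E = 100e6                  # assumed electric field: 100 V per micrometre

p = eps0 * eps_r * E**2    # effective actuation pressure (Pa)
print(f"actuation pressure: {p / 1e6:.2f} MPa")
```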
…And why we’re optimistic
We’re seeing a new wave of founders building with morphology at the core:
Blue Grit Robotics is applying peristaltic locomotion to trenching, inspired by the mechanics of earthworms. Instead of brute-force digging, they tunnel with no hydraulics, no diesel, and minimal ecosystem disruption.
Squishy Robotics is deploying tensegrity robots for disaster response. Their soft geometry allows for drone-deployable sensors that can survive impact and navigate unstable rubble using body dynamics rather than expensive sensors.
Eelume has built snake-like subsea robots that live on the seafloor, inspecting pipelines with precision and minimal energy.
GOAT is a terrain-agnostic robot that shifts shape — from walking to rolling — to navigate unpredictable environments without overengineering the software stack.
RoBoa is a soft robotic system inspired by snake locomotion, designed to access confined or hazardous environments.
And on the software side, Opteran draws inspiration from insect brains to develop neuromorphic software for autonomous machines and build what they call “natural intelligence”.
Others are building the underlying hardware stack, the core muscle and skeletal primitives that future robotic platforms will rely on: Arthur Robotics, Embodied AI, Pliantics and many more.
Behind many of these startups are world-class research labs pushing the frontier of embodied robotics: EPFL’s CREATE Lab, Bristol Robotics Lab, Imperial’s Morph Lab, ARIA’s Robot Dexterity programme, and BIRLab, among others.
Building with morphology?
Just as GPUs unlocked deep learning, we believe morphological intelligence will unlock real-world autonomy. But we need to build the stack: improved in-silico design and sim-to-real transfer, compliant actuators, sensorised materials, bioinspired skeletons, shape-shifting architectures.
The founders who get this — who treat the robot’s body as part of the intelligence stack — are laying the foundations for the next era of robotics.
If you’re building in this space — whether in the lab, on land, below ground, or underwater — we’d love to hear from you.
Let’s build the next generation of machines!