Embodied Minds and Cognitive Agents
- Levin’s work focuses on embodied minds and what it means to be a cognitive agent in the physical universe. He studies “mind” in unconventional substrates.
- He discusses collective cell intelligence during embryonic development and regeneration, and how his lab has applied bioelectrical control to detect and normalize cancer.
- Binary categories (machine/human, living/non-living, emergent/non-emergent) are misleading. There’s a spectrum of diverse intelligence, and what *matters* is defining what constitutes agency and its minimal requirements.
Key Projects and Findings
- *Electrical Memory Rewriting:* Developed molecular tools to read and write electrical memories of non-neural cells. This enables manipulating developmental outcomes, like two-headed flatworms (heritable, non-genetic change) and tadpoles with eyes on their tails (demonstrating neural plasticity).
- *Latent Capabilities:* Xenobots and Anthrobots (made from frog and human cells, respectively) reveal unexpected capabilities when cells are placed in new environments. This highlights biological plasticity.
- *Morphospace:* A 24-hour drug application on adult frogs, which normally lack leg regeneration, promotes regrowth of the limb; the regeneration then takes about 18 months.
- *Emergent Sorting Agency:* Emergence here is essentially surprise in the observer: simple, deterministic sorting algorithms exhibit unexpected capabilities, like delayed gratification and clustering, when treated as distributed agents, emphasizing our lack of intuition about emergent agency (see the sketch below).
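A minimal sketch of the setup, under stated assumptions: the random scheduling, the `frozen` option, and the `sortedness` metric are illustrative choices, not Levin’s actual implementation. Each array element is an agent that wakes up at random, compares itself with its right-hand neighbor, and swaps if they are out of order; no global controller ever sees the whole array.

```python
import random

def sortedness(xs):
    """Fraction of adjacent pairs already in order: a crude progress metric."""
    pairs = list(zip(xs, xs[1:]))
    return sum(a <= b for a, b in pairs) / len(pairs)

def distributed_sort(xs, frozen=frozenset(), max_steps=20_000, seed=0):
    """Treat each element as an agent: on its turn it compares itself with
    its right-hand neighbor and swaps if out of order. Values in `frozen`
    model 'broken' agents that never initiate a swap."""
    rng = random.Random(seed)
    xs = list(xs)
    for step in range(max_steps):
        i = rng.randrange(len(xs) - 1)           # a random agent wakes up
        if xs[i] in frozen:
            continue                             # broken agent does nothing
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]  # purely local, pairwise fix
        if sortedness(xs) == 1.0:
            return xs, step + 1
    return xs, max_steps

values = random.Random(1).sample(range(30), 30)
result, steps = distributed_sort(values)
print(f"sorted in {steps} local steps")
```

Passing a non-empty `frozen` set lets the rest of the array sort itself around the broken agents as far as it can, which is the kind of behavior the experiments probe.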
Philosophical Implications (vs. AI)
- *All terms* for distinctions around sentience, machines, and biology are “engineering protocol claims”: a model, *not* absolute reality. The correct definition should come down to empirical evidence.
- Emergence is subjective, an expression of surprise in an observer. It’s not a binary property, but relative to the observer’s predictive capacity.
- The life/machine distinction is not valuable; different frames of interaction are appropriate for different systems (orthopedic surgeon vs. psychotherapist). He isn’t sure that *life* itself has an objective definition.
- Scale of intelligence is a critical concept. Intelligence (problem-solving) can exist at very small scales (cells, sorting algorithms).
- Goals are what evolutionary agents change and develop to achieve different outcomes through different interactions with obstacles and situations; they are *not* predictable and determined solely by what the agent is experiencing right now.
- *Inner Perspective:* Systems have varying degrees of “inner perspective” (their own model of the world), which is relevant to understanding their behavior. The less predictable a system is from the outside, the more likely it has an inner world model and acts on that model.
- The relevant definition of an agent requires the concept of *goals.*
- There may be no *zero* level of intelligence: least-action laws in physics suggest that agency, with some minimal degree of goal-directedness, is an inherent property of matter. Living organisms just build this to a *higher scale*.
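For reference, the least-action principle invoked here is the standard one from classical mechanics; reading it as a minimal seed of goal-directedness is Levin’s framing, not something the equation asserts on its own. A physical trajectory $q(t)$ is one that makes the action stationary:

$$ S[q] = \int_{t_0}^{t_1} L(q, \dot{q}, t)\,dt, \qquad \delta S = 0 $$

so even simple matter behaves as if it were “selecting” the path that extremizes a quantity.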
Advice/Inspiration for AI
- *Biological Principles:* Biological systems provide valuable lessons (multi-scale competency, offloading information, robustness). Biology doesn’t “create the solutions” directly; it evolves problem-solvers that discover them.
- There are no *objective definitions of intelligence* from the philosophical armchair; any definition will vary with the observer’s perceptions, the cognitive *interpretability tools* available, and the *perturbation tests* the experimenter can run on the organism.
- There is no magic to the organic, just good problem-solving.
- *Embodiment*: Embodiment isn’t limited to 3D space; biology demonstrates multiple “spaces” (chemical, anatomical, etc.) where intelligence can operate.
- Not all experiences and thoughts are linked to, or created by, actual tangible experiences.
- *Symbol Grounding:* Grounding is a gradual process, not binary. Humans confabulate, and much of our cognition isn’t grounded.
- Intelligence does *not* have to be at the highest level; it can and should be evaluated against whatever criteria are applicable (worms vs. mice vs. humans).
- Human concerns about AI mirror human *existential* issues.
- We model other agents based on how well they “play well” with us (like factory-farmed cows), and AI should be tested for its goals with perturbation tests: place barriers between the agent and its goals and observe what it does (see the sketch after this list).
- *AI Tools vs. Agents:* He suggests sticking with “tool” framing and usage, so that we don’t generate trillions of beings with inherent moral agency, a very real existential threat that we are, as a species, *currently* bad at confronting.
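A toy illustration of such a perturbation test; the grid world and the names `bfs_path` and `perturbation_test` are hypothetical, invented for this sketch. The protocol: attribute a goal to an agent, obstruct the direct route, and check whether it re-routes. Persistence through novel barriers is evidence the goal attribution was correct.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid (0 = free, 1 = wall); None if cut off."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                          # reconstruct the path taken
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None                                   # goal unreachable: agent "gives up"

def perturbation_test(grid, start, goal, barrier):
    """Run the agent toward its presumed goal, then obstruct the direct route
    and rerun. Re-routing is evidence for the goal attribution; failing to
    reach the goal at all (None) is evidence against."""
    baseline = bfs_path(grid, start, goal)
    blocked = [row[:] for row in grid]
    for r, c in barrier:
        blocked[r][c] = 1
    return baseline, bfs_path(blocked, start, goal)

open_grid = [[0] * 5 for _ in range(5)]
wall = [(r, 2) for r in range(4)]                 # block column 2 except row 4
before, after = perturbation_test(open_grid, (2, 0), (2, 4), wall)
print("direct route:", len(before) - 1, "moves")  # 4 moves straight across
print("re-routed:", len(after) - 1, "moves")      # 8 moves, detouring via row 4
```

The same idea scales in spirit: the more novel the barriers an agent can work around while still reaching the same end state, the higher it sits on the competency scale.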
AI Challenges and Biological Parallels
- *Emergent Goals:* The field is poorly educated about emergent goals and struggles to understand when, where, and why intelligence arises, even among species that already inhabit the Earth. We lack a principled framework for understanding “alien” mindsets.
- The goal isn’t just to build machines that are like us; that question only becomes pressing now that agents can act on the basis of language, rather than giving simple responses to simple commands.
- *Memory:* Biological memory is far more robust and adaptable (memories survive caterpillar-to-butterfly metamorphosis). The ability of subsystems to reinterpret and repurpose stored information is key to new advances.
- *Robustness:* Biological systems exhibit remarkable robustness despite imperfections and environmental changes. AI systems, in contrast, can display intelligence on levels beyond ours but then *also* fail catastrophically on other simple prompts.
- The danger lies with *us*, in how *we* handle and react to any intelligence, particularly an unfamiliar one. Many of the issues in our world come from how we deal with animals that, on some level, should be able to expect safety from being endangered or exploited for production, and currently can’t, to name an obvious example. We have serious shortcomings when working with species or goals that conflict with our needs and convenience.
- Danger can come from agency at any scale. A very unintelligent agent can cause a catastrophic outcome, regardless of intent, simply because of the fragile situation we have built for ourselves. The large-versus-small-agent debate is irrelevant when the world’s protections against compromise are poor, fragile, and brittle, and our social and legal structures are just as easy to collapse in today’s context.
- *Goal Design:* How will humanity choose to deal with new goal-directed agents that don’t match our existing models? The answer will shape, at every scale, how *we*, as a species, approach all other intelligent agents in day-to-day life.
- Biological beings are machines, yes, but *also* the best goal-directed agents we know, capable of achieving a much better life so long as they can expand their cognitive scale to see better paths.