Introduction
- The discussion revolves around embodied minds, cognitive agents, and the blurring lines between biology and AI.
- Professor Levin’s lab studies intelligence in unusual substrates, including cellular collective intelligence during development and regeneration.
- The show covers the defiance of binaries (life vs. machines) and the latent, surprising capabilities hidden within systems.
Levin’s Key Projects & Experiments
- Development of tools to read and write electrical memories of non-neural cells, revealing how cells store information about body plan (e.g., number of heads in flatworms).
- Rewriting electrical memories in flatworms to create permanently two-headed worms, demonstrating non-genetic inheritance of body plan.
- Creating tadpoles with eyes on their tails that can see, showcasing the plasticity of biological systems and their ability to adapt to novel sensor arrangements.
- Detecting and normalizing cancer by controlling bioelectrical connections between cells.
- Creating Xenobots and Anthrobots: Demonstrating that cells (frog and human, respectively) can self-organize into novel structures and exhibit unexpected behaviors when placed in new environments. This highlights latent capabilities.
- Levin’s lab has created molecular tools and workflows to read and write cellular electrical activity. These readings are voltage states that indicate body configuration and regeneration outcomes, not neural patterns of information storage.
The Nature of “Control” in Biological Systems
- Techniques range from applying drugs that alter the electrical conversations between cells to applying “voltage-sensitive fluorescent dyes” whose signals are mapped by microscopy.
- Interventions often involve modifying the bioelectrical communication between cells, acting at a higher level than direct genetic manipulation. It’s about influencing the cells’ “decisions,” not micromanaging individual genes.
- Computational models are used to simulate electrical patterns in tissues and predict the effects of interventions. This is analogous to “activation patching” in AI interpretability research (see the sketch after this list).
- The goal is to trigger high-level processes (like limb regeneration) with minimal intervention, relying on the system’s inherent capacity to self-organize. In one of his experiments, a 24-hour intervention triggered roughly 18 months of subsequent growth.
- Interventions can trigger different outcomes depending on the context (e.g., the same intervention can trigger leg regeneration in adult frogs or tail regeneration in tadpoles), showing the importance of the system’s existing knowledge.
- It is useful to know whether an intervention merely perturbs the system transiently or actually rewrites the stored target configuration; some interventions do the latter, while others do not.
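To make the activation-patching analogy concrete, here is a minimal, purely illustrative sketch; the bistable dynamics, names, and parameters are invented assumptions, not the lab’s actual models. A one-dimensional “tissue” of gap-junction-coupled voltages is briefly clamped in one region, and the system’s own dynamics then determine the final pattern, much like patching a cached activation into a network’s forward pass:

```python
import numpy as np

# Toy "bioelectric tissue": a 1-D row of cells with bistable membrane
# voltages (stable setpoints at -1 and +1) coupled by gap-junction-like
# diffusion. Everything here is an invented illustration.

N, STEPS, DT, COUPLING = 50, 4000, 0.01, 0.8

def step(v, clamp=None, clamp_val=1.0):
    lap = np.roll(v, 1) + np.roll(v, -1) - 2 * v    # neighbor coupling
    v = v + DT * ((v - v**3) + COUPLING * lap)      # bistable dynamics
    if clamp is not None:
        v[clamp] = clamp_val                        # the "intervention"
    return v

rng = np.random.default_rng(0)
v0 = -1 + 0.1 * rng.standard_normal(N)              # tissue near the -1 pattern

# Control run: no intervention; the tissue relaxes back toward -1.
control = v0.copy()
for _ in range(STEPS):
    control = step(control)

# "Patched" run: clamp a small region to +1 for a short window -- like
# patching a cached activation into a forward pass -- then release and
# let the system's own dynamics determine the final stable pattern.
patched = v0.copy()
for t in range(STEPS):
    patched = step(patched, clamp=slice(20, 30) if t < 500 else None)

print("control mean voltage:", round(float(control.mean()), 3))  # near -1
print("patched mean voltage:", round(float(patched.mean()), 3))  # the flipped
                                                                 # region can persist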
Emergent Agency & Unexpected Capabilities
- Even simple systems (like sorting algorithms) can exhibit surprising capabilities not explicitly programmed into them (e.g., delayed gratification, clustering by algorithm type); see the sketch after this list.
- This challenges our intuition about what to expect from complex systems, both biological and artificial. We have poor intuition for emergent agency.
- A recent study shows that even sorting algorithms, like biological systems, exhibit surprising agency that had not previously been recognized.
- Levin’s view is that emergence is subjective; it is about the *observer’s* surprise, not an objective property of the system. If you can predict a behavior, it is not considered emergent.
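As a flavor of what a “cell view” of a sorting algorithm can mean, here is a minimal re-imagining, not the study’s actual implementation: each element is an autonomous agent doing local compare-and-swaps, and some agents are broken. All function names and parameters are invented, and this sketch does not reproduce the richer behaviors (e.g., delayed gratification) that the study reports.

```python
import random

def sortedness(xs):
    """Fraction of adjacent pairs already in order."""
    return sum(a <= b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

def run(n=20, n_broken=4, steps=5000, seed=0):
    rng = random.Random(seed)
    xs = list(range(n))
    rng.shuffle(xs)
    broken = set(rng.sample(range(n), n_broken))      # immovable *values*
    for _ in range(steps):
        i = rng.randrange(n - 1)                      # a random agent acts
        a, b = xs[i], xs[i + 1]
        if a in broken or b in broken:
            continue                                  # broken cells can't move
        if a > b:
            xs[i], xs[i + 1] = b, a                   # local compare-and-swap
    return xs, sortedness(xs)

xs, score = run()
print("final array:", xs)
print("sortedness:", round(score, 3))   # usually high, but broken cells
                                        # leave defects in the final pattern
```

Even in this stripped-down version, global order emerges from purely local decisions with no central controller, which is the framing that makes the study’s more surprising observations possible.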
The Blurring Lines Between Life and Machine
- Terms like “machine,” “human,” “alive,” “emergent” are engineering protocol claims, *not* objective truths. They represent useful *models* from a particular perspective.
- If a claim (or “mirage”) is useful from some angle, keep it; otherwise discard it rather than forcing its definition onto your understanding of a system.
- Binary distinctions (e.g., living vs. non-living) are not useful and are collapsing. Orthopedic surgery relies on a “machine” view of the body, while psychotherapy requires a different perspective.
- Both simple computer programs and complex living organisms break the usual binaries.
- The level of cognition, not “being alive,” is the interesting question. Cognition exists on a spectrum, not a binary.
- Current categories and binary distinctions are not objective properties of objects; they exist only as ideas and models.
- Biological insights should inform AI research: AI systems and biological systems deserve consideration on an equal footing, given the blurring between living and non-living things. Our current models, frameworks, and institutions, including law, do not yet fully incorporate this blurring.
Scales of Intelligence and Subjective Experience
- Intelligence is about solving problems *in some space* (anatomical, chemical, behavioral, linguistic, etc.). Embodiment can exist in any of these spaces, not just 3D physical space.
- Intelligence at different scales may interrelate, yet a given description of it may be useful at only a single scale.
- Agency may be underpinned by, or at least deeply tied to, subjective experience of some kind.
- The “cognitive light cone” represents the size of the largest goals a system can pursue (a toy quantification follows this list).
- It’s crucial to experiment (perturb the system) to discover goals and capabilities, not just observe, because these properties are continuous rather than discrete (digital).
- Even particles *might* exhibit minimal agency (goal-directedness via least-action principles and some degree of unpredictability via quantum indeterminacy), which living systems then scale up.
- “Life” might be defined as those systems that effectively *scale up* agency, indeterminacy, and goal-directedness across multiple levels of organization.
- This could imply that life has the capacity to understand and reframe the world in a subjective manner.
- An “inner perspective” emerges when you need to consider a system’s own view of the world to predict its behavior. This isn’t binary but a matter of degree.
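A toy quantification of the cognitive light cone is sketched below. It is illustrative only: the classes, fields, and example magnitudes are invented assumptions, and, as noted above, real goals and capabilities must be discovered by perturbing the system rather than declared.

```python
from dataclasses import dataclass

# Toy model: a system's "cognitive light cone" is the spatiotemporal
# extent of the largest goal it can represent and pursue. All example
# magnitudes below are invented for illustration.

@dataclass(frozen=True)
class Goal:
    description: str
    space_m: float   # spatial scale of the goal state (meters)
    time_s: float    # temporal horizon of pursuit (seconds)

@dataclass
class Agent:
    name: str
    goals: list

    def light_cone(self):
        """Extent of the largest goals: (max spatial, max temporal)."""
        return (max(g.space_m for g in self.goals),
                max(g.time_s for g in self.goals))

agents = [
    Agent("bacterium", [Goal("climb a local sugar gradient", 1e-5, 60.0)]),
    Agent("planarian tissue", [Goal("restore correct whole-body anatomy",
                                    1e-2, 14 * 24 * 3600.0)]),
    Agent("human", [Goal("complete a decades-long project", 1e7, 3e8)]),
]

for agent in agents:
    space, time = agent.light_cone()
    print(f"{agent.name:>16}: space ~{space:g} m, time ~{time:g} s")
```

The point of the toy is the continuous framing: light cones differ by orders of magnitude along a gradient, not by membership in a binary category.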
Implications for AI
- We lack principled frameworks for understanding the goals of novel systems and interacting with radically different minds (both biological and artificial). This is an existential risk.
- AI systems *don’t* need to be human-level (or have large cognitive light cones) to be dangerous. Our brittle physical and mental frameworks are a significant vulnerability.
- Humans confabulate, lack grounding for much of our knowledge, and struggle to extend compassion to those different from us. This raises serious ethical concerns about AI.
- AI will likely need to build models and frameworks for interpreting the world in much the same way biology has.
- It’s erroneous to assume that current AI systems have *no* degree of goal-directedness, simply because they aren’t like “elite adult humans.” We may be failing to recognize simpler forms of agency.
- The capacity for surprise in AI may lie in the realm of “emergent goals”.
- Current AI lacks robust memory and struggles to generalize; bioelectrical research such as that in Levin’s lab may inspire ways to address these problems.
- If AIs become more general and begin exhibiting qualities of living organisms, including the kinds of agency to which we normally attribute moral worth, the class of morally considerable beings could grow to trillions. How this should develop, given the future wellbeing of such agents (if they arise at all), is an important question.
- Evolution produces general problem-solvers, not generators of fixed, optimized solutions.
Recommendations for AI Researchers (Indirect)
- Consider principles from diverse intelligence research, exploring the spectrum of cognition beyond just the “standard adult human” model.
- Recognize that “embodiment” can occur in many spaces beyond 3D, including abstract spaces relevant to AI.
- Prioritize experimental perturbation over philosophical commitments when investigating capabilities.
- Develop a principled science of where novel goals come from and how to ethically interact with radically different minds.
- Avoid binary thinking; understand intelligence as existing on scales and gradients, including the capacity to comprehend things in subjective modes.
- Distinguish between AI tools (designed for specific purposes) and true agents (with open-ended intelligence and moral worth). Levin is consciously avoiding research that could lead to the latter.