Dr. Michael Levin on Embodied Minds and Cognitive Agents: Bioelectricity Podcast Notes

Embodied Minds and Cognitive Agents

  • Levin’s work focuses on embodied minds and what it means to be a cognitive agent in the physical universe. He studies “mind” in unconventional substrates.
  • He discusses collective cell intelligence during embryonic development and regeneration, and how his lab has applied bioelectrical control to detect and normalize cancer.
  • Binary categories (machine/human, living/non-living, emergent/non-emergent) are misleading. There’s a spectrum of diverse intelligence, and what *matters* is defining what constitutes agency and its minimal requirements.

Key Projects and Findings

  • *Electrical Memory Rewriting:* Developed molecular tools to read and write electrical memories of non-neural cells. This enables manipulating developmental outcomes, like two-headed flatworms (heritable, non-genetic change) and tadpoles with eyes on their tails (demonstrating neural plasticity).
  • *Latent Capabilities:* Xenobots and Anthrobots (made from frog and human cells, respectively) reveal unexpected capabilities when cells are placed in new environments. This highlights biological plasticity.
  • *Morphospace:* A 24-hour drug treatment applied to adult frogs, which normally cannot regenerate legs, promotes regeneration of the limb; the regrowth itself then takes about 18 months.
  • *Emergent Sorting Agency:* Emergence here is basically surprise in the observer. Simple, deterministic sorting algorithms exhibit unexpected capabilities, like delayed gratification and clustering, when treated as distributed agents, emphasizing our lack of intuition about emergent agency (see the sketch after this list).
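
A minimal, illustrative sketch of the sorting-as-agents idea (my own illustration, not code from Levin’s lab): a “cell view” bubble sort in which each element acts as its own agent, looks only at its right-hand neighbor, and swaps locally when the pair is out of order. It does not reproduce the delayed-gratification or clustering findings, but it shows global order emerging from purely local decisions, with activation order randomized so no agent ever sees the whole array.

```python
import random

def sortedness(arr):
    """Fraction of adjacent pairs already in non-decreasing order."""
    pairs = list(zip(arr, arr[1:]))
    return sum(a <= b for a, b in pairs) / len(pairs)

def cell_view_bubble_sort(values, ticks=200, seed=0):
    """Each element is an agent: when activated, it inspects only its
    right-hand neighbor and swaps if the pair is out of order.
    Activation order is re-randomized every tick, so order emerges
    without any agent knowing the global state."""
    rng = random.Random(seed)
    arr = list(values)
    history = [sortedness(arr)]
    for _ in range(ticks):
        for i in rng.sample(range(len(arr) - 1), len(arr) - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
        history.append(sortedness(arr))
        if history[-1] == 1.0:
            break
    return arr, history

if __name__ == "__main__":
    data = random.Random(42).sample(range(100), 20)
    result, history = cell_view_bubble_sort(data)
    print("fully sorted:", result == sorted(data))
    print("ticks needed:", len(history) - 1)
```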

Philosophical Implications (vs. AI)

  • *All terms* used to draw distinctions about sentience and about machines vs. biology are “engineering protocol claims”: models, *not* absolute reality. The correct definition should come down to empirical evidence.
  • Emergence is subjective, an expression of surprise in an observer. It’s not a binary property, but relative to the observer’s predictive capacity.
  • The life/machine distinction is not valuable; different frames of interaction are appropriate for different systems (orthopedic surgeon vs. psychotherapist). He isn’t sure that *life itself* has an objective definition.
  • Scale of intelligence is a critical concept. Intelligence (problem-solving) can exist at very small scales (cells, sorting algorithms).
  • Goals are what evolved agents change and develop to achieve different outcomes through different interactions with obstacles and situations; they are *not* predictable or determined solely by what the agent is experiencing right now.
  • Inner Perspective: Systems have varying degrees of “inner perspective” (their own model of the world), which is relevant to understanding their behavior. The less predictable a system is, the higher the chance it has an inner world model and can act on that model.
  • The relevant definition of an agent requires the concept of *goals.*
  • There may be no *zero* level of intelligence, because least-action laws in physics suggest that agency with some degree of goal-directedness is an inherent property of physical systems. Living organisms just build this up to a *higher scale* (a compact statement of the least-action principle follows this list).
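
For reference, the “least action” phrase in the last bullet refers to the standard principle of stationary action from classical mechanics; the formula below is textbook physics, while reading it as a minimal form of goal-directedness is Levin’s framing.

```latex
% Principle of stationary ("least") action: among all paths q(t) with
% fixed endpoints, the physically realized one makes S stationary.
S[q] = \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\, dt,
\qquad \delta S = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```

In that reading, the trajectory is singled out by a property of the whole path rather than by instant-to-instant pushes alone, which is the minimal sense of “pursuing an end state” the bullet gestures at.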

Advice/Inspiration for AI

  • *Biological Principles:* Biological systems provide valuable lessons (multi-scale competency, offloading information, robustness), even though biology doesn’t simply “create the solutions” for us.
  • There are no *objective definitions of intelligence* handed down from the philosophical armchair; any working definition shifts with the observer’s perception, the cognitive *interpretability tools* available, and the *perturbation tests* that experimenters can run on the organism.
  • There is no magic to the organic; it is just good at problem-solving.
  • *Embodiment*: Embodiment isn’t limited to 3D space; biology demonstrates multiple “spaces” (chemical, anatomical, etc.) where intelligence can operate.
  • Not all experiences and thoughts are linked to, or created by, actual tangible experiences.
  • *Symbol Grounding:* Grounding is a gradual process, not binary. Humans confabulate, and much of our cognition isn’t grounded.
  • Intelligence does *not* have to be at the highest level; it can and should be evaluated against whatever criteria are applicable (worms vs. mice vs. humans).
  • Human issues about AI mirror human *existential* issues.
  • We build models of other agents based on how well they “play well” with us (factory-farmed cows, for instance), and AI should be tested for its goals with perturbation tests (barriers placed between the agent and its goals; see the sketch after this list).
  • *AI Tools vs. Agents:* He suggests sticking with “tool” usage, which avoids generating trillions of beings with inherent moral agency; that prospect is a very real existential threat that we are, as a species, *currently* bad at confronting.
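
A hedged sketch of what a “barrier between agent and goal” perturbation test could look like in code. The grid world, the breadth-first-search policy standing in for the agent, and the scoring fields are all illustrative assumptions, not a protocol described in the conversation.

```python
from collections import deque

def shortest_path(walls, start, goal, size=7):
    """Stand-in 'agent policy': breadth-first search on a small grid.
    A real test would query the system under study instead of BFS."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < size and 0 <= ny < size
                    and (nx, ny) not in walls and (nx, ny) not in seen):
                seen.add((nx, ny))
                frontier.append(((nx, ny), path + [(nx, ny)]))
    return None  # this policy cannot reach the goal

def perturbation_test(start, goal, barrier):
    """Score goal-directedness by whether the agent still reaches the
    goal after a barrier is dropped across its unperturbed route."""
    baseline = shortest_path(set(), start, goal)
    perturbed = shortest_path(set(barrier), start, goal)
    return {
        "reaches_goal_unperturbed": baseline is not None,
        "reaches_goal_perturbed": perturbed is not None,
        "extra_steps_after_barrier": (len(perturbed) - len(baseline))
                                     if baseline and perturbed else None,
    }

if __name__ == "__main__":
    # A vertical wall with a single gap, dropped between start and goal.
    wall = [(3, y) for y in range(6)]
    print(perturbation_test(start=(0, 3), goal=(6, 3), barrier=wall))
```

The design point is that the test never asks what the agent “is”; it only measures how much ingenuity the agent spends getting around the obstacle to the same goal.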

AI Challenges and Biological Parallels

  • *Emergent Goals:* The field is poorly educated about emergent goals and has real trouble understanding when, where, and why intelligence arises, even among species that already inhabit this earth. We lack a principled framework for understanding “alien” mindsets.
  • The goal isn’t just to build machines that are like us; that question only becomes pressing now that agents can take actions driven by language rather than giving simple responses to simple commands.
  • *Memory:* Biological memory is far more robust and adaptable (caterpillar to butterfly). The ability of subsystems to re-interpret and re-apply stored information is key to new advances.
  • *Robustness:* Biological systems exhibit remarkable robustness despite imperfections and environmental changes. AI systems, in contrast, can display intelligence at levels beyond ours yet *also* fail catastrophically on other simple prompts.
  • The danger lies with *us*, in how *we* handle and react to any intelligence, particularly unfamiliar kinds. Many of the problems in our world already come from how we deal with animals that, on some level, ought to be able to expect safety from being endangered or exploited for production, and currently can’t, to name an obvious example. We have very large shortcomings when dealing with species/goals that conflict with our needs/convenience.
  • Danger can come from agents at any scale. A very unintelligent agent can cause a catastrophic, world-ending outcome purely because of the situation we have built for ourselves, regardless of intent. The large-agent vs. small-agent debate is largely irrelevant when the world itself has very poor, fragile, brittle protections against being compromised, and when our social and legal structures are equally easy to collapse in today’s context and systems.
  • *Goal Design:* How does humanity choose to deal with new, goal-directed agents that don’t directly match our models? The answer will shape, at every scale, how *we* approach all other intelligent agents in our day-to-day decisions.
  • Biological beings are machines, yes, but they are *also* the best agents with goal-oriented pursuits, capable of achieving a much, much better life so long as they can expand their cognitive scale to see better paths.
