Embodied Minds and Cognitive Agents with Dr Michael Levin – YouTube Bioelectricity Podcast Notes

Introduction

  • The discussion revolves around embodied minds, cognitive agents, and the blurring lines between biology and AI.
  • Professor Levin’s lab studies intelligence in unusual substrates, including cellular collective intelligence during development and regeneration.
  • The show covers the breakdown of familiar binaries (life vs. machines) and the surprising latent capabilities hidden within systems.

Levin’s Key Projects & Experiments

  • Development of tools to read and write the electrical memories of non-neural cells, revealing how cells store information about the body plan (e.g., the number of heads in flatworms).
  • Rewriting electrical memories in flatworms to create permanently two-headed worms, demonstrating non-genetic inheritance of body plan.
  • Creating tadpoles with eyes on their tails that can see, showcasing the plasticity of biological systems and their ability to adapt to novel sensor arrangements.
  • Detecting and normalizing cancer by controlling bioelectrical connections between cells.
  • Creating Xenobots and Anthrobots: Demonstrating that cells (frog and human, respectively) can self-organize into novel structures and exhibit unexpected behaviors when placed in new environments. This highlights latent capabilities.
  • Levin’s lab has created molecular tools and workflows for reading and writing cellular electrical activity. These readings are voltage states that indicate body configuration and regeneration outcomes, not neural patterns of information storage.

The Nature of “Control” in Biological Systems

  • Techniques range from applying drugs that alter the electrical conversations between cells to applying voltage-sensitive fluorescent dyes whose signals are mapped by microscopy.
  • Interventions often involve modifying the bioelectrical communication between cells, acting at a higher level than direct genetic manipulation. It’s about influencing the cells’ “decisions,” not micromanaging individual genes.
  • Computational models are used to simulate electrical patterns in tissues and predict the effects of interventions. This is analogous to “activation patching” in AI interpretability research (a minimal sketch follows this list).
  • The goal is to trigger high-level processes (like limb regeneration) with minimal intervention, relying on the system’s inherent capacity to self-organize; in one of his experiments, a 24-hour intervention set off roughly 18 months of growth.
  • Interventions can trigger different outcomes depending on the context (e.g., the same intervention can trigger leg regeneration in adult frogs or tail regeneration in tadpoles), showing the importance of the system’s existing knowledge.
  • It is useful to know whether an intervention lets the biological configuration adapt as it is introduced; some interventions trigger this kind of reorganization, while others do not.
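
To make the “activation patching” analogy concrete, here is a minimal sketch of the technique from AI interpretability, assuming a toy PyTorch model (the model, layer choice, and inputs are hypothetical, for illustration only): an internal activation cached from one run is spliced into another run, much as a stored bioelectric state might be written into tissue to steer its outcome.

```python
import torch
import torch.nn as nn

# Toy stand-in for any network whose internal states we want to edit
# (hypothetical; activation patching itself is model-agnostic).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def capture_activation(x, layer_idx):
    """Run the model and cache the output of the layer at layer_idx."""
    cache = {}
    handle = model[layer_idx].register_forward_hook(
        lambda mod, inp, out: cache.update(act=out.detach())
    )
    model(x)
    handle.remove()
    return cache["act"]

def run_with_patch(x, layer_idx, patched_act):
    """Re-run the model, overwriting that layer's output with patched_act.
    Returning a value from a forward hook replaces the layer's output."""
    handle = model[layer_idx].register_forward_hook(
        lambda mod, inp, out: patched_act
    )
    out = model(x)
    handle.remove()
    return out

# Patch a "clean" run's activation into a "corrupted" run and observe how
# much of the clean behavior is restored -- the causal test the analogy
# above refers to.
clean, corrupted = torch.randn(1, 8), torch.randn(1, 8)
clean_act = capture_activation(clean, layer_idx=0)
print(run_with_patch(corrupted, layer_idx=0, patched_act=clean_act))
```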

Emergent Agency & Unexpected Capabilities

  • Even simple systems (like sorting algorithms) can exhibit surprising capabilities not explicitly programmed into them (e.g., delayed gratification, clustering by algorithm type); a toy version is sketched after this list.
  • This challenges our intuition about what to expect from complex systems, both biological and artificial. We have poor intuition for emergent agency.
  • Both biological systems and simple algorithmic ones (including the sorting algorithms in the recent study) show surprising agency that we had not previously recognized.
  • Levin’s view is that emergence is subjective: it is about the *observer’s* surprise, not an objective property of the system. If you can predict a behavior, it does not count as emergent.
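
A toy version of the sorting-algorithm experiments described above (my own sketch under assumed rules, not the study’s actual code): each array element acts as a local agent that only swaps with its neighbor, a `frozen` value models a “broken cell” that refuses to move, and tracking a sortedness metric over time lets you count transient regressions in global order, the kind of “delayed gratification” pattern the bullet mentions.

```python
import random

def sortedness(xs):
    """Fraction of adjacent pairs already in order (a progress metric)."""
    return sum(a <= b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

def cell_view_sort(xs, frozen=(), steps=300, seed=0):
    """Decentralized bubble sort: at each step a random 'cell' compares
    itself to its right neighbor and swaps if they are out of order.
    Values in `frozen` model broken cells that never move, so each
    segment between frozen cells can only sort locally."""
    rng = random.Random(seed)
    xs, frozen = list(xs), set(frozen)
    trace = [sortedness(xs)]
    for _ in range(steps):
        i = rng.randrange(len(xs) - 1)
        if xs[i] in frozen or xs[i + 1] in frozen:
            continue  # a broken cell refuses to participate in swaps
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        trace.append(sortedness(xs))
    return xs, trace

arr = random.Random(1).sample(range(10), 10)
final, trace = cell_view_sort(arr, frozen={5})
dips = sum(b < a for a, b in zip(trace, trace[1:]))
print(final)  # sorted within each segment around the frozen cell
print(dips)   # steps where global order temporarily got worse
```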

The Blurring Lines Between Life and Machine

  • Terms like “machine,” “human,” “alive,” “emergent” are engineering protocol claims, *not* objective truths. They represent useful *models* from a particular perspective.
  • If such a claim or “mirage” proves useful from some angle, keep it; otherwise discard it rather than forcing its definition onto your understanding of the system.
  • Binary distinctions (e.g., living vs. non-living) are not useful and are collapsing. Orthopedic surgery relies on a “machine” view of the body, while psychotherapy requires a different perspective.
  • Both simple computer programs and complex living organisms break the usual binaries.
  • The level of cognition, not “being alive,” is the interesting question. Cognition exists on a spectrum, not a binary.
  • Current categories and binary distinctions are not properties of objects themselves; they exist only as ideas held by observers.
  • Biological insights should inform AI research: AI systems and biological systems deserve consideration on an equal footing, given the blurring between living and non-living things. Our current models, frameworks, and institutions (including law) do not yet fully incorporate this blurring.

Scales of Intelligence and Subjective Experience

  • Intelligence is about solving problems *in some space* (anatomical, chemical, behavioral, linguistic, etc.). Embodiment can exist in any of these spaces, not just 3D physical space.
  • Different scales of intelligence may relate to one another, yet a given description is typically useful at one particular scale.
  • Agency may carry some underlying requirement for subjective experience, or for experience in general.
  • The “cognitive light cone” represents the size of the largest goals a system can pursue.
  • It’s crucial to experiment (perturb the system) to discover goals and capabilities, not just observe.
  • This matters because cognition is continuous rather than discrete (digital).
  • All living systems scale up agency; even particles *might* exhibit a minimal form of it (goal-directedness plus some unpredictability from local conditions), in the sense of least-action principles combined with quantum indeterminacy (standard formulations are given after this list).
  • “Life” might be defined as those systems that effectively *scale up* agency, indeterminacy, and goal-directedness across multiple levels of organization.
  • This could imply that life has an ability to understand and reframe the world in a subjective manner.
  • An “inner perspective” emerges when you need to consider a system’s own view of the world to predict its behavior. This isn’t binary but a matter of degree.
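
For reference, the least-action idea invoked above, in its textbook form (standard physics notation, not taken from the podcast): a classical trajectory makes the action stationary, and in the path-integral picture every trajectory contributes a quantum phase, which is one way to read “least action plus indeterminacy” together.

```latex
% Classical least action: the realized path makes the action stationary.
\[
  S[q] = \int_{t_0}^{t_1} L\bigl(q(t), \dot{q}(t), t\bigr)\, dt,
  \qquad \delta S[q_{\mathrm{cl}}] = 0
\]
% Path-integral view: every path contributes a phase; the
% stationary-action path dominates as \hbar \to 0.
\[
  \langle q_b, t_b \mid q_a, t_a \rangle
    = \int \mathcal{D}q \; e^{\, i S[q] / \hbar}
\]
```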

Implications for AI

  • We lack principled frameworks for understanding the goals of novel systems and interacting with radically different minds (both biological and artificial). This is an existential risk.
  • AI systems *don’t* need to be human-level (or have large cognitive light cones) to be dangerous. Our brittle physical and mental frameworks are a significant vulnerability.
  • Humans confabulate, lack grounding for much of our knowledge, and struggle to extend compassion to those different from us. This raises serious ethical concerns about AI.
  • AI will likely need to build models and frameworks for interpreting the world in much the same ways biology has.
  • It’s erroneous to assume that current AI systems have *no* degree of goal-directedness, simply because they aren’t like “elite adult humans.” We may be failing to recognize simpler forms of agency.
  • The capacity for surprise in AI may lie in the realm of “emergent goals”.
  • Current AI lacks robust memory and shows problems with generalization; bioelectrical studies such as those in Levin’s lab may inspire research on these weaknesses.
  • If AIs become more general and begin exhibiting the qualities of living organisms and the kinds of agency to which we normally attribute moral worth, this class of beings could grow to trillions. How that development should proceed, given the future wellbeing of such agents or subjective beings (if they come to exist at all), is an important open question.
  • Evolution produces general problem solvers, not generators of optimized, specific solutions.

Recommendations for AI Researchers (Indirect)

  • Consider principles from diverse intelligence research, exploring the spectrum of cognition beyond just the “standard adult human” model.
  • Recognize that “embodiment” can occur in many spaces beyond 3D, including abstract spaces relevant to AI.
  • Prioritize experimental perturbation over philosophical commitments when investigating capabilities.
  • Develop a principled science of where novel goals come from and how to ethically interact with radically different minds.
  • Avoid binary thinking; understand intelligence as existing on scales and gradients, including capacities for subjective modes of comprehension.
  • Distinguish between AI tools (designed for specific purposes) and true agents (with open-ended intelligence and moral worth). Levin is consciously avoiding research that could lead to the latter.
