Michael Levin – Why Intelligence Isn’t Limited To Brains. Bioelectricity Podcast Notes

Introduction and Key Concepts

  • Persistence in current form is impossible, both individually and as a species; change and adaptation are inevitable. The key question is what we will be replaced *by*.
  • Levin works in “diverse intelligence,” aiming to understand what it means to be an intelligent, embodied agent beyond human-centric biases.
  • We’re good at recognizing intelligence similar to ours (medium size, speed, 3D space), but there are many other forms of intelligence throughout biology (and beyond).
  • Science fiction often explores possibilities beyond current limitations, offering valuable thought experiments about alternative forms of intelligence.
  • Our current perspective isn’t privileged; what seems like science fiction today might be commonplace tomorrow.

Humanity and Its Future

  • Levin is against “human chauvinism”—the idea that our current form is the only valid one. We shouldn’t fear being supplanted by superior beings.
  • What we value as “humanity” isn’t necessarily tied to Homo sapiens DNA or anatomy, but rather to traits like compassion and shared existential concerns.
  • He argues we desire human companionship on long trips because humans offer comparable (or greater) compassion, and share our concerns and goals on large projects.
  • Beings could exist that are highly intelligent yet lack that compassion and those shared concerns; modern AI is already showing that tendency.
  • Limitations we perceive (e.g., lower back pain, communication bottlenecks) aren’t optimal designs, but products of evolution’s path.
  • Evolution’s “bow tie” architecture (compression and re-expansion, like DNA to organism, or thought to language) isn’t a flaw, but a source of adaptability and creativity. It forces interpretation and problem-solving in novel situations.
  • The fear that care/compassion won't extend far enough may drive resistance to transhumanist ideas.
  • We often project our own limitations, and our fears of intelligences greater than our own, onto other beings.
  • He argues evolution builds systems that treat novel situations as the norm, because every part of biology is unreliable; the DNA-to-organism process (morphogenesis) is one example.

Agency and Intelligence Beyond Biology

  • Levin proposes a continuum of agency, starting with very basic forms (e.g., least action principles in physics). This scales up through biology.
  • Intelligence is the capacity for general problem-solving and effective learning.
  • He reframes phenomena such as insect metamorphosis (caterpillar to butterfly): evolution is a machine geared toward solving novel problems and repurposing existing machinery.
  • He doesn’t believe there’s an objective “view from nowhere” about what has a mind. It’s observer-relative, including the system itself as an observer.
  • Agency is an ability to interact/affect a domain using inputs from that domain.
  • Using an “agential lens” (considering goals, learning, memory) can be useful even for very simple systems (gene regulatory networks, sorting algorithms).
  • Applying behavioral science protocols (like Pavlovian conditioning) to gene networks reveals learning capabilities *without* needing gene therapy (a non-reductionist approach).
  • This holistic perspective doesn’t imply randomness; it’s guided by principles from higher-level systems.
  • Biology leverages a “multi-scale competency architecture” where higher-level goals guide lower-level actions, making control *easier* despite increased complexity. This is “engineering with agential materials.”
  • Biology can be steered holistically, by shaping behavior and hijacking a system's normal interfaces, because it is built from encapsulated, trainable behavioral subsystems.
  • As scientists and philosophers, it is our job to widen our understanding of intelligence and to improve our models (i.e., our metaphors).
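The gene-network conditioning idea above can be sketched purely behaviorally: we only drive inputs and read the output, never rewiring the "genes" (the non-reductionist point in the notes). This is a toy illustration, not Levin's actual networks; the node layout, threshold, and rates are all assumptions.

```python
# Toy sketch: Pavlovian (associative) conditioning in a tiny network,
# treated as a black box -- train by pairing stimuli, then probe.
# Layout and parameters are illustrative assumptions.

def step(state, cs, us, decay=0.95, learn=0.5):
    """One update: a memory node strengthens when the conditioned (CS)
    and unconditioned (US) stimuli co-occur; the response node fires on
    the US, or on the CS alone once memory passes a threshold."""
    memory = state["memory"] * decay + learn * cs * us
    response = 1.0 if (us or (cs and memory > 0.2)) else 0.0
    return {"memory": memory, "response": response}

def respond_to_cs_alone(state):
    """Probe: does the network now fire on the CS by itself?"""
    return step(state, cs=1, us=0)["response"]

net = {"memory": 0.0, "response": 0.0}

before = respond_to_cs_alone(net)   # naive network ignores the CS
for _ in range(10):                 # training: pair CS with US
    net = step(net, cs=1, us=1)
after = respond_to_cs_alone(net)    # conditioned network fires on CS

print(before, after)  # → 0.0 1.0
```

The point of the sketch is the protocol, not the network: learning is demonstrated entirely through the input/output interface, with no "gene therapy" on the internals.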

Implications for Understanding Systems

  • Current AI (LLMs) may *not* be truly agentic, but he emphasizes *humility*: our intuitions are often wrong, even for very simple systems.
  • A “being” with embodiment (the ability to solve problems generally, measure its state against a preference, and act on its environment in a loop) *may* exist in non-biological domains; virtual worlds are not that different from “real” experiences.
  • Intelligence *isn’t* limited to brains or the physical world. Embodiment is about the perception-prediction-action loop, which can occur in various “spaces” (physiological, transcriptional, etc.).
  • Being part of a larger collective intelligence doesn’t guarantee individual well-being (e.g., skin cells sacrificed during rock climbing). The composite system’s goals may differ from its parts’.
  • Cancer can be viewed as cells shrinking their “cognitive light cone” (the scope of their goals) back to a single-cell level. Treatment can focus on re-expanding this connection, not just killing cells.
  • LLMs can model and talk impressively, but that is all they have been observed to do so far; current LLMs lack the machinery necessary for proper agency.
  • There are degrees, or stages, of agency, and complex intelligent systems must pass through them to scale higher.
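The perceive-compare-act loop that defines embodiment here needs no 3D body: an agent can navigate an abstract state space (physiological, transcriptional, etc.) just as well. A minimal sketch, with the setpoint, gain, and drift values chosen arbitrarily for illustration:

```python
# Minimal sketch of embodiment as a perceive-compare-act loop in an
# abstract "physiological space". Setpoint, gain, and drift are
# illustrative assumptions, not values from the talk.

def homeostat(x, setpoint=37.0, gain=0.3, drift=-0.5, steps=50):
    """Perceive the state, measure it against a preference (setpoint),
    and act on the environment in a loop -- the notes' definition of an
    embodied agent, here moving through a temperature-like state space."""
    for _ in range(steps):
        error = setpoint - x     # perception, measured against preference
        action = gain * error    # act to reduce the error
        x = x + action + drift   # environment responds, with a perturbation
    return x

final = homeostat(20.0)
print(round(final, 2))  # → 35.33 (settles near, not at, the setpoint)
```

The agent is "embodied" in the loop's space, not in geometry: the same code would describe a cell regulating a voltage or a gene product as readily as a body regulating temperature.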

Concluding Thoughts

  • The idea of fixed categories is questionable; even humans may be one facet of a larger collective intelligence.
  • Agentic, goal-seeking behaviour exists at many levels: clams, single-celled organisms, highly successful cancers, or even ordinary cancers defecting from a higher-level system.
  • Developing the field of “diverse intelligence” is crucial, as we’re creating large-scale, emergent cognitive systems (social structures, IoT) with unpredictable goals. We must get better at understanding, shaping, and communicating with them.
  • It would be foolish to place bounds on “how much” intelligence is possible, since that cannot be judged from the vantage point of a less capable agent or system.
  • Evolution doesn’t aim at specific solutions; it builds problem-solving systems that remain flexible in new environments.
  • We have to invent terms (i.e., metaphors) as needed to understand and model a phenomenon.
  • Be humble about claims of what a system cannot do, or what it is doing: what it is actually doing may differ from what we perceive (as the evolution/morphogenesis example shows), so the default assumption should lean toward the more capable side.
  • Whether a system counts as agentic depends on your point of view (a concrete example: a physicist may describe the same system purely in terms of physics).
