Michael Levin on Multi-Scale Intelligence and Teleophobia: Bioelectricity Podcast Notes


Introduction and Concepts

  • Levin discusses a framework for understanding intelligence that applies to diverse agents (biological, artificial, exobiological, etc.).
  • Intelligence is competency in navigating various spaces (physical, transcriptional, anatomical, physiological, etc.).
  • Agents possess goals, preferring certain states within a space, and have varying capacities to achieve those goals.
  • William James’ definition of intelligence: Ability to reach the same goal by different means, emphasizing adaptability and resilience.
  • Teleophobia: Resistance to the idea of goals in nature. Levin argues against this, referencing cybernetics, control theory, and computer science as examples of non-magical, goal-directed systems.
  • The “proof is in the pudding”: Focus on empirical results. Levin’s framework leads to experiments others wouldn’t do, like regenerating frog limbs.
  • Levin defends his definition on pragmatic grounds: it is useful, and he welcomes superior frameworks and results. He also references Yakir Aharonov's two-time interpretation of quantum mechanics.
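The cybernetic, non-magical goal-directedness Levin points to can be made concrete with a minimal feedback loop (my own sketch, not from the podcast). It also illustrates William James' definition: the same goal state is reached from different starting points and despite perturbations.

```python
# Minimal cybernetic feedback loop: a goal-directed system with no
# "magic", just error correction. Illustrates James' "same goal by
# different means". All names here are illustrative, not Levin's.

def pursue_goal(state, goal, step=0.5, perturb=None, max_iters=1000):
    """Drive `state` toward `goal` by repeatedly reducing the error."""
    for i in range(max_iters):
        error = goal - state          # "stress": distance from the goal
        if abs(error) < 1e-3:
            return state, i           # goal reached
        state += step * error         # corrective action
        if perturb:
            state += perturb(i)       # external disturbance
    return state, max_iters

# Same goal, different means: different start, plus a disturbance.
final_a, _ = pursue_goal(state=0.0, goal=37.0)
final_b, _ = pursue_goal(state=90.0, goal=37.0,
                         perturb=lambda i: -1.0 if i < 5 else 0.0)
```

Both runs converge on the same goal state, which is the point: goal-directedness here is just an error-reducing loop, the kind of system control theory has formalized for decades.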

Multi-Scale Competency and Scaling

  • Multi-scale competency architecture: Levin proposes a continuum of cognition, scaling from basic physics (least action principles, quantum indeterminacy) to complex organisms.
  • Scaling involves increasing the size of an agent’s “cognitive light cone,” representing the spatial and temporal extent of the goals it can pursue.
  • Emergent minds: Collective intelligences form when individual agents (like cells) connect via gap junctions/electrical synapses, sharing memory traces and becoming functionally entangled.
  • Scaling occurs along three dimensions: the spatial scale of the measurements the system can take, the size of its memory, and the scope of its goals; larger collectives take larger-scale measurements and hold bigger memories and goals.
  • Stress is the difference between the current state and the goal state. The things that stress a system reveal its cognitive sophistication.
  • Cells both compete and cooperate, with both behaviors shaped, at least in part, by the collective's ultimate goal.
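The stress and coupling ideas above can be sketched in a toy model (my construction, not Levin's actual model): "stress" is the gap between current and goal state, and a crude stand-in for gap junctions lets agents share state so the collective relaxes toward the goal together.

```python
# Toy sketch, assuming: "stress" = |goal - state|, and gap-junction-like
# coupling = partial averaging with neighbors on a ring. Parameter names
# and values are illustrative only.

def stress(state, goal):
    """Stress = distance between current state and goal state."""
    return abs(goal - state)

def relax_collective(states, goal, coupling=0.3, gain=0.2, steps=200):
    """Each agent reduces its own stress while partially averaging its
    state with its neighbours (the gap-junction stand-in)."""
    states = list(states)
    n = len(states)
    for _ in range(steps):
        shared = [(1 - coupling) * s
                  + coupling * (states[(i - 1) % n] + states[(i + 1) % n]) / 2
                  for i, s in enumerate(states)]
        states = [s + gain * (goal - s) for s in shared]
    return states

cells = relax_collective([0.0, 5.0, -3.0, 10.0], goal=2.0)
total_stress = sum(stress(s, 2.0) for s in cells)
```

Individually stressed agents end up jointly at the goal state; coupling plus local error reduction is enough for collective goal pursuit in this toy setting.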

Optimality, Goals, and the Future

  • Where do goals come from? Beyond rational design and evolution, novel agents (like chimeras or self-assembling systems) exhibit emergent goals whose origins are not fully understood.
  • We lack a science to predict goals and capacities of emergent minds. This is crucial for understanding and interacting with increasingly complex systems.
  • Levin's framework is partially inspired by, and connects to, least-action principles in physics.
  • Suggests possible connection of “optimality” to Aharonov’s two-time interpretation of quantum mechanics, where the future may have a causal impact on the present.
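For reference, the least-action principle invoked above, in its standard textbook form (a reminder, not from the podcast): classical trajectories are those that make the action stationary.

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
```

The system follows the path that extremizes the action \(S\), which is the physics-level behavior Levin reads as a primitive form of "preferring" certain states.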

Artificial Intelligence

  • Rejects the term “artificial intelligence” as creating a false dichotomy. There isn’t “real” intelligence vs. “artificial” intelligence. Chimera combinations (biological/technological) blur these lines.
  • Argues against the idea that evolutionarily-derived intelligence is inherently superior. Engineers can potentially create intelligences exceeding those found in nature.
  • Again, the proof is in the pudding: focus on empirical results. Levin sees a symmetry between nature and engineering.
  • Current AI (machine learning) is missing key aspects of true cognitive agency, but Levin is optimistic about future progress.

Social and Political Implications

  • Discussions around societal impacts of future technology (AI, genetic engineering, etc.).
  • Levin cautions against over-regulation; he believes restricting technological advances by those capable of making them is a slippery slope.
  • Expresses concern about top-down attempts to enforce uniformity or limit individual expression.
  • Advocates for freedom and allowing individuals to pursue their potential, even if it involves radical technological augmentation.
  • Evolutionary success is not a basis for morality. Levin thinks optimization and guidance are the best practices and can yield improved success and greater freedom, even at the cost of some individual losses.
  • Acknowledges potential downsides (inequality, harmful choices) but favors a libertarian approach, emphasizing freedom of choice and adaptation.
  • There is no guarantee that collective interests always align with individual well-being; he uses rock climbing and its cost to skin cells as an analogy.
  • Suggests studying scaling principles in biology to inform how we design social structures, promoting a balance between individual welfare and collective goals.
  • Highlights a company using bioelectric signals (for limb regeneration and similar work) and bio-synthetic AI to further this research and unlock beneficial applications.
  • For more information, see his website.
