Introduction and Concepts
- Levin discusses a framework for understanding intelligence that applies to diverse agents (biological, artificial, exobiological, etc.).
- Intelligence is competency in navigating various spaces (physical, transcriptional, anatomical, physiological, etc.).
- Agents possess goals, preferring certain states within a space, and have varying capacities to achieve those goals.
- William James’ definition of intelligence: the ability to reach the same goal by different means, emphasizing adaptability and resilience.
- Teleophobia: Resistance to the idea of goals in nature. Levin argues against this, referencing cybernetics, control theory, and computer science as examples of non-magical, goal-directed systems.
- The “proof is in the pudding”: Focus on empirical results. Levin’s framework leads to experiments others wouldn’t do, like regenerating frog limbs.
- Levin defends his framework on pragmatic grounds: it is useful. He welcomes superior frameworks and results, and also references Yakir Aharonov’s two-time interpretation of quantum mechanics.
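James’s operational definition, reaching the same goal by different means, can be illustrated with a toy search agent. This is a hypothetical sketch for illustration, not anything from the source: the agent navigates a small grid “space” and, when the direct route is blocked, still finds the goal via an alternate path.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a grid (0 = open, 1 = blocked).

    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

open_grid = [[0, 0], [0, 0]]
blocked   = [[0, 1], [0, 0]]   # the direct cell is walled off
assert find_path(open_grid, (0, 0), (1, 1)) is not None
assert find_path(blocked, (0, 0), (1, 1)) is not None  # same goal, different route
```

On this toy reading, competency is simply the agent’s capacity to keep reaching the goal state as the space around it changes.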
Multi-Scale Competency and Scaling
- Multi-scale competency architecture: Levin proposes a continuum of cognition, scaling from basic physics (least action principles, quantum indeterminacy) to complex organisms.
- Scaling involves increasing the size of an agent’s “cognitive light cone,” representing the spatial and temporal extent of the goals it can pursue.
- Emergent minds: Collective intelligences form when individual agents (like cells) connect via gap junctions/electrical synapses, sharing memory traces and becoming functionally entangled.
- The system scales along three axes: taking measurements over larger spatial areas, holding larger memories, and pursuing bigger goals.
- Stress is the difference between the current state and the goal state. The things that stress a system reveal its cognitive sophistication.
- Cells both compete and cooperate, with both behaviors dictated, at least partially, by higher-level goals.
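The bullet on stress can be sketched as a minimal cybernetic feedback loop, in the thermostat tradition Levin invokes. The state variable, goal, and gain below are hypothetical toy values, not part of his model:

```python
# Minimal cybernetic homeostat: a goal-directed system with no "magic".
# The setpoint, gain, and state variable are illustrative assumptions.

def stress(state: float, goal: float) -> float:
    """Stress as the distance between current state and goal state."""
    return abs(goal - state)

def homeostat(state: float, goal: float, gain: float = 0.5, steps: int = 20) -> float:
    """Repeatedly act to reduce stress, like a thermostat."""
    for _ in range(steps):
        error = goal - state   # signed deviation from the goal
        state += gain * error  # corrective action proportional to the error
    return state

final = homeostat(state=0.0, goal=10.0)
print(round(final, 3), round(stress(final, 10.0), 3))  # prints: 10.0 0.0
```

Stress here is just the error signal of control theory; nothing mystical is required for the system to be goal-directed.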
Optimality, Goals, and the Future
- Where do goals come from? Beyond rational design and evolution, novel agents (like chimeras or self-assembling systems) exhibit emergent goals whose origins are not fully understood.
- We lack a science to predict goals and capacities of emergent minds. This is crucial for understanding and interacting with increasingly complex systems.
- Levin is partly inspired by, and connects his framework to, least-action principles.
- Suggests possible connection of “optimality” to Aharonov’s two-time interpretation of quantum mechanics, where the future may have a causal impact on the present.
Artificial Intelligence
- Rejects the term “artificial intelligence” as creating a false dichotomy. There isn’t “real” intelligence vs. “artificial” intelligence. Chimera combinations (biological/technological) blur these lines.
- Argues against the idea that evolutionarily derived intelligence is inherently superior. Engineers can potentially create intelligences exceeding those found in nature.
- Again, the “proof is in the pudding”: judged by empirical results, Levin sees a symmetry between nature and engineering.
- Current AI (machine learning) is missing key aspects of true cognitive agency, but Levin is optimistic about future progress.
Social and Political Implications
- Discussions around societal impacts of future technology (AI, genetic engineering, etc.).
- Levin cautions against over-regulation. He believes that restricting those capable of advancing technology is a slippery slope.
- Expresses concern about top-down attempts to enforce uniformity or limit individual expression.
- Advocates for freedom and allowing individuals to pursue their potential, even if it involves radical technological augmentation.
- Evolutionary success is not a basis for morality. Levin thinks deliberate optimization and guidance are better practices and can yield improved outcomes and greater freedom, even at the cost of some individual losses.
- Acknowledges potential downsides (inequality, harmful choices) but favors a libertarian approach, emphasizing freedom of choice and adaptation.
- There is no guarantee that collective interests always align with individual well-being; Levin uses rock climbing, and its cost to skin cells, as an analogy.
- Suggests studying scaling principles in biology to inform how we design social structures, promoting a balance between individual welfare and collective goals.
- Highlights his company, which uses bioelectric signals (for limb regeneration and similar work) and biosynthetic AI to uncover further insights and unlock beneficial applications of this knowledge.
- For more information, see his website.