Introduction and Key Concepts
- Persistence in current form is impossible, both individually and as a species; change and adaptation are inevitable. The key question is what we will be replaced *by*.
- Levin works in “diverse intelligence,” aiming to understand what it means to be an intelligent, embodied agent beyond human-centric biases.
- We’re good at recognizing intelligence similar to ours (medium size, speed, 3D space), but there are many other forms of intelligence throughout biology (and beyond).
- Science fiction often explores possibilities beyond current limitations, offering valuable thought experiments about alternative forms of intelligence.
- Our current perspective isn’t privileged; what seems like science fiction today might be commonplace tomorrow.
Humanity and Its Future
- Levin is against “human chauvinism”—the idea that our current form is the only valid one. We shouldn’t fear being supplanted by superior beings.
- What we value as “humanity” isn’t necessarily tied to Homo sapiens DNA or anatomy, but rather to traits like compassion and shared existential concerns.
- He argues that what we really want in companions, on a long trip or a large shared project, is a comparable (or greater) level of compassion and similar concerns and goals, not a particular anatomy.
- It is possible for something to be highly intelligent yet lack that compassion and those shared concerns; current AI arguably shows that tendency.
- Limitations we perceive (e.g., lower back pain, communication bottlenecks) aren’t optimal designs, but products of evolution’s path.
- Evolution’s “bow tie” architecture (compression and re-expansion, like DNA to organism, or thought to language) isn’t a flaw, but a source of adaptability and creativity. It forces interpretation and problem-solving in novel situations.
- We may fear that care/compassion will not extend far enough, and that fear can create resistance to transhumanist ideas.
- We often project our own limitations and fears onto beings whose intelligence exceeds ours.
- He argues evolution builds systems that treat novel situations as the norm, because every component of biology is unreliable; the DNA-to-organism mapping (morphogenesis) is one example.
Agency and Intelligence Beyond Biology
- Levin proposes a continuum of agency, starting with very basic forms (e.g., least action principles in physics). This scales up through biology.
- Intelligence is framed as the general ability to be a good problem solver, or to learn well.
- He reframes cases such as insect metamorphosis (caterpillar to butterfly) as evidence that evolution builds machinery geared toward solving novel problems and repurposing existing parts.
- He doesn’t believe there’s an objective “view from nowhere” about what has a mind. It’s observer-relative, including the system itself as an observer.
- Agency is the ability to act on and affect a domain using inputs from that same domain.
- Using an “agential lens” (considering goals, learning, memory) can be useful even for very simple systems (gene regulatory networks, sorting algorithms).
- Applying behavioral science protocols (like Pavlovian conditioning) to gene networks reveals learning capabilities *without* needing gene therapy (a non-reductionist approach); see the sketch after this list.
- This holistic perspective doesn’t imply randomness; it’s guided by principles from higher-level systems.
- Biology leverages a “multi-scale competency architecture” where higher-level goals guide lower-level actions, making control *easier* despite increased complexity. This is “engineering with agential materials.”
- Holistic behavior-shaping works in biology by hijacking the interfaces it already exposes, because biology is built from encapsulated, trainable subsystems.
- As scientists and philosophers, it is our job to widen our understanding of intelligence and to improve our models (i.e., our metaphors).
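A minimal toy sketch of that behaviorist protocol, assuming nothing about Levin's actual models: a hand-wired two-node network (a bistable "memory" node and a response node, with invented weights and a sigmoid update rule) is run through a Pavlovian pairing schedule and then probed with the conditioned stimulus alone. The point is that nothing inside the network gets rewired; only the stimulus history changes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(state, cs, ucs):
    """One update of a toy two-node network: a bistable 'memory' node m and a response node r.
    cs = conditioned stimulus (weak input), ucs = unconditioned stimulus (strong input)."""
    m, r = state
    # m has strong positive self-feedback, so it can latch high; cs nudges it, ucs pushes it hard.
    m_new = sigmoid(8.0 * m - 4.0 + 1.0 * cs + 6.0 * ucs)
    # r fires to the ucs directly, or to m once m has latched high.
    r_new = sigmoid(6.0 * m_new + 6.0 * ucs - 5.0)
    return np.array([m_new, r_new])

def run(state, schedule):
    """Apply a stimulus schedule (a list of (cs, ucs) pairs); return the final state and response trace."""
    responses = []
    for cs, ucs in schedule:
        state = step(state, cs, ucs)
        responses.append(state[1])
    return state, responses

state = np.array([0.0, 0.0])
state, pre = run(state, [(1, 0)] * 5)   # baseline: cs alone produces almost no response
state, _ = run(state, [(1, 1)] * 5)     # training: pair cs with ucs; no parameters are edited
state, post = run(state, [(1, 0)] * 5)  # test: cs alone now evokes a response
print(f"response to cs before pairing: {pre[-1]:.2f}, after pairing: {post[-1]:.2f}")
```

The "memory" here lives entirely in the node's self-sustaining dynamics, which is the kind of capability a behavioral assay can detect without opening the box or editing the network.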
Implications for Understanding Systems
- Current AI (LLMs) may *not* be truly agentic, but he emphasizes *humility*: our intuitions are often wrong, even for very simple systems.
- A “being” with embodiment (the general ability to solve problems, measure against a preference, and act on its environment in a loop) *may* exist in non-biological domains; virtual worlds are not fundamentally different from “real” experience in this respect.
- Intelligence *isn’t* limited to brains or the physical world. Embodiment is about the perception-prediction-action loop, which can occur in various “spaces” (physiological, transcriptional, etc.); a minimal sketch follows this list.
- Being part of a larger collective intelligence doesn’t guarantee individual well-being (e.g., skin cells sacrificed during rock climbing). The composite system’s goals may differ from its parts’.
- Cancer can be viewed as cells shrinking their “cognitive light cone” (the scope of their goals) back to a single-cell level. Treatment can focus on re-expanding this connection, not just killing cells.
- LLMs can model and talk impressively, but that is all they have so far been observed to do; in his view, current LLMs lack the machinery needed for proper agency.
- Agency comes in degrees, or stages, and complex intelligent systems scale up through them.
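As a rough illustration of that perceive-compare-act framing (an assumption-laden sketch, not anything specified in the conversation; the names `homeostatic_loop`, `sense`, and `act` are hypothetical), the loop below regulates an arbitrary state toward a preferred setpoint. The "space" the state lives in could equally be 3D position, a gene-expression level, or a physiological variable.

```python
import random

def homeostatic_loop(state, setpoint, sense, act, steps=50):
    """Generic perceive-compare-act loop; the 'space' the state lives in is arbitrary."""
    for _ in range(steps):
        observed = sense(state)        # perception (possibly noisy/partial)
        error = setpoint - observed    # comparison against a preferred value
        state = act(state, error)      # action that tends to reduce the error
    return state

# One possible embodiment, in a 1-D "physiological" space: regulate a variable toward 37.0.
sense = lambda s: s + random.gauss(0.0, 0.1)     # noisy measurement of the state
act = lambda s, err: s + 0.3 * err               # proportional correction toward the setpoint
print(homeostatic_loop(20.0, 37.0, sense, act))  # ends up near 37.0
```

On this framing, the embodiment is the loop itself, not the substrate it happens to run on.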
Concluding Thoughts
- Fixed categories are questionable; even humans may be aspects of a larger, overall collective intelligence.
- Agentic, goal-seeking behavior exists at many levels: clams, single cells, and cancers that survive very well by defecting from the goals of the higher-level system.
- Developing the field of “diverse intelligence” is crucial, as we’re creating large-scale, emergent cognitive systems (social structures, IoT) with unpredictable goals. We must get better at understanding, shaping, and communicating with them.
- It would be foolish to place bounds on “how much” intelligence is possible, because that cannot be judged from the vantage point of a less capable agent/system.
- Evolution doesn’t aim for specific solutions; it produces problem-solving systems that stay flexible in new environments.
- We make up the terms we need to understand and model a phenomenon (i.e., metaphors) as necessary.
- Be humble about claims of what a system cannot do, or what it is “really” doing, because what we perceive it doing may differ from what is actually going on (as the evolution/morphogenesis example shows); the default assumption should lean toward the more capable side.
- Whether a system counts as agentic or not depends on your point of view; for example, viewed purely from the physics perspective it may not look agentic at all.