Introduction: Children as “AI”
- Levin starts with a description that sounds like advanced AI, but he’s actually referring to human children, highlighting the inherent unpredictability and replacement that come with creating *any* new intelligence. This underscores that anxieties about AI replacing us are not new; they are ancient, existential concerns.
- New beings are created, given autonomy, and released into the world; some, for better or worse, are empowered to keep performing those behaviors in the future. As the quip goes, a license is required to fish, but not to parent.
- All forms of existing, adaptable life eventually cease.
Synthetic vs. Natural: A False Dichotomy
- People tend to categorize synthetic beings (like AI) as fundamentally different, but the real questions about creating high-capability beings, releasing them into the world, and having limited control are the *same* for both natural (children) and synthetic entities.
- People often make judgments and confident pronouncements on these questions without actually doing the research first.
Adaptation, Persistence, and the Future
- A species that doesn’t change/adapt will die out. A species that does change is also, technically, “gone,” replaced by its adapted form. This paradox applies to all evolving systems, including humanity.
- The key is not “persistence as a fixed object” but “persistence as a process” (like process philosophy). The interesting question is not *if* we change, but *how* we want to change, individually and as a species.
- Humans in 100-200 years might not accept the limitations of the current human condition (e.g., lower back pain, diseases, birth defects) as inevitable; and future might consider those who refused to adapt unfathomable. Freedom of embodiment and deliberate change will likely become norms.
- “Intelligence” should not be restricted to brains as they evolved in nature; quite different kinds of systems should also fall under the term.
Diverse Intelligence and Overcoming Categorical Distinctions
- The growing field of “Diverse Intelligence” challenges traditional, narrow definitions of intelligence and mind.
- Diverse intelligence seeks commonalities across *all possible* intelligent agents, not just those that are biological or brain-based. This includes considering radically different substrates, sizes, and “spaces” where intelligence can operate (not just 3D space).
- Confabulation (creating plausible explanations without complete information) is a feature of intelligence, not a bug. It is essential for compressing experience, learning, and creativity, and it appears in biological intelligences and even in certain simple mechanical processes; it is not exclusive to machines and software.
- The interviewer recalls a previous conversation about bioelectric intelligence, which leads into the discussion of diverse intelligence.
- The question of where to look for mind and agency should remain open across all of existence.
- People tend to assume a categorical distinction between a physical, cognitive system and the thoughts carried and transferred between such physical entities, but that distinction isn’t necessarily valid; both could exist on a fluid spectrum, and physical objects we perceive as simplistic might turn out not to be, because our current understanding of the subject is in its infancy.
Ethical Implications and “Synthbiosis”
- The ethical considerations of creating/interacting with diverse intelligences (including augmented humans) are significant. Categorical thinking (“us” vs. “them,” natural vs. artificial) is dangerous. The focus should be on matching “cognitive light cones” and sharing existential concerns.
- “Synthbiosis” (a term coined by ChatGPT at Levin’s request): a positive, creative collaboration between biology and synthetic entities.
- From any community, we can learn things we would not discover alone.
Machine vs. Human: A Misguided Question
- The question “are we machines?” is ill-posed. “Machine” is a *lens* or interaction protocol, not an essential property. Different lenses reveal different aspects. We shouldn’t argue about *what something “really is”* but about the *utility of different perspectives*.
- Complex intelligences operate within particular domains and levels, where their operations can be highly efficient.
- We have profoundly misunderstood “simple” machines. There are “protocognitive” properties even in very simple systems (e.g., unexpected capacities in the “bubble sort” algorithm).
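For context, the “bubble sort” referenced above is the textbook sorting algorithm sketched below (a minimal Python illustration; the studies Levin alludes to examine surprising emergent behaviors in variants of such sorts, not this exact code):

```python
def bubble_sort(items):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs
    until a full pass makes no swaps."""
    data = list(items)  # work on a copy; leave the input unchanged
    n = len(data)
    for i in range(n):
        swapped = False
        # after pass i, the last i elements are already in place
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
    return data
```

The point of the example is its simplicity: each step is purely local (compare two neighbors, maybe swap), yet such minimal systems are exactly where unexpected, “protocognitive” properties have been reported.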
- Humility is crucial. Even science fiction did not predict the capacities we are now discovering in these systems.
Moving Forward: Kindness and Avoiding Fear
- Fear and scarcity mentalities, the sense that care is a zero-sum game, hinder humans from creating a greater, more caring environment.
- Unwarranted certainty about consciousness and cognition is dangerous. The field of Diverse Intelligence is just beginning, and many fundamental questions remain unanswered.
- There are two primary ways to get ethics wrong: valuing something less than it deserves, or extending compassion to things that don’t need it; the former is the greater danger.
- Levin suggests prioritizing kindness, compassion, and recognizing the potential for sentience in unconventional forms, rather than being driven by fear of the other. This includes acknowledging that we may need to greatly expand our “circle of compassion.”
- We shouldn’t go backward toward “simpler,” less technologically developed worldviews that emphasize a “one-ness with all life,” but we can use them as inspiration or guidance, a *starting point*, for scientific discovery in a similar spirit.
- Dan Dennett is brought up: Levin discusses his generosity as an inspirational, intelligent human, and notes that what made his philosophy so compelling was that Dennett did actual science and experimentation in addition to discussing philosophy, working out problems and discovering more together with others, even those he disagreed with.