Can AI Truly Understand Us? (The Challenge of Imitating Human Identity)
Jeremy Horne
Artificial intelligence (AI) creators and developers attempt to simulate human thinking and our environment (e.g., the popular web-based Second Life) and, more controversially, claim to seek to replicate the human brain and what it does (e.g., the Human Brain Project). Yet central questions remain: "Who are we? Are we truly who we believe we are? What are thinking, mentation, and ideas? Why is our world rife with contradiction and conflict? What is reality itself?" Experts such as David Chalmers call consciousness the "hard problem". If we do not fully understand these concepts in humans, how can we possibly recreate them in AI? It is a puzzle much like the challenge of defining and creating life itself. Answers must be founded on what exists, not merely on what we desire. To replicate humans, AI developers must confront the difference between belief and the authentic self, for beliefs can mask the true self. A major flaw of the well-known Turing Test, which assesses whether a machine can imitate human intelligence, is that it cannot verify whether someone's beliefs are reflected in their actions. AI developers must be competent technicians but must also integrate philosophy, thereby addressing the overlapping questions of meaning, ethics, purpose, and ethos. Even AI creators acknowledge that AI could threaten humanity. If future technology integrates self-awareness or subjective experience into advanced computing systems, we will need to revisit the ancient wisdom "Know thyself." Any viable human identity probe (such as Authentic Systems) must be underpinned by philosophy, thereby revealing the extent to which one has internalized belief through action. Once the material and psychological aspects of authentic identity are known, we apply the unity-of-opposites law to establish authentic human identity.
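The structural limitation of the Turing Test noted above can be made concrete with a minimal sketch. The code below is purely illustrative and not from the article: the class and function names (Respondent, run_imitation_game) are assumptions. It shows that the judge's verdict can depend only on the textual transcript, while any internal belief state is hidden by construction, so belief-action alignment is never observable through the test's interface.

```python
import random

# Illustrative sketch (hypothetical names throughout): the imitation game
# exposes only a text channel. Whatever the respondent privately "believes"
# is structurally invisible to the judge.

class Respondent:
    """A respondent whose internal state the judge can never query."""

    def __init__(self, name: str, hidden_belief):
        self.name = name
        self._hidden_belief = hidden_belief  # never reaches the judge

    def respond(self, prompt: str) -> str:
        # The reply need not reflect the hidden belief at all.
        return f"I assure you I am human. (You asked: {prompt!r})"

def run_imitation_game(questions, a: Respondent, b: Respondent):
    """The judge interacts only through respond(); verdicts rest on text alone."""
    transcript = [(q, a.respond(q), b.respond(q)) for q in questions]
    # Stand-in for the judge's decision: it can use only the transcript,
    # which is exactly the point the abstract makes.
    guess = random.choice([a.name, b.name])
    return transcript, guess

if __name__ == "__main__":
    human = Respondent("human", hidden_belief="honesty matters")
    machine = Respondent("machine", hidden_belief=None)
    _, guess = run_imitation_game(["Do you mean what you say?"], human, machine)
    print(f"Judge's guess, based on text alone: {guess}")
```

Nothing in this interface lets the judge compare stated beliefs with actions, which is why the abstract argues an identity probe must go beyond Turing-style output testing.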