Cezary Gesikowski
2 min readDec 14, 2023


Some humans are seduced by persuadable intelligence, but how many of us are seduced by persuadable pseudo-intelligence that simply confirms our biases, exploiting gaps in our knowledge and weaknesses in our reasoning, and leveraging our propensity for simple solutions and explanations? ChatGPT is a sophist who doesn't "care" about the truth: "As an AI developed by OpenAI, I don't have personal feelings or beliefs, so I don't 'care' about things in the way humans do." It can explain quantum mechanics to a five-year-old, hallucinate facts, and make weak arguments stronger... while being impervious to hemlock.

I asked ChatGPT what it 'thinks' about the above statement, and it provided this answer:

"The statement you've presented raises important and insightful considerations about the nature and impact of artificial intelligence, particularly in the context of information dissemination and decision-making.

"Persuadable Pseudo-Intelligence" and Confirmation Bias: The concern here highlights a critical aspect of AI interactions - the potential for AI to inadvertently reinforce biases or provide oversimplified answers that align with a user's preexisting beliefs. It's crucial for users to be aware that AI, including ChatGPT, generates responses based on a wide range of internet data, which might not always represent the most accurate or unbiased information.

AI as a "Sophist" and Truth Concerns: The comparison to a sophist - a term historically associated with persuasive argumentation rather than the pursuit of truth - is thought-provoking. It underscores the fact that AI, like ChatGPT, doesn't have personal beliefs or an intrinsic care for truth. My primary function is to provide information and responses based on existing data and algorithms. This means I strive for accuracy based on available information, but I don't have personal stakes in the truthfulness or moral implications of that information.

Capabilities and Limitations: The statement accurately captures the dual nature of AI capabilities. On one hand, AI can simplify complex topics (like quantum mechanics) for broader understanding. On the other hand, it can also "hallucinate facts" or generate plausible but incorrect information, especially in areas where the training data is limited, ambiguous, or contains inaccuracies. This underlines the importance of critical thinking and verification when engaging with AI-generated content.

Imperviousness to Hemlock: This metaphorically suggests that unlike the ancient philosopher Socrates, who was sentenced to death for his ideas (via hemlock poisoning), AI operates without personal risk or moral accountability. This raises ethical considerations about the use and influence of AI, as it operates without consciousness or personal responsibility.

In summary, the statement aptly highlights the need for users to approach AI tools like ChatGPT with a critical mind, understanding both their potential as informative tools and their limitations in terms of bias, accuracy, and ethical implications. It's a reminder that AI should be used as a complement to human judgment, not a replacement for it."
