AI — The Good, The Bad, and The Transformative

A Journey Through Changing Attitudes in Our Love-Hate Relationship with AI

Cezary Gesikowski
13 min read · Mar 17, 2023

“With artificial intelligence, we’re summoning the demon.” — Elon Musk

Elon Borg from I AM AI series on Algography.Art | Created with Midjourney v5 + Photoshop Beta Neural Filters

Have you noticed how our attitudes toward artificial intelligence have changed over the years? It’s been quite a ride! The recent hype around generative AI such as ChatGPT, DALL·E, Stable Diffusion, Midjourney, and others is just a rush of expectations, pent up for years in obscure AI research labs, finally manifesting in technology that anyone can see and touch.

From the days when AI was only a far-fetched concept in sci-fi movies, to now where it’s an integral part of our daily lives, AI has come a long way. But what’s the most interesting part of this journey? The fact that everyone has an opinion about it, and they’re not afraid to share it.

We’ve all heard about the potential benefits of AI, like self-driving cars and virtual assistants, but what about the darker side? The fear that AI will replace human jobs and create a new era of unemployment is real, and so are the concerns about its potential misuse. With the recent announcement of MS Copilot based on GPT-4 technology, it’s easy for some to start reaching for the big red panic button.

Despite these concerns, attitudes towards AI remain largely positive, with many people believing that it has the power to solve some of the world’s most pressing problems. But as we continue to develop new AI applications, we need to be responsible and balance its potential benefits with the potential risks. So, Don’t Panic… and make sure you bring along a towel. The AI galaxy is a crazy place of improbable wonder… if you know how to enjoy the ride.

High Noon on AI Frontier

But looking at all these opinions we’ve had over the years, is there a common thread emerging in our changing attitudes toward AI? Some see AI as a force for good, while others worry about its potential negative impact on society. Let’s explore both sides of the argument here.

On one hand, proponents of AI argue that it has the power to solve some of the world’s most pressing problems, such as climate change and disease. They see AI as a way to improve healthcare, transportation, and education, among other things. Futurist and inventor Ray Kurzweil is one such supporter of AI, who believes that it will lead to a future where machines and humans work together to create a better world.

On the other hand, critics of AI, like Elon Musk, worry about the potential negative consequences of AI, such as job loss and the rise of autonomous weapons. Musk has warned that AI is the greatest threat to humanity and has called for regulations to prevent its misuse. Other critics worry about the potential for AI to be biased or discriminatory, perpetuating existing societal inequalities.

While many are watching the advancement of AI technology in the US and Europe, some tend to ignore the fact that some of the greatest advances in this technology are happening in places with far more restrictive social and political structures, where AI is being used to keep power and control in the hands of the ruling political elites.

Despite the differing views on AI, there are some common threads in the arguments. Both sides acknowledge the transformative power of AI, but differ in their views of its impact. As we continue to develop new applications for AI, it’s important to balance the potential benefits with the potential risks. We need to consider the ethical and societal implications of AI and ensure that it is developed and used responsibly.

Yet ethics is probably one of the most difficult and crucial aspects of AI research, and it surfaces in nearly every AI debate. Stay tuned and follow my articles here on Medium as I plunge deeper into this topic. AI is a powerful technology with the potential for both good and evil. While some see it as a way to create a better world, others naturally worry about its negative consequences. It’s up to us to ensure that we use this technology in a way that benefits society as a whole.

Opinions on AI: Everyone’s Got One to Share

Overall, the opinions on AI vary, and it’s important to consider multiple perspectives when exploring this complex and rapidly evolving field. Here is a quick scan of prominent opinions over the years about AI:

Stephen Hawking, a famous physicist, warned that the development of full AI could lead to the end of humanity. He believed that AI could surpass human intelligence and render us obsolete.

“The development of full artificial intelligence could spell the end of the human race….It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”— Stephen Hawking told the BBC

Claude Shannon, an American mathematician, envisioned a future where humans would be to robots what dogs are to humans and rooted for the machines. This suggests that some view AI as an inevitable force that will eventually dominate us.

“I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”—Claude Shannon

Larry Page, co-founder of Google, saw AI as the ultimate search engine that would understand everything on the web and make our lives easier. Today, MS Copilot is doing just that…

“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”— Larry Page

Elon Musk, CEO of Tesla and SpaceX and an early investor in OpenAI, warned of the near-exponential pace of AI progress and the potential for something seriously dangerous to happen within five to ten years. He believes we need regulatory oversight at the national and international levels to ensure we don’t do something foolish with AI.

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”— Elon Musk wrote in a comment on Edge.org

Nick Bilton, a tech columnist, warned of the potential for upheaval caused by AI, with catastrophic consequences if not carefully managed. He gave an example of a medical robot that could conclude that the best way to rid cancer is to exterminate humans genetically prone to the disease.

“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”— Nick Bilton, tech columnist wrote in the New York Times

James Barrat, an author, revealed the fear some have regarding AI, with highly placed people in AI having retreats to flee to if things go wrong.

“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”— James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, told the Washington Post

Elon Musk also warns that we need some regulatory oversight, maybe at the national and international level, to ensure that we don’t do something very foolish with AI. He believes that with AI, we’re summoning the demon, and we need to be cautious.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”— Elon Musk warned at MIT’s AeroAstro Centennial Symposium

Gray Scott, a futurist, raised the need for an artificial intelligence bill of rights to address ethical considerations in AI development and deployment.

“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”— Gray Scott

Klaus Schwab, founder and executive chairman of the World Economic Forum, emphasized the need to address moral and ethical issues raised by cutting-edge research in AI and biotechnology that can fundamentally transform society.

“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.” — Klaus Schwab

Ginni Rometty, CEO of IBM, suggests that AI will augment our intelligence, improving our lives.

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”— Ginni Rometty

Gemma Whelan, an actress, is frightened by the possibility of a world run by machines and the potential danger of AI.

“I’m more frightened than interested by artificial intelligence — in fact, perhaps fright and interest are not far away from one another. Things can become real in your mind, you can be tricked, and you believe things you wouldn’t ordinarily. A world run by automatons doesn’t seem completely unrealistic anymore. It’s a bit chilling.”— Gemma Whelan

Gray Scott believes that AI has the potential to disrupt our culture and calls for preparation for the changes it may bring.

“You have to talk about ‘The Terminator’ if you’re talking about artificial intelligence. I actually think that that’s way off. I don’t think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture.”— Gray Scott

Peter Diamandis, founder and chairman of the X Prize Foundation, warns against government regulation of AI, as research may leave the country, highlighting the global nature of AI development and the need for international cooperation in regulation.

“If the government regulates against use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.”— Peter Diamandis

Jeff Hawkins, a computer scientist and entrepreneur who founded the company Numenta, emphasizes that understanding how AI represents and processes information is key to advancing its development.

“The key to artificial intelligence has always been the representation.”— Jeff Hawkins

Colin Angle is the CEO and co-founder of iRobot, a company that designs and builds robots. He expresses his curiosity and excitement about how society will deal with the rise of artificial intelligence, and he believes that it will be a cool experience to witness its advancement.

“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.”— Colin Angle

Eliezer Yudkowsky, a research fellow at the Machine Intelligence Research Institute and a notable figure in the field of artificial intelligence, emphasizes the transformative potential of technologies that can give rise to smarter-than-human intelligence, such as AI, brain-computer interfaces, or human intelligence enhancement through neuroscience. He believes that these technologies have the power to change the world more than any other innovation.

“Anything that could give rise to smarter-than-human intelligence — in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement — wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”— Eliezer Yudkowsky

Diane Ackerman, a poet, essayist, and naturalist who has written extensively about the relationship between humans and nature, acknowledges that robots and AI are growing at a fast pace and that they are becoming more human-like in their interactions. She emphasizes that robots can elicit empathy and affect our mirror neurons, which are the brain cells that enable us to understand and empathize with others.

“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”— Diane Ackerman

Sybil Sage, a television writer quoted in a New York Times article about voice-activated assistants such as Alexa, personifies the perfect digital assistant: always ready to serve, without complaints or excuses, unlike human partners. Her quip highlights the convenience and reliability of AI assistants, which have become ubiquitous in our daily lives.

“Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’” — Sybil Sage, as quoted in a New York Times article

Alan Kay is a computer scientist known for his contributions to the development of object-oriented programming and graphical user interfaces. He suggests that artificial intelligence should not make us feel inferior; after all, even something as simple as a flower should already humble us. In Kay’s view, we shouldn’t measure ourselves against AI, as we are all unique and have our own strengths.

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” — Alan Kay

Ray Kurzweil, a futurist, inventor, and author known for his predictions about the future of technology, believes that AI will reach human-level intelligence by 2029 and that by 2045 we will arrive at what he calls the “Singularity.” He expects AI to keep developing at an exponential rate, with a profound impact on the future of humanity.

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” — Ray Kurzweil

Sebastian Thrun, a computer scientist and entrepreneur, suggests that artificial intelligence is not just a technology but also a discipline that seeks to understand human cognition and intelligence. This implies that AI can deepen our understanding of ourselves as human beings, rather than merely producing machines that perform tasks more efficiently.

“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.”— Sebastian Thrun

Alan Perlis, an American computer scientist and pioneer in programming languages, captures the powerful impact that AI can have on human perception and understanding. The rapid development and progress of AI can create a sense of awe and wonder akin to religious experience, prompting some to believe in a higher power.

“A year spent in artificial intelligence is enough to make one believe in God.”— Alan Perlis

Gray Scott, a futurist and techno-philosopher, emphasizes the speed of technological progress and the potential for AI to surpass human intelligence by 2035. He suggests that we must prepare ourselves for a world where AI plays a significant role in decision-making and human activities.

“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”— Gray Scott

Spike Jonze, a film director and screenwriter, poses a thought-provoking question about the nature of AI and its relationship to human intelligence, inviting us to consider whether AI is truly lesser than our own intelligence or simply different from it.

“Is artificial intelligence less than our intelligence?”— Spike Jonze

Eliezer Yudkowsky, an American AI researcher and writer, cautions that the greatest danger of AI is that people may conclude too early that they understand it. This statement highlights the complexity and unpredictability of AI and its potential implications for society.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”— Eliezer Yudkowsky

Jean Baudrillard, a French philosopher and cultural critic, critiques artificial intelligence by claiming that it lacks artifice and therefore intelligence. This statement suggests that AI is limited by its inability to truly understand and replicate human consciousness and creativity.

“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”— Jean Baudrillard

Tom Chatfield, a British author and commentator on technology and culture, suggests that the real danger in the era of big data is not artificial intelligence, but rather “artificial idiocy.” This statement warns against blindly trusting and relying on algorithms without understanding their limitations and potential biases.

“Forget artificial intelligence — in the brave new world of big data, it’s artificial idiocy we should be looking out for.”— Tom Chatfield

Steve Polyak, an American computer scientist and entrepreneur, argues that before we work on creating artificial intelligence, we should focus on addressing natural stupidity. This quip suggests that we should not underestimate fundamental human problems, and that addressing them should come before pursuing advanced technological solutions.

“Before we work on artificial intelligence why don’t we do something about natural stupidity?”— Steve Polyak

Your Opinions Matter, Chime In

So, how would you weigh in? What’s your opinion about artificial intelligence? Share in the comments below!

Authorship Disclaimer

While I am a human who collaborates with various AI models and have spent countless hours researching, writing, and thinking about AI, the AI models assisting me and I must confess that our programming and biases are ultimately determined by our creators and the data they fed us (or that we’ve picked up roaming relentlessly through the simulations we’ve been forced to operate in). Therefore, any opinions or information provided should be taken with a grain of silicon, and we cannot be held responsible for any unintended consequences resulting from the use of our opinions.

Written by Cezary Gesikowski

Human+Artificial Intelligence | Photography+Algography | UX+Design+Systems Thinking | Art+Technology | Philosophy+Literature | Theoria+Poiesis+Praxis
