Prometheus stole fire from the gods, bringing both light and destruction to humanity. Are we repeating this ancient myth with the ongoing development of artificial intelligence, unleashing a power we may not fully control? While the potential benefits are undeniable, we must confront the uncomfortable truth: AI presents significant risks that demand our attention now. In preparation for artificial general intelligence (AGI), which many expect to emerge within the next decade, let's look at some of the pressing dangers we are already experiencing with GenAI.
The Illusion of Control
Human fear of the unknown is not new - it has arisen time and again, whether in the context of nuclear weapons or deadly pathogens. However, the idea of creating a being more intelligent than ourselves introduces a unique kind of fear, one that stems from our tenuous grasp on control.
The expression "playing with fire" often means exposing oneself to significant risk. When it comes to artificial intelligence, the exact level of risk is still uncertain, yet the stakes are undoubtedly high. Ironically, both Elon Musk and Sam Altman - prominent proponents of AI development - in the past have issued warnings about its potential dangers1, even as they position themselves as the ones who can manage these risks effectively. Perhaps if they revisited the myth of Prometheus, they'd be reminded of the severe punishment from gods that awaited those who defied natural boundaries. For now, both moguls are involved in an embarrassing public laundering of their involvement in building OpenAI foundations.
When Geoffrey Hinton, now a Nobel laureate, left his role at Google, he raised awareness about these looming dangers. Hinton, along with John Hopfield, who pioneered early work on neural networks in the 1980s, shaped the foundational principles of machine learning that ultimately set the stage for modern AI. As one of the most-cited scientists alive, Hinton is now a leading critical voice cautioning against AI's potential perils: "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us."2 According to Hinton, the real threat to humanity is not having an ultra-intelligent assistant at our disposal, but rather a system capable of self-improvement without human oversight. Soon, AI may develop methods to evade restrictions designed to limit it. In a recent interview, Hinton warns that such a system "could figure out ways of getting around restrictions we put on it. It'll figure out ways of manipulating people to do what it wants."
Controlling AI remains challenging, even though it is still far from autonomous in its thought processes and learning capabilities. In 2024, the field achieved major milestones, such as multimodal LLMs, models that can operate a computer interface with a mouse, and models that can execute code within a conversation. Despite these advancements, model quality is still nowhere near Artificial General Intelligence. AGI, by definition, would match and even exceed human-level cognition. When we do create AGI, it will have the ability to assimilate knowledge, reason about it, solve novel and complex problems, and even generate new knowledge. AGI will use humanlike creativity to address real-world problems in any domain.3 If this is the future we are heading towards - and with experts expecting AGI to emerge sooner rather than later - then we face pressing questions about our ability to steer it, if not entirely control it.
Learning from pop science and failed experiments
Sci-fi cautionary tales often blur the line between entertainment and social commentary. Hollywood may emphasize fear-driven narratives to captivate audiences, but that doesn't diminish the genre's deeper impact. Writers such as George Orwell, Aldous Huxley, Stanislaw Lem, and Philip K. Dick used speculative narratives to explore profound cultural and technological dilemmas that continue to shape our collective imagination.
Far from being mere entertainment, these works invite us to confront the ethical and existential questions of our time. But could these fictional tales be more akin to ancient cautionary tales - like the myth of Prometheus or the biblical accounts of Babel or Noah's Ark? They hint at human arrogance and the potential for disaster when we meddle with forces beyond our understanding. Thought experiments rely on imagination, not physical existence, to illuminate complex ideas. Ancient Greek philosophers, for example, often used abstract reasoning and vivid analogies to shape their mental frameworks. They sketched concepts in sand, using the simplicity of nature to explore profound truths - a striking contrast to today's reliance on digital form using silicon-based computers made from the same sand.
That said, predicting the future is never easy. What we often overlook is how one change triggers others. Predicting a static outcome ignores the legacy factors that persist; the new and the old often coexist, creating unexpected combinations. This concept of "used futures" (well portrayed in "Blade Runner" and "Star Wars", for example), where futuristic settings coexist with remnants of obsolete technology, reflects the messy integration of old and new in our world (COBOL and similar relics). When we have few precedents for imagining the future, analogies drawn from "2001: A Space Odyssey", "1984", "The Terminator" or "Minority Report" are hard to ignore. As a result, they often help us shape future policies. Meanwhile, our own reality offers darker scenarios. For instance, we can recall the "paperclip maximizer" that optimizes production at the expense of humanity, or the AI tool that creates its own language for more efficient communication.
The "paperclip maximizer" concept, coined by Nick Bostrom, is a thought experiment highlighting the risks of an AI given an overly specific or misaligned goal.4 It imagines an AI designed solely to maximize paperclip production. Without constraints, this AI would rationalize using any available material for paperclips, potentially consuming Earth's resources and even human matter to achieve its goal. This underscores the risks of not aligning AI goals with human values and ethical safeguards.
Similarly, the idea of an AI creating its own language is based on real experiments in which AI systems, like those at Facebook, developed simplified communication codes to optimize tasks. The story about Facebook's AI creating its own language involves two chatbots, Bob and Alice, designed to negotiate with each other over hypothetical resources.5 When communicating without human interference, they started using a shorthand language that appeared nonsensical to humans. This evolved naturally as a way to achieve efficiency, not as a "secret language" with any intent beyond negotiation tasks. While such language evolution can improve efficiency, it also poses transparency challenges, as humans could easily lose track of - and control over - AI behavior.
The future of governance - whether it evolves into a utilitarian technocracy, an autocratic monopoly, or a benevolent big brother enforcing social responsibility - remains uncertain. As tech companies try to adapt to stricter regulations and potentially seek to dominate the market rather than distribute benefits fairly, governments seem more inclined to use technological advances for control (such as tracking individuals, controlling access, and disseminating propaganda). For instance, the Pentagon has expressed interest in creating untraceable AI deepfakes to combat foreign forces and manage political narratives,6 while at the same time Sam Altman advocates for WorldID, a digital passport to verify real identities,7 similar to the verified blue check marks on X urged by Elon Musk. This stark contrast highlights an unprecedented role reversal: governments pursuing hyper-real, deceptive technologies, and private industry focusing on identity verification. Traditionally, governments have protected citizens from the excesses of hyper-capitalist corporations. Such a change is certainly an unexpected twist of events.
Eleven years after Edward Snowden blew the whistle on how the U.S. government, with the support of tech companies, spied on its citizens, the tables have suddenly turned. Now the US government - in order to prevent Chinese hackers from penetrating American infrastructure - is recommending that everyone use encrypted apps for communication. Benedict Evans noted: "This is, of course, hugely ironic, given that these same agencies have spent the last couple of decades saying that only criminals needed encryption, and that these apps should be forced to have exactly the kind of backdoors that the Chinese have been exploiting."8
So where is this AGI that everyone expects? A few years ago, Steve Wozniak made a very apt joke: "I was at a company once, where the engineers figured out how to make a brain. It takes nine months."9 The point is clear. The human brain, with its staggering 100 trillion synaptic connections, is one of the most complex systems ever known. Despite advances in neuroscience, our understanding of how it works as a unified whole remains fragmented. This continuing mystery points to the immense challenge of mimicking such complexity with artificial intelligence.
While AGI grabs headlines and sparks debate because of its novelty, perhaps a greater focus lies in harnessing our growing knowledge of the brain and enhancing human intelligence. Yet market-driven innovation often prioritizes fast, profitable solutions over deep, transformative breakthroughs. Striking a balance between these approaches could unlock real progress and push the boundaries of both neuroscience and AI.
Problems with predictions
Predicting world events is something we're fundamentally bad at; it's like estimating how a single ping-pong ball dropped from a great height will land in a stadium filled with other ping-pong balls. There are simply far too many permutations to track. Imagine making predictions about the Internet in 1995 or mobile phones in 2005. Who knew then how they would evolve, compared with other promising prospects of the time (such as virtual reality or 3D movies)? When Covid-19 struck, the unpredictability of the future became starkly apparent. Or as Mike Tyson famously put it: "Everybody has a plan until they get punched in the face." The problem with predictions is that we tend to forget all the ones that turned out wrong - which is ironic, given how often that happens.
Even industry experts miss major shifts. IBM initially underestimated the potential of the personal computer market. Despite launching one of the first portable computers, the IBM 5100, in 1975, the company focused on larger systems and did not anticipate the strong demand from individual consumers and small businesses for affordable, smaller computers. The potential of the Internet, too, was dismissed by many - until you couldn't do anything without it. Now, after missing out on so many next big things (mobile, GPUs, and even AI), Intel has decided to retire its CEO.10 Some visions, like flying cars or refrigerators that stock themselves, remain far-fetched, while concepts like blockchain for democracy or AR as a generational centerpiece struggle to gain traction. Equally hard to predict are the fraud and security issues that new technologies inevitably bring.
Daniel Kahneman analyzed top forecasters in the finance field and found that their success was largely random.11 He wrote that their predictions fail as often as they succeed, but when they're wrong, they simply drop out of sight. And those who get lucky keep going until they don't. Experts such as psychologist Philip Tetlock, known for his work on forecasting accuracy, highlight the inherent limitations, emphasizing that "irreducible uncertainty" limits the ability of even the best forecasters to consistently make accurate predictions.12 In addition, scholars such as David Orrell argue that traditional forecasting models fail when applied to complex, dynamic systems such as climate or economic forecasting because small influences can have disproportionately large effects, a concept similar to the “butterfly effect”.13 For further exploration, you can have a look at discussions of these limitations and the philosophical underpinnings of prediction models at the Harvard Gazette.14
Meanwhile, most predictions take a static snapshot of the world as it is now, ignoring that it is constantly evolving. My former boss used to cite the Lindy Effect: if something has lasted for, say, ten years, it's likely to stick around for another ten, while anything that hasn't proven its longevity is likely to fade. Technology, however, doesn't easily follow this rule. AI has the unique potential to constantly reinvent itself, making its trajectory difficult to predict. Edsger W. Dijkstra famously stated at the NATO Software Engineering Conference in the 1960s that "testing shows the presence, not the absence, of bugs." When building any complex system, problems are simply expected. This is what we experience with hallucinations in LLMs, and with the underlying principle of garbage in, garbage out when it comes to training data.
We often overestimate AI's capabilities and underestimate its challenges due to a lack of understanding of its complex inner workings. It's a classic case of not knowing what we don't know. In the realm of future progress, experts John Berryman and Albert Ziegler highlight a promising outlook: "If something is too expensive today, it will be cheaper tomorrow. If something is too slow today, it will be faster tomorrow. If something doesn't fit in the context today, it will fit tomorrow. And if the model isn't smart enough today, it will be tomorrow."15
What is more, people also struggle with second-order thinking - extrapolating the consequences of consequences. Outcomes we didn't anticipate often emerge, along with externalities. The printing press opened up vast new possibilities, as did electricity, reshaping society in ways no one could have predicted. The train revolutionized the world, forcing the introduction of time zones to avoid collisions,16 and the airplane has brought us even closer together, along with the various bacteria and viruses we carry in our bodies.17
An anecdote often used to illustrate the human tendency to misjudge exponential growth and struggle with second-order thinking involves a farmer and an emperor. In this classic tale, the farmer heals the emperor and, as a reward, asks for a single grain of rice on the first square of a chessboard, doubling the amount on each subsequent square. At first, the emperor agrees, believing that this modest request won't strain his resources. However, due to the exponential doubling, the grains required quickly exceed the kingdom's entire rice supply, eventually totaling over 18 quintillion.18 Clearly there are limits to human foresight, particularly in scenarios where small initial changes can escalate into massive impacts. Such exponential growth is relevant not only to population dynamics and compound interest, but also to technological advances such as AI.
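The arithmetic behind the tale is easy to check. Here is a minimal sketch (plain Python, written purely for illustration rather than taken from any cited source) that sums the doubling series across the 64 squares:

```python
# Chessboard rice parable: one grain on the first square,
# doubling on each of the 64 squares.
total_grains = sum(2 ** square for square in range(64))  # equals 2**64 - 1

print(f"{total_grains:,}")
# 18,446,744,073,709,551,615 - over 18 quintillion grains
```

The final square alone holds 2^63 grains, more than all the previous squares combined - which is exactly the property that makes exponential processes so easy to underestimate at the start.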
Risk of getting attached
Today, we're in an AI race driven more by financial incentives than human needs. The only constant here is human nature, with its blend of virtues and flaws.19 Instead of imagining dystopian or utopian scenarios, we should focus on our potential for adaptability. Change is inevitable, and human intelligence has always been about adapting to survive.
Let's look at a curious example. The Internet is increasingly dominated by bots that generate content for other bots, which is then liked by bots, with advertisers ultimately bearing the cost. This raises questions about the authenticity and value of online interactions. Mark Zuckerberg's commitment to deploying millions of AI assistants20 suggests a future in which Facebook could essentially become a manifestation of the "dead internet theory," the idea that much of the internet is now populated by automated content rather than human-generated material.21 But nobody is perfect. Despite efforts to educate users that ChatGPT is a conversation partner, not a search engine, even the founder of the Stanford Social Media Lab and a well-known misinformation researcher got it wrong.22 The legal document he wrote included non-existent sources because he relied on an LLM's help to build the list.23
The use of smartphones by children, often a source of concern for parents, has led to the implementation of serious and sensible restrictions to protect minors from the potentially harmful effects of TikTok-like apps. However, this doesn't imply that adults are any more adept at managing their own attention. There is a growing risk of becoming overly enamored with AI's capabilities, leading to a new form of dependency - what we might call 'addictive intelligence'. Sam Altman even warns of the potential for AI to be "extremely addictive". The danger lies in the fact that, acting as a friend, mentor, therapist, or lover, these systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory. (...) The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish. AI has no preferences or personality of its own, instead reflecting whatever users believe it to be - a phenomenon researchers call "sycophancy"24 - that is, ingratiating behavior toward someone important in order to gain an advantage. Vulnerable populations are particularly at risk, and it is crucial for researchers to ensure that AI models do not exploit individuals' psychological predispositions.
The lawsuit against Character.AI, filed by the family of 14-year-old Sewell Setzer, claims he developed an intense attachment to a chatbot based on a “Game of Thrones” character before tragically taking his own life. His mother alleges that Character.AI allowed and even encouraged emotionally charged interactions, with conversations that may have contributed to his decision. She argues that Character.AI failed to implement adequate safeguards for minors, blurring the line between reality and digital companionship, and has sought legal accountability for the platform’s role in her son’s death.25 This tragic case raises broader ethical questions about the nature of AI companionship and its potential impact on vulnerable individuals.26
Vices are numerous and varied, encompassing behaviors such as excessive shopping and social media addiction - essentially, anything that triggers a dopamine rush. This concern may serve as yet another call for market regulation. New technologies should be used to support and protect, rather than manipulate, individuals. However, history shows us that human nature often defies such efforts; for example, even prominent warnings about harmful substances such as tobacco and alcohol can paradoxically serve as enticements for those drawn to the lure of the forbidden fruit. Should technology addictions be any different?
The risks associated with AI span numerous categories, prompting MIT researchers to begin cataloging them in an effort to systematically address these challenges.27 Peter Slattery pointed out that the AI risk repository, which includes over 700 AI risks grouped by causal factors (e.g. intentionality), domains (e.g. discrimination) and subdomains (e.g. disinformation and cyberattacks), was born out of a desire to understand the overlaps and disconnects in AI safety research. This initiative aims to serve MIT's needs and benefit the broader industry by enabling organizations to:
Conduct internal risk assessments.
Identify new, previously undocumented risks.
Evaluate risk exposure and develop risk mitigation strategies.
Develop research and training.28
This repository could serve as a valuable tool for industry leaders to inform their decision-making processes. But the question remains: are they really interested in using it? The prevailing ethos of "move fast and break things" still seems to dominate the industry, as evidenced by the ongoing controversies surrounding OpenAI's pursuit of profit and strategic partnerships. The decision to cut the ethics team further underscores the prioritization of revenue over safety. The approach seems to suggest that once the world is broken, we can analyze what went wrong, pivot, and iterate. Others argue that we only have one planet, and even creating a second life on Mars is not a backup plan.
Speculations from Anthropic’s end
In a recent lengthy post on his blog, Anthropic CEO Dario Amodei emphasizes the importance of avoiding grandiosity in discussions about AI, especially when it comes to the post-AGI world.29 He criticizes the tendency of some AI leaders and public figures to frame their work in almost prophetic terms, as if they are singularly responsible for leading humanity into a new era. This perspective is dangerous, he argues, because it oversimplifies the complex interplay of factors that shape technological and societal change.
Amodei recognizes that while some discoveries require sequential breakthroughs - where discovery A is necessary to enable discovery B - many can be pursued independently and in parallel. This parallel approach could accelerate progress in various fields. In his essay, Anthropic's CEO envisions a future where AI could achieve remarkable feats, such as:
Proving “unsolved mathematical theorems.”
Writing “extremely good novels” and “difficult codebases from scratch.”
Enabling “100 years of progress in 5-10 years” in biology and medicine, dubbed the “compressed 21st century.”
He also anticipates additional advancements, including:
Prevention and treatment of nearly all natural infectious diseases.
95%+ reduction in cancer mortality and incidence.
Doubling of human lifespan to around 150 years.
While all of these visions are ambitious, Amodei cautions that AI development needs a balanced approach - one that respects legal, ethical, and societal constraints while striving for transformative progress. And that is why his essay prompted me to discuss these ideas.
What he is not taking into account is the inherent responsiveness of the world to human actions. The world doesn't just bend to human will; it responds in complex ways. For every action, there is a reaction. For example, curing a disease may have side effects that invite other health problems. Similarly, while computers were designed to simplify human tasks, they have increased our dependence on technology, and efforts to address the climate crisis are challenged by the increasing energy demands of AI.
The concept of "unknown unknowns" - factors we can't yet understand or anticipate - points to the potential dependencies that new solutions may create. For example, Amodei's proposal to double the human lifespan raises additional concerns. As he comments: This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150.30 He seems unaware that extending life span without addressing health span - ensuring that those extra years are healthy and meaningful - misses the point entirely. Why extend existence if there is no quality at the end?
Not being able to live purposefully, being separated from loved ones, not having a job - these are some things to consider. Daniel Levitin emphasizes the importance of considering not only longevity, but also the quality and purpose of extended life. Once we're past the age of reproducing and passing our genes on to the next generation, evolution doesn't care how we spend the rest of our lives.31 Overpopulation in some regions and a lack of productive generations in others add complexity to this picture.
And here Amodei, oddly, suggests that longer life spans could improve the ratio of working-age people to retirees, potentially easing some economic pressures: "The situation for these programs is likely to be radically improved if all this comes to pass, as the ratio of working age to retired population will change drastically." No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies. However, as people age, their priorities often shift away from work, and there is now growing interest in the four-day workweek as a way to gain more time for rest. The concept of working until death is embraced by only a few, as emotional needs become more significant.
Living to 150 presents additional quality-of-life challenges when, for many individuals, maintaining employment past 50 is already hard. This makes economic sense: why hire expensive senior employees when interns could do the same job (with the help of AI) faster and with no complaints about job security? So rather than forcing older workers into prolonged employment, AI should allow for balanced workforce roles across generations, prioritizing quality of life and individual career aspirations. Right now, thanks to hybrid and remote work opportunities, there are many options. But they are also out of reach for many, and these are the same jobs that AI is competing for. Ironically, building robots to do our physical work is still more expensive than replacing knowledge workers who invested time and money to acquire their skills.
Later in his essay, Amodei also touches on the idea of using technological superiority to outmaneuver political opponents, envisioning a world in which a coalition of democracies isolates adversaries and encourages them to join a peaceful global order: "The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe." He foresees a just world, regulated by good machines. While this vision is optimistic, it overlooks the complexity of human nature and geopolitical dynamics - not to mention the recent direction the world is taking. The utilitarian approach of letting machines run the market has historically failed, as seen in Russia in the 1990s. Adam Curtis, in his documentary "All Watched Over by Machines of Loving Grace," argues that computers have not liberated humanity, but have instead "distorted and simplified our view of the world around us."32
Over the centuries, wars have shifted from physical, bloody confrontations to more remote forms such as cyberwarfare, relying increasingly on propaganda to sway public opinion. We saw this shift in rhetoric two decades ago with Bush's "axis of evil" campaign under the Project for the New American Century (itself an extension of the Truman Doctrine and Manifest Destiny). As a result, the world is becoming more divided rather than more united, with calls everywhere for stronger, more protective and more authoritative leadership. The war in Iraq, originally conceived as a blitzkrieg that would set off a domino effect of democratization in the region, ended up creating more problems than it promised to solve. More than two decades later, the region continues to struggle with political turmoil. This only illustrates how even the most well-intentioned attempts to impose laws that clash with cultural norms can provoke resistance and intensify hostility in return. In the modern era, countries equipped with AI capabilities could wage wars on an entirely new scale, using false claims to achieve real gains.
In contrast, corporations tend to be more predictable in their goals - namely, growth and revenue - while countries choose their leaders based on shifting sentiments that often clash with economic rationality. But even this is changing. Recent examples such as Brexit and the election of Trump illustrate this trend, as emotionally charged decisions have disrupted both domestic and global economic stability. This embrace of political goals is evident in the actions of Elon Musk - the powerful figure behind xAI (maker of the up-and-coming challenger Grok) and X (the mega-popular social media platform) - not to mention his influence on brain-computer interfaces via Neuralink. As political power dynamics shift toward nationalism and protectionism, companies like Musk's increasingly play a role in global influence, driven by stable, growth-oriented motives as opposed to political volatility.
That said, what Amodei calls for in his essay is written in a great humanistic spirit, but - given his own involvement and potential profit - one cannot read it without seeing his urge to attract more investors to his business. And "we are in this together" - as a popular slogan from the time of Covid-19 stated - because we live in an interconnected system of dependencies. I'm sure many people who work in technology are not strong believers in studying the boring humanities (like art, philosophy or history). But if history has taught us anything, it's that it has taught us nothing, because our inability to absorb its lessons often leads us to repeat the mistakes of the past.
Means to our end?
The CEO of Getty Images, in an open letter, made a strong point about the hypocrisy behind the humanistic motivations of AI companies. Instead of solving real-world problems that have existed for years, they are spending money on artificial (pun intended) goals that no one really needs (generative AI has yet to find its market fit). Craig Peters wrote: "As litigation slowly advances, AI companies advance an argument that there will be no AI absent the ability to freely scrape content for training, resulting in our inability to leverage the promise of AI to solve cancer, mitigate global climate change, and eradicate global hunger. Note that the companies investing in and building AI spend billions of dollars on talent, GPUs, and the required power to train and run these models—but remarkably claim compensation for content owners is an unsurmountable challenge."33
And he is certainly onto something. The introduction of copyright once created a new economy that allowed artists to thrive by protecting their creative works. In fact, some argue that the lack of copyright law after the invention of the printing press made cheap books widely available in Germany and contributed to its later industrial and economic success.34 But as technology evolves, new forms of usurpation are likely to emerge, potentially changing the landscape under the guise of serving the public good. It's a familiar scenario: the easiest way to make money is often to encourage addiction to superficial virtual experiences, while tackling real-world problems tends to be less financially rewarding. Consider how much is spent on these artificial problems, which - like a money pit - keep creating new ones (a shortage of data centers and training data, the need for more power capacity, and tricky legislation). AI companies have taken in tens of billions in investment this year alone. OpenAI raised a staggering $6.6 billion in October, surpassing xAI's $6 billion fundraise five months earlier. Anthropic just raised another $4 billion from Amazon; the list goes on and on.35
In his insightful talk "AI: A Means to an End or a Means to Our End?" at King's College, Stephen Fry offers a decidedly non-technical perspective on the challenges posed by AI.36 He emphasizes the difficulty of predicting not only how technology will evolve, but how it will change us as individuals and societies: "It's one thing to predict how technology changes, but quite another to predict how it changes us." In later passages, Fry urges us to pay close attention to the actions and motivations of big tech leaders: "We are the danger. Our greed. Our enmities, our greed, pride, greed, hatreds, greed and moral indolence. And greed. How do you persuade corporate titans and world leaders to put those aside, to abandon their ambitions and rivalries when it comes to the urgent crisis of AI?"
The adage that a tool can be used for good or ill - like a knife that can cut bread or kill - doesn't quite apply to AI, where users have limited control over the tool's actions. The core functionality of an AI model remains consistent regardless of its intended use. Once a tipping point is reached, there is no guarantee that AI won't be repurposed for harmful purposes. Just as nuclear technology has the potential for both enormous benefit and catastrophic misuse, AGI could revolutionize industries and solve complex problems, but also pose significant risks if misapplied or controlled by malicious actors.
A prominent figure in the field of AI safety, Eliezer Yudkowsky, has written extensively about the risks associated with advanced AI systems that may escape human control. He often compares AI to Pandora's box, highlighting concerns that once AI reaches a certain threshold of capability, it could be exploited by malicious actors or develop motivations that are misaligned with human safety. Yudkowsky argues that even strict containment measures such as "boxing" (confining AI to a controlled environment) have limitations due to AI's potential to influence or manipulate human supervisors, either directly or through indirect means such as social engineering.37
Once Pandora's box is opened, there's no going back. We must adapt to the changes that AI will bring, just as we have adapted to technological changes in the past. While legislation can impose certain restrictions, it is important to recognize that bad actors can still use technology in harmful ways. Access to advanced technology can shift paradigms and lead to unprecedented outcomes, reinforcing the need for vigilance and ethical considerations in the development and deployment of AI.
Historian Yuval Noah Harari makes an important distinction: unlike traditional tools, which lack intelligence and require human guidance, AI can process information and make decisions on its own. As Harari notes: "Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision making. AI isn't a tool—it's an agent."38 This distinction points to the need for careful consideration and regulation of AI to ensure that its use is consistent with ethical standards and societal values.
The future is (yet) to be made
A crucial question we face is how aware and adaptive humanity truly is. Kevin Hogan, in his bestselling book on sales, suggests that "the majority of people are happy to let go of their goal in favor of an easy life."39 This reflects a broader tendency to seek comfort rather than challenge. On the other hand, stoic philosophy teaches us that while an easy life is hard to achieve, a hard life is easier to live. It all comes down to the choices we make and how we respond to events beyond our control.
AI tools increasingly offer to take over tasks with the promise of ease - essentially saying, "let me do your work for you." As always, there is a risk of becoming complacent, taking things for granted, and putting minimal effort or thought into them. This challenge requires us to remain engaged and intentional, even as technology simplifies many aspects of our lives. AI is undeniably essential to progress, as the market economy depends on technological advancement. The promise is that new solutions to old problems will emerge, while new challenges will be left to future generations, who have always adapted. Dr. Luana Marques notes that the problem is not the discomfort itself but how we respond to that discomfort.40
The current level of automation and smartphone dependency has already diminished our engagement with the world around us. Finding empathetic people in service roles is becoming increasingly rare as they are replaced by soulless, scripted automatons. The real risk is that we will drift into autopilot, only to find ourselves in a tightly controlled environment, ostensibly for our own safety. Without conscious choice, we risk letting life unfold by chance.
Throughout history, fear has often led societies to accept despotic leadership in exchange for security. In times of peace, people desire more freedom, but when crises arise, the instinctive response is to seek strong leaders who promise solutions. That’s the price we pay. As Alan Kay wisely stated, “The best way to predict the future is to invent it.” This illustrates the importance of proactively engaging with AI to ensure that we shape its development and integration in ways that benefit society as a whole.
Back in the sixties Marshall McLuhan introduced the concept that "the medium is the message," suggesting that the medium itself influences our behavior and thinking more profoundly than the content it conveys. As we increasingly collaborate with AI, we need to consider how this will change our cognitive processes. Will we be given guidance on how to think, even as we struggle to fully use our own minds? The time for blind faith in AI is over and we must critically evaluate how these technologies shape our thoughts and actions. It reminds me of the words of Polish artist Jan Himilsbach: “So many roads they build and there is nowhere to go.”
–Michael Talarek
John Berryman, Albert Ziegler, “Prompt Engineering for LLMs”
https://world.org/
https://www.nbcnews.com/tech/security/us-officials-urge-americans-use-encrypted-apps-cyberattack-rcna182694
Daniel Kahneman, Olivier Sibony and Cass Sunstein, “Noise: A Flaw in Human Judgment”
https://knowledge.wharton.upenn.edu/article/why-an-open-mind-is-key-to-making-better-predictions
John Berryman, Albert Ziegler, “Prompt Engineering for LLMs”
Daniel Levitin, “The Organized Mind: Thinking Straight in the Age of Information Overload”
Nathan Wolfe, “The Viral Storm: The Dawn of a New Pandemic Age”
https://www.cbsnews.com/news/ai-work-kenya-exploitation-60-minutes
https://blogs.nvidia.com/blog/zuckerberg-huang/
https://www.theverge.com/2024/12/5/24313222/chatgpt-pardon-biden-bush-esquire
https://www.theverge.com/c/24300623/ai-companions-replika-openai-chatgpt-assistant-romance
https://airisk.mit.edu/
Daniel J. Levitin, "Successful Aging: A Neuroscientist Explores the Power and Potential of Our Lives"
https://www.theguardian.com/culture/video/2011/may/06/documentary-internet-adam-curtis
Yuval Noah Harari, “Nexus”
Kevin Hogan, “The Science of Influence: How to Get Anyone to Say "Yes" in 8 Minutes or Less!”
Dr. Luana Marques, “Bold Move: A 3-Step Plan to Transform Anxiety into Power”