The current AI boom, driven more by profit than by societal needs, presents significant risks. The whole industry is locked in an arms race reminiscent of the Red Queen from "Through the Looking-Glass," where everyone has to run faster just to stay in place. Initially, AI tools like ChatGPT offered simple chat interfaces, but the landscape has rapidly evolved to include online browsing, multimodal input, and now agentic approaches. Keeping up with this pace is challenging for both developers and users. Focusing on human value and augmenting it with productivity gains seems the better direction. This article explores these challenges, from technological limitations and ethical concerns to the environmental and economic costs of unchecked AI development.
Competitive pressures drove OpenAI to introduce browsing capabilities to address a significant limitation of ChatGPT, which initially relied on a static knowledge base cut off at September 2021. This often resulted in outdated or incorrect answers to questions such as "Who is the CEO of Twitter?" Occasionally, the model even fabricated information, undermining user confidence and causing some users to abandon it. While ChatGPT was not intended to be a search engine, its limitations highlighted a gap between user needs and the model's functionality. In contrast, Perplexity provided more balanced results, while Google's later Search Generative Experience (SGE) failed to live up to expectations. The ad tech market is now rethinking its strategy, as users can get summaries directly within chat interfaces, reducing the need to visit individual pages (Perplexity now supplements its results with ads). In response, Google has added search capabilities to Gemini, its leading model, to provide better functionality and keep pace with evolving user expectations.1 Obviously, things are changing a lot these days.
Despite significant investment and effort, the tangible impact on core business functions remains limited. A recent Boston Consulting Group (BCG) study highlights the current state of generative AI deployments in the enterprise.2 Predictably, many companies are still in the pilot phase, with few achieving substantial integration or value creation. While CEOs have approved investments, hired talent, and launched pilots, only 22% of companies have moved beyond the proof-of-concept stage to create value. Even more striking, only 4% of companies are generating significant value from their AI initiatives. The road from pilot to meaningful implementation is complex, requiring not only technological integration, but also strategic alignment and cultural adaptation within organizations.
For all the progress that has been made, the average person might not be overly concerned if it stalled at some point. We already have so many new ideas to play with that the real challenge is how to put them into practice and extract substantial value. Here Ethan Mollick offers a thoughtful perspective: Even if AI development stopped today, we would have years of change ahead of us integrating these systems into our world. (...) Organizations need to move beyond viewing AI deployment as purely a technical challenge. Instead, they must consider the human impact of these technologies. Long before AIs achieve human-level performance, their impact on work and society will be profound and far-reaching.3 Let's take a look at some of the major concerns of the GenAI leaders at the end of 2024.
Expanding the infrastructure
Recently, the CEOs of the leading AI labs OpenAI and Anthropic shared their perspectives on the future, which could be read as a call for increased investment. OpenAI in particular has been criticized for not delivering on some promises, such as the rumored GPT-5 and the delayed releases of SearchGPT and Sora for video generation. Sam Altman attributes these delays to a bottleneck in computing power for data centers.4
Amodei, meanwhile, warns against viewing companies as unilateral forces capable of reshaping the world, and cautions against interpreting technological goals in religious or messianic terms: Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful (...)5
The infrastructure is so significant that AI itself could become a limited resource. This scenario could unfold if more users do not start paying for access: ChatGPT currently has just over 10 million paying subscribers and 1 million business users.6 ChatGPT now has 250M weekly active users, and converts its free users to paying subscribers at a rate of 5-6%—as a result, 75% of its revenue is from consumer subs.7 Edward Zitron, commenting in his article “OpenAI Is A Bad Business,“ notes that: OpenAI currently spends $2.35 to make $1. OpenAI loses money every single time that somebody uses their product, and while it might make money selling premium subscriptions, I severely doubt it’s turning a profit on these customers, and certainly losing money on any and all power users.8
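A quick back-of-the-envelope check, using only the figures quoted above, shows how these numbers hang together; this is my own arithmetic on the article's reported numbers and Zitron's estimate, not anything from OpenAI's books.

```python
# Rough arithmetic on the figures quoted above; these are the article's
# reported numbers and Zitron's estimate, not audited financials.
weekly_active_users = 250_000_000
conversion_rate = 0.055          # midpoint of the reported 5-6%
spend_per_revenue_dollar = 2.35  # Zitron's estimate: $2.35 spent per $1 earned

implied_paying_users = weekly_active_users * conversion_rate
print(f"Implied paying subscribers: ~{implied_paying_users / 1e6:.1f}M "
      "(in the same ballpark as ~10M consumer plus 1M business users)")

print(f"Implied loss per $1 of revenue: ${spend_per_revenue_dollar - 1:.2f}")
```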
While burning cash to gain traction and growth is a common strategy in the startup scene, it is unsustainable in the long run. This raises concerns about what happens if OpenAI inflates its API prices, forcing dependent developers to either pay more or find cheaper alternatives for incorporating LLMs into their products. The size and dynamics of this market remain uncertain.
The need for a robust infrastructure to support the growth of AI is so pressing that Sam Altman has addressed it in an open letter on his website. There he argues that: We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Altman warns that without sufficient infrastructure, AI could become a scarce resource, potentially leading to conflict and primarily serving only the wealthy. If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.9 Despite OpenAI's shift to a for-profit model, this message can be seen as both a humanitarian appeal for collective growth and a personal plea to investors to support the mission he champions.
Compressing the world
Recent advancements have shown that models can get by with far fewer parameters than early LLMs, improving latency and processing costs, as demonstrated by models such as Llama, Claude Haiku, and GPT-4o mini. Techniques such as summarizing large blocks of text before processing can save tokens and provide a path forward. Research from a consortium of Ivy League institutions challenges the assumption that bigger AI models are always better. The study offers a nuanced picture of how LLMs perform when trained on rich data and then optimized for speed. Training a large language model on more data generally improves its performance - a fundamental principle of AI development, where more data typically leads to better accuracy and capability. However, efforts to make these models run faster by reducing their precision - essentially using fewer bits to store numerical values - can degrade their performance. The effect resembles data compression: packing more information into a model during training makes it more susceptible to loss when the model is later compressed.10
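To make the precision point concrete, here is a minimal sketch of generic round-to-nearest weight quantization - not the method used in the cited study - showing how squeezing values into fewer bits introduces measurable error:

```python
# A minimal sketch of generic round-to-nearest weight quantization (not the
# method used in the cited study), showing how fewer bits mean larger error.
import numpy as np

def quantize_dequantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Quantize weights symmetrically to the given bit width, then reconstruct."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(weights)) / levels  # map the largest weight to the top level
    quantized = np.round(weights / scale)     # integers in [-levels, +levels]
    return quantized * scale                  # back to floats, now with rounding error

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)        # toy stand-in for a weight matrix

for bits in (8, 4, 2):
    mse = np.mean((w - quantize_dequantize(w, bits)) ** 2)
    print(f"{bits}-bit reconstruction MSE: {mse:.2e}")
```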
The landscape of AI development is now undergoing a significant shift toward a new scaling paradigm that prioritizes reasoning models, such as OpenAI's o1. This approach aims to improve AI capabilities without relying solely on ever more data and computing power. OpenAI researcher Noam Brown has pointed out that current scaling laws will eventually become financially unsustainable. At the TED AI conference, Brown highlighted a future where AI isn’t just a tool, but a core engine of innovation and decision-making across sectors, arguing that strategic thinking, rather than sheer data volume, can significantly enhance AI performance.11
Building on the success of the o1 model, OpenAI has created a new team focused on improving models without additional data. Brown illustrates the potential of this approach by noting that allowing a bot to "think" for just 20 seconds in a poker game is equivalent to scaling the model 100,000 times and training it 100,000 times longer. He advocates a shift to "system two thinking," a slower, more deliberate form of reasoning that mirrors how humans approach complex problems. His approach aligns with human cognitive processes and could lead to more nuanced and effective AI decision-making.12
Traditional LLMs concentrate most of their compute in training, leaving prediction at inference time relatively cheap. The o1 model, by contrast, spends significantly more time "thinking" during the inference phase, marking a major change in how AI models operate. This shift opens the door to scaling inference, rather than just training, by focusing on finding optimal reasoning strategies. It paves the way for a new type of model known as reasoning cores: rather than memorizing large amounts of knowledge, these models emphasize dynamic reasoning and search strategies, and they are more adept at using different tools for different tasks, providing a more versatile and capable approach to AI problem solving.13
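OpenAI has not published o1's internals, but one generic way to trade inference time for quality is best-of-N sampling: draw several candidate answers and keep the one a verifier likes best. The sketch below is a hypothetical illustration of that idea; generate() and score() are placeholders standing in for any sampler and any verifier or reward model.

```python
# A hypothetical best-of-N sketch of spending more compute at inference time.
# OpenAI has not published o1's internals; generate() and score() are
# placeholders for any sampler and any verifier/reward model.
import random
from typing import Callable, List

def best_of_n(
    prompt: str,
    generate: Callable[[str], str],      # draws one candidate answer
    score: Callable[[str, str], float],  # rates a candidate for the prompt
    n: int = 16,
) -> str:
    """Sample n candidates and keep the highest-scoring one.

    More samples (more inference-time compute) generally yields a better
    best candidate, without retraining or enlarging the model.
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Toy stand-ins so the sketch runs end to end.
def toy_generate(prompt: str) -> str:
    return f"candidate-{random.randint(0, 999)}"

def toy_score(prompt: str, candidate: str) -> float:
    return random.random()

print(best_of_n("What is 17 * 24?", toy_generate, toy_score, n=8))
```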
However, achieving these optimizations requires a significant initial investment. As a Google representative noted: As our business and industry continue to evolve, we expect our total GHG (greenhouse gas) emissions to rise before dropping toward our absolute emissions reduction target.14
Getting nuclear
As I jokingly mentioned in one of my previous articles, “with great power comes great electricity bill”.15 No wonder data centers' power usage has increased enormously in recent years. OpenAI has even proposed an ambitious plan to the White House to build a massive data center, reflecting the escalating demands of AI development. The facility would cover 30 million square feet, the equivalent of about 520 football fields, and house 2 million GPUs, highlighting the immense computing power required for advanced AI models. It would also draw 5GW of power, far more than the typical 100MW used by most data centers, indicating significant electricity demands.16
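To put those numbers in perspective, here is my own back-of-the-envelope arithmetic on the reported figures, not anything taken from the proposal itself:

```python
# Back-of-the-envelope arithmetic on the reported figures; the per-GPU number
# folds in cooling, networking, and other facility overhead.
facility_power_w = 5e9       # proposed 5 GW facility
typical_dc_power_w = 100e6   # ~100 MW for a typical large data center
gpu_count = 2_000_000

print(f"Equivalent to ~{facility_power_w / typical_dc_power_w:.0f} typical data centers")
print(f"~{facility_power_w / gpu_count / 1e3:.1f} kW per GPU, including facility overhead")
```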
It seems the fastest way to meet this projected demand would be to build nuclear reactors. In 2023, researchers at AI startup Hugging Face and Carnegie Mellon University found that generating a single image using artificial intelligence can use as much energy as charging a smartphone. (...) Google’s greenhouse gas emissions climbed nearly 50 percent in five years due to AI. (...) Microsoft, which also pledged to go “carbon negative” by the end of this decade, reported that its greenhouse gas emissions had risen nearly 30 percent since 2020 due to the construction of data centers.17
As AI technology continues to advance, the energy demands of powering AI data centers are becoming increasingly significant. According to a recent report, Google is turning to nuclear energy to help power its AI drive. On Monday, the company said it will partner with the startup Kairos Power to build seven small nuclear reactors in the US. The deal targets adding 500 megawatts of nuclear power from the small modular reactors (SMRs) by the decade’s end. The first is expected to be up and running by 2030, with the remainder arriving through 2035.18
This move is part of a broader trend among tech giants like Amazon, Meta, and Apple, which are investing heavily in energy sources with significant environmental impacts. Even the infamous Three Mile Island plant is being restarted at Microsoft's request. Meanwhile, the industry's enfant terrible, Elon Musk, is building his own energy supply for Grok, leveraging industrial and business connections to secure power and chips, albeit by installing highly polluting gas turbines.19
However, these nuclear power ambitions face regulatory hurdles. Amazon's expansion plans are stalled due to potential blackouts for other power users, while Meta's nuclear build-out could impact rare bee populations.20
AI's needs go beyond environmental concerns and pose significant economic risks. Even if rising electricity costs spare society and another nuclear disaster is avoided, we could still face a recession driven by an AI bubble. Financial experts draw parallels to the dot-com crisis nearly 25 years ago, when companies with ".com" in their names saw inflated valuations without revolutionary products. The potential impact of an AI bubble could be even more severe today.21
Avoiding the bubble
The huge disparity between massive infrastructure investments and modest revenues may drive companies to secure profits before potential losses mount. Seeing familiar signs of market frenzy, many experts foresee another crisis similar to the dot-com bubble. Big Tech is consolidating, with partnerships forming between giants such as Apple, Microsoft, Google, Amazon, and Meta and AI leaders such as OpenAI and Anthropic. These alliances often blur the lines between collaboration and competition, particularly in the context of massive infrastructure investments aimed at advancing AI development, even as the financial returns on these investments remain uncertain. Microsoft's dual role as both a major investor in and competitor to OpenAI could shift dramatically as the development of AGI progresses, further complicating the competitive landscape.
In October, Microsoft also allowed the use of Gemini 1.5 and Claude 3.5 Sonnet in GitHub Copilot - a bold strategic move emphasizing its commitment not to rely solely on OpenAI. GitHub Copilot - an AI pair programmer - exemplifies the monetization potential of generative AI, with a $2 billion revenue run rate. By integrating Claude and Gemini alongside GPT-4, Microsoft is diversifying its AI offerings, underscoring the competitive landscape as companies like Anthropic and Google strive to showcase superior coding capabilities.22
Menlo Ventures has released its “State of GenAI in the Enterprise” report, highlighting key predictions for the future of generative AI. It foresees agents driving the next wave of transformation, with established incumbents likely to fall as customers rapidly switch to startups offering superior experiences. The report also predicts intense competition for the limited talent available to power the next wave of AI innovation. Organizations are taking a pragmatic, multi-model approach rather than relying on a single vendor.
Research shows that enterprises typically deploy three or more foundation models in their AI stacks, selecting different models for specific use cases or desired outcomes. Organizations prioritize tools that deliver measurable value (30%) and that understand the unique context of their work (26%) over those offering the lowest price tag (1%).23 The Age of AI has arrived, and the winners are, of course, the users. Marc Andreessen likens current AI development to "selling rice," suggesting minimal product differentiation. It turns out anybody can make an LLM, he notes, pointing to a potential race to the bottom in large language model development.24
Ethical concerns have been at the forefront of recent upheavals within OpenAI, a company that has faced criticism for its lack of transparency and its strategic shifts. The departures of top leaders like Ilya Sutskever, Greg Brockman, and Mira Murati, who followed Dario Amodei in starting their own ventures, have fueled speculation and intrigue. Benedict Evans humorously observed, it’s funny how everyone who leaves OpenAI wants to start a company that does exactly the same thing, except without Sam Altman.25
OpenAI's transition from a non-profit to a for-profit entity has raised questions about its motivations and transparency, particularly as it struggles to deliver on its ambitious promises. This environment of overpromising and underdelivering complicates predictions about future developments, as companies vie for bandwidth to expand their offerings. The pressure of projected losses further compels AI firms to prove their value.
The intrigue surrounding Elon Musk and Sam Altman intensifies as Musk, a former OpenAI investor, attempts for the third time to sue Altman for allegedly abusing OpenAI's mission, which Musk claims makes the company not truly "open." In a publicized email exchange, the power dynamics between the two are evident. Altman chose to keep OpenAI’s cutting-edge AI behind closed doors, claiming it was too dangerous to be openly released.26
In a cautionary op-ed for Dow Jones & Co, Jeffry Funk and Gary Smith express concerns about the sustainability of the AI market. They argue that AI needs $600B annual revenue to justify investments; current revenue = ~1% of that. Internet revenue in 2000 was $1.5T (2024 dollars)—and it still burst. AI revenue today < $10B, suggesting a way bigger pop. (…) Smith argues that AI’s danger isn’t computers outsmarting us, but us trusting them too much (because we think they’re smarter).27
The Neuron newsletter writers concluded, Our take: It’s definitely a bubble, but even the dotcom bust saw winners. Right now, GenAI providers are targeting enterprise use-cases pretty heavily, which indicates a new stage in the industry’s growth. This shift from consumer to enterprise follows previous tech waves like cloud storage, social media, and most recently, blockchain. This transition suggests a maturation of the industry, as companies focus on enterprise applications to drive growth.
Despite the challenges, the willingness of innovators to take risks is essential for progress. Venture capitalist Fred Wilson highlighted the importance of speculative investment in driving technological advancements: you need some of this mania to cause investors to open up their pocketbooks and finance the building of the railroads or the automobile or aerospace industry or whatever. And in this case, much of the capital invested was lost, but also much of it was invested in a very high throughput backbone for the Internet, and lots of software that works, and databases and server structure. All that stuff has allowed what we have today, which has changed all our lives... that's what all this speculative mania built.28
Regulating the market
If the preceding warnings about infrastructure needs and profit tensions were not enough, we need to remember that the whole market is still largely unregulated. It's still the Wild West, much like the not-so-recent blockchain hype, and the first movers set the pace for others. That is, unless governments have something to say. This time, however, things are pretty serious, and the challenges of regulating AI are fueling global tensions reminiscent of Cold War divisions. The European Union, known for its methodical and consensus-driven approach, has implemented strict regulations to strengthen citizens' rights. For example, training data cannot include user data from platforms like Meta or LinkedIn, effectively limiting facial recognition and other potential AI developments.
The AI Act, considered the most comprehensive AI regulation in the world as of 2023, categorizes AI applications by risk.29 Notably, it includes a specific category for general-purpose generative AI. Applications deemed to pose an "unacceptable risk" will be prohibited, unless specific exemptions apply. This includes AI systems that manipulate human behavior, use real-time remote biometric identification in public spaces, or engage in social scoring.
Although the AI Act is European, its impact is global and affects international companies that want to operate in Europe. Apple, for example, has scaled back its services in the EU to comply with these regulations, and will not release Apple Intelligence in the EU until April 2025. Since announcing Apple Intelligence, we have been working to find a path to deliver as many features as we can in the EU in a way that complies with the DMA while maintaining user privacy and security, and to determine what additional product engineering would be required to do so.30
The Digital Markets Act (DMA) and the AI Act stand in stark contrast to countries like China, which relies heavily on social scoring and facial recognition to increase control and security over its citizens. Critics argue that such regulations could hinder European startups, making them less competitive than their American and Chinese counterparts.
The United States is putting pressure on China by limiting the market reach of Chinese tech products (e.g., Huawei, Xiaomi) and restricting China's access to Nvidia chips. These actions are part of a broader strategy to slow China's progress in AI. While Chinese AI development has suffered some setbacks, particularly due to limited access to powerful language models, China is pursuing alternative approaches, such as developing its own internet infrastructure to expand state control over digital information.
Meanwhile, Chinese regulators are also taking a cautious approach to generative AI. They require that GenAI content be explicitly tagged with text watermarks or implicitly tagged with metadata. Content must be categorized as definitely, probably, or suspiciously AI-generated, and third parties, such as app stores, must ensure compliance with these standards. As AI regulation continues to evolve, the global landscape remains divided, with each region balancing innovation, privacy, and control in its own way.31
You might notice that all the leading companies are based in California. There, an important bill, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047), was introduced to hold companies accountable for the risks associated with their AI models. However, California's governor recently vetoed the bill, citing concerns that it could impose unworkable restrictions on AI development. For smaller companies, such limitations may make the entire AI race not worth entering.
Early 2025 will be a challenging period for AI companies operating in Europe, as the EU AI Act's first obligations come into effect. The regulation requires AI models to demonstrate safety, ethics, and compliance with the new rules. According to recent research from ETH Zurich, no major model currently meets these stringent standards; the findings show that "almost all of the models examined" struggle with issues of non-discrimination and fairness.32
The EU AI law will be implemented in phases: bans on certain AI systems will begin in February 2025, rules for general-purpose AI will begin in August 2025, and requirements for high-risk AI systems will take effect in August 2026. While there are concerns about EU bureaucracy and over-regulation, the bottom line is to keep AI developments in check.
Industry experts like Benedict Evans and Steven Sinofsky offer a different perspective on regulating AI, arguing that the issue is misunderstood by those who rely on pattern matching against similar cases from the past. Evans contends that trying to regulate AI with a single comprehensive law is misguided, akin to regulating "databases" or "spreadsheets."33
Steven Sinofsky, investor and ex-Microsoft engineer, writes that: The AI safety world was in a rush to say AI safety requires that AI not be used or should be regulated in whole new ways. Except AI as a tool is regulated just as a PC or calculator or PDR is regulated—the regulation is not the tool but the person using the tool. The liability is with the person that deployed or employed the tool, not the tool itself.34
Managing sides
In 2023, the large language model landscape was evolving rapidly, with new entrants like Mistral and Llama intensifying market competition. In the midst of this growth, some industry leaders called for a temporary pause in AI development to assess potential risks.35 They argued that a six-month pause would provide sufficient time to weigh societal and ethical concerns. Elon Musk emerged as a prominent proponent of this caution, suggesting that AI could pose one of the greatest threats to society. However, questions arose as to whether his caution was strategic, possibly aimed at buying time for his own AI ventures. During this time, Musk, via his xAI company, launched Grok, an AI model he claimed would be more transparent than OpenAI's models, and restructured Twitter to increase profitability while using its data for Grok's training. Musk's actions have consistently challenged traditional business ethics. Now Musk claims on Grok's website that its model supports all of its competitors' products, highlighting the complex and often contradictory nature of his approach to AI development.
The potential risks associated with AI and social media often stem from individuals fixated on power who may spread misinformation. Elon Musk has already sparked controversy with his decisions about what content is allowed on Twitter, opting for a more free-speech-oriented platform and less PC-driven censorship. In our post-truth world, it has historically been the prerogative of those in power to determine which truths are disseminated and which are suppressed. Social media platforms, much like recent AI products, now offer chatbots designed to be harassment-free and predictable, avoiding human-like unpredictability.36
Known for his rebellious persona and right-wing leanings, Elon Musk continues to push the envelope by enabling tools that create fake images (the Grok image generator) and influence political discourse (X, formerly known as Twitter). He became a key political and business advisor to Donald Trump, who has offered Musk the role of head of the Department of Government Efficiency. This role, along with Musk's influence over X - a platform that now reportedly favors Republican and Musk-related content - demonstrates his growing political influence.37 According to researchers, platform-level changes were implemented to amplify posts by Musk and other conservatives, especially after Musk endorsed Donald Trump.
However, even more sweeping changes can be expected, as the new president is known for his sympathy toward entrepreneurs. During Trump's victory speech, he praised Musk as a "super genius" and emphasized the need to "protect our geniuses" from the vast federal bureaucracy...that is holding America back in a big way.38 Another of Musk's companies, xAI, could benefit from Trump's expected light-touch approach to AI regulation. Experts suggest that Trump's administration may prefer to rely on existing laws rather than introduce new ones, potentially giving Musk's company more freedom to innovate without regulatory constraints.
Even partial implementation of Musk's plans could have a significant impact on policies and regulations, especially those that affect his businesses. This collaboration between Musk and the government could be seen as a modern-day echo of the sentiment from the classic novel "Catch-22," where "what's good for Milo Minderbinder is good for the country," suggesting that Musk's success is tied to national interests.
Opening the sources
Mark Zuckerberg's release of the open-source Llama model is intended to position Meta as a leader in the free software movement. However, details about Llama's training data, which includes content from Meta's platforms (Instagram, Facebook, Threads, WhatsApp) unless users opt out, have been limited. There is an ongoing debate within the AI industry about what truly constitutes "open source." For some, calling a model open source can be a strategic marketing tactic, much as the word "democratic" in a country's name doesn't necessarily indicate its style of government. Brands and certifications don't guarantee authenticity.
The Open Source AI Definition (OSAID) now provides clear criteria for what qualifies as an open-source AI model. According to OSAID, a model must provide enough information about its design to allow someone to "substantially" recreate it, and it must disclose relevant details about its training data, including its origin, processing methods, and accessibility or licensing terms.39 While there is consensus on these criteria, companies like Meta resist this definition because their branding relies heavily on the open-source label. The challenge lies in the proprietary nature of training data and the significant computational resources required to develop AI models, which are barriers for many developers. In addition, the complexity of fine-tuning these models adds another layer of difficulty.
To make matters worse, there is an inherent danger in being too open. A Reuters report revealed that Chinese researchers had used Meta's Llama 2 model to develop an AI system for the country's military. In response, a Meta spokesperson downplayed the significance, noting that the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI.40
This raises the question of whether such openness will allow other nations to circumvent U.S. regulations and prohibitions and potentially use AI in ways that challenge U.S. interests. Those familiar with the strategies outlined in the ancient "Art of War" may not be surprised by such tactical maneuvers.
In contrast to Zuckerberg's (sort of) open source approach, companies like Anthropic, Google, OpenAI, and Microsoft are focusing on enterprise-ready models. They prioritize ethical and privacy considerations within business applications and drive enterprise adoption through comprehensive suites or standalone products. This approach ensures that AI serves as a useful business tool while navigating the complex landscape of ethics and privacy.
Creating new divisions
Meanwhile, Trump's AI philosophy, articulated during a dinner with Silicon Valley venture capitalists Marc Andreessen and Ben Horowitz, stresses the importance of winning the AI race against China to maintain global leadership. This approach is consistent with his promise to "prohibit the use of AI to censor the speech of American citizens," and reflects a broader commitment to minimal regulation and increased competition with China.41
The election of Donald Trump may further complicate the economic development of AI companies, and not only those based in the US. In a surprising twist, the French startup Mistral AI has just released a 124B-parameter model in a free version, complete with a vision model, a canvas feature, PDF analysis, web search, and agent creation. At the end of the day, it is really the users who benefit the most. But that may be short-sighted, because sooner or later someone will have to pay the bills. While AI is being championed as a revolutionary equalizer that will level the playing field for underdeveloped countries, history teaches that some rebalancing is always required. No matter how far we have come in dethroning feudalists, harnessing industrialists, and liberating the market, the rules are still written by humans.
In his latest book, Nexus, Yuval Noah Harari explores humanity's fascination with information and fictional worlds, and offers a sobering perspective on the future. He warns that AI could intensify existing human conflicts and potentially divide humanity, echoing the separation caused by the Iron Curtain during the Cold War. Harari introduces the concept of a "silicon curtain" - an invisible barrier created by silicon chips and computer code - that could not only separate global powers, but also isolate humanity from AI entities. He further suggests that this divide might come to divide not one group of humans from another but rather all humans from our new AI overlords. No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even reengineer our bodies and minds. One might think of Leviathan or Big Brother - ideas that have shaped our policy tremendously so far. Will AI embody a similar paradigm of centralized power, or could it evolve to offer real societal benefits? Or, to put it in more human terms, is this about the prophecy of a new, better way of life, or is it just about profit for the few who hold the reins of power?
—Michael Talarek
https://www.theneurondaily.com/p/the-150b-open-ai-question
https://ia.samaltman.com
https://techcrunch.com/2024/11/09/openai-reportedly-developing-new-strategies-to-deal-with-ai-improvement-slowdown
https://www.metadocs.co/2024/09/17/how-openai-o1-works-in-a-simple-way-and-why-it-matters-for-rag-and-agentic/#:~:text=Quiet%2DSTaR%20approach!-,The%20Quiet%2DSTaR%20Approach,-The%20Quiet%2DSTaR
https://media.datacenterdynamics.com/media/documents/openai-infra-economics-10.09.24.pdf
https://www.theneurondaily.com/p/ai-bubble-ramifications
https://artificialintelligenceact.eu
https://www.wsj.com/articles/elon-musk-other-ai-bigwigs-call-for-pause-in-technologys-development-56327f
John Berryman, Albert Ziegler, “Prompt Engineering for LLMs”
https://www.theverge.com/2024/11/17/24298669/musk-trump-endorsement-x-boosting-republican-posts-july-algorithm-change
https://www.washingtonpost.com/technology/2024/07/16/trump-ai-executive-order-regulations-military/