The massive IT outage that rocked companies around the world highlights just how deeply intertwined society and the systems powering it are with Big Tech — and how a single misstep can trigger widespread chaos.
It also exposes the fragility of those systems and raises the question: Does Big Tech deserve our trust to properly safeguard a technology as powerful as AI?
The software issue stemmed from an update released by cybersecurity firm CrowdStrike on Friday, and it resulted in a Microsoft IT outage that disrupted airlines, banks, retailers, emergency services, and healthcare providers around the world. A fix has been deployed, according to CrowdStrike, but many systems were still offline on Friday as companies grappled with bringing their services back online, some of which required manual updates.
Gary Marcus, an AI researcher and founder of Geometric Intelligence, a machine learning AI startup acquired by Uber in 2016, told Business Insider that the Microsoft-CrowdStrike outage should be a “wake-up call” to consumers — and that the impact of a similar issue with AI would be tenfold.
“If a single bug can take down airlines, banks, retailers, media outlets, and more, what on earth makes you think we are ready for AGI?” Marcus wrote in a post on X.
AGI, short for artificial general intelligence, refers to a version of AI that can match human capabilities like reasoning and judgment. OpenAI cofounder John Schulman previously predicted it was just a few years away.
Marcus, who has been critical of OpenAI in the past, told BI that such a timeline could prove problematic given the current systems in place, and that consumers are handing over enormous amounts of power to Big Tech companies and AI.
Dan O’Dowd, founder of the safety advocacy group The Dawn Project, has campaigned against Tesla’s self-driving systems. He told BI that the situation with CrowdStrike and Microsoft is a reminder that critical infrastructure is not secure or reliable enough. He said Big Tech companies evaluate systems based on whether they work “pretty well most of the time” because there’s a rush to get products to market.
Some of that has already been made apparent when it comes to AI.
Companies across the board have released a deluge of AI products and offerings in the last six months, some of which have begun to transform how people work. But the AI models, which are prone to hallucination, have also produced some well-publicized errors along the way, like Google’s AI Overviews suggesting users put glue on pizza, or Gemini’s inaccurate portrayals of historical figures.
Companies have also taken turns announcing flashy new products and then delaying or rolling them back because they weren’t ready, or because public launches revealed problems. OpenAI, Microsoft, Google, and Adobe have all rolled back or delayed AI offerings this year as the AI race heats up.
While some of these mistakes or product delays may not seem like a big deal, the potential risks could be more severe as technology advances.
The US Department of State commissioned a risk assessment report on AI, which was published earlier this year. The report indicated AI poses a high risk of weaponization, which could take the form of biowarfare, mass cyber-attacks, disinformation campaigns, or autonomous robots. The results could lead to “catastrophic risks” including human extinction, the report said.
Javad Abed, assistant professor of information systems at Johns Hopkins’ Carey Business School, told Business Insider that incidents like the Microsoft-CrowdStrike outage continue to occur because companies still view cybersecurity as a “cost rather than a necessary investment.” He said Big Tech companies should have alternative vendors and a multi-layered defense strategy.
“Investing an additional million dollars in such a critical aspect of cybersecurity is far more prudent than facing the potential loss of millions later,” Abed said. “Along with continuous damage to reputation and customer trust.”
Public trust in institutions has steadily declined for the past five years, according to a 2023 study by the Brookings Institution, a nonprofit public policy organization. This erosion of confidence is particularly pronounced in the technology sector. Big Tech companies, including Facebook, Amazon, and Google, saw the sharpest drop in trust, with an average decline in confidence ratings of 13% to 18%, according to Brookings.
That trust is likely to be tested further as consumers and workers at the companies hit by the IT outage confront how a software update gone wrong can bring things to a screeching halt.
Sanjay Patnaik, a director at the Brookings Institution, told BI the government has dropped the ball and failed to properly regulate social media and AI. Without adequate defenses in place, the technology could become a national security threat, he said.
Big Tech companies have had “free rein,” Patnaik said. “And today is a day companies are starting to realize that.”
Marcus agreed that companies can’t be trusted on their own to build reliable infrastructure, and said the outage should be a reminder that “we’re playing double or nothing when we allow AI systems to be unregulated.”