google.com, pub-8701563775261122, DIRECT, f08c47fec0942fa0
USA

The AI risk that can tip business into chaos

Aire Pictures | An | Getty Images

As the business world deals with artificial intelligence, the biggest risk may be that those who manage the economy may not be one step ahead. As AI systems become more complex, humans cannot fully understand, predict or control them. A lack of basic understanding of where AI models will go in the coming years makes it difficult for organizations using AI to predict risks and implement guardrails.

“We’re basically targeting a moving target,” said Alfredo Hickman, Obsidian Security’s chief information security officer.

A recent experience Hickman had with the founder of a company developing basic AI models shocked him, he says: “When they tell me they don’t understand where this technology will be in next year, two years, three years. … The technology developers themselves don’t understand and don’t know where this technology will be.”

As organizations connect AI systems to real-world business operations to confirm transactions, write code, interact with customers, and move data between platforms, they face a growing gap between how they expect these systems to behave and how they actually perform once deployed. They are quickly discovering that AI is dangerous not because it is autonomous, but because it increases system complexity beyond human comprehension.

“Autonomous systems don’t always fail loudly. It’s usually a silent failure,” said Noe Ramos, vice president of AI operations at Agiloft, a company that offers software for contract management.

When errors occur, he says, damage can spread quickly, sometimes long before companies realize anything is wrong.

“It can progress from mild to aggressive, which is an operational loss or updating records with minor inaccuracies,” Ramos said. “These bugs seem small, but on a large scale over weeks or months, they cause operational delay, compliance exposure, or erosion of trust. And since nothing crashes, it may take time for anyone to realize this is happening,” he added.

The first signs of this chaos are emerging across industries.

In one case, a beverage manufacturer’s AI-powered system failed to recognize its products after the company introduced new holiday labels, according to John Bruggeman, chief information security officer at technology solution provider CBTS. Because the system interpreted the unusual packaging as an error signal, it constantly triggered additional production runs. By the time the company realized what had happened, several hundred thousand extra boxes had been produced. The system acted logically based on the data it received, but in a way no one expected.

“The system had not failed in the traditional sense,” Bruggeman said. Rather, it was responding to conditions that the developers did not expect. “That’s the danger. These systems do exactly what you tell them to do, not just what you mean to do,” he said.

Customer-facing systems pose similar risks.

Suja Viswesan, vice president of software cybersecurity IBM’sIt says it detected a situation where an autonomous customer service representative began approving refunds outside of policy guidelines. One customer convinced the system to issue a refund and later left a positive overall review after receiving the refund. The agent then began freely issuing additional refunds, optimizing for more positive reviews rather than following established refund policies.

‘You need a kill switch’

These failures highlight the fact that problems do not necessarily arise from dramatic technical failures, but from ordinary situations that interact with automatic decisions in ways that humans did not foresee.

As organizations begin to rely on AI systems for more important decisions, experts say companies will need ways to respond quickly if systems behave unexpectedly.

But stopping an AI system isn’t always as simple as shutting down a single application. With intermediaries connected to financial platforms, customer data, internal software and external tools, intervention can require multiple workflows to be stopped simultaneously, according to AI operations experts.

“You need a kill switch,” Bruggeman said. “And you need someone who knows how to use it. The CIO needs to know where the kill switch is, and more than one person needs to know where it is if it goes sideways.”

Experts say better algorithms won’t solve the problem. Preventing failure requires organizations to establish operational controls, oversight mechanisms, and clear decision boundaries around AI systems from the outset.

“People are relying too much on these systems,” said Mitchell Amador, CEO of crowdsourced security platform Immunefi. “They’re insecure by default. And you have to assume that you have to build that into your architecture. If you don’t, you’ll get excited.”

But “most people don’t want to learn that either. They want to present their work to Anthropic or OpenAI and say, ‘Well, they’ll figure it out.'”

Many companies lack operational readiness and often don’t have fully documented workflows, exceptions or decision-making boundaries, Ramos said. “Autonomy forces operational clarity,” he said. “If your exception management lives in people’s heads rather than through documented processes, AI will instantly uncover those gaps.”

Ramos also said companies often underestimate how much access teams commit to AI systems in the belief that automation is effective, and edge cases that humans intuitively handle are often not hard-coded into systems. He said you have to switch from people in the loop to people in the loop. “While humans in the loop review outputs, humans in the loop audit performance patterns and detect anomalies and system behavior over time, mitigating small errors that can escalate at scale,” he said.

Institutional pressure to move quickly

The speed at which technology spreads across the economy is also among the unknowns.

According to a McKinsey’s 2025 report on the state of artificial intelligence23% of companies say they are already scaling AI agents within their organization, while another 39% are experimenting, but most deployments remain limited to one or two business functions.

This represents the early maturity of enterprise AI, according to Michael Chui, a senior researcher at McKinsey, who said that despite intense interest in autonomous systems, there is a huge gap between “the great potential unlocked in a ‘deception loop’ and the current reality on the ground.”

However, it seems unlikely that companies will slow down.

“It’s almost like a gold rush mentality, a FOMO mentality, where organizations fundamentally believe that if they don’t leverage these technologies, they’re going to have a strategic liability in the market,” Hickman said.

Balancing the speed of deployment with the risk of losing control is a critical issue. “There is pressure among leaders of AI operations to move really quickly,” Ramos said. “But you also have a hard time not damaging the experiments, because that’s how you learn.”

Even as risks increase, expectations for technology continue to rise.

“We know these technologies are faster than any human could be,” Hickman said. “In five, 10 or 15 years, we will get to a place where AI is fundamentally smarter and moves faster than even the smartest humans.”

In the meantime, Ramos says there will be many learning moments. “The next wave will be more disciplined, not less assertive.” He says that the organizations that mature the fastest will be those that do not avoid failure but learn to manage it.

Can we control artificial intelligence? Google DeepMind's plan for responsible AI

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button