Six AI Questions I Want Answered in 2026

(Bloomberg Opinion) — According to Merriam-Webster, the word of the year for 2025 was “slop,” referring to the flood of low-quality content produced by artificial intelligence.
It’s an apt reflection of the strange phase we’ve entered three years after ChatGPT launched the global AI boom. We were promised tools that would cure diseases and solve climate change. What we mostly got in 2025 was machine-made pornography, fake rabbits bouncing on trampolines, and an internet that grew spammier by the day.
Artificial intelligence is now fueling lively debate everywhere from the boardroom to the classroom. But despite all the hype and all the money, some of the biggest questions about how this tech revolution will play out remain unanswered. Here are six questions I want clearer answers to in 2026:
What’s in the training data?
Images of child sexual abuse? Thousands of copyrighted creative works? A vast skew toward English-language material that perpetuates Eurocentric perspectives?
The answer to all of the above appears to be yes. But we can’t know for certain, because the companies building these systems refuse to say.
That secrecy becomes increasingly untenable as AI systems move into high-stakes settings such as schools, hospitals, hiring tools, and government services. The more decision-making we delegate to machines, the more urgent it becomes to understand what went into them.
For now, companies treat training data as a trade secret (or, given the copyright lawsuits, a liability). But the fight over transparency will likely come to a head in the new year: The European Union will require companies to share detailed summaries of their training data by mid-2027. Other jurisdictions should follow its lead.
What do we actually mean by AGI?
I don’t expect anyone to credibly declare that artificial general intelligence has been achieved in 2026. But before we can debate whether we’ve reached it, it would help to collectively agree on what it actually is.
As Google DeepMind researchers wrote in a paper last year: “If you asked 100 AI experts to explain what they mean by ‘AGI,’ you would probably get 100 related but different definitions.” Meanwhile, this vague concept has become the North Star of the entire global industry, used to justify hundreds of billions of dollars of investment.
The most widely cited definition, from OpenAI’s charter, describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.” But even that is fuzzy, as Chief Executive Officer Sam Altman has admitted, and it makes AGI a moving target as automation touches an ever-larger share of the economy.
OpenAI and Microsoft once set an internal financial benchmark for AGI: systems capable of generating $100 billion in combined profits, at least according to reporting by The Information. But getting consumers to pay for brain-rotting apps seems a far cry from a true measure of “intelligence.”
I don’t think AGI is a useful term. It fuels cycles of hype and fear rather than serious debate about AI’s societal impact or how it should be regulated. The industry won’t drop the phrase anytime soon, but it could at least agree on an empirical way to measure it.
Will governments actually regulate AI?
It’s no surprise that Big Tech companies don’t want to shoulder the burden of regulation, or that governments don’t want to do anything that risks falling behind in the geopolitical race.
But it will get harder for policymakers to ignore mounting public concern about AI’s impact on everything from young, developing minds to utility bills. Outside Europe, few jurisdictions have taken serious action to address these risks.
Lawmakers would be wise to get ahead of the harms before the backlash intensifies. We can’t trust the companies seeking to profit from this technology to write all the rules.
What will it take to pop the bubble?
In recent months, more people in the industry seem to have accepted that we are in the throes of some kind of bubble. That doesn’t mean AI won’t be transformative, but eye-watering valuations and seemingly circular investment deals among companies that can’t turn a profit are starting to look like red flags.
Yet the enthusiasm has proved remarkably durable over three years; there have been wobbles here and there, but no real sign of a slowdown. The fear of missing out remains strong. Eventually, something will test it. Perhaps revenue growth will slow as early adopters hit saturation, or perhaps powerful free open-source models will erode the pricing power of closed systems.
There probably won’t be a single event that will derail the global excitement train. But in 2026, I expect more investors to start questioning how they can avoid being the ones still dancing when the music stops, and to make more clear-eyed assessments of risk and return.
When will AI have to turn a profit?
Companies will soon have to show there is at least a sustainable path to profitability behind all the money being spent on AI. For chipmakers, the money is already rolling in. For the model makers, the picture is much murkier.
That will be an especially acute problem in China, where competition is fierce and frugal consumers are reluctant to pay for software services. But even in Silicon Valley, where the biggest players are starting to generate real revenue, the numbers are still dwarfed by the colossal sums being spent on data centers and scaling.
Whether consumers want it or not, we’ll likely see many more attempts to open up new revenue streams, from targeted ads to TikTok clones. But at some point, investors will demand more than promises of AGI’s future potential.
What happens to our jobs?
This is by far the question I’m asked most often when I talk about AI out in the real world. The anxiety is already here.
We’ve already seen AI investments used as cover for layoffs in the tech sector, and I expect we’ll see much more of this in other industries. Policymakers and business leaders will increasingly have to grapple with how to handle the mass labor-market disruption on the horizon.
If there’s one silver lining from this year, it’s that there still seems to be a hunger for human ideas and creativity that machines haven’t yet managed to match at scale.
I don’t expect 2026 to deliver all the answers. But the questions we ask about power, accountability, money, and meaning will shape how we let AI remake our world. That means staying curious, skeptical, and stubbornly human in the new year.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asian technology. She previously worked as a technology reporter for CNN and ABC News.
More stories like this are available at Bloomberg.com/opinion.