
AI’s reasoning problems — why ‘thinking’ models may not be smarter

AI reasoning models were supposed to be the industry's next leap, promising systems that could tackle more complex problems and move the field closer to superintelligence.

The latest models from the biggest players in artificial intelligence, including OpenAI, Anthropic, Alphabet and DeepSeek, feature reasoning capabilities. These reasoning models can carry out more challenging tasks by "thinking," or breaking problems down into logical steps and showing their work.

Now, a series of recent research papers is calling that promise into question.

In June, a team of Apple researchers published a white paper finding that state-of-the-art "[large reasoning models] still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero beyond certain complexities across different environments."

In other words, once problems get complex enough, reasoning models stop working. More importantly, the models do not "generalize," meaning they may simply be memorizing patterns rather than actually finding novel solutions.

"We can do really well on benchmarks. We can perform really well on specific tasks," he said. "But some of these papers show that it doesn't generalize. So even though a model is really good at this one task, on a slightly different task it can be really terrible. And I think that's a fundamental limitation of reasoning models right now."

Researchers at Salesforce, Anthropic and other AI labs have also raised red flags about reasoning models. Salesforce describes the phenomenon as "jagged intelligence," finding that "there is a significant gap between current [large language model] capabilities and real-world enterprise demands."

These limitations could reveal cracks in the narrative that has sent AI infrastructure stocks such as Nvidia soaring.

"The amount of computation we need at this point, as a result of reasoning, is easily a hundred times more than we thought we needed this time last year," Nvidia CEO Jensen Huang said at the company's GTC event in March.

To be sure, some experts say Apple's warnings about reasoning models may be an attempt by the iPhone maker to change the conversation, as the company appears to be falling behind in the AI race. Apple has suffered a number of setbacks with its highly touted Apple Intelligence suite of AI features.

Most notably, Apple had to postpone key upgrades to its Siri voice assistant until 2026, and the company made few AI announcements at its annual Worldwide Developers Conference.

"Apple is essentially saying that LLMs and reasoning aren't really that useful," Daniel Newman, CEO of Futurum Group, told CNBC. After Apple's paper, he said, it reads like, "Oops, look over here, we don't know exactly what we're doing."

Watch this video to learn more.
