
Scientists asked ChatGPT to solve a math problem from more than 2,000 years ago — how it answered surprised them


Credit: Georgiclerk/Getty Images

When the Greek philosopher Plato challenged a student with the problem of "doubling the square" around 385 B.C., asking him to double the area of a square, the student doubled the length of both sides. The correct answer is that the sides of the new square should be the length of the original square's diagonal.
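The geometry behind both the student's mistake and the correct answer can be checked numerically. A minimal sketch (the side length is an arbitrary example value):

```python
import math

side = 3.0                    # side of the original square (arbitrary example value)
area = side ** 2              # original area

# The student's mistake: doubling each side quadruples the area.
wrong_side = 2 * side
assert math.isclose(wrong_side ** 2, 4 * area)

# Plato's answer: build the new square on the original square's diagonal.
diagonal = math.hypot(side, side)        # equals side * sqrt(2)
new_area = diagonal ** 2
assert math.isclose(new_area, 2 * area)  # exactly double the original area
```

Since the diagonal of a square with side s has length s·√2, the square built on it has area 2s², which is why the construction doubles the area exactly.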

Scientists at the University of Cambridge and the Hebrew University of Jerusalem chose this problem to pose to ChatGPT precisely because its solution is not obvious. In the roughly 2,400 years since Plato, academics have used the doubling-the-square problem to debate whether the mathematical knowledge required to solve it is innate or acquired through experience.

ChatGPT, like other large language models (LLMs), was trained mostly on text rather than images, so the researchers reasoned that the solution to the square problem was unlikely to appear often in its training data. If it reached the correct solution without help, that would suggest its mathematical ability was developed through experience rather than simply retrieved.

The answer surprised the team. As explained in a study published Sept. 17 in the International Journal of Mathematical Education in Science and Technology, they asked the chatbot to double the area of a rectangle using similar reasoning. It replied that there was no geometric solution, because a rectangle's diagonal cannot be used to double its size.

However, Nadav Marco, a visiting scholar at the University of Cambridge and the Hebrew University of Jerusalem, and Andreas Stylianides, professor of mathematics education, knew that a geometric solution does exist.
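One simple way to see that the chatbot's claim was wrong: scaling both sides of a rectangle by √2 doubles its area, and a length of √2 times any given length is constructible with compass and straightedge (it is the diagonal of a square built on that length). This is an illustrative construction, not necessarily the one the researchers had in mind; a quick numeric check with arbitrary side lengths:

```python
import math

a, b = 4.0, 2.0               # sides of the original rectangle (arbitrary values)
area = a * b

# sqrt(2) times any length is constructible with compass and straightedge,
# since it is the diagonal of a square built on that length.
scale = math.sqrt(2)
new_area = (a * scale) * (b * scale)

assert math.isclose(new_area, 2 * area)   # the rectangle's area is exactly doubled
```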

Marco said the chance that this false claim appeared in ChatGPT's training data was "vanishingly small," meaning the chatbot was improvising, basing its answer on the earlier discussion of the doubling-the-square problem. That is a clear indicator of generated rather than retrieved knowledge.

"When we face a new problem, our instinct is often to try things out based on our past experience," Marco said in a Sept. 18 statement. "In our experiment, ChatGPT seemed to do something similar. Like a student or scholar, it appeared to come up with its own hypotheses and solutions."

Thinking machines?

The work sheds light on long-standing questions about whether artificial intelligence (AI) systems can genuinely "reason" or "think."

Because ChatGPT improvised its answers and even made mistakes, much as the student in Plato's dialogue did, the researchers suggested it may be operating within a concept already familiar from education: the zone of proximal development (ZPD), which describes the gap between what we currently know and what we can eventually come to know with the right guidance.

ChatGPT may be using a similar framework spontaneously, solving novel problems that are not represented in its training data when given the right prompts.

This is not a definitive answer to AI's long-standing "black box" problem, in which a system's internal workings remain invisible and unexamined, making it impossible to say whether it is truly "reasoning" or "thinking." But the researchers emphasized the opportunity it presents to make AI work better for us.

"Unlike proofs found in reputable textbooks, students cannot assume that ChatGPT's proofs are valid," Stylianides said in the statement. "Understanding and evaluating AI-generated proofs is emerging as a core skill that needs to be embedded in the mathematics curriculum."


This is a fundamental skill they want students to master in educational contexts, along with better prompt engineering — for example, telling the AI "I want us to explore this problem together" rather than simply asking for the answer.

The team is cautious about the results, warning against over-interpreting them and concluding that LLMs "figure things out" the way we do. Nevertheless, Marco described ChatGPT's behavior as "learner-like."

The researchers see scope for future work in several areas. Newer models could be tested on a wider set of mathematical problems, and ChatGPT could be combined with dynamic geometry systems or theorem provers, creating richer digital environments that support intuitive exploration and let teachers and students use AI to work through problems together.
