
The 2025 word of the year is not a word — it is AI

AI can write and predict, but it also hallucinates and lies. The danger is not artificial intelligence, but thinking we understand it. Patrick Drennan reports.

The choice will be 'rage bait' for some (the Oxford word of the year) and evoke 'parasocial' reactions from others (the Cambridge word of the year), but at least it is not 'vibe coding', the Collins Dictionary's odd word of the year, which is simply a technology term for coding with artificial intelligence.

Technically, AI is not a word or even an acronym; it is an initialism.

By far the biggest danger of Artificial Intelligence is that people decide they understand it too soon.

~ Eliezer Yudkowsky

Artificial Intelligence (AI), or A1 as the US Secretary of Education calls it, is defined by IBM as:

“…technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”

AI can also write documents and stories, compose music, draw pictures, and create images of unreal people and events.

ChatGPT is an AI chatbot that uses natural language processing to create human-like conversational dialogue.

Artificial intelligence is often a personal encyclopedia on your phone and laptop. It allows phones to learn to personalize the user experience based on user patterns. It makes your writing easier and grammatically correct.

Many banks and online services use AI for fraud detection by analyzing transaction patterns to flag suspicious activity in real-time.

Predictive AI has rapidly improved vital medical services such as identifying problem lesions and heart arrhythmia. Thanks to this technology, seismologists can predict earthquakes and meteorologists can predict floods more reliably than ever before.


AI-programmed drone warfare is becoming commonplace in the war in Ukraine, and some of these drones operate without any human operator. Using AI in drone warfare raises ethical questions about unintended consequences, such as delegating life-or-death decisions to machines and programming drones to attack non-combatants.

Day-to-day jobs that could be lost to AI include customer service representatives, receptionists, insurance underwriters, accountants, computer system analysts, coders, and even baristas.

But many employers jumping on the AI bandwagon are realizing they still need human input. For example, doctors using AI for diagnostic support must still form the final medical opinion, and engineers using AI to prototype faster still provide the high-level architectural design and business logic that AI cannot.

Evidence shows that some companies that laid off staff in 2025 expecting AI to perform their tasks are now rehiring them.

There is a healthy fear that AI is taking over, but roles across all industries still rely on creativity, strategic thinking, problem-solving and simply being human.

For example, AI editors can check books and articles for plagiarism, spelling and grammar, but only human editors can judge pacing, context and reasoning.

Big Little AI lies.

Just because an AI system is considered safe in a test environment does not mean it is also safe in the wild. It could be pretending to be safe during the test.

~ Dr Peter S Park

Artificial intelligence is generally only as reliable as the human who programmed it.

Here are some eclectic examples:

A Deakin University study found that ChatGPT fabricated around a fifth of its academic citations, while half of the remaining citations contained other errors, a product of generative AI hallucination.


The US PIRG Education Fund published a report examining high-tech toys for young children that use ChatGPT not only to interact with children and steer conversations, but also to discuss adult topics with them, such as kink, kissing and religion.

Have you ever seen AI-generated recreations of ancient cities like Pompeii and Rome? Although they are very useful models, they are not historically accurate. In an example from ancient Rome, historians have noted that buildings, baths, chariots, and even legionary uniforms are inaccurate.

In another example, famous rock saxophonist Bobby Keys jokingly claimed to have played on Elvis Presley's hit song 'Return to Sender'. AI now records this as fact, although the real saxophonist was Boots Randolph, one of the Nashville sidemen.

Google's AI Overview tool recently told some users asking how to make cheese stick to pizza that they could use non-toxic glue.

As telecommunications expert Paul Budde explains, artificial intelligence will never be able to react to the external environment like a human; it does not have our flexible, adaptable cognition, nor the human body's ability to fight harmful organisms such as bacteria, viruses and fungi.

Maybe Merriam-Webster got it right when it named its word of the year, 'slop':

“…low-quality digital content produced in large quantities, often through artificial intelligence.”

Patrick Drennan is a journalist living in New Zealand with a degree in American history and economics.

Support independent journalism. Subscribe to IA.
