
Before we ask what AI can do, let’s ask what kind of AI we’re talking about

We keep asking what AI can do, but the real danger is using it without knowing what it is, writes Paul Budde.

There is no shortage of headlines heralding the power of artificial intelligence. From promises of an economic productivity explosion to fears of mass job losses, AI is hyped as both saviour and threat. But amid all the noise, a critical question often goes unasked: what kind of AI are we talking about?

For most people, “AI” now means generative artificial intelligence: ChatGPT, Gemini, Claude and their kin, which produce human-like text, images and even code. Yet generative AI is just one branch of a much wider field and, importantly, one of the least deterministic. That matters most when high-stakes decisions are involved, in sectors such as health, justice, education or energy. I have touched on such developments before when discussing DeepSeek.

Generative means probabilistic, not deterministic

Generative AI models are probabilistic by design. That is why they can write poems, create recipes or simulate a debate, and it is also why they sometimes hallucinate facts or contradict themselves. These systems do not “know” anything in the human sense. They produce outputs based on patterns in vast datasets, without real understanding or awareness. This is not a bug; it is the defining feature of the architecture.

Compare this with rule-based expert systems or computer vision algorithms. These are typically deterministic, transparent and fit for purpose. An expert system can be designed to diagnose medical symptoms using formal logic. A vision algorithm can determine with high reliability whether an object is a car or a tree. These systems are not flashy, but they are often far better suited to critical applications.
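The contrast can be sketched in a few lines. A rule-based expert system is just explicit if-then rules: deterministic, traceable, and the same input always produces the same output (the rules here are invented for illustration, not medical advice):

```python
# Minimal rule-based expert system sketch: each rule is a set of
# required symptoms paired with a conclusion. Hypothetical rules only.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "itchy eyes"}, "possible hay fever"),
]

def diagnose(symptoms: set[str]) -> list[str]:
    # Every rule whose conditions are all present fires.
    # Deterministic: identical symptoms always give identical conclusions.
    return [conclusion for required, conclusion in RULES
            if required <= symptoms]

print(diagnose({"fever", "cough", "headache"}))  # ['possible flu']
```

Unlike the sampling example above, there is no randomness here, which is why such systems can be audited rule by rule, but also why they cannot improvise beyond what was written into them.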

When someone proposes “AI” as a solution, the urgent question should be: what kind of AI? Are we looking for pattern recognition? Reasoning under uncertainty? Natural language generation? Or something else entirely?

Second question: what are its strengths and limitations?

Once we have identified the method, we must confront its abilities and its blind spots.

Generative AI, for example, is a master of imitation. It can mimic tone, suggest plausible arguments and summarise large amounts of content. But it is not a search engine, a calculator or a source of truth. It has no grounding in facts unless those facts were explicitly present in its training data or are retrieved from an external database.
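That idea of grounding can be sketched simply: answers are only given when a retrieved fact backs them, and the system admits ignorance otherwise (the knowledge store and lookup scheme below are invented for illustration; production systems use far richer retrieval):

```python
# Sketch of fact-grounding: respond only from an external knowledge
# store, never by pattern-matched guesswork. Hypothetical facts.
KNOWLEDGE_BASE = {
    "capital of australia": "Canberra",
    "speed of light": "299,792,458 m/s",
}

def answer(question: str) -> str:
    key = question.lower().rstrip("?").strip()
    # No retrieved fact means no answer, rather than a plausible guess.
    return KNOWLEDGE_BASE.get(key, "I don't know")

print(answer("Capital of Australia?"))  # Canberra
print(answer("Meaning of life?"))       # I don't know
```

The design choice worth noticing is the refusal path: a grounded system can say “I don't know”, whereas an ungrounded generative model will happily produce something fluent either way.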

This is something I have observed first-hand in an organisation I know well: a service and advisory firm that is developing its own AI system on top of existing tools. Rather than adopting a general-purpose model, they are building a customised AI around their own operations. They feed it internal reports, data, contracts, industry statistics, public research and more, all with the aim of creating a system that genuinely understands their business.

It is a large and evolving project that requires continuous fine-tuning. Along the way, they discover gaps in their own documentation and see where more detail is needed.

Interestingly, they report learning a great deal about their own organisation in the process: its strengths, its weaknesses, the assumptions buried in its systems. This is not just an AI training exercise but a revealing and meticulous exercise in understanding their own internal complexity.

Progress and peril: balancing the technology

Meanwhile, more traditional symbolic AI systems excel in areas that demand clear rules, traceability and repeatability. But they lack flexibility. They do not improvise. They are not designed to handle uncertainty.

Too often, policymakers and technology managers lump these very different systems under the same “AI” umbrella, leading to inflated expectations and misguided deployments.

The question that should come first

Before we even get to methods or tools, there is a more basic question: what convinces us that any type of AI is the right answer here?

We have seen this play out in education, where AI was launched as a fix for teacher shortages or student disengagement, without serious questioning of whether the real problems were social rather than technical. Or in policing, where predictive models were deployed on historical data without accounting for the bias baked into it.

In such cases, AI does not solve the problem; it amplifies it or hides it.

Indeed, part of AI's appeal may be its very vagueness. It lets “innovation” appear without the discomfort of institutional reform. But if we apply the wrong type of AI, or apply AI where none is needed, we risk not only wasting resources but entrenching flawed systems under the guise of progress.

Closing Thoughts

AI is not magic. It is a set of tools, some probabilistic, some deterministic, all of which demand clarity, context and critical evaluation. “What can AI do?” is the wrong place to start. We should begin by asking: which problem are we trying to solve, and do we understand it well enough to choose the right tool?

Until that becomes the default approach, the public debate around AI will continue to swing between utopian hype and dystopian fear, with far too little attention paid to how these systems actually work.

https://www.youtube.com/watch?v=qynweedhiyu

Paul Budde is an independent Australian columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.

