
South Korea’s ‘world-first’ AI laws face pushback amid bid to become leading tech power

South Korea has launched a push to regulate artificial intelligence, enacting what is billed as the most comprehensive set of AI laws anywhere in the world and a potential model for other countries, but the new legislation has already faced pushback.

Rules that force companies to label AI-generated content have been criticized by local tech startups, who say they go too far, and by civil society groups, who say they do not go far enough.

The artificial intelligence basic law, which came into force on Thursday last week, comes at a time when global unease over artificially created media and automated decision-making is growing, as governments struggle to keep up with rapidly advancing technologies.

Under the law, companies providing AI services face the following requirements:

  • AI-generated content must be labelled: invisible digital watermarks for clearly artificial output such as cartoons or works of art, and visible tags for realistic deepfakes.

  • Operators of “high-impact AI” — systems used for medical diagnosis, hiring and loan approvals — must conduct risk assessments and document how decisions are made. If a human makes the final decision, the system may fall outside this category.

  • Extremely powerful AI models require safety reports, though the threshold is set so high that government officials acknowledge no model worldwide currently meets it.

Companies that break the rules will face fines of up to 30 million won (£15,000), but the government has promised a grace period of at least a year before penalties are imposed.

The legislation is billed as the “world’s first” to be fully implemented by a country and is central to South Korea’s ambition to become one of the world’s three leading AI powers alongside the United States and China. Government officials argue that the law is 80-90% focused on encouraging the industry rather than restricting it.

Alice Oh, a professor of computer science at the Korea Advanced Institute of Science and Technology (KAIST), said that although the law is not perfect, it is intended to evolve without hindering innovation. But a December survey by Startup Alliance found that 98% of AI startups were unprepared for compliance. Co-chairman Lim Jung-wook said disappointment was widespread. “There is some resentment,” he said. “Why do we have to be the first to do this?”

Companies must determine for themselves whether their systems qualify as high-impact AI, critics say — a time-consuming process that creates uncertainty.

They also warn of a competitive imbalance: all Korean companies face regulation regardless of size, while foreign firms such as Google and OpenAI are required to comply only if they meet certain thresholds.

The push for regulation has emerged in a unique domestic environment that has civil society groups concerned the legislation does not go far enough.

According to a 2023 report by US-based identity protection company Security Hero, South Korea accounts for 53% of global deepfake pornography victims. An August 2024 investigation revealed massive Telegram chat room networks creating and distributing AI-generated sexual images of women and girls, foreshadowing the scandal that would later erupt around Elon Musk’s Grok chatbot.

However, the origins of the law predate this crisis: the first bill on artificial intelligence was introduced to parliament in July 2020. Legislation stalled repeatedly, in part over provisions accused of prioritizing industry interests above the protection of citizens.

Civil society groups argue that the new legislation provides limited protection for people harmed by AI systems.

Four organizations, including Minbyun, a collective of human rights lawyers, issued a joint statement a day after the law came into force, arguing that the law contains almost no provisions to protect citizens from AI risks.

The groups noted that while the law provides protections for “users”, these users are not the people affected by AI but the hospitals, financial companies and public institutions that deploy AI systems. They argued that the law designates no prohibited categories of AI, and that exemptions for “human involvement” create significant loopholes.

The country’s human rights commission criticized the law, stating that it lacks clear definitions of high-impact AI, and that those most likely to suffer rights violations remain in regulatory blind spots.

The science and ICT ministry said in a statement that it expects the law to “remove legal uncertainty” and create a “healthy and safe domestic artificial intelligence ecosystem”, adding that it will continue to clarify the rules with revised guidelines.

South Korea deliberately chose a different path than other jurisdictions, experts said.

Unlike the EU’s rigid risk-based regulatory model, the U.S. and U.K.’s largely sector-specific, market-driven approaches, or China’s combination of state-led industrial policy and detailed service-specific regulations, South Korea favors a more flexible, principles-based framework, said Melissa Hyesun Yoon, a law professor at Hanyang University who specializes in AI governance.

This approach focuses on what Yoon describes as “trust-based promotion and regulation.”

“Korea’s framework will serve as a useful reference point in global AI governance discussions,” she said.
