Focus on regulations related to facial recognition data and associated penalties
The European Union (EU) has been accelerating its efforts to take the lead in regulating artificial intelligence (AI) technology since it agreed last year to introduce the world's first comprehensive AI law.
The EU legislation centers on rules prohibiting AI developers from collecting and classifying facial recognition data. Companies that break these rules could face fines of up to 35 million euros (about $39.6 million) or 7% of their global annual turnover, underscoring how detailed the legislation's rules and penalties are.
The EU and other developed countries are actively preparing AI regulations as big tech companies, including OpenAI (GPT-4), Google (Gemini), and Elon Musk's xAI, launch generative AI models. Last October, the Biden administration announced an AI executive order requiring developers of AI models that pose risks to national security or the economy to notify the federal government. In November, South Korea agreed with 28 other countries to cooperate in preventing AI risks, following its announcement of the Digital Bill of Rights in September last year.
Why developed countries are keen on leading AI regulation
Why are developed countries as keen on leading AI regulation as they are on AI technology development? The answer is to take the lead in setting global standards for AI and to secure a position favorable to their own countries.
This is why President Yoon Suk Yeol proposed the basic principles of the Digital Bill of Rights at the New York Digital Vision Forum in September last year, aiming to shape international standards in South Korea's favor and secure a dominant position in the AI field.
Europe, which lacks leading AI companies, is seen as having hurriedly drawn the sword of regulation to check American AI big tech such as OpenAI, Google, and Meta. The EU has already introduced the Digital Services Act (DSA) and the Digital Markets Act (DMA) to rein in big tech platforms operating in the European market.
Initially, South Korea set the ambitious goal of enacting the world's first AI law and proposing international standards. However, progress has stalled in the National Assembly amid disputes over issues such as cuts to the research and development (R&D) budget, the bill establishing a national aerospace agency, and broadcasting laws. The Basic AI Act, officially the Act on Promotion of the AI Industry and Establishment of a Trust Base, passed the bill subcommittee of the Science, Technology, Information, Broadcasting, and Communications Committee in February last year but still awaits final approval.
The Importance of Specific AI Regulations
Some experts believe that because advanced technology fields, including AI, are directly tied to national security, it is essential to push for specific regulations. Following the UK, South Korea plans to host the second AI Safety Summit in May this year.
Yoon Jung Hyun, associate research fellow at the National Security Strategy Research Institute, explained, "We need to examine Korea's strategic position in the global competition over the use of AI, the acquisition of data, and the application of norms, and decide what we should lead on in terms of national interest." He added, "We also need to consider the potential for negative incidents arising from the societal threats posed by advanced generative AI and seek directions for policy strategy and legislative improvement at the government-wide level."
By Nari Kim