As the June 3 local elections approach, South Korea is ramping up its efforts to combat misinformation fueled by artificial intelligence. Within the National Election Commission (NEC), dedicated teams track AI-generated content on social media, an uphill battle given the speed at which the technology evolves. The fight raises concerns about voters’ ability to distinguish true from false and highlights the challenge of regulating information in the digital age.
A battle on social media
In an ordinary South Korean office, employees work to detect potentially misleading content spreading on social media, particularly on platforms like Instagram and YouTube. As the elections draw near, the mission takes on considerable scale: the team must spot often highly realistic videos and articles produced by next-generation AI models. Vigilance is essential, because each new technological advance makes the work harder, notes Choi Ji-hee, one of the initiative’s leaders.
Identification technology and misinformation
The state-developed software used to identify AI-generated content boasts an estimated accuracy of 92%. That figure offers the teams little respite, however, as manipulated content grows increasingly sophisticated: fake videos can simulate significant events or spin misleading narratives out of viral clips. NEC employees must therefore be both meticulous and inventive to expose and block this harmful material. The war against misinformation resembles a game of whack-a-mole, says analyst Kim Ma-ru, pointing to the constant challenge of detecting online fakes.
Increased regulatory measures
To support this fight, regulations introduced in a 2023 legislative revision increased penalties for the malicious use of manipulation techniques such as deepfakes, with repeat offenders who create fraudulent content facing up to seven years in prison. Kim Myuhng-joo, head of the Korea AI Safety Institute, acknowledges that while these rules may seem drastic, public opinion agrees that strict regulation is necessary to preserve the integrity of the electoral process.
The challenges of discernment for voters
The rise of AI in South Korea poses a real challenge for democracy: one report indicates that more than 45% of South Koreans use generative AI tools. That enthusiasm raises questions about citizens’ ability to tell true information from false. Jung Hui-hun, a digital forensics specialist at the NEC, stresses how difficult it has become for voters to discern the truth, a troubling observation given the influence misinformation can have on political decisions.
The potential consequences of fake news
The stakes of misinformation are not merely theoretical; the consequences can be real and sometimes dramatic. In 2024, for instance, an AI-generated video simulating a candidate’s hunger strike was widely shared during the previous election cycle. Such manipulations feed conspiracy theories and threaten public trust in the electoral process.
The fight against misinformation has therefore become a crucial issue for South Korean institutions, which seek to ensure fair and transparent elections in an increasingly complex media landscape. The need to educate voters and effectively detect misleading content proves essential for maintaining democratic integrity.