Does technological progress equal social progress? Many have lost faith in this idea in an era of hatred, “fake news” and echo chambers. Can regulation make X, TikTok and co. better?

In January 2025, Elon Musk conducted an interview on X with Alice Weidel, the leader of Germany’s far-right AfD party, some regional branches of which are considered right-wing extremist by German intelligence services.
“Only the AfD can save Germany. End of story,” Musk said, in an undisguised act of interference by the owner of a powerful social network in Germany’s election campaign.
In Romania in 2024, the far-right presidential candidate Calin Georgescu won the first round of the elections to the surprise of many: The political outsider had not participated in any TV debates and had not invested any money in his campaign. His success came mainly through the video platform TikTok; his videos were very prominent in the feeds of many Romanians.
Suspicions quickly arose that social bots (automated accounts) and trolls (human users who are sometimes paid to act on behalf of a foreign body or government agency) must have been involved. The election was annulled. It is also known that bots and trolls have been used to manipulate public opinion in many other digital discussions and topics, such as Brexit and the COVID-19 pandemic.
Social media: Extreme positions and vocal minorities get most attention
What happens in the digital sphere can have a huge influence on public opinion. At a conference entitled “Big Tech and digital democracy: How much regulation does public discourse need?” organized by DW and the University of Cologne as part of a series of events on Global Media Law, media and constitutional law expert Dieter Dörr stated that “democracy is under serious threat.”
Established and respected media outlets are present on these platforms and use Instagram, YouTube and others as channels for their content. But there are also numerous other players. They don’t even have to be bots or trolls: There are many accounts that do not maintain certain standards, and incite hatred against others, spread false claims or use artificial intelligence (AI) to manipulate and generate images and videos.
The algorithms social media platforms use to decide which content is shown, when, and to whom reward this kind of behavior.
“Extreme opinions, which have a wide-reaching scope, are pushed to the top,” said Dörr, explaining that this is what keeps users on platforms for longer, allowing for more money to be earned from them.
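The mechanism Dörr describes can be illustrated with a toy feed ranker that scores posts purely by predicted engagement. This is a hypothetical sketch, not any platform's actual algorithm; all weights and numbers are invented for illustration:

```python
# Toy illustration of an engagement-optimizing feed ranker (hypothetical,
# not any real platform's algorithm). Posts that provoke strong reactions
# keep users on the platform longer, score higher, and rise to the top.

def rank_feed(posts):
    """Sort posts by a simple engagement score, highest first."""
    def engagement_score(post):
        # Invented weights: reactions that prolong time-on-platform count more.
        return (post["expected_watch_seconds"]
                + 5 * post["expected_replies"]
                + 10 * post["expected_shares"])
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"text": "Nuanced policy analysis", "expected_watch_seconds": 20,
     "expected_replies": 1, "expected_shares": 0},
    {"text": "Outrage-bait hot take", "expected_watch_seconds": 45,
     "expected_replies": 12, "expected_shares": 8},
]

feed = rank_feed(posts)
print(feed[0]["text"])  # the provocative post ranks first
```

Under a scoring rule like this, no account needs to be a bot or a troll for extreme content to dominate: the ranking itself does the amplification.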
EU’s Digital Services Act offers glimmer of hope
Social media platforms have become an important, if not the only, source of information for many people. Politicians and researchers have long recognized that the power wielded by these platforms is a problem. But can anything be done about it?
The European Union (EU) has stepped up efforts in recent years to regulate the digital world, primarily through its Digital Services Act (DSA), which came into force in February 2024. It requires major online platforms and search engines such as Amazon, Google, X and Facebook to provide greater transparency and protection for users.
Renate Nikolay, the deputy director-general of the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) at the European Commission, which is responsible for enforcing the Digital Services Act, says: “We are pursuing three principles: First, platforms must assess and minimize systemic risks. Second, we are strengthening users’ rights, for example by providing complaint mechanisms. Third, we demand transparency in algorithms and require platforms to give researchers access to their data.”
This sounds like a big step forward: Platforms have to provide information on their algorithms and even offer users the option to disable personalized content or advertising. After all, algorithms not only tend to disadvantage moderate, nuanced content; they also create filter bubbles or echo chambers, in which users are mainly surrounded by content and other users that reflect their own views. This puts them at risk of falling into a spiral of radicalization.
The TikTok algorithm is particularly notorious. A recent study by the University of Potsdam and the Bertelsmann Foundation showed that during the last German election campaign, political parties were not equally visible in the TikTok feeds of young users. Videos from official party accounts on the political fringes, especially the AfD, were played more frequently than those from the accounts of more centrist parties.
During the period under review, the AfD uploaded 21.5% of all party videos, but these accounted for 37.4% of videos that appeared in feeds. The AfD’s videos were therefore overrepresented. For its part, the center-right CDU/CSU party of Chancellor Friedrich Merz uploaded 17.1% of all party videos, but these accounted for only 4.9% of videos in feeds.
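The study's figures can be condensed into a simple representation ratio: a party's share of feed appearances divided by its share of uploads, where a value above 1.0 indicates amplification. A minimal sketch using the percentages cited above:

```python
# Representation ratio = share of videos appearing in feeds
#                        / share of videos uploaded.
# Above 1.0: a party's videos were amplified relative to its output;
# below 1.0: they were shown less than its output would suggest.
# Input figures are the study's percentages as quoted in the article.

def representation_ratio(upload_share_pct, feed_share_pct):
    return feed_share_pct / upload_share_pct

afd = representation_ratio(21.5, 37.4)      # AfD: uploaded 21.5%, shown 37.4%
cdu_csu = representation_ratio(17.1, 4.9)   # CDU/CSU: uploaded 17.1%, shown 4.9%

print(f"AfD:     {afd:.2f}")      # roughly 1.74, strongly overrepresented
print(f"CDU/CSU: {cdu_csu:.2f}")  # roughly 0.29, strongly underrepresented
```

By this measure, AfD videos appeared in feeds about 1.7 times as often as their share of uploads would predict, while CDU/CSU videos appeared at less than a third of their expected rate.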
When asked about this at the conference, Tim Klaws, Director of Government Relations and Public Policy for DACH, Israel and BeNeLux at TikTok, gave an evasive answer. He said that digital platforms had no interest in operating in an environment full of disinformation and populism, and were trying to minimize “fake news”, hate speech, etc. with the help of AI and their staff members.
Source : https://www.dw.com/en/digital-platforms-are-a-danger-for-democracy-what-can-be-done/a-74668315