
Disinformation and Elections to the European Parliament

SWP Comment 2019/C 16, 20.03.2019, 8 pages

doi:10.18449/2019C16


Elections to the European Parliament (EP) will take place in May 2019. Politicians and experts fear that the election process might be disrupted by disinformation campaigns and cyber attacks. In December 2018, the European Commission presented an action plan against disinformation. It provided 5 million euros for raising awareness amongst voters and policymakers about manipulation, and for increasing the cyber security of electoral systems and processes. The strategy relies on voluntary and non-binding approaches by Internet companies to fight disinformation. To protect the integrity of elections in the medium term, independent research into technical, legal and market-regulating reforms must be boosted. The objective should be to preserve the functionality of democracies and elections in the age of digitalisation.

The next European elections will be held in EU member states from 23 to 26 May 2019. Since right-wing nationalist and Euro-sceptic movements have gained in strength, there is already talk of a “defining election” that could decisively influence the future orientation of the EU. Euro-sceptic parties already account for almost one-third of parliamentarians, a proportion that might rise following the elections.

EP elections have thus far been seen as “second-rank elections”, and therefore by the electorate as a good opportunity to teach the respective member state’s government a lesson. This attitude fails to appreciate the mobilisation potential of the current debate on the pros and cons of European integration, the influence of third parties, and the growing importance of the EP. The elections are extremely significant for the strategic orientation of European integration. A success for EU opponents could push the EU to the very limits of its capacity to act, for example through further exit demands along the lines of Brexit, or a blockade of the complex decision-making process. The elections not only decide the composition of the new EP, but also shape the inauguration of the new EU Commission for the 2019–2024 parliamentary term. The EP influences the appointment of the Commissioners, can force the entire Commission to resign with a two-thirds majority, and helps realign the Multiannual Financial Framework.

Challenges

The EU’s structure and functions are not easy to understand. European issues are unfamiliar to many, and it is relatively simple to spread false information about the EU. Considering the upcoming election, the European Commissioner for the Security Union, Sir Julian King, urged member states to “take seriously the threat to democratic processes and institutions posed by cyber attacks and disinformation” and to draw up “national prevention plans” to prevent “state and non-state actors from undermining our democratic systems and using them as weapons against us”. This specifically includes disinformation campaigns and cyber attacks on the electronic electoral infrastructure, which can affect the confidentiality, availability and integrity of the electoral process.

Disinformation already appears to have had an impact in Europe: researchers at Edinburgh University identified over 400 false accounts on social networks, operated by so-called trolls based in St Petersburg, which were used to influence the Brexit referendum. Security and defence policy defines disinformation and cyber attacks as elements of hybrid threats, i.e. covert actions by third parties aimed at destabilising Europe or the EU system. The term “hybrid threats” usually refers to a form of warfare that remains below the threshold of using military force. This ambiguity generally complicates a military response according to international humanitarian law.

Disinformation Campaigns

Disinformation is not a new phenomenon. In security research it is regarded as “black” propaganda, since it seeks to influence public opinion from the shadows. It uses the same means as modern public relations (PR) and advertising campaigns.

In contrast to PR, however, disinformation aims to destabilise the pillars of democracy by attacking parties, elected politicians or the EU as a political system. Disinformation does not necessarily mean false information, since even true statements taken out of context can be misused for suggestive conclusions. Disinformation campaigns can be short-term, for example to influence an election result, or long-term, for instance to undermine confidence in the EU. Attempts can thus be made to discredit individual politicians so as to prevent them from being re-elected. For example, “negative campaigning” can uncover alleged scandals or make accusations of corruption. During the last presidential election campaign in the USA, automated computer programmes known as Twitter bots, probably of Russian origin, spread predominantly negative reports about Hillary Clinton and relatively positive reports about Donald Trump. In the medium term, this promotes social division and the polarisation of public discourse.

The negotiation of political interests in social discourses is the key element – but also the Achilles heel – of democracies. Tactics such as disseminating dubious claims (“muddying the waters”) or constantly repeating large volumes of false information or conspiracy theories (“firehose of falsehood”) are used to undermine political certainties and dissolve a socially shared concept of truth. One example was the reaction to the downing of a Malaysian passenger plane in July 2014: on social networks, there were attempts to discredit the investigation report which found that the Russian armed forces had caused the catastrophe.

IT-Enabled Disinformation

A distinction must be made between digital and IT-enabled disinformation: digital disinformation encompasses the entire range of digital mechanisms for disseminating information. IT-enabled disinformation, on the other hand, includes hacking incidents or cyber attacks that compromise IT security, namely the confidentiality, availability and integrity of data or systems. The technical hack is only one of many means by which the confidentiality of information can be violated, for example by stealing sensitive information from the accounts of politicians, parties or officials and then publishing it with harmful intent (doxing). Well-known examples are the publication of e-mails from the US Democratic National Committee (DNC) on the WikiLeaks platform in 2016 and from the Emmanuel Macron campaign team in 2017.

The restriction of the availability of technical systems via cyber attacks can facilitate disinformation campaigns as well. Especially in authoritarian regimes, the websites of opposition politicians and parties, as well as services such as Twitter and Facebook, are deliberately paralysed shortly before elections by “distributed denial of service” attacks, meaning the deliberate overloading of the server concerned. Similarly, the digital voting infrastructure with its voting computers and counting systems can be disrupted and manipulated.
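
In technical terms, the overload is a matter of sheer request volume: more queries arrive than the server can answer. The following minimal Python sketch illustrates how a defender might flag such a spike with a sliding-window request counter; the window size and threshold are invented for illustration and are not drawn from any real deployment.

```python
import time
from collections import deque

class RateMonitor:
    """Flags traffic spikes of the kind a denial-of-service attack
    produces: far more requests per time window than normal load."""

    def __init__(self, window_seconds=10, threshold=1000):
        self.window = window_seconds      # illustrative values only
        self.threshold = threshold
        self.timestamps = deque()

    def record_request(self, now=None):
        """Record one incoming request; return True if the current
        request rate looks like a deliberate overload."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Discard requests that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold

monitor = RateMonitor()
suspicious = monitor.record_request()     # call once per incoming request
```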

Digital Disinformation

Digital disinformation has the advantage of low costs and high impact: with few resources, a global audience can be reached with customised disinformation through digital technologies. Digital disinformation employs the legitimate means of the advertising industry to target users based on their individual behaviour profiles (so-called “targeted ads” and “micro-targeting”).
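
Conceptually, micro-targeting is little more than filtering a user database against attributes an advertiser has selected. The sketch below is a deliberately simplified illustration; the profile fields and matching rule are hypothetical and do not describe any real ad platform.

```python
# Hypothetical user profiles of the kind behavioural tracking produces.
users = [
    {"id": 1, "interests": {"immigration", "security"}, "region": "AT"},
    {"id": 2, "interests": {"climate", "economy"}, "region": "DE"},
    {"id": 3, "interests": {"security", "economy"}, "region": "DE"},
]

def micro_target(users, required_interests, region=None):
    """Return the narrow audience segment an ad buyer selected:
    users whose tracked interests contain all required tags."""
    return [
        u for u in users
        if required_interests <= u["interests"]          # subset test
        and (region is None or u["region"] == region)
    ]

# A political ad aimed only at security-minded users in Germany.
audience = micro_target(users, {"security"}, region="DE")  # matches user 3
```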

Social networks such as Facebook were not developed for the purpose of democratic discourse, but to analyse and categorise their users’ interests and behaviour, and sell this information to third parties for advertising purposes. According to their behaviour patterns, users will be shown content that other users of the same category or with a similar behaviour profile also prefer. Algorithms thus ensure that users are shown more of the same so as to hold their attention and keep it on the platforms as long as possible. These so-called filter bubbles arise directly from the business model of online platforms to bring advertising to as many users as possible. If the same opinions are grouped together and, simultaneously, differing views are hidden, a self-referential “echo chamber” can develop. In online forums that bring together only like-minded users, the latter’s perceptions tend to be strengthened because they do not experience any contradiction.
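
The “more of the same” logic can be made concrete with a toy recommender that scores candidate items by how many like-minded users preferred them. This is a sketch of the general principle only, not the proprietary ranking of any actual platform.

```python
def recommend(user_likes, all_users_likes, top_n=3):
    """Rank unseen items by how often they were liked by users whose
    tastes overlap with ours - the mechanism behind 'more of the same'."""
    scores = {}
    for other in all_users_likes:
        overlap = len(user_likes & other)   # similarity = shared likes
        if overlap == 0:
            continue
        for item in other - user_likes:     # items we have not seen yet
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

me = {"article_a", "article_b"}
others = [{"article_a", "article_c"}, {"article_a", "article_b", "article_d"}]
print(recommend(me, others))   # ['article_d', 'article_c']
```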

Disinformation has a particularly polarising effect on already politicised groups with strong ideological stances. These can be deliberately targeted with conspiracy theories that fit their worldview. One example is the campaign surrounding an alleged rape by asylum seekers, the so-called “Lisa case” of 2016. During the 2016 US election campaign, there were incidents where supporters of the right-wing “Alternative Right” movement and left-wing groups were separately invited via Facebook to take part in the same demonstration, in the hope of provoking a violent escalation.

Conspiracy theories and disinformation can quickly be shared worldwide over social networks. This can be accomplished using a mix of automated accounts (“social bots”), hybrid accounts (partly human, partly automated) and so-called troll armies or 50-cent armies. Such “armies” consist of state actors or privately organised commentators who systematically disseminate certain narratives in social media or on news sites. Often volunteers also unknowingly spread disinformation (“unwitting agents”). In the 2016 US election campaign, US citizens spread Kremlin propaganda without knowing its source. But traditional media coverage is also involved, as it increasingly takes up trending topics from social networks. If these contain disinformation, and the media carry them unreflectively, they reinforce the narratives or false reports. Disinformation has a cumulative effect over longer periods of time.
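
Telling automated accounts from human ones is itself a matter of heuristics. The toy score below combines posting volume, repetitiveness and account age; the thresholds are invented for illustration, and real detection systems weigh many more signals.

```python
def bot_likelihood(posts_per_day, duplicate_ratio, account_age_days):
    """Crude 0-to-1 score: high volume, highly repetitive content and a
    young account are typical - though never conclusive - bot signals."""
    score = 0.0
    if posts_per_day > 50:         # humans rarely sustain such a rate
        score += 0.4
    if duplicate_ratio > 0.8:      # more than 80% near-identical posts
        score += 0.4
    if account_age_days < 30:      # created shortly before a campaign
        score += 0.2
    return score

print(bot_likelihood(posts_per_day=120, duplicate_ratio=0.9,
                     account_age_days=10))   # 1.0 -> highly bot-like
```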

EU Counter-Strategies

Holding EP elections is the responsibility of member states. Although they are doing much to protect the integrity of elections, most of this takes the form of patchwork measures. There are concerns that the EP elections will be manipulated, disrupted or unlawfully influenced by opponents of the EU, whether during the election campaign, at the ballot box or during the counting of votes. According to a Eurobarometer survey, 83 percent of Europeans are worried about targeted disinformation on the Internet. The EU expects targeted disinformation campaigns to accompany the election campaign.

Disinformation Warfare

Since 2015, the European Commission has been attempting to combat disinformation and technical influence using foreign and domestic policy measures. It has, inter alia, increased staffing and funding for the European Network and Information Security Agency (ENISA) and set up an East StratCom Task Force within the European External Action Service (EEAS). The Task Force documents and regularly reports on disinformation campaigns in the north-eastern member states. This was followed in 2016 by a Joint Communication and a Joint EU Framework for Countering Hybrid Threats. The Commission and the EEAS agree that such threats are increasingly causing trouble in the EU.

The EU defines hybrid threats as “a mixture of military and civilian warfare by state and non-state actors such as covert military operations, intense propaganda and economic harassment”. These aggressions, it believes, not only cause direct damage and exploit vulnerabilities, but also destabilise societies and promote the division of the EU “through cover-ups”. Internal and external security must therefore be even more closely interlinked.

Commission President Jean-Claude Juncker, in his 2018 State of the Union speech, proposed a series of concrete measures to ensure that the May 2019 elections are free, fair and secure. Among other things, he called for more transparency in (often covert) political advertising on the Internet, and the possibility of sanctions if personal data are used illegally to influence the outcome of the European elections.

Networks such as Facebook, Twitter and YouTube have agreed on a Code of Practice on Disinformation to combat manipulation and fake accounts on their platforms. In October 2018, this Code was signed by Facebook, Google, Twitter and Mozilla, as well as by trade associations representing online platforms and the advertising industry.

Two months later, the Commission and the EU High Representative for Foreign Affairs and Security Policy presented an action plan against disinformation. Among other things, the plan establishes an early warning system for sharing information about disinformation campaigns. Five million euros and 50 staff positions were approved for it. The system is meant to identify campaigns in real time and raise awareness of the problem.

Since the EU fears being misrepresented beyond its borders as well, other teams are monitoring the spread of disinformation in North Africa, the Middle East and the Balkans. Furthermore, the EU has set up an electoral network, elaborated a guide to the application of EU data protection law in elections, and given guidance on cyber security. In February 2019, member states were due to run a simulation of what would need to be done in the event of an attack; further meetings are scheduled for spring 2019. EU states rely on the exchange of experience. In late January 2019, the Commission warned Internet companies that their transparency initiatives against covert advertising were not sufficient to protect the integrity of the EP elections.

Cyber Security Measures

What is the EU doing about IT-enabled disinformation? Critical infrastructure protection has long been subject to EU regulation. However, member states were unable to agree on defining voting systems as critical infrastructure as part of the 2016 Network and Information Security (NIS) Directive. The IT security of voting technology was considered a purely national task. Yet reports of alleged influence on the Brexit referendum and on elections in France, Catalonia and Belgium have increased sensitivity to the problem. In September 2017, the EU proposed a whole range of cyber-security measures, including a pan-European network of cooperation between data protection authorities, to share knowledge on how elections are influenced. Only in December 2018 did EU states agree on a cyber security law that will strengthen the cyber security agency ENISA and, for the first time, create a certification framework for the protection of critical infrastructures.

When, that same month, a hacker published explosive data on Twitter under the pseudonym “0rbit”, politicians demanded an “emergency plan to be able to react within a short time to the outflow of sensitive data, digital industrial espionage or sabotage”. There are also calls for uniform minimum legal standards for the security of information technology equipment, which would mean replacing the EU’s voluntary certification framework with a European regulation. This would apply, for example, to end-user devices such as mobile phones and laptops. Providers of online services and manufacturers of devices connected to the Internet would need to design their products in such a way that users must choose strong passwords and update them regularly.
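
What such a design requirement could mean in practice can be sketched with a minimal password check at device setup. The concrete rules below are invented for illustration; actual standards and guidance differ in detail.

```python
import re

def meets_minimum_policy(password: str) -> bool:
    """Illustrative baseline a device or service might enforce at
    setup: minimum length plus mixed character classes."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
    )

assert not meets_minimum_policy("12345678")          # rejected default
assert meets_minimum_policy("Correct-Horse-42x")     # accepted
```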

As well as making technical infrastructures more robust, the EU relies on operational cyber security measures. These include the development of better attribution capabilities for cyber attacks, an exchange of information, and a stronger role for Europol in the fight against cybercrime. If member states become the target of such attacks, they should be able to find out for themselves where the attacker came from, which security gaps were exploited, and which data were affected or extracted. Discussion will focus on harsher penalties for cybercriminals and new criminal offences, such as the operation of criminal infrastructures. With principles such as “security by design”, i.e. the development of hardware and software that seeks to avoid weak points and manipulation from the outset, the General Data Protection Regulation (GDPR) contains a further building block for action against cyber attacks and disinformation. In January 2019, the EU also agreed on a relevant law that allows fines to be imposed on political parties and foundations that violate data protection rules in the European election campaign in order to influence voters. Parties can even lose all claims to EU party funding. The trigger for this regulation was that Facebook had passed on user data to the British company Cambridge Analytica, which used the data to create user profiles for targeted advertising and claimed to hold records on some 220 million Americans.

Cyber Security in Elections

What measures are being taken to ensure the confidentiality, availability and integrity of electronic voting systems? Following reports alleging that the US elections were unlawfully influenced, the Council of Europe’s Venice Commission has been in close contact with the electoral authorities of its 61 member states. Electronic voting systems vary widely across member states. Electronic voting in the EU has so far only been used in Belgium, Bulgaria, Estonia and France. In Belgium, Flemish municipalities in particular use voting machines. In Bulgaria, such machines will only be used in smaller polling stations in the 2019 EP elections. In France, the use of voting machines was suspended during the 2017 presidential election due to the alleged incidents in the US election. In other countries, such as Germany or Austria, voting is exclusively by ballot paper, with information technology being used to determine the election result. The security of the IT systems is therefore essential when establishing the provisional election results. Estonia is the only country in the world that allows nationwide voting via the Internet.

Overarching assessments of the technical vulnerability of electronic voting systems are not possible, as EU countries use different voting computers and systems. However, since all voting computers can be manipulated, experts recommend a physical paper printout for each individual vote. In July 2018, under Article 11 of the NIS Directive, representatives from 20 member states prepared a compendium on the cyber security of elections. They called on member states to put in place specific security arrangements and contact points for an overarching European cooperation network.
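
The value of such a paper trail lies in making electronic tallies independently verifiable. The sketch below shows, with invented station names and figures, how hand counts of the paper record could be compared against machine totals for a sample of polling stations.

```python
def audit_discrepancies(machine_tallies, paper_recounts):
    """Compare electronic tallies with hand counts of the paper trail;
    return every polling station where the two results disagree."""
    mismatches = {}
    for station, paper in paper_recounts.items():
        machine = machine_tallies.get(station)
        if machine != paper:
            mismatches[station] = {"machine": machine, "paper": paper}
    return mismatches

machine = {"station_1": {"A": 410, "B": 390}, "station_2": {"A": 300, "B": 299}}
paper   = {"station_1": {"A": 410, "B": 390}, "station_2": {"A": 300, "B": 310}}
print(audit_discrepancies(machine, paper))   # station_2 needs investigation
```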

If individual constituencies experience irregularities during the actual voting, or technical problems with the vote count, elections in individual countries could be held again at short notice without the need for the entire European Parliament to be re-elected. A cyber attack on a member state would mean that the allocation of seats in the EP could not be confirmed immediately. Targeted cyber attacks launched by third countries on individual elections can be sanctioned by the EU applying its Joint Diplomatic Response (Bendiek 2018). A comprehensive and serious attack on the EP elections would be seen as an attack on the EU. Under certain conditions this would allow the use of the solidarity clause under Article 222 TFEU or even the mutual assistance clause under Article 42 para 7 TEU.

Promoting Independent Research

The EP elections decide on the new composition of the European Parliament, but election rules are a national responsibility. In many EU countries, local electoral authorities are responsible for conducting the election. Although they are aware of the danger of disinformation and cyber attacks, they are not sufficiently technically prepared for them. The credibility of the EP elections, and thus of the EU, is at stake. European policy-makers prefer short-term, mostly technical measures in close cooperation with Internet companies to combat disinformation, and hold cyber-security exercises. Research on causes, however, is lacking. The findings of the various independent interdisciplinary research programmes on disinformation, cyber attacks and the conditions of democracy must therefore be taken into account more closely.

Hybrid Threats?

There is competition for responsibilities and resources between security and defence policy on the one hand, and domestic policy on the other. From the perspective of defence policy, the phenomenon of disinformation belongs in the category of hybrid threats. But narrowing the subject in this way is not sufficient. In a 2017 congressional hearing, the heads of American intelligence services rightly stated that disinformation represents a new normal. According to NATO and the European Commission, Russia leads the way in the targeted dissemination of false information, but more than 30 other countries are also involved. Governments commission think tanks and non-governmental organisations to provide analyses, so there is no shortage of relevant reports. The American Alliance for Securing Democracy, for example, or the Digital Forensic Research Lab, financed by the Atlantic Council and Facebook, concentrate their work primarily on Russia and China. Think tanks and political foundations dealing with disinformation must identify the clients and financiers of their projects so as to avoid suspicions of partiality.

However, false information does not only come from countries outside the EU; it is also disseminated within its member states. Political activism, especially from the anti-European spectrum; the pretence of a grassroots movement (“astroturfing”); and the role of the tabloid media are at least as significant as external attempts at influence. Their impact on Brexit, for example, probably outweighed that of Twitter bots, given that Twitter is used by only 17 percent of the British population.

The effectiveness of digital disinformation has not been scientifically proven. Recent studies on the relevance of filter bubbles have come to diverging conclusions. Empirical data indicate that users deliberately choose certain formats and contents that differ from those of the established media. Filter bubbles of dissent do not seem to arise because users are unaware that information can be one-sided or false. Rather, the explicit interest of users in divergent opinions seems to be the decisive factor, accompanied by a steady loss of trust within democratic societies in political and public institutions. The idea that filter bubbles are deliberately formed and controlled is reinforced by the fact that it seems to be small groups that spread “alternative facts”, disinformation and manifestly false reports in a particularly vocal way. The fear that digital algorithms could largely destroy social communication is thus probably exaggerated.

IT-Enabled Disinformation

The EU’s technical measures to combat disinformation campaigns and cyber attacks are only a first step. Ideally, they will guide member states’ efforts to improve protection for the EP elections during the election campaign, the actual voting and the vote count. Constant exchange and regular cyber security exercises are necessary to minimise dangers. However, most member states have so far failed to treat elections as critical infrastructure for democracy and to secure them at a high level. Manufacturers and suppliers of critical IT products therefore urgently need to be made more accountable. The problem of unsecured IT hardware and software in voting technology is still underestimated. In the long term, the EU must also be enabled to respond strategically, communicatively and with technical effectiveness to attempts at manipulating elections, and must be provided with the necessary financial and human resources. Until this goal has been achieved, emergency teams can be deployed around the clock during the elections.

The Supremacy of Internet Companies

It is questionable, however, whether the weaknesses of European democracies discussed above can be addressed effectively with short-term task forces and medium-term action plans. Linguistic research shows that mere fact checking is more likely to inadvertently reinforce false information. The effectiveness of automated artificial-intelligence systems in combating disinformation is also overestimated. Obviously, it is unrealistic to hope to eliminate false information completely. Instead of tackling symptoms, it would be useful to promote independent research to analyse proposals for short-term technical and policy measures. These should provide the blueprint for fundamental reforms in the data economy.

Google’s global market share of 80 percent of all search queries, and Facebook’s and YouTube’s market share of 70 percent in social networks, are an expression of the unprecedented concentration processes within communication infrastructure. Alongside the growing importance of digital audiences, communication in society is shifting towards a market-orientated arena where every “speech act” or announcement has its price. Private companies provide the spaces for public digital discourse, and access to them is controlled. Only those who enter into a private contractual relationship and make their contribution either financially or in the form of commercially usable data have a say.

These social networks were developed for marketing purposes and do not cater for unconditional democratic participation based only on citizen status. They are comparable to a situation in which the parliament building is owned by a private provider, access to it is regulated according to economic criteria, and the loudspeaker volume and transmission of speeches to the outside world are assessed in line with market conditions. The EU’s previous regulatory approaches, for example its insistence on voluntary commitments, do not do justice to this concentration of power. The Council and Commission were right to criticise the code of conduct currently in force, which contains “no common measures, no substantial obligations, no compliance or enforcement measures”. When the personal data of numerous German politicians were illegally published in December 2018, the online platform Twitter dragged its feet despite its voluntary commitment under the code. Large platform providers have hardly any competition to fear in Europe, meaning that a fundamental reform of antitrust legislation is the last resort. Previous procedures for the evaluation and control of monopolies have often been inadequate.

A key problem is merger control. Large companies buy burgeoning smaller competitor start-ups before they can become a threat to their business model. A striking example is Facebook’s acquisition of WhatsApp and Instagram, and its merging of user data against earlier promises not to do so. Election advertising on television and a stall on the high street are no longer what decides elections, but rather artificial-intelligence technologies such as micro-targeting. These are used to specifically address voters who are willing to change their minds and who can often tip the scales. Only the EU, with its economic power as a whole, can counter the power of transnational digital corporations. In this context, the EP elections are a historic turning point: European policy means tackling the major fundamental issues of the European communication order, such as the control of platform monopolies and excessive communicative power. During EP election campaigns, political parties and organisations must commit themselves to bringing transparency to their campaign activities and to preventing the use of social bots.

Dr Annegret Bendiek is Senior Associate in the EU / Europe Division at SWP.
Dr Matthias Schulze is Associate in the International Security Division at SWP.

© Stiftung Wissenschaft und Politik, 2019

SWP

Stiftung Wissenschaft und Politik

ISSN 1861-1761

(English version of SWP‑Aktuell 10/2019)