Disinformation – Political advertising

Below, EACA has answered the Commission's questions based on its knowledge of the industry. We need your input both on whether you agree or disagree with EACA's approach, as well as any additions you may have.

Political advertising

  • Q1: What do you think about micro-targeting? (Options: acceptable and should not be limited; criteria for targeting should be disclosed; strictly limited; banned)

EACA proposed answer: Micro-targeting is acceptable for online political content and should not be limited.

Explanation: The GDPR sets high standards with regard to the protection of citizens'/users' personal data. Data revealing users' political opinions and affiliations are sensitive personal data, which require explicit consent to be processed. This also applies in the context of advertising targeting. The data controller must ensure that the appropriate consent is collected before using such data for micro-targeting. This obligation is accompanied by various transparency requirements, which aim to inform users about the data processing taking place and the entity carrying it out. Additionally, national regulation stipulates what kinds of disclosures must be made in terms of funding (amount, entity behind it), candidates' identity, etc.

  • Q2: Should online political advertising be regulated like offline political advertising on traditional media?

EACA proposed answer: INPUTS NEEDED

Explanation: Electoral law, including advertising in a political context, is a matter of national competence. It is up to the Member States to agree on standards, rules, etc., should they wish to harmonise such rules across the EU. Agencies, on behalf of their clients, adhere to the rules set out by the national regulator.

Questions on tackling disinformation

  • Q3: Are you happy with the definition of disinformation that the Commission has been using since 2018, or should it be broadened or complemented? The definition reads:
    “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”.
    Public harm is defined to include “threats to democratic processes as well as to public goods such as Union citizens’ health, environment or security”.

EACA proposed answer: No, the definition should not be broadened.

Explanation: Disinformation is not per se illegal. Any broad definition risks causing legal uncertainty and making compliance difficult. “Verifiable” implies that content has already been put online and needs subsequent checking, in which case it would need to be clear by whom this checking is to be done. “Harm” requires a legally sound definition, so as not to open the door to misuse of the term and subsequent claims for redress. Moreover, any definition must be measurable and implementable by the companies, such as technology providers and platforms, that are expected to enforce it. To that end, the advertising industry is working on a definition that would be applied globally by platforms, advertisers, agencies and technology partners.

Disinformation should be clearly distinguished from commercial communication.

  • Q4: Do you think the Code of Practice on Disinformation (of which EACA and four other NAC members, including Kreativitet & Kommunikation, are signatories) should be continued? We intend to pass the message that the Code should be continued, given that the Commission has not yet provided an assessment of the Code’s effectiveness.

EACA proposed answer: The Code should continue as currently pursued.

Explanation: The Code has already led to substantial changes and achievements among platforms, and it has heightened awareness among industry, civil society and political stakeholders in different Member States. We strongly urge the Commission to publish its assessment of the Code’s effectiveness before initiating further steps; other stakeholders should not be asked to assess the Code without the required background information. We would also recall that the Code is a voluntary, industry-led Code; signing/joining the Code is equally voluntary. The signatories are bound by the commitments they have agreed to, as set out in the Code. Therefore, unilateral expansion of the signatories, content or commitments cannot happen outside the framework of the Code.

Disrupting the economic drivers of disinformation

  • Q5 (even though this question applies to online platforms and ad networks, we might nevertheless want to share an opinion): What types of measures should online platforms and ad network operators take in order to demonetise websites that create, present or disseminate disinformation? (This only applies to cases where such disinformation content is not illegal.)

Options proposed: (probably mandatory) use of blacklists; blacklist approach plus mandatory ad removal; blacklist approach plus temporary suspension of ad accounts; grey-list approach giving advertisers the possibility to selectively exclude such websites; blocking of ad accounts; systematic scrutiny of websites providing ad space, limiting ad placement to websites considered trustworthy (whitelist approach); transparency of platforms vis-à-vis advertisers, with third-party verification.

EACA proposed answer: INPUT NEEDED.

Explanation: Agencies’ primary concern is to keep their clients’ (advertisers’) brands safe. This includes ensuring that ads do not appear next to disinformation content.

Agencies provide their clients with brand safety services based on contractual agreements and on a risk profile. Reference to exclusion lists, where this is allowed, is a regular practice. However, such lists, if public, need to be backed, verified or maintained by a public authority or law enforcement body in order to ensure their legal validity, to give users of these lists legal certainty, and to avoid making agencies liable should a URL be placed erroneously on such an exclusion list.

Platforms and ad networks can effectively enforce their community guidelines, allow full independent verification of ad placements (pre-bid monitoring and blocking, and post-bid blocking), and allow proprietary controls such as domain or app exclusion and inclusion lists.

  • Q6: Paid-for content on issues of public interest promoted on social media (issue-based advertising)

Options proposed: should be systematically labelled; labelled and collected in public repositories; subject to the same rules as political ads; should not be regulated.

EACA proposed answer: [unfortunately, it is not possible to provide an explanation]; we therefore propose to answer “should not be regulated”.

Explanation: In the absence of a proper definition of what issue-based advertising refers to, it should not be regulated. As long as no such definition exists, there is also no means of distinguishing it from commercial communications that may contain references to social or societal issues.