NetMission Digest – Issue #16: The Nexus of Tech, Trust, and Elections (Monday, June 3, 2024)

For You Page, Not For Them: TikTok Limits Reach of State-Backed Accounts

On May 2, 2024, the Brookings Institution, a non-profit organization, published a commentary highlighting an increase in activity by Russian “state-affiliated accounts” on TikTok. These accounts have been attempting to exert influence abroad during a crucial election year. The report found that while only about 5% of the TikTok content posted by these accounts was related to U.S. political topics, their posts received higher engagement—measured by views, likes, shares, and comments—compared to posts on platforms like X or Telegram. This increased presence of Russian state-backed accounts on TikTok coincides with the start of the 2024 U.S. presidential election cycle.

In response, on May 23, 2024, TikTok announced an expansion of its policies for state-affiliated media and introduced a new transparency report detailing its efforts to eliminate covert influence operations. In the announcement, TikTok said that state-affiliated accounts found to be attempting to “reach communities outside their home country on current global events and affairs” will become ineligible for the For You feed, significantly reducing their reach.

In the first four months of 2024 alone, TikTok took down 15 influence operations and removed 3,001 associated accounts that attempted to manipulate political discourse, including election-related information. These included accounts spreading propaganda around the Indonesian presidential election and accounts attempting to sway public opinion on the U.K.’s domestic politics.

The spread of foreign propaganda is also a challenge on other social media platforms, such as Facebook, Instagram, and X. However, TikTok, owned by Beijing-based ByteDance, has been at the center of a heated political debate. Many federal lawmakers and some administration officials argue that TikTok poses a more serious national security threat because it could be compelled to act at the behest of China’s government. TikTok has repeatedly denied these claims and is currently suing the federal government over the new law, commonly referred to as the TikTok ban, which would force the company to sever ties with ByteDance in order to continue operating in the U.S.

TikTok’s proactive approach to content moderation aims to curb the spread of manipulative content and foster a more trustworthy environment. Similarly, on May 30, 2024, OpenAI published a first-of-its-kind report surveying campaigns in which threat actors used its products to further covert influence operations online. Both companies seek to safeguard the electoral process from foreign interference, aligning with broader efforts to protect democratic institutions.

Relevant reading material

Copy, Paste, Profit? The Ethics of AI-Generated Content

One of the key issues surrounding the rapid growth of generative AI tools is copyright. These tools are trained on vast amounts of data, including substantial amounts of copyrighted material, which is then used to generate responses to user queries. News organizations argue that this practice effectively turns AI platforms into competitors, since they are trained on proprietary content without authorization. To address this challenge, OpenAI has sought to establish partnerships with news companies, recently securing deals with the Financial Times, Dotdash Meredith, Reddit, The Atlantic, and Vox Media. Earlier this year, OpenAI signed a similar contract with Axel Springer, the parent company of Business Insider and Politico.

On May 22, 2024, OpenAI announced its most significant partnership to date, a multi-year agreement with News Corp reportedly valued at over $250 million. The partnership grants OpenAI permission to display content from News Corp mastheads, such as the Wall Street Journal, the New York Post, The Times, and The Sunday Times, in response to user queries. The stated goal is to provide users with reliable information and news sources for making informed decisions.

In contrast, The New York Times has taken a different approach, suing Microsoft and OpenAI for copyright infringement. In December 2023, the Times filed a 69-page complaint in Federal District Court in Manhattan, alleging that OpenAI used its copyrighted content to train generative AI tools that then reproduce this content for users, effectively making them competitors. The complaint accuses the companies of operating business models based on “mass copyright infringement.”

OpenAI has faced other legal challenges as well. Hollywood actress Scarlett Johansson threatened to sue the company after it gave one of its chatbot voices, “Sky,” a sound “eerily” similar to her own, drawing parallels to her character Samantha in the 2013 film ‘Her.’ OpenAI CEO Sam Altman said the company would pause the voice’s use in its products.

Additionally, OpenAI is reportedly negotiating commercial agreements with several Australian media companies over the use of their content, offering some of them as little as $1.5 million. In response to these ongoing issues, the Australian federal government launched the Copyright and Artificial Intelligence Reference Group (CAIRG) to help address future copyright challenges arising from the increased use of generative AI.

The interplay between generative AI and copyright is a microcosm of the broader challenges facing digital innovation and intellectual property. While strategic partnerships and emerging regulatory frameworks offer some solutions, a holistic approach that balances innovation, ethical considerations, and the rights of content creators is imperative. As the field evolves, ongoing dialogue and adaptive policies will be crucial in navigating this dynamic and multifaceted landscape.

Relevant reading material

Microsoft’s Security Shakeup: Can Bonuses Buy Better Breach Prevention?

Microsoft has recently faced significant criticism from both the U.S. government and rival companies for its failure to prevent a Chinese hack of its systems last summer. On April 2, 2024, the U.S. Department of Homeland Security (DHS) released the Cyber Safety Review Board (CSRB) findings and recommendations following its independent review of the Summer 2023 Microsoft Exchange Online intrusion. The Board described the hack attributed to China as “preventable,” pointing to “a cascade of errors” and a corporate culture at Microsoft that deprioritized enterprise security investments and rigorous risk management. Microsoft has been a frequent target of nation-state attacks from China and Russia. The U.S. government continues to pressure the company to improve its cybersecurity protocols and initiate effective damage control measures.

In response, Microsoft launched the Secure Future Initiative (SFI) last November to address the increasing scale and high stakes of cyberattacks. The initiative aims to unify efforts across Microsoft to enhance cybersecurity protection company-wide. However, the threat landscape has evolved significantly since then: the Midnight Blizzard attack disclosed in January and the CSRB’s findings underscore the severity of the threats facing Microsoft and its customers.

On May 3, 2024, Microsoft announced that it will expand the scope of SFI, integrating the CSRB’s recent recommendations and lessons learned from the Midnight Blizzard incident to ensure a robust and adaptive cybersecurity approach. To instill accountability, Microsoft will now tie a portion of its Senior Leadership Team’s compensation to progress against security plans and milestones.

It has become increasingly common for corporations to link executive bonuses to goals beyond sales and profit targets. Given the escalation of hacking threats and the growing importance of cybersecurity spending, this new executive pay metric is timely. However, because Microsoft has not disclosed details of the compensation formula, it is difficult to evaluate how effective the incentive will be.

The U.S. government’s pressure on Microsoft reflects a broader trend of holding private companies accountable for the national security implications of cyber threats, emphasizing the necessity of public-private collaboration in developing comprehensive cybersecurity frameworks. This situation serves as a cautionary tale for other organizations, underscoring the importance of robust cybersecurity measures, a security-centric corporate culture, and executive accountability. As cyber threats grow more sophisticated, the entire industry must elevate its standards and practices to safeguard against potential breaches, and should consider adopting some of the security practices Microsoft champions, such as the Zero Trust security model and threat intelligence sharing.

Relevant reading material 

Electoral Espionage: When Campaigns Cross the (Data) Line

On May 27, 2024, the Hellenic Data Protection Authority imposed a fine of €400,000 on Greece’s Interior Ministry, led by Minister Niki Kerameus, for failing to safeguard the personal data of voters ahead of this month’s European elections. The penalty followed the unauthorized leak of expatriate voters’ information to Anna-Michelle Asimakopoulou, a member of the ruling New Democracy Party and a Member of the European Parliament (MEP). Asimakopoulou was also fined €40,000 for violating voter privacy. The scandal unfolded when Asimakopoulou emailed expatriate voters on March 1 to promote her reelection bid, coinciding with the Interior Ministry’s announcement that expatriates could cast their votes by mail in the upcoming election.

The Data Protection Authority’s investigation, as reported by Politico, revealed that Asimakopoulou received the voter data file from a New Democracy official responsible for diaspora affairs and used it to send mass campaign emails. Following numerous complaints from voters in March, an investigation was launched ahead of the European parliamentary elections scheduled for June 6-9. Asimakopoulou, who previously served as vice-chair of the Committee on International Trade, subsequently withdrew from the New Democracy campaign without explaining how she obtained the voters’ email addresses. The investigation continues to scrutinize the role of New Democracy in this scandal, potentially leading to additional fines, while numerous lawsuits have been filed by expatriates against Asimakopoulou and Kerameus. Opposition parties are currently demanding the Interior Minister’s resignation.

The breach of voter data fundamentally compromises electoral integrity and erodes public confidence in the government. This misuse of voter data for political campaigning raises serious concerns about electoral fairness, especially in an election year when scrutiny is intense. In an era where protecting personal data and safeguarding democratic processes are paramount, this incident significantly tarnishes Greece’s national and international reputation. It underscores the critical importance of robust data governance and the protection of personal information in maintaining the credibility of democratic institutions.

Relevant reading material 

Written by Ankita Rathi (Reviewed by Jenna Manhau Fung)