
Tech

Twitter research group stall complicates compliance with new EU law

The stalling of a Twitter program that was critical for outside researchers studying disinformation campaigns throws into question the company's strategy to comply with upcoming regulation in Europe, former employees and experts told Reuters.

The European Union's new Digital Services Act (DSA), one of the world's strictest regulations on internet platforms, has sent tech companies scrambling to meet its requirements to put measures in place against illegal content and to explain the steps they take on content moderation before the law comes into full effect in early 2024.

Twitter signed a voluntary agreement with the EU in June related to the DSA, committing to “empowering the research community”, including by sharing datasets about disinformation with researchers. The EU's goal with the law is to create a safer internet for users and to have a mechanism to hold companies accountable.

According to Yoel Roth, Twitter's former head of trust and safety, the Twitter Moderation Research Consortium was a key part of Twitter's plan to do that, since it compiled data on state-backed manipulation of the platform and provided it to researchers. “Twitter was uniquely well-positioned,” he said.

Nearly all of the 10 to 15 employees who worked on the consortium have left the company since Elon Musk's takeover in October, according to Roth, who resigned in November, and three other former employees who were involved with the program.

The EU law will require platforms with more than 45 million EU users to respond to proposals from EU-vetted researchers.

Failure to comply with the DSA once it comes into effect could lead to fines of up to 6% of global revenue or even a ban from operating in the EU, according to the European Commission website.
Reuters was unable to determine if Twitter has made alternative plans to comply with the DSA.

In an email, Twitter's head of trust and safety, Ella Irwin, said: “We intend to fully comply with the DSA, have many employees working on DSA compliance internally and have communicated our intent to comply to (EU Commissioner Thierry) Breton and his team.”

She did not comment on detailed questions about the status of the consortium, how many employees were working on it, or how Twitter planned to comply with the DSA.

Breton has met at least twice with Musk to discuss Twitter's intent to comply with the upcoming law. In November, Breton said Twitter had “huge work ahead” because the company will have to “tackle disinformation with resolve” and significantly reinforce content moderation. In May, Musk appeared in a video with Breton expressing agreement with the Digital Services Act. Breton's spokesperson declined to comment for this story.

Across the company, at least 5,000 staff (about two-thirds of the total before the takeover) have either quit or been fired as Musk overhauls Twitter, hitting the trust and safety and public policy teams particularly hard.

“I just don't see how the absolutely skeletal staff … will be able to readily comply (with the DSA),” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University.

THE WORK OF THE CONSORTIUM

The research consortium was formed in response to backlash against Russian interference in the 2016 U.S. presidential election. According to the company's website, its aim is “to increase transparency around Twitter’s content moderation policies and enforcement decisions.”

Twitter prohibits people, organizations or governments from manipulating conversation on the service, such as using multiple or fake accounts to make content appear more popular.

Early last year, Twitter launched a pilot version of the consortium to disclose examples of manipulation of the platform to some outside researchers.

As Twitter investigated and took down accounts suspected of foreign interference, it released data on those accounts to the researchers to help them study the misinformation strategies and where they originated.

In September, Twitter opened an application process to expand the consortium and had accepted about 50 researchers by the time of Musk's acquisition on Oct. 27, two of the former employees said.

Twitter had been preparing to disclose at least a dozen new datasets to researchers before then, the former employees said.

Of the three former Twitter employees, who asked not to be identified for fear of reprisals, one spoke with current employees recently and was told they do not have the personnel or bandwidth to continue working on the consortium.

Five outside researchers told Reuters that without a program like the research consortium, it will be more difficult to study how governments use Twitter to interfere with elections or political events globally.

Two of those who are members of the consortium said Twitter has not sent a memo formally closing the program and previously released data remain available to them, but they had not received new data in at least two months.

The research consortium was an important tool to make the internet safer, according to two U.S. lawmakers who introduced a bill last year that would require social media platforms to provide data access to academic researchers. Their Digital Services Oversight and Safety Act has not been voted on.

Rep. Lori Trahan of Massachusetts and Rep. Sean Casten of Illinois also wrote an open letter to Twitter on Nov. 18 asking whether Twitter would maintain the consortium, following layoffs that halved the staff.

Asked about the consortium by Reuters this month, Trahan said failure to maintain the program would be “a massive step back.”

The Stanford Internet Observatory, a consortium member that studies internet risks, said it has not received any communication from the program since mid-November and no longer has a point of contact at Twitter.

The Stanford team has published at least three papers using data from the consortium, including one about Twitter accounts that promoted India’s military activities in Kashmir, and one on U.S.-linked attempts to spread pro-Western narratives abroad.

If the research consortium is eliminated, “we will be returning to the 2017 era of limited shared communication about malicious state actor activity,” said Renée DiResta, research manager at Stanford Internet Observatory.

Cazadores de Fake News, a Venezuela-based consortium member that fact-checks online news, told Reuters the research program “seems to have fallen into a hiatus,” and the organization has not heard from Twitter since Musk's acquisition.

“But we hope that it will resurface at some point, as it is a very valuable initiative,” said spokesperson Adrian Gonzalez. 

Tech

UK competition regulator lays out AI principles

Britain’s competition regulator proposed principles to govern new artificial intelligence (AI) models on Monday, including accountability, access and transparency, as it seeks to foster competitive growth in the fast-moving technology.

The Competition and Markets Authority (CMA) started looking at the impact of generative AI applications such as ChatGPT in May to try to ensure the technology benefited businesses and consumers.

The CMA’s chief executive Sarah Cardell said there was real potential for the technology to turbocharge productivity and make millions of everyday tasks easier – but a positive future could not be taken for granted.

“That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers,” she said.

Prime Minister Rishi Sunak has touted the UK as a global leader in AI regulation and the country will host an AI safety summit in November.

Tech

China’s industry ministry to work on standards for the metaverse

China’s Ministry of Industry and Information Technology (MIIT) said on Monday that it will form a working group to establish standards for the metaverse sector as Beijing seeks to be a global standards-setter for new technology.

The ministry released a draft proposal on Monday to form a working group for the metaverse, the term for shared virtual worlds accessible via the internet. The proposal said the metaverse is one of the nine emerging tech sectors for which China should strive to establish standards.

The metaverse has become one of the hottest tech trends since 2021, but there is yet to be consensus on what qualifies as a metaverse despite the hype, an issue the MIIT highlighted in the proposal.

“[The metaverse industry] faces many challenges,” the MIIT said. “It is urgent to promote healthy and orderly development of the metaverse industry through standardization and guidance.”

It added that the metaverse industry suffers from a lack of clear definitions, which had allowed some capitalists and companies to drum up speculation in the market.

The MIIT also described the metaverse as “an integrated innovation combining various cutting-edge technologies”. It said that the metaverse will spur many innovative business models, new business opportunities and growth for the digital economy. 

Tech

BoE official says public need reassurance on digital pound and privacy

A “national conversation” is needed to assuage public fears that a digital version of the pound would allow the government to spy on them, Bank of England deputy governor designate Sarah Breeden said on Tuesday.

The BoE and Britain’s finance ministry have been consulting on whether and how to introduce a digital pound, probably in the second half of this decade.

But critics of the concept say a digital currency could be used by governments to track what people spend their money on, and make it harder to make payments and purchases using cash.

European Union policymakers have already sought to reassure the public that a digital euro is not a “Big Brother” surveillance project.

“I think on the back of that we need to start a national conversation, actually, because while I’m supportive of that technology, as was apparent in the responses we got to the discussion paper there’s a lot of concern about privacy,” Breeden told a hearing in parliament’s Treasury Select Committee on her appointment.

A digital pound would be the anchor for all money in the digital world to ensure trust in money, she said.

“So analytically, it’s the right thing – I can see a case for it. How you manage the privacy challenges, the role of the state – I think we are at the start of the debate on that,” Breeden said.

“The privacy concerns about programmability, I recognise those as real concerns, and what we need to do … is reassure the public on how privacy is going to be delivered, terms and conditions set in legislation, we must not assume trust in practice,” she told lawmakers.

There should be equal focus on privacy in private-sector digital currencies as well, said Breeden, who is currently an executive director at the BoE.

Eleven countries have already launched digital versions of their currencies and, like the European Central Bank, the U.S. Federal Reserve is considering doing so.

Breeden said the impact on financial stability is also a concern for her and responses to the public consultation will be published towards the end of the year.

Breeden rejected suggestions by critics of a digital currency that it would force out the availability of cash. 

