
G7 should adopt ‘risk-based’ AI regulation, ministers say


The Group of Seven advanced nations should adopt “risk-based” regulation on artificial intelligence, their digital ministers agreed on Sunday, as European lawmakers hurry to introduce an AI Act to enforce rules on emerging tools such as ChatGPT.

But such regulation should also “preserve an open and enabling environment” for the development of AI technologies and be based on democratic values, G7 ministers said in a joint statement issued at the end of a two-day meeting in Japan.

While the ministers recognised that “policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members”, the agreement sets a landmark for how major countries govern AI amid privacy concerns and security risks.

“The conclusions of this G7 meeting show that we are not alone in this,” European Commission Executive Vice President Margrethe Vestager told Reuters ahead of the agreement.

Governments have especially paid attention to the popularity of generative AI tools such as ChatGPT, a chatbot developed by Microsoft Corp-backed (MSFT.O) OpenAI that has become the fastest-growing app in history since its November launch.


“We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation” including information manipulation by foreign forces, the ministerial statement said.

Italy, a G7 member, took ChatGPT offline last month to investigate its potential breach of personal data rules. While Italy lifted the ban on Friday, the move has inspired fellow European privacy regulators to launch probes.

EU lawmakers on Thursday reached a preliminary agreement on a new draft of the bloc’s upcoming AI Act, including copyright protection measures for generative AI, following a call for world leaders to convene a summit to control such technology.

Vestager, the EU’s tech regulation chief, said the bloc “will have the political agreement this year” on AI legislation that could include labelling obligations for AI-generated images or music to address copyright and educational risks.

Japan, this year’s chair of G7, meanwhile, has taken an accommodative approach to AI developers, pledging support for public and industrial adoption of AI.


Japan hoped to get the G7 “to agree on agile or flexible governance, rather than preemptive, catch-all regulation” over AI technology, industry minister Yasutoshi Nishimura said on Friday ahead of the ministerial talks.

“Pausing (AI development) is not the right response – innovation should keep developing but within certain guardrails that democracies have to set,” Jean-Noël Barrot, French Minister for Digital Transition, told Reuters, adding that France will provide some exceptions for small AI developers under the upcoming EU regulation.

Besides intellectual property concerns, G7 countries recognised security risks. “Generative AI…produces fake news and disruptive solutions to the society if the data it’s based on is fake,” Japanese digital minister Taro Kono told a press conference after the agreement.

The top tech officials from the G7 – Britain, Canada, the EU, France, Germany, Italy, Japan and the United States – met in Takasaki, a city about 100 km (60 miles) northwest of Tokyo, following energy and foreign ministers’ meetings this month.

Japan will host the G7 Summit in Hiroshima in late May, where Prime Minister Fumio Kishida will discuss AI rules with world leaders.


OpenAI, SoftBank each commit $19bn to Stargate AI data center


OpenAI and Japanese conglomerate SoftBank (9984.T) will each commit $19 billion to fund Stargate, a joint venture to develop data centers for artificial intelligence in the U.S., the Information reported on Wednesday.

The ChatGPT maker will hold a 40% interest in Stargate, which would act as an extension of OpenAI, the report said, citing comments OpenAI CEO Sam Altman made to colleagues. His comments imply SoftBank would also hold a 40% interest, the report added.

OpenAI and SoftBank did not immediately respond to Reuters’ requests for comment.

On Tuesday, U.S. President Donald Trump announced that OpenAI, SoftBank Group and Oracle (ORCL.N) will unveil Stargate and invest $500 billion over the next four years to help the United States stay ahead of China and other rivals in the global AI race.

Stargate will initially deploy $100 billion, with the rest of the funding expected over the next four years. The project is being led by SoftBank and OpenAI.


Taiwan’s HTC to sell part of XR unit to Google for $250mn


Taiwan’s HTC (2498.TW) said on Thursday it will sell part of its unit for extended reality (XR) headsets and glasses to Google (GOOGL.O) for $250 million and transfer some of its employees to the U.S. company.

The transaction is expected to close in the first quarter of this year, HTC said.

The two companies will also explore further collaboration opportunities, HTC added.

Google said in a separate statement that the deal will accelerate the development of the Android XR platform and strengthen the ecosystem for headsets and glasses.

Lu Chia-te, HTC vice president and general counsel, told reporters the company had licensed its intellectual property to Google on a non-exclusive basis.

“Therefore, this is not a buyout nor an exclusive licence. In the future, HTC will still retain the ability to use, utilise, and even further develop it without any restrictions,” he said.


Microsoft’s LinkedIn sued for disclosing customer information to train AI models


Microsoft’s (MSFT.O) LinkedIn has been sued by Premium customers who said the business-focused social media platform disclosed their private messages to third parties without permission to train generative artificial intelligence models.

According to a proposed class action filed on Tuesday night on behalf of millions of LinkedIn Premium customers, LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data.

Customers said LinkedIn then discreetly updated its privacy policy on Sept. 18 to say data could be used to train AI models, and in a “frequently asked questions” hyperlink said opting out “does not affect training that has already taken place.”

The complaint said this attempt to “cover its tracks” suggests LinkedIn was fully aware it had violated customers’ privacy and its promise to use personal data only to support and improve its platform, and sought to minimize public scrutiny and legal fallout.

The lawsuit was filed in the San Jose, California, federal court on behalf of LinkedIn Premium customers who sent or received InMail messages, and whose private information was disclosed to third parties for AI training before Sept. 18.

It seeks unspecified damages for breach of contract and violations of California’s unfair competition law, and $1,000 per person for violations of the federal Stored Communications Act.


LinkedIn said in a statement: “These are false claims with no merit.”

A lawyer for the plaintiffs had no immediate additional comment.

The lawsuit was filed several hours after U.S. President Donald Trump announced a joint venture among Microsoft-backed OpenAI, Oracle (ORCL.N) and SoftBank (9984.T), with a potential $500 billion of investment, to build AI infrastructure in the United States.


Copyright © GLOBAL TIMES PAKISTAN