G7 should adopt ‘risk-based’ AI regulation, ministers say

The group of Seven advanced nations should adopt “risk-based” regulation on artificial intelligence, their digital ministers agreed on Sunday, as European lawmakers hurry to introduce an AI Act to enforce rules on emerging tools such as ChatGPT.

But such regulation should also “preserve an open and enabling environment” for the development of AI technologies and be based on democratic values, G7 ministers said in a joint statement issued at the end of a two-day meeting in Japan.

While the ministers recognised that “policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members”, the agreement sets a landmark for how major countries govern AI amid privacy concerns and security risks.

“The conclusions of this G7 meeting show that we are not alone in this,” European Commission Executive Vice President Margrethe Vestager told Reuters ahead of the agreement.

Governments have especially paid attention to the popularity of generative AI tools such as ChatGPT, a chatbot developed by Microsoft Corp-backed (MSFT.O) OpenAI that has become the fastest-growing app in history since its November launch.

“We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation” including information manipulation by foreign forces, the ministerial statement said.

Italy, a G7 member, took ChatGPT offline last month to investigate its potential breach of personal data rules. While Italy lifted the ban on Friday, the move has inspired fellow European privacy regulators to launch probes.

EU lawmakers on Thursday reached a preliminary agreement on a new draft of the bloc’s upcoming AI Act, including copyright protection measures for generative AI, following a call for world leaders to convene a summit to control such technology.

Vestager, the EU’s tech regulation chief, said the bloc “will have the political agreement this year” on the AI legislation, such as labeling obligations for AI-generated images or music, to address copyright and educational risks.

Japan, this year’s chair of G7, meanwhile, has taken an accommodative approach to AI developers, pledging support for public and industrial adoption of AI.

Japan hoped to get the G7 “to agree on agile or flexible governance, rather than preemptive, catch-all regulation” over AI technology, industry minister Yasutoshi Nishimura said on Friday ahead of the ministerial talks.

“Pausing (AI development) is not the right response – innovation should keep developing but within certain guardrails that democracies have to set,” Jean-Noel Barrot, French Minister for Digital Transition, told Reuters, adding France will provide some exceptions to small AI developers under the upcoming EU regulation.

Besides intellectual property concerns, G7 countries recognised security risks. “Generative AI…produces fake news and disruptive solutions to the society if the data it’s based on is fake,” Japanese digital minister Taro Kono told a press conference after the agreement.

The top tech officials from G7 – Britain, Canada, the EU, France, Germany, Italy, Japan and the United States – met in Takasaki, a city about 100 km (60 miles) northwest of Tokyo, following energy and foreign ministers’ meetings this month.

Japan will host the G7 Summit in Hiroshima in late May, where Prime Minister Fumio Kishida will discuss AI rules with world leaders.

Don’t worry if your Android gets stolen: new Theft Detection Lock comes to the rescue

Google revealed plans to introduce a new security feature for Android devices, Theft Detection Lock, at the Google I/O 2024 developer conference on Wednesday.

This innovative addition is specifically designed to combat the rising threat of smartphone theft by automatically locking the device when suspicious activity is detected.

Powered by artificial intelligence, Theft Detection Lock utilizes advanced algorithms to identify common motions associated with theft.

For instance, if a device suddenly begins moving rapidly away from its user, a motion pattern consistent with a snatch-and-run theft, the feature swiftly triggers a screen lock.

This proactive measure aims to thwart thieves from easily accessing sensitive user data stored on the device.
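To make the motion-detection idea concrete, the Kotlin sketch below shows how an accelerometer spike could, in principle, trigger an immediate screen lock on Android. It is only an illustration of the concept, not Google’s Theft Detection Lock implementation: the SnatchDetector class and the acceleration threshold are hypothetical, and locking the screen this way assumes the app holds an active device-admin (force-lock) policy.

```kotlin
import android.app.admin.DevicePolicyManager
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

// Illustrative only: NOT Google's Theft Detection Lock. A crude accelerometer
// threshold stands in for the AI motion classifier described in the article.
class SnatchDetector(context: Context) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val policyManager =
        context.getSystemService(Context.DEVICE_POLICY_SERVICE) as DevicePolicyManager

    // Hypothetical threshold: acceleration well beyond normal handling (m/s^2).
    private val snatchThreshold = 25.0f

    // Begin listening to the accelerometer at a responsive sampling rate.
    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)?.let { sensor ->
            sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        val magnitude = sqrt(x * x + y * y + z * z)
        if (magnitude > snatchThreshold) {
            // Lock the screen immediately; requires a device-admin policy
            // with force-lock permission.
            policyManager.lockNow()
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```

Google’s actual feature is described as using on-device AI to recognise theft-like motion, so a single acceleration threshold like this is only a rough stand-in for that classifier.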

In addition to Theft Detection Lock, Google also announced the introduction of an Offline Device Lock feature. This functionality serves as a safeguard against intentional disconnection from the network, a common tactic employed by thieves to bypass security measures.

Instances such as repeated failed authentication attempts will prompt the Offline Device Lock, providing an added layer of protection for users’ devices.

Google revealed plans to enhance device security with measures aimed at preventing remote factory resets initiated by thieves.

Under the forthcoming update, if a thief attempts to reset a stolen device, they will be unable to set it up again without the necessary device or Google account credentials. This strategic move renders stolen devices essentially unsellable, significantly diminishing the incentives for phone theft.

Tesla must face vehicle owners’ lawsuit over self-driving claims

A U.S. judge on Wednesday rejected Tesla’s bid to dismiss a lawsuit accusing Elon Musk’s electric car company of misleading owners into believing that their vehicles could soon have self-driving capabilities.

The proposed nationwide class action accused Tesla and Musk of having since 2016 falsely advertised Autopilot and other self-driving technology as functional or “just around the corner,” inducing drivers to pay more for their vehicles. 

U.S. District Judge Rita Lin in San Francisco said owners could pursue negligence and fraud-based claims, to the extent they relied on Tesla’s representations regarding vehicles’ hardware and ability to drive coast-to-coast across the U.S.

Without ruling on the merits, Lin said that “if Tesla meant to convey that its hardware was sufficient to reach high or full automation, the [complaint] plainly alleges sufficient falsity.”

The judge dismissed some other claims.

Tesla and its lawyers did not immediately respond to requests for comment. Lawyers for Tesla vehicle owners did not immediately respond to similar requests.

The case was led by Thomas LoSavio, a retired California lawyer who said he paid an $8,000 premium in 2017 for Full Self-Driving capabilities on a Tesla Model S, believing it would make driving safer if his reflexes deteriorated as he aged.

LoSavio said he was still waiting for the technology six years later, with Tesla remaining unable “even remotely” to produce a fully self-driving car.

The lawsuit seeks unspecified damages for people who since 2016 bought or leased Tesla vehicles with Autopilot, Enhanced Autopilot and Full Self-Driving features.

Advertisement

Tesla has for many years faced federal probes into whether its self-driving technology might have contributed to fatal crashes.

Federal prosecutors are separately examining whether Tesla committed securities fraud or wire fraud by misleading investors about its vehicles’ self-driving capabilities, according to three people familiar with the matter.

Tesla has said Autopilot lets vehicles steer, accelerate and brake in their lanes, and Full Self-Driving lets vehicles obey traffic signals and change lanes.

But it has acknowledged that neither technology makes vehicles autonomous or excuses drivers from paying attention to the road.

The case is In re Tesla Advanced Driver Assistance Systems Litigation, U.S. District Court, Northern District of California, No. 22-05240.

Microsoft asks hundreds of China staff to relocate

Microsoft is asking about 700 to 800 people in its China-based cloud-computing and artificial-intelligence operations to consider transferring outside the country, the Wall Street Journal reported on Thursday.

The employees, mostly engineers with Chinese nationality, were earlier in the week offered an option to transfer to countries including the U.S., Ireland, Australia and New Zealand, the report said, citing people familiar with the matter.

The move comes amid spiralling US-China tensions, as the Biden administration cracks down on Chinese imports across various sectors, including electric vehicle (EV) batteries, computer chips and medical products.

A Microsoft spokesperson told the Journal that providing internal opportunities is part of its global business and confirmed the company had shared an optional internal transfer opportunity with a subset of employees. 

Reuters reported earlier this month that the U.S. Commerce Department is considering a new regulatory push to restrict the export of proprietary or closed source AI models, whose software and the data it is trained on are kept under wraps.

The spokesperson, however, told the newspaper that the company remains committed to the region and will continue to operate in China.

Microsoft didn’t immediately respond to a Reuters request for comment.

