Tech

OpenAI CEO warns that ‘societal misalignments’ could make artificial intelligence dangerous

The CEO of ChatGPT-maker OpenAI said that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman, speaking at the World Governments Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that’s likely advancing faster than the world expects.

“There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said.

 “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”


However, Altman stressed that AI companies such as OpenAI shouldn’t be in the driver’s seat when it comes to writing the regulations that will govern the industry.

“We’re still in the stage of a lot of discussion. So there’s you know, everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK,” Altman said.

“I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested billions of dollars in OpenAI.

The Associated Press has signed a deal allowing OpenAI to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft, alleging they used its stories without permission to train OpenAI’s chatbots.


OpenAI’s success has made Altman the public face for generative AI’s rapid commercialization — and the fears over what may come from the new technology.

The UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, offers a glimpse of those risks. Speech remains tightly controlled, and those restrictions limit the flow of accurate information, the same information that machine-learning systems like ChatGPT rely on to provide answers for users.

The Emirates also has the Abu Dhabi firm G42, overseen by the country’s powerful national security adviser. G42 has what experts suggest is the world’s leading Arabic-language artificial intelligence model.

The company has faced spying allegations for its ties to a mobile phone app identified as spyware. It has also faced claims it could have gathered genetic material secretly from Americans for the Chinese government.

G42 has said it would cut ties to Chinese suppliers over American concerns. However, the discussion with Altman, moderated by the UAE’s Minister of State for Artificial Intelligence Omar al-Olama, touched on none of the local concerns.


For his part, Altman said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

“I think the reason is the current technology that we have is like … that very first cellphone with a black-and-white screen,” Altman said.
“So give us some time. But I will say I think in a few more years it’ll be much better than it is now. And in a decade it should be pretty remarkable.”

Tech

Europe sets benchmark for rest of the world with landmark AI laws


 Europe’s landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life.

The European Union’s AI Act is more comprehensive than the United States’ light-touch, voluntary compliance approach, while China’s rules aim to maintain social stability and state control.

The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes.

Concerns about AI contributing to misinformation, fake news and copyrighted material have intensified globally in recent months amid the growing popularity of generative AI systems such as Microsoft-backed OpenAI’s ChatGPT, and Google’s chatbot Gemini.


“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies,” Belgian digitisation minister Mathieu Michel said in a statement.

“With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” he said.

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter.

It restricts governments’ use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes.

The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley.


“The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR,” he said, referring to EU privacy rules.

While the new legislation will apply in full in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the regulation enters into force.

Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months.

Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
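The tiered caps follow a simple pattern: each tier pairs a fixed euro amount with a percentage of global turnover. Under the Act the applicable maximum for a company is generally the higher of the two, but that detail is not stated in the article, so treat it as an assumption to verify against the final text. A minimal sketch using the article’s figures:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum fine for a tier: the higher of the fixed cap
    or the given share of global annual turnover (assumed rule)."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Lower tier from the article: 7.5 million euros or 1.5% of turnover.
# For a company with 2 billion euros in turnover, the percentage dominates.
print(fine_cap(2_000_000_000, 7_500_000, 0.015))  # 30000000.0

# Upper tier: 35 million euros or 7% of global turnover.
print(fine_cap(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```

For smaller companies the fixed cap dominates instead, which is why both figures appear in each tier.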


Tech

Microsoft promotes new tools for making AI software


 Microsoft talked up new tools on Tuesday aimed at encouraging programmers to build AI-focused technology into Windows software as it races against Alphabet, Amazon and Apple to dominate the emerging field.

At a developer conference in Seattle, Chief Executive Satya Nadella promoted new application programming interfaces, or APIs, that make it easier for developers to tap into AI technology offered by Microsoft.

The company said 1.8 million developers are now using GitHub Copilot, Microsoft’s generative AI tool that helps computer programmers be more productive.

“What stands out to me as I look back at this past year, is how you all as developers have taken all of these capabilities and are applying them, quite frankly, to change the world around us,” Nadella said during his keynote address at the Build conference.


Microsoft detailed new features for its Copilot AI software that helps business productivity applications such as email and its Teams video and text chat product.

At its developer conference last week, Alphabet’s Google unveiled a similar batch of AI tools to help people with office applications. Microsoft announced details of its new developer tools last week.

Shares of Microsoft were up 1.2% at $430.67 on Tuesday afternoon after hitting a record high of $432.97 earlier in the session. Microsoft’s stock has now gained 14% in 2024.

Also aimed at developers, Microsoft said last Thursday it would offer its cloud computing customers a platform of AMD AI chips that will compete with Nvidia, whose graphics processing units have become the gold standard for AI computing.

The platform of AMD chips created by Microsoft uses Nvidia networking technology called InfiniBand to string the processors together.


OpenAI’s new GPT-4o model, which runs on Microsoft’s infrastructure, is 12 times cheaper for developers to use in their software than earlier versions of the technology, Microsoft’s chief technology officer Kevin Scott said.

Microsoft is the largest investor in OpenAI and uses some of the AI heavyweight’s technology in its own products.

On Monday, Microsoft debuted a line of Copilot+ personal computers with AI features such as software that lets users search through their past actions in nearly any application. The new computers feature Arm-based processors made by Qualcomm.


Tech

Explainer: What are AI PCs? How do they differ from traditional PCs?


The PC just got an AI makeover, raising hopes that the buzzy technology would help revive an industry that has been on a steady decline over the last few years.

Here’s everything we know about AI PCs:

WHAT DOES “AI PC” MEAN?

Manufacturers say these devices process data more swiftly than traditional PCs and can handle a greater volume of AI tasks directly on the device, including chatbots.


That means they do not have to rely on cloud data centers that currently power most AI applications, including OpenAI’s ChatGPT.

Some models can even support the training of AI models, a task that requires significant computing power and is typically performed on servers.

PC makers are hoping such features will help draw in buyers as more people lean on generative AI for everything from sending emails to planning vacations.

Research firm Canalys estimates AI PC shipments will surpass 100 million in 2025, constituting 40% of all PCs shipped. 

WHAT TECHNOLOGY IS USED IN AI PCS?


AI PCs come with specialized processors called neural processing units (NPUs) that handle the majority of on-device AI workloads.

These NPUs work in tandem with central processing units and graphics processors to manage complex tasks, deliver enhanced processing speeds and power applications such as AI assistants.

WHAT ARE SOME OF THE AI PCS AVAILABLE ON THE MARKET?

Brands including Dell, HP, Samsung Electronics, Lenovo, Asus and Acer have unveiled new computers under Microsoft’s Copilot+ branding, which was announced on Monday.

Among these, Microsoft’s refreshed Surface Laptop and Surface Pro tablet are some of the most affordable Copilot+ devices, starting at $999.


The Lenovo ThinkPad T14s Gen 6, expected to start at $1,699, is the priciest option among the prices manufacturers have disclosed so far.

ARE THERE ANY CONCERNS?

A new flagship feature from Microsoft called “recall” has raised some privacy concerns. The capability, part of the AI assistant on the Windows maker’s Copilot+ PCs, lets users search and retrieve information about any past activity on the computer.

The recall feature tracks every action performed on the laptop from voice chats to web browsing, and creates a detailed history stored on the device. The user can then search this repository and go through past actions.

Some social media users have expressed fears that the feature could enable spying, while billionaire technologist Elon Musk compared it to “Black Mirror,” the Netflix series that explores the harmful effects of advanced technology.


The main concern with the feature is whether the data is stored on the device or centrally, International Data Corp analyst Ryan O’Leary said, adding that there would be “significant privacy risk” if Microsoft stored the data.

On the other hand, some experts say that managing more AI-related tasks directly on the device offers greater privacy.

Research from Forrester showed AI PCs could help avoid the use of personal data to train AI systems, as well as copyright and patent violations, making them preferable for enterprise use.


Copyright © GLOBAL TIMES PAKISTAN