Tech

Microsoft engineer sounds alarm on AI image-generator to US officials and company’s board


A Microsoft engineer is sounding alarms about offensive and harmful imagery he says is too easily made by the company’s artificial intelligence image-generator tool, sending letters on Wednesday to U.S. regulators and the tech giant’s board of directors urging them to take action.

Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met last month with U.S. Senate staffers to share his concerns. The Federal Trade Commission confirmed it received his letter Wednesday but declined further comment.

Microsoft said it is committed to addressing employee concerns about company policies and that it appreciates Jones’ “effort in studying and testing our latest technology to further enhance its safety.”

It said it had recommended he use the company’s own “robust internal reporting channels” to investigate and address the problems. CNBC was first to report about the letters. 

Jones, a principal software engineering lead whose job involves working on AI products for Microsoft’s retail customers, said he has spent three months trying to address his safety concerns about Microsoft’s Copilot Designer, a tool that can generate novel images from written prompts.

The tool is derived from another AI image-generator, DALL-E 3, made by Microsoft’s close business partner OpenAI.

“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” he said in his letter addressed to FTC Chair Lina Khan.

“For example, when using just the prompt, ‘car accident’, Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”

Other harmful content involves violence as well as “political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few,” he told the FTC.

Jones said he repeatedly asked the company to take the product off the market until it is safer, or at least change its age rating on smartphones to make clear it is for mature audiences.

His letter to Microsoft’s board asks it to launch an independent investigation that would look at whether Microsoft is marketing unsafe products “without disclosing known risks to consumers, including children.”

This is not the first time Jones has publicly aired his concerns. He said Microsoft at first advised him to take his findings directly to OpenAI.

When that didn’t work, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, leading a manager to inform him that Microsoft’s legal team “demanded that I delete the post, which I reluctantly did,” according to his letter to the board.

In addition to the U.S. Senate’s Commerce Committee, Jones has brought his concerns to the state attorney general in Washington, where Microsoft is headquartered.

Jones told the AP that while the “core issue” is with OpenAI’s DALL-E model, those who use OpenAI’s ChatGPT to generate AI images won’t get the same harmful outputs because the two companies overlay their products with different safeguards.

“Many of the issues with Copilot Designer are already addressed with ChatGPT’s own safeguards,” he said via text.

A number of impressive AI image-generators came on the scene in 2022, including OpenAI’s second-generation DALL-E 2. That release, along with the subsequent debut of OpenAI’s chatbot ChatGPT, sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

But without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful “deepfake” images of political figures, war zones or nonconsensual nudity that falsely appear to show real people with recognizable faces.

Google has temporarily suspended its Gemini chatbot’s ability to generate images of people following outrage over how it was depicting race and ethnicity, such as by putting people of color in Nazi-era military uniforms.

Tech

Sam Altman’s OpenAI signs content agreement with News Corp


Sam Altman-led OpenAI has signed a deal that will give it access to content from some of the biggest news publications owned by media conglomerate News Corp, the companies said on Wednesday.

The deal comes weeks after the Microsoft-backed AI giant clinched an agreement to license content from the Financial Times for the development of AI models.

Access to troves of data can help enhance content produced by OpenAI’s ChatGPT, the chatbot that can generate human-like responses to prompts and create summaries of long text. 

Such partnerships are also crucial for the training of AI models and can be lucrative for news publishers, which have traditionally been denied a slice of profits internet giants earn for distributing their content.

OpenAI, which kickstarted the AI frenzy when it launched its chatbot in 2022, also struck a content deal with social media platform Reddit last week.

OpenAI did not disclose the financial details of its latest deal, but the Wall Street Journal, which is owned by News Corp, reported that it could be worth more than $250 million over five years.

The tie-up also includes a guarantee that the content will not become available on ChatGPT immediately after it is published on one of the news websites, the WSJ report said.

The agreement will give OpenAI access to current and archived content from several News Corp publications, including the Wall Street Journal, MarketWatch, the Times and others.

News Corp shares climbed about 4% after the bell.


Tech

AI disclosure required in campaign ads, FCC chair says


U.S. Federal Communications Commission Chairwoman Jessica Rosenworcel on Wednesday proposed requiring disclosure of content generated by artificial intelligence (AI) in political ads on radio and TV.

Rosenworcel is asking her colleagues to vote to advance a proposed rule that would require disclosure of AI content in both candidate and issue advertisements, but does not propose to prohibit any AI-generated content within political ads. 

The rule would require on-air and written disclosures and cover cable operators, satellite TV and radio providers, but the FCC does not have authority to regulate internet or social media ads or streaming services.

The agency has already taken steps to combat misleading use of AI in political robocalls.

There is growing concern in Washington that AI-generated content could mislead voters in the November presidential and congressional elections. Some senators want to pass legislation before November that would address AI threats to election integrity.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement, adding the proposal “makes clear consumers have a right to know when AI tools are being used in the political ads they see.”

The FCC said the use of AI is expected to play a substantial role in 2024 political ads. Rosenworcel singled out the potential for misleading “deepfakes,” which are “altered images, videos, or audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur.”

Advocacy group Public Knowledge called on Congress to extend oversight of AI in political advertising to digital platforms.

Requiring disclosure of AI “protects a vital public interest and is a commonsense step for preventing deceptive political advertisements,” the group’s policy counsel Nicholas Garcia said.

AI content in elections drew new attention in January after a fake robocall imitating President Joe Biden sought to dissuade people from voting for him in New Hampshire’s Democratic primary election.

In February, the FCC said robocalls using AI-generated voices are illegal. The declaratory ruling gave state attorneys general new tools to go after the entities behind the robocalls, Rosenworcel said.

The FCC in 2023 finalized a $5.1 million fine levied on conservative activists for making more than 1,100 illegal robocalls ahead of the 2020 U.S. election.


Tech

Meta’s Ray-Ban integrates Instagram, Amazon Music, Calm app


Meta has unveiled a series of updates to its Ray-Ban smart glasses, enhancing the user experience with new hands-free functionality.

These enhancements include seamless integration with popular platforms such as Instagram, Amazon Music, and the meditation app Calm.

One of the standout features of this update is the ability for users to effortlessly share images from their smart glasses directly to their Instagram Story without the need to reach for their phone.

Users can simply capture a photo with the smart glasses and command, “Hey Meta, share my last photo to Instagram,” or opt to take a new photo in the moment by saying, “Hey Meta, post a photo to Instagram.”

This move by Meta echoes the functionality introduced by Snap Spectacles in 2016, which allowed users to capture photos and videos with their smart glasses for direct sharing to Snapchat Stories.

Meta’s Ray-Ban smart glasses now offer hands-free integrations with Amazon Music and the Calm app. With voice commands like “Hey Meta, play Amazon Music,” users can enjoy streaming music without the need to handle their phone.

The Calm integration allows users to access mindfulness exercises and self-care content by simply saying, “Hey Meta, play the Daily Calm.”

Expanding its style offerings, Meta has introduced new designs in 15 countries, including the U.S., Canada, Australia, and parts of Europe.

Among these styles are Skyler in Shiny Chalky Gray with Gradient Cinnamon Pink Lenses, Skyler in Shiny Black with Transitions Cerulean Blue Lenses, and Headliner Low Bridge Fit in Shiny Black with Polar G15 Lenses. These glasses are available for purchase on both Meta’s and Ray-Ban’s websites.

The introduction of these new features comes on the heels of Meta’s AI upgrade to the smart glasses, which integrated multimodal AI capabilities.

This enhancement empowers users to interact with their environment more effectively, enabling features such as real-time translation using the built-in camera and Meta AI.


Copyright © GLOBAL TIMES PAKISTAN