US-China competition to field military drone swarms could fuel global arms race

As their rivalry intensifies, U.S. and Chinese military planners are gearing up for a new kind of warfare in which squadrons of air and sea drones equipped with artificial intelligence work together like a swarm of bees to overwhelm an enemy.

The planners envision a scenario in which hundreds, even thousands of the machines engage in coordinated battle. A single controller might oversee dozens of drones. Some would scout, others attack. Some would be able to pivot to new objectives in the middle of a mission based on prior programming rather than a direct order.

The world’s only AI superpowers are engaged in an arms race for swarming drones that is reminiscent of the Cold War, except drone technology will be far more difficult to contain than nuclear weapons. Because software drives the drones’ swarming abilities, it could be relatively easy and cheap for rogue nations and militants to acquire their own fleets of killer robots.

The Pentagon is pushing urgent development of inexpensive, expendable drones as a deterrent against China acting on its territorial claim on Taiwan. Washington says it has no choice but to keep pace with Beijing. Chinese officials say AI-enabled weapons are inevitable so they, too, must have them.

The unchecked spread of swarm technology “could lead to more instability and conflict around the world,” said Margarita Konaev, an analyst with Georgetown University’s Center for Security and Emerging Technology.

As the undisputed leaders in the field, Washington and Beijing are best equipped to set an example by putting limits on military uses of drone swarms. But their intense competition, China’s military aggression in the South China Sea and persistent tensions over Taiwan make the prospect of cooperation look dim.

The idea is not new. The United Nations has tried for more than a decade to advance drone non-proliferation efforts that could include limits such as forbidding the targeting of civilians or banning the use of swarms for ethnic cleansing.

MILITARY CONTRACTS OFFER CLUES

Drones have been a priority for both powers for years, and each side has kept its advances secret, so it’s unclear which country might have an edge.

A 2023 Georgetown study of AI-related military spending found that more than a third of known contracts issued by both U.S. and Chinese military services over eight months in 2020 were for intelligent uncrewed systems.

The Pentagon sought bids in January for small, unmanned maritime “interceptors.” The specifications reflect the military’s ambition: The drones must be able to transit hundreds of miles of “contested waterspace,” work in groups in waters without GPS, carry 1,000-pound payloads, attack hostile craft at 40 mph and execute “complex autonomous behaviors” to adapt to a target’s evasive tactics.

It’s not clear how many drones a single person would control. A spokesman for the defense secretary declined to say, but a recently published Pentagon-backed study offers a clue: A single operator supervised a swarm of more than 100 cheap air and land drones in late 2021 in an urban warfare exercise at an Army training site at Fort Campbell, Tennessee.

The CEO of a company developing software to allow multiple drones to collaborate said in an interview that the technology is bounding ahead.

“We’re enabling a single operator to direct right now half a dozen,” said Lorenz Meier of Auterion, which is working on the technology for the U.S. military and its allies. He said that number is expected to increase to dozens and within a year to hundreds.

Not to be outdone, China’s military claimed last year that dozens of aerial drones “self-healed” after jamming cut their communications. An official documentary said they regrouped, switched to self-guidance and completed a search-and-destroy mission unaided, detonating explosive-laden drones on a target.

In justifying the push for drone swarms, China hawks in Washington offer this scenario: Beijing invades Taiwan, then stymies U.S. intervention efforts with waves of air and sea drones that deny American and allied planes, ships and troops a foothold.

A year ago, CIA Director William Burns said Chinese Communist Party leader Xi Jinping had instructed his military to “be ready by 2027” to invade. But that doesn’t mean an invasion is likely, or that the U.S.-China arms race over AI will not aggravate global instability.

KISSINGER URGED ACTION

Just before he died last year, former U.S. Secretary of State Henry Kissinger urged Beijing and Washington to work together to discourage AI arms proliferation. They have “a narrow window of opportunity,” he said.

“Restraints for AI need to occur before AI is built into the security structure of each society,” Kissinger wrote with Harvard’s Graham Allison.

Xi and President Joe Biden made a verbal agreement in November to set up working groups on AI safety, but that effort has so far taken a back seat to the arms race for autonomous drones.

The competition is not apt to build trust or reduce the risk of conflict, said William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft.

If the U.S. is “going full speed ahead, it’s most likely China will accelerate whatever it’s doing,” Hartung said.

There’s a risk China could offer swarm technology to U.S. foes or repressive countries, analysts say. Or it could be stolen. Other countries developing the tech, such as Russia, Israel, Iran and Turkey, could also spread the know-how.

U.S. national security adviser Jake Sullivan said in January that U.S.-China talks set to begin sometime this spring will address AI safety. Neither the defense secretary’s office nor the National Security Council would comment on whether the military use of drone swarms might be on the agenda.

A FIVE-YEAR WAIT

Military analysts, drone makers and AI researchers don’t expect fully capable, combat-ready swarms to be fielded for five years or so, though big breakthroughs could happen sooner.

“The Chinese have an edge in hardware right now. I think we have an edge in software,” said CEO Adam Bry of U.S. drone maker Skydio, which supplies the Army, the Drug Enforcement Administration and the State Department, among other agencies.

Chinese military analyst Song Zhongping said the U.S. has “stronger basic scientific and technological capabilities” but added that the American advantage is not “impossible to surpass.” He said Washington also tends to overestimate the effect of its computer chip export restrictions on China’s drone swarm advances.

Paul Scharre, an AI expert at the Center for a New American Security think tank, believes the rivals are at rough parity.

“The bigger question for each country is about how do you use a drone swarm effectively?” he said.

That’s one reason all eyes are on the war in Ukraine, where drones work as eyes in the sky to make undetected front-line maneuvers all but impossible. They also deliver explosives and serve as sea-skimming ship killers.

Drones in Ukraine are often lost to jamming. Electronic interference is just one of many challenges for drone swarm development. Researchers are also focused on the difficulty of marshaling hundreds of air and sea drones in semi-autonomous swarms over vast expanses of the western Pacific for a potential war over Taiwan.

A secretive, now-inactive $78 million program announced early last year by the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, seemed tailor-made for the Taiwan invasion scenario.

The Autonomous Multi-Domain Adaptive Swarms-of-Swarms program is a mouthful to say, but the mission is clear: Develop ways for thousands of autonomous land, sea and air drones to “degrade or defeat” a foe in seizing contested turf.

DRONES IMPROVISE — BUT MUST STICK TO ORDERS

A separate DARPA program called OFFensive Swarm-Enabled Tactics, or OFFSET, had the goal of marshaling upwards of 250 land-based drones to assist Army troops in urban warfare.

Project coordinator Julie Adams, an Oregon State robotics professor, said swarm commanders in the exercise managed to choreograph up to 133 ground and air vehicles at a time. The drones were programmed with a set of tactics they could perform semi-autonomously, including indoor reconnaissance and simulated enemy kills.

Under the direction of a swarm commander, the fleet acted something like an infantry squad whose soldiers are permitted some improvisation as long as they stick to orders.

“It’s what I would call supervisory interaction, in that the human could stop the command or stop the tactic,” Adams said. But once a course of action — such as an attack — was set in motion, the drone was on its own.
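
The supervisory pattern Adams describes, in which a human can halt a tactic before it starts but a drone runs autonomously once committed, can be sketched as a minimal control loop. Everything below (class names, tactics, the commit flag) is a hypothetical Python illustration, not the software used in the exercise.

```python
from dataclasses import dataclass


@dataclass
class Drone:
    ident: int
    tactic: str = "idle"      # current semi-autonomous tactic
    committed: bool = False   # once an action begins, it runs to completion

    def assign(self, tactic: str) -> None:
        # New orders only reach drones that have not yet committed.
        if not self.committed:
            self.tactic = tactic


@dataclass
class SwarmCommander:
    """One human supervisor directing many drones (hypothetical sketch)."""
    drones: list

    def order(self, tactic: str) -> None:
        for d in self.drones:
            d.assign(tactic)

    def abort(self) -> None:
        # Supervisory interaction: the human can stop the tactic,
        # but only for drones that have not committed to an action.
        for d in self.drones:
            if not d.committed:
                d.tactic = "hold"


swarm = SwarmCommander([Drone(i) for i in range(133)])
swarm.order("indoor_recon")
swarm.drones[0].committed = True  # this drone has begun its attack run
swarm.abort()
print(swarm.drones[0].tactic)  # prints "indoor_recon": committed, on its own
print(swarm.drones[1].tactic)  # prints "hold": the stop order reached it
```

The point of the sketch is the asymmetry: supervision is cheap before commitment and impossible after it, which is what lets a single operator plausibly oversee a 133-vehicle swarm.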

Adams said she was particularly impressed with a swarm commander in a different exercise last year at Fort Moore, Georgia, who single-handedly managed a 45-drone swarm over 2.5 hours with just 20 minutes of training.

“It was a pleasant surprise,” she said.

A reporter had to ask: Was he a video game player?

Yes, she said. “And he had a VR headset at home.” 

Europe sets benchmark for rest of the world with landmark AI laws

 Europe’s landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life.

The European Union’s AI Act is more comprehensive than the United States’ light-touch, voluntary-compliance approach, while China’s rules aim to maintain social stability and state control.

The vote by EU countries came two months after EU lawmakers backed the legislation, which the European Commission drafted in 2021 and which was later amended with a number of key changes.

Concerns about AI contributing to misinformation, fake news and copyright infringement have intensified globally in recent months amid the growing popularity of generative AI systems such as Microsoft-backed OpenAI’s ChatGPT and Google’s chatbot Gemini.

“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies,” Belgian digitisation minister Mathieu Michel said in a statement.

“With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” he said.

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter.

It restricts governments’ use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes.

The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley.

“The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR,” he said, referring to EU privacy rules.

While the new legislation will apply in full from 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the regulation enters into force.

Obligations for general purpose AI models will apply after 12 months and rules for AI systems embedded into regulated products in 36 months.

Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of global turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
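
As stated, each tier pairs a fixed amount with a share of global turnover. A small sketch of how the cap works, assuming the "whichever is higher" rule the Act applies to large companies (that rule is my reading of the final text, not stated in this article):

```python
def fine_cap(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for a given tier: the fixed amount
    or the turnover percentage, whichever is higher (assumed rule)."""
    return max(fixed_eur, pct * global_turnover_eur)


# Top tier: 35 million euros or 7% of global turnover.
print(fine_cap(35e6, 0.07, 2e9))      # 2bn turnover: the 7% share dominates
# Lowest tier: 7.5 million euros or 1.5% of global turnover.
print(fine_cap(7.5e6, 0.015, 100e6))  # 100m turnover: the fixed amount dominates
```

For small firms the Act softens this (the lower of the two figures can apply), so the sketch covers only the headline case.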


Microsoft promotes new tools for making AI software

 Microsoft talked up new tools on Tuesday aimed at encouraging programmers to build AI-focused technology into Windows software as it races against Alphabet, Amazon and Apple to dominate the emerging field.

At a developer conference in Seattle, Chief Executive Satya Nadella promoted new application programming interfaces, or APIs, that make it easier for developers to tap into AI technology offered by Microsoft.

The company said 1.8 million developers are now using GitHub Copilot, Microsoft’s generative AI tool that helps computer programmers be more productive.

“What stands out to me as I look back at this past year, is how you all as developers have taken all of these capabilities and are applying them, quite frankly, to change the world around us,” Nadella said during his keynote address at the Build conference.

Microsoft detailed new features for its Copilot AI software that helps business productivity applications such as email and its Teams video and text chat product.

At its developer conference last week, Alphabet’s Google unveiled a similar batch of AI tools to help people with office applications. Microsoft announced details of its new developer tools last week.

Shares of Microsoft were up 1.2% at $430.67 on Tuesday afternoon after hitting a record high of $432.97 earlier in the session. Microsoft’s stock has now gained 14% in 2024.

Also aimed at developers, Microsoft said last Thursday it would offer its cloud computing customers a platform of AMD AI chips that will compete with Nvidia, whose graphics processing units have become the gold standard for AI computing.

The platform of AMD chips created by Microsoft uses Nvidia networking technology called InfiniBand to string the processors together.

OpenAI’s new GPT-4o model, which runs on Microsoft’s infrastructure, is 12 times cheaper for developers to use in their software than earlier versions of the technology, Microsoft’s chief technology officer Kevin Scott said.

Microsoft is the largest investor in OpenAI and uses some of the AI heavyweight’s technology in its own products.

On Monday, Microsoft debuted a line of Copilot+ personal computers with AI features such as software that lets users search through their past actions in nearly any application. The new computers feature Arm-based processors made by Qualcomm.


Explainer: What are AI PCs? How do they differ from traditional PCs?

The PC just got an AI makeover, raising hopes that the buzzy technology would help revive an industry that has been on a steady decline over the last few years.

Here’s everything we know about AI PCs:

WHAT DOES “AI PC” MEAN?

Manufacturers say these devices process data more swiftly than traditional PCs and can handle a greater volume of AI tasks directly on the device, including chatbots.

That means they do not have to rely on cloud data centers that currently power most AI applications, including OpenAI’s ChatGPT.

Some models can even support the training of AI models, a task that requires significant computing power and is typically performed on servers.

PC makers are hoping such features will help draw in buyers as more people lean on generative AI for everything from sending emails to planning vacations.

Research firm Canalys estimates AI PC shipments will surpass 100 million in 2025, constituting 40% of all PCs shipped. 

WHAT TECHNOLOGY IS USED IN AI PCS?

AI PCs come with specialized processors called neural processing units (NPUs) that handle the majority of on-device AI workloads.

These NPUs work in tandem with central processing units and graphics processors to manage complex tasks, deliver enhanced processing speeds and power applications such as AI assistants.
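
Conceptually, that division of labor is a routing decision: sustained neural inference goes to the NPU, graphics to the GPU, and general-purpose work stays on the CPU. A toy sketch with entirely hypothetical names, not any vendor's actual scheduler:

```python
def pick_processor(workload: str) -> str:
    """Toy routing rule (hypothetical): neural inference to the NPU,
    rendering to the GPU, everything else to the CPU."""
    routing = {"inference": "NPU", "render": "GPU"}
    return routing.get(workload, "CPU")


for task in ("inference", "render", "spreadsheet"):
    print(task, "->", pick_processor(task))
```

In real systems the decision also weighs power draw and latency; the NPU's strength is sustained low-power inference rather than raw peak speed.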

WHAT ARE SOME OF THE AI PCS AVAILABLE ON THE MARKET?

Brands including Dell, HP, Samsung Electronics, Lenovo, Asus and Acer have unveiled new computers under Microsoft’s Copilot+ branding, which was announced on Monday.

Among these, Microsoft’s refreshed Surface Laptop and Surface Pro tablet are some of the most affordable Copilot+ devices, starting at $999.

The Lenovo ThinkPad T14s Gen 6, expected to start at $1,699, is the priciest option among the prices manufacturers have disclosed so far.

ARE THERE ANY CONCERNS?

A new flagship feature from Microsoft called “Recall” has raised some privacy concerns. On the Windows maker’s Copilot+ PCs, the capability lets the AI assistant search and retrieve information about any past activity on the computer.

The Recall feature tracks every action performed on the laptop, from voice chats to web browsing, and creates a detailed history stored on the device. The user can then search this repository and go through past actions.
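
As described, the feature amounts to an on-device, searchable activity log. A toy sketch of that idea, local storage plus a simple search, with hypothetical table and field names and no relation to Microsoft's implementation:

```python
import sqlite3

# Hypothetical on-device activity log; real use would persist to a local file.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE activity (ts TEXT, app TEXT, detail TEXT)")

events = [
    ("2024-05-20T09:14", "browser", "searched for flight prices to Lisbon"),
    ("2024-05-20T09:30", "email", "drafted a reply about the budget review"),
    ("2024-05-20T10:02", "chat", "voice call with the design team"),
]
db.executemany("INSERT INTO activity VALUES (?, ?, ?)", events)

# The user queries their own history; nothing leaves the device.
rows = db.execute(
    "SELECT ts, app FROM activity WHERE detail LIKE ?", ("%Lisbon%",)
).fetchall()
print(rows)  # prints [('2024-05-20T09:14', 'browser')]
```

The privacy debate in the article turns on exactly where such a store lives: on the device, as in this sketch, or centrally.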

Some social media users have expressed fears that the feature could enable spying, while billionaire technologist Elon Musk compared it to “Black Mirror,” the Netflix series that explores the harmful effects of advanced technology.

The main concern with the feature is whether the data is stored on the device or centrally, International Data Corp analyst Ryan O’Leary said, adding that there would be “significant privacy risk” if Microsoft stored the data.

On the other hand, some experts say that managing more AI-related tasks directly on the device offers greater privacy.

Research from Forrester showed AI PCs could help avoid the use of personal data to train AI systems, as well as copyright and patent violations, making them preferable for enterprise use.


Copyright © GLOBAL TIMES PAKISTAN