No one is driving this taxi. What possibly could go wrong?

I won’t forget the first time I took a ride in a car without anyone sitting in the driver’s seat.

It happened one night last September when a Chevy Bolt named Peaches picked me up outside a San Francisco bar. Our ensuing half-hour ride together produced, at first, a titillating display of technology’s promise. Then an unexpected twist made me worry that the encounter had turned into a mistake I would regret.

Peaches and I were getting along great for most of our time together as the car deftly navigated through hilly San Francisco streets similar to those that Steve McQueen careened through during a famous chase scene in the 1968 film “Bullitt.” Unlike McQueen, Peaches never exceeded 30 miles per hour (48 kilometers per hour) because of restrictions imposed by state regulators on a ride-hailing service operated by Cruise, a General Motors subsidiary, since it won approval to transport fare-paying passengers last June.

It was all going so smoothly that I was starting to buy into the vision of Cruise and Waymo, a self-driving car pioneer spun off from a Google project that is also trying to launch a ride-hailing service in San Francisco.

The theory fueling the ambition is that driverless cars will be safer than vehicles operated by frequently distracted, occasionally intoxicated humans — and, in the case of robotaxis, be less expensive to ride in than automobiles that require a human behind the wheel.

The concept does sound good. And the technology to pull it off is advancing steadily, just like other artificial intelligence applications such as chatbots that can write college-level essays and produce impressive pieces of art within seconds.

But when something goes awry, as it did near the end of my encounter with Peaches, that sense of astonishment and delight can evaporate very quickly.

DESTINATION: UNCERTAIN

As we approached my designated drop-off location near the Fairmont Hotel — where presidents have stayed and Tony Bennett first sang “I Left My Heart In San Francisco” — Peaches advised me to gather my belongings and prepare to get out of the car.

As I grabbed my bag and the robotaxi appeared to be pulling over to the curb, Peaches suddenly sped up and, inexplicably, started driving away in the opposite direction.

When the dashboard display indicated I was now somehow an estimated 20 minutes away from my destination, I grew frantic. I asked Peaches what was going on. There was no response, so I used a feature in Cruise’s ride-hailing app that enables a passenger to contact a human at a call center.

The Cruise representative confirmed that Peaches had gotten confused, apologized and assured me the robotaxi had been reprogrammed to get me to my original destination.

Indeed, the car did seem to be headed back to where I requested. Then it started doing the same old thing again, making me wonder whether Peaches might like me a little too much to let me go. Feeling more like I was stuck on Mr. Toad’s Wild Ride at Disneyland than riding in an artificially intelligent car, I contacted Cruise’s call center again. Peaches, they told me apologetically, seemed to be malfunctioning.

Suddenly, Peaches came to a halt right in the middle of the street. I bolted from the Bolt, marooned several blocks away from my destination shortly before 10 pm.

Fortunately, I know my way around San Francisco, so I walked the rest of the way to where I needed to be. But what if this had happened to tourists? Would they know where to go? How would they feel being forced to walk around a strange neighborhood in a big city late at night?

MAYBE DON’T STOP HERE

When I discussed the incident during an interview for a recent story about robotaxis, Cruise CEO Kyle Vogt apologized and assured me the problem had been fixed.

Sure enough, I was picked up and dropped off at my designated destinations in rides I took with another Associated Press reporter in two different Cruise robotaxis — one named Cherry and the other Hollandaise — on a mid-February night in San Francisco. But Cherry chose to drop us off at a bus stop just as a bus was pulling up to collect a group of waiting passengers. They weren’t happy about their ride on mass transit being delayed and began jeering at us.

My experience apparently isn’t isolated. The San Francisco County Transportation Authority has raised a red flag about robotaxis making unexpected, prolonged stops in the middle of streets and identified other problems that threaten to cause headaches and potentially imperil public safety.

Earlier this month, Vogt revealed that Cruise had voluntarily recalled the software in 300 robotaxis after one of them rear-ended a bus in San Francisco, and he declared that the problem behind the fender-bender had been fixed. Not long after that, five Waymo vehicles became disoriented in San Francisco’s famously foggy conditions and came to a stop, blocking traffic.

And my experience with Peaches? Whenever I reminisce about that ride, I am also reminded of a trip to New York that I took two days after the robotaxi couldn’t deliver me to my destination.

After I landed at JFK Airport, I hopped into an old-fashioned taxi driven by a fellow named Talid. I remember having a pleasant conversation with Talid, who chuckled as I recounted what happened with Peaches. At the end of the ride, Talid dropped me off at Grand Central Terminal, as I had requested. Then his cab drove off — with, of course, a human still behind the wheel.

A former OpenAI leader says safety has ‘taken a backseat to shiny products’ at the AI company

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.

Jan Leike, who ran OpenAI’s “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies.

He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”

“OpenAI must become a safety-first AGI company,” wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike’s posts that he was “super appreciative” of Leike’s contributions to the company and was “very sad to see him leave.”

Leike is “right we have a lot more to do; we are committed to doing it,” Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike’s Superalignment team, which was launched last year to focus on AI risks, and is integrating the team’s members across its research efforts.

Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade.

Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that’s meaningful to him without offering additional details.

He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”

On Monday, OpenAI showed off the latest update to its artificial intelligence model.

US, TikTok seek fast-track schedule, ruling by Dec. 6 on potential ban

The U.S. Justice Department and TikTok on Friday asked a U.S. appeals court to set a fast-track schedule to consider the legal challenges to a new law requiring China-based ByteDance to divest TikTok’s U.S. assets by Jan. 19 or face a ban.

TikTok, ByteDance and a group of TikTok content creators joined the Justice Department in asking the U.S. Court of Appeals for the District of Columbia Circuit to rule by Dec. 6, which would leave time to seek review from the Supreme Court, if needed, before the Jan. 19 deadline.

On Tuesday, a group of TikTok creators filed suit to block the law that could ban the app used by 170 million Americans, saying it has had “a profound effect on American life.”

Last week, TikTok and parent company ByteDance filed a similar lawsuit, arguing that the law violates the U.S. Constitution on a number of grounds including running afoul of First Amendment free speech protections.

“In light of the large number of users of the TikTok platform, the public at large has a significant interest in the prompt disposition of this matter,” the U.S. Justice Department and TikTok petitioners said.

TikTok said that with a fast-track schedule, it believes the legal challenge can be resolved without the need to request emergency preliminary injunctive relief.

The law, signed by President Joe Biden on April 24, gives ByteDance until Jan. 19 to sell TikTok or face a ban. The White House says it wants to see Chinese-based ownership ended on national security grounds, but not a ban on TikTok.

The parties asked the court to set the case for oral arguments as soon as practical during the September case calendar. The Justice Department said it may file classified material to support the national security justifications in secret with the court.

Earlier this week the Justice Department said the TikTok law “addresses critical national security concerns in a manner that is consistent with the First Amendment and other constitutional limitations.”

The law prohibits app stores run by Apple and Alphabet’s Google from offering TikTok and bars internet hosting services from supporting TikTok unless ByteDance divests TikTok.

Driven by worries among U.S. lawmakers that China could access data on Americans or spy on them with the app, the measure was passed overwhelmingly in Congress just weeks after being introduced.

Spotify sued over alleged unpaid royalties

Music streaming giant Spotify has been sued in a US federal court for allegedly underpaying songwriters, composers and publishers by tens of millions of dollars.

The lawsuit against Spotify USA was filed in New York on Thursday by the Mechanical Licensing Collective (MLC), a non-profit that collects and distributes royalties owed from music streaming services.

The suit alleges that Spotify on March 1, without advance notice, reclassified its paid subscription services, resulting in a nearly 50 percent reduction in royalty payments to MLC.

“The financial consequences of Spotify’s failure to meet its statutory obligations are enormous for Songwriters and Music Publishers,” MLC said.

“If unchecked, the impact on Songwriters and Music Publishers of Spotify’s unlawful underreporting could run into the hundreds of millions of dollars.”

According to MLC, Spotify reclassified its Premium Individual, Duo and Family subscription streaming plans as Bundled Subscription Offerings because they now include audiobooks.

Royalties paid on bundled services are significantly lower. MLC said Premium subscribers already had access to audiobooks and that “nothing has been bundled with it.”

“Premium is exactly the same service that Spotify offered to its subscribers before the launch of Audiobooks Access,” it said.

In a statement, Spotify said the lawsuit “concerns terms that publishers and streaming services agreed to and celebrated years ago.”

Spotify said it paid a “record amount” in royalties last year and “is on track to pay out an even larger amount in 2024.” “We look forward to a swift resolution of this matter,” the Swedish company said.

In February, Spotify said it paid $9 billion to musicians and publishers last year, about half of which went to independent artists. 
