MTA Explores How To Use AI To Monitor Thousands Of Cameras In Transit System

The transportation authority wants to know how technology can be used to detect weapons, monitor unattended items or even foresee stampedes.

The MTA had surveillance cameras at the Chambers Street station, Jan. 6, 2026. (Alex Krales/THE CITY)

Jan. 8, 2026, 5:00 a.m.

The MTA has begun exploring whether artificial intelligence can be harnessed within the transit system to detect weapons, monitor unattended items or even anticipate subway stampedes.

An unspecified number of tech providers and systems integrators responded by the Dec. 30 deadline to a request for information the transportation authority issued early last month, officials said.

“There’s interest across the board,” Michael Kemper, MTA chief security officer, told THE CITY. “It’s not only coming from the MTA, but from the business world, the AI business world, in working with us.”

The request spells out early steps in the MTA’s shift toward potentially using AI to perform complex public-safety work, such as analyzing real-time video feeds from subways and buses and predicting potentially unsafe behavior via cameras in the transit system.

“Not only is this the norm, but it’s also the expected — AI is here, AI is the future,” Kemper said. “For us not to explore it, research it and investigate it, it would be malpractice on our side.”

But technology watchdogs warn that the AI boom comes with privacy risks and tracking capabilities that could extend beyond what the MTA says it needs out of video analytics.

Jerome Greco, supervising attorney of The Legal Aid Society’s Digital Forensics Unit, said the technology’s ability to possibly scope out “unusual” or “unsafe” behavior within a transit setting comes with many potential problems, including “very negative” interactions with police.

“These uses of AI are not like Netflix telling you what movie you should watch next,” Greco said. “The consequences of it being wrong could be pretty significant and I think that’s something the MTA should not be so cavalier about.”

William Owen, communications director for Surveillance Technology Oversight Project, likened the effort from transit officials to the weapons-detector pilot program that then-Mayor Eric Adams and the NYPD implemented in the subway in 2024. During a monthlong test with more than 3,000 searches at 20 stations, the AI-powered scanners turned up 12 knives, no guns and more than 100 false positives.
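Taken at face value, those pilot figures suggest alerts were wrong far more often than they found weapons. A back-of-the-envelope calculation, treating the article's "more than" figures as exact lower bounds (an assumption):

```python
# Rough rates from the reported 2024 scanner pilot.
# Assumption: exactly 3,000 searches and 100 false positives;
# the article says "more than" for both, so these are lower bounds.
searches = 3000
knives_found = 12
guns_found = 0
false_positives = 100

false_positive_rate = false_positives / searches  # share of searches that were false alerts
weapon_hit_rate = knives_found / searches         # share of searches that found a weapon

print(f"False-positive rate: {false_positive_rate:.1%}")
print(f"Weapon hit rate: {weapon_hit_rate:.1%}")
```

On these numbers, at least roughly 3.3% of searches produced a false alert, versus 0.4% that turned up a knife, about eight false positives for every weapon found.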

“It really turned out to be just a metal detector that found a lot of umbrellas and other items instead of actual weapons,” Owen said.

Kemper said the MTA understands the issues raised over the use in the transit system of AI video analytics, which he described as a “tool” to enhance human decision-making.

“People have concerns and questions about it — it’s our job to be transparent, answer those questions,” he said. “But we need to move forward and explore these technologies to keep our riders safe.”

Notably, the request does not at all mention the use of controversial facial recognition technology, used by the NYPD in an April 2024 incident that critics urged investigators to review. A 2021 Amnesty International investigation found that the NYPD was able to put images from more than 15,000 cameras in Brooklyn, The Bronx and Manhattan into facial recognition software.

The MTA says its AI inquiry is centered on tapping into current technology for the sake of public safety.

The response from tech providers is the latest move by the country's largest mass-transit system to adapt the burgeoning technology for security purposes. There are more than 15,000 cameras throughout the transit system and on the more than 6,000 cars in the subway fleet.

All subway cars are now equipped with security cameras, Jan. 6, 2026. Credit: Alex Krales/THE CITY

Artificial intelligence is already being put to the test elsewhere in the city’s transportation network.

The authority last year retrofitted the axles of some cars along the A line with Google Pixel smartphones that use artificial intelligence to detect and analyze potential track defects. The MTA is also testing new AI-enabled fare gates at select stations.

The safety-focused initiative is geared toward using the existing network of cameras, the same system whose streaming video feeds were not functioning at a Sunset Park station during an April 2022 subway shooting.

A December 2022 report on the outage from the MTA Inspector General found that the video stream at the Brooklyn station and two other stops had gone down four days before the shooting.

In its request for information, the MTA acknowledged some of the issues associated with the use of its eyes in the transit system.

“With more than 15,000 cameras deployed across approximately 472 subway stations, current monitoring practices remain manual, reactive and resource intensive,” it notes.

The document adds that the MTA is aiming for that monitoring setup to evolve into a “proactive intelligence-driven ecosystem, capable of flagging behavior, risk assessment and incident response.”

While the initiative would be grounded in advanced video analytics and AI technologies, insights from certified subject-matter experts in behavioral science and psychology who have “a deep understanding of human behavior in transit environments” would guide the effort, according to the MTA.

There is no timetable for the project. The next step will involve reviewing submissions from interested parties to determine what could eventually be deployed within an around-the-clock transit system that carries close to 4 million subway riders daily.

The MTA’s chief security officer said its potential value is “immense” to riders.

“We’re looking to move forward as soon as we find something that we’re comfortable with,” Kemper said.

Greco, of Legal Aid, countered that the MTA needs to proceed with caution when it comes to predictive technology on “unusual” or “unsafe” behavior in the subway system.

“How’s that going to work and who gets to make that decision and what are the consequences of that decision?” he said. “If it determines that there is an unsafe behavior coming — based on who knows what it will use to determine that — what happens next?

“Are we essentially now policing people for being strange?”


This article was produced by THE CITY. The views expressed here are the author's own.