A team led by Harvard computer scientists has introduced a new framework called “cy-trust” to make networks of robots and self-driving vehicles safer. The system assigns trust scores to data from other agents to prevent harmful behavior. It solves a critical problem: traditional cybersecurity is not enough for fleets of machines that coordinate in real time.
The framework, proposed in a paper published in Proceedings of the IEEE, helps multi-agent systems — such as self-driving rideshare fleets, smart power grids, or search-and-rescue robots — decide which information to trust before acting.
Stephanie Gil, the John L. Loeb Associate Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), led the multi-university team. Co-authors include Andrea Goldsmith, president of Stony Brook University, and experts from Princeton, Arizona State, and other institutions.
Traditional security focuses on controlling who can access a system. But for robots and vehicles, a hacked or self-interested agent can cause real physical harm: an autonomous car speeding up to cut in line dangerously, fake data in traffic maps rerouting cars, or a rescue robot lying about its location and leaving gaps in search coverage.
Each agent uses onboard sensors — cameras, lidar, radar, GPS — to cross-check information from others. It also analyzes wireless signals to verify the true source of the data. Each piece of data gets a trust value between 0 and 1. That score decides how much it should influence the agent’s actions. Low trust? The robot simply ignores that input.
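The weighting idea can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: the function name, the report format, and the 0.2 cutoff are all assumptions.

```python
def fuse_reports(reports, trust_threshold=0.2):
    """Combine peer reports weighted by trust score.

    reports: list of (value, trust) pairs, with trust in [0, 1].
    Reports whose trust falls below the threshold are ignored outright,
    mirroring the article's "low trust? the robot simply ignores it."
    """
    kept = [(v, t) for v, t in reports if t >= trust_threshold]
    if not kept:
        return None  # no trustworthy input; fall back to onboard sensors
    total_trust = sum(t for _, t in kept)
    # Higher-trust reports pull the fused estimate more strongly.
    return sum(v * t for v, t in kept) / total_trust

# Two trusted peers agree near 10; a low-trust (possibly spoofed)
# report of 55 is discarded rather than skewing the estimate.
estimate = fuse_reports([(10.0, 0.9), (10.4, 0.8), (55.0, 0.05)])
```

The key design point is that trust is continuous, not binary: a mildly suspect report still contributes, just with less influence, while clearly untrustworthy data is cut off entirely.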
In real-world use, this could protect rideshare fleets already operating in Phoenix and San Francisco. It could help truck platooning (automated convoys) and warehouse robots like those at Amazon. The framework also needs policy and regulation to gain public acceptance before wide deployment.
The framework is still being tested in labs. In one experiment, “blue” cooperative robots were attacked by “red” fake-identity robots. The blue robots used signal processing to detect that many fake IDs originated from a single source, then ignored them. But scaling this to noisy, chaotic real-world conditions requires more work.
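The detection logic in that experiment can be caricatured as follows. This is a hedged sketch: the actual work uses wireless signal processing (e.g., direction-of-arrival profiles), while here a hashable "fingerprint" token stands in for the physical signature, and the one-ID-per-source rule is an assumption.

```python
def detect_sybils(messages, max_ids_per_source=1):
    """Flag sender IDs whose physical fingerprint is shared by many IDs.

    messages: list of (sender_id, fingerprint) pairs, where the
    fingerprint stands in for a measured signal signature.
    Returns the set of suspicious sender IDs.
    """
    ids_by_fingerprint = {}
    for sender_id, fingerprint in messages:
        ids_by_fingerprint.setdefault(fingerprint, set()).add(sender_id)

    suspicious = set()
    for ids in ids_by_fingerprint.values():
        if len(ids) > max_ids_per_source:
            # Many distinct IDs from one emitter: likely fake identities.
            suspicious |= ids
    return suspicious

# "r7"-"r9" all emit from the same physical source and get flagged;
# "r1" and "r2" each have their own fingerprint and are left alone.
msgs = [("r1", "fpA"), ("r2", "fpB"),
        ("r7", "fpX"), ("r8", "fpX"), ("r9", "fpX")]
```

In the lab setup this is easy; the article's caveat about noisy real-world conditions applies exactly here, since real signal fingerprints are continuous and overlapping rather than clean discrete labels.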
As Andrea Goldsmith said, we are moving into a world where physical systems are controlled by AI in the cloud. Without a rigorous trust framework, malicious agents could cause accidents, endanger pedestrians, or collapse supply chains. Cy-trust offers a way to build resilience from the ground up.