These scientists have a unique way of tackling video deepfakes - and all it takes is a burst of light

  • Noise-coded illumination hides invisible video watermarks inside light patterns for tampering detection
  • The system stays effective across varied lighting, compression levels, and camera motion conditions
  • Forgers must replicate multiple matching code videos to bypass detection successfully

Cornell University researchers have developed a new method to detect manipulated or AI-generated video by embedding coded signals into light sources.

The technique, known as noise-coded illumination, hides information within seemingly random light fluctuations.

Each embedded watermark carries a low-fidelity, time-stamped version of the original scene under slightly altered lighting – and when tampering occurs, the manipulated areas fail to match these coded versions, revealing evidence of alteration.
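The researchers' exact pipeline is not reproduced in public reporting, but a minimal numpy sketch conveys the core idea: a secret pseudorandom sequence nudges the light level frame by frame, and correlating each pixel's brightness with that sequence recovers a faint image of the scene. A pasted-in region never received the coded light, so it recovers as a mismatch. Every parameter below, from the modulation depth to the toy scene, is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # stand-in for a secret, keyed code generator

# Illustrative assumptions: 300 frames of 64x64 grayscale video and a
# roughly 1% light modulation, well below what viewers would notice.
n_frames, h, w = 300, 64, 64
code = rng.choice([-1.0, 1.0], size=n_frames)  # pseudorandom flicker sequence
scene = rng.uniform(0.2, 0.8, size=(h, w))     # static stand-in for the scene
depth = 0.01                                   # modulation depth

# "Record" the scene under coded illumination (simple multiplicative model).
video = scene[None] * (1.0 + depth * code[:, None, None])

def recover(vid):
    """Correlate each pixel's time series with the code, recovering the
    low-fidelity image the coded light source painted onto the scene."""
    residual = vid - vid.mean(axis=0, keepdims=True)
    return (residual * code[:, None, None]).mean(axis=0) / depth

# A patch replaced with codeless content (think: a swapped-in face)
# never saw the coded light, so it fails to recover.
tampered = video.copy()
tampered[:, 20:40, 20:40] = 0.5

err_ok = np.abs(recover(video) - scene).mean()
err_bad = np.abs(recover(tampered)[20:40, 20:40] - scene[20:40, 20:40]).mean()
print(f"intact footage error:  {err_ok:.4f}")   # near zero: the code checks out
print(f"tampered region error: {err_bad:.4f}")  # large: the patch carries no code
```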

Using illumination patterns as a forensic tool

The system can be implemented in software on computer displays, or in hardware by attaching a small chip to standard lamps.

Because the embedded data appears as noise, detecting it without the decoding key is extremely difficult.

The approach relies on information asymmetry: anyone attempting to create a deepfake lacks the unique embedded data needed to produce a forgery that passes verification.
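One way to see why the code is invisible without the key: derive each frame's flicker from a keyed hash, so that to anyone who does not hold the key the sequence is computationally indistinguishable from ordinary light fluctuation. The construction below is a standard illustration of that idea, not the researchers' actual key schedule.

```python
import hmac, hashlib

def code_bit(key: bytes, frame_index: int) -> int:
    """Derive the flicker sign for one frame from a secret key.
    Without the key, the resulting +/-1 sequence is computationally
    indistinguishable from random light fluctuation."""
    digest = hmac.new(key, frame_index.to_bytes(8, "big"), hashlib.sha256).digest()
    return 1 if digest[0] & 1 else -1

key = b"per-lamp secret key"                  # hypothetical: one key per light source
print([code_bit(key, i) for i in range(12)])  # unpredictable without `key`
```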

The researchers tested their method against a range of manipulation techniques, including deepfakes, compositing, and changes to playback speed.

They also evaluated it under varied environmental conditions, such as different light levels, degrees of video compression, camera movement, and both indoor and outdoor settings.

In every scenario, the coded-light technique remained effective, catching even alterations too subtle for human perception.

Even if a forger learned the decoding method, they would still need to fabricate a separate code-matching version of the footage for each coded light source.

Every one of those versions would have to align with its hidden light pattern, which greatly increases the difficulty of producing an undetectable forgery.

The research addresses an increasingly urgent problem in digital media authentication, as the availability of sophisticated editing tools means people can no longer assume that video represents reality without question.

While methods such as checksums can detect file changes, they cannot distinguish between harmless compression and deliberate manipulation.
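A few lines of Python make that limitation concrete: a harmless re-encode and a malicious edit both change the file's bytes, so both produce an equally mismatched digest. The byte strings here are placeholders standing in for real video files.

```python
import hashlib

original     = b"raw video bytes ..."
recompressed = b"same scenes, re-encoded ..."        # benign change
tampered     = b"one frame with a swapped face ..."  # malicious change

for label, blob in [("original", original),
                    ("recompressed", recompressed),
                    ("tampered", tampered)]:
    print(f"{label:12s} {hashlib.sha256(blob).hexdigest()[:16]}")

# Both non-original digests fail to match; the hash alone cannot say
# which change was harmless and which was a forgery.
```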

Some watermarking technologies require control over the recording equipment or the original source material, making them impractical for broader use.

The noise-coded illumination technique could be integrated into security suites to protect sensitive video feeds.

This form of embedded authentication may also help reduce risks of identity theft by safeguarding personal or official video records from undetected tampering.

While the Cornell team highlighted the strong protection its method offers, it cautioned that the broader challenge of deepfake detection will persist as manipulation tools evolve.
