Consumer devices are particularly exposed to hardware security threats. Because they operate in uncontrolled environments without physical protection, they can be subjected to local attacks mounted with improvised setups and makeshift labs. While such local attacks can have a disastrous effect on security, gaining physical access to a target is often difficult for an attacker. Researchers are therefore interested in whether these attacks can be executed from a greater distance. For instance, when a device leaks confidential side-channel information through electromagnetic emanations, that leakage may also be measurable from some distance with a sufficiently sensitive antenna.
Researchers from Cornell and Ben Gurion universities recently came up with a novel approach. Essentially, they convert leakage from one side channel to another, which allows observation from a greater distance. Their targets were cryptographic implementations in smart cards and smartphones that leak key material through their power consumption. Since power consumption can only be measured locally, they looked for an existing component that would transform the power signal into something that could bridge a larger distance.
On the transmission side, they selected a power LED as the light source, since high-frequency signals travel easily over light (think of the fibers used for high-speed internet). The power LED is connected to the same power source that feeds the cryptographic processor. As the leakage signal draws current from that source, it slightly modulates the voltage available to the power LED, which may be observable from a distance. They performed two experiments: one with a smart card inserted in a reader with a power LED, and one with a smartphone whose power cable also fed a Bluetooth speaker with a power LED.
On the receiving side they used a digital camera, adjusted to achieve a very fast effective frame rate (60,000 pictures per second). The camera was shown to detect tiny light fluctuations from the power LEDs at a distance of 16 meters. Filtering software was then used to extract the original power signal of the cryptographic device, from which a cryptographic key could be recovered. All of this is done with off-the-shelf hardware, which could plausibly be present at locations where cryptographic functions are performed and where attackers may gain access to the camera feed.
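To give a feel for the receiving side, here is a minimal sketch of how a slow, key-dependent signal might be recovered from per-frame LED brightness values. All names, frequencies, and amplitudes are illustrative assumptions for this sketch, not the researchers' actual pipeline: we simulate a weak 1 kHz leakage tone buried in sensor noise, then isolate it with a crude FFT band-pass filter.

```python
import numpy as np

# Illustrative parameters (assumptions, not the published setup)
FRAME_RATE = 60_000          # camera samples per second, as in the demo
DURATION = 0.05              # seconds of simulated footage
t = np.arange(int(FRAME_RATE * DURATION)) / FRAME_RATE

# Simulated leakage: a 1 kHz tone standing in for the key-dependent
# part of the power trace, buried in broadband camera noise.
leak = 0.01 * np.sin(2 * np.pi * 1_000 * t)
rng = np.random.default_rng(0)
brightness = 1.0 + leak + 0.05 * rng.standard_normal(t.size)

# Crude FFT band-pass around the expected leakage frequency.
spectrum = np.fft.rfft(brightness)
freqs = np.fft.rfftfreq(t.size, d=1 / FRAME_RATE)
spectrum[(freqs < 800) | (freqs > 1_200)] = 0
recovered = np.fft.irfft(spectrum, n=t.size)

# The filtered trace correlates strongly with the true leakage.
corr = np.corrcoef(recovered, leak)[0, 1]
print(round(corr, 2))
```

The band-pass filter discards most of the noise power, so even a leakage tone five times weaker than the per-sample noise becomes clearly visible, which is the basic reason such an attack can work at all.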
The researchers showed that a local physical attack can be stretched into the vicinity of a device, and that the threat of local attacks should therefore be taken more seriously. However, drawing on 20 years of experience with side-channel attacks in a sophisticated security lab, we argue that this attack is little more than a fancy demonstration without practical impact.
First of all, the transformation of a cryptographic power-consumption signal into a power-source fluctuation is very lossy. Although it was demonstrated that some signal remains, that signal is heavily attenuated by the normal inductance and capacitance of the power line and supply. Moreover, the same power source feeds many more (noisy) processes than just the cryptographic one. This leads to an extreme reduction of the signal-to-noise ratio. By contrast, when an evaluation lab measures power leakage, it does so as close as possible to the consuming chip, after removing noise sources and signal-flattening components. The demonstrated setup using a power LED therefore suffers from extreme information loss.
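The attenuation argument can be quantified with a first-order low-pass model. The component values below are illustrative assumptions (they are not taken from the paper or any specific device): a small series resistance and a decoupling capacitor on the supply rail form an RC filter whose attenuation grows rapidly with frequency.

```python
import numpy as np

# Hypothetical first-order RC model of the supply rail between the
# cryptographic core and the power LED (illustrative values only).
R = 1.0      # ohms of trace/supply resistance
C = 10e-6    # farads of decoupling capacitance

def attenuation_db(f):
    """Attenuation of a leakage component at frequency f (hertz)."""
    return -20 * np.log10(np.sqrt(1 + (2 * np.pi * f * R * C) ** 2))

for f in (1e3, 100e3, 10e6):
    print(f"{f / 1e3:>8.0f} kHz: {attenuation_db(f):6.1f} dB")
```

With these values the cutoff sits near 16 kHz: slow leakage passes almost untouched, while components in the megahertz range, where a modern chip's key-dependent activity lives, are suppressed by tens of decibels before the LED ever sees them.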
Secondly, recording through a video camera imposes a severe speed limitation. Even adjusted to 60,000 frames per second, the camera comes nowhere close to professional oscilloscopes, which can capture more than a billion samples per second. By the Nyquist criterion, a 60 kHz sampling rate can only faithfully represent signal content below 30 kHz, so fast signal variations, which often matter a lot, are totally invisible to the chosen attack method.
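The sampling limitation is easy to demonstrate. In this sketch (the 1 MHz leakage frequency is a hypothetical example, not a figure from the paper), a fast key-dependent tone sampled at the camera's 60 kHz rate does not disappear cleanly: it aliases onto a low frequency, where it is indistinguishable from genuine slow activity.

```python
import numpy as np

fs_camera = 60_000     # camera samples per second, as in the demo
f_leak = 1_000_000     # hypothetical 1 MHz key-dependent activity

n = 4096
t = np.arange(n) / fs_camera
# The camera only "sees" the 1 MHz signal at these sparse instants.
samples = np.sin(2 * np.pi * f_leak * t)

# Locate the apparent frequency of the sampled tone.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(n, d=1 / fs_camera)
peak = freqs[np.argmax(spectrum)]
print(round(peak))
```

The 1 MHz tone reappears near 20 kHz, below the 30 kHz Nyquist limit. An attacker cannot undo this folding, so all the fine timing structure that a gigasample oscilloscope would capture is irretrievably lost.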
So how did this demonstration still succeed? The researchers deliberately chose cryptographic algorithms that are slow, and used extremely leaky (outdated) implementations. Such implementations would be considered “functional proof-of-concept” and would never pass any security certification. The problem demonstrated in this work has been studied for 25 years, and hundreds of publications describe how to avoid or mitigate it. The attack is original and, in theory, allows an adversary to operate remotely without local test equipment. But it can only succeed where the affected application has not received a reasonable level of security testing. In that sense, it is a good reminder that security testing remains important.