Thursday, July 30, 2015

Optical Emission Security – Frequently Asked Questions

Markus Kuhn

In the paper Optical Time-Domain Eavesdropping Risks of CRT Displays, 2002 IEEE Symposium on Security and Privacy, Berkeley, California, May 2002, I describe a new eavesdropping technique that reconstructs text on computer screens from diffusely reflected light. This publication attracted wide media attention (BBC, New Scientist, Wired, Reuters, Slashdot). Here are answers to some of the questions I have received, along with some introductory information for interested readers looking for a more high-level summary than the full paper, which was mainly written for an audience of hardware-security and optoelectronics professionals.

Q: How does this new eavesdropping technique work?

To understand what is going on, you have to recall how a cathode-ray tube (CRT) display works. An electron beam scans across the screen surface at enormous speed (tens of kilometers per second) and targets one pixel after another. In this way it hits tens or hundreds of millions of pixels every second, converting electron energy into light. Even though each pixel shows an afterglow for longer than the time the electron beam needs to refresh an entire line of pixels, each pixel is much brighter while the e-beam hits it than during the remaining afterglow. My discovery of this very short initial brightness peak in the light decay curve of a pixel is what makes this eavesdropping technique work.

An image is created on the CRT surface by varying the electron-beam intensity for each pixel. The room in which the CRT is located is partially illuminated by the pixels. As a result, the light in the room becomes a measure of the electron-beam current. In particular, there is a little invisible ultrafast flash each time the electron beam refreshes a bright pixel that is surrounded by dark pixels on its left and right. So if you measure the brightness of a wall in this room with a very fast photosensor, and feed the result into another monitor that receives the exact same synchronization signals for steering its electron beam, you get to see an image like this:

raw rasterized photomultiplier signal

You can already recognize some large characters and lines of text, but the long afterglow of the phosphor still distorts the image significantly. A mathematical signal-processing technique known as deconvolution can now be used to undo this blurring to some degree:

deconvolved photomultiplier signal

This magnification shows that even small font sizes become readable:

magnified excerpt
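To make the deconvolution step more concrete, here is a minimal sketch of the kind of processing involved. It is not the procedure from the paper; it simply models the phosphor afterglow as an exponential impulse response and divides it out with a regularized (Wiener-style) inverse filter in the frequency domain. All parameter values are placeholders.

```python
import numpy as np

def deconvolve_afterglow(signal, sample_rate_hz, decay_time_s, noise_power=1e-3):
    """Wiener-style deconvolution of a photosensor trace.

    Models the phosphor afterglow as a one-sided exponential impulse
    response and divides it out in the frequency domain, with a small
    regularization term so that noise is not amplified without bound.
    """
    n = len(signal)
    t = np.arange(n) / sample_rate_hz
    h = np.exp(-t / decay_time_s)          # assumed afterglow impulse response
    h /= h.sum()                           # normalize to unit gain

    S = np.fft.rfft(signal)
    H = np.fft.rfft(h)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # regularized inverse filter
    return np.fft.irfft(S * W, n)

# Synthetic example: isolated "pixel flashes" blurred by a 5 microsecond afterglow.
if __name__ == "__main__":
    fs = 100e6                                         # 100 MS/s sampling (placeholder)
    n = 4096
    clean = np.zeros(n)
    clean[::512] = 1.0                                 # isolated bright pixels
    t = np.arange(n) / fs
    blurred = np.convolve(clean, np.exp(-t / 5e-6), mode="full")[:n]
    blurred += np.random.normal(0, 0.01, n)            # measurement noise
    restored = deconvolve_afterglow(blurred, fs, decay_time_s=5e-6)
```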
Q: How far away does this work, and what is "shot noise"?

The amount of noise that a constant light level introduces into a photosensor is proportional to the square root of the light intensity. This is because light does not arrive as a continuous stream of energy, but in small countable energy packets (photons). The number of photons that arrive during some fixed time interval varies randomly, and this variation is described by what statisticians call the Poisson distribution. It is a characteristic of the Poisson distribution that if you expect on average N photons to arrive, then the average difference between N and the actual number of photons that you got will be sqrt(N). In electronics, this fluctuation is called shot noise. Shot noise means that, in order to get a signal, the number of photons that you receive per pixel from the CRT must be at least the square root of the number of photons that you get from other light sources such as the sun or light bulbs. As the eavesdropper moves further away, the receiver will be able to capture fewer photons. Even though the ratio between CRT photons and background photons might remain roughly the same, the square root of the number of background photons will grow relative to the CRT photon count with distance, thereby reducing the signal-to-noise ratio.

The paper contains the mathematical details for calculating the signal-to-noise ratio at various distances. In one example calculation, in which I used what I hope are practically interesting parameters for background light and sensor size, I ended up with a maximum eavesdropping distance on the order of 50 meters. It is important to understand that this figure is just one example calculation result. Changes in the background light, the pixel frequency, the required signal-to-noise ratio, or other parameters will lead to significantly different distances. Having said that, I do believe the outcome of this study can be summarized as follows: eavesdropping on a computer monitor displaying common font sizes, via light reflected from a wall, seems feasible from a building on the other side of a street if the targeted room is only weakly illuminated. Interested readers will find in the paper enough data and information to perform realistic numeric simulations of an eavesdropping attack in a specific situation.

Q: What can eavesdroppers do to improve reception?

There are a number of techniques; some of those mentioned in the paper are:
  • The eavesdropped video signal is periodic over at least a few seconds, so periodic averaging over a few hundred frames can reduce the noise significantly (see the averaging sketch after this list).
  • If you know exactly what font is used, many of the equalization and symbol detection techniques used in modems or pattern recognition applications can be applied to recover the text (remote optical character recognition).
  • Optical filters can suppress background light at wavelengths (colours) other than those emitted by the screen phosphors.
  • A large sensor aperture (telelens, telescope) can improve the photon count.
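As a rough illustration of the periodic-averaging idea (this is not code from the paper, and the frame length and counts are placeholders): averaging M repetitions of a periodic signal leaves the signal unchanged while uncorrelated noise shrinks by a factor of sqrt(M).

```python
import numpy as np

def average_frames(trace, samples_per_frame):
    """Fold a long photosensor trace into frames and average them.

    Assumes the trace is sampled synchronously with the video refresh,
    so every frame covers exactly the same pixels; uncorrelated noise
    then averages down by sqrt(number of frames).
    """
    n_frames = len(trace) // samples_per_frame
    frames = trace[:n_frames * samples_per_frame].reshape(n_frames, samples_per_frame)
    return frames.mean(axis=0)

# Synthetic demonstration: 300 noisy repetitions of one frame.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random(1000)                              # stand-in for one video frame
    noisy = np.tile(frame, 300) + rng.normal(0, 1.0, 300 * 1000)
    recovered = average_frames(noisy, 1000)
    print("residual noise std:", np.std(recovered - frame))  # roughly 1/sqrt(300)
```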
Q: What can I do to prevent reception?

There are a number of techniques; some of those mentioned in the paper are:
  • Reception is difficult if not impossible from well-lit rooms in which the CRT does not make a visible contribution to the ambient illumination. Don't work in the dark.
  • Do not assume that etched or frosted glass surfaces prevent this technique if there is otherwise a direct line of sight to the screen surface.
  • This particular eavesdropping technique is not applicable to LCDs and other flat-panel displays that refresh all pixels in a row simultaneously.
  • Make sure nobody can install eavesdropping equipment within a few hundred meters' line of sight of your window.
  • Use a screen saver that removes confidential information from the monitor in your absence.
Q: What phosphor types reduce the risk, and which monitor brands use them?

Colour computer monitor vendors currently do not provide any useful information about which screen phosphor type is used. The designation "P22" sometimes found in datasheets is mostly meaningless, as it describes any combination of red/green/blue phosphors that fulfill the NTSC TV colour specification. For that reason, I have not yet performed a systematic test of the different phosphor types currently on the market.

Q: What colour combination is most difficult to eavesdrop on?

A bright background contributes shot noise. In the monitor I tested, the red phosphor had by far the lowest initial luminosity. It emitted a much smaller part of its stored energy in the first microsecond than did the blue and green phosphors. Normal light bulbs emit more red than blue light. It might therefore seem prudent to place critical text information into the red channel only, while keeping the other channels at maximum brightness, leading to colour combinations such as cyan on white or green on yellow. However, such low-contrast text display violates ergonomic standards, so I would advise against this approach. In addition, the light from typical red phosphors is concentrated in a few narrow spectral lines and might therefore be more easily separated by wavelength from background light.

Q: Is this really a serious risk?

A radio-frequency technique for video display eavesdropping was first described in the open literature by Wim van Eck in 1985. Even though it generated considerable excitement at the time and NATO governments have invested billions of dollars in countermeasures, not a single case of computer crime based on this technique has been uncovered so far, apart from occasional suspicions in some financial institutions that handle information of very high short-term value. This either means that such compromising-emanations attacks are practically not used, or, if they are used, then so rarely and so well prepared that the eavesdroppers have never been caught. It is my understanding that some major NATO intelligence agencies still maintain the capability to perform remote RF video display eavesdropping, but I have no idea how valuable they find such techniques in practice for gathering useful intelligence. Compared to radio-frequency emanations, the equipment needed for optical eavesdropping is somewhat easier to obtain, because suitable commercial RF wide-band receivers are normally rather expensive and export controlled (though obtaining a second-hand one was not entirely impossible, even for a research student, and there are always numerous ways to modify readily available low-cost equipment suitably). Photomultipliers and digital storage oscilloscopes, on the other hand, are rather common laboratory equipment. The feasible eavesdropping ranges in an urban/suburban environment are very comparable for both techniques. Optical eavesdropping is somewhat more restricted in that, for the diffuse-reflection case, low background illumination is essential to make it work; on the other hand, it usually yields much better image quality and works not only for text but also for colour images. I would be surprised if optical CRT eavesdropping finds widespread use.
The risk is not entirely unrealistic, but it should probably be of primary concern only to those in charge of high-security facilities that are targeted by intelligence agencies or well-funded industrial-espionage teams, and that have already invested significant effort in closing every other known type of security vulnerability. There might also be specialized applications for TV or software licence enforcement targeted against private and commercial consumers. A compromising-emanations eavesdropper has to be extremely patient, because it is necessary to wait until the information of interest is displayed or processed by the system, which is often not easy to predict. In most commercial environments, it is far easier for a determined attacker to break into a facility, gain physical access to the storage media and take or copy them, in order to obtain noise-free access to all information at once, or to bribe an employee with access to do the same. Unsurprisingly, break-ins in which the only thing the burglars were interested in was the hard disk of a company's central file server are reported regularly. For me, this project was mostly driven by academic curiosity. It was a good excuse to learn a lot about optoelectronics and to get a photomultiplier to play with. I hope that not only computer security educators and textbook authors will find this demonstration a valuable example of how surprising and unexpected hardware-security side channels can be.

Q: What about LEDs?

For devices with RS-232 serial ports, it is customary to provide a status-indicator LED for some of the signal lines (in particular transmit data and receive data). Often, these LEDs are directly connected to the line via just a resistor. As a result, anyone with a line of sight to the LED, some optics and a simple photosensor can see the data stream. Joe Loughry and David A. Umphress have recently announced a detailed study (submitted to ACM Transactions on Information and System Security) in which they tested 39 communications devices with 164 LED indicators; on 14 of the tested devices they found serial-port data in the LED light. Based on their findings, it seems reasonable to conclude that LEDs on RS-232 ports are most likely carrying the data signal today, whereas LEDs on high-speed data links (LANs, hard disks) are not. Even the latter LEDs, however, remain available as a covert channel for malicious software that actively tries to transmit data optically. I expect that this paper will cause a number of modem manufacturers to add a little pulse stretcher (monostable multivibrator) to the LEDs in the next chip-set revision, and that at some facilities with particular security concerns, the relevant LEDs will be removed or covered with black tape. The data traffic on LEDs is not a periodic signal, and therefore, unlike with video signals, periodic averaging cannot be used to improve the signal-to-noise ratio. The shot-noise-limit estimation technique that I used to assess the CRT eavesdropping risk can be applied even more easily (because no deconvolution is needed) to serial-port indicators, and allows us to estimate a lower bound for the bit-error rate at a given distance.
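Purely as an illustration of the kind of estimate meant here, the sketch below applies the standard photon-counting argument from the shot-noise discussion above. Every numeric parameter is a placeholder, and the bit-error formula is the usual Gaussian approximation for on-off keying, not the paper's exact derivation.

```python
import math

def shot_noise_snr(signal_photons, background_photons):
    """SNR of a photon-counting receiver limited by shot noise.

    signal_photons:     mean photons per bit contributed by the LED (or CRT pixel)
    background_photons: mean photons per bit from all other light sources
    """
    noise = math.sqrt(signal_photons + background_photons)
    return signal_photons / noise

def bit_error_rate(snr):
    """Bit-error rate for on-off keying with a mid-level threshold,
    using the Gaussian tail approximation Q(snr/2)."""
    return 0.5 * math.erfc(snr / (2 * math.sqrt(2)))

# Placeholder example: 1e4 signal photons per bit against 1e6 background photons per bit.
if __name__ == "__main__":
    snr = shot_noise_snr(1e4, 1e6)
    print(f"SNR ~ {snr:.1f}, BER ~ {bit_error_rate(snr):.2e}")
```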
I have performed a few example calculations and concluded that, with a direct line of sight and a 100 kbit/s signal (typical for an external telephone modem), it should be no problem to acquire a reliable signal (one wrong bit every 10 megabits) at 500 m distance, whereas for indirect reflection off the wall of a dark room, a somewhat noisier signal (at least one wrong bit per 10 kilobits) can be expected to be receivable at a distance of a few tens of meters.

Q: Where can I learn more?

My dissertation, Compromising emanations: eavesdropping risks of computer displays, is currently perhaps the most detailed openly available compendium on electromagnetic eavesdropping techniques for video displays. Pretty good online repositories of material related to compromising emanations have been collected by Joel McNamara and John Young.
created 2002-03-05 – last modified 2004-11-29 – http://www.cl.cam.ac.uk/~mgk25/emsec/optical-faq.html

Saturday, July 4, 2015

SO, WHAT WAS 9/11 NANO THERMITE?
IT WAS NANO-EXPLOSIVE ELECTRONIC CHIPS, VERY SMALL CHIPS!
"....chip of porous silicon, laced with gadolinium nitrate, exploded after being scratched..."

Friday, July 3, 2015

Stealing Keys from PCs using a Radio:
Cheap Electromagnetic Attacks on Windowed Exponentiation



Daniel Genkin, Technion and Tel Aviv University
Lev Pachmanov, Tel Aviv University
Itamar Pipman, Tel Aviv University
Eran Tromer, Tel Aviv University



This web page contains an overview of, and Q&A about, our recent results published in a technical paper (PDF, 2.1MB), archived as IACR ePrint 2015/170. It will be presented at the Workshop on Cryptographic Hardware and Embedded Systems (CHES) 2015 in September 2015.

This research was conducted at the
Laboratory for Experimental Information Security (LEISec).

Overview

We demonstrate the extraction of secret decryption keys from laptop computers by non-intrusively measuring electromagnetic emanations for a few seconds from a distance of 50 cm. The attack can be executed using cheap and readily-available equipment: a consumer-grade radio receiver or a Software Defined Radio USB dongle. The setup is compact and can operate untethered; it can be easily concealed, e.g., inside pita bread. Common laptops, and popular implementations of RSA and ElGamal encryption, are vulnerable to this attack, including those that implement the decryption using modern exponentiation algorithms such as sliding-window, or even its side-channel-resistant variant, fixed-window (m-ary) exponentiation.
We successfully extracted keys from laptops of various models running GnuPG (popular open source encryption software, implementing the OpenPGP standard), within a few seconds. The attack sends a few carefully-crafted ciphertexts, and when these are decrypted by the target computer, they trigger the occurrence of specially-structured values inside the decryption software. These special values cause observable fluctuations in the electromagnetic field surrounding the laptop, in a way that depends on the pattern of key bits (specifically, the key-bits window in the exponentiation routine). The secret key can be deduced from these fluctuations, through signal processing and cryptanalysis.

The attack can be mounted using various experimental setups:

  • Software Defined Radio (SDR) attack. We constructed a simple shielded loop antenna (15 cm in diameter) using a coaxial cable. We then recorded the signal produced by the probe using an SDR receiver. The electromagnetic field thus measured is affected by ongoing computation, and our attacks exploit this to extract RSA and ElGamal keys within a few seconds (see the capture sketch after this list).
    Electromagnetic measurement
  • Untethered SDR attack. Setting out to simplify and shrink the analog and analog-to-digital portion of the measurement setup, we constructed the Portable Instrument for Trace Acquisition (Pita), which is built of readily-available electronics and food items (see instructions here). Pita can be operated in two modes. In online mode, it connects wirelessly to a nearby observation station via WiFi and provides real-time streaming of the digitized signal. The live stream helps optimize probe placement and allows adaptive recalibration of the carrier frequency and SDR gain. In autonomous mode, Pita is configured to continuously measure the electromagnetic field around a designated carrier frequency and to record the digitized signal onto an internal microSD card for later retrieval, by physical access or via WiFi. In both cases, signal analysis is done offline, on a workstation.
    Untethered Attack
  • Consumer radio attack. Despite its low price and compact size, the Pita device still requires the purchase of an SDR receiver. As discussed, the leakage signal is modulated around a carrier of roughly 1.7 MHz, which lies within the commercial AM radio band. We managed to use a plain consumer-grade radio receiver, in place of the magnetic probe and SDR receiver, to acquire the desired signal, and recorded it by connecting the radio's audio output to the microphone input of an HTC EVO 4G smartphone.
    radio attack
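For readers who want a feel for the capture step, here is a minimal sketch of recording IQ samples around the carrier of interest with an rtl-sdr dongle and the pyrtlsdr Python package. This is illustrative only: the setup described above used a FUNcube Dongle Pro+ with its own software, tuning an rtl-sdr near 1.7 MHz typically requires direct-sampling mode or an upconverter, and all parameter values below are placeholders.

```python
import numpy as np
from rtlsdr import RtlSdr   # pyrtlsdr package (assumed available)

# Placeholder capture parameters; the leakage carrier is around 1.7 MHz.
CENTER_FREQ_HZ = 1.7e6      # may need an upconverter or direct-sampling mode
SAMPLE_RATE_HZ = 2.4e6
NUM_SAMPLES = 2 ** 22       # roughly two seconds of IQ data at this rate

sdr = RtlSdr()
try:
    sdr.sample_rate = SAMPLE_RATE_HZ
    sdr.center_freq = CENTER_FREQ_HZ
    sdr.gain = 'auto'
    iq = sdr.read_samples(NUM_SAMPLES)   # complex IQ samples
finally:
    sdr.close()

np.save('capture.npy', np.asarray(iq, dtype=np.complex64))
```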

Q&A

Q1: What information is leaked by the electromagnetic emanations from computers?

This depends on the specific computer hardware. We have tested numerous laptop computers, and found the following:
  • On almost all machines, it is possible to tell, with sub-millisecond precision, whether the computer is idle or performing operations.
  • On many machines, it is moreover possible to distinguish different patterns of CPU operations and different programs.
  • Using GnuPG as our study case, we can, on some machines:
    • distinguish between the spectral signatures of different RSA secret keys (signing or decryption), and
    • fully extract decryption keys, by measuring the laptop's electromagnetic emanations during decryption of a chosen ciphertext.
A good way to visualize the signal is as a spectrogram, which plots the measured power as a function of time and frequency. For example, in the following spectrogram (recorded using the first setup pictured above), time runs vertically (spanning 2.1 seconds) and frequency runs horizontally (spanning 1.6-1.75 MHz).  During this time, the CPU performs loops of different operations (multiplications, additions, memory accesses, etc.). One can easily discern when the CPU is performing each operation, due to the different spectral signatures.
various CPU operations
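A spectrogram like the one above can be computed from a recorded trace with a few lines of standard signal processing. The sketch below uses scipy and matplotlib on the IQ samples saved by the capture sketch earlier; the file name and FFT parameters are placeholders.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

iq = np.load('capture.npy')      # IQ samples from the capture sketch (placeholder file)
fs = 2.4e6                       # sample rate used during capture

# Two-sided power spectrogram of the complex baseband signal.
f, t, Sxx = spectrogram(iq, fs=fs, nperseg=4096, noverlap=2048,
                        return_onesided=False)

# Plot with frequency horizontal and time vertical, as in the figures above.
power_db = 10 * np.log10(np.fft.fftshift(Sxx, axes=0)).T
plt.pcolormesh(np.fft.fftshift(f), t, power_db, shading='auto')
plt.xlabel('Frequency offset from carrier [Hz]')
plt.ylabel('Time [s]')
plt.colorbar(label='Power [dB]')
plt.show()
```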

Q2: Why does this happen?

Different CPU operations have different power requirements. As different computations are performed during the decryption process, different electrical loads are placed on the voltage regulator that provides the processor with power. The regulator reacts to these varying loads, inadvertently producing electromagnetic radiation that propagates away from the laptop and can be picked up by a nearby observer. This radiation contains information regarding the CPU operations used in the decryption, which we use in our attack.

Q3: How can I construct such a setup?

  • Software Defined Radio (SDR) attack. The main component in the first setup is a FUNcube Dongle Pro+ SDR receiver. Numerous cheap alternatives exist, including "rtl-sdr" USB receivers based on the Realtek RTL2832U chip (originally intended for DVB-T television receivers) with a suitable tuner and upconverter; the Soft66RTL2 dongle is one such example.
  • Untethered SDR attack. The Pita device uses an unshielded loop antenna made of plain copper wire, wound into 3 turns of diameter 13 cm, with a tuning capacitor chosen to maximize sensitivity at 1.7 MHz (which is where the key-dependent leakage signal is present). These are connected to the aforementioned FUNcube Dongle Pro+ SDR receiver. We control the SDR receiver using a small embedded computer, the Rikomagic MK802 IV. This is an inexpensive Android TV dongle based on the Rockchip RK3188 ARM SoC. It supports USB host mode, WiFi and flash storage. We replaced the operating system with Debian Linux, in order to run our software, which operates the SDR receiver via USB and communicates via WiFi. Power is provided by 4 NiMH AA batteries, which suffice for several hours of operation.
    Pita Device
  • Consumer radio attack. We have tried many consumer-grade radio receivers and smartphones with various results. Best results were achieved using a "Road Master" brand consumer radio connected to the microphone jack of an HTC EVO 4G smartphone, sampling at 48 kHz, through an adapter cable. The dedicated line-in inputs of PCs and sound cards do not require such an adapter, and yield similar results.
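If the signal was captured via the consumer-radio path, the smartphone recording is ordinary 48 kHz audio, and the first processing step is simply to load it and isolate the band of interest. A minimal sketch follows; the file name and cutoff frequencies are placeholders, not values from the paper.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

# Placeholder file: the 48 kHz recording made through the phone's microphone input.
rate, audio = wavfile.read('radio_capture.wav')
audio = audio.astype(np.float64)
if audio.ndim > 1:               # keep a single channel if the recording is stereo
    audio = audio[:, 0]

# Band-pass the audio to the range where the demodulated leakage is expected
# (cutoffs are placeholders; the useful band depends on the target machine).
low_hz, high_hz = 1e3, 20e3
b, a = butter(4, [low_hz / (rate / 2), high_hz / (rate / 2)], btype='band')
trace = filtfilt(b, a, audio)
```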

Q4: What is the range of the attack?

In order to extend the attack range, we added a 50 dB gain stage using a pair of inexpensive low-noise amplifiers (Mini-Circuits ZFL-500LN+ and ZFL-1000LN+ in series, $175 total). We also added a low-pass filter before the amplifiers. With this enhanced setup, the attack can be mounted from 50 cm away. Using better antennas, amplifiers and digitizers, the range can be extended even further.
 long-range attack

Q5: What if I can't get physically close enough to the target computer?

There are still attacks that can be mounted from large distances.
  • Laptop-chassis potential, measured from the far end of virtually any shielded cable connected to the laptop (such as Ethernet, USB, HDMI and VGA cables), can be used for key extraction, as we demonstrated in a paper presented at CHES'14.
  • Acoustic emanations (sound), measured via a microphone, can also be used to extract keys from a range of several meters, as we showed in a paper presented at CRYPTO'14.

Q6: What's new since your previous papers?

  • Cheap experimental setup. The previous papers required either a long attack time (about an hour) when using inexpensive equipment, or a fast attack (a few seconds) but using an expensive setup. In this paper we achieve the best of both, presenting an experimental setup which extracts keys quickly while remaining simple and cheap to construct.
  • New cryptographic technique addressing modern implementations. In the previous papers we attacked the naive square-and-multiply exponentiation algorithm and the square-and-always-multiply variant (which reduces side-channel leakage). However, most modern implementations utilize faster exponentiation algorithms: sliding-window, or for better side-channel resistance, m-ary exponentiation. In this paper we demonstrate a low-bandwidth attack on the latter two algorithms, extracting their secret keys.
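To clarify what "fixed-window (m-ary) exponentiation" refers to, here is a minimal textbook-style sketch (not GnuPG's code): the exponent is processed in fixed-size windows of w bits, and every window performs the same sequence of w squarings followed by exactly one table multiplication, regardless of the window's value, which is why it is considered more side-channel resistant than plain square-and-multiply.

```python
def fixed_window_pow(base, exponent, modulus, w=4):
    """Textbook fixed-window (m-ary) modular exponentiation.

    Precomputes base^0 .. base^(2^w - 1), then consumes the exponent in
    w-bit windows, doing w squarings and exactly one table multiplication
    per window regardless of the window's value.
    """
    m = 1 << w
    table = [1] * m
    for i in range(1, m):
        table[i] = (table[i - 1] * base) % modulus

    # Split the exponent into w-bit windows, most significant first.
    windows = []
    e = exponent
    while e:
        windows.append(e & (m - 1))
        e >>= w
    windows.reverse()

    result = 1
    for win in windows:
        for _ in range(w):                             # w squarings per window
            result = (result * result) % modulus
        result = (result * table[win]) % modulus       # one multiplication per window
    return result

# Sanity check against Python's built-in modular exponentiation.
assert fixed_window_pow(7, 123456789, 1000000007) == pow(7, 123456789, 1000000007)
```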

Q7: How can low-frequency (kHz) leakage provide useful information about a much faster (GHz) computation?

We use two main techniques.
  1. Leakage self-amplification. Individual CPU operations are too fast for our measurement equipment to pick up, but long operations (e.g., modular exponentiation in RSA and ElGamal) can create a characteristic (and detectable) spectral signature over many milliseconds. Using a suitably chosen ciphertext, we are able to use the algorithm's own code to amplify its own key leakage, creating very drastic changes, detectable even by low-bandwidth means.
  2. Data-dependent leakage. While most implementations (such as GnuPG) attempt to decouple the secret key from the sequence of performed operations, the operands to these operations are key-dependent and often not fully randomized. The attacker can thus attempt to craft special inputs (e.g., ciphertexts to be decrypted) to the cryptographic algorithm that "poison" the intermediate values inside the algorithm, producing a distinct leakage pattern when used as operands during the algorithm's execution. Measuring leakage during such a poisoned execution can reveal in which operations the operands occurred, and thus leak secret-key information.

    For example, the figure presents the leakage signal (after suitable processing) of an ElGamal decryption. The signal appears to be mostly regular in shape, and each peak corresponds to a multiplication performed by GnuPG's exponentiation routine. However, an occasional "dip" (low peak) can be seen. These dips correspond to a multiplication by a poisoned value performed within the exponentiation routine. 

    signal example
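As a toy illustration of how such dips might be turned into key information, the sketch below locates multiplication peaks in a processed trace and flags each one as normal or as a "dip" by comparing its height to the median peak height. This is a strong simplification of the actual signal processing, and the threshold and spacing parameters are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_multiplications(trace, min_distance, dip_ratio=0.6):
    """Locate multiplication peaks and flag the unusually low ones ("dips").

    trace:        processed leakage trace (already filtered/demodulated)
    min_distance: minimum number of samples between consecutive multiplications
    dip_ratio:    peaks below dip_ratio * median peak height count as dips
    """
    peak_idx, props = find_peaks(trace, distance=min_distance, height=0)
    heights = props['peak_heights']
    is_dip = heights < dip_ratio * np.median(heights)
    return peak_idx, is_dip    # is_dip marks multiplications by "poisoned" operands

# Usage (placeholder data): peaks, dips = classify_multiplications(np.load('trace.npy'), 200)
```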

Q8: How vulnerable is GnuPG now?

We have disclosed our attack to GnuPG developers under CVE-2014-3591, suggested suitable countermeasures, and worked with the developers to test them. GnuPG 1.4.19 and Libgcrypt 1.6.3 (which underlies GnuPG 2.x), containing these countermeasures and resistant to the key-extraction attack described here, were released concurrently with the first public posting of these results.

Q9: How vulnerable are other algorithms and cryptographic implementations?

This is an open research question. Our attack requires careful cryptographic analysis of the implementation, which so far has been conducted only for the GnuPG 1.x implementation of RSA and ElGamal. Implementations using ciphertext blinding (a common side-channel countermeasure) appear less vulnerable.
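For context, ciphertext blinding is a long-known countermeasure rather than something specific to this work: before the secret-exponent operation, the ciphertext is multiplied by r^e for a fresh random r, and the blinding factor is removed afterwards, so an attacker-chosen ciphertext no longer determines the operands seen during exponentiation. A minimal RSA sketch with tiny demo parameters:

```python
import secrets
from math import gcd

def rsa_decrypt_blinded(c, d, e, n):
    """RSA decryption with ciphertext blinding.

    The value actually raised to the secret exponent is c * r^e mod n for
    a random r, so the attacker does not control the operands of the
    secret computation; the result is unblinded with r^-1 mod n.
    """
    while True:
        r = secrets.randbelow(n)
        if r > 1 and gcd(r, n) == 1:
            break
    c_blinded = (c * pow(r, e, n)) % n
    m_blinded = pow(c_blinded, d, n)          # secret-exponent operation
    return (m_blinded * pow(r, -1, n)) % n    # remove blinding (Python 3.8+ modular inverse)

# Tiny demo parameters (insecure): n = 3233, e = 17, d = 2753, message 65.
assert rsa_decrypt_blinded(pow(65, 17, 3233), 2753, 17, 3233) == 65
```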

Q10: Is there a realistic way to perform a chosen-ciphertext attack on GnuPG?

GnuPG is often invoked to decrypt externally-controlled inputs, fed into it by numerous frontends via e-mails, files, chat and web pages. The list of GnuPG frontends contains dozens of such applications, each of which can potentially be used to make the target decrypt the chosen ciphertexts required by our attack. As a concrete example, Enigmail (a popular plugin for the Thunderbird e-mail client) automatically decrypts incoming e-mail (for notification purposes) using GnuPG. An attacker can e-mail suitably-crafted messages to the victims (using the OpenPGP and PGP/MIME protocols), wait until they reach the target computer, and observe the target's EM emanations during their decryption (as shown above), thereby closing the attack loop. We have empirically verified that such an injection method does not have any noticeable effect on the leakage signal produced by the target laptop. GnuPG's Outlook plugin, GpgOL, also did not seem to alter the target's leakage signal.

Q11: What countermeasures are available?

Physical mitigation techniques against electromagnetic radiation include Faraday cages. However, inexpensive protection of consumer-grade PCs appears difficult. Alternatively, the cryptographic software can be changed, and algorithmic techniques employed to render the emanations less useful to the attacker. These techniques ensure that the rough-scale behavior of the algorithm is independent of the inputs it receives; they usually carry some performance penalty, but are often used in any case to thwart other side-channel attacks. This is what we helped implement in GnuPG (see Q8).

Q12: Why software countermeasures? Isn't it the hardware's responsibility to avoid physical leakage?

It is tempting to enforce proper layering and decree that preventing physical leakage is the responsibility of the physical hardware. Unfortunately, such low-level leakage prevention is often impractical due to the very bad cost vs. security tradeoff: (1) any leakage remnants can often be amplified by suitable manipulation at the higher levels, as we indeed do in our chosen-ciphertext attack; (2) low-level mechanisms try to protect all computation, even though most of it is insensitive or does not induce easily-exploitable leakage; and (3) leakage is often an inevitable side effect of essential performance-enhancing mechanisms (e.g., consider cache attacks).

Application-layer, algorithm-specific mitigation, in contrast, prevents the (inevitably) leaked signal from bearing any useful information. It is often cheap and effective, and most cryptographic software (including GnuPG and libgcrypt) already includes various sorts of mitigation, both through explicit code and through choice of algorithms. In fact, the side-channel resistance of software implementations is nowadays a major concern in the choice of cryptographic primitives, and was an explicit evaluation criterion in NIST's AES and SHA-3 competitions.

Q13: What does the RSA leakage look like?

Here is an example of a spectrogram (which plots the measured power as a function of time and frequency) for a recording of GnuPG decrypting the same ciphertext using different randomly generated RSA keys:

spectrogram of multiple GnuPG RSA decryptions

In this spectrogram, the horizontal axis (frequency) spans 1.72 MHz to 1.78 MHz, and the vertical axis (time) spans 1.2 seconds. Each yellow arrow points to the middle of a GnuPG RSA decryption. It is easy to see where each decryption starts and ends. Notice the change in the middle of each decryption operation, spanning several frequency bands. This is because, internally, each GnuPG RSA decryption first exponentiates modulo the secret prime p and then modulo the secret prime q, and we can actually see the difference between these stages. Moreover, each of these pairs looks different because each decryption uses a different key. So in this example, simply by observing electromagnetic emanations during decryption operations, using the setup from this figure, we can distinguish between different secret keys.
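The two visible stages correspond to the standard CRT (Chinese Remainder Theorem) optimization of RSA: decryption is performed separately modulo the secret primes p and q, and the two half-size results are recombined. A minimal textbook-style sketch of that structure (not GnuPG's code), with tiny demo parameters:

```python
def rsa_crt_decrypt(c, p, q, d):
    """Textbook RSA decryption using the Chinese Remainder Theorem.

    Performs one exponentiation modulo p and one modulo q (the two stages
    visible in the spectrogram above), then recombines the results.
    """
    dp = d % (p - 1)
    dq = d % (q - 1)
    q_inv = pow(q, -1, p)                 # modular inverse (Python 3.8+)

    m_p = pow(c % p, dp, p)               # stage 1: exponentiation mod p
    m_q = pow(c % q, dq, q)               # stage 2: exponentiation mod q

    h = (q_inv * (m_p - m_q)) % p         # recombination (Garner's formula)
    return m_q + h * q

# Tiny demo parameters (insecure): p = 61, q = 53, e = 17, d = 2753, message 65.
assert rsa_crt_decrypt(pow(65, 17, 61 * 53), 61, 53, 2753) == 65
```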

Q14: What is the difference between your attack and the recent cache attack by Yarom et al.?

Cache side-channel attacks (timing cross-talk between processes or virtual machines) apply to scenarios where the attacker can execute code on the same physical machine as the targeted process (e.g., on shared computers, such as Infrastructure-as-a-Service cloud computing).

Our attack exploits physical information leakage from computation devices, and does not require the attacker to execute his own code on the intended target.


Man in the Rain