After having explained the causes of optical noise in detectors, I’d like in this post to define the parameters that make it possible to compare the noise specifications of detectors.
Signal-to-noise ratio
Also noted SNR or S/N. This is defined as the ratio of the signal power to the noise power. Hence:

S/N = P_signal / P_noise
Understanding the meaning of this is quite straightforward: the higher this ratio, the better the signal. For equal input signals, the detector with the highest SNR is the least noisy. If S/N < 1, you cannot see anything; if S/N >> 1, the signal is easy to pick up. As such, the signal-to-noise ratio is not a usable figure of merit of a detector. It is rather a measure of how strong your signal is compared to the “sensitivity” of your detector.
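As a minimal sketch of this definition (the powers below are illustrative values, not from any real detector):

```python
def snr(signal_power_w, noise_power_w):
    """Signal-to-noise ratio: signal power over noise power (dimensionless)."""
    return signal_power_w / noise_power_w

# A 1 nW signal over 10 pW of noise: S/N is about 100, easy to pick up.
print(snr(1e-9, 10e-12))
```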
However, comparing the optical power needed to get an SNR of 1 is a step in the right direction for comparing detector noise. According to the optical noise models explained earlier, the noise power grows with the measurement bandwidth Δν.
Obviously, the noise depends completely on the bandwidth of the detector. This is understandable: to differentiate a true experimental result from random experimental error, you need to repeat the experiment. Translated into detector terminology, to get a better signal you need to increase the integration time of the experiment (= decrease the bandwidth).
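This averaging effect is easy to demonstrate numerically. The sketch below (using synthetic Gaussian noise, not a real detector model) averages groups of N samples, which plays the role of a longer integration time, and shows the RMS noise shrinking by roughly √N:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def rms(xs):
    """Root-mean-square of a list of samples."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

# White noise samples with unit RMS (arbitrary units).
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Average N = 100 consecutive samples: this mimics increasing the
# integration time by a factor of 100 (i.e. dividing the bandwidth by 100).
n = 100
averaged = [sum(noise[i:i + n]) / n for i in range(0, len(noise), n)]

print(rms(noise))     # close to 1
print(rms(averaged))  # close to 1/sqrt(100) = 0.1
```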
A good figure of merit should reflect the minimum detectable optical power without depending on the integration time. Enter the noise equivalent power.
Noise equivalent power
Also noted NEP. This is a slightly confusing definition. The initial concept is to define the noise equivalent power as the optical power which yields a signal-to-noise ratio of 1. This is then the limit of what can be detected. But with this definition the noise equivalent power can only be given at a specific bandwidth (Δν enters into the expression of S/N).
Since no two detectors have the same integration time, manufacturers tend to call noise equivalent power the minimum detectable power per square root of bandwidth. We will note this noise equivalent power per unit of bandwidth NEP√Δν to avoid confusion. We then have:

NEP = NEP√Δν × √Δν
This normalised NEP√Δν depends only on the detector itself (and sometimes on the ambient temperature!) and is measured in W·Hz^(−1/2). The smaller the NEP, the better the detector.
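Converting a datasheet NEP back into a minimum detectable power at your own bandwidth is then a one-liner. The values below are purely illustrative, not taken from a real datasheet:

```python
import math

def min_detectable_power(nep_w_per_sqrt_hz, bandwidth_hz):
    """Optical power giving S/N = 1: P_min = NEP_sqrt(dnu) * sqrt(dnu), in watts."""
    return nep_w_per_sqrt_hz * math.sqrt(bandwidth_hz)

# A detector specified at NEP = 1e-12 W/sqrt(Hz), read out over a 1 MHz bandwidth,
# can detect roughly 1 nW.
print(min_detectable_power(1e-12, 1e6))
```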
Getting back to the ambient temperature issue: the fluctuations of the ambient temperature are generally too small compared with the absolute temperature to introduce a bias in the comparison. However, it is true that the higher the temperature, the noisier the detector. For that reason some high quality detectors are cooled (generally thermoelectrically, but sometimes cryogenically).
Detectivity and Specific detectivity
The detectivity D is defined as the reciprocal of the NEP: D = 1/NEP. Since all of the parameters we have defined depend on the area of the detector, in some cases this introduces a bias in the comparison. Thus a specific detectivity D* (D star) is sometimes specified, defined as:

D* = √(A·Δν)/NEP = √A/NEP√Δν

where A is the area of the detector.
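A quick numeric sketch of this normalisation, with A in cm² (the conventional choice, which gives D* its usual cm·√Hz/W units); the detector size and NEP below are illustrative assumptions:

```python
import math

def d_star(area_cm2, nep_w_per_sqrt_hz):
    """Specific detectivity D* = sqrt(A) / NEP_sqrt(dnu), in cm*sqrt(Hz)/W."""
    return math.sqrt(area_cm2) / nep_w_per_sqrt_hz

# A 1 mm^2 detector (0.01 cm^2) with NEP = 1e-12 W/sqrt(Hz):
print(d_star(0.01, 1e-12))  # about 1e11 cm*sqrt(Hz)/W
```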
In fairness, I have very rarely encountered people using this specific detectivity for optical detectors.