
I find it funny how the Nyquist theorem is often seen as a highly theoretical rule with a complex technical background. But if you think about it, it's super straightforward: you just need to sample each up- and down-deflection of a wave at least once.

Trying to "beat" this rule is like trying to beat the Pythagorean theorem. If you want to recover higher frequencies, you'll quite obviously have to make additional assumptions about the measured wave.
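A quick way to see this concretely (a sketch; the 1 kHz sample rate and tone frequencies are assumed for illustration, not from the thread): a 900 Hz sine gets less than one sample per deflection when sampled at 1 kHz, and its samples come out identical to a 100 Hz sine's:

    import numpy as np

    fs = 1000.0                         # assumed sample rate, Hz
    t = np.arange(8) / fs               # a handful of sample instants

    slow = np.cos(2 * np.pi * 100 * t)  # 100 Hz: 10 samples per cycle
    fast = np.cos(2 * np.pi * 900 * t)  # 900 Hz: ~1.1 samples per cycle

    print(np.allclose(slow, fast))      # True: the samples can't tell them apart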



Another very simple way to look at this is with the pigeonhole principle. If I have (for simplicity) 4 signed 8-bit numbers, there are only 2^32 possible digital outputs I can represent. However, in the presence of aliasing, there's a theoretically infinite number of possible analog signals that could have produced those numbers, and no way from the numbers alone to distinguish them. Therefore, you need additional information to distinguish the original signal. In the analog domain, the most practical choice is to pre-filter the signal so that you know the sampling is adequate in the frequency range you are sampling. With that additional constraint, you can take those numbers and reconstruct the original signal within the parameters of signal processing theory.
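A minimal sketch of that non-injectivity (rate and frequencies assumed for illustration): every tone at f + k*fs produces sample-for-sample identical output to the tone at f, so infinitely many analog inputs collapse onto one digital vector:

    import numpy as np

    fs = 48000.0                        # assumed sample rate
    t = np.arange(16) / fs
    f = 500.0                           # a baseband tone

    base = np.cos(2 * np.pi * f * t)
    for k in range(1, 4):               # 48.5 kHz, 96.5 kHz, 144.5 kHz
        ghost = np.cos(2 * np.pi * (f + k * fs) * t)
        print(np.allclose(base, ghost)) # True every time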

This point of view also has some advantages in that if you think about it, you can see how you might play some games to work your way around things, many of which are used in the real world. For instance, if I'm sampling a 20 kHz range, you could sample 0-10 kHz with one signal, and then have something that downshifts 40-50 kHz into the 10-20 kHz range, and get a funky sampling of multiple bands of the spectrum. But no matter what silly buggers you play on the analog or the digital side, you can't escape from the pigeonhole principle.
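Here's a toy version of that band-shifting game (all numbers assumed for illustration): undersample a 45 kHz tone at 20 kHz and it shows up at 5 kHz; if you know a priori that the energy lives in the 40-50 kHz band, you can map the alias back to the true frequency:

    import numpy as np

    fs = 20000.0
    t = np.arange(64) / fs
    tone = np.cos(2 * np.pi * 45000.0 * t)   # 45 kHz, far above fs/2

    spectrum = np.abs(np.fft.rfft(tone))
    alias = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spectrum)]
    print(alias)                             # 5000.0 Hz, the folded-down tone
    print(40000.0 + alias)                   # 45000.0 Hz, recovered via the band prior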

From here we get the additional mathematical resonance that this sounds an awful lot like the proof that no compression algorithm can compress all inputs. And there is indeed a similarity: if we had a method for deciding whether a digital signal came from a 500 Hz or a 50,500 Hz source even though the two alias to identical samples, we could use that as a channel for storing bits above and beyond what the raw digital signal contains; if we figure out it's the 500 Hz signal, that's an extra 0, and if it's 50,500 Hz, a 1. With higher harmonics we could get even more bits. They don't claim something quite so binary; theirs is more of a probabilistic claim, but that just means they're getting fractional extra bits instead of whole extra bits. The fundamental problem is the same: it doesn't matter how many bits you pull from nowhere; anything > 0.0000... is not valid.

Of course, one of the things we know from the internet is that there is still a set of people who don't accept the pigeonhole principle, despite it being just about the simplest mathematical claim I can imagine (in the most degenerate case, "if you have two things and one box, it is not possible to put both of the things in the box without the box having more than one thing in it").


When dealing with bits, the situation is different, since algebraic degrees of freedom (dimensions or coefficients of sinusoids) are different from information degrees of freedom (bits). This difference in the context of the sampling theorem is explored in https://arxiv.org/abs/1601.06421, where it is shown that sampling below the Nyquist rate (without additional loss) is possible when the samples must be quantized to satisfy some bit (information) constraint.


Not even close. Nyquist requires sampling at least twice the /bandwidth/ of the signal, not necessarily twice the highest frequency; controlled aliasing is what makes that possible. For example, a signal that's 1 MHz +/- 1 kHz requires only 4 kHz sampling to capture the detail.

Aliasing is always a factor, because no real signal has a highest frequency or a fixed bandwidth (noise is never zero, and all filters roll off gradually forever).


I think it's a bit harsh to say "not even close"; the parent captures the idea pretty well.

The reason it's not perfect is your example, where the signal is not baseband. The extra leap required to understand that case is amplitude modulation and demodulation.

Notice that to reconstruct the original signal in your example, you need the samples, which are collected following the hypothesis of the sampling theorem, and you ALSO need to know the magic frequency, 1 MHz, so that you can shift your 2 kHz band back up. That's the same setting as modulation.

The only concept missing, then, is recognizing that sampling can perform demodulation.
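A sketch of exactly that effect (carrier, rate, and message are all assumed for illustration, chosen so fc is an integer multiple of fs): sampling an AM signal m(t)*cos(2*pi*fc*t) at fs = 4 kHz with fc = 1 MHz = 250*fs makes the carrier term equal 1 at every sample instant, so the samples ARE the demodulated message:

    import numpy as np

    fs = 4000.0
    fc = 1_000_000.0                    # exactly 250 * fs
    t = np.arange(32) / fs

    message = np.cos(2 * np.pi * 700.0 * t)     # 700 Hz message, within fs/2
    am = message * np.cos(2 * np.pi * fc * t)   # 1 MHz +/- 700 Hz bandpass signal

    print(np.allclose(am, message))     # True: sampling stripped the carrier

If fc weren't an exact multiple of fs, a residual frequency offset would be baked into the samples, which is one way to see why you have to know the center frequency to undo the modulation.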


Demodulation via sampling isn't a weird side case; it's at the core of understanding Nyquist, and it doesn't line up with the intuition that you just need to "sample each up- and down-deflection of a wave at least once".


I disagree with you. To be clear, "sampling each up- and down-deflection" is exactly the right idea in the case where you have no information besides the samples (and besides knowing that the hypothesis of the sampling theorem is satisfied). To use the more general, bandpass version of the sampling theorem, you additionally need to know the center frequency (1 MHz in your example); otherwise you cannot reconstruct the signal. So the setting is already slightly different: you need an additional assumption.

Take a look at Wikipedia (https://en.m.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_samp...); it suggests that Shannon himself considered your case an additional point on top of the sampling theorem.


All real signals have limited bandwidth, including noise.

Assuming there's no external source of radiation, I can absolutely guarantee there is no energy propagating at cosmic ray frequencies in a circuit built around an audio op-amp with standard off-the-shelf components.


No lowpass filter or circuit attenuates any frequency to zero, including your example. The attenuation will be 10^(-huge number), but it won't be zero.

This isn't just pedantry; you really can't just sample at 2x the corner frequency of your circuit.

(Unless there's some quantum effect that imposes a minimum possible energy level for a signal? But even then it would be probabilistic? Is this what you mean? I didn't pay close enough attention in physics class.)
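For a concrete sense of "tiny but not zero" (a sketch assuming a first-order RC lowpass; the corner and test frequencies are illustrative): the magnitude response |H(f)| = 1/sqrt(1 + (f/fc)^2) at an absurdly high frequency is minuscule but never 0.0. Real multi-pole filters fall off far faster, yet the same point holds:

    import math

    def rc_lowpass_gain(f_hz, corner_hz):
        # First-order RC lowpass magnitude response.
        return 1.0 / math.sqrt(1.0 + (f_hz / corner_hz) ** 2)

    print(rc_lowpass_gain(1e20, 20e3))  # ~2e-16: vanishingly small, not zero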


It's not straightforward. Your intuition only works for periodic signals; Nyquist applies to all L2 functions.


Why L2? So that the Fourier integral converges. But then you know your signal is a sum of periodic signals ;)


Non-periodic signals can still be decomposed into sine waves, though, via the Fourier transform (an integral over frequencies rather than a discrete series).


Every signal can be represented as a finite or infinite series of sinusoids.
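A sketch of that claim for a sampled, decidedly non-periodic signal (values illustrative): the DFT writes a one-off pulse exactly as a finite sum of sinusoids, and summing those sinusoids rebuilds it:

    import numpy as np

    t = np.arange(64)
    pulse = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)  # a one-off Gaussian bump

    coeffs = np.fft.fft(pulse)           # amplitudes/phases of 64 sinusoids
    rebuilt = np.fft.ifft(coeffs).real   # sum the sinusoids back up

    print(np.allclose(pulse, rebuilt))   # True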


But intuition is not always correct.



