Posted by rbanffy 9/4/2025
This took me down a very fascinating and intricate rabbit hole years ago, and it's still one of my favorite hobbies. The Fourier, Laplace, and z-transforms are (famously) useful in an incredibly wide variety of fields. I mostly use them for signal processing and analog electronics.
https://www.reddit.com/r/nostalgia/comments/b6dptv/folding_t...
I wasn't making a point about mathematics qua mathematics. I was thinking that if I were doing an EE undergrad today, I'd use SageMath or Mathematica to crunch the mechanical algebraic manipulations involved in doing a z-transform.
Image, video, and audio processing and synthesis, compression algorithms, 2D and 3D graphics and geometry, physics simulation, not to mention the entire current AI boom that's nothing but linear algebra… yeah, definitely algebra isn't useful in anything programming related.
gingerBill – Tools of the Trade – BSC 2025 https://youtu.be/YNtoDGS4uak
Recently I was buying a Chromecast dongle thing, and one of the listings had some kind of "Amazon recommends" badge on it from the platform. It had hundreds of 5-star reviews, but if you read them, they were all for a jar of guava jam from Mexico.
I'm baffled why Amazon permits, and even seemingly endorses, this kind of rating farming.
I don't like the Fourier transform, but for petty reasons. In my engineering exams, I messed up a Fourier transform calculation and ended up just a few points short of a perfect score. I've hated it ever since :)
(Or jω, if you prefer that notation)
Jokes aside, I graduated as a "Computer Engineer" (BSc) and then also did a "Master in Computer Science"; I was (young and) angry at the universe about the soooo many classical engineering classes and theory I had to sit through (control theory, electrical engineering, physics) while we never learned about the cool design patterns etc. etc.
Today I see that those formative years helped me a lot with how I develop intuition when looking at large (software) systems, and I also understand that those ever-changing best design patterns are something I can (could have) just look up, learn, and practice in my free time.
I wish a today-me would have told my yesterday-me all this.
After graduating I joined an academic research team based in an EE department, made up mostly of control engineers - we were mainly doing work on qualitative reasoning and using it for fault diagnosis, training, etc.
No. Things acquire different names if they are independently discovered by different communities.
Native Americans being called Indians. Lol, what was that?
This lecture by Dennis Freeman from MIT 6.003 "Signals and Systems" gives an intuitive explanation of the connections between the four popular Fourier transforms (the Fourier transform, the discrete Fourier transform, the Fourier series, and the discrete-time Fourier transform):
https://ocw.mit.edu/courses/6-003-signals-and-systems-fall-2...
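(Roughly, the four fit a 2x2 grid, the key fact being that discreteness in one domain corresponds to periodicity in the other:)

    time domain              transform                  frequency domain
    continuous, aperiodic    Fourier transform          continuous, aperiodic
    continuous, periodic     Fourier series             discrete, aperiodic
    discrete, aperiodic     discrete-time FT (DTFT)    continuous, periodic
    discrete, periodic      discrete FT (DFT)          discrete, periodic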
Also, a property of wavelets is that they're non-parametric, which limits their utility in knowledge-discovery applications.
For ML applications, my opinion is that they're somewhat superseded by deep learning methods that apply a less restrictive inductive bias. As data grows, the restrictive prior assumptions of wavelets will hurt, sort of like how CNNs are being abandoned for ViTs, even though CNNs can outperform in situations where data is scarce.
So overall, they have a pretty small set of use cases where they're better suited than other alternative tools.
In any case, they are a bit more advanced, and out of scope for the undergraduate course I linked to.
It would be very sad to lose those treasures by passing them by because you thought you already had them. As the song says:
> When you're young, you're always looking
> On the far side of the hill;
> You might miss the fairest flower
> Standing by your side very still.
And the flowers of Fourier analysis are as fair as the fairest flowers in this universe, or any universe.
https://news.ycombinator.com/item?id=45134843 may be a clue to the hidden beauty, for those who are puzzled as to what I might be talking about.
I think he could have explained how the Gaussian filter almost kills all the details / high-frequency features, then rounding completely destroys them, and then they cannot be magically reconstructed to "unblur" the image. He gives some hints about this, but they are too few and too general.
PS: There are some non-linear tricks to "enhance" the "unblurred" version, like minimizing the L1 norm of the gradient (or something like that). It helps with some images. I'm not sure what the current state of the art is.
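To make the first point concrete, here's a minimal 1-D numpy sketch of the blur -> round -> naive-unblur chain (the filter width and quantization step are made up for illustration, not taken from the video):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = rng.standard_normal(n)               # signal with lots of high-frequency detail

    # Gaussian blur applied in frequency space: high frequencies are crushed
    f = np.fft.fftfreq(n)
    H = np.exp(-0.5 * (f / 0.1) ** 2)        # ~3.7e-6 at the highest frequency
    blurred = np.fft.ifft(np.fft.fft(x) * H).real

    # Blur alone is invertible: divide the spectrum by H again
    exact = np.fft.ifft(np.fft.fft(blurred) / H).real
    print(np.max(np.abs(exact - x)))         # tiny

    # But rounding the blurred values (as an 8-bit image would) is irreversible:
    quantized = np.round(blurred * 100) / 100
    naive = np.fft.ifft(np.fft.fft(quantized) / H).real
    print(np.max(np.abs(naive - x)))         # huge: quantization noise / tiny H explodes

The blur alone is invertible because the FT coefficients are merely scaled, never zeroed; it's the rounding that throws the high-frequency information away for good.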
https://youtu.be/spUNpyF58BY?feature=shared
Edit: In fact it was already mentioned in the comments, but I hadn't noticed.
The FT, like many other transforms, is 1-1, so in theory there's no information lost or gained. Yet in many real-world settings, looking at a function in frequency space greatly simplifies the problem. Why? Pet theory: because many functions that look complex are actually composed of simpler building blocks in the transformed space.
Take the sound wave of a fly and it looks horribly complex. Pump it through the FT and you find a main driver of the wings beating at a single frequency. Take the sum of two sine waves and it looks a mess. Take the FT and you see the signal neatly broken into two peaks. Etc.
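For instance, a toy numpy version of that second example:

    import numpy as np

    fs = 1000                                  # sample rate, Hz
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    # The time-domain "mess" is just two spikes in frequency space
    print(np.sort(freqs[np.argsort(np.abs(spectrum))[-2:]]))   # -> [ 50. 120.]

    # And the transform is 1-1: inverting recovers the signal (up to rounding)
    print(np.allclose(signal, np.fft.irfft(spectrum, n=len(signal))))  # True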
The use of the FT (or DCT or whatever) for JPEG, MP3, and the like basically exploits this fact, plus the observation that the response of human hearing and vision isn't uniform across frequencies, so the signal can be "compressed" by throwing away frequencies we don't care about.
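Schematically, the keep-the-big-coefficients idea looks like this (a toy 1-D DCT sketch assuming scipy; real JPEG/MP3 add blocking, perceptual weighting, and entropy coding on top):

    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 512)
    x = (np.sin(2 * np.pi * 3 * t)             # smooth "natural" content...
         + 0.3 * np.sin(2 * np.pi * 7 * t)
         + 0.02 * rng.standard_normal(512))    # ...plus a little texture

    c = dct(x, norm="ortho")
    c[np.argsort(np.abs(c))[:-32]] = 0.0       # keep only the 32 largest coefficients
    x_hat = idct(c, norm="ortho")
    print(np.sqrt(np.mean((x - x_hat) ** 2)))  # small RMS error despite discarding ~94%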
The "magic" of the FT, and other transforms, isn't so much that it transforms the signal into a set of orthogonal basis but that many signals we care about are actually formed from a small set of these signals, allowing the FT and cousins to notice and separate them out more easily.
And there's another good reason why so many real-world signals are sparse (as you say) in the FT domain in particular: so many real-world systems involve periodic motion (rotating motors, the fly's wings you noted, etc.). When the system is periodic, the FT compresses the signal very effectively, because every component has to be a harmonic of the fundamental frequency.
Well, stable systems can either be stationary or oscillatory. If the world didn't contain so many stable systems, or equivalently if the laws of physics didn't allow for them, then life would likely not have existed. All life is complex chemical structure, and it requires stability to function. Ergo, by this anthropic argument, there must be many oscillatory systems.
Ordinary differential equations can describe any system with a finite number of state variables that change continuously (as opposed to instantaneously jumping from one state to another without going through states in between) and as a function of the system's current state (as opposed to nondeterministically or under the influence of the past or future or some kind of supernatural entity).
Partial differential equations extend this to systems with infinite numbers of variables as long as the variables are organized in the form of continuous "fields" whose behavior is locally determined in a certain sense—things like the temperature that Fourier was investigating, which has an infinite number of different values along the length of an iron rod, or density, or pressure, or voltage.
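Fourier's own problem is still the cleanest example: for heat flow on a ring, the PDE decouples in frequency space into one trivially solvable ODE per mode. A minimal numpy sketch of that spectral solution (the constants here are arbitrary, just for illustration):

    import numpy as np

    # Heat equation on a ring, u_t = alpha * u_xx. In Fourier space the PDE
    # decouples: mode k just decays as exp(-alpha * k^2 * t).
    n, alpha, t_final = 256, 0.05, 10.0
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u0 = np.where((x > 2) & (x < 4), 1.0, 0.0)            # initial hot segment

    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi    # integer wavenumbers
    u = np.fft.ifft(np.fft.fft(u0) * np.exp(-alpha * k ** 2 * t_final)).real
    print(u0.max(), round(u.max(), 3))                    # sharp edges smoothed away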
It turns out that a pretty large fraction of the phenomena we experience do behave this way. It might be tempting to claim that it's obvious that the universe works this way, but that's only because you've grown up with the idea and never seriously questioned it. Consider that it isn't obvious to anyone who believes in an afterlife, or to Stephen Wolfram (who thinks continuity may be an illusion), or to anyone who bets on the lottery or believes in astrology.
But it is at least an excellent approximation that covers all phenomena that can be predicted by classical physics and most of quantum mechanics as well.
As a result, the Fourier and Laplace transforms are extremely broadly applicable, at least with respect to the physical world. In an engineering curriculum, the class that focuses most intensively on these applications is usually given the grandiose title "Signals and Systems".
We have discovered a method (calculus) to mathematically describe continuous functions of various sorts, and within calculus there is a particular toolbox (ordinary and partial differential equations) we have built to describe systems that change, by describing that change.
The fact that systems which change are well described by the thing we made to describe systems which change shouldn't be at all surprising. We have been working on this since the 18th century, and Euler and many of the smartest humans ever devoted considerable effort to making it this good.
When you look at things like the chaotic behaviour of a double pendulum, you see how the real world is extremely difficult to capture precisely and as good as our system is, it still has shortcomings even in very simple cases.
A theory that describes chaos well enough may not want to be associated with "calculus", or even "math" (ask a friendly reverse mathematician about that).
https://www.johndcook.com/blog/2021/04/09/period-three-impli...
Somewhat tangentially, if Ptolemy I had responded (to Euclid) with anything less specific ---but much more personal--- than "check your postulate", we wouldn't have had to wait one millennium.
(Fermat did the best he could given margin & ego, so that took only a century or so (for that country to come up with a workable strategy))
Less tangentially, I'd generalize Quigley by mentioning that groups of hominids stymie themselves with a kind of emergent narcissism. After all, heuristics, rules, and even values informed by experience & intuition are a sort of arrogance. "Tautology" should be outlawed in favour of "Narcissism" as a prosocial gaslighting term :)
I would say that it's very difficult to imagine a world that would not be governed by differential equations. So it's not just that life wouldn't exist; it's that there wouldn't be anything like the laws of physics.
As a side note, chaotic systems are often better analysed in the FT domain, so even in a world of chaotic systems (and there are many in our world; I'd argue that if there weren't, life would not exist either) the FT remains a powerful tool.
In practice this is probably true, but I can see another possibility. The system could follow a trajectory that bounces around endlessly in some box without ever repeating or escaping the box.
For example, the concept of homeostasis in biology is like this. Lots of things are happening inside the living body, but it's still effectively at a functional equilibrium.
Similarly, lots of dynamic things are happening inside the Sun (or any star), but from the perspective of Earth, it is more or less stationary, because the behavior of the sun won't escape some bounds for billions of years.
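The classic concrete example of such a trajectory is the Lorenz system: it stays inside a bounded box forever but never settles down or repeats. A crude forward-Euler check (standard textbook parameters; illustrative, not rigorous):

    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    s = np.array([1.0, 1.0, 1.0])
    dt = 0.001
    lo = hi = s.copy()
    for _ in range(200_000):                 # 200 time units of forward Euler
        s = s + dt * lorenz(s)
        lo, hi = np.minimum(lo, s), np.maximum(hi, s)
    print(lo, hi)                            # bounded box; no equilibrium, no cycle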
I suspect you will find ergodicity interesting.
(And cases where that isn't true can still be instructive - a Taylor series expansion for air resistance gives a linear term representing the viscosity of the air and a quadratic term representing displacement of volumes of air. For ordinary air the linear component will have a small coefficient compared to the quadratic component.)
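Back-of-the-envelope version of that last claim, using standard textbook numbers (Stokes drag for the linear term, the usual drag equation with Cd ≈ 0.47 for a sphere for the quadratic one; none of this is from the comment above, and the ratio depends on the object's size and speed):

    import math

    eta = 1.8e-5    # dynamic viscosity of air, Pa*s
    rho = 1.2       # density of air, kg/m^3
    Cd = 0.47       # drag coefficient of a sphere
    r, v = 0.01, 1.0                          # 1 cm sphere moving at 1 m/s

    linear = 6 * math.pi * eta * r * v                    # Stokes (viscous) drag
    quadratic = 0.5 * rho * Cd * math.pi * r ** 2 * v ** 2  # inertial drag
    print(linear, quadratic, quadratic / linear)          # quadratic wins by ~26x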
Cauchy had just proved that the limit of the sum of an infinite set of continuous functions was itself continuous, and then along came Fourier with "are you sure about that, bro?" and showed that you could take an infinite sum of very clearly continuous functions (just sines and cosines) and approximate something like a sawtooth function (which is very obviously discontinuous) as closely as you like.
[1] by which I mean making obvious the fact that they had been proceeding for 100+ years using calculus without a rigorous basis.
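You can watch Fourier's counterexample happen numerically: the sawtooth f(x) = x on (-π, π) equals 2·Σₖ (-1)^(k+1)·sin(kx)/k, an infinite sum of perfectly smooth terms (a small numpy check, measuring the error away from the jump at ±π):

    import numpy as np

    x = np.linspace(-np.pi, np.pi, 2001)
    for n_terms in (10, 100, 1000):
        k = np.arange(1, n_terms + 1)[:, None]
        partial = 2 * np.sum((-1.0) ** (k + 1) * np.sin(k * x) / k, axis=0)
        # error away from the jump shrinks as terms are added
        print(n_terms, np.max(np.abs(partial - x)[np.abs(x) < 3.0]))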
(Fractional Fourier transform on the top face of the cube)
And for the short-time Fourier transform, showing how a filter kernel is shifted across the signal. [2]
How do you compute the fractional FT? My guess is by interpolating the DFT matrix (via matrix logarithm & exponential) -- is that right, or do you use some other method?
Yes, the simplest way to think of it is to exponentiate the DFT matrix to an exponent between 0 and 1 (1 being the classic DFT). But then the runtime complexity is O(n^2) (vector multiplied by a precomputed matrix) or O(n^3) (computing the matrix power), as opposed to the O(n log n) of the fast Fourier transform. There are tricks to do a fast fractional Fourier transform by multiplying and convolving with a chirp signal. My implementation is in Rust [1], compiled to WebAssembly, but it is based on the MATLAB code of [2], who gladly answered all my emails asking many questions despite already being retired.
[1]: https://github.com/laszlokorte/svelte-rust-fft/tree/master/s...
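For anyone curious, the matrix-power route described above fits in a few lines of Python/scipy (this is the slow O(n^3) definition only, not the chirp-based fast version and not the linked Rust code):

    import numpy as np
    from scipy.linalg import dft, fractional_matrix_power

    n = 64
    F = dft(n, scale="sqrtn")                 # unitary DFT matrix
    F_half = fractional_matrix_power(F, 0.5)  # "half" a Fourier transform

    x = np.exp(-0.5 * (np.arange(n) - n / 2) ** 2 / 9.0)  # Gaussian test signal
    twice = F_half @ (F_half @ x)
    print(np.max(np.abs(twice - F @ x)))      # tiny: half-then-half = one full DFT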
The Wikipedia FFT article (https://en.wikipedia.org/wiki/Fast_Fourier_transform) credits Gauss with originating the FFT idea later expanded on by others, and correctly describes Cooley and Tukey's work as a "rediscovery."
To be extra clear: it’s a very good video and you should watch it if you don’t have a feel for the Fourier transform. I’m just trying to proactively instil a tiny bit of dissatisfaction with what you will know at the end of it, so that you will then go looking for more.