I'm a little scared to take my Fourier work into an actual fan recording, given how hard the other project ended up being to engage. Maybe I'll play around a little with the idea of finding the unknown frequency efficiently. It seems _relatively_ clear how to find the unknown frequency using a feedback loop. It's not that clear whether it will be the "largest" peak or the "lowest" peak or whatnot, but often in an FFT there is a largest peak which is also pretty low in frequency, especially in an FFT of only one signal. Having intimately considered parts of the DFT, I can think about how this peak happens: the two basis sinusoids nearest the real frequency resonate with it very strongly. So, given that you can find the two sinusoids bordering the highest peak, you could then perform another DFT filled with frequencies tightly packed around those two sinusoids, and from that get a much more accurate picture of the peak. You wouldn't even need to use sinusoids; you could form a frequency matrix out of your current best guess as to the waveform. It's an old idea, the feedback loop would be short, and I'd like to try implementing it (a sketch of that loop is below). But I guess what's active for me is more the idea of thinking about it a little more [maybe unfortunately]. Maybe it can get smaller and simpler, with less feedback structure. A small reusable component?

In the end, we can only represent a frequency with so much accuracy: that is, we only _need_ a frequency represented with so much accuracy. If we know the accuracy needed for the frequency of interest, we could craft frequency matrices that are much more effective. We could also form an exact model of how the wave responds to off-frequency harmonization, and from that try to calculate its precise frequency. This again becomes a chicken-and-egg problem, and one uses a feedback loop to make a best guess, since we don't know the shape of the wave. So, what collapses the recursive feedback is the fact that we can only guess the shape of the wave as accurately as we have data on it. If we only have 16 or 1024 or 1M samples of data, we can only describe the wave with that many samples at once, and as soon as noise is added, the number of accurate samples of the wave and the accurate bits of precision immediately begin dropping. So there is a bound on the feedback loop that depends on the data rather than the precision, and thinking about this, one can begin to see that there would again be a matrix that can immediately calculate the exact frequency, although this is not strongly and directly apparent. If you imagine Newton's method applied to a linear transformation made from every sample, one can show this would collapse into a matrix. {although this leaves out the concept of finding the maximum.}

So there are a few concepts here:

- how do I find the maximum in a way that can flatten feedback?
- do I need a model of the shape of the wave?
- what function best models the accuracy of a considered frequency vs the data?
- how does the real frequency best relate to the output of that function?

The maximum question is most recently interesting, but maybe in general it helps to think about a simple example. We could consider otherwise-empty data containing only an exact Nyquist frequency, and then also data that contains only 1 full sinusoid. Maybe also data that contains 2 full sinusoids. The case of 1 full sinusoid is interesting. There isn't actually any information here that there is repeating data, other than the shape of the data being a sinusoid.
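A minimal sketch of that refinement loop in numpy, assuming a single clean sinusoid in the data; the function name, grid size, and window-shrink schedule are all invented here, just one plausible way to do it:

```python
import numpy as np

def refine_frequency(x, sample_rate, iterations=6, grid=32):
    """Estimate the frequency of a lone sinusoid in x.

    Starts from the largest FFT bin, then repeatedly correlates x against
    a tightly packed grid of candidate frequencies around the current best
    guess, shrinking the search window each pass.
    """
    n = len(x)
    t = np.arange(n) / sample_rate
    spectrum = np.abs(np.fft.rfft(x))
    peak = int(np.argmax(spectrum))
    guess = peak * sample_rate / n   # coarse estimate: the highest peak
    width = sample_rate / n          # one bin: the two bordering sinusoids

    for _ in range(iterations):
        candidates = np.linspace(guess - width, guess + width, grid)
        # a small "frequency matrix": one complex sinusoid per candidate
        matrix = np.exp(-2j * np.pi * np.outer(candidates, t))
        responses = np.abs(matrix @ x)
        guess = candidates[int(np.argmax(responses))]
        width *= 4.0 / grid          # shrink the window each pass
    return guess

rate = 1000.0
t = np.arange(1024) / rate
true_f = 123.456
print(refine_frequency(np.sin(2 * np.pi * true_f * t), rate))  # ~123.456
```

Each pass is just a tiny, tightly packed frequency matrix correlated against the data, so the loop stays short and the whole thing could plausibly be the small reusable component.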
Most signal mediums carry data as sinusoids, so it's meaningful to consider these. But most data is carried in ways that are so dense that it's not very accurate to model it as sinusoids. Both can happen; they're two different regimes. In the case of sinusoids, it's very accurate to model things as sinusoids, and one could consider how multiplying two sinusoids creates an output containing a wave with frequency equal to their difference (alongside one at their sum; demonstrated below). There's probably a space in that multiplication where the frequency can be quickly derived. (Stated that way, it's a recursive problem, because you then need the frequency of the output wave, which makes feedback.)

[[Usually I would have just quickly implemented feedback solutions and moved on to a larger problem. I seem to be in a situation where planning is much easier than acting, and it's fun to find new efficient things.]]

Then, in the situation where the signal is not related to sinusoids, it kind of seems like the problem is one of convolution, or really autocorrelation: various shifts of the signal against itself are compared (also sketched below). What's more interesting is that, here, the signal could be shifted a fractional amount, which relates to modeling its structure at a different resolution. It seems like it would be pretty useful to have a generic model for what is likely to be a signal, and sample locality seems to be a thing here. In my recent tests, I'm using dense underlying signals with wildly changing data, and then modelling them as being composed of sinusoids. This is an additional challenge on top of not knowing the frequency of the signal, since we don't know the number of sinusoids composing them either. I don't really know how much sample locality there is in something like that, but I could use my code to downsample it and look at it, theoretically.

Something to remember is that there are no frequencies higher than the Nyquist frequency in sampled data. So the wildest swing anything is going to have is from +1 at one sample to -1 at the next. Nothing is going to swing up and down twice between adjacent samples in this test data, I think.

0936. I'm having some cognitive concerns. Maybe it makes sense to consider the sample depth of the signal unknown, or to imagine that there could be wild swings present. The wild-swing areas would be considered noise, I suppose, since there isn't enough information on them. One way of characterizing noise, in my opinion, would be: signals that are too small, too high-frequency, too loud, or too numerous to discern.

0937. I'm thinking of the part of the feedback where it helps to model the signal. This originally arose from the idea of considering how the signal might behave if multiplied by an off-frequency signal (or summed with one). Given data for the signal, we can algorithmically sum or multiply that data with a scaled form of itself, and algorithmically plot a chart of how different offsets result in different outputs (sketched below). I'm thinking this might be doable in a linear way that condenses into a matrix. That's a bit much for me to consider right now, it seems. Maybe it's more interesting to implement feedback right now!
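A quick sketch of that offset chart, under the assumption that "a scaled form of the signal" means the data resampled at a slightly different rate; the 60 Hz stand-in signal and the offset range are invented for illustration:

```python
import numpy as np

rate = 1000.0
t = np.arange(2048) / rate
x = np.sin(2 * np.pi * 60.0 * t)            # stand-in for recorded data

offsets = np.linspace(0.9, 1.1, 201)        # frequency scale factors to try
responses = np.empty(len(offsets))
for i, s in enumerate(offsets):
    shifted = np.interp(s * t, t, x)        # the data replayed s times faster
    responses[i] = np.mean(x * shifted)     # multiply and average

# plotting responses against offsets gives the chart; it peaks at scale 1.0,
# and the falloff around the peak is the "response to off-frequency
# harmonization" described above
print(offsets[np.argmax(responses)])        # -> 1.0
```

The interesting part is the shape around the peak: if that falloff can be modeled exactly, a measurement away from the peak could in principle be mapped straight back to the frequency error, which is the matrix-like shortcut this entry keeps circling.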
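Backing up to the product-of-sinusoids observation: a tiny demonstration that the product carries both the difference and the sum frequency, since cos(a)cos(b) = (cos(a-b) + cos(a+b))/2. The frequencies here are chosen to land on exact FFT bins:

```python
import numpy as np

rate = 1024.0
t = np.arange(4096) / rate
product = np.cos(2 * np.pi * 200.0 * t) * np.cos(2 * np.pi * 207.5 * t)

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), d=1 / rate)
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(top_two)   # [7.5, 407.5]: the difference and the sum
```

If one of the two sinusoids is a known reference, the slow difference wave is much easier to measure than the original, which is exactly where the recursion comes from: you still need the frequency of the output wave, just at a coarser scale.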
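And for the non-sinusoid case, one way to compare various shifts of the signal against itself is plain autocorrelation; this sketch uses an invented 50 Hz triangle wave and integer shifts only (fractional shifts would need interpolation, as noted above):

```python
import numpy as np

rate = 1000.0
t = np.arange(2048) / rate
period = 1 / 50.0                                  # 50 Hz repeat, not a sinusoid
x = 2 * np.abs((t / period) % 1.0 - 0.5) - 0.5     # zero-mean triangle wave

# compare the signal against integer shifts of itself
ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1

# skip past the zero-lag lobe (first sign change, which exists for zero-mean
# periodic data), then take the strongest remaining lag: one full period
first_negative = int(np.argmax(ac < 0))
lag = int(np.argmax(ac[first_negative:])) + first_negative
print(rate / lag)   # -> 50.0
```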