[ot][spam][random][crazy][random][crazy]

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sat Nov 12 03:41:57 PST 2022


[some typing lost]
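For reference in reproducing the transcripts below: the lost typing presumably defined signal. A definition that is consistent with every output shown here (assumed, not the original) is a unit-magnitude complex sinusoid at 0.5 cycles per sample:

>>> import numpy as np
>>> # assumed definition -- a unit complex sinusoid at 0.5 cycles/sample,
>>> # chosen only because it reproduces the array values quoted below
>>> signal = lambda t: np.exp(2j * np.pi * 0.5 * np.asarray(t))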

>>> signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[2])
array([-0.95105652-0.30901699j, -0.95105652-0.30901699j,
       -0.95105652-0.30901699j, -0.95105652-0.30901699j])

There it works with 4 samples: a single phase and magnitude of 1.0 are
reconstructed from data with 10% sampling error. The input data is not
phase aligned:

>>> signal(np.array([0,1,2,3])*1.1)
array([ 1.        +0.j        , -0.95105652-0.30901699j,
        0.80901699+0.58778525j, -0.58778525-0.80901699j])

So it is looking much more doable!!
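To make that peak explicit, here is a sketch (reusing the expression above; the names constructive_points and peak are mine) showing that summing the phase-aligned products and dividing by the sample count recovers the magnitude of 1.0 and a single phase:

>>> constructive_points = signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[2])
>>> peak = constructive_points.sum() / 4
>>> round(float(abs(peak)), 6), round(float(np.angle(peak, deg=True)), 6)
(1.0, -162.0)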

The next wave has half the frequency. How does this work out? How does
this combine to lose or recover information? How does it introduce
noise?
For the test code, what is most relevant is: how do I retain the
meaning of the interim data as a spectrum of the input, and precisely
recover the input by processing that spectrum?

>>> signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[1])
array([-0.95105652-0.30901699j, -0.70710678+0.70710678j,
        0.30901699+0.95105652j,  0.98768834+0.15643447j])
>>> np.fft.fftfreq(4)
array([ 0.  ,  0.25, -0.5 , -0.25])

>>> wonky_points = signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[1])
>>> abs(wonky_points), np.angle(wonky_points)*180//np.pi
(array([1., 1., 1., 1.]), array([-162.,  135.,   72.,    9.]))

>>> -162+360-135
63
>>> 135-72
63
>>> 72-9
63
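The same step can be checked without the floor division (angle_steps is my name; the 63 here is the clockwise step between successive samples):

>>> angle_steps = (-np.diff(np.angle(wonky_points, deg=True))) % 360
>>> np.round(angle_steps, 3).tolist()
[63.0, 63.0, 63.0]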

The outputs are of unit length, and the 4 angles step by 63 degrees from one sample to the next.

The outputs with constructive interference, where the frequency was
0.5, all had the same angle, so summing them makes a peak.
This output, here, is supposed to have destructive interference. If
the step of 63 degrees were 90 degrees, the outputs would all cancel
each other and the sum would be 0. Instead it's 63, so there will be
some nonzero sum.
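A quick check of that nonzero sum (the value itself isn't important, only that it is nonzero where a 90-degree step would have given exactly zero):

>>> round(float(abs(wonky_points.sum())), 3)
1.548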

>>> accurate_points = signal(np.array([0,1,2,3])+1.1) * np.exp(np.array([0,1,2,3]) * 2j * np.pi * np.fft.fftfreq(4)[1])
>>> abs(accurate_points), np.angle(accurate_points)*180//np.pi
(array([1., 1., 1., 1.]), array([-162.,  107.,   17.,  -73.]))
>>> -162+360 - 107
91
>>> 107 - 17
90
>>> 17 - -73
90
>>> -73 - -162
89
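The 91 and 89 are only artifacts of the floor division above; computing the steps directly gives exactly 90 degrees:

>>> np.round((-np.diff(np.angle(accurate_points, deg=True))) % 360, 3).tolist()
[90.0, 90.0, 90.0]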


The original data indeed has 90 degree offsets and completely cancels itself out.
Additionally, with 4 samples and 90 degree offsets, the offsets
complete a full circle and cycle within the data. This is basically the
equivalent of destructive interference, since walking unit vectors
around a circle returns to the starting point, so their sum is zero.
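And the sum confirms the cancellation (numerically zero):

>>> round(float(abs(accurate_points.sum())), 6)
0.0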

So, rather than stabilising the constructive interference, which is
already stable, it seems more interesting to consider the destructive
interference with the wave that is twice as long (half the frequency).

With the twice-as-long wave, the algorithm is relying on the second
half of the data canceling out the first half. Across that half-window
shift, the sinusoid of the longer wave is opposite in sign, whereas the
sinusoid of the shorter wave repeats identically.
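A small sketch of that claim at the ideal sample points (half and quarter are my names for the two complex exponentials): shifting by half the window, 2 samples, repeats the 0.5-frequency wave and negates the 0.25-frequency wave.

>>> n = np.arange(4)
>>> half = np.exp(2j * np.pi * 0.5 * n)      # the signal's frequency, period 2 samples
>>> quarter = np.exp(2j * np.pi * 0.25 * n)  # the analysis wave with twice the wavelength
>>> # second half repeats the first for the short wave, negates it for the long wave
>>> round(float(abs(half[2:] - half[:2]).max()), 6), round(float(abs(quarter[2:] + quarter[:2]).max()), 6)
(0.0, 0.0)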

Because the sampling points differ, the data does not cancel. This is
a more precise place where the data could be transformed so as to
cancel itself out.

What is going on that needs transformation? We're multiplying two
sinusoids at specific points. One of the sinusoids has half the
frequency.

I'm engaging a strong internal struggle trying to consider this. I'm
guessing part of my mind found that part of this is unlikely to work
how I hope, and is pressing back, but I don't know yet why that would
be, since I'm struggling so hard to hold it.

I'm trying to discern what kind of situation is going on when a wave
with half the frequency is compared, at samples that do not align with
the zero crossings of the waves, and to consider whether the data can
be transformed to reproduce the destructive interference of the zero
crossings.
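One naive transformation, sketched below, does restore the cancellation, but it is circular: it assumes we already know both the signal frequency (0.5) and the 10% timing error, and simply removes the excess per-sample phase twist that the timing error introduced. It only shows that what got lost is exactly such a twist, and in general that twist depends on the very frequency being measured.

>>> excess = 0.1 * (0.5 + 0.25)   # timing error times (signal freq + analysis freq); both assumed known here
>>> fixed = wonky_points * np.exp(-2j * np.pi * excess * np.arange(4))
>>> round(float(abs(fixed.sum())), 6)
0.0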

