[ot][spam][random][crazy][random][crazy]

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sat Nov 12 14:20:47 PST 2022


ok, so let's try the matrix inverse approach to making a fourier
transform, first with a normal transform

there are n frequencies, and each of these is evaluated at n sample offsets

>>> freqs = np.fft.fftfreq(6)
>>> freqs
array([ 0.        ,  0.16666667,  0.33333333, -0.5       , -0.33333333,
       -0.16666667])
>>> mat = np.exp(2j * np.pi * np.outer(np.arange(6), freqs))
>>> np.abs(mat), np.angle(mat)*180//np.pi
(array([[1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1.]]), array([[   0.,    0.,    0.,    0.,
   0.,    0.],
       [   0.,   59.,  119., -181., -120.,  -60.],
       [   0.,  119., -121.,    0.,  120., -120.],
       [   0.,  180.,   -1., -180.,    0., -181.],
       [   0., -121.,  119.,    0., -120.,  120.],
       [   0.,  -61., -121., -180.,  120.,   60.]]))

To transform something from frequency space to the time domain, we
would have a vector of magnitudes, one per frequency. Each sample is
then generated by multiplying every frequency's value by its
magnitude and summing.
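That synthesis step can be sketched as an explicit sum and checked against the matrix product (a minimal sketch; the names `mags` and `samples` are mine):

```python
import numpy as np

# Each time sample is the sum over all frequencies of
# magnitude[k] * exp(2j*pi*freq[k]*t).
n = 6
freqs = np.fft.fftfreq(n)
mags = np.zeros(n, dtype=complex)
mags[1] = 1.0  # all energy in the 1/6-cycles-per-sample bin

samples = np.array([
    sum(mags[k] * np.exp(2j * np.pi * freqs[k] * t) for k in range(n))
    for t in range(n)
])

# The same thing as one matrix product (samples x frequencies):
mat = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))
assert np.allclose(samples, mat @ mags)
```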

np.outer turns its arguments into dimensions in the order they are
passed. so the first dimension of the matrix (the larger dimension),
here, is going to be samples, whereas the second dimension of the
matrix (the smaller dimension), here, is going to be frequencies,
simply because I happened to pass them in that order.

I guess I like post-multiplying my matrices, since in my culture
information flows from left to right, so maybe i'll swap the order if
that's not what ends up happening.

The larger dimension is samples. The smaller is frequencies.
If I have a vector of data, and matrices are multiplied rows x
columns, I guess my vector would be a row, and the whole thing would
be multiplied by every column, and summed.

So if the matrix is on the right, each output element dots my row
against one column, summing across the matrix's first dimension.
Uhhh that would be larger dimension. The samples. So I think I want to
transpose the matrix.
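A quick check of that convention (a sketch; `v` here is just an arbitrary magnitude vector I made up): with a row vector on the left, `v @ M` sums across `M`'s first dimension, so frequencies need to come first.

```python
import numpy as np

n = 6
freqs = np.fft.fftfreq(n)
mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(n)))  # (freq, sample)
v = np.arange(n)  # arbitrary per-frequency magnitudes

# (v @ mat)[t] == sum over j of v[j] * mat[j, t]: the first dimension
# (frequencies) is the one summed across.
out = v @ mat
manual = np.array([sum(v[j] * mat[j, t] for j in range(n)) for t in range(n)])
assert np.allclose(out, manual)
```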

>>> mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(6)))

Now passing the frequencies first, I think they'll be summed across. Not sure.

So maybe I can make a single wave of data if I pass a 1 for the 0.5
frequency, and 0s everywhere else ...

>>> freqs
array([ 0.        ,  0.16666667,  0.33333333, -0.5       , -0.33333333,
       -0.16666667])

>>> (np.array([0,0,0,1,0,0]) @ mat).real
array([ 1., -1.,  1., -1.,  1., -1.])

ok, that didn't go as I expected. It gave me Nyquist data. Oops. Okay,
that makes sense now that I think about frequencies: 0.5 is 180
degrees/sample. I want the 0.16666 entry to have a full wave.

>>> (np.array([0,1,0,0,0,0]) @ mat).real
array([ 1. ,  0.5, -0.5, -1. , -0.5,  0.5])

There we go. And inverting the matrix should be its transpose ...?

>>> np.angle(np.linalg.inv(mat)) * 180 // np.pi
array([[   0.,    0.,    0.,   -1.,   -1.,   -1.],
       [   0.,  -61., -120.,  179.,  120.,   59.],
       [   0., -121.,  120.,   -1., -121.,  119.],
       [   0., -180.,    0.,  180.,   -1.,  180.],
       [  -1.,  120., -120.,   -1.,  119., -121.],
       [  -1.,   60.,  120.,  180., -121.,  -61.]])

>>> np.abs(np.linalg.inv(mat))
array([[0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667],
       [0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667],
       [0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667],
       [0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667],
       [0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667],
       [0.16666667, 0.16666667, 0.16666667, 0.16666667, 0.16666667,
        0.16666667]])

It's the conjugate transpose, and also scaled by 1/6, which doesn't mean too much.
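That relationship can be checked directly (a minimal sketch): for this unscaled 6x6 exponent matrix, the inverse comes out as the conjugate transpose divided by n.

```python
import numpy as np

n = 6
freqs = np.fft.fftfreq(n)
mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(n)))

# The inverse equals the conjugate transpose scaled by 1/n,
# i.e. mat is sqrt(n) times a unitary matrix.
inv = np.linalg.inv(mat)
assert np.allclose(inv, mat.conj().T / n)
```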

Okay! Let's see if it can process data that is sampled 10% too slowly ;p
1635

>>> mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(6)*1.1))

My first question is whether the matrix inverts ...!

>>> mat2freq = np.linalg.inv(mat)
>>>

It worked fine !!

And its magnitude is 1/6?

>>> abs(mat2freq)
array([[0.12780629, 0.15711185, 0.33061418, 0.33061418, 0.15711185,
        0.12780629],
       [0.19271258, 0.16807326, 0.15711185, 0.15711185, 0.16807326,
        0.19271258],
       [0.18145366, 0.19271258, 0.12780629, 0.12780629, 0.19271258,
        0.18145366],
       [0.18145366, 0.19271258, 0.12780629, 0.12780629, 0.19271258,
        0.18145366],
       [0.19271258, 0.16807326, 0.15711185, 0.15711185, 0.16807326,
        0.19271258],
       [0.12780629, 0.15711185, 0.33061418, 0.33061418, 0.15711185,
        0.12780629]])

No, it is full of wonky vectors of differing lengths, to do the funny
rate scaling.

Ok.

Better try using it.
1637
[afk, returned 1643]

So, here's slowed data from the mat forwardpass:

>>> (np.array([0,1,0,0,0,0]) @ mat).real
array([ 1.        ,  0.40673664, -0.66913061, -0.95105652, -0.10452846,
        0.8660254 ])

And of course we can invert it (I hope). The real part of the
recovered frequency vector:

array([-1.77253931e-09,  5.00000001e-01, -7.02464406e-11,  1.59520641e-09,
       -7.10239902e-10,  5.00000000e-01])

The recovered data is conjugate-symmetric because I had taken only the
real component: it has 0.5 as a positive-frequency wave and 0.5 as a
negative-frequency wave.

So now that we have that complex frequency data, can we unscale the
sampling using a normal ifft?

>>> np.fft.ifft(np.array([-1.77253931e-09,  5.00000001e-01, -7.02464406e-11,  1.59520641e-09,  -7.10239902e-10,  5.00000000e-01]))
array([ 0.16666667+0.00000000e+00j,  0.08333333+2.36712649e-10j,
       -0.08333333+5.19624660e-11j, -0.16666667+1.48285615e-18j,
       -0.08333333-5.19624660e-11j,  0.08333333-2.36712651e-10j])

Hmm ... maybe it needs the complex components I trimmed. Okay, lemme try again.

>>> test_freqs = np.array([0,1,0,0,0,0])
>>> test_data = (test_freqs @ mat).real
>>> test_data
array([ 1.        ,  0.40673664, -0.66913061, -0.95105652, -0.10452846,
        0.8660254 ])
>>> analysed_freqs = test_data @ mat2freq
>>> rescaled_data = np.fft.ifft(analysed_freqs)
>>> rescaled_data
array([ 0.16666667-1.44560290e-17j,  0.08333333+4.04768811e-18j,
       -0.08333333+3.14030088e-18j, -0.16666667+9.91313139e-18j,
       -0.08333333+5.12143680e-17j,  0.08333333+2.59378204e-17j])
>>> rescaled_data.real
array([ 0.16666667,  0.08333333, -0.08333333, -0.16666667, -0.08333333,
        0.08333333])

It did successfully unscale the data, although its final magnitude was
1/6th the original magnitude.
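That leftover 1/6 looks like np.fft.ifft's own 1/n normalization; a sketch that redoes the session above (same names) and multiplies it back out:

```python
import numpy as np

n = 6
freqs = np.fft.fftfreq(n)
mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(n) * 1.1))  # 10% slow
mat2freq = np.linalg.inv(mat)

test_freqs = np.array([0, 1, 0, 0, 0, 0])
test_data = (test_freqs @ mat).real
analysed_freqs = test_data @ mat2freq

# np.fft.ifft divides by n, so multiplying by n restores unit magnitude.
rescaled_data = np.fft.ifft(analysed_freqs) * n
assert np.allclose(rescaled_data.real, [1, 0.5, -0.5, -1, -0.5, 0.5])
```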

I'm excited that this can be used to make a general fourier function.
Missing parts include:
- rectangular transformation, input and output samplecount differing
- resuming of a transform, propagating the phase forward

With resuming, I'm thinking that can probably be done by simply
multiplying the matrix by a constant phase vector, to shift it. Not
quite sure how, but it looks doable from the previous stuff.
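One hedged sketch of that guess: since exp(2j*pi*f*(t+s)) = exp(2j*pi*f*s) * exp(2j*pi*f*t), starting the synthesis s samples later is the same as scaling each frequency row by a constant phase (s = 2 is just an example offset):

```python
import numpy as np

n, s = 6, 2  # s: hypothetical resume offset, in samples
freqs = np.fft.fftfreq(n)
mat = np.exp(2j * np.pi * np.outer(freqs, np.arange(n)))
shifted = np.exp(2j * np.pi * np.outer(freqs, np.arange(n) + s))

# A per-frequency phase vector applied to the rows reproduces the shift.
phase = np.exp(2j * np.pi * freqs * s)
assert np.allclose(shifted, phase[:, None] * mat)
```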

[1653]

1702

Here's my draft fourier.py (License: AGPL-3):
import numpy as np

def create_freq2time(freq_rate, time_rate, freq_count, time_count):
    freqs = np.fft.fftfreq(freq_count)
    offsets = np.arange(freq_count) * freq_rate / time_rate
    mat = np.exp(2j * np.pi * np.outer(freqs, offsets))
    return mat / freq_count # scaled to match numpy convention

def create_time2freq(time_rate, freq_rate, time_count, freq_count):
    if time_count != freq_count:
        raise NotImplementedError("differing input and output samplecounts")
    forward_mat = create_freq2time(freq_rate, time_rate, freq_count, time_count)
    reverse_mat = np.linalg.inv(forward_mat)
    return reverse_mat * freq_count * freq_count # scaled to match numpy convention

I usually make mistakes.
1703

So, in theory I can use this to extract that random data that was
resampled and repeated. I'll likely run into issues, so to handle my
funniness, I'll maybe do that in stages, and write tests that run
through the motions slowly.

Here it is passing a test that it can produce the same results as
np.fft. I had [1709] to remove the scaling from the reverse_mat, which
was already descaled by the inverse operation.

import numpy as np

def create_freq2time(freq_rate, time_rate, freq_count, time_count):
    freqs = np.fft.fftfreq(freq_count)
    offsets = np.arange(freq_count) * freq_rate / time_rate
    mat = np.exp(2j * np.pi * np.outer(freqs, offsets))
    return mat / freq_count # scaled to match numpy convention

def create_time2freq(time_rate, freq_rate, time_count, freq_count):
    if time_count != freq_count:
        raise NotImplementedError("differing input and output samplecounts")
    forward_mat = create_freq2time(freq_rate, time_rate, freq_count, time_count)
    reverse_mat = np.linalg.inv(forward_mat)
    return reverse_mat

def test():
    randvec = np.random.random(16)
    ift16 = create_freq2time(1, 1, 16, 16)
    ft16 = create_time2freq(1, 1, 16, 16)
    randvec2time = randvec @ ift16
    randvec2freq = randvec @ ft16
    randvec2ifft = np.fft.ifft(randvec)
    randvec2fft = np.fft.fft(randvec)
    assert np.allclose(randvec2ifft, randvec2time)
    assert np.allclose(randvec2fft, randvec2freq)

if __name__ == '__main__':
    test()

It's pleasant to have this basic test pass. It's a small, solid-seeming unit.

1709
1718
my test regarding scaling is failing. Maybe I'll step through it and
take notes, to sort it out.
Having some larger inhibition, so sending this. 1720

