[ot][spam][random][crazy][random][crazy]

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Wed Nov 16 02:14:45 PST 2022


    shortspace_freqs = fftfreq(len(randvec), complex = False, dc_offset = True)
    longspace_freqs = fftfreq(len(randvec), complex = False, dc_offset = True, repetition_samples = short_duration)
    assert np.allclose(longspace_freqs, shortspace_freqs * short_duration / len(randvec))

0455

shortspace_freqs does this:
(Pdb) list
 54             elif repetition_time:
 55                 min_freq = 1 / repetition_time
 56             elif repetition_samples:
 57                 min_freq = freq_sample_rate / repetition_samples
 58             else:
 59  ->             min_freq = freq_sample_rate / freq_count
 60         if max_freq is None:
 61             #max_freq = freq_sample_rate / 2
 62             max_freq = freq_count * min_freq / 2
 63             if freq_count % 2 != 0:
 64                 #max_freq -= freq_sample_rate / (2 * freq_count)
(Pdb) p freq_sample_rate
1
(Pdb) p freq_count
30
(Pdb) p freq_sample_rate / freq_count
0.03333333333333333

(Pdb) p shortspace_freqs
array([0.        , 0.03333333, 0.06666667, 0.1       , 0.13333333,
       0.16666667, 0.2       , 0.23333333, 0.26666667, 0.3       ,
       0.33333333, 0.36666667, 0.4       , 0.43333333, 0.46666667,
       0.5       ])

longspace_freqs does this:
(Pdb) list
 52             if repetition_rate:
 53                 min_freq = repetition_rate
 54             elif repetition_time:
 55                 min_freq = 1 / repetition_time
 56             elif repetition_samples:
 57  ->             min_freq = freq_sample_rate / repetition_samples
 58             else:
 59                 min_freq = freq_sample_rate / freq_count
 60         if max_freq is None:
 61             #max_freq = freq_sample_rate / 2
 62             max_freq = freq_count * min_freq / 2
(Pdb) p freq_sample_rate
1
(Pdb) p repetition_samples
369.47410639563054
(Pdb) p freq_sample_rate / repetition_samples
0.0027065496138698455

(Pdb) p longspace_freqs
array([0.        , 0.00270655, 0.0054131 , 0.00811965, 0.0108262 ,
       0.01353275, 0.0162393 , 0.01894585, 0.0216524 , 0.02435895,
       0.0270655 , 0.02977205, 0.0324786 , 0.03518514, 0.03789169,
       0.04059824])
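As a quick cross-check (plain numpy, with the values copied from the pdb session above), the printed longspace array is just integer multiples of min_freq:

```python
import numpy as np

# values copied from the pdb session above
repetition_samples = 369.47410639563054
freq_sample_rate = 1
n_bins = 16

min_freq = freq_sample_rate / repetition_samples
longspace_freqs = np.arange(n_bins) * min_freq

print(min_freq)  # 0.0027065496138698455
```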

0459
OK. So, the error is because I am scaling by freq_count, which when
complex = False is scaled up from 16 to 30. The scaling up is simply
so it can represent the number of frequencies returned; compare the
behavior of np.fft.fftfreq and np.fft.rfftfreq .
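For comparison, the stock numpy helpers show the same 16-vs-30 relationship (a sketch; the custom fftfreq's complex/dc_offset parameters don't exist in numpy): np.fft.rfftfreq(30) returns 30 // 2 + 1 = 16 nonnegative bins spaced 1/30 apart, matching the shortspace_freqs printout above.

```python
import numpy as np

n = 30  # time-domain sample count
full_bins = np.fft.fftfreq(n)   # 30 bins, nonnegative then negative
real_bins = np.fft.rfftfreq(n)  # 16 nonnegative bins, spacing 1/30

print(len(full_bins), len(real_bins))  # 30 16
print(real_bins[1])                    # 0.03333333333333333
```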

I'm using freq_count here as if it were the output sample count, which
it is not. The bug should only be encountered when complex = False.

Thinking ...

The frequencies here implicitly assume a count of output samples.
Multiplying each frequency by successive sample indices advances the
phase of the generated signal, and the signal should loop after the
total sample count.

So, fftfreq does assume a time-domain sample count that is equal to one period.

One could then interpret the output when repetition_samples is set as
more correct than when it is not.

The immediate question, then, is what time-domain sample length the
function should assume when complex = False versus complex = True.

Assuming we keep freq_count as meaning the number of frequency bins,
then it would be sample_count=freq_count for complex = True and
sample_count=(freq_count-1)*2 for complex=False . This is near what it
already does.
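That mapping can be sanity-checked against numpy (a sketch; for odd sample counts the real case rounds down, which is why (freq_count - 1) * 2 only recovers even counts):

```python
import numpy as np

for sample_count in (8, 16, 30):
    # complex case: one frequency bin per time-domain sample
    assert len(np.fft.fftfreq(sample_count)) == sample_count
    # real case: bins = sample_count // 2 + 1, so inverting gives
    # sample_count = (bins - 1) * 2 when sample_count is even
    bins = len(np.fft.rfftfreq(sample_count))
    assert (bins - 1) * 2 == sample_count
print("ok")
```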

Something I've been doing is using complex = False and imagining the
same number of frequency bins and sample bins. The idea is that I get
more frequencies and it feels more accurate or precise. This is silly
because there isn't actually that much data there. I'm likely
imagining having a set buffer size of say 1KB, and wanting to use as
much of it as I can for frequencies, while I parse data that is maybe
1MB in 1KB chunks. So, it's kind of a silly beginning of a possibly
larger scenario.

In the 1KB/1MB scenario, it could represent data that is 2KB long
using 1KB of frequencies. That seems reasonable to provide for.

Meanwhile, here I am foolishly using 16 frequency bins when I only
have 16 samples of data. It would be nice to provide for that if the
user wants to try it, using the pseudoinverse, but it doesn't quite
seem like the right thing to do.
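For reference, the pseudoinverse idea could be sketched with a small synthesis matrix (plain numpy; the sizes are hypothetical, and with bins at the natural full-circle spacing the pinv reduces to an ordinary inverse, so this only hints at the mismatched-count case):

```python
import numpy as np

n_samples = 16
freqs = np.arange(n_samples) / n_samples  # hypothetical: full-circle spacing

t = np.arange(n_samples)
# synthesis matrix: complex bin amplitudes -> time samples
basis = np.exp(2j * np.pi * freqs[None, :] * t[:, None])

# least-squares inverse; also tolerates non-square or rank-deficient bases
analysis = np.linalg.pinv(basis)

signal = np.random.default_rng(0).normal(size=n_samples)
recovered = basis @ (analysis @ signal)
print(np.allclose(recovered, signal))  # True
```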

It should indeed be assuming 30 samples of interesting data in such a situation.

This could be a change to create_freq2time, where it sets time_count
depending on whether or not freqs contains negative frequencies.

I'm trying changing create_freq2time:
    if freqs is None:
        freqs = fftfreq(time_count)
    elif time_count is None:
        if freqs[-1] < 0: # complex data
            time_count = len(freqs)
        else:
            time_count = (len(freqs) - 1) * 2
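A standalone version of that detection logic (using numpy's fftfreq/rfftfreq outputs as stand-ins for the custom ones; note it assumes an even time-domain count for real data):

```python
import numpy as np

def infer_time_count(freqs):
    # full-spectrum output (like np.fft.fftfreq) ends in negative
    # frequencies; real-spectrum output (like np.fft.rfftfreq) stays
    # nonnegative throughout
    if freqs[-1] < 0:
        return len(freqs)
    return (len(freqs) - 1) * 2  # assumes an even time-domain count

print(infer_time_count(np.fft.fftfreq(30)))   # 30
print(infer_time_count(np.fft.rfftfreq(30)))  # 30
```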

I'm stepping away from the system and could re-engage my issues that
make it hard to do, so I'm attaching the file to make it easier to
find again.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fourier.py
Type: text/x-python
Size: 10295 bytes
Desc: not available
URL: <https://lists.cpunks.org/pipermail/cypherpunks/attachments/20221116/ca7826c2/attachment-0001.py>


More information about the cypherpunks mailing list