# [ot][spam][random][crazy][random][crazy]

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Fri Nov 18 07:21:12 PST 2022

```0951

I'm on line 321 -> assert np.allclose(longvec, inserting_spectrum @ inserting_ift).

It looks like the two vectors are actually roughly matching:
(Pdb) p shortspace_freqs / longspace_freqs
array([        nan, 23.09213165, 23.09213165, 23.09213165, 23.09213165,
23.09213165, 23.09213165, 23.09213165, 23.09213165])
(Pdb) p (inserting_spectrum @ inserting_ift)[23]
0.71518936637242
(Pdb) p randvec[1]
0.7151893663724195

But not at interim locations:
(Pdb) p (inserting_spectrum @ inserting_ift)[12]
0.6963222972034951

I was imagining the step function would somehow magically break the
indexing the same way I was, but it actually changes different
frequencies at different samples, such that there is still
interpolation between each value.

(Pdb) p np.count_nonzero((inserting_ift[:,:-1] == inserting_ift[:,1:]).all(axis=0))
815
(Pdb) p inserting_ift.shape[-1]
1600

A little under half of the samples are interpolated with their neighbors.
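# A hedged toy sketch (illustrative data, not the actual inserting_ift)
# of the duplicate-column check above: an output column equal to the
# next column means the wavelets evaluate identically at both samples,
# so the output value is held rather than interpolated between them.
import numpy as np

toy = np.array([[1, 1, 2, 2, 2],
                [3, 3, 4, 4, 5]])
# adjacent column pairs: (0,1) equal, (1,2) differ, (2,3) equal, (3,4) differ
dup = np.count_nonzero((toy[:, :-1] == toy[:, 1:]).all(axis=0))
# dup == 2 of the 4 adjacent pairs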

Between samples 5 and 6, only one frequency changed. I imagine this is likely normal:
(Pdb) p inserting_ift[:,5]
array([ 0.0625, -0.    ,  0.125 , -0.    ,  0.125 , -0.    ,  0.125 ,
-0.    ,  0.125 , -0.    ,  0.125 , -0.    ,  0.125 , -0.    ,
0.125 , -0.125 ,  0.0625,  0.0625])
(Pdb) p inserting_ift[:,6]
array([ 0.0625, -0.    ,  0.125 , -0.    ,  0.125 , -0.    ,  0.125 ,
-0.    ,  0.125 , -0.    ,  0.125 , -0.    ,  0.125 , -0.125 ,
0.125 , -0.125 ,  0.0625,  0.0625])

(Pdb) p longspace_freqs
array([0.        , 0.00270655, 0.0054131 , 0.00811965, 0.0108262 ,
0.01353275, 0.0162393 , 0.01894585, 0.0216524 ])

I'm using a 3-valued function now, of course.

1007
I tried changing to a square wave in different ways, but I wasn't
understanding things quite right. I think I'd need to remove some of
the restrictions on the wavelets (which I wrote wrongly). That would
be easier if I had noted what they stemmed from.

Anyway, it's notable here that the long-output frequencies are an
exact multiple of the same-length-output frequencies. The data is
basically the same; the interpolation arises only because the wavelets
evaluate differently at the in-between samples.
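# A hedged sketch of that claim (all names and sizes illustrative):
# reusing the same spectrum but evaluating the inverse-transform waves
# on a denser time grid reproduces the original samples exactly at the
# original positions and interpolates at the in-between ones.
import numpy as np

n, upsample = 8, 4
x = np.random.default_rng(1).random(n)
spec = np.fft.fft(x)

k = np.arange(n)
t_fine = np.arange(n * upsample) / upsample   # time in units of original samples
# each wave exp(2*pi*i*k*t/n) evaluated at the in-between positions
waves = np.exp(2j * np.pi * np.outer(k, t_fine) / n)
long_out = (spec @ waves).real / n

# every `upsample`-th fine sample lands on an original sample position
assert np.allclose(long_out[::upsample], x)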

I guess it's interesting to think about that, although I wonder how
important it is.

Each wavelet basically has a phase. The phase is selected in order to
match best the data that is available, so each wavelet will change
sign at some relatively arbitrary point between the samples.

This is because the model of the signal is as a sum of
arbitrarily-phased waves, most of which are significantly below the
nyquist frequency and hence cross many samples.

A model that more accurately reflected the nearest-neighbor sampling
I quickly wrote might be different. For one thing, it wouldn't allow
phases that cross between samples. It could also use pulses rather
than waves that span the whole length, and could use more instances
of higher frequencies.

The most important component would be not allowing for phases that
imply nonintegral sample index changes.
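# A hedged sketch of that alternative model (names and sizes are
# illustrative): rectangular pulses aligned to integer sample indices,
# instead of full-length arbitrarily-phased waves, so nothing changes
# sign between samples and no interpolation appears in the long output.
import numpy as np

n, upsample = 4, 3
# each row is a pulse that is 1 over exactly one original sample's span
basis = np.kron(np.eye(n), np.ones(upsample))   # shape (4, 12)
x = np.array([0.2, 0.9, 0.4, 0.7])
long_out = x @ basis
# long_out holds each input value constant for `upsample` fine samples:
# nearest-neighbor upsampling, no in-between values invented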

The phase is _roughly_ the arctangent of the imaginary over the real
component, both of which are produced linearly from the input using
the values of the wave offset by 90 degrees.
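# A hedged sketch of that phase description (values illustrative): for
# a DFT bin, the real part comes from a cosine row and the imaginary
# part from the same wave offset 90 degrees, and the phase is their
# quadrant-aware arctangent (atan2 of imaginary over real).
import numpy as np

n = 16
t = np.arange(n)
sig = np.cos(2 * np.pi * 3 * t / n + 0.7)   # 3 cycles with phase 0.7
spec = np.fft.fft(sig)
phase = np.arctan2(spec.imag[3], spec.real[3])
# phase recovers 0.7 (up to float error) for the k=3 bin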

So the subinteger phases seem to come first from the fact that 90
degrees off a sample index is a subinteger sample, and second from the
fact that the longvector output evaluates many, many subinteger
samples. Is this right? Maybe not?

The subinteger phases are produced by the inverse of the matrix that
uses the wave value at 90 degrees.

It's further confusing stuff. Everything is so confusing when it gets
all inhibited.

I like atm that the puzzle is mostly abstract. I don't need to e.g.
buy a motor or something, to test it. This means I can work on it
engaging fewer of my behavioral inhibitions.

The general question is: is it reasonable to model nearest-neighbor
sampling using something like these matrices, based on Fourier
transforms? Or would that be a different approach?

I'm imagining that, say, I have a 40 Hz step-function signal, changing
only at multiples of 1/40 s, and then I sample it at say 37 or 82 Hz:
I'd like to inform the decoder that the signal is a 40 Hz step
function and recover it with precision.
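# A hedged sketch of that scenario (rates and names illustrative): a
# 40 Hz step function sampled with nearest-neighbor / zero-order hold
# at 37 Hz. A decoder that knows the 40 Hz step grid can pin down
# exactly the level of every interval a sample actually landed in.
import numpy as np

levels = np.random.default_rng(0).random(40)   # one level per 1/40 s interval
t37 = np.arange(37) / 37.0                     # one second sampled at 37 Hz
hit = np.floor(t37 * 40).astype(int)           # which step each sample sees
samples = levels[hit]

recovered = np.full(40, np.nan)
recovered[hit] = samples
# 37 of the 40 intervals are observed and recovered exactly; the other
# 3 are never sampled at 37 Hz, so exact recovery of every interval
# needs a rate that covers them all (e.g. 82 Hz)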
1021
```

More information about the cypherpunks mailing list