[ot][spam][random][crazy][random][crazy]

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sat Nov 12 06:31:11 PST 2022


trying to return to work
{possible additional information is that cognitive and logical and
[likely?] decision-tree concepts have similar properties like the
distributive law and consideration as spaces.}

ok so this thing is harmonizing when aligned because of 240 deg/sample,
which is -120 deg/sample, for 6 samples.
6 samples at -120 deg/sample is [0, -120, -240, -360, -480, -600],
which modulo 360 is [0, -120, 120, 0, -120, 120]

That's why it self-interferes: all its expanded angles oppose.
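
A quick sanity check of that cancellation (a minimal sketch in plain
numpy, nothing here depends on earlier session state):

import numpy as np

# angles of a -120 deg/sample (i.e. 240 deg/sample) wave over 6 samples
angles = np.arange(6) * -120.0
phasors = np.exp(1j * np.deg2rad(angles))
print(np.round(angles % 360))   # [0. 240. 120. 0. 240. 120.], same set modulo 360
print(abs(phasors.sum()))       # ~0: the expanded angles oppose and cancel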

I'm quickly thinking that, given our data is accelerated or decelerated
by a misaligned sampling rate, there may be some way to realign it
all, but I haven't considered the sampling error in this way.

We harmonized the 300 deg/sample data. The other data was 180
deg/sample, which self-interferes at 2 samples. After the change, it
is now accelerating at -120 deg/sample, which self-interferes at 3
samples.
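
A minimal check of those two periods (plain numpy, nothing else assumed):

import numpy as np

# a 180 deg/sample wave cancels over any 2 consecutive samples,
# a -120 deg/sample wave over any 3
print(abs(np.exp(1j * np.deg2rad(np.arange(2) * 180.0)).sum()))   # ~0
print(abs(np.exp(1j * np.deg2rad(np.arange(3) * -120.0)).sum()))  # ~0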

Ok.

Let's look at data that has been sampled at a rate that is off by 10%, now.
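
(signal_a, signal_b and signal_sum aren't defined in this message. To
reproduce the snippets below I'm assuming the simplest definitions
consistent with the rates discussed, signal_a at 60 deg/sample,
signal_b at 180 deg/sample, and signal_sum their sum; these exact
forms are a guess on my part, not quoted from anywhere.)

import numpy as np

# guessed definitions: unit-magnitude complex waves, rates in deg/sample
signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)     # 60 deg/sample
signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)    # 180 deg/sample
signal_sum = lambda t: signal_a(t) + signal_b(t)           # their sum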

Here's how the aligned accelerated signals were made:

>>> sample_idcs = np.arange(6)
>>> aligned_products = [signal(sample_idcs) * np.exp(sample_idcs * 2j * np.pi * np.fft.fftfreq(6)[2]) for signal in (signal_a, signal_b, signal_sum)]
>>> [(abs(product), np.angle(product) * 180 // np.pi) for product in aligned_products]
[(array([1., 1., 1., 1., 1., 1.]), array([  0., 180.,  -1., 179.,
-1., 179.])), (array([1., 1., 1., 1., 1., 1.]), array([   0.,  -61.,
-121.,  179.,  119.,   59.])), (array([2., 1., 1., 2., 1., 1.]),
array([   0., -121.,  -61.,  179.,   59.,  119.]))]

With sample rate off by 10%:

>>> scaled_products = [signal(sample_idcs*1.1) * np.exp(sample_idcs * 1.1 * 2j * np.pi * np.fft.fftfreq(6)[2]) for signal in (signal_a, signal_b, signal_sum)]
>>> [(abs(product), np.angle(product) * 180 // np.pi) for product in scaled_products]
[(array([1., 1., 1., 1., 1., 1.]), array([   0., -162.,   36., -127.,
 72.,  -91.])), (array([1., 1., 1., 1., 1., 1.]), array([   0.,  -30.,
 -60.,  -91., -120., -151.])), (array([2.        , 0.81347329,
1.33826121, 1.90211303, 0.20905693,
       1.73205081]), array([   0.,  -96.,  -12., -109.,  156., -121.]))]

Organizing the output:

signal_a offsets scaled by 10%
array([1., 1., 1., 1., 1., 1.])
array([   0., -162.,   36., -127.,   72.,  -91.])

signal_b offsets scaled by 10%
array([1., 1., 1., 1., 1., 1.])
array([   0.,  -30.,  -60.,  -91., -120., -151.])

signal_sum offsets scaled by 10%
array([2.        , 0.81347329, 1.33826121, 1.90211303, 0.20905693,
       1.73205081])
array([   0.,  -96.,  -12., -109.,  156., -121.])

We can see the -162 = 198 deg/s rate of signal_a, and the -30 = 330
deg/s rate of signal_b.
When I think of scaling the sampling points of a signal, this seems
roughly equivalent to scaling its rate by the same amount. It seems
helpful to think of that individually.

>>> np.angle(signal_a(sample_idcs)) * 180 // np.pi
array([   0.,   59.,  119.,  180., -121.,  -60.])
>>> np.angle(signal_a(sample_idcs*1.1)) * 180 // np.pi
array([   0.,   66.,  132., -162.,  -96.,  -31.])
>>> np.angle(signal_a(sample_idcs)) * 180 * 1.1 // np.pi
array([   0.,   66.,  132.,  198., -133.,  -67.])

Yes, scaling the indices is the same operation as scaling the
(unwrapped) angles. The last two entries above only look different
because np.angle wraps to (-180, 180] before the scaling is applied.
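
One way to see the equivalence precisely: np.unwrap removes the
wrapping first, and then the two scalings agree modulo 360 (using the
guessed signal_a from before):

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)   # assumed, as above

idcs = np.arange(6)
unwrapped = np.unwrap(np.angle(signal_a(idcs)))              # cumulative angles, no wrap
scaled_angles = np.rad2deg(unwrapped) * 1.1                  # scale the angles
scaled_indices = np.rad2deg(np.angle(signal_a(idcs * 1.1)))  # scale the indices
print(np.allclose(scaled_angles % 360, scaled_indices % 360))  # True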

So, I mean, simplifying all this a lot, it seems it would make sense
to unscale the angles right off. I might be missing something! It's
still inhibited to think about.

But couldn't we take the output, and just decelerate it by 10%?

To do that, I'd need a wave that decelerates things by 10%. Is that something waves do?

The previous "acceleration" we were using added a constant to every increment.
Scaling by 1.1 does something different: it multiplies each angle by
the same factor, so the amount a wave is accelerated by depends on
what its rate already is.

So, this may not preserve across addition and multiplication, since I
haven't thought much about what it means yet, consciously.

I tried multiplying the angles by 1.1 and 1/1.1 a few times but didn't
get anywhere I understood. I'll go back to looking at it closely.

0831

signal_a offsets scaled by 10%
array([1., 1., 1., 1., 1., 1.])
array([   0., -162.,   36., -127.,   72.,  -91.])

This is advancing at -162 = 198 degrees per sample. It's composed of a
60 deg/s wave sampled at 110% the rate, multiplied by a 120 deg/s wave
sampled at 110% the rate.
At 100% sampling rate, these multiplied to make a 180 deg/s wave that
self-interfered.

Now, its 198 deg/s rate can be calculated as 60*1.1 + 120*1.1 = 198.
Notably, 198 = 180 * 1.1 .

So the sample-rate scaling is preserved _after_ the multiplication: it
distributes out over the product.
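
A small check of that distribution, using the guessed signal_a from
before: the angle step of the product of the two scaled waves comes
out as 1.1 * (60 + 120).

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)   # assumed, as above

n = np.arange(6)
fourier_120 = np.exp(n * 1.1 * 2j * np.pi * 120 / 360)   # 120 deg/sample wave at scaled points
product = signal_a(n * 1.1) * fourier_120
step = np.diff(np.unwrap(np.angle(product)))
print(np.rad2deg(step) % 360)   # ~198 at every step, i.e. 1.1 * (60 + 120) modulo 360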

signal_b sample rate scaled to 110%
array([1., 1., 1., 1., 1., 1.])
array([   0.,  -30.,  -60.,  -91., -120., -151.])

signal_b was 180 deg/s. So (180 + 120) = 300, and 300 * 1.1 = 330.
It's advancing at 330 deg/s, which appears as -30 modulo 360.

{It's a little nice at this time to connect with the larger goal of
extracting high frequency signals from low frequency data. This
operation of acceleration during harmonization with a fourier wave
differentiates between a -30 deg/s wave and a 330 deg/s wave, even
though they are the same in a single snapshot. This similarity could
help the whole idea unify some, but it is so hard to think.}

signal_sum sample rate scaled to 110%
array([2.        , 0.81347329, 1.33826121, 1.90211303, 0.20905693,
       1.73205081])
array([   0.,  -96.,  -12., -109.,  156., -121.])

We should now be able to consider this resulting wave as a composition
of the two underlying waves.
There's a slow one at 198 deg/s: originally at 60 deg/s, undersampled
to 66 deg/s, and then harmonized with the 120 deg/s wave (undersampled
to 132 deg/s) up to 198 deg/s.
Then there's a fast one at 330 deg/s: originally at 180 deg/s,
undersampled to 198 deg/s, and then harmonized with the same 120 (or
132) deg/s wave up to 330 deg/s.
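
As a check, using the guessed definitions from before: the summed,
scaled, harmonized output really is just a 198 deg/sample phasor plus
a 330 deg/sample phasor, and that composition reproduces the
magnitudes quoted above.

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)    # assumed, as above
signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)
signal_sum = lambda t: signal_a(t) + signal_b(t)

n = np.arange(6)
product = signal_sum(n * 1.1) * np.exp(n * 1.1 * 2j * np.pi * np.fft.fftfreq(6)[2])
composed = np.exp(2j * np.pi * n * 198 / 360) + np.exp(2j * np.pi * n * 330 / 360)
print(np.allclose(product, composed))   # True
print(np.round(abs(product), 3))        # matches the magnitudes quoted above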

The signals are unaligned and are making noise. Further accelerations
or decelerations could be applied to make them harmonize /
constructively interfere, or desynchronize / destructively interfere,
so that they sum to 6 or go away entirely. A sum would contain the
other waves as well.

There's a space here where one could reconsider the fourier transform
to be appropriate overall for the situation, by considering how it
uses these various de/accelerations to completely synchronise or
desynchronise components at the same time.

For now, there are 6 samples, so things that self-interfere in up to 6
units work. The previous angles self-interfered in up to 3: that means
0 degrees (1 sample), 180 degrees (2 samples), or 120 degrees (3
samples).

The original data is at 60 deg/s and 180 deg/s, and the original
harmonizing wave was at 120 deg/s. We're trying to use that
harmonizing wave to demonstrate that the original data was not at 120
deg/s at all, and was entirely at 0 deg/s, 120 deg/s, and/or 180 deg/s
.

So here's the relevant point of current confusion: given the angles
are all scaled to 110%, can I realign them so that they rest at
multiples of 120 and 180 deg/s?
0847

signal_a
>>> np.arange(6) * (60*1.1+120*1.1)
array([  0., 198., 396., 594., 792., 990.])

signal_b
>>> np.arange(6) * (180*1.1+120*1.1)
array([   0.,  330.,  660.,  990., 1320., 1650.])

They exist as a vector sum. Each sample in the sum can have its angle
added to or subtracted from, and the changes to the angle will
propagate back to the original wave's contribution. Multiplying the
angle does not have the same effect, due to the modulo 360 behavior.

0852

It seems reasonable to consider the first sample, at 198 deg and 330 deg .
198 is 18 more than 180, the original angle, whereas 330 is 30 more
than 300, the original angle.

I'm looking at this a smidge more and seeing that it's not at 120
degrees that signal_b usually cancels out: it's at -60 degrees, which
is 300:
>>> freq_120 = np.exp(sample_idcs * 2j * np.pi * 120/360)
>>> np.angle(signal_b(np.arange(6))*freq_120)*180//np.pi
array([   0.,  -61., -121.,  179.,  119.,   59.])

It takes all 6 samples for it to cancel at 300 degrees, as that 60
degree offset accumulates over the circle.
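
And indeed those six samples sum to (roughly) zero; a minimal check
with the guessed signal_b:

import numpy as np

signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)   # assumed, as above
freq_120 = np.exp(np.arange(6) * 2j * np.pi * 120 / 360)
print(abs((signal_b(np.arange(6)) * freq_120).sum()))     # ~0: cancels over all 6 samples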

The offsets for the samples taken at the high rate are 18 (198 - 180)
and 30 (330 - 300).
Since their magnitude is different, multiplication of them would be
needed to return them to their individual values. So far, the only
operation on angles I've found a way to preserve across addition of
the signals, is addition of further angles. Given these 18 and 30
degree offsets are combined together, it doesn't seem like I'd be
aligning the underlying waves in the first sample using simple
addition of angles.

- there may be a way to scale the angles
- the fourier frequencies could be sampled at different points than
the signals, to scale things that way, maybe
- the data could be aligned in a different way such that cancellation
still works
- other options

it got harder to think when i realised the fourier frequency scaling
may work here. i'm not quite sure it does, especially after briefly
trying it while typing this paragraph. the scaling comes out via
distribution from the signal and fourier sampling. maybe it can work!

the sample angles are scaled by 1.1
the fourier angles are scaled by 1.1
the fourier angles are then added to the sample angles.
if i wanted to change the fourier angles so as to return the sample
angles to zero, i would need to shrink them differently for each one,
kind of like a + b = c * d: solve for a b that undoes d but ends up
depending on c.

that doesn't sound like the approach to me, but i'm having some
difficulty considering it. it seems analogous to the situation after
the sample scaling.

let's see about angles and frequencies.
let's call the angular rate the frequency, so 180 deg/s is a
frequency. this appears to be the same frequency passed to the exp
function.
we're scaling the frequency of the signal by 1.1
then we're scaling the frequency of the fourier wave by 1.1
then we're multiplying the two waves, which adds their frequencies:
output_freq = sample_freq * 1.1 + fourier_freq * 1.1
the result is an output_freq that is scaled by 1.1 .
if i want to undo the * 1.1 by reducing the fourier_freq, it won't
backpropagate to two different sample frequencies. it will instead
depend on them.
sample_freq + fourier_freq = sample_freq * 1.1 + X
X = sample_freq + fourier_freq - sample_freq * 1.1
X = fourier_freq - sample_freq * 0.1

So, an individual component can be processed correctly by subtracting
from the fourier frequency a fraction of that component's frequency,
proportional to the change in sampling rate.
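
A numerical check of that adjustment for the 60 deg/sample component
(signal_a as guessed before): with sample_freq = 60 and fourier_freq =
120, X = 120 - 60 * 0.1 = 114, and multiplying the scaled samples by a
114 deg/sample wave restores the 180 deg/sample product that
self-interferes in 2 samples.

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)   # assumed, as above

n = np.arange(6)
X = 120 - 60 * 0.1                                       # adjusted fourier frequency, deg/sample
product = signal_a(n * 1.1) * np.exp(n * 2j * np.pi * X / 360)
print(np.round(np.rad2deg(np.angle(product))))           # ~[0, ±180, 0, ±180, 0, ±180]
print(abs(product.sum()))                                # ~0: back to self-interference in 2 samples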

This works for only one component, so the other components would need
some form of destructive interference.

What happens if there are two sample frequencies?

it's helpful to unify the wave operations. let's take this funny
adjuster wave and put it into the larger space of concepts.

X = fourier_freq - sample_freq * 0.1
this X is a new fourier_freq, a fourier wave adjusted for sample_freq * 1.1
it's hard to hold all these together.

so we could have
output_wave = (sample_1_wave & sample_2_wave) + fourier_wave
where fourier_wave could be 120 Hz, or 120*1.1 = 132 Hz, or it could
be de-adjusted for either sample_1 or sample_2 .
All those options exist. We're considering adding the option of
de-adjusting these waves, to the output, to look for recombinations
where everything destructively interferes.

The impact of these de-adjustments applies to both waves. [0910?] Each
de-adjustment includes a subtraction of 10% of one wave's original
combined frequency, so that that wave can individually return to its
original frequency. [0912].

The waves are at 330 and 198 deg/s, so the de-adjustments will be 30
and 18 deg/s.
So the de-adjustment by 30 gives the output pair (300, 198-30=168)
deg/s, and the de-adjustment by 18 gives (330-18=312, 180) deg/s.

Because the mutation provides for aligning the waves at 300 deg/s and
180 deg/s, it's important to consider. Being aware of the paired
values of 168 deg/s and 312 deg/s also seems important.

There may be some way in frequency space to resonate these small
offsets associated with 10% of the various original frequencies, so
that they themselves self-interfere destructively.

It looks, however, like it could be more reasonable to consider a
mutation of the fourier transform itself, to handle this situation. At
this point, it seems this would reduce the number of parts to consider
at once.
-
0914 [0915]

It seems the fourier transform basically scales everything such that
the output waves hit even fractions of their rotations every sample.
So if you have 6 samples, you want the output waves to engage 360/6 =
60 degree points of their phases, so as to self-interfere
destructively.

Where can that come from if the sample rate is scaled?

>>> np.angle(signal_a(np.arange(6)*1.1))*180//np.pi
array([   0.,   66.,  132., -162.,  -96.,  -31.])

Here we can see a 60 deg/sample signal being snapshotted at 110% of
its sample points.
We don't actually have data from it at the 60 degree points. We have
offset data.

The new realization is that this data can be made to self-interfere,
by using a harmonizing wave that is appropriately shifted.

>>> un_60_10 = np.exp(np.arange(6) * (1 - 60/360 * 0.1) * 2j * np.pi)
>>> np.angle(signal_a(np.arange(6)*1.1) * un_60_10)*180//np.pi
array([   0.,   59.,  119.,  179., -121.,  -61.])

This wave's speed is altered specifically for this precise wave of
interest. The way it strikes the signal slightly differently at each
sample rotates all the samples back, producing output as if the wave
had been sampled at the right rate.

The problem is that the original wave was alongside many other waves.
So this can be used to resonate the wave, or to make it
self-interfere, but the other waves get garbled.
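
A sketch of that, using the guessed definitions: undoing the full 66
deg/sample of the scaled signal_a resonates it (its six samples sum to
6), while signal_b's contribution lands at 198 - 66 = 132 deg/sample,
which isn't a multiple of 60 and so leaves a garbled, but small and
exactly calculable, residue.

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)    # assumed, as above
signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)
signal_sum = lambda t: signal_a(t) + signal_b(t)

n = np.arange(6)
resonate_a = np.exp(-2j * np.pi * n * 66 / 360)           # undo signal_a's effective 66 deg/sample
total = (signal_sum(n * 1.1) * resonate_a).sum()
a_part = (signal_a(n * 1.1) * resonate_a).sum()           # = 6: fully resonated
b_part = (signal_b(n * 1.1) * resonate_a).sum()           # nonzero residue from the 132 deg/sample leftover
print(np.round(a_part, 6), np.round(b_part, 6), np.round(total, 6))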

[i'm having some trouble holding the idea of how aligning the other
waves can be similar to the other fourier transform. i might be
holding some contradictory assumptions in maybe a small part of my
considering. for my consciousness it's like amnesia and blocking and
stuff.]

what does the fourier transform do to a single wave?
  overall, the fourier transform lets you extract waves by resonating
them in ways that destructively interfere the others. it does this all
at the same time, but can be broken apart to be considered in parts.

so we would roughly want to resonate the wave while destructively
interfering the others.
what's notable is that we know the frequencies of all the waves.

given we know the frequencies of all the waves, and here we even know
their phases, there should be a way to extract them precisely from a
sufficient quantity of data containing them summed in known
proportions. it is of course possible that that isn't reasonable to
do.

so i'm thinking: how can i extract this, when it is summed with
something else?
i found that when i hit it with this special wave, the other wave, the
one i'm not focusing on, gets accelerated a different way.

here, since i happen to know the phase of the data, i could actually
craft a destructive wave, and send it in, and the two would sum to 0.
there is probably a way in a larger metaspace of waves, to craft a
metawave that would do that regardless of phase. there may even be a
normative way to do that, not sure.

the kind of obvious way of observing this is that, given the 10%
increase is a rational fraction, many frequencies can be added so that
the original fourier destructiveness happens, with 10 overlaps through
all the data. i think there is a better solution, though. but it may
engage a generalisation of spaces like that x10 space. we could for
example imagine infinite frequencies coming in smoothly.
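
one way to read that (my own reading, not established above): since
1.1 = 11/10, the scaled rates 66 and 198 deg/sample still sit on a
grid of 6 degrees, so with 10 times the data (60 samples) an ordinary
fourier bin separates them exactly again.

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)    # assumed, as above
signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)

n = np.arange(60)                                  # 10 "overlaps" of the 6-sample pattern
bin_66 = np.exp(-2j * np.pi * n * 66 / 360)        # ordinary fourier bin at 66 deg/sample
print(abs((signal_a(n * 1.1) * bin_66).sum()))     # 60: resonates exactly
print(abs((signal_b(n * 1.1) * bin_66).sum()))     # ~0: cancels exactly over 60 samples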

>>> np.angle(signal_a(np.arange(6)*1.1))*180//np.pi
array([   0.,   66.,  132., -162.,  -96.,  -31.])
>>> un_60_10 = np.exp(np.arange(6) * (1 - 60/360 * 0.1) * 2j * np.pi)
>>> np.angle(signal_a(np.arange(6)*1.1) * un_60_10)*180//np.pi
array([   0.,   59.,  119.,  179., -121.,  -61.])

>>> np.angle(signal_b(np.arange(6)*1.1))*180//np.pi
array([   0., -162.,   36., -127.,   72.,  -91.])
>>> un_180_10 = np.exp(np.arange(6) * (1 - 180/360 * 0.1) * 2j * np.pi)
>>> np.angle(signal_b(np.arange(6)*1.1) * un_180_10)*180//np.pi
array([   0., -181.,    0.,  179.,    0.,  179.])

There are signal_a and signal_b being separately aligned again.
They're being aligned to somewhat specific values, but these values
could be anything.

The fourier transform makes a matrix of coefficients for different
offsets and frequencies. Its goal is simply to do that resonance and
deresonance. It needs enough points to do that.

We can collect a lot of interesting points here, and combine them such
that they resonate.
In fact, we could collect every frequency, and combine it such that it
resonated.
In the background of each would be some "noise" -- the other
frequencies combined according to the adjustment offsets. It would be
much, much smaller than the incoming data. It also has precise
arithmetic properties that can be calculated.
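
That background term is a finite geometric series, so it can be
written down exactly. For example, the leakage of the 132 deg/sample
leftover from the sketch above (my own framing of "precise arithmetic
properties"):

import numpy as np

# leakage of a phasor stepping d radians per sample, summed over N samples:
#   sum_{n=0}^{N-1} exp(i*n*d) = (1 - exp(i*N*d)) / (1 - exp(i*d)),  d not a multiple of 2*pi
d = np.deg2rad(132.0)
N = 6
direct = np.exp(1j * d * np.arange(N)).sum()
closed = (1 - np.exp(1j * N * d)) / (1 - np.exp(1j * d))
print(np.allclose(direct, closed))   # True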

So there is an initial solution here that involves a feedback loop:
each frequency could be resonated precisely, and this initial result
could be assumed. Then, assuming that result, a destructive wave for
the whole signal could be made, and sent back in. The remaining signal
would show the error in the construction of the wave, and this error
could be used to inform the solution with greater precision. This
would be repeated ad infinitum until the precise solution is reached.

Then, flattening that recursive approach would be the exact solution.
I don't have the training to know how to do that readily, but it shows
there is a form for a precise solution.
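
A sketch of that loop under the same guessed setup: the known scaled
frequencies are 66 and 198 deg/sample, each pass resonates each one
against the current residual, and the flattened version is just
solving the small linear system directly with np.linalg.lstsq. The
names and the convergence here are my own illustration, not something
established above.

import numpy as np

signal_a = lambda t: np.exp(2j * np.pi * t * 60 / 360)    # assumed, as above
signal_b = lambda t: np.exp(2j * np.pi * t * 180 / 360)

n = np.arange(6)
data = signal_a(n * 1.1) + signal_b(n * 1.1)              # the summed, mis-sampled data
rates = np.array([66.0, 198.0])                           # known effective deg/sample rates
basis = np.exp(2j * np.pi * np.outer(n, rates) / 360)     # one column per component

# feedback loop: resonate each frequency against the residual, accumulate, repeat
coefs = np.zeros(2, dtype=complex)
for _ in range(50):
    residual = data - basis @ coefs
    coefs += basis.conj().T @ residual / len(n)           # per-frequency resonance of the residual
print(np.round(coefs, 6))                                 # ~[1, 1]: each wave recovered at unit amplitude

# flattening the recursion: the exact solution in one step
exact, *_ = np.linalg.lstsq(basis, data, rcond=None)
print(np.round(exact, 6))                                 # ~[1, 1]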

