websearch result: "If inverse of the matrix is equal to its transpose, then it is an orthogonal matrix" 0508 https://en.wikipedia.org/wiki/Orthogonal_matrix
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
https://en.wikipedia.org/wiki/Orthogonal_matrix#Properties
A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space Rn with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of Rn.
I infer the rows and columns must all be at 90 degree angles to each other. In the complex domain, where each value is itself a 2-vector, that's a little confusing. I'm considering thinking of these orthonormal vectors as planes that are perpendicular to each other in 4-dimensional space, but maybe it will be clearer once I look straight at the numbers, later. Anyway, with the self-inverse ("idempotent") transforms it makes sense that the vectors are orthogonal and define a coordinate space. The article says that every rotation (special orthogonal) matrix can be shuffled, by an orthogonal change of basis, into a block diagonal of 2x2 rotation matrices, with an extra 1 on the diagonal when the row and column count is odd (general orthogonal matrices can also pick up -1 entries, i.e. a reflection). https://en.wikipedia.org/wiki/Orthonormality
two vectors in an inner product space are orthonormal if they are orthogonal (or perpendicular along a line) unit vectors.
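The forward direction of that canonical-form claim is easy to sanity check: a block diagonal of 2x2 rotation matrices, padded with a lone 1 when the size is odd, really is orthogonal. A minimal sketch (my own construction; it does not verify the harder converse, that every orthogonal matrix can be brought into this form):

import numpy as np

def rotation_block(theta):
    # 2x2 rotation matrix for angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# block diagonal of two rotation blocks plus a lone 1 (odd size, 5x5)
Q = np.zeros((5, 5))
Q[0:2, 0:2] = rotation_block(0.3)
Q[2:4, 2:4] = rotation_block(1.7)
Q[4, 4] = 1.0

# orthogonal: the transpose is the inverse, so rows and columns are orthonormal
assert np.allclose(Q.T @ Q, np.eye(5))
assert np.allclose(Q @ Q.T, np.eye(5))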
https://en.wikipedia.org/wiki/Orthogonal_matrix#Overview
Orthogonal matrices preserve the dot product,[1] so, for vectors u and v in an n-dimensional real Euclidean space u dot v = (Qu) dot (Qv) where Q is an orthogonal matrix.
This implies the dot product of values in frequency space is off from the dot product in time space only by a constant factor (here, the 1/sqrt(2) normalization of the transform):
>>> np.dot(v,v)
1.198167136558938
>>> np.dot(v@mat/2**.5, v@mat/2**.5)
(1.1981671365589377+1.1083248738116542e-17j)
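For reference, a self-contained version of that check (building mat the same way it is built further down, and reusing the v values printed later in these notes; the general complex rule would conjugate one argument, but these transformed values are essentially real, so the plain dot product works):

import numpy as np

freqs = np.fft.fftfreq(2)                                      # [0.0, -0.5]
mat = np.exp(np.outer(np.array([0, 1]), 2j * np.pi * freqs))
Q = mat / 2**.5                                                # scaled so its rows/columns are orthonormal

v = np.array([0.71733727, 0.82679766])                         # the test vector from these notes

# the dot product survives the change of basis
assert np.allclose(np.dot(v, v), np.dot(v @ Q, v @ Q))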
The angle between vectors is therefore the same. This makes sense, since everything is simply expressed in a different coordinate basis. 0524 https://math.stackexchange.com/a/835837
Every real Householder reflection matrix is a symmetric orthogonal matrix, but its entries can be quite arbitrary.
In general, if $A$ is symmetric, it is orthogonally diagonalisable and all its eigenvalues are real. If it is also orthogonal, its eigenvalues must be 1 or -1. It follows that every symmetric orthogonal matrix is of the form $QDQ^\top$, where $Q$ is a real orthogonal matrix and $D$ is a diagonal matrix whose diagonal entries are 1 or -1.
In more geometric terms, such a matrix is an orthogonal reflection through a subspace. I.e., if $A$ is symmetric and orthogonal, then $P:=\frac12(A+I)$ is an orthogonal projection, and $A=2P-I$ is the reflection through the image of $P$.
not immediately sure what all that math syntax represents. browser renders it if i turn on javascript from cloudflare.com or such.
if A is symmetric and orthogonal, then P := 1/2 (A+I) is an orthogonal projection, and A = 2P−I is the reflection through the image of P.
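A quick numerical check of that statement, using the normalized two-point transform matrix (which is real, symmetric, and orthogonal) as A — a minimal sketch:

import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / 2**.5        # symmetric and orthogonal
assert np.allclose(A, A.T)
assert np.allclose(A.T @ A, np.eye(2))

P = (A + np.eye(2)) / 2                    # claimed orthogonal projection
assert np.allclose(P @ P, P)               # idempotent
assert np.allclose(P, P.T)                 # symmetric, so an orthogonal projection
assert np.allclose(2 * P - np.eye(2), A)   # A is the reflection through the image of P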
You can construct orthogonal and symmetric matrices using a nice parametrization from Sanyal [1] and Mortari [2].
One can construct such a matrix with a choice of $n$ orthogonal vectors $\{r_k\}_{k=1}^n$ and the desired number of positive eigenvalues $p \in [0,n]$: $$R = \sum_{k=1}^{p} r_k r_k^\top - \sum_{k=p+1}^{n} r_k r_k^\top$$ They also point out that
if p=n, then R=I whereas if p=0, then R=−I.
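A sketch of that construction (my own code; I take an orthonormal basis from a QR factorization, since the r_k presumably need to be unit length for R to come out orthogonal):

import numpy as np

n, p = 4, 3                                        # dimension, and desired number of +1 eigenvalues
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # columns of Q are orthonormal vectors r_k

# R = sum of r_k r_k^T over the first p vectors, minus the same sum over the rest
R = sum(np.outer(Q[:, k], Q[:, k]) for k in range(p)) \
  - sum(np.outer(Q[:, k], Q[:, k]) for k in range(p, n))

assert np.allclose(R, R.T)                         # symmetric
assert np.allclose(R @ R.T, np.eye(n))             # orthogonal
assert np.sum(np.linalg.eigvalsh(R) > 0) == p      # exactly p eigenvalues are +1, the rest -1

With p = n every term gets a plus sign and R collapses to Q Q^T = I, and with p = 0 it collapses to -I, matching the quote.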
I think it's stated a couple of different ways that a symmetric orthonormal matrix amounts to N orthogonal vectors plus a selection of p of them that get a positive sign, the rest getting a negative sign. I don't know why positive/negative is separated out, instead of just saying it is N signed orthogonal vectors. I tried this with my "v" vector and a perpendicular vector (v[1],-v[0]), but I may not have been quite sure what I was doing. https://en.wikipedia.org/wiki/Householder_transformation
a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin.
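The standard reflector formula (restated in my own words just below) is H = I - 2·v·v*, for a unit normal vector v. A minimal numpy check that such a matrix is Hermitian and unitary, the complex counterparts of symmetric and orthogonal:

import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v /= np.linalg.norm(v)                           # unit normal vector of the reflection hyperplane

H = np.eye(4) - 2 * np.outer(v, v.conj())        # Householder reflector

assert np.allclose(H, H.conj().T)                # Hermitian (symmetric in the real case)
assert np.allclose(H @ H.conj().T, np.eye(4))    # unitary (orthogonal in the real case)
assert np.allclose(H @ v, -v)                    # the normal vector itself gets flipped

In terms of the parametrization above, this is the p = n-1 case: every direction perpendicular to v keeps a plus sign and only v itself gets a minus sign.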
Householder matrices are defined in terms of the outer product of a unit normal vector with its conjugate, doubled and subtracted from the identity matrix. There is a lot that can be learned here! 0546 I want to think more about the construction of the matrix from time points along sinusoids.
>>> np.fft.fftfreq(2)
array([ 0. , -0.5])
>>> freqs = np.fft.fftfreq(2)
>>> mat = np.exp(np.outer(np.array([0,1]), 2j * np.pi * freqs))
>>> mat
array([[ 1.+0.0000000e+00j,  1.-0.0000000e+00j],
       [ 1.+0.0000000e+00j, -1.-1.2246468e-16j]])
Each complex sinusoid is a spiral in the complex plane over time, from 0 to ....? The first row evaluates the spirals at time point 0; the second row evaluates them at time point 1 (the columns tell the same story here, since the matrix is symmetric). There's only one nontrivial spiral in the above example, since one of the frequencies is 0. That spiral is exp(2j * pi * t * -0.5), so it has cycled once when t reaches +-2; its phase goes from 0 to -2 pi as t goes from 0 to 2. Evaluating it at t = 1.0 lands on its -180 degree point, which should be -1.
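A quick check of that spiral reasoning, evaluating the frequency -0.5 spiral at a few time points:

import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
spiral = np.exp(2j * np.pi * (-0.5) * t)
# expected (up to floating point): 1, -1j, -1, 1j, 1 -- one full clockwise turn over two samples
print(np.round(spiral, 6))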
>>> mat.real
array([[ 1.,  1.],
       [ 1., -1.]])
Right, okay, I guess. Regarding the columns and rows: one of them represents moving along the selected time points (0.0, 1.0); the other represents moving along the selected frequencies (0.0, -0.5). One could normalise these to make them more equivalent:
>>> mat = np.exp(np.outer(np.array([0,1])/2**.5, 2j * np.pi * freqs*2**.5))
>>> mat
array([[ 1.+0.0000000e+00j,  1.+0.0000000e+00j],
       [ 1.+0.0000000e+00j, -1.-5.6655389e-16j]])
>>> np.array([0,1])/2**.5
array([0.        , 0.70710678])
>>> np.fft.fftfreq(2)*2**.5
array([ 0.        , -0.70710678])
I don't know what that would mean at higher N's, though; just something notable here. Thinking of how the length of these vectors is always sqrt(N). Back to [0,1] and [0,-0.5]: two sinusoid spirals, and a matrix made from them. The matrix contains orthonormal vectors. When I was trying funny offsets, I was probably constructing a matrix that did not contain orthonormal vectors.
>>> np.exp(np.outer(np.array([0.125,1.25]), 2j * np.pi * freqs))
array([[ 1.        +0.j        ,  0.92387953-0.38268343j],
       [ 1.        +0.j        , -0.70710678+0.70710678j]])
Here I evaluate the spirals at 0.125 and 1.25. I'm imagining that the data is viewed stretched and offset by 1/8th of a sample. Maybe keeping it pinned to zero would be more clear:
>>> mat = np.exp(np.outer(np.array([0,1.125]), 2j * np.pi * freqs))
>>> mat
array([[ 1.        +0.j        ,  1.        -0.j        ],
       [ 1.        +0.j        , -0.92387953+0.38268343j]])
>>> v @ mat @ mat.T / 2
array([0.74880538+0.15820088j, 0.73301797-0.15506057j])
>>> v
array([0.71733727, 0.82679766])
The matrix's transpose is no longer its inverse. It doesn't contain orthonormal vectors.
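To make that concrete, here is a self-contained comparison (the names "aligned" and "offset" are just mine):

import numpy as np

freqs = np.fft.fftfreq(2)
aligned = np.exp(np.outer(np.array([0, 1]),     2j * np.pi * freqs))
offset  = np.exp(np.outer(np.array([0, 1.125]), 2j * np.pi * freqs))

# for the aligned time points, the transpose (divided by N) really is the inverse
assert np.allclose(aligned @ aligned.T / 2, np.eye(2))

# for the offset time points, it no longer is
assert not np.allclose(offset @ offset.T / 2, np.eye(2))

(The plain transpose only works here because the aligned 2-point matrix happens to be real up to rounding; the general statement for the complex transform uses the conjugate transpose.)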
>>> np.dot(mat[0], mat[1])
(0.07612046748871315+0.38268343236508967j)
>>> abs(np.dot(mat[0], mat[1])) * 180 / np.pi
22.355704150744625
Maybe they're offset by 22 degrees; I'm not sure what's meant by a complex angle. Normally, the dot product is 0:
>>> mat = np.exp(np.outer(np.array([0,1]), 2j * np.pi * freqs))
>>> np.dot(mat[0], mat[1])
-1.2246467991473532e-16j
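As an aside on the "complex angle" question above: one common convention (my assumption here, not something taken from the pages above) is to conjugate one argument and take the arccosine of the normalized magnitude of that Hermitian inner product:

import numpy as np

def angle_between(a, b):
    # principal angle between complex vectors; np.vdot conjugates its first argument
    # this is one convention among several, not the only possible definition
    c = abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

freqs = np.fft.fftfreq(2)
aligned = np.exp(np.outer(np.array([0, 1]),     2j * np.pi * freqs))
offset  = np.exp(np.outer(np.array([0, 1.125]), 2j * np.pi * freqs))

print(angle_between(aligned[0], aligned[1]))   # 90 degrees: the aligned rows are orthogonal
print(angle_between(offset[0],  offset[1]))    # noticeably less than 90 for the offset rows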
Something that's notable about a matrix that isn't orthonormal and symmetric is that it can still have an inverse.
>>> v @ mat @ np.linalg.inv(mat)
array([0.71733727+0.j, 0.82679766+0.j])
>>> v
array([0.71733727, 0.82679766])
So the resulting "spectrum" can still technically be used to recover the original data, by precisely undoing the wonky transformation that produced it. 0602 I'm thinking about the meaning of applying the wonky transformation matrix built from the weird evaluation points, and I'm guessing that its output might still be something like a spectrum. Not certain ... I'll try with very clear data.
>>> def signal(x):
...     return np.exp(2j * np.pi * x * 0.5)
...
>>> signal(0)
(1+0j)
>>> signal(1)
(-1+1.2246467991473532e-16j)
>>> signal(0.5)
(6.123233995736766e-17+1j)
signal() is now a sinusoid precisely aligned with the Fourier algorithm. 0605
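A quick sanity check of that alignment (a sketch; at two samples per cycle the +0.5 and -0.5 cycles-per-sample spirals give identical samples, so all the energy should land in the -0.5 bin of the aligned transform):

import numpy as np

def signal(x):
    return np.exp(2j * np.pi * x * 0.5)

freqs = np.fft.fftfreq(2)                                      # [0.0, -0.5]
mat = np.exp(np.outer(np.array([0, 1]), 2j * np.pi * freqs))

samples = signal(np.array([0, 1]))            # [1, -1]: exactly one half-cycle per sample
spectrum = samples @ mat

assert np.allclose(spectrum, [0, 2])                 # all of the signal lands in the -0.5 bin
assert np.allclose(spectrum, np.fft.fft(samples))    # and it agrees with numpy's own FFT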