[ot] [wrong] personal project probstats notes
Noting that the input data to my histogram block is not necessarily a population; it may just be sampled from one. In that case the expression I wrote earlier is not strictly true. When you sample from something, the distribution of the sample mean across repeated samplings is approximately normal (the central limit theorem). So the probability of the true population measure being any particular thing is the same as the probability of a normal distribution having a particular mean, given one of its values and its standard deviation. Finding that answer means knowing the prior distribution of possible probability distributions, without regard to the sample. I think if we assume a uniform distribution over all histograms possible in reality, then the distribution of values of a single bin would be a function of that uniform distribution of histograms and of which bin it is ... that's not quite uniform, as the number of options the rest of a histogram has changes depending on the density used by one bin, but it's probably some derivable algebraic expression. Simulating is always good to test things (sketch below).
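A minimal Monte Carlo sketch in C of that simulation (my construction, not from any link: it assumes "uniform over all histograms" means uniform over the compositions of N total counts into K bins, drawn by shuffling stars and bars; N, K, and the seed are arbitrary):

#include <stdio.h>
#include <stdlib.h>

#define N 10            /* total counts in the histogram */
#define K 4             /* number of bins */
#define TRIALS 1000000

int main(void) {
    int slots = N + K - 1;      /* stars-and-bars positions */
    long tally[N + 1] = {0};    /* how often bin 0 holds m counts */
    srand(12345);
    for (long t = 0; t < TRIALS; ++t) {
        int is_bar[N + K - 1] = {0};
        /* place K-1 bars uniformly without replacement
           (rand()%slots modulo bias is negligible here) */
        for (int placed = 0; placed < K - 1; ) {
            int p = rand() % slots;
            if (!is_bar[p]) { is_bar[p] = 1; ++placed; }
        }
        /* bin 0's count = number of stars before the first bar */
        int m = 0;
        while (m < slots && !is_bar[m]) ++m;
        ++tally[m];
    }
    /* the tally should track C(N-m+K-2, K-2), i.e. not uniform */
    for (int m = 0; m <= N; ++m)
        printf("bin0=%2d freq=%.4f\n", m, (double)tally[m] / TRIALS);
    return 0;
}

Running it shows the single-bin marginal decreasing in m, consistent with the "not quite uniform" hunch.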
Floating point optimizations for pow() (note: all the histograms could totally be integers):
- https://stackoverflow.com/questions/6475373/optimizations-for-pow-with-const...
- https://martin.ankerl.com/2007/10/04/optimized-pow-approximation-for-java-an...
I think it was the second answer to the stackoverflow link that contained a pow() implementation that could be made arbitrarily accurate and took just a few instructions to execute.
Integer optimizations for pow():
- https://gist.github.com/orlp/3551590
- https://github.com/ampl/gsl/blob/master/sys/pow_int.c
Googling, I'm not yet finding a library that packages these, even though the optimizations are almost a decade old. Sketches of both below.
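Sketches of both techniques, reconstructed from memory of the linked material (verify against the links; the bit trick is the blog post's low-accuracy approximation, not the arbitrarily-accurate stackoverflow one, and assumes little-endian IEEE-754 doubles):

#include <stdint.h>
#include <string.h>

/* Approximate pow in a few instructions: linearly scales the
   exponent/mantissa bits of an IEEE double.  Accuracy degrades as
   b moves away from 1. */
static double fast_pow(double a, double b) {
    int32_t x[2];
    memcpy(x, &a, sizeof x);                      /* reinterpret bits */
    x[1] = (int32_t)(b * (x[1] - 1072632447) + 1072632447);
    x[0] = 0;
    double r;
    memcpy(&r, x, sizeof r);
    return r;
}

/* Exact integer pow by repeated squaring, O(log exp) multiplies;
   overflow is the caller's problem, as in the linked GSL routine. */
static int64_t ipow(int64_t base, unsigned int exp) {
    int64_t result = 1;
    while (exp) {
        if (exp & 1u) result *= base;
        base *= base;
        exp >>= 1;
    }
    return result;
}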
Gnuradio uses volk, which has a vectorised pow() for 32 bits: https://github.com/gnuradio/volk/blob/master/kernels/volk/volk_32f_s32f_powe... It might be slower due to precision; unknown. It would make sense to contribute a 64-bit vectorised pow() to volk, but doing so is not required. If adding my previous contribution, mention it having taken so long, with a link to the issue thread. If you can, mention the political targeting and abuse regarding pursuing public shielding or somesuch.
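A usage sketch, assuming the truncated link points at a kernel named volk_32f_s32f_power_32f with volk's usual (out, in, scalar, num_points) signature:

#include <volk/volk.h>
#include <stdio.h>

int main(void) {
    unsigned int n = 8;
    size_t align = volk_get_alignment();
    float *in  = (float *)volk_malloc(n * sizeof(float), align);
    float *out = (float *)volk_malloc(n * sizeof(float), align);
    for (unsigned int i = 0; i < n; ++i)
        in[i] = (float)(i + 1);
    /* out[i] = in[i] ^ 2.5, vectorised */
    volk_32f_s32f_power_32f(out, in, 2.5f, n);
    for (unsigned int i = 0; i < n; ++i)
        printf("%f\n", out[i]);
    volk_free(in);
    volk_free(out);
    return 0;
}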
Tensorflow has a pow function. This is the lite CPU implementation, part of an inline header file over 8k lines long: https://github.com/tensorflow/tensorflow/blob/8c02285dc2664c2c74edbe7d2486f0... It automatically performs the integer optimization when its exponent argument is an approximate integer. The floating point implementation is labeled "slow". The file is too big to easily review on my phone. Tensorflow has GPU implementations of all its ops, and a heavily-maintained tensor (numerical array) class. Tensorflow's build system is bazel, which has poor compatibility, but it's just a pile of compilable source files. GPU support is highly valued, but a fast floating point pow approximation could be missing from tensorflow; unsure, it's hard to look.
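A sketch of the dispatch described above, with hypothetical names rather than TensorFlow's actual internals (the tolerance for "approximate integer" is a guess):

#include <math.h>

/* exact nonnegative-integer pow by repeated squaring */
static double pow_by_squaring(double base, unsigned int exp) {
    double result = 1.0;
    while (exp) {
        if (exp & 1u) result *= base;
        base *= base;
        exp >>= 1;
    }
    return result;
}

double pow_dispatch(double base, double exponent) {
    double rounded = nearbyint(exponent);
    if (exponent >= 0.0 && fabs(exponent - rounded) < 1e-9)
        return pow_by_squaring(base, (unsigned int)rounded);
    return pow(base, exponent);   /* the "slow" floating point path */
}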