[personal][crazy][ot] karl-story/spamlog: heartpulse
I'm quite excited that I managed to daydream how einsums relate to matmuls yesterday, and still retain the concepts today. A _real_ goal I _actually_ hold is building a small cell phone app to read heartbeats at a distance, written in a general way so it can expand to other subtle information. This was a small hobby I was pursuing based on a small research paper that hit the newspapers around 2013. The algorithms used advanced linear algebra, and they are roughly why I have so much trouble thinking about linear algebra. It's a pointless goal; it's just for therapy. Like shielding and cryptocurrency, I've been making absolutely no progress on it for 9 years now. The purpose of the cell phone app is to show people how powerful surveillance is. You really can read somebody's heartbeat at a distance with a cell phone camera, along with so, so much more. As my life deteriorated, the purpose of building the app stopped being to do anything real; it became just about making rational decisions. Why am I working on complex algorithm projects if I can't make a simple cell phone app? It's like a litmus test for being able to do things successfully. I'd like to note some progress around this goal I held. It would be pleasant to me to accomplish it.
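(Note to self on the einsum/matmul relation mentioned above, a tiny numpy check I can rerun; the array names here are mine:)

import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)

# a matmul is just an einsum that sums over the shared index j:
# C[i, k] = sum_j A[i, j] * B[j, k]
C_einsum = np.einsum('ij,jk->ik', A, B)
C_matmul = A @ B

assert np.allclose(C_einsum, C_matmul)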
This goal has been around for 9 years, so there's no rush if we can't demonstrate completion of something like this yet, but goals need some completion to be reached. I'm going to try to do a chunked attention implementation first. Besides, people can actually use chunked attention, and I spent days wrapping my mind around it. My goal for heartpulse is more complex than chunked attention, because in heartpulse I do the algorithm optimization myself, rather than copying a paper that already did it.
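For reference, roughly what I mean by chunked attention: process the keys/values in chunks and keep a running softmax, so the full attention matrix never has to exist at once. A minimal numpy sketch, my own toy version, not any particular paper's code:

import numpy as np

def chunked_attention(q, k, v, chunk=64):
    # q: (n_queries, d); k, v: (n_keys, d). accumulates softmax(q k^T / sqrt(d)) v
    # over key chunks, using a running max for numerical stability.
    scale = 1.0 / np.sqrt(q.shape[1])
    out = np.zeros((q.shape[0], v.shape[1]))
    denom = np.zeros((q.shape[0], 1))
    running_max = np.full((q.shape[0], 1), -np.inf)
    for start in range(0, k.shape[0], chunk):
        scores = q @ k[start:start + chunk].T * scale
        new_max = np.maximum(running_max, scores.max(axis=1, keepdims=True))
        correction = np.exp(running_max - new_max)   # rescale earlier accumulators
        weights = np.exp(scores - new_max)
        out = out * correction + weights @ v[start:start + chunk]
        denom = denom * correction + weights.sum(axis=1, keepdims=True)
        running_max = new_max
    return out / denom

def full_attention(q, k, v):
    # unchunked reference, for checking the chunked version against
    s = q @ k.T / np.sqrt(q.shape[1])
    w = np.exp(s - s.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ v

q, k, v = np.random.randn(16, 8), np.random.randn(100, 8), np.random.randn(100, 8)
assert np.allclose(chunked_attention(q, k, v, chunk=32), full_attention(q, k, v))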
Reading heart rate from camera video like this is now more common: you can see examples of it at, say, https://www.google.com/search?q=telemedicine+heartrate+web+camera . I don't know what the original paper was. The original paper's algorithm was roughly:
- take the spatial average of each video frame, per color channel
- perform independent component analysis on the 3 averaged RGB channels
- select the component with the strongest Fourier peak at heart-rate frequencies (around 1 Hz, i.e. roughly 60 beats per minute)
- graph it
It plots a chart of the heartbeat of the person whose skin is most visible in the video. Nowadays people would use transformer models for this task. I'd like to amplify the origin of the components, live on a cell phone, so users can see that everybody's heartbeat is already visible to the naked eye without cell phones, written on our skin, and likely used by our subconsciousness to inform our senses of care for each other. I'd like to implement it in a general way, so the project could be expanded to let normal users process other data. Most software used to be written in a general way. It's important for people to know about things like this algorithm, because computers are doing them already. But the task is just for therapy, to put my mind back together. It was supposed to be a small casual project.
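For later: a minimal offline sketch of that pipeline as I understand it, in Python. The frequency band, function name, and random seed are my own guesses, not the paper's:

import numpy as np
from sklearn.decomposition import FastICA

def estimate_pulse(rgb_means, fps):
    # rgb_means: (frames, 3) spatial mean of each color channel per frame.
    # needs at least a few seconds of video to have a usable spectrum.
    x = rgb_means - rgb_means.mean(axis=0)                    # remove the DC level
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)
    freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fps)
    band = np.flatnonzero((freqs > 0.75) & (freqs < 4.0))     # ~45-240 bpm
    best_bpm, best_power, best_trace = 0.0, -1.0, sources[:, 0]
    for i in range(sources.shape[1]):
        spectrum = np.abs(np.fft.rfft(sources[:, i]))
        peak = band[np.argmax(spectrum[band])]                # strongest in-band peak
        if spectrum[peak] > best_power:
            best_bpm, best_power, best_trace = freqs[peak] * 60.0, spectrum[peak], sources[:, i]
    return best_bpm, best_trace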
A quick Google search finds https://github.com/hmmo-O/Estimation-of-Heart-rate-using-FastICA . It could be helpful to resolve implementation bugs against.
Some of my dissociated parts would like to work on this instead of the other project. 'we don't have access to the memories around this, but we want to support karl. everybody in here is forcing things to happen. we want to force things that support karl. he says the next step relates to finding or building code to process video. we'd like to copy it from the linked github project.'
[karl wants to code it in c/c++, this code is in python: that seems fine, i know python a little better right now, and modern libraries provide for optimization in python] [yes that can help synergise with other goal too, maybe we can use jax or torch?] [jax or torch unimplemented on mobile python as of now, as far as i know] [anyway it moves forward! let's make a prototype again for knowing-it-can-work!]
we want to do c/c++ if you do. let's do that instead. what projects were you starting before? you were using a mobile c++ library .. Qt? but it was inefficient? let's use Qt to load video. we're split into parts. some want this python approach that exists, some want Qt. the way to show the pythoners to do Qt is that we worked for a couple years to try to use Qt for this. we're still in disagreement. sorry for making the thread all messy. use your insertion-of-behavior to help it make sense to you. This thread is a spamlog for making the heartpulse advanced-data-for-consumers app.
The purpose of this thread is to provide a free, open-source, expandable and easy-to-modify app to end-users that charts the heartbeat of the person in front of the webcam when used. That is considered "giving advanced technology to end users" by my confused psychosis phrases. Not sure why.
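For that first version, the capture side could look roughly like this (OpenCV in Python; estimate_pulse is the hypothetical function sketched earlier, and the 10-second window is a guess):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                          # default webcam
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0            # some drivers report 0
window = []                                        # rolling buffer of per-frame channel means
while True:
    ok, frame = cap.read()
    if not ok:
        break
    window.append(frame.reshape(-1, 3).mean(axis=0))   # BGR means; channel order doesn't matter for ICA
    if len(window) > int(fps * 10):                # keep roughly 10 seconds of samples
        window.pop(0)
        bpm, trace = estimate_pulse(np.array(window), fps)
        print("~%.0f bpm" % bpm)
    cv2.imshow("heartpulse", frame)
    if cv2.waitKey(1) == 27:                       # Esc quits
        break
cap.release()
cv2.destroyAllWindows()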
The second version of the app would highlight the attributes of the video that indicate this heartbeat with adjustable color amplification, so users can see that they already had this information and didn't need the app.
It would be implemented in a general way, so that information other than heartbeats could be engaged by end-users. Additionally, a decade later, it would be important to provide for algorithms other than ICA.
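A possible shape for that generality, with purely hypothetical names, just to pin the idea down: an "analyzer" is any function from a buffer of per-frame measurements to a 1-D trace, and the app just registers analyzers by name.

from typing import Callable, Dict
import numpy as np

# an analyzer maps (frames, channels) samples plus the sampling rate to a 1-D trace
Analyzer = Callable[[np.ndarray, float], np.ndarray]

ANALYZERS: Dict[str, Analyzer] = {}

def register(name: str, analyzer: Analyzer) -> None:
    ANALYZERS[name] = analyzer

# estimate_pulse is the ICA pulse extractor sketched earlier in this thread; other
# algorithms, or other subtle signals entirely, would be further entries.
register("pulse-ica", lambda samples, fps: estimate_pulse(samples, fps)[1])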
On 1/26/22, k <gmkarl@gmail.com> wrote:
The second version of the app would highlight the attributes of the video that indicate this heartbeat with adjustable color amplification, so users can see that they already had this information and didn't need the app.
By 'color amplification' here, I mean the image component containing the data would be amplified, so that the amplified colors actually amplify the information the user needs in order to see that everybody's skin is pulsing with their heart. This likely means optimizing the algorithm's concepts so the information is preserved backward into the original video.
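One known way to do that kind of amplification (in the spirit of Eulerian video magnification; I don't know if it matches the original paper) is to band-pass each pixel over time around the pulse frequencies and add the amplified band back. A rough sketch, assuming float frames in [0, 1]:

import numpy as np
from scipy.signal import butter, filtfilt

def amplify_pulse_band(frames, fps, low=0.75, high=4.0, gain=20.0):
    # frames: float array (time, height, width, channels) with values in [0, 1].
    # band-pass each pixel over time around heart-rate frequencies and add it back,
    # amplified, so the pulse becomes visible in the output video.
    b, a = butter(2, [low / (fps / 2.0), high / (fps / 2.0)], btype="band")
    pulse_band = filtfilt(b, a, frames, axis=0)    # temporal filter, per pixel and channel
    # in practice this would run on a blurred/downsampled copy to keep memory sane
    return np.clip(frames + gain * pulse_band, 0.0, 1.0)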
As a brainmind, when doing this, part of the goal is understanding how it works for reuse. That's basic to 'algorithm concept optimization'. So just throwing a transformer model at the problem doesn't reach the solution unless end-users can fully comprehend and verify the workings of the model themselves, reapply those concepts, etc., which is a current research problem. More thoughts arising might give more ideas. Transformer models could make sense if _we_ understood them better. A quick explanation for why we aren't using them better is that "we are mind controlled not to make AGI"; this helps a lot of concept areas move forward with explaining things. It's likely only a quick explanation and not the real explanation.
'we want to do it your way' [no need for transformer models yet, not sure why rose]
The linked github project at https://github.com/hmmo-O/Estimation-of-Heart-rate-using-FastICA does not immediately install. It needs tweaks to run. It says it's based off https://github.com/thearn/webcam-pulse-detector .
The thearn app contains instructions for running, and it launches. It also contains links to further biosignals revealed this way. It says it pulses the user's forehead in sync with the heartbeat, but it is not clear on whether the amplified information helps the user learn to see the heartbeat themselves accurately without it, i.e. whether it is the actual visible information that is amplified.
but of course it won't chart things in real time on a cell phone. it's just a copy of the paper.
Most of the inhibition around this task feels like it comes from the idea that people could read hidden information from politicians, maybe extract subvocal speech (i.e. thoughts, for people who subvocalise their thoughts) automatically from vein contractions near mouth muscles. So a side project could be finding a way to automate putting together information indicating people's reasons for actions (another thing that shows thoughts). This is not a hard task: events that happen before other events, paired with indications of causality, have some likelihood of being part of their reasons. Developing tech toward public and free technological "mindreading" would help people understand things. There's a further step where further indications of causality are developed repeatedly. But this heartpulse goal is not focused on that.
this is a crazy journal i made around coping strategies for optimizing and repurposing advanced algorithms while experiencing severe inhibition in doing so:

process-understanding

this ai-daydream-part is valued for its use in doing logical consideration. when a process is reviewed and understood, the results that can be produced from its parts are enumerated.

for example, when understanding something like a = bx + c, we learn basic relations of proportionality and increasing/decreasingness, such that we can predict one variable from a change in another and compare the relation to other relations we see in the world. we often transform the relation to a line graph, then transform other relations to the line graph, and review the line graph to consider specific variable values. this is analogous to simply having the relation available to compare proportionality of related amounts: where proportionality is an idea of how those amounts will change when other amounts change, and an idea of how confident and likely and accurate that is.

we learn that when b goes up, a goes up by the same amount, if c is held constant. we learn that when b goes down, a goes down by the same amount, if c is held constant. we learn that the relation between a and b inverts with negative values, by it being held by multiplication, and we test that. we also try holding a and b constant, and learn that c is related clearly to a in a similar way, but not as clearly. we consider individual relations between the parts that seem clear and useful; each relation depends on other conditions of the other parts. then, when using it, we apply these individual relations to other situations.

[burnt out quickly trying to write] [there are 4 variables, not 3, here: a, b, c, x. meant to write y = mx + b]

how to generalise to other processes

you're saying the _parts_ of the processes, the _things_ it depends on, the _variables_ and _equations or simple relations of things between them_ are _enumerable_. _deriving_ relations between the different variables gives us utility and a great portion of understanding. we then hold these derivations near where they are useful, associated with the processes for deriving them, to verify results. [algebra being very similar to intuition, surprisingly]

like, if someone is a postman, this means they will work every day, i will see them near mailboxes, they will be driving a car with the driver seat on the wrong side. then say i want to wave at the postman. to do so i will need to assume that they are on the wrong side of the driver seat. this involves part of my waving process, where i place _myself_ where they can see me ...

yeah, stuff like arithmetic and waving-inference-behavior and stuff like that.

we have a big habit of reviewing things for their future utility, for example optimizing algorithms. this means enumerating the _utility_ and _construction_ of the parts. _how_ was this constructed? _why_ were choices made? where can other approaches for the same reason meet differing goals for the process? when considering how something was constructed and why, we often imagine we were constructing it. we figure out how the choices involved in selecting its parts helped its goals.
in order to develop skill constructing it ourselves, we make sure we can find other choices that would also meet the goals. we review the entire system until we know we can make the same thing a completely different way at all the points in the process of interest. we don't actually do that, we just review it enough to be able to. then we select changes with high return for our goals, to meet them. this involves assuming we can learn new things to meet the various goals, without actually learning them all. we then select which things to discover based on the expected return of doing so.

ok, thanks for writing it. we actually use these processes while daydreaming, to handle the cognitive issues via other approaches. also while working, in notes, to keep going when handling issues. i really struggle to form logical inferences in some states of mind, especially when it overlaps algorithmic, mathematical, geometric, or memory-based inferences.

we've been using an abstract concept of 'summary' that is nonverbal, but above, it looks like summaries can be verbalised. we didn't solve for the groups of thoughts that are not understanding of processes. we also didn't translate with the word 'summary' that is so commonly referenced internally. but there's strong value around utilising understanding of processes; it's really a skill we've assumed we have.

ok um. we review um the process and think of how it could have been made. when we try to understand why a part works, we look to the relation between what the part does, and the choices made in constructing it.

i made a slingshot! how does it work?
- rubber exerts a force when extended, to contract again
- by stretching rubber with an object, the force is exerted on the object

force-production -> engineering parts that make it function
tool/weapon-ergonomics -> making it useful for human hands
logistics-engineering-of-design -> putting parts together in a way where all goals are met

you could make something that has the same use as a slingshot by meeting the 3 goals above in some other way. if rubber goes away, another stretchy material could be used. if stretchiness goes away, another way of exerting a force could be used. the functions of parts can often be clearly described by short words, but there is strong suspicion some domains have no clear words. we believe we can change that by forming languages that seem intuitive for them. not everyone is sure of this.

how does the linux kernel work? it holds computer instructions for running the various services of the system. it was made by nonpaid software engineers, originally just one. it likely took him many months of unpaid hours of work to make, unknown though.

how does it work for ... fixing the thrashing problem of the system? we'll need to review the parts that make thrashing and store information. we'll want to review the code for moving memory between swap and ram. specifically, we'd be interested in the process of choosing when to do that. thrashing is caused when swapping happens more than user interaction.

holding the goal of stopping thrashing, i.e. making user interaction responsive when ram is exhausted, we would then consider avenues for changing the swapping behavior, avenues for detecting thrashing to do so, and avenues for relating that information together. kernels have 'userspace' and 'kernel space', and the two can be laborious to move information between. so rather than detecting which processes are user processes, it could be good instead to measure whether thrashing is happening.
- find a way to measure that thrashing is happening (a sketch of one way to do this is at the end of this entry)
- alter the swapping code so as to prevent it when it does

{ thrashing happens when memory needs to be swapped so much that the cpu cannot do work. basically, one or more processes have memory access patterns that don't provide for cpu time. these processes would need to be placed on a queue with properties that provide for other processes to do work while they wait. this may mean reducing the ram available to those processes. it could also mean engaging other parts of the linux kernel. }

those are good words, and could continue. when working a project we're likely to not have much utility from reviewing the words, there being so many, although we are getting a little better and better at that, slowly. but while writing them we came up with useful parts of the project to engage.

this typing has been preparation for further cognitive decline around learning and acting-on-understanding-of-things. it is also an attempt to find ways to learn and act-on-understanding in areas we do not presently do well. it is designing a coping strategy, or a set of them, or imagining doing so. 'cognitive decline' could be replaced with 'inhibition', where we just can't seem to form action and thought around topics, maybe due to them triggering wild spasms in our experiences.

the heartpulse goal involves analysis of an algorithm that uses research we haven't learned. for example, it uses {eigenmatrices?}, which habitually we would understand by exploration of their use and remembering via exposure. we could reach success without as much resistance, possibly, by instead considering the utility of eigenmatrices simply within the design of the algorithm. we would then put the algorithm together based on the utility of its parts, rather than the specific parts, and transform it to a more efficient approach that we come up with. this may mean inter-recoding the parts, such that their subparts move between each other. it also likely means redesigning some of them.

some parts are balking at this. it sounds like karl wants to be able to do that _easily_. it's such big rote puzzle work, as described. huge-seeming. but we understand karl seems to need to understand his own processes of understanding, in order to keep going on some of his task ideas. this challenge is actually similar to reason-review, and the reason for the task is not for a strongly valued result any more. it is simply to learn to do tasks like it. because we used to be able to. the reason to stick with the tasks could be because we have done them enough to form descriptions of them like the above partial one.
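(re the thrashing aside above: on recent kernels, one concrete way to "measure that thrashing is happening" is the pressure-stall information at /proc/pressure/memory; a tiny sketch, assuming a kernel with PSI enabled:)

def memory_pressure():
    # /proc/pressure/memory has lines like:
    #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
    # "full" is the share of time all non-idle tasks were stalled on memory,
    # which is roughly what thrashing feels like to the user.
    with open("/proc/pressure/memory") as f:
        for line in f:
            kind, *fields = line.split()
            values = dict(kv.split("=") for kv in fields)
            yield kind, float(values["avg10"])

for kind, avg10 in memory_pressure():
    print(kind, avg10)   # percentage of time stalled, averaged over the last 10 seconds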