> > > > > > > > > https://aoweave.tech/hYt6IIPFb8HLow7bf39FoKjY9MCu854hWpESH2Oiyfs > > > > > > > > https://aoweave.tech/mWO8kdHEkJ-Zh1TJBIya-T5sJBXdzr6NGQzP1CN0Chc > > > > > > > https://aoweave.tech/a2FrAIlf2EqBT8TQ6y-E_Y69N_KqLPgPhDTGNG-A5ZM > > > > > > https://aoweave.tech/3wFjC-y780bhFc9oPPYBTUEkTOjBMgS3W7sqUy65pT4#lUm9coOPT9J... > > > > > > https://aoweave.tech/tIBH7dOj7U5kHpBxkPhSDjLoYiSfL9-Dl1Hhdpw0iSU > > > > > https://aoweave.tech/kRCOQar2xH-RkocPZ8sOGiaLSWl4ivNa0mLT08rn3fU#EkyX7KwYSxM... > > > > > https://aoweave.tech/HagBr8xWiwbtIgvxnsRV0V7tA9g9Yhnh2Y7ODom-lL8 > > > > https://aoweave.tech/XgcXcBy7_jsOxNa7FEiBSeyrC2rjUb4YxFlhnrdEOeI#uFrMuJF_1AC... > > > > https://aoweave.tech/55zJYMXemTsh-uxgKUxNQIhKrvg9IdQAu37EYDSDMjs > > > https://aoweave.tech/PxShJbkAXyDCuO8AcDAtpaVv01s1p1um3mC09I0sABU#4uAwme8uhsv... > > > https://aoweave.tech/4sPnE2J_r2D9l2OtyRispMOaRbCri23XFKugTLuaLic > > https://aoweave.tech/0HvvLyVn6IKAeW64WwWrK3OQxIzmR0E-1OkBMKXaC2U#v2Bp1ZdSwX1... > > https://aoweave.tech/Jw0Y9R8IBaPlePWk0GmGtDvkwMM1TmBm2I0kmDKoOQA > https://aoweave.tech/4c4ogh0qZbZCVHZTwDulKKnvdfQDZhAYV4SPq9b7_co#hmjYwT8Traa... > https://aoweave.tech/2LDE7exgAUzIezT12u9toJa2JSUE8yp8hx9xbV_MKPw https://aoweave.tech/d9U3Y2YdOIlOPce0TNB4ALg7ZJwHjiB4ol8Njq7IBmM#TWbiekdoZw1... https://aoweave.tech/85Q3SQwSbUtwrITuUzVPbKseAHHIfP7w5l3AB0iFiUI https://aoweave.tech/_XuGeICiXOelkjoFUJTwWF92EFccKFFUfIAds8HeCuc#yFB_kC7WdYH... https://aoweave.tech/3SimE6KCJCNnligrpBIAMwmt0-mcGGDJ8OzRly3uVeQ https://aoweave.tech/k6OqacHE35A6hyGJ_S00tuGBDbEzGeinMtP4HogXP6A#0h6DJ6UaKiQ... https://aoweave.tech/IN_hP7kSJJZZQWS1GvqjOAKHSlMBZV0XCPfMmYHrEWc
bip b
https://aoweave.tech/gpYK1JphfChcLQKLM2QOZ8nguKOhxFstgZWBQssIBzc#fjV47fJz9qw... https://aoweave.tech/dXu8N0TGFBTmEBh6NXdD5tivWsBijWGFl-UgvUnzoVU
i found some fun/simple coding tasks that might relax me today, we'll see
i tried to download one of my streams (by patching out urls and such) and it turns out pyarweave might need its block format updated!
in other news i think i solved trustwebs (more likely i will be adding mistakes to an existing solution to only part of the problem) which is so on-topic and cool!!!
the idea (huh, i may have said this before, but it might be more stable now):
- all information is useful. the system doesn't decide what is trustworthy.
- users express statements, votes, whatever, signed by public keys. a user can have one or many public keys and can freely change them. trust is associated with a public key.
- a user can privately express things they consider of value. this information is not shared, but is used by their client to identify public keys that share their view. trust is then transferred key-to-key.
- trust of a key hinges on a _termination time_: the earliest opinion that _disagrees_ with the source of the transfer. at that point the key that changed its similarity has entered a different mood, or it or its owner has been otherwise coopted by other opinions.
- the design hinges on universal access to a trusted store. if this is not available then it's more like an advanced friend-to-friend situation.

upgrades:
- custom functions for trust transfer, including partial trust such as 0.0-1.0
- market- or work-based incentivization

attacks:
- an adversary might create many thousands or millions of public keys that express unrelated choices and opinions in order to try to make the algorithm useless. if this happens, the search for good keys would need to become more robust, verifying many agreeing opinions, or using other avenues such as friend-to-friend.
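the termination-time idea can be sketched in a few lines. this is only my illustration, not a spec: the names (`Opinion`, `termination_time`) and the flat in-memory lists are assumptions; a real client would read signed expressions from the cryptographic store.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    key: str      # public key that signed the expression
    subject: str  # what the expression is about
    value: str    # the expressed view
    time: float   # when it was expressed

def termination_time(source_ops, target_ops):
    """Trust transferred from the source key to the target key ends at the
    earliest opinion where the target disagrees with the source on a shared
    subject. If they never disagree, trust has no termination time."""
    source_view = {o.subject: o.value for o in source_ops}
    disagreements = [o.time for o in target_ops
                     if o.subject in source_view
                     and o.value != source_view[o.subject]]
    return min(disagreements) if disagreements else float("inf")

# example: bob agrees at t=2.0, then diverges at t=5.0
alice = [Opinion("alice", "gas:station9:sign", "$4.09", 1.0)]
bob_ops = [
    Opinion("bob", "gas:station9:sign", "$4.09", 2.0),
    Opinion("bob", "gas:station9:sign", "$2.18", 5.0),
]
termination_time(alice, bob_ops)  # -> 5.0: bob's opinions after t=5.0 lose alice's trust
```

opinions from bob signed before t=5.0 would still carry the transferred trust; everything after is treated as coming from a key in a different mood.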
The basic heartening ideas are of not requiring system trust or relative information value, and using the patterns of what is real and logically shown to identify what the user finds useful.
For example, in that sybil-key attack, any information that can be publicly shared and used to differentiate the attackers from good users can then be used by everybody to filter them out, using simple propagation of trusted properties.
But the nice thing is that once you find a good user (easy if you know a user personally), you get all their trusted keys, and the more shared knowledge you form with them the more readily you can filter the attackers out; possibly at that point the attackers are just setting a baseline of trusted agreement that needs to be formed to filter out noise. Ideally things evolve such that performing an attack strengthens the network, but it may need further logic.
The idea is to start simple, try to make it flexible, and leave it open to adding on.
so, what would a simple structure be? need a way of _representing a signed expression that's useful_. well, in a cryptographic store, expressions are already signed.
- need a way of finding other expressions by that key
then a question: do people express trust of keys? or maybe it's interesting to keep trust implicit in similar expression. i _would_ trust a key if this key were to make the same decisions and observations as me. however, that's only useful if enough decisions and observations are shared.
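implicit trust through similar expression might look like this. a sketch under my own assumptions: expressions are flattened to subject-to-value maps, and `min_shared` (my parameter) captures the "only useful if enough is shared" condition.

```python
def implicit_trust(mine, theirs, min_shared=3):
    """mine/theirs map subject -> expressed value.
    Returns the agreement ratio in [0, 1], or None when too few
    subjects are shared to infer anything about the other key."""
    shared = [s for s in mine if s in theirs]
    if len(shared) < min_shared:
        return None  # not enough shared decisions/observations
    agree = sum(1 for s in shared if mine[s] == theirs[s])
    return agree / len(shared)

# example: four shared subjects, agreement on three of them
me   = {"a": 1, "b": 2, "c": 3, "d": 4}
them = {"a": 1, "b": 2, "c": 9, "d": 4}
implicit_trust(me, them)  # -> 0.75
```

no explicit "i trust key X" statement is ever published; the score is computed locally from whatever expressions both keys happen to share.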
so, maybe it is useful for people to be able to share everything they express or use. this helps the network function. but information could also be kept local if privacy is desired.
the most important information to share is the expressions .. the big work then is transfer of trust
now, an interesting thing is that users and third-party data sources could work together. say you have some data from a provider or resource or a scan of a document. this source could be given a public key too, and a date .....
basically _trust is extremely useful in the face of incorrect or conflicting information_, and it can be done with improved concern for privacy, which is the big problem with information trust networks: exposing users with conflicting views.
maybe i had more mature expressions around this in the past :/ like i think i came up with much better ways of protecting users, keeping them in carefully expanding groups that only learned a little about each other and such :/
but y'gotta start somewhere.
nowadays when attacks come they can be powerful, aggressive strikes that serve not simply to briefly disrupt a network but as part of coordinated disruption of teams, communities, people. this makes the "try it and improve" strategy less effective or useful.
still, one has to start somewhere.
vulnerabilities serve as honeypots only if the people involved stay safe and able to access and use logs that were successfully made and retained.
so!
information is not complex. basically you have a subject, a property and a value.
so say i drive by a gas station and the price is $5.20 or $2.18 up in bright letters; i could publicize this information further. but then say i go to the pump and it says $4.09. i can publicize this too.
For that to be useful, we need to specify where the measurement was taken from, as well as infer trust algorithms from measurement disparity. The real price of gas was $4 but the sign said $2 or $5. This means:
- signs usually relate the real price
- they didn't at this gas station
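the sign-versus-pump inference can be sketched over subject/property/value records. the tuple layout, the `source` field, and the tolerance are all my illustrative choices: one source's trust is estimated by how often it agrees with a reference source on the same subject.

```python
# each measurement: (subject, property, value, source)
measurements = [
    ("station9", "price", 5.20, "sign"),
    ("station9", "price", 4.09, "pump"),
    ("station4", "price", 3.50, "sign"),
    ("station4", "price", 3.50, "pump"),
]

def source_agreement(measurements, source, reference, tol=0.05):
    """Fraction of shared (subject, property) pairs where `source`
    agrees with `reference` within tolerance -- a crude trust score.
    Returns None when the two sources share no measurements."""
    ref = {(s, p): v for s, p, v, src in measurements if src == reference}
    checked = agreed = 0
    for s, p, v, src in measurements:
        if src == source and (s, p) in ref:
            checked += 1
            if abs(v - ref[(s, p)]) <= tol:
                agreed += 1
    return agreed / checked if checked else None

source_agreement(measurements, "sign", "pump")  # -> 0.5: signs usually match, not at station9
```

the 0.5 score carries exactly the two bullets above: signs usually relate the real price, but this one didn't.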
Just like a user can get coopted by other opinions, so did the sign at the gas station. We wouldn't want to trust it again without some process of rebuilding trust.
https://aoweave.tech/Cx65hQ9RGTS5YoILo4SwTf_o4YqQZVtokduck2dw9VI#43AkodsNmVK... https://aoweave.tech/QyhLa5Wvzn2pL2qryCUnqR4noJbomUpMa53mL0GixM4
We can take that further. Say the price of gas is _fluctuating wildly at the pump_ at that gas station. First it's $4.24, then later it's $3.89, then later it's $4.69. Users don't necessarily expect this, but it's physically possible.
You could model this, use a standard deviation or a running average or train a model to predict it somehow, but also you could consider trust over time.
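"trust over time" for a single measurement could be sketched as exponential decay. The half-life, the function names, and the freshness-weighted average are all my assumptions, one of many possible decay shapes:

```python
def decayed_weight(age, half_life=3600.0):
    """Trust in a measurement halves every `half_life` seconds of age."""
    return 0.5 ** (age / half_life)

def predict(readings, now, half_life=3600.0):
    """readings: list of (time, value) pairs.
    Freshness-weighted average: old readings still count, just less."""
    weights = [decayed_weight(now - t, half_life) for t, v in readings]
    total = sum(weights)
    return sum(w * v for w, (t, v) in zip(weights, readings)) / total

# the fluctuating pump, one reading per hour
readings = [(0, 4.24), (3600, 3.89), (7200, 4.69)]
predict(readings, now=7200)  # weights 0.25, 0.5, 1.0 -> pulled toward the newest reading
```

the same decay function could answer "how long do we trust this?" with a threshold instead of a weighted average; that choice is left open here.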
Measurements are not usually made simultaneously--
https://aoweave.tech/ehW0tc5NbNfyCdSLdsnVA2JdVhbAiqpl1eoHXG9ZlJM#bIAOds5s7iY... https://aoweave.tech/0fzHXOLoF4v4Ar4S7N2GmLYIX2bRQ0XLQFUK4MjzBWs
.... long story short: just as trust expires for a key that changes views, or for a sign that does this, we could consider how long trust lasts for a given measurement to be accurate, either precisely or within a metric. So:
- does the key predict reality?
- does the sign predict the pump?
- does the old measurement predict the future?

We could consider these things as boolean yes/no concepts, or as functions of dependence, and what seems likely is somewhere in the middle -- maybe a simplification could be: how long do we trust this?

But more interesting is transfer of trust. How do we form predictive groups of information from the available data? We make _rules_ that can either be automatically inferred or proposed by users. We want those spaces to approach being identical for forthright, accurate, and smart users. Or even better, an algorithm surpasses users. So we would _distill and codify the process of learning from experience_ with sparse coded datapoints, which computers are good at. This seems possible _not_ because of AI, but rather because _learning is just probability and statistics on groups of properties_. The novel thing is possibly forming those groups, and we can express that as a learned inference _over the theory that there are *useful* groups_ in the data.

... y'gotta hold it steady here. it's a little much for me, i might do the ftl teleportation instead.

But this is almost certainly something well studied in multiple fields. Here's some data; we propose there are useful groups among it. A group is useful if forming it helps do what we are trying to do. Here, that might be predicting properties of interest to users, or predicting trust which further predicts properties -- and there we can see an initial useful inference, trust itself.

[][] https://aoweave.tech/XH4fsBp10JRTef8Rlntp-htT6saZ23HsTq9ItwHHZew#8MEohnXKTT1... https://aoweave.tech/pmjSVKV8w7EWDXP5in3SuVNUV3EOngPK8lAfc-kScbA I am not affiliated with https://aoweave.tech .
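A toy version of "a group is useful if forming it helps do what we are trying to do", on made-up data: group keys by their pattern over some properties, then call the grouping useful if the group's majority view predicts a held-out property better than the global majority would. Everything here (the data, the pattern-grouping, the accuracy measure) is my own sketch of the idea.

```python
from collections import Counter, defaultdict

# toy data: each key's expressed property values; "target" is what we want to predict
keys = {
    "k1": {"a": 1, "b": 1, "target": "x"},
    "k2": {"a": 1, "b": 1, "target": "x"},
    "k3": {"a": 0, "b": 0, "target": "y"},
    "k4": {"a": 0, "b": 0, "target": "y"},
    "k5": {"a": 1, "b": 1, "target": "x"},
}

def group_by_pattern(keys, props):
    """Form groups of keys that share an identical pattern over `props`."""
    groups = defaultdict(list)
    for k, vals in keys.items():
        groups[tuple(vals[p] for p in props)].append(k)
    return groups

def grouping_accuracy(keys, groups, target):
    """Predict each key's target as its group's majority value;
    the grouping is 'useful' if this beats the global majority rate."""
    correct = 0
    for members in groups.values():
        _, n = Counter(keys[k][target] for k in members).most_common(1)[0]
        correct += n
    return correct / len(keys)

groups = group_by_pattern(keys, ["a", "b"])
grouping_accuracy(keys, groups, "target")  # -> 1.0: these groups perfectly predict the target
```

the global majority ("x") only gets 3 of 5 right, so this grouping is useful by the stated criterion; searching over which `props` to group on is the "learned inference over the theory that there are useful groups".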