Pre-emptive content index
For quite some years I never watched any YouTube videos. Then there was a Java-based website which could download them, but it was cumbersome. Then there was youtube-dl, and now YouTube is starting to head towards reasonable by my standards - or rather, towards a reasonable protocol for "consuming" content: pre-emptive local storage of everything.

This is a principle by which I view or read anything - nothing in-browser: no in-browser media players, certainly no Flash plugins, no in-browser PDF viewing, etc. I apply the same to code - if I can't download the source and compile it myself (which often enough I don't actually do, but at least I can), then I won't touch it. So if I've read something I personally considered worthy of the price of my human attention, it exists somewhere on my local storage.

I call this pre-emptive since I always, consistently, download the content before ever reading, listening to or viewing it ("consuming" sounds like a base description, belittling us humans).

In a "perfect" world, all articles - all content - would be indexed with git, or in a git-compatible way, providing enhanced possibilities for caching, verifying, indexing, retrieval, duplication/backup, and sharing and synchronizing with fellow private-net sharers. As this concept and its implementation became pervasive, some publishers would take advantage of it as a form of compression, reducing publishing bandwidth requirements (somewhat analogous to torrents, but with greater integrity of the data being distributed).

"As Tim O'Reilly says, my problem is not piracy, it's obscurity"
creativecommons.org/weblog/entry/7774

Our true coin is our human attention. The ticket to relevance -> visibility -> popularity -> ubiquity is the 'free will' choices that fellow humans make in 'spending' their human attention - their 'life energy' - on that which you create, publish, or wish to see manifest in the world.
Choose wisely, fellow humans, both in your attempts to shift the attention-spending of others and in your own attention-spending.

---

I imagine the following:

- A browser plugin, let's call it "Pre-emptive Content Plugin" for now, which is configured with a data store/directory location for the browser cache. It's a --bare git repo.
- Each item of content is added to it, and caching rules are applied on top of that.
- The plugin causes the browser UI/chrome to display (or provide a shortcut for) "this is important to me" buttons/links/keyboard shortcuts, which tell the browser's git cache that this content is to be kept 'permanently' for offline viewing/synchronization/backup/etc.
- A similar UI/chrome element, "Hot", informs the plugin that this data/frame/page/website is especially contentious, needing duplication into the "Pre-emptive Private Net Data Cache for Hot Content", to be thereafter Streisanded to the world.
- Etc.

Basically: industrializing/commoditizing content care, custodianship and distribution.

Zenaan
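The bare-git-repo cache described above could be prototyped with git's plumbing commands alone. A minimal sketch - all paths, the URL, and the ref naming scheme are illustrative assumptions, not part of any existing plugin:

```shell
# hypothetical cache location; a bare repo, as the plugin sketch proposes
CACHE=$(mktemp -d)/content-cache.git
git init --bare -q "$CACHE"

url="https://example.com/article"
page=$(mktemp)
# stand-in for the actual pre-emptive download of the page
printf '<html>pre-emptively fetched content</html>\n' > "$page"

# content-addressed storage: the blob id doubles as an integrity check
blob=$(git --git-dir="$CACHE" hash-object -w "$page")

# key the blob by a hash of the URL, so "this is important to me"
# can later pin exactly this ref against garbage collection
ref="refs/cache/$(printf %s "$url" | sha1sum | cut -d' ' -f1)"
git --git-dir="$CACHE" update-ref "$ref" "$blob"

# offline retrieval; git fsck on the repo would catch any corruption
git --git-dir="$CACHE" cat-file blob "$ref"
```

Caching rules, expiry, and synchronization with "fellow private net sharers" would layer on top (git fetch/push between caches), but the storage, verification and retrieval primitives already exist.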
On 6/27/15, Zenaan Harkness <zen@freedbms.net> wrote:
> ... So if I've read something I personally considered worthy of the price of my human attention, it exists somewhere on my local storage.
this is good practice; although it would be better to have a two-way flow of collaboration - open design, etc. while pragmatism fills your local cache, open content fills workflows and production.
> I call this pre-emptive since I always consistently download the content before ever reading, listening or viewing (/"consuming" - sounds like a base description, belittling we humans).
i remember Zooko musing about this years ago, needing a browser extension that kept a complete archive of all pages / content viewed during a session. i don't recall him finding it, and i can't seem to locate the blog post. i will try again later...

best regards,
Coupled with a little local content storage server, this could be as simple as a five-line userscript: AJAX the whole HTML document and any "relevant" embedded content (video, audio, images...) to the server, with the current URI as the storage key. Retrieval and serving by the local server (rewriting embeds on the fly), and offering it as part of a distributed content store, is a later exercise, but a quick Streisand hack should be easy enough.

...that's all assuming you don't just POST the current URI to a little app that just wget-spiders the whole thing. :)

On 28 June 2015 03:36:35 GMT+01:00, coderman <coderman@gmail.com> wrote:
> On 6/27/15, Zenaan Harkness <zen@freedbms.net> wrote:
>> ... So if I've read something I personally considered worthy of the price of my human attention, it exists somewhere on my local storage.
> this is good practice; although it would be better to have two way flow of collaboration - open design, etc. while pragmatism fills your local cache, open content fills workflows, production.
>> I call this pre-emptive since I always consistently download the content before ever reading, listening or viewing (/"consuming" - sounds like a base description, belittling we humans).
> i remember Zooko musing about this years ago, needing a browser extension that kept a complete archive of all pages / content viewed during a session. i don't recall him finding it, and i can't seem to locate the blog post. i will try again later...
> best regards,
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
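Cathal's fallback idea - POST the current URI to a little app that wget-spiders the page - rests on using the URI as the storage key. A rough sketch of that key/store layout; the store location and file names are assumptions, and the wget line is left commented since it needs network access:

```shell
# hypothetical on-disk store, one directory per captured URI
STORE=$(mktemp -d)
uri="https://example.com/article"

# storage key: a hash of the URI, so any URI maps to a safe directory name
key=$(printf %s "$uri" | sha256sum | cut -d' ' -f1)
mkdir -p "$STORE/$key"

# keep the reverse mapping so the key can be resolved back to its URI
printf '%s\n' "$uri" > "$STORE/$key/uri"

# the actual spidering step (networked): -p fetches page requisites
# (images, css), -k rewrites links for offline viewing
# wget -q -p -k -P "$STORE/$key" "$uri"

ls "$STORE/$key"
```

The "little app" itself then reduces to an HTTP endpoint that accepts a URI and runs the above; serving the captured copies back, and joining a distributed content store, layers on later, as the mail suggests.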
On Sat, Jun 27, 2015 at 07:36:35PM -0700, coderman wrote:
> i remember Zooko musing about this years ago, needing a browser extension that kept a complete archive of all pages / content viewed during a session. i don't recall him finding it, and i can't seem to locate the blog post. i will try again later...
i built omnom, a delicious-like bookmarking engine, with a greasemonkey script that made a snapshot of the rendered page as it was in your browser and inlined all its css and images (as data urls), so it was one (quite huge) html file. unfortunately i think greasemonkey/userscripts have been neutered so that the snapshotting does not work anymore.

the meat is still available: https://gitorious.org/tagr/omnom/raw/419b512734021b71c01500514b5ae87d0b7f3ab...

i know - ridiculous, someone posting code on the cypherpunks list. i hope you're all not too offended by my contributions to your fine noise.

--
otr fp: https://www.ctrlc.hu/~stef/otr.txt
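The inlining step omnom's greasemonkey script did in-browser can be approximated outside the browser too. A minimal sketch of turning an external image reference into a self-contained data: URL, per RFC 2397 - the file and its contents are made up, and GNU coreutils base64 is assumed:

```shell
# stand-in for an image the page references externally
img=$(mktemp)
printf 'not-really-a-png' > "$img"

# base64-encode the bytes and wrap them as an RFC 2397 data: URL;
# -w0 disables line wrapping so the URL stays on one line
dataurl="data:image/png;base64,$(base64 -w0 "$img")"

# rewriting <img src="..."> to this yields the one big self-contained
# html file the script produced, with no external fetches left
printf '<img src="%s">\n' "$dataurl"
```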
On 6/28/15, stef <s@ctrlc.hu> wrote:
> On Sat, Jun 27, 2015 at 07:36:35PM -0700, coderman wrote:
>> i remember Zooko musing about this years ago, needing a browser extension that kept a complete archive of all pages / content viewed during a session. i don't recall him finding it, and i can't seem to locate the blog post. i will try again later...
> i built omnom, a delicious-like bookmarking engine, with a greasemonkey script that made a snapshot of the rendered page as it was in your browser and inlined all its css and images (as data urls), so it was one (quite huge) html. unfortunately i think greasemonkey/userscripts have been neutered so that the snapshotting does not work anymore.
> the meat is still available: https://gitorious.org/tagr/omnom/raw/419b512734021b71c01500514b5ae87d0b7f3ab...
> i know - ridiculous, someone posting code on the cypherpunks list, i hope you're all not too offended by my contributions to your fine noise.
My God! Shock :)
On Sunday, 28 June 2015 at 11:00:29, stef wrote:
> i know - ridiculous, someone posting code on the cypherpunks list, i hope you're all not too offended by my contributions to your fine noise.
Please turn in your Cypherpunk card and find your way to the backdoor.

--
Regards,
Michał "rysiek" Woźniak

Changing my GPG key :: http://rys.io/pl/147
GPG Key Transition :: http://rys.io/en/147
On Saturday, 27 June 2015 at 19:36:35, coderman wrote:
> On 6/27/15, Zenaan Harkness <zen@freedbms.net> wrote:
>> ... So if I've read something I personally considered worthy of the price of my human attention, it exists somewhere on my local storage.
> this is good practice; although it would be better to have two way flow of collaboration - open design, etc. while pragmatism fills your local cache, open content fills workflows, production.
>> I call this pre-emptive since I always consistently download the content before ever reading, listening or viewing (/"consuming" - sounds like a base description, belittling we humans).
> i remember Zooko musing about this years ago, needing a browser extension that kept a complete archive of all pages / content viewed during a session. i don't recall him finding it, and i can't seem to locate the blog post. i will try again later...
I use PrintToPdf for this: https://addons.mozilla.org/pl/firefox/addon/printpdf/
It does the job well.

--
Regards,
Michał "rysiek" Woźniak

Changing my GPG key :: http://rys.io/pl/147
GPG Key Transition :: http://rys.io/en/147
On 28 Jun 2015 04:28, "Zenaan Harkness" <zen@freedbms.net> wrote:
> For quite some years, I never watched any youtubes - then there was a Java-based website which could download them, but it was cumbersome.
> Then there was youtube-dl, and now youtube is starting to head towards reasonable by my standards, or rather, a reasonable protocol for "consuming" content - pre-emptive local storage of everything.
> [...]
> In a "perfect" world, all articles, all content is indexed with git, or in a git-compatible way, providing enhanced possibilities for caching, verifying, indexing, retrieval, duplication/ backup, and sharing and synchronizing with fellow private net sharers. As this concept and its implementation become pervasive, some publishers would take advantage of it as a form of compression to reduce publishing bandwidth requirements (somewhat analogous to torrents, but with greater integrity of the data being distributed).
http://ipfs.io/

Close enough for the underlying framework?
participants (6)

- Cathal (Phone)
- coderman
- Natanael
- rysiek
- stef
- Zenaan Harkness