[crazy][hobby][spam] Automated Reverse Engineering

k gmkarl at gmail.com
Sun Jan 16 11:45:21 PST 2022


a batchsize of 20 is about the same speed

correction: this is not actually the free colab.  to make it work on
the free colab, you'd drop the batchsize so it fits in ram.  while
frustrated with the tpu rpc timeouts i bought the paid colab.  it
didn't help; it turns out the timeout is hardcoded in the
tensorflow source.  the google cloud sdk shouldn't have that timeout.  this
notebook is using a single tesla p100 with 16G of vram.  batchsize=24
exhausts the vram.
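
if you wanted to automate the batchsize hunt instead of guessing, a
sketch like this would do it: run one forward/backward pass, catch
tensorflow's OOM error, and back off.  build_model() and make_batch()
are placeholders here, not the notebook's real code.

# rough sketch, not the actual notebook code: probe for the largest
# batchsize that fits in vram by catching the out-of-memory error and
# halving until a step succeeds.
import tensorflow as tf

def largest_fitting_batchsize(build_model, make_batch, start=24, floor=1):
    batchsize = start
    while batchsize >= floor:
        try:
            model = build_model()                 # placeholder model factory
            x, y = make_batch(batchsize)          # placeholder batch factory
            with tf.GradientTape() as tape:
                logits = model(x, training=True)
                loss = tf.reduce_mean(
                    tf.keras.losses.sparse_categorical_crossentropy(
                        y, logits, from_logits=True))
            # the backward pass is what actually exhausts the memory
            tape.gradient(loss, model.trainable_variables)
            return batchsize
        except tf.errors.ResourceExhaustedError:
            # out of memory at this size; halve and retry
            batchsize //= 2
    raise RuntimeError("no batchsize fit in memory")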

might let it run for a bit and see how fast it fits
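
for the "how fast it fits" part, a rough examples-per-second check
makes batchsizes directly comparable (e.g. 20 vs whatever else fits).
train_step() and make_batch() are placeholders again, not the
notebook's real functions.

# rough sketch: time a fixed number of training steps and report
# throughput in examples/second.  a few warmup steps run first so
# graph tracing / compilation isn't counted.
import time

def examples_per_second(train_step, make_batch, batchsize, steps=50, warmup=5):
    batch = make_batch(batchsize)
    for _ in range(warmup):
        train_step(batch)
    start = time.perf_counter()
    for _ in range(steps):
        train_step(batch)
    elapsed = time.perf_counter() - start
    return steps * batchsize / elapsed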

