17 Jan
2022
4:45 a.m.
batchsize of 20 is about the same speed

redaction: this is not actually the free Colab. to make it work on the free Colab you'd drop the batchsize until it fit in RAM. while frustrated with the TPU RPC timeouts i bought the paid Colab. it didn't help, because it turns out the timeout is hardcoded in the TensorFlow source; the Google Cloud SDK shouldn't have that timeout. this notebook is running on a single Tesla P100 with 16 GB of VRAM, and batchsize=24 exhausts the VRAM. might let it run for a bit and see how fast it fits
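
for reference, a minimal sketch of the stock TF 2.x Colab TPU attach sequence (these are real public API calls; note that none of them take the RPC deadline as a parameter, which is the complaint above):

    import tensorflow as tf

    # standard Colab TPU attach; the RPC deadline that keeps timing out
    # lives inside TensorFlow's TPU client code and is not settable here
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)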
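
and a hedged sketch of the batchsize probing described above: step down from 24 until a single training step survives without an OOM. largest_fitting_batchsize, make_model, and dataset are made-up names; this assumes make_model returns a fresh compiled Keras model and dataset is an unbatched tf.data.Dataset:

    import tensorflow as tf

    def largest_fitting_batchsize(make_model, dataset, candidates=(24, 20, 16, 12, 8)):
        # try one training step at each batchsize; the first one that
        # doesn't exhaust VRAM (16 GB on this P100) wins
        for bs in candidates:
            try:
                model = make_model()  # fresh model so a prior OOM doesn't leak state
                model.fit(dataset.batch(bs).take(1), epochs=1, verbose=0)
                return bs
            except tf.errors.ResourceExhaustedError:
                continue  # out of memory at this batchsize; step down
        return None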