2 Apr 2022, 2:57 p.m.
Basically, for all the approaches, the key part is learning how to represent letters from sound logits, and we'll need some data to do that. If we use transformer models, that means backpropagating through whatever does the tokenization.
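As a rough illustration of that idea, here's a minimal sketch, assuming PyTorch, per-frame sound logits as input, and plain character-level targets with a CTC loss instead of a learned tokenizer; all the sizes and names are illustrative assumptions, not anything fixed above.

```python
# Sketch: train a small transformer encoder to map per-frame sound logits
# to letter logits. Dimensions, CTC loss, and the synthetic batch are
# illustrative assumptions only.
import torch
import torch.nn as nn

N_SOUND = 64   # assumed size of the sound-logit vector per frame
N_CHARS = 27   # assumed alphabet: 26 letters + CTC blank at index 0

class SoundToLetters(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.proj_in = nn.Linear(N_SOUND, d_model)   # embed sound logits
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.proj_out = nn.Linear(d_model, N_CHARS)  # per-frame letter logits

    def forward(self, x):                            # x: (batch, time, N_SOUND)
        h = self.encoder(self.proj_in(x))
        return self.proj_out(h)                      # (batch, time, N_CHARS)

model = SoundToLetters()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ctc = nn.CTCLoss(blank=0)                            # letters can span several frames

# Synthetic batch standing in for real (sound-logit, letter) training data.
x = torch.randn(8, 100, N_SOUND)                     # 8 clips, 100 frames each
targets = torch.randint(1, N_CHARS, (8, 20))         # 20 letters per clip
input_lengths = torch.full((8,), 100, dtype=torch.long)
target_lengths = torch.full((8,), 20, dtype=torch.long)

log_probs = model(x).log_softmax(-1).transpose(0, 1) # CTC expects (time, batch, chars)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                      # backprop through the whole stack
optimizer.step()
```

Swapping the character targets for a subword tokenizer wouldn't change the gradient path; it would only change what the output layer predicts.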