27 Jan
2022
3:47 a.m.
- https://github.com/xloem/transformers/commit/7575b8286dd5c2b328d3c34d9b66dab... - A draft of calling memory_efficient_attention from the Perceiver model when the relevant configuration parameters are set. Untested. Maybe I can copy Google's example again, like before, run the same test with those configuration settings enabled, and step through it to make sure it actually exercises the new code path.
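A minimal sketch of the dispatch pattern the draft is going for: gate on a config flag and fall back to plain softmax attention when xformers isn't available. The function and flag names here are illustrative assumptions, not the actual hooks in the transformers Perceiver code; only `xformers.ops.memory_efficient_attention` is the real library call.

```python
import math
import torch

try:
    # real xformers API; optional dependency
    from xformers.ops import memory_efficient_attention
    HAVE_XFORMERS = True
except ImportError:
    HAVE_XFORMERS = False

def attention(q, k, v, use_memory_efficient=False):
    # Hypothetical dispatch: names/flag are illustrative, not the
    # transformers Perceiver API. Shapes: (batch, seq, head_dim).
    if use_memory_efficient and HAVE_XFORMERS:
        return memory_efficient_attention(q, k, v)
    # vanilla scaled dot-product attention fallback
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 8, 16)
out = attention(q, k, v, use_memory_efficient=False)
print(tuple(out.shape))
```

Stepping through with both flag settings and comparing outputs (they should agree to within float tolerance) would be one way to confirm the new branch is actually taken.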