26 Jan
2022
10:15 a.m.
As a brainmind, part of the goal when doing this is understanding how it works for reuse; that's basic to 'algorithm concept optimization'. So just throwing a transformer model at the problem doesn't reach the solution unless end users can fully comprehend and verify the workings of the model themselves, reapply those concepts, and so on, which is a current research problem. More thoughts rising might give more ideas. Transformer models could make sense if _we_ understood them better. A quick explanation for why we aren't using them better is that "we are mind controlled not to make AGI"; that framing helps a lot of concept areas move forward with explaining things. It's likely only a quick explanation, not the real one.