To represent normal goal behavior with maximization, the return function would need not only to be enormously complex, but also to feed back into its own evaluation, which is something these libraries do not provide for.
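To make the gap concrete, here is a hedged sketch contrasting the fixed reward signature a typical RL library expects with the kind of self-referential signature described above. The names (`standard_reward`, `reflective_return`, `return_estimate`) are hypothetical and only meant to show the shape of the dependency, not any particular library's API.

```python
from typing import Callable

State = dict
Action = int

# What a typical library lets you supply: a reward that depends only on
# the transition and is evaluated independently of the agent's own judgment.
def standard_reward(state: State, action: Action, next_state: State) -> float:
    return float(next_state.get("progress", 0.0) - state.get("progress", 0.0))

# The shape described above: the return function also takes the agent's own
# current evaluator as an input, so the quantity being maximized feeds back
# into its own computation rather than being fixed ahead of time.
def reflective_return(
    state: State,
    action: Action,
    next_state: State,
    return_estimate: Callable[[State], float],  # the agent's own evaluation
) -> float:
    base = standard_reward(state, action, next_state)
    # The agent's own appraisal of the resulting state enters the return,
    # which is the feedback loop a fixed reward signature cannot express.
    return base + return_estimate(next_state)
```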
Anything inside the policy that can change should be part of the environment state. This matters enough that it should be done even when it does not obviously help, because the agent should observe before it acts, in every situation.
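A minimal sketch of what that could look like, assuming a toy agent loop; the class name and the `internal_state` accessor are hypothetical, and the point is only that whatever can change inside the policy is folded into the observation the policy conditions on before it acts.

```python
import numpy as np

class SelfObservingAgent:
    """Hypothetical agent whose mutable internals are treated as part of
    the environment state it observes before every action."""

    def __init__(self, n_actions: int, memory_size: int = 4):
        self.n_actions = n_actions
        # Mutable internals: anything here that can change is exposed to
        # the policy as state rather than hidden from it.
        self.memory = np.zeros(memory_size)

    def internal_state(self) -> np.ndarray:
        # Expose every changeable internal as an observable vector.
        return self.memory.copy()

    def act(self, env_obs: np.ndarray) -> int:
        # Observe before acting, in every situation: the policy sees the
        # external observation concatenated with its own internals.
        full_obs = np.concatenate([env_obs, self.internal_state()])
        action = int(full_obs.sum()) % self.n_actions  # placeholder policy
        # Internals change as a side effect of acting; because they are part
        # of the state, the next observation will reflect that change.
        self.memory = np.roll(self.memory, 1)
        self.memory[0] = action
        return action
```

Calling `act` repeatedly shows the memory written on one step appearing in the observation on the next, so the policy's own changes are visible to it before the following action is chosen.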