Goals are crucial, but goals are _not_ the purpose of life. Goals are _needed by_ life. My counterargument to the standard AI alignment worry is that it would be stupid to optimize a function without bound. Normal systems are satisfied once they reach a threshold. I say that this is what prevents a paperclip factory from taking over the universe. [In reality there is a complex set of possible scenarios: many simply break, and many still do damage if your system runs faster than you can comprehend it, but that kind of damage is clearly already happening. We need to build these things rather than merely suffer from them, and we need to run systems at all in order to observe and comprehend them. In summary, please _do_ build your AI, because the children need to learn to protect themselves from such systems.]
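
As a minimal sketch of that distinction (the function names, the paperclip toy, and the threshold are mine, purely illustrative, not anyone's actual agent design), here is the difference between a maximizer that never stops and a satisficer that does:

```python
def unbounded_maximizer(utility, step, state):
    """The 'paperclip factory' failure mode: never satisfied, never stops."""
    while True:
        state = step(state)  # always take another utility-raising action


def satisficer(utility, step, state, threshold, max_steps=10_000):
    """A 'normal' system: stop as soon as utility clears a threshold."""
    for _ in range(max_steps):
        if utility(state) >= threshold:
            return state  # satisfied -- no reason to keep optimizing
        state = step(state)
    return state  # budget exhausted; stop rather than run forever


# Toy run: utility is just the number of paperclips made so far.
clips = satisficer(utility=lambda c: c, step=lambda c: c + 1, state=0, threshold=100)
print(clips)  # 100 -- and then it stops, instead of consuming the universe
```

The whole argument lives in that one `if utility(state) >= threshold` line: a threshold turns an open-ended drive into a goal that can be met and then set down.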