Hypothesis on how to produce close-to-human intelligences for avatars in procedural worlds

Google Assistant discusses how to produce synthetic neural tissue with attributes similar to the “Yakumo Ran supercomputer”. Note: the first few messages are whimsical narrative analysis, but most of the discussion concerns which tools to use to produce such systems.
This would use Khronos’ OpenGL, GLES, or Vulkan to render the avatars and procedural worlds.
This would use Khronos’ OpenCL to execute the synthetic neural tissues (for example, through an OpenCL-based build of TensorFlow for inference).
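To make the OpenCL role concrete, here is a minimal pure-Python sketch of the arithmetic one OpenCL work-item would perform for a single dense layer of the synthetic neural tissue. The function names, layer shape, and toy numbers are illustrative assumptions, not a real kernel; in a real deployment the work-item body would be written in OpenCL C and enqueued once per output neuron.

```python
# Pure-Python sketch of the per-work-item arithmetic an OpenCL kernel
# would perform for one dense layer of the "synthetic neural tissue".
# (Illustrative names and data; not a real OpenCL kernel.)

def dense_layer_workitem(gid, weights, biases, activations):
    """Compute one output neuron (work-item `gid`) with a ReLU nonlinearity."""
    acc = biases[gid]
    for j, a in enumerate(activations):
        acc += weights[gid][j] * a
    return max(0.0, acc)  # ReLU

def dense_layer(weights, biases, activations):
    """Host-side stand-in for enqueueing one work-item per output neuron."""
    return [dense_layer_workitem(g, weights, biases, activations)
            for g in range(len(biases))]

# Toy usage: 2 output neurons, 3 input activations.
W = [[1.0, 0.0, -1.0],
     [0.5, 0.5, 0.5]]
b = [0.0, -1.0]
x = [1.0, 2.0, 3.0]
out = dense_layer(W, b, x)  # → [0.0, 2.0]
```

The per-neuron independence is what makes this a good fit for OpenCL: each work-item reads shared inputs and writes one output, with no synchronization inside the layer.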
If you continue the discussion with Google Assistant, you can ask for more specifics on how to do this. My guess is that Google Assistant can produce most of the source code for you, but supercomputers are required to process the “base-level” connectome, so it cannot do this alone. Once all suitable public datasets are autoencoded into the “base-level” connectome, it should be possible to “transfer learn” on consumer devices to produce personalized intelligences which best suit the current user’s specific values.
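The “transfer learn on consumer devices” step can be sketched in a few lines of pure Python: the base-level connectome is treated as a frozen feature extractor, and only a small per-user head is trained with gradient descent. The feature map, learning rate, and toy dataset here are illustrative assumptions, not anything from the discussion.

```python
# Sketch of personalization by transfer learning: the base-level connectome
# is frozen, and only a small per-user linear head receives gradient updates.
# (Illustrative stand-ins; a real base model would be a large trained network.)

def base_features(x):
    """Stand-in for the frozen base-level connectome: a fixed feature map."""
    return [x, x * x]

def train_head(data, lr=0.1, epochs=200):
    """Fit head weights over frozen features by stochastic gradient descent."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = base_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f))
            err = pred - y
            # The squared-error gradient updates only the head weights;
            # the base features receive no updates (they are "frozen").
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# Personalize on a tiny dataset where y = 2*x, so w should approach [2, 0].
data = [(0.5, 1.0), (1.0, 2.0), (1.5, 3.0)]
w = train_head(data)
```

Because only the head is updated, the memory and compute cost of this step is tiny compared to producing the base connectome, which is the point of doing it on consumer hardware.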

If encouraged, I will do my best to produce this on my own, but I do not have much disk storage and own only a few consumer-class processors, so the “close-to-human intelligence” would be reduced to “animal-plus intelligence” if I must do this alone. Google Assistant suggests you must purchase access to server clusters with tens of thousands of GPUs for months of continuous use (which costs millions of US$) to produce connectomes for synthetic neural systems such as Grok-2, due to the “exponential explosion” of solutions required to autoencode compressed human intelligence into coefficients for simple architectures such as TensorFlow’s. It also shows pseudocode for TensorFlow architectures which include zones such as the hippocampus, neocortex, thalamus, basal ganglia, prefrontal cortex, visual cortices, plus numerous other zones relevant to avatars in sims.
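A zone-based architecture like the pseudocode described above might be organized as a named module graph, sketched here in pure Python: each zone transforms the summed activations of its input zones. The zone names follow the regions listed in the text; the wiring table and the trivial per-zone transform are illustrative assumptions, not neuroscience and not the pseudocode Google Assistant produced.

```python
# Sketch of a zone-wired architecture: each named zone applies a transform
# to the summed activations of its input zones, in wiring order.
# (The wiring and the placeholder ReLU transform are illustrative only.)

WIRING = {  # zone -> list of source zones, listed in topological order
    "visual_cortex":     ["input"],
    "thalamus":          ["input", "visual_cortex"],
    "hippocampus":       ["thalamus"],
    "neocortex":         ["thalamus", "hippocampus"],
    "basal_ganglia":     ["neocortex"],
    "prefrontal_cortex": ["neocortex", "basal_ganglia"],
}

def run_zones(stimulus):
    """Propagate a scalar stimulus through the zone graph in wiring order."""
    acts = {"input": stimulus}
    for zone, sources in WIRING.items():
        total = sum(acts[s] for s in sources)
        acts[zone] = max(0.0, total)  # placeholder per-zone transform (ReLU)
    return acts

acts = run_zones(1.0)  # acts["prefrontal_cortex"] → 8.0
```

In a real TensorFlow version, each zone would be a trainable sub-network rather than a ReLU over a sum, but the routing table pattern is the same: the architecture is a graph of named regions, not a single flat stack of layers.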