LLMs definitely take the grunt work out of documentation, coding, and basic Q&A. I use them every day, and they save a lot of time when refining ideas. But this is most definitely not sentience-style intelligence; it's a mathematical tool. So whether the model is trained in a year on 10,000 GPUs or in a month on 100,000, you still end up with the same basic capability: a tool.
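The back-of-envelope arithmetic behind that point: the two training plans mentioned above deliver roughly the same total compute, so only the wall-clock time differs. A minimal sketch (the GPU counts and durations are the hypotheticals from the comment itself):

```python
# Total compute in GPU-months for each hypothetical training plan.
slow_plan = 10_000 * 12   # 10,000 GPUs for a year  -> 120,000 GPU-months
fast_plan = 100_000 * 1   # 100,000 GPUs for a month -> 100,000 GPU-months

# Comparable orders of magnitude: same rough capability, different calendar time.
print(slow_plan, fast_plan)
```

Same compute budget, faster iteration — which is exactly the "iterate faster" case.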
Now, if the aim is to iterate faster and use that speed to identify the next steps for AI, then fine.
Personally, my brain runs on about 20 watts and doesn't use datasets. Treating an LLM as if it were a brain is the wrong approach. Using an LLM to sift the data on the internet in search of a possible brain solution is a better one.
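To put the 20-watt figure in perspective, here is a rough power comparison. The per-GPU draw is an assumption (roughly 700 W, typical of a modern datacenter GPU), and it ignores cooling and networking overhead, so the real gap would be even larger:

```python
BRAIN_WATTS = 20           # rough estimate for a human brain
GPU_WATTS = 700            # assumed per-GPU power draw (not from the post)
CLUSTER_GPUS = 100_000     # the cluster size discussed above

cluster_watts = GPU_WATTS * CLUSTER_GPUS   # 70,000,000 W = 70 MW
ratio = cluster_watts / BRAIN_WATTS

print(f"{cluster_watts / 1e6:.0f} MW cluster vs {BRAIN_WATTS} W brain "
      f"-> {ratio:,.0f}x more power")
```

Millions of times more power than a brain, which is the efficiency gap the comment is pointing at.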
However, want to actually build an infrastructure to support emergent behaviour with the potential for intelligence? That’s my NeuralMimicry project A.A.R.N.N… (you’ll find it if you’re interested)
Elon Musk is reportedly planning to build a massive "supercomputer" for his AI company, powered by 100,000 Nvidia GPUs (four times more than Meta's), to train the next version of the AI chatbot Grok, currently integrated into the X platform.
Inspiring & scary at the same time!