So much of the fear about AI has been fueled by chatter about how AI exhibits these so-called "emergent behaviors" that no one anticipated or built it for. As someone who spent my academic life on Commonsense Reasoning, which is by many accounts the holy grail of language understanding, I take issue with such characterizations.

I keep reminding people that a decade or more before LMs became this Large, the best techniques we had for world/causal modeling and script learning were, in fact, language models. All my years of research showed that predicting what happens next in a sequence of events, using LMs trained on web-scale text corpora, was indeed the best way to infer complex knowledge structures about the world! This is also why at Verneek we treat LMs as intermediary reasoning engines grounded on top of environments, i.e., #ContextualRetrievalGeneration.

So there is not much that is "shocking" about how good LLMs are at building representations/structures of language. It is all too easy to feed on hyperbolic takes about "AI getting out of control" if we ignore the historical context. All the doomsday fear-mongering merely distracts us from both the real harms and the real benefits of AI as #AugmentedIntelligence. Listen to some of my conversations at #Collision on these topics.