The framework pulls in external sources to enhance accuracy. Does it live up to the hype?
confabulation (also called “hallucination”), which is a creative gap-filling technique AI language models use when they encounter holes in their knowledge that weren’t present in their training data.
LLMs produce the most statistically likely response based on their training data and don’t understand anything they output, meaning they can present false information that seems accurate if you don't have expert knowledge on a subject.
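As a loose illustration of what "most statistically likely response" means, here is a toy Python sketch; the vocabulary and probabilities are invented for illustration and do not come from any real model's internals.

```python
# A toy sketch of what "most statistically likely response" means: the model
# repeatedly picks the next token according to probabilities learned from its
# training data. The vocabulary and numbers here are invented for illustration.
import random

# Hypothetical probabilities for the token after "The capital of France is"
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.05,
    "London": 0.03,
    "a": 0.30,
}

def pick_next_token(probs: dict, greedy: bool = True) -> str:
    """Greedy decoding takes the single most likely token; sampling can pick
    a less likely one, which is one way plausible-sounding errors slip in."""
    if greedy:
        return max(probs, key=probs.get)
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(pick_next_token(next_token_probs))               # "Paris"
print(pick_next_token(next_token_probs, greedy=False)) # occasionally "London"
```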
As well, if the RAG model can't find any information on the search, will it make shit up as well?
The article addresses this, if you'd just read it, but yes. It can hallucinate stuff even when it does find information related to the query.
One other thing that I don't think's been mentioned is why can't there be code that requires the LLM to provide a null answer if it doesn't have the data. "I'm sorry Dave, I can't do that"
The code runs after the model has already come up with an answer, so it has no way of telling whether the model actually had any good information to draw from or not.
I'm pretty sure this is how we will get AGI. Just as our brain has a wide range of specialized regions for different tasks, AGI will be achieved through the integration of a wide range of machine learning/AI tools. These different tools are already capable of stunning feats in their respective domains; it is just a question of time until we are able to bring them all together.
But will the AGI then decide that it can make stuff up if it wants to, just like people?
Q: A man is walking with a goat and they come to a river with a boat on the riverside. How can they get across the river? Please give a detailed explanation of the reasoning behind your answer.
A: This is a classic river crossing puzzle! Here's how they can get across:
Steps:
Man takes the goat across the river. The man rows the boat across, leaving the goat on the other side.
Man returns alone. The man rows the boat back to the original side by himself.
Man takes something else across the river (optional). This step is often omitted in the riddle, but the man could take something else across the river, like a cabbage or a wolf (if it's a different version of the riddle). For simplicity, let's assume he returns alone.
Man picks up the goat. The man rows back across the river and picks up the goat.
Man and goat cross the river together. They both row across to the other side.
Reasoning:
The key to this puzzle is understanding the constraint: the goat cannot be left alone on either side of the river without the man present. Here's why the above solution works:
Prevents the goat from being eaten: In variations of this puzzle, there might be a wolf that would eat the goat. This solution ensures the goat is never alone with a potential predator.
Prevents the goat from eating something: Other versions might have a cabbage that the goat would eat. This solution keeps the goat from being alone with the cabbage.
Let me know if you'd like to try another river crossing puzzle!
The article addresses this, if you'd just read it, but yes. It can hallucinate stuff even when it does find information related to the query.
The code runs after the model has already come up with an answer, so it has no way of telling whether the model actually had any good information to draw from or not.
What I am talking about is the LLM itself without a RAG to wipe its ass. Why doesn't the LLM code base have code that provides a null result when it doesn't have a result? That would solve most of the current issues regarding "hallucinations".
This article starts off sounding very iffy:
but a subsequent line is better:
=============
My initial comment was cranky old man ranting based on the first quote. I do wish the first quote was less supportive of the nonsense hype.
How is pointing out the fact that LLMs hallucinate, nonsense hype?
How is pointing out the fact that LLMs hallucinate, nonsense hype?
Because that term is as much hype as calling an LLM "AI".
How is pointing out the fact that LLMs hallucinate, nonsense hype?
Because it uses the phrasing "creative gap-filling technique AI language models use when they encounter holes in their knowledge that weren’t present in their training data."
So we're fixing "AI" by... [checks notes]... doing a web search?
Yeah, and the idea is that the several top results are in need of some kind of synthesis from multiple results into one summary.
"Generative Deep Learning: Teaching Machines how to Paint, Write, Compose and Play."
I'm not trying to be an a$$ or disrespectful, but: Why?
Slightly surprised there hasn't been much discussion about teaching machines to pray. And before you downvote me, several sci-fi stories have revolved around teaching machines to pray. Arthur Clarke did a brilliant short on this topic, 'The Nine Billion Names of God', which won a retrospective Hugo for 1954.
In some religions, prayer can be done mechanically, e.g. via prayer wheels, which are sometimes wind-powered or water-powered. Quite curious what would be the natural progression of that strand of thought via the use of AI tech.
In fact I did read it and then re-read it twice more. Nowhere do I see where or how RAG deals with copyright and IP issues.
I didn't say anything about copyrights or IP issues. I literally quoted the part of your comment asking about RAG and hallucinations and answered that part.
What I am talking about is the LLM itself without a RAG to wipe its ass. Why doesn't the LLM code base have code that provides a null result when it doesn't have a result?
Because the model isn't deterministic code. It's not made the same way regular programs are, where a programmer sits down, figures out the program flow, and writes functions to do stuff.
I'm not trying to be an a$$ or disrespectful, but: Why?
Humans pray to a deity in the hope that said deity will reward them in some way - either a better afterlife, a better current life, or simply scoring "points" within their religious scheme.
An AI would have no hard evidence of deities or of any benefit from praying - only the resources spent "praying" for an outcome that is literally impossible to achieve, for one of two reasons: one, the AI itself does not know that a deity exists or could provide tangible benefits; or two, most of the religions in the world would not recognize that an AI has a soul, and therefore it would not/could not be heard by a deity.
Giansiracusa also pointed out that the rise of generative AI-aided search results—and the recent "enshittification" of the web through AI-generated content—means that what might at one point have been a halfway useful solution to a fundamental flaw in generative AI tools could become less useful if AI language models draw from AI-written junk found online.
How is pointing out the fact that LLMs hallucinate, nonsense hype?
Because the term "hallucinate" implies that LLMs have an internal conceptualisation of reality that has become temporarily warped, which, of course, they do not. It's a loaded phrase meant to trick people into believing LLMs can think. The "hallucination" is no different from anything else the model outputs.
How does the RAG model deal with IP and copyright?
The sources the agent uses to generate the answer can be included in the response to the user. You might not be able to tell which facts came from where, so you can just include all sources given to the agent. Bing probably does something similar. The agent searches, gets some results, generates an answer, and you get citations at the bottom of what you see.
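For readers who want to see the shape of that flow, here is a rough Python sketch of search-then-answer with every source passed back to the user; search() and call_llm() are hypothetical placeholders, not Bing's or any vendor's actual API, and the returned data is made up.

```python
# A rough sketch of the flow described above: search, feed the top results to
# the model, and hand the user the answer plus every source the model was shown.
# search() and call_llm() are invented placeholders, not any product's real API.
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    text: str
    sources: list  # URLs of all documents given to the model

def search(query: str) -> list:
    # Placeholder; a real implementation would call a search engine API.
    return [{"title": "Example page", "url": "https://example.com/rag",
             "snippet": "Retrieval-augmented generation grounds answers in retrieved text."}]

def call_llm(prompt: str) -> str:
    # Placeholder; a real implementation would call whatever model is in use.
    return "(model-generated answer based on the supplied sources)"

def answer_with_citations(question: str, k: int = 3) -> CitedAnswer:
    results = search(question)[:k]
    context = "\n\n".join(f"[{i + 1}] {r['title']}\n{r['snippet']}"
                          for i, r in enumerate(results))
    prompt = ("Answer the question using only the numbered sources below.\n\n"
              f"{context}\n\nQuestion: {question}")
    # Since we can't reliably tell which fact came from which page, cite every
    # source the model was shown rather than attributing per sentence.
    return CitedAnswer(text=call_llm(prompt),
                       sources=[r["url"] for r in results])

ans = answer_with_citations("What is retrieval-augmented generation?")
print(ans.text)
for url in ans.sources:
    print("source:", url)
```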
As well, if the RAG model can't find any information on the search, will it make shit up as well?
Yes. And it can do so even if the information is there. This can be mitigated by more "I don't know" in training.
One other thing that I don't think's been mentioned is why can't there be code that requires the LLM to provide a null answer if it doesn't have the data. "I'm sorry Dave, I can't do that"
Not code, but that can be tuned. The model usually has some confidence in the answer. If the model is trained to say "I don't know", this mostly fixes the issue.
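The comment above is about training the model to abstain; a cruder, code-level cousin of that idea is thresholding the model's own token probabilities. Here is a minimal sketch of that, assuming a hypothetical generate_with_logprobs() helper; real APIs expose log-probabilities in their own ways, and the numbers below are invented for illustration.

```python
# A minimal sketch of turning low model confidence into an explicit "I don't
# know" instead of a made-up answer. generate_with_logprobs() is a hypothetical
# stand-in; the answer and log-probabilities here are invented.
import math

def generate_with_logprobs(prompt: str):
    # Placeholder: pretend the model returned an answer plus the log-probability
    # it assigned to each generated token.
    return "The answer is 42.", [-0.1, -0.3, -2.5, -0.2]

def answer_or_abstain(prompt: str, min_avg_prob: float = 0.5) -> str:
    answer, logprobs = generate_with_logprobs(prompt)
    # Geometric mean of the per-token probabilities, used as a rough confidence.
    avg_prob = math.exp(sum(logprobs) / len(logprobs))
    return answer if avg_prob >= min_avg_prob else "I don't know."

print(answer_or_abstain("What did I have for breakfast?"))  # prints "I don't know."
```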
The code runs after the model has already come up with an answer, so it has no way of telling whether the model actually had any good information to draw from or not.
This is when you regenerate. It's entirely possible to generate three answers, for example, and pick the best, or have a "fact checker" agent go over the results. Google DeepMind recently outlined a way.
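As a toy sketch of that "generate several answers and have a checker pick the best" idea (sometimes called best-of-n), here generate() and judge() are invented stand-ins, not DeepMind's actual method or any particular product's API.

```python
# A toy sketch of best-of-n: sample several candidate answers, score each with
# a "fact checker", and return the highest-scoring one. generate() and judge()
# are invented placeholders for illustration only.
import random

def generate(question: str) -> str:
    # Placeholder: a real call would sample the model with temperature > 0,
    # so repeated calls can disagree with each other.
    return random.choice(["Paris", "Paris", "Lyon"])

def judge(question: str, answer: str) -> float:
    # Placeholder "fact checker": a real one might be a second model prompted
    # to score each candidate against retrieved sources.
    return 1.0 if answer == "Paris" else 0.0

def best_of_n(question: str, n: int = 3) -> str:
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda a: judge(question, a))

print(best_of_n("What is the capital of France?"))
```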
But will the AGI then decide that it can make stuff up if it wants to, just like people?
Generative AI can already choose to lie (and if agents are forced to think out loud, the reasoning can include self-preservation). What exactly counts as a choice is a philosophical disagreement. Some here will argue for pages over it. Regardless:
implies that there is entity that can "encounter" things and be "creative".
Generative, creative, is a matter of perspective. We make stuff up, create stuff, based on what we have been trained with. With agents, there absolutely is an "entity" that creates. That entity may be a predictive figment of the imagination, but it also exists when it's rendered.