🚨🚨 𝗛𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝗡𝗢𝗧 𝗮 𝗯𝘂𝗴 𝗯𝘂𝘁 𝗮 𝗙𝗘𝗔𝗧𝗨𝗥𝗘 !!

In many organizations, especially those operating in regulated spaces (think financial services, healthcare, etc.), hallucination is often considered a bad thing. That's primarily because most of the use-cases uncovered in these industries so far focus on information extraction (think RAG). For such use-cases it does make sense to force the model NOT to be too creative.

👉 But we are just getting started in terms of what these models can do. Think of use-cases around new product launches, creating GTM material and strategies, or coming up with improvements to existing business processes based on lessons learned from the past, and many more.

👉 Let's keep reminding ourselves why LLMs were created in the first place: to generate "new" content and ideas. Using them only for information retrieval tasks is a massive under-utilization of this amazing technology.

Agree/Disagree? Share your thoughts in the comments.

Follow me to stay up to date on all things Gen AI/ML at AWS and beyond !!

#GenAI
Vikesh Pandey perhaps someone can come up with an alternative term for hallucination. Dreamscape? The quest here is explainability/predictability. That is, an occasional acceptable deviation can be forgiven for greater benefits most of the time, but one needs to know why, when, and to what true extent the odd-looking creativity (aka hallucination) occurs.