Bridging the data gap between human children and large language models can reveal a lot about the human mind and how children learn, says Stanford HAI faculty affiliate Michael Frank. Watch the full video of his talk at our five-year anniversary conference: https://lnkd.in/g7H63eWR
So I'm here to tell you a little bit about the data gap between human children and large language models. AI models are fascinating, pervasive, sometimes problematic, and clearly here to stay. But what can AI tell us about the human mind, and in particular, can it give us insight into how children learn? So what I've argued here is that we really need to mind this data gap; we really need to be researchers that dig into the reasons for the efficiencies of human learning relative to machine learning. And the way to do this is to create a data ecosystem that allows us to train models on what human children experience and to evaluate them on the learning outcomes that we measure in real children.
The complexities within this topic are many; it's exciting to see this research and focus.
For understanding children's speech and the careful, thoughtful building of recognition models #voiceAI — massive nod and credit to SoapBox Labs under visionary founder Patricia Scanlon and team, including Martyn Farrows, Amelia Kelly & Niamh Bushnell.
Thank you for this insightful perspective! Applying systems thinking in the HLEP EdTech platform to bridge the data gap between human children and large language models can indeed reveal a lot about the human mind and how children learn. By integrating comprehensive data analysis with evidence-based approaches, we can develop more effective, personalized educational experiences that cater to the diverse needs of students.
🆕 📽 Discover the 3rd episode of our series "Altissia Chair - Expert Talk" !
In this third episode, Prof. Serge Bibauw, assistant professor at UCLouvain, introduces us to the captivating world of dialogue-based computer-assisted language learning and chatbots, and discusses whether they really live up to the hype and will find their place among the language learning tools of the future.
👉 https://lnkd.in/ePHFw26U
Rhythm Rules Baby Language! 🎵🗣️
New research spills the beans on baby talk:
- Phonetics kick in at seven months.
- Rhythmic speech is the early language champ.
👶 Tip for Parents:
Start rhythmic talk early! 🌈 It's the secret language scaffold.
Groundbreaking insights from the BabyRhythm project reshape how we see language learning, read more here: https://lnkd.in/eCMgksMp #BabyTalkRevealed #RhythmMagic 🚀
TWIN-GPT: Digital Twins for Clinical Trials via Large Language Model
[2404.01273] TWIN-GPT: Digital Twins for Clinical Trials via Large Language Model (arxiv.org)
Aloha from Honolulu 🌺
I’m really excited to be here at the Hawaii Convention Center for the CHI Conference on Human Factors in Computing Systems. I’ve just had the pleasure of presenting my first (!) published paper, entitled LLM Theory of Mind & Alignment: Opportunities and Risks, at the workshop on Theory of Mind in Human-AI Interaction.
The paper explores how large language models (LLMs) having theory of mind (ToM) might help or hinder efforts to align LLMs with human values, taking inspiration from the role that ToM plays in human-human interactions. Themes covered include the opportunities for ToM to support LLM alignment with implicit user goals and normative moral judgements, as well as the potential risks that ToM could enable LLM manipulation, deception, or unevenly distributed competitive advantages.
If you’re interested in LLMs, human psychology or alignment, please check out the paper here: https://lnkd.in/e2Q7NF2z #chi2024 #hci #theoryofmind #largelanguagemodels #aialignment
Language-based AIs have hidden morals and values - Researchers from the University of Mannheim and GESIS have now investigated how the characteristics of language models can be made visible and what consequences this bias could have for society: https://lnkd.in/e9PvHhFz
Check out the conversation with Professor Martin Vechev in the "Az-buki" newspaper, the official publication of the Bulgarian Ministry of Education and Science, where they delve into INSAIT's achievements in recent years and its status as a leading global research institution in the fields of AI and computer science.
Link to the interview can be found below.
🗞️ Take a look at the interview with Prof. Martin Vechev for the "Az-buki" newspaper, the official newspaper of the Bulgarian Ministry of Education and Science (in BG).
🚀 The interview discusses INSAIT’s results over the last couple of years and its positioning as a world-class research institution in AI and computer science.
Link to the interview - in the comments 👇
Chair, Sustainability IEEE SSIT| Co-Founder with Vint Cerf People Centered Internet| Co-Chair UN Commission on the Status of Women - Digital Innovation 2023 Africa Asia Europe Middle East |
In reflecting on Alan Kay's keynote, Science, Systems, and Humanity in the Age of AI, at the Digital Governance Series, we are reminded of the vast complexities of the human mind and the systems we have created—both scientific and technological. Kay poignantly highlights that humanity, despite living in an age of rapid technological advancement, is still operating with the same cognitive frameworks developed in the Stone Age. His exploration of how our brains are wired to think fast and react emotionally, as Daniel Kahneman described in Thinking, Fast and Slow, serves as a key point in understanding why we struggle to fully grasp the ramifications of technological developments such as AI.
Kay underscores that science itself, while effective in predicting critical issues such as climate change, has been slow to influence societal and political action. The disconnect between scientific understanding and public behavior is a critical issue that Kay attributes to the inherent biases and limitations of human cognition, as well as cultural systems that reinforce fast, reactive thinking over slower, more deliberate processes.
At the heart of his talk, Kay invokes Francis Bacon’s call for “a new science” that addresses the limitations of human cognition. He suggests that this new approach, which we now call "science," must not only continue to evolve but also take into account the amplification of human biases by technologies such as AI. His warning about the dangers of relying on non-cognitive, fast-reacting AI systems without adequate checks speaks directly to current concerns about AI’s role in amplifying misinformation, social division, and systemic inequalities.
Kay ends with the powerful reminder that "we cannot solve our problems with the same levels of thinking that we used to create them," urging a qualitative revolution in human thought. His keynote challenges us to rethink how we integrate science, technology, and human understanding in the age of AI, emphasizing that without significant changes to how we think and learn, we risk creating systems and technologies that outpace our ability to use them wisely.
AXIOM (Advanced eXploration of Ideas and Ontological Meaning)
This is our model, developed for the Gemma Sprint program.
A.X.I.O.M. is designed to analyze philosophical texts, engage in deep discourse, and unravel complex concepts surrounding existence, knowledge, and ethics. By drawing on a vast repository of philosophical works, from ancient philosophers to modern thinkers, A.X.I.O.M. provides users with rich insights and thoughtful guidance, fostering a deeper understanding of reality and thought.
We have fine-tuned the gemma2-2b language model using a dataset of philosophers' quotes in Korean.
Excited to share that I've completed the 2024 Google Machine Learning Bootcamp!
Explore the model here: https://lnkd.in/g_XU5RiN #GemmaSprint
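For readers curious what fine-tuning on a quote corpus involves, here is a minimal sketch of the data-preparation step: wrapping each quote in Gemma's chat-turn template so it can serve as a supervised training record. The field names, prompt wording, and example quotes are illustrative assumptions, not the actual A.X.I.O.M. pipeline or dataset.

```python
# Sketch: turning philosopher quotes into instruction-tuning records for a
# Gemma-style chat model. The prompt template follows Gemma's documented
# <start_of_turn>/<end_of_turn> chat format; everything else is hypothetical.

def to_training_record(quote: str, philosopher: str) -> dict:
    """Wrap one quote as a (prompt, completion) pair in Gemma chat format."""
    prompt = (
        "<start_of_turn>user\n"
        f"Share and explain an idea from {philosopher}.<end_of_turn>\n"
        "<start_of_turn>model\n"
    )
    # The model is trained to produce the quote, closed by an end-of-turn tag.
    return {"prompt": prompt, "completion": quote + "<end_of_turn>"}

# Illustrative examples (the real dataset is in Korean).
quotes = [
    ("The unexamined life is not worth living.", "Socrates"),
    ("Man is condemned to be free.", "Sartre"),
]
records = [to_training_record(q, p) for q, p in quotes]
print(records[0]["prompt"] + records[0]["completion"])
```

Records in this shape can then be fed to a standard supervised fine-tuning loop (e.g. a LoRA adapter on gemma2-2b), with the loss computed only over the completion tokens.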