LLMs in Life Science Roadblocks to Discovery – Part 3: Modeling Abstract Concepts

How can we apply learnings from neuroscience to address the roadblocks to discovery discussed in Parts 1 & 2 – Emerging Science and the Nature of Human Language? https://lnkd.in/emXiBT5Z https://lnkd.in/eitPvWH7

The National Academies of Sciences just published the report “Exploring the Bidirectional Relationship Between Artificial Intelligence and Neuroscience” (https://lnkd.in/eH_vNHD7). It explores the multidimensional, multiscale, and dynamic complexity of the brain, as well as the significant knowledge gaps that challenge the development of computational intelligence. A key conclusion: “Studying the simplest possible CONCEPTUAL models will help neuroscientists fill gaps in knowledge and generate new theories.”

In a Financial Times interview titled “The productivity gains from AI are not guaranteed,” Google’s head of research, James Manyika, identified the main achievement of LLMs: transformers — the technology underpinning large language models — have allowed Google Translate to more than double the number of languages it supports, to 243. (To grasp the limitations, try an experiment: find a website with articles in English and in a non-European language in which you are fluent, have Google translate a paragraph of the English text into your other language, and compare the result with the website’s own version.) Manyika acknowledged that when it comes to research, LLMs can only summarize and draft. Generating new theories requires abstraction, conceptualization, and contextualization at much higher levels of precision than routine content. The transformer diagram shows that the architecture is not designed to abstract or contextualize conceptually, so it cannot learn in any significant sense.
Building conceptual models that represent the real world requires a biomimetic digital twins ecosystem approach that begins with:
1. Identifying the real-world components that are critical to the model purpose
2. Twinning each component independently to the level of detail required by the purpose
3. Identifying and modeling the relationships and interactions between the components
4. Identifying and modeling the potential scenarios for each interaction
I will address each of these steps in upcoming posts.
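The four steps above can be sketched as a minimal data model. This is purely an illustration under assumed names (`ComponentTwin`, `Interaction`, `TwinEcosystem`, and the ligand/receptor example are all hypothetical); the post does not prescribe any implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the four-step digital twins ecosystem approach.
# All class and field names are hypothetical, not taken from the post.

@dataclass
class ComponentTwin:                 # step 2: twin each component independently
    name: str
    detail_level: str                # level of detail required by the purpose
    state: dict = field(default_factory=dict)

@dataclass
class Interaction:                   # step 3: relationships between components
    source: str
    target: str
    relation: str
    scenarios: list = field(default_factory=list)  # step 4: scenarios per interaction

@dataclass
class TwinEcosystem:                 # step 1: components critical to the purpose
    purpose: str
    components: dict = field(default_factory=dict)
    interactions: list = field(default_factory=list)

    def add_component(self, twin: ComponentTwin):
        self.components[twin.name] = twin

    def add_interaction(self, interaction: Interaction):
        # Interactions may only reference components already twinned.
        assert interaction.source in self.components
        assert interaction.target in self.components
        self.interactions.append(interaction)

eco = TwinEcosystem(purpose="model ligand-receptor binding")
eco.add_component(ComponentTwin("ligand", detail_level="atomic"))
eco.add_component(ComponentTwin("receptor", detail_level="domain"))
eco.add_interaction(Interaction("ligand", "receptor", "binds",
                                scenarios=["competitive", "allosteric"]))
print(len(eco.components), len(eco.interactions))
```

The point of the sketch is the ordering constraint: components are twinned first, and relationships and scenarios are layered on top of them.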
RYAILITI LLC’s Post
More Relevant Posts
-
This is your last chance to attend the Florence Nightingale Colloquium, Friday 15th December at 4 pm. We are honoured that Iris van Rooij, Professor of Computational Cognitive Science at the Faculty of Social Sciences of Radboud University, will give a talk titled “Reclaiming AI as a Theoretical Tool for Cognitive Science”.

Abstract: The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.
In reclaiming this older idea of AI, however, it is important not to repeat conceptual mistakes of the past (and present) that brought us to where we are today. This is joint work with Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich. The full paper is available at https://lnkd.in/evwzMnFt.
-
🚀 Research Paper Highlights: Let's explore HippoRAG, a neurobiologically inspired approach for augmenting large language models with long-term memory capabilities.

“HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models” by Bernal Jiménez et al. from The Ohio State University and Stanford University.

📌 Drawing inspiration from the hippocampus in the human brain, HippoRAG employs a retrieval-augmented generation (RAG) architecture combined with a differentiable episodic memory module. This allows the model to store and retrieve relevant information from past experiences, enabling improved performance on tasks requiring long-term memory and reasoning over time. Let's dive in:

1. Neurobiological Inspiration: The hippocampus plays a crucial role in episodic memory formation and retrieval in humans. HippoRAG mimics this by incorporating a differentiable episodic memory module.
2. Retrieval-Augmented Architecture: HippoRAG builds upon the retrieval-augmented generation (RAG) architecture. It combines a pretrained language model with a retrieval component for accessing external knowledge.
3. Episodic Memory Module: The episodic memory module stores and retrieves relevant information from past experiences. It uses a key-value store and a differentiable attention mechanism for retrieval.
4. Long-Term Memory: HippoRAG enables language models to maintain long-term memory of relevant information. This improves performance on tasks requiring reasoning over time and context.
5. Empirical Evaluation: The authors evaluate HippoRAG on various tasks, including question answering and multi-session dialogue. Results demonstrate improved performance compared to baselines without long-term memory capabilities.
6. Future Directions: Potential extensions include incorporating forgetting mechanisms and exploring more complex memory architectures. Applications could include personalized assistants, task-oriented dialogue systems, and lifelong learning models.

HippoRAG is a promising approach for enhancing large language models with long-term memory capabilities, inspired by the workings of the human hippocampus. By incorporating an episodic memory module, HippoRAG aims to improve performance on tasks that require reasoning over time and context. Further reading: https://lnkd.in/dpDphPWe

🌟 Stay tuned for more updates on upcoming research and analysis in this rapidly evolving landscape of Generative AI. #RAG #ai #innovation #research #llm
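The "key-value store with attention-based retrieval" pattern described above can be sketched generically. This is an illustration of that retrieval pattern only, under assumed dimensions and a toy scoring rule; it is not HippoRAG's actual implementation.

```python
import numpy as np

# Generic sketch of key-value episodic memory retrieval: score stored keys
# against a query with an attention-style softmax, return the top matches.
# All sizes and names here are illustrative assumptions.

rng = np.random.default_rng(0)
dim = 8

# Episodic store: each past "experience" is a (key vector, value) pair.
keys = rng.normal(size=(5, dim))            # 5 stored memories
values = [f"memory-{i}" for i in range(5)]

def retrieve(query, keys, values, top_k=2):
    # Attention-style scoring: softmax over query-key dot products.
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    top = np.argsort(weights)[::-1][:top_k]
    return [(values[i], float(weights[i])) for i in top]

query = keys[3] + 0.1 * rng.normal(size=dim)   # a query resembling memory 3
print(retrieve(query, keys, values))
```

Because the query is a noisy copy of one stored key, the softmax weight concentrates on that memory; that soft, differentiable selection is what allows such a module to be trained end-to-end with the language model.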
-
🔍 Exploring New Horizons in AI and Cognitive Science!

A recent paper by Princeton and the University of Warwick proposes a novel approach to enhance the utility of LLMs as cognitive models.

- 🧠 New methodologies bridging AI and cognitive science
- 🔬 Improved understanding and simulation of human cognition
- 📊 Cutting-edge research from top universities

#AI #CognitiveScience #Innovation

- 🤖 Advanced techniques: Leveraging advanced algorithms to mimic human cognitive processes
- 🌐 Interdisciplinary insights: Combining insights from AI, neuroscience, and psychology
- 🚀 Future applications: Potential to revolutionize fields like education, mental health, and human-computer interaction
- 📈 Research impact: Expected to set new standards in AI research and cognitive modeling
- 🔧 Practical implementations: How these methodologies can be integrated into current AI systems
- 📚 Educational potential: Enhancing personalized learning experiences using AI
- 🏥 Healthcare innovations: Improving diagnostic tools and mental health treatments through AI
- 🔍 Enhanced understanding: Providing deeper insights into human thought processes and decision-making

This AI Paper from Princeton and the University of Warwick Proposes a Novel Artificial Intelligence Approach to Enhance the Utility of LLMs as Cognitive Models https://lnkd.in/ggsKXWyG
https://www.marktechpost.com
-
Congratulations Dr. Bhanu Chander and Dr. Koppala Guravaiah on the publication of their insightful edited work “Handbook of AI-based Models in Healthcare and Medicine: Approaches, Theories, and Applications”. The book brings the relevant technologies for solving health-related issues together on a single platform, so that undergraduate and postgraduate students, researchers, academicians, and industry practitioners can easily understand AI, machine learning, deep learning algorithms, and learning analytics in IoT-enabled technologies for healthcare applications.
-
Chief of Artificial Intelligence | E.N.I.A. Innovation & Digital Transformation | LA ISO/IEC 42001 | C2PA Contributor | CCEM | MCE | CTU | IEEE WGM | I-EMBA Candidate | DCS Candidate | Gartner ITCommunity Ambassador
📚 The Building Blocks of Thought: A Rationalist Account of the Origins of Concepts
👥 Stephen Laurence, Eric Margolis
📆 July 2024
🌐 https://lnkd.in/dsxF_Xwk

Published by Oxford Academic under a #CreativeCommons license (CC BY-NC-ND 4.0), this monumental 692-page work by Laurence and Margolis presents a #rationalist perspective on the genesis of human concepts, often referred to as concept #nativism. It explores the historical context and contemporary cognitive science research within the rationalism-empiricism debate. The authors propose that many concepts across different fields are either inherent or gained through learning processes that use innate representations or specialized components. The opposing view is concept #empiricism, which holds that concepts are primarily or entirely derived from experience and learning. This tradition goes back to philosophers like Locke and Hume and remains influential in modern psychology and philosophy, while nativism is more anchored in Plato and Descartes.

🎓 Nativist arguments:
➡️ Some concepts appear too quickly to be simply learned;
➡️ Certain concepts seem universal across cultures;
➡️ Infants can understand some things before they have any experience with them;
➡️ Complex concepts may require simpler innate concepts as building blocks.

🎓 Empiricist counter-arguments:
➡️ Learning mechanisms are more powerful than nativists suppose;
➡️ Cultural variation in concepts suggests learning plays a major role;
➡️ The adaptability of the brain demonstrates that our understanding of concepts is influenced by our personal experiences;
➡️ It is more logical to avoid positing innate concepts when learned explanations are adequate.
Of course I haven't read the entire book (my speed reading isn't that fast 🦸♂️), but I have focused on #Chapter19 (pp. 461-494) because it covers "#Artificial #NeuralNetworks: From Connectionism to Deep Learning". There the authors "assess the bearing of research on artificial neural networks on the rationalism-empiricism debate by critically examining two important and representative types of empiricist proposals for how artificial neural networks might provide domain-general learning accounts of such concepts":
1️⃣ #Connectionist: all approaches that model cognition using artificial neural networks;
2️⃣ #DeepLearning: an evolution of the former, likewise taking an empiricist approach.

Much attention is reserved for the research and findings of #Rogers and #McClelland, who proposed a connectionist model of #SemanticMemory in which concepts can be learned through association and generalization without innate domain-specific knowledge; the authors argue that this view needs to be extended by incorporating innate domain-specific constraints and learning mechanisms.

💡 I highly suggest reading this chapter, as it is really fascinating. But how can this be connected to #ArtificialIntelligence? Let's continue in the comments 👇! #Philosophy #Thinking #AI
-
Earlier this week we shared an interview with Mikkel Elle Lepperød, a research scientist at Simula. Now we'll share more about the ways he collaborates with research partners, and the interesting, trend-defying technological advancements within neuroscience and AI.

🌐 Can you share an example of how you collaborate with industry partners or other researchers in your work?

Collaboration is vital in my work, especially in creating teaching materials and conducting research. A key example that I base much of my current project management on was during my PhD, where I worked with biologists on diverse experiments, combining our expertise in software development, modeling, and data analysis. As a generalist, I find that collaboration accelerates learning and discovery. Additionally, digital communication enables global partnerships that enhance research with diverse cultural and intellectual insights. It also lets me maximize my time with family and reduce carbon emissions, which makes it particularly effective for me.

❓ Are there emerging trends or technologies within your field that you find particularly exciting or promising?

In neuroscience, the emerging technologies are mind-boggling. When I started in neuroscience 10 years ago, recording hundreds of neurons was a big accomplishment. Now we can record from tens of thousands of neurons and make precise perturbations using genetic and optic tools. In AI there is massive excitement about the recent models that are popping up, such as ChatGPT, but I'm most excited that challenges identified in the 80s and 90s are still very prominent, which means there is still much to be done. On the other hand, it is undeniable that these large networks are extremely impressive, pointing to a potential resemblance between biological nervous systems and artificial neural networks. AI's very foundation is intertwined with our quest to understand the brain.
For instance, the development of convolutional neural networks was inspired by the Nobel Prize-winning work of Hubel and Wiesel in 1962. They made significant discoveries about visual processing in the brain, which directly influenced computer vision. More recently, the attention mechanism used in Transformers and large language models draws on 1998 research by Itti, Koch, and Niebur. These trends are very exciting, and I am honoured to be a scientist in this field.

Thanks to Mikkel for contributing to this researcher profile. Find the full interview at https://lnkd.in/dd__fZaf. At Simula, we take pride in our people: over 150 scientific researchers fostering a collaborative and innovative environment for scientific research. #research #education #innovation #AI
Spotlight: Mikkel Lepperød
simula.no
-
In a study by Stanford University, researchers Sanmi Koyejo, Brando Miranda, and Rylan Schaeffer present an argument against the commonly held belief in the emergent abilities of large language models (LLMs). Their research suggests that what has been perceived as sudden and unpredictable leaps in LLM capabilities may be a result of the methodologies used to evaluate these models rather than the inherent properties of the models themselves. This insight not only challenges existing perceptions but also paves the way for a more nuanced understanding of how LLMs develop and improve over time.

Key Points:
💡 Reevaluation of Emergent Abilities: The Stanford team argues that the appearance of emergent abilities in LLMs — skills that seem to manifest abruptly at certain scales — is influenced more by the choice of performance metrics than by actual changes in model capacity or functionality.
💡 Mathematical Model and Predictive Analysis: Through a simple mathematical model and extensive testing, the researchers demonstrate how nonlinear or discontinuous metrics exaggerate the presence of emergent abilities, while linear or continuous metrics reveal a more predictable and gradual improvement.
💡 Practical Implications for Measurement: By applying alternative metrics that award partial credit or assess performance incrementally, the team was able to show that abilities such as arithmetic emerge gradually, not suddenly, as model parameters increase.
💡 Continued Debate Among Scientists: Despite these findings, the notion of emergence is not completely dismissed. Critics and other researchers highlight that the Stanford study doesn't fully eliminate the unpredictability of when or how these abilities might appear under different conditions.

This work emphasizes the importance of developing a robust science of prediction for LLMs.
As models grow in complexity, understanding the nuances of emergent abilities and the factors that influence them becomes crucial for anticipating the capabilities and behaviors of next-generation models. #generativeai #airesearch #emergentproperties #llms Source: https://lnkd.in/gvYm4v8d
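The metric-choice argument can be illustrated with a toy model. The smooth accuracy curve below is an assumption chosen for illustration, not the paper's fitted data: per-token accuracy improves gradually with scale, yet an all-or-nothing metric over a multi-token answer looks like a sudden jump.

```python
import math

# Toy illustration of the metric-choice argument (assumed curve, not the
# paper's data): per-token accuracy rises smoothly with parameter count N,
# but exact-match over an L-token answer (p ** L) appears to "emerge".

def per_token_accuracy(n_params):
    # Hypothetical smooth, saturating improvement with scale.
    return 1 - 0.9 * math.exp(-n_params / 1e10)

L = 30  # answer length in tokens
for n in [1e9, 1e10, 1e11]:
    p = per_token_accuracy(n)
    exact_match = p ** L   # discontinuous-looking metric: all tokens correct
    print(f"N={n:.0e}  token accuracy={p:.3f}  exact match={exact_match:.3g}")
```

Under the continuous metric the model improves steadily across all three scales; under exact match, performance sits near zero and then leaps toward one, which is precisely the "mirage" of emergence the study describes.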
-
PhD | Innovation | Ecosystem Builder | Generating business opportunities thanks to market research based on neuromarketing and advanced techniques.
Exciting Research Alert! 🧠💡 Researchers have proposed a method to estimate consumer preferences using EEG signals and deep learning techniques. The study focuses on emotion estimation in human-computer interaction, particularly in neuromarketing studies. EEG data from participants watching ads of two automobile brands were processed with deep learning to gauge their liking status. By converting EEG signals into RGB images and leveraging the short-time Fourier transform method, researchers successfully estimated liking status for various advertisement sections. These findings showcase the potential of utilizing EEG signals in neuromarketing. #ConsumerResearch #Neuromarketing #DeepLearning 🚗🧠 https://lnkd.in/dws7UY2t Marc Polo (Ph.D) Blanquerna - Universitat Ramon Llull Joan Cuenca Ph.D Josep Maria Picola Meix (Ph. D.) Miriam Diez Bosch PhD Jaume Suau
Consumer Preference Estimation Using EEG Signals and Deep Learning
ieeexplore.ieee.org
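The first stages of the pipeline the post describes (EEG signal → short-time Fourier transform → RGB image) can be sketched roughly as follows. The window length, hop size, sampling rate, and the three frequency bands mapped to the R/G/B channels are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

# Rough sketch: turn one EEG channel into an RGB "image" via a short-time
# Fourier transform. All parameters here are illustrative assumptions.

def stft_magnitude(x, win=128, hop=64):
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, freq bins)

np.random.seed(0)
fs = 256                                 # assumed sampling rate, Hz
t = np.arange(fs * 4) / fs               # 4 s of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # alpha + noise

S = stft_magnitude(x)                    # spectrogram
freqs = np.fft.rfftfreq(128, 1 / fs)

# Map theta/alpha/beta band power onto the R/G/B channels, one pixel row
# per time frame.
bands = [(4, 8), (8, 13), (13, 30)]      # Hz; assumed band choice
rgb = np.stack([S[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                for lo, hi in bands], axis=-1)
rgb = (255 * rgb / rgb.max()).astype(np.uint8)
print(rgb.shape)                         # one row of RGB pixels per frame
```

An image built this way can then be fed to an ordinary image-classification network to predict liking status, which is the general idea behind converting EEG into the visual domain.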