🚀 Join us at the UniReps Workshop! 🚀
📅 Date: December 14th, 2024
📍 Location: NeurIPS, Vancouver, Canada

How and why do different neural models – whether biological or artificial – develop such strikingly similar internal representations? 🤔 This question is at the heart of a workshop Lamarr PI Jun. Prof. Dr. Zorah Lähner is co-organizing. Leading experts from neuroscience, AI, and cognitive science will come together to explore the convergence of neural representations.

🔍 Why Attend? You'll dive into groundbreaking research on:
- The conditions under which different models learn similar representations
- The underlying mechanisms driving these similarities
- Practical applications in modular deep learning, model merging, knowledge transfer, and more!

📝 Call for Papers! The organizing team invites researchers to submit their work exploring why, when, and how different learning processes yield similar representations. Full papers and extended abstracts are welcome!
👉 More info about topics and requirements here: https://lnkd.in/eivim7P7
🔗 Submission Deadline: September 20th, 2024 (AoE). Submit your papers via OpenReview and contribute to this exciting discussion!

🎉 Don't miss this chance to collaborate and innovate at the intersection of neuroscience and AI! 💡
👉 Join the conversation and be part of the future of neural modeling!

#NeurIPS2024 #AI #Neuroscience #CognitiveScience #MachineLearning #DeepLearning #RepresentationLearning #CallForPapers #ResearchCommunity #Innovation
Lamarr Institute’s Post
-
🧠 Exploring the Nexus of Neuroscience and Deep Learning: NeuroDeep 🚀

Excited to share my latest blog post, "NeuroDeep: The Connection Between Neuroscience and Deep Learning." 🌐
Read the full article here 📖 https://lnkd.in/dXmHjW6q

In a world where technology and neuroscience intersect, NeuroDeep serves as a captivating bridge. 🌐💡 Dive into the fascinating synergy between these two realms and discover the profound impact on AI and our understanding of the brain. 🤯🔍

Key takeaways:
✨ Unraveling the parallels between neural networks and the human brain.
✨ How insights from neuroscience are shaping the evolution of deep learning algorithms.
✨ Real-world applications and the potential for groundbreaking advancements.

Let's embark on this intellectual journey together! 🚀 Feel free to share your thoughts and insights in the comments. 🗣️

#NeuroDeep #Neuroscience #DeepLearning #AI #TechInnovation #BlogPost #LinkedInReads
NeuroDeep: The Connection Between Neuroscience and Deep Learning
medium.com
-
Founder & Leader, OpenSourceResearch · Chairperson of the Cohort Studies Committee, European Society of Colo-Proctology (ESCP) · Member of the Surgical Steering Committee, European Crohn Colitis Organisation (S-ECCO)
AI in healthcare 10/12: The rapid progress in neuroscience

Working out how the brain does what it does is no easy feat. Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish. It's often not clear whether living, learning brains work by scaled-up versions of these same rules, or if something more sophisticated is taking place. Even with modern experimental techniques, wherein neuroscientists track hundreds of neurons at a time in live animals, it is hard to reverse-engineer what is really going on.

Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others must be weakened. But because the brain has billions of neurons, of which millions could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much. A clever mathematical algorithm known as backpropagation was suggested to solve this problem in artificial neural networks. The backpropagation-of-error algorithm works like this: if the neural network makes a mistake, it generates an error signal. This error signal moves backwards through the network, layer by layer, strengthening or weakening each connection in order to minimise future errors.

Science is beautiful, isn't it? Join us on a journey of scientific discovery with the OpenSourceResearch Collaboration: https://lnkd.in/drFdevXQ

#neuroscientists #experimental #Brains #synapses #algorithm #network #signal
AI scientists are producing new theories of how the brain learns
economist.com
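The backpropagation description above maps almost line for line onto a few lines of numpy. Below is a minimal sketch of the error signal moving backwards through a toy two-layer network; the layer sizes, data, and learning rate are arbitrary illustrative assumptions, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 3 inputs -> 4 hidden units (sigmoid) -> 1 output (linear).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = rng.normal(size=3), np.array([1.0])

# Forward pass.
h = sigmoid(W1 @ x + b1)
y = W2 @ h + b2

# The network makes a mistake -> an error signal at the output.
delta_out = y - target

# The signal moves backwards, layer by layer (chain rule through the sigmoid).
delta_hidden = (W2.T @ delta_out) * h * (1 - h)

# Each connection is strengthened or weakened to reduce future error.
lr = 0.1
W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hidden, x); b1 -= lr * delta_hidden
```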
-
The structure and arrangement of components in a Neural Network Anatomy (NNA) mimic the way biological neurons work in the human brain. This complex machine learning model operates like the human brain, using processes that imitate how biological neurons interact to recognize patterns, consider options, and make informed decisions. Below are the NNA components (with a small code sketch after the list); if you would like to learn more, join the Nth University Webinar on November 5th at https://lnkd.in/gRiTmkzi

1. Neurons (Nodes)
Neurons are the basic elements of a neural network. Each neuron takes in inputs, processes them, and produces an output, which is then sent to the next layer. Each neuron has specific weights and a bias that determine how it processes the input.

2. Layers
A neural network is made up of layers, each containing many neurons. There are typically three types of layers:
- Input Layer: The first layer of the network, which receives the initial data. Each neuron in the input layer corresponds to one feature in the dataset.
- Hidden Layers: Intermediate layers located between the input and output layers. These layers enable the network to learn complex patterns. The number of neurons and layers can vary based on the task and the model's complexity.
- Output Layer: The last layer in the network, which produces the final output. The number of neurons in this layer corresponds to the desired output; for example, in classification there could be one neuron per class.

3. Weights
Weights determine how important each input is. Each connection between neurons has a weight that is multiplied by the input. These weights are adjusted during training to reduce errors in predictions.

4. Bias
A bias is an extra term in each neuron that shifts the input to the activation function. It gives the model more flexibility to make accurate predictions.

5. Activation Function
Activation functions decide what the neurons will output by applying a non-linear transformation to the input. Common activation functions include:
- Sigmoid: Produces outputs in the range (0, 1).
- ReLU (Rectified Linear Unit): Outputs zero for negative values and the input itself for positive values.
- Tanh: Similar to sigmoid, but outputs values in the range (-1, 1).

6. Loss Function
The loss function measures how much the neural network's predictions differ from the actual target values. This is the function the network tries to minimize during training. Examples include mean squared error for regression tasks and cross-entropy for classification.

7. Optimization Algorithm
Optimization algorithms such as Stochastic Gradient Descent (SGD) or Adam reduce the loss function by adjusting the weights and biases, updating the parameters using gradients computed via backpropagation.

8. Backpropagation
Backpropagation is the method that calculates how the loss function changes with respect to each weight, so the weights can be updated efficiently. It is crucial for training neural networks.
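As promised above, here is a minimal numpy sketch that wires all eight components together on a toy regression task. The layer sizes, learning rate, and synthetic data are illustrative assumptions only, not material from the webinar:

```python
import numpy as np

rng = np.random.default_rng(42)

# (2) Layers: input (2 features) -> hidden (8 neurons) -> output (1 neuron).
# (3, 4) Weights and biases are each layer's trainable parameters.
W1, b1 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(1)

def relu(z):            # (5) Activation function (ReLU).
    return np.maximum(0.0, z)

def mse(pred, target):  # (6) Loss function (mean squared error).
    return np.mean((pred - target) ** 2)

# Toy regression dataset: learn y = x0 + x1.
X = rng.normal(size=(64, 2))
Y = X.sum(axis=1, keepdims=True)

lr = 0.05  # (7) Optimization: plain full-batch gradient descent, SGD's simplest relative.
for step in range(300):
    # Forward pass: (1) each neuron weights its inputs, adds its bias, and fires.
    Z1 = X @ W1.T + b1        # hidden pre-activations
    H = relu(Z1)              # hidden activations
    pred = H @ W2.T + b2      # linear output layer

    # (8) Backpropagation: gradient of the loss w.r.t. every weight and bias.
    dpred = 2 * (pred - Y) / len(X)   # dLoss/dpred for MSE
    dW2 = dpred.T @ H
    db2 = dpred.sum(axis=0)
    dH = dpred @ W2                   # error signal entering the hidden layer
    dZ1 = dH * (Z1 > 0)               # ReLU derivative gates the signal
    dW1 = dZ1.T @ X
    db1 = dZ1.sum(axis=0)

    # Parameter update: step against the gradient to reduce the loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", mse(relu(X @ W1.T + b1) @ W2.T + b2, Y))
```

Note how each numbered component shows up as only a line or two; swapping plain gradient descent for Adam, or MSE for cross-entropy, would change just those lines, which is exactly why the eight components are usually described separately.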
-
Neural networks have long been a source of fascination for their ability to learn and adapt. Recent research has revealed that, regardless of their complexity or training method, these networks follow a surprisingly uniform path from ignorance to expertise in image classification tasks.

According to a study reported by Neuroscience News, neural networks classify images by identifying the same low-dimensional features, such as ears or eyes. This finding challenges the assumption that differently trained networks learn in vastly different ways.

Check out the link below to learn more about the fascinating world of neural networks.

Link: https://lnkd.in/d9k36PkB
AI's Learning Path: Surprising Uniformity Across Neural Networks - Neuroscience News
https://neurosciencenews.com
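If you want to probe for this kind of low-dimensional structure yourself, one common approach is principal component analysis (PCA) of hidden-layer activations. A sketch with stand-in synthetic data follows; with a real model you would collect actual activations (e.g. via a forward hook), and the sizes and 90% threshold here are illustrative assumptions, not the study's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for hidden-layer activations of an image classifier:
# 1000 images x 512 units, built here as rank-10 structure plus noise.
latent = rng.normal(size=(1000, 10))
acts = latent @ rng.normal(size=(10, 512)) + 0.1 * rng.normal(size=(1000, 512))

# PCA via SVD: how many directions carry most of the variance?
centered = acts - acts.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_explained, 0.90)) + 1
print(f"{k} of 512 directions explain 90% of the variance")  # ~10 for this toy data
```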
-
CircuitNet is a neural network architecture inspired by neural circuits in the brain. Experiments show that CircuitNet, despite having fewer parameters, outperforms popular neural networks in function approximation, image classification, reinforcement learning, and time series forecasting. This work highlights the benefits of incorporating neuroscience principles into deep learning model design.
CircuitNet: A Brain-Inspired Neural Network Architecture for Enhanced Task Performance Across Diverse Domains
https://www.marktechpost.com
-
Bridging silicon and soul in the age of thinking machines. AI Consultant, Advisor and Instructor, Marketing exec. PhD Researcher in Generative AI. EdTech. Author. Speaker. Media Ecology. Mental Health Advocate
This research highlights the brain's ability to reconstruct memories with unique details, offering insights into memory's role in survival and prediction.

Key Facts:
1. The AI model simulates the interaction between the hippocampus and neocortex in memory processing.
2. The neocortex forms "conceptual" representations, enabling the brain to recreate past experiences and imagine new scenarios.
3. The study provides insights into memory's role in survival, predicting future events, and understanding memory distortions.

#research #science #ai
AI Unlocks Secrets of Human Imagination and Memory Formation - Neuroscience News
https://neurosciencenews.com
-
Associate Professor at McGill University; CSO Lamarck Labs; Co-director of the QC Sleep Research Network
Do Androids Dream of Electric Sheep? 😴 🤖

I'm excited to share our latest preprint, led by Dan Levenstein in collaboration with Blake Richards at Mila - Quebec Artificial Intelligence Institute. In this work, we explore how neural networks can recapitulate previously formed memories during sleep – driven solely by noise and without any external inputs.

The hippocampus, a brain region crucial for memory formation, encodes life's episodes during wakefulness, with each neuron firing based on the animal's position, forming a map of the environment. During sleep, these previously experienced trajectories are replayed, reactivating the constellation of neurons in the neocortex repeatedly, thereby strengthening the memory and potentially stabilizing it for a lifetime.

But how does a map of the environment emerge in a neuronal network? And how can neuronal activity remain coherent during sleep without sensory inputs? In this paper, we demonstrate that sequential predictive learning is sufficient to account for these properties. Specifically, the recurrent neural network is trained to predict sequences of future sensory states, not just the next state.

Interestingly, around the time we posted the first version of this preprint, Gabriel Synnaeve and his team published a paper showing that predicting a series of masked inputs optimizes training in large language models.

This paper marks the beginning of our collaboration with Blake Richards. The dialogue between neuroscience and AI is truly fascinating. While AI has often drawn inspiration from neuroscience, I firmly believe that neuroscience will learn a great deal from AI in the coming years.

Enjoy the read, and please don't hesitate to share your comments on the paper! https://lnkd.in/g2Y4K5fC
Sequential predictive learning is a unifying theory for hippocampal representation and replay
biorxiv.org
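For readers who want the core idea in code: below is a minimal PyTorch sketch of sequential predictive learning, i.e. a recurrent network trained to predict the next k sensory states rather than just the next one. It is an illustrative toy with made-up dimensions and synthetic data, not the preprint's actual model:

```python
import torch
import torch.nn as nn

class MultiStepPredictor(nn.Module):
    """Recurrent net trained to predict the next k sensory states, not just one."""
    def __init__(self, obs_dim=16, hidden=128, k=8):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, k * obs_dim)   # emit k future states at once
        self.k, self.obs_dim = k, obs_dim

    def forward(self, obs_seq):                      # obs_seq: (B, T, obs_dim)
        h, _ = self.rnn(obs_seq)                     # (B, T, hidden)
        return self.head(h).view(*obs_seq.shape[:2], self.k, self.obs_dim)

model = MultiStepPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic "sensory" trajectories stand in for real input streams.
obs = torch.randn(32, 100, 16)
k = model.k

pred = model(obs[:, :-k])                            # predictions from each time step
# Targets: the k observations that actually follow each step.
targets = torch.stack(
    [obs[:, t + 1 : t + 1 + k] for t in range(obs.shape[1] - k)], dim=1
)
loss = nn.functional.mse_loss(pred, targets)
opt.zero_grad(); loss.backward(); opt.step()
print("sequence-prediction loss:", loss.item())
```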
-
Self-supervised learning in AI changes the game.

An astonishing article I came across recently covered research by researchers at MIT. It suggests that the brain may learn about the world in much the same way as self-supervised learning in AI models. The published research showed that neural networks trained using self-supervised learning generate activity patterns remarkably similar to those observed in animal brains performing similar tasks.

As a final-year Computer Science student, I find this highly relevant to my field, as it bridges the gap between AI and neuroscience. It could enable AI to perform a wide variety of generalised tasks, similar to the brain's ability to adapt to various scenarios. It also opens many new doors for training AI more efficiently to produce more accurate results. This has a huge impact on the field of Data Analytics too, since better pattern recognition in large datasets can unlock new insights and potentially reduce the need for labelled data.

One of the most fascinating aspects of the research for me was the "Mental-Pong" experiment: it is similar to the game of Pong, except that the ball disappears after a certain time, yet the model was still able to accurately track the ball.

Link: https://lnkd.in/gjyE6jRY
The brain may learn about the world the same way some computational models do
news.mit.edu
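To give a feel for the idea, here is a tiny self-supervised sketch in the spirit of Mental-Pong: an RNN trained only on raw trajectories (the future positions serve as the labels, so no human annotation is needed) that can keep tracking the ball after it disappears. The dynamics, sizes, and training loop are all illustrative assumptions, not the MIT study's setup:

```python
import torch
import torch.nn as nn

# Self-supervision: the targets are just future observations, no human labels.
rnn = nn.GRU(2, 64, batch_first=True)
head = nn.Linear(64, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

# Synthetic straight-line ball trajectories: (x, y) position over 50 steps.
B, T = 64, 50
start = torch.rand(B, 1, 2)
vel = (torch.rand(B, 1, 2) - 0.5) * 0.05
traj = start + vel * torch.arange(T).view(1, T, 1)

for _ in range(200):                       # train: predict the next position
    h, _ = rnn(traj[:, :-1])
    loss = nn.functional.mse_loss(head(h), traj[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# "Occlusion": feed 25 visible steps, then roll the model's own predictions
# forward for 10 hidden steps, tracking the ball it can no longer see.
with torch.no_grad():
    h, state = rnn(traj[:, :25])
    pos = head(h[:, -1:])
    for _ in range(10):
        out, state = rnn(pos, state)
        pos = head(out)
print("predicted hidden-ball position:", pos[0, 0])
```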
-
📃 Scientific paper: Early prediction of Alzheimer's disease using convolutional neural network: a review

Abstract: In this paper, a comprehensive review of Alzheimer's disease (AD) is carried out, along with an exploration of two machine learning (ML) methods that help to identify the disease in its initial stages. Alzheimer's disease is a neurocognitive disorder that causes the person to suffer from memory loss, unusual behavior, and language problems. Early detection is essential for developing more advanced treatments for AD. Machine learning, a subfield of Artificial Intelligence (AI), uses various probabilistic and optimization techniques to help computers learn from huge and complicated data sets, and researchers generally use ML to diagnose AD in its early stages. The survey provides a broad overview of current research in this field and analyses the classification methods used by researchers working with ADNI data sets. It discusses essential research topics such as the data sets used, the evaluation measures employed, and the machine learning methods applied. Our presentation suggests a model that helps better understand current work and highlights the challenges and opportunities for innovative and useful research. The study shows which machine learning method performs best on the ADNI data set; the focus is on two methods: the 18-layer convolutional network and the 3D convolutional network. The multi-layered 18-layer CNN fetches more accurate results than the 3D CNN. The work also contr... Continued on ES/IODE ➡️ https://etcse.fr/gnBY

-------
If you find this interesting, feel free to follow, comment and share. We need your help to enhance our visibility, so that our platform continues to serve you.

#alzheimer #science #health
Early prediction of Alzheimer's disease using convolutional neural network: a review
ethicseido.com
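As a rough illustration of what an 18-layer CNN of the kind the review discusses might look like in practice, here is a sketch adapting torchvision's ResNet-18 to single-channel MRI slices. The three-class output, input shape, and choice of ResNet-18 specifically are assumptions for illustration, not details taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# torchvision's ResNet-18 is an 18-layer CNN; adapt it to single-channel
# MRI slices and three hypothetical diagnostic classes (e.g. cognitively
# normal / mild cognitive impairment / AD). Illustrative assumptions only.
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)

mri_slices = torch.randn(8, 1, 224, 224)  # a batch of preprocessed 2D slices
logits = model(mri_slices)                # (8, 3) class scores
print(logits.shape)
```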