Yee Whye Teh
Person information
- affiliation: University of Oxford, UK
2020 – today
- 2024
- [j24]Mrinank Sharma, Tom Rainforth, Yee Whye Teh, Vincent Fortuin:
Incorporating Unlabelled Data into Bayesian Neural Networks. Trans. Mach. Learn. Res. 2024 (2024) - [c155]Ning Miao, Yee Whye Teh, Tom Rainforth:
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning. ICLR 2024 - [c154]Michalis K. Titsias, Alexandre Galashov, Amal Rannen-Triki, Razvan Pascanu, Yee Whye Teh, Jörg Bornschein:
Kalman Filter for Online Classification of Non-Stationary Data. ICLR 2024 - [c153]Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei:
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts. ICML 2024 - [c152]Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh:
Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design. ICML 2024 - [c151]Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David B. Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang:
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI. ICML 2024 - [c150]Silvia Sapora, Gokul Swamy, Chris Lu, Yee Whye Teh, Jakob Nicolaus Foerster:
EvIL: Evolution Strategies for Generalisable Imitation Learning. ICML 2024 - [i124]Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David B. Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang:
Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI. CoRR abs/2402.00809 (2024) - [i123]Anya Sims, Cong Lu, Yee Whye Teh:
The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning. CoRR abs/2402.12527 (2024) - [i122]Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George-Cristian Muraru, Albert Gu, Ruba Haroun, Leonard Berrada, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, Arnaud Doucet, David Budden, Yee Whye Teh, Razvan Pascanu, Nando de Freitas, Caglar Gulcehre:
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models. CoRR abs/2402.19427 (2024) - [i121]Amal Rannen-Triki, Jörg Bornschein, Razvan Pascanu, Marcus Hutter, András György, Alexandre Galashov, Yee Whye Teh, Michalis K. Titsias:
Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models. CoRR abs/2403.01518 (2024) - [i120]Jihoon Tack, Jaehyung Kim, Eric Mitchell, Jinwoo Shin, Yee Whye Teh, Jonathan Richard Schwarz:
Online Adaptation of Language Models with a Memory of Amortized Contexts. CoRR abs/2403.04317 (2024) - [i119]Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei:
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts. CoRR abs/2403.08477 (2024) - [i118]Aleksandar Botev, Soham De, Samuel L. Smith, Anushan Fernando, George-Cristian Muraru, Ruba Haroun, Leonard Berrada, Razvan Pascanu, Pier Giuseppe Sessa, Robert Dadashi, Léonard Hussenot, Johan Ferret, Sertan Girgin, Olivier Bachem, Alek Andreev, Kathleen Kenealy, Thomas Mesnard, Cassidy Hardin, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Armand Joulin, Noah Fiedel, Evan Senter, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, David Budden, Arnaud Doucet, Sharad Vikram, Adam Paszke, Trevor Gale, Sebastian Borgeaud, Charlie Chen, Andy Brock, Antonia Paterson, Jenny Brennan, Meg Risdal, Raj Gundluru, Nesh Devanathan, Paul Mooney, Nilay Chauhan, Phil Culliton, Luiz Gustavo Martins, Elisa Bandy, David Huntsperger, Glenn Cameron, Arthur Zucker, Tris Warkentin, Ludovic Peran, Minh Giang, Zoubin Ghahramani, Clément Farabet, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, Yee Whye Teh, Nando de Freitas:
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models. CoRR abs/2404.07839 (2024) - [i117]Silvia Sapora, Gokul Swamy, Chris Lu, Yee Whye Teh, Jakob Nicolaus Foerster:
EvIL: Evolution Strategies for Generalisable Imitation Learning. CoRR abs/2406.11905 (2024) - [i116]Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh:
Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design. CoRR abs/2407.11942 (2024) - [i115]Leo Zhang, Kianoosh Ashouritaklimi, Yee Whye Teh, Rob Cornish:
SymDiff: Equivariant Diffusion via Stochastic Symmetrisation. CoRR abs/2410.06262 (2024) - [i114]Guneet S. Dhillon, Xingjian Shi, Yee Whye Teh, Alex Smola:
L3Ms - Lagrange Large Language Models. CoRR abs/2410.21533 (2024) - 2023
- [j23]Jörg Bornschein, Alexandre Galashov, Ross Hemsley, Amal Rannen-Triki, Yutian Chen, Arslan Chaudhry, Xu Owen He, Arthur Douillard, Massimo Caccia, Qixuan Feng, Jiajun Shen, Sylvestre-Alvise Rebuffi, Kitty Stacpoole, Diego de Las Casas, Will Hawkins, Angeliki Lazaridou, Yee Whye Teh, Andrei A. Rusu, Razvan Pascanu, Marc'Aurelio Ranzato:
Nevis'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research. J. Mach. Learn. Res. 24: 308:1-308:77 (2023) - [j22]Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh:
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations. Trans. Mach. Learn. Res. 2023 (2023) - [j21]Francisca Vasconcelos, Bobby He, Nalini M. Singh, Yee Whye Teh:
UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography. Trans. Mach. Learn. Res. 2023 (2023) - [c149]Alexandre Galashov, Jovana Mitrovic, Dhruva Tirumala, Yee Whye Teh, Timothy Nguyen, Arslan Chaudhry, Razvan Pascanu:
Continually learning representations at scale. CoLLAs 2023: 534-547 - [c148]Bobby He, James Martens, Guodong Zhang, Aleksandar Botev, Andrew Brock, Samuel L. Smith, Yee Whye Teh:
Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation. ICLR 2023 - [c147]Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Razvan Pascanu, Jonathan Godwin:
Pre-training via Denoising for Molecular Property Prediction. ICLR 2023 - [c146]Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh:
Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions. ICML 2023: 17176-17197 - [c145]Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim:
Learning Instance-Specific Augmentations by Capturing Local Invariances. ICML 2023: 24720-24736 - [c144]Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin:
Modality-Agnostic Variational Compression of Implicit Neural Representations. ICML 2023: 30342-30364 - [c143]Jin Xu, Emilien Dupont, Kaspar Märtens, Thomas Rainforth, Yee Whye Teh:
Deep Stochastic Processes via Functional Markov Transition Operators. NeurIPS 2023 - [c142]Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder:
Synthetic Experience Replay. NeurIPS 2023 - [c141]Emile Mathieu, Vincent Dutordoir, Michael J. Hutchinson, Valentin De Bortoli, Yee Whye Teh, Richard E. Turner:
Geometric Neural Diffusion Processes. NeurIPS 2023 - [i113]Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin:
Modality-Agnostic Variational Compression of Implicit Neural Representations. CoRR abs/2301.09479 (2023) - [i112]Bobby He, James Martens, Guodong Zhang, Aleksandar Botev, Andrew Brock, Samuel L. Smith, Yee Whye Teh:
Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation. CoRR abs/2302.10322 (2023) - [i111]Mrinank Sharma, Tom Rainforth, Yee Whye Teh, Vincent Fortuin:
Incorporating Unlabelled Data into Bayesian Neural Networks. CoRR abs/2304.01762 (2023) - [i110]Jin Xu, Emilien Dupont, Kaspar Märtens, Tom Rainforth, Yee Whye Teh:
Deep Stochastic Processes via Functional Markov Transition Operators. CoRR abs/2305.15574 (2023) - [i109]Michalis K. Titsias, Alexandre Galashov, Amal Rannen-Triki, Razvan Pascanu, Yee Whye Teh, Jörg Bornschein:
Kalman Filter for Online Classification of Non-Stationary Data. CoRR abs/2306.08448 (2023) - [i108]Emile Mathieu, Vincent Dutordoir, Michael J. Hutchinson, Valentin De Bortoli, Yee Whye Teh, Richard E. Turner:
Geometric Neural Diffusion Processes. CoRR abs/2307.05431 (2023) - [i107]Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh:
Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions. CoRR abs/2307.15073 (2023) - [i106]Ning Miao, Yee Whye Teh, Tom Rainforth:
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning. CoRR abs/2308.00436 (2023) - [i105]Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, Yarin Gal:
Tractable Function-Space Variational Inference in Bayesian Neural Networks. CoRR abs/2312.17199 (2023) - [i104]Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal:
Continual Learning via Sequential Function-Space Variational Inference. CoRR abs/2312.17210 (2023) - 2022
- [j20]Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess:
Behavior Priors for Efficient Reinforcement Learning. J. Mach. Learn. Res. 23: 221:1-221:68 (2022) - [j19]Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Golinski, Yee Whye Teh, Arnaud Doucet:
COIN++: Neural Compression Across Modalities. Trans. Mach. Learn. Res. 2022 (2022) - [j18]Jonathan Schwarz, Yee Whye Teh:
Meta-Learning Sparse Compression Networks. Trans. Mach. Learn. Res. 2022 (2022) - [c140]Emilien Dupont, Yee Whye Teh, Arnaud Doucet:
Generative Models as Distributions of Functions. AISTATS 2022: 2989-3015 - [c139]Saeid Naderiparizi, Adam Scibior, Andreas Munk, Mehrdad Ghadiri, Atilim Gunes Baydin, Bradley J. Gram-Hansen, Christian A. Schröder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood:
Amortized Rejection Sampling in Universal Probabilistic Programming. AISTATS 2022: 8392-8412 - [c138]Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, Jörg Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu:
When Does Re-initialization Work? ICBINB 2022: 12-26 - [c137]Ning Miao, Emile Mathieu, Siddharth N, Yee Whye Teh, Tom Rainforth:
On Incorporating Inductive Biases into VAEs. ICLR 2022 - [c136]Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal:
Continual Learning via Sequential Function-Space Variational Inference. ICML 2022: 18871-18887 - [c135]Valentin De Bortoli, Emile Mathieu, Michael J. Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet:
Riemannian Score-Based Generative Modelling. NeurIPS 2022 - [c134]Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, Yarin Gal:
Tractable Function-Space Variational Inference in Bayesian Neural Networks. NeurIPS 2022 - [c133]Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, Arnaud Doucet:
Conformal Off-Policy Prediction in Contextual Bandits. NeurIPS 2022 - [c132]Cian Naik, François Caron, Judith Rousseau, Yee Whye Teh, Konstantina Palla:
Bayesian Nonparametrics for Sparse Dynamic Networks. ECML/PKDD (5) 2022: 191-206 - [i103]Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Golinski, Yee Whye Teh, Arnaud Doucet:
COIN++: Data Agnostic Neural Compression. CoRR abs/2201.12904 (2022) - [i102]Valentin De Bortoli, Emile Mathieu, Michael J. Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet:
Riemannian Score-Based Generative Modeling. CoRR abs/2202.02763 (2022) - [i101]Francisca Vasconcelos, Bobby He, Nalini M. Singh, Yee Whye Teh:
UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography. CoRR abs/2202.10847 (2022) - [i100]Jonathan Richard Schwarz, Yee Whye Teh:
Meta-Learning Sparse Compression Networks. CoRR abs/2205.08957 (2022) - [i99]Ning Miao, Emile Mathieu, Yann Dubois, Tom Rainforth, Yee Whye Teh, Adam Foster, Hyunjik Kim:
Learning Instance-Specific Data Augmentations. CoRR abs/2206.00051 (2022) - [i98]Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Razvan Pascanu, Jonathan Godwin:
Pre-training via Denoising for Molecular Property Prediction. CoRR abs/2206.00133 (2022) - [i97]Muhammad Faaiz Taufiq, Jean-Francois Ton, Robert Cornish, Yee Whye Teh, Arnaud Doucet:
Conformal Off-Policy Prediction in Contextual Bandits. CoRR abs/2206.04405 (2022) - [i96]Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh:
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations. CoRR abs/2206.04779 (2022) - [i95]Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, Jörg Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu:
When Does Re-initialization Work? CoRR abs/2206.10011 (2022) - [i94]James Thornton, Michael J. Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet:
Riemannian Diffusion Schrödinger Bridge. CoRR abs/2207.03024 (2022) - [i93]Jörg Bornschein, Alexandre Galashov, Ross Hemsley, Amal Rannen-Triki, Yutian Chen, Arslan Chaudhry, Xu Owen He, Arthur Douillard, Massimo Caccia, Qixuan Feng, Jiajun Shen, Sylvestre-Alvise Rebuffi, Kitty Stacpoole, Diego de Las Casas, Will Hawkins, Angeliki Lazaridou, Yee Whye Teh, Andrei A. Rusu, Razvan Pascanu, Marc'Aurelio Ranzato:
NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research. CoRR abs/2211.11747 (2022) - [i92]Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh:
On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations. CoRR abs/2212.13936 (2022) - 2021
- [j17]Qi Wang, Vinayak Rao, Yee Whye Teh:
An Exact Auxiliary Variable Gibbs Sampler for a Class of Diffusions. J. Comput. Graph. Stat. 30(2): 297-311 (2021) - [j16]Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Arnaud Doucet:
Dual Space Preconditioning for Gradient Descent. SIAM J. Optim. 31(1): 991-1016 (2021) - [c131]Jean-Francois Ton, Lucian Chan, Yee Whye Teh, Dino Sejdinovic:
Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings. AISTATS 2021: 1099-1107 - [c130]Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh:
Robust Pruning at Initialization. ICLR 2021 - [c129]Peter Holderrieth, Michael J. Hutchinson, Yee Whye Teh:
Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes. ICML 2021: 4297-4307 - [c128]Michael J. Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim:
LieTransformer: Equivariant Self-Attention for Lie Groups. ICML 2021: 4533-4543 - [c127]Siu Lun Chau, Jean-Francois Ton, Javier González, Yee Whye Teh, Dino Sejdinovic:
BayesIMP: Uncertainty Quantification for Causal Data Fusion. NeurIPS 2021: 3466-3477 - [c126]Jin Xu, Hyunjik Kim, Thomas Rainforth, Yee Whye Teh:
Group Equivariant Subsampling. NeurIPS 2021: 5934-5946 - [c125]Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C. Holmes, Frank Hutter, Yee Whye Teh:
Neural Ensemble Search for Uncertainty Estimation and Dataset Shift. NeurIPS 2021: 7898-7911 - [c124]Michael J. Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth:
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels. NeurIPS 2021: 17160-17169 - [c123]Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh:
On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations. NeurIPS 2021: 28376-28389 - [c122]Emile Mathieu, Adam Foster, Yee Whye Teh:
On Contrastive Representations of Stochastic Processes. NeurIPS 2021: 28823-28835 - [c121]Jonathan Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, Peter E. Latham, Yee Whye Teh:
Powerpropagation: A sparsity inducing weight reparameterisation. NeurIPS 2021: 28889-28903 - [i91]Emilien Dupont, Yee Whye Teh, Arnaud Doucet:
Generative Models as Distributions of Functions. CoRR abs/2102.04776 (2021) - [i90]Emilien Dupont, Adam Golinski, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet:
COIN: COmpression with Implicit Neural representations. CoRR abs/2103.03123 (2021) - [i89]Siu Lun Chau, Jean-François Ton, Javier González, Yee Whye Teh, Dino Sejdinovic:
BayesIMP: Uncertainty Quantification for Causal Data Fusion. CoRR abs/2106.03477 (2021) - [i88]Jin Xu, Hyunjik Kim, Tom Rainforth, Yee Whye Teh:
Group Equivariant Subsampling. CoRR abs/2106.05886 (2021) - [i87]Emile Mathieu, Adam Foster, Yee Whye Teh:
On Contrastive Representations of Stochastic Processes. CoRR abs/2106.10052 (2021) - [i86]Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, Tom Rainforth:
InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents. CoRR abs/2106.13746 (2021) - [i85]Jonathan Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, Peter E. Latham, Yee Whye Teh:
Powerpropagation: A sparsity inducing weight reparameterisation. CoRR abs/2110.00296 (2021) - [i84]Michael J. Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth:
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Equivariant Projected Kernels. CoRR abs/2110.14423 (2021) - 2020
- [j15]Benjamin Bloem-Reddy, Yee Whye Teh:
Probabilistic Symmetries and Invariant Neural Networks. J. Mach. Learn. Res. 21: 90:1-90:61 (2020) - [c120]Adam Foster, Martin Jankowiak, Matthew O'Meara, Yee Whye Teh, Tom Rainforth:
A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. AISTATS 2020: 2959-2969 - [c119]Giuseppe Di Benedetto, Francois Caron, Yee Whye Teh:
Non-exchangeable feature allocation models with sublinear growth of the feature sizes. AISTATS 2020: 3208-3218 - [c118]Siddhant M. Jayakumar, Wojciech M. Czarnecki, Jacob Menick, Jonathan Schwarz, Jack W. Rae, Simon Osindero, Yee Whye Teh, Tim Harley, Razvan Pascanu:
Multiplicative Interactions and Where to Find Them. ICLR 2020 - [c117]Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh:
Functional Regularisation for Continual Learning with Gaussian Processes. ICLR 2020 - [c116]Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban:
Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. ICML 2020: 8970-8980 - [c115]Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal:
Uncertainty Estimation Using a Single Deep Deterministic Neural Network. ICML 2020: 9690-9700 - [c114]Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh:
MetaFun: Meta-Learning with Iterative Functional Updates. ICML 2020: 10617-10627 - [c113]Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth:
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support. ICML 2020: 11534-11545 - [c112]Bobby He, Balaji Lakshminarayanan, Yee Whye Teh:
Bayesian Deep Ensembles via the Neural Tangent Kernel. NeurIPS 2020 - [c111]Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh:
Bootstrapping neural processes. NeurIPS 2020 - [c110]Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Gavin Leech, Anna B. Stephenson, Tomas Gavenciak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal:
How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19? NeurIPS 2020 - [i83]Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban:
Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. CoRR abs/2002.05685 (2020) - [i82]Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh:
Pruning untrained neural networks: Principles and Analysis. CoRR abs/2002.08797 (2020) - [i81]Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal:
Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network. CoRR abs/2003.02037 (2020) - [i80]Giuseppe Di Benedetto, François Caron, Yee Whye Teh:
Non-exchangeable feature allocation models with sublinear growth of the feature sizes. CoRR abs/2003.13491 (2020) - [i79]Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C. Holmes, Frank Hutter, Yee Whye Teh:
Neural Ensemble Search for Performant and Calibrated Predictions. CoRR abs/2006.08573 (2020) - [i78]Bobby He, Balaji Lakshminarayanan, Yee Whye Teh:
Bayesian Deep Ensembles via the Neural Tangent Kernel. CoRR abs/2007.05864 (2020) - [i77]Bryn Elesedy, Varun Kanade, Yee Whye Teh:
Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning. CoRR abs/2007.08243 (2020) - [i76]Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Gavin Leech, Anna B. Stephenson, Tomas Gavenciak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal:
On the robustness of effectiveness estimation of nonpharmaceutical interventions against COVID-19 transmission. CoRR abs/2007.13454 (2020) - [i75]Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh:
Bootstrapping Neural Processes. CoRR abs/2008.02956 (2020) - [i74]Alexandre Galashov, Jakub Sygnowski, Guillaume Desjardins, Jan Humplik, Leonard Hasenclever, Rae Jeong, Yee Whye Teh, Nicolas Heess:
Importance Weighted Policy Learning and Adaption. CoRR abs/2009.04875 (2020) - [i73]Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess:
Behavior Priors for Efficient Reinforcement Learning. CoRR abs/2010.14274 (2020) - [i72]Ari Pakman, Yueqi Wang, Yoonho Lee, Pallab Basu, Juho Lee, Yee Whye Teh, Liam Paninski:
Attentive Clustering Processes. CoRR abs/2010.15727 (2020) - [i71]Peter Holderrieth, Michael J. Hutchinson, Yee Whye Teh:
Equivariant Conditional Neural Processes. CoRR abs/2011.12916 (2020) - [i70]Michael J. Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim:
LieTransformer: Equivariant self-attention for Lie Groups. CoRR abs/2012.10885 (2020)
2010 – 2019
- 2019
- [c109]Alexandre Galashov, Siddhant M. Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M. Czarnecki, Yee Whye Teh, Razvan Pascanu, Nicolas Heess:
Information asymmetry in KL-regularized RL. ICLR (Poster) 2019 - [c108]Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh:
Attentive Neural Processes. ICLR (Poster) 2019 - [c107]Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess:
Neural Probabilistic Motor Primitives for Humanoid Control. ICLR (Poster) 2019 - [c106]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Do Deep Generative Models Know What They Don't Know? ICLR (Poster) 2019 - [c105]Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar:
A Statistical Approach to Assessing Neural Network Robustness. ICLR (Poster) 2019 - [c104]Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh:
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. ICML 2019: 3744-3753 - [c103]Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh:
Disentangling Disentanglement in Variational Autoencoders. ICML 2019: 4402-4412 - [c102]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Hybrid Models with Deep and Invertible Features. ICML 2019: 4723-4732 - [c101]Emilien Dupont, Arnaud Doucet, Yee Whye Teh:
Augmented Neural ODEs. NeurIPS 2019: 3134-3144 - [c100]Dushyant Rao, Francesco Visin, Andrei A. Rusu, Razvan Pascanu, Yee Whye Teh, Raia Hadsell:
Continual Unsupervised Representation Learning. NeurIPS 2019: 7645-7655 - [c99]Shufei Ge, Shijia Wang, Yee Whye Teh, Liangliang Wang, Lloyd T. Elliott:
Random Tessellation Forests. NeurIPS 2019: 9571-9581 - [c98]Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh:
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders. NeurIPS 2019: 12544-12555 - [c97]Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Tom Rainforth, Noah D. Goodman:
Variational Bayesian Optimal Experimental Design. NeurIPS 2019: 14036-14047 - [c96]Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton:
Stacked Capsule Autoencoders. NeurIPS 2019: 15486-15496 - [c95]Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood:
Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow. UAI 2019: 1039-1049 - [i69]Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh:
Attentive Neural Processes. CoRR abs/1901.05761 (2019) - [i68]Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh:
Hierarchical Representations with Poincaré Variational Auto-Encoders. CoRR abs/1901.06033 (2019) - [i67]Benjamin Bloem-Reddy, Yee Whye Teh:
Probabilistic symmetry and invariant neural networks. CoRR abs/1901.06082 (2019) - [i66]Michalis K. Titsias, Jonathan Schwarz, Alexander G. de G. Matthews, Razvan Pascanu, Yee Whye Teh:
Functional Regularisation for Continual Learning using Gaussian Processes. CoRR abs/1901.11356 (2019) - [i65]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Hybrid Models with Deep and Invertible Features. CoRR abs/1902.02767 (2019) - [i64]Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Tom Rainforth, Noah D. Goodman:
Variational Estimators for Bayesian Optimal Experimental Design. CoRR abs/1903.05480 (2019) - [i63]Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, Nicolas Heess:
Exploiting Hierarchy for Learning and Transfer in KL-regularized RL. CoRR abs/1903.07438 (2019) - [i62]Alexandre Galashov, Jonathan Schwarz, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, S. M. Ali Eslami, Yee Whye Teh:
Meta-Learning surrogate models for sequential decision making. CoRR abs/1903.11907 (2019) - [i61]Emilien Dupont, Arnaud Doucet, Yee Whye Teh:
Augmented Neural ODEs. CoRR abs/1904.01681 (2019) - [i60]Alexandre Galashov, Siddhant M. Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M. Czarnecki, Yee Whye Teh, Razvan Pascanu, Nicolas Heess:
Information asymmetry in KL-regularized RL. CoRR abs/1905.01240 (2019) - [i59]Pedro A. Ortega, Jane X. Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alexander Pritzel, Pablo Sprechmann, Siddhant M. Jayakumar, Tom McGrath, Kevin J. Miller, Mohammad Gheshlaghi Azar, Ian Osband, Neil C. Rabinowitz, András György, Silvia Chiappa, Simon Osindero, Yee Whye Teh, Hado van Hasselt, Nando de Freitas, Matthew M. Botvinick, Shane Legg:
Meta-learning of Sequential Strategies. CoRR abs/1905.03030 (2019) - [i58]Jan Humplik, Alexandre Galashov, Leonard Hasenclever, Pedro A. Ortega, Yee Whye Teh, Nicolas Heess:
Meta reinforcement learning as task inference. CoRR abs/1905.06424 (2019) - [i57]Bradley Gram-Hansen, Christian Schröder de Witt, Tom Rainforth, Philip H. S. Torr, Yee Whye Teh, Atilim Günes Baydin:
Hijacking Malaria Simulators with Probabilistic Programming. CoRR abs/1905.12432 (2019) - [i56]Jean-Francois Ton, Lucian Chan, Yee Whye Teh, Dino Sejdinovic:
Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings. CoRR abs/1906.02236 (2019) - [i55]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan:
Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality. CoRR abs/1906.02994 (2019) - [i54]Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu:
Task Agnostic Continual Learning via Meta Learning. CoRR abs/1906.05201 (2019) - [i53]Shufei Ge, Shijia Wang, Yee Whye Teh, Liangliang Wang, Lloyd T. Elliott:
Random Tessellation Forests. CoRR abs/1906.05440 (2019) - [i52]Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton:
Stacked Capsule Autoencoders. CoRR abs/1906.06818 (2019) - [i51]Juho Lee, Yoonho Lee, Yee Whye Teh:
Deep Amortized Clustering. CoRR abs/1909.13433 (2019) - [i50]Saeid Naderiparizi, Adam Scibior, Andreas Munk, Mehrdad Ghadiri, Atilim Günes Baydin, Bradley Gram-Hansen, Christian Schröder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood:
Amortized Rejection Sampling in Universal Probabilistic Programming. CoRR abs/1910.09056 (2019) - [i49]Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth:
Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support. CoRR abs/1910.13324 (2019) - [i48]Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Continual Unsupervised Representation Learning. CoRR abs/1910.14481 (2019) - [i47]Adam Foster, Martin Jankowiak, Matthew O'Meara, Yee Whye Teh, Tom Rainforth:
A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. CoRR abs/1911.00294 (2019) - [i46]Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh:
MetaFun: Meta-Learning with Iterative Functional Updates. CoRR abs/1912.02738 (2019) - 2018
- [c94]Mark Rowland, Marc G. Bellemare, Will Dabney, Rémi Munos, Yee Whye Teh:
An Analysis of Categorical Distributional Reinforcement Learning. AISTATS 2018: 29-37 - [c93]Hyunjik Kim, Yee Whye Teh:
Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes. AISTATS 2018: 575-584 - [c92]Wojciech Marian Czarnecki, Siddhant M. Jayakumar, Max Jaderberg, Leonard Hasenclever, Yee Whye Teh, Nicolas Heess, Simon Osindero, Razvan Pascanu:
Mix & Match Agent Curricula for Reinforcement Learning. ICML 2018: 1095-1103 - [c91]Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Jimenez Rezende, S. M. Ali Eslami:
Conditional Neural Processes. ICML 2018: 1690-1699 - [c90]Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh:
Tighter Variational Bounds are Not Necessarily Better. ICML 2018: 4274-4282 - [c89]Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Progress & Compress: A scalable framework for continual learning. ICML 2018: 4535-4544 - [c88]Yee Whye Teh:
On Big Data Learning for Small Data Problems. KDD 2018: 3 - [c87]Xenia Miscouridou, Francois Caron, Yee Whye Teh:
Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data. NeurIPS 2018: 2349-2358 - [c86]Stefan Webb, Adam Golinski, Robert Zinkov, Siddharth Narayanaswamy, Tom Rainforth, Yee Whye Teh, Frank Wood:
Faithful Inversion of Generative Models for Effective Amortized Inference. NeurIPS 2018: 3074-3084 - [c85]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Causal Inference via Kernel Deviance Measures. NeurIPS 2018: 6986-6994 - [c84]Jianfei Chen, Jun Zhu, Yee Whye Teh, Tong Zhang:
Stochastic Expectation Maximization with Variance Reduction. NeurIPS 2018: 7978-7988 - [c83]Adam R. Kosiorek, Hyunjik Kim, Yee Whye Teh, Ingmar Posner:
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. NeurIPS 2018: 8615-8625 - [c82]Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh:
Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks. UAI 2018: 477-486 - [i45]Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh:
Tighter Variational Bounds are Not Necessarily Better. CoRR abs/1802.04537 (2018) - [i44]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Causal Inference via Kernel Deviance Measures. CoRR abs/1804.04622 (2018) - [i43]Jonathan Schwarz, Jelena Luketina, Wojciech M. Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell:
Progress & Compress: A scalable framework for continual learning. CoRR abs/1805.06370 (2018) - [i42]Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood:
Revisiting Reweighted Wake-Sleep. CoRR abs/1805.10469 (2018) - [i41]Wojciech Marian Czarnecki, Siddhant M. Jayakumar, Max Jaderberg, Leonard Hasenclever, Yee Whye Teh, Simon Osindero, Nicolas Heess, Razvan Pascanu:
Mix&Match - Agent Curricula for Reinforcement Learning. CoRR abs/1806.01780 (2018) - [i40]Adam R. Kosiorek, Hyunjik Kim, Ingmar Posner, Yee Whye Teh:
Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. CoRR abs/1806.01794 (2018) - [i39]Jin Xu, Yee Whye Teh:
Controllable Semantic Image Inpainting. CoRR abs/1806.05953 (2018) - [i38]Marta Garnelo, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, S. M. Ali Eslami:
Conditional Neural Processes. CoRR abs/1807.01613 (2018) - [i37]Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, Yee Whye Teh:
Neural Processes. CoRR abs/1807.01622 (2018) - [i36]Benjamin Bloem-Reddy, Adam Foster, Emile Mathieu, Yee Whye Teh:
Sampling and Inference for Beta Neutral-to-the-Left Models of Sparse Networks. CoRR abs/1807.03113 (2018) - [i35]Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, Arnaud Doucet:
Hamiltonian Descent Methods. CoRR abs/1809.05042 (2018) - [i34]Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh:
Set Transformer. CoRR abs/1810.00825 (2018) - [i33]Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan:
Do Deep Generative Models Know What They Don't Know? CoRR abs/1810.09136 (2018) - [i32]Xiaoyu Lu, Tom Rainforth, Yuan Zhou, Jan-Willem van de Meent, Yee Whye Teh:
On Exploration, Exploitation and Learning in Adaptive Importance Sampling. CoRR abs/1810.13296 (2018) - [i31]Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar:
A Statistical Approach to Assessing Neural Network Robustness. CoRR abs/1811.07209 (2018) - [i30]Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess:
Neural probabilistic motor primitives for humanoid control. CoRR abs/1811.11711 (2018) - [i29]Emile Mathieu, Tom Rainforth, Siddharth Narayanaswamy, Yee Whye Teh:
Disentangling Disentanglement. CoRR abs/1812.02833 (2018) - 2017
- [j14]Leonard Hasenclever, Stefan Webb, Thibaut Liénart, Sebastian J. Vollmer, Balaji Lakshminarayanan, Charles Blundell, Yee Whye Teh:
Distributed Bayesian Learning with Stochastic Natural Gradient Expectation Propagation and the Posterior Server. J. Mach. Learn. Res. 18: 106:1-106:37 (2017) - [j13]Valerio Perrone, Paul A. Jenkins, Dario Spanò, Yee Whye Teh:
Poisson Random Fields for Dynamic Feature Models. J. Mach. Learn. Res. 18: 127:1-127:45 (2017) - [c81]Seth R. Flaxman, Yee Whye Teh, Dino Sejdinovic:
Poisson intensity estimation with reproducing kernels. AISTATS 2017: 270-279 - [c80]Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, Sebastian J. Vollmer:
Relativistic Monte Carlo. AISTATS 2017: 1236-1245 - [c79]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh:
Particle Value Functions. ICLR (Workshop) 2017 - [c78]Chris J. Maddison, Andriy Mnih, Yee Whye Teh:
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ICLR (Poster) 2017 - [c77]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
Deep Kernel Machines via the Kernel Reparametrization Trick. ICLR (Workshop) 2017 - [c76]Yee Whye Teh, Victor Bapst, Wojciech M. Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu:
Distral: Robust multitask reinforcement learning. NIPS 2017: 4496-4506 - [c75]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh:
Filtering Variational Objectives. NIPS 2017: 6573-6583 - [e2]Doina Precup, Yee Whye Teh:
Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. Proceedings of Machine Learning Research 70, PMLR 2017 [contents] - [r4]Peter Orbanz, Yee Whye Teh:
Bayesian Nonparametric Models. Encyclopedia of Machine Learning and Data Mining 2017: 107-116 - [r3]Yee Whye Teh:
Dirichlet Process. Encyclopedia of Machine Learning and Data Mining 2017: 361-370 - [i28]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh:
Particle Value Functions. CoRR abs/1703.05820 (2017) - [i27]Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh:
Filtering Variational Objectives. CoRR abs/1705.09279 (2017) - [i26]Hyunjik Kim, Yee Whye Teh:
Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes. CoRR abs/1706.02524 (2017) - [i25]Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu:
Distral: Robust Multitask Reinforcement Learning. CoRR abs/1707.04175 (2017) - [i24]Stefan Webb, Adam Golinski, Robert Zinkov, N. Siddharth, Yee Whye Teh, Frank D. Wood:
Faithful Model Inversion Substantially Improves Auto-encoding Variational Inference. CoRR abs/1712.00287 (2017) - 2016
- [j12]Yee Whye Teh, Alexandre H. Thiéry, Sebastian J. Vollmer:
Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics. J. Mach. Learn. Res. 17: 7:1-7:33 (2016) - [j11]Sebastian J. Vollmer, Konstantinos C. Zygalakis, Yee Whye Teh:
Exploration of the (Non-)Asymptotic Bias and Variance of Stochastic Gradient Langevin Dynamics. J. Mach. Learn. Res. 17: 159:1-159:48 (2016) - [c74]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Mondrian Forests for Large-Scale Regression when Uncertainty Matters. AISTATS 2016: 1478-1487 - [c73]Hyunjik Kim, Yee Whye Teh:
Scalable Structure Discovery in Regression using Gaussian Processes. AutoML@ICML 2016: 31-40 - [c72]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. ICML 2016: 1482-1491 - [c71]Tamara Fernandez, Nicolas Rivera, Yee Whye Teh:
Gaussian Processes for Survival Analysis. NIPS 2016: 5015-5023 - [c70]Matej Balog, Balaji Lakshminarayanan, Zoubin Ghahramani, Daniel M. Roy, Yee Whye Teh:
The Mondrian Kernel. UAI 2016 - [i23]Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh:
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. CoRR abs/1602.04805 (2016) - [i22]Dorota Glowacka, Yee Whye Teh, John Shawe-Taylor:
Image Retrieval with a Bayesian Model of Relevance Feedback. CoRR abs/1603.09522 (2016) - [i21]Hyunjik Kim, Xiaoyu Lu, Seth R. Flaxman, Yee Whye Teh:
Tucker Gaussian Process for Regression and Collaborative Filtering. CoRR abs/1605.07025 (2016) - [i20]Chris J. Maddison, Andriy Mnih, Yee Whye Teh:
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. CoRR abs/1611.00712 (2016) - 2015
- [j10]Pablo G. Moreno, Antonio Artés-Rodríguez, Yee Whye Teh, Fernando Pérez-Cruz:
Bayesian nonparametric crowdsourcing. J. Mach. Learn. Res. 16: 1607-1627 (2015) - [j9]Ryan P. Adams, Emily B. Fox, Erik B. Sudderth, Yee Whye Teh:
Guest Editors' Introduction to the Special Issue on Bayesian Nonparametrics. IEEE Trans. Pattern Anal. Mach. Intell. 37(2): 209-211 (2015) - [j8]Stefano Favaro, Maria Lomeli, Yee Whye Teh:
On a class of σ-stable Poisson-Kingman models and an effective marginalized sampler. Stat. Comput. 25(1): 67-78 (2015) - [c69]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Particle Gibbs for Bayesian Additive Regression Trees. AISTATS 2015 - [c68]Maria Lomeli, Stefano Favaro, Yee Whye Teh:
A hybrid sampler for Poisson-Kingman mixture models. NIPS 2015: 2161-2169 - [c67]Thibaut Liénart, Yee Whye Teh, Arnaud Doucet:
Expectation Particle Belief Propagation. NIPS 2015: 3609-3617 - [i19]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Particle Gibbs for Bayesian Additive Regression Trees. CoRR abs/1502.04622 (2015) - [i18]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Mondrian Forests for Large-Scale Regression when Uncertainty Matters. CoRR abs/1506.03805 (2015) - [i17]Thibaut Liénart, Yee Whye Teh, Arnaud Doucet:
Expectation Particle Belief Propagation. CoRR abs/1506.05934 (2015) - [i16]Matej Balog, Yee Whye Teh:
The Mondrian Process for Machine Learning. CoRR abs/1507.05181 (2015) - [i15]Yee Whye Teh, Leonard Hasenclever, Thibaut Liénart, Sebastian J. Vollmer, Stefan Webb, Balaji Lakshminarayanan, Charles Blundell:
Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server. CoRR abs/1512.09327 (2015) - 2014
- [c66]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Mondrian Forests: Efficient Online Random Forests. NIPS 2014: 3140-3148 - [c65]Minjie Xu, Balaji Lakshminarayanan, Yee Whye Teh, Jun Zhu, Bo Zhang:
Distributed Bayesian Posterior Sampling via Moment Sharing. NIPS 2014: 3356-3364 - [c64]Brooks Paige, Frank D. Wood, Arnaud Doucet, Yee Whye Teh:
Asynchronous Anytime Sequential Monte Carlo. NIPS 2014: 3410-3418 - [i14]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Mondrian Forests: Efficient Online Random Forests. CoRR abs/1406.2673 (2014) - 2013
- [j7]Vinayak A. Rao, Yee Whye Teh:
Fast MCMC sampling for Markov jump processes and extensions. J. Mach. Learn. Res. 14(1): 3295-3320 (2013) - [c63]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Top-down particle filtering for Bayesian decision trees. ICML (3) 2013: 280-288 - [c62]Changyou Chen, Vinayak A. Rao, Wray L. Buntine, Yee Whye Teh:
Dependent Normalized Random Measures. ICML (3) 2013: 969-977 - [c61]Charles Blundell, Yee Whye Teh:
Bayesian Hierarchical Community Discovery. NIPS 2013: 1601-1609 - [c60]Xinhua Zhang, Wee Sun Lee, Yee Whye Teh:
Learning with Invariance via Linear Functionals on Reproducing Kernel Hilbert Space. NIPS 2013: 2031-2039 - [c59]Sam Patterson, Yee Whye Teh:
Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. NIPS 2013: 3102-3110 - [i13]Geoffrey E. Hinton, Yee Whye Teh:
Discovering Multiple Constraints that are Frequently Approximately Satisfied. CoRR abs/1301.2278 (2013) - [i12]Max Welling, Yee Whye Teh:
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation. CoRR abs/1301.2317 (2013) - [i11]Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh:
Top-down particle filtering for Bayesian decision trees. CoRR abs/1303.0561 (2013) - [i10]Balaji Lakshminarayanan, Yee Whye Teh:
Inferring ground truth from multi-annotator ordinal data: a probabilistic approach. CoRR abs/1305.0015 (2013) - 2012
- [c58]Nicolas Heess, David Silver, Yee Whye Teh:
Actor-Critic Reinforcement Learning with Energy-Based Policies. EWRL 2012: 43-58 - [c57]Andriy Mnih, Yee Whye Teh:
A fast and simple algorithm for training neural probabilistic language models. ICML 2012 - [c56]Vinayak A. Rao, Yee Whye Teh:
MCMC for continuous-time discrete-state systems. NIPS 2012: 710-718 - [c55]Bogdan Alexe, Nicolas Heess, Yee Whye Teh, Vittorio Ferrari:
Searching for objects driven by context. NIPS 2012: 890-898 - [c54]Francois Caron, Yee Whye Teh:
Bayesian nonparametric models for ranked data. NIPS 2012: 1529-1537 - [c53]Andriy Mnih, Yee Whye Teh:
Learning Label Trees for Probabilistic Modelling of Implicit Feedback. NIPS 2012: 2825-2833 - [c52]Lloyd T. Elliott, Yee Whye Teh:
Scalable imputation of genetic data with a discrete fragmentation-coagulation process. NIPS 2012: 2861-2869 - [i9]Vinayak A. Rao, Yee Whye Teh:
Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. CoRR abs/1202.3760 (2012) - [i8]Charles Blundell, Yee Whye Teh, Katherine A. Heller:
Bayesian Rose Trees. CoRR abs/1203.3468 (2012) - [i7]Arthur U. Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh:
On Smoothing and Inference for Topic Models. CoRR abs/1205.2662 (2012) - [i6]Max Welling, Yee Whye Teh, Hilbert J. Kappen:
Hybrid Variational/Gibbs Collapsed Inference in Topic Models. CoRR abs/1206.3297 (2012) - [i5]Max Welling, Thomas P. Minka, Yee Whye Teh:
Structured Region Graphs: Morphing EP into GBP. CoRR abs/1207.1426 (2012) - [i4]Francois Caron, Yee Whye Teh:
Bayesian nonparametric models for ranked data. CoRR abs/1211.4321 (2012) - [i3]Francois Caron, Yee Whye Teh, Thomas Brendan Murphy:
Bayesian nonparametric Plackett-Luce models for the analysis of clustered ranked data. CoRR abs/1211.5037 (2012) - 2011
- [j6]Frank D. Wood, Jan Gasthaus, Cédric Archambeau, Lancelot James, Yee Whye Teh:
The sequence memoizer. Commun. ACM 54(2): 91-98 (2011) - [c51]Yee Whye Teh:
(Invited talk) Bayesian Tools for Natural Language Learning. CoNLL 2011: 219 - [c50]Max Welling, Yee Whye Teh:
Bayesian Learning via Stochastic Gradient Langevin Dynamics. ICML 2011: 681-688 - [c49]Yee Whye Teh, Charles Blundell, Lloyd T. Elliott:
Modelling Genetic Variations using Fragmentation-Coagulation Processes. NIPS 2011: 819-827 - [c48]Vinayak A. Rao, Yee Whye Teh:
Gaussian process modulated renewal processes. NIPS 2011: 2474-2482 - [c47]Vinayak A. Rao, Yee Whye Teh:
Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. UAI 2011: 619-626 - [c46]Ricardo Bezerra de Andrade e Silva, Charles Blundell, Yee Whye Teh:
Mixed Cumulative Distribution Networks. AISTATS 2011: 670-678 - [i2]Andriy Mnih, Yee Whye Teh:
Learning Item Trees for Probabilistic Modelling of Implicit Feedback. CoRR abs/1109.5894 (2011) - 2010
- [c45]Jan Gasthaus, Frank D. Wood, Yee Whye Teh:
Lossless Compression Based on the Sequence Memoizer. DCC 2010: 337-345 - [c44]Jan Gasthaus, Yee Whye Teh:
Improvements to the Sequence Memoizer. NIPS 2010: 685-693 - [c43]Charles Blundell, Yee Whye Teh, Katherine A. Heller:
Bayesian Rose Trees. UAI 2010: 65-72 - [c42]Yee Whye Teh, D. Mike Titterington:
Preface. AISTATS 2010 - [e1]Yee Whye Teh, D. Mike Titterington:
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010. JMLR Proceedings 9, JMLR.org 2010 [contents] - [r2]Peter Orbanz, Yee Whye Teh:
Bayesian Nonparametric Models. Encyclopedia of Machine Learning 2010: 81-89 - [r1]Yee Whye Teh:
Dirichlet Process. Encyclopedia of Machine Learning 2010: 280-287 - [i1]Ricardo Bezerra de Andrade e Silva, Charles Blundell, Yee Whye Teh:
Mixed Cumulative Distribution Networks. CoRR abs/1008.5386 (2010)
2000 – 2009
- 2009
- [c41]Frank D. Wood, Cédric Archambeau, Jan Gasthaus, Lancelot James, Yee Whye Teh:
A stochastic memoizer for sequence data. ICML 2009: 1129-1136 - [c40]Gholamreza Haffari, Yee Whye Teh:
Hierarchical Dirichlet Trees for Information Retrieval. HLT-NAACL 2009: 173-181 - [c39]Vinayak A. Rao, Yee Whye Teh:
Spatial Normalized Gamma Processes. NIPS 2009: 1554-1562 - [c38]Yee Whye Teh, Dilan Görür:
Indian Buffet Processes with Power-law Behavior. NIPS 2009: 1838-1846 - [c37]Arthur U. Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh:
On Smoothing and Inference for Topic Models. UAI 2009: 27-34 - [c36]Finale Doshi, Kurt Miller, Jurgen Van Gael, Yee Whye Teh:
Variational Inference for the Indian Buffet Process. AISTATS 2009: 137-144 - [c35]Katherine A. Heller, Yee Whye Teh, Dilan Görür:
Infinite Hierarchical Hidden Markov Models. AISTATS 2009: 224-231 - [c34]Frank D. Wood, Yee Whye Teh:
A Hierarchical Nonparametric Bayesian Approach to Statistical Language Model Domain Adaptation. AISTATS 2009: 607-614 - 2008
- [c33]Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, Zoubin Ghahramani:
Beam sampling for the infinite hidden Markov model. ICML 2008: 1088-1095 - [c32]Jan Gasthaus, Frank D. Wood, Dilan Görür, Yee Whye Teh:
Dependent Dirichlet Process Spike Sorting. NIPS 2008: 497-504 - [c31]Dilan Görür, Yee Whye Teh:
An Efficient Sequential Monte Carlo Algorithm for Coalescent Clustering. NIPS 2008: 521-528 - [c30]Gerald T. Quon, Yee Whye Teh, Esther T. Chan, Timothy R. Hughes, Michael Brudno, Quaid Morris:
A mixture model for the evolution of gene expression in non-homogeneous datasets. NIPS 2008: 1297-1304 - [c29]Daniel M. Roy, Yee Whye Teh:
The Mondrian Process. NIPS 2008: 1377-1384 - [c28]Jurgen Van Gael, Yee Whye Teh, Zoubin Ghahramani:
The Infinite Factorial Hidden Markov Model. NIPS 2008: 1697-1704 - [c27]Max Welling, Yee Whye Teh, Bert Kappen:
Hybrid Variational/Gibbs Collapsed Inference in Topic Models. UAI 2008: 587-594 - 2007
- [c26]Junfu Cai, Wee Sun Lee, Yee Whye Teh:
Improving Word Sense Disambiguation Using Topic Features. EMNLP-CoNLL 2007: 1015-1023 - [c25]Kenichi Kurihara, Max Welling, Yee Whye Teh:
Collapsed Variational Dirichlet Process Mixture Models. IJCAI 2007: 2796-2801 - [c24]Hai Leong Chieu, Wee Sun Lee, Yee Whye Teh:
Cooled and Relaxed Survey Propagation for MRFs. NIPS 2007: 297-304 - [c23]Yee Whye Teh, Hal Daumé III, Daniel M. Roy:
Bayesian Agglomerative Clustering with Coalescents. NIPS 2007: 1473-1480 - [c22]Yee Whye Teh, Kenichi Kurihara, Max Welling:
Collapsed Variational Inference for HDP. NIPS 2007: 1481-1488 - [c21]Junfu Cai, Wee Sun Lee, Yee Whye Teh:
NUS-ML: Improving Word Sense Disambiguation Using Topic Features. SemEval@ACL 2007: 249-252 - [c20]Yee Whye Teh, Dilan Görür, Zoubin Ghahramani:
Stick-breaking Construction for the Indian Buffet Process. AISTATS 2007: 556-563 - 2006
- [j5]Geoffrey E. Hinton, Simon Osindero, Max Welling, Yee Whye Teh:
Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation. Cogn. Sci. 30(4): 725-731 (2006) - [j4]Geoffrey E. Hinton, Simon Osindero, Yee Whye Teh:
A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 18(7): 1527-1554 (2006) - [c19]Yee Whye Teh:
A Hierarchical Bayesian Language Model Based On Pitman-Yor Processes. ACL 2006 - [c18]Eric P. Xing, Kyung-Ah Sohn, Michael I. Jordan, Yee Whye Teh:
Bayesian multi-population haplotype inference via a hierarchical dirichlet process mixture. ICML 2006: 1049-1056 - [c17]Yee Whye Teh, David Newman, Max Welling:
A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation. NIPS 2006: 1353-1360 - 2005
- [c16]Yee Whye Teh, Matthias W. Seeger, Michael I. Jordan:
Semiparametric latent factor models. AISTATS 2005: 333-340 - [c15]Max Welling, Thomas P. Minka, Yee Whye Teh:
Structured Region Graphs: Morphing EP into GBP. UAI 2005: 609-614 - 2004
- [j3]Max Welling, Yee Whye Teh:
Linear Response Algorithms for Approximate Inference in Graphical Models. Neural Comput. 16(1): 197-221 (2004) - [c14]Tamara L. Berg, Alexander C. Berg, Jaety Edwards, Michael Maire, Ryan White, Yee Whye Teh, Erik G. Learned-Miller, David A. Forsyth:
Names and Faces in the News. CVPR (2) 2004: 848-854 - [c13]Max Welling, Michal Rosen-Zvi, Yee Whye Teh:
Approximate inference by Markov chains on union spaces. ICML 2004 - [c12]Jaety Edwards, Yee Whye Teh, David A. Forsyth, Roger Bock, Michael Maire, Grace Vesom:
Making Latin Manuscripts Searchable using gHMMs. NIPS 2004: 385-392 - [c11]Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, David M. Blei:
Sharing Clusters among Related Groups: Hierarchical Dirichlet Processes. NIPS 2004: 1385-1392 - 2003
- [b1]Yee Whye Teh:
Bethe free energy and contrastive divergence approximations for undirected graphical models. University of Toronto, Canada, 2003 - [j2]Max Welling, Yee Whye Teh:
Approximate inference in Boltzmann machines. Artif. Intell. 143(1): 19-50 (2003) - [j1]Yee Whye Teh, Max Welling, Simon Osindero, Geoffrey E. Hinton:
Energy-Based Models for Sparse Overcomplete Representations. J. Mach. Learn. Res. 4: 1235-1260 (2003) - [c10]Yee Whye Teh, Max Welling:
On Improving the Efficiency of the Iterative Proportional Fitting Procedure. AISTATS 2003: 262-269 - [c9]Max Welling, Yee Whye Teh:
Linear Response for Approximate Inference. NIPS 2003: 361-368 - 2002
- [c8]Sham M. Kakade, Yee Whye Teh, Sam T. Roweis:
An Alternate Objective Function for Markovian Fields. ICML 2002: 275-282 - [c7]Yee Whye Teh, Sam T. Roweis:
Automatic Alignment of Local Representations. NIPS 2002: 841-848 - 2001
- [c6]Yee Whye Teh, Max Welling:
The Unified Propagation and Scaling Algorithm. NIPS 2001: 953-960 - [c5]Geoffrey E. Hinton, Yee Whye Teh:
Discovering Multiple Constraints that are Frequently Approximately Satisfied. UAI 2001: 227-234 - [c4]Max Welling, Yee Whye Teh:
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation. UAI 2001: 554-561 - 2000
- [c3]Yee Whye Teh, Geoffrey E. Hinton:
Rate-coded Restricted Boltzmann Machines for Face Recognition. NIPS 2000: 908-914
1990 – 1999
- 1999
- [c2]Geoffrey E. Hinton, Zoubin Ghahramani, Yee Whye Teh:
Learning to Parse Images. NIPS 1999: 463-469 - 1998
- [c1]Fahiem Bacchus, Yee Whye Teh:
Making Forward Chaining Relevant. AIPS 1998: 54-61