CSO & Co-Founder at Predibase, previously Staff Research Scientist at Stanford University, co-founder and Staff Research Scientist at Uber AI. Author of Ludwig.ai
This Thursday I'll be speaking at the hashtag #LLMOps Micro-Summit. We are running out of spots, so RSVP! The event theme is "Small Models, Big Results" and we are going to have technical talks about #LLM inference, #SLMs, #finetuning, model monitoring and synthetic data. I have a short talk myself, but we have a great set of speakers including AI leaders from Apple, Checkr, Inc., 🔭 Galileo, and Gretel, so it's going to be super fun. Full agenda and RSVP link in the comments below 👇 Hope to see you there!
Focusing on "small models" might limit exploration of architectural innovations crucial for truly impactful results. The recent NeurIPS conference showcased several large language models achieving breakthroughs in reasoning and generalization, suggesting a potential trade-off between size and performance. How would the emphasis on "small models" influence the development of synthetic data specifically tailored to address complex reasoning tasks?
Best of luck, Piero. I'm in the Zoom waiting area now and looking forward to this micro-summit.
Piero Molino, will the video be shared? Very interested!
Want to learn about LLMs through a lens of sanity and not hype? Go listen to Piero Molino.
See you there on Thursday! 😃
Full agenda and RSVP here https://lu.ma/LLMOpsMicroSummit?utm_source=piero