A short summary of what's going on with AI safety regulation right now, based on some insightful questions at the OWASP® Foundation Vancouver event last night:

🇪🇺 EU: The EU is by far the closest to enacting #AI legislation, but lately it seems that Big Tech is lobbying hard for minimal regulation on its own models and harsher regulation on open source models. More info: https://lnkd.in/gtdMBrFM

🇺🇸 USA: The USA is getting its act together but is more fractured right now. Biden has signed an Executive Order, but the real push is coming from NIST and CISA in defining roadmaps and guidelines. More info: https://lnkd.in/gmt6_bqm

🇨🇦 Canada: Oh Canada... <sigh> we're so far behind it's not even funny. The AIDA act is moving along, mostly focused on non-discrimination, but it isn't expected to become law until 2025 or later. Currently everything is still in Standing Committees on Parliament Hill. Latest: https://lnkd.in/greB5dR7

#aisecurity #airegulation #cisa #euaiact #aida #executiveorder #opensource #opensourceai #ArtificialIntelligence #AILegislation #GlobalTechPolicy #EUPolicy #USAPolicy #CanadaTech
Talesh Seeparsan’s Post
-
What photographing the Aurora Borealis last night taught me about leadership in Generative AI Security… Nothing. It taught me nothing about my professional work. I just took a lovely photo and I wanted to share it with you. Have a great weekend!
-
Heads up! My talk is now on November 2nd instead of the 10th.
Uncover How AI Could Land You in Trouble with Talesh Seeparsan! 🛡️🤖

Join Talesh Seeparsan, CEO of Bit79 and a cybersecurity leadership consultant, as he explores the critical security issues facing teams building with Large Language Models (#LLMs) and #GenerativeAI. With over 12 years of experience and as a core member of the OWASP team working on #AI safety, Talesh brings invaluable insights on how to safeguard AI projects from potential risks.

Stay Ahead of AI Security Challenges!
- AI Security Expert: Talesh has dedicated his career to helping businesses securely adopt AI and LLMs, establishing best practices for safe, trustworthy AI.
- Cutting-Edge Knowledge: As a key contributor to the newly published OWASP Top Ten for LLM Applications, Talesh is at the forefront of AI security.
- Trusted by Industry Leaders: The OWASP Top Ten has been endorsed by Microsoft, IBM, and the UK AI Safety Institute, making it the go-to resource for AI security concerns.

👉 What to Expect:
- Top Security Concerns: Get up to speed on the OWASP Top Ten for LLMs and Generative AI, addressing the most pressing security risks.
- Real-World Examples: Learn how these security vulnerabilities have been exploited and what you can do to protect your systems.
- Actionable Insights: Discover practical tools and frameworks to fortify your Generative AI applications and prevent AI-related security breaches.

🎟 See agendas and secure your spot at: vanaisummit.com

Don't miss this opportunity to learn from a leading authority in AI security and protect your AI initiatives from potential pitfalls at the AI Summit Vancouver!

A huge thanks to the wonderful team behind this summit: Procheta Nag 🥽, Shubhra Sarker, Igor Korniienko, Vivian L., Kate Samonkraisorakit, RobabehSadat (Fahimeh) Taheri, Termaine Whittick, Nsikak Udoh, Haley K., Nur Martinez, Vinícius Souza, Patty Yu-Lan Liu, Anastasia Zaika, Nicholas Tsang, Spencer Nakamura, Pamela Mollinedo, Elaine C.
#AIDangers #LLMSecurity #Cybersecurity #OWASP #TaleshSeeparsan #AITrust #AISafety #AISummitVancouver #Vancouver
-
#offtopicSunday? NotebookLM may actually make me start listening to podcasts again. I stopped paying attention to podcasts because the medium became a place for people to ramble incoherently about a topic in 500x the time it would take me to read a blog post about it. This is concise, a great overview, and engaging to listen to. https://lnkd.in/g2h5HNDU P.S. Heads up Michael Yagudaev
-
I've been deep diving over the last couple of weeks into how some #LLM models can be carefully fine-tuned to pass standardized red teaming tests yet still return toxic or otherwise undesired results on day-to-day inference requests. The more I learn about this, the more I'm glad that organizations like OpenSSF are building cryptographic standards for model signing. You can learn more about what they're doing here: https://lnkd.in/g-n8rK42 and I hope to publish some of my #adversarialFineTuning attempts on #OpenSource and #OpenWeights models sometime soon. #AISafety #AISecurity #MLBOM #CycloneDX #AISupplyChain
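The core idea behind model signing is simple: if you can verify that the artifact you downloaded is byte-for-byte the one the publisher attested to, a quietly fine-tuned swap becomes detectable. The OpenSSF work builds on Sigstore signatures, which is more than a hash check; the sketch below shows only the integrity-verification half of that idea, with a stand-in model file and digest names I made up for illustration.

```python
import hashlib
import tempfile

def sha256_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in streaming chunks
    so large model weights never have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, published_digest: str) -> bool:
    """Reject any model artifact whose digest differs from the one
    the publisher (hypothetically) attested to."""
    return sha256_digest(path) == published_digest

# Illustrative use: a dummy "model file" written to a temp path.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"dummy model weights")
    model_path = f.name

published = sha256_digest(model_path)       # what a registry would publish
print(verify_model(model_path, published))  # True: artifact untampered
```

A real deployment would verify a detached signature over that digest against the publisher's identity, not just compare hashes, but the failure mode it guards against is the same: weights that were silently modified after the red-team evaluation was run.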
-
Mosses is highly recommended. He finds a way to get things done.
Hi LinkedIn friends, I’m exploring new opportunities in software product management, focusing on senior roles in product or operations. With extensive experience across industries like enterprise software development, online retail, and hosting, I've had the pleasure of launching successful platforms, leading cross-functional teams, and building strong partnerships that deliver impactful results. As a chief product officer and co-founder, I led the development of a street food marketplace, and as head of professional services, I championed product solutions that drove revenue and expanded market presence for leading online merchants. Currently, I’m building an app with FlutterFlow and learning Flutter and Dart along the way. It’s been a fun challenge, and I’m enjoying the opportunity to grow my technical skills. I’m open to remote, hybrid, or on-site roles, and would appreciate any leads or connections to opportunities where I can contribute. Feel free to tag, share, or reach out to me directly. Thanks for your support! #OpenToWork #ProductManagement #SoftwareProduct #LearningToCode
-
🔍 Navigating the Complexities of Multilingual NLP: Are We Truly Ready? 🌐

Natural Language Processing (NLP) has made impressive strides, but when it comes to multilingual contexts, the challenges are far from solved. From dialectal variations to cultural nuances, building NLP models that genuinely understand and generate language across borders is a tough nut to crack.

🧠 One major hurdle is 𝐝𝐚𝐭𝐚 𝐬𝐜𝐚𝐫𝐜𝐢𝐭𝐲. While English boasts vast datasets, many languages lack the high-quality, annotated data needed for robust model training. This gap can lead to biased models that underperform in non-English contexts. Additionally, semantic variance across languages means that even simple translations can misinterpret meaning, especially with idioms and regional slang.

📉 Then there's the challenge of 𝐜𝐨𝐧𝐭𝐞𝐱𝐭𝐮𝐚𝐥 𝐮𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠. Languages like Japanese, Korean, or Arabic have unique structures and context dependencies that NLP models often struggle to grasp. This can severely impact the performance of chatbots, virtual assistants, and other AI-driven applications in these languages.

🗣️ To truly unlock NLP's potential, we need more inclusive data practices, better cross-lingual transfer learning techniques, and a deeper commitment to understanding linguistic diversity. The future of NLP isn't just English; it's every language. 🌎

👉 Let's discuss: What challenges have you faced with multilingual NLP? How can we make these technologies more inclusive? Comment below! 👇

👍 Loved the content? Smash that like button!
♻️ Enjoyed it? Repost to share with your network!
🌟 Follow Gilbert Harijanto for more AI & Data Science insights!

#NLP #MultilingualAI #NaturalLanguageProcessing #AI #MachineLearning #DataScience #ArtificialIntelligence #LanguageDiversity #TechInnovation #DataBias
Trust, safety and oversight of Generative AI — Helping businesses adopt AI and LLMs securely
Hmm, looks like I'll have to write something about this again soon. Given all the chaos at the leadership of OpenAI today, I think regulators will be rethinking a lot of the arguments they've heard from Big Tech. This could potentially be a good thing.