Join @magnificent-lamp-34827’s workshop with DeepLearning.AI, where he’ll examine the differences between navigating natural and algorithmic adversarial attacks, focusing on prompt injections and jailbreaks!
🗓️ Tuesday, January 9, 2024 · 10-11am PST
📍Online
🎟️ Register now:
https://bit.ly/4atSoyg
Interested in some workshop pre-reading? Check out Felipe’s blog post on malicious attacks on language models: how to identify them, and methods for detecting prompt injections and jailbreaks.
📚
https://bit.ly/3RPOisR