We’re kicking off our first workshop of the year with monitoring LLMs in production using LangChain and WhyLabs!
Don’t miss the workshop to learn how to:
✅ Evaluate user interactions to monitor prompts, responses, and user interactions
✅ Configure acceptable limits to indicate things like malicious prompts, toxic responses, hallucinations, and jailbreak attempts.
✅ Set up monitors and alerts to help prevent undesirable behavior
🗓️ Tuesday, January 23, 2024 · 10 - 11am PST
📍 Online
🎟️ Register now:
https://bit.ly/3RJ2Byk