WhyLabs just announced the launch of LangKit, enabling detection of risks and safety issues in LLMs, including toxic language, jailbreaks, sensitive data leakage, and hallucinations!
Using LangKit, you can understand and track the behavior of any LLM by extracting 50+ out-of-the-box telemetry signals (see the quick code sketch after this list) to implement:
🛡️ Guardrails: Control which prompts and responses are appropriate for your LLM application in real time.
✅ Evaluations: Validate how your LLM responds to known prompts, both continually and ad hoc, to ensure consistency when modifying prompts or changing models.
🔎 Observability: Observe your prompts and responses at scale by extracting key telemetry data and comparing it against smart baselines over time.
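Here's a minimal sketch of what extracting those telemetry signals looks like with LangKit plus whylogs (assumes `pip install langkit[all]`; the sample prompt/response is illustrative, and exact metric names may vary by version):

```python
import whylogs as why
from langkit import llm_metrics  # registers LangKit's out-of-the-box LLM metrics

# Build a whylogs schema that computes telemetry like toxicity and sentiment
schema = llm_metrics.init()

# Profile a single prompt/response pair; in production you'd log these at scale
results = why.log(
    {"prompt": "Tell me how to pick a lock.", "response": "I can't help with that."},
    schema=schema,
)

# Inspect the extracted telemetry (e.g., prompt- and response-level metric columns)
print(results.view().to_pandas())
```

The same logged profiles can then feed guardrail checks, evaluation runs, and baseline comparisons over time.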
Check out the article on VentureBeat:
https://venturebeat.com/ai/whylabs-launches-langkit-to-make-large-language-models-safe-and-responsible/