# general
  • a

    astonishing-advantage-94636

    10/02/2023, 6:49 PM
    Check out what else is new this week in the R2AI & WhyLabs community! catjam
    whybot whylogs v1.3.7 has been released! See full release notes on Github.
    💬 LangKit 0.0.19 See full release notes on Github.
    📅 Upcoming R2AI & WhyLabs Events:
    • 10/3 AI Book Club: Natural Language Processing with Transformers | Week 1
    • 10/4 Intro to LLM Monitoring in Production with LangKit & WhyLabs
    📺 In case you missed it, watch the recording of our previous event(s) on YouTube:
    • Monitoring LLMs in Production using OpenAI, LangChain & WhyLabs
    📝 Latest Blog(s):
    • Understanding and Monitoring Embeddings in Amazon SageMaker with WhyLabs
    👀 1
  • c

    cool-city-46156

    10/03/2023, 4:21 PM
    Hello everyone, my name is Alex, we help top-notch developers and startups find each other. 🙂 Through our platform, founders and CTOs can quickly build their engineering dream teams. And top IT specialists can find long-term remote work with competitive rates. Today we've launched on Product Hunt and would appreciate your feedback! https://www.producthunt.com/posts/expert-remote P.S. if anyone lost jobs in the current market, feel free to leave your CV here: https://developers.expertremote.io 😉
  • b

    best-laptop-65653

    10/07/2023, 7:11 PM
    Hey all, does anyone have information on the passcode for the meeting?
  • a

    astonishing-advantage-94636

    10/13/2023, 8:45 PM
    Check out what's new this week in the R2AI & WhyLabs community!
    whybot whylogs v1.3.9 has been released! See full release notes on Github.
    💬 LangKit 0.0.21 See full release notes on Github.
    📅 Upcoming R2AI & WhyLabs Events:
    • 10/17 Intro to ML Monitoring: Data Drift, Quality, Bias and Explainability
    • 10/17 AI Book Club: Natural Language Processing with Transformers | Week 3
    • 10/25 Create and Monitor LLM Summarization Apps using OpenAI and WhyLabs
    📺 In case you missed it, watch the recording of our previous event(s) on YouTube:
    • Intro to LLM Monitoring in Production with LangKit & WhyLabs

  • c

    colossal-grass-38911

    10/17/2023, 9:36 PM
    Hi everyone, my name is Anna and I am one of the co-founders of the SF AI Conference. I am working with a group of local organizers on hosting the SF AI Conference on January 19-20, 2024. This AI conference will be uniquely dedicated to cultivating open discussions surrounding the social impacts of AI. We are looking for speakers and sponsors at this time. If anyone is interested, please reach out to me at anna@sanfranciscoai.ai. You can learn more about the SF AI Conference here: https://sanfranciscoai.ai
    👍 2
  • a

    astonishing-advantage-94636

    10/18/2023, 8:51 PM
    Join our workshop next week! Create and Monitor LLM Summarization Apps using OpenAI and WhyLabs https://www.eventbrite.com/e/create-and-monitor-llm-summarization-apps-using-openai-and-whylabs-tickets-730832468587
    👀 2
    🙌 1
  • a

    astonishing-advantage-94636

    10/20/2023, 5:41 PM
    Hey everyone! Check out what's new this week in the R2AI & WhyLabs community!
    whybot whylogs v1.3.10 has been released! See full release notes on Github.
    💬 LangKit 0.0.21 See full release notes on Github.
    📅 Upcoming R2AI & WhyLabs Events:
    • 10/17 AI Book Club: Natural Language Processing with Transformers | Week 4
    • 10/25 Create and Monitor LLM Summarization Apps using OpenAI and WhyLabs
    • 10/26 Intro to ML Monitoring: Data Drift, Quality, Bias and Explainability
    📺 In case you missed it, watch the recording of our previous event(s) on YouTube:
    • Intro to LLM Monitoring in Production with LangKit & WhyLabs
    📝 Latest Blog(s):
    • Understanding and Mitigating LLM Hallucinations
    🙌 2
  • a

    astonishing-advantage-94636

    10/27/2023, 4:12 PM
    Hey everyone! Check out what's new this week in the R2AI & WhyLabs community!
    whybot whylogs v1.3.11 has been released! See full release notes on Github.
    💬 LangKit 0.0.22 See full release notes on Github.
    📺 In case you missed it, watch the recordings of our two new workshops this week on YouTube:
    • Intro to LLM Security - OWASP Top 10 for Large Language Models (LLMs)
    • Create and Monitor LLM Summarization Apps using OpenAI and WhyLabs

    🙌 2
  • w

    white-breakfast-66961

    11/03/2023, 7:11 PM
    🚨 LLMs are susceptible to a range of vulnerabilities like prompt injections, data leakage, misinformation and more. As the guidance around these vulnerabilities rapidly evolves, WhyLabs is offering a free intro to LLM security workshop to help teams tackle the OWASP Top 10 security challenges for Large Language Model Applications! Whether you are integrating with a public API or running a proprietary model, don’t miss the chance to learn how to implement monitoring for common security issues and adopt best practices to ensure your LLMs are safe and responsible!
    Date: Nov 16
    Time: 10:00am PST
    Register now: https://www.eventbrite.com/e/intro-to-llm-security-owasp-top-10-for-large-language-models-llms-tickets-751792340127?aff=r2slack
    🎯 1
    🔥 1
    💡 1
    👍 2
  • w

    white-breakfast-66961

    11/15/2023, 5:50 PM
    Today DeepLearning.AI launched a new short course built in collaboration with WhyLabs on monitoring the safety and quality of LLMs! In Quality and Safety for LLM Applications, taught by WhyLabs Senior Data Scientist Bernease Herman, you’ll learn to:
    🔍 Spot hallucinations with different methods
    🛡️ Detect jailbreak attempts using sentiment analysis and other models
    🔐 Safeguard against data leakage of personal or internal company information
    👀 Build your own active monitoring guardrail system
    Learn more about the course and enroll now for free: https://bit.ly/4910JZk
    New course with WhyLabs_ Quality and Safety for LLM Applications.mp4
    🎯 2
    🎉 2
    🙌 3
    pikadance 1
    partywizard 2
  • w

    white-breakfast-66961

    11/16/2023, 9:10 PM
    I hope some of you were able to attend the LLM Security Workshop today! Our next workshop is Nov 30th on Monitoring LLMs in Production with Hugging Face and WhyLabs (details below). If you have any feedback, or any other topics you'd like to see covered in 2024 - let us know!
    Monitoring LLMs in Production with Hugging Face and WhyLabs
    Nov 30th | 10:00 am PST, 1:00 pm EST
    Register now to follow along and ask your questions live: https://bit.ly/3sIXKV9
    💯 1
    partywizard 2
    whybot 1
    👍 2
  • o

    orange-vr-53429

    11/22/2023, 4:14 PM
    Hi everyone, Does anyone have any good recommendations for AI headshot generators they have used?
  • g

    glamorous-hospital-27667

    11/27/2023, 8:10 AM
    How do I interpret the output below? I ran the input_output module and am trying to get response.relevance_to_prompt:
    {'counts/n': 1, 'counts/null': 0, 'counts/nan': 0, 'counts/inf': 0, 'types/integral': 0, 'types/fractional': 1, 'types/boolean': 0, 'types/string': 0, 'types/object': 0, 'types/tensor': 0, 'distribution/mean': 0.311745822429657, 'distribution/stddev': 0.0, 'distribution/n': 1, 'distribution/max': 0.311745822429657, 'distribution/min': 0.311745822429657, 'distribution/q_01': 0.311745822429657, 'distribution/q_05': 0.311745822429657, 'distribution/q_10': 0.311745822429657, 'distribution/q_25': 0.311745822429657, 'distribution/median': 0.311745822429657, 'distribution/q_75': 0.311745822429657, 'distribution/q_90': 0.311745822429657, 'distribution/q_95': 0.311745822429657, 'distribution/q_99': 0.311745822429657, 'cardinality/est': 1.0, 'cardinality/upper_1': 1.000049929250618, 'cardinality/lower_1': 1.0}
  • a

    acoustic-painter-98305

    11/27/2023, 6:48 PM
    Hi @glamorous-hospital-27667 - these are output metrics generated by whylogs; the results are statistical data profiles for the dataset you are summarizing. Since you are only running this on a single prompt-and-response record, you'll notice the same value for all distribution metrics (0.311745822429657). If you ran this across a larger dataset, you would see different values for each of the distribution metrics (max, min, mean, stddev, etc.). You can grab any of these - say mean or max - to extract your single score, since they are all the same (0.311745822429657). The response.relevance_to_prompt computed column contains a similarity score between the prompt and response: the higher the score, the more relevant the response is to the prompt. The score is computed as the cosine similarity between embeddings generated from the prompt and the response, using Hugging Face's sentence-transformers/all-MiniLM-L6-v2 model.
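A minimal sketch of the extraction described above, assuming LangKit's llm_metrics.init() schema helper and whylogs v1's to_pandas() profile view; the prompt/response strings here are illustrative, and the column name is taken from the run in the question:

```python
import whylogs as why
from langkit import llm_metrics  # assumed LangKit entry point that registers the LLM metric UDFs

# Register LangKit's LLM metrics, which include response.relevance_to_prompt.
schema = llm_metrics.init()

# Profile a single prompt/response record, as in the question above.
results = why.log(
    {"prompt": "What does whylogs do?", "response": "whylogs profiles datasets and logs telemetry."},
    schema=schema,
)

# Flatten the profile into a pandas DataFrame indexed by column name.
summary = results.view().to_pandas()

# With only one record, mean == max == min, so any distribution metric works as the score.
score = summary.loc["response.relevance_to_prompt", "distribution/mean"]
print(f"relevance_to_prompt: {score:.3f}")
```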
  • a

    acoustic-painter-98305

    11/27/2023, 7:50 PM
    @glamorous-hospital-27667 - Please let me know if this helps. If you have any other questions or need any assistance, let me know. Happy to jump on a call with you if it helps!
  • w

    wooden-glass-33314

    11/28/2023, 10:33 PM
    Is there a way to serialize a profile so I can store it somewhere like a DB? I’d like to recall this profile at a later point and merge/compare it with another profile.
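A minimal sketch of one way to do this, assuming whylogs v1's DatasetProfileView write/read and merge APIs (the file name and columns are illustrative):

```python
import pandas as pd
import whylogs as why
from whylogs.core import DatasetProfileView

# Profile two batches of data.
view_a = why.log(pd.DataFrame({"score": [0.1, 0.2, 0.3]})).view()
view_b = why.log(pd.DataFrame({"score": [0.4, 0.5]})).view()

# Persist one profile view as a binary file; the same bytes could be stored
# as a BLOB column in a database instead of on disk.
view_a.write("profile_a.bin")

# Later: load the stored profile back and merge it with another profile
# so the two can be compared as one merged view.
restored = DatasetProfileView.read("profile_a.bin")
merged = restored.merge(view_b)
print(merged.to_pandas())
```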
  • w

    white-breakfast-66961

    12/01/2023, 4:23 PM
    It’s easy to get started with Large Language Models (LLMs), but it’s hard to move beyond the proof of concept, especially when you don’t know how to evaluate the quality of the LLM-powered experience. And unfortunately, the most popular evaluation approaches - eyeballing or asking the LLM to self-evaluate - are both flawed. Join our next workshop where we will explore 7 different approaches to calculating metrics for evaluating the quality of LLMs for your specific use case, so you never have to eyeball again!
    Date: December 12, 2023 | 10:00 am PST
    Speaker: Alessya Visnjic, CEO and Co-founder of WhyLabs
    Register now: https://bit.ly/46DuywP
    ⭐ 1
    ❤️ 2
    📚 1
  • d

    dry-finland-99540

    12/06/2023, 4:21 PM
    Leveraging Tools for AI Innovation: It's Not Just What You Use, It's How You Use It
    In the fast-moving world of AI, it seems like a new tool or framework pops up almost every week! It's an exciting time for innovators and technologists alike, but amidst this tool overload, I see many of us get caught up in a common misconception: that the best tools automatically lead to the best solutions. But the reality? It's not the tools themselves that set us apart - it's how we use them.
    During my recent project, I learned an invaluable lesson: even the most basic tool, when wielded with expertise and creativity, can outperform its more sophisticated counterparts. It's like having a Swiss Army knife and only using the bottle opener; we're missing out on the plethora of possibilities!
    As we navigate an ecosystem brimming with options for AI development, let’s remember that our competitive edge lies not in the tools we select, but in our capacity to push those tools to their limits. It’s about innovation, not just in creation but in application.
    Whether you're a developer, a data scientist, or a tech enthusiast, I challenge you to dive deeper into the tools you have at your disposal. How can you exploit their full potential? How can the "not so great" become your secret weapon? Let's use our tools to their maximum, experiment with new combinations, and drive forward with solutions that are not just technologically advanced but also creatively unparalleled.
  • w

    white-breakfast-66961

    12/21/2023, 6:12 PM
    Join @magnificent-lamp-34827’s workshop with DeepLearning.AI where he’ll examine differences in navigating natural and algorithmic adversarial attacks, concentrating on prompt injections and jailbreaks!
    🗓️ Tuesday, January 9, 2024 · 10 - 11am PST
    📍 Online
    🎟️ Register now: https://bit.ly/4atSoyg
    Interested in some workshop pre-reading? Check out Felipe’s blog post on malicious attacks on language models, how to identify them, and methods for detecting prompt injections and jailbreaks. 📚 https://bit.ly/3RPOisR
    🎉 3
    🎯 1
  • w

    white-breakfast-66961

    12/29/2023, 7:19 PM
    We’re kicking off our first workshop of the year with monitoring LLMs in production using LangChain and WhyLabs! Don’t miss the workshop to learn how to:
    ✅ Evaluate user interactions to monitor prompts and responses
    ✅ Configure acceptable limits to indicate things like malicious prompts, toxic responses, hallucinations, and jailbreak attempts
    ✅ Set up monitors and alerts to help prevent undesirable behavior
    🗓️ Tuesday, January 23, 2024 · 10 - 11am PST
    📍 Online
    🎟️ Register now: https://bit.ly/3RJ2Byk
    🎯 1
    🙌 2
    🌟 2
  • w

    white-breakfast-66961

    01/08/2024, 7:30 PM
    If you’re in Seattle, come hang out and grab drinks with Seattle’s MLOps Community on the 25th! 🍻 This casual meetup is the perfect opportunity to network, share stories, and learn best practices - all experience levels are welcome!
    ⏰ Thursday, January 25 at 5:30pm
    📍 Stoup (formerly Optimism) Brewing in Capitol Hill
    🎟️ https://bit.ly/3S3zh6S
    The first 20 people to arrive will get their first drink for free! 🎉
    👋 We hope to see you there!
    🙌🏽 1
    🎉 2
    🍻 2
    🙌 1
  • m

    miniature-island-30599

    02/19/2024, 12:54 AM
    Hi, I’m a researcher working on LLMs for agents. Here is my broad survey of current papers: https://github.com/shure-dev/Awesome-LLM-Papers-Toward-AGI
  • w

    white-breakfast-66961

    03/01/2024, 6:10 PM
    The fact that AI has quickly become such a big part of our day-to-day lives highlights how much we need solid rules to keep everything in check. That's exactly where the European Union's Artificial Intelligence Act (EU AI Act) comes in - a groundbreaking move to create a legal framework to ensure AI development is ethical and secure. Here's a quick rundown of what we cover in this blog:
    ❗️ The four categories of risk associated with AI systems.
    📜 The requirements for everyone involved with AI systems, from creators to distributors.
    🌍 How this EU legislation could shape global AI policy and what it means for companies worldwide.
    💰 The repercussions for failing to comply with the Act, including the financial and operational risks.
    🧩 Implications for various AI stakeholders, including developers, EU teams, and international entities.
    ✅ Guidance on preparing for compliance, especially for high-risk AI applications.
    Read the full blog post here: https://bit.ly/4bTv1Pk
    🎯 1
    💡 2
  • m

    magnificent-lamp-34827

    03/06/2024, 7:58 PM
    Hello, Community! 🌏 I wanted to share an update on the latest improvements to our open-source libraries - whylogs and LangKit! 💻
    WhyLabs open source updates:
    📊 whylogs v1.3.25 has been released! whylogs is the open standard for data logging & AI telemetry. This week's update includes:
    • Support for sending zipped segmented reference profiles to WhyLabs
    • Dockerfile updates and tests
    • Added transaction example notebook
    💬 LangKit v0.0.31 has been released! LangKit is an open-source text metrics toolkit for monitoring language models.
    • Add configurable Detoxify models for use with the toxicity module
    • Update Response Hallucination example
    • Better error messages for the response hallucination module
    ⭐ 2
    👏 3
  • m

    magnificent-lamp-34827

    04/04/2024, 4:26 PM
    Hello, Community! 🌏 I wanted to share an update on the latest improvements to our open-source library whylogs! 💻
    WhyLabs open source updates:
    📊 whylogs is the open standard for data logging & AI telemetry. Last month's updates include:
    • Ranking Metrics
      ◦ NDCG corrections (whylogs v1.3.26)
      ◦ MAP corrections (whylogs v1.3.27)
      ◦ Better Precision/Recall/MRR (whylogs v1.3.28)
    • whylabs-client update to 0.6.0 (whylogs v1.3.28)
    • Example updates - Ranking Metrics, BentoML
    👏 4
    ⭐ 3
  • w

    white-breakfast-66961

    04/08/2024, 8:26 PM
    WhyLabs is a finalist for GeekWire's Innovation of the Year award for the launch of LangKit - the observability and safety standard for LLMs! 🌟🏆 We built LangKit to help tackle the challenges of monitoring and running LLMs in production, allowing organizations to quickly detect risks and safety issues in open-source and proprietary LLMs, including toxic language, jailbreaks, sensitive data leakage, and hallucinations. Help us take home the award by casting your vote now! 🗳️: https://bit.ly/4adqntI
    🙌 4
    🎉 2
  • w

    white-breakfast-66961

    04/29/2024, 6:06 PM
    Last week we announced the launch of the WhyLabs AI Control Center, offering teams real-time control over their AI applications! The new iteration of WhyLabs is built on top of our award-winning AI observability platform, with new capabilities that equip teams with safeguards that prevent unsafe interactions in under 300 milliseconds with 93% threat detection accuracy. Learn more by attending our live webinar and Q&A session on Thursday, May 2nd with WhyLabs' CEO and Co-Founder Alessya Visnjic and CTO and Co-Founder Andy Dang! Register now!
    🎯 2
  • m

    magnificent-horse-99670

    05/02/2024, 1:30 PM
    New law to regulate use of AI approved in Bahrain https://www.linkedin.com/feed/update/urn:li:activity:7191745027616739328/
    👍 1
  • b

    bulky-soccer-53116

    05/17/2024, 5:05 AM
    Team, any courses for learning LLM security for infosec / GRC professionals? Thanks, Sam.
  • w

    white-breakfast-66961

    05/28/2024, 3:27 PM
    Don’t miss our webinar this week on chain-of-thought prompting and constitutional AI to improve the quality and safety of LLM responses! Register now: Safety in LLMs Using Chain-of-Thought: Lessons From Constitutional AI