# general
I'd love to hear views on the security implications of LLMs/AI. OpenAI recently enabled third-party plugins, which even allow local code execution in a sandbox. This could be a big headache for security experts, given the many attacks possible via carefully crafted prompts, some of which I can think of:
- Inject malware onto a client machine
- Launch an attack on some website
- Launch a prompt bomb (keep generating text after text)
- Launch a recursion attack on the chat (continuously feed each reply back in as the next prompt)
- Trick the chatbot (if connected to your data source) into revealing sensitive information
- Other well-known attacks on multimodal inputs (image, video, audio), like the pixel flood attack

Just plugging my post here if you'd like to discuss there: https://www.linkedin.com/posts/lalitpagaria_openai-recently-enabled-third-party-plugi[…]340317196288-tDaf and https://twitter.com/PagariaLalit/status/1639118129778552832?s=20
🎯 2
👀 1
Their developers must have worked on cleaning the parameters at the proxy level before passing them on.
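A rough sketch of what such proxy-level parameter cleaning might look like (the allow-list, field names, and injection pattern are all illustrative assumptions, not a description of OpenAI's actual implementation):

```python
import re

# Assumed allow-list of plugin parameters the proxy will forward.
ALLOWED_PARAMS = {"query", "lang", "page"}

# Crude pattern for likely prompt-injection payloads (illustrative only).
INJECTION_PAT = re.compile(r"(ignore (all )?previous|system prompt)", re.I)

def clean_params(params: dict) -> dict:
    """Drop unknown fields and values that look like injected instructions."""
    cleaned = {}
    for key, value in params.items():
        if key not in ALLOWED_PARAMS:
            continue  # drop fields the plugin is not allowed to pass
        if isinstance(value, str) and INJECTION_PAT.search(value):
            continue  # drop likely injection attempts
        cleaned[key] = value
    return cleaned
```

Pattern-matching alone is easy to evade, so real deployments would presumably layer this with output filtering and sandboxing rather than rely on it.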