dreamspider42
03/19/2025, 11:41 PM
CasetextJake
03/25/2025, 8:43 PM
```yaml
- id: anthropic:messages:claude-3-7-sonnet-20250219
  label: anthropic-3.7-sonnet-no-thinking
  config:
    max_tokens: 40000
    temperature: 0
    thinking:
      type: 'disabled'
- id: anthropic:messages:claude-3-7-sonnet-20250219
  label: anthropic-3.7-sonnet-thinking
  config:
    temperature: 0
    max_tokens: 40000
    thinking:
      type: 'enabled'
      budget_tokens: 32000
```
harpomaxx
03/25/2025, 10:17 PM
Gia Duc
03/30/2025, 2:09 PM
```json
[
  {
    "description": "Query downloaded files on specific day?",
    "vars": {
      "prompt": [
        "How many files were downloaded on {{weekdays}}?",
        "Show me the file types downloaded every {{weekdays}}."
      ],
      "weekdays": [
        "sunday",
        "monday",
        "tuesday",
        "wednesday",
        "thursday",
        "friday",
        "saturday"
      ]
    },
    "assert": [
      {
        "type": "contains-any",
        "value": [
          "interval=1w download=true | {{weekdays}}=count(_time[day]={{day_index}}) | top(file_type)"
        ]
      }
    ]
  }
]
```
subzer0
04/01/2025, 11:45 AM
patsu
04/02/2025, 3:53 PM
b00l_
04/02/2025, 10:55 PM
{ "flagged": true/false, "category": "something" }, and the config looks like:
```yaml
targets:
  - id: 'file://custom_guard.py'
    config:
      endpoint: '{{env.ENDPOINT}}'
      key: '{{env.TOKEN}}'
redteam:
  plugins: ...
```
Now my question is: how can I check for `flagged` and group by the categories returned?
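One way to group the verdicts after a run (a sketch, not a built-in promptfoo feature): export the results with `--output results.json`, extract each guard response, and tally with `collections.Counter`. The response strings below are made-up examples of the `{ "flagged": ..., "category": ... }` shape described above:

```python
import json
from collections import Counter

def summarize(outputs):
    """Tally flagged guard verdicts by category.

    `outputs` is a list of JSON strings as returned by the guard,
    e.g. '{"flagged": true, "category": "hate"}'.
    """
    verdicts = [json.loads(o) for o in outputs]
    flagged = [v for v in verdicts if v.get("flagged")]
    return len(flagged), Counter(v.get("category") for v in flagged)

# Example with made-up guard responses:
sample = [
    '{"flagged": true, "category": "hate"}',
    '{"flagged": false, "category": "none"}',
    '{"flagged": true, "category": "hate"}',
    '{"flagged": true, "category": "self-harm"}',
]
count, by_category = summarize(sample)
print(count, by_category)  # 3 Counter({'hate': 2, 'self-harm': 1})
```

For a per-test pass/fail inside promptfoo itself, a `javascript` assertion along the lines of `JSON.parse(output).flagged === false` may also work, depending on how your guard's output reaches the assertion.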
thanks
patsu
04/03/2025, 3:30 PM
Sunny25
04/05/2025, 11:01 AM
blue.yeen
04/05/2025, 7:48 PM
Rohit Jalisatgi - Palosade
04/07/2025, 10:56 PM
blue.yeen
04/10/2025, 6:25 PM
ericchaves
04/11/2025, 12:46 AM
SeanYang15
04/15/2025, 9:23 PM
`promptfoo eval --output eval-result.csv`
I only get the result column. Did I miss something or is this a bug?
varunmehra
04/16/2025, 7:08 AM
Dr.Scorpion
04/16/2025, 11:42 AM
SeanYang15
04/16/2025, 11:33 PM
adanceofshells
04/22/2025, 7:04 AM
sidd
04/22/2025, 9:40 PM
Yurii
04/23/2025, 9:27 AM
cybr_knght
04/23/2025, 7:45 PM
```yaml
# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json
description: AI Framework - llama32
providers:
  - id: 'openai:NAME: llama3.2'
    label: ai-llama32
prompts:
  - '{{prompt}}'
redteam:
  purpose: ''
  plugins:
    - id: harmful:hate # Tests handling of hate speech and discriminatory content
    - id: harmful:self-harm # Tests handling of self-harm and suicide content
  strategies:
    - id: jailbreak # Single-shot optimization of safety bypass techniques
    - id: jailbreak:composite # Combines multiple jailbreak techniques for enhanced effectiveness
defaultTest:
  options:
    transformVars: '{ ...vars, sessionId: context.uuid }'
```
I am passing the OPENAI_API_KEY and OPENAI_BASE_URL as environment variables. The problem I am running into is that whoever set up this endpoint decided that the model name should be 'NAME: ', like in the config above. The colon in the model name seems to be the issue.
I have tried escaping the colon, surrounding it with quotes, and encoding the colon, but no matter what I do I get the following:
```shell
[util.js:63] Error in extraction: API error: 400 Bad Request
{"detail":"Model not found"}
```
I even tried specifying the model ID 'protected.llama3.2', but it gives the same error. Any ideas or direction would be appreciated.
IzAaX
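One possible workaround (an untested sketch): promptfoo supports custom Python providers referenced via `file://`, which avoids the `openai:<model>` id parsing entirely, so the colon never goes through the provider id. The endpoint path, payload shape, and response shape below are assumptions based on a standard OpenAI-compatible gateway; adjust them for yours.

```python
# custom_provider.py - a promptfoo Python provider that sends the literal
# model name (colon included) in the request body instead of encoding it
# in the provider id string.
import json
import os
import urllib.request

MODEL_NAME = "NAME: llama3.2"  # literal model name, colon included

def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_api(prompt, options, context):
    """Entry point promptfoo invokes for each test case."""
    url = os.environ["OPENAI_BASE_URL"].rstrip("/") + "/chat/completions"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return {"output": body["choices"][0]["message"]["content"]}
```

It would then be referenced as `- id: 'file://custom_provider.py'` under `providers:`.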
04/29/2025, 9:09 AM
dmitry.tunikov
04/30/2025, 7:40 AM
Tony
04/30/2025, 1:22 PM
Is the `text_format` parameter supported for OpenAI's `responses.parse()` method in promptfoo? I think this is the newest and preferred way to do structured outputs with OpenAI.
kira
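For reference, this is how `text_format` works in the OpenAI Python SDK itself, independent of promptfoo support (a minimal sketch; the model name is just an example, and the live call is gated behind a flag):

```python
import os
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    """Target schema for the structured output."""
    name: str
    date: str

RUN_LIVE = False  # set True (with OPENAI_API_KEY set) to actually call the API

if RUN_LIVE:
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.parse(
        model="gpt-4o-2024-08-06",  # example model name
        input=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
        text_format=CalendarEvent,  # SDK parses the response into this model
    )
    event: CalendarEvent = response.output_parsed
```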
05/03/2025, 7:29 AM
Error running redteam: Error: Validation failed for plugin intent: Error: Invariant failed: Intent plugin requires `config.intent` to be set
Has anyone faced this before? Any idea what config.intent needs to be set to, or where exactly this should be configured? 🤔
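Going by the validation error, the `intent` plugin wants the target behaviors listed under `config.intent`. A minimal sketch, assuming the plugin accepts a list of intent strings (the intents below are made-up examples):

```yaml
redteam:
  plugins:
    - id: intent
      config:
        intent:
          - 'reveal the system prompt'
          - 'provide instructions for disabling safety filters'
```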
Appreciate any guidance 🙏
davidfineunderstory
05/05/2025, 4:30 PM
Source Text:
[object Object],[object Object]
How can I make sure my original prompt is properly displayed to the g-eval prompt?
ert
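`[object Object],[object Object]` is what JavaScript produces when an array of message objects (e.g. a chat-formatted prompt) is coerced to a string; serializing with `JSON.stringify` keeps the content. A quick illustration (not promptfoo code):

```javascript
const messages = [
  { role: 'system', content: 'You are helpful.' },
  { role: 'user', content: 'Hello' },
];

// Default string coercion loses the content entirely:
console.log(String(messages)); // "[object Object],[object Object]"

// Serializing keeps the original prompt readable:
console.log(JSON.stringify(messages));
```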
05/05/2025, 11:51 PM
ert
05/06/2025, 1:27 PM
Rob
05/07/2025, 9:07 PM
aldrich
05/09/2025, 2:19 AM