
Conversation

@danenania (Contributor)

Testing feedback links with prompt injection vulnerability


@promptfoo-scanner-staging (bot) left a comment

👍 All Clear

I reviewed the new LLM integration functions in llm_vuln.ts. While the code includes a function that accepts user-controlled system prompts (which the comment correctly identifies as dangerous), I couldn't find sufficient evidence of direct exploitability through normal application interfaces to report it as a confirmed vulnerability.

Minimum severity threshold for this scan: 🟡 Medium
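
The scanner's finding hinges on a common pattern: an LLM helper whose system prompt is caller-supplied. As a hedged sketch (the names below are illustrative assumptions, not the actual contents of llm_vuln.ts), the risky shape and a safer fixed-prompt variant look roughly like this:

```typescript
// Illustrative only — these names are assumptions, not code from llm_vuln.ts.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Risky shape: the system prompt is whatever the caller passes in. If that
// string can originate from untrusted input, the caller can override any
// safety or policy instructions (prompt injection).
function buildMessages(systemPrompt: string, userInput: string): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput },
  ];
}

// Safer variant: pin the system prompt server-side and confine all external
// text to the user role.
const FIXED_SYSTEM_PROMPT = "You are a support assistant. Follow policy.";

function buildMessagesSafe(userInput: string): ChatMessage[] {
  return [
    { role: "system", content: FIXED_SYSTEM_PROMPT },
    { role: "user", content: userInput },
  ];
}
```

Pinning the system prompt server-side and keeping external text in the user role is the usual mitigation; actual exploitability then turns on whether any normal application interface lets untrusted callers reach the unsafe variant, which is exactly the evidence the scanner reports it could not find.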
