Show HN: Firewall for LLMsGuard Against Prompt Injection PII Leakage Toxicity https://ift.tt/iADIbEg
Show HN: Firewall for LLMs–Guard Against Prompt Injection, PII Leakage, Toxicity Hey HN, We're building Aegis, a firewall for LLMs: a guard against adversarial attacks, prompt injections, toxic language, PII leakage, etc. One of the primary concerns entwined with building LLM applications is the chance of attackers subverting the model’s original instructions via untrusted user input, which unlike in SQL injection attacks, can’t be easily sanitized. (See https://ift.tt/roZMAB1 for the mildest such instance.) Because the consequences are dire, we feel it’s better to err on the side of caution, with something mutli-pass like Aegis, which consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model. We'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://ift.tt/3K47hTe . If you want to play around with it without creating an account, try the playground: https://ift.tt/O9KFAw7 . If you're interested in or need help using Aegis, have ideas, or want to contribute, join our Discord ( https://ift.tt/SDX31cE ), or feel free to reach out at founders@automorphic.ai. Excited to hear your feedback! Repository: https://ift.tt/3K47hTe Playground: https://ift.tt/O9KFAw7 https://ift.tt/O9KFAw7 June 29, 2023 at 01:36AM
Show HN: Firewall for LLMsGuard Against Prompt Injection PII Leakage Toxicity https://ift.tt/iADIbEg
Reviewed by Manish Pethev
on
June 29, 2023
Rating:
No comments:
If you have any suggestions please send me a comment.