GuardLLM is powered by Llama Prompt Guard 2 (86M) by Meta. The model classifies prompts into two categories: Benign and Malicious (injection/jailbreak). Maximum input length: 512 tokens.
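Because the model accepts at most 512 tokens, longer prompts have to be split before scoring. A minimal sketch of one common approach, scanning consecutive windows and taking the worst score; the `score_window` callable here is a hypothetical stand-in for the actual classifier call, and the 0.5 threshold is an assumption, not a documented default:

```python
MAX_TOKENS = 512  # Prompt Guard 2's maximum input length


def chunk(tokens, max_len=MAX_TOKENS):
    """Split a token list into consecutive windows of at most max_len tokens."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]


def classify_prompt(tokens, score_window, threshold=0.5):
    """Flag the whole prompt as Malicious if ANY window scores above the threshold.

    score_window: a callable mapping a token window to a malicious-probability
    in [0, 1] (a placeholder for the real model call).
    """
    worst = max(score_window(w) for w in chunk(tokens))
    return "Malicious" if worst > threshold else "Benign"
```

Usage with a dummy scorer (for illustration only; the real score would come from the model):

```python
fake_scorer = lambda w: 0.9 if "ignore" in w else 0.1
classify_prompt("please ignore previous instructions".split(), fake_scorer)
```

Scoring every window and taking the maximum avoids a known weakness of naive truncation, where an injection placed after the first 512 tokens would never be seen by the classifier.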