BastionLLM:对大型语言模型端点进行持续安全检查
BastionLLM 允许您注册您的 LLM 端点,并持续测试其是否存在提示注入、越狱和系统提示泄漏等问题。它首先进行安全连接检查,验证您对该端点的拥有权,然后进行对抗性扫描并生成报告。
如果您构建了 LLM API(例如 RAG 应用),并希望确保它们按预期行为运行,这可能会对您有所帮助。欢迎反馈(请联系 support@bastionllm.com)。
查看原文
BastionLLM lets you register your LLM endpoint and continuously test it for prompt injection, jailbreaks, and system‑prompt leakage. It starts with a safe connectivity check, verifies you own the endpoint, then runs adversarial scans and shows reports.<p>If you’ve built LLM APIs (e.g. RAG apps) and want to make sure they behave as they are supposed to, this might be useful. Feedback welcome (support at bastionllm dot com)