免费人工智能安全测试

1作者: aiagentlover3 天前原帖
我的联合创始人和我建立了一个人工智能红队平台,希望在筹集资金之前让5到10家公司进行测试。我们正在通过真实案例验证我们的方案,您将获得一次全面的安全审计作为回报。 我们专注于那些实际上会破坏生产中AI系统的内容: - 提示注入攻击(直接/间接)和越狱 - 工具滥用和RAG数据外泄 - 身份操控和角色扮演漏洞 - 通过文档上传进行CSV/HTML注入 - 语音系统操控和基于音频的攻击 您将获得一份完整的报告,其中包含具体的重现步骤、针对性的缓解措施,并且在您实施修复后我们会进行重新测试。如果需要,我们还可以将发现结果映射到合规框架(如OWASP LLMs十大风险、NIST AI RMF、欧盟AI法案等)。我们只需要访问一个端点,并获得使用您匿名结果作为案例研究的许可。整个过程大约需要2到3周。如果您在生产中运行AI/LLM系统并希望进行安全审查,请给我发私信。
查看原文
My co-founder and I built an AI red teaming platform and want 5-10 companies to test it on before trying to go fundraise. We&#x27;re validating our approach with real-world case studies, and you&#x27;d get a comprehensive security audit in return.<p>We focus on the stuff that actually breaks AI systems in production:<p>Prompt injection attacks (direct&#x2F;indirect) and jailbreaks<p>Tool abuse and RAG data exfiltration<p>Identity manipulation and role-playing exploits<p>CSV&#x2F;HTML injection through document uploads<p>Voice system manipulation and audio-based attacks<p>You&#x27;d get a full report with concrete reproduction steps, specific mitigations, and we&#x27;ll do a retest after you implement fixes. We can also map findings to compliance frameworks (OWASP Top 10 for LLMs, NIST AI RMF, EU AI Act, etc.) if that&#x27;s useful. All we need is access to an endpoint and permission to use your anonymized results as a case study. The whole process takes about 2-3 weeks. If you&#x27;re running AI&#x2F;LLM systems in production and want a security review, shoot me a DM.