Ask HN: A question for the math and engineering communities

1 point · by Patternician · 5 days ago

As LLMs become stronger, we’re seeing a new situation:

A human discovers a mathematical method or result, but the formal proof is generated (and even cross-verified) by multiple LLMs, even when the original author can’t fully reproduce the proof themselves.

Should such AI-generated proofs be considered valid and publishable?

What standards should apply when the idea is human-created but the proof is AI-derived?

Curious to hear opinions from mathematicians, engineers, researchers, and journal editors. This feels like an important shift in how we think about proofs and authorship.