Ask HN: Maintainers, are LLM-only users regularly cluttering your issues/PRs?
I'm asking this because I recently opened a PR to fix a vulnerability in an OSS project (RCE via pickle deserialization in Python). A day later, I got a fully LLM-generated comment claiming my approach was wrong, suggesting I rewrite it differently, and telling the maintainers the commenter could contribute "if the project is open to a more surgical refactoring."

It's astonishing how often these encounters have been happening lately.

I'd love to hear from other contributors and maintainers whether this happens to them and how they deal with it.
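For readers unfamiliar with the vulnerability class mentioned above: calling `pickle.loads` on attacker-controlled bytes is arbitrary code execution, because a crafted object's `__reduce__` method tells the unpickler which callable to invoke during deserialization. A minimal, harmless sketch (using `print` as a stand-in for a malicious payload; this is a generic illustration, not code from the project in question):

```python
import pickle

class Evil:
    def __reduce__(self):
        # On unpickling, the unpickler calls print("arbitrary code ran").
        # A real exploit would return something like (os.system, ("...",)).
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Evil())

# Simply loading the bytes runs the callable -- no method call needed.
pickle.loads(payload)
```

This is why such fixes typically replace pickle with a data-only format (e.g. JSON) for untrusted input rather than trying to sanitize the pickled bytes.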