Ask HN: Is it dishonest to use AI tools for a PhD literature review?

5 points | by latand6 | 15 days ago | original post
I'm a PhD student in structural engineering. My dissertation topic is using LLM agents to automate FEA calculations in the software commonly used by Ukrainian companies. I'm writing my literature review now, and I've vibe-coded a personal local dashboard that helps me manage the process.

I use LLM agents to fill in the LaTeX template in a GitHub repo (this automates the formatting, and I can use an IDE to view diffs). Then I run ChatGPT Pro to collect all the papers relevant to my topic, and how they are relevant. Next I gather the ones available online, where the PDFs can be obtained. I keep a dedicated folder structure of plain-text files such as Markdown and JSON.

The idea of the dashboard is this: I run Codex through a web chat to identify quotes relevant to my dissertation topic, and how they are relevant; it combines them into a set of claims, each linked to its supporting quote. Then I manually review each quote and each claim and tick the boxes. There is also a button that runs a verification script, which validates that the exact quote really IS in the PDF. This way I collect real evidence and pick up new insights while reading the material.

I remember doing all of this manually during my master's degree in the UK. It was a terrible, tedious experience, partly because I have ADHD.

So my question is: is this dishonest?

I can defend every claim in the review, because I built the verification pipeline and manually reviewed each one. Arguably, I understand the literature better than if I had read everything myself and highlighted it by hand. But I know that many universities would treat any AI-generated text as academic misconduct.

I don't quite understand the principle behind that position. If you outsource proofreading, nobody cares. The same goes for using Grammarly. But if I use an LLM to generate text from verified, structured, human-reviewed evidence, it might be considered dishonest.
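For anyone curious what the quote-verification step might look like, here is a minimal sketch. All function and field names are hypothetical (the post doesn't show its actual script), and it assumes the PDF text has already been extracted to plain text, e.g. with a library like pypdf. The only non-obvious part is normalizing whitespace, since line wrapping in a PDF's text layer would otherwise cause false negatives on exact-match checks:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, drop soft hyphens, and collapse runs of whitespace,
    so line breaks in the extracted PDF text don't break matching."""
    text = text.replace("\u00ad", "")  # soft hyphens from justified PDFs
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_in_source(quote: str, source_text: str) -> bool:
    """True if the quote appears verbatim (modulo whitespace and case)."""
    return normalize(quote) in normalize(source_text)

def verify_claims(claims: list[dict], sources: dict[str, str]) -> list[dict]:
    """Attach a 'verified' flag to each claim record.

    claims  : [{"claim": ..., "quote": ..., "source": <key>}, ...]
              (the kind of JSON records the dashboard might store)
    sources : mapping from a source key to its extracted plain text
    """
    for claim in claims:
        text = sources.get(claim["source"], "")
        claim["verified"] = quote_in_source(claim["quote"], text)
    return claims
```

A substring check like this only confirms the quote exists; it says nothing about whether the claim built on the quote is a fair reading of the paper, which is exactly why the manual review step still matters.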