Launch HN: WorkDone (YC X25) – AI Audit for Medical Records

Posted by digitaltzar · 9 days ago
Hey HN! We're Dmitry, Sergey, and Alex, co-founders of WorkDone. In one sentence: we built an AI product that audits medical documentation in real time to catch and fix errors before they turn into treatment mistakes or denied insurance claims.

We got interested in this problem when we saw how often small documentation slip-ups can snowball into huge financial, legal, and even life-threatening outcomes. Sometimes it's just a mistyped medication time or a missing discharge note - basic stuff - but when you're dealing with claims and regulatory rules, a minor error can trigger an automatic denial. A careless copy-paste on a discharge note will be caught by the insurance provider and cost a stressful appeal. By the time an overworked clinical or compliance team discovers it, it's usually too late to just fix it. Our own experiences hit close to home: a member of Dmitry's family faced grave consequences from a misread lab result, and Sergey comes from a family of medical professionals who have battled these issues up close.

Here's our demo if you'd like to take a look: https://www.loom.com/share/add16021bb29432eba7f3254dd5e9a75

Our solution is a set of AI agents that plug directly into a clinic's or hospital's EHR/EMR system. As clinicians go about their daily routines, WorkDone continuously monitors the records. If it spots something that looks off - like a missing signature or a suspicious timestamp - it asks the responsible staff member to double-check and correct it on the spot. We want to prevent errors from becoming big headaches and wasted hours down the road. Technically, this involves running a secure event listener on top of the EHR APIs and applying a group of coordinated AI agents that have been loaded with clinical protocols and payor rules and fine-tuned on historical claim denials and regulatory guidelines. The moment the model flags a potential error, an agent nudges the user to clarify or confirm. If it's a genuine mistake, we request correction approval from the provider, fix it right away, and store an audit trail for compliance. We're also extending the approach to catch conflicting medications or prescribed treatments.

What sets our approach apart from AI tools for hospital revenue management is the focus on near-real-time intervention. Most tools detect errors after the claim has already been submitted, so compliance teams end up firefighting. We think the best place to fix something is in the flow of work itself.

One common question about using AI in the medical/health field is: what if the AI hallucinates or gets something wrong? In our case, since the tool flags possible errors and its primary effect is to trigger extra human review, there's no impact on anything health-critical like treatment. The real risk is that too many false positives could waste staff members' valuable time. For pilots, we start in read-only mode, using the API only to retrieve data, and we're seeing that the QA we built into the agent orchestration layer does a pretty good job of spotting common documentation mistakes even in lengthy charts (for instance, a multi-day hospital stay).
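To make the read-only pilot flow a bit more concrete, here's a deliberately simplified sketch. It assumes a FHIR-style REST API; the endpoint, resource fields, and the toy documentation check are illustrative placeholders standing in for our actual agent orchestration layer, not production code:

```python
# Illustrative sketch only: a read-only audit pass over recently updated notes.
# The endpoint, resource fields, and the documentation check below are
# placeholders standing in for the real agent orchestration layer.
import json
import time
from datetime import datetime, timezone

import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical sandbox endpoint
AUDIT_LOG = "audit_trail.jsonl"


def fetch_recent_notes(since_iso: str) -> list[dict]:
    """Read-only: retrieve clinical notes updated since the last poll."""
    resp = requests.get(
        f"{FHIR_BASE}/Composition",
        params={"_lastUpdated": f"gt{since_iso}", "_count": 50},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]


def flag_issues(note: dict) -> list[str]:
    """Toy stand-in for the agents: flag obvious documentation gaps."""
    issues = []
    if not note.get("attester"):          # e.g. missing signature
        issues.append("missing attester/signature")
    if note.get("status") != "final":     # unfinished note
        issues.append(f"non-final status: {note.get('status')}")
    return issues


def record_flag(note_id: str, issues: list[str]) -> None:
    """Append an audit entry locally; nothing is written back to the EHR."""
    entry = {
        "note_id": note_id,
        "issues": issues,
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "action": "asked responsible clinician to review",
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    last_poll = datetime.now(timezone.utc).isoformat()
    while True:
        for note in fetch_recent_notes(last_poll):
            issues = flag_issues(note)
            if issues:
                record_flag(note.get("id", "unknown"), issues)
        last_poll = datetime.now(timezone.utc).isoformat()
        time.sleep(60)  # polling here for simplicity; the real system listens for events
```

In the real system, the polling loop would be the event listener described above and the toy check would be the coordinated agents; the sketch is just meant to show where read-only retrieval, flagging, and the audit trail sit in the loop.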
We're in the early stages of refining our system, and we'd love feedback from the community. If you have ideas on integrating with EHRs, experience with compliance tools, or general insights from working in healthcare environments, we're all ears. We're also on the lookout for early users - particularly rehabs, small clinics, and hospitals - willing to give our AI a try and tell us where it needs improvement.

Thanks for reading, and let us know what you think!