Show HN: Malleon – Turn real user sessions into automated tests
Hey HN,
I've been building a tool called Malleon because I got tired of e2e tests that don't reflect what users actually do. I've also long been obsessed with using real user sessions for load testing: having worked with load-testing tools like Tsung and Gatling, I often wished I could just replay yesterday's traffic at 2x or 5x instead of relying on synthetic sessions. Malleon started as an attempt to bridge those two worlds.
The basic idea: instead of writing tests from scratch, record real user sessions and turn them into replayable tests.
The SDK records things like:
- DOM interactions
- network requests
- console output
- timing between actions
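As a mental model, a recorded session can be thought of as an ordered stream of timestamped events. The event shape below is purely illustrative — it is my sketch, not the actual `@malleon/replay` schema:

```typescript
// Hypothetical event model for a recorded session -- illustrative only,
// not the actual @malleon/replay wire format.
type SessionEvent =
  | { kind: "dom"; ts: number; action: "click" | "input"; selector: string; value?: string }
  | { kind: "network"; ts: number; method: string; url: string; status: number; durationMs: number }
  | { kind: "console"; ts: number; level: "log" | "warn" | "error"; message: string };

interface Session {
  id: string;
  startedAt: number;
  // Ordered by ts; the gaps between ts values carry the inter-action timing.
  events: SessionEvent[];
}

const example: Session = {
  id: "sess-123",
  startedAt: 0,
  events: [
    { kind: "dom", ts: 120, action: "click", selector: "#login" },
    { kind: "network", ts: 180, method: "POST", url: "/api/login", status: 200, durationMs: 45 },
    { kind: "console", ts: 230, level: "error", message: "unexpected token" },
  ],
};
```

Capturing everything on one timeline is what lets a replay interleave DOM actions, network traffic, and console output the way the original session did.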
The replay isn't a video. It reconstructs and replays the actual browser interactions against your app.
So a real user session becomes a reproducible test case.
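Conceptually, replay then walks the recorded event stream and re-dispatches each interaction through a browser driver. A minimal sketch — the `Driver` interface here is a stand-in I made up, not the real runner's internals:

```typescript
// Stand-in for whatever drives the browser (the real runner's
// internals are not public; this only illustrates event-by-event replay).
interface Driver {
  click(selector: string): Promise<void>;
  type(selector: string, value: string): Promise<void>;
}

interface DomEvent {
  ts: number; // milliseconds since session start
  action: "click" | "input";
  selector: string;
  value?: string;
}

// Walk the recorded events in order, waiting out the original gap
// between actions before dispatching each one.
async function replay(events: DomEvent[], driver: Driver): Promise<void> {
  let prevTs = events.length > 0 ? events[0].ts : 0;
  for (const e of events) {
    await new Promise((resolve) => setTimeout(resolve, e.ts - prevTs));
    if (e.action === "click") {
      await driver.click(e.selector);
    } else if (e.value !== undefined) {
      await driver.type(e.selector, e.value);
    }
    prevTs = e.ts;
  }
}
```

Because the driver is an interface, the same loop works headless in CI or headful when you want to watch the session back.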
Typical flow looks like:
- drop a small JS SDK into your app
- users interact with the site normally
- sessions get recorded
- you browse replays and find something interesting (a bug, an error, weird behavior)
- turn that session into a test and run it in CI
The test runner is self-hosted: you pull the Docker image and run it wherever you want. It drives a browser (headless or headful) and replays the interaction sequence.
Some things that turned out to be surprisingly tricky while building this:
- replaying interactions when the DOM has changed since recording
- handling viewport/layout differences
- making network replay transparent to the app
- keeping timing realistic without making tests slow
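On the timing point: one simple strategy (my assumption, not necessarily what Malleon does) is to keep short inter-action gaps as recorded, so the replay still exercises realistic pacing, but cap long idle pauses:

```typescript
// Sketch of one possible strategy for realistic-but-fast replay timing.
// Short gaps are preserved as recorded; long idle pauses are capped,
// and an optional speedup factor compresses everything uniformly.
function normalizeGaps(gapsMs: number[], capMs = 2000, speedup = 1): number[] {
  return gapsMs.map((g) => Math.min(g, capMs) / speedup);
}

// Quick interactions keep their rhythm; a 30s "user went for coffee"
// pause collapses to the cap.
normalizeGaps([150, 30000, 400]); // -> [150, 2000, 400]
```

The trade-off is that capping hides bugs that only appear after genuinely long idle periods (session expiry, stale caches), so the cap probably wants to be configurable per test.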
The system also collects logs, errors, and request timings, so sessions are searchable and you can track network performance, see p90/p95/p99 stats, all that good stuff.
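For reference, p90/p95/p99 are just percentiles over the collected request timings. A nearest-rank sketch (one common definition — the exact method Malleon uses isn't specified in the post):

```typescript
// Nearest-rank percentile over a list of request timings (ms).
// p90 answers: "90% of requests completed at or below this duration."
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[Math.max(0, rank - 1)];
}

const timingsMs = [12, 35, 40, 41, 52, 60, 75, 90, 120, 300];
percentile(timingsMs, 90); // -> 120
percentile(timingsMs, 99); // -> 300
```

The gap between p90 and p99 is usually where replayed load tests get interesting: averages hide the slow tail that real users actually hit.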
Links:
- Malleon: [https://malleon.io](https://malleon.io)
- Replay SDK: [https://www.npmjs.com/package/@malleon/replay](https://www.npmjs.com/package/@malleon/replay)
- Replay CLI: [https://www.npmjs.com/package/@malleon/replay-cli](https://www.npmjs.com/package/@malleon/replay-cli)
- Docs: [https://github.com/malleonio/malleon-documentation](https://github.com/malleonio/malleon-documentation)
A free tier is available.
Curious if anyone else has run into the "our tests don't reflect what users actually do (or how they scale)" problem.