Progressive Mermaid and streaming diff code blocks: up to 100× faster rendering

Author: simon_he · 2 months ago
I'm releasing vue-markdown-render, a Vue-focused Markdown rendering library optimized for large documents and real-time previews. The core features are progressive/incremental Mermaid rendering, streaming diff code blocks (rendered as the diff arrives), and various renderer-level optimizations that drastically reduce time-to-first-render and memory use in heavy workloads.

Why this exists: Many Markdown renderers struggle with huge documents, large embedded diagrams, and live-editing scenarios. Typical approaches block rendering until all assets/graphs/code are processed. In interactive editors this causes jank and slow feedback loops. vue-markdown-render targets those pain points with a streaming-first design.

Key features:

- Progressive Mermaid: complex diagrams render incrementally, so users see a usable diagram earlier.
- Streaming diff code blocks: diff/code-block rendering can stream partial results during reception for instant feedback.
- Performance-first architecture: lazy parsing, chunked rendering, and careful memory usage. In some large-document benchmarks we observe ~100× speedups vs Streamdown (depends on test case and environment).

Quick start:

```shell
npm i vue-markdown-render
```

```javascript
// in a Vue 3 app
import { createApp } from 'vue'
import App from './App.vue'
import VueMarkdownRender from 'vue-markdown-render'

const app = createApp(App)
app.use(VueMarkdownRender)
app.mount('#app')
```

Reproducing the benchmark (approximately): We provide a small benchmark repo under /playground (or a link attached in the release). To reproduce a similar test:

1. Prepare a large Markdown file containing many code blocks and a few large Mermaid diagrams (e.g., 50k+ lines total).
2. Run a timed render with Node or a browser-automation script for both vue-markdown-render and Streamdown (same input).
3. Measure time-to-first-paint and full render time, and profile memory usage.
Notes and caveats:

- The "100×" number is workload-dependent: it represents speedups observed in some heavy, real-world-like tests, not a universal guarantee.
- Differences in environment (CPU, browser, Node version) and the specific document shape affect results.
- We welcome replication and PRs on the test harnesses.

Discussion points / ask the community:

- What large-document workflows have you tried that still feel slow?
- Would you prefer an out-of-the-box editor integration (Monaco/CodeMirror demo) for this?
- Ideas for additional streaming-friendly Markdown extensions?

Links:

- Repo: https://github.com/Simon-He95/vue-markdown-render
- Playgrounds / benchmarks: (link to playground folder or separate bench repo)
- Quick demo: (link to demo site if available)

Thanks! Happy to answer questions, and I'd love feedback on benchmark methodology or integration examples.