Run Nvidia CUDA PyTorch container projects/pipelines on AMD without changes

Author: medicis123 · 2 months ago · original post
Hi, I wanted to share some information on this cool feature we built in the WoolyAI GPU hypervisor, which enables users to run their existing Nvidia CUDA PyTorch/vLLM projects and pipelines on AMD GPUs without any modifications. ML researchers can transparently consume GPUs from a heterogeneous cluster of Nvidia and AMD GPUs. MLOps teams don't need to maintain separate pipelines or runtime dependencies. The ML team can scale capacity easily. Please share feedback; we are also signing up beta users. https://youtu.be/MTM61CB2IZc