2025/12/26

GB10: MXFP4 support in transformers

Loading gpt-oss:120b with Hugging Face transformers and running it produced this error:
triton.runtime.errors.PTXASError: PTXAS error: Internal Triton PTX codegen error
`ptxas` stderr:
ptxas fatal   : Value 'sm_121a' is not defined for option 'gpu-name'
The call chain in transformers:
gpt-oss-120b (uses MXFP4 quantization)
    ↓
transformers (loads model)
    ↓
kernels package (provides MXFP4 Triton kernels)
    ↓
Triton (compiles GPU kernels at runtime)
    ↓
ptxas (NVIDIA's PTX assembler, bundled with Triton)
    ↓
❌ Doesn't recognize sm_121a (Blackwell)
The hint accompanying `ptxas fatal : Value 'sm_121a' is not defined for option 'gpu-name'` says:
The issue should be fixed by using PTXAS shipped with CUDA 13. Try setting TRITON_PTXAS_PATH to /usr/local/cuda/bin/ptxas etc.
So check the version of the ptxas bundled with Python's own Triton package:
$ ./lib/python3.12/site-packages/triton/backends/nvidia/bin/ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:22:20_PST_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0
And the system one:
$ ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:53:56_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.36424714_0
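The relevant difference in the two outputs above is the CUDA release line. A minimal sketch (the `cuda_release` helper is my own, not part of Triton or CUDA) of extracting and comparing that release from `ptxas --version` output:

```python
import re

def cuda_release(version_output: str) -> tuple[int, int]:
    """Extract the CUDA release as (major, minor) from `ptxas --version` output."""
    m = re.search(r"release (\d+)\.(\d+)", version_output)
    if m is None:
        raise ValueError("no CUDA release found in ptxas output")
    return int(m.group(1)), int(m.group(2))

# Sample lines copied from the two ptxas binaries above.
bundled = "Cuda compilation tools, release 12.8, V12.8.93"
system = "Cuda compilation tools, release 13.0, V13.0.88"

print(cuda_release(bundled))  # (12, 8) -- too old to know sm_121a
print(cuda_release(system))   # (13, 0) -- CUDA 13, knows sm_121a
```

Per the hint above, the bundled 12.8 assembler predates sm_121a, while the system's CUDA 13 assembler supports it.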
So, after declaring the path as instructed, transformers on GB10 supports gpt-oss:120b's MXFP4 normally:
export TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas
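The same override can be applied from Python before Triton compiles anything; a small sketch (the `use_system_ptxas` helper and its fallback logic are my own) that only sets the variable when the system ptxas actually exists:

```python
import os

def use_system_ptxas(path: str = "/usr/local/cuda/bin/ptxas") -> bool:
    """Point Triton at the system ptxas via TRITON_PTXAS_PATH.

    Returns True if the variable was set, False if the binary is missing.
    Must run before the first Triton kernel compilation.
    """
    if os.path.isfile(path) and os.access(path, os.X_OK):
        os.environ["TRITON_PTXAS_PATH"] = path
        return True
    return False

use_system_ptxas()  # then load and run the model as usual
```

Calling it before importing the model keeps the environment variable scoped to the Python process instead of the whole shell.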

For more about gpt-oss:120b and MXFP4, see: MXFP4
Official support in the package will only come once the triton wheel on PyPI is rebuilt against CUDA 13.
