2025/11/29

ComfyUI Stable Diffusion Service Setup

   This guide assumes you have Python 3.8+, the uv package manager, and NVIDIA GPU drivers (CUDA 13.0 or compatible) installed on your system.
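   A quick way to confirm the prerequisites before starting (each check falls back to a message if the tool is missing; `nvidia-smi` ships with the NVIDIA driver):

```shell
# Check prerequisites; each line prints a fallback message if the tool is absent
python3 --version
command -v uv >/dev/null && uv --version || echo "uv not found"
command -v nvidia-smi >/dev/null && nvidia-smi --query-gpu=name,driver_version --format=csv \
  || echo "nvidia-smi not found (driver not installed?)"
```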


   1. Navigate to your desired project directory:

      Open your terminal and change to the directory where you want to install ComfyUI. If the directory doesn't exist, create it.


      cd /path/to/your/comfyui_project
      # For a fresh start, make sure the directory is empty
      # If .venv exists from a previous attempt, remove it: rm -rf .venv


   2. Clone the ComfyUI repository:

      This will download all the necessary ComfyUI files into your current directory.

      git clone https://github.com/comfyanonymous/ComfyUI.git .


   3. Create a Python Virtual Environment:

      We'll use uv to create an isolated environment for ComfyUI's dependencies.

      uv venv


   4. Install PyTorch with CUDA Support:

      This step is crucial for GPU acceleration. We'll use the specific PyTorch build that is compatible with your DGX Spark's CUDA environment (cu129 index).


      uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129
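   Before moving on, it can be worth confirming that the GPU build imports cleanly; a small sanity check, assuming the virtual environment from step 3 and the install above (run from the project directory):

```shell
# Optional sanity check: confirm PyTorch imports and report whether CUDA is usable;
# falls back to a message if the virtual environment is missing
.venv/bin/python -c "import torch; print(torch.__version__, torch.cuda.is_available())" \
  || echo "torch check failed (is the virtual environment set up?)"
```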


   5. Install ComfyUI's remaining dependencies:

      uv pip install -r requirements.txt


   6. Download the Stable Diffusion Model:

      You need a Stable Diffusion model checkpoint. We'll download the v1-5-pruned-emaonly.safetensors model, which is a standard choice. (If the runwayml repository is no longer available on Hugging Face, the same file is mirrored under the stable-diffusion-v1-5 organization.)

      First, ensure the target directory exists:


      mkdir -p models/checkpoints

      Then, download the model:


      wget -O models/checkpoints/v1-5-pruned-emaonly.safetensors https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors


   7. Launch the ComfyUI Server:

      To make ComfyUI accessible over your network (for tools like Open-WebUI), launch it with the --listen flag. You can run it in the foreground to see output or in the background:


       * To run in foreground (for debugging/monitoring):

          .venv/bin/python main.py --listen

          (You'll need to press Ctrl+C to stop it.)


       * To run in background (recommended for service):

          nohup .venv/bin/python main.py --listen > comfyui.log 2>&1 &

          This command runs ComfyUI in the background, redirects its output (stdout and stderr) to comfyui.log, and detaches it from your terminal. Use tail -f comfyui.log to follow the logs; from the same shell session, fg brings a job started with & back to the foreground.
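       If you want the service to survive reboots, the background command can also be wrapped in a systemd unit. A minimal sketch, assuming the project lives at /path/to/your/comfyui_project and should run as a dedicated comfy user (both are placeholders to adjust):

```ini
# /etc/systemd/system/comfyui.service  (hypothetical unit; paths and user are placeholders)
[Unit]
Description=ComfyUI Stable Diffusion server
After=network-online.target

[Service]
User=comfy
WorkingDirectory=/path/to/your/comfyui_project
ExecStart=/path/to/your/comfyui_project/.venv/bin/python main.py --listen
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

       After saving the file, run systemctl daemon-reload, then systemctl enable --now comfyui; logs are then available via journalctl -u comfyui -f.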


   8. Verify ComfyUI is Running:

      Open your web browser and go to http://<YOUR_SPARK_IP>:8188. Replace <YOUR_SPARK_IP> with the actual IP address of your machine. If you see the ComfyUI interface, the server is running.
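      You can also probe the server from the command line first. A hedged sketch using ComfyUI's small JSON status endpoint (set HOST to your machine's IP; 8188 is the default port):

```shell
# Hypothetical health check; set HOST to your machine's IP before running
HOST=${HOST:-127.0.0.1}
if curl -fsS --max-time 5 "http://$HOST:8188/system_stats" >/dev/null 2>&1; then
  echo "ComfyUI is up at http://$HOST:8188"
else
  echo "ComfyUI is not reachable at http://$HOST:8188"
fi
```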


   9. Test Image Generation:

      Inside the ComfyUI web UI, try to generate a simple image using the default workflow. If it successfully generates an image, your Stable Diffusion service is fully operational.


   10. Connect to Open-WebUI:

      You can now configure Open-WebUI to use this ComfyUI instance by pointing its "API URL" setting to http://<YOUR_SPARK_IP>:8188.


2025/11/25

Setting Up a PyTorch CUDA Environment on Ubuntu with Miniforge

On Ubuntu 24.04, use Miniforge to create a Python environment and install a CUDA-enabled build of PyTorch.

Note that starting with PyTorch 2.6, official Conda packages for GPU builds are no longer published, so you must use pip with PyTorch's official CUDA wheel index instead.


1. Install Miniforge

Download the latest Miniforge installer script.

wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh

Run the installer.

bash Miniforge3-Linux-x86_64.sh

When the installer asks whether to run conda init, choose no, to avoid it modifying .bashrc or .profile.

2. Activate the Miniforge base environment

Whenever you want to use Miniforge, load it manually in the current shell.

source ~/miniforge3/bin/activate

A (base) prefix in the prompt means Miniforge is active.

3. Create a dedicated PyTorch environment

Create an environment named torch with Python 3.10.

conda create -n torch python=3.10

Activate the environment.

conda activate torch

4. Remove any pre-existing CPU-only PyTorch

Since Conda may install a CPU-only PyTorch from conda-forge by default, remove it first.

conda remove -y pytorch pytorch-cuda torchvision torchaudio

pip uninstall -y torch torchvision torchaudio

5. Install CUDA-enabled PyTorch

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

6. Verify that CUDA is available

python - <<'EOF'
import torch
print("PyTorch:", torch.__version__)
print("CUDA version:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU name:", torch.cuda.get_device_name(0))
EOF

If CUDA available shows True and a GPU name is printed, GPU acceleration is working.




2025/11/1

 git daemon does not support LFS; if you need it, you have to install a separate LFS server.
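One option in that setup is pointing clones at the separately hosted LFS endpoint via git-lfs's lfs.url setting. A sketch with a placeholder server address:

```shell
# Hypothetical: serve the repo over git:// but fetch large files from a standalone LFS server
repo=$(mktemp -d)
git init -q "$repo"
# lfs.url overrides where git-lfs uploads/downloads objects (server address is a placeholder)
git -C "$repo" config lfs.url "http://lfs.example.com/my-repo/info/lfs"
git -C "$repo" config lfs.url   # prints the configured endpoint
```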