apksigner is at ./out/host/linux-x86/bin/apksigner; the platform keys are at
./build/make/target/product/security/platform.pk8 and ./build/make/target/product/security/platform.x509.pem
The AOSP doc on generating OTA-related keys: "Sign build for release".
tools/buildutils/build_packages.sh
sudo usermod -aG kvm,cvdnetwork,render $USER
Then reboot, since the device nodes under /dev/ get created.
launch_avd .... Failed to connect: No such device
[2025-06-19T10:09:00.523729147+00:00 ERROR crosvm] exiting with error 1: the architecture failed to build the vm
Caused by: failed to create a PCI root hub: failed to create proxy device: Failed to configure tube: failed to receive packet: Connection reset by peer (os error 104)
Detected unexpected exit of monitored subprocess /home/charles-chang/aosp/out/host/linux-x86/bin/process_restarter
Subprocess /home/charles-chang/aosp/out/host/linux-x86/bin/process_restarter (16314) has exited with exit code 1
Failed to connect: No such device
Client closed the connection
Client closed the connection
Client closed the connection
[2025-06-19T10:09:01.549915546+00:00 ERROR crosvm] exiting with error 1: the architecture failed to build the vm
Caused by: failed to create a PCI root hub: failed to create proxy device: Failed to configure tube: failed to receive packet: Connection reset by peer (os error 104)
....
No idea what the root cause is, but adding the gpu_mode option makes it boot:
launch_cvd --gpu_mode=gfxstream
...
VIRTUAL_DEVICE_BOOT_STARTED
VIRTUAL_DEVICE_NETWORK_MOBILE_CONNECTED
VIRTUAL_DEVICE_BOOT_COMPLETED
Virtual device booted successfully
According to Google's Cuttlefish documentation, an emulator launched this way has no screen, so connect to it with adb.
launch_cvd --gpu_mode=gfxstream --daemon
...
Virtual device booted successfully
VIRTUAL_DEVICE_BOOT_COMPLETED
This runs it in the background; open http://localhost:8443 in a browser and you will see cvd-1.
launch_cvd --gpu_mode=gfxstream -start_webrtc --daemon
To stop it, use cvd reset or stop_cvd.
mkdir aosp && cd aosp
repo init -u https://android.googlesource.com/platform/manifest -b android-14.0.0_r75 --depth=1
repo sync -c --no-tags --no-clone-bundle -j3
With -j higher than 3 there is a resource limit error; in the end I used 1 anyway.
source build/envsetup.sh
lunch
This prints a long list, but nothing like the x86_64 target described in the ref: "how do I build android emulator from source".
lunch sdk_phone64_x86_64-trunk_staging-user
Surprisingly, it worked. Tried userdebug and eng as well; both OK.
============================================
PLATFORM_VERSION_CODENAME=VanillaIceCream
PLATFORM_VERSION=VanillaIceCream
PRODUCT_INCLUDE_TAGS=com.android.mainline mainline_module_prebuilt_nightly
TARGET_PRODUCT=sdk_phone64_x86_64
TARGET_BUILD_VARIANT=user
TARGET_ARCH=x86_64
TARGET_ARCH_VARIANT=x86_64
TARGET_2ND_ARCH_VARIANT=x86_64
HOST_OS=linux
HOST_OS_EXTRA=Linux-6.8.0-60-generic-x86_64-Ubuntu-24.04.2-LTS
HOST_CROSS_OS=windows
BUILD_ID=AP2A.240805.005.S4
OUT_DIR=out
============================================
Following the ref's instructions:
lunch aosp_cf_x86_64_only_phone-aosp_current-userdebug
...
build/make/core/release_config.mk:145: error: No release config found for TARGET_RELEASE: aosp_current. Available releases are: ap2a next staging trunk trunk_food trunk_staging.
Per the error message, replace aosp_current:
lunch aosp_cf_x86_64_only_phone-trunk-user
That works (and this also confirms the user build is OK).
============================================
PLATFORM_VERSION_CODENAME=VanillaIceCream
PLATFORM_VERSION=VanillaIceCream
PRODUCT_INCLUDE_TAGS=com.android.mainline mainline_module_prebuilt_nightly
TARGET_PRODUCT=aosp_cf_x86_64_only_phone
TARGET_BUILD_VARIANT=user
TARGET_ARCH=x86_64
TARGET_ARCH_VARIANT=silvermont
HOST_OS=linux
HOST_OS_EXTRA=Linux-6.8.0-60-generic-x86_64-Ubuntu-24.04.2-LTS
HOST_CROSS_OS=windows
BUILD_ID=AP2A.240805.005.S4
OUT_DIR=out
============================================
cf is for Cuttlefish (the Android emulator's name).
repo init --partial-clone --no-use-superproject -b android-latest-release -u https://android.googlesource.com/platform/manifest
repo sync -c -j3
Following the googlesource instructions:
lunch aosp_cf_x86_64_only_phone-aosp_current-userdebug
because lunch no longer lists targets for you to choose from.
sudo sysctl -w kernel.apparmor_restrict_unprivileged_unconfined=0
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
Or add this file to the system so you don't have to set it again after every boot:
$ cat /etc/sysctl.d/99-apparmor-unconfined.conf kernel.apparmor_restrict_unprivileged_unconfined=0 kernel.apparmor_restrict_unprivileged_userns=0 $sudo sysctl --system
DMAR: [DMA Read NO_PASID] Request device [04:00.0] fault addr 0xff62e000 [fault reason 0x06] PTE Read access is not set
[19704.235699] DMAR: DRHD: handling fault status reg 3
...
Googling this, the common answer is the same fix: edit /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=off"
then update:
update-grub
After a reboot, it works normally.
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
then apt install nodejs.
:Copilot help
shows the Copilot commands, i.e. copilot.txt.
:Copilot setup
Still couldn't figure out how to use it...
:set filetype=c
Check the status:
:Copilot status
git clone https://github.com/AsyncFuncAI/deepwiki-open.git
As in the previous post, change NEXT_PUBLIC_SERVER_BASE_URL to the server's public IP address.
pip install -r api/requirements.txt
python -m api.main
Start the front-end:
npm install
npm run dev -- -H 0.0.0.0
Then open http://server-ip:3000.
npm run dev -- -H 0.0.0.0
binds it to the public IP.
diff --git a/docker-compose.yml b/docker-compose.yml
index a7d42c3..b1d85d3 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -3,6 +3,7 @@ version: '3.8'
 services:
   deepwiki:
     build: .
+    network_mode: host
     ports:
       - "${PORT:-8001}:${PORT:-8001}" # API port
       - "3000:3000" # Next.js port
This makes the container's localhost:11434 use the host's, so it can connect to ollama.
diff --git a/docker-compose.yml b/docker-compose.yml
index a7d42c3..210e476 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -3,6 +3,7 @@ version: '3.8'
 services:
   deepwiki:
     build: .
     ports:
       - "${PORT:-8001}:${PORT:-8001}" # API port
       - "3000:3000" # Next.js port
@@ -11,6 +12,6 @@ services:
     environment:
       - PORT=${PORT:-8001}
       - NODE_ENV=production
-      - NEXT_PUBLIC_SERVER_BASE_URL=http://localhost:${PORT:-8001}
+      - NEXT_PUBLIC_SERVER_BASE_URL=http://192.168.145.77:${PORT:-8001}
     volumes:
       - ~/.adalflow:/root/.adalflow # Persist repository and embedding data
Found that GitLab subgroups are not supported: the subgroup path gets mixed into the generated page's path, while the source-code result pages on the HTTP server are laid out by source path.
sudo dpkg --force-overwrite /var/cache/apt/archives/libnvidia-common-570-server_570.133.20-0ubuntu0.24.04.1_all.deb
It still failed in the end, because unpacking one of the debs reported that /usr/share/nvidia/files.d/sandboxutils-filelist.json could not be overwritten.
pip install flash-attn --no-build-isolation
Then it spends a very long time building with cicc...
soundfile peft backoff
It turned out phi4-multimodal is too big... over 24G..
PS C:\Users\charles.chang> conda init powershell no change D:\miniconda\Scripts\conda.exe no change D:\miniconda\Scripts\conda-env.exe no change D:\miniconda\Scripts\conda-script.py no change D:\miniconda\Scripts\conda-env-script.py no change D:\miniconda\condabin\conda.bat no change D:\miniconda\Library\bin\conda.bat no change D:\miniconda\condabin\_conda_activate.bat no change D:\miniconda\condabin\rename_tmp.bat no change D:\miniconda\condabin\conda_auto_activate.bat no change D:\miniconda\condabin\conda_hook.bat no change D:\miniconda\Scripts\activate.bat no change D:\miniconda\condabin\activate.bat no change D:\miniconda\condabin\deactivate.bat modified D:\miniconda\Scripts\activate modified D:\miniconda\Scripts\deactivate modified D:\miniconda\etc\profile.d\conda.sh modified D:\miniconda\etc\fish\conf.d\conda.fish no change D:\miniconda\shell\condabin\Conda.psm1 modified D:\miniconda\shell\condabin\conda-hook.ps1 no change D:\miniconda\Lib\site-packages\xontrib\conda.xsh modified D:\miniconda\etc\profile.d\conda.csh modified D:\OneDrive\OneDrive - Royaltek\文件\WindowsPowerShell\profile.ps1 ==> For changes to take effect, close and re-open your current shell. <==
one@MB127:~$ dmesg | grep -i rknpu
...
[    4.532693] [drm] Initialized rknpu 0.9.6 20240322 for fdab0000.npu on minor 1
So the kernel has rknpu enabled.
(toolkit2) one@MB127:~/projects/rknn_model_zoo/examples/yolov5/python$ python yolov5.py --model_path ~/yolov5s_relu.onnx --img_show use anchors from '../model/anchors_yolov5.txt', which is [[[10.0, 13.0], [16.0, 30.0], [33.0, 23.0]], [[30.0, 61.0], [62.0, 45.0], [59.0, 119.0]], [[116.0, 90.0], [156.0, 198.0], [373.0, 326.0]]] /home/one/projects/rknn_model_zoo/py_utils/onnx_executor.py:12: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar. if getattr(np, 'bool', False): Model-/home/one/yolov5s_relu.onnx is onnx model, starting val infer 1/1 IMG: bus.jpg person @ (208 242 286 508) 0.881 person @ (478 238 560 525) 0.859 person @ (109 237 232 534) 0.842 person @ (79 355 121 515) 0.318 bus @ (91 129 555 465) 0.702
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: Link is Down
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: FPE workqueue stop
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: Register MEM_TYPE_PAGE_POOL RxQ-0
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: PHY [stmmac-1:01] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[Apr 15 16:59:14 2025] dwmac4: Master AXI performs any burst length
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: No Safety Features support found
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: IEEE 1588-2008 Advanced Timestamp supported
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: registered PTP clock
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: FPE workqueue start
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: configuring for phy/rgmii-rxid link mode
[Apr 15 16:59:14 2025] rk_gmac-dwmac fe1c0000.ethernet end1: FPE workqueue stop
[Apr 15 16:59:15 2025] PM: suspend entry (deep)
[Apr 15 16:59:15 2025] Filesystems sync: 0.015 seconds
Probably a no-activity suspend; I went into Settings and confirmed it is turned off, but it still happens.
About the MP520's Linux kernel version: the MP520 currently uses the Linux kernel provided by the SoC vendor for the RK3588S, modified to suit the MP520 mainboard. The vendor releases new RK3588S versions very slowly. We take the vendor version, modify it, and thoroughly test it in each rolling release before publishing. Meanwhile, mainline Linux already has RK3588S support, but it is still limited to a text-mode console, quite bare-bones, with no graphical desktop support. Once mainline's RK3588S support can drive a graphical desktop, the plan is to base the MP520 kernel on mainline instead, so it can stay current. That is estimated to be at least one to two years away.
No; they offer a reflashing service, but there is no source code and no image, so the only option is to buy another NVMe SSD to swap in, to avoid the hassle of mailing the unit back..
pascal_voc/
├── VOC2012/
│   ├── Annotations/          # XML annotation files
│   ├── ImageSets/
│   │   └── Main/
│   │       └── train.txt     # Training image names
│   └── JPEGImages/           # Image files (.jpg)
│
└── VOC2007/
    ├── Annotations/          # XML annotation files
    ├── ImageSets/
    │   └── Main/
    │       └── test.txt      # Testing image names
    └── JPEGImages/           # Image files (.jpg)
train.txt is a list of filenames.
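The layout above can be consumed mechanically; a minimal sketch (the helper name is mine) that resolves the names in train.txt into (image, annotation) path pairs:

```python
import os

def voc_pairs(voc_root, split_file):
    """Resolve an ImageSets/Main list file (one image name per line)
    into (JPEGImages/<name>.jpg, Annotations/<name>.xml) path pairs."""
    with open(split_file) as f:
        names = [line.strip() for line in f if line.strip()]
    return [(os.path.join(voc_root, "JPEGImages", name + ".jpg"),
             os.path.join(voc_root, "Annotations", name + ".xml"))
            for name in names]
```

Point voc_root at VOC2012/ (or VOC2007/ with test.txt) to get the pairs a loader would iterate over.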
.
├── images
│   ├── train
│   └── val
└── labels
    ├── train
    └── val
There seems to be a file list somewhere...
Based on the code in dataset.py, I can explain how the train.csv file should be structured. The file is used to map image names to their corresponding label files.

The train.csv should contain two columns:
1. Image filename
2. Label filename

Here's an example of how train.csv should look:

```csv
image_name,label_name
image1.jpg,image1.txt
image2.jpg,image2.txt
dog_picture.jpg,dog_picture.txt
car_street.jpg,car_street.txt
```

Key points about the CSV structure:
- The image filenames should correspond to files in the `COCO/images/images/` directory
- The label filenames should correspond to files in the `COCO/labels/labels_new/` directory
- Each label file (.txt) contains the bounding box information in the format: `<class_label> <x> <y> <width> <height>`

You can create this CSV file using a simple Python script like this:

```python
import os
import pandas as pd

def create_train_csv(image_dir, label_dir, output_csv):
    # Collect image files and pair each with a label file of the same stem
    image_files = [f for f in os.listdir(image_dir) if f.endswith(('.jpg', '.jpeg', '.png'))]
    data = []
    for img_file in image_files:
        # Assume the label file has the same name but a .txt extension
        label_file = os.path.splitext(img_file)[0] + '.txt'
        if os.path.exists(os.path.join(label_dir, label_file)):
            data.append([img_file, label_file])
    df = pd.DataFrame(data, columns=['image_name', 'label_name'])
    df.to_csv(output_csv, index=False)

# Example usage
create_train_csv(
    'COCO/images/images/',
    'COCO/labels/labels_new/',
    'COCO/train.csv'
)
```

This script will:
1. Scan the images directory
2. Match image files with their corresponding label files
3. Create a CSV file with the correct format expected by the YOLODataset class

The dataset class then uses this CSV file to load the corresponding image and label pairs during training.

Note: in the COCO annotations the class (category) ids go up to 90, but only 80 classes are actually listed.
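Because the category ids are sparse (they reach 90 even though only 80 classes exist), training code usually remaps them to contiguous 0-based indices; a minimal sketch of that remap:

```python
def build_category_index(category_ids):
    """Map COCO's sparse category ids (1..90 with gaps) onto
    contiguous 0-based class indices, in ascending id order."""
    return {cat_id: idx for idx, cat_id in enumerate(sorted(set(category_ids)))}
```

Feeding it the 80 ids present in the annotation file yields indices 0..79 regardless of the gaps.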
...
/images/val2014/COCO_val2014_000000581827.jpg
/images/val2014/COCO_val2014_000000581829.jpg
...
Exactly 5000 files.
ssl_verify: true
channels:
  - defaults
envs_dirs:
  - D:/minicondaenvs
pkgs_dirs:
  - D:/minicondapkgs
This way, downloaded package files and created environments end up under D:/mini....
>conda config --get --add channels 'defaults' # lowest priority --add envs_dirs 'D:/minicondaenvs' --add pkgs_dirs 'D:/minicondapkgs' --set ssl_verify True
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'
The output:
Name  : OpenSSH.Client~~~~0.0.1.0
State : Installed

Name  : OpenSSH.Server~~~~0.0.1.0
State : NotPresent
So install the server:
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 Path : Online : True RestartNeeded : False
Start-Service sshd
$ssh loyaltec\\charko.chang@192.168.144.78
cd linux-npu-driver
git submodule update --init --recursive
cmake -B build -S . -DENABLE_NPU_COMPILER_BUILD=ON
cmake --build build --parallel $(nproc)

# install the driver in the system
sudo cmake --install build --prefix /usr

# reload the intel_vpu module to load new firmware
sudo rmmod intel_vpu
sudo modprobe intel_vpu
This builds the NPU plugin and the OpenVINO runtime.
# Prepare the add_abc model in path pointed by basic.yaml
mkdir -p models/add_abc
curl -o models/add_abc/add_abc.xml https://raw.githubusercontent.com/openvinotoolkit/openvino/master/src/core/tests/models/ir/add_abc.xml
touch models/add_abc/add_abc.bin

# Run tests with add_abc.xml
npu-umd-test --config=validation/umd-test/configs/basic.yaml
Only then does it run.
sudo apt-get install -y libze-intel-gpu1 libze1 intel-opencl-icd clinfo intel-gsc libze-dev intel-ocloc
Then download the zip and unpack it; it runs as-is. A script starts the ollama server, and otherwise the ollama command works just like usual.
* cline is having trouble... Cline uses complex prompts and iterative task execution that may be challenging for less capable models. For best results, it's recommended to use Claude 3.5 Sonnet for its advanced agentic coding capabilities.
This is because the context window is too small (the default is 4096).
$ cat Modelfile-deepseek
FROM deepseek-r1:14b
PARAMETER num_ctx 32768
Then with the ollama command:
ollama create deepseek-r1:14b-32k -f Modelfile-deepseek
After the create, ollama list shows the deepseek-r1:14b-32k model.
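As an alternative to baking num_ctx into a Modelfile, the ollama REST API also accepts per-request options. This sketch only builds the request body for /api/generate; nothing here actually talks to a server:

```python
import json

def generate_payload(model, prompt, num_ctx=32768):
    """Build the JSON body for ollama's /api/generate endpoint,
    raising the context window via options.num_ctx."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "options": {"num_ctx": num_ctx},
        "stream": False,
    })
```

POSTing this body to http://localhost:11434/api/generate should have the same effect as the Modelfile for that one request.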
img = Image.open('image1.jpg')
tensor1 = transforms.ToTensor()(img)
tensor2 = torch.from_numpy(np.array(img))
tensor1's dimension order is [3, 400, 500].
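The difference between the two is only axis order (plus ToTensor's scaling to [0, 1]): from_numpy keeps PIL's H x W x C layout, while ToTensor permutes to C x H x W. Sketched with plain numpy for a 500x400 image:

```python
import numpy as np

# np.array(img) on a 500x400 (width x height) PIL image is H x W x C
hwc = np.zeros((400, 500, 3), dtype=np.uint8)

# transforms.ToTensor() additionally permutes the axes to C x H x W
chw = np.transpose(hwc, (2, 0, 1))

print(hwc.shape, chw.shape)  # (400, 500, 3) (3, 400, 500)
```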
git clone https://github.com/deepseek-ai/Janus.git
Then install requirements.txt.
set DISPLAY=127.0.0.1:0.0
What the part after the equals sign actually means is
127.0.0.1:display_number.0. Though I don't know which display number XLauncher uses...
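The format splits mechanically into host, display number, and screen number; a tiny sketch (the helper name is mine):

```python
def parse_display(display):
    """Split a DISPLAY value 'host:display.screen' into its parts.
    The host may be empty (local connection); the screen defaults to 0."""
    host, _, rest = display.partition(":")
    disp, _, screen = rest.partition(".")
    return host, int(disp), int(screen) if screen else 0
```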
dbus-launch gnome-terminal
libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate
libEGL warning: DRI3: failed to query the version
One suggestion says that after import matplotlib, add:
matplotlib.use('TkAgg')
and it works.
class : W, H, anchor_number, 2
Regr  : W, H, anchor_number, 4
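As a shape sanity check, the two head outputs above can be written out with numpy (the W, H, anchor_number values are placeholders):

```python
import numpy as np

W, H, anchor_number = 13, 13, 3

cls_out  = np.zeros((W, H, anchor_number, 2))  # two class scores per anchor
regr_out = np.zeros((W, H, anchor_number, 4))  # four box offsets per anchor

print(cls_out.shape, regr_out.shape)
```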
# To get a numpy [[vel, azimuth, altitude, depth],...[,,,]]:
points = np.frombuffer(radar_data.raw_data, dtype=np.dtype('f4'))
points = np.reshape(points, (len(radar_data), 4))
-- the field order in this comment is exactly the opposite of the one above.
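The same frombuffer/reshape round trip can be checked without a live CARLA sensor by packing synthetic float32 detections (the sample values are made up; in CARLA, len(radar_data) gives the detection count):

```python
import numpy as np

# Two fake detections, four float32 fields each, standing in for raw_data
raw = np.array([[1.0,  0.1, 0.2, 30.0],
                [2.0, -0.1, 0.0, 55.0]], dtype=np.float32).tobytes()

points = np.frombuffer(raw, dtype=np.dtype('f4'))
points = np.reshape(points, (len(raw) // 16, 4))  # 4 fields x 4 bytes = 16 bytes each

print(points.shape)  # (2, 4)
```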
$ nvidia-xconfig --query-gpu-info
Number of GPUs: 1
GPU #0:
  Name      : NVIDIA TITAN RTX
  UUID      : GPU-XXXXXXXXXXXX
  PCI BusID : PCI:1:0:0
  Number of Display Devices: 1
  Display Device 0 (TV-6): No EDID information available.
Check the GPU's PCI BusID.
$ sudo nvidia-xconfig -a --allow-empty-initial-configuration --use-display-device=None --virtual=1920x1080 --busid=PCI:1:0:0
Using X configuration file: "/etc/X11/xorg.conf".
Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen0".
Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'
New X configuration file written to '/etc/X11/xorg.conf'
Then, because the nvidia driver version is above 440.xx, add this to the Screen section of xorg.conf:
Option "HardDPMS" "false"
allowed_users=anybody
needs_root_rights=yes
And the user has to be in the tty group.
X.Org X Server 1.21.1.11 X Protocol Version 11, Revision 0 Current Operating System: Linux i7-14700 6.8.0-35-generic #35-Ubuntu SMP PREEMPT_DYNAMIC Mon May 20 15:51:52 UTC 2024 x86_64 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-35-generic root=UUID=e4ea2afe-cc4e-42ce-a53f-5032e417f9f7 ro xorg-server 2:21.1.12-1ubuntu1.1 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.42.2 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.7.log", Time: Fri Jan 17 05:52:20 2025 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" The XKEYBOARD keymap compiler (xkbcomp) reports: > Warning: Could not resolve keysym XF86CameraAccessEnable > Warning: Could not resolve keysym XF86CameraAccessDisable > Warning: Could not resolve keysym XF86CameraAccessToggle > Warning: Could not resolve keysym XF86NextElement > Warning: Could not resolve keysym XF86PreviousElement > Warning: Could not resolve keysym XF86AutopilotEngageToggle > Warning: Could not resolve keysym XF86MarkWaypoint > Warning: Could not resolve keysym XF86Sos > Warning: Could not resolve keysym XF86NavChart > Warning: Could not resolve keysym XF86FishingChart > Warning: Could not resolve keysym XF86SingleRangeRadar > Warning: Could not resolve keysym XF86DualRangeRadar > Warning: Could not resolve keysym XF86RadarOverlay > Warning: Could not resolve keysym XF86TraditionalSonar > Warning: Could not resolve keysym XF86ClearvuSonar > Warning: Could not resolve keysym XF86SidevuSonar > Warning: Could not resolve keysym XF86NavInfo Errors from xkbcomp are not fatal to the X server
$ /opt/TurboVNC/bin/vncserver :8

Desktop 'TurboVNC: i7-14700:8 (charles-chang)' started on display i7-14700:8

Starting applications specified in /opt/TurboVNC/bin/xstartup.turbovnc
Log file is /home/charles/.vnc/i7-14700:8.log
On first start it asks you to set a password for vncviewer.
~$ DISPLAY=:8 vglrun -d :7 glxgears
First set the DISPLAY environment variable to :8 (the VNC server), then use "-d :7" to hand OpenGL rendering to display :7 (the nvidia X session).
MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
failed to load driver: iris
MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  3 (X_GLXCreateContext)
  Value in failed request:  0x0
  Serial number of failed request:  167
  Current serial number in output stream:  168
So, per the ref above, /usr/lib indeed has no dri directory; symlink the installed location over...
sudo apt --reinstall install libgl1-mesa-dri
cd /usr/lib
sudo ln -s x86_64-linux-gnu/dri ./dri
Then this error:
MESA-LOADER: failed to open iris: /home/charles/miniconda3/envs/carla/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-17.so.1) (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
failed to load driver: iris
MESA-LOADER: failed to open swrast: /home/charles/miniconda3/envs/carla/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-17.so.1) (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
X Error of failed request:  BadValue (integer parameter out of range for operation)
So check which GLIBCXX versions the conda libstdc++.so.6 supports:
$ strings /home/charles/miniconda3/lib/libstdc++.so.6 | grep ^GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
...
GLIBCXX_3.4.28
GLIBCXX_3.4.29
GLIBCXX_DEBUG_MESSAGE_LENGTH
GLIBCXX_3.4.21
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.16
GLIBCXX_3.4.1
...
GLIBCXX_3.4.4
GLIBCXX_3.4.26
Indeed, 3.4.30 is missing.
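Instead of eyeballing the unsorted strings output, the highest tag can be compared numerically; a small helper (the name is mine):

```python
import re

def max_glibcxx(strings_output):
    """Return the highest GLIBCXX_x.y.z tag found in `strings libstdc++.so.6`
    output, comparing version components numerically rather than lexically."""
    versions = set(re.findall(r"GLIBCXX_(\d+(?:\.\d+)*)", strings_output))
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))
```

Piping the strings output into this immediately shows whether 3.4.30 is present.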
$ batcat --generate-config-file
Success! Config file written to /home/charles/.config/bat/config
Then edit ~/.config/bat/config:
-- #--theme="TwoDark"
++ --theme="GitHub"
The default theme is for dark mode; for a bright mode, the "GitHub" theme works.
batcat README.md
./CarlaUE4.sh
It turned out the GPU was not being used; see "cannot run Carla using nvidia GPU under Linux with multi-GPU installed : PRIME instruction ignored by Carla #4716":
./CarlaUE4.sh -prefernvidia
and it works.
./CarlaUnreal.sh
0.10.0 does not need -prefernvidia; it uses the GPU by itself.
./CarlaUE4.sh -prefernvidia -RenderOffScreen
python environment.py --cars LowBeam All
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
This is needed for docker's --gpus all to take effect.
docker run --privileged --gpus all --net=host -e DISPLAY=$DISPLAY carlasim/carla:0.9.15 /bin/bash ./CarlaUE4.sh
Once it's up, because of --net=host, running the code under PythonAPI/examples works the same as usual.