2023/12/14

ollama : run llama locally

ollama.ai is a very convenient program for running LLMs locally.
The author provides a shell script that does the whole installation:
curl https://ollama.ai/install.sh | sh
Once the script finishes, the install is done.
It runs as a service managed by systemd,
so ollama.service and its user and group are all created for you.
After that, everything is controlled through the ollama command.

For example, to run the llama2 model:
ollama run llama2
>>>
Once the model file finishes downloading, a prompt appears waiting for your input.

The author's blog lists run commands for many models.
You can try them directly --
for example, an uncensored llama2.

The project's GitHub page has a "community integrations" section listing related projects by other authors,
e.g. projects that provide a ChatGPT-like web page.

Take ollama-webui as an example:

First make ollama.service listen on an external IP (the default is 127.0.0.1)
by creating a drop-in file under /etc/systemd/system/ollama.service.d/:
cat /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment=OLLAMA_HOST=0.0.0.0:11434
Then systemctl daemon-reload and restart ollama.service.
Check the log with systemctl status (or journalctl -u ollama.service); it should show:
 routes.go:843: Listening on [::]:11434 (version 0.1.14)
Then follow the Dockerfile approach described in ollama-webui:
git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui
docker build -t ollama-webui .
With the image built..
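The notes stop right after the image build. The next step would be starting the container; the flags here (host port 3000, container port 8080, the OLLAMA_API_BASE_URL variable, and the placeholder host IP 192.168.1.10) are assumptions based on the ollama-webui README of that era, so verify them against the project's current README:

```shell
# hypothetical run command: point the web UI at the ollama API exposed above,
# then browse to http://<host>:3000
docker run -d -p 3000:8080 \
    -e OLLAMA_API_BASE_URL=http://192.168.1.10:11434/api \
    --name ollama-webui ollama-webui
```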

uninstall ollama -- i.e. just uninstall the systemd service:
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm -r /usr/share/ollama 
sudo userdel ollama 
sudo groupdel ollama
sudo rm /usr/local/bin/ollama

2023/12/7

java code : wait for a system signal in jni

That is: Java calls into JNI, and the JNI code waits for a system signal, giving Java the same wait-for-signal behavior a C program has.
java code :
$ cat SignalExample.java 
public class SignalExample {
    static {
        System.loadLibrary("SignalLibrary"); // Load the native library
    }

    // Native method declaration
    public native void waitForSignal();

    public static void main(String[] args) {
        System.out.println("Start..");
        SignalExample signalExample = new SignalExample();
        signalExample.waitForSignal();
        System.out.println("End");
    }
}
jni code:
$ cat SignalLibrary.c 
#include <jni.h>
#include <stdio.h>
#include <signal.h>

// Global variable to indicate whether the signal has been received
volatile sig_atomic_t signalReceived = 0;

// Signal handler function
void handleSignal(int signo) {
    signalReceived = 1;
}

// Native method implementation
JNIEXPORT void JNICALL Java_SignalExample_waitForSignal(JNIEnv *env, jobject obj) {
    // Set up signal handler
    signal(SIGUSR1, handleSignal);

    // Wait for the signal (busy-wait: this spins and burns a full core,
    // which is why ps later shows ~98% CPU; sleeping here would be kinder)
    while (!signalReceived) {
        // Perform other work or sleep if needed
        // ...
    }

    printf("Signal received!\n");
}

Building the code..
the JNI library first..

Export my JDK location:
$ export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
Compile the JNI C code:
$ gcc -shared -o libSignalLibrary.so -fPIC -I$JAVA_HOME/include -I$JAVA_HOME/include/linux SignalLibrary.c
Then compile the Java:
javac SignalExample.java

TEST run:
At run time you have to tell the JVM where the .so is...
$ java -Djava.library.path=/home/charles-chang/ SignalExample 
Start..
Signal received!
End
It stops right after printing Start... From another terminal, find this process's PID and send the signal:
charles+ 3994772 98.5  0.1 11275340 36368 pts/2  Sl+  10:14   0:03 java -Djava.library.path=/home/charles-chang/ SignalExample
...
$ kill -SIGUSR1 3994772
"Signal received!" and "End" are printed, and the program exits.
If instead of sending SIGUSR1 you press Ctrl-C...
$ java -Djava.library.path=/home/charles-chang/ SignalExample 
Start..
^C
No "Signal received!" or "End"; it just exits immediately.


  • This example was written by ChatGPT

2023/12/6

linux watchdog

The Linux kernel defines a standard watchdog interface.
It is documented under Documentation/watchdog;
watchdog-api.rst there describes the basic operations.

#include <iostream>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

class Watchdog {
private:
	int watchdog_fd;

public:
	Watchdog(const char* device_path = "/dev/watchdog") {
		watchdog_fd = open(device_path,O_RDWR);
		if (watchdog_fd == -1) {
			std::cerr << "Error opening watchdog device" << std::endl;
		}else {
			std::cout << "watchdog open OK" << std::endl;
		}
	}

	~Watchdog() {
		if (watchdog_fd != -1) {
			// magic close: tell the driver to stop the timer before closing
			int options = WDIOS_DISABLECARD;
			ioctl(watchdog_fd,WDIOC_SETOPTIONS,&options);
			close(watchdog_fd);
			std::cout << "watchdog closed OK" << std::endl;
		}
	}

	bool kick() {
		int dummy = 0;
		std::cout << "kick" << std::endl;
		return ioctl(watchdog_fd, WDIOC_KEEPALIVE, &dummy) != -1;
	}
};

int main() {
	Watchdog wd;

	for(int i=0;i<10;i++) {
		wd.kick();
		sleep(10);
	}
		
	return 0;
}
Tested on a Raspberry Pi (because my laptop has no /dev/watchdog):
$sudo ./watchdog
watchdog open OK
kick
kick
kick
..
kick
watchdog closed OK
The system does not reboot.

But if it is interrupted midway with Ctrl-C, this appears:
watchdog: watchdog0: watchdog did not stop!
and 10 seconds later the system reboots.


This can also be done from the shell.
The documentation says that writing 'V' and then closing the device stops the watchdog:
echo 'V' > /dev/watchdog
Also, even with the driver loaded, the watchdog timer is not started;
it starts as soon as someone opens the device node.
Before the timeout, writing anything other than 'V' resets the timer (as does the WDIOC_KEEPALIVE ioctl).

2023/11/20

repo sync --mirror and --reference

Adding --mirror to repo init produces a mirror copy of the repo structure.
I used to use this as the source for other repo init runs, i.e. a local mirror.

But projects cloned (synced) from that mirror cannot push back to the original repo server.
Likewise, when the original repo server gets new commits, a client cloned from the local mirror never learns about them.

That is what --reference is for.

As usual, point repo init -u at the repo server, and append --reference <local mirror> at the end.

Syncing (cloning) then borrows objects from the --reference mirror, but still goes to the original server for updates.
After the sync, .repo came out smaller.

Afterwards, project pushes go back to the repo server, and so do pulls.
It does not matter if the local mirror drifts out of sync with the server; there is no need to sync the local mirror.
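The two init commands described above look like this; the manifest URL and mirror path are placeholders:

```shell
# one-time: build the local mirror
repo init -u https://example.com/platform/manifest --mirror
repo sync

# per-checkout: point at the real server, borrow objects from the mirror
repo init -u https://example.com/platform/manifest --reference=/path/to/mirror
repo sync
```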

2023/11/17

raspbian lite : the new setup procedure

As before, this is just for a local wifi NAS, so the lite image is enough.
I chose arm64.

First I wanted to enable UART login, but the new release has removed the initial login to the Pi over UART.
Supposedly raspi-config pops up on first boot and asks you to create a username/password,
but only on the monitor + keyboard console.
Since this setup is headless, that does not work.
The second documented method is rpi-imager, the new GUI tool that downloads and writes the SD image for the user.
One of its new features is image customization: it can enable ssh and set the username/password for you.
Using that method, after downloading and flashing, add enable_uart=1 to config.txt in the boot partition, and UART login works.

Once in, connect over ssh and run raspi-config.

Starting a wifi AP is now also done with nmcli.
ref: the setup is done entirely with nmcli commands; it writes the config files for you.
The setup is exactly as described in the ref..
sudo nmcli connection add type wifi mode ap con-name nonopiap ifname wlan0 ssid nonopiap ipv4.address 192.168.33.254/24
sudo nmcli connection modify nonopiap 802-11-wireless.band bg
sudo nmcli connection modify nonopiap 802-11-wireless-security.key-mgmt wpa-psk
sudo nmcli connection modify nonopiap 802-11-wireless-security.psk mypassword123
sudo nmcli connection modify nonopiap ipv4.method shared
sudo nmcli connection up nonopiap 
sudo nmcli device wifi show-password


Also, the firewall used to be iptables; now it is nftables. Check whether the service is up:
$ sudo systemctl status nftables.service
Then look at the current rules:
$ sudo nft list ruleset
table ip nm-shared-wlan0 {
	chain nat_postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 192.168.44.0/24 ip daddr != 192.168.44.0/24 masquerade
	}

	chain filter_forward {
		type filter hook forward priority filter; policy accept;
		ip daddr 192.168.44.0/24 oifname "wlan0" ct state { established, related } accept
		ip saddr 192.168.44.0/24 iifname "wlan0" accept
		iifname "wlan0" oifname "wlan0" accept
		iifname "wlan0" reject
		oifname "wlan0" reject
	}
}
According to this article, using NetworkManager for internet connection sharing sets up the NAT automatically and also starts dnsmasq.
From the article, shared mode and hotspot mode are different things.

2023/11/16

Android Programming : Ignore the orientation change

ref: when the device (phone) changes orientation (portrait -- landscape), the application goes through:
  • Surface destroyed
  • Surface created
and in created it has to detect the orientation, to know which way it is now facing.

If you just want a fixed orientation, add an attribute to the activity in AndroidManifest.xml:
android:screenOrientation="nosensor"
Then when the screen rotates, the activity no longer receives the onDestroy / onCreate callbacks.
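In context, the manifest entry looks like this (the activity name here is a placeholder):

```xml
<activity
    android:name=".MainActivity"
    android:screenOrientation="nosensor" />
```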


If instead you follow Google's own documentation and add to the activity:
    android:configChanges="orientation|screenSize|screenLayout|keyboardHidden"
then the application indeed does not destroy and re-create the view, but it still rotates --
apparently the system rotates it for you(?).
Because of this, in a camera application you can see the displayed camera image rotated opposite to the screen rotation.

2023/11/8

Android : start Activity from another package

That is, launching an activity that lives in another package.

Example:
It contains two applications: Target Application and Launch Target.
The Launch Target app brings up Target Application.

Clone it, open the two projects in Android Studio, build both OK, and install them;
then start Launch Target and tap the "Launch Target" text in the middle, and Target Application starts.

People originally suggested getLaunchIntentForPackage() to get the launch intent, but for me it always returned null.
Apparently since Android 11, only applications that declare a queries tag can be discovered that way.
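For reference, the queries declaration that would make getLaunchIntentForPackage() work on Android 11+ goes in the caller's AndroidManifest.xml; the package name is the target used in this example:

```xml
<queries>
    <package android:name="com.example.targetapplication" />
</queries>
```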

So in the end only this approach works:
      Intent launchIntent = new Intent(Intent.ACTION_MAIN);
      launchIntent.setComponent(new ComponentName("com.example.targetapplication","com.example.targetapplication.MainActivity"));
      startActivity(launchIntent);

bookmark: lazarus . GPLed Delphi

The open-source version of the once-famous RAD tool Delphi.

It works on Windows 11 too.
The UI is just like the old Delphi: Forms and Units, you write Pascal and build an exe, which you can also run directly.
Probably the most convenient Windows UI development environment available today.

USB type-c, alternate mode, and VDM

The new USB Type-C protocol adds many capabilities:
  • role swap (host, device)
  • charging ability (PD)
  • alt mode : DP, HDMI
  • audio
All of these are carried out over the CC pins.
Besides the old High / Low / R-GND identification of the far end, CC1/CC2 gained a communication channel, encoded with BMC:


VDMs communicated over CC switch USB into the different modes.


Take DP alternate mode as an example.
After the VDM negotiation enters DP alt mode, some of the USB pins are switched over to the DP signals:


After that, it behaves just like an ordinary DP connection.
-- note that the CC communication is translated into HPD (hot-plug detection) for the monitor.

ref:


An ordinary type-C to DP cable is presumably a cable with this kind of IC inside.
This ST chip explains how display alternate mode is entered via VDM.

In the Linux kernel, per this patch, it lives in drivers/usb/typec/altmodes/displayport.c.
The kernel config option is: CONFIG_TYPEC_DP_ALTMODE

2023/11/2

Try opencv android sdk and snpe...

Following along to see whether it can be reproduced. Requirements:
  • Android Studio, NDK 17.2, CMake 18
  • opencv for android sdk 4.5.4 : unzip to the path used in the CMakeLists.txt below
  • snpe 1.68 : unzip to the path used in the CMakeLists.txt below
Import the cloned folder with Android Studio.
Wait until the import finishes and all project components are shown.
Select Project Files, then under Object-Detection-with..... choose app - main - cpp and you will see CMakeLists.txt.
Change the opencv and snpe directories in this CMakeLists.txt:
diff --git a/app/src/main/cpp/CMakeLists.txt b/app/src/main/cpp/CMakeLists.txt
index 8aa71f5..3e041ce 100644
--- a/app/src/main/cpp/CMakeLists.txt
+++ b/app/src/main/cpp/CMakeLists.txt
@@ -10,7 +10,7 @@ cmake_minimum_required(VERSION 3.4.1)
 project("objectrecognition")
 
 set(OpenCV_STATIC on)
-set(OpenCV_DIR "/<OPENCV_PATH>/opencv-4.5.4-android-sdk/OpenCV-android-sdk/sdk/native/jni")
+set(OpenCV_DIR "/home/charles-chang/OpenCV-android-sdk/sdk/native/jni")
 find_package(OpenCV REQUIRED)
 
 set(CMAKE_VERBOSE_MAKEFILE on)
@@ -18,7 +18,7 @@ set(CMAKE_VERBOSE_MAKEFILE on)
 # build native_app_glue as a static lib
 include_directories(${ANDROID_NDK}/sources/android/native_app_glue ${COMMON_SOURCE_DIR})
 
-include_directories(/<SNPE_PATH>/snpe-1.68.0.3932/include/zdl)
+include_directories(/home/charles-chang/snpe-1.68.0.3932/include/zdl)
 add_library(app_glue STATIC
         ${ANDROID_NDK}/sources/android/native_app_glue/android_native_app_glue.c)
Then switch back to the Android view and choose build apk; it reports an error in Native_C, so change:
diff --git a/app/src/main/cpp/Native_C.h b/app/src/main/cpp/Native_C.h
index 56c5024..458a822 100644
--- a/app/src/main/cpp/Native_C.h
+++ b/app/src/main/cpp/Native_C.h
@@ -40,8 +40,8 @@ class Native_C {
 
   bool CreateCaptureSession(ANativeWindow* window);
 
-  int32_t GetCameraCount() { return m_camera_id_list->numCameras; }
-  uint32_t GetOrientation() { return m_camera_orientation; };
+  int32_t GetCameraCount() { return camera_id_list->numCameras; }
+  uint32_t GetOrientation() { return camera_orientation; };
 
  private:
With that, the apk builds OK.

Then build the app bundle.
It fails saying the manifest has no version.
Fix it by making up a version:
diff --git a/app/build.gradle b/app/build.gradle
index 3bc0773..7daba68 100644
--- a/app/build.gradle
+++ b/app/build.gradle
@@ -10,6 +10,8 @@ android {
         applicationId 'com.object.recognition'
         minSdkVersion 24
         targetSdkVersion 29
+        versionName '1.0.2'
+        versionCode 3
 
         ndk {
             abiFilters 'armeabi-v7a'//, 'arm64-v8a', 'x86', 'x86_64'
The bundle then builds OK.

Once running, the window never opened: the parameter shadows the member, so the assignment below is a self-assignment and the window pointer was never saved. Change:
diff --git a/app/src/main/cpp/main.cpp b/app/src/main/cpp/main.cpp
index 88c8e52..785636d 100644
--- a/app/src/main/cpp/main.cpp
+++ b/app/src/main/cpp/main.cpp
@@ -156,17 +156,17 @@ void main::OnCreate() {
 void main::OnPause() {}
 void main::OnDestroy() {}
 
-void main::SetNativeWindow(ANativeWindow* native_window) {
+void main::SetNativeWindow(ANativeWindow* anative_window) {
     // Save native window
-    native_window = native_window;
+    native_window = anative_window;
 }
 
 void main::SetUpCamera() {
 
     native_camera = new Native_C(selected_camera_type);
     native_camera->MatchCaptureSizeRequest(&m_view,
-                                             ANativeWindow_getWidth(native_window),
-                                             ANativeWindow_getHeight(native_window));
+                                             720/*ANativeWindow_getWidth(native_window)*/,
+                                             1080/*ANativeWindow_getHeight(native_window)*/);
 
     LOGI("______________mview %d\t %d\n", m_view.width, m_view.height);
After that there was still a camera aspect-ratio problem: no usable camera configuration was found, so the size is hard-coded.

Then adb root, adb remount, and follow the instructions to push the snpe .so files and the dlc file.
The whole folder is pushed, so follow the instructions; don't improvise with *.




Another project from the same company works the same way: change the opencv and snpe paths in CMakeLists.txt, and add a version in build.gradle.
Also, in the committed code the dlc and classes.txt are read from the sdcard; change that back to the original paths.
Note that this dlc and classes.txt live in the assets folder, and the push target folder is models, not the previous Object Detection project's mode..
Also remove the video-saving feature:
diff --git a/app/build.gradle b/app/build.gradle
index 11378c8..6b306d9 100644
--- a/app/build.gradle
+++ b/app/build.gradle
@@ -21,6 +21,8 @@ android {
         applicationId 'com.homesight.personrecognition'
         minSdkVersion 24
         targetSdkVersion 29
+        versionName '1.0.2'
+        versionCode 3
 
         ndk {
             abiFilters 'armeabi-v7a'//, 'arm64-v8a', 'x86', 'x86_64'
diff --git a/app/src/main/cpp/CMakeLists.txt b/app/src/main/cpp/CMakeLists.txt
index 12c12fc..c30cae2 100644
--- a/app/src/main/cpp/CMakeLists.txt
+++ b/app/src/main/cpp/CMakeLists.txt
@@ -10,7 +10,7 @@ cmake_minimum_required(VERSION 3.4.1)
 project("persondetection")
 
 set(OpenCV_STATIC on)
-set(OpenCV_DIR "/home/krishnapriya/Desktop/Office_work/opencv-4.7.0-android-sdk/OpenCV-android-sdk/sdk/native/jni")
+set(OpenCV_DIR "/home/charles-chang/opencv-4.7.0-android-sdk/sdk/native/jni")
 find_package(OpenCV REQUIRED)
 
 set(CMAKE_VERBOSE_MAKEFILE on)
@@ -18,7 +18,7 @@ set(CMAKE_VERBOSE_MAKEFILE on)
 # build native_app_glue as a static lib
 include_directories(${ANDROID_NDK}/sources/android/native_app_glue ${COMMON_SOURCE_DIR})
 
-include_directories(/home/krishnapriya/Desktop/Office_work/SNPE/snpe-1.51.0.2663/include/zdl)
+include_directories(/home/charles-chang/snpe-1.51.0.2663/include/zdl)
 #set(SNPE_LIB_DIR "/home/krishnapriya/Desktop/Office_work/SNPE/snpe-1.51.0.2663/lib/aarch64-android-clang6.0")
 #set(DSP_LIB_DIR "/home/krishnapriya/Desktop/Office_work/SNPE/snpe-1.51.0.2663//lib/dsp")
 add_library(app_glue STATIC
diff --git a/app/src/main/cpp/Native_Camera.cpp b/app/src/main/cpp/Native_Camera.cpp
index e601b88..d12dab4 100644
--- a/app/src/main/cpp/Native_Camera.cpp
+++ b/app/src/main/cpp/Native_Camera.cpp
@@ -157,4 +157,4 @@ bool Native_Camera::CreateCaptureSession(ANativeWindow* window) {
                                             &m_capture_request, nullptr);
 
   return true;
-}
\ No newline at end of file
+}
diff --git a/app/src/main/cpp/Person_Detect.cpp b/app/src/main/cpp/Person_Detect.cpp
index acf1158..9983989 100644
--- a/app/src/main/cpp/Person_Detect.cpp
+++ b/app/src/main/cpp/Person_Detect.cpp
@@ -1,4 +1,4 @@
-3#include "Person_Detect.h"
+#include "Person_Detect.h"
 #include <unistd.h>
 #include <cmath>
 #include <opencv2/core/core.hpp>
@@ -55,8 +55,8 @@ void Person_Detect::SetUpCamera() {
 
     m_native_camera = new Native_Camera(m_selected_camera_type);
     m_native_camera->MatchCaptureSizeRequest(&m_view,
-                                             ANativeWindow_getWidth(m_native_window),
-                                             ANativeWindow_getHeight(m_native_window));
+                                             720/*ANativeWindow_getWidth(m_native_window)*/,
+                                             480/*ANativeWindow_getHeight(m_native_window)*/);
 
     LOGI("______________mview %d\t %d\n", m_view.width, m_view.height);
     LOGI("______________mview %d\t %d\n", ANativeWindow_getWidth(m_native_window),ANativeWindow_getHeight(m_native_window));
@@ -71,8 +71,8 @@ void Person_Detect::SetUpCamera() {
     m_camera_ready = m_native_camera->CreateCaptureSession(image_reader_window);
 }
 
-//std::string class_name_path = "/storage/emulated/0/appData/models/classes.txt";
-std::string class_name_path = "/sdcard/Documents/classes.txt";
+std::string class_name_path = "/storage/emulated/0/appData/models/classes.txt";
+//std::string class_name_path = "/sdcard/Documents/classes.txt";
 std::vector<std::string> load_class_list()
 {
     std::vector<std::string> class_list;
@@ -88,7 +88,7 @@ std::vector<std::string> class_list = load_class_list();
 
 void Person_Detect::CameraLoop() {
     bool buffer_printout = false;
-    video_writer.open("/sdcard/Documents/Person_Detect_video.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 10.0, cv::Size(640, 480), true);
+    //video_writer.open("/sdcard/Documents/Person_Detect_video.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 10.0, cv::Size(640, 480), true);
 
     while (1) {
         if (m_camera_thread_stopped) { break; }
@@ -210,14 +210,14 @@ void Person_Detect::CameraLoop() {
         }
         cv::imwrite("/storage/emulated/0/appData/models/Person_Detect_bgr.jpg",bgr_img);
         cv::resize(img_mat, out_img, cv::Size(640, 480));
-        video_writer.write(out_img);
+        //video_writer.write(out_img);
         cv::imwrite("/storage/emulated/0/appData/models/Person_Detect_image.jpg",out_img);
 
         pred_out.clear();
         ANativeWindow_unlockAndPost(m_native_window);
         ANativeWindow_release(m_native_window);
     }
-    video_writer.release();
+    //video_writer.release();
 
 }
 
diff --git a/app/src/main/cpp/Person_Detect.h b/app/src/main/cpp/Person_Detect.h
index cbc6971..9c194f2 100644
--- a/app/src/main/cpp/Person_Detect.h
+++ b/app/src/main/cpp/Person_Detect.h
@@ -75,8 +75,8 @@ private:
 
     cv::VideoWriter video_writer;
 
-//    std::string model_path = "/storage/emulated/0/appData/models/yolov5_person_latest.dlc";
-    std::string model_path = "/sdcard/Download/Telegram/yolov5_person_latest.dlc";
+    std::string model_path = "/storage/emulated/0/appData/models/yolov5_person_latest.dlc";
+    //std::string model_path = "/sdcard/Download/Telegram/yolov5_person_latest.dlc";
     std::vector<std::string> output_layers {OUTPUT_LAYER_1};
 
     std::map <std::string, std::vector<float>> pred_out;
diff --git a/local.properties b/local.properties
index 06c338c..ae5b6a0 100644
--- a/local.properties
+++ b/local.properties
@@ -4,5 +4,5 @@
 # Location of the SDK. This is only used by Gradle.
 # For customization when using a Version Control System, please read the
 # header note.
-#Thu Aug 17 18:03:24 IST 2023
-sdk.dir=/home/krishnapriya/Android/Sdk
+#Wed Nov 01 16:48:14 CST 2023
+sdk.dir=/home/charles-chang/Android/Sdk

2023/10/31

scrcpy, linux

ref: for use on Linux, an install_release.sh script is provided;
just run that script and it installs to /usr/local/bin.
The script also pulls updates automatically.
One caveat: stay on master; checking out a specific tag produces a client/server version-mismatch error.

Ubuntu also offers an apt package, but its version is too old (1.2) and it does not work correctly with some newer Android OS versions (Android 12).

2023/10/30

build caffe for snpe

snpe 1.x setup seems to be simpler(?)

Looking at the 1.68 release:
snpe-caffe-to-dlc uses python3, and the onnx and pytorch converters are python3 as well.
So when building caffe, it has to be configured for python3.

This Makefile.config someone wrote can be used as a reference:
USE_CUDNN := 0
CPU_ONLY := 1
USE_OPENCV := 0
USE_LEVELDB := 0
USE_LMDB := 0
BLAS := open
ANACONDA_HOME := /opt/conda/envs/snpe
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
                $(ANACONDA_HOME)/include/python3.6m \
                $(ANACONDA_HOME)/lib/python3.6/site-packages/numpy/core/include
PYTHON_LIB := $(ANACONDA_HOME)/lib
PYTHON_LIBRARIES := boost_python36 python3.6m
WITH_PYTHON_LAYER := 1
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
USE_NCCL := 0
USE_PKG_CONFIG := 0
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
DEBUG := 0
TEST_GPUID := 0
Q ?= @
Modifying Makefile.config along these lines requires building libboost_python yourself.
Referring to another writeup on caffe for python3,
changing the library to boost_python3 makes 'make pycaffe' succeed.



I also tried the conda install approach from this article.
With conda-installed caffe (python 2.7.17), running snpe-caffe-to-dlc failed:
$ snpe-caffe-to-dlc --input_network deploy.prototxt --caffe_bin mobilenet_iter_73000.caffemodel --output_path mobile_net.dlc
Encountered Error: ERROR_CAFFE_NOT_FOUND: Error loading caffe, Message: No module named 'caffe'. PYTHONPATH: 
['/home/charles-chang/snpe-1.68.0.3932/bin/x86_64-linux-clang', '/home/charles-chang/snpe-1.68.0.3932/models/alexnet/scripts', 
 '/home/charles-chang/snpe-1.68.0.3932/models/lenet/scripts', '/home/charles-chang/snpe-1.68.0.3932/lib/python', 
 '/home/charles-chang/qidk/Solutions/VisionSolution1-ObjectDetection/model', 
 '/usr/lib/python36.zip', '/usr/lib/python3.6', 
 '/usr/lib/python3.6/lib-dynload', 
 '/home/charles-chang/.local/lib/python3.6/site-packages', 
 '/usr/local/lib/python3.6/dist-packages', 
 '/usr/lib/python3/dist-packages']

Stack Trace:
Traceback (most recent call last):
  File "/home/charles-chang/snpe-1.68.0.3932/lib/python/qti/aisw/converters/caffe/caffe_to_ir.py", line 92, in convert
    import caffe
ModuleNotFoundError: No module named 'caffe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/charles-chang/snpe-1.68.0.3932/bin/x86_64-linux-clang/snpe-caffe-to-dlc", line 46, in <module>
    graph = converter.convert()
  File "/home/charles-chang/snpe-1.68.0.3932/lib/python/qti/aisw/converters/caffe/caffe_to_ir.py", line 95, in convert
    raise Exception(code_to_message.get_error_message("ERROR_CAFFE_NOT_FOUND")(e.msg, str(sys.path)))
Exception: ERROR_CAFFE_NOT_FOUND: Error loading caffe, Message: No module named 'caffe'. PYTHONPATH: 
['/home/charles-chang/snpe-1.68.0.3932/bin/x86_64-linux-clang', 
 '/home/charles-chang/snpe-1.68.0.3932/models/alexnet/scripts',
 '/home/charles-chang/snpe-1.68.0.3932/models/lenet/scripts',
 '/home/charles-chang/snpe-1.68.0.3932/lib/python',
 '/home/charles-chang/qidk/Solutions/VisionSolution1-ObjectDetection/model',
 '/usr/lib/python36.zip', '/usr/lib/python3.6',
 '/usr/lib/python3.6/lib-dynload',
 '/home/charles-chang/.local/lib/python3.6/site-packages',
 '/usr/local/lib/python3.6/dist-packages',
 '/usr/lib/python3/dist-packages']
This also looks like a python3 issue.

2023/10/27

qidk object detection : YoloNas and MobileSSD

ref: the yolonas project provides a jupyter notebook file, requiring python 3.6+, that uses torch and onnx to produce the dlc.
mobileSSD instead documents the steps in its readme; the original model is caffe.

For the Android part:
yolonas provides a script that downloads the needed opencv source and copies the converted dlc and the snpe .so files into the source folder.
mobileSSD documents the steps in its readme.


Mobilenet SSD

mobilenetssd uses caffe, so snpe needs caffe set up.
Each framework's tools in snpe are independent, so you can create a conda env per tool.
I did this on ubuntu 18.04, with the system python switched to python3 (3.6) via update-alternatives.
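The update-alternatives step, for reference; the priority value 1 is arbitrary:

```shell
# register python3 as an alternative for /usr/bin/python and select it
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
sudo update-alternatives --set python /usr/bin/python3
```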

snpe 1.68's caffe converting tool, snpe-caffe-to-dlc, is a python script whose shebang specifies python3,
so caffe has to be built with python3 support.
Following 'build caffe for snpe', I built the standard caffe.
Following the mobilessd instructions, I downloaded the prototxt and caffemodel and ran snpe-caffe-to-dlc; it errored:
$ snpe-caffe-to-dlc --input_network deploy.prototxt --caffe_bin mobilenet_iter_73000.caffemodel --output_path moblie_net.dlc
ERROR_CAFFE_CAFFE_PARSING_ERROR: Caffe could not parse deploy.prototxt: 2367:3 : Message type "caffe.LayerParameter" has no field named "permute_param".
INFO_CAFFE_CAFFE_INSTALLATION_ERROR: Caffe installation in use: /home/charles-chang/caffe/python/caffe/__init__.py
It says caffe does not recognize the permute_param layer.
The caffe-ssd version of caffe is needed.
So per the instructions, use that version, check out the ssd branch, and build it for python3. The Makefile changes:
$ git diff
diff --git a/Makefile b/Makefile
index 3fd68d1d..0789ae64 100644
--- a/Makefile
+++ b/Makefile
@@ -34,7 +34,7 @@ LIB_BUILD_DIR := $(BUILD_DIR)/lib
 STATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a
 DYNAMIC_VERSION_MAJOR          := 1
 DYNAMIC_VERSION_MINOR          := 0
-DYNAMIC_VERSION_REVISION       := 0-rc3
+DYNAMIC_VERSION_REVISION       := 0
 DYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so
 #DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)
 DYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
@@ -178,7 +178,7 @@ ifneq ($(CPU_ONLY), 1)
        LIBRARIES := cudart cublas curand
 endif

-LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_hl hdf5
+LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_serial_hl hdf5_serial

 # handle IO dependencies
 USE_LEVELDB ?= 1
@@ -328,6 +328,12 @@ ifeq ($(USE_CUDNN), 1)
        COMMON_FLAGS += -DUSE_CUDNN
 endif

+# NCCL acceleration configuration
+ifeq ($(USE_NCCL), 1)
+       LIBRARIES += nccl
+       COMMON_FLAGS += -DUSE_NCCL
+endif
+
 # configure IO libraries
 ifeq ($(USE_OPENCV), 1)
        COMMON_FLAGS += -DUSE_OPENCV
@@ -571,7 +577,7 @@ $(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
        @ echo AR -o $@
        $(Q)ar rcs $@ $(OBJS)

-$(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)
+$(BUILD_DIR)/%.o: %.cpp $(PROTO_GEN_HEADER) | $(ALL_BUILD_DIRS)
        @ echo CXX $<
        $(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
                || (cat $@.$(WARNS_EXT); exit 1)
@@ -688,6 +694,6 @@ $(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
        install -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
        cd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT);   ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
        # add python - it's not the standard way, indeed...
-       cp -r python $(DISTRIBUTE_DIR)/python
+       cp -r python $(DISTRIBUTE_DIR)/

 -include $(DEPS)
Makefile.config.example (which actually has to be copied to Makefile.config):
diff --git a/Makefile.config.example b/Makefile.config.example
index eac93123..f82f01e4 100644
--- a/Makefile.config.example
+++ b/Makefile.config.example
@@ -31,21 +31,19 @@ CUDA_DIR := /usr/local/cuda
 # CUDA_DIR := /usr

 # CUDA architecture setting: going with all of them.
-# For CUDA < 6.0, comment the lines after *_35 for compatibility.
-CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
-             -gencode arch=compute_20,code=sm_21 \
-             -gencode arch=compute_30,code=sm_30 \
-             -gencode arch=compute_35,code=sm_35 \
-             -gencode arch=compute_50,code=sm_50 \
-             -gencode arch=compute_52,code=sm_52 \
-             -gencode arch=compute_61,code=sm_61
+# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
+# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
+# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
+CUDA_ARCH := \
+               -gencode arch=compute_60,code=sm_60 \
+               -gencode arch=compute_61,code=sm_61 \
+               -gencode arch=compute_61,code=compute_61

 # BLAS choice:
 # atlas for ATLAS (default)
 # mkl for MKL
 # open for OpenBlas
-# BLAS := atlas
-BLAS := open
+BLAS := atlas
 # Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
 # Leave commented to accept the defaults for your choice of BLAS
 # (which should work)!
@@ -63,19 +61,19 @@ BLAS := open

 # NOTE: this is required only if you will compile the python interface.
 # We need to be able to find Python.h and numpy/arrayobject.h.
-PYTHON_INCLUDE := /usr/include/python2.7 \
-               /usr/lib/python2.7/dist-packages/numpy/core/include
+#PYTHON_INCLUDE := /usr/include/python2.7 \
+#              /usr/lib/python2.7/dist-packages/numpy/core/include
 # Anaconda Python distribution is quite popular. Include path:
 # Verify anaconda location, sometimes it's in root.
-# ANACONDA_HOME := $(HOME)/anaconda2
+# ANACONDA_HOME := $(HOME)/anaconda
 # PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
-               $(ANACONDA_HOME)/include/python2.7 \
-               $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
+               # $(ANACONDA_HOME)/include/python2.7 \
+               # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

 # Uncomment to use Python 3 (default is Python 2)
-# PYTHON_LIBRARIES := boost_python3 python3.5m
-# PYTHON_INCLUDE := /usr/include/python3.5m \
-#                 /usr/lib/python3.5/dist-packages/numpy/core/include
+ PYTHON_LIBRARIES := boost_python3 python3.6m
+ PYTHON_INCLUDE := /usr/include/python3.6m \
+                 /usr/lib/python3.6/dist-packages/numpy/core/include

 # We need to be able to find libpythonX.X.so or .dylib.
 PYTHON_LIB := /usr/lib
@@ -89,16 +87,20 @@ PYTHON_LIB := /usr/lib
 # WITH_PYTHON_LAYER := 1

 # Whatever else you find you need goes here.
-INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
 LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

 # If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
 # INCLUDE_DIRS += $(shell brew --prefix)/include
 # LIBRARY_DIRS += $(shell brew --prefix)/lib

+# NCCL acceleration switch (uncomment to build with NCCL)
+# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
+# USE_NCCL := 1
+
 # Uncomment to use `pkg-config` to specify OpenCV library paths.
 # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
-# USE_PKG_CONFIG := 1
+USE_PKG_CONFIG := 1

 # N.B. both build and distribute dirs are cleared on `make clean`
 BUILD_DIR := build



On using snpe for MobilenetSSD, from install (building caffe) through training to converting to dlc: this one is actually what they used for their own project; clone it and build it directly and it runs on snapdragon phones.
It first tries to load the model with the GPU and falls back to the CPU on failure; on a pixel2 the GPU failed to open, so it only starts on the CPU.
On Qualcomm's QCS610, its snpe-release.aar complains that the device is not a snapdragon and refuses to run.
Switching to the snpe-1.63 snpe-release.aar and a dlc converted with that version makes the build run on the QCS610.
But that build then fails on the pixel 2, complaining about permissions and the .so files.

This version uses the same model (prototxt, caffe bin) as qidk's mobilenetSSD, but the qidk example fails at run time.
Inspecting with android studio: this dashcam project reads the infer output with the key detection_out, while qidk asks for detection_output_number_detection.
Watching the infer output in android studio, the only key present is detection_out, which is probably why the qidk example never detects any objects.

The replies in this thread show what the infer result looks like.

2023/10/23

bookmarks : chrome developer mode,

Press F12 to open the developer panel on the right, select the "console" tab, then in the console enter:
document.body.contentEditable="true"

Another one: same as the previous, F12 into the developer panel, select the console tab.
In the console enter:
document.body.innerText
Press Enter and the page text is extracted; find the passage you want and copy it out.


For yet another kind of restriction: same as above, enter developer mode, select console, and type:
document.designMode='on'
and that's it.
After selecting text with these tricks, right-click copy sometimes triggers a blocking overlay; in that case use Ctrl-X (cut) instead, which never hits the "copy" event.


Disabling JavaScript: F12 into developer mode, press Ctrl-Shift-P in the console; a "Run >" prompt appears. Type JavaScript there, and "Disable JavaScript - Debugger" shows up below; select it.
This seems to stop working once you leave developer mode.

2023/10/17

bookmark : yolonas custom data

Clone it under /mnt/hdd8t/charles-chang/.
Follow the first link in ref:
conda create yolonas, then install -r requirement.txt, fixing the install errors; also test-run the imports in the jupyter notebook and install whatever is missing.
Download the kaggle dataset and unpack it for a look.

2023/10/5

docker exec & docker attach

For a container that is already up and running, attach takes over its console(?),
while exec runs a command inside the container.

What is the difference between the two?

attach connects you to the process the container is currently running, so if it was started with /bin/bash, you land in that original shell.
exec starts a new process to run the program you specify; if you specify /bin/bash, you enter this newly created shell.

So exiting from the exec'ed /bin/bash terminates only that new process; the container keeps running.
But exiting from the attached /bin/bash exits the whole container.
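A quick sketch of the difference; the image and container name are arbitrary:

```shell
docker run -dit --name demo ubuntu /bin/bash   # PID 1 is this bash
docker exec -it demo /bin/bash   # new process; `exit` here leaves demo running
docker attach demo               # the original bash; `exit` here stops the container
```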

2023/9/27

setup git server by git-daemon

I did this once before, but on ubuntu 18.04 it no longer works: it complains about missing supervise/ok, status, lock.. etc. --- all things sv needs,
because git-daemon-run launches it with sv.

In fact git-daemon is part of git's source code, so apt install git-core gives you git-daemon.
git-daemon provides the git:// service.
So all that needs configuring is which folders to serve and whether to allow write (push), plus some logging and such.
It has nothing to do with which service manager is used.

Test:

Manually adduser gitdaemon, then log in as gitdaemon, run git-daemon, and check whether git:// works.

This way gitdaemon's HOME serves as the git repository; creating a new repo is just logging in as gitdaemon and running clone --bare in its own directory.

Once that works, write it into a systemd service.
$ sudo adduser gitdaemon
$ sudo su - gitdaemon
$ mkdir repository
$ cd repository
$ git clone --bare /home/test/test test.git
$ /usr/lib/git-core/git-daemon --export-all --verbose --enable=receive-pack --syslog --base-path=/home/gitdaemon/repository /home/gitdaemon/repository
Then other users can:
git clone git://127.0.0.1/testgit.git


After the manual run is OK, write the systemd service file:
$ cat /etc/systemd/system/gitdaemon.service 
[Unit]
Description = Git Daemon Service

[Service]
Type=simple
User=gitdaemon
ExecStart=/usr/lib/git-core/git-daemon --export-all --verbose --enable=receive-pack --syslog --base-path=/home/gitdaemon/repository /home/gitdaemon/repository

[Install]
WantedBy=multi-user.target
After adding it, enable & start, then check status:
$ sudo systemctl status gitdaemon.service
● gitdaemon.service - Git Daemon Service
   Loaded: loaded (/etc/systemd/system/gitdaemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2023-09-27 16:29:00 CST; 1min 47s ago
 Main PID: 8231 (git-daemon)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/gitdaemon.service
           └─8231 /usr/lib/git-core/git-daemon --export-all --verbose --enable=receive-pack --syslog --base-path=/home/gitdaemon/repository /home/gitdaemon/repository &

Sep 27 16:29:00 xeontitan systemd[1]: Started Git Daemon Service.
Sep 27 16:29:00 xeontitan git-daemon[8231]: Ready to rumble
Sep 27 16:29:32 xeontitan git-daemon[8252]: Connection from 192.168.147.182:56802
Sep 27 16:29:32 xeontitan git-daemon[8252]: Extended attribute "host": xeontitan
Sep 27 16:29:32 xeontitan git-daemon[8252]: Request upload-pack for '/testgit.git'
Sep 27 16:29:32 xeontitan git-daemon[8231]: [8252] Disconnected
Sep 27 16:29:57 xeontitan git-daemon[8262]: Connection from 192.168.147.182:52568
Sep 27 16:29:57 xeontitan git-daemon[8262]: Extended attribute "host": xeontitan
Sep 27 16:29:57 xeontitan git-daemon[8262]: Request receive-pack for '/testgit.git'
Sep 27 16:29:57 xeontitan git-daemon[8231]: [8262] Disconnected
That should mean it works.



Later, when several people ran android repo sync at the same time, errors showed up:
remote: Counting objects: 100% (128934/128934), done.
remote: Compressing objects: 100% (53641/53641), done.
remote: Total 128913 (delta 57825), reused 127304 (delta 56375)
Fetching: 78% (792/1011) 41:10 | 10 jobs | 16:10 platform/frameworks/base @ frameworks/base
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

platform/prebuilts/android-emulator:
fatal: Could not read from remote repository.
syslog shows:
Oct  4 13:11:48 rd1-ubuntu git-daemon[1791508]: Too many children, dropping connection
Oct  4 13:11:51 rd1-ubuntu git-daemon[1791508]: message repeated 3 times: [ Too many children, dropping connection]
The git-daemon option docs have:
     --max-connections=<n>
         Maximum number of concurrent clients, defaults to 32.
         Set it to zero for no limit.
So modify the git-daemon service:
$ cat /etc/systemd/system/gitdaemon.service
[Unit]
Description = Git Daemon Service

[Service]
Type=simple
User=gitdaemon
ExecStart=/usr/lib/git-core/git-daemon --export-all --verbose --enable=receive-pack --syslog --max-connections=0 --base-path=/home/gitdaemon/repository /home/gitdaemon/repository

[Install]
WantedBy=multi-user.target
Then stop, daemon-reload, start..

2023/9/26

turns out it is git-daemon-run
It creates a gitdaemon user, belonging to nogroup.

git-daemon-run is just some scripts to launch and configure git-daemon.
It uses sv, not systemd, so it no longer fits well.
Writing a systemd service to launch git-daemon directly is easier; of course, the git-daemon user and group then have to be created by hand.

2023/9/23

about ubuntu 22.04

Assorted ubuntu 22.04 version and .so issues all get recorded here.

Because it ships libssl3, everything that wants libssl1 complains there is no supported ssl.
The only cure is installing libssl1:
wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb
sudo dpkg -i libssl1.1_1.1.0g-2ubuntu4_amd64.deb

2023/9/21

Docker : run nvidia cuda ready images

ref: nvidia prebuilds a pile of cuda-enabled docker images: nvidia in docker image

But for docker to support these cuda-ready images, nvidia-container-toolkit has to be installed:
sudo apt install nvidia-container-toolkit
then configure the runtime support:
sudo nvidia-ctk runtime configure --runtime=docker
then restart the docker daemon:
sudo systemctl restart docker
After that, the docker command supports the --gpus all option.

Also, the image's cuda version must not be newer than the host's cuda version; testing with torch.rand(2000,128,device=torch.device('cuda')), it complains that a newer function was used.


There is a dockerfile, taken from "yolov5s_android":
FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu18.04

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update --fix-missing
RUN apt-get install -y python3 python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/cu110/torch_stable.html

# install openvino
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget \
    cpio \
    sudo \
    lsb-release && \
    rm -rf /var/lib/apt/lists/*
# Add a user that UID:GID will be updated by vscode
ARG USERNAME=developer
ARG GROUPNAME=developer
ARG UID=1000
ARG GID=1000
ARG PASSWORD=developer
RUN groupadd -g $GID $GROUPNAME && \
    useradd -m -s /bin/bash -u $UID -g $GID -G sudo $USERNAME && \
    echo $USERNAME:$PASSWORD | chpasswd && \
    echo "$USERNAME   ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER $USERNAME
ENV HOME /home/developer

The original instructions are:
git clone --recursive https://github.com/lp6m/yolov5s_android
cd yolov5s_android
docker build ./ -f ./docker/Dockerfile  -t yolov5s_android
docker run -it --gpus all -v `pwd`:/workspace yolov5s_android bash

RTX3090. python3.6 and pytorch1.10, pytorch1.7

ref: the troublesome torch/cuda pairing.
A newer card (not even that new, just a 3090), i.e. sm_86, needs a recent enough pytorch build to support it.
Otherwise torch.cuda.is_available() is True, and get_device_name() also correctly reports RTX3090,
but declaring a variable on the GPU raises an error:
>import torch
>print(torch.__version__)
1.10.2+cu102
>A = torch.rand(2000,128,device=torch.device('cuda'))
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
It is said pytorch 1.7.1 already supports the RTX3090; the catch is that the pytorch package on the pip repo is paired with a cuda version too old to support the RTX3090.
So download a pytorch built against a newer cuda from the pytorch website and install that.

My machine's cuda version is 11.7, so:
pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
After installing this, the test command above no longer errors.

Browsing the cu113 torch_stable index, support only starts at torch 1.10.0, so per ref(2) you can check the -f address, cu110/torch_stable.html, for torch-1.7.1.
It turns out only cu110 has it; from cu111 onward everything starts at torch-1.10.0.
pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/cu110/torch_stable.html
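The failure above boils down to whether the installed wheel was compiled for the card's compute capability. This is only an illustration of what the error message reports; the helper name is made up, not a real torch API, and the real check also involves embedded PTX:

```python
# Hypothetical helper mirroring the error message, which lists the
# compute capabilities (sm_XX) the installed wheel was built for.
def wheel_supports(supported_archs, device_arch):
    caps = [int(a.removeprefix("sm_")) for a in supported_archs]
    # Simplification: the wheel covers the device if it was built for an
    # architecture at least as new as the device's.
    return int(device_arch.removeprefix("sm_")) <= max(caps)

old_wheel = ["sm_37", "sm_50", "sm_60", "sm_70"]       # torch 1.10.2+cu102
cu113_wheel = old_wheel + ["sm_75", "sm_80", "sm_86"]  # torch 1.10.0+cu113
print(wheel_supports(old_wheel, "sm_86"))    # False -> the runtime error above
print(wheel_supports(cu113_wheel, "sm_86"))  # True
```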

2023/9/19

android studio and gradle 8.0

This error kept showing up:
compileDebugJavaWithJavac' task (current target is 1.8) and 'kaptGenerateStubsDebugKotlin' task (current target is 17) 
jvm target compatibility should be set to the same Java version.
Plenty of posts blame Gradle 8.0, but updating to 8.1 behaves the same.
They then say to modify the app's build.gradle to pin the target jvm version,
but without a clear explanation.

android studio's build system is gradle, so each release requires a particular gradle version.
The problem above appears with gradle 8.0, so using the last android studio built on gradle 7.X, Android Studio Electric Eel | 2022.1.1 Patch 2,
makes it go away.

Actually gradle 8.0 also works fine, given build.gradle contents updated to match.

TEST project:
  • Data Binding Basic : Failed
  • New Empty kotlin Sample : OK

2023/9/13

build and run SNPE Android Example : image-classifiers

First run the inception_v3 script under examples' Models to download the trained model and parameters.
The script uses scripts from SDK/bin and py files from the python tensorflow package,
so the SDK and tensorflow locations have to be set up first.
Also, it downloads into the SDK's example directory via an absolute path, so the example code cannot be copied out; it has to stay inside the SDK tree.

The snpe-provided python modules are needed, so the sdk's lib/python has to be added to PYTHONPATH; bin/envsetup.sh does this.
envsetup sets SNPE_ROOT, PYTHONPATH, PATH, LD_LIBRARY_PATH

2023/9/11

debugging C++ with VSCode, in ubuntu

Using VSCode on ubuntu is no different from windows; if anything linux is handier, since the c++ compiler is open source: no need to buy the MS build tools or install msys to run g++.
So as long as the host has a g++ that can build the source code, VSCode can use it to build.

Everything else is the same: VSCode manages projects at folder granularity.

Also, the project setting file VSCode needs, tasks.json, is likewise per project folder, placed in the folder's .vscode directory.
For C++, the first time you run or debug, if the project folder has no tasks.json, VSCode creates one based on the source language/tool.

Reference: Using C++ on Linux in VS Code

First create a folder and run vscode in it (or open the folder from within vscode).
The folder holds the C++ source files.
Open a C++ source file and choose run or debug.
tasks.json then gets generated automatically.
Breakpoints set in the source will pause both run and debug.

Also: the first time VSCode opens a C++ file, it asks to install the C++ extension.

2023/9/7

install and setup snpe 2.14 in ubuntu 20.04

The current latest snpe (2.14) has to be installed via qpm.
So after downloading the snpe sdk, install qpm first.
Once qpm is installed and logged in, it runs as a daemon (service), and the qpm-cli command is available.

Extract/install snpe with qpm-cli; no matter where it is extracted, it installs to:
qpm-cli --extract XXOO.qik
SUCCESS: Installed qualcomm_neural_processing_sdk.Core at /opt/qcom/aistack/snpe/2.14.0.230828
After installing, it can be queried with:
qpm-cli --info qualcomm_neural_processing_sdk

Product Name           : qualcomm_neural_processing_sdk
Product Classification : Binary
Installed version      : None
Available version(s)   : 2.14.0.230828
                         2.13.4.230831
                         2.13.2.230822
                         2.13.0.230730
                         2.12.0.230626
                         2.11.0.230603
                         2.10.40.4
The docs directory of the install contains html documentation.

Following the bundled docs for setup: this 2023/9 release targets ubuntu 20.04.
It provides two scripts to check/install the required packages:
  • check-python-dependency
  • check-linux-dependency.sh
check-python-dependency must run inside a virtual env (VENV or conda), so create a python environment just for snpe.
The docs say the default is 3.8, but running tensorflow 1.15.0 requires 3.6.
Start with 3.8.. conda create snpe3.8

Then run envsetup.sh:
$ source /opt/qcom/aistack/snpe/2.14.0.230828/bin/envsetup.sh 
[INFO] AISW SDK environment set
[INFO] SNPE_ROOT: /opt/qcom/aistack/snpe/2.14.0.230828
check python.. which produces a pile of errors:
$ /opt/qcom/aistack/snpe/2.14.0.230828/bin/check-python-dependency 
/opt/qcom/aistack/snpe/2.14.0.230828/bin/check-python-dependency:55: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
WARNING: attrs installed version: 21.4.0 does not match tested version: 22.2.0
WARNING: decorator installed version: 4.4.2 does not match tested version: 5.1.1
WARNING: joblib installed version: 1.1.0 does not match tested version: 1.0.1
WARNING: packaging installed version: 21.3 does not match tested version: 21.0
WARNING: pillow installed version: 9.4.0 does not match tested version: 6.2.1
WARNING: scipy installed version: 1.8.1 does not match tested version: 1.9.1
Python Modules missing: absl-py, invoke, lxml, mako, matplotlib, numpy, opencv-python, pandas, pathlib2, protobuf, pytest, pyyaml, six, tabulate
Installing missing modules using pip3
Installing absl-py version: 0.13.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
nbconvert 6.5.0 requires entrypoints>=0.2.2, which is not installed.
Installing invoke version: 2.0.0
Installing lxml version: 4.6.2
Installing mako version: 1.1.0
Installing matplotlib version: 3.3.4
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
moviepy 1.0.3 requires requests<3.0,>=2.8.1, which is not installed.
Installing numpy version: 1.23.5
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
moviepy 1.0.3 requires requests<3.0,>=2.8.1, which is not installed.
Installing opencv-python version: 4.5.2.52
Installing pandas version: 1.1.5
Installing pathlib2 version: 2.3.6
Installing protobuf version: 3.19.6
Installing pytest version: 7.0.1
Installing pyyaml version: 3.10
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
naked 0.1.31 requires requests, which is not installed.
Installing six version: 1.16.0
Installing tabulate version: 0.8.5
Installing the failing packages one by one with pip got it through, though the versions are not necessarily identical.

The docs say the android ndk qualified for snpe is r19c.
In android-studio's sdk-manager, the only 19 release is 19.2.
After installing, CHANGELOG.md shows it is exactly r19c, so the path is Android/Sdk/ndk/19.2.5345600

So set the variables and run the checks...
$export ANDROID_NDK_ROOT=/home/charles-chang/Android/Sdk/ndk/19.2.5345600
$export PATH=${ANDROID_NDK_ROOT}:${PATH}
$/opt/qcom/aistack/snpe/2.14.0.230828/bin/envcheck -n
Checking Android NDK Environment
--------------------------------------------------------------
[INFO] Found ndk-build at /home/charles-chang/Android/Sdk/ndk/19.2.5345600/ndk-build and ANDROID_NDK_ROOT is also set.
--------------------------------------------------------------
$ /opt/qcom/aistack/snpe/2.14.0.230828/bin/envcheck -c
Checking Clang-9 Environment
--------------------------------------------------------------
[INFO] Found clang++-9 at /usr/bin/clang++-9
--------------------------------------------------------------
Also: pip install torch==1.8.1 onnx==1.11.0 tensorflow==2.10.1 tflite==2.3.0

2023/9/6

/proc/meminfo MemAvaileble

In fs/proc/meminfo.c:
    available = si_mem_available();
    ...

    show_val_kb(m, "MemTotal:       ", i.totalram);
    show_val_kb(m, "MemFree:        ", i.freeram);
    show_val_kb(m, "MemAvailable:   ", available);
    ...
In mm/page_alloc.c:
long si_mem_available(void)
{
    long available;
    unsigned long pagecache;
    unsigned long wmark_low = 0;
    unsigned long pages[NR_LRU_LISTS];
    unsigned long reclaimable;
    struct zone *zone;
    int lru;

    for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
        pages[lru] = global_node_page_state(NR_LRU_BASE + lru);

    for_each_zone(zone)
        wmark_low += low_wmark_pages(zone);

    /*
     * Estimate the amount of memory available for userspace allocations,
     * without causing swapping.
     */
    available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;

    /*
     * Not all the page cache can be freed, otherwise the system will
     * start swapping. Assume at least half of the page cache, or the
     * low watermark worth of cache, needs to stay.
     */
    pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
    pagecache -= min(pagecache / 2, wmark_low);
    available += pagecache;

    /*
     * Part of the reclaimable slab and other kernel memory consists of
     * items that are in use, and cannot be freed. Cap this estimate at the
     * low watermark.
     */
    reclaimable = global_node_page_state(NR_SLAB_RECLAIMABLE) +
            global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
    available += reclaimable - min(reclaimable / 2, wmark_low);

    if (available < 0)
        available = 0;
    return available;
}
So MemAvailable is roughly free + page cache + reclaimable slab (buffers?), minus reserves and the low watermarks.
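From userspace the kernel's estimate can simply be read back out of /proc/meminfo; a minimal parser (values in kB, as the file reports them):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into {field: value in kB}."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            # first token after the colon is the number, usually followed by "kB"
            info[key.strip()] = int(rest.split()[0])
    return info

# Sample text in the /proc/meminfo format (numbers made up):
sample = """MemTotal:       16384256 kB
MemFree:         1024000 kB
MemAvailable:    8192000 kB"""
mem = parse_meminfo(sample)
print(mem["MemAvailable"])  # 8192000
```

On a real system: `parse_meminfo(open("/proc/meminfo").read())`.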

2023/9/4

windows10 : pyopengl , glutInit( ) error : OpenGL.error.NullFunctionError

ref: this ran fine under windows7, then failed after the upgrade to windows10.
Test with example code:
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

def Draw():
        glClear(GL_COLOR_BUFFER_BIT)
        glutWireTeapot(0.5)
        glFlush()

glutInit()
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA)
glutInitWindowSize(300,300)
glutCreateWindow(b"teapot")
glutDisplayFunc(Draw)
glutIdleFunc(Draw)
glutMainLoop()

if __name__ == '__main__':
        Draw()
The error:
Traceback (most recent call last):
  File "D:\teapot.py", line 11, in <module>
    glutInit()
  File "D:\Python311\Lib\site-packages\OpenGL\GLUT\special.py", line 333, in glutInit
    _base_glutInit( ctypes.byref(count), holder )
  File "D:\Python311\Lib\site-packages\OpenGL\platform\baseplatform.py", line 423, in __call__
     raise error.NullFunctionError(
OpenGL.error.NullFunctionError: Attempt to call an undefined function glutInit, check for bool(glutInit) before calling  
It turns out google says the official PyPI opengl package doesn't bundle two dlls...
pipewire 0.3.78-1 (https://pipewire-debian.github.io)
 

  Debian Package - 

	- enable modemmanager

  Pipewire -

    - For more : https://gitlab.freedesktop.org/pipewire/pipewire/-/releases
  .

Troubleshooting - 

  - Have any package regarding issue? report on github :
    https://github.com/pipewire-debian/pipewire-debian/issues/new/choose

  - Upstream recommends to use 'WirePlumber' instead 'pipewire-media-session'      
    as session manager, to get it add another PPA,      
    'sudo add-apt-repository ppa:pipewire-debian/wireplumber-upstream'      
    For more instruction read : https://pipewire-debian.github.io    

2023/8/9

add one rotation axis

The mmwave python tool has azimuth and elevation tilt, but one axis is missing (name unknown).
So add it.

code:
elevAziRotMatrix = np.matrix([  [  math.cos(aziTilt),  math.cos(elevTilt)*math.sin(aziTilt), math.sin(elevTilt)*math.sin(aziTilt)],
                                [ -math.sin(aziTilt),  math.cos(elevTilt)*math.cos(aziTilt), math.sin(elevTilt)*math.cos(aziTilt)],
                                [                  0,                   -math.sin(elevTilt),                   math.cos(elevTilt)],
                             ])
Compare with [https://en.wikipedia.org/wiki/Rotation_matrix wiki]:


Guess the correspondence by plugging in 0 degrees..
so the missing angle looks like it should be β.

Also, the directions of eleTilt and aziTilt are opposite to the coordinate directions defined for α β γ:
  • β : rotation
  • α : elevation
  • γ : azimuth
So the code is:
 cos(γ),  cos(α)sin(γ), sin(α)sin(γ)
 sin(γ),  cos(α)cos(γ), sin(α)cos(γ)
      0, -sin(α)      , cos(α)
Comparing with the formula, the α and γ angle directions are presumably reversed, so the sin terms all get multiplied by -1.

Add rotTilt : β

The parts that were dropped because sin(0) = 0 get added back as sin(β), that is, sin(rotTilt).
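The composition above can be checked numerically: a sketch in plain python with the three elementary rotations (the axis assignment to elev/rot/azi and the composition order here are my assumptions following the wiki convention; the actual tool flips sin signs as noted):

```python
import math

# Elementary rotation matrices (wiki convention, right-handed):
def Rx(a):  # alpha: elevation-like tilt
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(b):  # beta: the missing "rotTilt" axis
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(g):  # gamma: azimuth-like tilt
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tilt_matrix(elev, rot, azi):
    # Compose the three tilts. With rot = 0, Ry is the identity, which is
    # why the two-angle matrix in the tool has every sin(beta) term zeroed.
    return matmul(matmul(Rz(azi), Ry(rot)), Rx(elev))
```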

2023/8/7

notes : 3D people counting, point cloud detection algorithm

According to the [https://dev.ti.com/tirex/explore/node?a=1AslXXD__1.10.00.13&node=A__AIQPG9x7K34A8l4ZELgznA__radar_toolbox__1AslXXD__1.10.00.13&r=1AslXXD__1.20.00.11 3D people tracking detection layer tuning guide],
the processing order for this 3D point cloud is:
  • every chirp gets a range FFT
  • for reflections of the same TX, the range FFTs of all horizontal ANTs go through a capon beamformer to find the azimuth angle
  • this yields a 2D heat map similar to an R-V map, except it is now a Range-Azimuth map, because the usual doppler FFT is replaced by the capon beamformer
  • on the 2D heat map (R-A map), a 2D CFAR (2-pass CFAR) finds the signal points (detection points)
  • for each detection point (a bin of the 2D heat map), a capon beamformer over the vertical ANTs finds the elevation angle
  • for each detection point, a doppler FFT over the same bin across consecutive chirps on the same ant computes the velocity

The examples cover two ANT layouts: one mounted on the ceiling, one on the wall.
The ceiling ANT layout is symmetric in X-Z.
The wall ANT layout has more antennas along X.

So the two processing algorithms differ.

The wall ANTs, having more along X, run the azimuth capon beamformer first, then run the elevation beamformer on the resulting bins;
the ceiling layout, with equal X-Z counts, does both....
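The CFAR step in the chain above can be sketched in 1D (cell-averaging CFAR; the 2-pass 2D CFAR runs the same idea along range and then along azimuth). The guard/train/scale numbers here are illustrative, not from the TI demo:

```python
def ca_cfar_1d(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: a cell is a detection if its power exceeds
    scale * (mean of the training cells around it, excluding guard cells)."""
    n = len(power)
    hits = []
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        noise_cells = [power[j] for j in range(lo, hi) if abs(j - i) > guard]
        if not noise_cells:
            continue
        threshold = scale * sum(noise_cells) / len(noise_cells)
        if power[i] > threshold:
            hits.append(i)
    return hits

# A flat noise floor with one strong bin:
spectrum = [1.0] * 32
spectrum[12] = 30.0
print(ca_cfar_1d(spectrum))  # [12]
```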

2023/8/4

go-lang small http file server with upload function

Written by ChatGPT; haven't checked whether it runs...
package main

import (
	"html/template"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

func main() {
	http.HandleFunc("/", handleFileServer)
	http.HandleFunc("/upload", handleUpload)

	// Start the server on port 8080
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		panic(err)
	}
}

func handleFileServer(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		return
	}

	dirPath := "." // Current directory
	filePath := filepath.Join(dirPath, r.URL.Path)
	file, err := os.Open(filePath)
	if err != nil {
		http.Error(w, "File not found", http.StatusNotFound)
		return
	}
	defer file.Close()

	// Check if the path is a directory
	fileInfo, err := file.Stat()
	if err != nil {
		http.Error(w, "Failed to get file info", http.StatusInternalServerError)
		return
	}
	if fileInfo.IsDir() {
		renderDirectory(w, filePath)
		return
	}

	// Serve the file if it's not a directory
	http.ServeFile(w, r, filePath)
}

func renderDirectory(w http.ResponseWriter, dirPath string) {
	tmpl, err := template.New("directory").Parse(directoryTemplate)
	if err != nil {
		http.Error(w, "Internal Server Error", http.StatusInternalServerError)
		return
	}

	// Read the directory content
	dir, err := os.ReadDir(dirPath)
	if err != nil {
		http.Error(w, "Internal Server Error", http.StatusInternalServerError)
		return
	}

	data := struct {
		DirectoryName string
		Files         []string
	}{
		DirectoryName: dirPath,
		Files:         make([]string, 0),
	}

	for _, entry := range dir {
		data.Files = append(data.Files, entry.Name())
	}

	// Render the template
	err = tmpl.Execute(w, data)
	if err != nil {
		http.Error(w, "Internal Server Error", http.StatusInternalServerError)
	}
}

func handleUpload(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		return
	}

	// Limit the maximum file size to 10 MB
	r.ParseMultipartForm(10 << 20)

	// Get the uploaded file
	file, handler, err := r.FormFile("file")
	if err != nil {
		http.Error(w, "Failed to get file from request", http.StatusBadRequest)
		return
	}
	defer file.Close()

	// Create the file in the current directory
	f, err := os.Create(handler.Filename)
	if err != nil {
		http.Error(w, "Failed to create file on server", http.StatusInternalServerError)
		return
	}
	defer f.Close()

	// Copy the uploaded file to the server
	_, err = io.Copy(f, file)
	if err != nil {
		http.Error(w, "Failed to save file on server", http.StatusInternalServerError)
		return
	}

	// Redirect back to the directory listing
	http.Redirect(w, r, "/", http.StatusFound)
}

const directoryTemplate = `
<!DOCTYPE html>
<html>
<head>
	<title>Directory Listing: {{ .DirectoryName }}</title>
</head>
<body>
	<h1>Directory Listing: {{ .DirectoryName }}</h1>
	<ul>
		{{ range .Files }}
		<li><a href="{{ . }}">{{ . }}</a></li>
		{{ end }}
	</ul>
	<form action="/upload" method="post" enctype="multipart/form-data">
		<input type="file" name="file">
		<input type="submit" value="Upload">
	</form>
</body>
</html>
`

2023/7/10

mmwave profilecfg

ref: profilecfg is the command that sets the chirp profile parameters: start freq, slope, adc idle, start time, ramp stop time.
The definition of each field is in the sdk install directory's docs: mmwave_sdk_user_guide.pdf.
The profileCfg fields are:
* profile id
* start freq
* idle time
* adc valid start time
* ramp end time : must be at least adc_samples/adc_sample_freq
* tx power : 0
* tx phase : 0
* freq slope
* tx start time
* adc samples
* adc sample freq
* high pass filter 1 corner freq
* high pass filter 2 corner freq
* rxGain
For these parameters, fill slope, samples and sample freq into the left side of mmwave studio's RampTimingCalculator, and the remaining parameters show up on the right.
According to the docs, the right-side values are minimums; in profileCfg it is enough to make sure the values are at least that large.


Max range is tied to the sample rate, because the farther the object, the larger its delta F.
Range resolution is the frequency gap between two neighboring peaks in the range FFT, converted to distance.
That is just the range FFT's frequency resolution, 1/T, where T = total sample points * sampling period.


To lower the bandwidth..
* reduce the sample points
* reduce the slope
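These relations can be checked numerically with the standard FMCW formulas (complex sampling assumed; the example numbers are made up, not from any shipped profileCfg):

```python
C = 3e8  # speed of light, m/s

def chirp_params(slope_mhz_per_us, adc_samples, sample_rate_msps):
    """Bandwidth actually swept while the ADC samples, and the resulting
    range resolution and max range."""
    slope = slope_mhz_per_us * 1e12        # Hz per second
    fs = sample_rate_msps * 1e6            # ADC sample rate, Hz
    bandwidth = slope * adc_samples / fs   # B = slope * T, with T = N / fs
    range_res = C / (2 * bandwidth)        # d_res = c / 2B
    max_range = fs * C / (2 * slope)       # beat freq at R_max equals fs
    return bandwidth, range_res, max_range

bw, res, rmax = chirp_params(60.0, 256, 10.0)
print(round(res, 4), round(rmax, 1))  # 0.0977 25.0
```

Halving either the sample count or the slope halves the swept bandwidth, matching the two bullets above.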

2023/6/27

bookmark : python struct

struct is the python module for handling C-structured data.

pack lays a set of variables out contiguously.
unpack turns byte data back into a set of variables.

The pack/unpack conversion is described by a format string.
For example:
  • 'iif' : two integers followed by a float, so the total length is 4+4+4; '2i f' also works
  • 'B?l' : unsigned char, boolean and long; with standard sizes ('=B?l') that is 1 + 1 + 4 = 6, though native mode (the default) adds alignment padding
example:
packed = struct.pack('2if',1,2,1.3)
a, b, c = struct.unpack('2if',packed)
print( a,b,c )

1 2 1.2999999523162842
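The size question is worth checking with struct.calcsize: standard mode ('=', '<', '>') uses fixed sizes and no padding, while native mode ('@', the default) uses the platform's sizes and alignment:

```python
import struct

# Standard (packed) sizes: no padding, 'l' is always 4 bytes:
print(struct.calcsize('<2if'))  # 12  (4+4+4)
print(struct.calcsize('<B?l'))  # 6   (1+1+4)

# Native mode pads and uses the platform's long size,
# e.g. 16 on 64-bit Linux (1 + 1 + 6 bytes padding + 8):
print(struct.calcsize('@B?l'))
```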

2023/6/21

TI mmwave, occupancy detection uart frame format.

mss_main.c :
static void OccupancyDetection3D_handleObjectDetResult
//copy to the format for output, and to future tracker
    gMmwMssMCB.pointCloudToUart.header.length               =   sizeof(mmwLab_output_message_tl) 
                                                              + sizeof(mmwLab_output_message_point_unit) 
                                                              + sizeof(mmwLab_output_message_UARTpoint) * outputFromDSP->pointCloudOut.object_count;
    if ( outputFromDSP->pointCloudOut.object_count == 0)
        gMmwMssMCB.pointCloudToUart.header.length           =   0;
    gMmwMssMCB.pointCloudToUart.header.type                 =   MMWDEMO_OUTPUT_MSG_POINT_CLOUD;
    gMmwMssMCB.pointCloudToUart.pointUint.azimuthUnit       =   0.01f;
    gMmwMssMCB.pointCloudToUart.pointUint.elevationUnit     =   0.01f;
    gMmwMssMCB.pointCloudToUart.pointUint.rangeUnit         =   0.00025f;
    gMmwMssMCB.pointCloudToUart.pointUint.dopplerUnit       =   0.00028f;
    gMmwMssMCB.pointCloudToUart.pointUint.snrUint           =   0.04f;
    gMmwMssMCB.numDetectedPoints                            =   outputFromDSP->pointCloudOut.object_count;
    for (pntIdx = 0; pntIdx < (int32_t)outputFromDSP->pointCloudOut.object_count; pntIdx++ )
    {
        gMmwMssMCB.pointCloudToUart.point[pntIdx].azimuth   =   (int8_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].azimuthAngle / gMmwMssMCB.pointCloudToUart.pointUint.azimuthUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].elevation =   (int8_t)round((outputFromDSP->pointCloudOut.pointCloud[pntIdx].elevAngle)/ gMmwMssMCB.pointCloudToUart.pointUint.elevationUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].range     =   (uint16_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].range / gMmwMssMCB.pointCloudToUart.pointUint.rangeUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].doppler   =   (int16_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].velocity / gMmwMssMCB.pointCloudToUart.pointUint.dopplerUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].snr       =   (uint16_t)round((float)outputFromDSP->pointCloudOut.snr[pntIdx].snr * 0.125f / gMmwMssMCB.pointCloudToUart.pointUint.snrUint);

        gMmwMssMCB.pointCloudFromDSP[pntIdx].elevAngle      =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].elevAngle;
        gMmwMssMCB.pointCloudFromDSP[pntIdx].range          =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].range;
        gMmwMssMCB.pointCloudFromDSP[pntIdx].velocity        =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].velocity;
        gMmwMssMCB.pointCloudSideInfoFromDSP[pntIdx].snr            =   (float)outputFromDSP->pointCloudOut.snr[pntIdx].snr * 0.125f;
    }
The message definitions, in mmwLab_output.h:
typedef enum mmwLab_output_message_type_e
{
    /*! @brief   List of detected points */
    MMWDEMO_OUTPUT_MSG_DETECTED_POINTS = 1,

    /*! @brief   Range profile */
    MMWDEMO_OUTPUT_MSG_RANGE_PROFILE,

    /*! @brief   Noise floor profile */
    MMWDEMO_OUTPUT_MSG_NOISE_PROFILE,

    /*! @brief   Samples to calculate static azimuth  heatmap */
    MMWDEMO_OUTPUT_MSG_AZIMUT_STATIC_HEAT_MAP,

    /*! @brief   Range/Doppler detection matrix */
    MMWDEMO_OUTPUT_MSG_RANGE_DOPPLER_HEAT_MAP,

    /*! @brief   Point Cloud - Array of detected points (range/angle/doppler) */
    MMWDEMO_OUTPUT_MSG_POINT_CLOUD,

    /*! @brief   Target List - Array of detected targets (position, velocity, error covariance) */
    MMWDEMO_OUTPUT_MSG_TARGET_LIST,

    /*! @brief   Target List - Array of target indices */
    MMWDEMO_OUTPUT_MSG_TARGET_INDEX,

    /*! @brief   Classifier Output -- Array of target indices and tags */
    MMWDEMO_OUTPUT_MSG_CLASSIFIER_OUTPUT,

    /*! @brief   Stats information */
    MMWDEMO_OUTPUT_MSG_STATS,

    /*! @brief   Presence information */
    MMWDEMO_OUTPUT_PRESENCE_IND,

    MMWDEMO_OUTPUT_MSG_MAX
} mmwLab_output_message_type;
Also, in the sdk, packages\ti\demo\xwr64xx\mmw\mmw_output.h has:
typedef enum MmwDemo_output_message_type_e
{
    /*! @brief   List of detected points */
    MMWDEMO_OUTPUT_MSG_DETECTED_POINTS = 1,

    /*! @brief   Range profile */
    MMWDEMO_OUTPUT_MSG_RANGE_PROFILE,

    /*! @brief   Noise floor profile */
    MMWDEMO_OUTPUT_MSG_NOISE_PROFILE,

    /*! @brief   Samples to calculate static azimuth  heatmap */
    MMWDEMO_OUTPUT_MSG_AZIMUT_STATIC_HEAT_MAP,

    /*! @brief   Range/Doppler detection matrix */
    MMWDEMO_OUTPUT_MSG_RANGE_DOPPLER_HEAT_MAP,

    /*! @brief   Stats information */
    MMWDEMO_OUTPUT_MSG_STATS,

    /*! @brief   List of detected points */
    MMWDEMO_OUTPUT_MSG_DETECTED_POINTS_SIDE_INFO,

    /*! @brief   Samples to calculate static azimuth/elevation heatmap, (all virtual antennas exported) */
    MMWDEMO_OUTPUT_MSG_AZIMUT_ELEVATION_STATIC_HEAT_MAP,

    /*! @brief   temperature stats from Radar front end */
    MMWDEMO_OUTPUT_MSG_TEMPERATURE_STATS,

    MMWDEMO_OUTPUT_MSG_MAX
} MmwDemo_output_message_type;
The leading entries are the same; they diverge starting at 6, so this example's tlv types are its own.



The spot in mss_main.c that sends the TLVs out over UART, MmwDemo_uartTxTask:
        /* Send packet header */
        UART_write (uartHandle,
                           (uint8_t*)&header,
                           sizeof(mmwLab_output_message_header));

        /* Send detected Objects */
        if (objOut->header.length > 0)
        {
            UART_write (uartHandle,
                               (uint8_t*)objOut,
                               objOut->header.length);
        }
So it sends the header, then objOut.
The header is just the header; it only states how many object detection result tlvs follow.
Its length and checksum cover only the header itself, not the detection result tlvs that come after.

The header first..
typedef struct mmwLab_output_message_header_t
{
    /*! @brief   Output buffer magic word (sync word). It is initialized to  {0x0102,0x0304,0x0506,0x0708} */
    uint16_t    magicWord[4];

    /*! brief   Version: : MajorNum * 2^24 + MinorNum * 2^16 + BugfixNum * 2^8 + BuildNum   */
    uint32_t     version;

    /*! @brief   Total packet length including header in Bytes */
    uint32_t    totalPacketLen;

    /*! @brief   platform type */
    uint32_t    platform;

    /*! @brief   Frame number */
    uint32_t    frameNumber;

    /*! @brief   For Advanced Frame config, this is the sub-frame number in the range
     * 0 to (number of subframes - 1). For frame config (not advanced), this is always
     * set to 0. */
    uint32_t    subFrameNumber;

    /*! @brief Detection Layer timing */
    uint32_t    chirpProcessingMargin;
    uint32_t    frameProcessingTimeInUsec;

    /*! @brief Localization Layer Timing */
    uint32_t    trackingProcessingTimeInUsec;
    uint32_t    uartSendingTimeInUsec;


    /*! @brief   Number of TLVs */
    uint16_t    numTLVs;

    /*! @brief   check sum of the header */
    uint16_t    checkSum;

} mmwLab_output_message_header;
The values set are...
    header.platform =  0xA6843;
    header.magicWord[0] = 0x0102;
    header.magicWord[1] = 0x0304;
    header.magicWord[2] = 0x0506;
    header.magicWord[3] = 0x0708;
    header.version =    MMWAVE_SDK_VERSION_BUILD |
                        (MMWAVE_SDK_VERSION_BUGFIX << 8) |
                        (MMWAVE_SDK_VERSION_MINOR << 16) |
                        (MMWAVE_SDK_VERSION_MAJOR << 24);

  ...
  ...
        packetLen = sizeof(mmwLab_output_message_header);
        header.chirpProcessingMargin        =   timingInfo->interChirpProcessingMargin;
        header.frameProcessingTimeInUsec    =   timingInfo->frameProcessingTimeInUsec;
        header.uartSendingTimeInUsec        =   gMmwMssMCB.uartProcessingTimeInUsec; 
		
        if (objOut->header.length > 0)
        {
            packetLen += objOut->header.length;
            tlvIdx++;
        }

        header.numTLVs = tlvIdx;
        header.totalPacketLen   =   packetLen;
        header.frameNumber      =   frameIdx;
        header.subFrameNumber   =   subFrameIdx;
        header.checkSum         =   0;


        headerPtr               =   (uint16_t *)&header;
        for(n=0, sum = 0; n < sizeof(mmwLab_output_message_header)/sizeof(uint16_t); n++)
                                sum += *headerPtr++;
        header.checkSum         =   ~((sum >> 16) + (sum & 0xFFFF));
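The checksum above can be reproduced and verified on the receiving side; a python sketch (field layout per mmwLab_output_message_header, 48 bytes = 24 uint16 words; the little-endian assumption and helper names are mine):

```python
import struct

HEADER_FMT = '<24H'  # the 48-byte header viewed as 24 little-endian uint16 words

def header_checksum(header48):
    """Compute the checksum the way mss_main.c does, over the 48-byte
    header with its checkSum field still zero."""
    s = sum(struct.unpack(HEADER_FMT, header48))
    return (~((s >> 16) + (s & 0xFFFF))) & 0xFFFF

def header_ok(header48):
    # With a valid checksum in place, the folded 16-bit sum comes to 0xFFFF.
    s = sum(struct.unpack(HEADER_FMT, header48))
    return ((s >> 16) + (s & 0xFFFF)) & 0xFFFF == 0xFFFF

# Build a dummy header: magic word at offset 0, zeros elsewhere,
# checkSum as the last uint16 (offset 46).
hdr = bytearray(48)
struct.pack_into('<4H', hdr, 0, 0x0102, 0x0304, 0x0506, 0x0708)
struct.pack_into('<H', hdr, 46, header_checksum(bytes(hdr)))
print(header_ok(bytes(hdr)))  # True
```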

The objOut payload is...
       objOut          =   &(gMmwMssMCB.pointCloudToUart);
where pointCloudToUart is:
typedef struct MmwDemo_output_message_UARTpointCloud_t
{
    mmwLab_output_message_tl           header;
    mmwLab_output_message_point_unit   pointUint;
    mmwLab_output_message_UARTpoint    point[MAX_RESOLVED_OBJECTS_PER_FRAME];
} mmwLab_output_message_UARTpointCloud;
which are, in turn:
typedef struct mmwLab_output_message_tl_t
{
    /*! @brief   TLV type */
    uint32_t    type;

    /*! @brief   Length in bytes */
    uint32_t    length;

} mmwLab_output_message_tl;

typedef struct mmwLab_output_message_point_uint_t
{
    /*! @brief elevation  reporting unit, in radians */
    float       elevationUnit;
    /*! @brief azimuth  reporting unit, in radians */
    float       azimuthUnit;
    /*! @brief Doppler  reporting unit, in m/s */
    float       dopplerUnit;
    /*! @brief range reporting unit, in m */
    float       rangeUnit;
    /*! @brief SNR  reporting unit, linear */
    float       snrUint;

} mmwLab_output_message_point_unit;

typedef struct mmwLab_output_message_UARTpoint_t
{
    /*! @brief Detected point elevation, in number of azimuthUnit */
    int8_t      elevation;
    /*! @brief Detected point azimuth, in number of azimuthUnit */
    int8_t      azimuth;
    /*! @brief Detected point doppler, in number of dopplerUnit */
    int16_t      doppler;
    /*! @brief Detected point range, in number of rangeUnit */
    uint16_t        range;
    /*! @brief Range detection SNR, in number of snrUnit */
    uint16_t       snr;

} mmwLab_output_message_UARTpoint;



TI provides python code for a PC-side UI: the industrial visualizer.
In it, readAndParseUart() in gui_parser.py is the entry point for reading the UART.
After syncing on the magic word, the frame is handed to parseFrame.py, which extracts the TLV type; the matching TLV parsing functions are in parseTLVs.py.
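The magic-word sync step can be sketched like this; this is my own simplified stand-in for what readAndParseUart() does, not the visualizer's actual code:

```python
import struct

# The 4 x uint16 magic word from the frame header, as it appears on the wire
MAGIC = struct.pack('<4H', 0x0102, 0x0304, 0x0506, 0x0708)

def sync_to_magic(read_byte):
    # read_byte() returns one byte per call (e.g. serial.read(1)).
    # Slide an 8-byte window over the stream until it equals the magic word.
    window = bytearray()
    while bytes(window) != MAGIC:
        b = read_byte()
        if not b:
            return False        # stream ended before sync
        window += b
        if len(window) > len(MAGIC):
            del window[0]
    return True
```

Once it returns True, the bytes that follow are the remainder of the frame header.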

Although occupancy detection uses its own TLV frame rather than the standard frame structure of the SDK demo,
the final point-cloud format is the same as parseCompressedSphericalPointCloudTLV() in parseTLVs.py.

A frame with 0 points has a frame length of 48; a frame with point data is 48 + TLV length.

Within the TLV length, the CompressedSphericalPointCloud TLV contains the following:
    tlvHeaderStruct = struct(...
        'type',             {'uint32', 4}, ... % TLV object Type
        'length',           {'uint32', 4});    % TLV object Length, in bytes, including TLV header

    % Point Cloud TLV reporting unit for all reported points
    pointUintStruct = struct(...
        'elevUnit',             {'float', 4}, ... % elevation, in rad
        'azimUnit',             {'float', 4}, ... % azimuth, in rad
        'dopplerUnit',          {'float', 4}, ... % Doppler, in m/s
        'rangeUnit',            {'float', 4}, ... % Range, in m
        'snrUnit',              {'float', 4});    % SNR, ratio
After the unit struct comes the point data; each point is:
    % Point Cloud TLV object consists of an array of points.
    % Each point has a structure defined below
    pointStruct = struct(...
        'elevation',        {'int8', 1}, ... % elevation, in rad
        'azimuth',          {'int8', 1}, ... % azimuth, in rad
        'doppler',          {'int16', 2}, ... % Doppler, in m/s
        'range',            {'uint16', 2}, ... % Range, in m
        'snr',              {'uint16', 2});    % SNR, ratio
Take a frame with frameLength 508 as an example.
Subtracting the 48-byte frame header leaves 460 bytes.
Subtracting the TLV header plus point unit, 4+4+5*4 = 28 bytes, leaves 432 bytes.
These 432 bytes are all point data; each point is 8 bytes, so there are 432/8 = 54 points.
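Putting the structs together, the TLV payload can be decoded with a short sketch (my own code, assuming little-endian byte order and the layouts shown above):

```python
import struct

def parse_point_cloud_tlv(payload):
    # payload = TLV body after the 8-byte type/length header:
    # 5 floats of per-frame units, then 8-byte points (int8 elevation,
    # int8 azimuth, int16 doppler, uint16 range, uint16 snr).
    elev_u, azim_u, dopp_u, range_u, snr_u = struct.unpack_from('<5f', payload, 0)
    points = []
    for off in range(20, len(payload), 8):
        elev, azim, dopp, rng, snr = struct.unpack_from('<bbhHH', payload, off)
        points.append({
            'elevation': elev * elev_u,   # rad
            'azimuth':   azim * azim_u,   # rad
            'doppler':   dopp * dopp_u,   # m/s
            'range':     rng  * range_u,  # m
            'snr':       snr  * snr_u,    # linear
        })
    return points
```

For the 54-point example above, the payload passed in would be 460 - 8 = 452 bytes.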


It turns out the contents of this unit struct are fixed...
ref: mss_main.c - OccupancyDetection3D_handleObjectDetResult()
    gMmwMssMCB.pointCloudToUart.header.type                 =   MMWDEMO_OUTPUT_MSG_POINT_CLOUD;
    gMmwMssMCB.pointCloudToUart.pointUint.azimuthUnit       =   0.01f;
    gMmwMssMCB.pointCloudToUart.pointUint.elevationUnit     =   0.01f;
    gMmwMssMCB.pointCloudToUart.pointUint.rangeUnit         =   0.00025f;
    gMmwMssMCB.pointCloudToUart.pointUint.dopplerUnit       =   0.00028f;
    gMmwMssMCB.pointCloudToUart.pointUint.snrUint           =   0.04f;
    gMmwMssMCB.numDetectedPoints                            =   outputFromDSP->pointCloudOut.object_count;
    for (pntIdx = 0; pntIdx < (int32_t)outputFromDSP->pointCloudOut.object_count; pntIdx++ )
    {
        gMmwMssMCB.pointCloudToUart.point[pntIdx].azimuth   =   (int8_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].azimuthAngle / gMmwMssMCB.pointCloudToUart.pointUint.azimuthUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].elevation =   (int8_t)round((outputFromDSP->pointCloudOut.pointCloud[pntIdx].elevAngle)/ gMmwMssMCB.pointCloudToUart.pointUint.elevationUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].range     =   (uint16_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].range / gMmwMssMCB.pointCloudToUart.pointUint.rangeUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].doppler   =   (int16_t)round(outputFromDSP->pointCloudOut.pointCloud[pntIdx].velocity / gMmwMssMCB.pointCloudToUart.pointUint.dopplerUnit);
        gMmwMssMCB.pointCloudToUart.point[pntIdx].snr       =   (uint16_t)round((float)outputFromDSP->pointCloudOut.snr[pntIdx].snr * 0.125f / gMmwMssMCB.pointCloudToUart.pointUint.snrUint);

        gMmwMssMCB.pointCloudFromDSP[pntIdx].elevAngle      =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].elevAngle;
        gMmwMssMCB.pointCloudFromDSP[pntIdx].range          =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].range;
        gMmwMssMCB.pointCloudFromDSP[pntIdx].velocity        =   outputFromDSP->pointCloudOut.pointCloud[pntIdx].velocity;
        gMmwMssMCB.pointCloudSideInfoFromDSP[pntIdx].snr            =   (float)outputFromDSP->pointCloudOut.snr[pntIdx].snr * 0.125f;
    }
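The quantization in the loop above reduces to dividing by the fixed units and rounding; as a host-side sketch (my own restatement, not the firmware code):

```python
def quantize_point(elev_rad, azim_rad, range_m, vel_mps, snr_raw):
    # Fixed units from OccupancyDetection3D_handleObjectDetResult();
    # the raw snr from the DSP is first scaled by 0.125.
    return {
        'elevation': int(round(elev_rad / 0.01)),         # -> int8 on the wire
        'azimuth':   int(round(azim_rad / 0.01)),         # -> int8
        'doppler':   int(round(vel_mps / 0.00028)),       # -> int16
        'range':     int(round(range_m / 0.00025)),       # -> uint16
        'snr':       int(round(snr_raw * 0.125 / 0.04)),  # -> uint16
    }
```

Since elevation and azimuth go out as int8, they are limited to roughly ±1.27 rad before they overflow.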

2023/6/16

TI mmWave python : send config file

The send-config-file code is adapted from this post:
import serial
import time

# configFilePath and verbose were undefined in the original snippet
configFilePath = "profile.cfg"   # placeholder: path to your .cfg file
verbose = False

serialControl = serial.Serial("/dev/ttyUSB1", 115200, timeout=0.01)

with open(configFilePath, "r") as configFile:
    for configLine in configFile.readlines():
        # Send config value to the control port
        serialControl.write(configLine.encode())
        # Wait for response from control port
        time.sleep(0.01)
        echo = serialControl.readline()
        done = serialControl.readline()
        prompt = serialControl.read(11)
        print(echo.decode('utf-8'), end='')
        if verbose:
            print(done.decode('utf-8'))
            print(prompt.decode('utf-8'))
The time.sleep(0.01) matters: without it, commands go out too fast and the chip starts dropping characters.
It does not matter whether the cfg file uses DOS or unix line endings (\r\n or \n).

encode and decode convert between strings and bytes.

timeout=0.01 when opening the serial port is needed because readline() is called several times per command.
Without a timeout, readline() blocks until data arrives.
When a command fails, the output is only the Error line and the prompt, with one fewer Done line.
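TI .cfg files usually carry '%' comment lines; a small helper (my own addition, not from the original post) to filter them out and normalize line endings before sending:

```python
def cfg_commands(text):
    # Yield the actual config commands: skip blank lines and '%' comments,
    # and normalize DOS/unix line endings to a single '\n'.
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('%'):
            yield line + '\n'
```

Each yielded string can be encoded and written to the control port as in the loop above.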

The code in TI's Examples and Labs all uses cfg files and cfg commands,
but each Example/Lab supports a different command set.


If the data port has no output, check whether a command was wrong.

In the SDK, CLI_task in util/cli.c handles the control console.
The task reads from the config port, then first looks for a matching command in gCLI.cfg.tableEntry[].
If there is no match, the line is handed to CLI_MMWaveExtensionHandler().
If that fails too, the reply is "not recognized as CLI command".

gCLI.cfg.tableEntry[] holds the commands each example defines for itself; CLI_MMWaveExtensionHandler() handles the commands built into the mmwave SDK. A wrong command does not seem to affect the program...

Result:

Delay when sending cfg commands, even after Done and the prompt have already been received.
The matlab script run on Windows 7 probably executes each step slowly enough, so it works.
Python on linux is too fast: add time.sleep(0.01) after each command is sent, and delay longer before sensorStart.
Whether it actually succeeded can be judged from sensorStart's response: on success, sensorStart also replies Done and then shows the prompt waiting for the next command.
Only in that state does the data port produce output.

2023/6/8

2023/6/6

bookmark: mmwave, mesh

This one uses the LVDS raw data output (via ethernet) to compute a mesh
-- it also covers mmWave Studio and the matching python code.


This one uses the out-of-box demo, receives the point-cloud data over UART, and plots it.


This one seems to be an update of the long-unmaintained pymmw, updated for the 6843AOP!!


https://github.com/m6c7l/pymmw

2023/6/5

raspberry pi , ubuntu 23.04, build kernel

ref: on ubuntu 23.04 on a raspberry pi...
Add this to sources.list:
deb-src http://archive.ubuntu.com/ubuntu lunar main
then apt update, followed by
apt-get source linux-image-$(uname -r)
This unpacks the linux-image-6.2.0-1004-raspi source in the directory where the command is run. Because I ran it with sudo, the folder owner became root, so don't run apt-get source with sudo.

Reading the source code:
Pi 4 is the bcm2711; grepping for 2711 and gpio turns up bcm2711-gpio.
That leads to drivers/pinctrl/bcm/pinctrl-bcm2835.c: .compatible = "brcm,bcm2711-gpio",
and finally Documentation/devicetree/bindings/pinctrl/brcm,bcm2835-gpio.txt

This post says not to use /sys/class/gpio anymore...

2023/5/22

esp32s3 eye : led control

On the esp32-s3-eye, the LED is wired to GPIO3:


The esp-idf examples all target the esp32s3-devkit, so their LED GPIO is wrong for the eye.
It has to be changed to GPIO3.

For the esp-idf blink example, run
idf.py menuconfig
Example Configuration --->
    Blink LED type (GPIO)
(3) Blink GPIO number
After building this and flashing it to the eye, the green LED next to the camera blinks at one-second intervals.

For the esp-matter lighting-app example, likewise change in menuconfig:
Demo ---->
   LED type (GPIO)
(3) LED GPIO number
With this build, the green LED can be switched on and off with chip-tool.

2023/5/16

usbip -- usb over ip

usb over ip support on linux is already in the kernel (?),
so only the userspace part needs installing. It is called usbip.

On ubuntu it is part of linux-tools-common.
After apt install, the usbip commands (usbip, usbipd, etc.) become available.

2023/5/12

esp32 matter, and chip-tool

ref: this post among them seems to cover the PC-side tool.
It explains the steps to connect to a matter device over BLE and register it with ?.

Following the espressif for matter install instructions produced an Error: -- it felt like a hassle...
It turned out to all be caused by a shallow clone; use
..

It's fairly confusing; every document only explains part of the picture.
I referred to this post, from the connectedhomeip github.

Take lighting-app as an example: it has to be built in its original source location; it cannot be copied out the way hello-world can.

The boot log contains:
I (1860) chip[DIS]: Advertise commission parameter vendorID=65521 productID=32768 discriminator=3840/15 cm=1
Using the chip-tool command:
./chip-tool pairing ble-wifi 12345 mywifi-ssid mywifi-password 20202021 3840
The esp32s3 log then shows an error:
I (103690) chip[DL]: Confirm received for CHIPoBLE TX characteristic indication (con 1) status= 14 
I (103740) CHIP[DL]: Write request received for CHIPoBLE RX characteristic con 1 12
I (103750) chip[EM]: >>> [E:43723r S:0 M:140331429] (U) Msg RX from 0:4180DA542811AAF9 [0000] --- Type 0000:22 (SecureChannel:PASE_Pake1)
Guru Meditation Error: Core  1 panic'ed (LoadProhibited). Exception was unhandled.

Core  1 register dump:
PC      : 0x42071f58  PS      : 0x00060730  A0      : 0x82077f2b  A1      : 0x3fcc5140  
A2      : 0x3fcc5174  A3      : 0x3fcc5180  A4      : 0x00000020  A5      : 0x3fca3694  
A6      : 0xce016445  A7      : 0x38eff6c3  A8      : 0x159f5221  A9      : 0x3fcc50e0  
A10     : 0xce016445  A11     : 0x3fcc5160  A12     : 0x00000020  A13     : 0x00000020  
A14     : 0x3fcd5d1c  A15     : 0x00000004  SAR     : 0x00000009  EXCCAUSE: 0x0000001c  
EXCVADDR: 0x159f5225  LBEG    : 0x40056f5c  LEND    : 0x40056f72  LCOUNT  : 0x00000000  


Backtrace: 0x42071f55:0x3fcc5140 0x42077f28:0x3fcc5160 0x42077f68:0x3fcc51d0 0x4204d8be:0x3fcc5220 0x4204278b:0x3fcc5250 0x420424c5:0x3fcc5290
0x420426ab:0x3fcc52d0 0x4205ff46:0x3fcc5350 0x42060832:0x3fcc5490 0x420608b7:0x3fcc54b0 0x420485df:0x3fcc54d0 0x42048805:0x3fcc5520 0x420dc243:0x3fcc5540
0x42048c1a:0x3fcc5570 0x4204c279:0x3fcc5640 0x4204c2c9:0x3fcc56d0 0x4204c3a9:0x3fcc5740 0x4205be1b:0x3fcc57b0 0x4205be2c:0x3fcc57f0 0x42063ac5:0x3fcc5810
0x42042045:0x3fcc5850 0x420517bc:0x3fcc5870 0x42050525:0x3fcc58d0 0x4205056c:0x3fcc58f0 0x42050888:0x3fcc5910 0x420508c9:0x3fcc5980 0x403830c5:0x3fcc59a0
ref: brian's build of lighting-app has no exception problem; his esp-idf is v4.4.2, mine was latest.
To switch esp-idf versions, check out the version, then git submodule update --init --recursive.
Also remember to delete ~/.espressif under your home directory, otherwise install.sh throws an Error.

Sure enough, after switching esp-idf to v4.4.2, building esp-matter's lighting-app and pairing with chip-tool no longer hits the exception.



It feels like a lot of this depends on chip-tool (the host tool).

2023/5/10

esp32s3,sdk. toolchain and hello-world

Not sure whether this is the right approach. The sdk (framework, library) versus the toolchain: building the toolchain myself failed:
 Connecting to isl.gforge.inria.fr (isl.gforge.inria.fr)|128.93.193.15|:80... failed: Connection timed out.
For the S3, run this instead, matching the chip you use (S3 as the example):
./install.sh esp32s3
This downloads the corresponding prebuilt toolchain (no need to build it yourself).

As before, to use the installed toolchain, first:
All done! You can now run:

  . ./export.sh
The env script prints:
Setting IDF_PATH to '/home/charles-chang/esp/esp-idf'
Detecting the Python interpreter
Checking "python" ...
Checking "python3" ...
Python 3.8.10
"python3" has been detected
Adding ESP-IDF tools to PATH...
Using Python interpreter in /home/charles-chang/.espressif/python_env/idf4.3_py3.8_env/bin/python
Checking if Python packages are up to date...
Python requirements from /home/charles-chang/esp/esp-idf/requirements.txt are satisfied.
Added the following directories to PATH:
  /home/charles-chang/esp/esp-idf/components/esptool_py/esptool
  /home/charles-chang/esp/esp-idf/components/espcoredump
  /home/charles-chang/esp/esp-idf/components/partition_table
  /home/charles-chang/esp/esp-idf/components/app_update
  /home/charles-chang/.espressif/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin
  /home/charles-chang/.espressif/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin
  /home/charles-chang/.espressif/tools/xtensa-esp32s3-elf/esp-2020r3-8.4.0/xtensa-esp32s3-elf/bin
  /home/charles-chang/.espressif/tools/riscv32-esp-elf/1.24.0.123_64eb9ff-8.4.0/riscv32-esp-elf/bin
  /home/charles-chang/.espressif/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin
  /home/charles-chang/.espressif/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin
  /home/charles-chang/.espressif/tools/openocd-esp32/v0.10.0-esp32-20210401/openocd-esp32/bin
  /home/charles-chang/.espressif/python_env/idf4.3_py3.8_env/bin
  /home/charles-chang/esp/esp-idf/tools
Done! You can now compile ESP-IDF projects.
Go to the project directory and run:

  idf.py build

Following the getting started guide, copy hello-world; to build it...
everything goes through the idf.py tool:
idf.py set-target esp32s3
idf.py menuconfig
menuconfig covers a lot, e.g. bt, wifi, rtos, network-stack, etc.
Then:
idf.py build
When the build finishes, it prints the flashing command.



esp32-s3 is only supported from v4.4 on.
ref:

With v4.3, idf.py -p /dev/ttyUSB1 flash gives an error:
A fatal error occurred: This chip is ESP32-S3(beta3) not ESP32. Wrong --chip argument?
After switching to latest (master), it works:
~/esp/hello_world$ idf.py -p /dev/ttyUSB1 flash
Executing action: flash
Running ninja in directory /home/charles-chang/esp/hello_world/build
Executing "ninja flash"...
[1/5] cd /home/charles-chang/esp/hello_world/build/esp-idf/esptool_py && /home/charles-chang/.espressif/python_env/idf5.2_py3.8_env/bin/python /home/charles-chang/esp/esp-idf/components/partition_table/check_sizes.py --offset 0x8000 partition --type app /home/charles-chang/esp/hello_world/build/partition_table/partition-table.bin /home/charles-chang/esp/hello_world/build/hello_world.bin
hello_world.bin binary size 0x314a0 bytes. Smallest app partition is 0x100000 bytes. 0xceb60 bytes (81%) free.
[2/5] Performing build step for 'bootloader'
[1/1] cd /home/charles-chang/esp/hello_world/build/bootloader/esp-idf/esptool_py && /home/charles-chang/.espressif/python_env/idf5.2_py3.8_env/bin/python /home/charles-chang/esp/esp-idf/components/partition_table/check_sizes.py --offset 0x8000 bootloader 0x0 /home/charles-chang/esp/hello_world/build/bootloader/bootloader.bin
Bootloader binary size 0x5250 bytes. 0x2db0 bytes (36%) free.
[2/3] cd /home/charles-chang/esp/esp-idf/components/esptool_py && /usr/bin/cmake -D IDF_PATH=/home/charles-chang/esp/esp-idf -D "SERIAL_TOOL=/home/charles-chang/.espressif/python_env/idf5.2_py3.8_env/bin/python;;/home/charles-chang/esp/esp-idf/components/esptool_py/esptool/esptool.py;--chip;esp32s3" -D "SERIAL_TOOL_ARGS=--before=default_reset;--after=hard_reset;write_flash;@flash_args" -D WORKING_DIRECTORY=/home/charles-chang/esp/hello_world/build -P /home/charles-chang/esp/esp-idf/components/esptool_py/run_serial_tool.cmake
esptool esp32s3 -p /dev/ttyUSB1 -b 460800 --before=default_reset --after=hard_reset write_flash --flash_mode dio --flash_freq 80m --flash_size 2MB 0x0 bootloader/bootloader.bin 0x10000 hello_world.bin 0x8000 partition_table/partition-table.bin
esptool.py v4.5.1
Serial port /dev/ttyUSB1
Connecting...
Chip is ESP32-S3 (revision v0.1)
Features: WiFi, BLE
Crystal is 40MHz
MAC: 34:85:18:98:db:8c
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Flash will be erased from 0x00000000 to 0x00005fff...
Flash will be erased from 0x00010000 to 0x00041fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Compressed 21072 bytes to 13405...
Writing at 0x00000000... (100 %)
Wrote 21072 bytes (13405 compressed) at 0x00000000 in 0.7 seconds (effective 253.4 kbit/s)...
Hash of data verified.
Compressed 201888 bytes to 109664...
Writing at 0x00010000... (14 %)
Writing at 0x0001cc87... (28 %)
Writing at 0x0002271e... (42 %)
Writing at 0x00028d61... (57 %)
Writing at 0x0002f274... (71 %)
Writing at 0x00037197... (85 %)
Writing at 0x0003cd0d... (100 %)
Wrote 201888 bytes (109664 compressed) at 0x00010000 in 2.6 seconds (effective 618.5 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 103...
Writing at 0x00008000... (100 %)
Wrote 3072 bytes (103 compressed) at 0x00008000 in 0.1 seconds (effective 396.0 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
Done
The devkit has two microusb ports; for the flash command's port option, the UART port shows up as /dev/ttyUSB0 and the USB port as /dev/ttyACM0.

After flashing hello-world, picocom -b 115200 /dev/ttyUSB0 (or ACM0) shows the hello world console output.

Flashing just needs the command; surprisingly, no reset-button press is required...
But the docs say that without a bootloader, you have to enter download mode manually:
connect the UART, hold BOOT, then press reset. In picocom you will see:
ESP-ROM:esp32s3-20210327
build: Mar 27 2021
rst:0x1 (POWERON),boot:0x23 (DOWNLOAD(USB/UARTR0))
waiting for download

For a wifi-ble test there is blufi, which includes an android apk and esp32 sample code.
It is an example where a phone connects over BLE and then configures the wifi ssid/password.

The Android apk builds fine with AndroidStudio.
The esp32 sample builds the same way:
idf.py set-target esp32s3
idf.py build
idf.py -p /dev/ttyACM0 flash


To build a matter app, first see: as it says, besides the idf installed above, the chip environment is also required.
Guessing here: likewise under ~/esp/:
git clone --recurse-submodules git@github.com:project-chip/connectedhomeip.git