2018/6/28

caffe, more -- training

Followed this article, Building a Cat/Dog Classifier using a Convolutional Neural Network, and walked through it once.

The dataset is Kaggle Cats vs Dogs. On the dataset page there is a file browser pane in the middle with a download icon above it; click it to download.
-- Login is required; you can sign in with a Google account.

create_lmdb.py needs a few edits before running, because the image and db paths inside are hard-coded.
create_lmdb.py also needs opencv-python, lmdb, and PYTHONPATH set:
export PYTHONPATH=~/caffe/python:$PYTHONPATH
sudo pip install lmdb opencv-python
Here ~/caffe is where the caffe source was downloaded and built.
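
For reference, a minimal sketch of what create_lmdb.py boils down to. The paths (~/input/train for the Kaggle images, ~/input/train_lmdb for the output) are hypothetical and should be replaced with your own:

import glob
import os

import cv2
import lmdb
import numpy as np
from caffe.proto import caffe_pb2

IMG_SIZE = 227  # image size used by the article's AlexNet-style net; adjust if yours differs

def make_datum(img, label):
    # pack an HxWxC BGR image into a caffe Datum (stored as CxHxW raw bytes)
    return caffe_pb2.Datum(
        channels=3, width=IMG_SIZE, height=IMG_SIZE, label=label,
        data=np.rollaxis(img, 2).tobytes())

images = glob.glob(os.path.expanduser('~/input/train/*.jpg'))
env = lmdb.open(os.path.expanduser('~/input/train_lmdb'), map_size=int(1e12))
with env.begin(write=True) as txn:
    for idx, path in enumerate(images):
        img = cv2.resize(cv2.imread(path, cv2.IMREAD_COLOR),
                         (IMG_SIZE, IMG_SIZE), interpolation=cv2.INTER_CUBIC)
        label = 0 if 'cat' in os.path.basename(path) else 1
        txn.put('{:0>5d}'.format(idx).encode(), make_datum(img, label).SerializeToString())
env.close()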

Training ...
Then I hit: F0629 19:34:17.674513 17917 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
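
The usual workaround for this is to shrink the batch_size of the data layers in the training prototxt. A sketch of doing that with the protobuf API; the file name train_val.prototxt is a placeholder for whatever the tutorial's train net is called:

from google.protobuf import text_format
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
with open('train_val.prototxt') as f:
    text_format.Merge(f.read(), net)

for layer in net.layer:
    if layer.type == 'Data':
        layer.data_param.batch_size //= 2   # halve the batch size of each data layer

with open('train_val.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net))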

NFS server export options: secure / insecure

A colleague, Mr. A, could not mount NFS from a VirtualBox VM; the error said the server permissions did not match.
Checking the NFS server's auth.log showed a mount/rpc port XXXXX mismatch.
Mr. A then edited the NFS server's exports file and changed the export path option from secure to insecure, after which the mount worked.

Testing through NAT on another server did not show this problem.
So my guess is that the source-port range used by our iptables NAT differs from the one VirtualBox's NAT uses: the secure export option only accepts requests from privileged ports (< 1024), so a NAT that rewrites the source port to a high port gets rejected.

2018/6/27

caffe, test ..

Followed Training LeNet on MNIST with Caffe.
GPU (GTX 1050 Ti):
real 1m25.823s
user 1m10.579s
sys 0m19.228s
CPU:
real 11m25.795s
user 11m28.551s
sys 0m0.444s

After git cloning caffe, there is an mnist example under the examples folder.

First download the data:
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
Then start training:
./examples/mnist/train_lenet.sh
When it finishes, the results are under examples/mnist as lenet_iter_*.
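
A quick way to sanity-check the snapshot from pycaffe; a sketch, assuming it is run from the caffe root and that training ran to the default 10000 iterations:

import caffe

caffe.set_mode_gpu()   # or caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)
print(net.blobs['data'].data.shape)   # input blob shape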

Edit examples/mnist/lenet_solver.prototxt and change the last line, solver_mode: GPU, to CPU to force CPU training.

Some tools that ship with caffe can be used to look at the training error.
Record the training log to a file:
./examples/mnist/train_lenet.sh 2>&1 | tee -a lenet
Then tidy it up with parse_log.py:
./tools/extra/parse_log.py lenet .
This produces lenet.test and lenet.train.

Write a plotcmd file and hand it to gnuplot:
set datafile separator ','
set term x11 0
plot './lenet.train' using 1:4 with line,\
     './lenet.test' using 1:5 with line
Plot it with gnuplot:
gnuplot -persist plotcmd
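
The same curves can be drawn with matplotlib; a sketch that assumes the column layout the plot command above uses (column 1 is NumIters, column 4 of lenet.train is the training loss, column 5 of lenet.test is the test loss, and both files start with a CSV header row):

import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv('lenet.train')
test = pd.read_csv('lenet.test')

plt.plot(train.iloc[:, 0], train.iloc[:, 3], label='train')
plt.plot(test.iloc[:, 0], test.iloc[:, 4], label='test')
plt.xlabel('iterations')
plt.legend()
plt.show()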




Using a GTX 950M:
real 2m32.821s
user 1m53.398s
sys  0m43.228s

2018/6/25

caffe, cuda

Followed the caffe installation guide for Ubuntu (>17.04):
sudo apt-get install caffe-cuda

This seems to install only the binaries & libraries; the rest apparently still has to be built from source.
Use apt's build-dep to automatically install the packages needed to build caffe-cuda.
This command requires the deb-src lines in sources.list to be un-commented first.
It installs gcc-6:
sudo apt build-dep caffe-cuda
Then git clone https://github.com/BVLC/caffe.git
and, following the instructions, copy Makefile.config.example to Makefile.config and edit it.
If you are using CUDA + CPU, nothing needs to change.
cp Makefile.config.example Makefile.config
# Adjust Makefile.config (for example, if using Anaconda Python, or if cuDNN is desired)
make all
make test
make runtest
But according to this article, you also need to uncomment USE_PKG_CONFIG := 1.

Then make all failed with: Unsupported gpu architecture 'compute_20'
See this post:
After more research I found that the newest cuda version (9.0) doesn't support compute_20 anymore. 
This means that you have two options, disable the compute_20 target or install cuda version 8.0. 
If your GPU supports newer compute architectures you should use the newest cuda version and disable compute_20.
Sure enough, Makefile.config contains the lines below; commenting out the *_20 and *_21 gencode lines fixes the error...
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
                -gencode arch=compute_20,code=sm_21 \
                -gencode arch=compute_30,code=sm_30 \
                -gencode arch=compute_35,code=sm_35 \
                -gencode arch=compute_50,code=sm_50 \
                -gencode arch=compute_52,code=sm_52 \
                -gencode arch=compute_60,code=sm_60 \
                -gencode arch=compute_61,code=sm_61 \
                -gencode arch=compute_61,code=compute_61

Next came an hdf5.h: No such file or directory error.
See this article, which collects fixes for several caffe build problems.
Edit Makefile.config:
--- INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+++ INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

After that: cannot find -lhdf5_hl, -lhdf5 errors.
The same link says to edit the Makefile:
--- LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
+++ LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
OK

Next up: A step by step guide to Caffe and Training LeNet on MNIST with Caffe.

An OpenCV error appeared:
.build_release/lib/libcaffe.so: undefined reference to `cv::imread(cv::String const&, int)
If libopencv really is installed,
you can fix this by editing Makefile.config:
 # Uncomment to use `pkg-config` to specify OpenCV library paths.
 # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
-# USE_PKG_CONFIG := 1
+USE_PKG_CONFIG := 1

To run the tools under python/, some Python modules also need to be installed; they are listed in python/requirements.txt:
sudo pip install -r requirements.txt


make runtest failed with:
Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
It turned out /usr/share/cuda was linked to cuda-10.1 while nvidia-smi reported CUDA version 10.0,
meaning the driver and the library did not match (probably apt upgraded it).
Re-pointing the /usr/share/cuda link to cuda-10.0 fixed it.

2018/6/22

cuda 9.0 + cudnn 7.05 on ubuntu 18.04

The official TensorFlow builds use CUDA Toolkit 9.0; building on Ubuntu 18.04 requires switching to an older gcc (Ubuntu 18.04's gcc is too new).

This article from May seems to include a 2018 patch, and the installation apparently does not require rebuilding anything: just use the 17.10 deb plus the patch.

With CUDA 9.2 the packaging does not seem finished yet; every dependent package has to be rebuilt by hand (although there are already finished build scripts on GitHub).
So, to keep things simple, pick one of the releases that CUDA Toolkit 9.0 supports (16.04, 17.10); 16.04 is LTS, supported until 2021.

Also, the download link on the NVIDIA CUDA Toolkit site is for 9.2; 9.0 has to be found in the archive.
So you cannot simply follow the NVIDIA CUDA Toolkit download link.


Indeed, just as that guide says, cuda-9.0 and cudnn 7.05 can be installed directly on 18.04 without the gcc-too-new problem.

Then pip install tensorflow raised an exception .... MemoryError; one link says adding --no-cache-dir fixes it.
Then, following the instructions, start python and try:
>>> from tensorflow.python.client import device_lib
>>> device_lib.list_local_devices()
2018-06-25 10:45:58.955248: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-25 10:45:59.256407: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-06-25 10:45:59.257329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties: 
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.56GiB
2018-06-25 10:45:59.257385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-25 10:46:04.408233: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-25 10:46:04.408308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 
2018-06-25 10:46:04.408330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N 
2018-06-25 10:46:04.425190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/device:GPU:0 with 3290 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 1869392221051952077
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 3450404864
locality {
  bus_id: 1
  links {
  }
}
incarnation: 8818617857580197841
physical_device_desc: "device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1"
]
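
Another quick check using the TF 1.x test helpers (just a sketch):

import tensorflow as tf

print(tf.test.is_gpu_available())   # True when the GPU was picked up
print(tf.test.gpu_device_name())    # e.g. /device:GPU:0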



If you follow this article to test-build the CUDA samples, you hit the gcc-too-new problem.
So, following this other article, install gcc-6 and g++-6 (they seemed to be installed already),
then manually link gcc-6 to /usr/local/cuda/bin/gcc, and do the same for g++:
sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
After that, make succeeds.


2020/8/23 update:
Ubuntu 18.04, CUDA 10.2. Packages needed:
libgflags-dev libgoogle-glog-dev liblmdb-dev libboost-all-dev libprotobuf-dev protobuf-compiler libhdf5-dev libleveldb-dev libsnappy-dev libopencv-dev libatlas-base-dev python-numpy
cmake 3.10 still fails to find cublas at link time; you need 3.14 or later, which you can build yourself.

With CUDA 10.2 the only remaining problem on the system is cmake; nothing else needs modification.
After installing the packages above and switching to cmake 3.14:
mkdir build && cd build
cmake ..
make
make install
That is all it takes.

The install path is caffe/build/install.
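
A quick import check against that install tree; a sketch, assuming the caffe checkout lives under ~/caffe:

import os
import sys

# prepend the cmake install tree so this caffe gets picked up first
sys.path.insert(0, os.path.expanduser('~/caffe/build/install/python'))
import caffe

print(caffe.__file__)   # should point into build/install/python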

lenovo legion Y520 -- ibus-chewing

An input method only counts once it is selected under Settings -- Region and Language -- Input Sources.
So after clicking the + under Input Sources, Chinese (Chewing) has to appear in the list.

But 18.04 currently seems to have a bug: How can I use chewing input method.
So the locale has to be generated manually:
:~$ sudo locale-gen zh_TW.UTF-8
Then click + again and it shows up.
-- It also seems to require logging out.

Also, under Language Support, the keyboard input method system selected is ibus.

lenovo legion Y520 -- nvidia and cuda toolkit

Bought it on sale.
Attached a USB 3.0 HD and installed Linux.

Trying Ubuntu 18.04 first (it is also an LTS).
Press the small hole on the left side with a paper clip to boot into the BIOS, then disable Secure Boot.
Boot from a USB thumb drive and install to the other USB HD.
Created an ESP partition and an ext4 partition.
Press F12 at boot to get the boot menu, which automatically lists the bootable partitions (devices).
After installation it boots normally, but lspci cannot find the NVIDIA card.
Also, after running for a while the system hangs; touchpad/keyboard stop responding.

Opened it up and swapped in 8 GB of RAM.
Opening the case is a bit hard: pry it loose little by little with a fingernail, but the RJ45 side is very tight, so I had to work my way over from the back.
The DDR is covered by a metal shield.

Following this article, which says 18.04 already ships the CUDA toolkit in its repositories, installing with apt is enough:
sudo apt install nvidia-cuda-toolkit 
sudo apt-add-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-396
Reboot, then check from the command line:
~$ nvidia-smi
Fri Jun 22 16:40:36 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.24.02              Driver Version: 396.24.02                 |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   36C    P8    N/A /  N/A |    515MiB /  4042MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      8154      G   /usr/lib/xorg/Xorg                            28MiB |
|    0      8245      G   /usr/bin/gnome-shell                          58MiB |
|    0     15542      G   /usr/lib/xorg/Xorg                           203MiB |
|    0     15711      G   /usr/bin/gnome-shell                          91MiB |
|    0     17581      G   ...-token=B5E610D3E9F21DF705985515A610A2E7   132MiB |
+-----------------------------------------------------------------------------+


~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85


~$ sudo apt-get install clinfo
~$ clinfo
Number of platforms                               1
  Platform Name                                   NVIDIA CUDA
  Platform Vendor                                 NVIDIA Corporation
  Platform Version                                OpenCL 1.2 CUDA 9.2.127
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
  Platform Extensions function suffix             NV

  Platform Name                                   NVIDIA CUDA
Number of devices                                 1
  Device Name                                     GeForce GTX 1050 Ti
  Device Vendor                                   NVIDIA Corporation
  Device Vendor ID                                0x10de
  Device Version                                  OpenCL 1.2 CUDA
  Driver Version                                  396.24.02
  Device OpenCL C Version                         OpenCL C 1.2 
  Device Type                                     GPU
  Device Topology (NV)                            PCI-E, 01:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               6
  Max clock frequency                             1620MHz
  Compute Capability (NV)                         6.1
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
  Preferred work group size multiple              32
  Warp size (NV)                                  32
  Preferred / native vector sizes                 
    char                                                 1 / 1       
    short                                                1 / 1       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 0 / 0        (n/a)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              4238737408 (3.948GiB)
  Error Correction support                        No
  Max memory allocation                           1059684352 (1011MiB)
  Unified memory for Host and Device              No
  Integrated memory (NV)                          No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        98304 (96KiB)
  Global Memory cache line size                   128 bytes
  Image support                                   Yes
    Max number of samplers per kernel             32
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             16384x32768 pixels
    Max 3D image size                             16384x16384x16384 pixels
    Max number of read image args                 256
    Max number of write image args                16
  Local memory type                               Local
  Local memory size                               49152 (48KiB)
  Registers per block (NV)                        65536
  Max number of constant args                     9
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     4352 (4.25KiB)
  Queue properties                                
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Kernel execution timeout (NV)                 Yes
  Concurrent copy and kernel execution (NV)       Yes
    Number of async copy engines                  2
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  NVIDIA CUDA
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [NV]
  clCreateContext(NULL, ...) [default]            Success [NV]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No platform

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.11
  ICD loader Profile                              OpenCL 2.1


After updating the NVIDIA driver, the system no longer seems to hang.

2018/6/14

dmesg after plugging in a USB Bluetooth dongle (CSR8510 A10):
[30518.225322] usb 3-1.2: new full-speed USB device number 5 using xhci_hcd
[30518.375490] usb 3-1.2: New USB device found, idVendor=0a12, idProduct=0001
[30518.375492] usb 3-1.2: New USB device strings: Mfr=0, Product=2, SerialNumber=0
[30518.375493] usb 3-1.2: Product: CSR8510 A10

dbus: a simple service example in python

ref: Register a “Hello World” DBus service, object and method using Python

The service's source code (Python 2, using dbus-python and the legacy gobject bindings):
import gobject
import dbus
import dbus.service

from dbus.mainloop.glib import DBusGMainLoop
# attach dbus to the GLib main loop; must be done before connecting to any bus
DBusGMainLoop(set_as_default=True)


OPATH = "/com/example/HelloHell"
IFACE = "com.example.HelloHell"
BUS_NAME = "com.example.HelloHell"


class Example(dbus.service.Object):
        def __init__(self):
                # claim the well-known name on the session bus and export this
                # object at OPATH
                bus = dbus.SessionBus()
                bus.request_name(BUS_NAME)
                bus_name = dbus.service.BusName(BUS_NAME, bus=bus)
                dbus.service.Object.__init__(self, bus_name, OPATH)

        # exported method; the interface ends up being com.example.HelloHell.SayHello
        @dbus.service.method(dbus_interface=IFACE + ".SayHello",
                        in_signature="", out_signature="")
        def SayHello(self):
                print "hello, world"


if __name__ == "__main__":
        a = Example()
        loop = gobject.MainLoop()
        loop.run()    # run the GLib main loop so the service keeps serving requests
Run it with python, then use dbus-send to send it a message, and it prints hello:
dbus-send --session --print-reply --type=method_call --dest=com.example.HelloHell /com/example/HelloHell com.example.HelloHell.SayHello.SayHello
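
A minimal sketch of a Python client doing the same call as the dbus-send line above, reusing the service's bus name, object path and interface:

import dbus

bus = dbus.SessionBus()
obj = bus.get_object("com.example.HelloHell", "/com/example/HelloHell")
iface = dbus.Interface(obj, dbus_interface="com.example.HelloHell.SayHello")
iface.SayHello()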

2018/6/11

raspberry pi 3 and bluez

Followed the bluez update part of this article.
Upgrading bluez from 5.43 to 5.49 made the bluetoothconf error message go away.

This other article also has steps for updating to 5.48.

The answer in this post shows how to use dbus-send.

This article uses a Pi 3 as an iBeacon.

2018/6/8

bookmark : nordic build with gcc on linux

Although the official documentation is written around Eclipse, it also includes some command-line steps.
Roughly:

First download and install the Arm GCC cross toolchain from Arm's site.
apt-get install build-essential checkinstall
Download the SDK .zip from the Nordic developer site and unpack it.
Edit components/toolchain/gcc/Makefile.posix
to match where your arm gcc cross toolchain is installed and its version; mine is:
GNU_INSTALL_ROOT ?= /usr/bin/
GNU_VERSION ?= 5.4.1
GNU_PREFIX ?= arm-none-eabi
Then you can go into examples/peripheral/<board name>/blank/armgcc/ and run make.
When make finishes, it creates a _build directory there.

Next comes flashing...

Another article describes the same steps.
The EVB is the nRF52840-DK; the part on it acting as the J-Link debugger is the PCA10056.
Documentation on the Linux development environment: nRF52840-PCA10056
The general board user guide is in the documentation directory of the unpacked SDK: index.html

2018/6/7

BLE GATT Example for Android 6

Since Android 6, BLE scanning requires the Location permission.
So older examples need modification, otherwise the scan finds nothing and logcat shows "need permission COARSE_LOCATION or FINE_LOCATION".

For reference, see "BLE Scan Not Working" here.

I made a minimal fix of my own, published at android-BluetoothLEGatt, branch fixAndroid6.
Roughly:
diff --git a/Application/src/main/AndroidManifest.xml b/Application/src/main/AndroidManifest.xml
index d3cf257..7979018 100644
--- a/Application/src/main/AndroidManifest.xml
+++ b/Application/src/main/AndroidManifest.xml
@@ -32,6 +32,7 @@
 
     <uses-permission android:name="android.permission.BLUETOOTH"/>
     <uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
+    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
 
     <application android:label="@string/app_name"
         android:icon="@drawable/ic_launcher"
diff --git a/Application/src/main/java/com/example/android/bluetoothlegatt/DeviceScanActivity.java b/Application/src/main/java/com/example/android/bluetoothlegatt/DeviceScanActivity.java
index 9b86f7a..7c654dc 100644
--- a/Application/src/main/java/com/example/android/bluetoothlegatt/DeviceScanActivity.java
+++ b/Application/src/main/java/com/example/android/bluetoothlegatt/DeviceScanActivity.java
@@ -16,6 +16,7 @@
 
 package com.example.android.bluetoothlegatt;
 
+import android.Manifest;
 import android.app.Activity;
 import android.app.ListActivity;
 import android.bluetooth.BluetoothAdapter;
@@ -46,6 +47,7 @@ public class DeviceScanActivity extends ListActivity {
     private BluetoothAdapter mBluetoothAdapter;
     private boolean mScanning;
     private Handler mHandler;
+    private static final int PERMISSION_REQUEST_CORASE_LOCATION = 7788;
 
     private static final int REQUEST_ENABLE_BT = 1;
     // Stops scanning after 10 seconds.
@@ -64,6 +66,8 @@ public class DeviceScanActivity extends ListActivity {
             finish();
         }
 
+        requestPermissions(new String[]{Manifest.permission.ACCESS_COARSE_LOCATION},PERMISSION_REQUEST_CORASE_LOCATION);
+
         // Initializes a Bluetooth adapter.  For API level 18 and above, get a reference to
         // BluetoothAdapter through BluetoothManager.
         final BluetoothManager bluetoothManager =

2018/6/6

Nordic nRFToolbox build from source

This is Nordic's app for connecting to Nordic EVBs.
The original source code lives in Nordic's GitHub repo: Android-nRF-Toolbox.
But it has build problems and needs a few modifications before it builds.
So I forked a copy to my own GitHub.

Here is how to build it...

It depends on the Android BLE Library, and the location is fixed (in settings.gradle).
Android-BLE-Library and Android-nRF-Toolbox must sit at the same directory level.
The toolbox's settings.gradle contains:
project(':ble').projectDir = file('../Android-BLE-Library/ble')


In other words:
$ git clone https://github.com/checko/Android-BLE-Library.git
$ git clone https://github.com/checko/Android-nRF-Toolbox.git
$ cd Android-nRF-Toolbox
$ git checkout fixRequstClassNotFound
I imported Android-BLE-Library first and built it, then imported Android-nRF-Toolbox and built the APK.

2018/6/4