So the idea is to use this attribute directly for labeling.
Several articles and GitHub projects do exactly that:
- DRIVETRUTH: Automated Autonomous Driving Dataset Generation for Security Applications
- CARLA bounding box
- github: CARLA-2DBBox
```python
# To get a numpy array [[vel, azimuth, altitude, depth], ...]:
points = np.frombuffer(radar_data.raw_data, dtype=np.dtype('f4'))
points = np.reshape(points, (len(radar_data), 4))
```

Note: the field ordering here is not the same as the one above.
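For illustration, the parsing can be exercised without a running simulator by packing a fake `raw_data` buffer (the values below are made up) that mimics the float32 layout of `carla.RadarMeasurement.raw_data`:

```python
import struct

import numpy as np

# Two fake detections as float32 [vel, azimuth, altitude, depth] each.
raw = struct.pack("8f", 1.0, 0.1, 0.2, 30.0,
                  2.0, -0.1, 0.0, 45.0)

# 16 bytes per detection (4 x float32), so len(raw) // 16 detections.
points = np.frombuffer(raw, dtype=np.dtype('f4'))
points = np.reshape(points, (len(raw) // 16, 4))
print(points[1, 3])  # depth of the second detection
```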
$ nvidia-xconfig --query-gpu-info
Number of GPUs: 1
GPU #0:
Name : NVIDIA TITAN RTX
UUID : GPU-XXXXXXXXXXXX
PCI BusID : PCI:1:0:0
Number of Display Devices: 1
Display Device 0 (TV-6):
No EDID information available.
Check the GPU's PCI Bus ID, then:

```shell
$ sudo nvidia-xconfig -a --allow-empty-initial-configuration --use-display-device=None --virtual=1920x1080 --busid=PCI:1:0:0
Using X configuration file: "/etc/X11/xorg.conf".
Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen0".
Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'
New X configuration file written to '/etc/X11/xorg.conf'
```

Then, because the nvidia driver version is newer than 440.xx, add this to the Screen section of xorg.conf:

```
Option "HardDPMS" "false"
```
In /etc/X11/Xwrapper.config, set:

```
allowed_users=anybody
needs_root_rights=yes
```

The user also has to be in the tty group.
X.Org X Server 1.21.1.11
X Protocol Version 11, Revision 0
Current Operating System: Linux i7-14700 6.8.0-35-generic #35-Ubuntu SMP PREEMPT_DYNAMIC Mon May 20 15:51:52 UTC 2024 x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-35-generic root=UUID=e4ea2afe-cc4e-42ce-a53f-5032e417f9f7 ro
xorg-server 2:21.1.12-1ubuntu1.1 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.42.2
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.7.log", Time: Fri Jan 17 05:52:20 2025
(==) Using config file: "/etc/X11/xorg.conf"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
The XKEYBOARD keymap compiler (xkbcomp) reports:
> Warning: Could not resolve keysym XF86CameraAccessEnable
> Warning: Could not resolve keysym XF86CameraAccessDisable
> Warning: Could not resolve keysym XF86CameraAccessToggle
> Warning: Could not resolve keysym XF86NextElement
> Warning: Could not resolve keysym XF86PreviousElement
> Warning: Could not resolve keysym XF86AutopilotEngageToggle
> Warning: Could not resolve keysym XF86MarkWaypoint
> Warning: Could not resolve keysym XF86Sos
> Warning: Could not resolve keysym XF86NavChart
> Warning: Could not resolve keysym XF86FishingChart
> Warning: Could not resolve keysym XF86SingleRangeRadar
> Warning: Could not resolve keysym XF86DualRangeRadar
> Warning: Could not resolve keysym XF86RadarOverlay
> Warning: Could not resolve keysym XF86TraditionalSonar
> Warning: Could not resolve keysym XF86ClearvuSonar
> Warning: Could not resolve keysym XF86SidevuSonar
> Warning: Could not resolve keysym XF86NavInfo
Errors from xkbcomp are not fatal to the X server
```shell
$ /opt/TurboVNC/bin/vncserver :8
Desktop 'TurboVNC: i7-14700:8 (charles-chang)' started on display i7-14700:8
Starting applications specified in /opt/TurboVNC/bin/xstartup.turbovnc
Log file is /home/charles/.vnc/i7-14700:8.log
```

On first launch it asks you to set a password for vncviewer.
```shell
~$ DISPLAY=:8 vglrun -d :7 glxgears
```

First set the DISPLAY environment variable to :8 (the VNC server), then use "-d :7" to hand OpenGL rendering to display :7 (the nvidia X session).
MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file:
No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
failed to load driver: iris
MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file:
No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 149 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 167
Current serial number in output stream: 168
Following the reference above, /usr/lib indeed has no dri directory, so symlink the installed location over:
```shell
sudo apt --reinstall install libgl1-mesa-dri
cd /usr/lib
sudo ln -s x86_64-linux-gnu/dri ./dri
```

Then a new error:
MESA-LOADER: failed to open iris: /home/charles/miniconda3/envs/carla/bin/../lib/libstdc++.so.6:
version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-17.so.1)
(search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
failed to load driver: iris
MESA-LOADER: failed to open swrast: /home/charles/miniconda3/envs/carla/bin/../lib/libstdc++.so.6:
version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-17.so.1)
(search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
X Error of failed request: BadValue (integer parameter out of range for operation)
So check which GLIBCXX versions the conda environment's libstdc++.so.6 supports:
```shell
$ strings /home/charles/miniconda3/lib/libstdc++.so.6 | grep ^GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
...
GLIBCXX_3.4.28
GLIBCXX_3.4.29
GLIBCXX_DEBUG_MESSAGE_LENGTH
...
GLIBCXX_3.4.26
```

Sure enough, 3.4.30 is missing.
```shell
$ batcat --generate-config-file
Success! Config file written to /home/charles/.config/bat/config
```

Then edit ~/.config/bat/config:
```diff
-#--theme="TwoDark"
+--theme="GitHub"
```

The default theme is meant for dark mode; in a light/bright mode terminal the "GitHub" theme works well.
batcat README.md
```shell
./CarlaUE4.sh
```

It turned out the GPU was not being used. Per "cannot run Carla using nvidia GPU under Linux with multi-GPU installed : PRIME instruction ignored by Carla #4716", adding the flag

```shell
./CarlaUE4.sh -prefernvidia
```

fixes it. With 0.10.0 there is no need to add -prefernvidia; it uses the GPU automatically:

```shell
./CarlaUnreal.sh
```

For off-screen rendering:

```shell
./CarlaUE4.sh -prefernvidia -RenderOffScreen
```
python environment.py --cars LowBeam All
```shell
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

This is needed for docker's --gpus all option to take effect.
```shell
docker run --privileged --gpus all --net=host -e DISPLAY=$DISPLAY carlasim/carla:0.9.15 /bin/bash ./CarlaUE4.sh
```

After it starts, because of --net=host, running the code under PythonAPI/examples works exactly the same as before.
import os
os.environ['USER_AGENT'] = 'myagent'
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
# List of URLs to load documents from
urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
    "https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
# Load documents from the URLs
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
Then split the page content into small chunks:
# Initialize a text splitter with specified chunk size and overlap
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=250, chunk_overlap=0
)
# Split the documents into chunks
doc_splits = text_splitter.split_documents(docs_list)
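For intuition, the chunking step can be approximated by a naive fixed-size splitter. This is a simplification: RecursiveCharacterTextSplitter actually splits on separators and, with from_tiktoken_encoder, counts tiktoken tokens rather than characters.

```python
def split_text(text, chunk_size=250, chunk_overlap=0):
    """Naive fixed-size chunking: consecutive windows overlapping by chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# split_text("abcdefghij", chunk_size=4, chunk_overlap=1)
# → ['abcd', 'defg', 'ghij', 'j']
```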
Convert each of these small chunks into an embedding, i.e. an N-dimensional tensor.
from langchain_ollama import OllamaEmbeddings
embeddings = OllamaEmbeddings(
    model="llama3",
)
Once all the text is converted into embeddings/tensors, store them in a local database so that when a user asks a question later, the answer can be looked up there.
from langchain_community.vectorstores import SKLearnVectorStore
# Create embeddings for documents and store them in a vector store
vectorstore = SKLearnVectorStore.from_documents(
    documents=doc_splits,
    embedding=embeddings,
)
retriever = vectorstore.as_retriever(k=4)
Where a RAG vector store differs from SQL: at query time, the vector store returns the content closest to the query, rather than requiring data that matches exactly the way SQL does.
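That "closest match" behavior can be sketched with plain numpy cosine similarity. This is an illustration only, not SKLearnVectorStore's actual implementation; `top_k` is a hypothetical helper named for this example.

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=4):
    """Indices of the k rows of doc_vecs most similar to query_vec by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:k]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(top_k(np.array([1.0, 0.0]), docs, k=2))  # nearest documents first
```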
from langchain_ollama import ChatOllama
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Define the prompt template for the LLM
prompt = PromptTemplate(
    template="""You are an assistant for question-answering tasks.
Use the following documents to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise:
Question: {question}
Documents: {documents}
Answer:
""",
    input_variables=["question", "documents"],
)
# Initialize the LLM with Llama 3.1 model
llm = ChatOllama(
    model="llama3.1",
    temperature=0,
)
rag_chain = prompt | llm | StrOutputParser()
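The `|` operator chains the prompt, model, and output parser into one runnable. As a toy illustration of this piping (not langchain's actual Runnable implementation; `Step` is invented for this sketch):

```python
class Step:
    """Toy runnable: wraps a function and supports `a | b` chaining, LCEL-style."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # a | b produces a step that runs a first, then feeds its output to b.
        return Step(lambda x: other.invoke(self.invoke(x)))

chain = Step(lambda d: f"Q: {d['question']}") | Step(str.upper)
print(chain.invoke({"question": "hi"}))  # → "Q: HI"
```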
Build the RAG application class:
# Define the RAG application class
class RAGApplication:
    def __init__(self, retriever, rag_chain):
        self.retriever = retriever
        self.rag_chain = rag_chain

    def run(self, question):
        # Retrieve relevant documents
        documents = self.retriever.invoke(question)
        # Extract content from retrieved documents
        doc_texts = "\n".join([doc.page_content for doc in documents])
        # Get the answer from the language model
        answer = self.rag_chain.invoke({"question": question, "documents": doc_texts})
        return answer
Test it with this RAG class:
# Initialize the RAG application
rag_application = RAGApplication(retriever, rag_chain)
# Example usage
question = "What is prompt engineering"
answer = rag_application.run(question)
print("Question:", question)
print("Answer:", answer)
The output will be:
Question: What is prompt engineering
Answer: Prompt engineering is the process of designing and optimizing input prompts for language models, such as chatbots or virtual assistants. According to Lilian Weng's 2023 article "Prompt Engineering", this involves techniques like word transformation, character transformation, and prompt-level obfuscations to improve model performance. The goal is to create effective and efficient prompts that elicit accurate responses from the model.