Artificial Intelligence (AI) Tailgater Detection and Deterrence (AITDD)


8-28-2024

I thought it would be fun to make a sort of AI, James Bond style tailgater deterrence system for my truck. I wanted something retractable that could sit stealthily under the bed cover and raise and engage when needed. Since I had lasers and a pan/tilt laying around from previous projects, I decided to use those as the base of the system. To start, I needed to be able to access the backup camera of my truck to feed an AI. Since my truck doesn't have a backup camera, I installed my own for $40. Then I had to build the lifter and write the control code for the AI targeting system.



Components

  • Rearview Camera I bought a GreenYi 5G 720P HD Car License Plate Rear/Front View Reverse Camera (You can hopefully find it here). It turns out this uses GPEncoder under the hood, as the imager/board is made by Shen Zhen Joyhonest. Any camera using GPEncoder will work with this code - but it is easy to swap in other formats too!
  • Raspberry Pi (x2) The Raspberry Pi is the control unit of the defense system. It communicates with the camera and feeds those images to the AI server running an object detection model on an attached Coral Edge TPU. A second Pi is embedded in the cabin of the truck, where it serves as a control station.
  • Laser/Headlights Need something to use for deterrence...
  • Wiring Harness This required wiring the rearview camera and power to the lifter, laser, AI system, etc. I used 12V from my Ram 1500's battery. I highly recommend a dedicated fuse; I used an inline fuse so it works with my spare kit if anything blows. Wire coverings/conduit are a must as well.
  • Turret and Lifter The turret consists of a pan/tilt system powered by sail winch servos, plus a linear-actuator-powered lifter to raise and lower the turret behind the tailgate.
  • The Code! Be sure to grab the code! Clone it from Gitlab HERE

Step 1 - Getting Video (Hacking the Rearview Camera)

I wanted to use a relatively stealthy looking camera, so I decided on a license plate camera. This one runs on 12V (easy to wire into your vehicle) and produces its own wifi network that you connect to when using the companion Android app. There were no guides available on how to hack its feed, so I decided to write this up!
Since the camera streamed to an Android app, I figured it would be relatively straightforward to "hack" or reverse engineer the incoming image stream. I decided to do a little enumeration of the rearview camera: I used a 12V power supply to run it, then got started on the network it created.
I started with nmap to see what the camera was producing. It was generating a wifi network, which I connected to; my device was assigned an IP in a /24 subnet, so a scan of 192.168.29.0/24 revealed the camera. I followed up with a targeted scan and found:
$ sudo nmap -sT -A -O -sU --top-ports=100 192.168.29.1
Starting Nmap 7.80 ( https://nmap.org ) at 2023-12-03 17:09 EST
Nmap scan report for 192.168.29.1
Host is up (0.024s latency).
Not shown: 197 closed ports
PORT STATE SERVICE VERSION
8081/tcp open blackice-icecap?
67/udp open|filtered dhcps
49153/udp open|filtered unknown
MAC Address: 88:B5:FF:E0:9B:D3 (Unknown)
No exact OS matches for host (If you know what OS is running on it, see https://nmap.org/submit/ ).
TCP/IP fingerprint:
OS:SCAN(V=7.80%E=4%D=12/3%OT=8081%CT=7%CU=7%PV=Y%DS=1%DC=D%G=Y%M=88B5FF%TM= OS:656CFD2B%P=x86_64-pc-linux-gnu)SEQ(SP=0%GCD=533%ISR=6E%TI=I%CI=I%II=RI%S OS:S=O%TS=U)OPS(O1=M5B4%O2=M5B4%O3=M5B4%O4=M5B4%O5=M5B4%O6=M5B4)WIN(W1=111C OS:%W2=111C%W3=111C%W4=111C%W5=111C%W6=111C)ECN(R=Y%DF=N%T=FF%W=111C%O=M5B4 OS:%CC=N%Q=)T1(R=Y%DF=N%T=FF%S=O%A=S+%F=AS%RD=0%Q=)T2(R=N)T3(R=Y%DF=N%T=FF% OS:W=111C%S=O%A=S+%F=AS%O=M5B4%RD=0%Q=)T4(R=Y%DF=N%T=FF%W=111C%S=A%A=S%F=AR OS:%O=%RD=0%Q=)T5(R=Y%DF=N%T=FF%W=111C%S=A%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=N OS:%T=FF%W=111C%S=A%A=S%F=AR%O=%RD=0%Q=)T7(R=Y%DF=N%T=FF%W=111C%S=A%A=S+%F= OS:AR%O=%RD=0%Q=)U1(R=Y%DF=N%T=FF%IPL=38%UN=0%RIPL=G%RID=G%RIPCK=G%RUCK=G%R OS:UD=G)IE(R=Y%DFI=S%T=FF%CD=S)
Network Distance: 1 hop

I played with the camera for a bit with nc, but realized I would need to reverse engineer the Android app to better understand what was going on with the generated wifi network. The Android emulator Genymotion allows bridging an emulated Android device to the host, letting us packet capture between the Android app and the wifi camera on our LAN. I ran a packet capture while using JoyTrip in the emulator.

Three small control packets go from my Android device (192.168.29.4) to UDP ports 20000 and 20001 of the camera at .1. This triggers a bunch of UDP packets streamed to the emulator on port 10900 from the camera. Seems promising and simple to recreate.
Monkey see, monkey do - a (bad) reverse engineer's credo.
I tried some simple Python code to repeat the captured packets to the camera while running an nc UDP listener on port 10900 (nc -vklu 10900).
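Roughly like this - a minimal replay sketch. The control payloads below are placeholders; substitute the exact bytes from your own packet capture:

# replaySketch.py - minimal replay of the captured control packets.
# The payloads here are placeholders - use the bytes from your own pcap.
import socket

CAMERA_IP = "192.168.29.1"
CONTROL_PORTS = (20000, 20001)   # control ports observed in the capture
STREAM_PORT = 10900              # port the camera streams video back to

START_PACKETS = [b"\x00\x01", b"\x00\x02", b"\x00\x03"]  # placeholders!

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", STREAM_PORT))     # listen where the video will arrive

# Replay the captured control packets to kick off the stream
for port in CONTROL_PORTS:
    for payload in START_PACKETS:
        sock.sendto(payload, (CAMERA_IP, port))

# Dump whatever comes back, like the nc listener did
while True:
    data, addr = sock.recvfrom(65535)
    print(f"{addr[0]}: {len(data)} bytes")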

And received a bunch of UDP packets on my netcat listener. However, after a few seconds they stopped. I figured the camera expected a heartbeat of sorts. I went back to the packet capture and noticed a few things. First were the expected heartbeats. Second, closer inspection of the camera output revealed the string GPEncoder in the packet capture.

Searching for this string took some time - but eventually I stumbled on a paper by Christopher L.M. Wheeler on Evidence Retrieval From Unbranded and Counterfeit Technology, which explained that GPEncoder was used to encode jpgs, and that JPG frames start with FF D8 FF and end with FF D9 (with no embedded length). Checking the packet capture, we were able to see the start and end jpg markers! We also see a header that contains a packet and frame counter. Soon I was able to decode a jpg from the camera, like this one:

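The extraction boils down to watching a reassembly buffer for those markers. A sketch of the idea - HEADER_LEN is an assumption here; measure the real per-packet header (with its packet/frame counters) in your own capture:

# jpgExtractSketch.py - pull jpgs out of the UDP stream by marker scanning.
# HEADER_LEN is assumed; measure the real header size in your capture.
import socket

HEADER_LEN = 8            # assumed size of the packet/frame counter header
SOI = b"\xff\xd8\xff"     # jpg start-of-image marker
EOI = b"\xff\xd9"         # jpg end-of-image marker

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 10900))

buf = bytearray()
while True:
    data, _ = sock.recvfrom(65535)
    buf += data[HEADER_LEN:]              # strip the header, keep jpg payload
    start = buf.find(SOI)
    end = buf.find(EOI, start + 3) if start != -1 else -1
    if start != -1 and end != -1:         # a complete frame is in the buffer
        with open("/dev/shm/frame_live.jpg", "wb") as f:
            f.write(bytes(buf[start:end + 2]))
        del buf[:end + 2]                 # discard the consumed bytes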
Further refinement of the scripts and research into the protocol led me to some interesting findings. It looks like GPEncoder is the standard encoding tool used by camera and wifi systems from China's Shen Zhen Joyhonest. These systems are used for drones, FPV equipment, microscopes, endoscopes, baby cameras, rearview cameras, etc. Apps that use it include: Car-FPV, Grinning, JHCamera, JoyTrip, LeLe Cam, LXCamera, Max-see, Sports DV, Sports Camera and TC WIFI. All of these can be hacked using the scripts in this project. Of course some modifications to commands (Start/Stop/HeartBeat/etc.) may be required, but this is simple to figure out if you follow the steps to emulate and packet capture the communications between applications and their cameras. Or try to reverse engineer them with jadx or something and make your own writeup! I would be concerned about using this system for anything FPV related, as not encoding the images into a stream à la H.264/5 would cause some serious latency, I imagine. But it's simple, I guess.

Step 2 - Ingesting the Video

Once I was able to read a picture as a jpg from the camera with my PC, I needed to be able to view it "live". I also needed to save a training/validation dataset for the AI system that was going to be eating the images the camera produced.
The live view turned out to be easy with sxiv, which allowed me to rewrite images to a single jpg that the viewer would live-update. Its usage was easy with the readToSingleJpg.py script included in the Altrubots git mentioned at the top of this article.
###
# Run the python:
# python3 ./cameraUtils/readToSingleJpg.py
# View the latest image with sxiv:
# sudo apt install sxiv
# sxiv /dev/shm/frame_live.jpg
###

Additionally, the AI system needs a mechanism to load the image. However, with our low-powered Pi the model runs slower than we RX pictures, and I didn't want to pollute the already convoluted AI code with more sockets. So the readForAi.py script was developed to read and lock an image that is provided to the AI system.
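A minimal sketch of the locking idea, assuming flock-style advisory locks on the shared image (the real implementation is readForAi.py in the repo):

# Lock-guarded read so the AI never consumes a half-written jpg.
# Sketch only - the real logic lives in readForAi.py.
import fcntl

def read_latest_frame(path="/dev/shm/frame_live.jpg"):
    with open(path, "rb") as f:
        fcntl.flock(f, fcntl.LOCK_SH)    # blocks while a writer holds LOCK_EX
        try:
            return f.read()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)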

Step 3 - AI Reading the Image

Before worrying about training my own model, I started with some pretrained TensorFlow Lite models, using the wonderful TensorFlow documentation as well as EdjeElectronics, a great tutorial by Evan Juras that served as the starting point for the AI application.
A fair bit of modification was required, as the app as written worked on either a true stream or a single image, not the continually updating single image the camera was producing. The code was modified to handle a locking mechanism, ensuring an image isn't written to while it's being handled and that the most recently received image is the one the model runs on. Some restructuring was done to make the application run more rapidly by loading the model into the Coral only at startup. And of course we had to add logic for the AI server to provide its targeting information to the hardware control system.
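In outline, the restructured loop looks something like this - a sketch assuming the tflite_runtime API, the TF1 SSD output ordering used in the tutorial, and example paths:

# Detection loop sketch: delegate and model are loaded once, then reused.
import io
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="sample_model_coral/detect.tflite",    # example path
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

while True:
    jpg = read_latest_frame()                  # lock-guarded read from Step 2
    img = Image.open(io.BytesIO(jpg)).convert("RGB").resize((width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img), 0))
    interpreter.invoke()
    outs = interpreter.get_output_details()
    boxes = interpreter.get_tensor(outs[0]["index"])[0]    # TF1 SSD ordering
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    if scores[0] > 0.5:                        # highest-scoring detection first
        ymin, xmin, ymax, xmax = boxes[0]
        cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
        # cx, cy get handed to the hardware controller (next step)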
Running the code:
cd AI/tflite1/
source tflite1-env/bin/activate
python ai_targeting_system.py --modeldir=sample_model_coral --image /tmp/img2.jpg --edgetpu
This triggers the AI server to use the TruckMessaging.py system to send messages to the hardware controller.
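The exact wire format is TruckMessaging.py's business, but the handoff amounts to a small datagram carrying the normalized target center - a sketch with assumed field names and an example address:

# Targeting handoff sketch. Field names, framing, and the controller
# address are assumptions - see TruckMessaging.py for the real protocol.
import json
import socket

CONTROLLER = ("192.168.1.50", 5005)       # example hardware controller address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(cx, cy, score):
    # cx, cy: detection center, normalized 0..1 in camera space
    msg = json.dumps({"x": cx, "y": cy, "conf": score}).encode()
    sock.sendto(msg, CONTROLLER)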

Step 4 - Control the hardware (Software)


Now for the fun part - actually installing and controlling the hardware. The turret itself is controlled via two Hitec HS-785HB winch servos, which can rotate through a much greater range than standard RC servos - 3.5 revolutions! The servos are controlled by a Pololu Maestro PWM controller connected to our Pi. The lifter that raises and lowers the turret and the opener for the bed cover are both linear actuators, controlled via a Sabertooth motor controller. The lights and lasers are controlled via RC PWM relays, also connected to the Maestro.
You can get a high level view of the somewhat complex architecture here:

Luckily, control of the Maestros, Sabertooths, and PWM relays is all easily integrated from the main Altrubots RC Anywhere codebase. A few additions were required: an in-vehicle router for LAN/WLAN, the Coral TPU, and of course the wifi camera. And to maximize convenience, a second Pi with an embedded display was used for the control system in the cab of the truck. Though I could have easily routed an Ethernet line with the power cables (power wiring coming in the next sections), WiFi was selected to more easily integrate additional systems into the scheme without needing more LAN wiring.
The main logic for hardware control is located in maestroUDPServer.py - it handles all control of the servos/actuators. That is one of the great things about the Maestros - you can easily control multiple PWM RC components with a single serial connection and a simple API.
cd Altrubots/python
python3 maestroUDPServer.py
Note - the preceding code expects you to have your Maestro plugged in. It runs a UDP server socket that parses commands sent to the application using a simple protocol called "JohnSON". Yup. It is responsible for taking targeting information from the AI system and, if the targeting system is engaged by the operator, providing X/Y commands to the servos, as well as controlling the actuators to stow/deploy the system. This provides the DIY AIDTR - Aided Target Recognition - functionality to the vehicle.
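Under the hood, that boils down to the Maestro's documented compact serial protocol. A sketch of the core write, with example channel assignments and device path:

# Maestro compact-protocol sketch - maestroUDPServer.py wraps writes like
# this. Channel numbers and the serial device path are examples.
import serial

maestro = serial.Serial("/dev/ttyACM0", 9600)

def set_target(channel, microseconds):
    target = int(microseconds * 4)            # Maestro units: 1/4 microseconds
    maestro.write(bytes([0x84, channel,       # 0x84 = compact "set target"
                         target & 0x7F, (target >> 7) & 0x7F]))

set_target(0, 1500)    # e.g. pan servo to center
set_target(1, 1500)    # e.g. tilt servo to center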

Step 5 - Control the hardware (Hardware)

The wifi camera is simple to power - it needs 12V+ from the battery, and I wanted mine on a "keyed" ignition source. This means the camera only turns on when I turn the key, eliminating the chance of it draining my battery. Ensure an appropriate fuse is on the circuit if following along. Do the automotive wiring well: make sure it has a solid ground, and ensure the wiring is in conduit, fused, protected, and secure.
The bed turret system also uses a single 12V input - power is then regulated to the appropriate voltage per component (5V, 7.3V, or 24V). A wiring diagram:

You can see the rearview camera here - it's actually pretty "stealthy" for not being OEM.

And observe the wiring under the body (The ribbed conduit):

Which originates from the cab on a keyed ignition source. Mine is hooked up to the cigarette lighter circuit and routed out alongside the parking brake cable, then covered by my truck's door liner. I simply had to pull up the door liner, make a small incision in the gasket where the existing cable egressed from the cab, and run the wires through there.

You can't even see any wires with the cover back on:

The bed has a single XT60 connector providing 12V to the regulator box that handles distribution to the "ECUs", as they would be called in automotive.


Step 6 - The Lifter and the Turret


I needed a way to raise and lower the system, and I determined linear actuators would be the easiest and most reliable mechanism for this. Leveraging a design used in woodworking for desk and coffee table lifts, combined with a welded frame to mount to the truck, a simple lifter was created. A platform is placed on top of the lifter for the electronics, which sit in waterproof housings and are then covered by a waterproof cover. The lift itself is set in a welded frame that attaches via ratchet straps to the truck. This makes installing and uninstalling the system very quick, as only two ratchet straps are required to keep it secure.
The turret is mounted to the platform and has three sets of mounting holes: one set for the laser and the other two for the motorcycle headlights. You can find the CAD files for the 3d printed mount in the git repo under CAD, and you can find a video on how to control and wire the laser on our Youtube Page Here. You can make the turret yourself using 2 servos, or you can find a heavy duty pan/tilt system and use that. If interested in purchasing one - let me know via the contact page!

Step 7 - Mounting in the Bed

As mentioned - this part is pretty easy thanks to the mounting structure I welded. Simply place it in the middle of the bed and attach the ratchet straps to the corner mounts. Because the hooks are lower than the mounts, the straps pull the turret down and into place on the bed. You can also screw it down to make it more permanent, but I need to be able to remove the system to go to work.

Step 8 - Calibrating the Targeting System

A targeting defense system is useless if it misses or can't hit the tailgaters - so I made a simple mechanism to recalibrate the turret to match the camera. To use this, the turret was mounted and the truck backed up toward a blank flat surface, like a closed garage door. Then the laser was turned on and the sendRight, sendUp, etc. bash scripts were used to move the turret. Referencing the image stream, use the scripts to send the laser to each edge of what the camera displays. Those extremities are recorded and used to update the ranges in the NewRangeX and NewRangeY variables in maestroUDPServer.py. This process could be automated in future updates - but for now it's manual! It still works pretty quickly.
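The remap itself is just linear interpolation from camera space into the measured servo range - a sketch with made-up endpoints standing in for whatever your calibration pass records:

# Camera-to-servo remap sketch. The endpoint values are examples - yours
# come from the laser-on-the-garage-door calibration pass described above.
NewRangeX = (1100, 1900)   # servo microseconds at the camera's left/right edges
NewRangeY = (1250, 1750)   # servo microseconds at the camera's top/bottom edges

def to_servo(norm, rng):
    # norm: a 0..1 coordinate from the detector; clamp, then lerp into rng
    norm = min(max(norm, 0.0), 1.0)
    return int(rng[0] + norm * (rng[1] - rng[0]))

pan_us = to_servo(0.5, NewRangeX)     # dead center -> 1500
tilt_us = to_servo(0.25, NewRangeY)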

Step 9 - In Cab Control System - Embedded GUI Development


In order to control the turret, a simple mechanism usable from inside the cabin was needed. I chose a wifi connected Raspberry Pi with a touchscreen. I did not want to have to run any commands on the box, and I wanted the gui to start automatically. To get this working I decided to do a little tkinter Python gui and have it connect directly to the X server. This allows the gui to automatically start and take the whole screen. It is pretty simple with Ubuntu.
You can find the display I selected here. It required no driver installation and worked out of the box. The Waveshare 4.3 DSI LCD 800x480 screen ran great! It also fits nicely in a WeatherTech CupFone with a small 3d printed attachment.
#Set the default target to not be graphical
systemctl set-default multi-user.target
#To re-enable the desktop, run
systemctl set-default graphical.target
#or start it once with: systemctl isolate graphical.target
#Make sure you have: apt install xorg xterm python3-pil python3-pil.imagetk
#Update your /etc/X11/Xwrapper.config to have allowed_users=anybody
#Then enable a systemd service
#start-gui.service in the git repo looks like:
[Unit]
Description=Start Xorg
[Service]
User=jhs
ExecStart=/bin/bash -c "export DISPLAY=:0; /bin/xinit /home/jhs/Altrubots/python/systemd/run-gui.sh -- :0 vt$XDG_VTNR"
StandardOutput=journal
[Install]
WantedBy=multi-user.target
###
# Where the run-gui.sh script looks like
###
#!/bin/bash
/usr/bin/python3 /home/jhs/Altrubots/python/embeddedTurretGui.py
#You might also need to set up your XAuthority file
#Then reboot and your gui should start!

If you ever want to get to a terminal again, plug in a keyboard and use Ctrl-Alt-F<number> to select a different TTY.
The gui provides the ability to raise and lower the laser turret, as well as engage and disengage the lights, laser, and AI targeting.
A manual control and aiming option is also being worked on!
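A stripped-down sketch of that gui - the command strings and controller address are assumptions; the real embeddedTurretGui.py is in the repo:

# Minimal in-cab gui sketch. Commands and address are placeholders;
# see embeddedTurretGui.py for the real thing.
import socket
import tkinter as tk

CONTROLLER = ("192.168.1.50", 5005)    # example maestroUDPServer address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send(cmd):
    sock.sendto(cmd.encode(), CONTROLLER)

root = tk.Tk()
root.attributes("-fullscreen", True)   # fill the whole 800x480 touchscreen
for label, cmd in [("Raise Turret", "deploy"), ("Lower Turret", "stow"),
                   ("Toggle Laser", "laser"), ("Toggle AI Targeting", "ai")]:
    tk.Button(root, text=label, height=3,
              command=lambda c=cmd: send(c)).pack(fill="x", expand=True)
root.mainloop()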

Step 10 - Test

I do not have any footage of using this on a public road. That would be a bad idea. Shortly I will be uploading some videos of testing on private farm roads.
I do, however, have a cute gif of our dog Lady being very jealous of the turret taking her spot in the bed:


Author Credits

John

08-04-2024

Monkeying about?