A real-time computer vision inference pipeline designed for industrial edge deployment.
Runs PPE and zone-violation detection on Jetson or x86 hardware,
with RTSP camera integration, telemetry broadcast, and a Bash deploy script
for field technicians.
Live node status (example dashboard snapshot):

EDGE-NODE-01 · rtsp://192.168.1.22:554/stream1 · connected via VPN
28.4 FPS · TEMP 46.2°C · GPU 38% · eth0 UP · 12 detections/60s

system
- platform: Jetson Orin
- cpu temp: 46.2°C
- gpu util: 38%
- uptime: 4h 22m
- net iface: eth0 UP
- avg fps: 28.4

detection log
- 00:04  NO HELMET  (conf 0.94)
- 00:09  PERSON     (conf 0.98)
- 00:17  NO VEST    (conf 0.87)
- 00:23  PERSON     (conf 0.99)
- 00:31  COMPLIANT  (conf 0.91)
Architecture
📷 IP Camera (RTSP / PoE) ──▶ ⚡ RTSPReader (threaded, auto-reconnect) ──▶ 🧠 Inference Pipeline (Roboflow SDK / HOG) ──▶ 📡 Telemetry UDP (broadcast / VPN) ──▶ 🖥 Dashboard (remote monitoring)
The pipeline is designed for field reliability: the RTSP reader runs in a dedicated thread with auto-reconnect,
inference is decoupled from the display loop, and telemetry is broadcast over UDP so any monitoring node on the subnet
(or across a VPN tunnel) can receive it without a persistent connection.
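A telemetry broadcast of this kind can be sketched in a few lines of Python. The port number, payload fields, and error handling below are illustrative assumptions, not the project's actual wire format:

```python
import json
import socket

TELEMETRY_PORT = 5005  # assumed port; not specified by the project


def broadcast_telemetry(payload: dict, port: int = TELEMETRY_PORT) -> bytes:
    """Send one JSON telemetry datagram to the subnet broadcast address.

    Any node listening on the same port receives it without a
    persistent connection, matching the pattern described above.
    """
    data = json.dumps(payload).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # SO_BROADCAST must be set before sending to 255.255.255.255
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        try:
            sock.sendto(data, ("255.255.255.255", port))
        except OSError:
            # Hosts without a broadcast-capable interface land here;
            # on the edge node the datagram reaches every subnet listener.
            pass
    finally:
        sock.close()
    return data  # returned so callers can log or inspect the payload


broadcast_telemetry({"node": "EDGE-NODE-01", "fps": 28.4, "gpu_util": 38})
```

A receiver only needs to bind a UDP socket to the same port, which is why no connection state survives a node reboot.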
Deployment is managed by a single Bash script (deploy.sh) that handles
dependency install, network interface config, static IP assignment, firewall rules, and systemd service registration —
the same pattern used for field rollouts at customer sites.
Code sample
deploy.sh — field setup in one command

```bash
# Full install on a fresh Jetson or Linux edge device
sudo ./deploy.sh install \
  --source rtsp://192.168.1.22:554/stream1 \
  --api-key rf_abc123

# Verify camera connectivity before deployment
sudo ./deploy.sh check-camera rtsp://192.168.1.22:554/stream1

# Set static IP (no DHCP at customer site)
sudo ./deploy.sh set-ip eth0 192.168.1.100 192.168.1.1

# Tail live inference log
./deploy.sh logs
```
edge_monitor.py — threaded RTSP reader with auto-reconnect

```python
def _reader_loop(self):
    while self._running:
        cap = cv2.VideoCapture(self.source)
        if not cap.isOpened():
            log.warning(f"Cannot open {self.source}, retry in {self.reconnect_delay}s")
            time.sleep(self.reconnect_delay)
            continue
        while self._running:
            ok, frame = cap.read()
            if not ok:
                log.warning("Stream dropped, reconnecting")
                break
            with self._lock:
                self._frame = frame  # thread-safe frame swap
        cap.release()
        time.sleep(self.reconnect_delay)
```
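The loop above lives inside a small reader class. The sketch below shows one plausible shape for that wrapper (the class interface beyond `_reader_loop`, `_lock`, `_frame`, and `_running` is an assumption), with a stub loop standing in for `cv2.VideoCapture` so the threading pattern is visible on its own:

```python
import threading
import time


class RTSPReader:
    """Threaded frame reader; latest() never blocks on the network."""

    def __init__(self, source, reconnect_delay=2.0):
        self.source = source
        self.reconnect_delay = reconnect_delay
        self._lock = threading.Lock()
        self._frame = None
        self._running = False
        self._thread = None

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._reader_loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        if self._thread is not None:
            self._thread.join()

    def latest(self):
        # Consumers grab the most recent frame without touching the stream
        with self._lock:
            return self._frame

    def _reader_loop(self):
        # Stub loop: a real implementation opens cv2.VideoCapture(self.source)
        # and swaps in decoded frames, as in the snippet above.
        n = 0
        while self._running:
            n += 1
            with self._lock:
                self._frame = n  # stand-in for a decoded frame
            time.sleep(0.01)
```

The inference loop polls `latest()` at its own rate, so a dropped stream stalls only the reader thread, never the detector.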
Skills demonstrated
- CV / Inference: Roboflow pipeline (SDK + fallback HOG)
- Edge hardware: NVIDIA Jetson (JetPack 5, tegrastats)
- Networking: RTSP / VPN / SSH (stream reconnect logic)
- Linux: nmcli / ifconfig / systemd (static IP, service mgmt)
- Scripting: Bash deploy script (idempotent field SOP)
- Telemetry: UDP broadcast (remote monitoring)
- Docker: inference container (roboflow/inference-server)
- Documentation: SOPs & wiring diagrams (field-ready format)
Key outcomes
- Sub-35 ms inference latency on Jetson Orin using Roboflow TensorRT-optimised models
- Zero-downtime RTSP reconnect: the threaded reader recovers from dropped streams without stopping the inference loop
- Single-command site deployment: deploy.sh configures networking, firewall, and systemd service for a new edge node in under 5 minutes
- Hardware-agnostic: runs on Jetson (CUDA), x86 CPU, or ARM without code changes; the device is auto-detected
- Telemetry visible from any subnet node: UDP broadcast works across the VPN tunnels used in customer deployments
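The device auto-detection mentioned above can be sketched as below. The probe order (Jetson marker file, then CUDA, then CPU fallback) and the function name are assumptions about how such a check is commonly written, not the project's exact logic:

```python
import os


def detect_device() -> str:
    """Pick an inference backend for the current host.

    Returns "jetson", "cuda", or "cpu". JetPack installs ship an
    /etc/nv_tegra_release marker file on Jetson boards.
    """
    if os.path.exists("/etc/nv_tegra_release"):
        return "jetson"
    try:
        import torch  # optional dependency, assumed here for CUDA probing
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"  # x86 or ARM fallback, e.g. HOG on OpenCV


backend = detect_device()  # one of "jetson", "cuda", "cpu"
```

Keeping the probe in one function means the rest of the pipeline only ever sees a backend string, which is what makes the same code run unchanged across hardware.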