ArrowES Equipment Management System (AEMS)
Arrow Emergency Systems (my last employer) sold a lot of equipment that lived out on roadsides. Arrow boards, trailer-mounted message signs, temporary traffic lights, and fleets of service vehicles that spent their lives moving between work sites. Each piece worked well on its own, but none of it really knew about the rest. Our customers (traffic control company managers) wanted to see everything in one place: where their vehicles were, whether a sign was running the right message, whether a light had lost power.
At the time we partially solved this by reselling a third-party vehicle tracking system from a white-label software company in Pakistan. It worked, apart from a few bugs that were never fixed despite promises. Once enough customers depended on it, the vendor started trying to squeeze us by increasing the subscription costs despite our contract, and we still had no way to integrate the rest of our equipment. We were paying for a service that only solved half the problem, locked us out of solving the other half, and came with a vendor threatening to pull the rug out from under us if we didn’t start coughing up more cash.
So the question inside the company shifted from “should we build our own platform” to “can we replace this before they disconnect us”.
The proposed idea was ambitious: instead of just a vehicle tracker, build a single system capable of monitoring and controlling every connected product we manufactured, or any other asset a traffic control company operates. Vehicles, arrow boards, portable traffic lights, and variable message signs would all report into one platform, and customers would interact with them through a unified web interface.
I led the architecture and backend development of this system, working with another software engineer, Sam, on the web frontend, and two senior electronics engineers on the hardware. The result was the ArrowES Equipment Management System (AEMS): a platform combining custom embedded trackers, cloud processing services, geospatial analysis, and remote firmware updates, all operating over 4G.
The main features of AEMS for vehicles were:
- Vehicle tracking and trip analytics (fuel usage, etc.)
- Geofences, with alerts when vehicles entered a geofenced area
- Maintenance and repair scheduling, plus reporting of vehicle issues read from the car's OBD port
- A self-hosted OpenStreetMap database for determining whether a vehicle was speeding
On the equipment side, we could track and remotely control our own products:
- trailer mounted variable message signs
- arrow boards
- temporary traffic lights
- temporary boom gates
For other equipment our tracker could provide:
- position tracking
- battery monitoring
- 12-24 V relay control
Front End
Sam did the heavy lifting on the front end, handling the web design and making all the tables and icons look good. I’d occasionally jump over to help by writing a SQL query to calculate summaries, or some JavaScript for drawing geofences and trip lines, colouring them red wherever the vehicle exceeded the road speed limit from OpenStreetMap.
That feature turned out to be a great demo tool. In one customer meeting with a traffic control company we showed a recorded trip to explain how speeding detection worked, and it turned out our boss had driven to the meeting significantly over the limit. The map was almost entirely red along the highway, which made the point very clearly without much further explanation.
Backend
I did the software engineering for the backend. This involved developing a server to communicate with the Chinese OBD trackers and interpret their data into points, trips, and vehicles in our database. I also designed an AES-encrypted communication protocol for our custom trackers, and wrote the server that supported them.
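The firmware side of this was C on the SAME70, but the server side gives a feel for it. Below is a minimal sketch, in the spirit of our .NET services, of AES-CBC framing with a per-message IV sent ahead of the ciphertext; the real protocol's message fields, key management, and integrity checks aren't shown here, so treat the layout as an illustration rather than the actual wire format.

```csharp
// Illustrative sketch only: AES-CBC framing with the IV prepended to each
// ciphertext so the server can decrypt every frame independently.
// Key handling and message layout are assumptions, not the real protocol.
using System;
using System.Security.Cryptography;

public static class TrackerFrame
{
    public static byte[] Encrypt(byte[] payload, byte[] key)
    {
        using var aes = Aes.Create();      // CBC + PKCS7, the .NET defaults
        aes.Key = key;                     // e.g. a 128- or 256-bit pre-shared key
        aes.GenerateIV();                  // fresh random IV per message

        using var encryptor = aes.CreateEncryptor();
        byte[] cipher = encryptor.TransformFinalBlock(payload, 0, payload.Length);

        var frame = new byte[aes.IV.Length + cipher.Length];
        Buffer.BlockCopy(aes.IV, 0, frame, 0, aes.IV.Length);
        Buffer.BlockCopy(cipher, 0, frame, aes.IV.Length, cipher.Length);
        return frame;
    }

    public static byte[] Decrypt(byte[] frame, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = frame[..16];              // the IV travels in the clear up front
        using var decryptor = aes.CreateDecryptor();
        return decryptor.TransformFinalBlock(frame, 16, frame.Length - 16);
    }
}
```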
Lastly, I wrote a bootloader for the Atmel SAME70. On boot the tracker would securely connect to our bootloader server, check for a firmware update, and, if one was available, download and install it automatically.
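Conceptually, the server side of that update check is simple: the device reports its current firmware version, and the server replies with either "up to date" or a new image plus a hash the bootloader verifies before flashing. Here's a hedged sketch of that flow; names like `Manifest` and the version scheme are stand-ins, and the real server spoke our encrypted protocol rather than reading files per request.

```csharp
// Hypothetical sketch of the update check the trackers performed on boot.
using System;
using System.IO;

public record Manifest(Version Latest, string ImagePath, byte[] Sha256);

public class BootloaderServer
{
    private readonly Manifest _manifest;

    public BootloaderServer(Manifest manifest) => _manifest = manifest;

    // Returns null when the device is already up to date, otherwise the
    // firmware image plus a hash the bootloader checks before flashing.
    public (byte[] Image, byte[] Sha256)? CheckForUpdate(Version deviceVersion)
    {
        if (deviceVersion >= _manifest.Latest)
            return null;                           // nothing to do

        byte[] image = File.ReadAllBytes(_manifest.ImagePath);
        return (image, _manifest.Sha256);
    }
}
```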
This project was a lot of fun. One of the challenges was developing the algorithms that detected speeding vehicles against our self-hosted instance of OpenStreetMap; it took a bit of work to reliably match a vehicle to the correct road around intersections. The self-hosted Overpass API we used to turn GPS coordinates into street names could also only handle queries sequentially, so I wrote a load-balanced queueing system in front of it, sketched below.
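The queueing layer worked roughly like this: each Overpass instance got its own single-consumer queue, and new lookups were dispatched to whichever queue was shortest. The sketch below captures the idea in C#; the endpoint URLs, the 30 m search radius, and the query itself are placeholders rather than the production values.

```csharp
// Sketch: one sequential consumer per self-hosted Overpass instance, with
// new lookups routed to the instance that has the fewest queued queries.
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public class OverpassPool
{
    private record Worker(
        Uri Endpoint,
        BlockingCollection<(double Lat, double Lon, TaskCompletionSource<string> Result)> Queue);

    private readonly Worker[] _workers;
    private static readonly HttpClient Http = new();

    public OverpassPool(params Uri[] endpoints)
    {
        _workers = endpoints
            .Select(e => new Worker(e, new()))
            .ToArray();
        foreach (var w in _workers)
            Task.Run(() => Consume(w));            // one sequential consumer each
    }

    public Task<string> LookupStreetAsync(double lat, double lon)
    {
        var tcs = new TaskCompletionSource<string>();
        // Pick the instance with the fewest queued lookups.
        _workers.OrderBy(w => w.Queue.Count).First().Queue.Add((lat, lon, tcs));
        return tcs.Task;
    }

    private static async Task Consume(Worker w)
    {
        foreach (var (lat, lon, tcs) in w.Queue.GetConsumingEnumerable())
        {
            try { tcs.SetResult(await QueryAsync(w.Endpoint, lat, lon)); }
            catch (Exception ex) { tcs.SetException(ex); }
        }
    }

    private static Task<string> QueryAsync(Uri endpoint, double lat, double lon)
    {
        // Placeholder query: ways tagged as highways near the point.
        string ql = $"[out:json];way(around:30,{lat},{lon})[highway];out tags;";
        return Http.GetStringAsync($"{endpoint}?data={Uri.EscapeDataString(ql)}");
    }
}
```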
Electronics
Alongside the backend work I was also involved in the electronics design of our own tracker hardware. Working with one of the senior electronics engineers I helped select components, design the schematic, and lay out the prototype PCB around an Atmel SAME70 microcontroller, a 4G modem, and a GNSS receiver.
The goal was to make a single unit that could act as a general connectivity platform rather than just a vehicle tracker. It connected to vehicles over OBD-CAN for maintenance and telemetry data, reported position over GPS, and sent everything back to the platform over 4G. We also exposed additional IO and serial interfaces so it could later be used to interface directly with other ArrowES equipment, not just vehicles.
The prototype board I produced was later refined and miniaturised by the senior engineer into the production design. Because we controlled the hardware, firmware, and protocol, the tracker became the common communications layer for future products rather than a single-purpose device.
Miscellaneous
I also wrote some Selenium scripts to download all of the existing trip data from the white-labelled system so we could back everything up before we migrated away from it.
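In the same spirit, here's roughly what those scripts looked like, translated to Selenium's .NET bindings; the portal URL, selectors, and export flow are hypothetical stand-ins for the real white-labelled site.

```csharp
// Hypothetical sketch of the backup scripts: log in, walk every vehicle,
// and trigger each per-vehicle trip export.
using System;
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

using var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://tracker.example.com/login");
driver.FindElement(By.Name("username")).SendKeys("backup-user");
driver.FindElement(By.Name("password")).SendKeys(Environment.GetEnvironmentVariable("TRACKER_PW"));
driver.FindElement(By.CssSelector("button[type=submit]")).Click();

// Collect the vehicle links first so navigation doesn't stale the elements.
var vehicleUrls = driver.FindElements(By.CssSelector("a.vehicle"))
                        .Select(a => a.GetAttribute("href"))
                        .ToList();

foreach (var url in vehicleUrls)
{
    driver.Navigate().GoToUrl(url);
    driver.FindElement(By.Id("export-trips")).Click();   // downloads the trip CSV
}
```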
I had some fun with our development server as well, which was an old desktop PC in the office. We could remotely fire the horn in the work ute, and it would also play an audible alarm in the office if the production server had any issues. I set up a retro terminal display showing real-time status of the PROD, STAGING and DEV environments, including connected vehicles and how many points were queued waiting to be translated from coordinates into street addresses.

My contributions
- System architecture, database, and user interface design
- Server software to communicate with third-party tracker hardware using their protocol
- Developed our own tracker hardware and firmware with a senior electronics engineer at ArrowES
- Component selection, electronics design, and PCB layout for the prototype tracker, later refined and miniaturised by the senior engineer
- Designed and implemented the communication protocol, including AES encryption in firmware
- Designed and implemented bootloader firmware and an accompanying bootloader server to remotely update tracker firmware over an encrypted connection as new features were added
- Designed and implemented server software for our own trackers in .NET Core
- Worked on frontend web interfaces such as the map and graphical displays
- Set up server infrastructure on AWS including load balancing the communication servers, TLS certificates, CI/CD pipelines, and locked-down routing tables and firewalls
Technical Challenges
- Wrote algorithms that interpolated along OpenStreetMap roads between nodes to improve the accuracy of street address lookup for trip waypoints
- Interpolated between received trip points so very small geofences, such as toll points, would still register a crossing (see the sketch after this list)
- Implemented multithreading and queues to allow asynchronous processing of CPU-intensive analysis while keeping vehicle positions updated in real time
- Implemented load balancing so devices could be handled by multiple AWS servers depending on network load
- Created a custom usage-tracking process across all servers to monitor CPU and memory usage, connected vehicles, and queued processing tasks in real time, triggering an office audio alarm and email alerts if a production server encountered an unhandled exception
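The geofence-crossing fix referenced above boils down to a segment-versus-circle test: treat consecutive trip points as a straight segment and check whether it passes within the fence radius, so a fence narrower than the reporting interval still registers. Below is a minimal sketch, assuming circular fences and a local flat-earth projection (a simplification that holds fine at these distances).

```csharp
// Sketch: does the straight segment between two consecutive GPS points
// pass through a circular geofence, even if neither endpoint is inside?
using System;

public static class Geofence
{
    const double EarthRadiusM = 6_371_000;

    public static bool SegmentCrosses(
        (double Lat, double Lon) a, (double Lat, double Lon) b,
        (double Lat, double Lon) center, double radiusM)
    {
        // Project onto a local plane centred on the fence, in metres.
        (double x, double y) Project(double lat, double lon) => (
            (lon - center.Lon) * Math.Cos(center.Lat * Math.PI / 180) * Math.PI / 180 * EarthRadiusM,
            (lat - center.Lat) * Math.PI / 180 * EarthRadiusM);

        var p1 = Project(a.Lat, a.Lon);
        var p2 = Project(b.Lat, b.Lon);

        // Closest point on the segment p1-p2 to the origin (the fence centre).
        double dx = p2.x - p1.x, dy = p2.y - p1.y;
        double lenSq = dx * dx + dy * dy;
        double t = lenSq == 0 ? 0 : Math.Clamp(-(p1.x * dx + p1.y * dy) / lenSq, 0, 1);
        double cx = p1.x + t * dx, cy = p1.y + t * dy;

        return Math.Sqrt(cx * cx + cy * cy) <= radiusM;
    }
}
```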