The software system consists of modular components that communicate with each other to guide the AUV through the obstacle course. Unified communication between the components is achieved using the Robot Operating System (ROS) framework.
The mission planner is responsible for determining which missions to run and when to start or stop them. Each mission has a completion condition and/or a timeout specific to the particular task. Whenever the completion condition is met or the timeout expires, the mission is terminated and the next one is started. The timeout ensures that the AUV does not spend too much time on a single task and that the remaining tasks are given a chance to run.
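The sequencing described above can be sketched as follows; the class and method names are illustrative, not the actual implementation:

```python
import time

class Mission:
    """Hypothetical mission base class: a mission finishes when its
    completion condition is met or its timeout expires, whichever
    comes first."""
    def __init__(self, name, timeout_s):
        self.name = name
        self.timeout_s = timeout_s

    def step(self):
        """Run one control iteration (task-specific; stub here)."""

    def is_complete(self):
        """Task-specific completion condition (stub here)."""
        return False

def run_missions(missions):
    # Execute each mission until it completes or times out, then
    # move on to the next one, so no single task starves the rest.
    for mission in missions:
        deadline = time.monotonic() + mission.timeout_s
        while not mission.is_complete() and time.monotonic() < deadline:
            mission.step()
```

A mission with neither condition ever met would still hand control to the next task once its deadline passes.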
The software team developed a generic PID controller to control AquaURSA based on the current mission requirements and the sensor readings. The controller itself is independent of the component actually being controlled, allowing it to drive different components and making it easier to maintain.
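A minimal sketch of such a generic PID controller; the gain names and the update interface are assumptions, not the team's actual code:

```python
class PID:
    """Generic PID controller, independent of the actuator it drives:
    the caller supplies the measurement and applies the output."""
    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self._integral = 0.0
        self._prev_error = None

    def update(self, measurement, dt):
        # Standard PID law: output = Kp*e + Ki*integral(e) + Kd*de/dt.
        error = self.setpoint - measurement
        self._integral += error * dt
        if self._prev_error is None:
            derivative = 0.0  # no history yet on the first update
        else:
            derivative = (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative
```

Because the class knows nothing about pumps or thrusters, the same code can serve both the Horizontal and Vertical control loops.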
The controlled components are split into Horizontal (controlling the orientation/turning effort of the AUV) and Vertical (controlling the depth). Both components are controlled at all times; however, the sensors used for controlling them depend on the particular mission. The Horizontal component can be controlled by the Digital Compass, by the Vision subsystem, or by a combination of the Digital Compass and the Sonar subsystem. The Vertical component is controlled either by the Depth Sensor or by the Vision subsystem.
The Digital Compass and Depth Sensor controllers work by having a pre-determined target for the heading and depth of the AUV (the setpoint) and using the PID controllers to calculate the required change in power applied to each pump and thruster in order to achieve and maintain the setpoint. With the Vision-based controllers the setpoint is for the tracked object to be in the middle of the frame. The estimated position of the object is obtained from the image processing components and is fed into the PID controller, which again determines the necessary changes in power applied to the actuators.
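For the Vision-based case, the error signal fed into the controller can be illustrated as the tracked object's offset from the frame centre; the function name and the normalization to [-1, 1] are assumptions for illustration:

```python
def vision_heading_error(object_x, frame_width):
    """Horizontal error for a vision-based controller: the setpoint
    is the frame centre, so the error is the tracked object's
    normalized offset from it (negative = object left of centre)."""
    centre = frame_width / 2.0
    return (object_x - centre) / centre
```

The same idea applies vertically: the object's offset from the frame's vertical centre drives the depth loop.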
We consider both the colour and the shape of the object when attempting to isolate it from the rest of the frame. The colour-based filtering is less computationally expensive; however, it is significantly affected by the varying light conditions throughout the day. The shape-based filtering is more resilient to light conditions, but it requires more processing power and depends on the number of unique features that the object has. Since neither method achieves satisfactory results on its own, we combine both approaches.
Our colour-based approach previously consisted of RGB filtering based on pre-specified thresholds. However, we found this approach difficult and time-consuming to calibrate, so we developed a more systematic method that utilizes normalized cross-correlation between the hue of the image from the cameras and a template image. This allows us to provide the system with a single image of each object under the current light conditions and have it use that image as the template. The cross-correlation algorithm is provided by the OpenCV library.
The discretized data obtained from the Sonar board is analyzed in order to find the Time Difference of Arrival (TDOA) for each hydrophone. This is achieved by taking the differences between the corresponding rising edges at each hydrophone and averaging them in order to determine the time difference between the hydrophone that received the signal first and each of the other three hydrophones.
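The edge-differencing step might look like the following sketch; the data layout (one list of rising-edge timestamps per hydrophone, equal edge counts) is assumed:

```python
def tdoas(edge_times):
    """edge_times: one list of rising-edge timestamps (seconds) per
    hydrophone. Returns the average arrival delay of each hydrophone
    relative to the one that received the signal first."""
    # The reference is the hydrophone whose first edge is earliest.
    ref = min(range(len(edge_times)), key=lambda i: edge_times[i][0])
    result = []
    for times in edge_times:
        # Difference each rising edge against the reference's
        # corresponding edge, then average to reduce jitter.
        diffs = [t - r for t, r in zip(times, edge_times[ref])]
        result.append(sum(diffs) / len(diffs))
    return result
```

The reference hydrophone's own entry is zero by construction, leaving three non-trivial TDOAs plus the implicit zero for the multilateration step.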
Once we have the four TDOAs, we use multilateration to determine the 3D coordinates of the source of the signal (the pinger). Since we have four hydrophones, we are able to find an analytical solution of the multilateration equations as described in the work of R. Bucher and D. Misra (A Synthesizable VHDL Model of the Exact Solution for Three-dimensional Hyperbolic Positioning System). Given the coordinates, we determine the bearing of the pinger relative to the AUV and use that angle to adjust our target heading so that we are aiming towards the pinger.
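The final heading adjustment can be sketched as follows; the body-frame axis convention (x forward, y to starboard) and the function name are assumptions for illustration:

```python
import math

def heading_to_pinger(x, y, current_heading_deg):
    """Given the pinger position (metres, AUV body frame: x forward,
    y starboard) from multilateration, return the compass heading
    that points the AUV at the pinger."""
    # Bearing of the pinger relative to the AUV's nose, in degrees.
    bearing = math.degrees(math.atan2(y, x))
    # Offset the current heading by that bearing, wrapped to [0, 360).
    return (current_heading_deg + bearing) % 360.0
```

The returned value becomes the new setpoint of the heading controller, so the standard Digital Compass loop does the actual turning.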
During testing and debugging we need to be able to visualize all data coming from the sensors in a way that makes it easy to quickly see the state of the AUV. For this purpose we have developed an application that displays the sensor data, as well as the video feed, annotated with location estimation information received from the image processing components. This can be shown both in real-time (during the actual testing and debugging) as well as replayed from logs (in order to analyze a previous test run).
AquaURSA can also be controlled remotely while testing and troubleshooting. This system can issue a target value for the depth controller and can also control the direction of AquaURSA by either supplying a direct turning effort command or by adjusting the target of the heading controller, causing AquaURSA to turn in the required direction.