By Abigail Powell
October 28, 2019
When NOAA Ship Reuben Lasker arrived in San Francisco at the end of Leg 1 of the expedition, our autonomous underwater vehicle (AUV) Popoki had already completed seven dives and taken around 40,000 images. These images will provide invaluable information about the fish, invertebrates, and habitats present at our survey sites, which can be used to inform fisheries management and marine spatial planning. However, the sheer quantity of images makes analysis a daunting task! Currently, we have custom-built software – OneTwoRedBlue – that experts use to record the invertebrates that are present and to identify and measure fish. This gives us great information on species diversity and abundance at our survey sites, but it takes a long time to process and analyze all of the images.
On this EXPRESS cruise, for the first time, we are using machine learning to automate some of the image analysis and obtain rough counts of some target organisms while we are still at sea. To do this, we have been using an open source software toolkit, Video and Image Analytics for a Marine Environment (VIAME), which was developed as part of NOAA's strategic initiative on automated image analysis. VIAME includes a number of components that can carry out object detection on still images or video.
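For readers curious about what the detection step looks like, here is a minimal sketch in Python. It stands in a generic pretrained detector from the torchvision library for VIAME's own pipelines, so the model choice, score threshold, and file name are illustrative assumptions rather than our actual setup:

```python
# Illustrative sketch only: a generic pretrained detector standing in for a
# VIAME detection pipeline. The model, threshold, and image path are assumptions.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Load one AUV frame and convert it to the float format the model expects.
image = convert_image_dtype(read_image("auv_frame_0001.png"), torch.float)

with torch.no_grad():
    # For each input image, the model returns a dict of boxes, labels, and scores.
    predictions = model([image])[0]

# Keep only confident detections; 0.5 is an arbitrary example cutoff.
keep = predictions["scores"] > 0.5
print(f"{int(keep.sum())} objects detected above threshold")
```

In practice, VIAME packages steps like this into configurable pipelines, so analysts can train and run detectors without writing code themselves.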
We have been using some of the AUV imagery collected during the first dive to train detectors that can identify sea urchins and flatfish in the AUV images. Applying these detectors to additional AUV imagery then gives us an estimate of how many urchins and flatfish were observed at each site. While there is still a long way to go before we can use these methods for more challenging tasks, such as identifying rockfish to species level, computer vision tools are helping us maximize the use of the large amounts of imagery we collect and make the results available more quickly.
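Once a detector has run over a dive's images, producing rough counts is simply a matter of tallying its output. The short sketch below assumes a comma-separated detections file with the detection confidence in the eighth column and the predicted class in the tenth, loosely following VIAME's detection CSV layout; the file name and confidence cutoff are likewise illustrative, so check your own output files for the exact format:

```python
# Rough per-class tallies from a detector's CSV output. Column positions
# (confidence in column 8, top class in column 10) are assumptions based on
# VIAME's detection CSV format; verify against your own files.
import csv
from collections import Counter

MIN_CONFIDENCE = 0.5  # arbitrary example cutoff
counts = Counter()

with open("dive01_detections.csv", newline="") as f:
    for row in csv.reader(f):
        if not row or row[0].startswith("#"):  # skip comment/header lines
            continue
        confidence, species = float(row[7]), row[9]
        if confidence >= MIN_CONFIDENCE:
            counts[species] += 1

# Print, for example, how many urchins and flatfish were seen on this dive.
for species, n in counts.most_common():
    print(f"{species}: {n}")
```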