Experiment Updates & Paper Introduction

Experiment Updates

Since our last blog post, we have run quite a few tests over two different data sets. We are still very much in the proof-of-concept stage with Loc8, but the results so far suggest it is a viable alternative to traditional methods and, in some cases, considerably more efficient. Our tests used a few different approaches to Loc8, as we are still dialing in how to get the best results from the software. Below I describe the experiments we have run so far, our preliminary results, and our thoughts on the software.

DJI Mavic 2 Pro Testing

We acquired data for three different flights, as outlined in our previous post. Since then we have tested some of this data in Loc8 and had surprisingly decent results. Before each flight we photographed the clothing targets with a phone camera to give Loc8 baseline color values to work from. Loc8 lets us either specify exact pixel color values to search for or select a range of the color spectrum. We have mostly been opting for specific pixel values, with few positive results so far, though not because of any fault in Loc8.
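To make concrete what a pixel-value search amounts to, here is a minimal Python sketch of the idea. This is our own illustration, not Loc8's actual implementation; the file name, reference color, and tolerance are placeholder assumptions.

```python
# Conceptual sketch of a pixel-value search: flag every pixel that falls
# within some tolerance of one or more sampled reference RGB values.
# Not Loc8's implementation -- just an illustration of the strategy.
import numpy as np
from PIL import Image

def match_reference_colors(image_rgb, reference_colors, tolerance=10):
    """Boolean mask of pixels within `tolerance` of any reference RGB triple."""
    mask = np.zeros(image_rgb.shape[:2], dtype=bool)
    for color in reference_colors:
        diff = np.abs(image_rgb.astype(np.int16) - np.asarray(color, dtype=np.int16))
        mask |= np.all(diff <= tolerance, axis=-1)
    return mask

# Placeholder file name and a neon-yellow-ish reference value for illustration.
image_rgb = np.asarray(Image.open("mavic_frame.jpg").convert("RGB"))
hits = match_reference_colors(image_rgb, [(220, 255, 60)], tolerance=15)
print("matching pixels:", int(hits.sum()))
```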

The DJI Mavic 2 Pro can only capture images in RAW and JPEG formats. Loc8 does not accept RAW images, and JPEG compression has been an issue for us. In our first test we tried to find a neon yellow shirt both with Loc8 and by "squinting" (visually scanning the images without special software). Using the pixel values I gave Loc8 from the phone camera image, it flagged hundreds upon hundreds of false positives and never found the shirt itself. Figure 1 below shows these results: everything circled in red is what Loc8 flagged as a potential match, and in the middle-right of the image you can see the shirt has not been circled. This points to a big issue with the Mavic: the JPEG compression blows out the colors quite badly. The neon yellow shirt looks almost completely white when zoomed in, which is not accurate.
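A quick way to see this color shift is to sample the shirt in both the phone photo and the UAV JPEG and compare average RGB values. The sketch below is illustrative only; the file names and crop coordinates are placeholders.

```python
# Compare the average RGB of the same target in the phone reference photo
# and in the Mavic JPEG to quantify how far the compression shifts the color.
from PIL import Image
import numpy as np

def mean_rgb(path, box):
    """Average RGB of a crop (left, upper, right, lower) from an image."""
    img = Image.open(path).convert("RGB")
    return np.asarray(img.crop(box)).reshape(-1, 3).mean(axis=0)

phone_rgb = mean_rgb("phone_reference.jpg", (100, 100, 160, 160))   # placeholder crop
uav_rgb = mean_rgb("mavic_frame_042.jpg", (2410, 1830, 2440, 1860)) # placeholder crop
print("phone:", phone_rgb.round(), "uav:", uav_rgb.round())
# In our neon yellow test, the UAV crop came back nearly white, which is
# why the phone-based reference values failed.
```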

Figure 1: Neon shirt not found
To confirm that the software actually works as intended, we sampled the shirt's pixel color values directly from the UAV image above and gave those to Loc8 to search for. After changing this part of the methodology, we achieved good results. Figure 2 below shows the shirt being found with no false positives flagged.

Figure 2: Neon shirt found
Loc8 found the target in 42 seconds, which is quite quick. However, it lost to squinting by a margin of 3 seconds (39 seconds for squinting). This target appeared in one of the first few images, so squinting happened to work very quickly on this particular target.

Our next round of testing used a red t-shirt. Once again, because of the Mavic's JPEG compression, the phone camera image and the UAV imagery did not match up, so using the phone image as a baseline was no help and Loc8 was unable to find the shirt. Zoomed in, the UAV imagery shows the red shirt blown out toward pink and magenta, while the phone camera shows the shirt's true red. When we instead used pixel values sampled from the UAV image containing the shirt, Loc8 found the target in 5 minutes 23 seconds, compared to 1 minute 59 seconds for squinting. We suspect some bias in the squinting result here: Luke was with us when we laid out the targets, so he may have had an idea of where to look in the imagery. Squinting was still much quicker, but a squinter who did not know where the target was would likely be somewhat slower. Figure 3 below shows the target being found by Loc8.

Figure 3: Red shirt found
We also ran tests on a turquoise shirt, and Loc8 narrowly outperformed squinting, coming in at 4 minutes 12 seconds versus 4 minutes 22 seconds for squinting.

Bramor Sony A6000 Testing

Today we tested a different data set, one we did not capture ourselves, which eliminated any squinting bias completely. It was a straight head-to-head between Loc8 and squinting, and Loc8 ultimately outdid itself, though it under-performed badly at first. The goal in this imagery was to locate a vehicle, specifically an orange Honda Pioneer. We found an image of an orange Pioneer on Google and used it as the RGB reference. Due to the cloudiness of the day, and potentially the camera's image compression, Loc8 was unable to find the target with this method. We then tried something different: Loc8's color range option, with a range spanning from bright red to dull orange. Loc8 found the target in a mere 2 minutes 16 seconds, absolutely crushing the squinting method at 23 minutes 35 seconds. The two figures below show the found target and Loc8's output after the test. You can see in Figure 5 that 17 clusters were found in the image, partially because of a false positive on a truck.
Figure 4: Honda Pioneer found

Figure 5: Loc8 processing view
As you can see, the color range test did produce one false positive by flagging a red truck. We are also aware that an orange target against a dense green background is an easy case for avoiding false positives, but this is still a huge step in the right direction. Additionally, the color range processing was much, much quicker than the pixel value search on a per-image basis. We are not exactly sure why; logically it seems backwards, since a color range should cover many more pixel values than the 6-10 individual values we select by hand. That is something we will certainly be looking into with much more rigorous testing.
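For context, the sketch below shows roughly what a color-range search amounts to, using OpenCV and an HSV range covering red through orange. This is not Loc8's internal method, and the specific bounds and file name are assumptions we would tune per flight.

```python
# Rough stand-in for a color-range search: convert each frame to HSV and
# flag everything from bright red through dull orange.
import cv2
import numpy as np

def orange_mask(path):
    """Binary mask of pixels whose hue falls in the red-to-orange band."""
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # OpenCV hue runs 0-179; roughly 0-25 spans red through orange.
    lower = np.array([0, 120, 120], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

mask = orange_mask("bramor_frame.jpg")  # placeholder file name
print("flagged pixels:", int(cv2.countNonZero(mask)))
```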

If we can refine our use of color ranges rather than specific pixel color values, we may have an extremely efficient search and rescue method on our hands. We also would not need a specific reference image, such as a selfie from a phone, in order to use Loc8.

Discussion 

A major issue throughout our tests has been the blown-out color values produced by the Mavic's JPEG compression compared with the phone camera. Although our Bramor tests using color ranges instead of pixel values have been a big success, we need to perform much more extensive and rigorous testing to determine the best method.

One idea we are pursuing is flying the Yuneec H520, a more expensive UAV than the Mavic, to see how its imagery compares. We are hoping the H520's better camera will allow us to use pixel values to actually find our targets. However, now that we have good results with color ranges, we are exploring that alternative as well.

Additionally, Loc8 lets us choose the minimum number of matching pixels a "cluster" must contain before it flags a target. We have run each Mavic data set at both 3 pixels per cluster and 1 pixel per cluster. At 3 we were unable to find any targets, even when using the actual UAV imagery as the pixel reference; at 1 pixel per cluster we had good results. This is yet another variable we will have to experiment with to determine the best method, and it will likely depend on the ground sampling distance we can achieve with a given UAV camera.
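To illustrate the minimum-cluster-size idea, the sketch below groups flagged pixels into connected clusters and keeps only those with at least a given pixel count. The tiny synthetic mask is just for demonstration and is not tied to Loc8's implementation.

```python
# Filter detections by minimum cluster size: group flagged pixels into
# connected components and keep only clusters of at least `min_pixels`.
import cv2
import numpy as np

def clusters_at_least(mask, min_pixels):
    """Return centroids of connected clusters with at least `min_pixels` flagged pixels."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; stats[i, cv2.CC_STAT_AREA] is the cluster's pixel count.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_pixels]

# Tiny synthetic mask: one 3-pixel cluster and one lone pixel.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2, 2:5] = 255   # 3-pixel cluster
mask[7, 7] = 255     # single flagged pixel
print(len(clusters_at_least(mask, min_pixels=1)))  # both clusters pass
print(len(clusters_at_least(mask, min_pixels=3)))  # only the 3-pixel cluster passes
```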

Obviously, we still have a lot of testing to do to determine how best to use Loc8, and many more data sets to acquire and test. We have quite a few more data sets available from our Mavic flights, since we have only experimented with the shirt colors from the first flight; we still have shirts to locate in the other two flights, as well as pants in all three. We hope to acquire more data when the weather cooperates and to continue testing, changing one variable at a time, to work out our most efficient method.

Paper Introduction

In addition to our experimental testing, we have begun work on the paper that will go up for peer review next semester. So far we have a completed introduction to our research. The introduction defines exactly what we are doing and why: evaluating which search and rescue methods are the most efficient and helpful. We address a few problems with other search and rescue methods and briefly touch on some we may encounter with Loc8, several of which are also described above from our testing so far. The introduction finishes by describing each of our roles within the team and how our experiments will be run. A link to our introduction can be found here.
