Introduction
UAS imagery can be collected from a variety of camera angles. One of the most common forms is nadir collection, in which the camera points straight down. Another technique, which this lab focuses on, is oblique image collection. Oblique images are images collected at an angle rather than straight down. In Pix4D, these images can be used for 3D modeling of objects, because the angled collection captures an object from all sides. Figure 1 displays the 3D Models option within Pix4D and lists example uses such as sculpture modeling, urban modeling, and video fly-bys of scenes.
Figure 1: Pix4D description of 3D modeling.
Methods
This lab includes three different sets of data for 3D modeling. Prior labs in Geography 390 have all focused on processing data through the 3D Maps template, but due to the nature of oblique imagery, the 3D Models processing option is used instead. The 3D Models option skips the creation of the DSM and orthomosaic. The flight plan directs the UAV to fly in a corkscrew pattern around the subject in order to capture every side and detail of the object being modeled. The three data sets consist of a bulldozer at the Litchfield mine site, a shed at South Middle School in Eau Claire, and a pickup truck at the same location. Another important aspect of 3D modeling is the use of annotations. Annotation is done within Pix4D and removes everything that obstructs the creation of a clean 3D model, such as the sky, bare ground, and grass. Completing this step allows the final model to represent only the subject.
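To illustrate the corkscrew flight described above, the short Python sketch below generates a descending orbit of waypoints around a subject so the camera sees every side from several heights. This is only a conceptual sketch; the radius, altitudes, revolutions, and waypoint count are assumed example values, not the parameters flown for these data sets.

import math

def corkscrew_waypoints(center_x, center_y, radius, alt_start, alt_end,
                        revolutions=3, n_points=36):
    """Generate (x, y, altitude, heading_deg) waypoints for a descending orbit.

    The UAV circles the subject at a fixed radius while the altitude steps from
    alt_start down to alt_end, so every side of the subject is photographed from
    several heights (the corkscrew pattern described above). Units are meters and
    degrees; all values here are illustrative only.
    """
    waypoints = []
    for i in range(n_points):
        angle = 2 * math.pi * revolutions * i / (n_points - 1)
        alt = alt_start + (alt_end - alt_start) * i / (n_points - 1)
        x = center_x + radius * math.cos(angle)
        y = center_y + radius * math.sin(angle)
        # Point the camera back toward the subject at the center of the orbit.
        heading_deg = math.degrees(math.atan2(center_y - y, center_x - x))
        waypoints.append((x, y, alt, heading_deg))
    return waypoints

# Example: three revolutions spiraling from 30 m down to 10 m around a subject at the origin.
for x, y, alt, hdg in corkscrew_waypoints(0.0, 0.0, radius=15.0, alt_start=30.0, alt_end=10.0):
    print(f"x={x:6.1f}  y={y:6.1f}  alt={alt:5.1f}  heading={hdg:7.1f}")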
The first step is to run initial processing for each data set. This builds the rayCloud and gives a camera view for each image location. From there, the annotation tool is used to select everything in an image that is not part of the subject (Figures 2, 3, and 4). The mask tool turns everything in the image that is not part of the subject pink. This is completed for four to five images within each data set so that the software can carry the masks across the overlapping imagery. Once the annotation is complete on the chosen images, the project is re-optimized and the Point Cloud and Mesh processing step is run to create the actual model of the object.
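Conceptually, the annotation step produces a per-pixel mask that tells the software which pixels to ignore when building the model. The rough NumPy sketch below is not Pix4D's own tooling; it simply shows the idea by tinting masked-out pixels pink, the same way the mask tool displays excluded areas in this lab.

import numpy as np

def apply_annotation_mask(image, mask):
    """Tint masked-out pixels pink, mimicking how the mask tool displays annotations.

    image : (H, W, 3) uint8 RGB array of the original photo.
    mask  : (H, W) boolean array, True where the pixel is NOT part of the subject
            (sky, bare ground, grass, etc.) and should be excluded from modeling.
    Returns a copy of the image with excluded pixels blended toward pink.
    """
    pink = np.array([255, 105, 180], dtype=np.float32)
    out = image.astype(np.float32)
    out[mask] = 0.5 * out[mask] + 0.5 * pink  # 50% blend keeps some detail visible
    return out.astype(np.uint8)

# Tiny synthetic example: an 8 x 8 gray image where the top half is "sky" to exclude.
img = np.full((8, 8, 3), 120, dtype=np.uint8)
sky_mask = np.zeros((8, 8), dtype=bool)
sky_mask[:4, :] = True
preview = apply_annotation_mask(img, sky_mask)
print(preview[0, 0], preview[7, 7])  # tinted "sky" pixel vs. untouched subject pixel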
Figure 2: Completed annotation of the bulldozer image set.
Figure 3: Completed annotation of the South Middle School shed.
Figure 4: Completed annotation of a Toyota Tundra.
Results
The results show mixed success. Each data set produces a model; however, for cases such as the shed (Figure 6), the results are still less than ideal. Even with the annotation completed, there is a large amount of floating pixels that skews the quality of the model. Figure 6 also shows what annotation is intended to remove: within the trees, the sky is meshed into the branches, creating inaccurate, poor-quality results. The annotation is done to remove these elements from the subject.
The bulldozer results are fairly clean (Figure 5). There is little to no distortion of the bulldozer after completing the annotation. This could be attributed to the images having little to no sky in the background, since the sky can be the biggest source of distortion in the images.
Figure 5: Flyby view of the bulldozer (click to load).
The shed's modeling results are less successful than the bulldozer's, and the sky could have played a large role in this. There are floating pixels throughout the model, and there is some distortion along the roofline (Figure 6). As stated earlier, the distortion caused by the sky can be seen within the trees, which is why annotation is completed to eliminate it from the subject matter.
Figure 6: Flyby view of the South Middle School shed (click to load).
The truck's modeling results are more successful than the previous two renders (Figure 7). The model is accurate, and there is little distortion aside from some underneath the bed. This result could be due to the flight's oblique angle: the camera is aimed downward at the truck for the whole flight, so far less noise from background objects interferes with the model.
Figure 7: Flyby view of the Toyota Tundra model.
Conclusion
The 3D models created using oblique imagery have mixed success. Subjects that are close to the ground and show little sky in the background tend to produce models with less distortion. Another factor is the chosen flight path of the UAS; selecting the right flight can eliminate many of the potential problems that would otherwise appear when the data is brought back for processing. This process could be very useful for urban modeling, sculpture modeling, and any other consulting project that requires accurate, up-to-date imagery of a subject. However, due to the amount of time required to complete the annotations for each image set relative to the accuracy of the results, this process is only worthwhile when the results demand high accuracy, such as survey fly-bys.