Monday, March 13, 2017

Processing Pix4D Imagery with GCPs

Ground Control Points (GCPs)

  • What is a ground control point?
A ground control point (GCP) is a marked point on the ground whose coordinates are known in a given coordinate system.

  • What are GCPs used for?
GCPs are used to georeference a project. The known coordinates are tied to the GCPs visible in the imagery, reducing the x, y, z positional error of the captured images.

  • What will GCPs be used for in this project?
The GCPs at Litchfield Mine were pre-set, and their coordinates were collected with a survey-grade GPS. These coordinates will be imported into Pix4D to georeference the images and increase the accuracy of the project. The results will be compared to the previous Litchfield Mine project, which was completed without the use of GCPs.


Methods

In a new Pix4D project, the images from the first flight are brought in, and the camera's shutter mode is changed to Linear Rolling Shutter. The field-collected GCP coordinates, stored in a text file, are then imported into the project. The import settings must be set to Y, X, Z because the text file is formatted in that order. Once this is done, the Initial Processing can be run.
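Pix4D handles the import itself, but a quick sketch shows why the column-order setting matters: a file written as label, Y, X, Z has to be read with Y before X. The labels and coordinates below are made up for illustration.

```python
# Hypothetical GCP records in the text file's 'label Y X Z' order;
# the coordinates below are invented for illustration only.
def parse_gcp_line(line):
    """Parse one 'label Y X Z' record and return (label, x, y, z)."""
    label, y, x, z = line.split()
    return (label, float(x), float(y), float(z))

records = [
    "GCP1 5021345.10 296401.22 318.75",
    "GCP2 5021290.55 296488.90 322.10",
]
gcps = [parse_gcp_line(r) for r in records]
```

If the import settings were left at the default X, Y, Z order, every point would land with its easting and northing swapped, which is exactly the kind of error the Y, X, Z setting prevents.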


The DJI platform the images were collected with has a known issue that introduces significant error in the altitude of the GCPs. The GCPs must be edited in either the Ray Cloud Editor or the Basic Editor to correct these errors (Figure 1).



Figure 1: The editor options within Pix4D must be consulted when using a DJI Platform.


Once an editor is selected, the field notes recording each GCP's location must be consulted to place the GCP correctly (Figure 2). The Basic Editor is used in this project: the location from the notes is zoomed to in the Basic Editor to find the proper GCP. When it is located, the x is moved to the very center of the GCP target to ensure accuracy.



Figure 2: The overview of the Basic Editor. The yellow x marks where the GCPs were imported to. 


After the GCPs are corrected, the project can be reoptimized. The GCP Manager can then be used to access the Ray Cloud Editor, where the same process of zooming to each GCP and centering the yellow x is repeated. For each GCP, this is done in at least five images, and the changes are applied to adjust that GCP. Once every GCP for the flight has been adjusted, the second and third processing steps are run to complete the project. After the first flight's images are processed, the same steps are completed for the second flight.



Figure 3 shows the first step of the next process. A new project is created which merges the two flights from Litchfield Mine together. The GCPs are imported to tie the project down again.




Figure 3: Selecting the two flight projects to merge into a new project.



After the Initial Processing is run, the Basic Editor is used again to verify the GCP accuracy (Figure 4). This step is important when merging two projects to make sure that they stitch together properly.

Figure 4: The Basic Editor correcting the GCPs of the two projects when merged.


Figure 5 displays the imagery in the Ray Cloud with the triangles computed. When this step is reached, the GCPs are visible on the surface.

Figure 5: The two projects stitched together displayed with the triangles computed. 

Results


The results of processing with GCPs are clearly improved compared to projects that do not include them.
Figure 6 shows the merged DSM of Litchfield Mine, compared to Figure 7, which did not include GCPs. Using GCPs reduces the noise in the image, and Figure 6 shows higher visual accuracy than Figure 7.



Figure 6: DSM completed from merging the two projects together.



Figure 7: Results of processing Flight 1 with no GCPs.



Figure 8 displays the merged orthomosaics. The variations in altitude are much more visible in the merged project than in the second flight alone (Figure 9). Although Figure 8 includes both flights in the project, the two can still be compared. When compared as layers, the project that includes GCPs is clearly more accurate.




Figure 8: Orthomosaic of the two projects merged together.




Figure 9: Litchfield Mine Flight 2 displays the results of not using GCPs.



GCPs Revisited


The results of using GCPs are more accurate than those produced without them. Pix4D's GCP integration is a streamlined process that allows users to include GCPs with ease and produces better imagery.

Thursday, March 2, 2017

Calculating Impervious Surface Area

This activity is based on the Learn ArcGIS lesson Calculate Impervious Surfaces from Spectral Imagery. It utilizes ArcGIS Pro to familiarize users with Value Added Data Analysis. The lesson uses aerial imagery, like that collected with UAS, to classify surface types, and ultimately creates a layer that describes the impervious surfaces of a study area.


Methods

The data used is available from the lesson.


Segment the imagery


Users open the existing Surface Impervious project first. The "Calculate Surface Imperviousness" tasks are used in this lesson. The first step is to extract the bands to create a new layer (Figure 1).


Figure 1: Bands 4, 1, and 3 are extracted to create a layer like the image above.


The next step is to group similar pixels into segments of the image using the "Segment Mean Shift" task. The parameters shown in Figure 2 are entered into the task to create a new layer (Figure 3).




Figure 2: Parameters used to create a segment mean shifted layer. 



Figure 3: Result layer of the segment mean shift task. The pixels that are alike are grouped together.




Classify the imagery

The last section segmented the image to make classification easier. This section classifies all of the different pervious and impervious surface types into distinct categories. Figure 4 shows many segments of the image classified into seven classes: gray roofs, water, roads, driveways, grass, bare earth, and shadows.


Figure 4: Different spectral classes help to distinguish pervious and impervious surfaces.


The classification samples are then saved as a separate file to use in training the classifier. The neighborhood raster is compared against the sample file to create a classified image (Figure 5).


Figure 5: The study neighborhood classified into the seven classes. The left column shows the parameters for the next step.

After the image is classified, a field is added to further classify the image into pervious and impervious classes, coded as either 0 or 1 (Figure 5). The task is run to create an output of either pervious or impervious surfaces (Figure 6).
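The reclassification step boils down to a lookup from the seven spectral classes to a two-value code. A minimal sketch, following the 1-for-pervious, 0-for-impervious coding used later in the accuracy points; which classes count as pervious here is an assumption for illustration, not the lesson's exact assignment:

```python
# Assumed class-to-code assignment (1 = pervious, 0 = impervious);
# illustrative only, not the ArcGIS reclassification task itself.
PERVIOUS_CODE = {
    "gray roofs": 0, "roads": 0, "driveways": 0,            # impervious
    "water": 1, "grass": 1, "bare earth": 1, "shadows": 1,  # pervious
}

def reclassify(classes):
    """Map a list of spectral class names to pervious/impervious codes."""
    return [PERVIOUS_CODE[c] for c in classes]
```

Collapsing seven classes into two like this is what lets the output layer in Figure 6 show only two shades.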




Figure 6: The dark purple segments are all pervious surfaces, while the impervious surfaces are all light purple. 


Calculate the impervious surface area


One hundred accuracy points are created using an equalized stratified random process. The first ten points are analyzed and classified based on the surface type they fall on (Figure 7). The task is run to create a table used later in the lesson.
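The idea behind equalized stratified random sampling is that the same number of points is drawn at random from each class, so rare classes are not underrepresented. A small sketch of that idea, with class names and cell coordinates made up for illustration:

```python
import random

# Illustrative sketch of equalized stratified random sampling: the same
# number of points is drawn from each class. The class names and cell
# coordinates below are invented for illustration.
def equalized_sample(cells_by_class, n_per_class, seed=42):
    rng = random.Random(seed)  # fixed seed for repeatable results
    points = []
    for cls in sorted(cells_by_class):
        points.extend((cls, c) for c in rng.sample(cells_by_class[cls], n_per_class))
    return points

cells = {
    "pervious": [(0, 0), (0, 1), (1, 0), (1, 1)],
    "impervious": [(5, 5), (5, 6), (6, 5), (6, 6)],
}
points = equalized_sample(cells, n_per_class=2)
```

With two classes and two points each, the sample has four points, split evenly between the classes regardless of how many cells each class covers.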

Figure 7: The highlighted point is classified into 1 for pervious or 0 for impervious surface type.

A confusion matrix is used in the next step. It computes the agreement between the classification and the accuracy assessment points. Where the two agree on a surface type, the matrix reports a high accuracy percentage (Figure 8). The next step runs a tabulate area process on the parcels to assess the imperviousness level within each one.




Figure 8: Confusion Matrix. U_Accuracy is user accuracy, P_Accuracy is producer accuracy, and Kappa is the final computed percentage overall.
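The measures in Figure 8 are simple ratios over the matrix counts. A sketch of how they are computed, using an invented 2x2 matrix rather than the lesson's actual values:

```python
# Confusion matrix sketch: matrix[i][j] counts points classified as class i
# whose ground-truth class is j. The counts below are invented for
# illustration; this is not the ArcGIS tool itself.
def accuracy_measures(matrix):
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    # User's accuracy: diagonal count over its row total.
    user = [matrix[i][i] / sum(matrix[i]) for i in range(n)]
    # Producer's accuracy: diagonal count over its column total.
    producer = [matrix[j][j] / sum(matrix[i][j] for i in range(n))
                for j in range(n)]
    overall = correct / total
    # Kappa: agreement beyond what chance alone would produce.
    chance = sum(sum(matrix[i]) * sum(matrix[k][i] for k in range(n))
                 for i in range(n)) / total ** 2
    kappa = (overall - chance) / (1 - chance)
    return user, producer, overall, kappa

user, producer, overall, kappa = accuracy_measures([[45, 5], [10, 40]])
```

In this made-up example, 85 of 100 points fall on the diagonal, so overall accuracy is 0.85 and Kappa works out to 0.7.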

The final step of the lesson is to join the Impervious Area table (Figure 9) with the parcel table to symbolize the parcels based on their different levels of imperviousness.

Figure 9: The tabulated area table of imperviousness within each parcel.
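After the join, each parcel's symbology comes down to a percent-impervious calculation over its tabulated class areas. A minimal sketch, with parcel IDs and square-meter areas invented for illustration:

```python
# Percent imperviousness per parcel from tabulated class areas.
# Parcel IDs and areas (square meters) are invented for illustration;
# field names are assumptions, not the lesson's exact schema.
def percent_impervious(parcel_areas):
    """parcel_areas maps parcel id -> (pervious_area, impervious_area)."""
    return {pid: 100.0 * imp / (perv + imp)
            for pid, (perv, imp) in parcel_areas.items()}

pcts = percent_impervious({
    "parcel_01": (300.0, 100.0),  # mostly lawn
    "parcel_02": (50.0, 150.0),   # mostly pavement and roof
})
```

A parcel with 100 m² of impervious surface out of 400 m² total comes out at 25%, which would be symbolized with a lighter shade than the 75% parcel.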



Results





Figure 10: The final result of the surface imperviousness tasks run through the lesson. The darker the shade, the more impervious surface within that parcel.

The final results were very successful. The map I was able to make in the lesson imitates the real impervious levels within the initial raster image very well. The roads are the best example of this: every blacktop displays on the map as extremely dark red. The grass is the most pervious and is displayed as the lightest yellow.

The difficulties encountered in this activity were all the result of ArcGIS Pro. It is extremely difficult to make a layout in ArcGIS Pro: I couldn't figure out how to change the units on the scale bar, change the decimals in the legend, or even export the layout as a JPEG. The ESRI help was little help at this point too. I ended up printing the image to a PDF and opening it in Adobe Illustrator, where I manually changed the units from kilometers to meters and was able to export a JPEG.


Conclusion


ArcGIS Pro is a very robust software program that handles Value Added Data Analysis very well. It is the future of ESRI's software, and they have been slowly rolling it out. It is still being improved, so within the next few years it should run as smoothly as the current ArcMap. The resulting map for the lesson was accurate and accomplished the task. I could imagine using this with UAS data to create polished maps for clients.