Object-Based Classification of Urban Distinct Sub-Elements Using High Spatial Resolution Orthoimages and DSM Layers

Abstract

This paper aims to assess the ways in which multi-resolution object-based classification methods can be used to classify urban environments made up of a mixture of buildings, sub-elements such as car parks, roads, shade and pavements, and vegetation such as grass and trees. An unmanned aerial vehicle (UAV) was used to provide a high-resolution orthoimage mosaic and to generate a Digital Surface Model (DSM). For the study area chosen for this paper, 400 images with a spatial resolution of 7 cm were used to build the orthomosaic and DSM, which were georeferenced using a well-distributed network of 12 ground control points (GCPs) (RMSE = 8 cm). Because these were combined with an onboard RTK-GNSS-enabled dual-frequency receiver, absolute block orientation could be achieved with an accuracy similar to that obtained when data are georeferenced by traditional indirect sensor orientation. In this arrangement, the GNSS receiver in the UAV receives differential corrections from a base station over a communication link. This allows the precise position of the UAV to be established: the RTK corrections allow position, velocity, altitude and heading to be tracked, alongside the measurement of raw sensor data. The confusion matrices show that the overall accuracy of the object-oriented classification was 84.37%, with an overall Kappa of 0.74. The classes with poor classification accuracy were shade, parking lots and concrete pavements, with producer's accuracies of 81%, 74% and 74% respectively, while lakes and solar panels each scored 100%, indicating good classification accuracy.

Cite as:

Mabdeh, A., Al-Fugara, A. and Jarah, M. (2018) Object-Based Classification of Urban Distinct Sub-Elements Using High Spatial Resolution Orthoimages and DSM Layers. Journal of Geographic Information System, 10, 323-343. doi: 10.4236/jgis.2018.104017.

1. Introduction

In recent years, photogrammetry has been recognised as a highly effective surveying method for producing 3D representations of the Earth's surface. This is because it can be used on demand and can create high-resolution data, including DSM layers and orthophotos (orthorectified images). Photogrammetry encompasses the analysis of Earth-based (terrestrial) data as well as dedicated air- and space-borne campaigns [1] [2] [3]. It is used in a variety of fields, including urban mapping and planning [4] [5], agriculture and resource management [6] [7], the recording of archaeological features [8] [9], and hydrology and hydrodynamic flood modelling [10] [11] [12]. There has also been a rise in the use of photogrammetry in the geosciences, for mapping and monitoring [13] [14], the detection of objects [15], and the detection of ground changes in topography [16].

Despite these uses, the application of aerial photogrammetry was limited in the past. It was seen as a high-cost method of data collection, and the large-format metric cameras used made it difficult to collect 3D topographic data, orthophotos, topographic maps and other map features [17]. The development of unmanned aerial vehicles (UAVs), however, has made photogrammetry a more accessible means of data collection, allowing images with high spatial and spectral resolutions to be collected in a way that saves both money and time. These technological advances allow high-quality mapping of the Earth's surface using orthoimages and mean that 3D models (meshes) of the Earth's surface can be created with high resolution and accuracy. Alongside this, advances in computer hardware and image-matching software mean that stereo images can be compared faster and more accurately than ever before, making photogrammetry a viable alternative to manned aerial photography [18] [19] [20]. In spite of these advantages, UAVs are subject to weight and cost restrictions, which means that the sensors they carry are often of lower quality than those used in manned aerial photography. When centimetre-level accuracy is required, this traditional approach may therefore not provide suitable results unless a large number of ground control points (GCPs) are distributed evenly across the site. This can make a project too expensive or impractical, particularly where parts of the terrain are inaccessible. Precise control of the aerial position when creating overlapping imagery in a block configuration can help to reduce the need for multiple GCPs [21].

There have also been developments in Global Navigation Satellite Systems (GNSS) that are particularly relevant to this paper. Readily available UAVs increasingly carry Real-Time Kinematic (RTK) devices. This matters because RTK allows the position of the UAV to be tracked more easily and makes the resulting data more accurate (to within about 2 cm) [22]. UAVs using this technology exploit the carrier phase of the GNSS signals travelling between the satellites and the receivers [23]. Over a communication link, the GNSS receiver in the UAV receives differential corrections from a base station, which the RTK processing applies to refine the position. The most recent UAVs come with onboard RTK units that use dual-frequency receivers, which help to reduce atmospheric delay and provide an even more precise location; ambiguity resolution is also much quicker than with a single frequency [24].
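Carrier-phase RTK ambiguity resolution is beyond the scope of a short illustration, but the differential principle described here can be sketched with code-range corrections (plain DGNSS, a simplified relative of RTK). A minimal sketch with hypothetical satellite geometry and error values, not a navigation-grade implementation:

```python
import numpy as np

# Known base-station position and hypothetical satellite positions (ECEF, metres).
base_true = np.array([4_432_000.0, 3_091_000.0, 3_418_000.0])
sats = np.array([
    [15_600_000.0,  7_540_000.0, 20_140_000.0],
    [18_760_000.0,  2_750_000.0, 18_610_000.0],
    [17_610_000.0, 14_630_000.0, 13_480_000.0],
    [19_170_000.0,    610_000.0, 18_390_000.0],
])

def geometric_range(pos, sats):
    """True straight-line distance from a receiver to each satellite."""
    return np.linalg.norm(sats - pos, axis=1)

# Simulated pseudoranges: true range plus a common atmospheric/clock error
# per satellite (hypothetical values, metres).
common_error = np.array([3.1, 2.4, 2.9, 3.5])
rover_true = base_true + np.array([500.0, -300.0, 50.0])  # UAV ~600 m away

pr_base = geometric_range(base_true, sats) + common_error
pr_rover = geometric_range(rover_true, sats) + common_error

# The base station knows its own position, so it can measure each
# satellite's range error and broadcast it over the communication link...
corrections = pr_base - geometric_range(base_true, sats)

# ...and the rover subtracts the same (spatially correlated) error.
pr_rover_corrected = pr_rover - corrections

residual = pr_rover_corrected - geometric_range(rover_true, sats)
print(residual)  # ~zero: the common error cancels over short baselines
```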

Advances in remote sensing have made UAVs even more useful and effective data collection tools, as they can now combine temporal and spatial sensing. This allows more precise recognition of features, but it can also mean that the images produced are subject to noise from shadows or the salt-and-pepper effect [25] [26] [27]. This is due to the way pixels behave when the spatial resolution of an image is increased: studies have shown that increasing spatial resolution can have a negative effect on classification, as pixel-based techniques then struggle to identify features accurately [28] [29]. To overcome the shortfalls of pixel-based techniques, researchers have tended to move towards object-oriented classification when analysing images with very high spatial resolution [30], although the use of UAV orthoimages in this setting remains underexploited, especially for mapping vegetation features. In one study, using UAVs to recognise tree species in a mixed boreal forest gave results with an 82% accuracy rate [30]. UAVs have also proved very useful for mapping specific plants in open woodland: a study by Chenari et al. (2017) estimated the mean crown area of wild single-level trees in open woodland by classifying the collected orthoimages with the object-oriented method, giving a classification accuracy of 0.90 and a precision of 0.89 [31]. UAVs can also be used to classify urban environments with increased accuracy [32] [33], especially when using orthoimages and DSMs, as these are useful for identifying elevated objects in urban scenes [34] [35].

These building detection algorithms are not without problems, however, and can struggle to identify buildings smaller than 50 m2 or buildings on sloped ground. Such conditions are particularly common in informal settlements, meaning that these detection algorithms are unsuitable for those areas. To map buildings in such areas, both 2D and 3D features must be analysed in order to classify the area with a high level of accuracy. The aim of this research, then, is to assess the effectiveness of the object-oriented image analysis software eCognition (Definiens Imaging, Germany) in urban environments that include features such as buildings, roads, car parks and vegetation. This is done by combining high spatial resolution mosaic orthoimages and DSM layers in order to classify features of the environment. This approach has an advantage over classification of VHR satellite imagery: UAV orthoimages can combine object segmentation with fuzzy digital classification to recognise features in a diverse environment, whereas objects may be too spectrally similar for VHR imagery to be used effectively.

2. Study Site

The site chosen for this research was the Jordan University of Science and Technology (JUST). Founded in 1986 and designed by the Japanese architect Tange, the campus combines futuristic style and sustainability. It is located 70 km north of the capital Amman and 6 km south of Al-Ramtha, at latitude 32˚28'36.77"N and longitude 35˚58'24.05"E, as shown in Figure 1. The campus has an elevation of 580 m and covers an area of 11 km2, which includes both buildings and natural areas. JUST is broadly divided into two halves: the medical faculties, in the lower part of Figure 1, and the engineering faculties, in the upper part. The buildings follow two main axes: the academic spine, where the lecture buildings are found, and the social spine, which includes services such as the library, mosque and accommodation.

3. Images Acquisition

3.1. UAV and Sensor Description

A MARSRobotics® Talon fixed-wing UAV, shown in Figure 2(a), was used in this study and performed all of the flights. This UAV complies with design standards for UAVs and is approved by Transport Canada, the Jordan Civil Aviation Regulation Commission (JCARC) and the Federal Aviation Administration (FAA) in the USA. The MARSRobotics® Talon is hand-launched at takeoff. It has a 530 kV brushless motor powered by two 6-cell 4500 mAh batteries, which provide two hours of flight time with a full payload. It cruises at 72 km/h (20 m/s) and can reach a maximum speed of 85 km/h (23.6 m/s). It can operate in winds of up to 35 km/h in flight and 25 km/h when the parachute has been deployed. It can be controlled remotely within a 15 km radius by a handheld controller, or it can use a Pixhawk autopilot (running PX4 software and manufactured by 3D Robotics), which allows the MARSRobotics® Talon to fly autonomously. The maximum takeoff weight is 3.5 kg (7.7 lbs), and the MARSRobotics® Talon can reach an altitude of 2000 m above sea level if needed. The controller displays data about the flight, such as altitude, battery status and distance travelled. Table 1 lists the technical features of the MARSRobotics® Talon.
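As a rough plausibility check on the stated battery and endurance figures (our own back-of-envelope arithmetic, not a manufacturer specification): two 6-cell 4500 mAh packs at a nominal 3.7 V per cell store roughly 200 Wh, implying an average draw near 100 W over a two-hour flight.

```python
# Back-of-envelope endurance check for the stated battery and flight-time figures.
cells = 6
nominal_v_per_cell = 3.7   # V, nominal LiPo cell voltage (assumed)
capacity_ah = 4.5          # Ah per pack (4500 mAh)
packs = 2

energy_wh = packs * capacity_ah * cells * nominal_v_per_cell   # ~199.8 Wh
flight_time_h = 2.0
avg_power_w = energy_wh / flight_time_h                        # ~100 W average draw
print(f"{energy_wh:.0f} Wh total -> ~{avg_power_w:.0f} W average draw")
```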

Figure 1. Location map of Jordan University of Science and Technology (JUST), and the JUST campus plan (source: Engineering Unit at JUST).

Figure 2. MARSRobotics® UAV (a) and Sony Alpha ILCE-A6000 camera (b).

Table 1. Platform technical specifications (Aeromapper Talon).

3.2. Camera System

The MARSRobotics® Talon carries a Sony A6000 (ILCE-6000L) mirrorless digital camera, shown in Figure 2(b), which is powered by its own rechargeable battery. It has a 24.3-megapixel APS-C (Advanced Photo System Type-C) Complementary Metal-Oxide-Semiconductor (CMOS) sensor (23.5 × 15.6 mm), a hybrid autofocus system and a continuous shooting speed of up to 11 frames/second. The data is recorded as 8-bit imagery, in both JPEG and RAW formats, at a resolution of 4000 × 6000 pixels. The 16 - 50 mm power-zoom lens has an 83˚ - 32˚ angle of view, as seen in Table 2. The camera is held on the UAV by a gimbal, which ensures a constant viewing angle so that near-nadir images are captured.

3.3. Control Unit

The way in which the flight is controlled is crucial to the MARSRobotics® Talon. Drones such as this can be controlled in a variety of ways, such as GPS-enabled autopilot systems or radio-controlled hardware. In this study, the Pixhawk autopilot system was used to control the UAV. This is an open-source autopilot system marketed towards users of inexpensive autonomous aircraft.

Table 2. Camera technical specifications (Sony Alpha ILCE-A6000). Source: https://www.imaging-resource.com/PRODS/sony-a6000/sony-a6000DAT.HTM

This was a good choice as it is a low-cost, readily available system. An RTKite GNSS receiver was used to survey the GCPs and checkpoints. It has 444 channels, tracks both L1 and L2 frequencies as well as the GPS and GLONASS constellations, and connects directly to the Pixhawk controller. GNSS differential processing of the GCPs and checkpoints was conducted with the Pixhawk's 32-bit ARM Cortex M4 core with FPU. The GNSS receiver's antennas fall into two groups: three redundant antennas for position measurements and two antennas for differential (GNSS RTK) measurements, ensuring that the data is transmitted to the data link in real time and then back to the base station. The connection between the GNSS sensors and the Pixhawk controller uses Continuously Operating Reference Stations (CORS) via an embedded GSM/GPRS cellular modem. The GNSS receiver corrects accumulated error from the Attitude and Heading Reference System (AHRS) and provides information on the position, velocity, altitude and heading of the drone, as well as raw sensor data measurements.

3.4. Software

3.4.1. Mission Planner

Mission Planner is software developed by ArduPilot which allows a flight path to be planned. It is a Ground Control Station (GCS) system and can be used with APM and Pixhawk open-source piloting systems. Mission Planner allows the firmware to be upgraded and the autopilot system to be configured, collects live telemetry readings, and allows a mission or flight path to be programmed into the drone. Pix4D Mapper Pro was used for the photogrammetric processing of the images collected by the UAV. This software calculates the position and original orientation of each image through Automatic Aerial Triangulation (AAT) and Bundle Block Adjustment (BBA). This allows the DSM layer to be generated, as a 3D point cloud can be obtained from both of these data sets (Wolf, 1985; Mikhail and Bethel, 2001). By projecting and combining the original images and the DSM layer, it is possible to orthorectify and mosaic the images (Pix4D Manual, 2013). GTR Processor v2.92 was used for GNSS differential processing of the ground control points and checkpoints, while statistical analysis was carried out using MATLAB v7.11 R2010b.

3.4.2. Pix4D Mapper Pro

Pix4D Mapper Pro was developed by Pix4D, a Swiss company founded at the École Polytechnique Fédérale de Lausanne (EPFL). It is a vision-based software package which allows users to define its settings, including choosing their own projection centres, position accuracies and camera model. Depending on how the geolocation data is stored for each image, the software can process it automatically. If the data is saved in the Exchangeable Image File Format (EXIF), Pix4D Mapper Pro will load it for the BBA and assess its estimated position accuracy. Although this process is generally automatic, users are able to define options for the SfM, BBA and camera calibration. The software also performs feature matching with SIFT operators, meaning that tie points can be extracted. Finally, the software stores all of the estimated parameters and the results of the matching and orientation processes in the output folder, making them easily accessible. The images were processed by the MARSRobotics® team using their Pix4D license.

3.4.3. eCognition

The eCognition software was developed by Delphi2-Creative Technologies, a German company, and offers a new way of carrying out object-oriented, multi-scale image analysis. The beta version of this software was used in this study for the object-based classification. For object-oriented analysis, eCognition gives the user access to information that cannot be obtained from single pixels. The analysis process comprises two steps, segmentation and classification. Segmentation involves grouping elements of an image based on their likeness, and it must be carried out before classification because the software works with objects rather than pixels [36].

3.4.4. Universal Ground Control Station (UGCS) Software

UgCS software can be used to plan flight routes and fly drone survey missions. It supports drone hardware from different manufacturers, enabling drones to be controlled via different broadcast systems. The software can calculate the route and fly the UAV autonomously. The appropriate input parameters must be set: the area of interest must be well defined on a map, and the flight properties (side and forward image overlap percentages, flight altitude) must be specified. These input parameters are used to calculate the optimal flight route, which assures full coverage of the area of interest.

3.5. System Design

3.5.1. UAV Flight Mission

The flight missions for the UAV were designed using UgCS software, which allows missions to be planned, and the route was selected based on the configuration of the camera. Google Earth was used to obtain an overview of the area and define the boundaries of the flight path, although it does not show dangers or obstructions, so it cannot be used alone. A ground survey was therefore carried out to identify obstacles such as trees, buildings and electricity pylons; the preflight survey also makes it possible to identify a wide open space suitable for take-off and landing, making it very important. In this study, the flight was planned to cover an area of 11 km2 at an average height of 400 m, allowing the UAV to obtain a ground sample distance of 7 cm. Forward overlap was set at 70% and side overlap at 50%; these overlap settings were configured in the mission planning software (UgCS), and their relationship to the ground coverage is sketched below. The flight path was computed in order to estimate the outcomes of the flight, as seen in Figure 3, and it was decided that Adaptive Bank Turns would be used and that the maximum speed would be 15 m/s. The flight path was then uploaded to the UAV. These settings were validated by examining the raw images in Figure 5, taken directly from the UAV camera before processing.
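These flight parameters can be cross-checked against the standard photogrammetric formulas. A minimal sketch, assuming the Sony A6000 geometry from Table 2 and a focal length of about 22 mm within the 16 - 50 mm zoom range (the exact focal length used is not stated in the paper):

```python
# Ground sample distance and exposure spacing from the flight parameters.
sensor_w_mm, sensor_h_mm = 23.5, 15.6   # Sony A6000 APS-C sensor
img_w_px, img_h_px = 6000, 4000
focal_mm = 22.0                          # assumed; within the 16-50 mm zoom range
height_m = 400.0                         # flying height above ground
fwd_overlap, side_overlap = 0.70, 0.50

pixel_pitch_mm = sensor_w_mm / img_w_px              # ~0.0039 mm (3.9 um)
gsd_m = pixel_pitch_mm * height_m / focal_mm         # ~0.071 m, matching the 7 cm GSD

footprint_w_m = gsd_m * img_w_px                     # across-track coverage per image
footprint_h_m = gsd_m * img_h_px                     # along-track coverage per image

base_m = footprint_h_m * (1 - fwd_overlap)           # distance between exposures
strip_spacing_m = footprint_w_m * (1 - side_overlap) # distance between flight lines

print(f"GSD {gsd_m*100:.1f} cm, footprint {footprint_w_m:.0f} x {footprint_h_m:.0f} m")
print(f"exposure every {base_m:.0f} m, strips {strip_spacing_m:.0f} m apart")
```

Under these assumptions, an exposure every ~86 m at the 15 m/s maximum speed corresponds to roughly one photo every 5.7 s, comfortably within the camera's shooting rate.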

Figure 3. Flight planning realized by Mission Planner software for the MARSRobotics® Talon JUST mission.

Figure 4. The locations of the GCPs on the campus of JUST University: the 12 GCPs (in red) of the 12-GCP control configuration and the 14 CPs (in green) on the test site. The highlighted GCP number 13 is used in the RTK + 1 GCP control configuration. Processed by the MARSRobotics® team using their Pix4D license.

3.5.2. Establishing Ground Control Points

The need for ground control points can be avoided if the UAV carries a dual-frequency GNSS receiver, but in this study the onboard GNSS was only single-frequency, so it was not relied on for georeferencing; instead, GCPs were marked on the project area before the flight [37]. As a result, 16 signalized targets were deployed and surveyed just before the UAV took flight. Their placement had to comply with local building and park regulations and, where permitted, the GCPs were surveyed using a Trimble R8 GNSS receiver. They achieved a horizontal accuracy of 0.8 cm + 0.5 ppm and a vertical accuracy of 1.5 cm + 0.05 ppm using the MARSRobotics®-owned North® RTK system, with the default settings suggested by the North® RTK system used when collecting results. The GCPs used in this project can be seen in Figure 4 and were selected to ensure the best georeferencing results. The five points marked in red in Figure 4 were used as GCPs and the 11 yellow points were used as checkpoints. The base station of the GPS is marked with a blue dot.

3.5.3. UAV-Based RTK and GCP Distribution

The flight in this study not only had well-distributed GCPs, as seen in Figure 4, but was also conducted using RTK-GNSS-enabled dual-frequency receivers. This meant that the RTK-GNSS data attached to each image was combined with the bundle adjustment, with the onboard RTK reducing the alteration or deformation of the imagery.

Figure 5. Illustration of the UAV images taken with the visible camera at 400 m over the test site; the zoomed image is image No. DS00079, which appears in the middle of the last row above (provided by the MARSRobotics® Aerial Mapping Team).

Once the UAV was in flight, photos of the study area were collected based on the flight configuration listed in Table 3.

Table 3. Configuration of UAV photogrammetric system.

The same inputs were used for every flight, which was flown in semi-automatic mode with the same flight plan each time photos were taken. The UAV flew at a height of approximately 400 m above the ground and was put into manual mode for take-off and landing. For this study, the five-strip flight mode was used; 542 images were collected during the 25-minute flight at the target ground sample distance of 7 cm. To obtain the clearest images, both the weather and the time of day were taken into account when scheduling the flight. Some of the photos taken can be seen in Figure 5.

4. Image Processing

4.1. Camera Position and Orientation

The software selected for processing the images from this study was Pix4D Mapper by Pix4D. During the BBA, the internal orientation of the camera was calibrated, meaning that the focal length, the position of the principal point and the lens distortion parameters were all self-adjusted. It was not possible to recalibrate the camera in the field before the flight, but this has been shown to be unnecessary when self-calibration is performed. After the images have been analysed by the BBA, tie points are used to match pairs of images that are spatially similar. This allows the exact flight path to be seen, as the tie points produce a point cloud above the image. This can be seen in Figure 6, which highlights the camera position when the UAV was over Acadia A. It also shows points where unsuitable images were taken in the foreground.


4.2. DSM Layers and Orthoimage Mosaics

Photogrammetric products such as DSM layers and orthomosaics were generated from the data collected. A DSM, or Digital Surface Model, provides a 3D representation of an area, highlighting elevation. This, alongside the creation of the point cloud and mesh, allows the surface of the terrain to be reconstructed digitally. In this case, this was done with the Pix4D software, which takes the exterior orientation data and camera calibration parameters and uses them to create a digital scene by image matching. Together with the point clouds, this means the terrain can be described, and the result is then triangulated to create orthophotos and DSM layers, as seen in Figure 7.
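The geometric relationship underlying this projection step is the standard pair of collinearity equations, which relate a ground point $(X, Y, Z)$, with $Z$ supplied by the DSM, to its image coordinates $(x, y)$ through the camera position $(X_0, Y_0, Z_0)$, the rotation matrix $R = (r_{ij})$, the focal length $f$ and the principal point $(x_p, y_p)$. This is the textbook formulation (e.g. Wolf, 1985), not Pix4D's internal implementation:

$$x = x_p - f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$

$$y = y_p - f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$

Orthorectification applies this mapping over the DSM grid: each DSM cell is projected into the adjusted images, and the sampled colour is written back to the cell's map position, which is what removes relief displacement from the final mosaic.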

Figure 6. The location of the images from two different angles. It is created by MARSRobotics® Team using Pix4D Mapper using 3D view in Pix4D Mapper. The point cloud view indicates the area of each image that was taken along the flight path and the angle of each relative to the ground.

Figure 7. Orthoimage (left) and DSM (right) created from UAV images by MARSRobotics® system.

4.3. Geolocation Accuracy

Using the portable North® GNSS-RTK system and the GCPs surveyed by the MARSRobotics® team, shown in Figure 4, it was possible to carry out statistical analysis of the results of the study. Of the 16 points identified in Figure 4, 5 GCPs and 11 checkpoints were selected, allowing analysis of the exterior orientation process and an accuracy assessment respectively. The points were input into ArcGIS 10.3 and the output coordinate system was set to the Jordan Transverse Mercator (JTM) projection. The base layers used were the orthomosaic images and the DSM, so both the vertical and the horizontal accuracy could be assessed. Microsoft Excel was then used to calculate the Root Mean Square Error (RMSE) of the results, which can be seen in Table 4.

Table 4 shows that the accuracy of the orthophoto is 8 cm in easting, 7 cm in northing and 20 cm in height, which corresponds to about one pixel in easting and northing and less than 3 pixels in elevation. It can also be seen that the residual delta N for point number 9 deviates markedly from the other points. This could be because the point is an outlier, or it could be down to a fault in the measurements. From this data, it can be concluded that the position of the orthophoto is accurate to a standard deviation of about one pixel (7 - 8 cm) horizontally and 20 cm vertically.

Table 4. Geolocation accuracy.

Looking at individual records, the easting coordinates of 13 of the 16 ground control points and checkpoints (81%) are accurate to within one pixel, and all of them (100%) are within 2 pixels. For the northing coordinates, 16 out of 16 (100%) of the ground control points and checkpoints are accurate to within two pixels.
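The RMSE calculation itself is simple to reproduce outside of Excel. A minimal sketch, with hypothetical checkpoint residuals standing in for the Table 4 values:

```python
import numpy as np

GSD = 0.07  # ground sample distance in metres (7 cm)

# Hypothetical checkpoint residuals in metres (dE, dN, dH); the study's
# Table 4 values would be substituted here.
residuals = np.array([
    [ 0.06, -0.07,  0.18],
    [-0.08,  0.05, -0.21],
    [ 0.09, -0.08,  0.22],
    [-0.07,  0.06, -0.19],
])

# Per-component RMSE, reported in centimetres and in pixels (multiples of GSD).
rmse = np.sqrt(np.mean(residuals**2, axis=0))
for name, val in zip(("East", "North", "Height"), rmse):
    print(f"RMSE {name}: {val*100:4.1f} cm  ({val/GSD:.1f} px)")
```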

5. Object-Based Image Analysis

5.1. Segmentation

The area covered by this study can be seen in Figure 7(a). It is a complex zone with a size of 3350 × 4400 pixels and a spatial resolution of 0.07 m. When carrying out object-oriented image analysis, it is vital that the data is segmented. This involves dividing the data into different categories, and the parameters used to do this must be chosen carefully to ensure accurate results. The OBIA segmentation was carried out using eCognition Developer 9 software and was based on the RGB data collected by the camera. During the multi-resolution segmentation (MRS) steps, the red, green and blue bands of the orthomosaic were input into the software together with the DSM layer. Initially, some of the data was classified incorrectly, as it was hard for the software to distinguish between building roofs and the terrain due to their similar spectral features. To remove this problem, the entire DSM layer was used and, after several trials, the best segmentation parameters were determined: the segmentation produced 1,319,821 objects, with colour weighted at 0.8, shape at 0.2, compactness at 0.5 and smoothness at 0.5. Each of the layers was given the same weight and, due to the high resolution of the data, each image had to be segmented at multiple scales in order to get an accurate picture, as seen in Figure 8.
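eCognition's multi-resolution segmentation algorithm is proprietary, but the core idea of this step, growing image objects at several scales with the DSM stacked alongside the RGB bands so that height separates roofs from spectrally similar ground, can be sketched with open-source tools. A sketch using SLIC superpixels from scikit-image (0.19 or later) as a stand-in for MRS; the file names are hypothetical:

```python
import numpy as np
import rasterio
from skimage.segmentation import slic

# File names are hypothetical; any co-registered orthomosaic and DSM would do.
with rasterio.open("orthomosaic.tif") as src:
    rgb = src.read([1, 2, 3]).transpose(1, 2, 0).astype(float) / 255.0
with rasterio.open("dsm.tif") as src:
    dsm = src.read(1).astype(float)

# Normalise the DSM and stack it as a fourth band so that height differences
# help separate roofs from spectrally similar ground surfaces.
dsm_norm = (dsm - dsm.min()) / (dsm.max() - dsm.min())
stack = np.dstack([rgb, dsm_norm])

# Segment at several object sizes, loosely mirroring the paper's multi-scale
# trials; fewer segments play the role of a larger eCognition scale parameter.
for n_segments in (50_000, 5_000, 500):
    objects = slic(stack, n_segments=n_segments, compactness=0.1,
                   channel_axis=-1)
    print(f"{n_segments} requested -> {objects.max()} objects")
```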

5.2. Image Object Classification

After the segmentation had been performed, an image object classification was run. This was also done in eCognition, which offers users a choice between fuzzy classification using user-defined membership functions and fuzzy nearest neighbour classification. In this study, nearest neighbour classification was used.

Figure 8. Multi-resolution segmentation results with the scale parameters 10 (a), 25 (b), 100 (c) and 1000 (d) using Orthomosaic images.

Samples were selected for each class. This is an efficient classifier, as it is automated and operates in a feature space that can be either automatic or user controlled. Samples were selected to give a representative picture of the dataset as a whole, and 11 land cover classes were identified within the study area. These were defined by class rules based on shape, spectral signatures, location and relationships between objects, and were used to classify the images, together with the DSM, into their most probable categories. The results, mapped in ArcGIS 10.3, can be seen in Figure 9.

To verify that accurate data had been obtained, the classification was compared to test samples from the different classes. Overall, the results agreed well with the test samples (0.94, 0.95 and 0.92 respectively for scale factors 10, 40 and 80). This suggests that accuracy tends to decrease at larger scale factors, and that both the multi-resolution segmentation technique and the object-oriented classification are important for understanding remote sensing images. The standard nearest neighbour (SNN) classifier used here also needs specific knowledge of the examined area and a good selection of samples in order to ensure good results.

Figure 9. Results of the fuzzy nearest neighbour classification applied to the segmentation with scale factor 25.

It is possible that the results of the study could be improved further by using newer sensors.
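For readers without eCognition, the nearest-neighbour step can be approximated with a conventional k-nearest-neighbour classifier operating on per-object features. A minimal sketch, with hypothetical feature values standing in for eCognition's object statistics:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Per-object features: mean R, mean G, mean B, mean DSM height (m), area (m2).
# The values are hypothetical stand-ins for eCognition's object statistics.
X_train = np.array([
    [0.45, 0.42, 0.40, 12.3,  850.0],   # building roof
    [0.30, 0.31, 0.33,  0.2, 2400.0],   # asphalt road
    [0.21, 0.35, 0.18,  0.4,  310.0],   # grass
    [0.12, 0.13, 0.15,  0.1,  540.0],   # shade
])
y_train = np.array(["building", "road", "grass", "shade"])

# Standardise so that area (in the thousands) does not swamp reflectance.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(scaler.transform(X_train), y_train)

# Classify an unseen object; with k = 1 this is a hard assignment, while a
# larger k (with predict_proba) would give graded, fuzzy-like memberships.
X_new = np.array([[0.29, 0.30, 0.32, 0.3, 1900.0]])
print(knn.predict(scaler.transform(X_new)))
```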

5.3. Classification Accuracy Assessment

Using confusion matrices, accuracy assessments were carried out for the object-oriented image classification. The accuracy was assessed against reference data previously collected over the study area, both in the field and from aerial photography. The overall classification accuracy is obtained by dividing the number of correctly classified pixels by the total number of pixels:

$$\text{overall accuracy} = \frac{P_{ij}}{N}, \tag{1}$$

where $P_{ij}$ is the total number of correctly classified pixels and $N$ is the total number of pixels in the confusion matrix.

The producer's accuracy is a reference-based accuracy, calculated by reviewing the predictions produced for a class and forming the percentage of correct predictions:

$$\text{producer's accuracy} = \frac{P_{ij}}{R_i}, \tag{2}$$

where $P_{ij}$ is the number of properly classified pixels in row $i$ (in the diagonal cell) and $R_i$ is the total number of pixels in row $i$.

The user's accuracy is a map-based accuracy, calculated by reviewing the reference data for a class and establishing the percentage of correct predictions for these samples:

$$\text{user's accuracy} = \frac{P_{ij}}{C_j}, \tag{3}$$

where $P_{ij}$ is the number of properly classified pixels in column $j$ (in the diagonal cell) and $C_j$ is the total number of pixels in column $j$.

The results of the confusion matrices for the object-oriented image classification can be seen in Table 5. The table shows that the overall accuracy of the object-oriented classification was 84.37% and that the overall Kappa score was 0.74. It can also be seen that some classes, such as shade, car parks and concrete pavements, had lower accuracies, while lakes and solar panels had significantly higher ones. A sketch of how these statistics follow from an error matrix is given below.
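All of these statistics follow directly from the error matrix. A minimal sketch with an illustrative 3-class matrix (hypothetical counts, chosen so the producer's accuracies echo the shade, car park and pavement figures above; not the study's Table 5):

```python
import numpy as np

# Illustrative 3-class error matrix (rows = reference, columns = classified);
# the counts are hypothetical, not taken from the study's Table 5.
cm = np.array([
    [81, 12,  7],   # shade
    [10, 74, 16],   # car park
    [ 5, 21, 74],   # concrete pavement
])

N = cm.sum()
overall = np.trace(cm) / N                  # Eq. (1)
producers = np.diag(cm) / cm.sum(axis=1)    # Eq. (2): per reference class
users = np.diag(cm) / cm.sum(axis=0)        # Eq. (3): per classified class

# Cohen's kappa measures agreement beyond what chance alone would produce.
chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / N**2
kappa = (overall - chance) / (1 - chance)

print(f"overall accuracy {overall:.2%}, kappa {kappa:.2f}")
print("producer's accuracy:", np.round(producers, 2))
print("user's accuracy:   ", np.round(users, 2))
```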

6. Conclusions

This study shows that the spatial resolution of orthoimagery plays a significant role in how accurately the data can be classified. The flight mission therefore aimed to collect images with a very high spatial resolution, as these are generally better for distinguishing between buildings and sub-elements such as car parks and vegetation. This was done by creating mosaic orthoimages and DSM layers.

Table 5. Error matrix of object-oriented image classification.

Over 400 images of the study area were taken, with a spatial resolution of 7 cm, collected by the RTK-enabled UAV while in flight. The accuracy and repeatability appear high, as shown by the analysis carried out using the Pix4D software: GNSS-assisted aerial triangulation (GNSS-AT) gives an average horizontal RMSE of 2.2 cm, rising to 5.5 cm in elevation. These results match the manufacturer's claims and suggest that photogrammetric surveys can rely on onboard RTK/PPK GNSS to create a stable reference system.

The findings also showed the effectiveness of object-oriented multi-resolution segmentation when used on DSM layers and high-resolution images. Using the fuzzy nearest neighbour classifier, classes were created based on spectral signatures, shape, location and relationships, and were applied to each object. The resulting classification was found to be fairly accurate, with an overall accuracy of 84.37% and an overall Kappa score of 0.74.

Acknowledgements

The authors express their appreciation to MARSRobotics® Company team and management for their full technical and in-kind support in acquiring the aerial images, utilization of MARSRobotics® systems, and other technical matters to accomplish this project.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Baltsavias, E.P., Favey, E., Bauder, A., Bosch, H. and Pateraki, M. (2001) Digital Surface Modelling by Airborne Laser Scanning and Digital Photogrammetry for Glacier Monitoring. The Photogrammetric Record, 17, 243-273.
https://doi.org/10.1111/0031-868X.00182
[2] Barrand, N.E., Murray, T., James, T.D., Barr, S.L. and Mills, J.P. (2009) Optimizing Photogrammetric DEMs for Glacier Volume Change Assessment Using Laser-Scanning Derived Ground-Control Points. Journal of Glaciology, 55, 106-116.
https://doi.org/10.3189/002214309788609001
[3] Gindraux, S., Boesch, R. and Farinotti, D. (2017) Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles’ Imagery on Glaciers. Remote Sensing, 9, 186.
[4] Zhou, G., Qin, Z., Benjamin, S. and Schickler, W. (2003) Technical Problems of Deploying National Urban Large-Scale True Orthoimages Generation. The 2nd Digital Government Conference, Boston, 18-21 May 2003, 383-387.
[5] Ayhan, E., Erden, O. and Gormus, E.T. (2008) Three Dimensional Monitoring of Urban Development by Means of Ortho-Rectified Aerial Photographs and High-Resolution Satellite Images. Environmental Monitoring and Assessment, 147, 413-421.
https://doi.org/10.1007/s10661-007-0129-x
[6] Grenzdorffer, G.J., Engel, A. and Teichert, B. (2008) The Photogrammetric Potential of Low-Cost UAVs in Forestry and Agriculture. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 31, 1207-1214.
[7] Wallace, L., Lucieer, A., Watson, C. and Turner, D. (2011) Development of a UAV-LiDAR System with Application to Forest Inventory. Remote Sensing, 4, 1519-1543.
https://doi.org/10.3390/rs4061519
[8] Al-Fugara, A., Al-Adamat, R., Al-Shawabkeh, Y., Al-Kouri, O. and Al-Shabeeb, A. (2016) A Multi-Resolution Photogrammetric Framework for Digital Geometric Recording of Large Archeological Sites: Ajloun Castle-Jordan. International Journal of Geosciences, 7, 425-439.
https://doi.org/10.4236/ijg.2016.73033
[9] Sauerbier, M. and Eisenbeiss, H. (2010) UAVs for the Documentation of Archaeological Excavations. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 38, 526-531.
[10] Al-Fugara, A., Billa, L., Pradhan, B., Mohamed, T. and Rawashdeh, S. (2011) Coupling of Hydrodynamic Model and Aerial Photogrammetry-Derived Digital Surface Model for Flood Simulation Scenarios Using GIS: Kuala Lumpur Flood, Malaysia. Disaster Advances, 4, 20-28.
[11] Al-Fuagara, A., Ahmed, T., Ghazali, A.H., Zakaria, S., Mahmud, A.R., Mansor, S. and Al-Mattarneh, H.M.A. (2008) The Application of Hydraulic Model with GIS for Visual Floodplain Mapping: A Case Study of Kuala Lumpur City, Malaysia. International Conference on Construction and Building Technologies (ICCBT), Kuala Lumpur, 16-20 June 2008, 273-282.
[12] Moore, I.D., Grayson, R.B. and Ladson, A.R. (1991) Digital Terrain Modelling: A Review of Hydrological, Geomorphological, and Biological Applications. Hydrological Processes, 5, 3-30.
https://doi.org/10.1002/hyp.3360050103
[13] Al-Kouri, O., Al-Fugara, A., Rawashdeh, S., Balqies, S. and Biswajeet, B. (2013) Geospatial Modeling for Sinkholes Hazard Map Based on GIS & RS Data. Journal of Geographic Information System, 5, 584-592.
https://doi.org/10.4236/jgis.2013.56055
[14] Al-Kouri, O., Al-Fugara, A., Dagamseh, S. and Shafry, M. (2012) Volumetric Surface Movement Spatio-Temporal Data Model for Dynamic Modeling and Visualization of Karst Topography. International Geoinformatics Research and Development Journal, 3, 68-77.
[15] Ramon Soria, P., Bevec, R., Arrue, B.C., Ude, A. and Ollero, A. (2016) Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors. Sensors, 16, 700.
https://doi.org/10.3390/s16050700
[16] Al-Fugara, A., Al-Adamat, R., Al-Kouri, O. and Taher, S. (2016) DSM Derived Stereo Pair Photogrammetry: Multitemporal Morphometric Analysis of a Quarry in Karst Terrain. The Egyptian Journal of Remote Sensing and Space Sciences, 19, 61-72.
https://doi.org/10.1016/j.ejrs.2016.03.004
[17] Debella-Gilo, M. (2016) Bare-Earth Extraction and DTM Generation from 28 Photogrammetric Point Clouds with a Partial Use of an Existing Lower Resolution DTM. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 41, 201-206.
[18] Rusnák, M., Sládek, J., Busa, J. and Greif, V. (2016) Suitability of Digital Elevation Models Generated by UAV Photogrammetry for Slope Stability Assessment (Case Study of Landslides in SVATY ANTON, SLOVAKIA). Acta Scientiarum Polonorum Formatio Circumiectus, 15, 439-449.
[19] Sládek, J. and Rusnák, M. (2013) Low-Cost Micro UAV Technologies in Geography (A New Method of Spatial Data Collection). Geograficky Casopis, 65, 269-285.
[20] Fonstad, M.A., Dietrich, J.T., Courville, B.C., Jensen, J.L. and Carbonneau, P.E. (2013) Topographic Structure from Motion: A New Development in Photogrammetric Measurement. Earth Surface Processes and Landforms, 38, 421-430.
https://doi.org/10.1002/esp.3366
[21] Choi, K. and Lee, I. (2013) A Sequential Aerial Triangulation Algorithm for Real-Time Georeferencing of Image Sequences Acquired by an Airborne Multi-Sensor System. Remote Sensing, 5, 57-82.
https://doi.org/10.3390/rs5010057
[22] Gerke, M. and Przybilla, H.-J. (2016) Accuracy Analysis of Photogrammetric UAV Image Blocks: Influence of Onboard RTK-GNSS and Cross Flight Patterns. Photogrammetrie, Fernerkundung, Geoinformation, No. 1, 17-30.
[23] Odijk, D., Khodabandeh, A., Nadarajah, N., Choudhury, M., Zhang, B., Li, W. and Teunissen, P. (2017) PPP-RTK by Means of S-System Theory: Australian Network and User Demonstration. Journal of Spatial Science, 62, 3-27.
https://doi.org/10.1080/14498596.2016.1261373
[24] Chiang, K.W., Tsai, M.L. and Chu, C.H. (2012) The Development of an UAV Borne Direct Georeferenced Photogrammetric Platform for Ground Control Point Free Applications. Sensors, 12, 9161-9180.
https://doi.org/10.3390/s120709161
[25] Van Der Sande, C.J., De Jong, S.M. and De Roo, A.P.J. (2003) A Segmentation and Classification Approach of IKONOS-2 Imagery for Land Cover Mapping to Assist Flood Risk and Flood Damage Assessment. International Journal of Applied Earth Observation and Geoinformation, 4, 217-229.
https://doi.org/10.1016/S0303-2434(03)00003-5
[26] Fernandez-Luque, I., Aguilar, F.J., Álvarez, M.F. and Aguilar, M.A. (2013) Non-Parametric Object-Based Approaches to Carry out ISA Classification from Archival Aerial Orthoimages. Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6, 2058-2071.
https://doi.org/10.1109/JSTARS.2013.2240265
[27] Daqamseh, S., Mansor, S., Al-Fuagara, A., Al-Kouri, O. and Al-Mattarneh, H. (2008) Phytobiodiversity Mapping Using Objects Oriented Analysis. International Conference on Construction and Building Technologies, Kuala Lumpur, 16-20 June 2008, 273-282.
[28] AlFugara, A.M., Pradhan, B. and Mohamed, T.A. (2009) Improvement of Land Use Classification Using Object Oriented and Fuzzy Logic Approach. Applied Geomatics, 1, 111-120.
https://doi.org/10.1007/s12518-009-0011-3
[29] Blaschke, T. and Strobl, J. (2001) What’s Wrong with Pixels? Some Recent Developments Interfacing Remote Sensing and GIS. GIS Zeitschrift für Geoinformations Systeme, 14, 12-17.
[30] Ouyang, Z.-T., Zhang, M.-Q., Xie, X., Shen, Q., Guo, H.-Q. and Zhao, B. (2011) A Comparison of Pixel-Based and Object-Oriented Approaches to VHR Imagery for Mapping Saltmarsh Plants. Ecological Informatics, 6, 136-146.
https://doi.org/10.1016/j.ecoinf.2011.01.002
[31] Chenari, A., Erfanifard, Y., Dehghani, M. and Pourghasemi, H. (2017) Woodland Mapping at Single-Tree Levels Using Object-Oriented Classification of Unmanned Aerial Vehicle (UAV) Images. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 43-49.
[32] Gevaert, C.M., Persello, C., Sliuzas, R. and Vosselman, G. (2016) Classification of Informal Settlements through the Integration of 2D and 3D Features Extracted from UAV Data. Proceedings of the 23th ISPRS Congress: From Human History to the Future with Spatial Information, Prague, 12-19 July 2016, 317-324.
[33] Longbotham, N., Chaapel, C., Bleiler, L., Padwick, C., Emery, W.J. and Pacifici, F. (2012) Very High Resolution Multiangle Urban Classification Analysis. IEEE Transactions on Geoscience and Remote Sensing, 50, 1155-1170.
https://doi.org/10.1109/TGRS.2011.2165548
[34] Weidner, U. and Forstner, W. (1995) Towards Automatic Building Extraction from High-Resolution Digital Elevation Models. ISPRS Journal of Photogrammetry and Remote Sensing, 50, 38-49.
https://doi.org/10.1016/0924-2716(95)98236-S
[35] Huang, M.-J., Shyue, S.-W., Lee, L.-H. and Kao, C.-C. (2008) A Knowledge-Based Approach to Urban Feature Classification Using Aerial Imagery with Lidar Data. Photogrammetric Engineering and Remote Sensing, 74, 1473-1485.
https://doi.org/10.14358/PERS.74.12.1473
[36] eCognition Reference Manual.
[37] Hughes, M.L., McDowell, P.F. and Marcus, W.A. (2006) Accuracy Assessment of Georectified Aerial Photographs: Implications for Measuring Lateral Channel Movement in a GIS. Geomorphology, 74, 1-16.
https://doi.org/10.1016/j.geomorph.2005.07.001
