[DC-03-024] Unmanned Aerial Systems (UAS)

Unmanned Aerial Systems (UAS) are revolutionizing how GIS&T researchers and practitioners model and analyze our world. Compared to traditional remote sensing approaches, UAS provide a relatively inexpensive, flexible, and easy-to-use platform for capturing geospatial data at high spatial and temporal resolution. Developments in computer vision, specifically Structure from Motion (SfM), enable processing of UAS-captured aerial images into three-dimensional point clouds and orthophotos. However, many challenges persist, including restrictive legal environments for UAS flight, extensive data processing times, and the need for further basic research. Despite this transformative potential, UAS adoption still faces societal hesitance due to privacy concerns and liability issues.

Tags

aerial data capture
aerial imagery
photogrammetry
remote sensing
sensors
Structure from Motion (SfM)
unmanned aerial systems (UAS)

Author and citation

Mathews, A. J. and Frazier, A. E. (2017). Unmanned Aerial Systems. The Geographic Information Science & Technology Body of Knowledge (2nd Quarter 2017 Edition), John P. Wilson (ed.), DOI: 10.22224/gistbok/2017.2.4

Explanation

  1. Definitions
  2. Operations
  3. Sensors and Data Capture
  4. Data Processing and Analysis
  5. Applications
  6. UAS and Society

 

1. Definitions

Unmanned Aerial System (UAS): an aircraft without an onboard pilot that is operated autonomously or manually by a remote-control operator. The terms unmanned aerial vehicle (UAV), unmanned aircraft systems/vehicles, remotely piloted aircraft (RPA), and drone are often used interchangeably. UAS platforms typically adopted by geospatial researchers are considered small UAS (sUAS), weighing between 0.5 lbs (~0.2 kg) and 55 lbs (~25 kg) as designated by the U.S. Federal Aviation Administration (FAA); weight limits may vary in other countries.

Rotary-Wing (RW): single or multirotor copter with upward-mounted propeller(s) that generate lift allowing aircraft to take off and land vertically and hover during flight. RW platforms typically provide more maneuverability than fixed-wing aircraft.

Fixed-Wing (FW): platform with a stationary wing and forward-mounted propeller(s) that generate lift and continuously move the aircraft forward at varying pitch angles. FW platforms can fly at higher speeds and for longer durations (40 minutes to several hours), increasing aerial coverage compared to RW platforms.

Structure from Motion (SfM): a set of computer vision algorithms that process digital photos into three-dimensional point clouds and subsequent geospatial data products such as digital terrain models, digital surface models, and orthophotos. SfM is a broad term that often also encompasses multi-view stereo techniques (e.g., MVS, SfM-MVS).

 

2. Operations

UAS operators must adhere to civil aviation authority policies when collecting data. In the U.S., the FAA governs UAS operations, requiring aircraft to be registered and operators to obtain a remote pilot certification. Operating rules include flying only during daylight hours, remaining below 400 feet (~120 m) in altitude, staying at least 5 mi (~8 km) from any airport, and avoiding flight over populated areas. Additionally, operators must maintain visual line-of-sight to the aircraft and yield to manned aircraft during flight.

Many UAS are hindered by even slightly windy conditions, so weather forecasts at or near the study site should be checked frequently. Although platform dependent, FW aircraft are often flown into and with the wind to minimize side-to-side movement, whereas RW aircraft are less restricted in flight direction. FW platforms require a larger staging area than RW platforms for launch and skid landings. During data collection missions, flightlines should be organized to ensure stereoscopic coverage. UAS-based image capture requires considerable overlap (80-90% endlap and 60% sidelap recommended) to ensure effective image matching, given the larger distortions introduced by lower flying altitudes and platform instability (Harwin et al., 2015). Nadir-facing images are most commonly collected, although convergent views (i.e., integration of oblique images) are recommended (James & Robson, 2014). The sketch below shows how these overlap targets translate into flight-plan geometry.
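The following minimal Python sketch (all camera and flight values are hypothetical examples) illustrates how endlap and sidelap targets translate into camera trigger spacing and flightline separation for a nadir-facing camera:

    def plan_coverage(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm,
                      endlap=0.85, sidelap=0.60):
        """Return (photo spacing, flightline spacing) in meters.

        Assumes a nadir-facing camera with the sensor's long axis
        oriented across track; all inputs are illustrative.
        """
        scale = altitude_m / focal_mm          # ground meters per sensor mm
        footprint_w = sensor_w_mm * scale      # across-track footprint (m)
        footprint_h = sensor_h_mm * scale      # along-track footprint (m)
        photo_spacing = footprint_h * (1.0 - endlap)
        line_spacing = footprint_w * (1.0 - sidelap)
        return photo_spacing, line_spacing

    # Example: 20 mm lens, 23.5 x 15.6 mm sensor, flown at 100 m AGL
    print(plan_coverage(100, 20, 23.5, 15.6))  # approximately (11.7, 47.0)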

 

3. Sensors and Data Capture

3.1 Image Capture

UAS are mainly utilized to capture imagery, and off-the-shelf, point-and-shoot digital cameras are a popular sensor option (see Toth et al. [2015] for a comparison of cameras). Wide-angle lenses (e.g., GoPro Hero) are avoided due to high image distortion, and parsing video into still images is not recommended because frames may contain blur. Off-the-shelf cameras typically have limited spectral resolution, and reflectance calibration can be challenging, but removal of the internal hot mirror permits capture of near-infrared wavelengths (Mathews, 2015). Spectral targets with known reflectance properties placed in situ are commonly used to calibrate optical sensor measurements, as sketched below. Alternatively, sensors such as the Tetracam ADC Lite capture spectral bands matching certain Landsat bands, facilitating comparisons with satellite imagery. Other commonly used multispectral sensors include the Parrot Sequoia and MicaSense RedEdge.
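One common way to use such targets is the empirical line method, which fits a linear DN-to-reflectance model per band from at least one dark and one bright target. A minimal sketch (all DN and reflectance values are hypothetical):

    import numpy as np

    def empirical_line(dn_dark, dn_bright, ref_dark, ref_bright):
        """Return (gain, offset) so that reflectance = gain * DN + offset."""
        gain = (ref_bright - ref_dark) / (dn_bright - dn_dark)
        offset = ref_dark - gain * dn_dark
        return gain, offset

    # Mean digital numbers sampled over each calibration target in one band
    gain, offset = empirical_line(dn_dark=31.0, dn_bright=204.0,
                                  ref_dark=0.05, ref_bright=0.85)
    band = np.array([[40.0, 120.0], [180.0, 210.0]])  # toy image band (DNs)
    reflectance = gain * band + offset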

Georeferencing schemes for UAS-acquired imagery include: (1) direct, which uses known camera locations from GNSS-enabled cameras or onboard GNSS and IMU measurements stored and attached to captured images; (2) indirect, which uses GNSS-located ground control points (GCPs); and (3) a combination of direct and indirect.
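In the direct scheme, camera positions are typically recovered by interpolating the onboard GNSS track to each image's capture time. A minimal sketch (timestamps and coordinates are hypothetical; a rigorous workflow would also account for the camera-to-antenna offset and IMU attitude):

    import numpy as np

    gnss_t = np.array([0.0, 1.0, 2.0, 3.0])      # seconds since start of log
    gnss_e = np.array([500000.0, 500004.8, 500009.7, 500014.5])   # easting (m)
    gnss_n = np.array([3760000.0, 3760000.1, 3760000.3, 3760000.4])  # northing (m)

    image_t = np.array([0.4, 1.9])               # camera trigger times (s)
    image_e = np.interp(image_t, gnss_t, gnss_e) # interpolated positions to
    image_n = np.interp(image_t, gnss_t, gnss_n) # attach to each image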

3.2 Non-image Data Capture

Non-imagery applications of UAS include, for example, collecting measurements of temperature, pressure, humidity, and wind for atmospheric sampling and meteorology, or environmental surveillance using sensors that detect CO2, methane, and other gases for pipeline monitoring. Lidar sensors have been employed for terrain and 3D mapping, but sensor size, weight, and cost remain restrictive for many applications. However, advances are being made toward developing low-cost, miniaturized sensing devices.

3.3 Coordinated Data Capture

A benefit of deploying sensors onboard UAS is the potential for coordinated, self-organized data capture between two or more vehicles. Algorithms for coordinating, controlling, and systematizing distributed networks of airborne sensors are developing rapidly, allowing multiple UAS flying in a network to communicate with each other and ground stations to coordinate capture of optimally distributed spatial datasets (Namuduri et al., 2013). This type of ‘smart’, mobile UAS network will permit adaptive sampling schemes not possible with fixed, ground-based networks.

 

4. Data Processing and Analysis

The Structure from Motion (SfM) computer vision technique incorporates a series of algorithms (e.g., Scale Invariant Feature Transform [SIFT; Lowe, 2004], Bundler [Snavely et al., 2008], and patch-based multi-view stereopsis [PMVS; Furukawa & Ponce, 2010]) to match overlapping areas across multiple images with differing perspectives, identifying keypoints of the same features (equivalent to photogrammetric tie points) to generate sparse and dense point cloud reconstructions of 3D space. SfM point clouds are not inherently georeferenced, so known locations of cameras or GCPs must be incorporated to transform the point cloud to real-world coordinates, as sketched below. SfM point clouds are similar to lidar datasets, with the addition of RGB information for each point. Commonly used SfM desktop software packages include Agisoft PhotoScan, Pix4D, and VisualSFM. Cloud-based alternatives (e.g., DroneDeploy) are also available.
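Because an SfM reconstruction lives in an arbitrary model space, georeferencing amounts to estimating a seven-parameter similarity (Helmert) transform from matched points, such as GCPs identified both in the model and on the ground. A minimal NumPy sketch using the closed-form Umeyama solution (the toy GCP coordinates are hypothetical):

    import numpy as np

    def similarity_transform(src, dst):
        """Estimate scale s, rotation R, translation t with dst ~ s*R*src + t.

        src and dst are matched (n, 3) point arrays, n >= 3, not collinear.
        """
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))       # cross-covariance
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard reflections
        D = np.diag([1.0, 1.0, d])
        R = U @ D @ Vt
        s = (S * np.diag(D)).sum() * len(src) / (A ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Toy example: world frame is the model frame scaled by 2 and shifted
    gcp_model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    gcp_world = 2.0 * gcp_model + np.array([500000.0, 3760000.0, 250.0])
    s, R, t = similarity_transform(gcp_model, gcp_world)
    cloud_georef = s * gcp_model @ R.T + t   # apply to any (n, 3) point cloud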

Images can also be processed to produce very high spatial resolution orthophotos. Proper orthophoto production requires removal of radiometric effects (e.g., vignetting, brightness variation from image to image, conversion to reflectance values; see Mathews, 2015) and geometric effects (e.g., lens distortion, relief displacement; see Kelcey & Lucieer, 2012). Geometric corrections remain especially challenging when using uncalibrated sensors at low altitudes, where distortions are magnified (Mathews, 2015).
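Lens distortion is commonly modeled with the Brown-Conrady model and removed prior to or during SfM processing. A minimal sketch using OpenCV (the camera matrix, distortion coefficients, and file names are hypothetical; real values would come from a camera calibration or the SfM self-calibration):

    import numpy as np
    import cv2

    img = cv2.imread("uas_frame.jpg")              # hypothetical input image
    K = np.array([[3600.0, 0.0, 2000.0],           # focal lengths (px) and
                  [0.0, 3600.0, 1500.0],           # principal point (cx, cy)
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3
    undistorted = cv2.undistort(img, K, dist)      # remove radial/tangential
    cv2.imwrite("uas_frame_undistorted.jpg", undistorted)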

Advanced analyses use lidar data filtering and classification techniques to extract height information from SfM point clouds, either by generating Digital Terrain Models (DTMs) and Digital Surface Models (DSMs) (Fonstad et al., 2013) or by computing height metrics to characterize vegetation canopy structure and biomass (see Dandois & Ellis, 2013; Mathews & Jensen, 2013). The temporal flexibility of data collected by UAS allows for 3D/volumetric change analyses via DTM/DSM differencing and/or direct point cloud comparison. Object-based image analysis techniques are commonly applied to analyze and extract useful vector data from 2D orthophotos (Laliberte et al., 2010).
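Both height extraction and change analysis reduce to simple raster arithmetic once the grids are co-registered. A minimal sketch (the elevation values and 0.10 m cell size are hypothetical; real rasters must share the same extent, resolution, and vertical datum):

    import numpy as np

    dsm = np.array([[12.4, 13.1], [12.9, 14.0]])  # surface elevations (m)
    dtm = np.array([[10.0, 10.1], [10.1, 10.2]])  # bare-earth elevations (m)
    heights = dsm - dtm                           # e.g., canopy height model

    dsm_t2 = dsm + np.array([[0.0, -0.5], [0.2, 0.0]])      # later survey
    cell_area = 0.10 * 0.10                                 # m^2 per cell
    net_volume_change = ((dsm_t2 - dsm) * cell_area).sum()  # cubic meters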

Data collection and processing standards as well as comprehensive accuracy assessments remain in early stages. Some solutions have been proposed to optimize workflow efficiency and improve model accuracy (see Turner et al., 2012; James & Robson, 2014), but considerable uncertainty remains.

 

5. Applications

To date, most GIS&T research using UAS has been applied rather than basic. Table 1 outlines several major application areas with related works for further reading.

Table 1. Geospatial applications for UAS and related research

  • Terrain modeling: Stefanik et al., 2011; Fonstad et al., 2013
  • Geomorphological and fluvial processes: Flener et al., 2013; Dietrich, 2016
  • Vegetation structure, forestry, and ecosystem modeling: Wallace et al., 2012; Dandois & Ellis, 2013
  • Precision agriculture: Baluja et al., 2012; Mathews & Jensen, 2013
  • Land and natural resource management: Rango et al., 2009; Laliberte et al., 2010
  • Animal habitat and monitoring: Chabot et al., 2014
  • Natural disasters (wildfire, landslides): Ambrosia et al., 2003; Niethammer et al., 2012
  • Meteorology: Frew et al., 2012
  • Cultural features and archaeology: Eisenbeiss & Sauerbier, 2011

 

6. UAS and Society

As UAS usage becomes more common, ongoing discourse surrounding their role in ‘citizen mapping’ (e.g., participatory mapping and citizen science projects) as well as in ‘mapping citizens’ is critical. As of 2016, few citizen science and participatory mapping projects were engaging with aerial platforms (see Cummings et al., 2017), but as scientists increasingly recognize the utility of complementing traditional remote sensing with sensors onboard UAS, the use of UAS in citizen science projects will likely rise. Websites such as OpenAerialMap (https://openaerialmap.org/) and Dronestagram (http://www.dronestagr.am/) allow users to share UAS-acquired images online. The Humanitarian OpenStreetMap Team has crowdsourced (i.e., microtasked) the digitization of UAS-acquired imagery to support disaster recovery, and many small civilian UAS projects have been completed around the world for similar purposes.

Conversely, ‘mapping citizens’ has implications for location privacy, which concerns individuals’ claim to determine when, how, and to what extent information about themselves and their location is communicated to others (Kerski, 2016). These locational privacy protections face new challenges given the ease with which very high resolution imagery and other data can be captured from UAS. Privacy concerns surrounding UAS are fluid and evolving, and it will be important for GIS&T researchers to remain engaged in these societal questions as adoption of UAS technology increases.

Additional resources

  • Academy of Model Aeronautics (AMA) - www.modelaircraft.org
  • Association for Unmanned Vehicle Systems International - www.auvsi.org
  • Federal Aviation Administration (FAA) Unmanned Aircraft Systems Homepage - www.faa.gov/uas
  • United States Geological Survey (USGS) National Unmanned Aircraft Systems Project Office - uas.usgs.gov