Photogrammetry works by examining every pixel captured in a series of photos taken from diverse angles and triangulating its location in 3D space.
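The triangulation step can be sketched as finding the point nearest to two camera rays that sight the same pixel. This is a minimal illustration, not the production pipeline; the camera positions and target used here are made-up numbers:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the point closest to two rays (origin o, direction d)."""
    o1, d1 = np.array(o1, dtype=float), np.array(d1, dtype=float)
    o2, d2 = np.array(o2, dtype=float), np.array(d2, dtype=float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|:
    # t1*d1 - t2*d2 = o2 - o1, a 3x2 least-squares problem.
    A = np.stack([d1, -d2], axis=1)
    b = o2 - o1
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return (p1 + p2) / 2  # midpoint of the closest approach

# Two drone positions 10 m apart at 15 m altitude, both sighting a
# ground point at (2, 0, 0) -- hypothetical numbers for illustration.
cam1, cam2 = np.array([0.0, 0.0, 15.0]), np.array([10.0, 0.0, 15.0])
target = np.array([2.0, 0.0, 0.0])
print(triangulate_midpoint(cam1, target - cam1, cam2, target - cam2))
```

Real photogrammetry software repeats this for millions of matched pixels across many images at once, which is why angular diversity matters: rays that are nearly parallel give a poorly conditioned intersection.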


When manually capturing a zone for a 3D model, the key is to photograph every square foot from as many angles as possible so the software has enough information to infer the correct position of the entire surface.




Figure 1. Left: Set of 35 drone photos captured for a single stockpile at a height of 15 meters and overlap of 60%. Right: 3D model of stockpile created from captured photos.

 

Guidelines

  • Don’t fly higher than 150 ft. to keep high ground resolution
  • Don’t fly lower than 50 ft. 
    • Overlap will require too many sweeps
    • Photogrammetry software might have a hard time differentiating surfaces
  • Area swept should be no smaller than 100 ft by 100 ft to ensure correct scale

  • More photos are better than fewer

    • Taking a photo every second as you move the drone from place to place works well

  • Try to get different angles

  • Don’t let the horizon appear in images

    • Pixels that have sky or horizon in them cause problems since they are “infinitely” far away

  • Try doing sweeps at 2 different heights

  • Try doing an “orbit flight”

    • An orbit flight is a circular flight around a target where the camera is always pointing at the target

  • If there are obstructions or bridge-like structures

    • Try to get higher overlap

    • Try to get more “oblique” (angled) images

  • If there are tall objects in the capture (taller than half the flight height), increase overlap and flying height

  • Sweep a zone at least 4x wider and longer than what will be measured

  • It is useful to sweep an area the way a lawnmower would: a series of parallel lines

  • Reflective surfaces don’t do well in photogrammetry

  • The orientation of individual images does not matter; feature matching works regardless of rotation
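The sweep and orbit patterns above can be sketched as simple waypoint generators. This is a rough planning aid, assuming a hypothetical 70° horizontal field of view and a nadir-pointing camera; it is not a real flight controller API:

```python
import math

def footprint(height_ft, fov_deg=70):
    """Ground width covered by one nadir image, from flight height and lens FOV."""
    return 2 * height_ft * math.tan(math.radians(fov_deg) / 2)

def lawnmower_waypoints(width_ft, length_ft, height_ft, fov_deg=70, overlap=0.6):
    """Serpentine ('lawnmower') sweep over a width x length zone.

    Line spacing is the image footprint scaled by (1 - overlap), so adjacent
    passes share `overlap` of their coverage.
    """
    spacing = footprint(height_ft, fov_deg) * (1 - overlap)
    n_lines = max(2, math.ceil(width_ft / spacing) + 1)
    wps = []
    for i in range(n_lines):
        x = min(i * spacing, width_ft)
        # Alternate sweep direction on each pass
        ys = (0, length_ft) if i % 2 == 0 else (length_ft, 0)
        wps += [(x, ys[0], height_ft), (x, ys[1], height_ft)]
    return wps

def orbit_waypoints(center, radius_ft, height_ft, n=12):
    """Circular flight around `center`; each waypoint carries the camera yaw
    (degrees) needed to keep the lens pointed at the target."""
    cx, cy = center
    wps = []
    for k in range(n):
        a = 2 * math.pi * k / n
        x, y = cx + radius_ft * math.cos(a), cy + radius_ft * math.sin(a)
        yaw = math.degrees(math.atan2(cy - y, cx - x))  # face the center
        wps.append((x, y, height_ft, yaw))
    return wps

print(len(lawnmower_waypoints(200, 200, 100)) // 2, "sweep passes")
print(orbit_waypoints((0, 0), 80, 100, n=4)[0])
```

Under these assumptions, a 200 ft by 200 ft zone at 100 ft needs five parallel passes to hold 60% side overlap, which is why flying below 50 ft quickly becomes impractical.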


Figure 2. Left: Set of 43 drone photos captured for a single stockpile at a height of 15 meters and overlap of 60%. Right: 3D model created from captured photos.

 

Figure 3. Zoom into a set of photos taken for photogrammetry. With 60% overlap, the photos are highly redundant, which helps extract surface features.
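The redundancy from overlap can be estimated directly: with overlap o, each new photo advances only (1 - o) of a footprint, so a ground point stays in view for roughly 1 / (1 - o) photos along each axis. A back-of-the-envelope sketch, not a guarantee from any specific software:

```python
import math

def views_per_point(overlap_forward=0.6, overlap_side=0.6):
    """Rough count of photos that see a typical ground point on a grid flight.

    A point remains in frame for about 1 / (1 - overlap) consecutive photos
    along the flight line, and similarly across adjacent lines.
    """
    along = math.floor(1 / (1 - overlap_forward))
    across = math.floor(1 / (1 - overlap_side))
    return along * across

print(views_per_point())  # 60% overlap both ways -> about 4 views per point
```

More views per point means more rays to triangulate with, which is why raising overlap helps around obstructions and tall objects.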