Last month I ran a mapping party in Castle Rock, Colorado at the new Philip S. Miller Park:

Philip S Miller Park

The park was challenging for a few reasons:

  • Nothing on the map before we did the party
  • No up-to-date aerial imagery
  • Lots of footpaths in winter mud conditions

Luckily we had a bunch of enthusiastic people at the event. The footpaths were easily captured using GPS units, but the new buildings, football field and other macroscopic features were harder to capture.

Drones to the rescue!

Luckily I own a Phantom Vision 2+ drone, which looks like this:

The Phantom Vision 2+

So I sent it up to 500 feet or so and took some pictures with the HD camera. They looked like this:

Aerial photo from the drone

The image shows part of the car park, the internal access roads, the new sports building (red) and the swimming pool (beige). Having some pictures is great, but what we needed was to patch the images together so we could map on top of them. You take these warped images, each shot from some height, location, yaw, pitch and roll, and stitch them into something flat and usable.
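
For the curious, here's a minimal sketch of that rectification step using GDAL's Python bindings: pin a few pixel locations in a photo to known ground coordinates, then warp the image flat. The file names and coordinates below are made up for illustration, not the actual park data.

```python
# Minimal sketch: georeference one drone photo by pinning a few ground
# control points (pixel x/y -> lon/lat), then warp it into a flat,
# north-up image. File names and coordinates are placeholders.
from osgeo import gdal

# Pixel (column, row) locations in the photo matched to lon/lat on the ground.
gcps = [
    gdal.GCP(-104.8721, 39.3812, 0, 350, 220),    # corner of the sports building
    gdal.GCP(-104.8705, 39.3810, 0, 2890, 310),   # edge of the car park
    gdal.GCP(-104.8719, 39.3799, 0, 410, 1980),   # path junction
    gdal.GCP(-104.8702, 39.3797, 0, 2950, 2040),  # pool corner
]

# Attach the control points to a copy of the raw JPEG...
gdal.Translate("photo_gcps.tif", "DCIM_0001.JPG",
               GCPs=gcps, outputSRS="EPSG:4326")

# ...then warp it into a GeoTIFF you can trace over.
gdal.Warp("photo_rectified.tif", "photo_gcps.tif",
          dstSRS="EPSG:3857", tps=True, resampleAlg="bilinear")
```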

Enter MapWarper. This web-based tool will help you spit out that map:

The stitched drone imagery in MapWarper

You’re looking at multiple images stitched together. MapWarper is a little clunky in the workflow as it stands today: each image is stitched to OSM as the ground truth, and then you combine multiple of those into a layer. The problem comes when you have no ground reference to stitch to, which is the issue we had. It would be super useful to be able to stitch images to each other as well as to the ground, rather than having multiple images floating in free space. Still, the thing basically works, but it’s best used (and apparently intended) for single high-altitude images or old maps, not multiple images like I used.
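
If you did want to stitch the rectified photos to each other yourself, outside MapWarper, a rough sketch with GDAL would be to mosaic the individually rectified GeoTIFFs into one layer. The file names are placeholders and the nodata handling is simplistic, but it shows the shape of it.

```python
# Rough sketch: once each photo has been rectified individually, GDAL can
# mosaic them into one layer to trace over. Later files in the list are
# drawn on top where photos overlap. File names are placeholders.
from osgeo import gdal

rectified = ["photo1_rectified.tif", "photo2_rectified.tif", "photo3_rectified.tif"]

gdal.Warp("miller_park_mosaic.tif", rectified,
          dstSRS="EPSG:3857", resampleAlg="bilinear",
          srcNodata=0, dstNodata=0)  # crude: treat black collar pixels as nodata
```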

One solution would be to send the drone higher, cross your fingers it doesn’t decide to fly away or something, and take a single image that way. The downside is lower-resolution imagery. The upside is (hopefully) less distortion from the fairly wide-angle lens the Phantom Vision 2+ has.
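
To put rough numbers on that trade-off, here’s a back-of-envelope ground-resolution calculation. The camera figures are ballpark assumptions for a small 1/2.3-inch sensor, not spec-sheet values for the Phantom Vision 2+.

```python
# Back-of-envelope: how ground resolution (metres per pixel) changes with
# altitude. Camera numbers are rough assumptions, not spec-sheet values.
SENSOR_WIDTH_M = 0.0062   # ~6.2 mm wide sensor (assumed)
FOCAL_LENGTH_M = 0.004    # ~4 mm focal length (assumed)
IMAGE_WIDTH_PX = 4384     # pixels across one photo (assumed)

def ground_sample_distance(altitude_m: float) -> float:
    """Metres of ground covered by one pixel at a given flying height."""
    return (SENSOR_WIDTH_M * altitude_m) / (FOCAL_LENGTH_M * IMAGE_WIDTH_PX)

for feet in (500, 1000, 2000):
    metres = feet * 0.3048
    print(f"{feet:>5} ft: ~{ground_sample_distance(metres) * 100:.0f} cm per pixel")
```

With those assumptions, 500 feet works out to roughly 5 cm per pixel, and each doubling of altitude doubles that.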

You can go from MapWarper to editing with iD on the OSM website pretty trivially, and anyone can now use the imagery to help improve the map. So one person can go through all the imagery pain, but then everyone else can use it as if it were just any other layer. Big savings there.
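
If it helps, here’s a quick sanity check you could run before pasting a tile URL into iD’s custom background dialog: fetch one tile and make sure it comes back. The map ID is hypothetical and the tile URL template is from memory, so grab the exact one from your map’s Export tab in MapWarper.

```python
# Quick check that a warped MapWarper map serves tiles before pasting the
# URL into iD's custom-background dialog. The map ID is hypothetical and
# the tile URL template may differ -- copy the real one from the Export tab.
import math
import requests

MAP_ID = 12345                     # hypothetical MapWarper map ID
LAT, LON, ZOOM = 39.37, -104.87    # roughly Castle Rock, Colorado

# Standard slippy-map tile maths: lon/lat -> tile x/y at a zoom level.
n = 2 ** ZOOM
x = int((LON + 180.0) / 360.0 * n)
y = int((1.0 - math.asinh(math.tan(math.radians(LAT))) / math.pi) / 2.0 * n)

url = f"https://mapwarper.net/maps/tile/{MAP_ID}/{ZOOM}/{x}/{y}.png"
resp = requests.get(url, timeout=30)
print(resp.status_code, resp.headers.get("Content-Type"), len(resp.content))
```

The same template, with the z/x/y placeholders left in, is what goes into iD’s custom background field.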

So what’s important here? I think it’s a new tool in the belt for use at mapping parties (and a new set of toys to play with). You’re no longer (and haven’t been for a while) restricted to existing imagery and GPS units. For a fairly modest cost you can collect your own live imagery and make maps better, all by yourself.