Lots of people have been experimenting with making their own aerial imagery over the last few years. Technology (cameras) and platforms (anything that flies) have been coming down in cost dramatically. This is useful if you live in a disaster area or want to do something fun on the weekend.
Personally I’ve never had a need to make aerial images, as I’ve lived in areas covered by providers like Bing and Google. So all the technology — kites, drones, rectification — has mostly passed me by as something of a cute sideline. Sure, theoretically you could do it yourself, but you really need a few hundred million dollars of aircraft, cameras, people and computers to do it for any real use case.
I live in an expanding area which means imagery is dated fairly quickly. Here’s what Bing shows for my area:
We’ll jump ahead so you can see what I now have. Then we’ll back up and walk through the process:
Also, notice the difference in color: the green indicates summer. Bing’s image is better in many ways. For example, it’s much closer to straight-down and doesn’t smear the sides of buildings across the image.
How did we get here? First, it helps if you have a pilot’s license or easy access to someone who does. Then you wet-lease a plane (meaning fuel is included) and take a bunch of pictures. The plane will run around $150/hour, and you can pick up a decent DSLR for a few hundred dollars. Here’s the image we start with:
Next we go over to mapwarper.net and upload the image. Then you add a bunch of control points that map features in your image to the flat, top-down OpenStreetMap. This takes your image and flattens it out into a map you can use.
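Under the hood, this kind of control-point warping typically comes down to fitting a homography: a 3×3 matrix that maps pixel coordinates in your photo to map coordinates. Here’s a minimal sketch of that idea using the standard direct linear transform (DLT); this is an illustration of the general technique, not Map Warper’s actual implementation, and the function names are my own.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 control-point pairs via DLT.

    src: [(x, y), ...] pixel coordinates in the aerial photo
    dst: [(u, v), ...] matching coordinates on the flat map
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows to the system A·h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Map one photo pixel into map coordinates (divide out the scale w)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Four hypothetical control points: photo corners matched to map positions.
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 20), (110, 25), (105, 130), (5, 120)]
H = fit_homography(src, dst)
```

With four points the fit is exact, so `warp_point(H, 0, 0)` lands back on `(10, 20)`; with more points the SVD gives a least-squares fit, which is what you want when your clicks are imprecise.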
We’re still very far from being able to do this en masse, however. The costs and barriers to entry are many:
- You need a way to take pictures. Hexacopters, Cessnas and even kites cost money. My phone should be able to do 90% of this automatically.
- Rectification isn’t nearly as simple as it could be.
- There’s no color correction. The pixels at the edge of the image are farther from the camera than those in the middle, and the atmosphere introduces color gradients as a result.
- I didn’t see a way to stitch many rectified images together, which is a prerequisite for a full map.
- Getting from Map Warper to Potlatch to edit things in OSM is non-trivial; it should be one click.
Map Warper, OSM, Potlatch and the rest are all awesome. They’ve taken us from “impossible to make your own map” to merely “very hard to make your own map”. I’m just impatient and want “any idiot can make their own map”.
What would be wonderful is this: I point my iPhone out of the plane and take pictures. The phone knows its position and altitude, and its roll, pitch and yaw. That gives us a good start on the image location. Mix in some topography and overlap the images… and we go a long way toward making this a simple anyone-can-do-it process. The phone has a radio in it and a decent processor; it can do some of the work by itself or just upload to a service which does a lot of this automatically.
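The geometry behind that “good start” is simple. Ignoring terrain and assuming flat ground, the phone’s altitude plus the camera’s pitch and compass heading tell you roughly where the center of the frame hits the ground. A back-of-the-envelope sketch (my own hypothetical function, with pitch measured as the angle below the horizon):

```python
import math

def ground_offset(altitude_m, pitch_deg, yaw_deg):
    """Rough ground position of the image center, assuming flat terrain.

    altitude_m: height of the camera above the ground
    pitch_deg:  view angle below the horizon (90 = pointing straight down)
    yaw_deg:    compass heading of the view (0 = north, 90 = east)
    Returns (north_m, east_m) offsets from the point directly below the camera.
    """
    pitch = math.radians(pitch_deg)
    yaw = math.radians(yaw_deg)
    # Horizontal distance to where the center ray intersects the ground.
    dist = altitude_m / math.tan(pitch)
    return dist * math.cos(yaw), dist * math.sin(yaw)
```

For example, shooting from 1000 m at 45° below the horizon puts the image center about 1000 m ahead of you along your heading. It’s only a first guess — sensor noise and terrain will throw it off — but it’s plenty to seed an automatic matching step.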
On the other hand, the way imagery is collected today is based on a set of assumptions, much as vector mapping was 10 years ago:
- The images have to be perfectly rectified. We don’t need that accuracy.
- The images have to be cloud-free. We can tolerate a few clouds.
- The images have to be complete. We don’t need thousands of miles of Arizona desert, we just need new or changed places (for OSM).
- The images need thousands of paid staff. With automation and volunteers, as we’ve seen, you can sidestep a lot of that.
- The images need an IR layer so we can figure out crop type. For many use cases, we don’t need IR or near-IR. And even if we do, removing the IR filter from various CCDs is not super hard.
So, think what we can achieve in aerial imagery if we relax the constraints of today’s sources and use cheap COTS (commercial off-the-shelf) hardware (iPhones).