Archive | maps

HERE XYZ and Maps 3.0

HERE XYZ shipped yesterday. It’s hard to talk about it without referring to CARTO, so let’s do that – it’s a lot like CARTO in some ways. But really, it’s a glimpse of what GIS will look like in the future as it gets democratized and cheaper. While it fulfills similar goals to GIS – creating maps – it’s the opposite of GIS in many ways. XYZ is quicker, free (or free-ish; it looks like paid plans are coming) and far simpler. It’s built for making mashups and publishable maps today, but you can see a bunch of directions it could go in the future. What it misses today is analysis; we need something in between XYZ and QGIS that would be a real tool you could use to make decisions.

One super cool thing (for me at least) is the use of OSM everywhere.

Here’s a map I made pulling in some MSFT buildings:

It’s relatively easy to pull in GeoJSON and get it on a map. I grabbed the smallest MSFT buildings file (Washington, DC) and uploaded that. At first I thought there were Safari bugs, as many little things and prompts are missing, but switching to Chrome didn’t seem to make much difference. There are missing squares of buildings that I’m guessing either failed to import or are still importing. There’s a 1 million object limit, so it could simply have stopped importing too.

What’s happening is we’re moving to maps 3.0.

  • Maps 1.0 was paper and clunky tools, like reloading a page to pan or zoom a map. Stored in a silo on someone’s website.
  • Maps 2.0 is/was Google Maps. “free” maps, easy access, easy to add to my site but the same experience for everyone. Generally required a developer to implement.
  • Maps 3.0 isn’t here yet, but we’re starting to get hints of it with CARTO and XYZ. Maps 3.0 removes the need for a developer, and lets me customize the map, deploy it anywhere by just clicking around, and include my own data.

What we need to add here are tools that let me tell a story with the map. That means transitions, text, video and interactivity to let an end user explore the narrative. Beyond that it means tools that let me rip, mix and burn a new map. Can I take your map and add something, or mash it up with another one? Can I let end users do something other than just panning and zooming, like measuring things, dropping notes, adding more context? That’s what’s really interesting as a set of possibilities.

XYZ bugs & missing features, remembering it’s day 2:

  • Upload of .gz or .zip files looks unsupported, which is important with gigantic MSFT buildings files
  • Upload is kind of slow. I was curious where XYZ is hosted, but they have CloudFront in front of it. Maybe it’s in Europe or some other AWS region.
  • Login is HERE-only, would be nice to have social logins to make it quicker.
  • Double-click to zoom is missing(!)
  • Inertial map dragging seems to work in Chrome but not Safari.
  • shift-drag to zoom to an area is missing – a major power tool.
  • The default map zoom & location shows western Europe. It would be nice if it used IP or browser location to show something more relevant.
  • There doesn’t seem to be any indication of layer processing, though you do get an upload progress bar.
  • The relationship between layers and data is slightly confusing, in that you have to upload, then select the layer, then add it. XYZ has gone with a model of maps, layers and datasets, which feels a little inherited from GIS. Instead, I really just want to load data onto a map without these abstractions and dialogs. Just drag a GeoJSON onto the map and, boom, the data shows up. Not click-click-click-click…
  • Exporting a map gives you some iframe code, but it has no CSS; you have to add height and width yourself
  • Location search is buggy in Safari, and in Chrome it can show 5 different “Denver” results

 


Validating geodatamarket.net

Today I livestreamed the process of validating geodatamarket.net. This focus was chosen based on a quick Twitter poll with 5 votes. That’s not a lot of data to go on, but it still felt like pretty clear feedback.

I set up Google and Facebook ads at $5/day, set up email capture on the site, and wrote a Reddit post.

Why are we doing this? To test the idea as quickly and cheaply as possible. If it doesn’t resonate with anyone, then I drop the idea, shut the site down and move on to the next one.

So far I’ve spent $11 on a domain and, in the end, probably about $20 on advertising, for a total of $31. Time-wise, I spent 2.5 hours yesterday building the site and 1 hour today doing some marketing. In a day or two I’ll look at the ad statistics and review the Reddit responses, if any. That will bring the total to about 4.5 hours of work.

It’s very arguable that I didn’t need to build the website or buy the domain, which would have cut the investment down to $20 and 2 hours – very roughly half the cost. Next time!

Here’s the live stream:

 


Microsoft Encarta Atlas ’97

I have some strange fascination with old Encarta and other CD-ROM titles from 20+ years ago, and how they relate to today. How did encyclopedia and atlas products work then, and how were they different from today? I made a video so you can see how it worked.

Encarta Atlas’ main limitation was storage. So there are only secondary roads in places, and the aerial imagery is 1 or 10 miles per pixel instead of the high resolution we’re used to today.

Still, most of the UI/UX hasn’t changed between then and now. It’s fair to call Encarta Atlas the Google Maps of its time – and while Encarta didn’t support shift-drag to zoom to an area… neither does Google today. That’s about the only real thing missing on either platform.

Bing US Buildings to PostGIS

Bing released 125+ million US building footprints, blog post here and GitHub here. To do something with them…

Grab the files:
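Something like the loop below works – note the download URL pattern here is just a placeholder, the real per-state links are listed in the GitHub repo:

```python
import urllib.request
from pathlib import Path

# Placeholder URL pattern - take the real per-state links from the GitHub repo.
BASE = "https://example.com/usbuildings"
STATES = ["DistrictofColumbia", "Delaware", "California", "Texas"]

out = Path("buildings")
out.mkdir(exist_ok=True)

for state in STATES:
    dest = out / f"{state}.zip"
    if not dest.exists():
        print(f"fetching {state}")
        urllib.request.urlretrieve(f"{BASE}/{state}.zip", str(dest))
```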

Then unpack them:
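A minimal sketch, assuming each archive is a plain .zip holding one GeoJSON per state:

```python
import zipfile
from pathlib import Path

# Extract every downloaded archive into the same working directory.
for archive in Path("buildings").glob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("buildings")
```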

California and Texas are too big for GDAL so here’s a semi-broken script to split them up:
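A rough equivalent in Python – it loads the whole FeatureCollection into memory, so it needs a machine with plenty of RAM, and the chunk size is arbitrary:

```python
import json
from pathlib import Path

CHUNK = 1_000_000  # features per output file, an arbitrary choice

def split_geojson(path: Path) -> None:
    """Split one huge FeatureCollection into GDAL-sized pieces."""
    features = json.loads(path.read_text())["features"]
    for i in range(0, len(features), CHUNK):
        part = {"type": "FeatureCollection", "features": features[i:i + CHUNK]}
        out = path.with_name(f"{path.stem}_{i // CHUNK:03d}.geojson")
        out.write_text(json.dumps(part))

for name in ("California.geojson", "Texas.geojson"):
    split_geojson(Path("buildings") / name)
```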

Then assuming GDAL is installed and you have a PostGIS db:
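Something along these lines loads everything into one table – the database name, table name and connection details are assumptions, so adjust to taste:

```python
import subprocess
from pathlib import Path

PG = "PG:dbname=buildings"  # add host=... user=... password=... as needed

for geojson in sorted(Path("buildings").glob("*.geojson")):
    subprocess.run(
        ["ogr2ogr", "-f", "PostgreSQL", PG, str(geojson),
         "-nln", "us_buildings",  # target table
         "-append"],              # keep appending state after state
        check=True,
    )
```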

 

Streetview with Synchronized iPhones

Can you make Google Streetview-like images quickly and cheaply? That’s an R&D question I worked on while at Telenav after putting together the original OpenStreetView pitch.

Producing street view images isn’t trivial. Typically these are captured with dedicated hardware on dedicated vehicles driven around by paid employees. 

I remember years ago talking to people building things like this. You couldn’t use standard DSLRs because the shutters were only rated for something like 100,000 exposures. This is fine for your typical prosumer but street view vehicles would be burning through a camera every week or something. Then, the car needs a bucket-load of data storage and you put a lot of miles on the vehicle very quickly. It gets expensive quick!

OpenStreetView’s solution, and Mapillary’s, is to put a phone on the dashboard pointing forward. This gets a lot of useful information but nowhere near the 360-degree view we’re used to in street view. But, the hardware is readily available (everyone has a phone) and the people using it are working for free. So it’s a great tradeoff, really.

How to get from there to 360-degree views?

At the time, dedicated spherical cameras were expensive and hard to use. Think $500+, and you couldn’t talk to them: most had a built-in SD card and could do a few preset recording modes without GPS (because, why would you need GPS?). For a half-decent camera the cost was more like $1k+.

These prices were too high even for pro volunteers to spend. How could we drop the cost so that anybody could start taking 360-degree photos?

The obvious place to start is phones since they contain everything you need: cameras, compass, processing and a variety of radios. So, of course, I took an old iPhone and taped it to the roof of my truck, with a panoramic lens on it:

Old iPhone 4 devices can be found in bulk for ~$10 each which pulls the cost down. Taking photos as you drive or walk around resulted in images like this:

Notice that most of the image space is unused. If you unroll the donut you get a 360-degree strip, like this:

One of the few advantages of this approach is that “real” street view needs to blur things like faces and license plates. Since this strip is so low resolution, it comes pre-blurred!

You could in theory drive a car around like this and the phone could take photos and GPS points, unroll the image and upload it all in one. But… the images are pretty low resolution.
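That unrolling step is just a polar-to-rectangular remap. A sketch with OpenCV – the centre and radii below are made-up numbers you’d measure for your own lens:

```python
import cv2
import numpy as np

def unroll_donut(img: np.ndarray, cx: float, cy: float,
                 r_inner: float, r_outer: float, width: int = 2048) -> np.ndarray:
    """Map the annulus of a panoramic-lens photo onto a rectangular 360 strip."""
    height = int(r_outer - r_inner)
    theta = 2 * np.pi * np.arange(width, dtype=np.float32) / width
    radius = r_outer - np.arange(height, dtype=np.float32)  # outer edge on top
    map_x = (cx + radius[:, None] * np.cos(theta)[None, :]).astype(np.float32)
    map_y = (cy + radius[:, None] * np.sin(theta)[None, :]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# strip = unroll_donut(cv2.imread("donut.jpg"), cx=960, cy=960,
#                      r_inner=250, r_outer=900)
```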

The answer is to use more than one phone:

We can use many phones in a mount. If they all take a photo at the same time, then we can stitch them together and build a panorama. There turns out to be quite a lot of subtlety in the timing, capture, upload and stitching. The fundamental limit is the lens geometry of the camera: iPhones, like other devices, have a field of view of around 40-ish degrees. Since you need lots of overlap for a good panorama, you start to need something like 9 phones.
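The back-of-envelope arithmetic looks like this – the field-of-view and overlap numbers are illustrative, not measured:

```python
import math

def phones_needed(fov_deg: float, overlap_deg: float = 0) -> int:
    """Cameras needed to cover 360 degrees, given each camera's horizontal
    field of view and the overlap wanted between neighbours for stitching."""
    return math.ceil(360 / (fov_deg - overlap_deg))

print(phones_needed(40))      # 9  - roughly 40 degrees per phone, no margin
print(phones_needed(40, 15))  # 15 - generous overlap pushes the count up fast
print(phones_needed(65, 15))  # 8  - a wider lens brings the count back down
```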

You can get down to fewer phones by using wide-angle lenses and changing the geometry a little:

Because of the CCD layout, you get more pixels and a wider field of view in landscape.

The mounts were built with OpenSCAD. You write snippets of code (on the left) which outputs 3D shapes on the right. Here, we make some boxes and then subtract out another box to make a phone holder. Then we rotate and build many copies of them. To hold it together, there’s a thin cylinder (in blue) at the bottom. This will output a 3D file for printing.

Actually printing this into a piece of plastic turns out to be surprisingly painful. Simplify3D helps a lot. The 3D model needs to be turned into a set of commands for the printer to execute (move here, print a little bit of plastic, move over here…). Every printer is different. It takes a long time. We’re a long way from “just print this file” as we’re used to with printing on paper.

Measurements in the 3D model don’t quite come out the same in real life, either. The plastic oozes and has its own material properties, so the printer doesn’t produce exactly what you send it and may be a few millimeters off. If you print walls that are too thin, they will snap. You need to print a “raft” – a layer of plastic on the print bed to print on top of, which you later snap off.

The cycle time is pretty long. Printing something can take 5-10 hours. Then you fix something, wait another 5-10 hours and so on.

The whole process is entertaining and educational, and reminds me yet again that manufacturing physical things is hard.

The resulting panoramas aren’t too bad, as you can see above. Each phone gets different lighting conditions and the photos are projected on the inside of a sphere. What you see above is just 5 phones, or about half a pano.

The software does some magic to try and sync timing. Initially I’d hoped that since the phones are (probably, hopefully) running ntpd, they’d have pretty synchronized clocks. Wrong! Instead, a laptop acts as a thin server and runs the WiFi network all the phones are connected to. Each phone runs an app which wakes up and connects. The server says something like “let’s take a photo in 4 seconds” and the cameras all sync to this and take a photo at the same time.

They then connect again and upload their picture and a GPS point. This is nice as you get, say, 9 GPS readings per pano. Then they start again to take another set of photos.
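The scheduling idea, stripped to its essence, looks something like the sketch below. This isn’t the real client/server code (that’s in the repo linked at the end); the port and message format are made up:

```python
import json
import socket
import time

PORT = 9999  # made-up port number

def take_photo() -> None:
    print("click")  # stand-in for the actual camera capture on the phone

# Laptop side: tell every phone on the WiFi network to fire in `delay` seconds.
def announce(delay: float = 4.0) -> None:
    msg = json.dumps({"delay": delay}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", PORT))

# Phone side: wait for the announcement, count down, shoot.
def listen_and_capture() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", PORT))
        msg, _ = s.recvfrom(1024)
    time.sleep(json.loads(msg)["delay"])
    take_photo()
```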

The server software would then (and this is where it’s incomplete) take all these photos, build a pano and upload it somewhere. The panos I built used autopano-sift to find overlaps in the images, but we could have taken compass readings too and used those alone, or in conjunction, to build the panoramas.
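Using compass headings for coarse placement would be as simple as mapping each phone’s heading to a horizontal offset in the output image – a sketch of the idea, not what the pipeline actually did:

```python
def column_for_heading(heading_deg: float, pano_width_px: int,
                       reference_deg: float = 0.0) -> int:
    """Horizontal pixel position for an image centred at a given compass
    heading, relative to whichever heading we call the left edge."""
    return int(((heading_deg - reference_deg) % 360) / 360 * pano_width_px)

# Nine phones spaced 40 degrees apart across a 7200px panorama:
for heading in range(0, 360, 40):
    print(heading, column_for_heading(heading, 7200))
```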

The finished image doesn’t look bad, as you can see. But it’s long and thin and has to crop the top and bottom off the images. The full pano would be much longer and thinner.

As the project progressed, two things happened.

  1. We started getting further from our goal (cheap, simple panos), not closer. Long, thin pano image strips aren’t 360 views; you can’t look up and down. And the cost and complexity kept going up: 3D printing, software to hang everything together (targeting an old iOS version, since these are iPhone 4s), car mounts, charging 9 phones at once…
  2. Readily available commercial solutions came down in price and complexity. Moto and Essential phones now have cheap panorama attachments, for example. They tend to use two fisheye lenses back-to-back in a small consumer package.

So, while this was an interesting R&D experiment and a lot was learned it ultimately didn’t work out. You can find all the code for the server, iOS client and 3D files here.

A Digital Globe

“Energy Flux,” data source: National Geospatial-Intelligence Agency, September 2000.

Crowdsourcing, as a term, has been around for something like 12 years according to Wikipedia. OpenStreetMap is a little older, and the idea itself stretches back arbitrarily far – Wikipedia thinks it goes back to the 1714 Longitude Prize competition. That seems like a stretch too far, but in any case, it’s been around a while.

The ability to use many distributed people to solve a problem has had some obvious recent wins like Wikipedia itself, OpenStreetMap and others. Yet to a large degree these projects require skill. You need to know how to edit the text or the map. In the case of Linux, you need to be able to write and debug software.

Where crowdsourcing is in some ways more interesting is where that barrier to entry is much lower. The simplest way you can contribute to a project is by answering a binary question – something with a ‘yes’ or ‘no’ answer. If we could ask every one of the ~7 billion people in the world if they were in an urban area right this second, we’d end up with a fair representation of a map of the (urban) world. In fact, just the locations of all 7 billion people would mimic the same map.

Tomnod is DigitalGlobe’s crowdsourcing platform, and today it’s running a yes/no campaign to find all the Weddell seals in its imagery of the Antarctic.

The premise is simple and effective: repeatedly look for seals in a box. If there are seals, press 1. If not, press 2. After processing tens of thousands of boxes you get a map of seals, parallelizing the problem across many volunteers.
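Aggregating those clicks is conceptually just a majority vote per image chip. A toy version – the vote threshold is arbitrary and the real pipeline is surely smarter:

```python
from collections import Counter, defaultdict

def aggregate(votes: list[tuple[str, bool]], min_votes: int = 3) -> dict[str, bool]:
    """votes is a list of (chip_id, has_seals) answers from volunteers."""
    by_chip: dict[str, list[bool]] = defaultdict(list)
    for chip_id, has_seals in votes:
        by_chip[chip_id].append(has_seals)
    return {chip: Counter(answers).most_common(1)[0][0]
            for chip, answers in by_chip.items()
            if len(answers) >= min_votes}

print(aggregate([("tile_42", True), ("tile_42", True), ("tile_42", False),
                 ("tile_7", False), ("tile_7", False), ("tile_7", False)]))
```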

Of course, it helps if you have a lot of data to analyze, with more coming in the door every day. There aren’t that many places in the world where that’s the case and DigitalGlobe is one of them, which is why I’m excited to be joining them to work on crowdsourcing.

Crowdsourcing today is pretty effective yet there are major challenges to be solved. For example:

  • How can we use machine learning to help users focus on the most important crowd tasks?
  • How can crowds more effectively give feedback to shape how machine learning works?
  • Why do crowds sometimes fail, and can we fix it? OpenStreetMap is a beautiful display map yet still lacks basic data like addresses. How can we counter that?

These feedback loops between tools, crowds and machine learning to produce actionable information are still in their infancy. Today, the way crowds help ML algorithms is still relatively stilted, as is the way ML makes tools better, and so on.

Today, much of this is kind of like batch processing of computer data in the 1960s. You’d put some code and data on punch cards, ship them off to the “priests” who ran the computer, and get some results back in a few days. Crowdsourcing in most contexts isn’t dissimilar: we make a simple campaign, ship it to a Mechanical Turk-like service and then get our data back.

I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.
And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds. ~ Steve Jobs

In the future, the one I’m interested in helping build, the links between all these things are going to be a lot more fluid. Computers should serve us, like a bicycle for the mind, to enhance and extend our cognition. To do that, the tools have to learn from the people using them, and the tools have to help make the users more efficient.

This is above and beyond the use of a hammer to efficiently hit nails into a piece of wood. It’s about the tool itself learning, and you can’t do that without a lot of data.

This is all sounding a lot like Clippy, a tool to help people use computers better. But Clippy was a child of the internet before it became the internet we have today. Clippy wasn’t broken because of a lack of trying, or a lack of ideas. It was broken by a lack of feedback. What’s the difference between Clippy and Siri or “OK, Google”? It’s feedback. Siri gets feedback from billions of internet-connected uses every day, whereas Clippy had almost no feedback to improve from at all.

Siri’s feedback is predicated upon text: lots and lots of input and output of text. What’s interesting about DigitalGlobe’s primary asset for crowdsourcing is all the imagery, of a planet that’s changing every day. Crowdsourcing across imagery is already helping in disasters, scientific research and 1,001 other fields with some simple tools on websites.

What happens when we add mobile, machine learning and feedback? It’ll be fun to find out.

OpenLocate

Your phone knows where it is thanks to a suite of sensors that basically try to measure everything they possibly can about their environment. Where does the GPS think I am? What orientation is the device in? What WiFi networks can I see? What are the nearby Bluetooth devices? Have I been moving around a lot lately, accelerometer? What cell phone networks am I connected to?

Unless you’re standing in a field in Kansas with a clear view of the sky for ten minutes (so your GPS has lots of time to settle), your location will be questionable.

The original iPhone used WiFi network data to figure out where it was, because it didn’t include a GPS. Skyhook (I think it was…) drove cars around major cities sniffing for networks while recording their geolocation. An iPhone could then look up its location by comparing the networks it could see to the database of network locations. It could also add networks it saw at the same place that weren’t yet in the database.
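In its simplest form, that lookup is a weighted average over access points that have already been surveyed. A toy sketch – the BSSIDs and coordinates are invented, and weighting by raw signal strength is a big simplification of how Skyhook-style systems actually work:

```python
SURVEYED = {  # BSSID -> (lat, lon), invented values
    "aa:bb:cc:00:00:01": (37.7793, -122.4192),
    "aa:bb:cc:00:00:02": (37.7790, -122.4185),
    "aa:bb:cc:00:00:03": (37.7797, -122.4201),
}

def estimate(visible):
    """visible maps BSSID -> signal strength in dBm (-40 strong, -90 weak)."""
    known = {b: rssi for b, rssi in visible.items() if b in SURVEYED}
    if not known:
        return None  # nothing recognised: fall back to GPS or cell towers
    weights = {b: 1.0 / max(1.0, -rssi) for b, rssi in known.items()}
    total = sum(weights.values())
    lat = sum(SURVEYED[b][0] * w for b, w in weights.items()) / total
    lon = sum(SURVEYED[b][1] * w for b, w in weights.items()) / total
    return lat, lon

print(estimate({"aa:bb:cc:00:00:01": -45, "aa:bb:cc:00:00:02": -70}))
```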

As phones added all kinds of sensors, these databases grew and became free-floating associations of place information. We can now correlate almost anything with where you are so that if the GPS doesn’t work (because you’re inside a building, say), devices fail-over to what other clues they have to figure out where you are.

Integrating all this information is still a challenge, especially if you’re driving around a major city. The reliability of all the location signals is questionable, as Pete Tenereillo outlined in a recent LinkedIn post. Driving around San Francisco, you’re still subjected to the map jumping all over the place, even with high-end phones and the latest software.

Users experience this at the other end too, when you see your Uber or delivery driver jumping around the map on their way to you:

As well as finding your location, many apps want to store it too. There are 1,001 ways to do that: different amounts of data, different formats, different places to send it. What ends up happening, quite reasonably, is that various location-based app developers capture and store location data in many different ways, and there are paid-for APIs and SDKs to help with pieces of the puzzle.
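As one illustration of how small the core record really is, a capture could be as simple as the sketch below – the field names are invented for the example, not any particular SDK’s actual schema:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class LocationFix:
    latitude: float
    longitude: float
    horizontal_accuracy_m: float
    timestamp_unix: float
    source: str     # "gps", "wifi", "cell", ...
    device_id: str  # anonymized identifier

fix = LocationFix(39.7392, -104.9903, 12.0, time.time(), "wifi", "anon-1234")
print(json.dumps(asdict(fix)))
```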

What’s changed over time is the value of this data. Aggregating vast amounts of anonymized location data can help with use cases such as building base maps. If you take all the GPS traces of everyone every day, you can figure out where all the roads are, their speed limits and so on. This data is equally valuable for other uses, advertising and predicting stock prices being two examples. If you know how many people went to Walmart this week, you have an indication of their stock value. Things like this appear to have driven the new $164M round for Mapbox – “Mapbox collects more than 200 million miles of anonymized sensor data per day”.

What’s lacking is an open and standardized way to capture and store this data. Enter OpenLocate, an open iOS and Android SDK to simplify capture and storage of location data.

It’s supported by a long list of backers and it should remove a bunch of work when developing anything location-based, much as Auth0 removes having to set up custom authentication. For more, see the announcement blog post here!

Eclipse Poster Kickstarter

I had just wrapped up the last Kickstarter (details here) when the fine people at NASA put out this super accurate map of the eclipse in August. I put together a small Kickstarter to print as many as possible (they’re done at cost!) – learn more about it here.

The map is incredibly accurate, right down to using a topographic model of the moon… will be interesting to see if it succeeds!

 

Kickstarter almost funded

It’s very humbling to look at this graph of funding over the last few days for the OpenStreetMap Stats Kickstarter:

I had expected the whole thing to fail; now it looks like it’ll succeed. I was once asked in a job interview how much failure I’d had recently. The idea was that if you’re not failing you’re not really trying – if everything is a success then you can’t be pushing the envelope.

I figured asking for $1k for a statistics site that’s relevant to a minority of a minority in the world was going to be too much to ask for. In the grand scheme of things it’s not a whole lot of cash, but still. And yet, here we are.

Speaking of failure, “failure” itself is the wrong way to model how these things work. Scott Adams has called it “having a system” instead of “goals”. Other people have called it “failing forward”. Either way – the basic idea is that whatever happens you want to win. Adams wrote a whole book about this:

In this case, if the Kickstarter fails then I can shut the project down. This for me is a clear win. I get more time and one less distraction. I don’t have to pay for the hosting any more. I also learn that tiny kickstarters aren’t going to work and not to bother trying them again in a similar context.

On the other hand, if it succeeds that’s great too. I can dedicate the time to fix the site, the hosting is paid for and it proves that there are people out there who care about it.

Setting up situations like this can be enormously beneficial – where you win either way. But, it’s still hard since my lizard brain wants to avoid anything that looks like failure and being judged by those who see it in that way.

There are plenty of smart, educated people out there who think Amazon’s lack of profit is a “failure” for example. I think it’s beautiful. For a start, the definition of “profit” is “we have no idea what to do with the money so we’ll give it to you”. Amazon isn’t running out of ideas worth funding. Second, if they spend all the notional profit then they don’t have to pay tax on it and get some percentage advantage via that. Reinvesting in this way for a few decades leads to some spectacular growth.

This all leads to an idea that’s almost too tantalizing to verbalize: maybe it’s possible to live by doing Kickstarter after Kickstarter? The idea is insanely fun and the implications profound. If it’s possible to raise $1k in a week, then that would lead to $52k/year in revenue, supposing you had 52 great ideas. Perhaps more likely are $10k Kickstarters every 2-4 weeks, or $100k Kickstarters every month or two. Even with some number of them failing, plus costs, it should still be possible to live using this method.

OpenStreetMap Stats Kickstarter

I’m attempting to raise $1k in a week via Kickstarter to fix the OpenStreetMap Stats site.

The site lets you explore OSM data by country, time and data type:

Sadly it’s suffered bit rot, and some countries are broken and not updating. The $1k goes toward fixing it, open-sourcing it and hosting it for a year or two. Otherwise, it gets canned.

So far it’s raised $163 with 6 days to go.
