ISPRS Beijing

I’ve pretty much recovered from traveling to Beijing for the ISPRS Congress, so thought I’d post a few impressions.

First, the obvious: Beijing is an amazing place. So many people, so much energy. There are more high-rise buildings, row after row, than in any other city I’ve ever seen. The architecture varies widely, from incredibly tacky neo-Chinese to some creative new structures (below) that would be fairly challenging to automatically extract and model from lidar.

Cloud Building

The Congress itself was good, although there were no big technical breakthroughs or earthshaking announcements. It seems that this is a period of consolidation, of trying to integrate the developments of the past few years into commercial products. Software packages continue to add features, mostly dealing with providing/ingesting web services and displaying them on a “digital earth.” Some of the main research topics included lidar exploitation, image sequence processing, and urban modeling; much of the research recreated work from the computer vision community. A highlight for me was Wolfgang Förstner’s plenary session talk, which stressed the need to work toward long-term research goals and significant problems, instead of short-term papers.

There were a large number of Chinese papers, of varying quality. The Chinese universities are still recovering from the Cultural Revolution, and judging from some of the students I heard and talked to, they are making progress.

Our Chinese hosts at the conference were very friendly and helpful. The venue was a bit small for the conference, but the location near the Olympic stadium was interesting and allowed us to watch the preparations still underway.

H. Dell Foster passes

H. Dell Foster passed away last Friday; with his passing, the photogrammetric community has lost another of its pioneers.

I went to work for Dell in 1980 after grad school. He was re-starting his photogrammetric instrument business, after selling his earlier company to Keuffel & Esser several years before, and had ideas for new types of photogrammetric instruments as well as a number of other projects. I thought I knew what I was getting into–I had no idea!

Dell was a true genius, an over-used term perhaps, but I’ve never seen anyone with his ability to identify potential applications and visualize elegant solutions to them. His early analog instruments used optics and gears to solve the photogrammetric equations. With the advent of digital logic and minicomputers, his later designs integrated software, hardware, and electronics with an effectiveness that few others could equal. The ARME (Automatic Reseau Measuring Equipment) combined a highly-accurate photo stage mounted on air bearings and reseau measurement software (by Duane Brown) to produce a successful automated production instrument. The K&E DSS-300 analytical plotter used an innovative stage positioning system based on an infrared grid etched on a glass plate instead of the standard linear screw or optical encoders, thereby eliminating axis non-orthogonality and scale differences. He built one of the first large digitizing tables accurate enough for cartographic use as part of a digital mapping system that anticipated many of the features in current GIS software.

You have to remember that Dell was self-taught. After having problems with the mapping cameras he bought, he decided he could build a better one and taught himself optics and machining so he could build a camera to his own specifications. He taught himself photogrammetry by working at a mapping company, and used that knowledge to build better stereo-plotters and rectifiers. Once he identified something that appeared useful or interesting, he would work at it and tinker with it until he understood it.

Dell was a complete optimist. Given the setbacks he had and the obstacles he faced, many people would have succumbed to bitterness and defeat. Instead, he could always see a way to continue or a good side to whatever came along. Optimism is a necessary trait when facing technical challenges–you have to believe that there’s a solution and that you will find it before time and/or money run out, but Dell’s optimism extended across his whole life. He always saw the best in people and was unquestioningly loyal to all his employees, who returned that loyalty by following Dell through his different companies and situations.

I said before that I had no idea what I was getting into when I went to work for Dell. Indeed, I had no idea of how lucky I was going to be, to work closely with someone with Dell’s depth of technical knowledge and breadth of creativity. It was an experience I will always remember.

Obituary, in the San Antonio Express-News:

http://www.mysanantonio.com/news/obituaries/stories/MYSA.051408.METRO4BObitFoster.337164d.html

The main problem with digital mapping…

is that you have to use software. I’ve become increasingly frustrated with the current state of photogrammetry and mapping software. As the race to add features intensifies, especially to software whose internals were written several years and operating systems ago, we’re stuck trying to remember which functions in which package actually work.

A recent example: I needed to change the vertical datum on some shapefiles, and tried the logical choice of software for dealing with shapefiles. (I won’t name names, but the company’s initials are E.S.R.I.) After pulling up the appropriate dialog box and selecting the datums, I hit go and waited for the results, which came up in just a minute. The only problem was, they were wrong—there was no elevation change. I checked the settings and tried again, with the same result. This time I looked at the command line it generated; oddly enough, it assumed a shift of 0 meters between the WGS84 ellipsoid and the geoid. Apparently it was supposed to do that. After digging through the “help”, I finally went to the support page and discovered, buried in the FAQ, that even though all the controls were there to transform shapefiles, the functionality had not actually been implemented. I was speechless, except for a few very choice, very necessary words.
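For the record, the conversion the software skipped is conceptually simple. Here’s a minimal Python sketch of it; the `geoid_undulation` stub and its constant are hypothetical stand-ins for a real geoid model such as EGM96, not anyone’s actual implementation:

```python
def geoid_undulation(lat, lon):
    """Stub: return the geoid undulation N (meters) at lat/lon.
    A real implementation would interpolate an EGM96/EGM2008 grid;
    this constant is purely illustrative."""
    return -31.5

def ellipsoidal_to_orthometric(h_ellipsoidal, lat, lon):
    """Orthometric height H = ellipsoidal height h - geoid undulation N."""
    return h_ellipsoidal - geoid_undulation(lat, lon)

H = ellipsoidal_to_orthometric(200.0, 29.4, -98.5)
print(H)  # prints 231.5 -- clearly not a zero shift
```

Worldwide, N ranges roughly from -100 m to +85 m, so treating it as zero is not a subtle error.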

I suppose that after years of exposure to Microsoft products, companies assume we’ve all been desensitized to software failures and have low expectations for the functionality between crashes, but this is unbelievable. It’s one thing not to implement this relatively simple functionality (which one would think would be very useful for allegedly state-of-the-art GIS software), but it’s not that hard to remove buttons that don’t work. I guess that’ll be in the next version.

Interactive resection in Google Earth

The newest version of Google Earth (4.2) has an “Add Photo” feature (http://earth.google.com/intl/en/userguide/v4/ug_mapfeatures.html#add_photos) which allows the user to interactively align oblique imagery with the underlying image base, in other words, an interactive resection. There are some screen captures of the process at

http://www.ogleearth.com/2007/08/photooverlays_f.html .


Doing a resection interactively is a lot harder than it would seem. I messed around with it some for terrestrial imagery quite a while back. The first thing I discovered was that the controls need to function in the camera coordinate system, not the world system. That way, you can move in-out or left-right, or rotate left-right, up-down, etc., and predict which way the image is going to move or rotate. The GE tool, by contrast, is parameterized in object-space X, Y, Z, heading, tilt, roll, and field of view (instead of focal length).
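The coordinate-frame point can be made concrete. Below is a sketch (my own illustration, not GE’s code) of applying a control increment expressed in the camera frame to a world-space camera position; the axis conventions and rotation order are assumptions chosen for the example:

```python
import numpy as np

def rotation_from_angles(heading, tilt, roll):
    """Build a world-from-camera rotation from heading, tilt, roll (radians).
    The axis assignments here are an assumption, not Google Earth's actual ones."""
    ch, sh = np.cos(heading), np.sin(heading)
    ct, st = np.cos(tilt), np.sin(tilt)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])   # heading about Z
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])   # tilt about X
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])   # roll about Y
    return Rz @ Rx @ Ry

def move_in_camera_frame(position, heading, tilt, roll, delta_cam):
    """Apply a control increment expressed in the camera frame
    (e.g. [right, up, back]) to the world-space camera position."""
    R = rotation_from_angles(heading, tilt, roll)
    return position + R @ np.asarray(delta_cam, dtype=float)

# With zero angles the camera frame coincides with the world frame:
p = move_in_camera_frame(np.zeros(3), 0.0, 0.0, 0.0, [1.0, 0.0, 0.0])  # p == [1, 0, 0]
```

Because the increment is rotated into the world frame before being applied, “move right” always shifts the view the same way on screen, no matter where the camera is pointing.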


The other difficulty is that, as far as I know, no one has developed a procedure for manually orienting an arbitrary oblique image. It’s not too hard to get close, but very difficult to determine which virtual knob to tweak to take out the last bit of error.

Why Google Earth™? Why not?

One of our marketing guys showed me an email the other day asking him why everything isn’t being done in Google Earth™ (GE) these days, instead of all our expensive GIS software. Well, why not? It’s definitely one of the coolest things I’ve ever wasted an internet packet on. Like most of you, I’m a map nerd. I have piles of maps from old trips, maps on the walls, atlases, etc. GE lets me do that online. If I hear about someplace in the news, I can find it. If I want to look at random people’s photos of random places, I can do it. If I get bored at work—well, never mind…

Google Earth, with imagery of everywhere available across broadband internet connections, has made map nerds of everyone, from housewives to managers, while the participatory aspect embodied by mashups has established whole new applications and users of geospatial information.

So, why not use it for everything? Why do we still buy expensive GIS and photogrammetric systems? Every tool has its intended purpose; in this case the tool also includes the associated data resources. To determine suitability we need to understand both the tool and its included datasets.

The way to think of GE is as a map, more along the lines of a “you are here” map, more representational than metric. The imagery background, with its color and detail, gives a very authoritative impression. However, there are no guarantees on how accurate the imagery is and no consistent way to obtain the metadata for any particular image. Is it uncorrected, approximately rectified, or precisely orthorectified to make it the equivalent of a map? Is it from a satellite or from aerial sensors? When was it acquired? In terms of pure representation it doesn’t matter; in terms of metric accuracy, if you don’t know the characteristics of the imagery, it’s unusable.

Another issue is the unevenness of image coverage. Many parts of the world are covered with recent sub-meter color imagery, while others are not. Our family farm in eastern Kentucky, for example, has only 5-meter imagery. That may be for security reasons, although I doubt it.

The inclusion of additional data layers, especially mashups of several layers, can add to the uncertainty. Again, there’s usually no way to determine the metadata of the additional layers.

There are absolutely no guarantees of availability: of what types of data are available at any location, that the service will be accessible at any given time, or that Google will continue to provide GE under the current terms and conditions. It’s unlikely that Google will change a spectacularly successful business model, but nothing is certain on the Internet.

The main thing that makes GE a representational tool is that it has no analysis or processing capabilities, with the exception of SketchUp for constructing 3D building models. This is one of the factors driving the number of importers: data must be produced and processed elsewhere. Processing operations, such as resampling or reprojection, or GIS analyses, such as proximity or intersection, must be done in external tools and the results imported into GE.

GE is optimized for distribution of processed information. There’s no supporting structure, such as a geodatabase, to maintain and select current data according to user requirements. KML files can be passed from user to user, but there is no version control or selection mechanism to maintain consistency among users. Note that while the GE enterprise version supports the distribution of enterprise data across an organization and to its customers, it again assumes that any processing and preparation occurs before data is loaded.

Please don’t take this as criticism of Google or Google Earth. As I said, I love it. It has introduced a huge community to geospatial concepts and imagery (I recently heard Michael Jones, CTO of GE, say that GE’s users would constitute the ninth-largest country in the world). We must be careful, however, that users understand the characteristics of the data and the implications of those characteristics.

For more information, see http://earth.google.com.
An interesting article on the technology behind GE is at
http://www.realityprime.com/articles/how-google-earth-really-works

This article was originally published in the ASPRS Potomac Region newsletter (http://www.asprspotomac.org/).

Proposals and vacations…

Sorry for the absence–a couple of weeks of a proposal from hell, followed by a great couple of weeks in Mexico (Veracruz and Xalapa). I highly recommend both cities, although you may want to do a better job of scheduling around hurricanes than we did.

Wait a minute–you did notice I was gone, right? 🙂

PhotoSketch

Most of you are probably familiar with SketchUp, the application for building 3D models for use in Google Earth. SketchUp uses perspective geometry to assist the user in constructing a model from a single image. PhotoSketch is described as a cross between SketchUp and PhotoSynth, in that it uses multiple views for modeling and texturing and can perform feature matching between the views to determine the image orientations. Image features are generated and matched using the SIFT (Scale Invariant Feature Transform) algorithm, then the matches are refined using RANSAC. Once matched features are obtained, relative orientations between successive images are computed; when all images are oriented, a final bundle adjustment produces a consistent solution. Once the images are oriented, the user can modify the model or apply phototextures from the various images. There doesn’t appear to be a published paper on this yet, but there is a video presentation. The developer, George Wolberg, compares PhotoSketch to standard photogrammetric techniques, which he characterizes as manual, tedious, and hard to perform with multiple images. Some of us would say that when you’re doing bundle adjustment (which is, of course, a technique developed within the photogrammetric community), you’re doing photogrammetry.
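The RANSAC refinement step in that pipeline follows a standard sample-score-refit pattern. Here is a hedged sketch of it in plain NumPy, with a 2D similarity transform standing in for the full relative-orientation model (the SIFT matching itself is assumed to have already produced the candidate point pairs):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity (scale + rotation + translation) from
    point pairs, solving dst ~ A @ src + t via the complex-number closed form."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    denom = (sc ** 2).sum()
    a = (sc * dc).sum() / denom                                    # s*cos(theta)
    b = (sc[:, 0] * dc[:, 1] - sc[:, 1] * dc[:, 0]).sum() / denom  # s*sin(theta)
    A = np.array([[a, -b], [b, a]])
    t = mu_d - A @ mu_s
    return A, t

def ransac_similarity(src, dst, iters=200, tol=1.0, rng=None):
    """RANSAC: repeatedly fit on minimal samples, keep the largest consensus
    set, then refit on all inliers -- the same refine-by-consensus idea
    applied to SIFT matches before bundle adjustment."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)  # 2 pairs fix a similarity
        A, t = fit_similarity(src[idx], dst[idx])
        err = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    A, t = fit_similarity(src[best_inliers], dst[best_inliers])
    return A, t, best_inliers
```

The same loop works whatever the model is; a full structure-from-motion pipeline would swap in epipolar geometry for the minimal fit and feed the surviving matches to the bundle adjustment.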