The University of Southern California Interactive Media Team have recognised the difficulty of aligning photographs in Google Earth for PhotoOverlays and have made it the target of their Viewfinder project. They state their objective as "to provide a straightforward procedure for geo-locating photos of any kind" and specify that "a 10-year-old should be able to find the pose of a photo in less than a minute".
Aligning PhotoOverlays in Google Earth can be a very time-consuming job. For example, my Abbey Road and Gliding PhotoOverlays were very tricky to set up, even with the help of Flickr photos showing me exactly where the camera was for the former, and a bit of messing about with ImageModeler for the latter.
To get a good match, not only do you have to figure out the position on the map where the photograph was taken, you also have to determine how high off the ground the camera was, its orientation (pitch, roll and yaw) and the characteristics of the lens used (chiefly its field of view). With so many variables you can end up tweaking for ages and still not get things exactly right.
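For reference, each of these variables maps directly onto an element of a KML `<PhotoOverlay>`: the camera position and orientation live in `<Camera>`, and the lens field of view in `<ViewVolume>`. A minimal sketch, with made-up coordinates and angles, looks something like this:

```xml
<PhotoOverlay>
  <name>Example overlay (hypothetical values)</name>
  <Camera>
    <longitude>-0.177</longitude>  <!-- where the photo was taken -->
    <latitude>51.532</latitude>
    <altitude>1.6</altitude>       <!-- camera height -->
    <heading>45</heading>          <!-- yaw, degrees from north -->
    <tilt>90</tilt>                <!-- pitch; 90 = looking at the horizon -->
    <roll>0</roll>
  </Camera>
  <ViewVolume>                     <!-- lens characteristics -->
    <leftFov>-25</leftFov>
    <rightFov>25</rightFov>
    <bottomFov>-16.7</bottomFov>
    <topFov>16.7</topFov>
    <near>10</near>
  </ViewVolume>
  <Icon>
    <href>photo.jpg</href>
  </Icon>
</PhotoOverlay>
```

Every one of those numbers has to be right (or nearly so) before the photo lines up with the terrain, which is why hand-tweaking takes so long.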
The Viewfinder team have focussed on two 'pose finding' methods. The 2D-to-2D method starts with the user selecting the position from which the photo was taken on Google Maps, then using a browser-based tool that feeds back the equivalent Google Earth view to refine the position and orientation of the camera. A screenshot from Google Earth is then loaded into a second tool in which the translation and scaling of the photo can be fine-tuned to achieve the best match.
Even more amazing is the 2D-to-3D method, in which a 3D model of an element in the scene is brought into correspondence with the photograph by dragging model vertices to their projected positions, in a manner reminiscent of Canoma.
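Under the hood, this kind of 2D-to-3D fitting amounts to finding the camera pose that minimises the distance between where the model's vertices project on the photo and where the user dragged them. A toy sketch of that scoring step (pure Python, with a deliberately simplified pinhole camera fixed at the origin, and entirely hypothetical numbers) might look like:

```python
import math

def project(point, focal):
    """Pinhole projection of a 3D point, camera at the origin looking down +z."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def reprojection_error(vertices, targets, focal):
    """RMS pixel distance between projected vertices and the dragged 2D targets."""
    total = 0.0
    for vertex, target in zip(vertices, targets):
        u, v = project(vertex, focal)
        total += (u - target[0]) ** 2 + (v - target[1]) ** 2
    return math.sqrt(total / len(vertices))

# Two cube corners 5 units away, seen with a 500-pixel focal length:
vertices = [(1.0, 1.0, 5.0), (-1.0, 1.0, 5.0)]
targets = [(100.0, 100.0), (-100.0, 100.0)]  # where the user dragged them
print(reprojection_error(vertices, targets, 500.0))  # 0.0 for a perfect fit
```

A real solver would vary position, orientation and field of view together to drive this error towards zero; the point here is just that each dragged vertex adds a constraint on the pose.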
The Viewfinder team's progress report makes fascinating reading, and is hopefully a taste of things to come. PhotoSketch isn't mentioned in the report as a related program, but I think there would be obvious benefits to linking it with Viewfinder.