The creators have gone to some trouble to allow old browsers access to the data, too. Although their solutions don't always work (my Netscape 4 wants to be resized before showing any images, but this browser was born a troublemaker; more seriously, Netscape 3 doesn't show any map at all), this doesn't affect the basic principle: this is a useful application that could work in all browsers.
Simon and Dave originally thought it used XMLHTTP, not an unreasonable idea when confronted with a working data retrieval system silently running in the background of a web page. One of the developers pointed out that the mapplication doesn't use it. He's right, and yet he's not totally right.
I haven't studied the source code closely, but it seems the mapplication requests the necessary images in the ancient and hallowed mouseover tradition ([someImgElement].src = 'pix/someImg.jpg'). Simple and efficient.
(Incidentally, I was gratified to see my event target, mouse position and Find Position scripts right at the top of the source code.)
As we all know, ancient mouseover scripts preloaded their images; that is, they sent HTTP requests to the server to fetch all images that the page might conceivably need. So even though map.search.ch doesn't use modern XMLHTTP, Simon and Dave were correct after all: it uses an ancient and reliable subset of the same technique, silent background HTTP requests. This subset can only retrieve images, but that's quite good enough for a mapplication.
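To make that concrete, here's a minimal sketch of the technique; the tile names and image name are my own invention, not taken from the map.search.ch source:

    // Preloading: every new Image() fires a silent HTTP request, so the
    // browser has the files cached before the page needs them.
    var tileSrcs = ['pix/tile00.jpg', 'pix/tile01.jpg', 'pix/tile10.jpg'];
    var tileCache = [];
    for (var i = 0; i < tileSrcs.length; i++) {
        tileCache[i] = new Image();
        tileCache[i].src = tileSrcs[i];
    }

    // Retrieval/display: assigning a new src to a visible image (here an
    // <img name="mapTile">) either takes the file from the cache or
    // silently fetches it from the server.
    document.images['mapTile'].src = 'pix/tile11.jpg';

That last line is the entire "data retrieval system": no XMLHTTP anywhere.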
I feel there's some confusion over terminology that ought to be settled first. Simon calls the map a DOM application, while Dave calls it a DHTML application. I disagree with them both and will call it a data retrieval application which uses a DHTML interface layer to present the data to the user.
Which forces me to define these three kinds of scripts: DOM scripts restructure the document itself, DHTML scripts change the interface in response to user actions, and data retrieval scripts silently fetch new data from the server.
Client-side silent data retrieval scripts could very well become the hit of 2005, but there are a number of technical hurdles to be cleared first.
Roughly speaking, I see three viable techniques for client-side silent data retrieval: XMLHTTP requests, hidden iframes, and the image-loading trick we just saw.
XMLHTTP is hot, but is it better than iframes for data retrieval purposes? At the moment I have no idea, but it seems that in some cases it's the worse choice, for now at least.
When conducting some (inconclusive and unpublished) XMLHTTP experiments, I tried to trick the browsers into downloading an HTML page instead of an XML document. It didn't work, though that might have been due to my methods; try to solve this one if you can.
Update: Seems to have been my methods. XMLHTTP allows you to get an HTML file.
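For the record, here's a minimal sketch of the kind of request I mean; the file name and target element are hypothetical:

    var xhr;
    if (window.XMLHttpRequest)          // Mozilla, Safari, Opera
        xhr = new XMLHttpRequest();
    else if (window.ActiveXObject)      // Explorer Windows
        xhr = new ActiveXObject('Microsoft.XMLHTTP');

    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // responseText contains the raw HTML, whether or not it
            // would also parse as XML
            document.getElementById('writeroot').innerHTML = xhr.responseText;
        }
    };
    xhr.open('GET', 'datachunk.html', true);
    xhr.send(null);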
Why is this a problem, maybe even an argument in favour of iframes? Because of the accessibility of future data retrieval and presentation applications.
All data must be divided into chunks somehow, and a very simple way of ensuring the application's continuing graceful degradation is to store these data chunks in HTML pages. If the advanced application doesn't work, a click on the relevant link or area will simply load this HTML page and give any device access to the data.
If we could use these HTML pages for both layers of functionality, if we could either show them as separate pages or niftily and silently include them in our advanced interface, we'd have ensured accessibility in the simplest possible way.
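A minimal sketch, with hypothetical names throughout: each link points to an HTML data chunk, and in advanced browsers a script hijacks the click and loads the chunk into a hidden iframe instead.

    <a href="datachunk.html" onclick="return retrieve(this.href)">Data chunk</a>
    <iframe name="dataframe" style="display: none"></iframe>

    function retrieve(url) {
        if (!window.frames || !frames['dataframe'])
            return true;                 // old browser: just follow the link
        frames['dataframe'].location.href = url;
        return false;                    // cancel the normal page load
    }

    // The data chunk calls parent.insertData() from its own onload; we
    // then copy its content into the advanced interface.
    function insertData() {
        var chunkDoc = frames['dataframe'].document;
        document.getElementById('writeroot').innerHTML = chunkDoc.body.innerHTML;
    }

Old browsers never run the script and simply load the HTML page; that is the entire graceful degradation story.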
But XMLHTTP requests cannot request HTML pages (or so I thought; see the update above), and therefore such a system cannot use it.
Of course this line of reasoning doesn't hold true for every project. If you've got the skills and the budget to arrange for server-side scripts that transform the data into either HTML or XML, depending on the exact browser request, you don't have to worry about XMLHTTP's limitations.
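In the simplest case the client just announces what it can handle; the script name, query parameters and handler below are all hypothetical:

    function getData(id) {
        if (window.XMLHttpRequest || window.ActiveXObject) {
            // Advanced browsers request the XML version
            var xhr = window.XMLHttpRequest
                ? new XMLHttpRequest()
                : new ActiveXObject('Microsoft.XMLHTTP');
            xhr.onreadystatechange = function () {
                if (xhr.readyState == 4) useXML(xhr.responseXML);
            };
            xhr.open('GET', 'getdata?format=xml&id=' + id, true);
            xhr.send(null);
        }
        else {
            // Everyone else gets the plain HTML version as a normal page
            location.href = 'getdata?format=html&id=' + id;
        }
    }

    function useXML(xml) {
        // hypothetical presentation layer: walk the XML and update the page
    }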
For budgetarily challenged projects, though, iframe data retrieval may well be a good choice.
We've got plenty to think about in 2005.