Today I finally have the time to continue with the connection speed measurements. For those joining the discussion only now, first read part 1 and part 2. See also the draft spec.
I’m trying to distill a few principles for measuring the connection speed. I give them in the form of statements; be sure to speak up if you disagree with one of them.
A later article will return to the nasty details of reading out the connection speed. Today we’re only concerned with broad outlines.
Principle 1: Despite the high level of uncertainty involved, it’s useful to try to read out the connection speed.
If I didn’t think it useful I wouldn’t write so many posts about this topic. Duh.
Seriously, though: does the fact that connection speed readings can go wrong on many levels mean that they can never give useful information?
The key point here is that we need an idea of the uncertainty involved. If that uncertainty is low it would mean we have a reasonably accurate speed reading that we can use to make our sites more responsive.
To me, getting useful information in some cases is sufficient reason to add connection speed measurements to the web developers’ arsenal.
Principle 2: Users should have the option of overriding the entire system. Web developers should obey the user.
I don’t think many people will disagree with this one. Users should be able to indicate they always want to receive the high-res version, even if they’re on a slow connection. Or vice versa.
Still, I will argue in a moment that we should send all information to the web developer, and not just a conclusion. So in theory the web developer could still overrule the user. She shouldn't, though.
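In script, obeying the user could be as simple as checking an explicit preference before any heuristic runs. This is only a sketch; the preference mechanism and all names are invented for illustration:

```javascript
// Sketch of principle 2. How the preference is stored is left
// open (localStorage, a cookie, a site setting); the point is
// that an explicit user choice always wins over the heuristic.
function chooseVersion(userPreference, connectionIsSlow) {
  if (userPreference === 'high' || userPreference === 'low') {
    return userPreference; // the user has spoken; obey her
  }
  // Only without a preference do we fall back to the heuristic.
  return connectionIsSlow ? 'low' : 'high';
}

console.log(chooseVersion('high', true)); // 'high': user overrides a slow connection
console.log(chooseVersion(null, true));   // 'low': heuristic decides
```

Note that the override goes both ways: a user on a fast connection could equally force the low-res version.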
Principle 3: Web developers should get full information about the connection instead of a conclusion generated internally by the browser.
This requires some explanation. Suppose the user starts up her phone and surfs to your site over a 3G connection.
Now the browser could use internal heuristics to take a decision on the reported connection speed. It could also just dump this entire series of data points onto the web developer’s plate. I prefer the latter.
The browser could try to reason it out:
The web developer only receives the result of this calculation: slow.
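The browser-internal approach might look roughly like this sketch. Every property name and threshold here is invented, not taken from any spec; the point is only that the heuristic lives inside the browser and the developer sees nothing but its output:

```javascript
// Hypothetical data points the browser has gathered: network
// type, a measured throughput, and an estimate of how reliable
// that measurement is. None of these names come from a real spec.
const reading = {
  type: '3g',
  bandwidthMbps: 0.4,
  uncertainty: 'high',
};

// The browser reduces everything to a single verdict internally.
function browserVerdict(r) {
  if (r.uncertainty === 'high' && r.type !== 'wifi') return 'slow';
  return r.bandwidthMbps >= 1 ? 'fast' : 'slow';
}

console.log(browserVerdict(reading)); // 'slow' -- all the developer ever sees
```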
The browser could also send the entire data set to the web developer and wait for instructions:
This would require the web developer to write her own heuristics for dealing with this situation. In most situations I’d say she’d combine 3G and high uncertainty to decide on the low quality site, but the point here is that she could decide otherwise if she has good reason.
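With the full data set exposed, the developer writes the heuristic herself. Again a sketch with invented property names; the interesting part is that she can deviate from the obvious conclusion when she has good reason to:

```javascript
// Hypothetical full data set the browser hands over instead of
// a single verdict (all property names invented for illustration).
const connection = {
  type: '3g',
  bandwidthMbps: 0.4,
  uncertainty: 'high',
  metered: true,
};

// The developer's own heuristic. Here she follows the default
// advice (3G plus high uncertainty means the low quality site),
// but a developer with good reason could branch differently --
// say, trusting an unmetered connection despite the uncertainty.
function chooseQuality(c) {
  if (c.metered && c.uncertainty === 'high') return 'low';
  if (c.type === '3g' && c.bandwidthMbps < 1) return 'low';
  return 'high';
}

console.log(chooseQuality(connection)); // 'low'
```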
The disadvantage is that we need a rather large amount of properties; see the list above. I’d like to remove some, but I’m not yet sure which ones.
Principle 4: Every individual connection speed reading should react to the current situation.
Thus, if the user starts browsing in a high-speed area and then enters a tunnel where the speed drops dramatically before downloading the style sheet with speed media queries, the HTTP header should report a fast connection and the media queries a slow one.
This is in line with the previous principle of dumping all information on the web developer’s plate, and leaving it to her to make sense of it.
The last two principles are about empowering web developers by giving them all information that is available. This has advantages as well as disadvantages.
The disadvantage is information glut. Take another look at the last list above, and imagine having to muddle through it and take a decision.
Still, if a system like this is ever implemented, it won’t be long before clever web developers start writing tutorials and libraries to help the newbies out with their decisions.
The advantage of empowering web developers is that change occurs at a much faster rate. Suppose that research discovers that a user with the settings I sketched above would really prefer the high-res site. Now the decision heuristics should be updated.
If web developers take the decision themselves they can simply amend their scripts and libraries, and the update will take place in a matter of weeks. Conversely, if browser vendors were involved we'd have to wait for the next upgrade to the browser, which is likely the same as the next upgrade to the OS. That might take months. It's a clear advantage of decentralizing this sort of decision.
Finally, if browser vendors were to take connection speed decisions themselves, it wouldn't be long before we discovered that they take different decisions in the same situation. Browser compatibility tables, premature hair loss, etc.
So all in all I feel that empowering web developers to take connection speed decisions is the way to go.