Measuring connection speed, ctd.

Well, there were some interesting comments on my last post about connection speed in browsers. This post continues the conversation by giving my replies to a few notes, thoughts, and concerns.

First of all, three interesting resources were noted:

Then the critique points:

It’s more complicated than just connection speed. That is true, but it’s not an argument against measuring connection speed. It is definitely part of the problem and should be measured.

Let users decide which assets to load. Some may prefer high-res ones. This is a good idea in principle, but we don’t really want to indicate a preference on every single site we visit, do we? So it’s either a browser preference (possible), or a site preference in case assets such as videos or images are vital content (video or photography sites, for instance).

Measure the connection speed during the loading of the site. Also a good idea, and in my proposal that’s going to happen anyway because everything is measured, but I’m not sure if the gathered data should be used for the same site. It may contradict earlier readings.
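As an illustration, a minimal sketch of what a page itself could measure during loading, using the Navigation Timing API that IE9, Chrome and Firefox already expose. A browser doing this internally would of course see every request, not just the HTML document.

    // Sketch: a few raw ingredients from Navigation Timing.
    window.addEventListener('load', function () {
        var t = window.performance && window.performance.timing;
        if (!t) return; // no Navigation Timing support

        var tcpConnect = t.connectEnd - t.connectStart;      // TCP (and TLS) set-up time
        var firstByte = t.responseStart - t.requestStart;    // rough latency indicator
        var htmlDownload = t.responseEnd - t.responseStart;  // time to receive the HTML

        console.log('TCP connect: ' + tcpConnect + 'ms, first byte: ' +
            firstByte + 'ms, HTML download: ' + htmlDownload + 'ms');
    });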

Units. The spec uses Mbps, so I guess we should standardise on that. It doesn’t really matter, as long as it’s a number, and not a keyword. (What does “slow” mean? Is one browser’s “slow” comparable to another browser’s?) And the connection type is unimportant, since it doesn’t necessarily say anything about the connection speed.

No-JavaScript-enabled and No-Images-enabled media queries. Also a good idea. The first one actually exists in the spec, the second one doesn’t but could be added.

The most complicated question is still wide open: exactly how are we going to measure the connection speed? Plenty of proposals were made, summarised as “a mixture of available TCP connection time, bandwidth, latency and packet loss.” This partly depends on what the browser can measure, and in the end we’re mostly interested in the end result: what effective connection speed does the browser have?
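For the end result, a minimal sketch of one way to arrive at an effective-speed number: time the download of a resource of known size. The probe URL and its byte size are made up for illustration, and this says nothing about TCP connection time or packet loss, which the browser would have to measure internally.

    // Sketch only: estimate effective speed by timing a resource of known size.
    // The probe URL and PROBE_BYTES are invented for illustration.
    var PROBE_BYTES = 50 * 1024; // 50 KB

    function measureEffectiveSpeed(callback) {
        var img = new Image();
        var start = Date.now();
        img.onload = function () {
            var seconds = (Date.now() - start) / 1000;
            var mbps = (PROBE_BYTES * 8) / (seconds * 1000 * 1000);
            callback(mbps);
        };
        // Cache-buster, so we measure the network and not the cache
        img.src = '/probe.jpg?x=' + Math.random();
    }

    measureEffectiveSpeed(function (mbps) {
        console.log('Effective speed: ' + mbps.toFixed(2) + ' Mbps');
    });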

Especially important is the period over which we should take the average, and whether every single request should count equally or whether there should be some kind of weighting.
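One possible weighting scheme, purely as a sketch: an exponentially weighted moving average, which makes recent requests count for more than old ones. The smoothing factor is an arbitrary choice.

    // Sketch: exponentially weighted moving average of per-request speed samples.
    // ALPHA is arbitrary; higher values make recent requests count for more.
    var ALPHA = 0.3;
    var averageMbps = null;

    function addSample(mbps) {
        averageMbps = (averageMbps === null)
            ? mbps
            : ALPHA * mbps + (1 - ALPHA) * averageMbps;
        return averageMbps;
    }

    // Example: three requests measured at 8, 2 and 1 Mbps
    [8, 2, 1].forEach(function (sample) {
        console.log('Running average: ' + addSample(sample).toFixed(2) + ' Mbps');
    });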

Anyway, please continue. This is most instructive.


Comments


1 Posted by Orde Saunders on 26 October 2012 | Permalink

The browser breakdown (based on user agent) for the timing research is:

IE9 - 46%
Chrome - 29%
Firefox - 21%

2 Posted by karl on 29 October 2012 | Permalink

Two things are missing, not exactly at the technical layer but maybe at the user-experience one.

* How much will it cost me? (money)
* How long will it take to download? (time)

You might still want the high-res version of something on a slow connection, if that's what you want and you know what you're getting yourself into.

3 Posted by Dave Hulbert on 29 October 2012 | Permalink

I wrote a bit about this a while back: http://createopen.com/2012/05/taking-responsive-design-a-few-steps-forwards/

My thinking was that too many people build responsive sites that only respond to screen resolution and make assumptions about everything else. We need to enable web authors to access things like bandwidth and network latency. In addition to the network and the screen, browsers could report information about (preferred) input devices/methods, device power (e.g. battery), as well as CPU/GPU capabilities.
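A sketch of what such reporting might look like to a script; every property name below is hypothetical, invented purely to illustrate the idea.

    // Sketch: a hypothetical capability object a browser could expose.
    // None of these property names exist; they only illustrate the idea.
    var capabilities = {
        bandwidthMbps: 2.5,       // estimated effective bandwidth
        latencyMs: 120,           // estimated round-trip time
        preferredInput: 'touch',  // touch, mouse, keyboard, ...
        batteryLevel: 0.4,        // 0..1
        cpuCores: 2
    };

    if (capabilities.bandwidthMbps < 1 || capabilities.batteryLevel < 0.2) {
        // Serve the lightweight experience: smaller images, no autoplaying video.
        document.documentElement.className += ' lite';
    }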

4 Posted by porneL on 29 October 2012 | Permalink

HTTP header won't work. It'll degrade to the same mess as User-Agent.

You can't trust every server to implement negotiation properly, and it'll take ages for CDNs to handle this (if ever).

The <picture> element proposal actually goes a long way towards this. It just needs a filesize or quality/size trade-off hint added as an extra attribute.

It's a purely client-side decision, so the browser can decide best, taking into account more than you'd want to tell the server.

For example, cost: roaming and bandwidth caps.

e.g. LTE in the UK comes with a bandwidth allowance that covers only a few minutes of usage at maximum speed, so you'd still want small images despite the high speed!

I may also be on a low-speed unlimited connection, saving a page or RSS feed for reading later; then I'd rather wait for the high-res images.

<picture>
    <source src="low" filesize="10KB">
    <source src="hi" filesize="90KB">
</picture>

and I can have my browser choose based on 2G/WiFi/roaming and (if known) the bandwidth allowance left on my tariff; no per-site manual settings needed.

RSS/offline readers/printers would be able to configure WebView to choose higher res, etc.
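A sketch of that client-side choice, using the candidates from the example above; the filesize hint, the connection-type strings and the thresholds are all invented for illustration.

    // Sketch: pick an image source from candidates with (hypothetical) filesize
    // hints, based on connection type and remaining data allowance.
    function chooseSource(candidates, connectionType, allowanceLeftKB) {
        var sorted = candidates.slice().sort(function (a, b) {
            return a.filesizeKB - b.filesizeKB;
        });
        var metered = connectionType === 'roaming' || connectionType === '2g';
        var lowOnAllowance = allowanceLeftKB !== null && allowanceLeftKB < 10 * 1024;

        if (metered || lowOnAllowance) {
            return sorted[0];                // smallest candidate
        }
        return sorted[sorted.length - 1];    // largest (highest quality) candidate
    }

    var picked = chooseSource(
        [{ src: 'low', filesizeKB: 10 }, { src: 'hi', filesizeKB: 90 }],
        'wifi',
        null // allowance unknown
    );
    console.log('browser would load: ' + picked.src); // "hi" in this case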

5 Posted by Jonas on 30 October 2012 | Permalink

Though speed measurement is a really important topic, there should definitely be a browser setting with which you can override the automatic decision about which images will be loaded. When I'm sitting at home without being in any hurry, I'd rather wait one more minute for the high-resolution images to load than settle for the low-res images. While waiting I can still work in my other browser tabs, so there is no need for me to speed things up.

6 Posted by Lee Kowalkowski on 30 October 2012 | Permalink

Well, real/effective speed is data transferred divided by time taken, unfortunately in the past tense. It must be measured and cannot be known up-front for any request, by which time it's too late to serve something different; we can only decide whether to serve anything extra. Progressive encoding should have solved this for images: once enough detail has been downloaded (or a timeout occurs), the download can be abandoned.

Time taken should include the time taken to issue the request, and the time taken to process the response (not just receive it). We can't assume any device capability based on connection speed. We don't want a device to be overwhelmed by its bandwidth; it will be unusable (e.g. spend more time decoding "high-quality/resolution/frame-rate" video than playing it).

Using historical measurements as an indication of future capability is worthless, even within the same browser session. We could be browsing a very fast website, but the next site could be slow. Even assuming the connection speed is constant, repeated requests to the same site can hit different infrastructure with different performance, under different loads or in different states.

For example, main images could be served directly from Apache and already be loaded into memory because they're being requested very frequently, but user holiday photographs on the same site could be buried in a database or file system, and a profile image may incur a call to a third-party web service. The request may have to wait for run-time interpretation of scripts or security checks before it can begin its response.

Client-side we can easily disregard data transferred and only measure time: if we want our page to be completely loaded within n seconds, we can design the site to load the crucial stuff first and load additional items only if it looks like there's enough time. However, this would not support switching between low- and high-quality items if they're considered crucial; you'd always have to download the low-quality item first.

Can we therefore make media queries (or whatever) time-based, relative to the time of the initial page request?
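A script-level sketch of that idea (not a media query, but the same principle): compare elapsed time against the initial page request and only fetch the extras while the budget lasts. The three-second budget and loadHighResImages() are placeholders.

    // Sketch: load optional assets only if a time budget, measured from the
    // initial request, hasn't been used up. BUDGET_MS and loadHighResImages()
    // are placeholders for illustration.
    var BUDGET_MS = 3000;

    function loadHighResImages() {
        // Placeholder: swap low-res image URLs for their high-res versions here.
    }

    window.addEventListener('DOMContentLoaded', function () {
        var t = window.performance && window.performance.timing;
        var elapsed = t ? Date.now() - t.navigationStart : 0;
        if (elapsed < BUDGET_MS) {
            loadHighResImages();
        }
        // Otherwise stick with the low-res images already in the markup.
    });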

Thinking about it, knowing the connection speed (in Mbps) is not very useful at all; that's just an indication of maximum potential. Any assumption a site makes based on this is dangerous, especially assuming the device (or server) is capable of performing at such rates. Do we think the browser loading the page can use all the bandwidth, or isn't currently doing anything else? Or the device isn't doing anything else? Or the network? Or the server?

If the problem we're trying to solve is 'Shall we serve a high-res or a low-res image?' then we should look at getting the handling of progressively interlaced file formats right.