Of viewports and screens, input modes and event handlers

Last week Luke Wroblewski published an important article in which he said that web developers practising responsive design rely too much on a device’s screen size to determine which device it is.

He showed that two devices with an ideal viewport of 320px may need two drastically different user interfaces, something that cannot be discovered by studying the screen, or the browser viewport, alone. In addition, with the proliferation of new input modes (remotes for TV, Google Glass, 3D gestures, voice, whatnot) the one-size-fits-all approach we’re currently using may not be enough.

An interface that tries to be all things to all devices might ultimately not do a good job for any situation.

If you haven’t yet, go read his article. It’s important.

Ethan Marcotte disagreed up to a point. He feels that the screen is but the starting point for responsive design, and that it’s not necessarily bad to give users the option of adjusting the interface.

Also, Ethan said that for responsive-design purposes he prefers the browser viewport over the screen, which caused Luke to write a follow-up. Sure, currently responsive design uses the browser viewport to take important decisions, but this viewport doesn’t always tell us what we need to know about the screen.

I firmly agree. The fact that a device uses a certain ideal viewport says nothing about its physical screen or about the user’s distance from it.

As soon as I had read Luke’s initial article I started wondering about possible solutions. How could web developers detect what kind of screen they were targeting without trusting the ideal browser viewport overmuch? I am currently heavily engaged in the umpteenth round of viewport research, and maybe there was a solution somewhere in my data?

Within a few minutes I found two possibilities: input-mode-specific event handlers, and physical screen size. I also considered and discarded a third one: device self-identification.

Self-identification — no

The theoretically best solution would be for all devices to self-identify in a reliable way: navigator.device = 'TV' or something.

Unfortunately this is not going to work. Media queries tried it and failed. The case of media type handheld is instructive.

Originally the idea was that any device would pick a media type that describes it best. For phones, handheld was the obvious choice, and early phone browsers implemented it.

Unfortunately these early phone browsers were rubbish, and web developers therefore used handheld to send simplified CSS to any self-identifying phone browser. So when modern browsers came onto phones they dropped the media type: it would have served them simplistic styles even though they could handle regular ones.

Today, Nokia’s Symbian WebKit is the only mobile browser that supports handheld, and I guess that’s more a matter of legacy than of deliberate choice. No other mobile browser self-identifies via a media type — or in any other way, for that matter.

Any self-identification scheme we think up now is going to go exactly the same route. It won’t work.

Physical screen size — no

The physical screen size idea sounded promising for a while, but turned out to be a dud. I had a vague notion that we could compare the ideal browser viewport (the one you get when using width=device-width) with the actual physical screen size in device pixels.

Alas, resolutions for HDTV turn out to be not all that different from desktop computers — not enough to make a difference. We could try for some “clever” calculations, but the fact of the matter remains that the average HDTV screen has roughly the same number of device pixels as an average computer screen, while the use cases for these two types of screens are drastically different. That’s exactly what Luke found when studying the ideal browser viewport.

Besides, it is not possible to read out the number of device pixels. (Aren’t they in screen.width/height? Sometimes, but not reliably so.) It is also not always possible to read out the ideal viewport width without actually using width=device-width. When available, this information is in — wait for it — screen.width/height!
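To make the ambiguity concrete, here is a minimal sketch; the function name and the calculation are mine, not any standard API. Multiplying the reported width by devicePixelRatio only yields the device-pixel count if the browser reported the width in CSS pixels, and there is no way to be sure it did:

```javascript
// A sketch of why device pixels can't be read out reliably.
// If the browser reports screen.width in CSS pixels, this guess
// is right; if it already reports device pixels, the result
// double-counts. We cannot tell which case applies.
function guessDevicePixels(reportedWidth, devicePixelRatio) {
  return Math.round(reportedWidth * devicePixelRatio);
}

// An iPhone-style screen reporting 320 CSS pixels at a
// devicePixelRatio of 2 would yield a guess of 640 device pixels.
```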

So let’s not go in this direction. It won’t work reliably enough.

Event handlers — possibly

What about JavaScript event handlers? The surest way to detect a touchscreen device is by waiting for a touchstart event to fire. If that happens you’re certain the user is on a touchscreen — just as multiple mousemove events tell you the browser uses a mouse, or a keydown event tells you a keyboard is involved.
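As a sketch, and assuming the usual DOM events, positive identification could look like the following. The bookkeeping object and the threshold of three mousemoves are my own choices, not an established technique:

```javascript
// Positively identify input modes by waiting for their events
// to fire. The absence of an event proves nothing; only a
// fired event counts as positive identification.
var detectedModes = {};

function noteMode(mode) {
  detectedModes[mode] = true;
}

// Guarded so the sketch also loads outside a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('touchstart', function () {
    noteMode('touch'); // a touchscreen is definitely present
  });

  var mouseMoves = 0;
  document.addEventListener('mousemove', function () {
    // A single mousemove may be emulated; several in a row
    // make a real mouse far more likely.
    mouseMoves++;
    if (mouseMoves > 3) noteMode('mouse');
  });

  document.addEventListener('keydown', function () {
    noteMode('keyboard'); // a keyboard is involved
  });
}
```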

Would it be possible to use this in some way? Say, when the layout viewport becomes 320px because of width=device-width BUT no touch event is detected, can we conclude we’re on a TV? Or in any case on a non-phone device that’s farther away from our eyes and may need an adjusted design?

Possibly. Nothing’s sure in this world, but it could work — for now, in certain cases.

I see three dangers here:

  1. During page load it’s likely that no events will be detected, either because the user is waiting for the page to load or because the necessary script isn’t initialized yet. So no data is available during the initial building of the page, and we have to adjust the interface later. That’s likely to be both ugly and disorienting.
  2. Any solution we devise now may not work properly on future devices. Phones emulate mouse events because so many websites depend on them. Future devices may emulate touch events for the same reason, even if the device doesn’t actually have a touchscreen.
  3. The absence of certain events doesn’t prove anything. We need positive identification.

Despite all these problems an event-handler-focused solution seems slightly better to me than a screen-size-based one.

How would it work?

Let’s do a thought experiment.

  1. The browser indicates its ideal viewport is 640px wide, but we have no clue what kind of device we’re on.
  2. Initially we show the simplest possible interface. In Luke’s example it would be the Google Glass one instead of the tablet one.
  3. We add a button “More details” — or whatever, as long as the user is invited to make a choice in a way that’s consistent with the content and the interface.
  4. Ideally, the user won’t be put off by this option and will activate it when appropriate. We devoutly hope that distance from screen will play a part in the user’s decision-making process.
  5. Now for the trick: by reading out the sequence of JavaScript events leading up to the activation we get some more information about the device. Touchstart? Hey, you’re on a touchscreen. Keydown? Hey, you must be using a keyboard! Remotewiggle? Hey, it’s a TV! Rapidblink? Google Glass! (OK, I made up the last two.) Whatever. More information is now available.
  6. We use the ideal viewport, user choice, and the sequence of JavaScript events to figure out what kind of interface the user should see.
  7. Since the user has taken an action, she expects something to happen and will not be put off by a sudden change of the interface.
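The decision in steps 5 and 6 could be sketched as a single classification function. The 600px threshold and the return labels are assumptions of mine, made up for illustration, not part of any standard:

```javascript
// Classify the device from the ideal viewport width plus the
// sequence of events that led up to the user's activation.
function classifyDevice(idealViewportWidth, eventSequence) {
  if (eventSequence.indexOf('touchstart') !== -1) {
    // Touchscreen confirmed; viewport width separates
    // phone-ish from tablet-ish (600px is an arbitrary cutoff).
    return idealViewportWidth < 600 ? 'phone' : 'tablet';
  }
  if (eventSequence.indexOf('keydown') !== -1) {
    return 'keyboard-device'; // desktop, laptop, or TV with keyboard
  }
  if (eventSequence.indexOf('mousemove') !== -1) {
    return 'mouse-device';
  }
  // No positive identification: keep the simplest interface.
  return 'unknown';
}
```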

Could work, I suppose. Needs a lot of work, but it’s not impossible.

I also suppose something like this is what Ethan means by giving the user options.

The philosophy of JavaScript events

This use of JavaScript events raises questions. Does every distinct input mode need its own events? So far we’ve always unconsciously assumed this was the case — mouseover, touchmove, keypress.

Microsoft disagrees, though, and introduced pointer events to show a different direction we could go. Essentially, pointer events are a layer above mouse and touch events. However you interact with the browser, pointerdown will fire when you either touch the screen or depress your mouse button.

This is an interesting philosophical discussion. Who’s right here and why? I’ve always liked the pointer proposal in the abstract, since in many cases the exact input mode doesn’t matter. If the user clicks a button he wants to do something, and it doesn’t matter whether he uses a keyboard, a mouse, a touch, a remote, his eyes, his voice, or whatever.

Still, we’ve just seen that input-mode-specific events can help us with responsive design. Also, there’s the fact that despite its philosophy Microsoft found it necessary to include a pointerType property that indicates what kind of pointer the user is using.
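A sketch of that point: even when we listen for the agnostic pointerdown event, its pointerType property hands the actual input mode back to us. The describePointer helper and its labels are hypothetical; the pointerType values follow the pointer events model:

```javascript
// Even with an input-mode-agnostic pointer event, pointerType
// reveals what kind of pointer the user is actually using.
function describePointer(event) {
  switch (event.pointerType) {
    case 'touch': return 'finger on a touchscreen';
    case 'pen':   return 'stylus';
    case 'mouse': return 'mouse';
    default:      return 'unknown pointer';
  }
}

// Guarded so the sketch also loads outside a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('pointerdown', function (e) {
    console.log(describePointer(e));
  });
}
```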

So I find myself sliding back to the distinct-event camp. Using input-mode-agnostic events sounds wonderful in theory, but it deprives us of the possibility of figuring out what kind of device we’re on, whether to offer input-mode-specific interactions or more generally to solve the problem Luke highlighted.

Conclusion: none

So where does all this lead us? I don’t know. This stuff is tricky, and there is no simple solution.

Luke and Ethan show us, again, that the ever-changing device landscape is impossible to predict, and that a solution that works today will have severe defects tomorrow.

All we can do is continue to work on our solutions. Responsive design as practised today is a vital first step, both because the small-screen problem is obvious to anyone and because there actually is a solution for phones.

Now that we understand responsive design for phones and tablets we’ll have to figure out how to bring our methods and techniques to even more devices.

Never a dull moment.

This is the blog of Peter-Paul Koch, mobile platform strategist, consultant, and trainer. You can also follow him on Twitter.