Do we need touch events?

One reaction I received about my touch research was: Do we really need the touch events? Can’t we just fire the mouse events when a touch action occurs? After all, touch and mouse events aren’t that different.

That’s a fair question. It deserves a fair answer.

The problem

Right now, iPhone and Android support the touchstart, touchmove, and touchend events. Let’s say my advice is accepted and future browsers also support the touchenter, touchleave, and touchhold events.

The point here is that this series of events isn’t that different from the mouse events. Respectively, mousedown, mousemove, mouseup, mouseover, and mouseout fulfill the same role as the first five touch events. The touchhold event does not have a direct mouse equivalent, but it basically fulfills the same function as a right-click detection script.

So if mousedown is really the same as touchstart, why bother with new events? Can’t we just write a mousedown script that is triggered when the user depresses his mouse button OR touches the screen?

The issues

As far as I can see there are three fundamental issues involved:

  1. Touch interaction is different from mouse interaction. Not hugely different, but different enough to warrant separate events.
  2. It MUST be possible to write ONE single JavaScript file that works optimally in any circumstances, on the desktop, in a mobile browser, or in a widget manager, with the user using the touchscreen, the mouse, or the keyboard, as he sees fit. That implies the web developer MUST be able to distinguish between mouse-driven and touch-driven interaction.
  3. The mouse events have already been implemented as legacy events in all touchscreen browsers. They all fire after the user releases the screen.

Let’s study these issues a bit further.

Issue 1: difference

Although at first sight it seems as if touchstart is equal to mousedown, touchmove to mousemove, and touchend to mouseup, this is not really the case.

Take a look at my second scrolling layer script. It was written for touchscreen phones, and works perfectly if the browser supports the touch events. Currently only iPhone and Android do.

I added the mousedown event to the touchstart handler, mousemove to touchmove, and mouseup to touchend. Try it in a normal desktop browser. You’ll find that, although the script works, the interaction just doesn’t make sense. The mouse events aren’t quite the same as the touch events, even though they’re pretty similar.
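
For illustration only, the hook-up amounts to something like this; startDrag, moveDrag, and endDrag are placeholder names, not the actual functions in the scrolling layer script:

var layer = [the scrolling layer element];
layer.ontouchstart = layer.onmousedown = startDrag;
layer.ontouchmove = layer.onmousemove = moveDrag;
layer.ontouchend = layer.onmouseup = endDrag;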

For more fun, try using it with a touchpad.

If you really want to have fun, try the script on a Nokia S60v3 device, which has a pseudo-cursor and implements the mouse events exactly like desktop browsers. The script works ... in a way.

Question to UX experts: how should the scrolling layer work when the user has a mouse?

In other words, we need a different interaction model for desktop browsers. That requires us to first make a distinction between mouse-driven and touch-driven browsers. But how?

Issue 2: one file

There’s one extra requirement: I absolutely want to be able to deliver one single JavaScript file that contains all the scripts necessary for all interactions. And obviously I do not want to use a browser detect at all in order to determine whether a certain browser needs the mouse or the touch version.

So how do we distinguish between the two? Simple: see if a touchstart event fires. If it does, the browser is certain to be a touchscreen device and we should disable the mouse events.

It would work something like this:

var testEl = [the element you want to do something with];
testEl.onmousedown = function () {
	// initialize mouse interface
}
testEl.ontouchstart = function () {
	testEl.onmousedown = null;
	// initialize touch interface
}

This works fine. The touchstart event fires before the mousedown event, and therefore we can wipe the mousedown event handler before it’s ever executed.

A really proper script would have a third interaction model for using the keyboard. I haven’t written that yet because it’s somewhat more tricky than the touch/mouse one. I will have to deal with S60v3, where the “arrow” keys guide a pseudo mouse cursor across the screen and both the mouse and the key events fire. Besides, the user may alternate between the keyboard and either a mouse or a touchscreen.

Still, we can use a similar trick here: we know that the user is using a mouse if the mousemove event is detected. More specifically, if two mousemove events are detected within a short period of time. After all, all mobile touchscreen browsers will fire one (and only one) mousemove event when the user releases the screen, and that does not constitute evidence for a mouse-based interface.
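
A minimal sketch of that heuristic (initMouseInterface is a hypothetical function, and the half-second threshold is an arbitrary choice for illustration):

var lastMove = 0;
document.onmousemove = function () {
	var now = new Date().getTime();
	if (lastMove && now - lastMove < 500) {
		// second mousemove within half a second: assume a real mouse
		document.onmousemove = null;
		initMouseInterface();
	}
	lastMove = now;
}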

Of course we can only distinguish them if keyboard use fires different events than mouse use. As we know it does: the keydown, keypress, and keyup events have long been supported by all browsers. Imagine a browser in which depressing a key would fire a mousedown event!

This sort of distinguishing between different interaction models is only possible if the three models use different events. If touch actions also used the mouse events, it would be impossible to distinguish between the two.

Issue 3: prior art

Finally, let’s repeat it once more: all touchscreen browsers (with the sole exception of the two Vodafone Widget Managers) already fire all mouse events in one long cascade when the user releases the screen.

We shouldn’t require all browsers to change this behaviour. First of all it doesn’t make sense, and secondly they won’t all do it, leaving us stuck with an impossible situation where some browsers support the mouse events as if they’re a desktop browser, and others as if they’re a touch browser. That’s a recipe for disaster.

Separate touch events

So I strongly believe that all mobile touchscreen browsers MUST (in the sense of RFC 2119) support separate touchstart, touchmove, and touchend events and relegate the mouse events to legacy status.

Web developers will benefit from this since they can easily distinguish between the various interaction modes and create scripts that are suited to every one of them.

Comments

1 Posted by andrej on 5 February 2010 | Permalink

What about multi-touch? You can't do that with a mouse...

2 Posted by Trygve Lie on 5 February 2010 | Permalink

Multi-touch is a good example. Off the top of my head it seems like it would be very nice to distinguish between a mouse event and a touch event when both are in use at the same time.

Example: let's say we have an "iPad-like device" where the user uses one hand on the screen and the other hand on the mouse. One can imagine the user using the hand to manipulate the view (panning, zooming, etc.) on the screen and the mouse to select things in the view. Zooming on a touch screen where one can use two fingers to zoom is quite different from zooming with a mouse.

In this case I would like to know what is a touch event and what is a mouse event.

3 Posted by Roland van Ipenburg on 5 February 2010 | Permalink

Separate touch events make it possible to create better interaction, but I doubt we should have them at this level. There are a lot of input devices that benefit from special treatment, but that is usually handled by a layer closer to the device and then translated to generic input that works independently of any application. At this point it looks like most mobile devices are just crap at providing a default configuration to handle input in a nice way, and application developers have to develop workarounds for this using touch events.

4 Posted by kl on 5 February 2010 | Permalink

Why not have MouseEvent.touches? You'll be able to tell it apart from regular mouse events.

5 Posted by nico on 5 February 2010 | Permalink

Sorry, I may be dumb, but your iPhone script works perfectly and makes sense on my desktop... (FF 3.5, Fedora).

What exactly does not make sense about it on a desktop?

6 Posted by Ben Combee on 5 February 2010 | Permalink

One problem with doing touch events for Firefox on mobile devices (Fennec) is that our UI relies on dragging to expose itself to the user. I'm worried that if we let the JS-based code in the web page intercept those, we can get into situations where the user is trapped with a full screen webpage and no way to go back to another because they can't drag the UI back into view. Any ideas?

7 Posted by ppk on 6 February 2010 | Permalink

@Ben: Yes, I see what you mean. Maybe let two-finger drag overrule script?

But there is a fundamental problem here, I agree.

8 Posted by Truman on 6 February 2010 | Permalink

What about sensors allowing for "FingerOver" events? Wouldn't that be quite useful?

9 Posted by Erik on 6 February 2010 | Permalink

The "second scrolling layer script" works fine and makes perfect sense on my laptop running Firefox 3.6 using a touchpad. Maybe could benifit from a hand-cursor instead of the normal pointer but otherwhise i see no problem with it. Or are you refering to that its impossible to select the text?

I can't really see why a touchdown should be any different from a mousedown. Selection vs scrolling should be handled on the device before the JavaScript, imo.
Compare with desktop development: would you like to create your own onscroll event for every webpage you make to enable scrolling using the mousewheel?

What differs, though, is multi-touch and touch area/pressure. But if these properties are to be added, it has to be done generically enough to work with configurations like multiple mice, drawing tablets, and future input devices - that's another reason why it's a bad idea to separate the touch events from the mouse events: there is no actual difference. All you need to know as a developer is where the user is pointing and how the user is pointing. What device he is using doesn't say anything.

10 Posted by Nux on 7 February 2010 | Permalink

The script acts a bit strangely, because when I simply click on an item it gets moved to the left. Besides that and the fact that (as mentioned by Erik) one cannot select text, it works quite naturally on my laptop...

So anyway I made some experiments and found a little bug: it seems testEl.offsetLeft is not 0 at the start, because the body element has a margin and because testEl has a border. So to make the script act more naturally you would have to change:
startPos = testEl.offsetLeft;
to:
startPos = testEl.offsetLeft-10;
Not a real fix for the general script, but it should be fine for the test case. I guess you could also read the start value onload.

11 Posted by Frans on 11 February 2010 | Permalink

I'll have to join the people who see nothing strange about how the script works. Perhaps it's slightly unusual that something would move if you click on it? And yes, I'm using my laptop with my touchpad at the moment (though I don't see how that would make any difference).

12 Posted by mors on 12 February 2010 | Permalink

It's very easy for Apple to pull touch and gesture events out of their arse because they only need to support one platform, the iPhone, and for them, having iPhone specific web apps is good.

Touch events are useless in the sense that
a) they just duplicate mouse events, they're a reinvention of the wheel
b) the benefits (if any) that they ought to provide should be instead extended over regular mouse events
c) a new event model: double burden on implementors, spec writers and web developers
d) more device specific coding, less interoperability, more fragmentation, more ways a web app can break in a different setting from which it was originally tested. Hell for interoperability.
e) too damn low level. If the user drags the viewport, then it should trigger a pan or beforescroll event. If the user pinches it should trigger a zoom event, not a damn swiped-my-fingers event. You want the UI actions, not the user input. If my browser provides zooming using double-click (Wii) or a view menu, then the damn gesture event would be useless.
f) Apple, when it comes to multi-touch, tried too hard to take over all the IP and patents they could so they can try to monopolize it. The Touch Events API is US patent US20090225039.


13 Posted by Andrea Giammarchi on 21 February 2010 | Permalink

@ppk I know I am late but I have recently tested iPhone and Android against these events.

What is wrong here is that as soon as touch events are added, mouse events are ignored.

In a few words: to emulate mouse events I have dispatched them myself and normalized their info.

touchstart, if present, tells the device to ignore mousedown.
To have mousedown, we need to create a fake event, populate it with touch info and fire it on the touch.target.

The same goes for touchmove, aka mousemove, and touchend, aka mouseup, with eventually a click after.

- touchstart (native)
- mousedown (dispatched)
- touchmove (native)
- mousemove (dispatched)
- touchend (native)
- mouseup (dispatched)
- click/dblclick (dispatched via Date checks)
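
A minimal sketch of that dispatching, using the DOM createEvent/initMouseEvent API (fireMouseEvent is just an illustrative helper name):

function fireMouseEvent(type, touch, target) {
	// build a synthetic mouse event from the touch's coordinates
	var evt = document.createEvent('MouseEvents');
	evt.initMouseEvent(type, true, true, window, 1,
		touch.screenX, touch.screenY, touch.clientX, touch.clientY,
		false, false, false, false, 0, null);
	target.dispatchEvent(evt);
}

testEl.addEventListener('touchstart', function (e) {
	// native touchstart arrives first; dispatch the fake mousedown after it
	fireMouseEvent('mousedown', e.touches[0], e.target);
}, false);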

Now, what is the problem? It should probably be the other way round, since the delegation/dispatch procedure costs something on all touch devices, usually the ones without much power.

In a few words, assuming that touch events overwrite mouse events, we can still set touch events in Firefox and other desktop browsers, and reverse the logic by dispatching touch events.

In this way touch will be native for touch devices, and dispatched for the more powerful desktop CPUs.