The AJAX response: XML, HTML, or JSON?

Since my last AJAX project I've increasingly been wondering about the "ideal" output format for the AJAX response. Once you've successfully fired an AJAX request, what sort of response should the server give? An XML document? An HTML snippet? A JSON string which is converted to a JavaScript object? Or something else? In this entry I'd like to discuss the three formats, with examples, and ask you which format you've used in your practical AJAX applications.

(This article has been translated into Spanish.)

When you receive an additional bit of data for your AJAX application, you should start up a script that incorporates this extra data into your single-page HTML interface. Of course the form of the script heavily depends on the format of the data you've received. Should you search an XML document for specific nodes and copy their text to the HTML? Or did you receive an HTML snippet that should be added to the page "as is"?

In my last project I received some data as XML documents and some as HTML snippets, and they needed different kinds of scripts to write the data to the page. Both formats, and both kinds of scripts, have their advantages and disadvantages.

After I'd finished the application I delved a bit deeper into JavaScript Object Notation, invented by Douglas Crockford and recently chosen as the default output format for most Yahoo services, and I think I like it, although I've never yet used it.

I'm left wondering: which format is the best? Which format do you think is best, or at least most useful in a practical AJAX environment?


As an example of the three formats, let's take an AJAX-driven online bookstore. We ask for the JavaScript books they have in store, and by a staggering coincidence they have exactly those three JavaScript books I keep lying around on my desk. An AJAX request returns these three results, and you have to incorporate them in the HTML of your single-page interface.

I'll give examples of all three formats, and a simple script to show the results in a <div id="writeroot">.
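All three scripts below assume the request has already been fired and a response received. For completeness, here's a minimal sketch of the usual way to create the request object in the first place; the feature checks and the ActiveX ProgID are the commonly used ones, but treat this as a sketch rather than a definitive compatibility layer.

```javascript
// Create a cross-browser request object (2005-era pattern).
// Returns null when no XMLHTTP implementation is available.
function createRequest() {
	if (typeof XMLHttpRequest != 'undefined')
		return new XMLHttpRequest();                    // Mozilla, Safari, Opera
	if (typeof ActiveXObject != 'undefined') {
		try {
			return new ActiveXObject('Microsoft.XMLHTTP');  // Explorer
		} catch (e) {}
	}
	return null;                                        // no support at all
}
```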

XML documents

The first and most obvious choice for an output format is the XML document. The original idea behind the XMLHTTP object was the importing of XML documents, so it's no surprise that most of the attention went to XML and it's still considered the default output format.


The server returns this XML document:

<books>
	<book>
		<title>JavaScript, the Definitive Guide</title>
		<publisher>O'Reilly</publisher>
		<author>David Flanagan</author>
		<cover src="/images/cover_defguide.jpg" />
		<blurb>Lorem ipsum dolor sit amet, consectetuer adipiscing elit.</blurb>
	</book>
	<book>
		<title>DOM Scripting</title>
		<publisher>Friends of Ed</publisher>
		<author>Jeremy Keith</author>
		<cover src="/images/cover_domscripting.jpg" />
		<blurb>Praesent et diam a ligula facilisis venenatis.</blurb>
	</book>
	<book>
		<title>DHTML Utopia: Modern Web Design using JavaScript &amp; DOM</title>
		<publisher>Sitepoint</publisher>
		<author>Stuart Langridge</author>
		<cover src="/images/cover_utopia.jpg" />
		<blurb>Lorem ipsum dolor sit amet, consectetuer adipiscing elit.</blurb>
	</book>
</books>

We need this script to show the results in our <div>.

function setDataXML(req)
{
	var books = req.responseXML.getElementsByTagName('book');
	for (var i=0;i<books.length;i++)
	{
		var x = document.createElement('div');
		x.className = 'book';
		var y = document.createElement('h3');
		y.appendChild(document.createTextNode(getNodeValue(books[i],'title')));
		x.appendChild(y);
		var z = document.createElement('p');
		z.className = 'moreInfo';
		z.appendChild(document.createTextNode('By ' + getNodeValue(books[i],'author') + ', ' + getNodeValue(books[i],'publisher')));
		x.appendChild(z);
		var a = document.createElement('img');
		a.src = books[i].getElementsByTagName('cover')[0].getAttribute('src');
		x.appendChild(a);
		var b = document.createElement('p');
		b.appendChild(document.createTextNode(getNodeValue(books[i],'blurb')));
		x.appendChild(b);
		document.getElementById('writeroot').appendChild(x);
	}
}

function getNodeValue(obj,tag)
{
	return obj.getElementsByTagName(tag)[0].firstChild.nodeValue;
}

Rather a lot of code, as you see. Although the W3C DOM gives us full access to both the XML document from the server and the HTML document the data should be shown in, it doesn't give us an elegant, simple way of extracting exactly that data we need: we have to delve into the XML document time and again.

It's here that XSLT would come in handy, since this language is meant precisely to convert an XML document to another kind of XML, and since XHTML is XML, we can also use it to create Web page snippets. I haven't studied XSL(T) since 1999, though, and doubtless there are many minor compatibility issues that I'd have to solve before getting a workable demo. We'll leave XSLT for another time.
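For the curious, a stylesheet for the book list might look something like the sketch below. This is untested against the browsers of the day and assumes the element names from the XML example above; treat it as an illustration of the idea, not a working demo.

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
	<!-- Turn each <book> element into the div.book snippet we want in the page -->
	<xsl:template match="book">
		<div class="book">
			<h3><xsl:value-of select="title" /></h3>
			<p class="moreInfo">By <xsl:value-of select="author" />, <xsl:value-of select="publisher" /></p>
			<img src="{cover/@src}" />
			<p><xsl:value-of select="blurb" /></p>
		</div>
	</xsl:template>
</xsl:stylesheet>
```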


Advantages

The most important advantage of XML is that it's the most easily readable format for other humans.

A secondary advantage is that XML has been around for quite a while and that many developers are already accustomed to it. Saying "I'd like your server side script to return an XML document" won't cause raised eyebrows, while saying "I'd like the script to return a JSON object" might.


Disadvantages

The JavaScript required to insert the data into the HTML page is quite verbose. I wrote a little convenience function getNodeValue() to get rid of the most verbose and boring part of the script: reading out the text in an XML tag. Nonetheless the script won't ever win a beauty contest.

HTML snippets

The second, and maybe the most interesting, output format is an HTML snippet. Note that I call it a snippet, since we do not receive a complete HTML page. Instead, we get exactly that HTML that has to be inserted into our <div>.


The server returns this HTML snippet:

<div class="book">
	<h3>JavaScript, the Definitive Guide</h3>
	<p class="moreInfo">By David Flanagan, O'Reilly</p>
	<img src="/images/cover_defguide.jpg" />
	<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit.</p>
</div>
<div class="book">
	<h3>DOM Scripting</h3>
	<p class="moreInfo">By Jeremy Keith, Friends of Ed</p>
	<img src="/images/cover_domscripting.jpg" />
	<p>Praesent et diam a ligula facilisis venenatis.</p>
</div>
<div class="book">
	<h3>DHTML Utopia: Modern Web Design using JavaScript &amp; DOM</h3>
	<p class="moreInfo">By Stuart Langridge, Sitepoint</p>
	<img src="/images/cover_utopia.jpg" />
	<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit.</p>
</div>

The script is extremely simple: just put the responseText in the innerHTML of the correct object and you're ready.

function setDataHTML(req)
{
	document.getElementById('writeroot').innerHTML = req.responseText;
}


Advantages

The script's simplicity is the most important advantage of this method.

In addition, this format offers interesting accessibility options. We could cleverly write the server side script to build a complete, accessible HTML page that can be shown to any device. If the request happens to be made by an AJAX script, the server side script would discard all HTML except for the search results, or the AJAX script itself would search for the results.

It's of course perfectly possible to create similar accessibility features when you're working with XML or JSON, but the HTML snippet format is the easiest one for this job.


Disadvantages

If the HTML snippet contains forms, or if the receiving HTML element is a form, this method gives horrific errors in Explorer.

In addition, HTML snippets may become quite complicated. The example above isn't, but as soon as you want to use advanced CSS techniques that require more elements than strictly necessary, the snippet would have to contain extra <span>s or whichever elements you need. Thus the server side script that generates the HTML may become quite complicated.
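There's a subtler complication, too: whatever language the server side script is written in, it has to escape the data it interpolates into the snippet, or a title containing < or & will break the markup. A sketch in JavaScript for illustration only; the function names are made up and a real server side script would do this in its own language.

```javascript
// Hypothetical helper: escape data before interpolating it into an HTML snippet.
function escapeHTML(str) {
	return str.replace(/&/g, '&amp;')
	          .replace(/</g, '&lt;')
	          .replace(/>/g, '&gt;')
	          .replace(/"/g, '&quot;');
}

// Hypothetical snippet builder, mirroring the div.book structure above.
function bookSnippet(book) {
	return '<div class="book">' +
	       '<h3>' + escapeHTML(book.title) + '</h3>' +
	       '<p class="moreInfo">By ' + escapeHTML(book.author) +
	       ', ' + escapeHTML(book.publisher) + '</p>' +
	       '</div>';
}
```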


JSON

The third method is JSON, JavaScript Object Notation. Personally I pronounce it as "Jason", so that yet another ancient Greek hero enters modern JavaScript development. (And please remember that Ajax's father Telamon accompanied Jason as an Argonaut. Jason was older, and on the whole more successful, than Ajax.)

The general idea is to deliver a bit of text (a string, really) which can be interpreted as a JavaScript object. Once it has arrived, you use JavaScript's eval() method to convert the string into a real JavaScript object, which you then read out.
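Since eval() will happily run any JavaScript the server sends, a common precaution is to validate the string first. The sketch below is a simplified version of the validate-before-eval check from Douglas Crockford's json.js: blank out strings, numbers, and literals, and only eval() if nothing but JSON punctuation remains. Treat it as an illustration of the idea rather than a complete parser.

```javascript
// Simplified validate-before-eval check (after Crockford's json.js).
function parseJSON(text) {
	var cleaned = text
		.replace(/\\["\\\/bfnrtu]/g, '@')   // blank out escape sequences inside strings
		.replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']')  // blank out strings, numbers, literals
		.replace(/(?:^|:|,)(?:\s*\[)+/g, ''); // blank out well-placed opening brackets
	// Only JSON punctuation may remain; anything else could be executable code.
	if (/^[\],:{}\s]*$/.test(cleaned))
		return eval('(' + text + ')');
	throw new SyntaxError('parseJSON: unsafe or malformed data');
}
```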


The server returns this JSON string:

		"title":"JavaScript, the Definitive Guide",
		"author":"David Flanagan",
		"blurb":"Lorem ipsum dolor sit amet, consectetuer adipiscing elit."
		"title":"DOM Scripting",
		"publisher":"Friends of Ed",
		"author":"Jeremy Keith",
		"blurb":"Praesent et diam a ligula facilisis venenatis."
		"title":"DHTML Utopia: Modern Web Design using JavaScript & DOM",
		"author":"Stuart Langridge",
		"blurb":"Lorem ipsum dolor sit amet, consectetuer adipiscing elit."

The script looks rather a lot like the XML script. It does the same things; it just reads out the data from another format.

function setDataJSON(req)
{
	var data = eval('(' + req.responseText + ')');
	for (var i=0;i<data.books.length;i++)
	{
		var x = document.createElement('div');
		x.className = 'book';
		var y = document.createElement('h3');
		y.appendChild(document.createTextNode(data.books[i].book.title));
		x.appendChild(y);
		var z = document.createElement('p');
		z.className = 'moreInfo';
		z.appendChild(document.createTextNode('By ' + data.books[i].book.author + ', ' + data.books[i].book.publisher));
		x.appendChild(z);
		var a = document.createElement('img');
		a.src = data.books[i].book.cover;
		x.appendChild(a);
		var b = document.createElement('p');
		b.appendChild(document.createTextNode(data.books[i].book.blurb));
		x.appendChild(b);
		document.getElementById('writeroot').appendChild(x);
	}
}


Advantages

The most important advantage is that JSON circumvents JavaScript's same-source policy, if you import the JSON file as a new <script> tag. See Simon Willison's example for the gory details.

JavaScript does not allow you to access documents (be they XML or HTML) that come from another server. However, if you import a JSON file as a script tag you circumvent this problem, and any JSON data can be imported into any website. It depends on your business goals whether this is a Good or a Bad Thing, but right now it's the only data format that allows unrestricted access.
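The script-tag trick might look something like the sketch below. The service URL and the name of the callback parameter are hypothetical; real services each have their own conventions. The server is expected to wrap its JSON in a call to the named global function.

```javascript
// Build the URL for a script-tag import; the 'callback' parameter name
// and the service URL are hypothetical examples.
function jsonUrl(base, params, callbackName) {
	var pairs = [];
	for (var key in params)
		pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
	pairs.push('callback=' + callbackName);
	return base + '?' + pairs.join('&');
}

// The server wraps its JSON in a call to this global function.
function handleBooks(data) {
	// read data.books exactly as in setDataJSON above
}

// Importing the script tag fires the request; no same-source restriction applies.
function importJSON(url) {
	var script = document.createElement('script');
	script.src = url;
	document.getElementsByTagName('head')[0].appendChild(script);
}
```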

A secondary advantage is that scripts for JSON data are slightly simpler and slightly more in line with the rest of the JavaScript language than scripts for XML data.


Disadvantages

The most important disadvantage of JSON is that the format is very hard to read for humans, and that, of course, every single comma, quote, and bracket should be in exactly the correct place. While this is also true of XML, JSON's welter of complicated-looking syntax, like the }}]} at the end of the data snippet, may frighten the newbies and make for complicated debugging.

Your choice?

These are the three output formats you can use for getting AJAX data. Although I'd love to be able to say that one of them is "the best", I think choosing the right format depends on the circumstances, and not on any theoretical musings.

Nonetheless, let's take a stab at finding "the best" format. I have four questions for you:

  1. Can you think of another output format?
  2. Which output format did you use in a practical, commercial AJAX application? (Demos and the like don't count)
  3. Will you switch to another output format in the future? If so, which one and why?
  4. Can you think of other advantages or disadvantages of the three formats?

My own answers are:

  1. No.
  2. Mainly XML documents, a few HTML snippets.
  3. I'm going to study JSON carefully and might switch to it for an unrestricted access application I have in mind. Nonetheless I feel that XML remains the best overall format for the time being, mainly because people are used to it.
  4. I wrote down all advantages and disadvantages I could think of.

This is the blog of Peter-Paul Koch, web developer, consultant, and trainer. You can also follow him on Twitter or Mastodon.




Comments are closed.

1 Posted by Garrett Dimon on 17 December 2005 | Permalink

With the exception of the Internet Explorer problems, which seem to have reasonable workarounds, HTML snippets seem to me the best starting point.

However, another advantage to the XML format is that it could easily be used for traditional web services as well. So the flexibility there seems nice.

However, if the only purpose is for AJAX, it seems that putting the business logic and corresponding presentation logic on the server-side makes more sense.

In that scenario, the JavaScript is simply performing the dynamic part of the effort and doesn't get mixed in with any of the presentational information.

Of course, the choice probably depends most on the context of your solution.

2 Posted by StormSilver on 17 December 2005 | Permalink

I think that, when you look at AJAX and where it lies in the application layers, it's pretty obvious that XML is the "best" choice. It's even part of the name. Ideally, AJAX is the "controller" in an MVC-style application, or at least a piece of the controller. So communication with the server, then, should be abstract, clean, and codified: all the things that XML stands for. It helps to modularize things, too, because then if you want to change up the interface you only have to rewrite the JavaScript, and you never even worry about your backend AJAX handlers. Plus, as stated, XML is common, everyone knows it. When you start adding other pieces to your application in other languages that don't run in a browser, you don't have to worry about the fact that you wrote your output in XHTML and that's only renderable in a browser. It's eXtensible.

That's ideal, that's theoretical. In practice, it takes a lot of code to do pretty simple things with XML, as you said. I agree that the choice is situational: When practical, XML is the golden standard. In my applications, I've used both XML parsing and straight XHTML replacement simultaneously. XML is great for data retrieval, but you can do other, more convenient things using innerHTML.

3 Posted by Michael Schuerig on 17 December 2005 | Permalink

One advantage HTML snippets have over the two other formats is that they allow you to keep layout code in a single place.

The layout is created entirely on the server and can be appropriately modularized. Consider this case: the app delivers an initial version of a page to the browser. Later on, elements on this page are replaced or new, similar items are added through Ajax requests.

With HTML snippets, the appropriate new markup is generated on the server by the same code that generated the initial markup. With the two other formats, only data is delivered to the browser and there scripts have to massage XML or JavaScript objects into appropriate DOM structures.

In the latter case, server-side and client-side code basically duplicate the same logic in two places and have to be kept in sync. With HTML snippets, this problem does not exist.

4 Posted by Tim Connor on 17 December 2005 | Permalink

JSON is the greatest thing since sliced bread for AJAXy purposes. XML is a bit too much of a holy cow, I think.

StormSilver - many of those arguments work for JSON too. Well, not "It's even in the name", but that is hopefully only a humorous point - please tell me you don't make technical decisions based on reasons like that (JavaScript is in the name too, and JSON is just JavaScript notation).

The "obvious" choice often precludes the right one. If something is overly "obvious" it is probably the result of incorrect assumptions or an intellectual laziness of going with what one knows. JSON is just as arbitrarily abstract as XML. It just happens to fit a different standard for formatting that has some nice side effects (and is more succinct, which works in some cases where the verbosity of XML has no benefit, and some detriments).

PPK, you can use a JSON .js parser, if you want to avoid the eval call. I know it seems ironic to parse out JSON in javascript, but it's an option for those who think eval == evil for security purposes (this is client-side, though, so....)

Despite my opening line, I'm not saying JSON is always > XML, just that there is no "obvious" "must be the right answer" answer for all cases. Different formats have different advantages and people should try to look into things before blowing them off.

5 Posted by grumpY! on 17 December 2005 | Permalink

JSON will likely take off now that Yahoo has documented a hack to get around the browser security model with its "callback" APIs. We are about to go down a slippery slope of JavaScript security issues.

6 Posted by David Richardson on 17 December 2005 | Permalink

**It's even part of the name**

That's a tautology. The XMLHttpRequest method can be used in synchronous mode - do we then refer to SJAX?

JSON is a dialect of YAML, a much more human-readable format. For over-the-wire applications, XML is needlessly chatty. All those closing tags.

The sole advantage of XML is (sometimes) in parsing for random access. If you access the data as a stream then a YAML format is easier. My experience is that the need for random access to data nodes is much less common than most people think.

And the bandwidth and RAM savings by going to a YAML format (think slow wireless devices or a large number of users hitting your server, each holding an XML fragment in RAM) can be substantial.

7 Posted by Weston Ruter on 17 December 2005 | Permalink

I side with using XML as the output format. Specifically, I use XML-RPC as the transport protocol for my AJAX applications, thus I use XML for both my server's responses and my client's requests. I have used XML-RPC to power two very successful database manipulation projects.

I chose XML-RPC because it is a standardized protocol. I wanted to make the requests to and responses of my applications accessible to others. I also chose XML-RPC because there are both client-side and server-side libraries available. These libraries make passing data between the server and the client as easy as passing an argument to a JavaScript function on the client and returning arbitrary and language-native data from a function on the server. The XML-RPC transport protocol, then, is mostly transparent to the application in which it is utilized.

The main disadvantage is the verbosity of the XML used, I suppose. Another disadvantage, which I have not yet dealt with, is having to send XML data as escaped CDATA, which makes using XML-RPC look quite similar to using HTML snippets. Using SOAP would eliminate this problem, but the verbosity of SOAP is many times greater than that of XML-RPC, both physically and conceptually.

8 Posted by Georges PLANELLES on 17 December 2005 | Permalink

I use my own compact format (compared to XML). It contains orders and parameters which may be pure HTML or other parameters depending on the order.

Orders are like 'display this javascript alert', 'hide this object', 'display this HTML in this object', 'show this menu', 'clear all fields', 'put this value in this field' and so on.

Orders are separated by a special character.

When returning from XMLHTTP, I use Javascript to analyze and separate orders and then execute them each after the other, using the provided parameters.

Sure it is proprietary, but it is my own one and it is powerful and extensible (not difficult to add new orders).

9 Posted by Brian LePore on 17 December 2005 | Permalink

I haven't played around with this much, but I found that converting responseText to an array of DOM elements and then inserting them into the document works around IE's problems with forms. My test case: Works for me.

My script converting a string to DOM elements is probably not the best (it does allow some invalid HTML), but I never could find one on the net so I wrote it myself. I am not that experienced in JS, so a more experienced coder than I could probably streamline it further.

10 Posted by Jim Ley on 17 December 2005 | Permalink

The biggest advantage of JSON over XML is performance: it is orders of magnitude faster than XML in all implementations, so in any situation where you're returning serious amounts of information JSON is the most appropriate method.

On the human readability, your example above is somewhat unfair, as your JSON is overcomplicated, containing more information than is actually used in your script:

title:"JavaScript, the Definitive Guide",
author:"David Flanagan",

is all that is necessary; all the extra books, book is just obfuscation that makes it difficult to read, as do the extra quotes (which are strictly part of JSON in Crockford's specification; however, they are not required by the true JavaScript object format, so they can be safely ignored as you're only delivering this format to JavaScript).

Now personally I find this much clearer, the information to boilerplate ratio is much higher.

To answer your questions, 1. TSV,CSV, text; 2. JSON, HTML, other, XML in that order mostly JSON, 3. stay with JSON, 4. XML, JSON have poor error recovery, HTML wins here, XML is slow, HTML is bandwidth heavy.

11 Posted by Michael Mahemoff on 17 December 2005 | Permalink

It seems the most common output format, with the possible exception of HTML, is a custom plain-text message, e.g. a comma-separated list or whatever.

To the commenter who mentioned the JSON parser, that works if you get the JS locally, e.g. with XHR. But it doesn't apply where you want it most - for cross-site JSON. With cross-site JSON, you have to put it in a script tag, which will eval() it automatically. So there's apparently no way to clear the risk of running cross-site scripts if you use cross-site JSON.

12 Posted by Dante on 17 December 2005 | Permalink

What about plain .txt files? They're lightweight, simple to read, and easily parsable. For each line you can have something like:

book title|book publisher|author

Not as human-readable as XML, but a lot easier to parse.
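The pipe-delimited format suggested above is indeed trivial to parse; a minimal sketch (the field order title|publisher|author is taken from the comment, and the function name is made up):

```javascript
// Parse pipe-delimited lines of the form: book title|book publisher|author
function parseBookLines(text) {
	var books = [];
	var lines = text.split('\n');
	for (var i = 0; i < lines.length; i++) {
		if (!lines[i]) continue;            // skip blank lines
		var fields = lines[i].split('|');
		books.push({ title: fields[0], publisher: fields[1], author: fields[2] });
	}
	return books;
}
```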

13 Posted by Doeke Zanstgra on 17 December 2005 | Permalink

I can think of another format: JavaScript embedded in HTML. Gmail is using it. I have used it in a commercial application (a CMS) and a co-worker has used it in an order entry system for a major Dutch telecom operator.

I needed something that was quick and easy to implement and I decided I didn't want to use a library or framework, since I didn't have time to evaluate it. For my co-worker, the most important reason was browser compatibility, as he was unsure whether every browser used by mobile phone shops supported the XmlHttpRequest object (we found some weird compatibility problems: threading issues with posting to IFrames, and exceptions not caught with try/catch when using eval()).

Both our calls are on the level of: save this record; look up street and city for this postal code and house number. I.e., RPC calls. The HTML we both kept client-side.

But sometimes it's better to solely generate the HTML server-side.

I would like to know how many people write AJAX applications from scratch, and how often you need to AJAXify an existing web-application. I bet, the last one accounts for the majority of hours billed...

(and please, don't use the XmlHttpRequest object in synchronous mode: it freezes your browser).

14 Posted by Jehiah Czebotar on 17 December 2005 | Permalink

PPK: I understand your point about JSON data being slightly less readable by humans compared with XML. But one point that I think you skipped over is that using the different JSON encode functions for various languages (PHP, Java, Python...) allows you to construct your data in the objects/notation of your server-side language of choice, then just translate them into JavaScript-native objects.

Let's be real. Rarely are you going to manually type objects to be returned, it is almost always going to be server generated data that you are returning. And generating that data in a format native to the server side language is great. In fact JSON format is often close to a toString() type operation of native objects.

I wrote the JSON implementation for ColdFusion, so I'm quite possibly biased.

15 Posted by Tino Zijdel on 17 December 2005 | Permalink

I have actually used a JavaScript implementation of PHP's unserialize() to build an object server-side and send it as a serialized string, then use my JS unserialize to convert it into a JavaScript object.
Basically this is quite similar to JSON, except that it uses the PHP serialize format instead of JS object notation and thus needs a small (30-line) function for the conversion (no eval!).

16 Posted by James Burke on 17 December 2005 | Permalink

I have used/will use full JavaScript (including functions, which are not in JSON).

It is important to differentiate what you are getting via AJAX.

If just Model data, then JSON is best, since you probably want to keep the data around for multiple requests and user actions. Since the page's controller is in JS, it is best to have the model data in JS form. You can fetch the Model data with XHR or via SCRIPT tags. If using SCRIPT, then you just need a way to know when data is loaded/errors occur. My attempt at that:

If the request is for View pieces (HTML elements that will be combined with the model data), then fetch the View pieces as HTML snippets that have been JS-encoded inside JS functions. That way, I can pass a JS model structure to the function to combine the HTML View with the model data. Take the return String from the function and use innerHTML or DOMParser to inject into DOM. The trick there is to allow the developer to make HTML snippets that get converted to proper JS functions. Tagneto has tags to do that.

Combining Model and View data into HTML snippets doesn't allow for the most flexible architectural design. See for reasons.
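The "View piece as a JS function" idea described in the comment above can be sketched roughly as follows; the function name and the model fields are hypothetical, taken from the book example earlier in the post:

```javascript
// A View piece encoded as a JS function: combine model data with an
// HTML template string and return the markup for innerHTML injection.
function bookView(model) {
	return '<div class="book"><h3>' + model.title + '</h3>' +
	       '<p class="moreInfo">By ' + model.author + '</p></div>';
}
```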

17 Posted by Justin Perkins on 18 December 2005 | Permalink

1. No, me neither.
2. XML makes the most sense to me and is my preferred method for transferring data from a server. Web services already output XML, so connecting AJAX to web services on your server is trivial (from a server-side perspective).

Callback methods on the server should not (imo) be custom written for AJAX, they should be service oriented and therefore available for anybody to consume, not just you and your AJAX-powered page.

3. I don't think I will switch to another format, unless I'm in a project where I don't have a choice.

4. JSON is cool, but it requires custom written data processing geared solely for AJAX. It's too specific for my tastes. What if you have a search method that returns results in the JSON form, then somebody comes along and tells you they are publishing their API and the search method should be publicly available. JSON won't work anymore.

The biggest disadvantage of all with JSON is eval(). How can you guarantee what is in that string you're eval()ing and how can you stop errors from happening (wrapping in a try{} isn't fair)? You can't.

18 Posted by Tom Hickey on 18 December 2005 | Permalink

I prefer to return JSON, mainly because I would rather deal with the returned data as objects (much as you would for data created on the page) instead of having to use DOM methods to deal with the data (which just feels clunky).

I do not like the idea of returning HTML snippets for general data retrieval, and would really only use that for small once-off purposes (e.g. short return messages).

In general, using XML and JSON allows you to retrieve various data regardless of how it will be used on the page, whereas using HTML snippets is directly related to how it will be used on the page. If I am retrieving the data for the list of books, whether they will be displayed in an unordered list, as options in a select box, or as divs & paragraphs (as above) should be independent of the data itself.

If you keep the data separate from its use on the page, it will allow you to use the same request in a variety of ways and will also improve your benefit from being able to cache requests.

19 Posted by Sam Minnee on 18 December 2005 | Permalink

Some comments have been made about XML being "obviously the best" choice for fitting with the way the architecture "should" be put together. This is obviously a matter of opinion, since there are many ways to skin a cat, and not all of them are hacks.

Personally, I find that the snippet + innerHTML approach fits best with degradable Ajax techniques: you can use the same templating system for both your non-Ajax templates and your Ajax snippets. In my own work, I've set up a way of defining a region of a template to be an Ajax snippet.

It lends itself very naturally to building a 'traditional' web-app, and then overlaying the Ajax functionality.

Admittedly, I haven't done enough with forms to run into the IE problems described; so the snippet-solution is no panacea.

Additionally, if you've got bloated / repetitive HTML, a JavaScript solution might be more responsive due to smaller downloads. However, in my experience lately HTML is usually pretty concise if you're clever with your CSS.

20 Posted by minghong on 18 December 2005 | Permalink

Another output format? Yes, what about CSV (for tabular data)? I did write a CSV parser in JavaScript and it works pretty well. The CSV files are static (not generated from server-side) though.

Otherwise, I'll stick with XML.

21 Posted by Jaffa The Cake on 18 December 2005 | Permalink

The HTML snippet is what I'd do for simple projects. I know innerHTML is evil, and I'd much rather get the HTML snippet as XML and copy the parent node across into my document, but as I've experienced (and as has been documented here) you get into a world of browser incompatibility.

22 Posted by Gerben on 18 December 2005 | Permalink

Couldn't application/x-www-form-urlencoded (RFC 1738) be a nice output format?
It's already used to send data to the server. Why not use it to receive data from the server?
It's not that appropriate in this case, where 'huge' amounts of data are sent, but in cases where you just need to send a few variables I think it's a nice (standard) alternative.

Another alternative is to generate an HTML snippet out of the XML using XSLT. Probably not a good idea due to lack of XSLT support in some browsers.

PS: if you just add a root element around your HTML snippet, it's XML too. This way it is possible to (in the future) change the way you output the info without changing anything server-side.

23 Posted by graste on 18 December 2005 | Permalink

I like JSON, but would prefer XML, as it makes simply more sense when you speak about web services that are used by perhaps more than your simple application in the future.
HTML is probably nice for quick tests or small applications or prototypes. JSON or CSV could be the format of choice when you only use AJAX in your applications' pages and no public API etc. will ever be published.

What I wonder is: What is the best way for handling separation of concerns when using AJAX? How does a (CSS-) designer know which HTML is generated or how does the (HTML-/JS-) client developer know what CSS to use or what HTML to generate? That's a problem for team based development of AJAX-enabled applications, I think. A whole lot of communication is needed. Does anyone have a simple solution to this? Templates anyone? ;)

24 Posted by Rube on 18 December 2005 | Permalink

I've always used HTML snippets primarily because on the backend I can use the same code to generate AJAX responses and their non-AJAX equivalents. Using XML or JSON, I would still need to be able to generate the same HTML snippet. This not only doubles my work but means that my HTML creation logic is now in two different places (in the javascript for AJAX capable browsers and in my server side code for those that don't support AJAX).

I used to think that returning HTML snippets violated separation of concerns. I don't believe that anymore. Changing the HTML snippet the AJAX request returns should be no different than changing any other template on your site.

25 Posted by Toby Elliott on 18 December 2005 | Permalink

A quick correction - JSON is not the default output format for the Yahoo! Web Services. You need to explicitly request it, or you'll get the standard XML.

There are other potential output formats. And that's all I'm saying on that at this time.

26 Posted by Justin Perkins on 18 December 2005 | Permalink

I mentioned earlier that XML would be the best solution, but in thinking about this a little more, HTML snippets could well be considered XML, right?

As long as you are writing valid XHTML, I don't see anything different (other than the XML prolog) about the "XML" method.

27 Posted by Shawn Brown on 18 December 2005 | Permalink

Using JSON can be more elegant than using XML in some instances.

I'm running a remote hosted JavaScript/"Ajax" service that loads JSON asynchronously and uses the *same* JSON objects with its fallback method (when asynchronous loading is unsupported). The setup works like this:

try {
    // preferred method
    // ...load from URL asynchronously (if needed, throw an error explicitly)...
} catch (e) {
    // fallback method: save queryURL as a variable and refresh the page...
}

Then when the page loads, an init-loader function looks for a fallback queryURL. If one is found, the JSON object is loaded with a document.write("<script src=...") during the initial page load. Of course, XML alone can't be used this way.

To take advantage of this, an app needs to have easy-to-save states.

The Congressional Directory "Lookup" (a nonprofit project) uses this technique.

This setup is doubly important in my case because the app acts as a 3rd party data supplier. The cross-site nature of it requires the use of appended script tags or its fallback method. And JSON fits these needs quite naturally.

Incidentally, users manually trigger Lookup's fallback when they click on a given result's "permalink".

28 Posted by Ron Derksen on 19 December 2005 | Permalink

Until now, I've only used plain text and XML as output format, but after having tested a few things, I think it's very well possible to use XML/XSLT on the client. Imho, that makes for the best combination with the XHR object, because it can be made to work in IE and Mozilla, the two most prevalent browsers for AJAX apps, I think (correct me if I'm wrong!). Other than that, it also allows the same sources to be used in other circumstances, which might also be a bonus.

29 Posted by kirinyaga on 19 December 2005 | Permalink

Like others, I have already used PHPON (the same idea as JSON, but with the PHP serialization format).

Anyway, I think it depends on what the data are and where they come from. If it is really pure data being pulled from the server, XML is the best choice because everyone is using it. If the exchange is more like an RPC, I think it's best to use a more programmatic format, in this case JSON. You don't use XML for parameters and return values inside your scripts, right? Why would you use it when the function happens to run in a remote location?
When we talk about a human-readable format, what we really mean is a human-editable format. If the data are generated by a script, it is pointless for them to be human-editable; it is much better to use a format optimized for programs.

I still remember, not so long ago when processors were much less powerful, one application where the main bottleneck at server start was parsing the configuration files to initialize each of the 200 sessions our app was loading. And that was a human-readable format optimized for program parsing... Even now, every time I load an XML parser I shudder at the CPU and memory numbers I see.
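The RPC point above can be sketched roughly as follows. This is a minimal illustration: all names (encodeCall, decodeResult, "getBooks") are made up, and the now-native JSON object stands in for the json.js library a 2005 page would include.

```javascript
// Client side of a JSON "RPC": wrap the method name and parameters
// in a JSON envelope, and unwrap the server's JSON reply.
function encodeCall(method, params) {
  // The request body that would be POSTed to the server.
  return JSON.stringify({ method: method, params: params });
}

function decodeResult(responseText) {
  // Assumed reply shape: {"result": ...} on success, {"error": "..."} otherwise.
  var envelope = JSON.parse(responseText);
  if (envelope.error) {
    throw new Error(envelope.error);
  }
  return envelope.result;
}

// Building a request body:
var body = encodeCall("getBooks", { topic: "JavaScript" });

// Handling a (simulated) server reply:
var books = decodeResult('{"result": [{"title": "DOM Scripting"}]}');
```

Compare this with the XML equivalent: no document, no node-walking, just ordinary JavaScript values on both ends.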

30 Posted by Dave Johnson on 19 December 2005 | Permalink

Jim Ley mentioned that "The biggest advantage of JSON over XML is in performance, it is orders of magnitude faster than XML in all implementations, so in any situation where you're returning serious amounts of information JSON is the most appropriate method."

Orders of magnitude??? This is simply not true! Check out my post comparing XSLT and JSON performance in IE and FF. The results from comparing XML DOM vs JSON performance should be up shortly.

I should also mention that XSLT can have a very concise implementation in JavaScript and can be quite fast. However, the biggest benefit of using XSLT is that you can either
a) run it on the server to generate HTML snippets, or
b) run it on the client to enable things like client-side sorting and filtering.


31 Posted by Chris on 19 December 2005 | Permalink

1. No
3. No
4. JSON is light and uses less bandwidth; XML is bloated.

Jim, JSON is actually quite easy for error handling:

At the top of your server script, set an error flag; when the script finishes and is about to encode its data to JSON, clear the flag. In your JavaScript, when readyState == 4, check the error flag inside a try/catch. This covers two kinds of errors: an expected one signalled by the flag (e.g. no DB results), and an unexpected one, because if there is a parse error in your server script, the try/catch will handle it.

32 Posted by Jim Ley on 19 December 2005 | Permalink

Dave, XSLT is a red herring. If you're doing something that can simply be transformed via XSLT into the output, delivering HTML from the server will be simpler, more reliable (XSLT can be blocked independently of xmlhttp in IE) and supported by a wider range of user agents.

So the only time you should be delivering JSON to a client is when you want the information available to script, not simply dumped to the screen. Every COM call into an XML document is expensive; a rough benchmark:

xml = new ActiveXObject("MSXML.domdocument");
d = new Date();
for (i = 0; i < 10000; i++) {
    // ...read the data through XML DOM (COM) calls; body elided in the original...
}
e = new Date();
for (i = 0; i < 10000; i++) {
    // ...read the same data from native JavaScript objects; body elided in the original...
}
f = new Date();
document.write((f.valueOf() - e.valueOf()) + ' - ' + (e.valueOf() - d.valueOf()));

for me gives many orders of magnitude difference, of course if you're not accessing the data as data, then why process it on the client at all?

33 Posted by Luis on 19 December 2005 | Permalink

I didn't run a performance test comparing JSON vs XML, nor DOM vs XSLT, but I agree with Dave that they shouldn't be so different.

Google offers an XML/XSLT JavaScript implementation (named Ajaxslt) that I'm working with; I think it's the same one they're using in some parts of Gmail (if not everywhere). I'll post the results here when I have them, if you're interested.

The way I see it, especially in enterprise development (which is how I work here), the UI layer needs well-separated areas of concern: a UI coder working with JavaScript/DOM for UI dynamics, and a UI designer working with XML/XSLT/HTML/CSS to write reusable snippets. The server side would serve XML only, which you can consume from JavaScript AJAX, Macromedia Flash, another external application (web service), and so on.

The only thing I'm not sure about: if you leave the rendering process client-side, what about mobile? Is there a way mobile browsers can render the XML themselves? If not, rendering has to happen server-side, which makes this approach useless (from a centralized, reuse-oriented perspective).

That's my two cents, and excuse me if my English is bad... =)

34 Posted by Jim Ley on 19 December 2005 | Permalink


for me "there has been an error" which is all try/catch and error flags give you is not error handling, that's error catching, it doesn't give you an intelligent way to tell what sort of error.

Basically recovering from a json syntax error, or an XML validity error there's nothing you can give to the user - in the HTML delivery method they may well get some information...

35 Posted by Eric Schwertfeger on 19 December 2005 | Permalink

XML makes sense if you're talking to something that won't be accessed only by your AJAX application.

For just about everything else, I use JSON. Even for HTML snippets, I make the text of the snippet part of the object sent, and insert it. That way I can return more sophisticated return codes without having to tinker with the HTML to be inserted. It also makes a multiple-action/response-per-request model easier to deal with.

I've never had problems with JSON that failed to parse, but that's because, unless it's a trivial response, I use a library on the server side to generate the JSON rather than trying to roll my own by hand.
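The snippet-in-an-object pattern described above can be sketched like this; the field names (status, message, html) are made up for illustration, and the server response is simulated as a string:

```javascript
// The server wraps the HTML snippet in a JSON envelope alongside a
// machine-readable return code, so richer responses never require
// touching the markup itself. Simulated server output:
var responseText = '{ "status": "ok", "message": "", ' +
                   '"html": "<p>3 books found.</p>" }';

var response = JSON.parse(responseText);
if (response.status === "ok") {
  // In a browser, this is where the snippet would be inserted verbatim:
  // document.getElementById("writeroot").innerHTML = response.html;
}
```

A "status" other than "ok" could carry an error message to display instead, without the client ever parsing the HTML.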

36 Posted by Milo van der Leij on 19 December 2005 | Permalink

I have used JSON (without knowing that that's what it's called nowadays) to retrieve new options for a SELECT element.

If it ever becomes stable enough to use, I would prefer it over XMLHttpRequest. And since that only works with XML, I may switch to XML. However, I may also just use <json>[old JSON code]</json>.

37 Posted by Chris on 19 December 2005 | Permalink


The try/catch is only there to handle a parse or fatal error from the server script, in which case you alert the responseText, which shows you the parse or fatal error from the server script. For example, you call an undefined function or you have a parse error on line 101: the script dies then and there, without being able to encode your data object into JSON and echo it. JavaScript then tries to decode the responseText but fails, catches the exception, and alerts the responseText, showing the actual server-language error.

The error flag goes inside the try, not the catch; it marks a handled/expected error as opposed to an unexpected one. For example, your server script makes an SQL query but the table does not exist: the server script has no fatal error, but the database returns one. You handle this in your script and return the error to JavaScript, keeping the error flag set. Despite the back-end error, your JSON object is still intact and can alert the proper database error without the JavaScript dying.
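A minimal sketch of this two-level scheme, with the server response simulated as a string and all names illustrative (handleResponse, the "error"/"payload" fields); JSON.parse stands in for whatever decoder the page includes:

```javascript
// Level 1: JSON.parse throws if the server died mid-script and emitted
// a raw language error instead of JSON. Level 2: an "error" field inside
// valid JSON reports expected failures such as a missing table.
function handleResponse(responseText) {
  var data;
  try {
    data = JSON.parse(responseText);
  } catch (e) {
    // Fatal server-side error: responseText is not JSON at all,
    // so show it verbatim (it contains the real server error).
    return { fatal: true, detail: responseText };
  }
  if (data.error) {
    // Expected, flagged error: the JSON envelope itself is intact.
    return { fatal: false, detail: data.error };
  }
  return { fatal: false, detail: null, payload: data.payload };
}
```

Three kinds of responses, one code path: a dead server script, a flagged back-end error, and a normal payload.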

38 Posted by Peter Siewert on 19 December 2005 | Permalink

I want to add my two cents as well. My answers are:
1. As long as the server and client agree, the actual format of transmission can be anything. CSV, a simple proprietary text API, a Base64-encoded binary stream of GZIPed data. Anything.
2, 3, & 4. Asking which of the three main formats is better is like asking whether a tank or a sports car is better; it depends on the problem.
XML is great as a language-agnostic way of transmitting information without any assumption about what the other side is going to do with it. If the server is a public web service sending data to JavaScript applications, Perl scripts, and whatever else someone codes up to use the information, then XML has a great advantage.
HTML is great as a simple way to get computed information from the server to a client browser which may be JavaScript-capable. As PPK stated, it is very easy to write code that will gracefully fail on older browsers using this method. In my opinion, traditional pages where the browser is only used as a viewer in the MVC paradigm are best served with this format.
JSON is great if a full JavaScript application is running on the client. JSON formatting is good for initializing a model or passing messages to a JavaScript controller.

39 Posted by jes5199 on 20 December 2005 | Permalink

Is it possible to use HTML snippets without using the innerHTML property?
It doesn't seem to be: if I appendChild a cloneNode from my responseXML, the browser just refuses to render it until I do something like "parent.innerHTML = parent.innerHTML". Crazy.

40 Posted by Ryan Cannon on 20 December 2005 | Permalink

XML (including XHTML) seems like the most future-proof way of transferring data. Although JSON may be more lightweight, most technologies have built-in methods for parsing and navigating XML documents, making your server's response more flexible. Although sending and receiving JSON seems more slick, it also seems like an exercise in wheel reinvention: why rebuild functionality that already exists?

I just completed a project that used an application/xhtml+xml server response, transforming various RSS and Atom feeds server-side with XSL. Server-side XSL is wonderful, and could theoretically generate JSON or anything, really. I don't feel like I can trust client-side XSL, though; I think it goes against the idea of thin clients too much.

41 Posted by kirinyaga on 20 December 2005 | Permalink

Performance at the client is almost beside the point. Even if XML/XSLT/whatever is 100 times slower than JSON, it will still be a sub-second delay: unnoticeable, and thus pointless to optimize.
However, if you use XML in your messages, your server script has to understand it and load an XML parser, and under load this makes a big difference to the server. That's why going for PHP serialize() instead of JSON can sometimes be even better.

The real question is: do you need the JS variables, or just the data? And who or what will call the server script? You may want to use SOAP simply because you can implement a call to it from almost any language in a couple of mouse clicks.

42 Posted by Robert Nyman on 20 December 2005 | Permalink

Very interesting article!

I have to say, though, that it's really a shame that you leave XSLT out of the discussion, since it is far superior to JavaScript when it comes to handling and selecting data.

To answer question no. 2, I've implemented it in a very large commercial application, and the approach I used there is what I believe to be the most optimal one:

1. I request a URL on the server that has a stored procedure (or Web Service) that returns XML.

2. The XML is then transformed on the server, to avoid XSLT processor differences in different web browsers and to get a more widespread support. In turn, the transformation returns HTML.

3. The HTML is inserted into the DOM through JavaScript.

In my opinion, this is the safest and most scalable way to do it. Also, you can then re-use the XML for a lot of other purposes, while still returning only HTML to the client-side script.

43 Posted by zwetan on 20 December 2005 | Permalink

1. yes, my format: eden (ecmascript data exchange notation)
2. eden
3. no
4. see below

First I think we have to differentiate two uses of data:
- object data
- document data

But if you have a way to obtain a more advanced object structure,
you can easily generate the document data from the object,
and then you win on both sides.

JSON is good in its simplicity, but it has some limitations:
- it does not handle arbitrary object graphs
- you cannot serialize/deserialize custom objects
- you access its data almost as if it were XML nodes

Applied to your book example, eden would give something like this:

book0 = new Book( "JavaScript, the Definitive Guide", "O'Reilly", "David Flanagan" );
book0.cover = "/images/cover_defguide.jpg";
book0.blurb = "Lorem ipsum dolor sit amet, consectetuer adipiscing elit.";

book1 = new Book( "DOM Scripting", "Friends of Ed", "Jeremy Keith" );
book1.cover = "/images/cover_domscripting.jpg";
book1.blurb = "Praesent et diam a ligula facilisis venenatis.";


The advantage is that client-side, when you receive this object,
you obtain an instance of the Book constructor, not just a basic object,
and so if your Book object defines a toXML/toHTML method, you can render
directly in whichever format you need.

44 Posted by Alper on 20 December 2005 | Permalink

1. I have seen one person remark on using plain text as a format, and I do prefer that for single-valued results of actions returning '0', '1', or a simple id.

I would not use plain text to transmit any kind of structured data (comma-separated or otherwise). XML/JSON and the like were made to solve that problem.

The difference between XML and JSON is moot: conversions between the two are extremely simple, and I prefer JSON's ease of handling. Yahoo!'s move of adding 'output=json' to their APIs is a great example of this.

The HTML approach may have some appeal, but the complexity you're talking about is properly called tight coupling, and that is a design no-no.
It might come in handy for single pages, but from a web-applications and web-services point of view its usability is zero.
The server side becomes overly complicated, and enabling access to your data and services for other clients becomes difficult.

45 Posted by Dave on 20 December 2005 | Permalink

Maybe this is a really terrible method for some reason unknown to me, but I like to return the response text as a string of JavaScript commands, which I then pass to the eval() function. I have a class which generates the necessary JavaScript commands. This enables you to do a lot more than just change HTML, and lets you change things all over the place with a few simple lines of code. Hasn't failed me yet...

46 Posted by Dave on 20 December 2005 | Permalink

Additionally, using the JavaScript eval() method enables you to see ALL the output of the requested script for debugging purposes, i.e. you return [OUTPUT TEXT][_SEPERATOR_][JS COMMANDS],

then write the "output text" to the innerHTML of a span or div.

I'm not sure the other methods provide such a simple way to debug the scripts themselves.

47 Posted by Scott Hughes on 21 December 2005 | Permalink

Does it strike anyone else as relevant that eval()'s potential to halt the script, and innerHTML's potential to break the DOM, leave only one good option remaining? (Well, at least it knocks out HTML snippets and JSON; several alternatives to XML that avoid these mechanisms have already been discussed.)

48 Posted by Sanjaya on 21 December 2005 | Permalink

1. I've used a different XML schema.
2. see 1.

The verbosity of XML (or JSON) rendering can be cleaned up. Below is what it looks like using some helper classes:
XE wraps DOM elements.
HtmlBuilder uses a 'stack' model to avoid all the temporaries.
I don't think I can fit the source here, but they are very small.

var books = (new XE( req.responseXML )).getChildren('book');

for ( var ibook in books ) {
    var book = books[ibook];
    var builder = new HtmlBuilder( document.getElementById('writeroot2') );
    builder.push( 'div' ).set('className', 'book');
    builder.push('h3').appendText( book.getChildValue('title') ).pop();
    builder.push('p').set('className', 'moreInfo').appendText( 'By ' + book.getChildValue('author') + ', ' + book.getChildValue('publisher') ).pop();
    builder.push('img').set('src', book.getChildAttribute('cover', 'src') ).pop();
    builder.push('p').appendText( book.getChildValue('blurb') ).pop();
}

49 Posted by David on 21 December 2005 | Permalink

PPK, is the HTML method at all useful for instances where you want data from the server that will change how multiple parts of the current page will be displayed? Your examples seem to assume that you're going to fill a div with the response, rather than process it for the component data.

I've used AJAX for several systems, and in each case I needed to process the returned data rather than dump it straight into a div. (For instance, one page used the results of the request to update the "status" color of a variety of divs, and change the content of the related "tooltip" divs.) I used XML, which was very straightforward on the server, and moderately so on the client. I think that trying to do the same thing with HTML would have just been extremely problematic. JSON would probably have been easier client-side, but harder on the server side.

50 Posted by Jehiah Czebotar on 21 December 2005 | Permalink

kirinyaga: How can you say optimizing on the client side is pointless, and then go on to point out that your choice of server-side implementation is essentially an exercise in optimization?

Also, I think it is a poor attitude to assume that client-side performance will always be "sub-second". Client computers differ in speed, available resources, etc. Speed is also affected by what transformation/processing you do, and how much of it. XSLT'ing a few thousand records will not be sub-second. Speed matters on both the client AND the server. Jim Ley was right to point out the speed difference.

51 Posted by Dave Johnson on 21 December 2005 | Permalink

Jim, you are not looking at the time it takes to call eval(myJSONString)...
This needs to be called on the JSON string returned from your XMLHttpRequest. That will flip your order of magnitude the other way around, I think, since it is quite slow. Also, using firstChild.firstChild.firstChild _may_ not be the fastest way of getting at the data in the XML DOM.

All in all, I don't understand why XSLT is getting such a bad rap: it can be used in the major browsers, it can be used on the server in other cases, your business logic is platform-independent, and it jives with XML, web services, etc. Heck, it's even good enough for Google Maps! The only thing against it is that it _can_ be hard to understand.

Anyhow, I have posted some new benchmarking data and a reiteration of why XML + XSLT (XAXT) is generally a safe bet. Even E4X should be enough to convince everyone!

52 Posted by Jonathan Buchanan on 22 December 2005 | Permalink

Scott: you can use the provided JSON parser to avoid having to use eval().

I usually do something like the following with my callbacks which handle JSON:

var response = null;
try {
    response = JSON.parse(transport.responseText);
} catch (e) {
    logger.error("Error parsing response");
}

Then, using a uniform format for indicating success and error messages:

if (response != null) {
    if (response.success == true) {
        SelectLoader.populateSelect(this.childSelect, response.options);
        this.cache[response.parentId] = response.options;
    } else {
        logger.error("Error getting child select contents: " + response.errorMessage);
    }
}

In this case, I'm populating a select with the returned data, which looks like this when successful:

{ "success":"true",
"options":[{"value":"1","text":"Town 1"},{"value":"2","text":"Town 2"}]

And like this otherwise:

"errorMessage":"Database == dead"}

JSON is just incredibly easy to use on the Javascript side of things, especially when populating or updating state objects rather than going straight to generating HTML with the data received.

53 Posted by Jim Ley on 22 December 2005 | Permalink

Dave, the eval (which is not a good way of doing it; indeed xmlhttp is rarely a good way of doing this where speed is an issue) is a small part of the overall activity. You looked at _just_ that and concluded XML was as fast as JSON; I was illustrating that looking at that alone is completely irrelevant. You've got to look at the whole picture; feel free to do a full comparison.

XSLT is not a safe bet: you should not use an ActiveX control (XSLT) in IE7 when xmlhttp no longer requires ActiveX, for example.

But mostly it comes down to this: if you're only transforming content, your application has no real client-side intelligence at all; it's just rendering different content. In that situation, deliver HTML under the KISS principle.

54 Posted by Dave Johnson on 22 December 2005 | Permalink

Jim, eval is very slow (and the JSON parser mentioned in post #52 is slower - check my blog) and eval is a large part of the activity! Otherwise how do you access a variable of type "string" as a JavaScript object?

I did look at the whole process and do a full comparison :) The paths of the data that I looked at are equivalent in terms of inputs and outputs - not sure how much more similar it needs to be?

It is clear that we are coming at the problem from different angles - I am thinking about large datasets that need to be
a) retrieved from a server
b) transformed such that they can be used in JavaScript (either eval a JSON string or just get the responseXML for using XSLT)
c) sorted/filtered and displayed in the browser *without* going back to the server
If you can afford to make another server request, then go ahead and use an HTML snippet that is generated on the server (KISS, as you say), _unless_ you want to use the data in another application that needs a standard data format like XML. I thought I mentioned all these cases in my blog post :)

You should be careful when you make statements that you should use XYZ data format with no application context - which I have tried to provide in my case.

55 Posted by Alex Lein on 23 December 2005 | Permalink

Great article, Dave; however, it is based on the assumption that you are using AJAX and JSON/XSLT/XML DOM for presentation only. I'm more concerned with JS object models and functionality.

I'm currently building a site which will use AJAX primarily to update events and maintain the structure and relationships of objects from the client to the server. Presentation is a side concern, as most of that can be loaded with simple HTML snippets or generated on the fly from my JS objects. With XML, I'd need to grab the document returned by AJAX, parse it into my own object model, and create a custom interpreter script for each object hierarchy. JSON takes care of that easily.

As a developer, I need to trade off which is faster not only on the client, but also in development time. In my JSON string, I can have calls that create custom objects (defined in an included .js file), which then change values depending on what else is sent through the AJAX request. These objects, created automatically, inherit all sorts of internal methods and properties; with XML I'd need to create them based on what type of nodes are sent back, increasing development time.

56 Posted by Dave Johnson on 23 December 2005 | Permalink

Hey Alex, I certainly agree that in your case JSON is the way to go! If you are using HTML snippets you can always take advantage of XSLT on the client or server ;)

I would be really interested in hearing more about how you are using JSON - please feel free to get in touch with me offline.

57 Posted by minghong on 24 December 2005 | Permalink

@jes5199, use importNode, not cloneNode.

58 Posted by kirinyaga on 26 December 2005 | Permalink

Jehiah: if you go from 200 milliseconds to 400 milliseconds, on the client it's just the same speed for the visitor; on the server, with a lot of simultaneous requests, it can become a real problem. That's why it's much more important to worry about performance server-side than client-side.
Now, of course, if you need 10 seconds client-side to process some data you have a huge problem, but that's more a design problem, because processing thousands of rows client-side sounds very wrong to me.
Another point: you are supposed to be in control of the server. If it's slow, you can upgrade the hardware. But that's not always true, and there is a limit to what hardware can do at reasonable cost.

And, Dave, the problem is not only about getting the data; it's more about what you do with it once you have it. Sometimes you need to manipulate the data with JavaScript, and the DOM will slow down the process where an array or another programmatic structure would have been more efficient. Sure, if you only need to display it right away, JSON is no better than XML. But if what you are doing is RPCs, I see no point (except maybe interoperability) in using XML.

59 Posted by Matt on 27 December 2005 | Permalink

I've been using JSON for a bit, after dealing with XML in a project under development. The web browser is the client and loads an interactive GUI that talks to the server.

I haven't looked back since swapping to JSON. The main argument people raise against it is the level of human readability, but once the communications are wrapped in JSON at each end, you should rarely be looking at raw JSON; I haven't for literally months. Debugging should happen in the layer above, at the objects that come in and out of translation. Creating/editing JSON by hand as a light data source is ideal in some circumstances, but even that should really be done by scripts encoding objects and caching the output.

60 Posted by Beans on 29 December 2005 | Permalink

Interesting AJAX discussion about using the XMLHttpRequest object in 4 different ways:

61 Posted by Jonathan on 29 December 2005 | Permalink

I've been using a combination of both XML and HTML for AJAX responses.

The reason I like HTML is that I can put the part I need to update (say, a table of information) into a separate server-side script, and just include it in the initial page generation. That way the data is there on page load, and when I need to update it, I just call the same server-side script as before using XMLHttpRequest.

The other reason I use HTML is speed. I've found that digging through XML, pulling out the data, and building an HTML DOM structure can get very time-consuming. I had a table of information that took ~2 seconds to build as a DOM structure, when the actual AJAX call only took 1 second. Needless to say, I switched to HTML; it's much more responsive now.

I tend to use XML for smaller bits of data that I need to update, like auto-filling form fields. Especially since I've written a 100% automatic JS class that will take an XML source, and feed it into an HTML page through data bindings defined in the HTML.

I haven't messed with JSON much, but it looks interesting. I may give it a try for my next project.

62 Posted by Eric on 31 December 2005 | Permalink

I personally have never heard of JSON before but it definitely seems like something that I'll have to look further into. It seems like a good method for getting information.

As of now I have used HTML snippets the most, followed by the other method, XML. Those have worked alright, though they always seemed awkward to me.

One thing is that people could also write a function to convert XML into a JavaScript object, though that would of course make the script take even longer and would be a waste when you could just as easily use JSON. I guess in that case you would give up speed for both human readability and simplicity.

63 Posted by Bix on 1 January 2006 | Permalink

I can't think of another output format,
but I'd have to say that XML output is largely better than the other formats, because it's designed to deal with many forms of web-based data.
But you would want to make use of XSLT in such matters, because that is its purpose; not to mention that XSLT can modify incoming XML data no matter whether the source is local or not, and it can even translate data into other, non-XML formats (as I have done).

I might switch to JSON for JavaScript-to-JavaScript communication (maybe). But then again, when (or if) browser support increases I could use only XML-based languages (XSLT, XSL-FO, XML Events, XLink/XPath, given good support) and replace JavaScript and HTML, maybe just using a server-side script as an alternative for older browsers.

64 Posted by lalala on 2 January 2006 | Permalink

Can anyone provide some resources about these "horrible internet explorer bugs" concerning forms and innerHTML?


65 Posted by Jonathan Buchanan on 2 January 2006 | Permalink


Scroll down to "Explorer - responseText and forms".

66 Posted by Sjoerd van der Hoorn on 2 January 2006 | Permalink

What I do for large amounts of data: create an output script on the server which creates an array of the form:

["item1", "item2", "item3"]

Now, when loading this data with xmlhttp functions, just use "bla = eval(data)[1]" or "data = eval(data); bla = data[1];" to get the string "item2". When you need to load large amounts of data, this way you avoid creating a big overhead by having to define object names and so on.

67 Posted by intrader on 4 January 2006 | Permalink

I am using a variation of JSON where, as in this example, the field names appear once and each book is an array of values:

{"books":[{"book":["title","publisher","author","cover","blurb"]},
[["DOM Scripting","Friends of Ed","Jeremy Keith","/images/cover_domscripting.jpg","Praesent et diam a ligula facilisis venenatis."],
["DHTML Utopia: Modern Web Design using JavaScript & DOM","Sitepoint","Stuart Langridge","/images/cover_utopia.jpg","Lorem ipsum dolor sit amet, consectetuer adipiscing elit."]]]}

68 Posted by John Hann on 5 January 2006 | Permalink

We're using all of the above. There's no one tool that always works best every time.

That said, when building modular AJAX web apps, you need to be mindful of several things: structure/layout, style, data, and/or behavior. What seems to work best in this case is (X)HTML, since you can use one request to retrieve the module's HTML structure, its layout as inline CSS, its data as XML "islands" or JSON, and any scripts in script tags.

We're pretty close to having a method perfected for modular AJAX web apps in which modules are retrieved as needed. This eliminates the delay in the initial page load since functionality is only retrieved as needed. Zimbra works like this -- although we are not privy to the specifics of their modular architecture.

For the general case, XML is great if you can restrict the browser versions (or can force users to update their MSXML library). IE 5.x browsers have some serious XML limitations since you can't be certain which version of MSXML is being used. PPK's example code will work, but common XML formats that use attributes, rather than nodes, to store properties fail miserably (attributes and whole nodes are dropped seemingly randomly) in complex, but well-formed XML.

69 Posted by Tyler McMullen on 5 January 2006 | Permalink

I have to go with the "it depends" method as well.
For larger items, I almost always go with XSLT. (Especially if there is a lot of data to process... XSLT is very fast.) But for little items, I'll sometimes just receive it as an HTML string and plug it in. On still other occasions... if I'm just grabbing data to populate a dropdown for instance, I'll return it as XML.

70 Posted by Marc on 6 January 2006 | Permalink

I seem to be in the minority, but I find JSON to be more human-readable than XML. More concise. XML has too many angle brackets for my taste.

Or maybe it's because I listened to Crockford talk about JSON today and I drank too much Kool-Aid.

71 Posted by Jacob on 6 January 2006 | Permalink

Marc - I think you're right, and I would agree with the article you posted. XML, whilst portable and extensible (blah blah), is really over the top for most things. Also, JSON may not always be the best answer either.

For the majority of my AJAX scripts, all I'm doing is loading a block of (X)HTML, and so an HTML snippet looks like the best option. For those scripts that require something else, or scripts that can't use innerHTML for some reason, then I'll use something else. I've also seen people complain that (X)HTML snippets aren't valid (X)HTML - this argument is moot, as the snippets aren't individually viewable, and you can always give them a different file extension if it makes you feel better.

Ultimately, I don't think you can define a single "best" method. If you want it to be easily read, then use XML, but if not, use something else. It's all about suitability. Make up your own if you like - most of the time it won't make any difference anyway, and it'll probably end up in less data transfer and server load.

That's my 1.13 pence :-)

72 Posted by John McMillion on 14 January 2006 | Permalink

I just finished an AJAX application that uses a C# library to return JSON from the server. The performance is simply incredible, and it returns a very large data set. I love JSON because it is just the raw data you need, but includes structure as well. I don't use eval; I can directly access the object in JavaScript from the response, with properties identical to the object class structure on the server. I've used XSLT as well and transformed it on the server with great results. The benefit of using an XML document is how many other applications can use it natively. Overall, it just comes down to the requirements of the project, but I've gotten superb performance out of either approach, even with large amounts of data. Given the choice, though, my preference is becoming JSON more and more!
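One eval-free approach (not necessarily what the commenter's C# library does; this sketch uses JSON.parse, which was standardized later in ES5 and polyfilled for older browsers by Crockford's json2.js):

```javascript
// Hypothetical JSON response body from the server.
var responseText = '{"books": [{"title": "DOM Scripting", "author": "Jeremy Keith"}]}';

// JSON.parse validates and converts the text without executing it,
// unlike eval(), so a malicious response can't run arbitrary code.
var data = JSON.parse(responseText);
var title = data.books[0].title; // "DOM Scripting"
```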

73 Posted by Allen on 30 January 2006 | Permalink

1. No
2. JSON & XML, never mixed within the same project.
3. We're having lots of meetings to decide between xml or json
4. Being part of a big company, our particular group is big on RAD, and JSON really facilitates this. That being said, we're part of a big company... so everyone wants XML just in case someone else wants to use it. It's a really big debate, since we want to maintain a standard: either one or the other, no more mixed techniques between the different developers that use AJAX. Thus far we've had 4 or 5 major projects use JSON and 1 use XML.

74 Posted by Scott Reynen on 31 January 2006 | Permalink

I use XHTML snippets for fastr ( ) because it's simple (as your code demonstrates) and flexible. If I ever have the need, I can easily read XHTML as XML, so I think the distinction there is irrelevant.

I can imagine I might use non-XHTML XML at some point, but I don't expect to ever use JSON because I don't want my back end to be locked into a JavaScript front end. XHTML and XML are both well supported in most programming languages, which I think is a large advantage (and JSON disadvantage).

75 Posted by Michael Jouravlev on 31 January 2006 | Permalink

I use XHTML fragments. It combines benefits of both XML and HTML, and allows me the following:

* To search fragment for an element or an attribute.
* To insert fragment into innerHTML object asynchronously.
* To insert fragment into a page synchronously, using dynamic inclusion, for J2EE that would be <jsp:include> action.

This way I can have a single fragment providing content for both synchronous and asynchronous use cases.

76 Posted by Chuck Bradley on 1 February 2006 | Permalink

So far, I've used exclusively XML because the application needed to handle data rather than formatted content.

Currently, I'm working on a part that includes XHTML for regularly updated tables (download only) and XML for DOM manipulation and handling of data for two-way communication.

I've avoided JSON because the Java developer is not as comfortable with JavaScript (certainly not with object notation), though he has no problems with the other formats. On the other hand, for the data handling, JSON does seem attractive because it is less verbose and doesn't require the burdensome parsing.

Has anyone encountered memory leaks with the constant creation of new anonymous objects? Does the garbage collection work as well as one would hope?

Also, do you have to use eval? John McMillion said he didn't, but I don't know how.