As part of my ongoing open source project MultiBit Merchant (MBM) I am keeping a journal of my discoveries and thoughts along the way. This one deals with why I chose the Hypertext Application Language (HAL) as part of the communication protocol for the MBM RESTful API.

Making an API that will last decades

Regular readers will recall that as part of the “genesis requirements” I had to design MBM to provide a web service API in accordance with RESTful principles. One of the promises of a design like that is that you can create an API that can last decades. Just let that sink in for a minute. Decades. Nothing lasts for decades – except maybe that accounting system written in COBOL running on a VAX machine in the sub-sub-basement. Nobody ever goes down there anymore…

Except, actually, there are APIs that have been running for decades that you take for granted every day. I’m talking about HTML: pages from its earliest history still render just fine today. “But HTML isn’t an API! It’s a markup language!”, you cry. Nope, hypertext is an API because it offers the concept of a link, which allows traversal of states. Thus, the web is just a giant state machine.

The Richardson Maturity Model

Martin Fowler has a great article about web services that describes the Richardson Maturity Model. You should take the time to read it – perhaps over lunch.

In it, he describes how web services are evolving from the “Swamp of POX” to the “Glory of REST”. This journey to greater simplicity brings to mind a Zen quote I heard a long time ago that I would like to share with you:

Before one studies Zen, mountains are mountains and waters are waters;
after a first glimpse into the truth of Zen, mountains are no longer mountains
and waters are no longer waters; after enlightenment,
mountains are once again mountains and waters once again waters.

You see, REST is simply what the web was originally intended to be, as Ryan Tomayko explained to his wife back in 2004. HTTP provides the verbs (GET, POST, PUT and so on) and URIs provide the nouns (every resource has a unique name, or identifier); what is missing is how to put it all together.

Hypertext as the engine of application state (HATEOAS)

Awful acronyms aside, the HATEOAS principle removes the coupling between the client and the server. Frequently, the first thing developers new to web services attempt when creating an API is a unified URI structure. Often this simply mirrors one aspect of the internal business domain (perhaps customer/1/order/23). Unfortunately, restricting the URI structure in this manner causes problems later, when it becomes apparent that the structure has started to drive the API, and even the underlying domain, rather than merely being a resource identifier. For example, what if a particular use case requires that the internal domain be order-driven, with non-numeric order IDs making a URI of order/EX1-45/customer/1 more appropriate?

URI structure is irrelevant to machines, and humans hardly ever need to work it out since they rely on the links presented to them. It is that linking, and the semantics behind it, where HATEOAS really shines. By following a set of semantic guidelines (see RFC 5988, Web Linking), often in the form of a rel attribute on a link structure, a resource can announce changes to the API in terms of link relations that the client can work with.
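As a concrete sketch of rel-based navigation, here is a small JavaScript fragment. The _links structure and the self rel follow the HAL conventions; the mbm:orders rel, the URIs and the customer data are purely illustrative assumptions, not taken from MBM itself:

```javascript
// A hypothetical HAL-style representation of a customer resource.
// "_links" and "self" follow HAL conventions; "mbm:orders" and the
// URIs are illustrative assumptions.
const customer = {
  "_links": {
    "self":       { "href": "/customer/1" },
    "mbm:orders": { "href": "/customer/1/orders" }
  },
  "name": "Alice"
};

// A client navigates by link relation, never by parsing or
// constructing the URI itself.
function linkFor(resource, rel) {
  const link = resource._links[rel];
  return link ? link.href : null;
}

console.log(linkFor(customer, "mbm:orders")); // → /customer/1/orders
```

Because the client asks for a relation rather than building a URI, the server is free to restructure its URIs (say, to the order-driven form above) without breaking any client.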

Splitting the Atom

For many, the hardest thing about REST is working with representations. In the Java world, JAX-RS provides an excellent set of annotations to describe RESTful endpoints. Implementations like Jersey (my preference) or RESTEasy work closely with JAXB (for XML or JSON) and provide you with a powerful and concise foundation. However, it is all too easy to use your existing domain objects as the response bodies, which introduces the coupling mentioned earlier. Some kind of intermediate representation format is needed.
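To make that decoupling concrete, here is a hedged JavaScript sketch of an intermediate representation: a small adapter builds the HAL-style resource, so the domain object itself is never serialised. The field names, rels and URIs are illustrative assumptions, not MBM's actual model:

```javascript
// Sketch of the intermediate-representation idea: an adapter maps a
// domain object to a HAL-style resource. Names here are hypothetical.
function toHalCustomer(domainCustomer) {
  return {
    "_links": {
      "self": { "href": "/customer/" + domainCustomer.id }
    },
    // Only the fields the API promises to expose, never the whole entity.
    "name": domainCustomer.name
  };
}

const resource = toHalCustomer({ id: 1, name: "Alice", internalCreditScore: 742 });
// The internal field never leaks into the representation.
```

The adapter gives you a seam: the domain model can change shape freely, and only this mapping needs updating to keep the published representation stable.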

It comes down to a choice between several representation formats: AtomPub (complex IETF standard), OData (Microsoft oriented full ecosystem) and HAL (lightweight and new). Each has its merits and drawbacks, and I’ve looked in detail at each one so that I can present a summary of my findings:

AtomPub

  • Pro: Well established standard
  • Pro: Works with XML and JSON
  • Pro: Has excellent Java support through Apache Abdera
  • Con: Abdera introduces a lot of dependencies
  • Con: Very complex to work with on server side
  • Con: Difficult to build a complete JavaScript client

OData

  • Pro: Builds on AtomPub
  • Pro: Works with XML and JSON
  • Pro: Has good Java support through the odata4j project
  • Pro: Provides a good URI query structure
  • Con: Introduces a complete framework (essentially replaces Dropwizard)
  • Con: Very complex to work with, particularly the entity data model (EDM)
  • Con: Difficult to find a good JavaScript client library without relying on Windows-only tools for EDM

HAL

  • Pro: Introduces a lightweight and extensible approach
  • Pro: Works with XML and JSON
  • Pro: Trivial to create a JAXB model to implement it (no dependencies)
  • Pro: Provides a good linking framework
  • Pro: Trivial to create a JavaScript client using jQuery XML parsing
  • Con: Not yet ratified (although the IETF has been approached)

Looking at the lists above, it is pretty clear to me that HAL is the most appropriate choice at this time. It is lightweight, extensible and sufficient for the kind of data that I’ll be exposing as part of my project. If I were going for a much higher-grade data offering, I might have settled on OData because of the excellent filtering structure it offers. In fact, I may just think about incorporating something similar into MBM.

So now that we’ve covered why, it is time to cover how. Let’s open those pod bay doors.