
Profiles in FHIR

Ewout Kramer



As I said last week, most of my work currently centers on implementing FHIR’s Profile functionality. It’s a very important part of the FHIR standard, but it is, as yet, invisible and hard to grasp for many. As well, I know from my tutorials that many who are new to FHIR do not know what purpose Profiles serve, or where they fit in the standard. I’d like to shed some light on that today.

First off, I want to show you the little “proverb tile” that both Grahame and we at Firely have on our walls:

Ontology / Mapping : Thorough, Deep, Authoritative
Exchange : Simple, Concise, Modular
Conformance : Capable, Comprehensive, Fine-grained

These are the three basic pillars of FHIR. It’s worth discussing them one by one to see how and where Profiles fit in.


The best known of the three at the moment is the “Exchange” pillar. It specifies “how we exchange data”, so this part of the spec is about:

  • What the data looks like: the XML and JSON formats
  • How we exchange data: the HTTP-based REST and Search specifications
  • How data is composed into useful exchanges: Bundles, Documents and Messages
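To make the first two bullets concrete, here is a small sketch in Python: a minimal Patient resource in its JSON form, and the REST-style URL a client would use to read it. The server base URL and the resource id are invented for illustration, and a real resource carries more metadata than shown here.

```python
import json

# A minimal FHIR Patient resource in its JSON form (illustrative sketch;
# real instances carry more elements and metadata).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": ["Kramer"], "given": ["Ewout"]}],
    "gender": "male",
}

# In FHIR's REST style, this instance is retrieved with a plain HTTP GET
# on a URL of the form [base]/[type]/[id]:
base = "https://fhir.example.org"  # hypothetical server base URL
url = f"{base}/{patient['resourceType']}/{patient['id']}"
print(url)  # https://fhir.example.org/Patient/example
print(json.dumps(patient, indent=2))
```

The same instance can also be represented in XML; the two formats carry identical information, and a server is expected to serve whichever the client asks for.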

The Exchange part of FHIR is among the oldest (relatively speaking, of course) and best-known parts of the specification, and the most stable as well. We’ve tested it during seven connectathons and we’re seeing considerable support for it in API libraries and new products. It’s not completely finished yet, as we are working on new interaction patterns, like notifications and more powerful querying.

As you can see on the tile, we wanted this part to be “simple”: everyone who implements a FHIR exchange will need to be familiar with this part of the spec, from health IT veteran to freshly schooled app developer, so it had better be simple.


While Exchange tells you how to exchange data, the Ontology part tells you what you can exchange. At its simplest, these are the core data models we provide with the specification, better known as FHIR’s Resources. But it does not stop there. FHIR had to find a solution for a simple fact of life for standards: no matter how many data elements you add, you will never cover everyone’s needs. Conversely, no matter how concise you try to be, there will be too much complexity for some.

So FHIR adopted the Profile: a computer-readable statement that allows you to simplify or augment parts of the standard data models. In practice, as soon as you start using FHIR in your organization, country or any other context of exchange, you will produce a Profile for that context in which you:

  1. specify which parts of the core models are of no use to you and may not be used. Remember that removing flexibility you don’t use makes it easier and less expensive for your exchange partners to implement the standard.
  2. add data elements that are not part of the base specification but that you need and that are specific to your use case. Adding your own elements for use with your exchange partners is considered a “normal” thing to do in FHIR, even if this means the data can only be interpreted by you and your business partners.

These Profiles, combined with narrative that explains the business context of your exchanges, information about your organization’s or country’s architecture and service endpoints, security, etcetera, will form what is otherwise known as an implementation guide for FHIR.

As a developer, you could think of these Profiles as a mechanism comparable to an XSD schema. They can be used to validate incoming messages and see whether they do in fact conform to the agreements you made with your exchange partners. But their functionality is broader: they serve to document the way the standard is used, and they have additional formalisms that XSD cannot support. For this reason, we can derive XSD schemas and Schematrons from Profiles, but schemas and Schematrons alone don’t cover everything we need in the Ontology department of FHIR.
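The actual Profile resource is far richer than this, but the core idea of restricting and validating against the base models can be sketched in a few lines of Python. Everything here is an invented simplification: a “profile” is reduced to a map from element path to (min, max) cardinality, standing in for the real constraint machinery.

```python
# Illustrative sketch only: a real FHIR Profile is a full resource with many
# more formalisms. Here a "profile" is just element path -> (min, max).
profile = {
    "Patient.name":   (1, 1),   # the profile tightens name to exactly one
    "Patient.photo":  (0, 0),   # the profile forbids photo
    "Patient.gender": (0, 1),
}

def count_element(resource, path):
    """Count occurrences of a top-level element like 'Patient.name'."""
    _, element = path.split(".", 1)
    value = resource.get(element)
    if value is None:
        return 0
    return len(value) if isinstance(value, list) else 1

def validate(resource, profile):
    """Return a list of constraint violations (empty means conformant)."""
    errors = []
    for path, (lo, hi) in profile.items():
        n = count_element(resource, path)
        if n < lo:
            errors.append(f"{path}: expected at least {lo}, found {n}")
        if n > hi:
            errors.append(f"{path}: expected at most {hi}, found {n}")
    return errors

patient = {"resourceType": "Patient",
           "name": [{"family": ["Kramer"]}],
           "photo": [{"title": "portrait"}]}   # violates the profile above
print(validate(patient, profile))
# → ['Patient.photo: expected at most 0, found 1']
```

The real thing also covers types, value set bindings, extensions, slicing and more, which is exactly the “deep” part discussed below.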

There’s no denying that Profile is a complex beast (this is the “deep” part of the proverb tile): if you look at its documentation you’ll see it is a large data structure with a lot of accompanying text. Even so, the documentation we currently have is incomplete and probably partially impenetrable without one of us sitting next to your desk. I’ll try to fix some of that in future posts!

The Profile is not the only inhabitant of the Ontology; other notable inhabitants are ValueSet and ConceptMap. They will be joined in DSTU2 by Namespace and DataElement.

Much work still needs to be done here. We are busy writing a modeller-friendly authoring tool for FHIR (called Forge) and both Grahame and I are writing support libraries for Java and .NET to do the heavy lifting of working with Profiles (validation, using their (meta)data in your server, graphical display). I am sure this will be an enjoyable summer, and we are planning to test the fruits of our work in the upcoming September 2014 FHIR connectathon in Chicago.


Finally, conformance. This is a long blogpost already, so I will not spend too much time on it, but Conformance will complete the trio by allowing servers to specify which parts of the FHIR specification (and your profiles) they know about and support. As well, FHIR clients can use Conformance to dictate which parts of FHIR and which profiles they need a server to support for them to be able to work with the server. Since Conformance is a computable, computer-readable specification (in fact it is a Resource, just like Profile), you could write functionality to:

  • Compare a server’s Conformance statement with a client’s and determine whether they can cooperate
  • Read a server’s Conformance and do certification testing based on what the server promises it can do
  • Publish a Conformance statement for certain common sub-sets of FHIR functionality (e.g. “FHIR light”, “FHIR for the US”, “FHIR for organization X”).
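The first bullet can be sketched very simply: treat what the server supports and what the client requires as sets of (resource type, interaction) pairs and compare them. Real Conformance resources are much richer; these pairs are an invented simplification for illustration.

```python
# Sketch: can this client and server cooperate? Both sides are reduced to
# sets of (resource type, interaction) pairs, a stand-in for the much
# richer Conformance resource.
server_supports = {("Patient", "read"), ("Patient", "search"),
                   ("Observation", "read")}

client_requires = {("Patient", "read"), ("Patient", "update")}

missing = client_requires - server_supports
if missing:
    print("Cannot cooperate; server lacks:", sorted(missing))
else:
    print("Client and server can cooperate")
```

Because both statements are computable resources, the same set arithmetic could drive certification tests or published functionality sub-sets, as in the other two bullets.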

Conformance will receive much more attention when FHIR gets more mature, especially in the area of automated conformance testing.


FHIR is a big building, and so far we have mainly discussed the “Exchange” part. The core team is now focusing its energy on the “Ontology” part, and you will see extensions to the reference implementations to help you work with it. As well, we are well aware the current documentation is hard to follow unless you were there when we wrote it (or, as Bjarne Stroustrup once put it, it “does not try to insult your intelligence”). We’re working on it, and I’ll spend my next posts discussing the parts of Profile that deserve some quality time!

Want to stay on top of Interoperability and Health IT? 

Subscribe to our newsletter for the latest news on FHIR. 

3 thoughts

  1. Are the graphical display generation libraries that you mention available for download anywhere or are they still in development?

    “both Grahame and I are writing support libraries for Java and .NET to do the heavy lifting of working with Profiles (validation, using their (meta)data in your server, graphical display).”

    The resource definition pages on the FHIR website appear to be generated from the appropriate profiles. I would like to do something similar for a custom profile created with Forge, but am not having much luck finding the tooling to do so.


  2. Although I’m familiar with using an XSD to validate an XML document, am I correct in my understanding of the FHIR extensibility model that, through the use of the extension element that is already defined on all resources in the FHIR schema, there is no need to modify the schema to allow extensions to be included in a resource instance?

    If that is the case, can messages containing extensions be validated for anything other than valid xml structure?

    I’m still reading through the rest of your blog entries and may come across the answer elsewhere, but thought I would pose the question.


    • Lloyd McKenzie said:

      Extensions are built into the schema. And you’re correct that validation of extension content (whether the extension is allowed to be used where it appears, has the right type of data, etc.) needs to be done using tooling other than schema. (Much could be done using Schematron, though in general validation of extensions is done with code, as that can be more dynamic.)
