Content management, if done right, bears parallels with quantum physics. (Please stick with me: I will keep this high-level, and only maintain the analogy for a paragraph.)
The principles of quantum physics are confusing. Basically, though, they relate to the smallest elements that can be described, which have a subtle property: their actual state (where they are and what they are doing) can only be determined – is, in fact, only realised – by the presence, the contextual forces, of the elements around them; those they interact with. A single particle can be in multiple places at once, in different phases, until something needs to react to its presence (e.g. it is observed).
When developing (or customising) content systems, we need to give our information structures the granularity of quantum particles, and the flexibility of uncertainty.
Why quantum content?
One of the base principles of any CMS worthy of the name is that content is separated from its presentation. An element of content is reusable. In order to achieve proper reusability, elements of content need to be the smallest that can be formed whilst maintaining identity.
How granular is granular?
The elements of content we want to reuse are tiny: an image, a sentence, a label. If larger than this, their re-use will often entail internal rearrangement.
There are two ways we can look at this type of reuse:
- The content item is everything relating to one “subject,” and the different presentations of its inner parts are renderings
- The content item is the individual grain of information, which has meta-data giving it meaning within the larger scope of a subject
The difference between these views relates only to how data is stored within the CMS; from a functional – reusability – perspective, they are almost interchangeable (the second is slightly more flexible, so that is the one I will use).
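The second view can be sketched as a tiny data structure. This is a minimal illustration – the class and field names are my own assumptions, not any particular CMS's API – in which the grain carries its own identity plus the meta-data that gives it meaning within a subject:

```python
from dataclasses import dataclass, field

@dataclass
class ContentElement:
    """The smallest grain of content that still maintains identity."""
    element_id: str   # stable identity, independent of any page or rendering
    value: str        # the grain itself: a sentence, a label, an image URL
    # meta-data placing the grain within the larger scope of a subject
    metadata: dict = field(default_factory=dict)

# A sentence stored once, with meta-data describing its role
title = ContentElement(
    element_id="elem-001",
    value="The Quantum of Content Management.",
    metadata={"subject": "quantum-cms-post", "role": "title"},
)
```

The point of the sketch is that nothing here mentions a page: the element exists, and its contexts of use are expressed entirely through meta-data.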
It’s not about pages
The concept of the “page” within many so-called CMSs maintains the perspective of canonicalised content. This is the antithesis of content-presentation separation. We need to think of locations within the CMS as aggregations of content relating to the same subject; a consistent pattern of elements being a “type.” An instance of a type may have a primary URL path associated with it, which displays the default rendering of the type (and there may be an intuitive mapping between the location of the type instance and the URL), but this does not mean the type instance and the page at that URL are the same thing.
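One way to picture the distinction is a routing table in which a URL resolves to a (type instance, rendering) pair, rather than to a stored page. This is a hypothetical sketch with assumed names; note the instance that has no primary path at all:

```python
# A URL maps to *a rendering* of a type instance, not to the instance itself.
routes = {
    "/posts/the-quantum-of-content-management": ("post-42", "default"),
}

instances = {
    "post-42": {"type": "post",
                "primary_path": "/posts/the-quantum-of-content-management"},
    # Reused across many pages, yet it has no page of its own:
    "promo-7": {"type": "external-promo", "primary_path": None},
}

def resolve(path):
    """Look up which instance and which rendering back a given URL."""
    instance_id, view = routes[path]
    return instances[instance_id], view
```

Deleting a route here would remove a page without destroying any content – which is exactly the separation the paragraph above argues for.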
When people think about digital interfaces from an end-user perspective, they think of a URL and something displayed by accessing that address. This mind-set is the problem: it considers content from its presentational context, rather than as a collection of entities which can be combined to render in various ways. Indeed, there are many concepts of content that only ever display as meta-information. Any content, reused across many pages, that promotes external material does not have a local canonical view – there is no primary page – but it still needs to be a content type instance within the local system.
Are you (un)certain about this?
The quantum physics view of uncertainty relates to the fact that something can be in multiple states at the same time, until observed. This is famously demonstrated by Schrödinger’s cat.
In the case of content management, uncertainty relates to instances of reuse and contextualisation.
When we consider the content item as described above, as the smallest element that is still something meaningful, it is simple enough to understand it top-down: the result of identifying the constituent elements of a content type. But what if we take a pre-existing content element? For example, the sentence “The Quantum of Content Management.” From its existence, what can we know about it? What is this bit of information? How is it used?
In this case, that group of five words is several things.
- The title of this post
- A quoted sentence within the body
- A descriptive sentence that will be referenced from another post (with a link back to this one)
- The template for part of the canonical page’s default URL
- Part of a tweet about the page (maybe)
- A subtitle in an aggregated listing of posts, on this site, in its RSS feed, and (possibly) on another site
Now, we could just recreate the sentence seven or eight times (as decent reuse is not implemented in the system this blog is hosted on, I will need to), but that would be wasteful; a failure of reusability.
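Proper reuse stores the sentence once and lets every context reference it by identity. A minimal sketch (the identifiers and structure are assumptions) of the six uses listed above:

```python
# The sentence exists exactly once in storage.
elements = {
    "elem-quantum-title": "The Quantum of Content Management",
}

# Each usage context references the element by id instead of copying the text.
usages = [
    {"context": "post-title",           "ref": "elem-quantum-title"},
    {"context": "body-quote",           "ref": "elem-quantum-title"},
    {"context": "external-description", "ref": "elem-quantum-title"},
    {"context": "url-slug-template",    "ref": "elem-quantum-title"},
    {"context": "tweet-fragment",       "ref": "elem-quantum-title"},
    {"context": "listing-subtitle",     "ref": "elem-quantum-title"},
]

def resolve(usage):
    """Dereference a usage to the single stored value."""
    return elements[usage["ref"]]

def dependents(element_id):
    """Reverse lookup: everywhere a given element is used."""
    return [u["context"] for u in usages if u["ref"] == element_id]
```

Edit the one stored value and all six contexts change together; the reverse lookup also answers “where is this used?” – the question the next paragraph turns to.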
If we see only the content element itself, even with its meta-data, we will not know that much about it. There may be enough information to see that it is used as the title of the post. But its other realities may not be known to us until we see them emerge from the cloud of possibility. (This in itself is a huge subject, which will be covered in a future post: dependency awareness.)
Another aspect of content uncertainty is contextualisation of the content type: how individual items combine to create a particular representation of the type instance. Content that is well-structured will be created once, in a default format, but will have several renderings. These additional views may be algorithmically generated from the default, or customised to suit the situation.
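Such a rendering could be sketched as a function that prefers a hand-customised variant and otherwise derives one algorithmically from the default (the helper name, field names, and character limit here are illustrative assumptions, not a real API):

```python
def teaser(instance, limit=365):
    """Return the teaser rendering of a type instance.

    Prefers an author-supplied variant; otherwise derives one
    algorithmically by truncating the default body at a word boundary.
    """
    custom = instance.get("renderings", {}).get("teaser")
    if custom is not None:
        return custom
    body = instance["body"]
    if len(body) <= limit:
        return body
    return body[:limit].rsplit(" ", 1)[0] + "…"

post = {
    "body": "Content management, if done right, bears parallels "
            "with quantum physics. " * 10,
}
print(teaser(post, limit=80))
```

The derived version is the default; the customised version is the on-demand override – the same “broad brush by default, fine control when needed” trade-off described later in this post.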
All this physics is doing my head in
Let’s face it: this granular content thing is a nice idea, but isn’t it overkill? The average content author won’t care that every sentence in their blog post is supposed to be a separate content item. They don’t want to know. It’s just too complicated. The average user wants to enter their content as a block of (rich) text, formatted with paragraphs, the odd bullet list and maybe an inline image.
That’s fine for many people, and many applications, but it has severe limitations.
It is not every day that you want to create a body of text, plus alternate versions derived from the default instance, for reuse. But would you rather have the flexibility available when needed, or be burdened either with the overhead of (unlinked) duplication every time you do want to reuse, or with some automated system that may not do it in a meaningful way? (The default teaser version of this post is the first 365 characters… which is not enough, and includes a parenthetical statement I really don’t need there.)
A picture of real content management
Done properly, content management gives the author both broad-brush, canonical-view management (the default) and very fine-grained control (on demand) over elements of content and their contextualised behaviours. It separates content from its presentation with a transformation layer just below the default authoring interface; the authoring representation is not the internal storage matrix. Authoring is by type; storage by independent, granular element.
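That transformation layer might be sketched, under assumed names, as a decompose/recompose pair: the author edits the whole type instance, while storage holds independent granular elements linked back to the instance by meta-data.

```python
def decompose(instance_id, authored):
    """Split an authored type instance into independently stored elements."""
    return [
        {"id": f"{instance_id}:{role}",
         "instance": instance_id,   # meta-data linking grain back to subject
         "role": role,
         "value": value}
        for role, value in authored.items()
    ]

def recompose(instance_id, store):
    """Rebuild the authoring representation from granular storage."""
    return {
        element["role"]: element["value"]
        for element in store
        if element["instance"] == instance_id
    }

# Authoring is by type; storage is by granular element.
store = decompose("post-42", {"title": "The Quantum of Content Management",
                              "body": "Content management, if done right…"})
authored_again = recompose("post-42", store)
```

The authoring view round-trips through storage, but the stored form is a flat set of grains – the “internal storage matrix” is never what the author sees.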
Good content management associates vast quantities of meta-data with every element – my benchmark is that a content type should have as much meta-data as it does canonical content (the meaning of that comparison being intentionally vague). It is this meta-data that links elements together into the canonical-view management context.
We need content management systems that properly separate content from its presentation, that treat information on a quantum level. We need systems that treat storage as storage, and the authoring interface as one particular rendering of elements related through meta-data. It’s not like it’s particle physics…
- Source idea for the main uncertainty image
- Schrödinger’s cat on Wikipedia
- “Stress” image, by Raphaël Desgagnés
- Dependency awareness post