Building a DevX Maturity Matrix

I promised in Part I of this article that I'd explain how I generate the headline numbers that I report as part of the KPIs or OKRs for the Developer eXperience of a product.

The answer is rather simple but, at least initially, quite laborious.

Reminder: what a maturity matrix is

A maturity matrix encodes a maturity level as a value (typically a positive integer) for every unique combination of product, feature, customer/developer type, and maturity type. We use this maturity level value to set goal levels and to measure our current maturity level against those goals. For more detail on this see Part I.

Self-evidently, to do this you need to define all of these things:

  • What exactly your products are
  • A list of features that constitute the surface area of your developer experience for a product
  • Your customer/developer types
  • Your maturity types
  • A way of quantifying the level of maturity for each maturity type

I think the products should be trivial enough to define, so I won't go into that. Let's take a tour of the other definitions.

DevX feature list

A DevX feature list will comprise, at the least, the following items:

  • The list of customer-developer visible features of your product
  • The list of non-feature support functionality exposed to customer-developers (e.g. observability tools, or logging)
  • The list of processes, automated or manual, that a customer-developer may engage in (e.g. onboarding, support requests, feature enablement)

Customer or Developer types

Broadly speaking a customer type identifies any developer with a distinct use case. For me this maps, or should map, 1:1 to a user profile. If you have user profiles you can probably just refer to each case by name - if your team are used to talking about "Twanna", "Antje" and "Rakesh", then it makes sense to use them here. As you'll see we won't surface this level of detail to anyone who wouldn't get those references.

If you don't have user profiles, then you might be able to use job titles (for internal customers) or some kind of "stub profile", for example "Python developer working on ML powered backends".

Maturity types and maturity levels

Remember, a maturity level is a quantitative statement about the current or desired quality of the developer experience.

Unlike with the Capability Maturity Model, I don't do this as just a simple, linear progression, because the cases differ too much. Instead, what I do is this (noting that these are examples; you'll need to think about your own rules):

  1. I break down developer experiences into multiple types. For me these are:

    • interface maturity
    • supporting services
    • documentation and supporting materials
  2. I define a maturity level for each of these types as the sum of scores from a series of questions.

Each question I ask has an assigned value which represents its relative importance to our perception of a good DevX.
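To make that concrete, here's a minimal sketch of how an assigned value turns into a score, and how the scores roll up into a maturity level. The names, questions and weights are purely illustrative, not my real list:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: int         # the assigned value: relative importance to a good DevX
    answered_yes: bool  # current state (most of my questions are coverage, yes/no)

    @property
    def score(self) -> int:
        # A "yes" earns the full assigned value, a "no" earns nothing.
        return self.weight if self.answered_yes else 0

def maturity_level(questions: list[Question]) -> int:
    # The maturity level for one maturity type is the sum of its question scores.
    return sum(q.score for q in questions)

interface_questions = [
    Question("Does this feature / process exist?", 1, True),
    Question("Is there a UI workflow for this?", 1, True),
    Question("Is this possible via the API?", 1, False),
]
print(maturity_level(interface_questions))  # -> 2
```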

Pulling the information together

Defining all these metric values is a significant amount of work. If you have the time to do a "big bang" first-round assessment it's a great way to start, but if you don't, my suggestion would be to pretend you're starting from scratch and iteratively build up the list as you work on specific areas. That's how ongoing maintenance of this data works too: essentially, before you're done with an item of work you visit the "maturity sticker" for each DevX feature it touches.

I call it a "maturity sticker" because I think of it like the labels they print on products in shops, breaking down the nutrients, or grading the sustainability of the product. The "maturity sticker" is the most detailed breakdown of maturity, it represents the calculation of maturity for a unique combination of product, feature, customer/developer type, and maturity type.
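In data terms, that means a sticker is naturally keyed by that four-part combination, with one row per question. Here's a rough sketch of the shape I have in mind (invented names, not real tooling):

```python
from dataclasses import dataclass, field

# A sticker is identified by (product, feature, profile, maturity_type).
StickerKey = tuple[str, str, str, str]

@dataclass
class Answer:
    question: str
    current: int
    iteration_goal: int
    goal: int
    note: str = ""

@dataclass
class MaturitySticker:
    key: StickerKey
    answers: list[Answer] = field(default_factory=list)

    def totals(self) -> tuple[int, int, int]:
        # The sticker's "Total" row: current, iteration goal and goal scores.
        return (
            sum(a.current for a in self.answers),
            sum(a.iteration_goal for a in self.answers),
            sum(a.goal for a in self.answers),
        )

# The matrix itself is then just the collection of stickers, keyed by that tuple.
matrix: dict[StickerKey, MaturitySticker] = {}
```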

Let's start looking at some realistic examples to try and pull this idea together. What follows might give you an idea of the volume of data required and how we aggregate it. In reality I drop all of the "N/A" questions to keep things terse, but I'll show them here so you see how that's handled.

Example

Product

The product in this example is an imaginary cloud based customer service application for companies offering building services.

Features and process

I'll create stickers for just one process: the onboarding process for engineers wishing to develop client applications based on this product. What we'll see in the data is that this process is currently only documented for Python developers, and that it isn't something that's available programmatically. I think that's a useful case in that it shows there's quite a lot of data we could choose to leave out, but also that in this case the user profiles are important.

Stub User Profiles

For the sake of simplicity we'll use just two "stub" user profiles. Both of these engineers work on software that automates aspects of the business. Rakesh uses machine learning to help automatically generate quotes based on information provided by customers. Antje builds frontends for displaying invoices to the customers.

| Name | Programming Language | Specialities | Works on |
|---|---|---|---|
| Rakesh | Python | Machine Learning | Customer Quotation Tool |
| Antje | Javascript | Web Frontend | Billing |

Maturity Stickers

Now we'll produce the individual maturity stickers. As described above, this will be a series of questions with numeric values as their answers. In this case (and actually in reality) my questions are mostly about coverage rather than qualitative assessments. There's no reason why quality measures can't be included, but personally I've found it hard to find reliable qualitative assessments.

Onboarding process maturity stickers

First we'll answer the questions for Rakesh, our Python developer:

Product: BuildBase
Feature: Onboarding Process
Profile: Rakesh

We'll ask first about interface maturity. In the case of onboarding we actually only support this process via the web UI. For obvious reasons you can't onboard via the API, because you can't access the API until you've onboarded. Still, I'll leave the questions here for you to see; in reality I'd simply remove any irrelevant questions, because we compare only to our goal states, not across features/processes.

| Maturity Type | Question | Current Score | Iteration Goal | Goal | Note |
|---|---|---|---|---|---|
| Interface | Does this feature / process exist? | 1 | 1 | 1 | It exists |
| | Is this possible via the API? | 0 | 0 | 0 | The onboarding process won't ever be possible via the API |
| | Does a high level API call exist to automate a multistep process? | 0 | 0 | 0 | |
| | Is this possible via the CLI? | 0 | 0 | 0 | The onboarding process won't ever be possible via the CLI |
| | Does a single CLI command exist to automate a multistep process? | 0 | 0 | 0 | |
| | Does a language specific SDK provide this feature? | 0 | 0 | 0 | The onboarding process won't ever be possible via the SDK |
| | Does a single function/method exist in the SDK to automate a multistep process? | 0 | 0 | 0 | |
| | Is there a UI workflow for this? | 1 | 1 | 1 | Onboarding is always initiated via the web UI |
| | **Total** | **2** | **2** | **2** | |

Next we'll ask about support services. Again, most of this is irrelevant for onboarding, but for API calls it would be highly relevant.

| Maturity Type | Question | Current Score | Iteration Goal | Goal | Note |
|---|---|---|---|---|---|
| Support Services | Does this feature or process provide customer-developer visible logging? | 0 | 0 | 0 | No logging is envisaged for this process |
| | Does this feature or process produce human readable and customer-developer visible error messages? | 0 | 1 | 1 | See Story: x |
| | Are error messages linked to documentation that might help resolve them? | 0 | 0 | 1 | |
| | Is this feature or process traceable / available in observability tools? | 0 | 0 | 0 | N/A |
| | Is this feature or process step-debuggable? | 0 | 0 | 0 | N/A |
| | Is the current version reachable from an older state via automated migration? | 0 | 0 | 0 | N/A |
| | **Total** | **0** | **1** | **2** | |

Finally we'll move on to documentation and supporting materials. We can see that this part is highly relevant for the onboarding process, and that we have open stories that will improve these aspects in this iteration.

| Maturity Type | Question | Current Score | Iteration Goal | Goal | Note |
|---|---|---|---|---|---|
| Documentation / Materials | Reference documentation exists. | 4 | 4 | 4 | |
| | Reference documentation indexed and searchable online. | 0 | 2 | 2 | See Story: x |
| | Reference documentation is generated from the code it documents. | 0 | 0 | 0 | N/A |
| | Reference documentation is available in a form where it can be proactively presented in IDEs. | 0 | 0 | 0 | N/A |
| | Concept documentation describes all relevant concepts this feature/process touches on. | 3 | 3 | 3 | |
| | Major tutorial content covers this feature / process. | 2 | 2 | 2 | |
| | Feature/process specific mini-tutorials cover this feature / process. | 0 | 0 | 0 | We don't need this as the major onboarding tutorial covers it |
| | Alternate media presentations (e.g. videos) exist for this feature / process. | 1 | 1 | 1 | |
| | **Total** | **10** | **12** | **12** | |

Having completed the sticker for Rakesh, we move on to Antje. What we see here is that our current onboarding process is somewhat Python specific, so whilst a Javascript customer could use it, it wouldn't be a great experience. I've actually given it a partial score (1 out of a possible 4) to reflect this. I don't tend to do this much, but for important areas that are heavily weighted there's some scope to do it if you wish.

Product: BuildBase
Feature: Onboarding Process
Profile: Antje

| Maturity Type | Question | Current Score | Iteration Goal | Goal | Note |
|---|---|---|---|---|---|
| Interface | Does this feature / process exist? | 0 | 1 | 1 | See Story: y (Create onboarding for JS devs) |
| | Is this possible via the API? | 0 | 0 | 0 | The onboarding process won't ever be possible via the API |
| | Does a high level API call exist to automate a multistep process? | 0 | 0 | 0 | |
| | Is this possible via the CLI? | 0 | 0 | 0 | The onboarding process won't ever be possible via the CLI |
| | Does a single CLI command exist to automate a multistep process? | 0 | 0 | 0 | |
| | Does a language specific SDK provide this feature? | 0 | 0 | 0 | The onboarding process won't ever be possible via the SDK |
| | Does a single function/method exist in the SDK to automate a multistep process? | 0 | 0 | 0 | |
| | Is there a UI workflow for this? | 0 | 1 | 1 | Onboarding is always initiated via the web UI |
| | **Total** | **0** | **2** | **2** | |
| Support Services | Does this feature or process provide customer-developer visible logging? | 0 | 0 | 0 | No logging is envisaged for this process |
| | Does this feature or process produce human readable and customer-developer visible error messages? | 0 | 1 | 1 | See Story: x |
| | Are error messages linked to documentation that might help resolve them? | 0 | 0 | 1 | |
| | Is this feature or process traceable / available in observability tools? | 0 | 0 | 0 | N/A |
| | Is this feature or process step-debuggable? | 0 | 0 | 0 | N/A |
| | Is the current version reachable from an older state via automated migration? | 0 | 0 | 0 | N/A |
| | **Total** | **0** | **1** | **2** | |
| Documentation / Materials | Reference documentation exists. | 1 | 4 | 4 | Generic onboarding docs exist, they need specialisation, see Story y. |
| | Reference documentation indexed and searchable online. | 0 | 2 | 2 | See Story: x |
| | Reference documentation is generated from the code it documents. | 0 | 0 | 0 | N/A |
| | Reference documentation is available in a form where it can be proactively presented in IDEs. | 0 | 0 | 0 | N/A |
| | Concept documentation describes all relevant concepts this feature/process touches on. | 3 | 3 | 3 | |
| | Major tutorial content covers this feature / process. | 1 | 2 | 2 | Tutorial doesn't yet cover JS users -> see Story y. |
| | Feature/process specific mini-tutorials cover this feature / process. | 0 | 0 | 0 | We don't need this as the major onboarding tutorial covers it |
| | Alternate media presentations (e.g. videos) exist for this feature / process. | 0 | 0 | 0 | |
| | **Total** | **5** | **11** | **11** | |

Tips for making this data set smaller and easier to maintain

Wow, that's a lot of data for just one process! In reality this can get out of hand quickly, so we need some tips to reduce the load!

  1. Don't do this in a spreadsheet, use a relational database (you'll thank me when you want to expand your list of questions!) - there's a sketch of the kind of schema I mean after this list.
  2. Develop a trivial interface to that DB (I intend to expand on this with actual tooling)
  3. Don't store any "Not applicable" values - you can always add them later.
  4. Don't break it down to the User Profile level until you really need to - that point is usually when you start having features that are only available for specific languages. Pragmatically, I define an "All Profiles" profile for the generic case. If you used a relational database you should be able to easily report on where you currently choose not to provide coverage for a profile.
  5. Keep your feature/process list from growing too long - you'll have to judge for yourself the right granularity.
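
To illustrate the first couple of points, here's a minimal sketch of the kind of schema I have in mind. It uses SQLite, and the table and column names are hypothetical; it isn't the tooling I actually use:

```python
import sqlite3

conn = sqlite3.connect("devx_maturity.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS question (
    id            INTEGER PRIMARY KEY,
    maturity_type TEXT NOT NULL,              -- e.g. 'Interface', 'Support Services'
    text          TEXT NOT NULL,
    weight        INTEGER NOT NULL DEFAULT 1  -- relative importance of this question
);

CREATE TABLE IF NOT EXISTS answer (
    product        TEXT NOT NULL,
    feature        TEXT NOT NULL,
    profile        TEXT NOT NULL,             -- use 'All Profiles' for the generic case
    question_id    INTEGER NOT NULL REFERENCES question(id),
    current        INTEGER NOT NULL,
    iteration_goal INTEGER NOT NULL,
    goal           INTEGER NOT NULL,
    note           TEXT,
    PRIMARY KEY (product, feature, profile, question_id)
);
""")

-- see next section: every sticker total and aggregation is a GROUP BY away
totals = conn.execute("""
    SELECT a.product, a.feature, a.profile, q.maturity_type,
           SUM(a.current), SUM(a.iteration_goal), SUM(a.goal)
    FROM answer AS a JOIN question AS q ON q.id = a.question_id
    GROUP BY a.product, a.feature, a.profile, q.maturity_type
""").fetchall()
```

Because answers reference questions by id, adding a new question later is just another row, and skipping "N/A" values simply means the corresponding answer rows never exist.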

Now we aggregate!

So, what happens next should be pretty obvious. If you look at the "Total" lines in bold above it should be clear that we can just extract those and ignore the individual questions. Note: you still need to store the individual questions and answers; if you don't, you'll have trouble knowing how to update them when you complete some task, and you'll end up having to go through all the questions again. I learned that one the hard way!

Our totals-only table looks like this:

| Product | Feature | Profile | Maturity Type | Current Score | Iteration Goal | Goal |
|---|---|---|---|---|---|---|
| BuildBase | Onboarding Process | Rakesh | Interface | 2 | 2 | 2 |
| | | | Support Services | 0 | 1 | 2 |
| | | | Documentation / Materials | 10 | 12 | 12 |
| BuildBase | Onboarding Process | Antje | Interface | 0 | 2 | 2 |
| | | | Support Services | 0 | 1 | 2 |
| | | | Documentation / Materials | 5 | 11 | 11 |

Now there are two obvious ways to reduce this data to a more concise summary. We can either sum across all user profiles or sum across all maturity types:

Sum across all user profiles

| Product | Feature | Maturity Type | Current Score | Iteration Goal | Goal |
|---|---|---|---|---|---|
| BuildBase | Onboarding Process | Interface | 2 | 4 | 4 |
| | | Support Services | 0 | 2 | 4 |
| | | Documentation / Materials | 15 | 23 | 23 |

Sum across all maturity types

| Product | Feature | Profile | Current Score | Iteration Goal | Goal |
|---|---|---|---|---|---|
| BuildBase | Onboarding Process | Rakesh | 12 | 15 | 16 |
| | | Antje | 5 | 14 | 15 |
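
Both of those summaries (and the feature and product totals that follow) are just group-and-sum operations over the totals table. Here's a quick, illustrative sketch in plain Python using the numbers above:

```python
from collections import defaultdict

# (product, feature, profile, maturity type) -> (current, iteration goal, goal)
totals = {
    ("BuildBase", "Onboarding Process", "Rakesh", "Interface"):                 (2, 2, 2),
    ("BuildBase", "Onboarding Process", "Rakesh", "Support Services"):          (0, 1, 2),
    ("BuildBase", "Onboarding Process", "Rakesh", "Documentation / Materials"): (10, 12, 12),
    ("BuildBase", "Onboarding Process", "Antje",  "Interface"):                 (0, 2, 2),
    ("BuildBase", "Onboarding Process", "Antje",  "Support Services"):          (0, 1, 2),
    ("BuildBase", "Onboarding Process", "Antje",  "Documentation / Materials"): (5, 11, 11),
}

FIELDS = ("product", "feature", "profile", "maturity_type")

def roll_up(keep):
    """Sum the score triples, keeping only the key fields named in `keep`."""
    out = defaultdict(lambda: (0, 0, 0))
    for key, scores in totals.items():
        reduced = tuple(v for f, v in zip(FIELDS, key) if f in keep)
        out[reduced] = tuple(a + b for a, b in zip(out[reduced], scores))
    return dict(out)

print(roll_up({"product", "feature", "maturity_type"}))  # sum across user profiles
print(roll_up({"product", "feature", "profile"}))        # sum across maturity types
print(roll_up({"product", "feature"}))                   # per-feature totals: (17, 29, 31)
```

With the relational database from earlier, each of these is a single GROUP BY query.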

We can aggregate up another level so that we no longer see either user profiles or individual maturity types:

Total feature or process maturity

Let's pretend now that we measured another feature too:

| Product | Feature | Current Score | Iteration Goal | Goal |
|---|---|---|---|---|
| BuildBase | Onboarding Process | 17 | 29 | 31 |
| BuildBase | List outstanding invoices | 22 | 30 | 55 |

… and of course we can then aggregate all the features into maturity scores for the whole product:

Total product maturity

Summing across all of the features gives us scores for the whole product:

| Product | Current Score | Iteration Goal | Goal |
|---|---|---|---|
| BuildBase | 39 | 59 | 86 |

And then our top-level reporting numbers:

| Name | % | Ratio |
|---|---|---|
| Current v Iteration Goal | 66 | 39:59 |
| Current v Goal | 45 | 39:86 |
| Iteration Goal v Goal | 69 | 59:86 |
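
For completeness, those headline percentages are nothing more than the ratios of the product totals, rounded to whole numbers:

```python
current, iteration_goal, goal = 39, 59, 86

print(f"Current v Iteration Goal: {current / iteration_goal:.0%}")  # 66%
print(f"Current v Goal:           {current / goal:.0%}")            # 45%
print(f"Iteration Goal v Goal:    {iteration_goal / goal:.0%}")     # 69%
```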