Foundation

Telemetry at Scale

Systematically track Data Card efforts across your organization.

Overview

For any form of transparency-oriented documentation to be successful, the documentation must be treated as a user-centric product in and of itself.

Systematically tracking the usage of both templates and their resulting Data Cards is necessary to inform long-term transparency strategies and broad initiatives that span cross-functional boundaries. While there is no one-size-fits-all way to measure the success of an organization’s transparency efforts, there are a variety of factors you can consider when setting up your impact tracking program – such as the maturity and goals of your transparency effort, the scale of the organization, or the datasets being documented.

For example, you’ll find that some forms of telemetry for measuring the efficacy of Data Cards are easier to bake into interactive implementations than into PDFs. On the other hand, measuring the efficacy of your Data Card template may require you to set up bespoke mechanisms that measure incomplete or abandoned Data Cards in your organization.

Generally speaking, metrics for a Data Card template and its adoption within an organization can be broadly classified into seven categories – Documentation Hygiene, Resilience and Stability, Understandability, Supportability, Conversion, Engagement, and Reach. However, these metrics are not all equal – rather, they need to be considered in the context of your organization.

Operationalizing these metrics may require varying levels of resources and support. For example, focus groups that unpack how producer-friendly a template is will require a considerably different set of resources compared to analytics that record template completion rates. Similarly, measuring traffic to a Data Card will require relatively fewer resources than conducting a series of post-launch interviews that unpack engagement levels. Review these categories with cross-functional decision makers in your organization to determine which should be used to track impact, and how.

The maturity of a dataset plays a role in the efficacy of its Data Card and the interpretation of the metrics.

Data Card Templates: Data Cards are typically easier to create for new datasets, but these Data Cards may not describe a diverse range of applications and their corresponding caveats, owing to the newness of the dataset. Conversely, Data Cards that describe well-established datasets may be more robust in their descriptions of applications, but may lack provenance details if the datasets are significantly old. Metrics that describe the accuracy of a template will need to be interpreted differently in each of these two cases.

Completed Data Cards: When measuring dataset adoption through the number of click-throughs in a Data Card, it’s common to see a surge when the dataset is made available for use. Shortly thereafter, several metrics plateau and become stable. In the same vein, you should expect a sharp decrease in adoption when a dataset is deprecated – though it may never hit zero, indicating that the dataset may still be in use elsewhere.
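To make this pattern concrete, here is a minimal sketch that labels the phases of such an adoption curve from weekly click-through counts. The data, thresholds, and function name are hypothetical illustrations, not part of the playbook.

```python
# Hypothetical sketch: classify phases in a Data Card's weekly click-through
# counts (launch surge, plateau, post-deprecation tail). The thresholds and
# sample data are illustrative assumptions, not prescribed values.

def classify_weeks(weekly_clicks, surge_factor=2.0, tail_factor=0.25):
    """Label each week relative to the series' (approximate) median count."""
    ordered = sorted(weekly_clicks)
    median = ordered[len(ordered) // 2]
    labels = []
    for clicks in weekly_clicks:
        if clicks >= surge_factor * median:
            labels.append("surge")    # e.g. the spike at dataset launch
        elif 0 < clicks <= tail_factor * median:
            labels.append("tail")     # residual use after deprecation
        else:
            labels.append("plateau")  # the stable middle of the curve
    return labels

# Example: launch spike, stable plateau, then a long non-zero tail.
print(classify_weeks([120, 95, 40, 38, 41, 39, 8, 6, 5]))
```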

Documentation Hygiene
Data Card Template: How well does a Data Card template describe the datasets it is intended for?
Completed Data Card: How accurately does a completed Data Card describe the dataset and its use?
Why? Documentation Hygiene describes how accurately, and with what satisfaction, a reader’s experience of using the dataset aligns with the expectations created by its Data Card.
When?
Template: During completion, or immediately after dataset producers have completed Data Cards.

Completed Data Cards: Before distribution, with a sample audience group; post-distribution, at a regular cadence with actual readers.
Example:
Reader Satisfaction Comparison: Collect reader satisfaction scores for a Data Card and compare them against the Heuristics Worksheet score for that Data Card.
Follow-up Action:
If the Heuristics Worksheet score is disproportionate to the reader satisfaction scores, realign templates and/or writing guidelines with readers’ expectations.
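As an illustration of the Reader Satisfaction Comparison, the following sketch flags Data Cards whose Heuristics Worksheet score diverges from their mean reader satisfaction score. The 1–5 scales, record fields, and divergence threshold are assumptions made for the example.

```python
# Hypothetical sketch of the Reader Satisfaction Comparison: compare the mean
# reader satisfaction score for each Data Card against its Heuristics
# Worksheet score and flag cards where the two diverge. Scales, field names,
# and the threshold are illustrative assumptions.

def flag_misaligned_cards(cards, max_gap=1.5):
    """Return Data Cards whose worksheet score and reader scores disagree."""
    flagged = []
    for card in cards:
        mean_reader = sum(card["reader_scores"]) / len(card["reader_scores"])
        gap = abs(card["worksheet_score"] - mean_reader)
        if gap > max_gap:  # disproportionate: candidate for realignment
            flagged.append((card["name"], round(gap, 2)))
    return flagged

cards = [
    {"name": "speech-corpus-card", "worksheet_score": 4.5,
     "reader_scores": [2.0, 2.5, 3.0]},  # readers far less satisfied
    {"name": "image-set-card", "worksheet_score": 4.0,
     "reader_scores": [3.8, 4.2, 4.1]},
]
print(flag_misaligned_cards(cards))  # [('speech-corpus-card', 2.0)]
```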
Resilience and Stability
Data Card Template: How many different kinds of datasets can the Data Card template capture without edits?

What kinds of edits are Data Card producers making over time?

Do answers align with the expectations set out in the template, or are teams repurposing questions to suit their datasets?
Completed Data Card: How many revisions, including content additions, have been made? At what frequency?
Why? Resilience and stability indicate a template’s ability to withstand modifications, especially when it is used in multiple domains or by diverse producers.
When?
Templates: During completion, or immediately after dataset producers have completed Data Cards. Particularly note revisions made post-launch.

Completed Data Cards: Revisions and additions made post-launch.
Example:
Edit Ratio: The ratio between the number of Data Cards created using your template and the number of edits made to the template.

Mean time between failures: The average time between events in which a template is edited or a Data Card deviates from the template.
Follow-up Action:
The higher the edit ratio or the mean time between failures, the more resilient your template is. Track edits and patterns in them to inform revisions to templates.
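Both metrics reduce to simple arithmetic over an edit log. A minimal sketch, assuming you record timestamps of template edits and deviations; the function names and log format are illustrative:

```python
# Hypothetical sketch of the two resilience metrics named above. The event
# log format (timestamps of template edits / deviations) is an assumption.
from datetime import datetime

def edit_ratio(num_cards_created, num_template_edits):
    """Data Cards created per edit made to the template; higher is better."""
    return num_cards_created / max(num_template_edits, 1)

def mean_time_between_failures(event_times):
    """Average gap between consecutive edit/deviation events, in days."""
    times = sorted(event_times)
    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

events = [datetime(2023, 1, 10), datetime(2023, 3, 1), datetime(2023, 7, 15)]
print(edit_ratio(num_cards_created=42, num_template_edits=6))  # 7.0
print(mean_time_between_failures(events))                       # 93.0 days
```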
Understandability
Data Card Template: How successful are producers in understanding the questions in the Data Card template in the context of their datasets?

Are there any sections in a Data Card template that are significantly harder to answer?
Completed Data Card: Are readers able to easily understand both the question being answered and the answer provided in a Data Card? How successful are readers in using a completed Data Card for their tasks? Is the content in the Data Card suitable for different readers?
Why? Understandability directly contributes to the overall functionality of your template. It tracks how well a producer can onboard onto and use a Data Card template – and how efficiently a new reader of a Data Card can onboard, habituate, and use the information in a completed Data Card.
When?
Template: When providing templates to dataset producers to complete, with check-ins at milestones during the completion process.

Completed Data Cards: Upon public distribution or launch of Data Cards.
Example:
Formative studies: Proactively recruit readers to participate in surveys and cognitive walkthroughs for specific insights.

Analytics: Track traffic and engagement-focused metrics to see patterns in overall understanding. However, be cautious of vanity metrics.
Follow-up Action:
Formative studies: Identify whether low onboarding rates stem from how the Data Card is implemented or from its content.

Analytics: Consider metrics in the context of the dataset and of each other. High readership may not directly translate to dataset use.
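If cognitive walkthroughs yield per-section difficulty ratings, ranking the hardest sections is straightforward. A minimal sketch, assuming hypothetical section names and a 1–5 difficulty scale:

```python
# Hypothetical sketch: rank template sections by how hard producers found
# them in cognitive walkthroughs. Section names and the 1-5 difficulty scale
# are illustrative assumptions.
from collections import defaultdict

def hardest_sections(responses, top_n=2):
    """responses: (section, difficulty 1-5) pairs from walkthrough sessions."""
    totals = defaultdict(list)
    for section, difficulty in responses:
        totals[section].append(difficulty)
    means = {s: sum(d) / len(d) for s, d in totals.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

responses = [
    ("Provenance", 4), ("Provenance", 5), ("Use Cases", 2),
    ("Use Cases", 3), ("Maintenance", 1), ("Maintenance", 2),
]
print(hardest_sections(responses))  # [('Provenance', 4.5), ('Use Cases', 2.5)]
```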
Supportability
Data Card Template: How much additional time is required to answer questions, address issues, and discuss topics related to the Data Card?

What kinds of expertise, if any, do producers need to rely on to complete their Data Cards?
Completed Data Card: Is the quality and uniqueness of questions about a dataset improving over time as traffic to the Data Card increases?

Does this influence the appropriate uses of the dataset?
Why? Tracks the capacity for providing support to sustain Data Cards, and the amount of support provided, vis-a-vis the benefits of Data Cards.
When?
Template: As soon as you set up a Data Cards effort in your organization, regardless of its scale, even if it is ad hoc.

Completed Data Cards: Start when the Data Card is made available for consumption, and track over time.
Example:
Office hours: Set up an office hours or support program to help dataset producers create Data Cards. Track the number of teams or individuals who attend, the kinds of datasets involved, and the questions they have.

Producer check-ins: Producers meet at a regular cadence to share the nature of questions they are asked and Data Card analytics reports.
Follow-up Action:
Office hours: Synthesize notes from office hours every six months or so. These can provide insights into the kinds of challenges that can be addressed through organizational programs, processes, or guidance.

Producer check-ins: Identify patterns in how readers provide feedback or respond to Data Cards. Follow up with qualitative studies that provide specific details on what could generally be improved in the organization’s Data Cards.
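A lightweight log makes the periodic synthesis easier. The sketch below tallies question themes and unique attending teams from office-hours entries; the log schema is an assumption made for illustration:

```python
# Hypothetical sketch: tally office-hours questions by theme so that the
# six-monthly synthesis can spot recurring support needs. The log schema
# (date, team, theme) is an illustrative assumption.
from collections import Counter

def synthesize_office_hours(log):
    """Return the most common question themes and the number of unique teams."""
    themes = Counter(entry["theme"] for entry in log)
    teams = {entry["team"] for entry in log}
    return themes.most_common(3), len(teams)

log = [
    {"date": "2023-02-01", "team": "speech", "theme": "provenance"},
    {"date": "2023-03-12", "team": "vision", "theme": "provenance"},
    {"date": "2023-04-20", "team": "vision", "theme": "licensing"},
]
print(synthesize_office_hours(log))
# ([('provenance', 2), ('licensing', 1)], 2)
```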
Conversion
Data Card Template: How successful are producers in completing a Data Card template for their datasets?

How quickly are dataset producers able to release a Data Card for their datasets?
Completed Data Card: Are readers able to successfully make decisions about the dataset, or complete their tasks related to the dataset, on the basis of information in the Data Card?

Are there visible improvements to reader decisions that can be directly attributed to the Data Card?
Why? Tracks the percentage of producers and readers who are able to complete their tasks because of a Data Card or its template.
When?
Template: As soon as you set up a Data Cards effort in your organization, regardless of its scale, even if it is ad hoc.

Completed Data Cards: Start when the Data Card is made available for consumption, and track over time.
Example:
Analytics: Track the time to completion, the rate of completion, and the percentage of relevant sections completed in a Data Card template.

Qualitative Studies: Run interview studies and satisfaction studies that yield insight into the specific benefits that readers have experienced.
Follow-up Action:
Analytics: Follow up on problematic numbers with qualitative studies with dataset producers. Adapt the template if necessary.

Qualitative Studies: Run an [Agent Information Journey](https://pair-code.github.io/datacardsplaybook/playbook) workshop with readers of the Data Card to understand specific needs, and then translate those into the Data Card template.
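The conversion analytics above reduce to a few ratios over completion records. A minimal sketch, assuming a hypothetical record schema with days spent, a completion flag, and section counts:

```python
# Hypothetical sketch of the conversion analytics named above: time to
# completion, completion rate, and share of relevant sections completed per
# template. The record schema is an illustrative assumption.

def conversion_metrics(attempts):
    """attempts: dicts with days_spent, completed flag, and section counts."""
    finished = [a for a in attempts if a["completed"]]
    completion_rate = len(finished) / len(attempts)
    mean_days = (sum(a["days_spent"] for a in finished) / len(finished)
                 if finished else None)
    section_share = sum(a["sections_done"] / a["sections_relevant"]
                        for a in attempts) / len(attempts)
    return completion_rate, mean_days, round(section_share, 2)

attempts = [
    {"days_spent": 5, "completed": True,
     "sections_done": 12, "sections_relevant": 12},
    {"days_spent": 9, "completed": False,
     "sections_done": 4, "sections_relevant": 10},
]
print(conversion_metrics(attempts))  # (0.5, 5.0, 0.7)
```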
Engagement
Data Card Template: Are Data Card producers actively sharing templates with other dataset owners? How many Data Cards are being created organically or proactively, in comparison to those that are required?

Is there a visible improvement in the quality of answers provided in Data Cards?
Completed Data Card: How often do agents or dataset users refer to the Data Card for more information?

Is new knowledge about a dataset being generated over time that can be directly attributed to the Data Card?
Why? These metrics track how actively your audience engages with your content – be it a Data Card or its template.
When?
Template: Once Data Card templates have been established and are in circulation in your organization.

Completed Data Cards: Once Data Cards are publicly available alongside the datasets they represent. This metric is less useful if the Data Card is not discoverable, or has competing (not complementary) documentation sources.
Example:
Per section: Measure engagement metrics per section of a Data Card or its template, and track deep-link shares per section of the Data Card.
Follow-up Action:
Engagement metrics that are disproportionately better than conversion metrics could point to “satisficing” flaws in the Data Card. Conversely, high conversion rates but low engagement metrics should prompt a reassessment of the Data Card’s relevance to readers and its efficacy.
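This follow-up heuristic can be automated as a simple per-card comparison. The sketch below assumes hypothetical engagement and conversion rates normalized to [0, 1] and an illustrative gap threshold:

```python
# Hypothetical sketch of the follow-up heuristic above: compare normalized
# engagement and conversion rates per Data Card and flag the two failure
# modes. The rates and the 0.25 gap threshold are illustrative assumptions.

def diagnose(cards, gap=0.25):
    """cards: name -> (engagement_rate, conversion_rate), both in [0, 1]."""
    findings = {}
    for name, (engagement, conversion) in cards.items():
        if engagement - conversion > gap:
            # read often, but decisions don't follow
            findings[name] = "possible satisficing flaw"
        elif conversion - engagement > gap:
            # used for decisions, but barely read
            findings[name] = "reassess relevance and efficacy"
    return findings

cards = {
    "speech-corpus-card": (0.9, 0.4),
    "image-set-card": (0.3, 0.7),
    "tabular-card": (0.6, 0.55),
}
print(diagnose(cards))
```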
Reach
Data Card Template: How many Data Cards is your organization able to produce, relative to the number of datasets it creates?
Completed Data Card: How much traffic does a Data Card get, and how much traffic does it bring to the dataset?
Why? Reach is the total number of unique people who see your template and complete Data Cards. It is an important precursor for additional metrics, such as engagement and conversion.
When?
Template: Once Data Card templates have been established and are in circulation in your organization.

Completed Data Cards: Once the Data Card is publicly available alongside the dataset it represents. This metric is less useful if the Data Card is not discoverable, or has competing (not complementary) documentation sources.
Example:
Friction Logs: Capture the challenges, difficulties, or frustrations that both dataset producers and Data Card readers may have through a friction log.
Follow-up Action:
Friction Logs: Plan a share-out of friction logs in which you identify the frequency and priority of problems. These can be further augmented with focus groups and qualitative assessments to get to the root cause of each problem.
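Both reach questions can be tracked as simple ratios. A minimal sketch with illustrative counts; the function names are assumptions for the example:

```python
# Hypothetical sketch of the two reach questions above: documentation
# coverage across the organization, and how much of a Data Card's traffic
# it passes on to its dataset. All counts are illustrative assumptions.

def coverage(num_data_cards, num_datasets):
    """Share of the organization's datasets that have a Data Card."""
    return num_data_cards / num_datasets

def pass_through_rate(card_visits, clicks_to_dataset):
    """Share of Data Card visits that continue on to the dataset itself."""
    return clicks_to_dataset / card_visits

print(coverage(num_data_cards=35, num_datasets=50))                # 0.7
print(pass_through_rate(card_visits=1200, clicks_to_dataset=300))  # 0.25
```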

Key Takeaways

  • Metrics that measure the impact of a Data Card template are different from those measuring the impact of a completed Data Card using that template.
  • The maturity of a dataset can change the way you interpret Data Card metrics. Factor in the maturity and popularity of the dataset, and consider quantitative, qualitative and anecdotal impact in unison.

Actions

  1. Diversify your goals. Establish goals for your transparency efforts for both Data Card templates and completed Data Cards in your organization.
  2. Define both lead and lag metrics. For each lag metric that tells you when you have reached a goal, establish lead metrics to track the critical activities that contribute to that goal (see the sketch after this list).
  3. Set a cadence for complementary, qualitative studies. As you set up the necessary infrastructure to measure Data Cards across your organization, create a plan to regularly run qualitative studies to verify results and calibrate quantitative metrics.
  4. Train individual data teams. Enable teams producing datasets and Data Cards to interpret qualitative and quantitative metrics in unison within the context of their datasets and Data Cards.
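As a minimal illustration of action 2, the sketch below pairs one hypothetical lag metric with the lead metrics that feed it; the goal, metric names, and targets are invented for the example:

```python
# A minimal sketch of the lead/lag pairing described in action 2. The goal,
# metrics, and targets are illustrative assumptions, not prescribed values.

GOALS = {
    # Lag metric: tells you a goal was reached (trailing, hard to influence).
    "80% of new datasets have a Data Card within 30 days of launch": {
        # Lead metrics: trackable activities that drive the lag metric.
        "templates started per week": 5,
        "office-hours attendees per month": 10,
        "median days from template start to publication": 14,
    },
}

for lag, leads in GOALS.items():
    print(f"Lag metric: {lag}")
    for lead, target in leads.items():
        print(f"  Lead metric: {lead} (target: {target})")
```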

Considerations

  • Can the selected metrics be interpreted in the context of datasets that your organization produces?
  • Can the selected metrics be implemented in the context of your organization’s transparency efforts?
  • Is there a plan to validate quantitative metrics with qualitative metrics, and vice versa?

Related activities

Implementation Checklist (Module: Audit, Level: Basic, Recommended Duration: < 30 min)

A checklist to ensure that your implementation addresses basic needs that can drive the adoption and use of your Data Card and the corresponding dataset.

Friction Log Template (Module: Audit, Level: Basic, Recommended Duration: < 30 min)

A template to track points where you encounter an issue or generally get stuck and feel frustrated.

Evaluate with Readers (Module: Audit, Theme: Stakeholder-focused)

Select from evaluation methods and gain insight into how your Data Card is performing with readers.