Concepts

This guide defines key terms and concepts you’ll encounter when setting up your DX account. Use it alongside the Start guide.

Setting up

Data connectors

Integrations that pull data from your engineering tools—like GitHub, GitLab, Jira, PagerDuty, and CI/CD systems. Connecting tools lets DX correlate what developers say (in snapshots) with what systems show (in metrics). View all available connectors.

Team hierarchy

Your team structure in DX, mirroring your org chart. Teams roll up to parent teams, and data flows accordingly—managers see their team’s results, directors see aggregated data across their org.
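As a rough illustration of the rollup idea (team names, scores, and the averaging logic here are invented for the example, not DX’s actual aggregation):

```python
# Illustrative sketch: per-team scores aggregating up an org tree.
# Team names and scores are made up; DX's real rollup logic may differ.

def rollup(team, children, scores):
    """Average a team's own score with its descendants' rolled-up scores."""
    subtree = [scores[team]] + [rollup(c, children, scores)
                                for c in children.get(team, [])]
    return sum(subtree) / len(subtree)

children = {"Engineering": ["Platform", "Product"], "Platform": ["Infra"]}
scores = {"Engineering": 80, "Platform": 70, "Product": 90, "Infra": 60}

platform_view = rollup("Platform", children, scores)      # manager sees Platform + Infra
org_view = rollup("Engineering", children, scores)        # director sees the whole org
```

The key property is the recursion: a manager’s view includes only their subtree, while a director’s view aggregates every team beneath them.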

User linking

How DX maps identities across systems. DX links users automatically by email when possible; for some tools like GitHub and GitLab, you may need to set usernames manually.
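The matching order can be sketched as follows (field names and the override structure are hypothetical, not DX’s actual data model):

```python
# Illustrative sketch of identity linking: try an email match first,
# then fall back to a manually set username. Field names are hypothetical.

def link_users(dx_users, tool_accounts, manual_usernames):
    """Map each DX user's email to an account in a connected tool."""
    by_email = {a["email"]: a["username"]
                for a in tool_accounts if a.get("email")}
    links = {}
    for user in dx_users:
        email = user["email"]
        if email in by_email:              # automatic: matched by email
            links[email] = by_email[email]
        elif email in manual_usernames:    # manual: admin-set username
            links[email] = manual_usernames[email]
    return links

dx_users = [{"email": "ana@example.com"}, {"email": "bo@example.com"}]
tool_accounts = [{"email": "ana@example.com", "username": "ana-dev"}]
manual = {"bo@example.com": "bo-gh"}  # e.g. a GitHub handle with no public email

links = link_users(dx_users, tool_accounts, manual)
```

The fallback matters because tools like GitHub and GitLab don’t always expose an email address, which is why those usernames sometimes need to be set by hand.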

Attributes

Attributes are metadata attached to users—like Start Date, Seniority, Location, or Primary Language. Attributes enable filtering and help you discover patterns across different cohorts (e.g., “How does developer experience differ between remote and in-office engineers?”).

  • DX managed attributes: Automatically populated by DX based on connected system data—like tenure bands (from start dates) and AI usage levels (from tool connections like GitHub Copilot or Cursor). See full list.
  • Self-reported attributes: Gathered directly from users during snapshots. Useful for data that doesn’t exist in HR systems—like primary programming language, preferred work environment, or hardware setup.
  • Admin-set attributes: Uploaded by admins via CSV, API, or manual entry—typically sourced from HR systems. Examples: job level, department, office location.
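To illustrate the CSV route for admin-set attributes (the column names below are hypothetical, not DX’s actual import schema):

```python
import csv
import io

# Hypothetical CSV for an admin-set attribute upload, e.g. exported from
# an HR system. DX's real import format may use different column names.
raw = """email,job_level,department,office_location
ana@example.com,L4,Platform,Berlin
bo@example.com,L5,Product,Remote
"""

rows = list(csv.DictReader(io.StringIO(raw)))
departments = {r["email"]: r["department"] for r in rows}
```

Each row keys attribute values to a user’s email, which is also how DX links identities across systems.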

Measurement

Core 4

Four key metrics that summarize engineering health:

  • Speed: How fast your team delivers (e.g., PR cycle time, deployment frequency)
  • Quality: Stability and reliability of your systems
  • Impact: How much time goes toward high-value work
  • Effectiveness: How developers feel about their work environment, measured by the Developer Experience Index (DXI)

CSAT (Customer Satisfaction)

Simple satisfaction scores (1-5) for internal tools and platforms your team uses. CSAT questions let you track how developers feel about specific tools over time.

Drivers

Topics that impact developer productivity—like Code Maintainability, Local Iteration Speed, CI/CD, and Documentation. In each snapshot, developers rate drivers and vote on what’s slowing them down most. Driver scores feed into the DXI.

DXI (Developer Experience Index)

A single score (out of 100) representing your overall developer experience, calculated from driver responses. The DXI serves as your baseline for tracking progress quarter over quarter, and can be compared against industry benchmarks.
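DX’s actual DXI formula isn’t spelled out in this guide; purely as an illustration of how driver responses could become a 0–100 index, consider rescaling and averaging ratings:

```python
# Hypothetical illustration only: DX's real DXI calculation is not
# published here. This sketch rescales 1-5 driver ratings onto 0-100
# and averages them into a single score.

def toy_index(driver_ratings):
    """Average 1-5 ratings, rescaled so 1 -> 0 and 5 -> 100."""
    scaled = [(r - 1) / 4 * 100 for r in driver_ratings]
    return sum(scaled) / len(scaled)

# Ratings for four drivers, e.g. Code Maintainability, CI/CD,
# Documentation, Local Iteration Speed (values invented).
score = toy_index([4, 3, 5, 4])
```

Whatever the exact formula, the useful property is a single comparable number: the same calculation run each quarter gives you a trend line, and against benchmarks gives you a relative position.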

Snapshots

A short survey (5-10 minutes) sent to developers via Slack or Teams. Snapshots capture how developers feel about their tools, processes, and collaboration—and identify where friction exists. Most organizations run snapshots quarterly.

System metrics

Quantitative data pulled from your connected tools—like PR cycle time, deployment frequency, and incident response time. System metrics complement snapshot data by showing what’s actually happening alongside what developers perceive.

Workflows

Questions that measure how much time developers spend on specific tasks—like waiting for code review, debugging CI failures, or setting up local environments. Workflow results are compared against recommended targets and industry benchmarks.

Taking action

Comments

Qualitative feedback from developers, attached to specific drivers or CSAT questions. Comments provide context behind the scores and often surface specific issues or suggestions. Managers can acknowledge comments with a ‘Like’ or reply directly.

Playbooks

Research-backed guides for improving specific drivers. When you identify an area to focus on, Playbooks offers concrete strategies and tactics your team can implement.

Triage

The process in which managers review their team’s snapshot results and set a status for each driver. Triage closes the loop on feedback and creates accountability for improvement.

  • Keep monitoring: Performance is acceptable
  • Make improvements: Action is needed
  • Needs support: Help is required from leadership or other teams