Engineering Productivity team

The Engineering Productivity team increases productivity of GitLab team members and contributors by shortening feedback loops and improving workflow efficiency for GitLab projects.

Mission

  • Constantly improve efficiency for our entire engineering team, to ultimately increase value for our customers.
  • Measure what matters: quality of life, efficiency, and toil reduction improvements with quantitative and qualitative measures.
  • Build partnerships across organizational boundaries to deliver broad efficiency improvements.

Team

Members

| Team Member | Role |
| --- | --- |
| Ethan Guo | Acting Engineering Manager |
| Alina Mihaila | Senior Backend Engineer, Engineering Productivity |
| David Dieulivol | Senior Backend Engineer, Engineering Productivity |
| Jennifer Li | Senior Backend Engineer, Engineering Productivity |
| Jen-Shin Lin | Senior Backend Engineer, Engineering Productivity |
| Nao Hashizume | Backend Engineer, Engineering Productivity |
| Peter Leitzen | Staff Backend Engineer, Engineering Productivity |
| Rémy Coutable | Principal Engineer, Infrastructure |

Stable Counterpart

| Person | Role |
| --- | --- |
| Greg Alfaro | GDK Project Stable Counterpart, Application Security |

Core Responsibilities

graph LR
    A[Engineering Productivity Team]

    A --> B[Planning & Reporting]
    B --> B1[Weekly team reports<br>Providing teams with an overview of their current, planned & unplanned work]
    click B1 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/32"
    B --> B2[Issues & MRs hygiene automation<br>Ensuring healthy issue/MR trackers]
    click B2 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/32"

    A --> C[Development Tools]
    C --> C1[GitLab Development Kit<br>Providing a reliable development environment]
    click C1 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/31"
    C --> C2[GitLab Remote Development<br>Providing a reliable remote development environment]
    click C2 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/31"

    A --> F[Review & CI]
    F --> F2[Merge Request Review Process<br>Ensuring a smooth, fast and reliable review process]
    click F2 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/34"
    F --> F3[Merge Request Pipelines<br>Providing fast and reliable pipelines]
    click F3 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/28"
    F --> F4[Review apps<br>Providing review apps to explore a merge request's changes]
    click F4 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/33"

    A --> D[Maintenance & Security]
    D --> D1[Automated dependency updates<br>Ensuring dependencies are up-to-date]
    click D1 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/40"
    D --> D2[Automated management of CI/CD secrets<br>Providing a secure CI/CD environment]
    click D2 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/46"
    D --> D3[Automated main branch failing pipelines management<br>Providing a stable `master` branch]
    click D3 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/30"
    D --> D4[Static analysis<br>Ensuring the codebase style and quality is consistent and reducing bikeshedding]
    click D4 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/38"
    D --> D5[Shared CI/CD components<br>Providing CI/CD components to ensure consistency in all GitLab projects]
    click D5 "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/41"

    A --> G[JiHu Support]
    click G "https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/35"
  • See it and find it: Build automated measurements and dashboards to gain insights into the productivity of the Engineering organization to identify opportunities for improvement.
    • Implement new measurements to provide visibility into improvement opportunities.
    • Collaborate with other Engineering teams to provide visualizations for measurement objectives.
    • Improve existing performance indicators.
  • Do it for internal team: Increase contributor and developer productivity by making measurement-driven improvements to the development tools / workflow / processes, then monitor the results, and iterate.
  • Dogfood use: Dogfood GitLab product features to improve developer workflow and provide feedback to product teams.
    • Use new features from related product groups (Analytics, Monitor, Testing).
    • Improve usage of Review apps for GitLab development and testing.
  • Engineering support:
  • Engineering workflow: Develop automated processes for improving label classification hygiene in support of product and Engineering workflows.
  • Do it for wider community: Increase efficiency for wider GitLab Community contributions.
  • Dogfood build: Enhance and add new features to the GitLab product to improve engineer productivity.

Metrics

KPIs

Infrastructure Performance Indicators are our single source of truth

PIs

Shared

Dashboards

The Engineering Productivity team creates metrics in the following sources to aid in operational reporting.

OKRs

Objectives and Key Results (OKRs) help align our sub-department towards what really matters. These happen quarterly and are based on company OKRs. We follow the OKR process defined here.

Here is an overview of our current OKRs.

Communication

| Description | Link |
| --- | --- |
| GitLab Team Handle | @gl-quality/eng-prod |
| Slack Channel | #g_engineering_productivity |
| Team Boards | Team Board & Priority Board |
| Issue Tracker | gitlab-org/quality/engineering-productivity/team |

Office hours

Engineering Productivity holds monthly office hours on the 3rd Wednesday of even months (e.g. February, April, etc.) at 3:00 UTC (20:00 PST), open for anyone to add topics or questions to the agenda. Office hours can be found in the GitLab Team Meetings calendar.

Meetings

Engineering Productivity has a weekly team meeting in two parts (EMEA / AMER) to allow all team members to collaborate at times that work for them.

  • Part 1 is Tuesdays 11:00 UTC, 04:00 PST
  • Part 2 is Tuesdays 22:00 UTC, 15:00 PST

Communication guidelines

The Engineering Productivity team will make changes which can create notification spikes or new behavior for GitLab contributors. The team will follow these guidelines in the spirit of GitLab’s Internal Communication Guidelines.

Pipeline changes

Critical pipeline changes

Pipeline changes that have the potential to have an impact on the GitLab.com infrastructure should follow the Change Management process.

Pipeline changes that meet the following criteria must follow the Criticality 3 process:

These kinds of changes have led to production issues in the past.

Non-critical pipeline changes

The team will communicate significant pipeline changes to #development in Slack and the Engineering Week in Review.

Pipeline changes that meet the following criteria will be communicated:

  • addition, removal, renaming, parallelization of jobs
  • changes to the conditions to run jobs
  • changes to pipeline DAG structure

Other pipeline changes will be communicated based on the team’s discretion.
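
As an illustration, here is a minimal sketch of the kind of `.gitlab-ci.yml` change that would be announced under the criteria above. The job names, counts, and base job are hypothetical, not the project's actual configuration:

```yaml
# Hypothetical .gitlab-ci.yml excerpt: each of these tweaks matches one of the
# criteria above, so the change would be announced in #development and the
# Engineering Week in Review.
rspec-unit:
  extends: .rspec-base            # illustrative hidden base job providing the script
  parallel: 28                    # parallelization change (e.g. was 20 before)
  needs:
    - setup-test-env
    - compile-test-assets         # new dependency: changes the pipeline DAG structure
  rules:
    - if: '$CI_MERGE_REQUEST_IID' # changed condition for when the job runs
```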

Automated triage policies

Be sure to give a heads-up in the #development, #eng-managers, #product, and #ux Slack channels and the Engineering Week in Review when an automation is expected to triage more than 50 notifications or to change policies that a large stakeholder group uses (e.g. the team-triage report).
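
For context, triage automations of this kind are typically expressed as policies in the gitlab-triage format. The sketch below uses a made-up rule name and labels purely to illustrate the sort of policy that warrants a heads-up when it can comment on many issues at once:

```yaml
# Minimal, hypothetical triage policy (gitlab-triage format). A rule like this
# generates one notification per matching issue, so estimate the match count
# and announce the change if it is likely to exceed ~50 notifications.
resource_rules:
  issues:
    rules:
      - name: Ask authors to confirm stale issues are still relevant
        conditions:
          state: opened
          date:
            attribute: updated_at
            condition: older_than
            interval_type: months
            interval: 12
        actions:
          labels:
            - awaiting feedback
          comment: |
            This issue has not been updated in more than a year.
            Is it still relevant? If not, please consider closing it.
```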

Experiments

This is a list of Engineering Productivity experiments where we identify an opportunity, form a hypothesis and experiment to test the hypothesis.

| Experiment | Status | Hypothesis | Feedback Issue or Findings |
| --- | --- | --- | --- |
| Automatic issue creation for test failures | Complete | The goal is to track each failing test in master with an issue, so that we can later automatically quarantine tests. | Feedback issue. |
| Always run predictive jobs for fork pipelines | Complete | The goal is to reduce the compute minutes consumed by fork pipelines. The "full" jobs only run for canonical pipelines (i.e. pipelines started by a member of the project) once the MR is approved. | |
| Retry failed specs in a new process after the initial run | Complete | Given that a lot of flaky tests are unreliable due to previous tests affecting the global state, retrying only the failing specs in a new RSpec process should result in a better overall success rate. | Results show that this is useful. |
| Experiment with automatically skipping identified flaky tests | Complete - Reverted | Skipping flaky tests should reduce the number of false broken master and increase the master success rate. | We found out that it can actually break master in some cases, so we reverted the experiment with gitlab-org/gitlab!111217. |
| Experiment with running previously failed tests early | Complete | | We have not noticed a significant improvement in feedback time due to other factors impacting our Time to First Failure metric. |
| Store/retrieve tests metadata in/from pages instead of artifacts | Complete | We're only interested in the latest state of these files, so using Pages makes sense here. This simplifies the logic to retrieve the reports and reduces the load on GitLab.com's infrastructure. | This has been enabled since 2022-11-09. |
| Reduce pipeline cost by reducing number of rspec tests before MR approval | Complete | Reduce the CI cost for GitLab pipelines by running the most applicable rspec tests for changes prior to approval. | Improvements are needed to identify and resolve selective test gaps, as this impacted pipeline stability. |
| Enabling developers to run failed specs locally | Complete | Enabling developers to run failed specs locally will lead to fewer pipelines per merge request and improved productivity from being able to fix regressions more quickly. | Feedback issue. |
| Use dynamic analysis to streamline test execution | Complete | Dynamic analysis can reduce the number of specs that are needed for MR pipelines without causing significant disruption to master stability. | A miss rate of 10% would cause a large impact to master stability. Look to leverage dynamic mapping with local developer tooling. Added documentation from the experiment. |
| Using timezone for Reviewer Roulette suggestions | Complete - Reverted | Using timezone in Reviewer Roulette suggestions will lead to a reduction in the mean time to merge. | Reviewer burden was inconsistently applied and specific reviewers were getting too many reviews compared to others. More details in the experiment issue and feedback issue. |
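
To make the two RSpec-related experiments above more concrete, here is a hedged sketch of how a CI job could record failing spec files, retry only those files in a fresh process, and keep the list as an artifact so developers can re-run the same specs locally. The helper script and paths are illustrative, not the actual GitLab implementation:

```yaml
# Illustrative job sketch: the first run records failures, a second RSpec
# process retries only the failing files, and the list is published so a
# developer can run `bundle exec rspec $(cat rspec/failed_files.txt)` locally.
rspec-unit:
  script:
    - bundle exec rspec --format json --out rspec/report.json || true
    - scripts/extract-failed-files rspec/report.json > rspec/failed_files.txt   # hypothetical helper
    - |
      if [ -s rspec/failed_files.txt ]; then
        bundle exec rspec $(cat rspec/failed_files.txt)   # new process, only the failures
      fi
  artifacts:
    when: always
    paths:
      - rspec/failed_files.txt
```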

Direction - GDK
Last reviewed: 2021-01-16. GDK Project · Issue List · Epic List. Please comment, thumbs-up (or down!), and contribute to the linked issues and epics on this category page. Sharing your feedback directly on GitLab.com is the best way to contribute to our vision. You can also share feedback directly via email or Twitter, and there's a Discord #contribute channel where you can give us feedback and ask questions. If you're a GDK user, we'd always love to hear from you!
Engineering productivity project management
Guidelines for project management for the Engineering Productivity team at GitLab
Flaky tests management and processes
Introduction: A flaky test is an unreliable test that occasionally fails but passes eventually if you retry it enough times. In a test suite, flaky tests are inevitable, so our goal should be to limit their negative impact as soon as possible. Of all the factors that affect master pipeline stability, flaky tests contribute to at least 30% of master pipeline failures each month. Current state and assumptions: the master success rate was at 89% for March 2024; we don't know exactly what the success rate would be without any flaky tests, but we assume we could attain 99%. There were 5,200+ ~"failure::flaky-test" issues out of a total of 260,040 tests as of 2024-03-01. It means we identified 1.
Issue Triage
Guidelines for triaging new issues opened on GitLab.com projects
Test Intelligence
Introduction: As the owner of pipeline configuration for the GitLab project, the Engineering Productivity team has adopted several test intelligence strategies aimed at improving pipeline efficiency, with the following benefits: a shortened feedback loop by prioritizing tests that are most likely to fail, and faster pipelines that scale better when Merge Train is enabled. These strategies include: predictive test jobs via test mapping, a fail-fast job, re-running previously failed tests early, selective jobs via pipeline rules, and selective jobs via labels. Predictive test jobs via test mapping: tests that provide coverage to the code changes in each merge request are the most likely to fail.
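
As a small illustration of the "selective jobs via pipeline rules" strategy mentioned above, a job can be limited to run only when the files it cares about change. The job below is a hypothetical example, not the project's actual configuration:

```yaml
# Hypothetical selective job: it is added to the pipeline only when Danger's
# own configuration changes, keeping unrelated pipelines smaller and faster.
danger-review:
  script:
    - bundle exec danger
  rules:
    - changes:
        - Dangerfile
        - danger/**/*
```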
Triage Operations
Automation and tooling for processing un-triaged issues at GitLab
Wider Community Merge Request Triage
Guidelines for triaging new merge requests from the wider community opened on GitLab.com projects
Workflow Automation
Introduction: The Engineering Productivity team owns the tooling and processes for GitLab's internal workflow automation. Triage-ops is one of the main projects the EP team maintains; it empowers GitLab team members to triage issues, MRs, and epics automatically. One-off label migrations: in the event of team structure changes, we often need to run a one-off label migration to update labels on existing issues, MRs, and epics. We encourage every team member to perform the migrations themselves for maximum efficiency.
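
A one-off label migration is usually just a small triage policy run once against the existing issues and MRs. The sketch below uses made-up label names to show the shape of such a policy under that assumption:

```yaml
# Hypothetical one-off migration: relabel open issues from a retired group
# label to its replacement (gitlab-triage format; label names are examples).
resource_rules:
  issues:
    rules:
      - name: Migrate group::old name to group::new name
        conditions:
          state: opened
          labels:
            - group::old name
        actions:
          remove_labels:
            - group::old name
          labels:
            - group::new name
```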