The Engineering Productivity team maximizes the value and throughput of Product Development teams and wider community contributors by improving the developer experience, streamlining the product development processes, and keeping projects secure, compliant, and easy to work on for everyone.
ℹ️ Note: This page is deprecated. The team has been restructured as Development Analytics and Developer Tooling under the Developer Experience Stage.
Mission
Constantly improve efficiency for our entire engineering and product teams to increase customer value.
Measure what matters: quality of life, efficiency, and toil reduction improvements with quantitative and qualitative measures.
Build partnerships across organizational boundaries to deliver maintainability and efficiency improvements for all stakeholders.
Vision
The Engineering Productivity team’s vision is to focus on the satisfaction of the Product Development teams and wider community contributors while keeping GitLab projects
secure, compliant, and easy to work on.
Integral parts of this vision:
Developer experience: Provide stable development environments and tools, as well as a consistent and streamlined contributing experience.
Product development processes: Help product and engineering managers see the whole picture of their group’s bugs, feature proposals, and planned and started work, as well as automate issue and merge request hygiene (labels, milestones, staleness, etc.).
Maintainability and security of GitLab’s projects: Enforce configuration consistency (project settings, CI/CD pipelines) for all GitLab projects
–including JiHu– to ensure they’re maintainable, compliant and secure in the long-term.
Our principles
See it and find it: Build automated measurements and dashboards to gain insights into the productivity of the Engineering organization to identify opportunities for improvement.
Implement new measurements to provide visibility into improvement opportunities.
Collaborate with other Engineering teams to provide visualizations for measurement objectives.
Improve existing performance indicators.
Do it for any contributor: Increase contributor productivity by making measurement-driven improvements to the development tools / workflow / processes, then monitor the results, and iterate.
Identify and implement quantifiable improvement opportunities with proposals and hypotheses for metric improvements.
Objectives and Key Results (OKRs) help align our sub-department towards what really matters. These happen quarterly and are based on company OKRs. We follow the OKR process defined here.
Engineering Productivity holds monthly office hours on the third Wednesday of even months (e.g. February, April) at 3:00 UTC (20:00 PST), open for anyone to add topics or questions to the agenda. Office hours can be found in the GitLab Team Meetings calendar.
Meetings
Engineering Productivity has a weekly team meeting on Wednesdays at 15:00 UTC (08:00 PST).
Communication guidelines
The Engineering Productivity team will make changes which can create notification spikes or new behavior for
GitLab contributors. The team will follow these guidelines in the spirit of GitLab’s Internal Communication Guidelines.
Pipeline changes
Critical pipeline changes
Pipeline changes that have the potential to have an impact on the GitLab.com infrastructure should follow the Change Management process.
Non-critical pipeline changes
The team will communicate significant pipeline changes to #development in Slack and the Engineering Week in Review.
Pipeline changes that meet the following criteria will be communicated:
addition, removal, renaming, parallelization of jobs
changes to the conditions to run jobs
changes to pipeline DAG structure
Other pipeline changes will be communicated based on the team’s discretion.
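As a hypothetical illustration (the job name and rule contents are invented), the criteria above usually correspond to edits of a job’s `rules:` in `.gitlab-ci.yml`:

```yaml
# Hypothetical .gitlab-ci.yml fragment. Narrowing when this job runs —
# e.g. adding the `changes:` clause below — is the kind of change the
# team would announce in #development and the Engineering Week in Review.
rspec-predictive:
  script: bundle exec rspec
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - "app/**/*.rb"
        - "lib/**/*.rb"
```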
Automated triage policies
Be sure to give a heads-up to the #development, #eng-managers, #product, and #ux Slack channels
and the Engineering Week in Review when an automation is expected to triage more
than 50 notifications, or when it changes policies that a large stakeholder group uses (e.g. the team-triage report).
Experiments
This is a list of Engineering Productivity experiments, where we identify an opportunity, form a hypothesis, and run an experiment to test it.
The goal is to reduce the compute minutes consumed by fork pipelines. The “full” jobs only run for canonical pipelines (i.e. pipelines started by a member of the project) once the MR is approved.
Given that many flaky tests are unreliable because previous tests affect the global state, retrying only the failing specs in a new RSpec process should result in a better overall success rate.
We’re only interested in the latest state of these files, so using Pages makes sense here. This simplifies the logic to retrieve the reports and reduces the load on GitLab.com’s infrastructure.
Enabling developers to run failed specs locally will lead to fewer pipelines per merge request and improved productivity, since regressions can be fixed more quickly.
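One way to support this locally is RSpec’s built-in example-status persistence, which enables `--only-failures`; a minimal sketch (the file path is an assumption, and GitLab’s actual tooling may differ):

```ruby
# spec/spec_helper.rb — persist each example's pass/fail status so that
# `rspec --only-failures` can re-run just the examples that failed last time.
RSpec.configure do |config|
  config.example_status_persistence_file_path = "spec/examples.txt"
end
```

With this in place, `bundle exec rspec --only-failures` re-runs only the previously failed specs.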
Dynamic analysis can reduce the amount of specs that are needed for MR pipelines without causing significant disruption to master stability
A miss rate of 10% would have a large impact on master stability. We will look to leverage dynamic mapping with local developer tooling. Documentation from the experiment was added.
Using timezone in Reviewer Roulette suggestions will lead to a reduction in the mean time to merge
Reviewer Burden was inconsistently applied, and specific reviewers were getting too many reviews compared to others. More details are in the experiment issue and feedback issue.
Please comment, thumbs-up (or down!), and contribute to the linked issues and
epics on this category page. Sharing your feedback directly on GitLab.com is
the best way to contribute to our vision.
A flaky test is an unreliable test that occasionally fails but passes eventually if you retry it enough times.
In a test suite, flaky tests are inevitable, so our goal should be to limit their negative impact as soon as possible.
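The “passes if you retry it enough times” behaviour can be sketched with a small retry helper (hypothetical, for illustration only — not GitLab’s actual retry mechanism):

```ruby
# Re-run a test block up to `attempts` times and report success if any
# single run passes. This is how retries mask flaky tests: the suite
# goes green even though individual runs are unreliable.
def passes_with_retries?(attempts)
  attempts.times do
    return true if yield
  end
  false
end
```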
Out of all the factors that affect master pipeline stability, flaky tests contribute to at least 30% of master pipeline failures each month.
Given we have approximately 91k pipelines per month, flaky tests waste 31,395 CI minutes per month. Given our private runners cost us $0.0845 per minute, flaky tests waste at minimum $2,653 per month in CI minutes. This doesn’t take into account the engineers’ time wasted.
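The dollar figure follows directly from the wasted minutes and the per-minute runner rate quoted above:

```python
# Back-of-the-envelope cost of flaky tests, using the figures above.
wasted_minutes_per_month = 31_395
cost_per_minute_usd = 0.0845  # private runner cost quoted above
monthly_cost = wasted_minutes_per_month * cost_per_minute_usd
print(round(monthly_cost))  # 2653 — matches the ~$2,653/month in the text
```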
Manual flow to detect flaky tests
When a flaky test fails in an MR, the author might follow this flow:
As the owner of pipeline configuration for the GitLab project, the Engineering Productivity team has adopted several test intelligence strategies aimed at improving pipeline efficiency, with the following benefits:
Shortened feedback loop by prioritizing tests that are most likely to fail
Faster pipelines to scale better when Merge Train is enabled
These strategies include:
Predictive test jobs via test mapping
Fail-fast job
Re-run previously failed tests early
Selective jobs via pipeline rules
Selective jobs via labels
Predictive test jobs via test mapping
Tests that provide coverage to the code changes in each merge request are most likely to fail. As a result, merge request pipelines for the GitLab project run only the predictive set of tests by default. These include:
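The core of such a mapping-based selection can be sketched as follows (a hypothetical simplification — the map format and function name are invented for illustration, not GitLab’s actual implementation):

```ruby
# Given a map from source files to the spec files that cover them,
# select the predictive set of specs for a merge request's changed files.
def predictive_specs(changed_files, test_map)
  changed_files.flat_map { |file| test_map.fetch(file, []) }.uniq
end

map = { "app/models/user.rb" => ["spec/models/user_spec.rb"] }
predictive_specs(["app/models/user.rb", "README.md"], map)
# => ["spec/models/user_spec.rb"]
```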
The Engineering Productivity team owns the tooling and processes for GitLab’s internal workflow automation. Triage-ops, one of the main projects the EP team maintains, empowers GitLab team members to automatically triage issues, MRs, and epics.
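Triage-ops builds on the `gitlab-triage` gem, where policies are declared in YAML; a simplified, hypothetical staleness policy might look like this (the label and interval are invented):

```yaml
# Hypothetical triage policy: label issues untouched for 12 months as stale.
resource_rules:
  issues:
    rules:
      - name: Mark stale issues
        conditions:
          date:
            attribute: updated_at
            condition: older_than
            interval_type: months
            interval: 12
        actions:
          labels:
            - stale
```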
One-off label migrations
In the event of team structure changes, we often need to run a one-off label migration to update labels on existing issues, MRs, and epics. We encourage every team member to perform the migrations themselves for maximum efficiency. For the fastest result, follow the instructions below to get started on a label migration merge request. The EP team can then help review and run the migrations if needed.
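A one-off label migration typically takes the same triage-policy shape; a hypothetical migration renaming a group label (label names invented):

```yaml
# Hypothetical one-off policy: move issues from an old group label to a new one.
resource_rules:
  issues:
    rules:
      - name: Migrate group label after reorg
        conditions:
          labels:
            - "group::old name"
        actions:
          labels:
            - "group::new name"
          remove_labels:
            - "group::old name"
```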