Test Platform Sub-Department

The Test Platform Sub-Department enables successful development and deployment of high-quality GitLab software applications by building innovative automated solutions, providing reliable tooling, refining test efficiency, and fostering an environment where Quality is Everyone's responsibility.

Child Pages

Bug Prioritization

Quad Planning

On-call Rotation

Test Coverage

E2E Test Execution Reports


Mission

At GitLab, Quality is everyone's responsibility. The Test Platform sub-department's mission is to be a world-class team that enables successful development and deployment of high-quality GitLab software applications through kaizen-style continuous improvement of workflow efficiency, reliability, and productivity.

The Test Platform sub-department does this by focusing on:

  • Innovative test architecture, efficiency, and customer results while delivering impact to the company’s critical business initiatives.
  • Broadening our lead in self-managed excellence, improving deployment confidence, driving visibility and actionability of test results, and expanding our architecture focus.
  • Enabling development and deployment at scale.
  • Fostering a culture of quality evangelism, promoting testing best practices across GitLab.

Vision

The Test Platform sub-department's vision is to focus on customer satisfaction and enable GitLab to deliver faster and more efficiently, in support of GitLab's principle that Quality is everyone's responsibility.

Integral parts of this vision:

  1. Test Tooling: Build tools and frameworks that enable GitLab Engineering & Product teams to ship high-quality & reliable products to our customers efficiently.
  2. Reliable platform: Monitor the platform for performance issues, implement security measures, and conduct capacity planning to ensure that the platform can handle the expected load.
  3. Technical Support and Expertise: By providing technical support and expertise to development teams, test platform teams can help to solve complex technical challenges and ensure that applications are built with utmost quality.

Our principles

  • Foster an environment where Quality is Everyone’s responsibility.
    • We enable product teams by baking quality into the product development flow early.
    • We are a sounding board for our end users, making their feedback known to product teams.
    • We are a champion of good software design, testing practices and bug prevention strategies.
  • Improve test coverage and leverage tests at all levels.
    • We work to ensure that the right tests run at the right places.
    • We enable product teams’ awareness of their test coverage with fast, clear and actionable reporting.
    • We continuously refine test efficiency, refactor duplicate coverage, and increase stability.
  • Make Engineering teams efficient, engaged and productive.
    • We build automated solutions to improve workflow efficiency and productivity.
    • We ensure reliability in our tooling and tests.
    • We ensure that continuous integration pipelines are efficient and stable, with optimal coverage.
  • Metrics driven.
    • We provide data driven insights into defects, test stability and efficiency.
    • We ensure the data is actionable and is available transparently to the company and the wider community.
    • We use data to decide on informed next steps and continuously improve through metrics-driven optimizations.

FY25 Direction

GitLab has a Three-Year Strategy. Our Yearlies connect that three-year strategy to our shorter-term quarterly Objectives and Key Results (OKRs), and the sub-department direction is accomplished through these OKRs.

Our focus is to support our FY25 Yearlies. They can be found in the internal handbook.

OKRs

Objectives and Key Results (OKRs) help align our sub-department towards what really matters. These happen quarterly and are based on company OKRs. We follow the OKR process defined here.

Active Quarter OKRs

Here is an overview of our current Test Platform OKR.

Areas of Responsibility

Self-Managed Excellence

Test Platform owns three tools that together form the foundation of Self-Managed Excellence: the GitLab Environment Toolkit (GET), the GitLab Performance Tool (GPT), and the Reference Architectures (RA). Together, these tools support our broader strategy of cementing customer confidence and contributing to customers' ongoing success by ensuring their instances are built to a rigorously tested standard that performs smoothly at scale.

For more information, please visit our Self-Managed Excellence page.

Test Infrastructure

Test infrastructure provides stability, dependability, and testing continuity for better planning and implementation. It gives engineers a foundation for writing tests and a platform for executing them. By standardizing and streamlining software development, deployment, and maintenance processes, Test Platform enables engineers to deliver and improve applications more efficiently while reducing errors, improving consistency, and increasing speed.

  • Deliver tools and frameworks to increase standardization, repeatability, and consistency of tests performed.
  • Provide controlled environments that allow for precise and reproducible test execution.
  • Provide a platform for test automation to reduce human intervention during test execution.
  • Offer flexibility in scheduling and executing tests at any time with no manual intervention required (see the triggering sketch after this list).
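As one illustration of hands-off scheduling and execution, the sketch below triggers a test pipeline through GitLab's pipeline trigger API, which a scheduler can call without any manual steps. The project ID and the CI_TEST_TRIGGER_TOKEN variable are placeholders, not part of our current tooling.

import os
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"  # placeholder project ID
TRIGGER_TOKEN = os.environ["CI_TEST_TRIGGER_TOKEN"]  # hypothetical trigger token

def trigger_test_pipeline(ref="master"):
    """Kick off a test pipeline on the given ref with no manual intervention."""
    response = requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/trigger/pipeline",
        data={"token": TRIGGER_TOKEN, "ref": ref},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["web_url"]  # link to the newly created pipeline

if __name__ == "__main__":
    print(trigger_test_pipeline())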

Test Coverage

Given rapidly evolving technologies and our drive to provide a world-class experience for GitLab users, the Test Platform sub-department strives to meet the increasing demands of efficient, intelligent test coverage and confidence at scale. We aim to test the right things at the right time. We focus on exploring several new testing types and visibility improvements to increase the actionability, speed, and sophistication of our test suites.

  • Machine learning for test gap recognition, failure analysis and classification, and failing fast.
  • New testing types: visual regression testing, chaos testing, contract testing, permissions testing.
  • Automated test pyramid analysis and code coverage visibility through a central dashboard.
  • Continuous identification of broken, slow, flaky, stale, or duplicated tests (see the flaky-test sketch after this list).
  • Built-in, one-click templates for performance and contract testing.
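As an example of how flaky tests can be surfaced from historical results, the sketch below flags tests that both pass and fail on the same commit. The record format and threshold are assumptions for illustration, not a description of our current tooling.

from collections import defaultdict

def find_flaky_tests(records, min_flaky_commits=2):
    """Return tests that both passed and failed on the same commit, most flaky first."""
    outcomes = defaultdict(lambda: defaultdict(set))
    for test_name, sha, passed in records:
        outcomes[test_name][sha].add(passed)

    flaky = []
    for test_name, by_commit in outcomes.items():
        flaky_commits = sum(1 for results in by_commit.values() if len(results) > 1)
        if flaky_commits >= min_flaky_commits:
            flaky.append((test_name, flaky_commits))
    return sorted(flaky, key=lambda item: item[1], reverse=True)

# Records are (test name, commit SHA, passed?) tuples taken from CI history.
history = [
    ("qa/specs/features/login_spec", "abc123", True),
    ("qa/specs/features/login_spec", "abc123", False),  # same commit, different outcome
    ("qa/specs/features/search_spec", "abc123", True),
]
print(find_flaky_tests(history, min_flaky_commits=1))  # [('qa/specs/features/login_spec', 1)]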

Customer Centric Quality

Test Platform has been key to supporting prospect POVs and providing prompt, knowledgeable troubleshooting for external customers, while continuing to have a deep commitment to supporting our internal customers as well. We support our internal and external customers by:

  • Reach out to understand users' needs for Reference Architectures, and review new environment proposals or existing environment issues where design is the suspected cause.
  • Expand the capability of staging environments according to engineers’ needs.
  • Increase customer empathy by participating in triages that highlight their pain points.
  • Build tooling that enables developers to deliver efficiently and confidently.
  • Burn down customer bugs to improve user experience.

Metrics driven

To ensure the platform is reliable, scalable, and secure, the Test Platform sub-department helps set up dashboards that capture test coverage and performance issues, and conducts capacity planning to confirm that the platform can handle the expected load. A minimal sketch of computing two of these metrics follows the list.

  • Define what metrics to collect.
  • Test coverage ratio across all tiers.
  • Continuous Integration automated test pass rate.
  • Performance testing metrics such as average latency/wait time, average load time, and requests per second.
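For example, the test coverage ratio and automated test pass rate above can be derived directly from raw pipeline data; the sketch below is a minimal illustration with made-up field names, not our actual dashboard queries.

def test_pass_rate(results):
    """Continuous integration automated test pass rate: passed runs / total runs, as a percentage."""
    total = len(results)
    passed = sum(1 for result in results if result["status"] == "passed")
    return 100.0 * passed / total if total else 0.0

def coverage_ratio(covered_lines, total_lines):
    """Test coverage ratio for a single tier, e.g. unit or end-to-end."""
    return 100.0 * covered_lines / total_lines if total_lines else 0.0

runs = [{"status": "passed"}, {"status": "passed"}, {"status": "failed"}]
print(f"pass rate: {test_pass_rate(runs):.1f}%")                # 66.7%
print(f"unit tier coverage: {coverage_ratio(8123, 9050):.1f}%")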

Find relevant dashboards here.

AI-powered Innovations

AI has evolved into a foundational, transformational technology that can assist, complement, empower, and inspire people in almost every field of human endeavor. The Test Platform sub-department is looking into ways to apply it to boost efficiency and reduce cycle times in every phase of the software development lifecycle.

  • Employing AI for enhanced testing accuracy.
  • Automated Test generation: The ability to generate test scripts.
  • Test Coverage Optimization: The ability to carefully select tests and optimize coverage.
  • AI powered performance testing.
  • Automated bug triage: The ability to triage untriaged bugs for critical details like severity, bug description, logs, etc. (a starter sketch follows this list).
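As a hypothetical starting point for automated bug triage (not a description of any existing bot), the sketch below finds open bug issues without a severity label and asks the reporter for the missing details, using GitLab's standard issues and notes API. The project ID and label names are illustrative.

import os
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"  # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def open_bugs():
    """Fetch open issues labeled as bugs; the label name is illustrative."""
    response = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/issues",
        params={"labels": "type::bug", "state": "opened", "per_page": 20},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def request_missing_severity(issue):
    """Comment on bugs missing a severity label so the reporter can add details."""
    if any(label.startswith("severity::") for label in issue["labels"]):
        return
    requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/issues/{issue['iid']}/notes",
        data={"body": "This bug has no severity label yet. Please add one, along with logs and reproduction steps."},
        headers=HEADERS,
        timeout=30,
    ).raise_for_status()

for issue in open_bugs():
    request_missing_severity(issue)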

Technical Expertise

Test Platform Engineers are always available to provide technical support and expertise to development teams to solve complex technical challenges and ensure that applications are built to industry standards. This includes, but is not limited to:

  • Implementing tooling that helps teams deliver faster
  • Sharing knowledge
  • Providing guidelines on testing best practices
  • Defining the testing strategy for a complex feature or implementation
  • Assisting internal and external customers with questions about general GitLab deployments across various cloud providers and on-premises

Productivity

  • Reduce manual burden for SET team members on-call.
  • Improve test failure debugging through traceable test executions and streamlined, concise logging.
  • Reduce duration of GitLab pipelines through selective test execution (see the sketch after this list).
  • Contribute quality tools to GitLab the product to help mature and dogfood our testing offerings.
  • Increase MR Rate.
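To make selective test execution concrete, here is a minimal sketch that maps changed files to the test directories that need to run; the mapping rules and fallback are invented for illustration.

import fnmatch

# Invented mapping from source patterns to the suites that cover them.
TEST_MAPPING = {
    "app/models/*": ["spec/models"],
    "app/controllers/*": ["spec/controllers", "spec/requests"],
    "qa/*": ["qa/specs"],
}

def select_tests(changed_files):
    """Return the smallest set of test directories covering a change."""
    selected = set()
    for path in changed_files:
        matched = False
        for pattern, suites in TEST_MAPPING.items():
            if fnmatch.fnmatch(path, pattern):
                selected.update(suites)
                matched = True
        if not matched:
            return {"spec", "qa/specs"}  # unknown file: fall back to the full suite
    return selected

print(select_tests(["app/models/user.rb"]))  # {'spec/models'}
print(select_tests(["README.md"]))           # unknown file, so run everything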

Team Structure

Infrastructure Department structure is documented here.

Test Platform sub-department structure

graph TD
    A[Test Platform sub-department]
    A --> B(Performance Enablement)
    A --> C(Test Engineering team)
    A --> D(Test and Tools Infrastructure team)

    click A "/handbook/engineering/infrastructure/test-platform"
    click B "/handbook/engineering/infrastructure-platforms/developer-experience/performance-enablement"
    click C "/handbook/engineering/infrastructure/test-platform/test-engineering-team"
    click D "/handbook/engineering/infrastructure/test-platform/test-and-tools-infrastructure-team"

Engage with us

Feel free to reach out to us by opening an issue on the Quality Team Tasks project or contacting us in one of the Slack channels listed below.

Team GitLab.com handle Slack channel Slack handle
Test Platform @gl-quality/tp-sub-dept #test-platform None
Self-Managed Platform team @gl-quality/tp-self-managed-platform #self-managed-platform-team @self-managed-platform
Test Engineering team @gl-quality/tp-test-engineering #test-engineering-team @test-engineering-team
Test and Tools Infrastructure team @gl-quality/tp-test-tools-infrastructure #test-tools-infrastructure-team @test-tools-infrastructure

Team Members

Management team

Name Role
Vincy Wilson Director, Test Platform
Abhinaba Ghosh Engineering Manager, Test Platform, Development Analytics
Kev Kloss Frontend Engineer
Ksenia Kolpakova Engineering Manager, Test Platform, Test Engineering
Kassandra Svoboda Manager, Quality Engineering, Core Platform & SaaS Platform
Nao Hashizume Backend Engineer, Engineering Productivity
Peter Leitzen Staff Backend Engineer, Engineering Productivity

Individual contributors

The following people are members of the Self-Managed Platform team:

Name Role
Kassandra Svoboda Manager, Quality Engineering, Core Platform & SaaS Platform
Andy Hohenner Senior Software Engineer in Test, SaaS Platforms:US Public Sector Services
Brittany Wilkerson Senior Software Engineer in Test, Dedicated:Environment Automation
Jim Baumgardner Software Engineer in Test, SaaS Platforms:US Public Sector Services
John McDonnell Senior Software Engineer in Test, Systems:Gitaly
Nivetha Prabakaran Software Engineer in Test, Dev:Manage
Richard Chong Senior Software Engineer in Test, Test Engineering, Fulfillment section
Sanad Liaquat Staff Software Engineer in Test, Test and Tools Infrastructure
Sofia Vistas Senior Software Engineer in Test, Test and Tools Infrastructure
Vishal Patel Software Engineer in Test, Core Platform:Systems

The following people are members of the Test Engineering team:

Name Role
Ksenia Kolpakova Engineering Manager, Test Platform, Test Engineering
Désirée Chevalier Senior Software Engineer in Test, Dev:Plan
Harsha Muralidhar Senior Software Engineer in Test, Govern
Jay McCure Senior Software Engineer in Test, Dev:Create
Joy Roodnick Software Engineer in Test, Test Engineering, Verify:Runner group, Fulfillment section
Tiffany Rea Senior Software Engineer in Test, CI:Verify
Valerie Burton Senior Software Engineer in Test, Test Engineering, Fulfillment section
Will Meek Senior Software Engineer in Test, Secure

The following people are members of the Test and Tools Infrastructure team:

Name Role
Abhinaba Ghosh Engineering Manager, Test Platform, Development Analytics
Andrejs Cunskis Senior Software Engineer in Test, Development Analytics
Chloe Liu Staff Software Engineer in Test, Development Analytics
Dan Davison Staff Software Engineer in Test, Development Analytics
David Dieulivol Senior Backend Engineer, Development Analytics
Ievgen Chernikov Senior Software Engineer in Test, Development Analytics
Jennifer Li Senior Backend Engineer, Development Analytics
Mark Lapierre Senior Software Engineer in Test, Development Analytics

Communication

In addition to GitLab’s communication guidelines and engineering communication, we communicate and collaborate actively across GitLab in the following venues:

Meetings

GitLab is an all-remote company distributed across time zones, so we optimize for asynchronous communication. While some topics benefit from a real-time discussion, we should always evaluate meetings to ensure they are valuable. We follow the guidance for all-remote meetings, including starting and ending on time - or earlier.

Group Conversation

Group Conversations take the information from the Key Review (plus any additional topics) and share it with all of GitLab. All Team Members are invited to participate in Group Conversations by adding questions and comments in the Group Conversation Agenda.

Coordination of Infrastructure Group Conversation materials and facilitation of the discussion is a rotating role among the managers within the department.

Group Conversation DRI Schedule.

Week-in-review

By the end of the week, we populate the Engineering Week-in-Review document with relevant updates from our department. The agenda is internal only; please search Google Drive for 'Engineering Week-in-Review'. Every Monday a reminder is sent to all of Engineering in the #eng-week-in-review Slack channel to read the summarized updates in the Google doc.

Engineering-wide retrospective

The Test Platform sub-department holds an asynchronous retrospective for each release. The process is automated and notes are captured here (GITLAB ONLY).

How we Work

While this sub-department operates as several teams, we emphasize meeting the prioritization and needs of Engineering leaders via stable counterparts.

Stable counterparts

Every Software Engineer in Test (SET) takes part in building our product as a DRI in GitLab's Product Quad DRIs. They work alongside Development, Product, and UX in the Product Development Workflow. As stable counterparts, SETs should be considered critical members of the core team alongside Product Designers, Engineering Managers, and Product Managers.

  • SETs should receive invites and participate in all relevant product group collaborations (meeting recordings, retro issues, planning issues, etc).
  • SETs should operate proactively, not waiting for other stable counterparts to provide them direction. The area a SET is responsible for is defined in the Product Stages and Groups and part of their title.
  • SETs meet with their counterpart Product Manager (PM), Engineering Manager (EM), Product Designer, and developers every month to discuss scheduling and prioritization.

Every Engineering Manager (EM) is aligned with an Engineering Director in the Development Department. They work at a higher level and align cross-team efforts that map to a Development Department section. The area a QEM is responsible for is defined in the Product Stages and Groups and is part of their title.

Milestone Planning

Milestones (product releases) are one of our planning horizons, where prioritization is a collaboration between Product, Development, UX, and Quality. DRIs for prioritization are based on work type:

  • Feature - PM
  • Maintenance - EM
  • Bug - QEM

We use type labels to track: feature, maintenance, and bug issues and MRs. UX Leadership are active participants in influencing the prioritization of all three work types.

QEMs meet with their PM, EM, and UX counterparts to discuss the priorities for the upcoming milestone. The purpose of this is to ensure that everyone understands the requirements and to assess whether or not there is the capacity to complete all of the proposed issues.

For product groups with a SET counterpart, QEMs are encouraged to delegate bug prioritization to the SET as the bug subject matter expert for that group. In these situations, QEMs should provide guidance and oversight as needed by the SET and should still maintain broad awareness of bug prioritization for these delegated groups.

While we follow the product development timeline, it is recommended that you work with your counterparts to discuss upcoming issues in your group’s roadmap prior to them being marked as a deliverable for a particular milestone. There will be occasions where priorities shift and changes must be made to milestone deliverables. We should remain flexible and understanding of these situations, while doing our best to make sure these exceptions do not become the rule.

Section-level members of the quad are QEMs, Directors of Development, Directors of Product Management, and Product Design Managers aligned to the same section. These counterparts will review their work type trends on a monthly basis.

Building as part of GitLab

  • GitLab features first: Where possible we will implement the tools that we use as GitLab features.
  • Build vs buy: If there is a sense of urgency around an area, we may consider buying or subscribing to a service to solve our Quality challenges in a timely manner when building as part of GitLab is not immediately viable. An issue will be created in our team task issue tracker to document the decision-making process. This shall follow our dogfooding process.

Test Platform sub-department on-call process

The Test Platform sub-department has two on-call rotations: pipeline triage (SET-led) and incident management (QEM-led). These are scheduled in advance to share the responsibilities of debugging pipeline failures and representing Quality in incident responses.

Pipeline triage

Every member of the Test Platform sub-department shares the responsibility of analyzing the daily QA tests against master and staging branches. More details can be seen here.

Incident management

Every manager and director in the Test Platform sub-department shares the responsibility of monitoring new and existing incidents and responding or mitigating as appropriate. Incidents may require review of test coverage, test planning, or updated procedures, as examples of follow-up work which should be tracked by the DRI. More details can be seen here.

Please note: Any manager or director within Test Platform sub-department can step in to help in an urgent situation if the primary DRI is not available. Don’t hesitate to reach out in the Slack channel #test-platform.

Processes

General tips and tricks

We have compiled a number of tips and tricks we have found useful in day-to-day Test Platform related tasks.

For more information, please visit our tips and tricks page.

Quad planning

The Test Platform Sub-Department helps facilitate the quad-planning process. This is the participation of Product Management, Development, UX, and Quality which aims to bring test planning as a topic before the development of any feature.

For more information, please visit our quad planning page.

Borrow Request for SETs

A borrow is used when a team member is shifted from one team to another temporarily or assists other teams part-time for an agreed-upon period of time. We do not currently have an SET embedded in every product group, so for product groups with no SET counterpart, the process to request one is:

  1. Create a borrow request issue with ~SET Borrow label.
  2. Based on the priorities, the request will be handled accordingly.

Please note that the borrow request might not guarantee 100% allocation to the requested product group. The temporary allocation will depend upon ongoing priorities.

The list of all SET borrow requests can be seen here.

Risk mapping

The Test Platform Sub-Department helps facilitate the risk mapping process. This requires the participation of Product Management, Development, UX, and the Quality team to develop a strategic approach to risk and mitigation planning.

For more information, please visit our risk mapping page.

Test engineering

The Test Platform Sub-Department helps facilitate the test planning process for all things related to Engineering work.

For more information, please visit our test engineering page.

Test failures

If you need to debug a test failure, please visit our debugging QA pipeline test failures page.

Test Platform Project Regulations

The Test Platform Sub-Department follows project regulations to ensure efficient and consistent management of projects with clear guidelines. For more information, please visit our project management page.

ChatOps for Test Platform

The Test Platform Sub-Department maintains ChatOps commands that provide quick access to various information on Slack. These commands can be run in any Slack channel that has the GitLab ChatOps bot, such as #test-platform and #chat-ops-test.

Commands that are currently available are:

Command Description
/chatops run quality dri schedule Lists the current schedule for on-call rotation
/chatops run quality dri report Show current and previous Test Platform pipeline triage reports
/chatops run quality dri incidents Lists currently active and mitigated incidents

For more information about these commands you can run:

/chatops run quality --help

Submitting and reviewing code

For test automation changes, it is crucial that every change is reviewed by at least one Senior Software Engineer in Test in the Test Platform team.

We are currently setting best practices and standards for Page Objects and REST API clients. Thus, the first priority is to have test-automation-related changes reviewed and approved by the team. For changes that touch only test automation, review by the Test Platform Sub-Department alone is sufficient to merge.
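GitLab's end-to-end tests and their page objects are written in Ruby; the sketch below illustrates the page object pattern we review for, shown in Python with a Selenium-style driver purely for brevity, and the selectors are placeholders rather than real GitLab element IDs.

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: selectors and page actions live here, never in the tests."""
    PATH = "/users/sign_in"

    def __init__(self, driver, base_url):
        self.driver = driver
        self.base_url = base_url

    def visit(self):
        self.driver.get(self.base_url + self.PATH)
        return self

    def sign_in(self, username, password):
        # Placeholder element locators, not the real GitLab selectors.
        self.driver.find_element(By.ID, "user_login").send_keys(username)
        self.driver.find_element(By.ID, "user_password").send_keys(password)
        self.driver.find_element(By.NAME, "commit").click()

# A test then reads as intent, with no selectors of its own:
driver = webdriver.Chrome()
LoginPage(driver, "https://gitlab.example.com").visit().sign_in("root", "password")
driver.quit()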

Weights

We use the Fibonacci series for weights and cap the highest weight at 8. The definitions are as follows:

Weight Description
1 - Trivial Simple and quick changes (e.g. typo fix, test tag update, trivial documentation additions)
2 - Small Straightforward changes, no underlying dependencies needed (e.g. new test that has existing factories or page objects)
3 - Medium Well understood changes with a few dependencies. Few surprises can be expected. (e.g. new test that needs to have new factories or page object / page components)
5 - Large A task that will require some investigation and research, in addition to the above weights (e.g. Tests that need framework level changes which can impact other parts of the test suite)
8 - X-large A very large task that will require extensive investigation and research, pushing initiative-level scope
13 or more Please break the work down further; we do not use weights higher than 8.

Performance Indicators

The Executive Summary of all KPIs can be found here.

Test Platform owns and maintains the following:

Key Performance Indicators

Regular Performance Indicators

Learning Resources

We have compiled a list of learning resources that we’ve found useful for Software Engineer in Test and Engineering Manager growth.

For more information, please visit our learning resources page.


Bug Prioritization
This page describes the bug prioritization process performed by the quality engineering sub-department as part of the cross-functional prioritization process.
Debugging Failing Tests and Test Pipelines
Guidelines for investigating end-to-end test pipeline failures
GitLab Data Seeder (GDS)
Demo and Test Data generator
Performance and Scalability
The Quality Department has a focus on measuring and improving the performance of GitLab, as well as creating and validating reference architectures that self-managed customers can rely on as performant configurations.
Pipeline Monitoring
Overview of our monitoring tools and practices
Pipeline Triage
Overview of our pipeline triage processes
Quad Planning
The Quality Engineering Sub-Department helps facilitate the quad-planning process. This is the participation of Product Management, Development, UX, and Quality which aims to bring test planning as a topic before the development of any feature.
Quality Engineering Learning Resources
The Quality Engineering Sub-Department has compiled a list of learning resources for SET and QEM growth.
Quality Engineering Tips and Tricks
This page lists a number of tips and tricks we have found useful in day to day Quality Engineering related tasks.
Risk Mapping
Developing a strategic approach to risk and mitigation planning.
Self-Managed Excellence
This page lists more details about Self-Managed Excellence initiatives
Test and Tools Infrastructure Team
Test and Tools Infrastructure Team in Test Platform sub-department
Test Coverage
The Test Platform Department has coverage to support testing particular scenarios.
Test Engineering
The Quality Engineering Sub-Department helps facilitate the test planning process for all things related to Engineering work.
Test Engineering team
Test Engineering team in Test Platform sub-department
Test Platform Dashboards
This handbook page serves as a central repository for all our Test Platform dashboard details
Test Platform On-call Rotation
The Test Platform Sub-Department has two on-call rotations: pipeline triage (SET-led) and incident management (QEM-led).
Test Platform Onboarding
Guidelines for onboarding as a new Test Platform team member
Test Platform Project Management
Guidelines for project management for the Test Platform Sub-Department at GitLab
Test Platform Roadmap
Roadmap for the Test Platform Sub-Department at GitLab