Create:Code Review Group

The Create:Code Review Group is responsible for all aspects of the product categories that fall under the Code Review group of the Create stage of the DevOps lifecycle.

Group overview

Group members

The following people are permanent members of the Create:Code Review Group:

André Luís: Senior Engineering Manager, Create:Source Code, Create:Code Review
François Rosé: Engineering Manager, Create:Code Review
Gary Holtz: Backend Engineer, Create:Code Review
Kai Armstrong: Principal Product Manager, Create:Code Review
Marc Shaw: Senior Backend Engineer, Create:Code Review
Patrick Bajao: Staff Backend Engineer, Create:Code Review
Phil Hughes: Staff Fullstack Engineer, Create:Code Review
Sincheol (David) Kim: Senior Backend Engineer, Create:Code Review
Stanislav Lashmanov: Senior Frontend Engineer, Create:Code Review
Thomas Randolph: Senior Frontend Engineer, Create:Code Review

Sub-department specific pages

Product categories

The Code Review group is responsible for the following product categories:

Category performance indicators

Work

In general, we use the standard GitLab engineering workflow. To get in touch with the Create:Code Review team, it’s best to create an issue in the relevant project (typically GitLab) and add the ~"group::code review" label, along with any other appropriate labels (~devops::create, ~section::dev). Then, feel free to ping the relevant Product Manager and/or Engineering Manager as listed above.
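
For illustration, here is a minimal sketch of opening such an issue through the GitLab REST API (POST /projects/:id/issues). The project path, token handling, and issue text are placeholders, and in practice creating the issue through the GitLab UI is just as good.

```python
# Minimal sketch: open an issue with the Code Review group labels via the
# GitLab REST API. PROJECT_ID and the issue text are placeholders.
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "gitlab-org%2Fgitlab"      # URL-encoded project path
TOKEN = os.environ["GITLAB_TOKEN"]      # personal access token with `api` scope

response = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/issues",
    headers={"PRIVATE-TOKEN": TOKEN},
    data={
        "title": "Describe the problem or proposal",
        "description": "Details, reproduction steps, links, etc.",
        # Group, stage, and section labels from the paragraph above:
        "labels": "group::code review,devops::create,section::dev",
    },
)
response.raise_for_status()
print(response.json()["web_url"])
```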

For more urgent items, feel free to use #g_create_code_review on Slack.

Work on the GitLab VS Code Extension follows a simplified development process. Learn more about it by looking at CONTRIBUTING.md.

Take a look at the features we support per category here.

Metrics of success

The metrics by which we measure the success of the Code Review category are aligned with our goals for code review: ease of use, lovability, and efficiency.

Primary metric

Our primary metric is reducing the duration of code review. This is measured as the time from the first merge request version to merge, also known as Mean Time To Merge (MTTM).

The MTTM can be found on this dashboard.
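
As a rough illustration (not the official measurement), MTTM can be approximated from the merge requests API by averaging the gap between `created_at` and `merged_at`; the project path and sample size below are placeholders, and `created_at` is only a stand-in for the first merge request version.

```python
# Rough sketch: approximate mean time to merge (MTTM) from the merge requests
# API. The official MTTM dashboard remains the source of truth.
import os
from datetime import datetime
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "gitlab-org%2Fgitlab"      # URL-encoded project path (placeholder)
TOKEN = os.environ["GITLAB_TOKEN"]

resp = requests.get(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/merge_requests",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={"state": "merged", "per_page": 100, "order_by": "updated_at"},
)
resp.raise_for_status()

def parse(ts):
    # API timestamps look like "2024-10-29T12:34:56.789Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

durations = [
    (parse(mr["merged_at"]) - parse(mr["created_at"])).total_seconds()
    for mr in resp.json()
    if mr.get("merged_at")
]
if durations:
    hours = sum(durations) / len(durations) / 3600
    print(f"Mean time to merge over this sample: {hours:.1f} hours")
```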

Secondary metrics

Secondary metrics of success act as support for the primary metric, helping build a more complete picture of how successful the category is.

Once in a while, we conduct UX scorecards to track the user experience through various heuristics; see all UX scorecards for Code Review. At the Create stage level, we conduct usability benchmarking studies.

Right now we’re focused on measuring and improving perceived performance: “how fast, responsive, and reliable a website feels to its users. The perception of how well a site is performing can have more impact on the user experience than the actual load and response times.” Perceived performance is not only technical performance (i.e. load and response times), but also user performance (i.e. efficiency in completing tasks), and can be formulated as:

perceived performance = f(expected performance, UX, actual performance)
experience = f(perceived performance, task completion)
Aspect: Expected performance and UX
Measured by: Primarily by users’ feedback, and secondarily by the actual performance of competitors.
Results:
  - SaaS users’ feedback (in progress)
  - Competitor performance (Software Forge Performance Index, maintained by SourceHut)
  - Largest Contentful Paint of SaaS vs GitHub.com for key pages

Aspect: Actual performance (load and response times)
Measured by: Primarily by the Largest Contentful Paint (LCP) metric, and secondarily by other important metrics.
Results:
  - Test instance (test samples: large MR overview and changes tabs, large MR commits tab)
  - SaaS: gitlab-foss large MR overview tab (test sample)
  - SaaS: gitlab-foss large MR changes tab (test sample)
  - SaaS: gitlab-foss empty MR overview tab (test sample)
  - SaaS: gitlab large MR overview tab (test sample)
  - SaaS: gitlab small MR overview tab (test sample)
  - SaaS: other project MR overview tab (test sample)

Aspect: Task completion (task times)
Measured by: Estimates of users’ execution time of primary tasks through the GOMS approach. We focus on the percentage difference between GitLab and competitors, or between current and proposed designs.
Results:
  - July 2021 estimates

Development Metrics

Exploration and experimentation

The Code Review group believes it’s important that team members are empowered to explore and experiment with areas of the product that interest them. Sometimes, the best way to get the conversation started is with a merge request.

Allocate time

To provide opportunities for team members to pursue areas of interest, engineers are encouraged to reserve about 10% of their scheduled capacity for this kind of exploration.

Setting expectations

If you’re choosing to work in these areas or explore new ideas, there are a few ground rules to make sure we’re all on the same page:

  1. Work in these areas doesn’t come at the cost of planned deliverables for the milestone
  2. Not all efforts in these areas will be merged into the product, but sharing them with product and design can help steer conversations for future work
  3. Work does not need to be in the code review area; engineers are encouraged to explore areas of interest

Areas of inspiration

It can be hard to figure out where to get started, so here’s a handy list of places you might look for inspiration:

  1. Top level Code Review epic - Epics in this list are loosely sorted by importance
  2. Top level Editor Extension epic - Epics in this list encompass an entire category of features, but the group is primarily focused only on VS Code
  3. Group level gitlab-org issue list - filter this to issues with labels you’re interested in
  4. Code review issues ready for development
  5. Performance and Performance Refinement issues
  6. Easy wins

Meetings

Whenever possible, we prefer to communicate asynchronously using issues, merge requests, and Slack. However, face-to-face meetings are useful for establishing a personal connection and for addressing items, such as blockers, that are more efficiently discussed synchronously.

We record our meetings and upload them to the Create Code Review Playlist on GitLab Unfiltered.

Code Review Weekly

This is a chance for all members of the Code Review Group to meet and discuss current priorities, blockers, planning, and the middle-of-milestone check-in.

The agenda for this meeting is set in advance and anyone can contribute topics. If there are no items on the agenda 30 minutes before the meeting is scheduled to start, we cancel the meeting.

Code Review UX Sync

This meeting is focused on collaboration between UX and PM, though all are welcome to attend and contribute.

Code Review Performance Round Table

This meeting is for discussing new performance topics, project proposals, and ongoing performance work or concerns.

The catch-all issue includes the agenda, issue board, and goals.

Working with product

Weekly calls between the product manager and the engineering managers (frontend and backend) are listed in the “Code Review Group” calendar. Everyone is welcome to join, and these calls are used to discuss roadblocks, concerns, status updates, deliverables, or anything else that impacts the group. Every two weeks (in the middle of a release), a mid-milestone check-in occurs to report on the current status of ~"Deliverable" issues. Monthly calls occur under the same calendar; the entire group is encouraged to join in order to highlight accomplishments and improvements, discuss future iterations, review retrospective concerns and action items, and cover any other general items that impact the group.

Collaborating with other counterparts

You are encouraged to work as closely as needed with stable counterparts beyond the PM. We specifically include quality engineering and application security counterparts prior to a release kickoff, and as needed during code reviews or when concerns arise on issues.

Quality engineering is included in our workflow via the Quad Planning Process.

Application Security will be involved in our workflow at the same time that kickoff emails are sent to the team, so that they are able to review the upcoming milestone work and note any concerns or potential risks that we should be aware of.

Working with the wider GitLab community

Since we support such a large feature set, our team often reviews community contributions from the wider GitLab community. You’re encouraged to give each contributor our version of “white glove treatment”. Providing recognition for their donated time, giving exceedingly helpful reviews, and encouraging them in their contribution are all excellent ways to build a sense of community. If you don’t have time to respond to a ping for a review or suggestion, please quickly let the person who pinged you know so they can ping someone else.

Tips and Tricks

For tips, tricks, or quick shell scripts that aren’t “ready” or substantial enough to add to our developer documentation or handbook, we use the Create stage wiki.

Middle of milestone check-in

Expanding on the concept of the middle-of-milestone check-in:

To gauge how well we are tracking against the scheduled and committed set of Deliverables, we calculate the overall level of completeness across all of them.

We do this by tallying up:

Closed/Verification/Awaiting Security: 100% done
In review: 80% done
In dev: 40% done
Unstarted: 0% done

We then compile a small report like this:

Done + Verification:   1 issue    weight 1    2.27%
In review:             6 issues   weight 15   34.09%
In dev:                6 issues   weight 20   45.45%
Unstarted:             3 issues   weight 8    18.18%
Progress:                                     47.73%
Conclusion: ...

Progress is calculated by weighting each bucket’s share of the total issue weight (the percentages above) by its completion level:

(100% * 2.27) + (80% * 34.09) + (40% * 45.45) + (0% * 18.18) ≈ 47.73%

In the conclusion we write an interpretation of what this means and what we’ll be doing to correct course, if needed.
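
For illustration, here is a small sketch of the same calculation. The bucket names, issue counts, and weights mirror the sample report above and are not pulled from a real board.

```python
# Sketch of the weighted milestone-progress calculation described above.
COMPLETION = {
    "closed": 1.0,        # Closed / Verification / Awaiting Security
    "in review": 0.8,
    "in dev": 0.4,
    "unstarted": 0.0,
}

# (status, total issue weight) pairs, matching the sample report above
buckets = [("closed", 1), ("in review", 15), ("in dev", 20), ("unstarted", 8)]

total_weight = sum(weight for _, weight in buckets)
progress = sum(COMPLETION[status] * weight for status, weight in buckets) / total_weight

for status, weight in buckets:
    share = weight / total_weight
    print(f"{status:>10}: weight {weight:>2}  {share:6.2%}")
print(f"Progress: {progress:.2%}")   # prints 47.73% for this sample
```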

Workflow labels

The easiest way for engineering managers, product managers, and other stakeholders to get a high-level overview of the status of all issues in the current milestone, or of all issues assigned to a specific person, is through the Development issue board, which has columns for each of the workflow labels described on the Engineering Workflow handbook page under Updating Issues Throughout Development.

As owners of the issues assigned to them, engineers are expected to keep the workflow labels on their issues up to date, either by manually assigning the new label, or by dragging the issue from one column on the board to the next.
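
As a rough sketch of how such an overview could also be pulled programmatically, the snippet below counts a milestone’s issues per workflow label through the group issues API. The label names, milestone title, and group path are illustrative assumptions; the issue board remains the canonical view.

```python
# Sketch: count a milestone's open issues per workflow label via the group
# issues API. Label names and milestone title are illustrative placeholders.
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
GROUP = "gitlab-org"
TOKEN = os.environ["GITLAB_TOKEN"]

WORKFLOW_LABELS = [          # assumed subset of the workflow labels
    "workflow::ready for development",
    "workflow::in dev",
    "workflow::in review",
    "workflow::verification",
]

for label in WORKFLOW_LABELS:
    resp = requests.get(
        f"{GITLAB_URL}/groups/{GROUP}/issues",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={
            "labels": f"group::code review,{label}",
            "milestone": "17.6",      # placeholder milestone title
            "state": "opened",
            "per_page": 1,
        },
    )
    resp.raise_for_status()
    # The X-Total pagination header carries the total match count when available
    print(f"{label}: {resp.headers.get('X-Total', '?')} issues")
```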

Async standup

The groups in the Create stage conduct asynchronous standups in the #g_create_standup channel 3 times a week, on Monday, Wednesday, and Friday.

The goal is to support the members of these groups in connecting at a personal level, not to check in on people’s progress or to replace existing processes for communicating status or asking for help. The questions are written with that in mind:

  1. What did you do outside of work since we last spoke?
  2. What are you planning to do today?
  3. Is anything blocking your progress or productivity?

For more background, see the Async standup feedback issue on the Create stage issue tracker.

Our team is encouraged to post links to their deliverable issues or merge requests when they are mentioned in relation to the second question. This helps other team members understand what others are working on and gives them a good reference point if they come across something similar in the future.

Retrospectives

We have one regularly scheduled “Per Milestone” retrospective, and can hold ad-hoc “Per Project” retrospectives focused on analyzing a specific case, usually looking into the Iteration approach.

Per Milestone

The Create:Code Review group conducts monthly retrospectives in GitLab issues. These include the backend team, plus any people from frontend, UX, and PM who have worked with that team during the release being retrospected.

These are confidential during the initial discussion, then made public in time for each month’s GitLab retrospective. For more information, see team retrospectives.

Per Project

If a particular issue, feature, or other sort of project turns into a particularly useful learning experience, we may hold a synchronous or asynchronous retrospective to learn from it. If you feel like something you’re working on deserves a retrospective:

  1. Create an issue explaining why you want to have a retrospective and indicate whether this should be synchronous or asynchronous
  2. Include your EM and anyone else who should be involved (PM, counterparts, etc.)
  3. Coordinate a synchronous meeting if applicable

All feedback from the retrospective should ultimately end up in the issue for reference purposes.

Merge Request Report Widgets shared responsibility

Even though merge requests fall under Code Review, the code powering the Merge Request Report Widgets (see Working Group) is written and maintained by a larger group.

Please refer to the List of DRIs for communication and troubleshooting purposes relating to these Widgets.


Create:Code Review BE Team
The Create:Code Review BE team is responsible for all backend aspects of the product categories that fall under the Code Review group of the Create stage.
Create:Code Review FE Team
The Create:Code Review FE team is responsible for all frontend aspects of the product categories that fall under the Code Review group of the Create stage.
Merge Request Report Widgets - DRI list
Lists the DRIs for the Merge Request Report Widgets, who share ownership of the code powering the various widgets.