The responsibilities of this collective team are described by the Plan stage. Among other things, this means
working on GitLab’s functionality around issues, boards, milestones, to-do list, issue lists and filtering, roadmaps, time tracking, requirements management, notifications, value stream analytics (VSA), wiki, and pages.
I have a question. Who do I ask?
In GitLab issues, questions should start by @ mentioning the Product Manager for the corresponding Plan stage group. GitLab team-members can also use #s_plan.
When we’re planning capacity for a future release, we consider the following:
Availability of the teams during the next release. (Whether people are out of the office, or have other demands on their time coming up.)
Work that is currently in development but not finished.
Historical delivery (by weight) per group.
The first item gives us a comparison to our maximum capacity. For instance, if the team has four people, and one of them is taking half the month off, then we can say the team has 87.5% (7/8) of its maximum capacity.
The second item is challenging: it’s easy to underestimate how much work is left on a given issue once it’s been started, particularly if that issue is blocking other issues. We don’t currently re-weight issues that carry over (to preserve the original weight), so this estimate is fairly vague at present.
The third item tells us how we’ve been doing previously. If the trend is downwards, we can look to discuss this in our retrospectives.
Subtracting the carry over weight (item 2) from our expected capacity (the product of items 1 and 3) should tell us our capacity for the next release.
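To make this arithmetic concrete, here’s a minimal sketch with hypothetical numbers, mirroring the 87.5% availability example above:

```python
# A minimal sketch of the capacity calculation described above.
# All figures are hypothetical; substitute your group's real numbers.

historical_weight_per_release = 40   # item 3: average weight delivered in recent releases
carry_over_weight = 6                # item 2: weight of started-but-unfinished work

# Item 1: availability. A team of four where one person is away for half
# the month has 3.5 / 4 = 87.5% of its maximum capacity.
full_time_people = 4
people_available = 3.5
availability = people_available / full_time_people                 # 0.875

expected_capacity = historical_weight_per_release * availability   # 40 * 0.875 = 35
plannable_weight = expected_capacity - carry_over_weight           # 35 - 6 = 29

print(f"Plan roughly {plannable_weight:.0f} weight for the next release")
```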
Estimating effort
Groups within Plan use the same numerical scale when estimating upcoming work.
When weighting an issue for a milestone, we use a lightweight, relative estimation approach, recognizing that tasks often take longer than you think. These weights are primarily
used for capacity planning, ensuring that the total estimated effort aligns with each group’s capacity for a milestone.
Key Principles
Relative Estimation: We focus on the relative complexity and effort required for each issue rather than precise time estimates.
Aggregate Focus: The sum of all issue weights should be reasonable for the milestone, even if individual issues vary in actual time taken.
Flexibility: It’s acceptable for an issue to take more or less time than its weight suggests. Variations are expected due to differences in individual expertise and familiarity with the work.
Weight Definitions
| Weight | Meaning |
| --- | --- |
| 1 | Trivial, does not need any testing |
| 2 | Small, needs some testing but nothing involved |
| 3 | Medium, will take some time and collaboration |
| 4 | Substantial, will take significant time and collaboration to finish |
| 5 | Large, will take a major portion of the milestone to finish |
Initial Planning: During milestone planning, tasks can be estimated up to a weight of 5 if necessary. However, as the milestone approaches and the team moves closer to starting implementation, any task with a weight larger than 3 should be decomposed into smaller, more manageable issues or tasks with lower weights.
Why This Approach: Allowing larger weights early on provides flexibility in high-level planning. Breaking down tasks closer to implementation ensures better clarity, reduces risk, and facilitates more accurate tracking and execution.
We assess the available capacity for a milestone by reviewing recent milestones and upcoming team availability. This ensures that our milestone planning remains realistic and achievable based on the collective effort estimated through these weights.
Issues
Issues have the following lifecycle. The colored circles above each workflow stage represent the emphasis we place on collaborating across the entire lifecycle of an issue; disciplines will naturally require differing levels of effort depending on where the issue is in the process. If you have suggestions for improving this illustration, you can leave comments directly on the whimsical diagram.
Everyone is encouraged to move issues to a different workflow stage if they feel they belong somewhere else. To keep issues constantly refined, when moving an issue to a different workflow stage, please review any open discussions within the issue and update the description with any decisions that have been made. This ensures descriptions are laid out clearly, in keeping with our value of Transparency.
Epics
If an issue is > 3 weight, it should be promoted to an epic (using the quick action) and split up into multiple issues. It’s helpful to add a task list to the newly promoted epic, with each task representing a vertical feature slice (MVC). This enables us to practice “Just In Time Planning” by creating new issues from the task list as there is space downstream for implementation. When creating new vertical feature slices from an epic, please remember to add the appropriate labels - devops::plan, group::*, Category:* or feature label, and the appropriate workflow stage label - and attach all of the stories that represent the larger epic. This will help capture the larger effort on the roadmap and make it easier to schedule.
Themes
A small number of high priority features will be chosen as ‘themes’ for a period of time. Themes provide an opportunity for the whole team to rally around a deliverable, even if they don’t contribute directly to it. These items are given especially close attention by all those involved with a view to delivering small iterations and keeping work unblocked. There should never be more than two themes in progress at a time per team.
A Slack channel is created with the convention #f_[feature name].
An epic hierarchy is created with sub-epics mapping to iterations, each achievable within a milestone.
Iterations are broken into multiple issues that can be accomplished independently, and PMs schedule those as normal.
Other actions may be established, such as regular ‘office hours’ calls.
Team-members work together to continuously refine the iterations as complexity is revealed.
In product development at GitLab, Product is responsible for the what and why, Engineering is responsible for the how and when [1]. Maintaining a credible roadmap is therefore a collaborative process, requiring input from both.
The Product Roadmap outlines what the team aims to accomplish over a 4-6 quarter timeline. It is shared across the organization to ensure alignment with the go-to-market strategy and enable reliable commitments to customers.
Changes to the Plan Product Roadmap, made by the Product Manager, are reviewed and accepted by the Engineering Manager of the affected group. This happens at least once a month and is captured in a Wiki Page.
Most items being reviewed during roadmap planning have not yet had detailed technical investigation from engineering. Planning at this resolution is intended to be thoughtful but not perfect. Velocity remains our priority.
Reviewing the Roadmap
By performing a review, Engineering Managers play a key role in ensuring the roadmap is achievable and effectively sequenced to maximize velocity. Below are some best practices to guide a thoughtful review:
Assess Achievability: Is the timeline realistic given the team’s current capacity, skills, and dependencies?
Account for Technical Preparation: Does the roadmap allocate time for necessary technical preparation, such as technical spikes or investigations?
Optimize Team Utilization: Does the sequence of work align with the team’s skill profile, avoiding periods of underutilization or skill mismatches?
Evaluate Redundancy: How robust is the rest of the roadmap if one item takes longer than anticipated?
Clarify Requirements: Do you sufficiently understand each proposed change or do you need additional information?
Ensure Shared Understanding: Do you and your Product and UX counterparts have a shared understanding of all terminology used?
Seek Opportunities to Optimize: Have you identified opportunities to iterate or increase velocity by adjusting the order of work?
Reduce Friction: Is the sequence of work likely to cause avoidable conflicts, such as multiple engineers committing to the same codebase areas simultaneously?
Identify Process-Driven Delays: Are there items expected to take longer due to process requirements (e.g., multi-version compatibility) rather than capacity constraints?
Account for Cross-Team Dependencies: Are there cross-team dependencies that could put parts of the timeline at risk?
Incorporate a Buffer: Is a proportion of capacity allowed for exogenous shocks, such as unexpected PTO or a high-severity incident?
Lean on Your Experience: When you look at the roadmap as a whole and think about recent quarters, does it look achievable?
Roadmap Organization
```mermaid
graph TD;
  A["devops::plan"] --> B["group::*"];
  B --> C["Category:*"];
  B --> D["non-category feature"];
  C --> E["maturity::minimal"];
  C --> F["maturity::viable"];
  C --> G["maturity::complete"];
  C --> H["maturity::lovable"];
  E --> I["Iterative Epic(s)"];
  F --> I;
  G --> I;
  H --> I;
  D --> I;
  I --> J["Issues"];
```
Talking With Customers
In a perfect world, we would have cross-functional representation in every conversation we have with customers.
Customer Conversations calendar
Anyone who is scheduling a call with a customer via sales, conducting usability research, or generally setting up a time to speak with customers or prospects is encouraged to add the Plan Customer Conversations calendar as an invitee to the event. This will automatically populate the shared calendar with upcoming customer and user interactions.
All team members are welcome and encouraged to join customer calls – even if it’s just to listen in and get context.
To ensure upcoming calls appear in your calendar, subscribe to the Plan Customer Conversations calendar. Product Managers add upcoming customer interviews to this calendar and you’re welcome to shadow any call.
In Google Calendar, click the + next to “Other calendars” in the left sidebar and select “Subscribe to calendar”.
Upcoming customer calls will often be advertised in the #s_plan channel in advance, so look out there also.
Review previous calls
All recorded customer calls, with consent of the customer, are made available for Plan team-members to view in Dovetail.
To access these, simply go to the Plan Customer Calls project on Dovetail and log in with Google SSO. More information is available in the Readme of this project.
If you find you do not have access, reach out to a Plan PM and ask to be added as a Viewer.
Review previous UX Research calls
UX Research calls are scripted calls designed to mitigate bias and to address specific questions related to user needs and/or usability of the product. A selection of UX Research calls are available in the Plan Customer Calls Dovetail Project in the column titled UXR - Research and Validation.
While we operate in a continuous Kanban manner, we want to be able to report on and communicate whether an issue or epic is on track to be completed by a Milestone’s due date. To provide insight and clarity on status, we will leverage Issue/Epic Health Status on priority issues.
Keeping Health Status Accurate
At the beginning of the Milestone, Deliverable issues will automatically be updated to “On Track”. As the Milestone progresses, assignees should update Health Status as appropriate to surface risk or concerns as quickly as possible, and to jumpstart collaboration on getting an issue back to “On Track”.
At specific points through the milestone the Health Status will be automatically degraded if the issue fails to progress. Assignees can override this setting any time if they disagree. The policy that manages this automation is here. It can be disabled for any individual issue by adding the ~“Untrack Health Status” label.
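For reference, the sketch below shows how a similar automation could set an issue’s health status through the GraphQL API. This is not the triage policy linked above: it assumes the updateIssue mutation accepts a healthStatus argument (verify against the GraphQL reference for your GitLab version), and the project path and issue IID are placeholders.

```python
# Hypothetical sketch: degrade an issue's health status via the GraphQL API.
# Assumes updateIssue accepts a healthStatus argument; verify against your
# GitLab version's GraphQL schema. The project path and IID are placeholders.
import os
import requests

mutation = """
mutation {
  updateIssue(input: {
    projectPath: "my-group/my-project",  # placeholder project path
    iid: "42",                           # placeholder issue IID
    healthStatus: needsAttention
  }) {
    issue { iid healthStatus }
    errors
  }
}
"""

response = requests.post(
    "https://gitlab.com/api/graphql",
    json={"query": mutation},
    headers={"Authorization": f"Bearer {os.environ['GITLAB_TOKEN']}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```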
Health Status Definitions for Plan
On Track - We are confident this issue will be completed and live for the current milestone
Needs Attention - There are concerns, new complexity, or unanswered questions that, if left unattended, will result in the issue missing its targeted release. Collaboration is needed to get back On Track
At Risk - The issue in its current state will not make the planned release and immediate action is needed to rectify the situation
Flagging Risk is not a Negative
We feel it is important to document and communicate that changing any item’s Health Status to “Needs Attention” or “At Risk” is not a negative action or something that should cause anxiety or concern. Raising risk early helps the team respond to and resolve problems faster and should be encouraged.
OKRs
Active Quarter OKRs
FY25-Q2 Stage-level Objectives are available here (internal).
Previous Quarter OKRs
FY25-Q1 Stage-level Objectives all closed out between 74% and 88% and are available here (internal).
GitLab currently offers some freedom in how to structure OKR hierarchies. We take the following approach in Plan:
EMs are encouraged to create group-level KRs under stage-level Objectives directly, without creating their own OKR structure.
Group KRs and Stage Objectives should ladder into a higher Objective, which can exist anywhere in the organization. In a previous round of OKR development, a stage-level Objective laddered directly into a CEO KR.
They should be created or added as child objectives and key results of their parent so that progress roll-ups are visible.
Product development goals are established in milestone planning, following the regular Product Development Flow, and not in OKRs.
Doing this ensures the hierarchy will be as simple, consistent and shallow as possible. This improves navigability and visibility, as we currently don’t have good hierarchy visualization for OKRs.
An example of a valid single OKR hierarchy is:
```mermaid
flowchart TD
  A[Plan Objective] --> B(Project Management KR)
  A --> C[Product Planning KR]
  A --> D[Optimize KR]
  A --> E[Knowledge KR]
  A --> K[Principal Engineer KR]
  A --> L[SEM KR]
```
Ownership is indicated using labels and assignee(s). The label indicates the group and/or stage; the assignee indicates the DRI.
OKRs should have the following labels:
Group, Stage, and Section (as appropriate).
Division (~“Division::Engineering”) to distinguish from other functions.
updates::[weekly, semi-monthly, monthly] depending on how often the OKR is expected to be updated by the DRI.
The Plan Stage team encourages the use of Internal Notes as well to further adhere to SAFE Guidelines. Internal notes remain confidential to participants of the retrospective even after the issue is made public, including Guest users of the parent group. Dogfooding this feature aligns with an FY23 Q4 OKR of improving the GitLab Product development flow by driving the adoption of Plan features.
Examples of information that should remain Confidential per SAFE guidelines include company confidential information that is not public, data that reveals information not generally known or available externally which may be considered sensitive, and material non-public information.
The retrospective issue is created by a scheduled pipeline in the
async-retrospectives project. It is then updated once the milestone
is complete with shipped and missed deliverables. For more information on how
it works, see that project’s README.
An EM from the Plan stage is assigned to each retrospective on a rotational
basis as the DRI for conducting and concluding the retrospective, along with
summary and corrective actions. The rotation for upcoming milestones is as follows:
| Milestone | DRI |
| --- | --- |
| 16.10 | Donald Cook |
| 16.11 | Kushal Pandya |
| 17.0 | John Hope |
| 17.1 | Brandon Labuschagne |
| 17.2 | Vladimir Shushlin |
| 17.3 | Kushal Pandya |
| 17.4 | Donald Cook |
| 17.5 | John Hope |
| 17.6 | Donald Cook |
| 17.7 | Kushal Pandya |
| 17.8 | Vladimir Shushlin |
| 17.9 | John Hope |
| 17.10 | Donald Cook |
| 17.11 | Kushal Pandya |
The role of the DRI is to facilitate a psychologically safe environment where team-members
feel empowered to give feedback with candour. As such they should refrain from participating
directly. Instead they should publicise, conclude and make improvements to the retrospective
process itself.
Timeline
27th (Previous Month) A retrospective issue is automatically created for the milestone in progress.
18th The milestone is closed and open issues in the build phase are labeled with ~“missed deliverable”.
21st The issue description is automatically updated with shipped and missed deliverables and the team are tagged to add feedback.
4th (Next Month) A final reminder is created automatically in #s_plan for final feedback.
Dogfooding Value Stream Analytics (VSA) in the Milestone Retrospective
To make the retrospective more data-driven, we are dogfooding VSA to simplify the data collection for the retrospective. This is done by automatically adding a link to the VSA of the current milestone, filtered by group/stage, to the retrospective.
With Value Stream Analytics (VSA), our team gets visibility into the lifecycle metrics of each milestone through the breakdown of the end-to-end workflow into stages. This allows us to identify bottlenecks and take action to optimize the actual flow of work.
The DRI is responsible for completing the following actions:
Adding a comment to the retrospective issue summarizing actionable discussion items and suggesting corrective actions.
Finding a DRI for each corrective action. Creating an issue in gl-retrospectives/plan for each is optional, but doing so and adding the ~“follow-up” label will ensure they’re included automatically in the next retrospective (a scripted sketch of this follows below).
Recording a short summary video and sharing it in #s_plan. This can be discussed in the next weekly team call and can be added to the Plan Stage playlist on YouTube so that it shows up on team pages.
Closing the issue and making it public.
In both the summary comment and video the DRI should be particularly careful to ensure all information disclosed is SAFE. If the retrospective discussion contains examples of unSAFE information, the issue should not be made public.
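If you want to script the optional corrective-action issues, a minimal sketch using the python-gitlab library is below. The project path matches the one referenced above, but the token handling, titles, and descriptions are placeholders rather than the team’s actual tooling.

```python
# Hypothetical sketch: file corrective actions as issues in gl-retrospectives/plan
# with the "follow-up" label so the next retrospective picks them up automatically.
# Requires: pip install python-gitlab
import os
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("gl-retrospectives/plan")

# Placeholder corrective actions; replace with the real ones from the summary comment.
corrective_actions = [
    "Example corrective action: tighten the definition of done for Deliverables",
    "Example corrective action: add a mid-milestone health status check-in",
]

for title in corrective_actions:
    issue = project.issues.create({
        "title": title,
        "labels": ["follow-up"],
        "description": "Corrective action from the milestone retrospective (placeholder).",
    })
    print(f"Created #{issue.iid}: {issue.title}")
```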
Regressions
Regressions contribute to the impression that the product is brittle and unreliable. They are a form of waste, requiring the original (lost) effort to be compounded further with a fix or a reversion and reimplementation of the intended behavior.
Engineering Managers are strongly encouraged to conduct a simple Root Cause Analysis (RCA) when a regression takes place in a feature owned by their group, in order to:
Inform the author and reviewers of the original MR that it caused a regression.
Define corrective actions that might prevent or reduce the likelihood of a similar regression in future.
Identify trends or patterns that can lead to human error.
The following RCA format was trialed in a FY23 Q2 OKR. It can be posted as a comment on the original MR when the regression has been successfully reverted.
**Description of the regression:**
_One-line description of the regression in behavior._

**Bug report:** _[Issue link]_

`@author` (if internal) `@approvers` Please could you reply to this comment, copying the questions below and giving some short answers?
1. Were you aware this MR was reverted in the course of your normal work (e.g. through email notification, general work process)?
1. Did you identify the problematic behavior before approving this MR?
1. If not, what would've made the regression more obvious during review?
1. What changes to our tooling or review process would have prevented this regression from being merged?
1. Were the steps to test the MR mentioned clearly in the description? Were they easy to follow?
1. Do you have any other comments/suggestions?
Please reassure the participants that the purpose is not to apportion blame but to gather data, identify causal factors and implement corrective actions - but ask for a swift and brief response while the information is still fresh.
Technical Debt
The ~“technical debt” label, used in combination with ~“devops::plan”, helps track opportunities for improving the codebase. These labels should be applied to issues that highlight:
improvements to existing code or architecture;
shortcuts taken during development;
features requiring additional refinement;
any other items deferred due to the high pace of development.
For example, a follow-up issue to resolve non-UX feedback during code review should have the ~“technical debt” label.
Issues marked with this label are prioritized alongside those proposing new features and will be scheduled during milestone planning.
UX
The Plan UX team supports Product Planning, Project Management and Optimize. Product Planning and Project Management are focused on the work items architecture effort. This page focuses mainly on the specifics of how we support this, since it requires alignment and cross-group collaboration.
UX issue management, weights and capacity planning
UX issues are the SSOT for design goals, design drafts, design conversation and critique, and the chosen design direction that will be implemented.
Product requirement discussions should continue to happen in the main Issue or Epic as much as possible.
When the Product Designer wants to indicate that the design is ready for ~“workflow::planning breakdown”, they should apply this label to their issue, notify the PM and EM, and close the issue.
When should a UX issue be used?
UX issues should be used for medium or large projects that will take more than one dev issue to implement (e.g., end-to-end flows, complicated logic, or multiple use cases / states that will be broken down by engineering into several implementation issues). If the work is small enough that implementation can happen in a single issue, then a separate [UX] issue is not needed, and the designer should assign themselves to the issue and use workflow labels to indicate that it’s in the design phase.
Weighting UX issues
All issues worked on by a designer should have a UX weight before work is scheduled for a milestone.
If the issue is a dedicated [UX] issue, then the issue weight can be added to the weight field, but it should also be duplicated as a ~“design weight:” label. This is for UX Department planning purposes. For smaller issues where implementation and UX work happen in the same issue, UX weight should be added using the ~“design weight:” label (the weight field is used by engineering).
Product Managers and Product Designers can use issue weights to ensure the milestone has the right amount of work, to discuss tradeoffs, or to initiate conversations about breaking work into smaller pieces for high-weight items.
Work Items
When designing for objects that use the work items architecture, we follow this process with the intention of ensuring that we provide value-rich experiences that meet users’ needs. The work items architecture enables code efficiency and consistency, and the UX team supports the effort by identifying user needs and the places where those needs converge into similar workflows.
About work items
The first objects built using the work items architecture support the Parker, Delaney and Sasha personas in tasks related to planning and tracking work. Additional objects will be added in the future, supporting a variety of user personas.
Work items refers to objects that use the work items architecture. You can find more terms defined related to the architecture here: work items terminology.
When we talk about the user experience, we avoid using the term ‘work items’ for user facing concepts, because it’s not specific to the experience and introduces confusion. Instead, we will use descriptors specific to the part of the product we’re talking about and that support a similar JTBD. Here are examples of how we are categorizing these:
Team Planning Objects: Objects that belong to the Planning JTBD. Currently these are Epics, Issues and Tasks but could include others in the future.
Strategy Objects: Objects that support strategic, organization-wide objectives. Currently these are Objectives and Key Results.
Development/Build Objects: Objects that support development tasks. These could be MRs, Test Cases, or Requirements.
Protecting Objects: These may include Incidents, Alerts, Vulnerabilities, and Service Desk Tickets.
This enables us to differentiate these by persona and workflow. While they may share a common architecture on the backend and similar layout on the frontend, in the UI they may:
appear in different workflows and areas of the application
have different data fields
have different actions users can take on them
Guiding principles
The DRI for the user experience is the Product Designer assigned to the group that is using the work item architecture for their object(s).
We work with a user-first mindset, rather than technology-first. To support this, we have created a research plan for supporting work item initiatives.
Pajamas is our design system and new patterns introduced via work item efforts need to solve a real problem that users have, be validated by user research, and follow the Pajamas contribution process.
MVCs provide value to users, are bug-free, and offer a highly usable experience, as described in Product Principles.
How the architecture is intended to work
When designing with the work items architecture, Product Designers should understand roughly how the architecture works and what implications exist for the user experience.
A work item has a type (epic, incident), and this controls which widgets are available on the work item and what relationships the work item can have to other work items and non-work item objects.
The behavior of the work item in terms of performing its targeted JTBD(s) is powered by the collection of widgets enabled for a work item type.
We want to avoid building logic or views specific to a type. When you need to support a workflow that isn’t currently supported, you can introduce new behaviors through widgets (fields, apps, actions). A practical example: Epics can parent other Epics and Issues. Instead of interconnecting epics and issues directly, this behavior is encapsulated in a ‘hierarchy’ widget, which could be utilized in other work item types that implement hierarchies, such as Objectives and Key Results (a sketch of inspecting widgets via GraphQL follows this list).
Similarly, the work item view should not be customized directly for a type. However, the Product Designer can propose a different user experience and the team implementing the work item will incorporate the necessary use cases into the work items architecture.
Work items can be organized and presented to users in any groupings from an IA/Nav standpoint so long as all views leverage the same SSoT grouping FE components (ex: list, board, roadmap, grid, …). We should only ever need to build and maintain one version of each grouping view that can then be re-used across anywhere we want to display that set of work items. Groupings are determined iteratively based on user needs.
If the quad discovers that the desired user experience would require a greater contribution to the work item architecture than initially thought, they would discuss trade-offs as a team in order to decide whether to proceed or leave the object separate.
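As a concrete illustration of the widget model described above, the hypothetical sketch below queries a single work item through the GraphQL API and lists the widgets it exposes, including the hierarchy widget. The global ID is a placeholder, and widget field names can vary between GitLab versions, so check the GraphQL reference rather than treating this as authoritative.

```python
# Hypothetical sketch: inspect which widgets a work item exposes via GraphQL.
# The work item global ID is a placeholder; widget fields vary by GitLab version.
import os
import requests

query = """
query {
  workItem(id: "gid://gitlab/WorkItem/123") {
    title
    workItemType { name }
    widgets {
      type
      ... on WorkItemWidgetHierarchy {
        parent { title }
        children { nodes { title } }
      }
    }
  }
}
"""

response = requests.post(
    "https://gitlab.com/api/graphql",
    json={"query": query},
    headers={"Authorization": f"Bearer {os.environ['GITLAB_TOKEN']}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```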
Design Process for Work Items
Problem Validation
The quad that owns the code for the object (incident, epic, etc) decides if something should use the work item architecture based on trade-offs around code reuse and user experience. This should be a cross-functional decision, and the group Product Designer should advise their team regarding how well the user’s ideal workflow could or could not be supported by the work items architecture. This will allow the team to evaluate how much existing frontend pieces of the architecture could be re-used, and what would need to be added or customized in order to support the desired experience.
As part of the decision making process, Product Designers should do problem validation user research (or leverage existing) to understand the desired user experience, including user goals, tasks, content/data field needs, and whether or not this work item type has relationships and the nature of those relationships.
During this phase, the Product Designer and Product Manager should ensure that success metrics are defined per our work item research process (link TBD)
High level wireframes should be produced to ensure everyone has a shared understanding of what is wanted and to establish a medium term vision for the work.
Solution Validation
After the quad decides the work item architecture is suitable, the Product Designer will design the experience in detail. As part of the detailed design, Product Designers, in collaboration with the quad, will:
Design how existing widgets will be utilized, identify any new widgets that are needed, and consider whether existing widgets could be abstracted to fit a new use case. For example: the Timeline widget for incidents was designed in isolation, specific to the incident use case. It could be reworked slightly to support more use cases, such as objective or key result check-ins.
Define how users will access this work item. Design how this work item will appear in existing views, such as lists, or any new views needed for this work item.
Ensure new components and patterns are contributed back to Pajamas.
Solution validation should be conducted as needed to ensure the workflow and usability meets the user needs.
In addition to these, we’re working on gaining an efficiency bonus by using a common screener and building a mini-database of qualified participants aligned to our research needs.
We do a confidence check at different points in the process, particularly before moving a design into the build phase. Sometimes a design solution is straightforward enough that we’re very confident moving ahead without solution validation. However, there are times when we’re unsure how the design solution will perform in production, resulting in a low level of confidence. When this happens, we do usability testing to build confidence.
UX Paper Cuts
The UX Paper Cuts team has a dedicated role addressing Paper Cuts concerns within the Plan stage.
The UX Paper Cuts team member covering Plan will regularly triage the list of UX Paper Cuts issues that are related to the Plan stage as outlined above, but will also add actionable candidates to a Plan-specific epic for transparency.
There are many company, team, process (and other) updates that are important to communicate to team members so that they are not missed. Besides that, there is other information important for day-to-day work. In Plan we use async Weekly updates, called Plan Weekly digest, to communicate these to our team members.
The Engineering Managers in the Plan stage alternate each week as the DRI. There are 4 groups in the Plan stage, plus one SEM, so every EM is the DRI roughly once every 5 weeks.
The responsibility of the DRI is simply to collect information and to ensure the issue is ready to be publicized in time for the coming week. All team-members are welcome to participate in suggesting content using discussions or adding it directly by editing the description.
Process
A new confidential issue is created automatically every Monday at 08:00 UTC (a rough sketch of this automation follows the list below).
The issue is assigned to all Plan Engineering Managers.
The EM responsible for the content of the issue can be found in the schedule below but all other EMs can contribute to the issue as well.
On Saturday at 08:00 UTC, all team members are automatically alerted on the issue via a comment.
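The first step above could be implemented roughly as in the sketch below, written with python-gitlab. The project path, title format, and assignee IDs are placeholders; the actual scheduled pipeline may differ.

```python
# Hypothetical sketch of the scheduled job that opens the weekly digest issue.
# The project path, title, and assignee IDs below are placeholders.
# Requires: pip install python-gitlab
from datetime import date
import os
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("my-group/plan-weekly-digest")  # placeholder project path

issue = project.issues.create({
    "title": f"Plan Weekly digest {date.today().isoformat()}",
    "confidential": True,                  # the digest issue is confidential
    "assignee_ids": [111, 222, 333, 444],  # placeholder IDs for the Plan EMs
    "description": "Add company, team, and process updates for the coming week.",
})
print(f"Created confidential issue #{issue.iid}")
```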
There is a shared Plan stage calendar which is used for visibility into meetings within the stage.
To add this shared calendar to your Google Calendar do one of the following:
Visit this link (GitLab internal) from your browser.
Click the ‘+’ next to ‘Other calendars’ in Google Calendar, select ‘Subscribe to calendar’, paste c_b72023177f018d631c852ed1e882e7fa7a0244c861f7e89f960856882d5f549a@group.calendar.google.com into the form and hit enter.
To add an event to the shared calendar, create an event on your personal calendar and add Plan Shared as a guest.
Team Day
Team Days are organized on a semi-regular basis. During these events we take time to celebrate wins since the last team day, connect with each other in remote social activities, and have fun!
Anyone can organize a team day. It starts with creating a Team Day planning issue in the plan-stage tracker and then proceeding to find a suitable date.
Setting a date
A time-boxed vote no more than 3 months but no less than 1 month out has proven to be the most inclusive way to set a date so far. This allows enough time to organize sessions but is usually close enough to avoid colliding with off-sites, or other company-wide activities.
Including at least three major timezones, one for each of AMER, EMEA, and APAC, in the issue description allows people to better see how the day will be divided for them and what they can attend.
It’s good practice to rotate the ‘base’ timezone of the Team Day to spread the opportunity for attendance. For example, the FY23-Q4 Team Day was based on a full UTC day, and the FY24-Q3 one on a full AEST day.
Sessions
The day is composed of sessions proposed and organized by team-members. These are typically allocated 1hr, though they can be longer or shorter. Sessions can be scheduled in advance to allow other team-members to plan their attendance and participation.
Sessions can be anything, really, so long as they align with our values. Team-members can organize a game, teach a skill, give a talk on something they know, or anything else they think others might enjoy.
Some examples of sessions we’ve had on previous team days include:
A cooking class with a former professional chef.
Watching a holiday film together.
Lateral Thinking Games.
A home woodworking workshop tour and demonstration.
Remote games; such as Gartic Phone and Drawsaurus.
Free time slots can be used on the day to hold impromptu events requiring little or no preparation.
Participation
Participation in team day is encouraged for any team-member or stable counterpart in Plan. If you collaborate with Plan team-members on a regular basis you’re also very welcome to attend.
Participating team-members are encouraged to drop non-essential work and take part in any sessions during the day that they wish to. Those assigned to essential work, such as critical bugs, incidents, or IMOC shifts, are encouraged to participate between their other obligations.
Team day is a normal workday for those choosing not to participate.
Expenses
Some sessions may require small purchases to participate fully; for example, ingredients for a cooking class or hosting of a private video game server.
Unless communicated in advance these are not expensable.
The DRI for organizing Team Day may pursue a budget for expenses under existing budgets; such as the team building budget, or fun budget. If successful it should be made clear to team-members well in advance:
What purchases qualify for reimbursement.
The policy the expense qualifies under; including handbook link, policy category, and classification in Navan.
Any additional handbook guidance that will help team-members utilize the budget.
Watch out for Daylight Saving Time when organizing for Q1 and Q3. When the date is set, check that the timezones in the planning issue still match the timezones in use on the day (for example, AEST vs. AEDT).
Secure expense budget and communicate at least a week in advance of the Team Day.
Ensure Google Calendar events are transferred from the planning issue to the Plan Shared Calendar a week in advance of the event date.
Ensure everyone has access to the calendar, and have easy step-by-step directions for creating a new event on the calendar (Adding events to a shared calendar can be slightly confusing).
Communicate this change in SSOT, and encourage participants to add their own sessions in the calendar as free slots.
Team Process
Each group within the Plan stage follows GitLab’s product development flow and process. This allows for consistency across the stage, enables us to align with other stages and stable-counterparts, and enables us to clearly understand our throughput and velocity. We’re currently focused on strictly following the process stated in the handbook, as opposed to creating our own local optimizations.
In some cases we need to dogfood a new Plan feature that may adjust our adherence to the GitLab’s process. If that happens we assign a DRI responsible for setting the objective, reporting on the outcomes and facilitating feedback to ensure we prioritize improvements to our own product. This ensures we’re not making a change for the sake of making changes, and gives us clarity into our own evaluation of a change to the product.
There are a couple of process-related improvements we’ll continue to adopt:
Iterations: We’ve recently started organizing the prioritized work in a given milestone into weekly iterations. This doesn’t change any of the canonical process, and allows us to break a month’s worth of work into sizeable timeboxes. Intended outcome: Dogfood iterations (the feature), improve velocity and give more granular visibility into the progress of issues. DRI: @donaldcook
Stage Working groups
Like all groups at GitLab, a working group is an arrangement of people from different functions. What makes a working group unique is that it has defined roles and responsibilities, and is tasked with achieving a high-impact business goal fast. A working group disbands when the goal is achieved (defined by exit criteria) so that GitLab doesn’t accrue bureaucracy.
Stage Working Groups are focused on initiatives that require collaboration between multiple groups within the stage. The structure of stage working groups is similar to company-wide working groups, with DRI and well-defined roles. The initiatives are driven by a stage-level product direction rather than an Executive Sponsor,
and can be formed of just Functional Leads and members who participate in fulfilling the exit criteria.
There can be a gap in understanding between Engineering and Product on a team. We are experimenting with a pilot program that allows engineers to spend time in the world of Product, with the goal of greater mutual communication, understanding and collaboration, helping us work more effectively as a team and build better features.
Product Shadowing schedule
Engineering team-members can shadow a product stable-counterpart. Shadowing sessions last two working days, or the equivalent split over multiple days, to maximize exposure to the different functions of the role. In particular, the session should include at least one customer call. To shadow a counterpart on the team:
Create an issue in the plan project tracker using the Product-Shadowing template;
Create a WIP MR to this page to update the table below, adding your name and issue link, and
When your counterpart is assigned to the issue, add their name, remove WIP status and assign to your manager for review.
By continually monitoring these tables and applying the planned mitigations, we aim to maintain optimal performance and prevent any scalability issues.
The Plan Frontend Team internship is the result of The Engineering Internship Pilot Program that started at the end of 2019. The ultimate goal of this program is to transform an entry-level candidate into an Individual Contributor who could meet the requirements for a Junior Engineer.
For this program to be successful, the Roles and Responsibilities must be transparent to the Internship Program participants. The primary roles of the program include: