Product Processes
Our Product philosophy
As a Product Organization, we work to create a flexible yet concise product development framework for developing products that customers love and value. The Product Principles section is where you can learn about our strategy and philosophy regarding product development; here we discuss the processes we use tactically.
Product Development Flow
Introducing changes requires a number of steps, with some overlap, that should be completed in order. GitLab follows a dual-track product development flow spanning product, engineering, UX, and quality. We use GitLab to power product development flow. When changes are released, we follow the release post process to communicate externally about new capabilities.
This process should be followed both up front and on an ongoing basis when building features.
The Importance of Direction
Documenting a Section, Stage, Group and Category direction is critical to communicating where we are heading and why to all of our stakeholders. We document our direction in direction pages. Read more about related processes under Planning and Direction.
Understanding Milestones and Releases
Relevant links
Communication
For internal team members, please feel free to use the #product channel for any product-related questions, but you’ll also find more direct assistance in the various Product Group channels.
Communicating with the Entire Product Management Function
When communicating change or a request for action to the entire product function, utilize the following levels and corresponding activities.
| Level | Description | Activities |
|-------|-------------|------------|
| One | Suggestion for review from interested PMs and FYI | Post MR/issue in #product |
| Two | Request for action from all PMs | Post in #product and mention @gl-product-pm in MR/issue with specific action instructions. |
| Three | Confirmation of understanding | Post in #product and mention @gl-product-pm; checkbox for each @gl-product-pm member in an MR/issue description to confirm; assign MR/issue to all @gl-product-pm members |
Internal and external evangelization
Before shipping a new or updated feature, you are responsible for championing
it, both internally and externally. When something is released, the
following teams need to be aware of it as they will all need to do something
about it:
- Marketing: depending on the importance of the feature, we need the help of
marketing to promote this feature on our different communication channels.
- Sales: sales needs to know what’s new or changed in the product so they can
have better arguments to convince new or existing customers during their sales
process.
- Support: as they are in constant contact with our users and customers,
support should know exactly how our products work.
You can promote your work in several ways:
- start with documenting what will be released and share this documentation with
the different teams
- schedule meetings, if you think it’s important, with the teams listed above.
When referencing issues in written communication, using just the issue number #123456 and a link is not low-context communication. Instead, use the title of the issue and the link, or the issue number and a description of the problem that issue will solve:
- Good: We will next be working on [Detect and display code coverage reports on MR](https://gitlab.com/gitlab-org/gitlab/-/issues/21549). OR: We will next be working on [gitlab#21549](https://gitlab.com/gitlab-org/gitlab/-/issues/21549), which will help developers view code coverage reports directly in GitLab instead of losing context by looking in another tool while reviewing an MR.
- Avoid: We will next be working on #21549.
In order to support findability and to clearly articulate when we change our minds, especially when it comes to product direction, category changes, shifts in investment themes, or priorities for engineering, Product Managers must evangelize these changes in multi-modal communication channels to ensure our users and customers are aware.
Some internal methods for communication include:
- Sharing the updates in various product-based Slack channels such as #product, #s_, #g_, or #f_ channels
- Cross-posting changes in direction or categories into #customer-success and, if they impact use cases, tagging @cs-leadership for awareness
- Recording a quick video and sharing with Customer Success that discusses direction updates. Use sync meetings as needed to facilitate efficient communication.
- Collaborate with the Field Communications team to determine if a larger internal communications plan/approach is necessary for the Field (Sales, Customer Success, Channel & Alliances) team.
- Aggregating and sharing highlights of monthly direction page updates at the Section-level across the organization
External channels to consider linking direction pages to:
- Twitter, LinkedIn, or other social accounts
- Sharing outreach emails via account teams
- Recording walkthroughs on Unfiltered and promoting on social accounts
- Writing a blog about the changes, if they are significant or disruptive
Writing to inspire action
As a PM, it is important to remember a bias towards action (and other value actions like sense of urgency, make a proposal, boring solutions, write things down, don’t wait, make two-way door decisions, and accept uncertainty), which enables PMs to drive async discussions toward being action oriented. Every time you write a comment or create an issue, ask yourself: will this allow us to take an action and move us forward?
Writing about features
As PMs we need to constantly write about the features and upgrades we ship: in a blog post,
internally to promote something, and in emails sent to customers. There are some
guidelines that one should take into account when writing about features,
the most important being a clear communication of the problem we’re solving for users.
When writing about a feature, make sure to cover these messaging guidelines
which help produce clear internal and external
messaging. Please also keep in mind that we should avoid using acronyms that others may not recognize, such as “MVC” for Minimal Viable Change. For more guidance you can visit our writing style guidelines.
Let’s highlight the messaging guidelines mentioned above with a concrete example, Preventing Secrets in your repositories,
that we shipped in 8.12.
- Start with the context. Explain what the current situation is without the
feature. Describe the pain points and connect back to our Value Drivers (in this case
Reduce Security and Compliance Risk
).
It’s a bad idea to commit secrets (such as keys and certificates) to your
repositories: they’ll be cloned to the machines of anyone that has access to the
repository. If just a single one is insecure, the information will be
compromised. Unfortunately, it can happen quite easily. You write
git commit -am 'quickfix' && git push
and suddenly you’ve committed files that
were meant to stay local!
- Explain what we’ve shipped to fix this problem.
GitLab now has a new push rule that will prevent commits with secrets from entering the repository.
- Describe how to use the feature in simple terms.
Just check the checkbox in the repository settings, under push rules and
GitLab will prevent common unsafe files such as .pem and .key from being committed.
- Point to the documentation and any other relevant links (previous posts, etc).
Here are some additional examples of well written release blog posts for inspiration:
Recording videos to showcase features
In addition to the written medium, video is an important medium that caters to the different goals you are trying to accomplish and learning styles of your audience.
Depending on the type of video you are recording, there are some guidelines to keep in mind.
As our documentation guidelines actively encourage linking video content,
please consider following the Documentation Style Guide section on language,
and working with your technical writing team to include links to your speed runs, walk-throughs and demos at relevant locations in the product documentation.
Using GIFs
Animated GIFs are an awesome way of showing off features that need a little more than just an image, either for marketing purposes or explaining a feature in more detail. Check out our guide to Making Gifs!
Speed Run
Speed runs are informal videos meant to focus on a single workflow and the experience for performing that workflow. It should not require much planning and is typically short in duration (less than 5 min.). This video type is meant to inform and not necessarily to influence buyers.
Examples:
Demo
Demos are scripted recordings meant to influence buyers. They generally have higher production value and typically involve a slide-style presentation and/or live screen-sharing. Duration varies depending on the topics being covered.
Examples:
Walk-through
Product walk-throughs are informal videos meant primarily for an internal audience as a recorded, visual form of product critique. Walk-throughs typically focus on the user experience across categories and workflows within a Product Manager’s product scope. There are particular benefits to walk-throughs which span product hierarchy boundaries (multi-category, multi-stage, multi-section) as they help highlight disjointed experiences across our single-application.
Walk-throughs are typically longer as they cover more ground, often involve some “live” troubleshooting, and are best performed with no planning. Use the Product walk-through issue template when creating a walk-through.
Examples:
QA Release Candidates on staging and elsewhere
After the feature freeze, each product manager is expected to test their own features, perform quality assurance to the best of their ability, and follow up where necessary.
Product managers can use the staging environment once the release managers have deployed a release candidate (RC) to staging.
Release managers should post in the #product channel in Slack that a new release candidate is available. Product managers can also use other environments as needed, such as GitLab provisioned on Kubernetes with GKE.
Feature assurance
Before a new feature is shipped, the PM should test it out to make sure it
solves the original problem effectively. This is not about quality assurance
(QA), as developers are responsible for the quality of their code. This is about
feature assurance (FA). FA is necessary because sometimes there are
misunderstandings between the original issue proposal and the final
implementation. Sometimes features don’t actually solve the intended problem,
even though it seemed like it would, and sometimes solutions just don’t feel as
useful as intended when actually implemented.
If you can test out the feature during development, pulling down branches
locally (or with a review app!), that’s great. But sometimes it’s not feasible
to test a feature until it’s bundled into a release candidate and deployed to
GitLab.com. If so, make sure to test out features as soon as possible so any new
issues can be addressed before final release. Also, take the FA cycle into
account when scheduling new milestone work.
If you are looking to test code that has not been merged to GitLab.com or is not yet
part of an RC, you can pull the branch down locally and test it using the GitLab
Development Kit (GDK).
Dealing with security issues
Quality Engineering Managers (QEM) are the DRIs for prioritizing bugs. These include security issues which are prioritized in conjunction with the security team. Product Managers must work with their QEM to set Milestones for issues marked with the bug::vulnerability
type label to guarantee they are shipped by their due date, as defined in the Security Team process.
While Product Managers are the DRIs for milestone planning, they must respect the prioritization order for bugs and maintenance issues as determined by their QEM and EM, respectively. As such they should deeply understand the implications and risks of security-related issues and balance those when prioritizing milestone work. Addressing a serious security issue by its due date may require temporarily adjusting the desired work type ratio for one or more milestones. Priority labels and Due Date designations for security issues should never be modified by Product Managers as they are directly managed by the Security Team and used to track metrics and progress.
Foundational Requirements
When thinking about new features, we must not only think about the functional requirements of a feature (defining what the feature will do), but also about foundational requirements (defining how the feature works). At the highest level, foundational requirements define items such as performance, scalability, compatibility, maintainability and usability characteristics of a feature. It is important to have foundational requirements in place up front, as this is much easier than trying to add them later and change expectations, or break existing workflows. Our definition of done contains specific areas of consideration that are required for the acceptance of new contributions.
For an in depth review of foundational requirements (often referred to as non-functional requirements), see this resource.
To deliver features, we must have both functional and foundational requirements defined.
Introducing application limits
To enhance availability and performance of GitLab, configurable limits should be put in place for features which utilize storage, or scale in a manner which could impact performance. For example, we limit the number of webhooks per project, and we allow admins to set rate limits on raw endpoints. These limits ensure more consistent performance, reduce the likelihood of outages, and offer admins tools to limit abuse or enforce specific standards. While these limits can be configurable, sensible default limits should be defined for our GitLab SaaS and GitLab dedicated offerings.
There is a guide about developing application limits in the GitLab Docs.
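As a rough illustration of the concept (the limit name, default value, and functions below are hypothetical, not GitLab's actual implementation), a configurable limit with a sensible default that admins can override might be modeled like this:

```python
# Hypothetical sketch of a configurable application limit with a sensible
# default; names and values are illustrative, not GitLab's actual code.
DEFAULT_LIMITS = {
    "webhooks_per_project": 100,  # sensible default for SaaS and self-managed
}

def limit_for(instance_settings: dict, limit_name: str) -> int:
    """Return the admin-configured value, falling back to the default."""
    return instance_settings.get(limit_name, DEFAULT_LIMITS[limit_name])

def can_add_webhook(current_count: int, instance_settings: dict) -> bool:
    """Reject creation once a project has reached its webhook limit."""
    return current_count < limit_for(instance_settings, "webhooks_per_project")

# A self-managed admin raises the limit; an instance without overrides keeps the default.
assert can_add_webhook(150, {"webhooks_per_project": 200})
assert not can_add_webhook(100, {})
```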
When implementing application limits
Application limits should be enabled by default. If we are considering enabling or changing a limit, we should do the following (applies to GitLab.com and self-managed):
- Evaluate if GitLab.com and self-managed should match - Usually, the limits on GitLab.com should be a good match for self-managed but there may be situations in which limits on GitLab.com are not a good match for our self-managed customers. For example, the artifact expiration on GitLab.com was put in place to control costs and this did not apply equally to self-managed customers.
- Evaluate the impact to current users - How many users will be affected by this change? How much of an impact will they feel? If you need help pulling data for GitLab.com, create an issue on the Infrastructure project
- Communicate limits in advance of implementation - Create an issue and facilitate community discussion about the impact the change might have. Raise awareness of the change via social media or a blog post. If the limit will result in a breaking change, do several announcements over a period of time to ensure that everyone has advance notice.
- Communicate the limits in advance to the Quality teams - Quality runs tests against various environments that reuse users and as a result tend to hit limits as a false positive. As a result, Quality needs to be informed to ensure that tests can be adjusted accordingly.
- Proactively notify Customer Success and Support of the change - Reach out in #customer-success and #support_escalations to announce the upcoming change, and consider discussing in the next All CS Team Call to solicit feedback.
- Ensure Customer Success and Support are equipped to help users - Make sure that Customer Success and Support has access to the documentation that they need to help customers who contact them regarding the limit.
- Document the limits on docs.gitlab.com
- Make sure that the limit is documented on the page for the feature and include details such as if it’s configurable, what the default value is, and what impact this can have on the end user.
- Document the limit for customers on the instance limits help page, ensuring the limit for gitlab.com is specified. Include instructions on how the limit can be changed on self-managed instances.
- If the limit is time based, link to that section from the Rate limits page
- Communicate the limits in the release post - When the limit is rolled out, make sure to document this change in the next release post.
- Communicate directly to affected users - Especially if the limit is going to have a significant impact to users, consider reaching out directly to notify those users of the change, and any available remedies, workarounds, or best practices that may help mitigate that impact. To send out an email to affected users, work with Support to create an email request.
Managing data lifecycle and growth
As we continue to scale our product, we need to consider the amount of data being stored for new features. Data storage is not an infinite resource, so we should think carefully about what data needs persistent storage to provide the desired user experience. We also need to consider the cost implications around data storage. Everything we store impacts our bottom line, and we should therefore be careful to ensure we are only storing necessary data for well thought out time-frames. We are working on defining a sustainable data retention policy, and will iterate on this section as more general guidelines are developed.
Data storage comes in three main forms for GitLab – object storage, database storage, and Git repository storage. While we have dedicated teams devoted to ensuring we can scale these storages appropriately, it is in our best interest to only store what is required for a feature to perform as intended. Additionally, there are situations where storage should be subject to data retention policies.
Considerations around data storage
When evaluating feature data storage, the following data storage topics should be considered.
- What quantity of data needs to be stored? - What amount of data will need to be stored for the feature to function as intended? Is this level of data storage bounded, or is there a potential for unbounded growth? Unbounded growth should be avoided if possible.
- How long should data be retained? - We should consider carefully the need to store data indefinitely. For many features, removing certain data after a specified time period won’t impact the functionality of the feature. In these instances, we should put retention policies in place. These retention policies should have a sane default value which is considered best practice for operating the feature long term. Note: it is easier to iterate toward longer data retention time frames, but far harder to reduce retention time frames. Consider starting out with a conservative time frame.
- How often will this data be accessed? - Much like the quantity of data stored can lead to scalability issues, so can the increased load on the data stores when the data is accessed frequently. There are ways to ease the burden on our infrastructure by properly forming queries, caching often used data, or carefully considering how repository data is accessed. If there are questions, consider reaching out to the Database Group or the Git Group for assistance.
A good example where we’ve successfully evaluated data storage is our CI/CD Artifacts. We’ve set some sane default values for both maximum artifact size and for default artifacts expiration, while making these both configurable for administrative users.
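As a sketch of the retention idea (the 30-day window and function names are assumptions for illustration, not GitLab's actual code), a conservative default retention window that can later be extended might look like this:

```python
# Hypothetical sketch of a data retention policy with a conservative default
# window; the duration and names are illustrative assumptions.
from datetime import datetime, timedelta, timezone

DEFAULT_RETENTION = timedelta(days=30)  # start conservative; extending later is easier than reducing

def expired(created_at: datetime, retention: timedelta = DEFAULT_RETENTION) -> bool:
    """Return True when a stored record falls outside the retention window."""
    return created_at < datetime.now(timezone.utc) - retention

def prune(records: list[dict], retention: timedelta = DEFAULT_RETENTION) -> list[dict]:
    """Keep only records that are still within the retention window."""
    return [r for r in records if not expired(r["created_at"], retention)]
```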
Cross-stage features
See this page for details on working across stages at GitLab.
Stages, Groups, and Categories
Stages, groups, and categories serve as a common framework for organizing and communicating the scope of GitLab.
How to work as a PM
If you follow the principles and workflow above, you won’t be writing long, detailed
specs for a part of the product for next year. So how should you be
spending your time?
Invest the majority of your time (say 70%) in deeply understanding the problem.
Then spend 10% of your time writing the spec for the first iteration only and
handling comments, and use the remaining 20% to work on promoting it.
A problem you understand well should always have a (seemingly) simple or obvious
solution. Reduce it to its simplest form (see above) and only ship that.
Prioritization
See the Cross-Functional Prioritization page for more information.
Prioritization Framework
| Priority | Description | Issue label(s) |
|----------|-------------|----------------|
| 1* | Security | bug::vulnerability |
| 2* | Data Loss | data loss |
| 3* | Resilience, Reliability, Availability, and Performance | availability, infradev, Corrective Action, bug::performance |
| 4 | OKRs | |
| 5 | Usability | Usability benchmark, SUS::Impacting, Deferred UX |
| 6 | Instrumentation | instrumentation |
| 7 | xMAU / ARR Drivers | direction |
| 8 | All other items not covered above | |
*indicates forced prioritization items with SLAs/SLOs
Forced Prioritization
Any of the items with a “*” are considered issues driven by the attached SLO or SLA and are expected to be delivered within our stated policy. There are two items that fall into Forced Prioritization:
- Security Issues labeled with bug::vulnerability must be delivered according to the stated SLO
- Issues supporting our product’s scale, which include bug::availability with specific SLOs, as well as infradev, Corrective Action, and ci-decomposition::phase* that follow the stated type::bug SLO
Any issues outside of these labels are to be prioritized using cross-functional prioritization. Auto-scheduling issues based on automation or triage policies are not forced prioritization. These issues can be renegotiated for milestone delivery and reassigned by the DRI.
Engineering Allocation
While we have moved to the cross-functional prioritization process to empower teams to determine the optimal balance of all types of issues, we will keep Engineering Allocations as a way to allow teams to quickly shift to a critical priority, designating the EM as the DRI to drive the effort.
Engineering is the DRI for mid/long term team efficiency, performance, security (incident response and anti-abuse capabilities), availability, and scalability. The expertise to proactively identify and iterate on these is squarely in the Engineering team, whereas Product can support with performance issues identified by customers. In some ways these efforts can be viewed as risk-mitigation or revenue protection. They also have the characteristic of being larger than one group at the stage level. Development would like to conduct an experiment to focus on initiatives that should help the organization scale appropriately in the long term. We are treating these as a percent investment of time associated with a stage or category. The percent of investment time can be viewed as a prioritization budget outside normal Product/Development assignments.
Engineering Allocation is also used in short-term situations in conjunction and in support of maintaining acceptable Error Budgets for GitLab.com and our GitLab-hosted first theme.
Unless it is listed in this table, the Engineering Allocation for a stage/group is 0% and we are following normal prioritization. Refer to this page for Engineering Allocation charting efforts. Some stage/groups may be allocated at a high percentage or 100%, typically indicating a situation where all available effort is to be focused on Reliability related (top 5 priorities from prioritization table) work.
During an Engineering Allocation, the EM is responsible for recognizing the problem, creating a satisfactory goal with clear success criteria, developing a plan, executing on the plan, and reporting status. It is recommended that the EM collaborate with PMs in all phases of this effort as we want PMs to feel ownership for these challenges. This could include considering adding more/less allocation, setting the goals to be more aspirational, reviewing metrics/results, etc. We welcome strong partnerships in this area because we are one team even when allocations are needed to resolve issues critical to our business.
During periods of Engineering Allocation, the PM remains the interface between the group and the field teams & customers. This is important because:
- It allows Engineering to remain focused on the work at hand
- It maintains continuity for the field teams - they should not have to figure out different patterns of communication for the customer
- It keeps PMs fully informed about the product’s readiness
| Group/Stage | Description of Goal | Justification | Maximum % of headcount budget | People | Supporting information | EMs / DRI | PMs |
|-------------|---------------------|---------------|-------------------------------|--------|------------------------|-----------|-----|
| | | | | | | | |
Broadcasting and communication of Engineering Allocation direction
Each allocation has a direction page maintained by the Engineering Manager. The Engineering Manager will provide regular updates to the direction page. Steps to add a direction page are:
- Open an MR to the direction content
- Add a directory under the correct stage named for the title Engineering Allocation
- Add a file for the page named index.html.md in the newly created directory
To see an example for an Engineering Allocation Direction page, see Continuous Integration Scaling. Once the Engineering Allocation is complete, delete the direction page.
How to get an effort added to Engineering Allocation
One of the most frequent questions we get as part of this experiment is “How does a problem get put on the Engineering Allocation list?”. The short answer is someone makes a suggestion and we add it. Much like everyone can contribute, we would like the feedback loop for improvement and long-term goals to be robust. So everyone should feel empowered to suggest an item at any time.
To help get items onto the list for consideration, we will be performing a survey periodically. The survey will consist of the following questions:
- If you were given a % of engineering development per release to work on something, what would it be?
- How would you justify it? Have you tried leveraging cross-functional prioritization process before considering an engineering allocation?
We will keep the list of questions short to solicit the most input. The survey will go out to members of Development, Quality, and Security. After we get the results, we will consider adding items as Engineering Allocations.
Closing out Engineering Allocation items
Once the item’s success criteria are achieved, the Engineering Manager should consult with counterparts to review whether the improvements are sustainable. Where appropriate, we should consider adding monitoring and alerting to any areas of concern that will allow us to make proactive prioritizations in future should the need arise. The Engineering Manager should close all related epics/issues, reset the allocation in the above table to the floor level, and inform the Product Manager when the allocated capacity will be available to return their focus to product prioritizations.
When resetting a group’s Engineering Allocation in the table above, the goal should be set as floor %, the goal description should be to empower every SWE to raise reliability and security issues, the percentage of headcount allocated should be 10%, and N/A should be used in place of a link to the Epic.
All engineering allocation closures should be reviewed and approved by the VP of Development.
Feature Change Locks
A Feature Change Lock (FCL) is a process to improve the reliability and availability of GitLab.com. We will enact an FCL anytime there is an S1 or public-facing (status page) S2 incident on GitLab.com (including the License App, CustomersDot, and Versions) determined to be caused by an engineering department change. The team involved should be determined by the author, their line manager, and that manager’s other direct reports.
If the incident meets the above criteria, then the manager of the team is responsible for:
- Form the group of engineers working under the FCL. By default, it will be the whole team, but it could be a reduced group if there is not enough work for everyone.
- Plan and execute the FCL.
- Inform their manager (e.g. Senior Manager / Director) that the team will focus efforts towards an FCL.
- Provide updates at the SaaS Availability Weekly Standup.
If the team believes there does not need to be an FCL, approval must be obtained from either the VP of Infrastructure or VP of Development.
Direct reports involved in an active borrow should be included if they were involved in the authorship or review of the change.
The purpose is to foster a sense of ownership and accountability amongst our teams, but this should not challenge our no-blame culture.
Timeline
Rough guidance on timeline is provided here to set expectations and urgency for an FCL. We want to balance moving urgently with doing thoughtful important work to improve reliability. Note that as times shift we can adjust accordingly. The DRI of an FCL should pull in the timeline where possible.
The following bulleted list provides a suggested timeline starting from incident to completion of the FCL. “Business day x” in this case refers to the x business day after the incident.
- Day 0: Incident occurs
- Business day 1: relevant Engineering Director collaborates with VP of Development and/or VP of Infrastructure or their designee to establish if FCL is required.
- Business day 2: confirmation that an FCL is required for this incident and start planning.
- Business days 3-4: planning time
- Business days 5-9 (1 week): complete planned work
- Business days 10-11: closing ceremony, retrospective and report back to standup
Activities
During the FCL, the team(s)’ exclusive focus is on reliability work, and any in-flight feature work has to be paused or re-assigned. Maintainer duties can still be done during this period and should keep other teams moving forward. Explicitly higher priority work such as security and data loss prevention should continue as well. The team(s) must:
- Create a public Slack channel called #fcl-incident-[number], with members:
- The Team’s Manager
- The Author and their teammates
- The Product Manager, the stage’s Product leader, and the section’s Product leader
- All reviewer(s)
- All maintainers(s)
- Infrastructure Stable counterpart
- The chain-of-command from the manager to the VP (Sr Manager, Sr/Director, VP, etc)
- Create an FCL issue in the FCL Project with the information below in the description:
- Name the issue: [Group Name] FCL for Incident ####
- Links to the incident, original change, and slack channel
- FCL Timeline
- List of work items
- Complete the written Incident Review documentation within the Incident Issue as the first priority after the incident is resolved. The Incident Review must include completing all fields in the Incident Review section of the incident issue (see incident issue template). The incident issue should serve as the single source of truth for this information, unless a linked confidential issue is required. Completing it should create a common understanding of the problem space and set a shared direction for the work that needs to be completed.
- Verify not only that all procedures were followed, but also identify how improvements to procedures could have prevented the incident
- A work plan referencing all the Issues, Epics, and/or involved MRs must be created and used to identify the scope of work for the FCL. The work plan itself should be an Issue or Epic.
- Daily - add an update comment in your FCL issue or epic using the template:
- Exec-level summary
- Target End Date
- Highlights/lowlights
- Add an agenda item in the SaaS Availability weekly standup and summarize status each week that the FCL remains open.
- Hold a synchronous closing ceremony upon completing the FCL to review the retrospectives and celebrate the learnings.
- All FCL stakeholders and participants shall attend or participate async. Managers of the groups participating in the FCL, including Sr. EMs and Directors should be invited.
- Agenda includes reviewing FCL retrospective notes and sharing learnings about improving code change quality and reducing risk of availability.
- Outcome includes handbook and GitLab Docs updates where applicable.
Scope of work during FCL
After the Incident Review is completed, the team(s) focus is on preventing similar problems from recurring and improving detection. This should include, but is not limited to:
- Address immediate corrective actions to prevent incident reoccurrence in the short term
- Introduce changes to reduce incident detection time (improve collected metrics, service level monitoring, which users are impacted)
- Introduce changes to reduce mitigation time (improve rollout process through feature flags, and clean rollbacks)
- Ensure that the incident is reproducible in environments outside of production (Detect issues in staging, increase end-to-end integration test coverage)
- Improve development test coverage to detect problems (Harden unit testing, make it simpler to detect problems during reviews)
- Create issues with general process improvements or asks for other teams
Examples of this work include, but are not limited to:
- Fixing items from the Incident Review which are identified as causal or contributing to the incident.
- Improving observability
- Improving unit test coverage
- Adding integration tests
- Improving service level monitoring
- Improving symmetry of pre-production environments
- Improving the GitLab Performance Tool
- Adding mock data to tests or environments
- Making process improvements
- Populating their backlog with further reliability work
- Security work
- Improve communication and workflows with other teams or counterparts
Any work for the specific team kicked off during this period must be completed, even if it takes longer than the duration of the FCL. Any work directly related to the incident should be kicked off and completed even if the FCL is over. Work paused due to the FCL should be the priority to resume after the FCL is over. Items created for other teams or on a global level don’t affect the end of the FCL.
A stable counterpart from Infrastructure will be available to review and consult on the work plan for Development Department FCLs. Infrastructure FCLs will be evaluated by an Infrastructure Director.
Please also note the corresponding Engineering handbook section about the relative importance and prioritization of availability, security, and feature velocity. To ensure we’re providing an appropriate focus on security, data loss, and availability, PMs should consider:
- tracking the appropriate labels for each prioritization category: Use a standing item to discuss these issues with an engineering manager and ensure you understand the impact of related issues in your area before planning a release.
- optimizing for quality once a merge request is ready for review: This means ensuring that Engineering has sufficient time to meet our definition of done - including a high-quality code review - without cutting corners to get something into production.
Prioritization sessions
To help PMs plan, stage group stable counterparts can participate in prioritization sessions. They serve mainly as an internal sensing mechanism for PMs to make more informed prioritization decisions for different planning horizons. Usually, teams focus on the product releases horizon, but they can also focus on the FY themes or strategy horizons. This group exercise also boosts team morale, improves communication and empathy, and broadens individuals’ perspectives. It can also be a more informal and joyful way of connecting the team and discussing work.
The output of these sessions is a priority matrix that shows the relative priority of a set of items based on two weighted criteria. Generally, the criteria are importance and feasibility, each one visualized as an axis of the matrix. You can change the criteria depending on the planning horizon or goals. To better understand how the sessions work, see an example mural and session recording.
Always consider asynchronous sessions first, in an effort to be more inclusive and respectful of others’ time. That said, if possible, synchronous sessions can be ideal, as they allow limiting the time spent and make great use of the activities’ momentum for a more efficient discussion and voting.
Use our Mural template for prioritization sessions, built for product releases but adaptable for other planning horizons or criteria.
Process template
Adapt this process as needed, and consider changing it to an asynchronous mode of communication. For example, participants can review the items async, add questions as comments in Mural, and vote using dot voting or in voting sessions held on different days for each criterion.
- Before:
- The facilitator creates a mural from our template for prioritization sessions, with the stage group and milestone in its name.
- The facilitator invites the stage group counterparts for a 50-minute call, scheduled sometime before the team finalizes the release scope (see the product development timeline). Include the URL of the mural and planning issue in the event description.
- The facilitator shares the preparation work with the participants, preferably in the group’s planning issue (see the template after this list and an example).
- Participants do the preparation work (see the template after this list).
- During (see an example session recording):
- The facilitator starts recording the call.
- Present: For each participant, the facilitator sets the timer for 10 minutes (adapt per the no. of participants). A participant then presents their issues, preferably using the RICE framework. Only after the participant presents all issues should other attendees ask questions. Once in a while, the facilitator announces how much time remains. When the timer goes off, repeat this for another participant.
- Vote: After all participants have presented, the facilitator runs two voting sessions: first for importance, and then for feasibility. Each participant has 5 votes (adapt per the no. of issues). The facilitator sets the timer for 2 minutes, repeating for each voting session.
- Visualize: Review your voting session results and everyone helps place the stickies on the matrix, depending on their number of votes for each criterion.
- If there’s still time, discuss the most-voted issues as a group.
- After:
- The facilitator uploads the recording to GitLab Unfiltered, sets its visibility (see SAFE framework), adds to relevant playlists, and includes the URL of the mural and planning issue in the description.
- The facilitator shares the recording URL and voting results in the planning issue, preferably with a screenshot of the matrix and links to the highest voted issues (see an example).
Preparation work template
## :map: Prioritization session
`@-mention participants` for our [prioritization session](/handbook/product/product-processes/#prioritization-sessions), here's the [**Mural**](URL) for us to add the issues we want to see in **MILESTONE**. I scheduled our 50-minute session for **DATE**.
1. Add your issues to the Mural before the call. Let's try to limit to **5 issues per person**, so it's easier to vote on them and keep things focused. You can find instructions on how to add them in the "Outline" panel on the right side of the Mural UI.
1. Try not to add Security or Availability issues. This is also noted in the [product processes page](/handbook/product/product-processes/#prioritization), as those issues have forced prioritization with SLAs/SLOs.
1. If you can, mark issues that appeared in previous sessions by changing their sticky color to **orange**.
Thanks and see you soon :bow:
Using the RICE Framework
RICE is a useful framework for prioritization that can help you stack rank your issues. The RICE framework is a great tool for prioritizing many issues that seem to be of equal value at first glance. In order to drive clarity and alignment in the prioritization of work across the entire DevOps platform, and to help prioritize items that may compete for resources from different teams, we have set a standard for the RICE factors so all prioritization decisions based on RICE are using the same metric.
Reach: How many customers will benefit in the first quarter after launch? Data sources to estimate this might include qualitative customer interviews, customer requests through Support/CS/Sales, upvotes on issues, surveys, etc.
Higher reach means a higher RICE score:
- 10.0 = Impacts the vast majority (~80% or greater) of our users, prospects, or customers
- 6.0 = Impacts a large percentage (~50% to ~80%) of the above
- 3.0 = Significant reach (~25% to ~50%)
- 1.5 = Small reach (~5% to ~25%)
- 0.5 = Minimal reach (Less than ~5%)
Impact: How much will this impact customers and GitLab? Impact could take the form of increased revenue, decreased risk, and/or decreased cost (for both customers and GitLab). This makes it possible to compare revenue generating opportunities vs. non-revenue generating opportunities. Potential for future impact should also be taken into account as well as the impact to the GitLab brand (for example unlocking free-to-paid conversion opportunities).
Higher impact means a higher RICE score:
- Massive = 3x
- High = 2x
- Medium = 1x
- Low = 0.5x
- Minimal = 0.25x
Confidence: How well do we understand the customer problem? How well do we understand the solution and implementation details? Higher confidence means a higher RICE score.
- High = 100%
- Medium = 80%
- Low = 50%
Effort: How many person-months do we estimate this will take to build? Lower effort means a higher RICE score.
Calculating RICE Score
These four factors can then be used to calculate a RICE score via the formula:
(Reach x Impact x Confidence) / Effort = RICE
Here is an example RICE calculation you can use to help prioritize work in your area. Feel free to embed this at the Epic level to provide context for why you did or did not prioritize.
| RICE Factor | Estimated Value |
|-------------|-----------------|
| Reach | 10.0 |
| Impact | .5 |
| Confidence | 80% |
| Effort | 2 months |
| Score | (10.0 x .5 x .80) / 2 = 2.0 |
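For convenience, here is a minimal sketch (not an official tool) of how the standard factor values above combine into the score shown in the table; the dictionaries and function names are illustrative only:

```python
# Minimal sketch of the RICE calculation using the standard factor values
# from this page; names are illustrative, not an official GitLab tool.
REACH = {"vast majority": 10.0, "large": 6.0, "significant": 3.0, "small": 1.5, "minimal": 0.5}
IMPACT = {"massive": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5, "minimal": 0.25}
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}

def rice_score(reach: float, impact: float, confidence: float, effort_person_months: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort_person_months

# The example above: reach 10.0, impact 0.5 (Low), confidence 80% (Medium), effort 2 months.
print(rice_score(REACH["vast majority"], IMPACT["low"], CONFIDENCE["medium"], 2))  # => 2.0
```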
Other important considerations:
- Is this in support of a company or team OKR?
- Does it bring our vision closer to reality?
- Does it help make our community safer through moderation tools?
- Does it meaningfully improve the user experience of an important workflow?
- Is it something we need ourselves?
- Is it particularly important to customers?
- The technical complexity is acceptable. We want to preserve our ability to make
changes quickly in the future so we try to avoid complex code, complex data structures, and optional settings.
- It is orthogonal to other features (prevents overlap with current and future features).
- The requirements are clear.
- It can be achieved within the scheduled milestone. Larger issues should be split up, so that individual steps can be achieved within a single milestone.
- Refer to research participant gratuities section to understand if your study qualifies for incentive distribution.
We schedule a prioritized issue by assigning it a milestone; for more on this see
Planning a Future Release.
Async RICE Exercise
Conducting a RICE prioritization exercise with your cross-functional counterparts is a powerful way to make the process more inclusive and improve the quality of your rankings. Consider making this an async-first process to accommodate team members across different timezones. For an example of how to do this async-first, see this issue that the Geo team used to collaborate on a RICE prioritization exercise. This blank async RICE template is also available for you to copy for your own async prioritization exercise.
Issues important to customers
For prioritizing most issues, we should utilize the RICE framework noted above, which will capture an aggregate of customer demand. You can also augment RICE scores with the Customer Issues Prioritization Framework Dashboards:
- Customer Requested Issues (Product) for product managers
- Customer Requested Issues (CSM) for Sales, CS and CSM
These dashboards provide several inputs for calculating RICE and aggregate all customer requested issues and epics into a single dashboard. These dashboards are not meant as a replacement or sole input for Top ARR Drivers for Sales/CS. Further requirements such as the integration of themes need to be implemented before this framework can be used to fully inform or replace tools such as the Top ARR tracker.
In some cases however, we may become aware of a feature which is particularly important to deliver on by a certain date. Examples of this could include an issue necessary to embark on a new GitLab rollout, a feature needed by a partner to launch an integration, or a method to import data from a service which is being discontinued. In these instances, the responsible PM can apply the customer or customer+ label along with a due date and initial milestone. This set of labels can serve to indicate externally that the issue is particularly important, as well as a reminder for internal teams of its importance.
It is important to note that the customer and/or customer+ label does not constitute a promise for the issue to be delivered in any given milestone or time frame.
GitLab is open source, and encouraging and promoting a large ecosystem of contributors is critical to our success. When making prioritization decisions,
it’s important to heavily weight activities which will encourage a stronger community of contributors. Some of those activities are:
- The creation of small primitives that can be utilized and iterated on by community members
- The building of integration points which can entice independent third parties to contribute an integration
- The addition of tools or features which make the contribution experience easier
Product managers are not responsible for prioritizing contributions outside of their group. These contributions should be
reviewed and merged swiftly allowing everyone
to contribute, including non-product teams at GitLab.
SaaS-First Framework
The SaaS-First product investment theme will put us in a better position to support our customer base who is expected to accelerate adoption of SaaS products in the coming years. Features will also end up more secure, resilient, performant, and scalable for our self-managed customers if initially built to the expectations of SaaS. Therefore, it is important for PMs to understand and prioritize needs related to the SaaS business. When prioritizing SaaS related issues, we follow the same guidelines above. Within those guidelines there are a few areas that are especially important for PMs to focus on to ensure the success of our SaaS users.
Availability
Downtime of GitLab.com has a material impact on our customers. In a 2014 report, Gartner estimated that downtime costs companies on average “$5,600 per minute, which extrapolates to well over $300K per hour.” Furthermore, SaaS downtime can severely disrupt the productivity of GitLab Inc. since we rely heavily on GitLab.com to run our business. Finally, downtime can also lead to customer churn and damage to our reputation. Thus, it is crucial that as a company we collectively work towards consistently maintaining our 99.95% SLA on GitLab.com. There are a few things that PMs can do in partnership with their engineering team to help ensure overall Availability for GitLab.com.
- Make sure each new feature that gets built has full end to end test coverage.
- Before rolling out a new service to support a major new feature launch, ensure that your team has gone through the readiness review process. The effort and timing for a readiness review will vary depending on the complexity of the feature. It is recommended to start this process as early as practical when a significant number of the questions can be answered but not too late to further develop the feature based on learnings from the review.
- Ensure there are application limits for your product areas enabled on GitLab.com to reduce abuse vectors.
Infradev
The infradev process is used to triage issues requiring priority attention in support of SaaS availability and reliability. As part of the broader effort to responsibly manage tech debt across the company, PMs should partner with their EMs to identify and incorporate infradev labeled issues of all severities. Note, issues labeled with a severity must be mitigated and resolved within specific time frames to meet the SLO. As EMs are the DRIs for prioritizing infradev work, PMs should familiarize themselves with the infradev process and Board.
Other resources PMs can consult to identify and prioritize Infradev issues include:
While not required, PMs are encouraged to listen in on Incident Management calls for incidents related to their product areas to 1) build empathy with the SRE team by gaining insight into how they handle incidents, 2) gain a better sense of the impact of the incident to their customer base, and 3) identify improvements to their product areas, whether technical or feature-related, that could have prevented the incident. PMs are not expected to be in the decision-making path on actions taken to resolve the incident. They are there to listen and learn rather than attempting to decide/influence the course of resolution. After incidents involving their product area, PMs are also encouraged to engage in the Incident Review, including attendance at the Sync Incident Review call if their incident is scheduled. PMs can periodically review incidents via the Production Incident Board.
Enterprise Customer Needs
Enterprise customers interested in adopting SaaS may have common hard requirements to be able to use the product. For example, large enterprises may need certain security related features, such as Audit Logs, available before their security team will agree to the use of GitLab.com. This can also be about more than just features; it may include how and where we apply features so they can administrate their GitLab instance at enterprise-scale. For instance, permission management and shared configurations are best implemented top-down first instead of Project-up to meet the requirements of large organizations who may have 100s or 1000s of projects and only a small handful of people to perform these system-wide administrative tasks. In order to encourage more Enterprise adoption of GitLab.com, prioritize these common “hard-blockers” to adoption over “nice to have” features. PMs can use customer interviews to hone in on which issues are hard blockers to adopting SaaS vs more “nice to have” features that can be delivered later.
To track hard adoption blockers, use the ~“GitLab.com Enterprise Readiness” label within the GitLab-Org and GitLab-com groups.
SaaS Features
There are a few special considerations when it comes to delivering features for SaaS. In order to achieve parity between SaaS and Self-managed installations PMs should prioritize efforts to eliminate existing feature gaps that exist across the two installations. Additionally, new features should ship for SaaS and self-managed at the same time. Features should be implemented at the group level first, before being implemented at the instance level, so that they will work across both self-managed and SaaS. Finally, in order for new features to be adequately monitored, they should include appropriate logging and observability, which makes troubleshooting much easier.
Working with Your Group
As a product manager, you will be assigned as the stable counterpart to a single group. At GitLab we abide by unique and extremely beneficial guidelines when interacting with our groups. These include:
- Product managers are the DRIs for overall work prioritization but work collaboratively with their EM, UX, and QEM stable counterparts to ensure the right priorities from each work type are considered as each has a different DRI. Product Managers are responsible for communicating overall priority.
- Product Managers provide the what and when for feature work. Engineering (UX, Backend, Frontend, Quality) provide the how. This process is documented as part of our monthly product, engineering and UX cadence. We define stable counterparts for each of these functions within a group.
As an all-remote company, our crispness when it comes to responsibilities throughout the Product Delivery process was born out of necessity, but it pays untold dividends. Some of the benefits include:
- We avoid the ambiguity in handoffs between teams
- We avoid the confusion of many responsible individuals
- We avoid the slowness of consensus driven decision making
- We avoid the disruption of frequent context switching
- We gain the rigidity to be consistent
- We gain the freedom to iterate quickly
From Prioritization to Execution
As described above, prioritization is a multi-faceted problem. In order to
translate the priorities of any given group into action by our engineering
teams, we need to be able to translate this multi-faceted problem into a flat
list of priorities for at least the next release cycle. Product Managers are
responsible for taking all these prioritization considerations and creating a
clear, sequenced list of next priorities. This list should be represented as an issue board
so that each team has a clear interface for making decisions about work. From
this list, Product Designers, Engineering Managers and Product Managers can work together to
determine what items will be selected for work in the immediate future.
This does not mean that items will be addressed in strict order - Product Designers, EMs and PMs
need to be cognizant of dependencies, available skill sets, and the rock/pebbles/sand
problem of time management to make the best decisions about selecting work.
Reviewing Build Plans
Together with your Engineering Manager, you will have an important role in ensuring that the Build Plans defined for issues are created with iteration in mind. Iteration is highly valuable for the following reasons:
- It can result in discovering ways to parallelize effort, resulting in less team WIP and increased throughput
- It can result in shipping something of value during an iteration rather than delaying everything
- It can de-risk unknown unknowns by bringing them to light sooner in the development process
Prioritizing for Predictability
As a company we emphasize velocity over predictability. As a product manager this means
you focus on prioritizing, not scheduling issues. Your engineering stable counterparts are
responsible for velocity and delivery. However, there are instances when there is desire for predictability, including:
- Security, Bugs and Infra priorities with SLOs
- Customer Commitments
- Infrastructure projects with IACV driver impact or those that result in significant cost savings for gitlab.com
- Infrastructure projects with customer commitment or heavily upvoted should be given a priority indicative of other customer commitments
- Vision or Direction items for a launch
As the DRI for milestone prioritization, it is the Product Manager’s job to prioritize for predictability when it is needed. You should do so by ensuring you prioritize a deliverable, and its dependencies, so that it can reasonably be expected to be delivered by any committed dates. If there is time pressure to hit a date, the PM should also explore de-scoping the issue to meet the deadline, rather than pressuring engineering to move abnormally fast or cut corners.
These information sources may be useful to help you prioritize.
Global Prioritization
Individual product managers must consider, and advocate for, global optimizations
within the teams they are assigned to. If your assigned team requires expertise
(remember everyone can
contribute)
outside the team you should make all reasonable efforts to proceed forward
without the hard dependency while advocating within the product management team
for increased prioritization of your now soft dependencies.
Execution of a global prioritization can take many forms and is worked with both Product and Engineering Leadership engaged. Either party can activate a proposal in this area. The options available, and when to use them, are the following:
- Rapid action - use when reassignment isn’t necessary, the epic can have several issues assigned to multiple teams
- Borrow - use when a temporary assignment (less than 6 months) to a team is required to help resolve an issue/epic
- Scope Reassignment - use when scope that will take longer than 6 months to deliver is a high priority and the team member reporting structure does not need to change to accomplish the effort.
- Realignment - use when a permanent assignment to a team is required to resolve ongoing challenges. This has the highest impact to team members and should be considered if other options cannot achieve the desired goal. We strive to hire team members in the groups that will need them most.
We have found the following methods less successful in ensuring completion of work that warrants global prioritization:
- Working Groups - This method involves convening a group of individuals who maintain full-time responsibility to other Product Groups and completing work as part of the working group structure. This method isn’t preferred for completing product improvements; instead, it can be utilized to scope work or to determine plans for future product delivery.
- Fan Out Prioritization - This method of prioritization involves communicating a global prioritization to a number of Product Groups in an effort to ensure each individual product group’s PM prioritizes the work in the time frame you’d prefer. This method requires significant coordination costs and puts delivery at risk due to the lack of central prioritization responsibility. In most cases it is preferred to execute a scope reassignment, borrow or realignment to complete the improvements.
Planning and Direction
As a PM, you must plan for the near term milestones (more detailed) as well as for the long
term strategy (more broad), and everything in between.
While monthly milestone planning is done in GitLab, longer horizon planning (1-3 years) is done in direction pages.
This will enable you to efficiently communicate both internally and externally
how the team is planning to deliver on the product vision.
Managing your Product Direction
Documenting a Section, Stage, Group and Category direction is critical to communicating where we are heading and why to all of our stakeholders. This is especially important to the members of your Product Group. Establishing a direction for stakeholders (including team members) to participate in and contribute to ensures there is a concrete connection to “Why” we are iterating and how it furthers GitLab’s mission. Here are some of those connections:
- Improving Product Performance Indicators - Usage represents market capture (whether paying or not), and the start of our dual fly-wheel. For existing customers, that market capture in new capabilities also represents increased retention and, because of the benefits of a single application, user satisfaction.
- Improving Competitiveness against alternative DevOps tools - Leads to increased Stages Per user, and sales as they add to our “Increase Operational Efficiency”
As a Product Manager you can highlight these connections in:
- Direction Content and Overview Videos
- Weekly Meetings
- Individual Issue Descriptions
- Planning Issues
- Kickoff Videos
- Customer Discovery Interview Summaries
Communicating this connection requires a multi-channel approach; the connection to our Direction warrants consistent reinforcement across all of these channels.
Section and Stage Direction
Section leaders are responsible for maintaining Direction pages that lay out the strategy and plan for their respective section and stages. The direction pages should include topics outlined in this template.
Category Direction
A category strategy is required; it should outline information about the category,
including the overall strategy, status, what’s next, and the competitive landscape.
The category strategy should be documented in a handbook page, which allows for version control
of the category strategy as well as the ability to embed video assets.
One of the most important pieces of information to include in the category strategy is a tangible next step or MVC
and a clear description of focus and out-of-focus/maintenance areas.
Your category strategies should contain short paragraphs with lots of references to specific epics and issues.
Referencing topics instead of features is encouraged, as topics are more stable over time.
We use this category strategy template
as the outline for creating the handbook pages. If additional headings are needed you are empowered
to create and populate them in your category strategy. You must keep these categories in sync with categories.yml
and follow the documented process for new categories.
Category direction should be reviewed on a regular basis (at least monthly) by the responsible product
manager. To indicate the last time a category direction page was reviewed, please ensure pages
include Content Last Reviewed: yyyy-mm-dd
at the top of the category content. Update this date with every
review, even if other content on the direction page has not changed.
You should link to your category strategy from your stage strategy page.
For categories that have already shipped, and that have a marketing
product page, categories.yml
should link to the product page.
Inside of the categories.yml
file there are dates assigned for either achieved or anticipated maturity achievement. These should be kept in line with communicated dates for achievement and updated as required.
If the category has developed a UX Roadmap, we recommend that the product designer create a merge request to incorporate UX Roadmap themes into the category direction page roadmap. Assign the MR to the PM for review and merge.
Navigating cross-stage or cross-section direction pages
In some cases there may be direction pages that span multiple stages or sections. A direction page that summarizes the collective vision as well as all the contributors of that direction is critical to maintain transparency and adequate assignment of ownership.
There are several examples of these types of direction pages today:
- Software Supply Chain Security Direction
- AutoDevOps Direction
- Monorepo Product Direction
- Versioned Dependencies Direction
- Customizable Dashboards Direction
The steps for creating and managing a cross-section or stage direction are:
- Create a direction page merge request adding the direction page to the GitLab direction directory
- Select the category change template in the merge request
- Follow the process for category changes
- Add CODEOWNERS by adding an entry with the direction page link and the page DRI GitLab Handle.
- Once approved, @-mention all relevant product managers on the addition
Once the direction page has been added, there needs to be an assigned DRI for maintaining monthly updates for the page. It is the DRI’s responsibility to ensure the shared direction page is regularly reviewed and up to date. This requires cross-section / cross-stage collaboration from the DRI.
What makes a Product Direction issue?
You should use the ~direction
label together with category and section labels to mark epics and issues that fall into the given direction.
Product Direction items (i.e., with the label) should be direction-level items that move the strategy forward meaningfully. This is up to the PM to set the bar for, but there should be a clear step forward with real user value.
It’s important to note here that your plan is not simply a list of new features and innovation.
Those are included for sure, but so are issues related to all of your sensing mechanisms.
A category upgrade from minimal to viable or delivery of a top customer issue (for example) can contribute to your plan just as much as a brilliant new innovative feature can. It’s up to PMs to balance this through a coherent longer-term strategy.
Conversely, be aware that in a broad sense anything could be said to move the plan forward in a general way, so apply the label selectively.
Finally, issues are the substance of your plan. Ensure you are applying the label to both relevant epics and their issues (see the sketch below).
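For illustration, labels like ~direction can be applied programmatically as well as through the UI. The following is only a minimal sketch using the python-gitlab client; the project path, issue IID, token, and the group/category label names are placeholders, not a prescribed workflow.

```python
# Minimal sketch: applying the ~direction label (plus illustrative group and
# category labels) to an issue via the python-gitlab client. The project path,
# issue IID, token, and label names below are placeholders.
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="YOUR_TOKEN")
project = gl.projects.get("gitlab-org/gitlab")

issue = project.issues.get(123456)  # hypothetical issue IID
for label in ("direction", "devops::plan", "Category:Project Management"):
    if label not in issue.labels:
        issue.labels.append(label)
issue.save()  # persists the updated label set on the issue
```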
Communicating dates
As product managers, a core part of our job is to set correct expectations. We typically do this through discussing our direction and assigning issues to milestones. When you need to communicate specific dates, it’s recommended to do so with limited visibility, internally or directly to the customers, and to use calendar year (CY) dates. Fiscal year (FY) dates do not translate well outside the company.
Accordingly, the direction pages are expected to refer to specific issues only for the next 3-4 months. Everything beyond that should discuss the topic, not specific issues.
Planning is indispensable but adjust, iterate
Creating a thoughtful direction for your section, stage, or category is a useful thought exercise that can help focus efforts, aid in prioritization, and get large groups of people on the same page. But beware of simply executing your long term plan. Our industry is incredibly dynamic, and we learn new things every day that can and should cause us to re-think our long term plans.
Delivery follows discovery
We should ship what brings value to our customers, not what is easy to ship. Stay focused on creating value each and every milestone, and be quick to adjust your longer term direction as you learn more.
- When working on a larger theme, you should start with validating the end state knowing that it will change as you start shipping features and you learn more from actual usage.
- Once the final vision is validated, you should work with your designer and engineering counterparts to break it down to the smallest possible iterations in order to ship value quickly.
- You might still prefer to validate the first “milestone” before getting into delivery.
- It’s totally fine to never ship the initial vision and refine the vision after every iteration. A feature not built is much more valuable than a feature that is built but never used.
Maturity Plans
For each category, we recommend tracking the improvements required to advance to the next level of maturity. You are welcome to track maturity plans either with ~maturity::...
labels or maturity issues.
Maturity plans are highly encouraged - but not required - for non-marketing categories.
Planning and OKRs
GitLab uses quarterly OKRs that cascade into Product OKRs and product group OKRs.
You should have plans for the next three months in terms of driving specific product metrics through discovery and delivery actions.
You should discuss the product metrics with your manager and your design and engineering counterparts, and discuss the actions to reach those results with your design and engineering counterparts.
You can read more about the OKR process at GitLab at the two links shared above.
Planning Issue for Milestone
For each milestone, the planning quads come together to scope and plan work for the group for the upcoming milestone. Planning begins asynchronously with the creation of the planning issue. The planning issue is the SSOT for communication and all resources needed to plan a successful milestone. There are many ways to plan a milestone, and the approach should be curated based on the needs of the team. Below are a few examples of planning issues from groups across R&D to aid you in creating one that works best for your team.
As you adapt your own issue, it is recommended you apply the label planning issue
to aid in tracking and to incorporate our Product Principles into the process.
Managing Upcoming Releases
Refer to the Product Development Timeline
for details on how Product works with UX and Engineering to schedule and work on
issues in upcoming releases.
Planning for Future Releases
There are two non-exclusionary ways to plan and communicate work for future releases:
Planning with boards
As a Product Manager you can maintain prioritization of your group’s issues using
a fully prioritized issue board where the ordering of the issues reflects their priority.
Planning with milestones
Product Managers can assign milestones to issues to indicate when an issue is likely
to be scheduled and worked on.
Still, whether an issue can be delivered within a milestone is the decision of the engineering team.
As we consider more distant milestones, the certainty of
the scope of their assigned issues and their implementation timelines is increasingly
vague. In particular, issues may be moved to another project, disassembled, or merged
with other issues over time as they bounce between different milestones.
The milestone of an issue can be changed at any moment. The current assigned milestone
reflects the current planning, so if the plan changes, the milestone should be updated
as soon as possible to reflect the changed plan. We make sure to do this ahead
of starting work on a release. Capacity is discussed between the PMs and the
engineering managers.
There are helper labels to signal these plans, like ~next::1-3 releases
and its variants.
Special milestones
In addition, we have two special milestones: Backlog
and Awaiting further demand
.
Product Managers assign issues to these milestones once they have reviewed them and
agree they make sense, but the issues do not fit within the upcoming release milestones
due to either a lack of comparative urgency or because we have not yet seen enough user
demand to prioritize the item. The best way to demonstrate urgency on
either of these items is to vote on them and, if possible, add comments
explaining your use case and why this is important to you.
Recommendations for when to change ‘Awaiting further demand’:
- Always focus on the overall value of the feature.
- Do you have a good understanding of the user problem?
- Do you have a good understanding of the impacted user base?
- Was the proposed solution validated?
Issues with the ‘Awaiting further demand’ milestone often represent poorly understood requests that require more information from our users and the market.
Often public feedback only comes from a small percentage of people using or evaluating a feature or product.
You should always consider reaching out directly to our users to learn more about their use cases.
Recommendation when changing a previously planned issue to Backlog: when moving
a previously planned issue to Backlog, especially one planned for within the next release or two,
consider the message that this may be sending to parties that were interested in this feature.
In some cases, they may have been depending on or planning around the issue being delivered around
the assigned milestone, and with the change to Backlog that is now unlikely to occur. In these instances,
it is best to concisely explain the rationale behind the change in a comment, so
the community can understand and potentially respond with additional justification or
context. It is also encouraged to move the issue to the Backlog
as soon as it is clear that it will not be scheduled in the near future. This will help with understanding the change, as it will not seem like a last-minute change.
Clearly communicating changing priorities might also encourage the community to contribute the issue to GitLab.
Again, the milestone of an issue can be changed at any moment, including for both
of these special milestones.
Shifting commitment mid-iteration
From time to time, there may be circumstances that change the ability for a team
to ship the features/issues they committed to at the beginning of the iteration.
These steps also apply when an issue is broken into multiple issues.
When this happens, as a PM you must coordinate with your EM counterpart to ensure that
the impacted issues and their milestones
are updated to reflect the new reality (for example, remove the deliverable
tag, update the milestone, etc.). Additionally, notify your manager of the shift.
Utilizing our design system to work autonomously
Our design system provides the means to work
autonomously, without always needing UX insight, feedback and design. When problems can
be solved using an already documented paradigm, you don’t need to wait for UX
approval to bring an issue to a reasonable state within a first iteration.
If lingering questions remain, subsequent iterations can address any shortcomings
the feature might have.
Always consider that with a dedicated product designer, it’s much faster and cheaper to iterate on a design than to re-implement it.
At the same time, not everything needs a design, and the design system is here to support your engineers and you in those cases.
Iteration Strategies
Iteration is a core value of GitLab, and product management has a central role to play in it. Iteration should be apparent as we deliver new features in MVCs, but it has implications for discovery too. As solution validation can move much faster than delivery, we should aim to validate features before building them. At that point, the validated feature is likely much bigger than an MVC would be if we built it. As product managers we should pay special attention to still aiming for iterative delivery after a bigger feature set has been validated, as delivered features provide the final validation. For example, once a direction is validated, we can start the delivery with documentation. As product managers we should aim to iterate as part of solution validation, and also while delivering already validated solutions.
Here are several strategies for breaking features down into tiny changes that can be developed and released iteratively. This process will also help you critically evaluate if every facet of the design is actually necessary.
Workflow steps
As part of design and discovery, you likely created a minimal user journey that contains sequential steps a user is going to take to “use” the feature you are building. Each of these steps should be separated. You can break them down further by asking yourself these questions:
- Is it desirable to perform this action via the UI, or can we use a non-UI approach as a start (for example, CLI, API, or a .csv download of data)? A non-UI approach is a great starting point before adding UI components that achieve the same thing (see the sketch after this list).
- Will there be different UI paths to perform the same task? Identify which are the most useful and which are the easiest to implement. Weigh both factors when determining which to start with, and build from there.
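For example, a first iteration of “expose this data to users” might be a plain .csv download long before any UI exists. The sketch below is illustrative only; the data shape and function name are hypothetical.

```python
# Minimal sketch: shipping data as a plain .csv download before any UI exists.
# The data shape and function name are hypothetical placeholders.
import csv
import io

def coverage_report_csv(rows):
    """Render rows (a list of dicts) as CSV text that a user can download."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["pipeline_id", "coverage_pct"])
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

# The same data a future UI chart would visualize, available today as a file.
print(coverage_report_csv([{"pipeline_id": 101, "coverage_pct": 87.4}]))
```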
User operations
View, Create, Update, Remove and Delete are actions users take while interacting with software. These actions naturally provide lines along which you can split functionality into smaller features. By doing this, you prioritize the most important actions first. For example, users will likely need to be able to visually consume information before they can create, update, remove, or delete.
Functional criteria
Often, the criteria that features are built on are implicit. It can help to use a test-driven development mindset where you write the tests and the outcomes you need from the software before building the software. Writing these tests can uncover the different criteria you need the development team to meet when building the new feature. Once you’ve outlined these tests, you may be able to use them to continue to break down the feature into smaller parts for each test (see the sketch after the list below). Here are a few examples:
- What is the default behavior when there is no data (empty/null state)?
- Are there automatic actions or events that occur as part of your feature? Write them down, and identify those that can be done manually by the user before adding automation.
- Will users of different roles have unique experiences? Can you prioritize and build one of these experiences first? (for example: guest, user, developer, maintainer).
- Do users want to be able to customize their view of information? Define all of the customizations you want to offer, and build them one at a time (for example, toggle on/off, filter, sort, search).
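One lightweight way to apply this mindset is to draft the acceptance tests as stubs before any implementation exists; each stub can then become its own small issue. The sketch below is hypothetical: the feature (a merge request coverage summary) and the test names are invented for illustration and are not an existing GitLab test suite.

```python
# Minimal sketch: acceptance tests written as stubs before the feature exists,
# so each one can be broken out into its own small issue. The feature
# ("merge request coverage summary") and test names are hypothetical.
def test_empty_state_when_no_coverage_data():
    # Default behavior with no data: show an explicit empty state, not an error.
    ...

def test_guest_role_sees_no_coverage_details():
    # Different roles get different experiences; build one role's view first.
    ...

def test_summary_updates_automatically_on_new_pipeline():
    # Automatic behavior that could start as a manual refresh in an earlier iteration.
    ...

def test_summary_can_be_filtered_by_file_path():
    # A customization (filtering) that can land as its own later iteration.
    ...
```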
Exception & error cases
Software often fails and can fail in different ways depending upon how it is architected. It is always best to provide the user with as much information as possible as to why something did not behave as expected. Creating and building different states to handle all possible errors and exceptions can easily be broken down into individual issues. Start by creating a generic error state to display when anything goes wrong, and then add on to handle different cases one by one. Remember to always make error messages useful, and add additional error messages as you identify new error states.
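As a minimal sketch of this approach, the generic error state ships first and specific cases are layered on in later iterations. The exception types and message wording below are illustrative placeholders, not an existing GitLab implementation.

```python
# Minimal sketch of iterating on error handling: ship a single generic error
# state first, then layer on specific cases in later iterations. The exception
# types and message wording are illustrative placeholders.
def render_import_error(exc: Exception) -> str:
    # Iteration 2+: specific, actionable messages, added one case at a time.
    if isinstance(exc, PermissionError):
        return "You do not have access to the source project. Check your token scopes."
    if isinstance(exc, TimeoutError):
        return "The import timed out. Try again, or import a smaller project."
    # Iteration 1: one generic state that covers anything unexpected.
    return "Something went wrong during the import. Please try again or contact support."

print(render_import_error(TimeoutError()))
```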
Breaking down the UI
Breaking down a design into pieces that can be released iteratively is going to depend on what you are building. Here are a few helpful questions to guide that process:
- What components already exist that you can reuse to go faster?
- What constitutes “extra styling”? Is there a way to display the information you need to display plainly and then add details later?
- Do you have lots of interactions in the design that make the UX lovable? Can you pull those out into separate issues and add them iteratively? (e.g. hover states, drag & drop, toggles, options to show/hide info, collapse/expand, etc)
Refactors
Continuously improving the software we write is important. If we don’t proactively work through technical debt and Deferred UX as we progress, we will end up spending more time and moving slower in the long run. However, it is important to strike the right balance between technical debt, deferred UX, and iteratively developing features. Here are some questions to consider:
- What is the impact if we do not refactor this code right now?
- Can we refactor some of it? Is a full re-write necessary?
- Why do we need to use that new technology? (You may need to ask WHY multiple times to get to the root of the problem)
Separate announcement from launch
For large projects, consider separating the announcement from the actual feature launch. By doing so, it can create more freedom to iterate during the customer rollout. For example, you could announce in advance to give customers ample notice, and then roll it out to new customers first, then to existing Free customers, then to existing paid customers. Or you could do the opposite, and roll it out to customers first, before announcing broadly, to ensure the user experience is great before making a marketing splash.
When considering dates for a product announcement or launch that may impact our Field team, consider the blockout restrictions recognized by the Field team to ensure there won’t be any major disruption to the business near quarter end.
Four phase transition
Sometimes the objective is to cut over from one experience, or one system, to another. When doing so, consider having four transition phases rather than a hard cut over. The phases are: 1) Old experience only. 2) Run the old and new experiences side by side, with the old experience as the default and the new experience gradually rolled out to a subset of users. 3) Run them side by side, with the new experience as the default for the majority, but keep the old experience available as a fallback in case of problems. 4) Deprecate the old experience and offer only the new experience. This strategy enables teams to have more flexibility and demonstrate more iteration in the rollout, with reduced risk.
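As a rough illustration of phases 2 and 3, a percentage-based rollout can shift users deterministically from the old experience to the new one. This is only a sketch: the function, flag percentages, and user IDs below are hypothetical and not GitLab’s actual feature flag implementation.

```python
# Minimal sketch of phases 2-3: run old and new experiences side by side and
# gradually shift users to the new one. The function, percentages, and user
# IDs are illustrative; this is not GitLab's actual feature flag tooling.
import hashlib

def use_new_experience(user_id: int, rollout_percentage: int) -> bool:
    """Deterministically bucket a user so their experience is stable between visits."""
    bucket = int(hashlib.sha256(str(user_id).encode()).hexdigest(), 16) % 100
    return bucket < rollout_percentage

# Phase 2: old experience is the default, 10% of users see the new one.
# Phase 3: raise the percentage (e.g. 90) while keeping the old path as a fallback.
for user_id in (1, 2, 3):
    experience = "new" if use_new_experience(user_id, rollout_percentage=10) else "old"
    print(user_id, experience)
```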
Iterate to go faster
When something is important, it is natural to want to launch it all at once to get to the end game faster. However, big bang style launches tend to need everything perfect before they can happen, which takes longer. With iteration you get feedback about all the things that aren’t a problem and are done enough. It’s better to launch in small increments, with a tight feedback loop, so that the majority of users have a great experience. This tends to speed up the overall timeline, rather than slow it down.
Remote Design Sprint
A Design Sprint is a 5-day process used to answer critical business questions through design, prototyping, and testing ideas with customers. This method allows us to reduce cycle time when coming up with a solution.
As an all-remote company we run Remote Design Sprints (RDS). Check out our guidelines for running an RDS to determine if it’s the right approach for the problem at hand.
Spikes
If you’re faced with a very large or complex problem, and it’s not clear how to most efficiently iterate towards the desired outcome, consider working with your engineers to build an experimental spike solution. This process is also sometimes referred to as a “technical evaluation.” When conducting a spike, the goal is to write as little code as possible, within the shortest possible time frame, to provide the information the team needs to determine how best to proceed. At the end of the spike, the code is usually discarded, as the original goal was to learn, not to build a production-ready solution. This process is particularly useful for major refactors and creating architecture blueprints.
Feedback issues
When launching a feature that could be controversial or in which you want to get the audience’s feedback, it is recommended to create a feedback issue.
Timeline:
- Create the issue and include it in the release post.
- If announcing in Slack or doing dogfooding, include a link to the feedback issue
- Leave the issue open for at least 14 days after launch
- Respond and catalog the feedback into separate issues
- Close the issue once the time frame has passed and summarize the learnings from the feedback issue
Here are some examples of feedback issues:
Feedback issue considerations
Feedback issues are intended to collect feedback from the wider community and users. In some cases, internal users will be posting on behalf of users and customers. As a result, we need to consider the following:
- Feedback issues that are public cannot contain SAFE information
- A linked confidential issue for Field feedback can be used, if needed, to support the exchange of customer details and feedback
- Leverage internal comments as needed if customer details are being shared
Other best practice considerations
Consider the following to improve iteration:
- Successfully iterating should mean you’re delivering value in the most efficient way possible. Sometimes, this can mean fixing an underlying technical issue prior to delivering a customer facing feature.
- Wherever possible, consider reuse of components that already exist in the product. A great example of this was our approach to creating our Jira importer, which reused the Jira service integration. Reuse also aligns well with our efficiency value.
- Avoid technical dependencies across teams, if possible. This will increase the coordination cost of shipping and lead to a slow down in iteration. Break down silos if you notice them and consider implementing whatever you need yourself.
- Consider a quick POC that can be enabled for a small portion of our user base, especially on GitLab.com. An example of this was search, where it was originally enabled just for a few groups to start, then slowly rolled out.
- Great collaboration leads to great iteration. Amazing MVCs are rarely created by product managers alone; they often arise out of collaboration and discussion between product, engineering, design, quality, etc.
- Keep the initial problem statement front and center for the team. Tight problem statements enable the team to identify a tight, iterative solution.
- Bring data to the table early to help the team triangulate on the smallest iteration that will have the largest impact in solving the identified problem.
- If the project is multi-phase, consider iterative targets and guardrails to help the team focus on the next iterative milestone, rather than the final end state goal.
- If your team needs to do repetitive work on behalf of customers, partners, or other GitLab teams, consider using a framework approach so that dependent teams can self-serve.
Engaging directly with the community of users is an important part of a PM’s job. We
encourage participation and active response alongside GitLab’s Developer Relations team.
Conferences
A general list of conferences the company is participating in can be found on our
corporate marketing project.
There are a few notable conferences that we would typically always send PMs to:
If you’re interested in attending, check out the issue in the corporate marketing
site and volunteer there, or reach out to your manager if you don’t see it listed
yet.
Stakeholder Management
What is a Stakeholder?
A stakeholder, or stable counterpart, is someone that is outside of your direct team who meets one or more of the following:
- Is directly or indirectly impacted
- Has the ability to stop, delay, or cancel
Examples of stakeholders include Leadership, Sales, Marketing, Customer Support, and Customer Success. You may have stakeholders in any area of GitLab depending on your focus area and the specific issue. Stakeholders are also present outside of GitLab, for example, when a feature is being developed for a specific customer or set of customers. If you’re not sure who the stakeholder is to collaborate with or keep informed, visit product sections, stages, groups, and categories.
Updated SSOT for stakeholder collaboration
Stakeholder collaboration and feedback is a critical competitive advantage here
at GitLab. To ensure this is possible, and facilitate collaboration, you should
maintain an updated single source of truth (SSOT) of your stage direction, category
strategies, and plan, at all times. This equips anyone who wants to contribute to
your stage’s product direction with the latest information in order to effectively
collaborate. Some sections and teams use the scheduled Direction Update issue template to
remind themselves of this task.
Actively and regularly reach out to stakeholders. Encourage them to view and collaborate
on these artifacts via these (non-exhaustive) opportunities:
Here is some guidance for new PMs to ensure your stage direction, category strategies and plan
are up-to-date and visible to critical stakeholders:
- Seek feedback from the CAB once every six months.
- Present your plan to your manager once a month.
- Present the plan and stage/category strategies to your stable counterparts
- Present your stage strategy and plan in a customer meeting once every two weeks.
- Present changes to your stage strategy, category strategies, and plan to your
stage group weekly meeting once a month.
Working with customers
Customer meetings
It’s important to get direct feedback from our customers on things we’ve built, are building, or should be building. Some opportunities to do that will arise during sales support meetings. As a PM you should also have dedicated customer discovery meetings or continuous interviews with customers and prospects to better understand their pain points. As a PM you should facilitate opportunities for your engineering group to hear directly from customers too. Try to schedule customer meetings at times that are friendly to your group, invite them, and send them the recording and notes. If you’re looking for other ways to engage with customers, here is a video on finding, preparing for, and navigating Customer Calls as a Product Manager at GitLab.
Sales support meetings
Before the meeting, ensure the Sales lead on the account has provided you with sufficient
background documentation to ensure a customer doesn’t have to repeat information they’ve already
provided to GitLab.
During the meeting, spend most of your time listening and obtaining information.
It’s not your job to sell GitLab, but it should be obvious when it’s the right time
to give more information about our products.
For message consistency purposes, utilize the Value Drivers framework when posing questions and soliciting information.
After the meeting:
- Create an interview snapshot
summarizing the meeting in the gitlab-com/user-interviews project.
This project is private so that detailed and unredacted feedback can be shared internally.
- Link the Google Doc where detailed notes were taken.
- Create or update related issues to publicly document feedback.
The synthesis of feedback from multiple meetings should happen publicly in an epic or issue.
Customer discovery meetings
Customer discovery meetings aren’t UX Research. Target them to broad-based needs
and plan tradeoff discussions, not specific feature review. There are
two primary techniques for targeting those topics:
- Top Competitors - Identify the top 3 competitors in your categories and talk to
customers using those competitors, asking: what is missing to have you switch from
X to us? We’re not aiming for feature parity with competitors, and we’re not
just looking at the features competitors talk about; we’re talking with
customers about what they actually use, and ultimately what they need.
- User Need - Identify GitLab users from key customers of your group’s
categories and features. Ask them what they love about the features, as well as
about their current pain points with both the features and the surrounding
workflows when using those components of GitLab.
Follow the guidance below to prepare for and conduct Customer Discovery Meetings:
Set up a meeting:
- Identify what you’re interested in learning and prepare appropriately
- You can find information about how customers are using GitLab through Sales and version.gitlab.com. Sales and support should also be able to bring you into contact with customers
- There is no formal internal process to schedule a customer meeting, however you can check this template for gathering questions from interested parties and for capturing the notes during the customer discovery meetings.
During the meeting:
- Spend most of your time listening and documenting information
- Listen for pain points, delightful moments and frustrations
- Read back and review what you’ve written down with the customer to ensure you’ve captured it correctly.
After the meeting:
- Document your findings. Create a folder (sharable only within GitLab) in Google Drive with a structure as follows:
  - Customer Meetings
    - Customer Name A
      - 2020-04-01
        - agenda (Google Doc)
        - artifacts (folder for docs, images, etc.)
      - 2020-10-03
    - Customer Name B
  - Competitive Research
    - Vendors
      - Vendor A
        - summary (Google Doc, optional)
        - 2020-04-01
        - 2020-10-03
      - Vendor B
  - Projects
    - product-10132-code-scan-results (reference GitLab issue number)
    - ux-13840-selector-widget
- Share your findings with your fellow product managers and the sales and customer success account teams for the customer
- Make appropriate adjustments to category strategies, feature epics, and personas
You can find some additional guidance on conducting Customer Discovery Meetings from these resources:
Sourcing customers
PMs should also feel free to collect and evaluate customer feedback independently. Looking at existing
research can yield helpful
themes as well as potential customers to contact. You can use the following techniques to source customers directly:
GitLab Solution Architects know our customers the best, especially from a technical perspective.
GitLab Issues customers will often comment on issues, especially when the problem described by the issue
is a problem they are experiencing firsthand. The best strategy is to capture their feedback directly on the issue,
however, there are times when this is not possible or simply doesn’t happen. You can find alternative contact info by clicking on the user’s handle to see their
GitLab user page; this page often includes contact information such as Twitter or LinkedIn. Another option is to
directly mention users in issues to engage async. In popular issues you can just leave a general comment that you’re looking for people to interview and many will often volunteer.
Customer Issues Prioritization Dashboards: The customer issues prioritization framework aggregates customer data with the issues and epics that they have requested. When viewing the dashboard, double click on the issue or epic of interest within the “priority score by noteable” table then scroll down to “QA Table - User request weighting by customer” to see the specific customers that are interested in the issue or epic.
GitLab.com Broadcast Messages Broadcast Messaging is a great tool for acquiring customer feedback from within the product. You can leverage this workflow to use broadcast messaging.
GitLab Sales and Customer Success You can ask for help in Slack customer success channel
or join the Field Sales Team Call and the All CS Team Call to present a specific request via the Zoom call.
Customer Success Managers (CSM) If a customer has a dedicated CSM, they may also have a regular meeting with a CSM. These meetings are a great opportunity to spend 15 minutes getting high-level feedback on an idea or problem. In Salesforce, CSMs are listed in the Customer Success section in the customer’s account information. CSMs are also very familiar with the feature requests submitted by their customers and can help identify customers that may be interested in the feature you are working on.
Zendesk is a great tool to find users who are actively making use of a feature and who came across a
question or an issue. Users who’ve had recent challenges using the product really appreciate PMs taking the time to learn from
their experience. This establishes that we are willing to listen to users, even if they are not having a great experience.
This is also a great opportunity to discuss the roadmap and provide context so that users understand what we are going to improve.
The best way to request a chat is through the support ticket; however, you can also click
on the user that initiated the interaction and their contact information will display on the left hand side panel.
If you don’t have a Zendesk account, see how to request a light agent Zendesk account.
You can use Zendesk’s trigger feature to receive email alerts when specific keywords relevant
to your product area are mentioned in a support ticket. Additionally, it is possible to create a simple dashboard that lists all the currently active support tickets that match the trigger. Reach out
in #support_escalations to receive some help in setting this up.
Social Media can also be effective. If your personal account has a reasonable number of connections/followers, you can post your desire to connect with users on a specific question directly. When posting, remember to include the subject you want to discuss as well as how people can reach out. You can also reach out to the #social-media
channel to have your tweet retweeted by the @gitlab account.
If you want to reach a wider audience, consider asking a community advocate to re-post using the official GitLab account for the relevant platform.
You can reach advocates on the #community-advocates
Slack channel.
You can also reach out to authors of articles related to tech your team is working on, via various publications such as Medium. A clear and brief email
via the publication website or LinkedIn is a good way to engage.
You’re able to request a LinkedIn Recruiter license. This Unfiltered video and slide deck provide an overview on how to use LinkedIn Recruiter to source participants for your study.
If you’ve tried these tactics and are still having challenges getting the customer feedback you need, connect with your manager for support and
then consider leveraging the UX Research team.
Additionally, you can connect with Product Operations directly or by attending Product Operations Office Hours for troubleshooting support.
Non-users are often more important than GitLab users. They can provide the necessary critical view to come up with
ideas that might turn them into GitLab users in the end. The best non-users are the ones who don’t even plan on switching
to GitLab. You can reach these people at local meetups, conferences, or online groups like Hacker News. In every such case,
you should not try to interview the user on the spot; instead, organize a separate meeting where nobody will be distracted and
both of you can arrive prepared.
Customer Advisory Board meetings
One specific, recurring opportunity to get direct feedback from highly engaged customers
is the GitLab DevOps Customer Advisory Board.
You may be asked by the CAB to present your stage at these meetings. Here are
some guidelines when doing so:
- Since it will be sent out in advance of your presentation, take the opportunity to update your stage strategy video
- Start the presentation with an overview of your stage strategy
- Emphasize the importance of feedback and dialog in our prioritization process
- Highlight recently completed plan items that were driven by customer feedback
- Come prepared with five questions to facilitate a discussion about industry trends,
plan tradeoffs, pain points and existing features
- Don’t simply look for places to improve, seek to clarify your understanding of what customers
currently value and love
Working with (customer) feature proposals
When someone requests a particular feature, it is the duty of the PM to investigate
and understand the need for this change. This means you focus on the problem
that the proposed solution tries to solve. Doing this often allows you to find that:
- A solution already exists within GitLab
- Or: a better or more elegant solution exists
Do not take a feature request and just implement it.
It is your job to find the underlying use case and address that in an elegant way that is orthogonal to existing functionality.
This prevents us from building an overly complex application.
Take this into consideration even when getting feedback or requests from colleagues.
As a PM you are ultimately responsible for the quality of the solutions you ship,
make sure they’re the (first iteration of the) best possible solution.
Competition channel
When someone posts information in the #competition channel that warrants
creating an issue and/or a change in features.yml, follow this procedure:
- Create a thread on the item by posting I'm documenting this
- Either do the following yourself, or link to this paragraph for the person picking this up to follow
- If needed: create an issue
- Add the item to features.yml
  - If GitLab does not have this feature yet, link to the issue you created
- Finish the thread with a link to the commit and issue
Reaching out to specific users or accounts based on GitLab usage
You may want to interview a specific account because they are exhibiting atypical usage patterns or behaviors. In this case, request Support to contact GitLab.com user(s) on your behalf.
If it is the weekend, and the contact request is urgent as a result of an action that might affect a user’s usage of GitLab, page the CMOC.
Assessing opportunities
Opportunity canvas
One of the primary artifacts of the validation track is the Opportunity Canvas. The Opportunity Canvas introduces a lean product management philosophy to the validation track by quickly iterating on level of confidence, hypotheses, and lessons learned as the document evolves. At completion, it serves as a concise set of knowledge which can be transferred to the relevant issues and epics to aid in understanding user pain, business value, the constraints to a particular problem statement and rationale for prioritization. Just as valuable as a validated Opportunity Canvas is an invalidated one. The tool is also useful for quickly invalidating ideas. A quickly invalidated problem is often more valuable than a slowly validated one.
Please note that an opportunity canvas is not required for product functionality or problems that already have well-defined jobs to be done (JTBD). For situations where we already have a strong understanding of the problem and its solution, it is appropriate to skip the opportunity canvas and proceed directly to solution validation. It might be worth using the opportunity canvas template for existing features in the product to test assumptions and current thinking, although not required.
Reviews
Reviewing opportunity canvases with leadership provides you with an opportunity to get early feedback and alignment on your ideas. To schedule a review:
- Contact the CProdO EBA to schedule a 25 minute meeting. Let the EBA know if you are scheduling a comparative or singular Opportunity Review
- The CProdO and VP of UX should be included as required attendees.
- The Product Section Leader, Direct Manager, UX counterpart and Product Operations should be included as optional attendees.
- Complete the Opportunity Canvas(es) at least one business day before the meeting to give attendees an opportunity to review content. The attendees will review the canvas(es) in advance and will add questions directly to the canvas document(s).
- When the Opportunity Canvas(es) is complete, inform the meeting participants by tagging them in a post in Slack #product. Include a direct link to the canvases.
- During the review, feel free to present anything you’d like. For comparative reviews it’s helpful to start with your proposal for which Opportunity to pursue first. For singular reviews it’s fine to go straight to Q&A since the attendees should have reviewed the canvas in advance.
References:
Opportunity canvas lite
Opportunity Canvases are a great assessment for ill-defined or poorly understood problems our customers are experiencing that may result in net new features. As noted previously, opportunity canvases can also be used for existing features, but the full canvas is tailored for new feature development; for existing features, the Product-Opportunity-Opportunity-Canvas-Lite
issue template is a better fit. This template offers a lightweight approach to quickly identify the customer problem, business case, and feature plan in a convenient issue. The steps to use the template are outlined in its Instructions section; for clarity, you would create an issue from this template for an existing feature you are interested in expanding. For example, this template would be great to use if you are evaluating the opportunity to add a third or fourth iteration to an MVC. This issue should leverage already available resources and be used to collate details to then surface to leadership for review. Once you fill out the template, assign it to the parties identified in the issue; you can always post in the #product
channel for visibility.
Analyst engagement
Part of being a product manager at GitLab is maintaining engagement with
analysts, culminating in various analyst reports that are applicable to your
stage. In order to ensure that this is successful and our products are rated
correctly in the analyst scorecards, we follow a few guidelines:
- Spend time checking in with the analysts for your area so they are familiar with our story and features earlier, and so we can get earlier feedback. This will ensure better alignment of the product and the way we talk about it will already be in place when review time comes. Remember, analysts maintain a deep understanding of the markets they cover, and your relationship will be better if it is bi-directional. Inquire with analysts when you have questions about market trends, growth rates, buyer behavior, competitors, or just want to bounce ideas off of an expert.
- Make paying attention to analyst requests a priority, bringing in whoever you need to ensure they are successful. If you have a clear benefit from having executives participate, ask. If you need more resources to ensure something is a success, get them. These reports are not a “nice to have”, ad-hoc activity, but an important part of ensuring your product areas are successful.
- When responding to the analyst request, challenge yourself to find a way to honestly say “yes” and paint the product in the best light possible. Often, even if at first glance we think we don’t support a feature or capability, with a bit of reflection and thought you can adapt our existing features to solve the problem at hand. This goes much smoother if you follow the first point and spend ongoing time with your analyst partners.
- Perform retrospectives after the analyst report is finalized to ensure we’re learning from and sharing the results of how we can do better.
It’s important to be closely connected with your product marketing partner,
since they own the overall engagement. That said, product has a key role to play
and should be in the driver’s seat for putting your stage’s best foot forward in
the responses/discussions.
Engage with internal customers
Product managers should take advantage of the internal customers that their
stage may have, and use them to better understand what they are really using,
what they need and what they think is important to have in order to replace
other products and use GitLab for all their flows.
We want to meet with our internal customers on a regular basis, setting up
recurring calls (e.g., every two weeks), and invite them to share their
feedback.
This is a mutual collaboration, so we also want to keep them up to date with the
new features that we release, and help them to adopt all our own features.
USAT responder outreach
Each quarter, we reach out to User Satisfaction (USAT) survey responders who opted-in to speak with us. This is a fantastic opportunity to build bridges with end users and for Product Managers and Product Designers to get direct feedback for their specific product area. If a user has taken the time to share a verbatim with us and offered to have a conversation, they deserve to be followed up with - especially if that user is dissatisfied with GitLab.
When we speak to users directly during this workflow, we must be mindful of Product Legal guidance and the SAFE framework, just as we would be with any other documentation or communication within Product.
Overall process
- UX Researcher DRI opens a Responder Outreach issue and notifies Product team members in the comments that the issue is ready.
- Product team members go through the list of USAT responders who have agreed to a follow up conversation. Those team members either sign up for outreach or tag in Product Managers or Product Designers where appropriate.
- Product team members then view the sheet and confirm who they want to talk with.
- Product team members reach out to users and schedule interviews.
- Product team members add notes and video recordings from the interviews to the USAT column in this Dovetail project.
- Product team members mark which users they interviewed, the link to the session recording, and include any additional notes about the session in the follow up users sheet.
- As Product team members create or continue to work on issues related to USAT follow up interviews, they should add the following label (USAT::Responder Outreach) to help the UX Research team track the impact of those interviews.
Note: GitLab Customer Success Managers can also follow the process above, so please be mindful to coordinate with them if they reach out or if they’ve already signed up to speak with a user. Users should never be contacted by more than one GitLab team member. Users should never be contacted more than twice if they do not respond to an outreach email.
Instructions for product leaders
- Look at the
USAT Follow Up Users
Google Sheet that will be shared with you in an issue. Identify any users you think a Product Manager or Product Designer from your group would be interested in speaking to. Assign the specific Product Manager or Product Designer to reach out to that user by putting their name in the appropriate column. This will also serve as a “hold” on the user and if others are interested they will need to coordinate with that team member.
- If you think another Product Manager or Product Designer in your group or another group would be interested in speaking to the same user, consider notifying that team member for the sake of efficiency.
- If you’re interested in having one of your Product Managers or Product Designers speak with a user that has already been “claimed” by another GitLab team member, have your Product Manager or Product Designer reach out to that team member so they can coordinate a joint conversation. We need to be mindful of our users’ time and should limit this outreach to a single conversation rather than successive conversations.
Instructions for Product Managers and Product Designers
- Another GitLab team member may put your name next to users they felt were relevant for you to speak with.
- If you are unable or unwilling to speak with the user, please either remove your name or find a replacement.
- If you see other users that have not been assigned to another team member and you feel may be relevant to speak with, assign that user to yourself.
- If you see other users that have been assigned to another team member, reach out to that team member and coordinate a joint conversation. It is very important that you do not reach out to users that have been assigned to other team members, as we want to be mindful of our users’ time and not risk negative sentiment due to over-communication. We are limiting these conversations to one per user for these reasons.
Process for reaching out to users
- Calendly is the best method for scheduling users. Set up your free Calendly account if you haven’t done so. Add details to the invite description describing yourself and the conversation purpose. Also add your personal Zoom link, either via connecting your Zoom account or pasting in your personal Zoom URL.
- You’ll need to add three extra questions to the invite form in order to ask for consent to record; an example is below. Please use these questions as written in the example, as they closely mirror the content that has been validated by the UX Research Team.
- Draft an email that you’ll send to users. Example copy is below. You can re-phrase things as you wish but make sure you still cover the same points as the example.
- BE ON TIME TO YOUR CALL. Better yet, be 2 minutes early. Be ready to coach people through getting Zoom to work properly. Make sure everyone on the call introduces themselves.
- If people have agreed to recording, still ask them once again if it’s OK if you record before turning it on. Obviously, do not record people who did not give consent.
- See our training materials on facilitating user interviews.
Example email copy:
Hello,
My name is X and I’m the Product Manager/Designer for X at GitLab. Thank you for giving us the opportunity to follow up on your response to our recent survey.
I would be very interested in speaking further about some of the points you raised in your survey response. Would you be willing to do a 30 minute Zoom call to give us some more detailed feedback on your experience using GitLab? You’d be able to schedule the call at a time convenient to you.
Schedule a time for the call using this link:
https://calendly.com/yourname/30min
Thank you for your feedback and let me know if you have any questions.
Best,
Your name
Copy for three extra questions in Calendly invite:
To make sure we correctly represent what you say in any followup issues or discussions, we would like to record this conversation. Please indicate if you give permission to record this conversation.
Yes, you may record our conversation.
No, you MAY NOT record our conversation.
At GitLab, we value transparency. We would love to share the recording of the conversation publicly on GitLab. Please indicate whether you give your permission for the recording to be shared on GitLab.
Yes, you may share the recording publicly on GitLab.
No, you MAY NOT share the recording publicly on GitLab.
I agree that by participating in this, and any future, research activities with GitLab, GitLab B.V. will retain all intellectual property rights in any suggestions, ideas, enhancement requests, feedback, or other recommendations I provide which are hereby assigned to GitLab B.V.
Yes
No
After the call
- If multiple GitLab employees are on the call, it can be beneficial to debrief immediately afterwards.
- Collect all notes that were taken and the Zoom recording from the interview and add them to the USAT column in this Dovetail project.
- If you told the user you’d follow up on anything or promised to send them further information, make sure you do so, ideally within two business days.
- Go back to the spreadsheet and mark that you spoke to a user in the Status column and add a link to the recording in Dovetail.
- If you create any epics/issues to address feedback gathered in the calls, add the label USAT::Responder Outreach and link them to the corresponding USAT responder outreach issue from that quarter.
Note: It’s important to tag your USAT related issues to help tracking/reporting such as the improvement slides in Product Key Reviews.
Cost profile and user experience
Every Product Manager is responsible for the user experience and cost profile of their product area regardless of how the application is hosted (self-managed or gitlab.com). If a feature is unsustainable from a cost standpoint, that can erode the margins of our SaaS business while driving up the total cost of ownership for self-managed customers. If a feature is slow, it can impact the satisfaction of our users and potentially others on the platform.
There are a few questions a Product Manager should ask when thinking about their features:
- What are the costs associated with my product area? What is the impact on the margin for each tier of GitLab.com?
- Consider network, compute, and storage costs
- Are there tools in place to help GitLab, Inc. and self-managed admins optimize the cost footprint for running GitLab (e.g. node rebalancing, transitioning objects to less costly storage classes, garbage collection capabilities)?
- Are there features and default settings that help users stay within their CI and Storage limits?
- Are there configurable application limits in place for admins to enhance the availability and performance of GitLab and reduce abuse vectors?
- What is the experience of users when interacting with these features on GitLab.com? Is it fast and enjoyable?
These items do not all need to be implemented in an MVC, though potential costs and application limits should be considered for deployment on GitLab.com.
Product Managers should also regularly assess the performance and cost of features and experiences that they are incrementally improving. While the MVC of the feature may be efficient, a few iterations may increase the cost profile.
There are a few different tools PMs can utilize to understand the operational costs of their features. Some of these are maintained by Infrastructure, based on the operational data of GitLab.com. Other tools, like service ping, can be utilized to better understand the costs of our self-managed users. Ultimately, each product group is responsible for ensuring they have the data needed to understand and optimize costs.
Links to learn more about infrastructure cost management initiatives
Life Support PM Expectations
When performing the role of Life Support PM, only the following are expected:
- Management of next three milestones
- Attend group meetings or participate in async discussion channels
- Provide prioritization for upcoming milestones
- MVC definition for upcoming milestones
- Increase fidelity of scheduled issues via group discussion
- Ensure features delivered by the group are represented in the release post
Some discouraged responsibilities:
- Long-term MVC definition
- One year plan
- Category Strategy updates
- Direction page updates
- Analyst engagements
- CAB presentations
Build vs “Buy”
As a Product Manager, you may need to decide whether GitLab should engineer a solution to a particular problem or use off-the-shelf software to address the need.
First, consider whether our users share a similar need and whether it's part of GitLab's scope. If so, strongly consider building it as a feature in GitLab.
If the need is specific to GitLab, and will not be built into the product, consider a few guidelines:
- Necessity: Does this actually need to be solved now? If not, consider proceeding without it and gathering data to make an informed decision later.
- Opportunity cost: Is the need core to GitLab’s business? Would work on other features return more value to the company and our users?
- Cost: How much do off-the-shelf solutions cost? How much would it cost to build in-house, given existing expertise and the opportunity cost?
- Time to market: Is there time to engineer the solution in-house?
If, after evaluating these considerations, buying a commercial solution is the best path forward:
- Consider who owns the outcome, as the spend will be allocated to their department. Get their approval on the proposed plan.
- Have the owning party open a finance issue using the `vendor_contracts` template, and ensure the justification above is included in the request.
Evaluating Open Source Software
When considering open source software in build vs. “buy” decisions, we utilize the following general criteria to decide whether to integrate a piece of software:
- Compatibility - Does the software utilize a compatible open source license?
- Viability - Is the software, in its current state, viable for the use case in question?
- Velocity - Is there a high rate of iteration with the software? Are new features or enhancements proposed and completed quickly? Are security patches applied regularly?
- Community - Is there a diverse community contributing to the software? Is the software governed by broader communities or by a singular corporate entity? Do maintainers regularly address feedback from the community?
Analytics instrumentation guide
Please see Analytics Instrumentation Guide
Post Launch Instrumentation Guide
Goal:
Increase product instrumentation across our offerings to deliver greater product insights. We need to retroactively evaluate which features from past launches have been instrumented and which still need instrumentation. Post-launch instrumentation will allow us to gather insights and provide better visibility into feature usage and adoption that may not currently be captured.
Tasks:
- Issue Request
- PM:
- Alignment
- PM/PDI: Once all stakeholders have been added to the issue, the Product Data Insights team will set time with the PM counterpart to align on:
- Goals
- Priorities
- Milestones
- TPgM may assist in implementation of planning documentation.
- Category Inventory & Instrumentation Mapping
- PM/PDI: Work together to outline a category inventory using this spreadsheet template.
- Category level implementation should be prioritized by most utilized features and the areas we believe have the largest impact on the business.
- From there, PM and Product Data Insights counterparts will utilize labels outlined here in step 3 for markers of implementation status.
- The PM will lead mapping of instrumentation at a category level, in close partnership with the Product Data Insights counterpart.
- For any metric or event that has been identified as contributing to a category's instrumentation, the correct `product_category` should be set in the definition file (see the illustrative snippet after this task list).
- Audit & Review
- PM/PDI: will audit and review the implementation asynchronously to quality check and ensure accuracy. TPgM may assist in QA.
- Update the categories.yml file
- PM: Update the categories.yml file with the applicable implementation status (see below). Utilizing the categories.yml file, the Product Data Insights team will create a Tableau dashboard to track implementation at a category level over time.
- Complete - Instrumentation complete and satisfactory
- Incomplete - Some instrumentation, but not complete
- None - No instrumentation - instrumentation needed
- Not needed - Instrumentation not needed
- Analytics Instrumentation
- PM/PDI: Once the category instrumentation audit has been completed, identify categories marked as either red (needing implementation) or yellow (some instrumentation, not complete).
- PM/EM: will create an instrumentation issue for those categories with the label `analytics instrumentation`, using the usage data instrumentation template.
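For illustration only, a metric definition with `product_category` set might look like the sketch below. The file path and every field other than `product_category` are assumptions based on a typical Service Ping metric definition, not a verified schema; refer to the Analytics Instrumentation Guide above for the authoritative format.

```yaml
# Hypothetical metric definition file, e.g. config/metrics/counts_28d/example_feature_users_28d.yml
# All field names except product_category are illustrative assumptions.
key_path: usage_activity_by_stage_monthly.plan.example_feature_users
description: Distinct users of the example feature in the last 28 days
product_category: example_category  # set to the category from the inventory spreadsheet
value_type: number
status: active
time_frame: 28d
data_source: database
```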
Page load performance metrics
To better understand the perceived performance of GitLab, a synthetic page load performance testing framework based on sitespeed.io is available.
A Grafana dashboard is available for each stage, tracking the Largest Contentful Paint and first/last visual change times. These metrics together provide high-level insight into the experience our users have when interacting with these pages.
Adding additional pages to performance testing
The Grafana dashboards are managed using grafonnet, making it easy to add additional pages and charts.
Testing a new set of pages requires just 2 steps:
- Add the desired URLs to the sitespeed unauthenticated or authenticated testing list. Add a new line with the URL, then a space, and an alias of the form `[Group]_[Feature]_[Detail]`. The alias needs to be one word; an example MR is here. Note the authenticated user account does not have any special permissions, it is simply logged in.
- Open the relevant stage's grafonnet dashboard file. Find the section corresponding to the desired group and add an additional call to `productCommon.pageDetail`. The call arguments are the Chart Title, the Alias from above, and the tested URL (see the sketch after these steps). Ensure the JSON formatting is correct; the easiest way is to simply copy/paste from another line. A sample MR is available here.
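As a rough sketch of the two changes, assuming a hypothetical page and alias (the URL, alias, and chart title below are made up; the argument order follows the description above, and the exact surrounding structure is easiest to copy from an existing line in each file):

```
# 1. Hypothetical line added to the sitespeed unauthenticated testing list:
https://gitlab.com/gitlab-org/gitlab/-/issues Plan_IssueList_Default

# 2. Hypothetical call added to the stage's grafonnet dashboard file:
productCommon.pageDetail('Issue list (default view)', 'Plan_IssueList_Default', 'https://gitlab.com/gitlab-org/gitlab/-/issues'),
```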
Assign both MRs to a maintainer. After they are merged, the stage's Grafana dashboard will be automatically updated. A video walkthrough is available as well.
Analytics Instrumentation Overview
At GitLab, we collect product usage data for the purpose of helping us build a better product. Data helps GitLab understand which parts of the product need improvement and which features we should build next. Product usage data also helps our team better understand the reasons why people use GitLab. With this knowledge we are able to make better product decisions.
There are several stages and teams involved to go from collecting data to making it useful for our internal teams and customers.
The purpose of continuous interviews and how to set them up
Overview
The Cross-Functional Prioritization framework exists to give everyone a voice within the product development quad (PM, Development, Quality, and UX). By doing this, we are able to achieve and maintain an optimal balance of new features, security fixes, availability work, performance improvements, bug fixes, technical debt, etc. while providing transparency into prioritization and work status to internal and external stakeholders so they can advocate for their work items. Through this framework, team members will be able to drive conversations about what’s best for their quad and ensure there is alignment within each milestone.
Context
The Customer Prioritization Framework was developed by the Issue Prioritization Framework Working Group as a way to improve the efficiency of the feedback loops among Sales, Customer Success, and Product. It provides a comprehensive system to categorize and measure customer, and prospective customer, demand for capabilities within GitLab. This page covers how the first iteration of the model works and how to interact with and interpret the internal-only (not public) dashboards that it powers.
Dogfood everything
The best way to understand how GitLab works is to use it for as much of your job as possible.
Avoid dogfooding antipatterns
and try to find ways to leverage GitLab (instead of an external tool) whenever
possible. For example: try to use Issues instead of documents or spreadsheets
and try to capture conversation in comments instead of Slack threads.
As a PM, but also as any GitLab team member, you should actively use every feature,
or at minimum, all the features for which you are responsible. That includes
features that are not directly in GitLab’s UI but require server configuration.
Alignment & vision of the GitLab Early Access Program
R&D OKR Overview
This page provides an overview of the joint R&D OKR workflow. All departments within R&D, which includes the Product and Engineering Divisions, collaborate by following this guidance. For clarifications on the OKR process, team members can post in Slack #product or #engineering-fyi.
Timeline and process for OKRs
The OKR process is designed to tie into the company's overall OKR process. That process is driven largely by the date of the Key Review meetings, so the Product process keys off of that date as well. As a result, dates will not necessarily align with the start of a fiscal quarter.
Animated GIFs are an awesome way of showing off features that need a little more than just an image, either for marketing purposes or for explaining a feature in more detail. This page holds all information on the entire process of creating a GIF.
General
The GIF format is popular because it works everywhere and has a no-fuss UI. – Kornel
GIFs are used everywhere for a reason, but as you can read in the referenced article above, they are also expensive. Expensive in that a GIF can quickly become a big file, which takes longer to load. To create great-looking GIFs that walk the line between file size and quality, some steps need to be considered.
How to launch a product or service at GitLab.
Overview
This section of the handbook is a collection of product management processes that can be leveraged in your practice as a product manager. Some of these are best practices and suggestions that are not required in your day-to-day as a product manager, while others are highly recommended workflows that are tried-and-true paths to successful results. These are sourced by our Product Management Department and regularly reviewed by our Director+ Product Management leaders.
Overview
This guide for GitLab Product Managers clarifies and expands on the Regulation FD Training.
Making changes to this page
To make any edits to this page, please create a merge request and add a description of what you want to change and why. Add the labels `product operations`, `prodops:release`, and `product handbook`. Add the Product Operations DRI/Maintainer @fseifoddini as Reviewer for collaboration and approval. If Product Operations is unavailable and the topic is time-sensitive, please add Maintainer @gweaver for collaboration and approval.
When planning, Product Managers plan to GitLab milestones. Here is the process for creating and maintaining them.
Product Milestone Creation
One quarter ahead, the Engineering team, in partnership with the Product team, will create all of the necessary milestones for the next quarter. Our standard practice is to have the Major release every May, resulting in:
XX.0 - May
XX.1 - June
XX.2 - July
XX.3 - August
XX.4 - September
XX.5 - October
XX.6 - November
XX.7 - December
XX.8 - January
XX.9 - February
XX.10 - March
XX.11 - April
Milestone start and end dates are defined as follows:
Overview
This section of the handbook is a collection of processes that are required to be followed under certain conditions: for example, if a change is being made or if a request is submitted to leadership for approval.
How this page works
In the spirit of “everyone can co-create”, these procedures can be contributed to by anyone in the Product Division (or anyone at GitLab!). The custodian of the procedures is Program Management. If you are interested in contributing to this page, please open a merge request and assign it to Natalie Pinto, GitLab handle @natalie.pinto.
This is the process for quarterly board meeting prep, specific to the Product / R&D Org. This process is revisited on a quarterly basis and aligns with the [broader company process](/handbook/board-meetings/#board-and-committee-composition). Feedback always welcome!
What are sensing mechanisms?
Our ability to iterate quickly is a measure of our efficiency, but our effectiveness is just as critical. As a product manager, you are essential to ensuring we are not just working correctly, but working on the correct things. You do that by prioritizing appropriately. Your prioritization decisions will be enhanced if you maintain a sufficient understanding of the context in which you make them.
There is no limit to the number of inputs you can utilize for making prioritization decisions. We've organized these mechanisms into three lists: one for those that primarily sense feedback from users, one for those that primarily sense feedback from buyers, and another for those that sense internally generated feedback, which could represent buyers or users. For new PMs, consider these lists as guidance for places to stay plugged in so you maintain sufficient context.
Tiering strategy
Free is targeted at individual contributor developers. It is a complete DevOps solution and contains capabilities from all ten GitLab stages.
Premium is targeted at Director level buyers and is for teams. The pricing themes for Premium are Faster code reviews, Advanced CI/CD, Enterprise agile planning, Release controls and Self managed reliability. Premium helps teams iterate faster and innovate together.
Ultimate is targeted at Executive level buyers and is for organizations. The pricing themes for Ultimate are Advanced security testing, Security risk mitigation, Compliance, Portfolio management, and Value stream management. Ultimate helps organizations deliver better software faster with enterprise ready planning, security and compliance.
We use GitLab to document product strategy and manage our backlog. A couple of concepts that are key to this process are:
- Milestones: Align with our product releases and are used as our group’s planning timeboxes.
- Issues: Capture an atomic piece of user value, which should be able to be delivered within a single milestone.
- Tasks (optional): Decompose an Issue into more detailed implementation steps.
- Epics: Group related issues together into a theme or goal. A best practice is for epics not to be everlasting containers but to represent a concrete scope of work, with the goal that the epic can be closed once the work is complete.
- Boards: Aid in visualizing work moving through the product development flow and in milestone planning.
- Roadmaps: Aid in visualizing epics in a timeline view.
Issues
We use issues to define narrowly scoped items of work to be done. Issues can focus on a variety of different topics: UX problems, implementation requirements, tech debt, bugs, etc. A good guideline for experience-related issues is that they should address no more than one user story. If an issue includes multiple user stories, then it is likely an epic.