Digital Experience Handbook

Learn more about the Digital Experience team's purpose, vision, mission, objectives, and more in this handbook.

Overview

🙌 Purpose

Why we exist

We take a customer-centric approach to educating prospects on how GitLab enables them to deliver software faster and more securely.

Team Members

Role Name
Senior Product Designer Tina Lise Ng
Senior Product Designer Trevor Storey
Frontend Engineer Javi Garcia
Senior Frontend Engineer Megan Filo
Senior Frontend Engineer Laura Duggan
Senior Frontend Engineer Marg Mañunga
Staff Frontend Engineer Nathan Dubord
Fullstack Engineer Miracle Banks
Senior Fullstack Engineer John Arias
Senior Fullstack Engineer Mateo Penagos
Engineering Manager Lauren Barker
Director Filza Qureshi

Scope

Our team leads GitLab's digital marketing platform, known simply as the "Marketing Site" (https://about.gitlab.com). We own the following repositories:

Our team strengths & core capabilities:

  • Engineering and UX design for about.gitlab.com
  • Cross-functional collaboration
  • Speed and delivery
  • Customer journey maps

Our team supports:

  • HTML email templates
  • PathFactory
  • Marketo
  • learn.gitlab.com
  • www-gitlab-com
  • Globalization

We do not support:

Teams we work closely with:

  • SEO
  • Analytics
  • Product Marketing
  • Content Marketing
  • Brand Strategy
  • Marketing Ops
  • Blog
  • Globalization
  • Events
  • Competitive Intelligence

OKRs

We collaboratively define OKRs as a team with cross-functional partners in advance of each quarter. Once OKR candidates are complete, we review and size/scope them, and align on which best help achieve our business objectives.

Current Quarterly Plan

FY25Q2 Digital Experience Quarterly Plan & OKRs

Iteration Process

We start our iteration on a Monday. We release throughout the iteration. On the Thursday a week after our Monday iteration planning, we come together for a synchronous meeting to discuss wins, blockers, start/stop/continues, and select 3 things we’d like to highlight in our Slack channel.

Issue Board

Labels and Workflow Boards

We use issue boards to track issue progress throughout an iteration. Issue boards should be viewed at the highest group level for visibility into all nested projects in a group.

The Digital Experience team uses the following issue labels for distinguishing ownership of issues between specialities:

Who Title
User Experience ~dex::ux
Engineering ~dex::engineering

The Digital Experience team uses the following labels for tracking merge request rate and ownership of issues and merge requests.

What & Current Issues Label
Work to be triaged ~"dex-status::triage"
Refinement on issue is needed ~"dex-status::refinement"
Issues in the backlog ~"dex-status::backlog"
Issues to be worked on ~"dex-status::to-do"
Currently being actioned ~"dex-status::doing"
Work in review ~"dex-status::review"
Unplanned work ~"dex-unplanned"
Issue for Conversion team to complete ~"dex-group::conversion"
Issue for Optimization team to complete ~"dex-group::optimization"
Issue for product designer to complete ~"dex::ux"
Issue for engineer to complete ~"dex::engineering"

Digital Experience teams work across the GitLab codebase on multiple groups and projects including:

Estimation

Before work can begin on an issue, it should be estimated after a preliminary investigation. This is normally done in the iteration planning meeting.

Weight Description (Engineering)
1 The simplest possible change. We are confident there will be no side effects.
2 A simple change (minimal code changes), where we understand all of the requirements.
3 A simple change, but the code footprint is bigger (e.g. lots of different files, or tests affected). The requirements are clear.
5 A more complex change that will impact multiple areas of the codebase; there may also be some refactoring involved. Requirements are understood, but you feel there are likely to be some gaps along the way.
8 A complex change that will involve much of the codebase or will require lots of input from others to determine the requirements.
13 A significant change that may have dependencies (other teams or third parties) and we likely still don’t understand all of the requirements. It’s unlikely we would commit to this in a milestone, and the preference would be to further clarify requirements and/or break it into smaller issues.

In planning and estimation, we value velocity over predictability. The main goal of our planning and estimation is to focus on the MVC, uncover blind spots, and help us achieve a baseline level of predictability without over-optimizing. We aim for 70% predictability instead of 90%. We believe that optimizing for velocity (merge request rate) enables our Growth teams to achieve a weekly experimentation cadence.

  • If an issue has many unknowns where it’s unclear if it’s a 1 or a 5, we will be cautious and estimate high (5).
  • If an issue has many unknowns, we can break it into two issues. The first issue is for research, also referred to as a Spike, where we de-risk the unknowns and explore potential solutions. The second issue is for the implementation.
  • If an initial estimate is incorrect and needs to be adjusted, we revise the estimate immediately and inform the Product Manager. The Product Manager and team will decide if a milestone commitment needs to be adjusted.

Planning (Iteration Plan Sync)

Iteration planning is an event that kicks off the start of an iteration. The purpose of the meeting is to discuss capacity and plan work for the upcoming iteration.

Cadence: 50min, bi-weekly (zoom)

What:

  • Capacity Discussion
  • Review Iteration Board

Iteration Release: Retrospective and Feedback

We celebrate our wins and discuss what’s working and what’s not. Our discussion topics include identifying any blockers, deciding what we should stop doing, what we should start doing, and what we should continue doing. We then select three pieces of completed work to highlight in the iteration announcement issue and on Slack. Here’s the agenda we use for our planning and release meetings.

When: Thursdays, 50min, bi-weekly (zoom)

What:

  • Celebrate wins
  • Topics for discussion
  • Any blockers?
  • What should we stop doing
  • What should we start doing
  • What should we continue doing
  • 3 things to highlight

Retrospective

The retrospective is an event held at the end of each quarter, used to discuss what went well, and what can be improved on. We use a quarterly retrospective issue to keep track of our progress. Access them here.

When: Week 12 of a quarter, 50min

What:

  • Discuss what went well and what can be improved on.
  • Communicate any process changes, etc.

Iteration Changelogs

At the end of every iteration, we run a scheduled pipeline job that generates a changelog for the Buyer Experience repository. It shows all the changes made to the project using semantic commits.
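Semantic commit messages typically follow a type: description pattern that changelog tooling can group by type. A purely illustrative example (these are not real commits from the project):

```
feat: add customer logos block to the pricing page
fix: correct broken anchor links in the footer navigation
chore: bump the navigation package to the latest minor release
```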

FAQ:

Iteration boards

How long is an iteration?

An iteration is 2 weeks, running from Monday to the following Thursday.

Where can I find the iteration boards?

Iteration boards are created at the team level, and the individual level:

Digital Experience > Issues - Boards > Then selecting an individual’s name or group from the dropdown.

What are the iteration boards used for?

Iteration boards are meant to give an overview of what the team is working on, and to provide a rough idea on what the team is capable of producing in an iteration.

How do I move issues from start to finish?

At the start of an iteration, all issues will have the ~"dex-status::to-do" label. As issues are worked on, the dex-status label will need to be updated. This can be done by dragging the issue between columns (on your individual board), or by manually changing the dex-status label on the issue.
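You can also update the label from the issue itself with GitLab quick actions in a comment; for example, to move an issue from to-do to doing:

```
/unlabel ~"dex-status::to-do"
/label ~"dex-status::doing"
```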

What if I wasn’t able to complete my iteration board?

Don’t stress: weight points are estimates, and unforeseen events happen. Any carryover can be added to the next iteration.

What if I complete my iteration board early?

A few options for when an individual’s iteration board is complete:

  1. Offer assistance to other team members.
  2. Pull a new issue from the backlog into the current iteration.
  3. Sharpen skills/tools.
  4. General housekeeping.

Weight points

What is a weight point?

A weight point is a unit of measurement that’s used to develop a rough estimate of the work required to complete an issue. 1 weight point is measured as 0.5 days.

How many weight points should an issue be?

The suggested task duration is between 2 and 4 weight points (1-2 days). There will be exceptions, but it’s recommended to break issues into smaller units of work. Small units of work allow for quicker review cycles and facilitate collaboration.

How many weight points should be completed per iteration for engineers?

Engineers are expected to complete 12 or more weight points per iteration, which is equivalent to 6 days of engineering. This expectation leaves room for merge request reviews and job requirements described in the Frontend Engineer and Fullstack Engineer job family descriptions. For senior engineers, the expectation is 15+ points.

Issues

What should I do if I’m assigned new issues mid-iteration?

Generally if an issue is added mid-iteration, it’s high priority. It’s recommended to work with your team to remove the same amount of weight points from your iteration to make room. These removed issues should go back in the backlog.

Do I need to add any labels?

Before entering an iteration, an issue should already be refined with the proper labels. The only label that changes is the dex-status label (as the issue moves from start to finish).

What if I’m assigned an issue that I can’t close due to content/data gathering?

Unfortunately, there will always be edge case issues that cannot be resolved in an iteration. To mitigate the amount of carryover, it’s recommended to break the issue into smaller chunks.

Example A: If an issue is open while waiting on content/assets, it’s best to create a content/asset gathering issue and close the original issue.

Example B: If an issue is open while gathering data from an A/B test, it may be best to create an issue to start the information gathering and a separate issue to analyze the data at the end.

Weekly Check In

We use Geekbot to conduct asynchronous, weekly check-ins on iteration progress.

Each member of the Digital Experience team should be listed as a participant in the weekly check ins, and everyone should have permissions to manage the application for our team. The app can be configured through the Geekbot Dashboard, which you can visit directly, or find by clicking the Geekbot Slack conversation, navigating to the About tab, and clicking App Homepage.

Production Change Lock (PCL)

Similar to the engineering department, we sometimes temporarily halt production changes to the Buyer Experience repository when team availability is reduced, or we expect atypical user behavior (such as during high-visibility public events).

Risks of making a production environment change during these periods include immediate customer impact and/or reduced engineering team availability in case an incident occurs. Therefore, we have introduced a mechanism called Production Change Lock (PCL). We are listing the events here so that teams are aware of the PCL periods.

The following dates are currently scheduled PCLs. Times for the dates below begin at 09:00 UTC and end the next day at 09:00 UTC.

Dates Reason
2024-12-20 to 2025-01-05 End of 2024, limited coverage

During PCL periods, merge requests and deployments can only be made by senior team members, managers, and levels of management above our team.

Figma Process

GitLab Product Process

From time to time, our team has objectives that require us to collaborate on the GitLab product. Read more about the process for our engineers to onboard.

Special cases during release post schedule: we hold off on making changes to the www-gitlab-com repository during release post days. The release post process is handled by a different team, and it can be disruptive to their work when we release changes to dependencies, CI/CD, or other major changes around their monthly release cadence.

Repository Health Contributions

At the end of every iteration cycle, Digital Experience team members can spend one day working on issues that improve the health of about.gitlab.com or the developer experience, tackle tech debt, or improve our documentation.

The structure of Repository Health Day is as follows:

  1. Team members will choose what they wish to work on for this day.
  2. Each team member will submit a single merge request to the Slippers Design System, Navigation, or Buyer Experience repository by the end of repository health day.
  3. This merge request will be related to an issue from any partner or group within GitLab.

By allowing our team members to contribute to the health of our repositories for a day, we can contribute low-effort, high-impact solutions that will drive results for our team, partners, and the entire marketing site. This will enable Digital Experience team members to use their strengths to efficiently drive results for https://about.gitlab.com/. We’re all good at different things and come from different backgrounds. Let’s use that to our advantage to build a better tech stack that is inclusive of the team members that use it every day.

Analytics

For any Digital Experience analytics request, please create an issue within the Marketing Strategy and Analytics project using the dex_analytics_request template to outline specific requirements. To ensure smooth milestone planning, please assign the issue to @dennischarukulvanich, ideally a week or more in advance.

Sales Shadows

How to set up a Sales Shadow

SMB

  1. Contact a Sales Development Manager (Jonathan Rivat) or Director, Sales Dev Operations (Ramona Elliott).
  2. Let them know what team you’re from and that you’d like to shadow a few sales calls to observe real GitLab prospects talking to our Sales team to learn [insert what you’re trying to learn here. Example: what the common topics potential customers want to discuss with our Sales team are.]
  3. Inform the Sales Development Manager or Director, Sales Development how many shadows you’d like to do and a rough timeline for when you’d like to do them.
  4. The Sales Development Manager or Director, Sales Development will inform their Sales Development Reps (SDRs), and they will add you to relevant, upcoming Discovery calls with an Account Executive (AE).
  5. Accept the invite and review any supplied material when you add it to your calendar.
  6. When joining the call, remember:
    1. You’re there to observe. If asked to introduce yourself, come off mute and do so, then go back on mute and let the Sales team do what they do.
    2. Keep your camera on.
    3. Have a notes doc prepared and take notes on your observations and insights.
  7. After the call, review your notes, synthesize your observations, and create action items.
  8. Send a thank you message to the Sales team members who hosted you.
  9. Once all shadows are completed, share your notes and insights with the team.

Team Shadow Expectations

Whoever gets closest to the customer wins. With this in mind, the Digital Experience team is expected to shadow Sales calls regularly.

Contact Us

Slack Group

@digital-experience: use this handle in any channel to mention every member of our team.

Slack Channels

#digital-experience-team

Slack Application

We have created a Slack application called Dex Bot to notify our team about important CMS changes. Read more about it here.

GitLab Unfiltered Playlist

Watch our team in action on YouTube!

Digital Experience

Requesting Support

Things we don’t do

  1. Content changes. You can do these yourself using our CMS, Contentful:
    1. Here’s a quick video on how to search for and edit existing content for the marketing site. For completely new pages, please fill out an issue.
    2. Want to learn more about our Contentful CMS? Here’s the documentation
  2. Create content. You can collaborate with our excellent Global Content team for these needs.
  3. Create branded assets, custom graphics, or illustrations. Our Brand design team is so good at this, you definitely want their expertise.

Issue template to submit an idea to drive our business goals

We love collaborating on work that drives our North Star and supporting metrics. If you have an idea, a strategic initiative, or an OKR that requires our support, here’s how you can kick off our collaboration:

  1. Review the FAQ section related to pre-work that will increase the chances your issue is prioritized.
  2. Create an issue using this template

DEX team members with platform access

LaunchDarkly
  • @dcharukulvanich
  • @fqureshi
  • @jgarc
  • @lduggan
  • @mpenagos-ext
  • @meganfilo
  • @mpreuss
  • @miraclebanks
  • @ndubord

Marketing site deployment process

From the repositories we own, the Buyer Experience repository and the GitLab Blog push their built files to the same GCP bucket as www-gitlab-com. When a pipeline is triggered (by a merge or a webhook) in any of these projects, a deployment job specific to that repository runs, pushing the built files into the bucket and merging them with the existing files. This process is managed by the Deploy.sh file in each repository:

Mermaid diagram

To keep our bucket clean, we run a scheduled pipeline with a delete flag in these repositories, which deletes outdated files from the cloud bucket (such as pages removed from the marketing site and old JS bundles).

The deletion of BE files is handled in the same WWW delete job by pulling the latest build artifacts from the BE repository.
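As a rough sketch of the sync-and-delete idea only (the bucket name and directories below are placeholders, and the real logic lives in each repository's Deploy.sh and the scheduled delete job), the flow could look something like this:

```typescript
// Hypothetical sketch, not the actual Deploy.sh.
import { execSync } from "node:child_process";

const BUCKET = "gs://example-marketing-site"; // placeholder bucket name
const BUILD_DIR = "./public";                 // assumed build output directory

// Regular deploy: push this project's built files into the shared bucket,
// merging them with files already deployed by the other projects.
execSync(`gsutil -m rsync -r ${BUILD_DIR} ${BUCKET}`, { stdio: "inherit" });

// Scheduled cleanup: gsutil rsync's -d option removes bucket files that are
// missing locally, so a delete job would first assemble the latest build
// artifacts from every contributing project (e.g. in ./combined) before pruning.
if (process.env.DELETE_STALE === "true") {
  execSync(`gsutil -m rsync -r -d ./combined ${BUCKET}`, { stdio: "inherit" });
}
```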

Digital Experience FAQ

Previous Quarterly Plans & OKRs

  • FY25Q1 Digital Experience Quarterly Plan & OKRs
  • FY24Q4 Digital Experience Quarterly Plan & OKRs
  • FY24Q3 Digital Experience Quarterly Plan & OKRs
  • FY24Q2 Digital Experience Quarterly Plan & OKRs
  • FY24Q1 Digital Experience Quarterly Plan & OKRs
  • FY23Q4 Digital Experience Quarterly Plan & OKRs
  • FY23Q3 Digital Experience Quarterly Plan & OKRs
  • FY23Q2 Digital Experience Quarterly Plan & OKRs
  • FY23Q1 Digital Experience Quarterly Plan & OKRs
  • FY22Q4 Digital Experience Quarterly Plan & OKRs
  • FY22Q3 Digital Experience Quarterly Plan & OKRs

Content Wireframe Instructions

The Digital Experience team is primarily responsible for facilitating content, not creating it. Please prepare a content plan:

  • Provide the layout you think would work best from existing pages or existing blocks
  • Provide the content in the layout of the existing block or page template

Image Requirements

SEO Requirements

  • Know the URL and keywords you want to use
    • SEO and keyword analysis from the Search Team Issue Templates is recommended.

Accessibility
Defining Accessibility for the Digital Experience team.

Analytics

Buyer Experience Repository
Learn more about the Buyer Experience repository.

Coding Standards for Digital Experience
Learn more about the coding standards used by the Digital Experience team at GitLab.

Contentful CMS
Editing and creating content using Contentful.

Core Marketing Site Architecture Plan

Core Marketing Site Changes

Data Dictionary
Our goal is to ensure the consistency of data attribute keys and values for tagging the Marketing site. This will result in properly formatted event data getting added to the dataLayer and sent to Google Analytics.

Dex Bot
Digital Experience team Slack application.

DEX Code Review Guidelines
Code reviews are mandatory for every merge request; you should get familiar with and follow our Code Review Guidelines specific to GitLab Marketing projects.

DEX Core Web Vitals
This page provides an overview of Core Web Vitals, key metrics for optimizing website performance and user experience, and introduces various tools for monitoring and improving these metrics, including Google Analytics, Google Search Console, ContentKing, and DebugBear.

Digital definitions
The purpose of this page is to present definitions for technical jargon and explanations of related technologies.

Digital Experience: Foundations Agenda
The goal of this page is to identify blockers and highlight the value of unlocking our team.

Engineering A/B tests
Learn more about how Digital Experience engineers our A/B tests.

Engineering GitLab Product
Learn more about how Digital Experience engineers work with the GitLab Product.

Engineering Marketo
Learn more about how Digital Experience engineers work with Marketo.

Figma Process
The purpose of this page is to present guidelines for Figma.

Image Guidelines
The purpose of this page is to present guidelines for image file formats.

Incident Response Matrix
A guide for incidents that may occur on the Marketing site.

Localization best practices
How to have a smooth translation process on the Buyer Experience project from start to finish, with common pitfalls and tips to make that easy for translators, stakeholders, and engineers.

Major League Hacking Fellows
Information on the MLHF cohorts working with Digital Experience.

Marketing Cookies
Learn more about how Digital Experience uses browser cookies.

Marketing Site Approval Process

Going forward, all changes to the marketing site (about.gitlab.com) must follow an approval process prior to being merged into the website.

Executive Summary

The lack of an approval process for changes going to production on GitLab’s Marketing site creates a risk for our business because, as it stands today, anyone can push a change live to our site at any time (details of the risk are outlined in the Problem Statement below). We are introducing a review process for any contributions going live on the Marketing Site.

Marketo page template
Marketo guided template for modules. Each module can be toggled on or off and has options such as background color.

Navigation Repository

The navigation repository (also known as be-navigation) is a separate package that is updated and maintained independently from the rest of the marketing website. This is so we can make changes in one place and have any consuming repositories pull from that single source of truth. The navigation is currently maintained by the Digital Experience team.

Navigation follows Semantic Versioning. The current released version can be found on this npm page under Versions.

OneTrust
OneTrust is privacy, security, and data governance software that marketing uses as our privacy and compliance solution on our websites.

OneTrust Cookie Consent Implementation

Why OneTrust?

The Digital Experience team uses OneTrust as a tool for cookie consent.

This implementation is on about.gitlab.com specifically, but if added to the top-level gitlab.com domain it will propagate down to all subdomains. This allows visitors to any GitLab domain to configure their cookie preferences once across all GitLab tools.

The OneTrust Tool

Within OneTrust, we have access to a variety of controls:

  • Dashboards showing the rate of consent and declines in various regions
  • Categorization of all cookies (i.e. Functional vs Performance cookies)
  • The test scripts and production scripts that should be placed in the <head> of the GitLab website
  • The design of the banner and modal (button colors, logo)

To request access to the OneTrust tool, please reach out to #mktgops in Slack. Note that we cannot change regions or types of consent without legal review.