Calendar Year 2018 Q2 OKRs

View GitLab's Objectives and Key Results (OKRs) for Q2 of calendar year 2018.

CEO: Grow Incremental ACV according to plan. 120% of plan, 3x minus-in-quarter pipeline, 70% of salespeople at 100% of quota.

  • CMO: Build 3x minus-in-quarter pipeline for Q3. % of plan achieved.
    • MSD: Generate sufficient demand to support our IACV targets. % of opportunity value creation target achieved.
      • SDR: Support efficient inbound demand creation. Achieve 110% of SAO plan.
      • Content: Publish content on the marketing site to accelerate inbound demand. Publish v2 of /customers. Publish /devops.
      • Content: Execute DevOps integrated campaign to support SDR and Field Marketing demand generation. Produce 2 webinars with 500 registrants. Distribute Gary Gruver’s book to 600 people.
      • Field Marketing: Develop account based marketing strategies to deploy in support of Regional Director territory plans. Generate 41% of opportunity creation target worth of referral sourced opportunities.
      • Field Marketing: Execute on the field event plan pre-, during, and post-event. Generate 12% of opportunity creation target worth of opportunity sourced through field events.
      • Marketing Ops: Improve campaign tracking. Track UTM parameter values in salesforce.com for closed-loop reporting.
      • Marketing Ops: Improve trial experience and trial engagement. Launch new email nurture series educating trial requesters on EEU. Increase trial to opportunity conversion rate by 20%.
      • Online Growth: Extend SEO/PPC/Digital Advertising programs. Generate 31% of opportunity creation target worth of opportunity originating from the marketing site. Increase the amount of traffic to about.gitlab.com by 10% compared to last quarter.
      • Online Growth: Evaluate and build out ROIDNA CRO project. Increase GitLab EEU trial sign-ups by 15%. Increase GitLab EE downloads by 15%.
      • SCA: Support self serve SMB business. Achieve 130% of SMB IACV plan.
      • SDR: Generate outbound opportunity value. Source 16% of opportunity creation target worth of opportunity through outbound prospecting.
    • PMM: Complete messaging roll-out and activation, to include: Sales, Partner, and Marketing enablement; tier-plan-specific messaging and positioning; a demo aligned to the new positioning and messaging; and presenting the new messaging at key conferences.
    • PMM: Optimize the trial sign-up, trial enablement, and trial period experience, including the addition of a GitLab.com trial, and enhance the trial nurture program.
    • PMM: Deliver a strong submission for the Gartner Application Release Orchestration (ARO) MQ, contribute to the SCM Market Guide update, and continue the briefing sweep with all key Gartner and Forrester analysts.
    • Outreach: Raise awareness. Double active evangelists. Launch education program. Double number of likes/upvotes/replies.
    • Outreach: Keep being an open source project. Increase active committers by 50%
    • Outreach: Get open source projects to use GitLab. Convert 3 large projects to self-managed GitLab.
  • CMO: Enough opportunities for strategic account leaders. Make the Q2 SCLAU forecast.
    • MSD: Achieve SCLAU volume target. Inbound SCLAU generation and outbound SCLAU generation.
      • SDR: Achieve SDR SCLAU volume targets. % of revised Q2 SDR targets.
      • Field Marketing: Achieve Field Marketing SCLAU volume targets. % of revised Q2 Field Marketing targets.
  • CRO: 120% of plan achieved.
  • CRO: Success Plans for all eligible customers.
    • Customer Success: Enabling a transition to Transformational Selling
      • Solutions Architects: Each Solutions Architect records videos of their top 5 specialized use cases (including pitching services), reviewed by the Customer Success leadership team.
      • Customer Success Managers: Do a quarterly business review for all eligible customers
      • Professional Services Engineering: 75% of Big and Jumbo opportunities include Professional Services line item
    • Customer Success: 80% of opportunities advanced from stage 3 (Technical Evaluation) to stage 4 (Proposal) based on guided POCs.
      • Solutions Architects: 100% of Solutions Architect team members participate in at least 1 guided POC.
      • Customer Success Managers: 100% of TAM team members participate in at least 1 guided POC.
      • Professional Services Engineering: Create top 3 integration demonstration / test systems (LDAP, Jenkins, JIRA).
  • CRO: Effective sales organization. 70% of salespeople are at 100% of quota.
    • Dir Channel: Increase Channel ASP by 50%
    • Dir Channel: Triple number of resellers above “Authorized Level”
    • Dir Channel: Implement VAR program (SHI, Insight, SoftwareOne, etc)
    • Sales Ops: Complete MEDDPIC sales methodology training. Account Executives and Account Managers should be proficient in capturing all MEDDPIC data points.
    • Sales Ops: Collaborate with Regional Directors to improve our conversion process in the earlier stages, more specifically between 1-Discovery and 2-Scoping as this is historically our lowest conversion.
    • Sales Ops: Complete 1:1 relationship between Accounts, Billing Accounts, and Subscription, where applicable. This will ensure a much cleaner CRM.
  • CFO: Compliant operations. 3 projects completed.
    • Legal: GDPR policy fully implemented.
    • Legal: Contract management system for non-sales related contracts.
    • Billing Specialist: Add cash collection, application and compensation to job responsibilities.
  • VPE
    • Director of Support
      • Support Engineering: 100% SLA achievement for premium self-managed customers => 81%
      • Support Engineering: Document and implement severity-based ticket processing workflow for self-managed customers => 90%
      • Support Services: 100% SLA achievement for GitLab.com customers => 94%
      • Support Services: Develop and document Support Services workflows, processes, and automation needed to deliver world-class customer support => 90%

CEO: Popular next-generation product. Ship first iteration of the complete DevOps lifecycle, GitLab.com uptime, zero-click cluster demo.

  • VP Product
    • Product: Ship first iteration of GitLab for the complete DevOps lifecycle.
    • Product: Create a dashboard of usage of features. Replace Redash with Looker.
    • Product: Identify causes of free and paid churn on GitLab.com.
  • CFO: Make progress on having public clouds run us. 2 public clouds running everything.
    • Dir. Partnerships: Sign agreement to migrate target OS project
    • Dir. Partnerships: Strategic cloud partner chooses GitLab SCM for an offering
    • Dir. Partnerships: Successfully track and present data on the usage touch points for attribution tracking of our cloud agreement
  • CTO: Make sure cloud native installation, PaaS and cluster work well. Zero clicks.
  • CTO: Make sure we use our own features. Monitoring, CE review app, Auto DevOps for version and license.
  • CTO: Jupyter integrated into GitLab. JupyterHub deploys to a cluster and JupyterLab works with GitLab.
  • VPE: Make GitLab.com ready for mission critical customer workloads (99.95% availability) => Rebuilding the team went well. Availability was marred by several incidents, particularly with the database. The GCP project was delayed several times, but now appears to be on a trajectory to complete.
    • Eng Fellow: Improve monitoring by shipping 5 alerts that catch critical GitLab problems
    • UX: Deliver three UX Ready experience improvements per release towards reducing the installation time of DevOps. => 100%
    • UX: Deliver three UX Ready experience improvements per release towards onboarding and authentication on GitLab.com. => 66%
    • Quality: Deliver the first iteration of engineering dashboard charts and metrics. => 100% done, dashboard is up and running.
    • Quality: Complete the organization of files, directories, and LoC into the /ee/ directory. => 70% done, remaining work scoped with issues created. JavaScript and LoC are the most challenging areas.
    • Security: Automated enforcement of GCP Security Guidelines => 90%, full completion dependent on post GCP migration
    • Security: Design, document, and implement security release process and craft epic with S1 & S2 issues and present to product for prioritization => 100%, security release process
    • Frontend: Deliver 100% of committed issues per release (10.8: 30/38 deliverables, 4/12 stretch; 11.0: 32/39 deliverables, 1/12 stretch; 11.1: 37/47 deliverables, 10/20 stretch)
    • Frontend: Integrate the first 3 reusable Vue components based on design.gitlab.com
    • Dev Backend: Define KPIs and build monitoring for release cycle performance => 90%. Prototype working but not shipped.
    • Dev Backend: Create first iteration of engineering management training materials and merge into handbook => 100%
      • Platform: Deliver 100% of committed issues per release (overall: 41/55 (75%) deliverables, 7/39 stretch; 10.8: 15/21 deliverables, 1/14 stretch; 11.0: 12/17 deliverables, 3/13 stretch; 11.1: 14/17 deliverables, 3/12 stretch; see the aggregation sketch after this list)
      • Platform: Ship first GraphQL endpoint to be used by an existing frontend component => 80%, endpoint is shipped, but some more advanced features still need to be added
      • Discussion: Deliver 100% of committed issues per release (overall: 32/39 deliverables, 19/30 stretch; 10.8: 9/10 deliverables, 6/6 stretch; 11.0: 12/14 deliverables, 7/11 stretch; 11.1: 11/15 deliverables, 6/13 stretch)
      • Discussion: Make GitLab a Rails 5 app by default => handful of issues remaining; working on plan for default to customers in 11.3
      • Distribution: Deliver 100% of committed issues per release (10.8: 16/16 deliverables, 1/5 stretch; 11.0: 9/9 deliverables, 1/1 stretch; 11.1: 13/13 deliverables, 2/4 stretch)
      • Distribution: Increase integration test coverage of HA setup => 100% done. End to end HA setup is run on every nightly build and on every release.
      • Geo: Deliver 100% of committed issues per release (10.8: 10/20 deliverables, 3/8 stretch; 11.0: 8/10 deliverables, 2/2 stretch; 11.1: 9/14 deliverables, 0/1 stretch)
      • Geo: Test and perform multi-node secondary failover on GitLab.com to GCP
    • Ops Backend: Design and implement a hiring pool process for Ops backend (possibly in collaboration with Dev Backend) => 100% done (first iteration). Great collaboration with Dev Backend to implement the new hiring pool process.
    • Ops Backend: Goal #2
      • CI/CD: Deliver 100% of committed issues per release (10.8: 27/38 deliverables, 3/10 stretch; 11.0: 39/46 deliverables, 14/25 stretch; 11.1: 9/16 deliverables, 2/6 stretch) => 75% for deliverables, 46% for stretch
      • CI/CD: Cover demo of Auto DevOps with GitLab QA => 100%
      • Monitoring: Deliver 100% of committed issues per release (10.8: 7/16 deliverables, 0/5 stretch; 11.0: 9/15 deliverables, 0/5 stretch; 11.1: 10/15 deliverables, 0/5 stretch) => (57%)
      • Monitoring: Publish official Grafana dashboards (50% complete, first iteration merged)
      • Security Products: Gemnasium infrastructure moved to GitLab => 100%, Gemnasium has been shutdown on May 15th, and everything required for Dependency Scanning migrated to GKE.
    • Infrastructure
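
Several of the key results above ("Deliver 100% of committed issues per release") report per-release counts plus an overall figure. As a reading aid only, the sketch below shows how such an overall percentage falls out of summing the per-release counts, using the Platform numbers as the worked example; the data structure and function name are illustrative, not part of the original OKRs.

```python
# Minimal sketch: roll per-release deliverable counts up into the overall
# completion percentage reported above. Each tuple is (completed, committed).
platform_deliverables = {
    "10.8": (15, 21),
    "11.0": (12, 17),
    "11.1": (14, 17),
}

def overall_completion(per_release):
    """Sum completed and committed counts across releases, then compute the percentage."""
    completed = sum(done for done, _ in per_release.values())
    committed = sum(total for _, total in per_release.values())
    return completed, committed, round(100 * completed / committed)

done, total, pct = overall_completion(platform_deliverables)
print(f"{done}/{total} deliverables ({pct}%)")  # -> 41/55 deliverables (75%)
```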

CEO: Great team. Active recruiting for all vacancies, number of diverse candidates per vacancy, real-time dashboard.

  • CCO: Active recruiting. 100% of vacancies have outbound sourcing.
  • CCO: Double the number of diverse candidates (underrepresented and low rent index). At least one qualified diverse candidate interviewed for each vacancy.
  • CCO: Increase leadership aptitude and effectiveness for the executive team. At least one training per month, one leadership book per quarter, and improvement in 360 feedback.
  • CFO: Real-time dashboard for everything in the Metrics sheet. 100% of metrics.
    • Legal: Scalable solution for hiring added in at least 5 countries.
    • Controller: Close cycle reduced to 9 days.
    • Controller: Audit fieldwork completed with no material weaknesses reported.
  • VPE: Refactor engineering handbook to reflect org structure and have leaders take ownership of their sections => 100% done
  • VPE: Source 150 candidates and hire a director of infra, director of support, and a prod manager: Sourced 150 (100%), Hired 3 (100%) => 100% complete on sourcing and hiring
    • Dev Backend: Source 50 candidates and hire a geo manager: Sourced 50 (100%), Hired 1 (100%)
      • Discussion: Source 150 candidates and hire 3 developers: Sourced 160 (100%), Hired 2 (66%)
      • Platform: Source 150 candidates and hire 3 developers: Sourced 160 (100%), Hired 1 (33%)
      • Distribution: Source 50 candidates and hire 1 engineer: Sourced 52 (110%), Hired 0 (0%)
    • Ops Backend: Source 50 candidates and hire 1 monitoring manager: Sourced 56 (112%), Hired 0 (0%)
      • Monitoring: Source 100 candidates and hire 2 monitoring engineers: Sourced 100 (100%), Hired 0 (0%)
      • CI/CD: Source 100 candidates and hire 2 engineers: Sourced 101 (101%), Hired 1 (50%) => 100% complete for sourcing, 50% complete for hiring
      • Security Products: Source 50 candidates and hire 1 Backend Developer: Sourced 50 (100%), Hired 0 (0%)
    • Quality: Source 100 candidates and hire 2 test automation Engineers: Sourced 128 (128%), Hired 2 (100%) => 100% done for both sourcing and hiring.
    • Frontend: Source 150 candidates and hire 3 developers: Sourced 10 - In progress (6%), Hired 3 (100%)
    • Infrastructure: Source 50 candidates and hire a database manager: Sourced 50 (100%), Hired 0 (0%)
      • Production: Source 200 candidates and hire 4 production engineers: Sourced 71 (35.5%), Hired 3 (75%)
      • Database: Source 100 candidates and hire 1 database engineers and 1 manager: Sourced 20 (20%), Hired 0 (0%)
      • Gitaly: Source 100 candidates and hire 2 Gitaly developers: Sourced 100 (100%), Hired 0 (0%)
    • UX: Source 50 candidates and hire a UX designer: Sourced 74 (148%), Hired 1 (100%)
    • Security: Source 150 candidates and hire an Anti-abuse Analyst, a SecOps Engineer, and a Compliance Analyst: Sourced 202 (135%), Hired 2 (66%)
    • Support Engineering: Source 210 candidates and hire 1 support engineering manager and 6 support engineers: Sourced X (X%), Hired X (X%)
      • Support Services: Source 90 candidates and hire 3: Sourced 20 (22%), Hired 3 (100%)
  • CMO: Hire to plan and prevent burnout. DevRel team plan, corporate marketing team plan, marketing and sales development team plan.
    • MSD: Hire to plan. SDR team plan, Field Marketing team plan, Online Growth team plan

Retrospective

VPE

  • GOOD
    • We rebuilt the Production team with a concerted recruiting effort
    • The Engineering Handbook refactor went smoothly
    • All sourcing goals were met
    • All hires were made
  • BAD
    • Several high-profile DB incidents marred availability of GitLab.com and decreased customer confidence
    • The GCP project was delayed several times in the quarter
  • TRY
    • We need to maintain the improved predictability the GCP project showed at the end of the quarter and close it out in July
    • We need to continue adding to our production and database teams and implement the new org structure
    • Directors and managers need to take ownership of their handbook sections in Q3 and build them out
    • I need to assist directors with their manager hiring in Q3, as well as their trickiest IC vacancies

Frontend

  • GOOD
    • Made our hiring target and successfully onboarded all new hires
    • New Teams inside the Frontend department
    • We shipped the Bootstrap 4 upgrade and the Merge Request Refactoring was finally finished
    • Our count of completed deliverables is steadily increasing
  • BAD
    • Hard time with sourcing for roles outside of Frontend
    • Bootstrap Upgrade had too many regressions
    • Too many bumpy development flows (too long until actionable, blocking dependencies, etc.) and late realisation of problems
  • TRY
    • Continue Hiring and Team structure transformation
    • Get better at estimating and, especially, at finding stepping stones early
    • Improve workflows, tooling and especially individual planning to get to 100% deliverables

Dev Backend

  • GOOD
    • Two KRs at 100%: Sourcing/hiring + handbook
    • Hire was made quickly from pre-existing applicants, so sourcing was applied to future hires
    • Handbook updates spread through quarter, healthy pace for introducing + discussing change
  • BAD
    • Sourced candidates haven’t been contacted - need better pipeline for sourced talent
    • Handbook KR wasn’t specific enough, had to redefine that halfway through the quarter
    • Dashboards didn’t ship - too much time waiting on external teams
  • TRY
    • Keep KRs from being dependent on other teams as much as possible
    • Ensure KRs are specific and measurable
    • Both of the above are known best practices for KRs. Don’t make exceptions to known best practices for KRs. :)

Discussion

  • GOOD
    • Made two new hires early in the quarter, putting us ahead of schedule for a while
    • Shipped deliverables at a consistent rate
    • Hiring pool puts us in better shape to hire well across all teams
  • BAD
    • Missed hiring target; we also aren’t particularly close to making that extra hire soon
    • Missed a particular deliverable (batch commenting) several times
    • Rails 5 spilled over to Q3, but it is much closer now
  • TRY

Distribution

  • GOOD
    • Being very disciplined in scheduling deliverables allowed us to keep our promises.
    • Managed to deliver Helm charts in beta as promised by working in weekly milestones, which allowed us to change direction more quickly based on current status.
    • Introduction of Team Issue Triaging allowed us to keep on top of incoming issues with the omnibus-gitlab issue tracker, and reduce the number of stale issues.
    • Replaced a team member fairly quickly after a departure.
  • BAD
    • Team size was reduced, which impacted our ability to deliver on Technical Debt and performance improvements.
    • Hiring was impacted by a mismatch between our proposed compensation and real market expectations.
    • Noticeable team fragmentation between two major team projects.
    • No capacity to tackle non-critical projects.
  • TRY
    • Reduce team fragmentation by rotating team between projects.
    • Create a stronger candidate pipeline and hit the hiring targets.

Geo

  • GOOD
    • We conducted more than 8 GCP failover rehearsals and improved them each time.
    • We shipped a repository ref checksum feature that gave us confidence in the GCP data for 4+ million repositories.
    • We helped improve Geo at scale by fixing numerous bugs with uploads, Wiki, and repository sync.
    • We upgraded GitLab to git 2.16, which turned out to be non-trivial.
    • We shipped HTTP push to the Geo secondary, including Git LFS support.
    • We made significant progress towards supporting SSH push to the Geo secondary.
    • We made progress towards activating hashed storage in production by finding and fixing more bugs in the implementation.
  • BAD
    • We missed a number of deliverables due to merge requests in review.
    • We overestimated team capacity due to GCP migration tasks and vacations.
    • We still have database performance issues at scale.
    • We still haven’t activated hashed storage in production.
  • TRY
    • Nominate another Geo maintainer
    • Reduce the variability of the deliverables scheduled from month to month
    • Activate hashed storage in dev and Ops instances today

Platform

  • GOOD
    • Consistent deliverable hit rate
    • Finally shipped GraphQL endpoint, which had been an OKR at least once before!
    • Hit sourcing goal
    • Having Deliverable and Stretch issues works well; few Stretch issues get hit, but developers indicate that they feel less pressure as the feature freeze nears than before
  • BAD
    • Multiple deliverables were affected by urgent work related to the GDPR
    • Multiple deliverables slipped because issue scope wasn’t well defined until late into the month, and was often larger than anticipated
    • Only hired 1 person
    • Deliverable hit rate still under 80%
  • TRY
    • Being more conservative with maximum weight per person
    • Work with PMs to ensure that issue scope is defined ahead of the kickoff
    • Keep iterating on hiring process

Gitaly

  • GOOD
    • Agreed on a strategy for identifying the remaining work for 1.0 (tripswitch)
    • Began working with an agency (Sourcery) for sourcing hires
  • BAD
    • 1.0 didn’t have a fully identified scope until beginning of Q3
    • 1.1 won't become a priority until 1.0 is delivered
    • Lots of churn around who is on the team made consistent throughput difficult
    • Extended use of feature flags made it easy to create complications (easier to turn off a feature flag than fix a bug)
    • Hard to attract the right kind of candidate (too many applicants who know Rails but only dabble in Golang)
  • TRY
    • Working more closely with recruiting agency to get the right kind of hires
    • Keeping team size more stable + above a minimal threshold for forward progress
    • Being less conservative with feature flags and a little more willing to trade risk for time-to-delivery

Database

  • GOOD
    • Set up the team structure for future team members.
  • BAD
    • Self-inflicted downtime from maintenance in June
    • Not enough people (2) to pay attention to production stability and work with Engineering teams on performance
    • Small pipeline of candidates for DB roles
  • TRY
    • Implement and follow plans for change control around any DB work
    • Initial team structure for Site Availability / Site Reliability including DB
    • Pulled in a sourcing agency
    • Updated posts to Ruby Weekly and Hacker News

Production

  • GOOD
    • Hiring - help from all involved from recruiting to exec to team was great
    • Started to get a handle on infrastructure work for GCP and burned down the remaining work
  • BAD
    • Team has been spread thin; having only 2 people on call in the EU and 4 in the Americas is hard on productivity
    • GCP dates slipping
  • TRY
    • Continue the strict focus on getting GCP done, with priorities in order: Production stability, GCP, everything else
    • Get new people in Q3 up to speed and on call in a fast but reasonable time
    • Focus on hiring in EU regions
    • Make the team structure changes mentioned in the Database retrospective

Ops Backend

  • GOOD
    • Great collaboration with Dev Backend to implement a new hiring process to pool candidates for backend dev roles.
    • Managers across backend teams working and supporting each other through the changes in hiring process.
  • BAD
    • Got through the entire hiring process with a great manager candidate and was not able to close them at the offer stage due to a mismatch in compensation expectations.
  • TRY
    • Make sure compensation expectations are clear with all candidates during the screening call.
    • Continue to iterate and enhance our pool hiring process.
    • Build a strong pipeline of candidates so that we are choosing the best from multiple good candidates and have a fallback if we lose our first choice.

CI/CD

  • GOOD
    • Completed coverage of Auto DevOps with GitLab QA
    • A lot of candidates were sourced
  • BAD
    • 75% deliverable completion rate falls quite short of the 100% goal
    • Issues mentioned at kickoff were some of those that didn’t get merged
    • Configuration Team splitting off reduced capacity
    • The only new developer hire occurred at the very end of the quarter
    • Team experienced a lot of transition with new manager and change in PM
  • TRY
    • Using weight or throughput to assess delivery impact and team capacity
    • Have new manager focus on hiring for the team especially at the start of the quarter

Monitoring

  • GOOD
    • Great backend work by temporary team member.
    • Delivered features despite being short staffed.
  • BAD
    • Short-staffed due to losing a team member for a third of the quarter.
  • TRY
    • Continue to source and hire more onto the team.
    • Size workload better to account for being short staffed.

Security Products

  • GOOD
    • Onboarding is done, the team is working at full speed now.
    • We can ship features with backend in GitLab.
    • Gemnasium migration is done, the team can focus on GitLab issues.
    • We have a release process for Security Products docker images.
  • BAD
    • The team needs to grow to handle the entire Security Products domain. Even though we sourced 50 people, we didn't manage to hire anyone this quarter.
    • Some planning issues, due to our lack of knowledge of some processes.
    • The product vision was not aligned with our OKRs, so we had to update them during the quarter to remove some expectations about Gemnasium integration into Omnibus.
    • Not enough time/resources to maintain legacy code and wrappers.
  • TRY
    • Improve planning with a Gantt chart.
    • Commit to 100% deliverables shipped to production.
    • Better integration with GitLab QA.
    • Release Manager rotation to handle bugs, tool updates, support, etc.

Security

  • GOOD
    • We continue to mitigate new security vulnerabilities within a timeframe that is at or better than the industry standard.
    • Our response time to GitLab.com abuse cases has significantly improved through detection automation efforts.
    • Our hiring rate has been fairly healthy, and sourcing efforts have been successful.
  • BAD
    • Logging capabilities are still lacking, such that incident response impact analysis is onerous.
    • Lack of consistent severity and priority labeling on backlogged security issues poses a challenge for prioritization.
    • Security roles are difficult to hire for, due to high demand for skilled candidates. Our security compensation benchmark needs to be more competitive to attract top talent.
  • TRY
    • Continue to refine and improve upon critical and non-critical security release processes. This will involve continuing to train new RMs to become familiar with the process.
    • Increase security hiring pipelines and close out more hires than past quarters, in order to scale Security Team with company growth and initiatives.

Support

  • GOOD
    • Focus on SLA achievement with training, process and tool enhancements helped global team reach 88% overall.
    • Functionality to track and report on Customer Satisfaction restored.
    • Very good achievement against the hiring plan.
  • BAD
    • Still too many SLA breaches while we grow the team.
    • Closing Support Engineering manager positions needs to happen faster.
  • TRY
    • Source-a-thon for key positions, with less focus on Support Agents and directed regional focus on Support Engineers.
    • Determine ways to streamline technical onboarding.

Support Engineering

  • GOOD
    • More Support Engineers are exposed to High Availability Configurations
    • Ticket volume for self-managed customers is growing more slowly than revenue.
  • BAD
    • We need to grow AMER West & APAC to achieve SLA targets
    • Learning Geo is difficult & upgrading Geo needs clearer documentation.
  • TRY
    • Targeted APAC recruiting
    • Designing a Geo Bootcamp curriculum
    • Finalize Premium ticket priority to offer a better customer experience.

Support Services

  • GOOD
    • With better staffing we’ve been able to consistently increase our SLA performance and work through a sizeable backlog.
    • Services Agents have done a great job collaborating on process and documentation.
  • BAD
    • This quarter's hires are starting later in the quarter than we would have liked.
    • We haven’t been able to close any EMEA hires.
  • TRY
    • Get a first iteration on our Statement of Support finalized.
    • Focus on EMEA hiring in source-a-thons and recruiting.
    • Transition GitLab.com free users out of ZenDesk.

Quality

  • GOOD
    • We unblocked ourselves after Looker ETL delays in building the quality metrics dashboard. We completed the prototype and it is running at: https://quality-dashboard.gitlap.com
    • Everyone in the team has clear ownership and priorities.
    • We made good progress with hiring and sourcing.
  • BAD
    • We discovered a lot of un-triaged CE & EE bugs as part of the dashboard prototype implementation.
    • ETL for GitLab.com Looker was delayed. We were blocked for multiple weeks.
    • Lack of test coverage in gitlab-qa and in the release process.
    • The team is stretched thin, especially with issue and merge request triage. Quality team rotation hasn’t been enough help, a lot of this still falls on Mark F.
  • TRY
    • We need to plan very early to get help from other teams on cross-functional initiatives (especially the CE/EE code reorg). Issues should be prepped and communicated at least one milestone ahead, and communicated again before the start of the milestone.
    • Implement issue triage assignments for the rest of engineering and not just for the quality team.
    • For important deliverables and initiatives, we should lean on tools that are more in our sphere of control.
    • Increase gitlab-qa test automation coverage.

UX

  • GOOD
    • Research and design work done for Auto DevOps in Q1 set us up for success in delivering UX Ready solutions.
    • Increased engagement and collaboration between PM and UXers.
    • Sourcing and hiring went smoothly with many good candidates.
  • BAD
    • Hiring and increasing workloads slowed down progress on the design system.
    • Scheduling became time-consuming as we iterated on the process and PM/UXer/FE handoff.
  • TRY
    • Continue to iterate on scheduling and planning process.
    • Focus work in design system on areas that support GitLab vision and increase usability.

Pipe-to-Spend

  • GOOD
    • 148% of opportunity value creation target achieved.
    • 185% of SMB / Self Serve IACV plan achieved.
    • Improved visibility of inbound demand with the introduction of Bizible, and campaign performance with the new campaign dashboard.
    • Content team hit 100% of their KRs and content (especially video content) has improved.
  • BAD
    • 82% of revised Q2 Field Marketing SCLAU forecast.
    • Hiring plan not met by 1 person each in field marketing and online growth.
    • Overcommitting and underdelivering on field events. Communication, collaboration, and organization are not consistently good.
    • Hard to work with other teams that do not work asynchronously. Too many meetings being held with no outcome.
  • TRY
    • Standardize and document process for evaluating whether or not we should sponsor an event with clear expectations in terms of SCLAU and ROI.
    • Respectfully push back when last minute requests are made that won’t move the needle.
    • Share customer feedback and pain points with product management on the CustomersDot to ensure it supports a more self-serve experience involving upgrades, true-ups and renewals.
    • Put more time into cross-functional coordination and kickoffs to ensure PMM, GM, Sales, et al. are working tightly together. Hold each other accountable to GitLab’s async workflow and MVCs.