Calendar Year 2018 Q1 OKRs

View GitLab's Objectives and Key Results (OKRs) for Q1 2018.

Objective 1: Grow Incremental ACV according to plan

  • CEO: IACV doubles year over year
    • VP Product:
    • CRO
      • Customer Success: Identify success factors
      • Customer Success: Do quarterly business reviews for all eligible customers
      • Sales: Add growth pipeline of 1.5x annual growth plan
      • Sales: Add 30 Fortune 500 companies
  • CEO: Be at a sales efficiency of 1.0 or higher
    • CMO
      • Marketing: Know cost per SQO and per customer for each of our campaigns
  • CEO: Make sure that 70% of salespeople are at 70% of quota
    • CMO
      • Marketing: Make sure each SAE has 10 SAOs per month
    • CRO
      • Sales: Increase IACV by 15% for Strategic / Large / Mid Market
      • Sales: 1 month boot-camp for sales people with rigorous testing
      • Sales: Professional Services in 50% of Strategic / Large deals
    • VPE
      • Support: 100% Premium and Ultimate SLA achievement => Achievement was in the 80s as a percentage
    • CFO
      • Legal: Implement improved contract flow process for sales assisted opportunities
      • Controller: Billing support added for EMEA region.
      • Legal: GDPR policy fully implemented.
    • CMO: Establish credibility and thought leadership with Enterprise Buyers, delivering on the pipeline generation plan through the development and activation of integrated marketing and sales development campaigns:
      • MSD: Scale sales development organization hiring to plan, accelerating onboarding and getting reps productive to deliver on SCLAU growth plans.
      • MSD: Achieve volume target in inbound SCLAU generation.
      • MSD: Achieve volume target in outbound SCLAU generation.
      • MSD: Develop and execute the "Automate to accelerate CI", Kubernetes, and Concurrent DevOps campaigns.
      • PMM: Activate category strategy, positioning and messaging with sales enablement and certification program and website content.
      • PMM: Develop and roll out updated pitch and analyst decks
      • PMM: Develop CE-to-EE and SVN-to-EE pitch decks
    • CMO: Website redesign iteration, including information architecture update, to support our awareness and lead generation objectives, accounting for distinct audiences.
    • CMO: Further develop thought leadership platforms for GitLab around topics including forecasting the future of development, redefining cultural excellence, and helping to make security an actionable priority for developers.
  • CEO: GitLab.com ready for mission critical workloads
    • VPE: Move GitLab.com to GKE => Did not happen
      • Geo: Make Geo performant to work at GitLab.com scale
      • Distribution: TBD?
      • Gitaly: TBD?
      • CI/CD: TBD?
    • VPE: GitLab.com available 99.95% and monthly disaster recovery exercises => Hit 99.5% according to inferior monitoring on Pingdom, did not conduct monthly DR (but did do plenty of geo testing)
    • VPE: GitLab.com speed index < 1.5s for all tested pages => Did not focus on this
    • VP Product
      • Product: Ship group-level authentication
  • CEO: On track to deliver all features of complete DevOps
    • VPE: Ship faster than before => We’re shipping about the same speed as before
    • VP Product
      • Product: Plan all features to be done by August 22
    • VPE: One codebase with /ee subdirectory => 40% done
  • CEO: Make it popular
    • CMO
      • Marketing: Get unique contributors per release to 100
      • Marketing: Increase total users by 5% per month
      • Marketing: Facilitate 100 ambassador events (meetups, presentations)
      • Marketing: Be a leader in all relevant analyst reports
    • VP Product
      • Product: Grow usage of security features to over 1000 projects
      • Product: Grow usage of portfolio management features to over 1000 projects
    • VPE: Use all of GitLab ourselves (monitoring, release management) => Did not make progress here due to focus on GCP migration
    • CFO
      • Data and Analytics: Create the execution plan for the data-enabled user journey.
    • CMO: Build trust of, and preference for, GitLab among software developers.
    • CMO: Hire Director, DevRel.
      • MSD: Develop interactive content for Developer Survey results and promote results through digital/social channels.
      • MSD: Grow followers by 20% through proactive sharing of useful and interesting information across our social channels.
      • MSD: Grow the number of opt-in subscribers to our newsletter by 20%.
      • PMM: Plan and execute IBM Think corporate event.
      • PMM: Plan and execute GTM for acquisitions and partner launches.
      • PMM: Generate a customer persona map and 3 customer persona profiles.
    • CMO: Generate more company and product awareness, including increasing our lead over Bitbucket in Google Trends.
      • MSD: Implement SEO/PPC program to increase the number of free trials by 20%, the number of contact sales requests by 22%, and traffic to about.gitlab.com by 9%, all compared to last quarter.
    • CMO: PR - G1, G2, T1 announcements.
    • CMO: AR - conduct intro briefings with all key Gartner analysts, including reviewing new positioning.
Objective 3: Great team

  • CEO: Hire according to plan
  • CEO: Great and diverse hires
    • CCO
      • Global hiring
      • Sourced recruiting 50% of candidates
      • Hired candidates, on average, from areas with a Rent Index of less than 0.7
  • CEO: Keep the handbook up-to-date so we can scale further
  • Handbook first (no presentations about evergreen content)
    • CCO
      • Consolidate and standardize role descriptions
    • VPE: Consolidate and standardize job descriptions => 100%, done in partnership with PeopleOps
    • VPE: Launch 2018 Q2 department OKRs before EOQ1 2018 => 100%
    • VPE: Set 2018 Q2 hiring plan before EOQ1 2018 => 100%
    • VPE: Implement issue taxonomy changes to improve prioritization => 100% changed security and priority labels
    • VPE: Record an on-boarding video of how to do a local build and contribute to the GitLab handbook => 0%, didn’t get to it
    • CFO
      • Data and Analytics: Corporate dashboard in place for 100% of company metrics.
      • Data and Analytics: Capability to analyze cost per lead/SAO/SQO and marketing campaign effectiveness.
      • Controller: ASC 606 implemented for 2017 revenue recognition
      • Billing Specialist: Add cash collection, application and compensation to job responsibilities.
      • Controller: Close cycle reduced to 9 days.
      • Accounting Manager: All accounting policies needed to be in place for audit are documented in the handbook.
      • Legal: Add at least one country in which headcount can be grown at scale.
    • VPE: Hire a Director of Engineering => 100% hired Tommy
    • VPE: Hire a Director of Infrastructure => 0%
    • VPE: Hire a Database Manager => 0%
    • VPE: Hire a Production Engineer => 0%
      • Distribution: Hire a Distribution Engineer
      • Discussion: Hire two developers
      • Quality: Hire an Engineering Manager
      • Security: Hire 2 Security Engineers, SecOps
      • Platform: Hire 2 developers
      • CI/CD: Hire 2 developers
    • CMO: Hire Director, Product Marketing
      • PMM: Hire to Product Marketing team plan
    • CMO: Hire Director, Corporate Marketing
    • CMO: Hire to Corporate Marketing team plan
    • CMO: Hire Director, DevRel
    • CMO: Hire to DevRel team plan
      • MSD: Hire to SDR team plan
      • MSD: Hire SMB Customer Advocates
      • MSD: Hire Manager, Online Growth
      • MSD: Hire to Online Growth team plan
      • MSD: Hire to Field Marketing team plan
    • CCO: Launch training for making employment decisions based on the GitLab Values.
    • CCO: Ensure candidates are being interviewed for a fit to our Values as well as ability to do the job, through Manager Training and Follow-up by People Ops.
    • CCO: Analyze and make recommendations based on the New Hire Survey and Pulse surveys, which will drive future KRs. Have at least 3 areas to improve each quarter. Ideally, we will also have 3 areas to celebrate.
    • CCO: Iterate on the Performance Review process with at least two changes initiated by March.
    • CCO-TA: Iterate the hiring process to decrease process cycle-times, increase efficiency on screening candidates and provide a better candidate experience.
    • CCO-TA: Revamp and enhance our jobs page to help attract diverse, quality talent, enhance our employment brand, and position ourselves as a high-tech company.
    • CCO-TA: Establish level-of-effort metrics to ensure process efficiencies, including: recruiter-screened/hiring-manager-review ratio, Interview/Offer ratio, and Offer-Accept ratio.
    • CCO: Provide consistent training to managers on how to manage effectively. Success will mean that there are at least 15 live trainings a year in addition to curated online trainings.
    • CCO: Align recruiting to Functional Groups with a focus on low-rent regions. At least 50% of GitLab team-members should be hired from a location with a Rent Index of less than 0.7.
    • CCO: Implement actionable recruiting Metrics, including the ability to track an accurate source of hire for the majority of all hires.
    • CCO: Target 2 Diversity recruiting Events/sources to attend and recruit from. Measure success to determine future plan.
    • CCO: Increase Employee Referrals by 5%.
    • CCO: Launch Harassment Prevention Training to all managers.
    • CCO: Identify the right LMS for GitLab.
    • CCO: Now that hiring managers have been trained on Reference Checking, begin ensuring that Hiring Managers personally verify at least one reference per hire.
    • CCO: Hire at least one sourcer and one recruiter for EMEA/Central Asia.
    • CCO: Prioritize the future countries for increased hiring based on pipeline, regulations, future sales, rent index. Begin steps to enable increased hiring outside the U.S.

Retrospective

VPE

  • GOOD
    • Q2 OKRs launched on time
    • Hiring plan and budget on-time and in-line
    • Hired and onboarded a Director of Engineering (Tommy)
    • Q1 GitLab.com availability/stability was good (although not as good as monitoring indicates)
  • BAD
    • OKRs were tweaked mid-quarter and Sid and I never synced on the changes, so some things never got attention
    • GitLab.com did not move to GCP
    • Didn’t get to the local handbook set up video
    • Slow progress overall on hiring
  • TRY
    • Double down on hiring a Director of Infrastructure; treat it as an executive hire
    • Partner with people ops to increase hiring pace
    • Fewer KRs in Q2 to increase focus

Platform

  • GOOD
    • Deliverable hit rate has been pretty consistent.
    • We resolved all Security SL1 issues.
    • We triaged all Platform Community Contribution MRs.
  • BAD
    • We consistently overpromised what we could deliver in a release. This was in accordance with earlier statements that OKRs should be ambitious, though, and that if we hit more than 70% of our OKRs, we weren’t ambitious enough.
    • We did not resolve all Support SP1 and Availability AP1 issues.
    • We didn’t take over any of the popular “coach will finish” Community Contribution MRs.
    • We didn’t hit our bug target, and actually fixed fewer bugs with each release.
    • Bug target was too low. If we had hit our target, the total backlog size would have stayed approximately constant, but because we didn’t, it actually increased by 17.
    • Not much progress was made on backup/restore integration tests.
  • TRY
    • Putting a limit on the total weight of Deliverable issues so that we can actually deliver all of them, and using the Stretch label for issues we’d like to start on this release, but are fine with letting them slip and get finished in the following release. (See gitlab-com/www-gitlab-com#2022)
    • Specifically allocating time to finish popular Community Contribution MRs, by making their issues Deliverable and not allowing them to slip.
    • Specifically allocating time for Engineering-driven efforts like improving integration tests, by making their issues Deliverable and not allowing them to slip.
    • Finding a better balance between adding new features and fixing bugs in existing features.
    • Adding people to the team to be able to do the 3 items above without significantly interfering with our Product feature output.

Discussion

  • GOOD
    • All top-priority security, support, and availability issues were addressed.
    • Deliverable hit percentage increased to 100% over the quarter.
    • Hit bugs target early.
  • BAD
    • Missed a big issue (Rails 5), due to staff changes.
    • Bug target was too low (backlog only reduced by 11 issues).
    • Didn’t address all of the oldest community contributions we have.
    • Deliverables target ignores an issue, batch commenting, that has been blocked on frontend since January.
  • TRY
    • Not having OKRs for specific priorities, if we also have OKRs for solving issues of a particular type.
    • Better tracking of issues that made it / missed per release:
      • All backend issues.
      • Deliverables.
      • Bugs
      • Performance issues.

Distribution

  • GOOD
    • Delivered all scheduled OKRs, and added one more during the quarter
    • Recognized challenging parts of the tasks and reduced scope in time, which allowed us to move quicker towards the goals
    • Managed to tackle some Technical Debt as part of the OKR tasks
  • BAD
    • Everything that was not an OKR was secondary
    • The scope of the OKRs might have been too ambitious
    • Basic integration coverage meant that we had two serious regressions
    • Cloud Native charts took most of the team’s bandwidth
    • Less time was spent on hiring than expected
  • TRY
    • Establish a better ratio of technical debt vs. features shipped in one release
    • Focus on sourcing more candidates
    • Establish a better way of assigning engineers to tasks, instead of encouraging them to choose from the milestone

Monitoring

  • GOOD
    • Shipped all GCP required features.
  • BAD
    • Dropped/didn’t update the alerting KR to reflect feature scheduling changes.
  • TRY
    • Work on hiring to improve team throughput.

UX Design

  • GOOD
    • Major challenges/opportunities for Auto DevOps installation flow identified.
    • Roadmap for Auto DevOps installation improvements established.
    • All design pattern issues completed or in final review.
    • Understood who operations/DevOps engineers are, their most common tasks and duties, and the metrics they track.
    • Identified their biggest DevOps challenges: lack of automation, culture, resources, and tools.
    • Updated and recorded all UX standards covered by the current UX guide.
    • Added many UX standards not documented in the current UX guide.
  • BAD
    • OKRs for Q1 were not finalized in December and were re-tuned at the end of January, delaying progress.
    • Hiring took up much of the UX team’s time early in Q1.
    • Poor priority management: design library issues were put off because their deadline was further out and other OKRs were more pressing for the company.
    • Dependence on other departments for scheduling and review of some issues. Unable to influence progress once they were out of our hands.
  • TRY
    • Finalize OKRs earlier on so we can plan better.
    • Assign UX designer to issues rather than encouraging them to pick them up.
    • Schedule UX OKR issues into milestones to make sure we stay on track.
    • Continue to use epics to drive goals and initiatives.
    • Streamline the hiring process for UX to make it more efficient.

Support Engineering

  • GOOD
    • New hires onboarded quickly and successfully
    • The Services team is gelling
    • Building HA & Kubernetes expertise went well
    • No attrition
  • BAD
    • We did not hire to 100% of plan
    • SLAs for premium and ultimate fell short of our standard
  • TRY
    • Hire a director of support
    • Pick up pace of engineer and agent hiring

Security

  • GOOD
    • Resolved all SL1/SL open issues in CE tracker.
    • Conducted successful security assessment on Gitaly.
    • Mean Time To Remediation for new open security vulnerabilities is now below the industry standard of 30 days.
    • Started conducting internal security briefings on a biweekly basis to drive company-wide accountability and sense of urgency on all security issues.
    • Successfully drove GitLab’s new pages domain verification mechanism.
    • From a technical security standpoint, completed all requirements for GDPR compliance.
    • There is now a formal data classification policy.
  • BAD
    • Although we are doing well with new open security vulnerability issues, there remains a security debt of old security issues to resolve.
    • FIPS 140-2 effort needs more work, and we did not get as far along in that process as we wanted to, due to reliance on 3rd party partner.
    • We were not able to get as far along in application security reviews, due to other efforts taking higher priority (e.g., remediating new open issues, security release process efforts).
  • TRY
    • Revisit FIPS 140-2 effort to see how we can take on more of the burden internally.
    • We will work to reduce security debt in the old open security issues.
    • Conduct more outreach to Engineering departments in order to get more application security reviews conducted.

Frontend

  • GOOD
    • New team members were overall onboarded well
    • Reduced technical debt significantly across different areas
    • Constant improvements on our workflows and tools
    • Delivered a lot of big features
  • BAD
    • Missed the deliverable percentage goals
    • Vue-based MRs are not live yet. This turned out to be far larger than anticipated and had to be re-planned.
    • Discussions on specific topics ran too long and too often
    • CSS refactoring was underestimated and had to be re-planned
    • Velocity was hard to estimate, which led to slipped deliverables
    • Scheduling was hard when a task was unclear at the beginning
    • Didn’t invest enough time on Community Contributions
  • TRY
    • Drive longer-term planning with Epics + bigger scheduling gates for issues
    • Optimize the planning pipeline together with PMs
    • Have domain experts own their domains within the FE team
    • Frontend sub-teams
    • Skip the CSS refactor and go straight to reusable Vue components
    • Extend tooling and improve workflows
    • Set up a workflow so that checking community contributions is an actual task owned by someone every release cycle

Infrastructure

  • GOOD
    • We simplified the GCP project by moving to a lift-and-shift strategy
  • BAD
    • Hiring did not keep pace
    • We do not have a project management framework that has taken hold
  • TRY
    • Pick up hiring
    • Experiment with project management frameworks
    • Better monitoring of GitLab.com for SLA

Database

  • GOOD
    • We managed to achieve more compared to our previous OKR, both by having more manpower and by planning work more carefully.
  • BAD
    • We didn’t get as much workflow related work (e.g. Apdex scores) done as we’d like.
  • TRY
    • For the next quarter we will be hiring a database manager and hopefully also a database engineer. This should further reduce load on the existing team members.

Production

  • GOOD
    • GitLab.com availability in Q1 was better than 2017 average
  • BAD
    • Team attrition
    • Did not move to GCP
    • Did not hire a production engineer
  • TRY
    • Better GitLab.com monitoring for uptime

Gitaly

  • GOOD
    • Hit early milestones
  • BAD
    • We did not stay on pace with our backlog for Gitaly v1.0
    • OKRs were messy and possibly redundant
  • TRY
    • More project mgmt
    • Clearer KRs; fewer of them, for more focus?

Geo

  • GOOD
    • We migrated most projects, attachments, LFS objects, etc. from GitLab.com to Google using Geo.
    • We found and fixed a significant number of issues in this process.
    • We reduced the number of out-of-sync/failed repository syncs from over a million to a few hundred.
    • Geo appears to be keeping up with the new data pushed to GitLab.com.
    • Customers appear to be using Geo more, helping us find additional issues.
  • BAD
    • We have yet to do a thorough project-by-project verification of data mirrored from GitLab.com.
    • Our verification implementation only covers basic references, but does not check the integrity of the files themselves.
    • We still have a lot of obscenely slow database queries causing high load on the database.
    • We have not made much progress in putting hashed storage into production.
  • TRY
    • Start using our repository checksum feature when 10.7 is deployed.
    • Improve verification to include object integrity.
    • Spend Q2 optimizing these database queries.
    • Assign someone to own the rollout of hashed storage.

Security Products

  • GOOD
    • Whole (new) team onboarded successfully
    • Able to release since the first milestone
    • OpenShift to GCP migration is going well
    • Lots of expectations for security features
  • BAD
    • Each change in the job definitions means updating docs everywhere and our .gitlab-ci.yml, possibly introducing breaking changes. This process is long and error-prone.
    • Dependence on other departments, especially Frontend, for implementation of some issues.
    • Lack of automated E2E tests/QA, so testing is long and painful for now.
    • We hit the limitations of our own features (using subgroups, .gitlab-ci.yml include, unprotected container registry, etc.).
    • Lots of processes to assimilate, as the whole team is new.
    • Low usage of Security Products, so little feedback.
  • TRY
    • Improve our tools for security advisories triage.
    • Improve usage of our tools internally.
    • Improve accessibility of our tools (include in .gitlab-ci.yml).
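The jobs-definition churn described in the BAD section is what `include` in `.gitlab-ci.yml` addresses: the canonical job definition lives in one shared file, and consuming projects pull it in by URL instead of copy-pasting it, so a change lands in one reviewed place. A minimal sketch; the group path, template file, and variable below are hypothetical examples, not the team's actual layout:

```yaml
# .gitlab-ci.yml in a consuming project (hypothetical paths/names).
# Pull the canonical SAST job definition from one shared template
# instead of duplicating it here and in the docs.
include:
  - 'https://gitlab.com/example-group/security-products/raw/master/templates/sast.gitlab-ci.yml'

# Project-specific overrides stay local and survive upstream changes.
variables:
  SAST_CONFIDENCE_LEVEL: "2"  # illustrative variable, not a confirmed setting
```

Remote includes are fetched when the pipeline is created, so consumers track template updates automatically and the breaking-change surface shrinks to a single file.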
Last modified November 14, 2024: Fix broken external links (ac0e3d5e)