Calendar Year 2018 Q3 OKRs
View GitLab's Objectives and Key Results (OKRs) for Q3 2018.
CEO: Grow Incremental ACV according to plan. IACV at 120% of plan, pipeline for Q4 at 3x IACV minus in-quarter orders, LTV/CAC per campaign and type of customer.
- VPE
- Support: Achieved 94.5% CSAT across all customer tickets for Q3.
- Self-managed: 95% on CSAT, 95% on Premium SLAs: Achieved 95% CSAT and 83% against Premium SLAs.
- Services: 95% on CSAT, 95% on Premium SLAs: Achieved 93% CSAT and 94% against SLAs.
- CMO: Achieve 3x IACV in Q4 pipeline minus in-quarter orders
- Content: Ensure the content team selects their own work and focuses on results. No distractions.
- Content: Increase sessions on content team blog posts. 5% month over month.
- Content: Measure and increase pipe-to-spend for content team activities. 10% MoM.
- Field Marketing: Improve account-based marketing for large & strategic accounts. Pull 1 large or strategic deal into Q3, increase opportunity size of 2 large or strategic deals, 5 intros to new departments at large or strategic accounts.
- Field Marketing: Increase operational maturity in field marketing tracking and execution. All event lead uploads within 48 hours, all contacts have accurate statuses and notes, accurate campaign data for all field marketing campaigns tracked in salesforce.com.
- Field Marketing: Achieve 100% of field marketing portion of SCLAU plan.
- Online Growth: Expand online growth tactics. Increase overall traffic to existing pages by 10% compared to last quarter, increase sign-ups from SEO/PPC/Paid Social by 20% compared to last quarter.
- Online Growth: Improve about.gitlab.com conversion rate optimization. Increase live chat leads 20%, increase contact requests by 20%, increase trial sign-ups by 20%.
- Marketing Program Management: Launch new email nurture series educating trial requesters on .com Premium plan. Keep unsubscribe rate below 1%, 30% of nurture audience at “engaged” progression status or higher.
- Marketing Program Management: Increase awareness and ongoing education on GitLab <> Google Partnership. Increase GKE trials referred by GitLab/week by 3X.
- Marketing Program Management: Improve reporting for marketing programs. Automate and schedule reporting for email & webcast performance, ensure all email and webcast programs are tracked completely in salesforce.com campaigns.
- Marketing Operations: Streamline lead management processes. Refresh lead & contact layouts with all unnecessary fields removed from view, trim down to a single lead queue view for the SCA team.
- Marketing Operations: Deploy multi-touch attribution model in Bizible.
- Marketing Operations: Ensure all online marketing is tracked as campaigns in salesforce.com. Complete tracking of all required campaign fields for all online marketing.
- SMB Customer Advocacy: Achieve 200% of SMB IACV plan.
- SMB Customer Advocacy: Improve SMB account management. Increase SMB gross retention to 90%, increase net retention to 150%, reduce number of manual license key replacements by 30%.
- CMO: Committer Program
- Increase the number of contributions (not contributors) from the wider community. For the 11.3 release the target is 120 MRs, and for the 11.4 release the target is 150 MRs.
- Hire full-time GitLab contributor(s) at customers. One hire made at a customer and a blog post about their initiatives published.
- Implement at least 10 improvement ideas to streamline the contribution process for the wider community, e.g. improve the onboarding experience
- CRO
- Dir, Customer Success:
- Each SA/TAM team successfully proposes & creates at least one SOW to increase service pipeline. As a result, services bookings increase by 100%.
- Develop an approach to support growth in strategic accounts. Successfully execute program against one Strategic account per region.
- Leverage Customer Success team growth to increase opportunities at the highest-potential Strategic growth accounts. Achieve 300% of the Q3 growth iACV plan.
CEO: Popular next generation product. GA for the complete DevOps lifecycle, GitLab.com ready for mission critical applications, graph of DevOps score vs. cycle time.
- VPE: Make GitLab.com ready for mission-critical customer workloads - 70% (moved to GCP and hired great team, not up to new standards though, lots more to do)
- Frontend: Preserve 100% of error budget: 100%
- Frontend: Implement 10 performance improvements: X/10 (X%)
- Frontend Discussion: Preserve 100% of error budget: 100%
- Frontend Discussion: Implement 5 performance improvements on the MR/issue page: X/5 (X%)
- Frontend MD&P: Preserve 100% of error budget: 100%
- Frontend MD&P: Implement and integrate 10 reusable Vue components into GitLab: X/10 (X%)
- Dev Backend
- Plan: Preserve 100% of error budget: 94% (S3 incident)
- Plan: Implement 10 performance improvements: 8/10 (80%)
- Distribution: Preserve 100% of error budget: 100%
- Distribution: GitLab Helm charts generally available: 100%
- Release Management: Complete feature flag work necessary for weekly releases: 50%
- Geo: Preserve 100% of error budget: 94%
- Geo: Finish failover to Google Cloud July 28: Completed mid-August
- Geo: Implement 5 performance improvements (P1/P2/P3 categories): 6/5 (100%)
- Gitaly: Preserve 100% of error budget: 100%
- Gitaly: Ship v1.0 and turn off NFS (100%)
- Manage/Create: Implement 10 performance improvements: 6/10 (60%)
- Gitter: Preserve 100% of error budget: 100%
- No gitter.im downtime
- Gitter: Open source Android and iOS applications: 100%
- https://gitlab.com/gitlab-org/gitter/gitter-android-app
- https://gitlab.com/gitlab-org/gitter/gitter-ios-app
- Gitter: Deprecate Gitter Topics: 100%
- https://gitlab.com/gitlab-org/gitter/webapp/issues/1947
- Manage: Preserve 100% of error budget: 97%
- Create: Preserve 100% of error budget: 38%
- Ops Backend
- CI/CD: Preserve 100% of error budget: 100%
- CI/CD: Implement one key architectural performance improvement from https://gitlab.com/gitlab-org/gitlab-ce/issues/46499: 0% (Reducing the per-job footprint for the ci_build table is code complete but not merged; merging is pending the upgrade to Rails 5)
- Configuration: Preserve 100% of error budget: 100%
- Configuration: Fully document and automate Auto DevOps local setup (#359): 100%
- Monitoring: Preserve 100% of error budget: 94%
- Monitoring: Implement admin performance dashboard. (Design issue): 30% (design is nearly complete, engineering has not started)
- Secure: Preserve 100% of error budget: 85% (1 S2 incident)
- Secure: Aggregate and store vulnerability occurrences in the database: 85% (Only SAST reports implemented) (&251)
- Infrastructure: Move to GCP July 28th - GCP migration completed Aug 11, 2018. (100%)
- Database: Preserve 100% of error budget: 91%
- Database: all issues in the Q3 infrastructure epic
- Database: all issues in the Q3 application performance epic
- Production: Preserve 100% of error budget: -17%
- Quality
- Implement Review Apps for CE/EE with gitlab-qa test automation validation as part of every Merge Request CI pipeline: 50%, review apps for EE completed, CE ongoing.
- Triage and add proper priority and severity labels to 80% of CE/EE bugs with no priority and severity labels: 98.5%, above target for EE, came under by 1.5% for CE. Triage package implemented to maintain triage efforts.
- UX
- Establish the User Experience direction for the security dashboard: Aid in completing a competitive analysis, identify 10 must-have items for the security dashboard, complete a design artifact/discovery issue: 100%
- Identify and document the styles for the first 10 components being developed by the FE team: 90%
- UX Research
- Security
- Identify and push top 10 sources/attributes to S3 instance and ensure at least 90 day retention and tooling for security investigations: 100%
- Triage 50 backlogged security-labeled issues and add appropriate severity and priority labels: 102%, triaged both CE and Infra repos. Including a link to CE S2 triaged issues only as an example.
- CMO: Increase brand awareness and preference for GitLab.
- Corporate marketing: Help our customers evangelize GitLab. 3 customer or user talk submissions accepted for DevOps related events.
- Corporate marketing: Drive brand consistency across all events. Company-level messaging and positioning incorporated into pre-event training, company-level messaging and positioning reflected in all event collateral and signage.
- Corporate marketing: Increase share of voice. 20% lift in web and Twitter traffic during event activity.
- Product marketing: Increase inbound leads - Create 20 new web pages to educate market and prospects about GitLab capabilities and solutions
- Product marketing: Deliver 18 enablement sessions to sales and channel teams to increase SCLAU count by 15
- Product marketing: Launch Customer Advisory Board with 15 key strategic GitLab account customers to create 15 customer champions (references) to help close SCLAU opportunities
- Alliances: Get listed and offered as partner of choice on large cloud providers. 3 public clouds.
- Alliances: Develop an opportunity for migration with a large OSS community. Sign letter of intent.
- Alliances: Secure keynotes at big cloud conferences for brand building and sales momentum. 2 keynotes.
- Product
- Hire two directors. 0/2
- First two iterations of DevOps score vs. cycle time shipped. Prove the relationship between DevOps score and releases per year. 0/2
- Hire a growth team. PM hired.
- Make 10 key feature docs more enticing, replacing the need for their about.gitlab feature pages: 5/10 (50%)
CEO: Great team. ELO score per interviewer, Real-time dashboard for all Key Results.
- VPE: 10 iterations to engineering function handbook: 7.5/10 (75%)
- Infrastructure: 10 iterations to infrastructure department handbook: 10/10 (100%)
- Production: 10 iterations to production team handbook: X/10 (X%)
- Ops Backend: 10 iterations to ops backend department handbook: 4/10 (40%)
- Ops Backend: Define and launch a dashboard with 2 engineering metrics for backend teams: 0%. Throughput and cycle time are the 2 metrics we would like to track; implementation of the dashboard was not started in Q3.
- Quality: 10 iterations to quality department handbook: 8.5/11 (77%)
- Security: 10 iterations to security department handbook: X/10 (X%)
- Support: 20 iterations to support department handbook: 21/20 (105%)
- Self-managed: 10 iterations to self-managed team handbook focused on process efficiency: 10/10 (100%)
- Services: 10 iterations to services team handbook focused on process efficiency: 11/10 (110%)
- Support: Implement dashboard for key support metrics (SLA, CSAT)
- VPE: Source 50 candidates for various roles: 50/50 sourced (100%) confidential spreadsheet
- Frontend: Source 100 candidates by Aug 15 and hire 2 managers and 3 engineers: X sourced (X%), hired X (X%)
- Frontend Discussion: Source 25 candidates by July 15 and hire 1 engineer: X sourced (X%), hired X (X%)
- Frontend MD&P: Source 50 candidates by Aug 15 and hire 2 engineers: X sourced (X%), hired X (X%)
- Dev Backend: Create and apply informative template for team handbook pages across department (70%)
- Dev Backend: Create well-defined hiring process for ICs and managers documented in handbook, ATS, and GitLab projects (70%)
- Dev Backend: Source 20 candidates by July 15 and hire 1 Gitaly manager: 20 sourced (100%), hired 1 (50%)
- Plan: Source 25 candidates by July 15 and hire 1 developer: 50 sourced (100%), hired 0 (0%)
- Distribution: Source 30 candidates by Aug 15 and hire 1 packaging developer and 1 distribution developer: 30 sourced (100%), hired 0 (0%)
- Manage: Source 35 candidates by Aug 15 and hire 2: 35 sourced (100%), hired 0 (0%)
- Create: Source 25 candidates by July 15 and hire 1: 25 sourced (100%), hired 0 (0%)
- Infrastructure: Source 20 candidates by July 15 and hire 1 SRE manager: X sourced (X%), hired X (X%)
- Database: Source 50 candidates by July 15 and hire 2 DBEs: 7 sourced (14%), hired 1 (50%)
- Production: Source 75 candidates by July 15 and hire 3 SREs: 24 sourced (32%), hired 2 (66%)
- Ops Backend: Source 30 candidates by July 15 and hire monitoring and release managers: 30 sourced (100%), hired 1 (50%) => Hired an engineering manager for the Monitor team.
- CI/CD: Source 60 candidates by July 15 and hire 2 developers: 15 sourced (25%), hired 0 (0%)
- Configuration: Source 90 candidates by Aug 15 and hire 3 developers: 90 sourced (100%), hired 0 (0%)
- Monitoring: Source 90 candidates by Aug 15 and hire 3 developers: 137* sourced (100%*), hired 2 (67%) (We don’t have reliable numbers for “sourced” because it was a pooled approach)
- Secure: Source 75 candidates by Aug 15 and hire 3 developers: ~50 sourced (66%), hired 0 (0%)
- Serverless: Source 20 candidates by Aug 15 and hire 1 developer: 0 sourced (0%), hired 0 (0%)
- Quality: Source 150 candidates by Aug 15 and hire 3 test automation engineers: X sourced (60%), hired 2 (78%)
- Security: Source 150 candidates by Aug 15 and hire 5 security team members: 105 sourced (105%), hired 5 (100%)
- Support: Source 50 candidates by July 15 and hire an APAC manager: 50 sourced (100%), hired 0 (0%) - pursued top candidate for over a month who ended up signing our offer days after Q3 ended!
- Self-managed: Source 350 candidates by July 30 and hire 7 support engineers: 110 sourced (32%), hired 7 (100%)
- Services: Source 200 candidates by July 30 and hire 4 agents: 110 sourced (55%), hired 4 (100%)
- UX: Source 25 candidates by Aug 15 and hire 3 ux designers: 25 sourced (100%), hired 2 (66%)
- CFO: Improve payroll and payments to team members
- Controller: Analysis and proposal on TriNet conversion including cost/benefit analysis of making the move (support from People Ops)
- Controller: Transition expense reporting approval process to payroll and payments lead.
- Controller: Full documentation of payroll process (SOX-compliant)
- CFO: Improve financial performance
- Fin Ops: Marketing pipeline driven revenue model (need assistance from marketing team)
- Fin Ops: Recruiting/Hiring Model: redesign the GitLab model so that at least 80% of our hiring can be driven by revenue.
- Fin Ops: Customer support SLA metric on dashboard
- CFO: Build a scalable team
- Legal: Contract management system for non-sales related contracts.
- Legal: Implement a vendor management process
- Data and Analytics: Real Time Dashboard for Company critical data. Product Event and User Data Dashboards implemented (signed off by Product Management), Customer Success Dashboard implemented (signed off by Dir. of Customer Success), Marketing Dashboard implemented (signed off by marketing team).
- Data and Analytics: Increase adoption and usage of the data warehouse and dashboards. Self serve process for generating new events and dashboards documented, Data tests in place for all current and new data pipelines.
- Data and Analytics: Improve security of corporate data. Every ELT pipeline validates permissions.
- CCO: Efficient and Effective Hiring.
- New and improved ATS chosen and implemented for more efficient and effective hiring (key results: improved metrics, decreased time in process, and minimized manual effort for resume review and scheduling)
- More onboarding guidance, with sessions held for new hires on Monday and Tuesday and recorded for asynchronous value.
- First iteration of ELO score for interviewers.
- CCO: Build a strong and scalable foundation within People Ops
- Select Benefits, 401(k), Stock options administration, and Payroll providers to bring Payroll and benefits in-house
- Improve scalability and effectiveness of the 360 process and employee engagement survey process.
- CCO: Summit Success
- Complete the 2018 Summit successfully, measured through a survey of attendees and non-attendees
- Determine location, changes and improvements for the 2019 Summit
Retrospective
VPE
- Good
- We moved to GCP successfully
- Hiring the infra team went great
- Handbook was substantially improved
- All departments (except development) are on hiring pace
- Sourcing was done on time
- Error budgets incentivized GitLab.com availability without slowing the team down
- Bad
- We moved to GCP later than anticipated
- RepManager felt like a roll-of-the-dice during the migration. Should have been rock solid
- Development teams are behind their needed hiring pace (Backend Rails roles)
- Can’t yet measure availability simply (one metric)
- Pages outage during summit was self-inflicted
- Try
- Work with PeopleOps/Finance to do something about Rails hiring
- Rip out RepManager
- Stick to new Infra roadmap (solves multiple high-priority issues)
- Continue to steer culture of teams through deliberate promotions/hiring
Frontend
- Good
- …
- Bad
- …
- Try
- …
Dev Backend
- Good
- Hired manager for Manage
- Shipped some team pages and encouraged a lot of thought around how best to “market” teams both internally and externally
- Hiring process for developers is very thoroughly documented
- New consistent technical interview process is live and demonstrating great results
- Team is highly engaged in improving hiring
- Bad
- Still only on track to hit 35% of hiring goals by EOY
- Documentation KRs got largely finished, then moved to the back burner after the Summit
- Still a lack of clarity in the team about the new product categories and which teams are responsible for what aspects of the work, especially technical debt
- Try
- Be willing to adjust KRs mid-quarter if priorities shift dramatically (like they did in the wake of our hiring analysis)
- Set aggressive lag indicator targets for hiring to spur a sea change
- Communicate more directly and frequently with hiring managers until our pace is more comfortable
Plan
- Good
- Working with PM, FE, and UX, we made issues much smaller and easier to schedule
- We got a ‘free’ team member from an internal transfer, who was ready to start working at a high level immediately.
- Bad
- Still no hires: last hire was in May
- Lost six points from error budget very early in the quarter; fixed processes to reduce the human error in that incident
- Missed two performance improvements (one in review but not merged, one pushed to Q4)
- Batch commenting was only merged just before the 11.4 freeze, having started work in January
- Try
- Continue monitoring the due-22nd experiment
- Directly reach out to hiring prospects, rather than going via Recruiting
- Make our backend developer posting more attractive
- Make our backend developer posting more visible
Distribution
- Good
- Released Charts in GA on time, received positive feedback
- Prioritised important GitLab.com and customer issues in time
- Managed to preserve the error budget
- Sourcing was done on time
- Encouraged discussion with the whole engineering team over feature flags
- Bad
- Team morale dropped after releasing Charts as GA, exhaustion set in after months of pushing towards the big milestones
- Other work suffered with the focus on Charts
- Technical debt is on the rise across projects
- Hiring pipeline is of low quality
- Try
- Identify the most critical technical debt items and address them
- Create a better balance between the team-owned projects
- Restart the team training effort
Geo
- Good
- Completed the GCP migration
- 5m+ projects and millions of attachments/uploads migrated!
- Coordination between teams to execute the migration
- Teamwork to categorize Geo’s backlog to develop the roadmap
- Good blog post on how Geo was built
- Helpful and timely responses from FE and QA teams
- Bad
- Significant miss during the migration that HEAD was not replicated
- Many manual steps in the Geo runbooks
- QA test failures - work needs to be done to make them more effective
- Try
- Get back into a rhythm now that GCP migration is concluded
- Develop a more detailed roadmap and focus milestones on that roadmap
- Improve Geo’s part of the QA test suite
- Contribute further technical blog posts
Create/Manage (formerly Platform)
- Good
- Hit sourcing goal
- Bad
- No one new was hired
- The Platform team being split up into Create and Manage, and Manage getting its own Engineering Manager, took focus away from planned performance improvements.
- Try
- Active sourcing
Gitaly
- Good
- 1.0 shipped, NFS turned off for .com
- Gitaly very stable after 1.0 launch
- 1.1 almost complete, code duplication between gitlab-ce and gitaly-ruby is removed
- Plans in place for object deduplication and HA Gitaly
- Bad
- Project discipline has atrophied during the push to 1.0
- Still many urgent projects that need to be done “next” with a small team
- Still being interim managed by Director of Dev Backend
- Try
- Source even more aggressively for manager and developer hires
- Shift team rhythms back to being in sync with GitLab releases, focus on establishing normal project discipline rhythms and processes
Gitter
- Good
- Open-sourced Gitter Android and iOS apps that got some nice praise
- No https://gitter.im/ downtime 💪 with many releases
- Gitter Topics removed from the codebase, https://gitlab.com/gitlab-org/gitter/webapp/issues/1947
- Less complexity
- Removes our only usage of React
- Solid deprecation plan, blog post, ability to export, complete removal
- Bad
- PagerDuty alerts are too noisy
- Try
- Look at PagerDuty alerts and clean up anything that didn’t actually matter
Infrastructure
- Good
- GCP Migration done in August
- Adding more people to the team - good onboarding in Americas
- Bad
- DR implications and analysis with Hurricane Florence
- SRE team in Europe still on 2 person rotation
- Small self-inflicted incidents happened with GitLab Pages
- Try
- Focus on DR strategy for next quarter's OKRs
- Focus on prevention of incidents with iterations on change control processes
- Focus hiring in EMEA
Database
- Good
- Hired 2nd SRE manager with DBRE background
- No self-inflicted DB incidents
- Bad
- DBRE hiring moving slower than desired
- Try
- More focus on meeting sourcing numbers for DBRE role.
Ops Backend
- Good
- Lots of conversations with the team around throughput, and the feedback has been positive. We are seeing comments in retrospectives about having large MRs and seeing the value of breaking deliverables into smaller MRs that can be done quickly and reviewed within hours instead of days.
- Hired an Engineering Manager for the Monitor team. Seth joined the team this quarter.
- The Configure team regained Mayra back full-time, Thong joined the team and Dylan has done a great job in his new engineering manager role.
- Elliot joined as the Engineering Manager for the Verify and Release teams.
- Split Verify and Release team members to provide more focus.
- Ops team pages were updated with more details
- Added a director readme
- Bad
- We are still behind on our hiring. Was not able to hire the second Engineering Manager for Release.
- Was not able to start on the implementation of the dashboard for throughput so we can capture data for the team.
- Throughput OKR was not defined in a measurable way which made it hard to quantify and score.
- Try
- Focus on implementation of Throughput and other engineering metrics
- Try new sourcing methods and work on compensation issues
Monitoring
- Good
- Hired 2 developers. While it still misses our goal of hiring 3, we did pretty well.
- Weren’t able to accurately assess how many candidates were sourced for our positions because of the pooled back-end approach.
- Error budget was preserved at 100%. This may get harder to pull off as we release more features, increase our surface area, and define SLOs for the services we offer.
- Onboarded a new manager and a new developer during this time.
- Bad
- Completely missed the deliverable for the admin dashboard.
- Try
- Determine what SLOs we will be responsible for and implement them.
CI/CD
- Good
- Communication around planning improved a lot and involved more of the team
- Overall adoption of Throughput and smaller MRs
- Started the process of splitting the team into the Verify and Release teams
- Steve joined as a Backend Developer
- Matija joined our team (transitioned from another internal team)
- Bad
- Behind on hiring goals
- Development of features frequently gets slowed down by uncovering tech debt or dependencies
- Runner and pipeline features are hard for the Support Team to support, leading to an increase in the number of support requests for the team
- Some big MRs which were painful to get merged before feature-freeze
- Try
- Look for more ways to break MRs into smaller pieces
- Async retros to allow for greater participation
- Look for new ways to help us plan our releases to make things more predictable and less distracting
Secure
- Good
- CodeQuality successfully handed over to Verify.
- We identified friction points in our process, especially with the versioning of our tools. It will be fixed in Q4.
- Backend and GitLab skills are improving.
- We have more and more feedback from customers to help us prioritize features and bugs.
- Improved communication and planning a lot by involving BE, FE, and UX earlier in the process.
- Bad
- Technical prerequisites for storing data in the DB slipped several times. It’s really hard to define an MVP.
- Could not hire anyone during this quarter. The candidate pool is very small, and sourcing is tedious.
- Storing vulnerabilities data in the DB was more complex than expected. It’s also coupled with other features from other teams.
- We have too many domains (five) to cover. Even with CodeQuality gone, we still switch context all the time.
- Try
- We have a new recruiter for the Secure Team, and new requirements for the candidates. We will target pure Ruby on Rails developers for the open positions.
- We should have a maintainer in the team.
- Put in place a rotation for our Release Manager role. We need more people on the team for that.
- Reduce our technical debt as we add more members to the team.
Configure
- Good
- Managed to have 2 retros across a very wide timezone range [-7 UTC,+13 UTC]
- Hired a new backend developer (Thong) and a new frontend engineer (Jacques)
- Rotating daily standup led to people getting to know each other better
- Got useful feedback about Auto DevOps by working with the support team
- A lot more collaboration between cross functional teams from discovery through to implementation
- Finally managed to ship Auto DevOps enabled by default for all customers
- Our product vision for 2019 looks really exciting and cutting edge
- Our asynchronous communication has improved in order to face the challenge of our timezone range
- Made some progress towards RBAC support which is a feature that has been requested for a long time
- Shipped protected environments which is a first major improvement to support operators
- Bad
- Our features still have a very difficult set up process for local development
- We are not prioritizing our user research findings from Auto DevOps users
- It seems hard to see the impact of our work due to unknown (but seemingly small) user base
- Issues still seem to be too large and often take a whole month or slightly more to finish
- Critical features for Auto DevOps have still not been implemented
- Not enough hires for backend roles leading to not enough capacity to reach our ambitious goals
- Missed deliverables occur most months
- QA test coverage for our products has not improved in some time
- Auto DevOps on by default has received a great deal of negative feedback from customers
- Still have not managed to enable Auto DevOps by default on gitlab.com due to our CI infrastructure not being scalable enough
- Still have not managed to ship RBAC support which is critical for adoption of our Kubernetes integration by most organisations
- Communication via multiple channels (slack, MR, issue) for the same topic is difficult to follow
- Try
- Async retros so that we do not need somebody to stay up very late to attend retro
- Work more closely with Quality team to improve our test coverage
- Get closer to our users by regularly working with support and reading zendesk issues from customers
- Work more closely with sourcing to better sell our team and attract candidates with very specific skills and interest in Ops and Kubernetes
- Use threaded discussions on GitLab issues to discuss at a higher bandwidth on GitLab
- Try out Geekbot so we stay in touch even though our rotating standups are only twice a week
- Beer call once a quarter to get to know each other even better
Quality
- Good
- Focus and speciality within the group. Dev Stage dedicated test automation counterparts and Developer Productivity functions have booted up.
- Setup sub team weekly meetings for collaboration.
- Implemented triage-package to scale out triage issues to all engineering teams.
- Bad
- No assigned resources for the Ops stage.
- It’s a challenge to navigate and prioritize all the work we have since we work across multiple projects.
- Did not reach hiring goal.
- As we are adding more tests, suites are taking longer to run.
- Accidentally committed to 4 OKRs, rather than 3 :)
- Try
- Set up project management process and tooling in a common way for all Quality projects.
- Initiate better long term planning (roadmap) before epic/issue creation.
- Come up with a simple MVC for test parallelization.
Security
- Good
- All Q3 Goals were met.
- We hired 5 Security Engineers in Q3.
- For S1 security vulnerabilities, our MTTR averages under 30 days, below security industry standards.
- We have delivered many FGUs, which helps to drive overall awareness of security initiatives at GitLab.
- Bad
- Our MTTR for S2 security vulnerabilities is the next area that needs improvement.
- Both critical and regular security release processes could use more automation.
- Timezone coverage for Security team is still mostly concentrated in US and EU zones.
- Q3 goals did not fully reflect the breadth of security domain deliverables.
- Try
- Hire security engineers in APAC timezone to increase coverage for security incident response.
- Increase Q4 goals to cover more breadth of Security team initiatives.
- Use FGUs as a forum to drive accountability throughout GitLab, to improve MTTRs for security vulnerabilities.
Support
- Good
- Preparation in advance of Summit to ensure positive customer experience while maximizing support team engagement.
- Positive progress on exposing key Support metrics into Corporate metrics.
- Global collaboration on staying on top of our hiring plan.
- Bad
- Length of time to source and screen excellent Support Engineering Manager candidates in APAC.
- Experienced losing our first team member to voluntary termination.
- SLAs for self-managed customers continue to be below goal
- Some high-profile customers/prospects had a disruptive experience during the Summit
- Try
- Establishing a more streamlined candidate-to-hire process.
- Clarify expectations for each level of Support Engineering and Support Agent job roles to complement career growth.
- Ramp up sourcing/hiring in APAC
- Partner with sales to take especially great care of important customers/prospects prior to renewals and new contracts
Support - Self-Managed
- Good
- New Hires continue to make an impact quickly
- Senior Engineers are focusing on deep performance issues and surfacing problems (Gitaly/NFS).
- Bad
- Small Premium customers are starting to generate too many tickets.
- We haven’t leveraged ticket priority as much as we should.
- Our bootcamps have atrophied
- Knowledge is getting siloed
- Try
- Work with Customer Success to improve onboarding for ALL premium customers
- Shore up our ticket priority workflows
- Build a process to verify/enhance bootcamps
- Encourage the team (seniors specifically) to share more in a group setting
Support - Services
- Good
- Hit stride in hiring: additional headcount matches volume well.
- Worked cross-team with Security, Accounts and SMB Team on improving process
- Bad
- Ticket volume for .com customers is at a level where a single miss severely affects SLA performance
- GitHost app stopped upgrading customer instances after a bad version was posted on version.gitlab.com
- Try
- Revisiting breach notifications to ensure we aren’t missing tickets because of visibility
- Encouraging agents to do ad hoc pairing sessions for learning and reducing the bystander effect
UX
- Good
- We hired two excellent UX Designers (Amelia, and one soon to be announced)!
- Our hiring pipeline is strong with highly qualified candidates.
- Despite an increasing number of deliverables per milestone and unexpected UX needs for the 2019 vision, we were able to achieve 100% and 90% respectively on our two OKRs.
- Our department’s camaraderie and ability to remain aligned and connected has not been impacted by the company’s and department’s rapid growth.
- Designers embedded in cross-functional teams (stable counterparts) has enhanced collaboration and allowed the UX department to dig deeper into existing features.
- We added a UX Vision to our department handbook, setting the tone and direction for all of our efforts.
- Bad
- Design pattern library and design system issues take a long time to review and merge.
- Our old UX guide is still live, as not all of its material has been moved to design.gitlab.
- Design discussions still feel fragmented across multiple channels (issues, MR, slack, sync calls).
- Try
- Async retros for the UX department (separate from group retros) to surface shared problems and solutions.
- Aggressively break down and iterate on design pattern library and design system issues.
- Archive the old UX guide and make design.gitlab the SSOT for UX standards and guidelines.
- Investigate ways to make design discussion a first-class citizen in GitLab.
UX Research
- Good
- We created 100% of personas that were requested by UX Designers or the Product team.
- We conducted 62 user interviews, which led to the creation of 6 new personas.
- Emily von Hoffmann and Andy Volpe supported UX Research considerably by conducting and analysing user interviews. We couldn’t have achieved this OKR without their help.
- Product Marketing were very supportive of our efforts. They actively participated in Key Reviews and helped us shape the personas’ format and content.
- Despite the disappointingly low response rate to the survey, the data we collected provided insight into who qualifies as a churned GitLab.com user and how users first interact with GitLab. We also managed to triangulate the data with user interviews and provide Product with a provisional list of pain points for further exploration.
- Bad
- In order to identify 5 pain points for users who have left GitLab.com, we created a survey to send to churned users. We distributed the survey to 8000+ churned GitLab.com users whose details were supplied to us by Product. Unfortunately, the survey only received 126 partial responses. Of those responses, 33% of users confirmed that they were in fact still using GitLab. The recipient list was inadequate for the purposes of our research. This isn’t something Product could have foreseen.
- Try
- Closer collaboration with the Product team when creating OKRs.