Security at GitLab
Security Process and Procedures for Team Members
Accounts and Passwords
- Read and follow the requirements for handling passwords and other credentials in the GitLab Password Standards for all accounts used to conduct GitLab-related work. Using 1Password to generate and store the passwords is strongly recommended.
- Set up your Okta account at https://gitlab.okta.com, and use this as your primary means for accessing Applications supported in Okta. As part of setting up Okta, you’ll need to establish a strong password and set up at least one additional form of authentication.
- For your Okta password and other passwords that you won’t store in Okta, set up 1Password as your password manager and set a strong and unique Master Password.
- Keep your Master Password a secret. No other team members should know it, including admins. If the Master Password is known or disclosed to someone else, it should be changed immediately.
- Post a message in #it-ops if you forget your Master Password.
- Consider using a generated Master Password. Most human-created passwords are easy to guess. Let 1Password create a strong Master Password. But: you will need to memorize this Master Password.
- Do not let your password manager store the master password. It is okay to store the username.
- For more information, review 1Password’s Getting Started guide and view this video that guides you through the sign-up process.
- For account administrators, review 1Password’s admin guide.
- Enable two-factor authentication (2FA) for every account that supports it, using the most secure option available, as outlined in our password standard. 2FA is required, and users without it enabled who are stale for over 30 days will be blocked/suspended until resolved. This improves the security posture for both the user and GitLab. If any system provides an option to use SMS text as a second factor, this is highly discouraged: phone company security can be easily subverted by attackers, allowing them to take over a phone account. (Ref: 6 Ways Attackers Are Still Bypassing SMS 2-Factor Authentication / 2 minute YouTube social engineering attack with a phone call and crying baby)
- A FIDO2/WebAuthn hardware token can be used as a secure and convenient 2-factor authentication method for Okta, Google Workspace, GitLab instances, and many other sites. If you do not have one, you may consider purchasing one. GitLab’s standard is Yubico’s YubiKey. For more information on FIDO2/WebAuthn, visit the Tools and Tips page.
- If shared access to a single account is required by multiple team members, for example, a social media account, an Okta new application setup Issue should be created. The credentials will be stored and shared via Okta.
- If you find an existing shared account in 1Password, create an Issue to get it migrated to Okta.
Laptop or Desktop System Configuration
The following instructions are for Apple (MacBook Pro or Air) users. Linux users, please see the Linux Tools section of the handbook.
GitLab is currently utilizing JAMF for endpoint management and manages Mac encryption for you, so there is no need to encrypt your Mac yourself.
Set up a screen saver with password lock on your laptop with a timeout of 15 minutes or less. GitLab is currently utilizing JAMF for endpoint management and can assist with this step.
Never leave your unlocked computer unattended. Activate the screensaver, lock the desktop, or close the lid.
Terminate active sessions when finished, unless they can be secured by an appropriate locking mechanism, like a password protected screensaver. Further, log-off from applications or network services when no longer needed.
When backing up data, team members should use GitLab’s Google Drive. Our deployment is regularly tested and data at rest is encrypted by default. For alternative options, please reach out to IT.
Purchase (if necessary) and install security-related software.
- Little Snitch is an excellent personal firewall solution for macOS, recommended for monitoring application network communications.
- Refer to Why We Don’t Have A Corporate VPN for more information about personal VPN usage at GitLab
Do not allow your web browser (e.g. Chrome, Safari, Firefox) to store passwords when prompted. This presents an unnecessary risk and is redundant.
Do not install software with many known security vulnerabilities. Follow the Third Party Risk Management Procedure for review of services individually deployed on endpoint devices. After a decision regarding deployment of an endpoint management solution is made the process will be redesigned accordingly and services, where applicable, will be retroactively reviewed. Please ensure you continue to follow the requirements defined in the acceptable use policy.
Enable automatic software updates for security patches. On macOS, this is found under “System Preferences” -> “Software Update”, “Automatically keep my Mac up to date”. GitLab is currently utilizing JAMF for endpoint management and can assist with this step.
Enable your system’s built-in firewall. In macOS, this can be found in Security & Privacy under the Firewall tab. If the option reads Firewall: Off, you will need to click on the lock at the bottom of the dialog box to make changes, and click on Turn Firewall On (see screenshot).
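For those who prefer the command line, the same firewall state can be checked and enabled with macOS's built-in socketfilterfw utility (a sketch; the path is standard on recent macOS releases, and enabling requires admin rights):

```shell
# Query the current state of the macOS application firewall
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate

# Enable it (equivalent to clicking "Turn Firewall On"; requires sudo)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on
```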
Sometimes a team member needs to test a particular scenario that requires bypassing of the firewall. If this is the case, ensure one of the following network scenarios/configurations is used for your laptop:
- If you do not need Internet access during your test scenario, disconnect from the Internet before disabling the firewall for your tests and re-enable it before re-connecting to the Internet.
- If you must use a public network (such as while traveling), use a personal VPN to help protect your connection. Refer to the Personal VPN page for more information.
- Make sure the connected network is not a public network, or a network with a publicly-known WiFi password (e.g. a coffee shop WiFi network with the password written on a chalkboard). Your home network with your laptop behind the built-in firewall in your Internet router that protects your network is considered a non-public network. Refer to this guide for more information.
- Keep the firewall active and make use of virtual machines and containers to create a self-contained network configuration.
- If your testing is frequent, configure the firewall to only allow the ports needed for your testing, and stay on an isolated network or use a personal VPN.
- Contact the Security department in the #security Slack channel if you have questions about this.
Clean Desk/Clear Screen
All GitLab team members must keep their computers locked when not actively being used, and any sensitive GitLab information must be stored and secured when not in use while working from a shared or public space.
Refer to this guide for setting up a dedicated WiFi so that your work notebook is isolated from other personal devices in your home network.
Many services that team members use such as Slack and Zoom have mobile applications that can be loaded onto iOS or Android devices, allowing for use of those resources from a mobile phone. Refer to the acceptable use policy for more information on using a mobile device.
Most major applications (Slack, Zoom, Okta Verify) have been examined and vetted by the Security Team, but some applications are not only limited in the scope of data they can access, but also have security issues. In such cases, use the mobile device’s web browser for access to the resource. If you have a question about the security of a mobile app and want to know if you should be using it to access GitLab data, review the security tips on this page or contact the Security Team via Slack in the #security channel.
Google Cloud Resources
Some Google Cloud resources, if deployed with default settings, may introduce risk to shared environments. For example, you may be deploying a temporary development instance that will never contain any sensitive data. But if that instance is not properly secured, it could potentially be compromised and used as a gateway to other, more sensitive resources inside the same project.
Below are some steps you can take to reduce these risks.
Google Compute Instances
By default, Google will attach what is called the Compute Engine default service account to newly launched Compute Instances. This grants every process running on your new Compute Instance ‘Project Editor’ rights, meaning that if someone gains access to your instance they gain access to everything else in the project as well.
This default account should not be used. Instead, you should choose one of the following two options:
- If your instance does not need authenticated access to Google Cloud APIs, you should choose not to bind any service account at all. This can be done by appending the --no-service-account --no-scopes flags if using the gcloud command, or by selecting the following option in the web interface:
- If your instance does need to authenticate to certain Google Cloud APIs, you should use a specific service account that has been granted only the minimum IAM roles required for your application to function. Access Scopes are not a replacement for properly configured IAM permissions and in general should not be relied upon as a security mechanism.
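The two options above can be sketched with the gcloud CLI (instance, project, bucket, and service-account names here are hypothetical):

```shell
# Option 1: no service account at all -- processes on the instance get
# no default credentials for Google Cloud APIs
gcloud compute instances create dev-scratch \
  --zone=us-central1-a \
  --no-service-account --no-scopes

# Option 2: a dedicated service account with only the IAM roles the
# workload needs (here, read-only access to a single bucket)
gcloud iam service-accounts create dev-scratch-sa \
  --display-name="Minimal SA for dev-scratch"

gcloud storage buckets add-iam-policy-binding gs://my-dev-bucket \
  --member="serviceAccount:dev-scratch-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Bind it with the cloud-platform scope so that IAM permissions, not
# legacy access scopes, govern what the instance can do
gcloud compute instances create dev-scratch-2 \
  --zone=us-central1-a \
  --service-account=dev-scratch-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```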
When permitting access to Compute Instances via firewall rules, you should ensure you are exposing only the minimum ports to only the minimum instances required.
When creating a new firewall rule, you can choose to apply it to one of the following “Targets”:
- All instances in the network: This is probably not the option you want. Selecting this option is a common mistake and may expose insecure services on instances other than your own.
- Specified target tags: This is probably the option you want. This allows you to limit the rule to instances that are marked with a specific network tag. You should create a descriptive tag name like “allow-https-from-all” so that it can be easily identified and used when needed.
- Specified service account: This is a less likely option, but perfectly viable if you have already done some design around custom service accounts. It is similar to a tag but will be assigned automatically to all instances using a specific service account.
When choosing “Ports and Protocols” to expose, you should never select “Allow All” and should never manually enter entire ranges such as 1-65535. Instead, you should choose only the specific required TCP/UDP ports you need to expose.
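As a sketch, a tag-scoped rule exposing only TCP 443 might look like this with gcloud (rule, tag, and instance names are illustrative):

```shell
# Allow HTTPS from anywhere, but only to instances carrying the tag
gcloud compute firewall-rules create allow-https-from-all \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=allow-https-from-all

# Opt a specific instance into the rule by adding the tag
gcloud compute instances add-tags my-instance \
  --zone=us-central1-a \
  --tags=allow-https-from-all
```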
Google Kubernetes Engine Clusters
GKE nodes are Compute Instances, and by default use the same Compute Engine default service account described above. Despite making it the default, Google specifically states: “You should create and use a minimally privileged service account to run your GKE cluster instead of using the Compute Engine default service account.”
Whether deploying a GKE cluster manually or automatically via Terraform, you can follow these instructions to create and attach a service account with the minimum permissions required for a GKE cluster node to function.
In addition, you should enable Workload Identity and Shielded Nodes on all new clusters. This can be done by appending the --workload-pool=[PROJECT-ID].svc.id.goog --enable-shielded-nodes flags if using the gcloud command, or by selecting the following options in the web interface (located under the “Security” menu):
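Put together, a hardened cluster creation might look like the following sketch (project, cluster, and service-account names are hypothetical):

```shell
# Create a minimally privileged node service account ahead of time
gcloud iam service-accounts create gke-node-sa \
  --display-name="Minimal GKE node SA"

# Create the cluster with that service account, Workload Identity,
# and Shielded Nodes enabled
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --service-account=gke-node-sa@my-project.iam.gserviceaccount.com \
  --workload-pool=my-project.svc.id.goog \
  --enable-shielded-nodes
```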
Google Cloud Functions
When creating a Cloud Function with a “trigger type” of HTTP, Google provides two layers of access control. The first is an identity check, via the following two options under Authentication:
- Allow unauthenticated invocations: This will permit anyone on the Internet to invoke your function, supplying any type of input parameters they choose. This option should be avoided where possible.
- Require authentication: This will allow you to manage authorized users via Google Cloud. This is the preferred option.
The second is network-based access control, via the following options under Advanced Settings -> Connections -> Ingress Settings. You should choose the least permissive option that will still allow your function to work:
- Allow all traffic: This will permit HTTP invocations from any IP address.
- Allow internal traffic only: This restricts invocations to a source in the same Google Cloud project or the same VPC SC perimeter.
- Allow internal traffic and traffic from Cloud Load Balancing: This is the same as above with the added ability to send an invocation through Google’s load balancers.
Some use cases will prevent you from choosing the “best practice” when it comes to authenticating an inbound request. For example, you may wish to host a webhook target for an external service that doesn’t support the use of Google Cloud credentials. For this use case, you can store a complex, machine-generated secret as an environment variable inside your function and then ensure the requesting service includes that secret inside the request headers or JSON payload. More details and examples can be found here.
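As a sketch of the shared-secret approach (function name, region, and header name are all hypothetical; the function body itself must compare the header against the environment variable and reject mismatches):

```shell
# Generate a long, machine-generated secret
WEBHOOK_SECRET="$(openssl rand -hex 32)"

# Store it as an environment variable on the (unauthenticated) function
gcloud functions deploy my-webhook \
  --runtime=python310 \
  --trigger-http \
  --allow-unauthenticated \
  --set-env-vars="WEBHOOK_SECRET=${WEBHOOK_SECRET}"

# The external service includes the secret in a request header; the
# function code rejects any request where the header does not match
curl --request POST "https://REGION-PROJECT.cloudfunctions.net/my-webhook" \
  --header "X-Webhook-Secret: ${WEBHOOK_SECRET}" \
  --data '{"event": "push"}'
```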
Similar to Compute Instances and GKE clusters, Cloud Functions also bind to a service account by default. And once again, Google states that “it’s likely too permissive for what your function needs in production, and you’ll want to configure it for least privilege access”.
For most simple functions, this shouldn’t be an issue. However, it is possible that a complex function could be abused to allow the person invoking the function to impersonate that service account. For this reason, you’ll want to configure a new service account with the bare minimum permissions required for your function to operate.
You can then choose to use this new service account via the option under Advanced Settings -> Advanced -> Service account.
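With the gcloud CLI, the equivalent is the --service-account flag at deploy time (function and service-account names are hypothetical):

```shell
# Deploy the function bound to a minimally privileged service account
# instead of the permissive default
gcloud functions deploy my-function \
  --runtime=python310 \
  --trigger-http \
  --service-account=fn-minimal-sa@my-project.iam.gserviceaccount.com
```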
- Do not configure email forwarding of company emails (@gitlab.com) to a non-company email address. Follow the Unacceptable Email and Communications Activities policy.
- There are security implications involved in the use of “smart home devices” such as Amazon Echo or Google Home. In rare instances these devices can record conversations you might not have intended them to record. Many smart home devices will provide a visual and/or auditory indicator to let you know they’re activated; for many such devices, when they’re activated, they’re recording you and save a transcript of what you say while it’s active. If a smart home device is activated while you’re verbalizing sensitive information, wait for it to turn off or manually turn it off. If you think a smart device may have been activated while verbalizing sensitive information, most smart home devices allow you to delete transcripts and recordings. Please use your best judgement about the placement of these devices and whether or not to deactivate the microphone during sensitive discussions related to GitLab. If you ever have any questions or concerns, you can always contact the Security team.
- Do not use tools designed to circumvent network firewalls for the purpose of exposing your laptop to the public Internet. An example of this would be using ngrok to generate a public URL for accessing a local development environment. Our core product offers remote code execution as a feature. Other applications we test often expose similar functionality via the relaxed nature of development environments. Running these on a laptop exposed to the Internet would essentially provide a back-door for remote attackers to abuse. This could result in the complete compromise of your home network and all business and personal accounts that have been accessed from your machine. Our Acceptable Use Policy prohibits circumventing the security of any computer owned by GitLab, and using ngrok in this manner is an example of circumventing our documented firewall requirements. An alternative to ngrok is to use GitLab Sandbox Cloud to stand up temporary infrastructure.
- Follow the guidelines for identifying phishing emails provided in the training and How to identify a basic phishing attack.
- During the onboarding process you may receive account registration emails for your baseline entitlements. Before clicking these links, feel free to confirm with #it-ops that they initiated the process. Clicking itself can be a problem even when you don’t enter a password, because merely visiting a page can be used to execute a 0-day attack. The Security Team will, from time to time, simulate phishing attacks against our company email addresses to ensure everyone is aware of the threat.
- If you personally receive strange emails or notice anything else security-related, feel free to ask the Security Team for help; the attackers might be targeting the company.
- If you receive a security report of any kind (issue, customer ticket, etc.) never dismiss it as invalid. Please bring it to the attention of the Security Team, and follow the steps outlined on that team’s handbook page.
- Report suspect situations to an officer of the company or engage the Security Engineer on-call.
- If you have a security suggestion, create an issue on the security issue tracker and ping the security team. New security best practices and processes should be added to this page.
- Do not sign in to any GitLab related account using public computers, such as library or hotel kiosks.
Personal Access Tokens
- When creating a Personal Access Token, be sure to choose the appropriate scopes that only have the permissions that are absolutely necessary.
- Oftentimes a Project Access Token might be sufficient instead of a Personal Access Token. Project Access Tokens have a much more limited scope and should be preferred over Personal Access Tokens whenever possible.
- Always set an expiration for your tokens when creating them. Tokens should preferably expire in a matter of hours or a day.
- Be mindful to keep these personal access tokens secret. Be particularly careful not to accidentally commit them in configuration files, paste them into issue or merge request comments, or otherwise expose them.
- Please consider periodically reviewing your currently active Personal Access Tokens and revoking any that are no longer needed.
- Personal Access Tokens will be highly discouraged within the GitLab production environment, and disallowed/disabled wherever possible. Existing tokens shall remain, but additional issuance will not be permissible/possible.
- If you believe a personal access token has been leaked, revoke it immediately (if possible) and contact the security team using the /security command in Slack.
Should a team member lose a device such as a thumb drive, YubiKey, mobile phone, tablet, laptop, etc. that contains their credentials or other GitLab-sensitive data, they should report the issue using the /security command in Slack to engage SIRT.
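The token-hygiene steps above (reviewing and revoking tokens) can also be done via the GitLab REST API; a sketch using curl, assuming GITLAB_TOKEN holds a token with the api scope:

```shell
# List personal access tokens for the current user
curl --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.com/api/v4/personal_access_tokens"

# Revoke a token by its numeric ID once it is no longer needed
curl --request DELETE \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.com/api/v4/personal_access_tokens/<token_id>"
```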
GitLab provides a firstname.lastname@example.org email address for team members to use in situations when Slack is inaccessible and an immediate security response is required. This email address is only accessible to GitLab team members and can be reached from their gitlab.com or personal email address as listed in Workday. Using this address provides an excellent way to limit the damage caused by the loss of one of these devices.
Additionally if a GitLab team member experiences a personal emergency the People Group also provides an emergency contact email.
The Security Department provides essential security operational services, is directly engaged in the development and release processes, and offers consultative and advisory services to better enable the business to function while minimizing risk.
To reflect this, we have structured the Security Department around four key tenets, which drive the structure and the activities of our group. These are:
- Secure the Product - Security Engineering Sub-department
- Protect the Company - Security Operations Sub-department
- Lead with Data - Threat Management Sub-department
- Assure the Customer - Security Assurance Sub-department
2021 was a productive and accomplished year for GitLab Security. You can find the many ways we made GitLab and our customers more secure in FY22. In FY23 (Feb 2022 - Jan 2023) we will continue moving the security needle forward as we focus on increased involvement in product features, diversifying our certification roadmap, and increased visibility of our threat landscape.
The Security Assurance sub-department continues to improve customer engagement and advance our SaaS security story. Independent security validation (compliance reports and certifications) is a critical component to ensuring transparency and adequacy of our security practices. Current and prospective customers highly value independent attestations of security controls and rely on these to reaffirm security of the software and inherent protection of their data. FY22 saw expansion of GitLab’s SOC 2 report to include the Security and Confidentiality criteria along with achievement of GitLab’s very first ISO/IEC 27001 certification. In FY23 we will continue to grow GitLab’s certification portfolio through SOC and ISO expansion, with an additional focus on compliance offerings geared towards heavily regulated markets like FIPS 140-2 and FedRAMP. These audits will greatly expand our ability to reach new markets, attract new customers, increase contract values, and make GitLab even more competitive in the enterprise space. A heavy focus will be placed on tooling and automation in FY23 to enable our rapid growth.
The Security Engineering sub-department’s focus in FY23 will continue to be in the direction of a proactive security stature. Adoption of additional automation and key technology integrations will help further increase efficiency and effectiveness. After the shift left accomplished last year, our ability to detect and remediate risks pre-production has improved. Building on this capability, improving visibility and alerting on vulnerabilities detected as close to code development as possible will be a new focus. Continued maturity of our infrastructure security, log aggregation, alerting, and monitoring will build upon the increased infrastructure visibility and observability accomplished last year. All of this will contribute towards minimizing risk and exposure in a proactive manner.
For FY23 the Security Operations sub-department will be committed to a focus on anti-abuse and incident response process maturity. Using established maturity frameworks, the program will focus on utilizing existing technologies with new expanded datasets supported by refined processes, resulting in faster time to triage and shorter time to remediate. Additional focus on gaining a deeper understanding of security incidents, abuse, and their causes will drive additional preventative practices. Altogether, this will result in fewer security incidents, less abuse, and a more secure, more reliable service for all GitLab users.
Our newest sub-department, Threat Management: FY23 began with the creation of a new sub-department known as Threat and Vulnerability Management. This department will contain our Red Team, Security Research Team, and a newly formed Vulnerability Management team. While the focus of the Red Team and Security Research teams will not change, the newly formed Vulnerability Management team will take an iterative approach to better understanding and managing vulnerabilities across all of GitLab. Initially, Vulnerability Management will be very focused on implementing an initial process to better track and analyze cloud assets (GCP, AWS, Azure, DO) for vulnerabilities. Once this initial process is in place and being executed on, we will begin expanding coverage to the GitLab product, specific business-critical projects, and other potential weaknesses. The overall goal of this team will be to create a holistic view of GitLab’s attack surface and ensure that the necessary attention is given to remediating issues. FY23 will also see the introduction of several new security teams. In addition to the Vulnerability Management team mentioned above, we are also adding a Log Management team. This team will report into the Security Engineering sub-department and will be responsible for creating a more holistic approach to log management, incident response, and forensic investigation.
Lastly, we value the opinions and feedback of our team members and encourage them to submit ideas handbook first (directly to the handbook in the form of an MR). We saw incredible gains in our culture amp survey results in FY22 and going forward we are committed to continuous improvement of our leadership team, team growth and development, and GitLab culture within the Security Department.
We are also Product Development
Unlike typical companies, part of the mandates of our Security, Infrastructure, and Support Departments is to contribute to the development of the GitLab Product. This follows from these concepts, many of which are also behaviors attached to our core values:
As such, everyone in the department should be familiar with, and be acting upon, the following statements:
- We should all feel comfortable contributing to the GitLab open source project
- If we need something, our first instinct should be to get it into the open source project so it can be given back to the community
- Try to get it in the open source project first, rather than later, even if it’s 2x harder
- We should be using the whole product to do our jobs
- We are all familiar with our Dogfooding process and follow it
- We should not expect new team members to join the company with these instincts, so we should be willing to teach them
- It is part of managers’ responsibility to teach these values and behaviors
This topic is part of our Engineering FY23 Direction.
Security Vision and Mission
Our vision is to transparently lead the world to secure outcomes.
Our mission is to enable everyone to innovate and succeed on a safe, secure, and trusted DevSecOps platform. This will be achieved through 5 security operating principles:
- Accelerate business success with a focus on:
- Prioritize ‘boring’, iterative solutions that minimize risk
- Find ways to say Yes
- Understand goals before recommending solutions
- Use GitLab first
- Efficient operations with a focus on:
- Technical controls over handbook rules
- Leverage automation first (robots over humans)
- Responsible decisions (Spending, Tooling, Staffing, etc) over low ROI (return on investment) decisions
- Reusable or repeatable over singular solutions
- Transparency with a focus on:
- Responsible protection of MNPI (material non-public information)
- Evangelize dogfooding of GitLab publicly
- Lead with metrics
- Balance security with usefulness
- Risk Reduction with a focus on:
- Secure by default
- Preventative controls over detective controls
- Solving root causes over treating symptoms
- Visibility through Coverage, Discoverability, Observability
- Collaborative Culture with a focus on:
- Working together on common solutions
- Solve shared problems with shared solutions
- Simplifying language for everyone to understand
- Avoiding security jargon
- Seek opportunities to help others succeed
To help achieve the vision of transparently leading the world to secure outcomes, the Security Department has nominated a Security Culture Committee.
Secure the Product - Security Engineering
The Security Engineering teams below are primarily focused on Securing the Product. This reflects the Security Department’s current efforts to be involved in the Application development and Release cycle for Security Releases, Security Research, our HackerOne bug bounty program, Security Automation, External Security Communications, and Vulnerability Management.
The term “Product” is interpreted broadly and includes the GitLab application itself and all other integrations and code that is developed internally to support the GitLab application for the multi-tenant SaaS. Our responsibility is to ensure all aspects of GitLab that are exposed to customers or that host customer data are held to the highest security standards, and to be proactive and responsive to ensure world-class security in anything GitLab offers.
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre and post deployment assessments are completed. Initiatives for this specialty also include:
- Perform vulnerability management and be a subject matter expert (SME) for mitigation approaches
- Support and evolve the bug bounty program
- Conduct risk evaluation of GitLab product features
- Conduct application security reviews, including code review and dynamic testing
- Participate in initiatives to holistically address multiple vulnerabilities found in a functional area
- Develop security training and socialize the material with internal development teams
- Develop automated security testing to validate that secure coding best practices are being used
- Facilitate preparation of both critical and regular security releases
- Guide, advise, and assist product development teams as SMEs in the area of application security
The Infrastructure Security team consists of cloud security specialists that serve as a stable counterpart to the Infrastructure Department and their efforts. The team is focused on two key aspects of security:
- The security of GitLab.com’s infrastructure
- The availability and scalability of Security’s own infrastructure
The Security Logging team is focused on guaranteeing that GitLab has the data coverage required to:
- Perform the threat analysis, alerting and threat detections necessary to protect the company and its customers
- Ensure compliance with internal policies, standards, and regulatory requirements.
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
- Assist other security specialty teams in their automation efforts
- Assess security tools and integrate tools as needed
- Define and own metrics and KPIs to determine the effectiveness of security programs
- Define, implement, and monitor security measures to protect GitLab.com and company assets
- Design, plan, and build new products or services to aid and improve security of the product and company
Security External Communications
The External Communications Team leads customer advocacy, engagement and communications in support of GitLab Security Team programs. Initiatives for this specialty include:
- Increase engagement with the hacker community, including our public bug bounty program.
- Build and manage a Security blogging program.
- Develop social media content and campaigns, in collaboration with GitLab social media manager.
- Manage security alert email notifications.
- Collaborate with corporate marketing, PR, Community Advocates, and Developer Evangelism teams to help identify opportunities for the Security Team to increase industry recognition and thought leadership position.
Protect the Company - Security Operations
Security Operations Sub-department teams are primarily focused on protecting GitLab the business and GitLab.com. This encompasses protecting company property as well as preventing, detecting, and responding to risks and events targeting the business and GitLab.com. This sub-department includes the Security Incident Response Team (SIRT), Trust and Safety team, and Red team.
These functions have the responsibility of shoring up and maintaining the security posture of GitLab.com to ensure enterprise-level security is in place to protect our new and existing customers.
Security Incident Response Team
The SIRT team is here to manage security incidents across GitLab. These stem from events that originate outside of our infrastructure, as well as those internal to GitLab. This is often a fast-paced and stressful environment where responding quickly and maintaining one’s composure is critical.
More than just being the first to acknowledge issues as they arise, SIRT is responsible for leading, designing, and implementing the strategic initiatives to grow the Detection and Response practices at GitLab. These initiatives include:
- Work with the internal and external partners to ingest logging and alerting into our centralized monitoring solution
- Triage and analysis of alerting to determine validity, how to remediate and/or prevent incidents, then act accordingly
- Coordinate localized or company-wide response to security incidents
- Define and lead vulnerability management for GitLab Team Members and the production/pre-production environments as part of GitLab.com
- Incorporate current security trends, advisories, publications, and academic research into our security practices
- Deploy and maintain security monitoring and analysis solutions for GitLab the business and GitLab.com
SIRT can be contacted on Slack via our handle @sirt-members or in a GitLab issue using @gitlab-com/gl-security/security-operations/sirt. If your request requires immediate attention, please review the steps for engaging the security on-call.
Trust and Safety
Initiatives for this specialty include:
- Detection and mitigation of abusive activity on GitLab.com.
- DMCA Notice and Counter-Notices processing.
- Escalating potential abuse vectors to stakeholders for mitigation.
- Research and prevention of trending abuse methodologies.
For more information, please see our Resources Section.
Lead with Data - The Threat Management Sub-department
Threat Management Sub-department teams are cross-functional. They are responsible for collaborating across the Security department to identify, communicate, and remediate threats or vulnerabilities that may impact GitLab, our Team Members or our users and the community at large.
Red Team
GitLab’s internal Red Team emulates adversary activity to improve GitLab’s enterprise and product security. This includes activities such as:
- Performing exercises with SecOps to collaboratively and rapidly iterate on improving GitLab’s security posture. These exercises are referred to as purple team exercises, merging the blue (SecOps) and red teams.
- Performing exercises to reflect simulated adversarial attempts to compromise organizational mission/business functions and provide a comprehensive assessment of the security state of information systems and organizations.
- Simulated adversarial attempts to compromise organizational missions/business functions and the information systems that support them may include technology-focused attacks (e.g., interactions with hardware, software, or firmware components and/or mission/business processes) and social engineering-based attacks (e.g., interactions via email, telephone, shoulder surfing, or personal conversations).
Security Research
Security Research team members focus on security problems that require a high level of expertise and the development of novel solutions. This includes in-depth security testing against FOSS that is critical to GitLab, and development of new security capabilities. Initiatives for this specialty include:
- Vulnerability Research into tools and applications that are integrated with, or used at GitLab
- Development of proof-of-concept code to demonstrate impact of security findings
- Development and demonstration of novel defensive and offensive capabilities
- Following GitLab’s responsible disclosure policy for third party disclosure
- Sharing results widely through blog posts, conference talks, and participation in industry initiatives
Security research specialists are subject matter experts (SMEs) with highly specialized security knowledge in specific areas, including reverse engineering, incident response, malware analysis, network protocol analysis, cryptography, and so on. They are often called upon to take on security tasks for other security team members as well as other departments when highly specialized security knowledge is needed. Initiatives for SMEs may include:
- Security testing of electronics being used as swag by Marketing to be handed out at GitLab events
- Network analysis and/or reverse engineering of a closed source application used with a third party SaaS app integration (e.g. iOS/Android app)
- “Test” the guidelines outlined in a detailed step-by-step instructional document used in the configuration of an asset to ensure the asset is properly secured
Security research specialists are often used to promote GitLab thought leadership by engaging as all-around security experts, to let the public know that GitLab doesn’t just understand DevSecOps or application security, but has a deep knowledge of the security landscape. This can include the following:
- Submit security-related technical talks for presentations at security conferences as a GitLab team member
- Handle security-related questions by the Marketing/PR teams in response to questions from the press, or even direct press interviews
Security Threat & Vulnerability Management
Security Threat & Vulnerability Management is responsible for the recurring process of identifying, classifying, prioritizing, mitigating, and remediating vulnerabilities. This process is designed to provide insight into our environments, leverage GitLab for vulnerability workflows, promote healthy patch management among other preventative best-practices, and remediate risk; all with the end goal to better secure our environments, our product, and the company as a whole.
Assure the Customer - The Security Assurance Sub-department
The Security Assurance sub-department comprises the teams below. They target Customer Assurance projects among their responsibilities. This reflects the need for us to provide resources to our customers to assure them of the security and safety of GitLab as an application to use within their organisation and as an enterprise-level SaaS. This also involves providing appropriate support, services, and resources to customers so that they trust GitLab as a Secure Company, a Secure Product, and a Secure SaaS.
Field Security
The Field Security team serves as the public representation of GitLab’s internal Security function. We are tasked with providing high levels of security assurance to internal and external customers through the completion of Customer Assurance Activities, maintenance of Customer Assurance Collateral, and evangelism of Security Best Practices.
Initiatives for this specialty include:
- Facilitating Customer Assurance activities including The Trust Site and The Customer Assurance Package.
- Enabling the Sales organization through security training, collateral development, RFP maintenance and customer support
- Evangelizing Security Best Practices to customers and internal teams
- Managing customer security questions and escalating potential security issues to appropriate teams and drive to resolution
Security Compliance
Operating as a second line of defense, Security Compliance’s core mission is to implement a best-in-class governance, risk, and compliance program that encompasses SaaS, on-prem, and open source instances. Initiatives for this specialty include:
- Maintaining a certification roadmap based on customer needs, e.g.:
- ISO 27001
- SOC 2
- Monitoring the adequacy and effectiveness of GitLab security common controls and timely remediation of observations
- Facilitating external certification audits to include timely remediation of observations
- Assisting Security leadership in developing processes and controls to manage risks and issues
- Proposing compliance features for the GitLab product in order to help our customers more easily achieve their compliance goals
For additional information about the Security Compliance program see the Security Compliance team handbook page or refer to GitLab’s security controls for a detailed list of all compliance controls organized by control family.
Security Risk
We support GitLab’s growth by effectively and appropriately identifying, tracking, and treating Security Operational and Third Party risks.
Initiatives for this specialty include:
- Maintaining a Security Operational Risk Management program, executing annual operational security risk assessments, and managing a consolidated security risk register.
- Maintaining a Third Party Risk Management program
It’s important to note that the three tenets do not operate independently of each other, and every team within the Security Department provides an important function to perform in order to progress these tenets. For example, Application Security may be strongly focused on Securing the Product, but it still has a strong focus around customer assurance and protecting the company in performing its functions. Similarly, Security Operations functions may be engaged on issues related to Product vulnerabilities, and the resolution path for this deeply involves improving the security of product features, as well as scoping customer impact and assisting in messaging to customers.
Other groups and individuals
Security Program Management
Security Program Management is responsible for maintaining oversight of and driving security initiatives across Product, Engineering, and Business Enablement. This includes tracking, monitoring, and influencing the priority of significant security objectives, goals, and plans/roadmaps from all security sub-departments. Security Program Manager Job Family
Security Program areas of focus
- Drive Accountability & Visibility for Program Objectives & Goals
- Drive, Gather, & Examine Program Needs & Opportunities through Intra & Inter Organizational Collaboration
- Provide Insights & Suggestions Impacting Program Strategy & Roadmap
- Assist in Gathering & Prioritizing Program Risks, Requirements, & Alignment to Influence Remediation
- Drive & Define Acceptance Criteria, Value Proposition, Milestones to Visualize and Communicate Program Effectiveness
- Develop Repeatable, Scalable, Efficient, Effective, Processes & Procedures
Security Architecture
Security Architecture plans, designs, tests, implements, and maintains the security strategy and solutions across the entire GitLab ecosystem.
Contacting the Team
Engaging the Security On-Call
At GitLab, we believe that the security of the business should be a concern of everyone within the company and not just the domain of specialists. If you have identified an urgent security issue, or you need immediate assistance from the Security Department, please refer to Engaging the Security Engineer On-Call.
Please be aware that the Security Department can only be paged internally. If you are an external party, please proceed to Vulnerability Reports and HackerOne section of this page.
- Use the /security Slack command to be guided through a form that engages the Security Engineer On-Call.
- For general Q&A, GitLab Security is available in the #security channel in GitLab Slack.
- For low severity, non-urgent issues, SIRT can be reached by mentioning @sirt-members in Slack or by opening an issue with /security in Slack. Please be advised the SLA for Slack mentions is 6 hours on business days.
Sub-groups and projects
Many teams follow a convention of having a GitLab group team-name-team with a primary project used for issue tracking underneath team-name or similar.
- @gitlab-com/gl-security is used for @-mentioning the entire Security Department
- @gitlab-com/gl-security/security-managers is used for @-mentioning all managers in the Security Department
- public (!) Security Department Meta is for Security Department initiatives, ~meta and backend tasks, and a catch-all for anything not covered by other projects
- Security Assurance (@gitlab-com/gl-security/security-assurance)
- Security Engineering (@gitlab-com/gl-security/engineering-and-research)
- gitlab-com/gl-security/engineering-and-research-meta is for sub-department-wide management and planning issues.
- @gitlab-com/gl-security/appsec is the primary group for @-mentioning the Application Security team.
- @gitlab-com/gl-security/automation is the primary group for @-mentioning the Security Automation team.
- Security Operations (@gitlab-com/gl-security/security-operations) Security Operations Sub-department
- @gitlab-com/gl-security/security-operations/sirt is the primary group for @-mentioning the Security Incident Response Team (SIRT).
- SIRT (private) for SIRT issues.
- @gitlab-com/gl-security/security-operations/trust-and-safety is the primary group for @-mentioning the Trust & Safety team.
- #security - Used for general security questions and posting of external links for discussion. Company-wide security relevant announcements are announced in #whats-happening-at-gitlab and may be copied here.
- #security-department - Daily questions and discussions focused on work internal to the security department. Can be used for reporting when unsure of where to go.
- #abuse - Used for reporting suspected abusive activity/content (GitLab Internal) as well as general discussions regarding anti-abuse efforts. Use @trust-and-safety in the channel to alert the team to anything urgent.
- #security-department-standup - Private channel for daily standups.
- #incident-management and other infrastructure department channels
- #security-alert-manual - New reports for the security department from various intake sources, including ZenDesk and new HackerOne reports.
- #hackerone-feed - Feed of most activity from our HackerOne program.
- #abuse* - Multiple channels for different notifications handled by the Security Department.
- Use the @sirt-members mention in any Slack channel to tag the members of the Security Incident Response Team (SIRT).
- Use the @sec-assurance-team mention in any Slack channel to tag the members of the Security Compliance and Risk & Field Security teams.
- Use the @field-security mention in any Slack channel to tag the members of the Field Security team.
- Use the @appsec-team mention in any Slack channel to tag the members of the Application Security team.
- Use the @trust-and-safety mention in any Slack channel to tag the members of the Trust & Safety team.
External Contact Information
External researchers or other interested parties should refer to our Responsible Disclosure Policy for more information about reporting vulnerabilities. Customers can contact Support or the Field Security team.
Ransomware
Ransomware is a persistent threat to many organizations, including GitLab. In the event of a ransomware attack involving GitLab assets, it’s important to know the existing response procedures in place. Given the variability of targets in such attacks, it’s critical to adapt to existing circumstances and understand that disaster recovery processes are in place to avoid paying any ransom. GitLab’s Red Team has done extensive research to determine the most likely targets to be affected. As a result, the following guidelines are intended to help bootstrap an efficient response to protect the organization.
Critical First Steps:
- Engage the SIRT team as soon as a ransomware attack is detected
- The SIRT team will then follow the incident response guide and incident communication plan and reference the relevant run book.
- Responders should leverage GitLab’s established rapid engineering response plan during the mitigation phase.
- The Business Continuity & Disaster Recovery Controls handbook page should be referenced for relevant information.
Depending on the impacted resources, the following teams should be engaged and made aware of the issue created for the rapid engineering response. Note that this is not a comprehensive list; which teams to engage depends on the impacted assets.
- Database: Disaster Recovery Team - responsible for disaster recovery strategy for the PostgreSQL database.
- Infrastructure Team - availability, reliability, performance, and scalability of GitLab SaaS software
- Infrastructure Security Team - infrastructure teams stable counterpart focused on cloud infrastructure security, best practices, and vulnerability management
- Business Technology Engineering - endpoint and systems access management
- Support Team - responding to customer or employee inquiries regarding system outages
- Legal & Corporate Affairs
- Security Assurance - assuring the security of GitLab as an enterprise application
- Marketing - accurately represent GitLab and our products in our marketing, advertising, and sales materials.
Once we’ve determined that we need to communicate externally about an incident, the SIMOC should kick off our Security incident communications plan and key stakeholders will be engaged for collaboration, review and approval on any external-facing communications. Note: if customer data is exposed, external communications may be required by law.
Security Releases
GitLab releases patches for vulnerabilities in dedicated security releases. There are two types of security releases: a monthly, scheduled security release, and ad-hoc security releases for critical vulnerabilities. For more information, you can visit our security FAQ. You can see all of our regular and security release blog posts here. In addition, the issues detailing each vulnerability are made public on our issue tracker 30 days after the release in which they were patched.
Timing of the monthly security release
Our team targets release of the scheduled, monthly security release around the 28th, or 6-10 days after the monthly feature release and communicates the release via blog and email notification to subscribers of our security notices.
Receive notification of security releases
- To receive security release blog notifications delivered to your inbox, visit our contact us page.
- To receive release notifications via RSS, subscribe to our security release RSS feed or our RSS feed for all releases.
Security release related documentation
- Further definition, process and checklists for security releases are described in the release/docs project.
- The policies for backporting changes follow Security Releases for GitLab EE.
- For critical security releases, refer to Critical Security Releases in the release/docs project.
- Incident-Tools (private) for working scripts and other code during or while remediating an incident. If the tool is applicable outside of the GitLab.com environment, consider if it’s possible to release it when the ~security issue becomes non-confidential. This group can also be used for private demonstration projects for security issues.
- security-tools (mostly private) contains some operational tools used by the security teams. Contents and/or configurations require that most of these projects remain private.
Other Frequently Used GitLab.com Projects
Security crosses many teams in the company, so you will find issues across all GitLab projects, especially:
When opening issues, please follow the Creating New Security Issues process for using labels and the confidential flag.
Other Resources for GitLab Team Members
- Security Best Practices, using 1Password and similar tools, are documented on their own security best practices page.
- Secure Coding Training.
- GitLab.com data breach notification policy.
- GitLab Internal Acceptable Use Policy.
- For GitLab.com, we have developed a Google Cloud Platform (GCP) Security Guidelines Policy document, which outlines recommended best practices, and is enforced through our security automation initiatives.
- GitLab Security Tanuki for use on security release blogs, social media, and security related swag as appropriate.
- Security READMEs
- Working in Security
AI in Security Learning Group
This group is setup to help interested Security team members get up to speed with AI technologies and how to secure them. For more information, see the AI in Security Learning Group page.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here’s how the team can set priorities internally for subsequent communication (inspired in part by how the support team does this).
Creating New Security Issues
New security issues should follow these guidelines when being created on GitLab.com:
- Create new issues as confidential if unsure whether the issue is a potential vulnerability or not. It is easier to make an issue that should have been public open than to remediate an issue that should have been confidential. Consider adding the /confidential quick action to a project issue template.
- Always label as ~security at a minimum. If you’re reporting a vulnerability (or something you suspect may possibly be one) please use the Vulnerability Disclosure template while creating the issue. Otherwise, follow the steps here (with a security label).
- Add any additional labels you know apply. Additional labels will be applied by the security team and other engineering personnel, but it will help with the triage process:
  - Team or devops lifecycle labels
  - ~customer if the issue is a result of a customer report
  - ~internal customer should be added by team members when the issue impacts GitLab operations.
  - ~dependency update if the issue is related to updating to newer versions of the dependencies GitLab requires.
  - ~featureflag:: scoped labels if the issue is for functionality behind a feature flag
- Issues that contain customer specific data, such as private repository contents, should be assigned ~keep confidential. If possible, avoid this by linking resources only available to GitLab team members, for example, the originating ZenDesk ticket. Label the link with (GitLab internal) for clarity.
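The guidelines above can be applied in one step from the issue description using GitLab quick actions; a minimal sketch (the labels beyond ~security are illustrative):

```
/confidential
/label ~security ~customer
```

Placing these quick actions in a project issue template makes new reports confidential and labeled by default.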
Occasionally, data that should remain confidential, such as the private project contents of a user that reported an issue, may get included in an issue. If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.
For review by the Application Security team, @ mention @gitlab-com/gl-security/appsec.
For more immediate attention, refer to Engaging security on-call.
Severity and Priority Labels on ~security Issues
Severity and priority labels are set by an application security engineer at the time of triage if and only if the issue is determined to be a vulnerability. To identify such issues, the engineer will add the ~bug::vulnerability label. The severity label is determined by CVSS score, using the GitLab CVSS calculator. If another team member feels that the chosen ~severity/~priority labels need to be reconsidered, they are encouraged to begin a discussion on the relevant issue.
The presence of the ~bug::vulnerability label modifies the standard severity labels (~severity::1 through ~severity::4) by additionally taking into account likelihood as described below, as well as any other mitigating or exacerbating factors. The priority of addressing ~security issues is also driven by impact, so in most cases, the priority label assigned by the security team will match the severity label. Exceptions must be noted in the issue description or comments.
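The CVSS-to-severity mapping can be sketched as a small helper; the thresholds below follow the standard CVSS v3.1 qualitative ratings and are an assumption here, not GitLab's authoritative mapping (that lives in the GitLab CVSS calculator):

```python
def severity_label(cvss_score: float) -> str:
    """Map a CVSS v3.1 base score to a severity label.

    Thresholds follow the common CVSS qualitative rating scale;
    the authoritative mapping is the GitLab CVSS calculator.
    """
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if cvss_score >= 9.0:
        return "severity::1"  # Critical
    if cvss_score >= 7.0:
        return "severity::2"  # High
    if cvss_score >= 4.0:
        return "severity::3"  # Medium
    return "severity::4"      # Low
```

Likelihood and other mitigating or exacerbating factors can then shift the label up or down from this starting point, as described above.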
The intent of tying ~severity/~priority labels to remediation times is to measure and improve GitLab’s response time to security issues, to consistently meet or exceed industry standard timelines for responsible disclosure. Mean time to remediation (MTTR) is a metric that may be evaluated by users as an indication of GitLab’s commitment to protecting our users and customers. It is also an important measurement that security researchers use when choosing to engage with the security team, either directly or through our HackerOne Bug Bounty Program.
If a better understanding of an issue leads us to discover the severity has changed, recalculate the time to remediate from the date the issue was opened. If that date is in the past, the issue must be remediated on or before the next security release.
Due date on ~security issues
For ~security issues with the ~bug::vulnerability label and a severity of ~severity::3 or higher, the security engineer assigns the due date, which is the target date of when fixes should be ready for release.
This due date should account for the Time to remediate times above, as well as the monthly security releases on the 28th of each month. For example, suppose today is October 1st and a ~security issue is opened. It must be addressed in a security release within 30 days, which is October 31st; therefore, it must catch the October 28th security release. Furthermore, the Security Release Process deadlines say that all merge requests associated with the fix must be ready 48 hours before the due date of the security release, which would be October 26th. So the due date in this example must be October 26th.
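The worked example above can be expressed as a short calculation. The 30-day window, the release on the 28th, and the 48-hour merge-request deadline come from the text; the function itself is an illustrative sketch, not an official tool:

```python
from datetime import date, timedelta

RELEASE_DAY = 28                 # monthly security release targets the 28th
MR_DEADLINE = timedelta(days=2)  # fix MRs must be ready 48 hours earlier

def fix_due_date(opened: date, window_days: int = 30) -> date:
    """Return the date by which fix merge requests must be ready."""
    deadline = opened + timedelta(days=window_days)
    # Latest monthly security release that still falls within the window.
    release = date(deadline.year, deadline.month, RELEASE_DAY)
    if release > deadline:
        # Fall back to the previous month's release.
        last_of_prev = deadline.replace(day=1) - timedelta(days=1)
        release = last_of_prev.replace(day=RELEASE_DAY)
    return release - MR_DEADLINE

# October 1st example from the text: the window ends October 31st,
# the release is October 28th, so MRs are due by October 26th.
print(fix_due_date(date(2024, 10, 1)))  # 2024-10-26
```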
Note that some
~security issues may not need to be part of a product release, such as
an infrastructure change. In that case, the due date will not need to account for
monthly security release dates.
On occasion, the due date of such an issue may need to be changed if the security team needs to move up or delay a monthly security release date to accommodate for urgent problems that arise.
Product Managers and Engineering Managers should follow the recommended guidance when scheduling ~security issues:

| When a team is assigned an ___ | This is the expected response |
| --- | --- |
| S1 | Disrupt your milestone and work on the ~"bug::vulnerability" and ~"FedRAMP::Vulnerability" security issue right away |
| S2 | Disrupt your milestone and work on the ~"bug::vulnerability" and ~"FedRAMP::Vulnerability" security issue right away |
| S3 | Begin working on the ~"bug::vulnerability" and ~"FedRAMP::Vulnerability" security issue at the beginning of the next Milestone |
| S4 | Begin working on the ~"bug::vulnerability" and ~"FedRAMP::Vulnerability" security issue at least 2 Milestones prior to the due date |
| S1, S2 or S3 that is blocked | The team that owns the blocking issue should disrupt their current milestone and work on the blocking issue right away |
The issue description should have a How to reproduce section to ensure clear replication details are in the description. Add additional details, as needed:
- Environment used:
- Docker Omnibus version x.y.z
- Conditions used such as projects, users, enabled features or files used
- A step by step plan to reproduce the issue
- The URL or, even better, the curl command that triggers the issue
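When a single request triggers the issue, a short script can make the reproduction unambiguous. A sketch using only the standard library, with a hypothetical endpoint and a redacted token:

```python
from urllib.request import Request

# Hypothetical endpoint that triggers the issue; substitute the real
# URL and parameters from your reproduction steps.
req = Request(
    "https://gitlab.example.com/api/v4/projects/1/issues?scope=all",
    headers={"PRIVATE-TOKEN": "<redacted>"},
    method="GET",
)

# The assembled request documents the exact trigger without sending it.
print(req.get_method(), req.full_url)
```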
Issues labelled with the ~security label but without the ~type::bug + ~bug::vulnerability labels are not considered vulnerabilities, but rather security enhancements, defense-in-depth mechanisms, or other security-adjacent bugs. For example, issues labeled ~"type::maintenance". This means the security team does not set the ~priority labels or follow the vulnerability triage process, as these issues will be triaged by product or the other appropriate team owning the component.
In contrast, note that issues with the ~severity::4 label are considered Low severity vulnerabilities and will be handled according to the standard vulnerability triage process.
The security team may also apply ~internal customer and ~security request to an issue as an indication that the feature is being requested by the security team to meet additional customer requirements, compliance, or operational needs in support of GitLab.com.
Some issues are neither vulnerabilities nor security enhancements and yet are labeled ~security. An example of this would be a non-security ~"type::bug" in the login mechanism. Such an issue is labeled ~security because it is security-sensitive, but it isn’t a vulnerability and it isn’t a ~"type::feature" either. In those cases the ~"securitybot::ignore" label is applied so that the bot doesn’t trigger the normal vulnerability workflow and notifications, as those issues aren’t subject to the "time to remediation" requirements mentioned above.
Transferring from Security to Engineering
The security engineer must:
- Add the group label
- Add the stage label
- Add any additional labels that apply
- Mention the product manager for scheduling, such as @pm.
- The engineering team lead should be @ mentioned and followed up with when necessary as noted below for different severity levels.
The product manager will assign a Milestone that has been assigned a due date to communicate when work will be assigned to engineers. The due date field, severity label, and priority label on the issue should not be changed by PMs, as these labels are intended to provide accurate metrics on ~security issues and are assigned by the security team. Any blockers, technical or organizational, that prevent ~security issues from being addressed as our top priority should be escalated up the appropriate management chains.
Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.
Issues with a ~severity::2 rating should be immediately brought to the attention of the relevant engineering team leads and product managers by tagging them in the issue and/or escalating via chat and email if they are unresponsive.
Issues with a ~severity::1 rating have priority over all other issues and should be considered for a critical security release.
Issues with a ~severity::2 rating should be scheduled for the next scheduled security release, which may be days or weeks ahead depending on severity and other issues that are waiting for patches. A ~severity::2 rating is not a guarantee that a patch will be ready prior to the next security release, but that should be the goal.
Issues with a ~severity::3 rating have a lower sense of urgency and are assigned a target of the next minor version. If a low-risk or low-impact vulnerability is reported that would normally be rated ~severity::3, but the reporter has provided a 30 day (or less) time window for disclosure, the issue may be escalated to ensure that it is patched before disclosure.
Security issue becoming irrelevant due to unrelated code changes
It is possible that a ~security issue becomes irrelevant after it was initially triaged, but before a patch was implemented. For example, the vulnerable functionality was removed or significantly changed resulting in the vulnerability not being present anymore.
If an engineer notices that an issue has become irrelevant, they should @-mention the person that triaged the issue to confirm that the vulnerability is not present anymore. Note that it might still be necessary to backport a patch to previous releases according to our maintenance policy. In case no backports are necessary, the issue can be closed.
Reducing the number of backports
With the approval of an Application Security Engineer, a security issue may be fixed on the current stable release only, with no backports. Follow the GitLab Maintenance Policy and apply the ~reduced backports label to the issue.
Internal Application Security Reviews
For systems built (or significantly modified) by Departments that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened. Security reviews aim to help reduce vulnerabilities and to create a more secure product.
When to request a security review?
The short questionnaire below should help you quickly decide whether to engage the application security team. Request a review if the change is doing one or more of the following:
- Processing, storing, or transferring any kind of RED or ORANGE data
- Changes with a goal that requires a cryptographic function, such as confidentiality, integrity, authentication, or non-repudiation
- Deployment of a customer facing application into a new environment
- Changes to an existing security control
- Modification of any pipeline security checks or scans
- A new authentication mechanism
- Adding code that touches the authentication model, tokens or sessions
- Dealing with user supplied data
- Touching cryptography functions, see the GitLab Cryptography Standard for more details
- Touching the permission model
- Implementing new security controls (e.g. a new library for a specific protection, an HTTP header, …)
- Exposing a new API endpoint, or modifying an existing one
- Introducing new database queries
- Using regex to:
- validate user supplied data
- make decisions related to authorisation and authentication
- A new feature that can manipulate or display sensitive data (e.g. PII); see our Data Classification Standard for more details
- Persisting sensitive data such as tokens, crypto keys, credentials, or PII in temporary storage, files, or databases
If the change does any of the above, you should engage the Application Security team.
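The regex caution above can be illustrated with a short sketch. This is not GitLab code; the paths and function names are hypothetical, but it shows the class of bug a review looks for: an unanchored pattern used for an authorization decision matches anywhere in the input and can be bypassed.

```python
import posixpath
import re

# Flawed check: no anchors, so the pattern matches anywhere in the path.
NAIVE_PUBLIC = re.compile(r"/public/")

def is_public_naive(path):
    return NAIVE_PUBLIC.search(path) is not None

def is_public_strict(path):
    # Normalize first, then require the *entire* path to match.
    normalized = posixpath.normpath(path)
    return re.fullmatch(r"/public(/[^/]+)*", normalized) is not None

# A private path that merely mentions /public/ fools the naive check:
print(is_public_naive("/admin/settings?next=/public/"))   # True (bypass)
print(is_public_strict("/admin/settings?next=/public/"))  # False
print(is_public_strict("/public/css/app.css"))            # True
```

Even the "strict" version is only a sketch; real authorization decisions should avoid regex where a proper routing or policy layer exists.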
How to request a security review?
There are two ways to request a security review, depending on how significant the changes are: one for individual merge requests and one for larger-scale initiatives.
Individual merge requests or issues
Loop in the application security team by
/cc @gitlab-com/gl-security/appsec in your merge request or issue.
These reviews are intended to be faster, more lightweight, and have a lower barrier of entry.
Larger scale initiatives
Use cases include epics, milestones, reviewing the entire codebase for a common security weakness, or larger features.
Is security approval required to progress?
No, code changes do not require security approval to progress. Non-blocking reviews give us the freedom to keep shipping fast and align more closely with our values of iteration and efficiency. They operate as guardrails rather than a gate.
What should I provide when requesting a security review?
To help speed up a review, it’s recommended to provide any or all of the following:
- The background and context of the changes being made.
- Any documentation or diagrams that help provide a clear understanding of its purpose and use cases.
- The type of data it’s processing or storing.
- The security requirements for the data.
- Your security concerns and a worst case scenario that could happen.
- A test environment.
What does the security process look like?
The current process for larger scale internal application security reviews can be found here.
My changes have been reviewed by security, so is my project now secure?
Security reviews are not proof or certification that the code changes are secure. They are best effort, and additional vulnerabilities may exist after a review.
It’s important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
Using third party libraries?
If you are using third party libraries make sure that:
- You use the latest available stable version
- Your team has the ability to support and upgrade this library as security patches are published
- The maintainer has a security policy
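The first check above can be sketched in a few lines. The version strings and helper names here are hypothetical; in practice, tools such as `bundle audit`, `pip-audit`, or GitLab's own dependency scanning automate this comparison against advisory feeds.

```python
# Minimal sketch: flag a pinned dependency that is behind the latest
# stable release. Assumes simple dotted numeric versions; real version
# schemes (pre-releases, build metadata) need a proper parser.
def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def is_outdated(pinned, latest_stable):
    return parse_version(pinned) < parse_version(latest_stable)

print(is_outdated("2.4.1", "2.5.0"))  # True: upgrade needed
print(is_outdated("3.0.0", "3.0.0"))  # False: up to date
```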
Vulnerability Reports and HackerOne
GitLab receives vulnerability reports by various pathways, including:
- HackerOne bug bounty program
- Reports or questions from customers through Zendesk
- Issues opened on the public issue trackers. The security team cannot review all new issues and relies on everyone in the company to identify and label them
- Issues reported by automated security scanning tools
For any reported vulnerability:
- Open a confidential issue in the appropriate issue tracker as soon as a report is verified. If the vulnerability was reported via a public issue, make the issue confidential. If triage is delayed due to team availability, the delay should be communicated.
- Add the ~bug::vulnerability label to the issue. Add the appropriate group label if known.
- An initial determination should be made as to severity and impact. Never dismiss a security report outright. Instead, follow up with the reporter, asking clarifying questions.
- For next steps, see the process as it is detailed below for HackerOne reports, and adhere to the guidelines there for vulnerabilities reported in other ways as well in terms of frequency of communication and so forth.
- Remember to prepare patches, blog posts, email templates, etc. on dev or in other non-public ways, even if there is reason to believe that the vulnerability is already in the public domain (e.g. the original report was made in a public issue that was later made confidential).
See the dedicated page to read about our Triage Rotation process.
See the dedicated page to read about our HackerOne process.
Security Dashboard Review
See the dedicated page to read about our dashboard review process.
CVE IDs
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, a CVE ID must be obtained for each vulnerability to be fixed. The earlier the better: request the ID either while the fix is being prepared or immediately after.
We currently request CVEs through our CVE project. Keep in mind that some of our security releases contain security related enhancements which may not have an associated CWE or vulnerability. These particular issues are not required to obtain a CVE since there’s no associated vulnerability.
On Release Day
On the day of the security release several things happen in order:
- The new GitLab packages are published.
- All security patches are pushed to the public repository.
- The public is notified via the GitLab blog release post, security alerts email, and Twitter.
- The vulnerability acknowledgements page is updated with appropriate credits to the reporting researchers.
The GitLab issue should then be closed and - after 30 days - sanitized and made public. If the report was received via HackerOne, follow the HackerOne process.
Process for disclosing security issues
At GitLab we value being as transparent as possible, even when it costs us. Part of this is making confidential GitLab issues about security vulnerabilities public 30 days after a patch is released. The process is as follows:
- Check for a ~keep confidential label. If one exists:
- Decide whether this tag is still appropriate and in line with our Transparency value
- Start a discussion with issue participants, if needed
- If an issue does not have ~keep confidential, remove sensitive information from the description and comments
- Issues related to personal data leaks are not disclosed, since they are not security issues related to the product. If one must be disclosed for some reason, consult with Legal before disclosing.
- Identify all issue description changes, click to expand “Compare with previous version” and click the trash icon to “Remove description history”
- Optionally mention issue participants to notify them you intend to make the issue public
- Edit the Confidentiality of the issue and set it to Public
To facilitate this process, the GitLab Security Bot comments on confidential issues 30 days after issue closure when they are not labelled ~keep confidential.
Handling Disruptive Researcher Activity
Even though many of our 3rd-party dependencies, hosted services, and the static
about.gitlab.com site are listed explicitly as out of scope, they are sometimes
targeted by researchers. This results in disruption to normal GitLab operations.
In these cases, if a valid email can be associated with the activity, a warning
such as the following should be sent to the researcher using an official channel
of communication such as ZenDesk.
Security Engineering Code Contributions
Security Engineers typically act as Subject Matter Experts and advisors to GitLab’s engineering teams. Security Engineers may wish to make a larger contribution to GitLab products, for example a defense-in-depth measure or new security feature.
Like any contributor, follow the Contributor and Development Docs, paying particular attention to the issue workflow, merge requests workflow, style guides, and testing standards.
Security Engineers will need to collaborate with, and ultimately hand over their work to, a team in the Development Department. That team will be responsible for prioritisation, review, rollout, error budget, and maintenance of the contribution. Security Engineers should ideally open an Issue or Epic as early as possible, labelled with the candidate owning team. The team can inform implementation or architectural decisions, highlight existing or upcoming work that may impact yours, and plan capacity for reviewing your work.
If a team does not have capacity or a desire to assist, a Security Engineer’s work can still continue; everyone can contribute.
Requests from Security Engineers for new features and enhancements should follow the process in “Requesting something to be scheduled”.
This does not apply to addressing security vulnerabilities or dependency updates, which have separate processes for triage and patching.
External Code Contributions
We have a process in place to conduct security reviews for externally contributed code, especially if the code functionality includes any of the following:
- Processing credentials/tokens
- Storing credentials/tokens
- Logic for privilege escalation
- Authorization logic
- User/account access controls
- Authentication mechanisms
The Security Team works with our Community Outreach Team to ensure that security reviews are conducted where relevant. For more information about contributing, please reference the Contribute to GitLab page.
Package Signing
The packages we ship are signed with GPG keys, as described in the GitLab documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. The Distribution team is responsible for updating the package signing key. For more details that are specific to key locations and access at GitLab, find the internal google doc titled “Package Signing Keys at GitLab” on Google Drive.
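As a sketch of what verification looks like from the recipient's side, the snippet below wraps a detached-signature check with `gpg --verify`. The filenames are hypothetical; the actual key IDs and verification steps are in the GitLab documentation linked above.

```python
import subprocess

def gpg_verify_command(package, signature):
    # `gpg --verify <sig> <data>` checks a detached signature
    # against data, using keys already in the local keyring.
    return ["gpg", "--verify", signature, package]

def verify_package(package, signature):
    # Returns True only if gpg exits 0, i.e. the signature verified.
    result = subprocess.run(
        gpg_verify_command(package, signature),
        capture_output=True,
    )
    return result.returncode == 0
```

This assumes the GitLab public signing key has already been imported into the local keyring; without it, verification fails even for a genuine package.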