To provide content and tools to support the best possible assessment at the earliest possible moment.
Following our single application paradigm,
we integrate and build scanning tools to supply security and compliance assessment data to the main GitLab application
where we develop our vulnerability management system and other features.
While it might be technically feasible, we do not aim to build standalone products that provide this data independently of the GitLab application.
For more details about the vision for this area of the product, see the Secure stage page.
Mission
To support the success of GitLab by developing highly usable, high-quality tools for customers to build more secure software.
The Application Security Testing team works on GitLab’s Secure stage.
The Application Security Testing team is responsible for the security check features in the GitLab platform and maps to the transversal Application Security Testing stage.
You can learn more about our approach on the Application Security Testing Vision page.
The features provided by the Application Security Testing team mostly operate at the pipeline level and are mostly delivered as container images.
This particularity shapes our processes and QA, which differ somewhat from those of the other stages.
Security Products
We still use the term “Security Products” for the tools developed by the Application Security Testing team; hence the home of our projects on GitLab: https://gitlab.com/gitlab-org/security-products/.
We strive to maintain a consistent User Experience across our Security Products but we do not enforce consistency at the implementation level.
Each group faces its own challenges and is in the best position to make the technical choices it deems are the most suitable to achieve its goals.
While UX inconsistencies are considered bugs,
we rely on individual teams to make smart decisions about when consistency is important and when divergence makes more sense
— either because the divergence itself creates a better experience or because of velocity considerations.
Domains of Expertise
SAST
SAST (Static Application Security Testing) refers to static code analysis.
GitLab leverages the power of various open-source tools to provide a wide range of checks for many languages and frameworks.
These tools are wrapped inside Docker images, which ensures we get standardized output from each of them.
An orchestrator, developed by GitLab, is in charge of running these images and gathering all the data needed to generate the final report.
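For reference, enabling these checks in a project's pipeline is typically just a matter of including the maintained CI template. A minimal sketch, assuming the template path current at the time of writing (check the GitLab documentation for your version):

```yaml
# .gitlab-ci.yml — minimal sketch enabling SAST via the maintained template.
include:
  - template: Security/SAST.gitlab-ci.yml
```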
DAST
DAST (Dynamic Application Security Testing) runs against a live application.
Because some vulnerabilities can only be detected once all the code is actually running, this method complements the static code analysis.
DAST relies on the OWASP Zed Attack Proxy (ZAP) project, modified by GitLab to enable authentication.
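As an illustration, DAST is typically pointed at a deployed environment through the CI template; the target URL below is a placeholder:

```yaml
# Minimal sketch: run DAST against a live target.
include:
  - template: Security/DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://staging.example.com"  # placeholder target URL
```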
Dependency Scanning
Dependency Scanning is used to detect vulnerabilities introduced by external dependencies in the application.
Because a large portion of the code shipped to production actually comes from third-party libraries, it’s important to monitor them as well.
Dependency Scanning relies mostly on the Gemnasium engine.
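A minimal sketch of turning on Dependency Scanning in a project, assuming the maintained template path:

```yaml
# Minimal sketch: the template detects supported package managers
# and runs the matching Gemnasium-based jobs.
include:
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```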
Fuzz Testing
Coverage-guided fuzzing and API fuzzing automatically feed applications or web APIs input data that has the potential to cause crashes or bugs. Coverage-guided fuzzing relies on open-source, language-specific fuzzers. API fuzzing is based on a proprietary GitLab engine.
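For coverage-guided fuzzing, a project wires its own fuzz target into the maintained template. A sketch, where the job name and target binary are hypothetical while `.fuzz_base` and `gitlab-cov-fuzz` reflect the documented hooks at the time of writing:

```yaml
# Sketch: run a project-provided fuzz target under coverage-guided fuzzing.
include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:                 # hypothetical job name
  extends: .fuzz_base           # base job provided by the template
  script:
    - ./gitlab-cov-fuzz run -- my_fuzz_target  # hypothetical target binary
```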
License Compliance
License Compliance detects the licenses introduced by third-party libraries in the application.
License Compliance relies on the LicenseFinder gem.
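Similarly, License Compliance is enabled through its CI template; the path below was correct at the time of writing:

```yaml
# Minimal sketch enabling license scanning of project dependencies.
include:
  - template: Security/License-Scanning.gitlab-ci.yml
```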
Vulnerability Research
The Vulnerability Research team’s purpose is to perform research and develop proofs of concept that increase the capabilities and effectiveness of the Secure stage.
Skills
Because we have a wide range of domains to cover, we need many different kinds of expertise and skills:
| Technology skills | Areas of interest |
| ----------------- | ----------------- |
| Ruby on Rails | Backend development |
| Go | SAST, Dependency Scanning, DAST |
| Python | DAST |
| SQL (PostgreSQL) | Dependency Scanning / all |
| Docker | Container Scanning / all |
| C# | API Security |
Our team must also have a good sense of security, with at least basic skills in application security.
We provide tools covering many different languages (e.g., for SAST, Dependency Scanning, and License Compliance). This means our team must understand the basics of each of these languages, including their package managers. We maintain test projects for each of them to ensure our features keep working release after release.
Our security automation tooling may occasionally fail.
If this occurs, and the issue cannot be immediately resolved, open an issue to
track the error. Then, announce the failure in #s_application-security-testing to raise awareness,
and follow the manual security triage process outlined below.
Manual process fallback when automation fails
Manually reviewing and resolving vulnerabilities
On a weekly basis, review the vulnerability report, resolve the vulnerabilities that are no longer detected, and close the related issues. Note: it is not necessary to investigate vulnerabilities that are no longer detected.
Visit Vulnerability Report Dashboards to verify that there are vulnerabilities that can be resolved.
Execute the security-triage-automation tool to resolve vulnerabilities and close their issues. This tool must be executed separately for each project that has vulnerabilities to resolve.
Verify in Vulnerability Report Dashboards that vulnerabilities have been resolved.
Manually creating security issues for FedRAMP vulnerabilities
On the last working day before the 1st of the month, create security issues
for FedRAMP vulnerabilities of the CONTAINER_SCANNING type and of CRITICAL, HIGH,
MEDIUM, LOW, and UNKNOWN severity levels by executing the security-triage-automation
tool to process vulnerabilities for a given project
(please make sure to adjust the CLI options accordingly). This tool must be executed
separately for each project.
Manually creating deviation requests for FedRAMP vulnerabilities
Vulnmapper automatically creates Deviation Requests but may fail for various reasons, such as the absence of analysis from NVD.
In cases where automation fails, you must create the Deviation Requests manually before the issues breach their SLA.
To do so, use the following procedure.
Update the Vulnerability Details section with a link to the advisory (usually the Red Hat tracker), the CVE ID, the severity, and the CVSS score.
Update the Justification Section with:
The OS vendor has published an updated advisory for <CVE_ID>, indicating that package <PACKAGE_NAME> has not yet had a fix released for this vulnerability. Until a fix is available for the package, this vulnerability cannot practically be remediated.
Update the Attached Evidence section with:
As this operational requirement represents a dependency on a vendor-published package to address this vulnerability, no additional evidence has been supplied. Please refer to the linked vendor advisory in the above justification.
GITLAB_ACCESS_TOKEN has expired. The automation relies on API requests to manage vulnerabilities and issues on various projects. This requires specific permissions, and authentication is achieved with a personal access token generated on the service account gl-service-security-triage (credentials available in 1Password). If the token is expired, a new one (with the api scope) must be generated by signing in with this account on gitlab.com, and the new value must then be configured in the settings of the release project.
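For illustration, the kind of authenticated request the automation performs looks roughly like the following; the project ID is a placeholder and the job name is hypothetical:

```yaml
# Hypothetical scheduled CI job: authenticate with the service-account token
# and list a project's vulnerabilities through the GitLab REST API.
list-vulnerabilities:
  script:
    - >
      curl --header "PRIVATE-TOKEN: ${GITLAB_ACCESS_TOKEN}"
      "https://gitlab.com/api/v4/projects/<project-id>/vulnerabilities"
```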
FedRAMP vulnerabilities
To ensure compliance, the management of FedRAMP vulnerabilities is handled by automation. Please check the manual process fallback for additional details.
Non-FedRAMP vulnerabilities
We do not yet have the same automation in place for non-FedRAMP vulnerabilities, since the volume is too large for our teams to manage and some improvements to the vulnmapper tool are required before enabling it.
In the meantime, we favor a more specialized approach for these vulnerabilities, and there is no standardized process across the groups.
Error Monitoring
500 errors on gitlab.com are reported to Sentry. Below are some quick links to pull up Sentry errors related to Application Security Testing.
Our team occasionally schedules synchronous brainstorming sessions as a method of deep-diving on a specific topic.
This approach can be useful in breaking down complexity and deriving actionable steps for problems that lack
definition.
These are purposefully freeform to allow for creative problem solving.
When possible, time should be reserved for a list of actions to be taken from the open discussion.
As the product evolves, it is important to maintain accurate and up to date documentation for our users. If it is not documented, customers may not know a feature exists.
To update the documentation, the following process should be followed:
When an issue has been identified as needing documentation, add the ~Documentation label, outline in the issue description what documentation is needed, and assign a Backend Engineer and a Technical Writer (TW) to the issue (find the appropriate TW by searching the product categories).
If the task is documentation only, apply a ~Px label.
For documentation around features or bugs, a backend engineer should write the documentation and work with the technical writer on editing. If the documentation only needs styling cleanup, clarification, or reorganization, this work should be led by the Technical Writer with support from a BE as necessary. The availability of a technical writer should in no way hold up work on the documentation.
Further information on the documentation process.
Async Daily Standups
Since we are a remote company, synchronous daily standup meetings would not make sense, as we’re not all in the same timezone.
That’s why we have async daily standups, where everyone can give some insights into what they did yesterday, what they plan to do today, etc.
For that, we rely on the Geekbot Slack plugin to automate the process.
Standup messages format
Use the “description in backticks + [link to issue](#)” format when mentioning issues in your standup report.
Prepend CI status icons to the answer lines for What did you do since yesterday? to denote the current state (see the example after this list):
for successfully accomplished tasks (:ci_passing: emoji)
for tasks that were due on some period of time but were not accomplished (:ci_failing: emoji)
for tasks currently in progress (:ci_running: emoji)
for paused or postponed tasks (:ci_pending: emoji)
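A hypothetical report answer combining the icons with the issue format might look like this (the task descriptions are made up; use real issue links in practice):

```
:ci_passing: `Fix report deduplication in the Gemnasium analyzer` [link to issue](#)
:ci_running: `Review DAST authentication merge request` [link to issue](#)
```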
Catch up on all emails and threads after a vacation.
Slack Channels:
As our teams focus on different areas, we have Geekbot configured to broadcast to separate channels in addition to our common one at [#s_secure-standup].
Our important meetings are recorded and published on YouTube, in the Application Security Testing Stage playlist.
They give a good overview of the decision process, which is often a discussion with all the stakeholders. As we are a remote company, these video meetings help us synchronize and make decisions faster than commenting on issues. We prefer asynchronous work, but for large features and when timing is tight, we can detail specifications in depth, which makes the subsequent asynchronous work easier since all edge cases have been evaluated.
Calendar
We welcome team members to join meetings that are on our shared calendar. The Application Security Testing Calendar is available to all logged-in GitLab team members.
Staying informed
GitLab is an extremely active organization which generates a lot of news and activity each week. Everyone in Application Security Testing is encouraged to stay informed about what is happening in the larger organization. Everyone is also
encouraged to contribute to these channels and communication paradigms when you have information to share.
In addition to this, each group in Application Security Testing conducts a weekly synchronous meeting. These meetings are publicized on the Application Security Testing Calendar mentioned above. As always at GitLab, we strive to make meeting attendance optional.
Keeping others informed
In addition to keeping yourself informed, team members are encouraged to keep others informed as well. Application Security Testing groups have adopted a practice of including the following topics as standing agenda items in their weekly meetings, with example
topics for each bullet point.
Current status
Work recently achieved against top priorities for that milestone.
Pre-recorded demos are appreciated and encouraged as part of these updates.
Newly discovered scope or dependencies.
Risks
Issues which are blocked or slowed, impacting whether they can be delivered in the desired timeframe.
Help wanted
Issues or topics on which the team or individuals on the team are getting stuck and could use some help.
Praise
Anyone doing a great job and you want to give them kudos?
Any bit of work which has been delivered that’s exceptional?
Engineering Managers are responsible for populating this section of weekly group meetings, though everyone can contribute. In addition to helping the group keep itself informed about what’s happening each week, the SEM for Application Security Testing will collect
this information weekly and broadcast a curated list to the section.
Technical onboarding
New hires should go through these steps and read the corresponding documentation when onboarding in the Application Security Testing Team.
Every new hire will have an assigned onboarding issue that will guide them through the whole process.
The Application Security Testing team follows the coding standards and style guidelines outlined in the company-wide Contributor and Development Docs; however, please consult the following guidelines, which are specific to the Application Security Testing team:
Some components of the architecture that support Application Security Testing features are shared between multiple groups like the common Go library,
the Security Report Schemas, the Rails parsers, etc.
Modifying these shared pieces might impact other groups so we should rely as much as possible on approval rules to ensure
such changes are reviewed by the relevant teams before being merged.
Impactless two-way-door changes may skip the approval process; please use sound judgment and common sense in such situations.
The author should broadly announce changes made to these components to raise awareness (weekly meeting agenda, Slack channel).
Development of new analyzers
For a complete guide to developing a new analyzer, please refer to our user documentation.
Technical Documentation
As our product evolves, the engineering teams are researching ways to achieve new functionality and improve our architecture.
The Application Security Testing sub-department conducts retrospectives at the group level.
Each group’s EM or delegated DRI is responsible for preparing and scheduling the retrospective sync sessions; the async retrospective issues can be found in the corresponding project.
Analytics
The Application Security Testing group reviews analytics to help understand customers and their usage of the tools. This data helps drive product and technical decisions. The following links show usage of Application Security Testing functionality.
We also track our backlog of issues, including past due security and infradev issues, and total open SUS-impacting issues and bugs.
Merged Merge Request Types
MR Type labels help us report what we’re working on to industry analysts in a way that’s consistent across the engineering department. The dashboard below shows the trend of MR Types over time and a list of merged MRs.
The API Security team is a standalone team which is part of the Dynamic Analysis group at GitLab. It is charged with developing solutions which perform Fuzzing.
Our stage follows the product development flow process, including the workflow labels. This page documents tweaks and additions to the general GitLab Process. If there’s a conflict, the stage documentation should take precedence.
Some groups prefer to split the Plan phase into two adjacent steps: Planning breakdown and Refinement. Either way, once planning is complete, issues and epics are ready for scheduling.
Planning breakdown
Epics and issues are selected according to your team’s prioritization process, and must have the ~workflow::planning breakdown label applied.
Vulnerability Research sits at the crossroads between the Application Security Testing stage itself, and customers of the stage. We provide enhancements to our products or processes to ensure our products and services are more effective for GitLab’s customers.
We strive to enhance our customer experience with regard to providing correct and accurate results from our services.
The Dynamic Analysis group at GitLab is charged with developing solutions which perform Dynamic Application Security Testing (DAST) and Fuzzing. Our work is a mix of open and closed source code.
Mission
To support the success of GitLab by developing highly usable, high-quality tools for customers to build more secure software. The Dynamic Analysis group at GitLab is charged with developing solutions which perform API Security Testing, Dynamic Application Security Testing (DAST), and Fuzzing.
We expect and require all contributions to our products to go through a merge request with a formal review. As such, we follow the Merge Request workflow and code review guidelines articulated in GitLab’s developer documentation. We would, however, like to highlight a few items from these documents and add a few additional considerations for reviewers and authors.
Additional considerations for Merge Request reviewers
The best way to unblock a peer or community member is to provide feedback in a timely manner. If you are at capacity and cannot facilitate a review within the SLO to which we aspire, please let folks know in the merge request so another reviewer may be found.
Additional considerations for Merge Request authors
Being a globally distributed organization can, and frequently does, add latency to back-and-forth communication between folks. Don’t take it personally if it’s taking longer than you expected to get feedback on your changes.
Secure QA Process
The secure analyzers verify merge requests by running a new commit against downstream test projects for their supported languages/frameworks (e.g., the Gemnasium analyzer of Dependency Scanning will trigger tests against PHP, Go, and several other test projects). The verification is done by comparing the generated report output against an expected report committed to the analyzer’s repository. If analyzer behavior has changed, the pipeline will fail because the contents of the expected and generated reports will no longer match.
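A minimal sketch of the comparison step, assuming a normalized JSON diff; the job name and report paths are illustrative, and the actual QA jobs live in each analyzer's CI configuration:

```yaml
# Illustrative QA job: normalize both reports with jq, then fail the
# pipeline if the generated report drifts from the committed expectation.
qa-compare-report:
  script:
    - jq -S . qa/expect/gl-dependency-scanning-report.json > expected.json
    - jq -S . gl-dependency-scanning-report.json > actual.json
    - diff expected.json actual.json
```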
Feedback (Dismiss, create an issue, or create a Merge Request)
Overview
The architecture supporting the Secure features is split into two main parts.
flowchart LR
subgraph G1[Scanning]
Scanner
Analyzer
CI[CI Jobs]
end
subgraph G2[Processing, visualization, and management]
Parsers
Database
Views
Interactions
end
G1 --Report Artifact--> G2
Scanning
The scanning part is responsible for finding vulnerabilities in given resources and exporting results.
The scans are executed in CI jobs via several small projects called Analyzers which can be found in our Analyzers sub-group.
The Analyzers are small wrappers around in-house or external security tools, called Scanners, that integrate them into GitLab.
The Analyzers are mainly written in Go and rely on our Common Go library.
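To make the contract concrete, a hypothetical analyzer job might look like the following; the image path and job name are placeholders, while `/analyzer run` follows the entrypoint convention used by GitLab analyzers and the `artifacts:reports` declaration is what hands the report artifact to the processing side:

```yaml
# Hypothetical analyzer job: run the wrapped Scanner and hand the
# standardized JSON report to the GitLab application for parsing.
example-analyzer:
  image: registry.gitlab.com/security-products/example-analyzer:latest  # placeholder
  script:
    - /analyzer run                   # conventional analyzer entrypoint
  artifacts:
    reports:
      sast: gl-sast-report.json       # consumed by the Rails parsers
```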
The Static Analysis group is largely aligned with GitLab’s Product Development Flow,
however there are some notable differences in how we seek to deliver software. The engineering team
predominantly concerns itself with the delivery of software, which is the portion of the workflow
states where we deviate the most. What follows is how we manage the handoff from product management
to engineering to deliver software.