AI use cases within the Security Division at GitLab
Learn how the Security Division leverages AI platforms such as Claude and GitLab Duo to optimize workflows, improve productivity and automate manual tasks.
Security Tools Using AI
The Security Division integrates AI capabilities into various tools and automations. Most use cases embed AI into existing tools and workflows to improve productivity and automate manual processes and tasks.
Tool | AI Engine | Use Case | Team |
---|---|---|---|
AI-assisted Incident reporting | Claude | Helps users to report security issues more quickly and efficiently by pre-filling the incident report forms based on a short description of the issue. | Security Incident Response Team (SIRT) |
AppSec Assistant | GitLab Duo | AI reviews development issues to identify security risks at design time. | Product Security Engineering on behalf of Application Security |
Automate our Continuous Control Monitoring Program | Claude | Generate entire scripts that pull data from resources (e.g. AWS), pull policies from a YAML file, and perform an audit analysis and conclusion (example here); a minimal sketch of this pattern follows the table. | Security Compliance |
Generation of Test Cases for gitlab-assistant | GitLab Duo | Generate basic and complex test cases for a Python module that standardizes scripting of solutions across the team when building automations and functionality for interactions with GitLab. The module introduces business logic beyond the basic API endpoint interactions; a hypothetical example of such tests follows the table. | Security Assurance |
GitLab Duo generated CVE descriptions | GitLab Duo | Generates CVE descriptions from imported HackerOne reports that can optionally be used in our CVEs. | Application Security |
Incident Summarization (/sirt_summary) | Claude | Incident summarization slash command in Slack | Security Incident Response Team (SIRT) |
Sir Tanuki (SIRT Incident Review Bot) | GitLab Duo | The SecOps Incident Reviewer (Sir) Tanuki is SIRT’s GitLab Duo-powered incident report reviewing tool. It uses GitLab Duo to analyze security incident issues and give feedback on the different sections of the issue description. | Security Incident Response Team (SIRT) |
TLDR Customer Threat Detections | Claude | Signals Engineering uses Claude to generate new TLDR threat detections. A Slack slash command (/tldr) is available to kick off a new Claude-written MR for review. | Signals Engineering |
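The Continuous Control Monitoring entry above describes generated scripts that pull data from a resource, load policies from a YAML file, and produce an audit conclusion. The sketch below is a minimal illustration of that pattern, assuming boto3 and PyYAML are available and AWS credentials are configured; the bucket-encryption control, policy keys, and file path are hypothetical examples, not the team's actual scripts.

```python
"""Minimal sketch of a Continuous Control Monitoring check.

Assumptions (not from the handbook): boto3 and PyYAML are installed, AWS
credentials are configured, and the policy file path, policy keys, and
bucket-encryption control are hypothetical examples.
"""
import boto3
import yaml
from botocore.exceptions import ClientError


def load_policies(path: str) -> dict:
    """Pull control policies from a YAML file."""
    with open(path) as fh:
        return yaml.safe_load(fh)


def fetch_bucket_encryption_status() -> dict:
    """Pull data from a resource (here, S3 bucket encryption settings)."""
    s3 = boto3.client("s3")
    statuses = {}
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
            statuses[name] = True
        except ClientError:
            statuses[name] = False
    return statuses


def audit(policies: dict, statuses: dict) -> list:
    """Compare the observed state against the policy and collect findings."""
    findings = []
    if policies.get("require_bucket_encryption", False):
        for name, encrypted in statuses.items():
            if not encrypted:
                findings.append(f"FAIL: bucket {name} has no default encryption")
    return findings


if __name__ == "__main__":
    policies = load_policies("controls/encryption_policy.yml")  # hypothetical path
    findings = audit(policies, fetch_bucket_encryption_status())
    # Conclusion: pass if no findings, otherwise list each failed control.
    print("Conclusion: PASS" if not findings else "\n".join(findings))
```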
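Similarly, the gitlab-assistant entry mentions Duo-generated basic and complex test cases for a Python module that wraps GitLab interactions with extra business logic. The pytest sketch below shows what such tests might look like; `build_issue_payload` is an inline stand-in defined only for illustration, not the team's real module.

```python
"""Hypothetical example of the kind of pytest cases GitLab Duo might generate.

`build_issue_payload` is a stand-in defined inline for illustration; it is not
the team's actual gitlab-assistant module.
"""
import pytest


def build_issue_payload(title: str, description: str, labels=None) -> dict:
    """Stand-in for a wrapper that adds business logic on top of the raw
    GitLab issues API: input validation plus label normalization."""
    if not title:
        raise ValueError("title must not be empty")
    return {
        "title": title,
        "description": description,
        "labels": sorted(set(labels or [])),
    }


def test_basic_payload_passes_fields_through():
    # Basic case: title and description are passed through unchanged.
    payload = build_issue_payload("Rotate stale tokens", "details...")
    assert payload["title"] == "Rotate stale tokens"
    assert payload["description"] == "details..."


def test_labels_are_deduplicated_and_sorted():
    # Complex case: behavior beyond the basic API endpoint interaction.
    payload = build_issue_payload(
        "Audit finding", "...", labels=["security", "AI", "security"]
    )
    assert payload["labels"] == ["AI", "security"]


def test_empty_title_is_rejected():
    # Edge case: validate input before the GitLab API would return a 400.
    with pytest.raises(ValueError):
        build_issue_payload("", "anything")
```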
AI Driven Process Efficiencies
Process | AI Engine | Efficiency Details | Team |
---|---|---|---|
Tableau Data Manipulation | Claude | Leverage Claude to generate syntax for calculated fields in Tableau to enable data manipulation. These fields enable us to manipulate existing data and create new dimensions and measures to support the Security metrics program. | Security Governance |
Policy Generation and Optimization | Claude | Create the foundations of Security and Technology policies and reduce verbosity of policy language to align with Governance expectations. | Security Governance |
Security Training Content Script Creation and Editing | Claude | Create scripts for AI-created Security Training videos and edit Security Training content for readability and conciseness. | Security Governance |
Blog and White Paper Optimization | Claude | Optimize the language and readability of blog posts and white papers to support a polished product for customers and the community. | Field Security |
Ideas, Experiments and Tests
The Security Division works out of GitLab issues to keep track of AI-integrated ideas, experiments, and tests. Generally, it is a good idea to add the AI GitLab label to issues for tracking.