AI in Security Learning Group
With "AI in All We Do" in GitLab’s FY24 Yearlies, the company is highlighting its dedication to using artificial intelligence (AI) and machine learning (ML) technologies. To help the company achieve this goal, it is important that the GitLab Security team understand these technologies. The Security Automation team has already been a leader in this area, as exemplified by the Spamcheck anti-spam engine. This learning group exists to help interested GitLab Security team members learn about AI/ML technologies and share what they have learned. The overarching goal is to organize and disseminate learning resources and lessons learned, providing a coherent knowledge base for other team members to consume.
Goals
- Establish an efficient, iterative knowledge base workflow in which information concerning AI/ML can be shared
- Create the first iteration of the knowledge base, including resources on AI/ML basics and considerations for securing AI/ML solutions
- Provide resources for identifying where AI/ML is used within Security and the GitLab product
- Implement other creative, interactive ways to help team members upskill in AI/ML (e.g. demos, training projects)
Who Can Participate?
Everyone Can Contribute! Everyone is invited to help build out the knowledge base and develop teaching solutions. Additionally, we are looking to define a core group of project advocates who might be interested in creating the learning path for their area of interest (e.g. AI/ML basics, secure model generation, threat modeling AI/ML solutions, secure integration of AI APIs). If you are interested in participating or becoming an advocate, please reach out in #lg_security-ai in Slack.
The preferred communication style is asynchronous. Any synchronous communication (including training events) will be recorded, and we will do all we can to offer multiple session times to increase participation. There will also be a group kickoff meeting, date TBD.