Create:Code Creation: Code Suggestions Guide

Introduction

This document contains Code Suggestions development guidelines for engineers.

For an overview of Code Suggestions, please refer to the Create:Code Creation Group engineering guide.

Supporting new AI models

As the Code Suggestions offering matures and we learn more about our users' needs and the available AI models, we will need to add support for new models, or switch Code Suggestions over to them.

Integrating a new AI model into our systems generally consists of three steps:

  1. Evaluation - to make sure that the AI model satisfies the requirements for our use cases.
  2. Implementation - once we have evaluated the AI model as satisfactory, we need to update our code base accordingly.
  3. Rollout - after we have added support for the new model, we need to follow a standard rollout process to slowly introduce it to our users.

Code Suggestions Implementation Guidelines

These are guidelines for supporting a model for Code Suggestions in the AI Gateway (AIGW) and/or GitLab Rails.

Overview

Code Suggestions requests can be routed directly to the AI Gateway, or indirectly through GitLab Rails.

  • For direct-to-AIGW requests, the IDE gets the model details from GitLab Rails through the Direct Connections API endpoint. The IDE then sends a request to the AIGW with the model details fetched from GitLab Rails.
  • For indirect-through-GitLab-Rails requests, the IDE sends a request to GitLab Rails' Code Completions API endpoint. GitLab Rails then sends a request to the AIGW.

For a more in-depth overview of Code Completions vs Code Generations, and direct-to-AIGW vs indirect-through-GitLab-Rails requests, please refer to the Code Suggestions Technical Overview and the Code Completion guides.
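The two routing paths above can be sketched as follows. This is a minimal illustration only: the payload fields and the `target` marker are assumptions made for this guide, not the actual API contract of the Direct Connections or Code Completions endpoints.

```python
# Sketch of the two Code Suggestions routing paths.
# NOTE: the payload fields below are illustrative assumptions,
# not the actual API contract.

def build_direct_request(model_details: dict, prefix: str) -> dict:
    """Direct-to-AIGW: the IDE already holds model details fetched
    from GitLab Rails' Direct Connections API endpoint, so it can
    call the AI Gateway without going through Rails per request."""
    return {
        "target": "ai_gateway",        # request goes straight to the AIGW
        "model": model_details,        # fetched earlier from GitLab Rails
        "prompt": {"prefix": prefix},
    }

def build_indirect_request(prefix: str) -> dict:
    """Indirect: the IDE calls GitLab Rails' Code Completions API;
    Rails resolves the model details and forwards the request to AIGW."""
    return {
        "target": "gitlab_rails",      # Rails proxies the request to AIGW
        "prompt": {"prefix": prefix},  # no model details needed client-side
    }

direct = build_direct_request({"name": "example-model"}, "def add(a, b):")
indirect = build_indirect_request("def add(a, b):")
```

The practical difference is where the model details live: the direct path pushes them to the IDE once, while the indirect path keeps them server-side in GitLab Rails.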

Code Suggestions Model Evaluation Guide

This document serves as a technical how-to guide for evaluating new Code Suggestions models.

Evaluation template

When starting a model evaluation process, you must create an issue using the Model Evaluation Template.

Evaluation criteria

Before supporting a model for Code Suggestions, we must evaluate that model against several criteria, including correctness and latency. For a more detailed list of criteria to consider, please refer to the evaluation template.
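A simple latency harness like the one below can produce numbers for the evaluation issue. This is a sketch under stated assumptions: the stubbed `generate` callable stands in for a real model invocation, and the percentile choice is illustrative, not a mandated threshold.

```python
import statistics
import time

def measure_latency(generate, prompts, percentile=95):
    """Time each call to `generate` and report simple latency stats
    in milliseconds. `generate` is a stand-in for a real model call."""
    samples = []
    for prompt in prompts:
        start = time.perf_counter()
        generate(prompt)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    idx = min(len(samples) - 1, int(len(samples) * percentile / 100))
    return {"p50_ms": statistics.median(samples), f"p{percentile}_ms": samples[idx]}

# Stubbed "model" used purely for illustration.
stats = measure_latency(lambda p: p.upper(), ["def f():", "class A:"] * 50)
```

Correctness evaluation is harder to sketch generically; the evaluation template is the source of truth for which metrics to report.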

Code Suggestions Model Rollout Guide

This document serves as a guide for rolling out Code Suggestions models.

Create a rollout plan

Create an issue using the Rollout Plan Template. This must be done before or during the implementation of the new model. Specific rollout plans may need to be created for different rollout phases.

Rollout method

Rollout of a new model will be done through a beta feature flag.
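Flag-gated model selection can be sketched as below. The flag name, model identifiers, and the `enabled_flags` lookup are hypothetical; in GitLab Rails the check would go through its own feature-flag framework rather than this simplified function.

```python
# Hedged sketch of beta-feature-flag-gated model selection.
# Flag name and model identifiers below are hypothetical.

CURRENT_MODEL = "current-code-completion-model"
NEW_MODEL = "new-code-completion-model"
ROLLOUT_FLAG = "new_code_completion_model"  # hypothetical beta flag

def select_model(enabled_flags: set) -> str:
    """Serve the new model only to users for whom the beta flag is
    enabled, so the rollout can be widened gradually and reverted
    without a deploy."""
    if ROLLOUT_FLAG in enabled_flags:
        return NEW_MODEL
    return CURRENT_MODEL
```

Keeping the selection behind a flag is what lets the phased rollout below proceed (and roll back) safely.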

Pre-rollout checklist

Rollout phases

Rollout of a new model is usually done in three phases.

Code Suggestions Testing Guide

This document serves as a technical how-to guide for testing Code Suggestions functionality.

End-to-End testing

Code Suggestions is tested directly through the API in code_suggestions_spec.rb, and indirectly through the Web IDE in code_suggestions_in_web_ide_spec.rb.

Code Suggestions Self-managed End-to-End Tests

In MRs, the end-to-end tests exercise the Code Suggestions API against self-managed instances by using an instance of the GitLab Linux (Omnibus) package integrated with the latest version of AI Gateway. The instance of AI Gateway is configured to return mock responses. To view the results of these tests, open the e2e:test-on-omnibus-ee child pipeline and view the ai-gateway job. The ai-gateway job activates a cloud license and then assigns a Duo Pro seat to a test user, before the tests are run.
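The mock-gateway pattern used by these end-to-end tests can be sketched as follows. The response shape and helper names here are illustrative assumptions, not the actual mock responses the configured AI Gateway instance returns.

```python
# Sketch of exercising the Code Suggestions flow against a mocked
# AI Gateway, as the e2e:test-on-omnibus-ee pipeline does.
# The response shape and helper names are illustrative assumptions.

def mock_gateway_response(prompt_prefix: str) -> dict:
    """Stand-in for the AI Gateway instance configured to return
    mock responses instead of calling a real model."""
    return {"choices": [{"text": "    return a + b  # mocked suggestion"}]}

def request_suggestion(prefix: str, gateway=mock_gateway_response) -> str:
    """What an end-to-end test exercises: send a completion request
    and read the suggested text back out of the response."""
    response = gateway(prefix)
    return response["choices"][0]["text"]

suggestion = request_suggestion("def add(a, b):")
```

Mocking the gateway keeps the tests deterministic and avoids depending on a live model, while still exercising licensing, seat assignment, and the request path end to end.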