Workflow Catalog Group

Vision

The Workflow Catalog group is focused on developing Workflow Catalog, a catalog of AI Agents, tools, and workflows that can be created, curated, and shared across organizations, groups, and projects.

Team members

Engineering Manager & Engineers

Name              Role
Sam Beckham       Engineering Manager
Florie Guibert    Senior Frontend Engineer
Jaydip pansuriya  Backend Engineer
James Rushford    Frontend Engineer
Justin Ho         Senior Frontend Engineer
Keeyan Nejad      Backend Engineer
Lukas Eipert      Staff Frontend Engineer, Foundations:Personal Productivity
Luke Duncalfe     Staff Backend Engineer
Vanessa Otto      Senior Frontend Engineer

How to reach us

Depending on the context, here are the most appropriate ways to reach out to the Workflow Catalog group:

  • Slack Channel: #g_workflow_catalog
  • GitLab group: @gitlab-org/ai-powered/workflow-catalog/engineering (just engineers)

How we work

We’re just getting started and will be defining how we work as we settle into the new team. Here are some guidelines to get us started:

Communication

The Workflow Catalog Team communicates based on the following guidelines:

  • Always prefer async communication over sync meetings.
  • Don’t shy away from arranging a sync call when async is proving inefficient; however, always record it to share with team members.
  • By default communicate in the open.
  • Prefer public channels (#g_workflow_catalog) over private messages for work-related Slack messaging.

Frontend-Backend collaboration

We aim to foster high levels of collaboration between frontend and backend engineers to ensure development velocity and code quality.

  • Schema-first development: Before implementation begins, frontend and backend engineers collaborate to design a GraphQL API schema based on UI requirements, user experience needs, and performance considerations.
  • Parallel development processes: Once the schema is agreed upon, the frontend can proceed using mock data, mock endpoints, or API stubs that match the agreed schema (see the sketch after this list). The backend can focus on implementing the data model, business logic, and actual API schema.
  • Maintaining alignment: We value great communication. When requirements or schema need to change, we communicate early through the relevant GitLab issue or in #g_workflow_catalog so our frontend or backend counterparts stay informed of all changes and can provide feedback early to avoid late-stage blockers.
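
To make the parallel-development step concrete, here is a minimal, hypothetical TypeScript sketch of how the frontend might proceed against an agreed schema before the backend endpoint exists. The type names and fields below are illustrative assumptions, not the actual Workflow Catalog API.

```typescript
// Hypothetical types mirroring an agreed GraphQL schema (illustrative only;
// the real schema is agreed per issue by frontend and backend engineers).
interface CatalogWorkflow {
  id: string;
  name: string;
  description: string;
}

interface CatalogWorkflowsQueryResult {
  workflows: CatalogWorkflow[];
}

// Frontend mock that matches the agreed schema, used until the real endpoint ships.
const mockCatalogWorkflowsQuery = async (): Promise<CatalogWorkflowsQueryResult> => ({
  workflows: [
    {
      id: 'gid://gitlab/Workflow/1',
      name: 'Example workflow',
      description: 'Placeholder data matching the agreed schema',
    },
  ],
});

// Components consume the mock now and swap in the real GraphQL client later,
// without changing their own types.
mockCatalogWorkflowsQuery().then((result) => {
  console.log(result.workflows.map((w) => w.name));
});
```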

AI stage collaboration

The Workflow Catalog relies on the Workflow Service as a foundational backend service. Most Workflow Catalog features require new capabilities to be developed within Workflow Service, which means our engineers will need to contribute directly to that codebase in partnership with the Duo Agent Platform team.

Collaboration Requirements:

  • All Workflow Service contributions must be developed in close partnership with the Duo Agent Platform team
  • Our implementations must align with their service architecture and vision
  • We commit to supporting Workflow Service’s broader goals and adhering to their technical standards

Collaboration Process:

  • Reach out to relevant Duo Agent Platform contacts (listed below) during the planning phase
  • Join their #g_duo_workflow channel
  • Follow our async communication preferences by default, but schedule sync meetings when needed and ensure key outcomes are documented in GitLab issues

Primary Duo Agent Platform contacts

Team Member                     Expertise Area
Mikołaj Wawrzyniak              Workflow Service architecture
Frédéric Caplette               Client-side implementation
Dylan Griffith                  Workflow Executor architecture: remote execution environment and runner implementation
Jessie Young                    Authorization and authentication
Shekhar Patnaik / Igor Drozdov  Duo Chat agent integration
Sebastian Rehm                  Engineering Manager, backup contact for any of the above

Our tech stack

Team meetings

Workflow Catalog: Group meeting

  • Time: Every Tuesday at 05:30 UTC and 10:00 UTC. It’s held twice on the same day so that team members in both APAC and EMEA can attend.
  • Purpose: This meeting serves as a general sync meeting to bring up any current issues and blockers.
  • Agenda: Google Doc (internal only)
  • Recordings: Google Drive (internal only)

Capacity planning

We use a lightweight system of issue weighting to help with capacity planning. These weights help us ensure that the amount of scheduled work in a cycle is reasonable, both for the team as a whole and for each individual. The “weight budget” for a given cycle is determined based on the team’s recent output, as well as the upcoming availability of each engineer.
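
The handbook does not prescribe a formula for the weight budget; the following is a purely illustrative TypeScript sketch of how such a budget might be derived from recent output and availability. All names and numbers are hypothetical.

```typescript
// Illustrative only: the actual budget is a judgment call by the team and the EM.
interface Engineer {
  name: string;
  recentWeightPerCycle: number; // weight this person has typically completed in a full cycle
  availability: number;         // fraction (0..1) of the upcoming cycle they are available
}

// Sum each engineer's typical output, scaled by their upcoming availability.
const weightBudget = (team: Engineer[]): number =>
  Math.round(team.reduce((sum, e) => sum + e.recentWeightPerCycle * e.availability, 0));

// Example: one engineer fully available, one out for half the cycle.
console.log(
  weightBudget([
    { name: 'Engineer A', recentWeightPerCycle: 8, availability: 1.0 },
    { name: 'Engineer B', recentWeightPerCycle: 10, availability: 0.5 },
  ]),
); // => 13
```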

Since things take longer than you think, it’s OK if an issue takes longer than the weight indicates. The weights are intended to be used in aggregate, and what takes one person a day might take another person a week, depending on their level of background knowledge about the issue. That’s explicitly OK and expected. We should strive to be accurate, but understand that they are estimates! Change the weight if it is not accurate or if the issue becomes harder than originally expected. Leave a comment indicating why the weight was changed and tag your EM so that we can better understand weighting and continue to improve.

Weights

The weights we use are:

1: Trivial

The problem is very well understood, no extra investigation is required, the exact solution is already known and just needs to be implemented, no surprises are expected, and no coordination with other teams or people is required.

Examples are documentation updates, simple regressions, and other bugs that have already been investigated and discussed and can be fixed with a few lines of code, or technical debt that we know exactly how to address, but just haven’t found time for yet.

2: Small

The problem is well understood and a solution is outlined, but a little bit of extra investigation will probably still be required to realize the solution. Few surprises are expected, if any, and no coordination with other teams or people is required.

Examples are simple features, like a new API endpoint to expose existing data or functionality, or regular bugs or performance issues where some investigation has already taken place.

3: Medium

Features that are well understood and relatively straightforward. A solution will be outlined, and most edge cases will be considered, but some extra investigation will be required to realize the solution. Some surprises are expected, and coordination with other teams or people may be required.

Bugs that are relatively poorly understood and may not yet have a suggested solution. Significant investigation will definitely be required, but the expectation is that once the problem is found, a solution should be relatively straightforward.

Examples are regular features, potentially with a backend and frontend component, or most bugs or performance issues.

5: Large

Features that are well understood, but known to be hard. A solution will be outlined, and major edge cases will be considered, but extra investigation will definitely be required to realize the solution. Many surprises are expected, and coordination with other teams or people is likely required.

Bugs that are very poorly understood, and will not have a suggested solution. Significant investigation will be required, and once the problem is found, a solution may not be straightforward.

Examples are large features with a backend and frontend component, or bugs or performance issues that have seen some initial investigation but have not yet been reproduced or otherwise “figured out”.

Anything larger than 5 should be broken down if possible.

Weights should account for both development and review time.

Security issues are typically weighted one level higher than they would normally appear from the table above. This is to account for the extra rigor of the patch release process. In particular, the fix usually needs more-careful consideration, and must also be backported across several releases.
