AI usage in UX

How to use AI for UX activities. Learn when to use AI, best practices, what to avoid, and how to keep users at the center while using AI as a helper.

This page shows how to use AI for user experience (UX) activities. It adds to our general AI guidance and is for anyone doing UX work at GitLab, regardless of their department.

If you want to know how to work on AI experiences, see these resources.

Getting started

First, read the general guidance in our internal handbook:

The information on this page adds to that general guidance with advice for anyone doing UX activities.

About AI tools

This guide aims to be tool-agnostic, but how you can use AI depends on which tools are approved for use. This page will be updated as new tools become available.

Based on our approved AI tools (internal), the main AI tools for UX work at GitLab are:

  • Claude (general purpose)
  • Dovetail (interview transcriptions)
  • FigJam (whiteboards and diagrams)
  • Figma Design (content, images, design, and basic prototypes)
  • Figma Make (functional prototypes and web apps)
  • GitLab Duo (GitLab and software development)
  • Rally (interview transcriptions)

Our Tech Stack may have other tools with AI features you can use for UX work.

Main rule: AI helps, doesn’t replace

AI works best as a creative assistant that amplifies human skills, not as a replacement for user-centered thinking. 47% of UX professionals say AI helps them, and teams work up to 250% faster when they use it well. Success comes from using AI deliberately while keeping human judgment in the loop and staying focused on users.

When to use AI

Get feedback

Use AI as a critic, not just as a generator. It can help you get things done quickly, but it can also help you improve your work faster. It’s a cost-effective way to get feedback before asking a colleague for an expert review.

To get feedback on your work:

Plan research

Tools: Claude.

  • Study plans: Create research questions, survey questions, interview guides, and usability test tasks (see the example prompt after this list).
  • Ideation support: Generate content for participants to use during studies.
  • Participant recruitment: Draft recruitment emails and screeners.
  • Role-playing - be careful: Simulate participant responses or interactions to test study plans. It can’t predict or replace real human behavior, so always test with actual users or run a small pilot before launch.
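For example, a prompt for drafting usability test tasks could look like the sketch below. The feature area, participant profile, and task count are illustrative, not a prescribed template.

```plaintext
You're a UX researcher planning a moderated usability test. Participants
are developers with 1-3 years of GitLab experience. Draft 5 usability
test tasks for configuring merge request approval rules in a project.
For each task, include the scenario, the success criteria, and one
follow-up question to ask the participant.
```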

Collect data

Prompts: See UX Research Prompts (internal).

Tools: Claude, Dovetail, GitLab Duo, Rally.

Analyze data

Prompts: See UX Research Prompts (internal).

Tools: Claude (other tools produce incorrect results more often).

  • Pattern finding - be careful: Spot trends across different data and user sessions. Always review all claims and ask for sources.
  • Initial coding - be careful: Create starting themes from research data. Always review all claims and ask for sources.
  • Spreadsheet formulas: Generate formulas for data analysis in tools like Google Sheets (see the example after this list).
  • Reporting: Get help with storytelling, report outlines, and summaries from research results.
  • Persona content: Turn research insights into persona stories.
  • Proto-personas: Draft hypothetical personas. Clearly mark these as “based on assumptions” and plan to test them with real users later.
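For example, if your coded research data has one session per row, AI can draft counting formulas for you. A minimal sketch, assuming themes are tagged in column B and personas in column C of a Google Sheet:

```plaintext
Count sessions tagged with the "navigation" theme:
=COUNTIF(B2:B200, "navigation")

Count the same theme only for sessions with persona "P1":
=COUNTIFS(B2:B200, "navigation", C2:C200, "P1")
```

As with pattern finding, double-check the resulting counts against the raw data before reporting them.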

Whiteboards and diagrams

Tools: Claude, FigJam, GitLab Duo.

  • Whiteboards: Prepare boards for team exercises and meetings.
  • Diagrams: Create visual supports like diagrams, mind maps, flow charts, and timelines. Claude and GitLab Duo can create diagrams for GitLab Flavored Markdown (see the example after this list).
  • Stickies: Quickly sift through and categorize stickies to create alignment and outline next steps.
  • Images: Generate and edit images to go beyond text and stickies with FigJam.
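For example, you can ask for a user flow as a Mermaid diagram, which renders directly in GitLab Flavored Markdown (issues, epics, and merge request descriptions). A minimal sketch with an illustrative merge request flow:

```mermaid
flowchart LR
  A[Open merge request] --> B{Pipeline passed?}
  B -- Yes --> C[Request review]
  B -- No --> D[View failed job logs]
  D --> E[Fix and push changes]
  E --> B
```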

UI text

Tools: Claude, Figma Design.

  • UI text creation: Create different versions of error messages, tooltips, and UI text.
  • Tone changes: Make content clearer, shorter, or match our brand voice.
  • A/B test versions: Generate different wordings for testing.

Design

Tools: Claude, Figma Design.

  • Idea exploration: Quickly create many design directions, use cases, or scenarios. Use Claude for ideas in text and Figma Design for visual ideas.
  • Real content: Replace placeholder text with actual UI text.
  • Wireframe starting points: Use as first concepts, not final designs.
  • Visual assets: Create quick icons, illustrations, or concept art with Figma Design.
  • Fake data - be careful: Create user input for testing (like many form entries or user profiles). It can’t predict or replace real human behavior, so always test with actual users.

Prototype

We are currently testing and choosing tools for AI prototyping in this confidential issue.

Tools: Claude, Figma Design, Figma Make.

  • Basic prototypes: Add interactions to quickly turn designs into basic prototypes with Figma Design.
  • Functional prototypes: Build functional prototypes that show ideas quickly. Figma Make can use Pajamas components, while Claude uses generic UI components.

Product documentation

Tools: Claude, GitLab Duo.

  • Write and improve documentation: Create drafts, turn lists into tables, and make content easier to scan.
  • Technical tasks: Fix broken pipelines, write scripts to analyze data, and create Mermaid diagrams.
  • Analysis and research: Review documentation to find improvements, summarize long documents, and organize content by topic.

For more information, see how to use AI for GitLab documentation.

When not to use AI

  • Never replace real users:
    • Don’t pretend AI feedback is user feedback or create fake research data.
    • AI doesn’t understand the emotions and context needed for decisions that genuinely affect user well-being.
    • No AI can accurately predict real human behavior.
  • Important decisions: Sensitive situations, like software security or personal data, need careful human oversight that AI can’t provide.
  • Production without review:
    • Never give users AI-created outputs without human checking.
    • AI tools can create generic, template-like outputs even with specific instructions.
  • Faster to do manually:
    • Sometimes struggling with an AI prompt takes more time than just doing the task yourself.
    • If you spend 30 minutes trying to get AI to create a very specific idea, it might be faster to sketch it by hand.
    • Use AI where it clearly helps with speed or creativity; if it’s being difficult, go back to your normal way.

Recommendations

| ✅ Do | ❌ Don’t |
|------|---------|
| Use AI for boring work and brainstorming | Skip user research or testing |
| Pick right AI mode for task | Use same AI mode for everything |
| Keep humans involved at every step | Trust AI outputs blindly |
| Check insights with real data | Put sensitive data into unknown tools |
| Write detailed, context-rich prompts | Use AI content without editing |
| Document when and how AI was used | Hide that you used AI |
| Try different versions and experiment | Think one attempt is enough |
| Follow ethical standards | Ignore company guidelines |
| Keep developing core UX skills | Become dependent on AI |

Best practices

To avoid common problems when using AI, learn how to apply best practices.

Problem: Lawsuits against AI companies highlight copyright risks. In addition, many AI services keep and use input data to train their models, which could expose private information.

Best practices:

Modes

Problem: Using the same AI mode for every task can waste time or give poor results. Research mode for simple tasks wastes time, while standard mode for complex work can miss critical insights.

Best practice: Pick the right AI mode for your task. To select a mode, you may need to turn on a setting or choose a different model.

| Mode | When to use | Examples |
|------|-------------|----------|
| Standard/small/mini<br>Change to a smaller mode for quicker responses | Simple questions with clear answers and enough context | - Writing button text<br>- Drafting test scripts<br>- Quick brainstorming |
| Web search<br>Usually turned on by default | Need current info beyond AI’s training | - Latest coding practices<br>- Recent security issues<br>- Competitor updates |
| Research | Deep analysis needing multiple sources (10+ minutes) | - Competitive analysis reports<br>- Industry best practices<br>- Strategic planning |
| Thinking/reasoning | Complex problems needing step-by-step work (20-60 seconds, worth the time for most UX work) | - Accessibility planning<br>- Information architecture planning<br>- User flow mapping |

Context and requirements

Problem: AI doesn’t know your project details, preferences, or knowledge unless you manage what information it has access to. Without this, results can be too general and don’t match your needs.

Best practice: Context engineering means giving AI the right information at the right time for each task. You don’t just write good prompts; you also manage the AI’s knowledge.

1. Write context: Use files that store project knowledge so you don’t repeat yourself. Examples:

  • User personas and research findings
  • Design principles and guidelines
  • Technical limits and requirements
  • Style guides and component rules

With Claude, create a project with knowledge and instructions. For other tools, keep a “project context” folder with key files you can reference or share with AI when starting new tasks.
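As a sketch, a project context file can be a short Markdown document you attach or paste at the start of a task. Everything below (the feature name, personas, and constraints) is hypothetical:

```markdown
# Project context: Pipeline insights page

## Users
- Primary: developers monitoring daily builds
- Secondary: engineering managers reviewing pipeline costs

## Design constraints
- Follow Pajamas component guidelines; avoid custom components
- Must meet WCAG 2.1 AA and work at common laptop widths

## Key research findings
- Users miss failure notifications when many pipelines run in parallel
- Log access is the most requested shortcut from the pipeline list
```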

2. Select context: Pull the right information into each conversation based on your current task.

| Include | Examples |
|---------|----------|
| Role | “You’re a UX designer for developer tools with deep understanding of CI/CD workflows”<br>“Act as a DevSecOps engineer reviewing the merge request interface for security scanning results” |
| Specific task with context | “Write 3 error messages for when a CI/CD pipeline fails because security problems were found during SAST scanning”<br>“Create 5 usability test tasks for developers setting up their first GitLab Runner” |
| Complete background info | “Our users are software developers with 3–10 years experience who manage multiple repositories. They want clear feedback on build status and quick access to logs”<br>“This is for a DevSecOps platform. Users range from junior developers to senior SREs managing production deployments across multiple environments” |
| Examples | “Here’s a good error message from our app: …”<br>“Follow the style of these existing personas: …” |
| Format | “Use this template format”<br>“Format as a table with columns: …” |
| Success criteria | “Include the failed stage, use standard Git terms, use clear language, give next steps”<br>“Follow GitLab’s design system, be easy to scan, and include error codes” |
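Putting these pieces together, a complete prompt could look like the sketch below, which combines the role, task, background, format, and success criteria examples from the table above:

```plaintext
You're a UX designer for developer tools with deep understanding of CI/CD
workflows. Write 3 error messages for when a CI/CD pipeline fails because
security problems were found during SAST scanning. Our users are software
developers with 3-10 years of experience who manage multiple repositories;
they want clear feedback on build status and quick access to logs. Format
the options as a table with columns: message, tone, character count. Each
message should name the failed stage, use standard Git terms, use clear
language, and give next steps.
```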

3. Compress context: When chats get long, summarize key decisions instead of repeating everything. Examples:

  • “Based on our discussion, we decided: (key points)”
  • “Main user problems: slow feedback, unclear errors”
  • “Using the guidelines we established earlier…”

With Claude, you can search and reference past chats.

4. Isolate context: Use different conversations for different types of work to avoid confusion. Approaches:

  • Separate chats for research vs. design vs. testing.
  • Different sessions for each user type or feature.
  • Switch AI “roles” between conversations (researcher vs. designer vs. writer).

Improvement

Problem: Taking the first AI response misses the chance for better results. AI can create many variations, but teams often don’t use this ability.

Best practices:

  • Ask for multiple options (3–10 variations is usually enough).
  • Combine the best parts (phrases, structures, or concepts).
  • Improve through conversation: ask follow-ups and refinements to previous responses.
  • Always apply human editing and expertise before sharing.

Example:

  1. Round 1: “Create tooltip for CI/CD pipeline status”.
  2. Round 2: “Make it shorter and mention the last commit”.
  3. Round 3: “Use GitLab’s specific terms for job artifacts”.
  4. Round 4: Combine best parts, add team’s style guide requirements.
  5. Final: Human polish for clarity and brand consistency.

Review

Problem: Even with context, AI results can be incorrect, inappropriate, or incomplete. AI explanations can actually make people rely more on AI recommendations instead of strengthening human judgment.

Best practices:

  • Fact-check all claims and ask for sources.
  • Double-check counts, percentages, and other numbers.
  • Compare with established UX principles.
  • Test with real users when possible.
  • Review for bias in personas and recommendations.
  • Treat AI like an intern: helpful, but needs oversight.
  • Build review time into your process for any AI contribution.

Biases

Problem: AI can reflect and amplify biases present in its training data. This can create recommendations that aren’t representative or that put specific groups at a disadvantage.

Best practices:

  • Do an “inclusivity check” on all AI outputs.
  • Ask: Could any group be misrepresented or hurt by this?
  • Ask for diversity when creating personas or user scenarios.
  • Test outputs with diverse user groups.
  • Watch for cultural assumptions in language and images.
  • Consider asking AI to self-check: “Does this contain any biased assumptions?”

Transparency

Problem: Stakeholders and team members may not know when or how AI was used in research and design work, leading to trust issues and an inability to properly judge the quality of insights. This lack of openness can also create ethical concerns about hidden AI involvement in user-facing decisions.

Best practices:

  • Note which parts of deliverables used AI help.
  • Disclose AI involvement in published content, like research reports and user-facing decisions. Examples:
    • “Analysis for these interviews was done using Claude, checked by @username”
    • “UI text variations created by Figma, reviewed and edited by @username”
  • Keep copies of successful prompts in shared documentation.
  • Share both AI successes and failures with the team to build group knowledge. See documentation and knowledge sharing.

What’s next

Staying current

The AI world changes quickly, with new abilities and tools appearing constantly. You should invest in continuous learning to stay current with AI developments relevant to your work. See the UX Department learning page and company learning resources available to you, and discuss this with your manager.

Documentation and knowledge sharing

We want to learn from both successes and failures in AI usage for UX work at GitLab.

Check and contribute to these resources:

Tool evaluation and selection

To help you successfully use AI for UX work, we need ongoing assessment of new AI abilities against existing workflows. If you have an idea or suggestion, create an issue in the GitLab Design project.
