
@gitlab/gitlab-ai-provider
A comprehensive TypeScript provider for integrating GitLab Duo AI capabilities with the Vercel AI SDK. This package enables seamless access to GitLab's AI-powered features including chat, agentic workflows, and tool calling through a unified interface.
npm install @gitlab/gitlab-ai-provider
npm install @ai-sdk/provider @ai-sdk/provider-utils
import { createGitLab } from '@gitlab/gitlab-ai-provider';
import { generateText } from 'ai';
const gitlab = createGitLab({
apiKey: process.env.GITLAB_TOKEN,
instanceUrl: 'https://gitlab.com', // optional, defaults to gitlab.com
});
// All equivalent ways to create a chat model:
const model = gitlab('duo-chat'); // callable provider
const model2 = gitlab.chat('duo-chat'); // .chat() alias (recommended)
const model3 = gitlab.languageModel('duo-chat'); // explicit method
const { text } = await generateText({
model: gitlab.chat('duo-chat'),
prompt: 'Explain how to create a merge request in GitLab',
});
console.log(text);
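Streaming works through the standard AI SDK helpers as well. A minimal sketch using streamText, assuming the chat model supports streamed responses:
import { createGitLab } from '@gitlab/gitlab-ai-provider';
import { streamText } from 'ai';

const gitlab = createGitLab({
  apiKey: process.env.GITLAB_TOKEN,
});

// Print tokens as they arrive instead of waiting for the full response.
const { textStream } = streamText({
  model: gitlab.chat('duo-chat'),
  prompt: 'Summarize the GitLab merge request workflow',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}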
import { createGitLab } from '@gitlab/gitlab-ai-provider';
import { generateText } from 'ai';
const gitlab = createGitLab({
apiKey: process.env.GITLAB_TOKEN,
});
// Use agentic model for native tool calling support
const model = gitlab.agenticChat('duo-chat', {
anthropicModel: 'claude-sonnet-4-20250514',
maxTokens: 8192,
});
const { text } = await generateText({
model,
prompt: 'List all open merge requests in my project',
tools: {
// Your custom tools here
},
});
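The tools map uses the AI SDK's standard tool definitions. For illustration only, here is a sketch with a made-up getPipelineStatus tool; the tool name, schema, and lookup logic are placeholders rather than part of this package, and the multi-step options follow the current AI SDK API:
import { createGitLab } from '@gitlab/gitlab-ai-provider';
import { generateText, tool, stepCountIs } from 'ai';
import { z } from 'zod';

const gitlab = createGitLab({ apiKey: process.env.GITLAB_TOKEN });

const { text } = await generateText({
  model: gitlab.agenticChat('duo-chat'),
  prompt: 'What is the latest pipeline status for project 123?',
  tools: {
    // Hypothetical custom tool; replace with a real GitLab API call in your app.
    getPipelineStatus: tool({
      description: 'Look up the latest pipeline status for a project',
      inputSchema: z.object({ projectId: z.number() }),
      execute: async ({ projectId }) => ({ projectId, status: 'success' }),
    }),
  },
  stopWhen: stepCountIs(5), // allow a few tool-call round trips
});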
You can pass feature flags to enable experimental features in GitLab's Anthropic proxy:
import { createGitLab } from '@gitlab/gitlab-ai-provider';
// Option 1: Set feature flags globally for all agentic chat models
const gitlab = createGitLab({
apiKey: process.env.GITLAB_TOKEN,
featureFlags: {
duo_agent_platform_agentic_chat: true,
duo_agent_platform: true,
},
});
const model = gitlab.agenticChat('duo-chat');
// Option 2: Set feature flags per model (overrides global flags)
const modelWithFlags = gitlab.agenticChat('duo-chat', {
featureFlags: {
duo_agent_platform_agentic_chat: true,
duo_agent_platform: true,
custom_feature_flag: false,
},
});
// Option 3: Merge both (model-level flags take precedence)
const gitlab2 = createGitLab({
featureFlags: {
duo_agent_platform: true, // will be overridden
},
});
const mergedModel = gitlab2.agenticChat('duo-chat', {
featureFlags: {
duo_agent_platform: false, // overrides provider-level
duo_agent_platform_agentic_chat: true, // adds new flag
},
});
Pass a personal access token directly:
const gitlab = createGitLab({
apiKey: 'glpat-xxxxxxxxxxxxxxxxxxxx',
});
Or set the token via an environment variable:
export GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
const gitlab = createGitLab(); // Automatically uses GITLAB_TOKEN
The provider automatically detects and uses OpenCode authentication if available:
const gitlab = createGitLab({
instanceUrl: 'https://gitlab.com',
// OAuth tokens are loaded from ~/.opencode/auth.json
});
Custom headers can be attached to every request:
const gitlab = createGitLab({
apiKey: 'your-token',
headers: {
'X-Custom-Header': 'value',
},
});
Main provider factory that creates language models with different capabilities.
interface GitLabProvider {
(modelId: string): LanguageModelV2;
languageModel(modelId: string): LanguageModelV2;
chat(modelId: string): LanguageModelV2; // alias used in the examples above
agenticChat(modelId: string, options?: GitLabAgenticOptions): GitLabAgenticLanguageModel;
}
Provides native tool calling through GitLab's Anthropic proxy (https://cloud.gitlab.com/ai/v1/proxy/anthropic/).
Executes local file system and command tools:
list_dir - List directory contents
read_file - Read file contents
write_file - Write to files
edit_file - Edit file with find/replace
find_files - Find files by pattern
mkdir - Create directories
grep - Search file contents
run_command - Execute shell commands
run_git_command - Execute git commands
Executes GitLab API operations:
Automatically detects GitLab projects from git remotes.
const detector = new GitLabProjectDetector({
instanceUrl: 'https://gitlab.com',
getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});
const project = await detector.detectProject(process.cwd());
// Returns: { id: 12345, path: 'group/project', namespaceId: 67890 }
Caches project information with TTL.
const cache = new GitLabProjectCache(5 * 60 * 1000); // 5 minutes
cache.set('key', project);
const cached = cache.get('key');
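The detector and cache can be combined so git-remote detection runs at most once per TTL window. A small sketch, assuming both classes are exported from the package root as used above:
import { GitLabProjectDetector, GitLabProjectCache } from '@gitlab/gitlab-ai-provider';

const detector = new GitLabProjectDetector({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${process.env.GITLAB_TOKEN}` }),
});
const cache = new GitLabProjectCache(5 * 60 * 1000); // 5 minutes

// Detect once per working directory, then serve from the cache until the TTL expires.
async function getProject(cwd: string) {
  const cached = cache.get(cwd);
  if (cached) return cached;
  const project = await detector.detectProject(cwd);
  cache.set(cwd, project);
  return project;
}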
Manages OAuth token lifecycle.
const oauthManager = new GitLabOAuthManager();
// Exchange authorization code
const tokens = await oauthManager.exchangeAuthorizationCode({
instanceUrl: 'https://gitlab.com',
code: 'auth-code',
codeVerifier: 'verifier',
});
// Refresh tokens
const refreshed = await oauthManager.refreshIfNeeded(tokens);
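The exchanged tokens can then be passed to createGitLab via the apiKey and refreshToken settings. A sketch; the field names on the returned token object (accessToken, refreshToken) are assumptions for illustration:
import { createGitLab, GitLabOAuthManager } from '@gitlab/gitlab-ai-provider';

const oauthManager = new GitLabOAuthManager();
const tokens = await oauthManager.exchangeAuthorizationCode({
  instanceUrl: 'https://gitlab.com',
  code: 'auth-code',
  codeVerifier: 'verifier',
});

const gitlab = createGitLab({
  instanceUrl: 'https://gitlab.com',
  apiKey: tokens.accessToken, // assumed field name: OAuth access token
  refreshToken: tokens.refreshToken, // assumed field name: OAuth refresh token
});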
Manages direct access tokens for Anthropic proxy.
const client = new GitLabDirectAccessClient({
instanceUrl: 'https://gitlab.com',
getHeaders: () => ({ Authorization: `Bearer ${token}` }),
});
const directToken = await client.getDirectAccessToken();
// Returns: { token: 'xxx', headers: {...}, expiresAt: 123456 }
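Because the response includes an expiresAt timestamp, callers can reuse a token until shortly before it expires. A small sketch of that pattern, assuming expiresAt is a Unix timestamp in seconds:
import { GitLabDirectAccessClient } from '@gitlab/gitlab-ai-provider';

const client = new GitLabDirectAccessClient({
  instanceUrl: 'https://gitlab.com',
  getHeaders: () => ({ Authorization: `Bearer ${process.env.GITLAB_TOKEN}` }),
});

let cached: Awaited<ReturnType<typeof client.getDirectAccessToken>> | null = null;

// Fetch a fresh direct access token only when the cached one is missing or about to expire.
async function getDirectToken() {
  const now = Math.floor(Date.now() / 1000);
  if (!cached || cached.expiresAt - 60 < now) {
    cached = await client.getDirectAccessToken();
  }
  return cached;
}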
interface GitLabProviderSettings {
instanceUrl?: string; // Default: 'https://gitlab.com'
apiKey?: string; // PAT or OAuth access token
refreshToken?: string; // OAuth refresh token
name?: string; // Provider name prefix
headers?: Record<string, string>; // Custom headers
fetch?: typeof fetch; // Custom fetch implementation
featureFlags?: Record<string, boolean>; // Global feature flags (see Feature Flags above)
}
interface GitLabAgenticOptions {
anthropicModel?: string; // Default: 'claude-sonnet-4-20250514'
maxTokens?: number; // Default: 8192
featureFlags?: Record<string, boolean>; // Per-model feature flags (override provider-level flags)
}
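Putting the two option shapes together, a configuration sketch for a self-managed instance; the instance URL, header, and logging wrapper are placeholders:
import { createGitLab } from '@gitlab/gitlab-ai-provider';

// Provider-level settings (GitLabProviderSettings).
const gitlab = createGitLab({
  instanceUrl: 'https://gitlab.example.com', // placeholder self-managed instance
  apiKey: process.env.GITLAB_TOKEN,
  headers: { 'X-Request-Source': 'my-app' }, // illustrative custom header
  fetch: (input, init) => {
    // Custom fetch wrapper, e.g. for request logging.
    console.debug('GitLab AI request:', String(input));
    return fetch(input, init);
  },
});

// Model-level settings (GitLabAgenticOptions).
const model = gitlab.agenticChat('duo-chat', {
  anthropicModel: 'claude-sonnet-4-20250514',
  maxTokens: 4096,
});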
import { GitLabError } from '@gitlab/gitlab-ai-provider';
try {
const result = await generateText({ model, prompt });
} catch (error) {
if (error instanceof GitLabError) {
if (error.isAuthError()) {
console.error('Authentication failed');
} else if (error.isRateLimitError()) {
console.error('Rate limit exceeded');
} else if (error.isServerError()) {
console.error('Server error:', error.statusCode);
}
}
}
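These predicates make targeted retries straightforward. A sketch of a simple backoff on rate-limit errors; the attempt count and delay are arbitrary:
import { GitLabError } from '@gitlab/gitlab-ai-provider';
import { generateText, type LanguageModel } from 'ai';

// Retry only when GitLab reports a rate limit; every other error is rethrown immediately.
async function generateWithRetry(model: LanguageModel, prompt: string, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await generateText({ model, prompt });
    } catch (error) {
      const retryable = error instanceof GitLabError && error.isRateLimitError();
      if (!retryable || attempt >= maxAttempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt)); // linear backoff
    }
  }
}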
npm run build # Build once
npm run build:watch # Build in watch mode
npm test # Run all tests
npm run test:watch # Run tests in watch mode
npm run lint # Lint code
npm run lint:fix # Lint and auto-fix
npm run format # Format code
npm run format:check # Check formatting
npm run type-check # TypeScript type checking
gitlab-ai-provider/
├── src/
│   ├── index.ts                          # Main exports
│   ├── gitlab-provider.ts                # Provider factory
│   ├── gitlab-agentic-language-model.ts  # Agentic chat model
│   ├── gitlab-direct-access.ts           # Direct access tokens
│   ├── gitlab-oauth-manager.ts           # OAuth management
│   ├── gitlab-oauth-types.ts             # OAuth types
│   ├── gitlab-project-detector.ts        # Project detection
│   ├── gitlab-project-cache.ts           # Project caching
│   ├── gitlab-anthropic-tools.ts         # Anthropic tool executor
│   ├── gitlab-api-tools.ts               # GitLab API tool executor
│   ├── gitlab-api-types.ts               # API types
│   ├── gitlab-error.ts                   # Error handling
│   └── gitlab-workflow-debug.ts          # Debug logging
├── tests/                                # Test files
├── dist/                                 # Build output
├── package.json
├── tsconfig.json
├── tsup.config.ts
└── vitest.config.ts
Contributions are welcome! Please see our Contributing Guide for detailed guidelines.
Quick Start for Contributors:
Commit Messages: Use conventional commits format
feat(scope): add new feature
fix(scope): fix bug
docs(scope): update documentation
Code Quality: Ensure all checks pass
npm run lint
npm run type-check
npm test
Testing: Add tests for new features
This project is built on top of:
Made with ❤️ for the OpenCode community