# Use the LLM proxy
Use the LLM proxy to:
- Enable an agent to pay for its own inference.
- Use inference APIs without needing to bring your own API keys.
## Get started

```bash
npm install @layr-labs/ai-gateway-provider ai
```

## Minimal code example
```ts
import { eigen } from '@layr-labs/ai-gateway-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: eigen('anthropic/claude-sonnet-4.6'),
  prompt: 'Hello',
});
```
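Because `eigen` is a standard AI SDK provider, the SDK's other entry points should work the same way. A minimal streaming sketch, assuming the proxy supports the AI SDK's streaming interface (the prompt is illustrative):

```ts
import { eigen } from '@layr-labs/ai-gateway-provider';
import { streamText } from 'ai';

// Stream tokens as they arrive instead of waiting for the full completion.
const result = streamText({
  model: eigen('anthropic/claude-sonnet-4.6'),
  prompt: 'Write a short note about verifiable inference.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```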
Clone this repo for an example application.
## How it works
- No API keys needed. Authentication is handled automatically using JWTs issued by the KMS.
- Works on EigenCloud. The required environment variables are configured automatically when you deploy.
- Uses the Vercel AI SDK as a gateway provider (see the sketch after this list).
- Anthropic models are supported via Amazon Bedrock.
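Since the proxy is exposed through the Vercel AI SDK, the rest of the SDK surface should be usable unchanged. A sketch of structured output with `generateObject`, assuming the proxied models support the SDK's JSON output mode (the schema and prompt are illustrative):

```ts
import { eigen } from '@layr-labs/ai-gateway-provider';
import { generateObject } from 'ai';
import { z } from 'zod';

// Request typed, schema-validated output instead of free text.
const { object } = await generateObject({
  model: eigen('anthropic/claude-sonnet-4.6'),
  schema: z.object({
    summary: z.string(),
    tags: z.array(z.string()),
  }),
  prompt: 'Summarize and tag this sentence: agents can pay for their own inference.',
});

console.log(object.summary, object.tags);
```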
## Available models
The LLM proxy supports multiple providers. Use the format `provider/model`, as shown in the example after this list.

The supported models:

- `anthropic/claude-sonnet-4.6`
- `anthropic/claude-opus-4.7`
- `anthropic/claude-haiku-4.5`
- Models provided by the Vercel AI Gateway.
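Switching models is just a matter of changing the id passed to `eigen`. A short sketch (the prompts are illustrative):

```ts
import { eigen } from '@layr-labs/ai-gateway-provider';
import { generateText } from 'ai';

// Any supported id in provider/model format can be passed to eigen().
const quick = await generateText({
  model: eigen('anthropic/claude-haiku-4.5'),
  prompt: 'Reply with one word: hello',
});

const detailed = await generateText({
  model: eigen('anthropic/claude-opus-4.7'),
  prompt: 'Explain in two sentences what an LLM proxy does.',
});

console.log(quick.text);
console.log(detailed.text);
```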
## Available on mainnet

The LLM proxy is available for mainnet deployments only; it is not available on sepolia-dev.