Vercel is a platform for deploying and hosting web applications that developers can use to manage websites and serverless functions. This guide shows how to use SambaNova models with Vercel's AI SDK through the SambaNova provider.
## Prerequisites

Before you begin, ensure you have:

- Node.js and npm installed.
- A SambaNova API key.
## Setup

The SambaNova provider is available via the `sambanova-ai-provider` module. You can install it with:

```bash
npm install sambanova-ai-provider
```
## Environment variables

Create a `.env` file with a `SAMBANOVA_API_KEY` variable.
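A minimal `.env` file might look like the following (the key value is a placeholder; use your own):

```bash
SAMBANOVA_API_KEY=your-api-key-here
```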
## Provider instance

You can import the default provider instance `sambanova` from `sambanova-ai-provider`:

```ts
import { sambanova } from 'sambanova-ai-provider';
```

If you need a customized setup, you can import `createSambaNova` from `sambanova-ai-provider` and create a provider instance with your settings:

```ts
import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  // Optional settings
});
```
You can use the following optional settings to customize the SambaNova provider instance:

- **baseURL** *string*

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.sambanova.ai/v1`.

- **apiKey** *string*

  API key that is sent using the `Authorization` header. It defaults to the `SAMBANOVA_API_KEY` environment variable.

- **headers** *Record&lt;string, string&gt;*

  Custom headers to include in the requests.

- **fetch** *(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;*

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
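As a minimal sketch of the `fetch` option's middleware use (the wrapper name and logging are illustrative, not part of the provider's API), a custom implementation with the expected signature might look like this:

```typescript
// Illustrative logging middleware matching the `fetch` option's signature.
// It logs the request URL, then delegates to the global fetch.
const loggingFetch = async (
  input: RequestInfo,
  init?: RequestInit,
): Promise<Response> => {
  const url = typeof input === 'string' ? input : input.url;
  console.log('Outgoing request:', url);
  return fetch(input, init);
};
```

You could then pass `loggingFetch` as the `fetch` setting when calling `createSambaNova`.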
## Models

You can use any of the SambaCloud models on a provider instance. The first argument is the model id, e.g. `Meta-Llama-3.3-70B-Instruct`.

```ts
const model = sambanova('Meta-Llama-3.3-70B-Instruct');
```
## Example usage

Basic demonstration of text generation using the SambaNova provider.

```ts
import { createSambaNova } from 'sambanova-ai-provider';
import { generateText } from 'ai';

const sambanova = createSambaNova({
  apiKey: 'YOUR_API_KEY',
});

const model = sambanova('Meta-Llama-3.3-70B-Instruct');

const { text } = await generateText({
  model,
  prompt: 'Hello, nice to meet you.',
});

console.log(text);
```
You'll receive an output similar to the following:

```
Hello. Nice to meet you too. Is there something I can help you with or would you like to chat?
```
## Intercepting fetch requests

Intercepting fetch requests is supported; see the Vercel AI SDK's example on intercepting fetch requests for more detail.
### Example

```ts
import { createSambaNova } from 'sambanova-ai-provider';
import { generateText } from 'ai';

const sambanovaProvider = createSambaNova({
  apiKey: 'YOUR_API_KEY',
  fetch: async (url, options) => {
    console.log('URL', url);
    console.log('Headers', JSON.stringify(options.headers, null, 2));
    console.log(`Body ${JSON.stringify(JSON.parse(options.body), null, 2)}`);
    return await fetch(url, options);
  },
});

const model = sambanovaProvider('Meta-Llama-3.3-70B-Instruct');

const { text } = await generateText({
  model,
  prompt: 'Hello, nice to meet you.',
});
```
### Example output

```
URL https://api.sambanova.ai/v1/chat/completions
Headers {
  "Content-Type": "application/json",
  "Authorization": "Bearer YOUR_API_KEY"
}
Body {
  "model": "Meta-Llama-3.3-70B-Instruct",
  "temperature": 0,
  "messages": [
    {
      "role": "user",
      "content": "Hello, nice to meet you."
    }
  ]
}
```