Vercel's AI SDK is a TypeScript toolkit for building AI-powered applications. The SambaNova provider lets you use SambaNova-hosted models with it through functions such as generateText.

Setup

The SambaNova provider is available via the sambanova-ai-provider module. You can install it with:

npm install sambanova-ai-provider

Environment variables

Create a .env file with a SAMBANOVA_API_KEY variable.
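
For example, a minimal .env file looks like this (substitute your own key):

SAMBANOVA_API_KEY=your-api-key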

Provider Instance

You can import the default provider instance sambanova from sambanova-ai-provider:

import { sambanova } from 'sambanova-ai-provider';

If you need a customized setup, you can import createSambaNova from sambanova-ai-provider and create a provider instance with your settings:

import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  // Optional settings
});

You can use the following optional settings to customize the SambaNova provider instance (a combined example follows the list):

  • baseURL string

    Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://api.sambanova.ai/v1.

  • apiKey string

    API key that is sent using the Authorization header. It defaults to the SAMBANOVA_API_KEY environment variable.

  • headers Record<string,string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation. Defaults to the global fetch function. You can use it as middleware to intercept requests, or to provide a custom fetch implementation, e.g. for testing.
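
For instance, a provider instance that routes calls through a proxy and sends an extra header could look like the following sketch (the proxy URL and the header name and value are placeholders, not part of the SambaNova API):

import { createSambaNova } from 'sambanova-ai-provider';

const sambanova = createSambaNova({
  // Placeholder proxy in front of the default https://api.sambanova.ai/v1.
  baseURL: 'https://my-proxy.example.com/v1',
  // Overrides the SAMBANOVA_API_KEY environment variable.
  apiKey: 'YOUR_API_KEY',
  // Extra headers sent with every request (placeholder name and value).
  headers: {
    'X-Request-Source': 'my-app',
  },
});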

Models

You can use any of the supported models with a provider instance. The first argument is the model ID, e.g. Meta-Llama-3.3-70B-Instruct.

const model = sambanova('Meta-Llama-3.3-70B-Instruct');

Example Usage

A basic demonstration of text generation using the SambaNova provider:

import { createSambaNova } from 'sambanova-ai-provider';
import { generateText } from 'ai';

const sambanova = createSambaNova({
  apiKey: 'YOUR_API_KEY',
});

const model = sambanova('Meta-Llama-3.3-70B-Instruct');

const { text } = await generateText({
  model,
  prompt: 'Hello, nice to meet you.',
});

console.log(text);

You’ll receive an output similar to the following:

Hello. Nice to meet you too. Is there something I can help you with or would you like to chat?
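
The same model instance also works with the AI SDK's other functions. As a rough sketch (assuming the streamText function exported by the ai package), streaming the response could look like this:

import { createSambaNova } from 'sambanova-ai-provider';
import { streamText } from 'ai';

const sambanova = createSambaNova({
  apiKey: 'YOUR_API_KEY',
});

const { textStream } = await streamText({
  model: sambanova('Meta-Llama-3.3-70B-Instruct'),
  prompt: 'Tell me a short story.',
});

// Print the response incrementally as chunks arrive.
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}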

Intercepting Fetch Requests

Intercepting fetch requests is supported via the fetch setting described above, e.g. for logging or testing.

Example

import { createSambaNova } from 'sambanova-ai-provider';
import { generateText } from 'ai';

const sambanovaProvider = createSambaNova({
  apiKey: 'YOUR_API_KEY',
  fetch: async (url, options) => {
    // Log the outgoing request before forwarding it.
    console.log('URL', url);
    console.log('Headers', JSON.stringify(options.headers, null, 2));
    console.log(`Body ${JSON.stringify(JSON.parse(options.body), null, 2)}`);
    // Delegate to the global fetch implementation.
    return await fetch(url, options);
  },
});

const model = sambanovaProvider('Meta-Llama-3.3-70B-Instruct');

const { text } = await generateText({
  model,
  prompt: 'Hello, nice to meet you.',
});

Example output

URL https://api.sambanova.ai/v1/chat/completions
Headers {
  "Content-Type": "application/json",
  "Authorization": "Bearer YOUR_API_KEY"
}
Body {
  "model": "Meta-Llama-3.3-70B-Instruct",
  "temperature": 0,
  "messages": [
    {
      "role": "user",
      "content": "Hello, nice to meet you."
    }
  ]
}