OpenAI compatible API

This document contains reference information for the SambaStudio OpenAI compatible API, which makes it easy to use our open-source models with existing applications. It describes the API's input and output formats.

Create chat completions

Creates a model response for the given chat conversation.

POST https://<your-sambastudio-domain>/openai/v1/<project-id>/<endpoint-id>/chat/completions

Request body

The chat request body formats are described below.

Reference

model (String): The name of the model to query. The value is the expert name.

messages (Array of objects): A list of messages comprising the conversation so far. Each message object contains:

  • role (string, required): The role of the message's author. One of system, user, or assistant.

  • content (string, required): The contents of the message.

max_tokens (Integer): The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length. The default value is the context length of the model.

temperature (Float): Determines the degree of randomness in the response. The value can be between 0 and 1.

top_p (Float): Nucleus sampling. Dynamically limits the candidate tokens for each prediction to the smallest set whose cumulative probability reaches top_p. The value can be between 0 and 1.

top_k (Integer): Limits sampling to the k most likely choices for the next predicted token. The value can be between 1 and 100.

stream (Boolean or null): If set, partial message deltas are sent. The default is false.

stream_options (Object or null): Options for the streaming response. Set this only when stream is true. The value can be include_usage: boolean. The default is null.

repetition_penalty (Float or null): Controls how repetitive the generated text can be. A lower value permits more repetition, while a higher value penalizes it. The value can be between 1.0 and 10.0. The default is 1.0, which means no penalty.
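For instance, a request body that combines the sampling parameters above might look like the following (the parameter values are illustrative, not recommendations):

{
   "model": "Meta-Llama-3.1-8B-Instruct",
   "messages": [
      {"role": "user", "content": "Write a short poem about the sea"}
   ],
   "max_tokens": 200,
   "temperature": 0.7,
   "top_p": 0.9,
   "top_k": 40,
   "repetition_penalty": 1.2
}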

Example request

Below is an example request body for a streaming response.

Example streaming request
{
   "messages": [
      {"role": "system", "content": "Answer the question in a couple sentences."},
      {"role": "user", "content": "Share a happy story with me"}
   ],
   "max_tokens": 800,
   "model": "Meta-Llama-3.1-8B-Instruct",
   "stream": true,
   "stream_options": {"include_usage": true}
}
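
The same body can be sent directly over HTTP. The curl sketch below assumes the endpoint accepts the API key as a bearer token, which is how the OpenAI client transmits it; substitute the placeholders for your deployment.

curl -X POST 'https://<your-sambastudio-domain>/openai/v1/<project-id>/<endpoint-id>/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <your-endpoint-api-key>' \
--data '{
   "messages": [
      {"role": "system", "content": "Answer the question in a couple sentences."},
      {"role": "user", "content": "Share a happy story with me"}
   ],
   "max_tokens": 800,
   "model": "Meta-Llama-3.1-8B-Instruct",
   "stream": true,
   "stream_options": {"include_usage": true}
}'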

Response

The API returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.

Chat completion object

Represents a chat completion response returned by the model, based on the provided input.

Reference

id (String): A unique identifier for the chat completion.

choices (Array): A list containing a single chat completion choice.

created (Integer): The Unix timestamp (in seconds) of when the chat completion was created.

model (String): The model used to generate the completion.

object (String): The object type, which is always chat.completion.

usage (Object): An optional field present when stream_options: {"include_usage": true} is set on the request. When present, it contains the token usage statistics for the entire request. Values returned are:

  • throughput_after_first_token: The rate (as tokens per second) at which output tokens are generated after the first token has been delivered.

  • time_to_first_token: The time (in seconds) the model takes to generate the first token.

  • model_execution_time: The time (in seconds) to generate a complete response, or all tokens.

  • output_tokens_count: The number of tokens generated in the response.

  • input_tokens_count: The number of tokens in the input prompt.

  • total_tokens_count: The sum of input and output token counts.

  • queue_time: The time (in seconds) a request spends waiting in the queue before being processed by the model.

Example

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "Llama-3-8b-chat",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "logprobs": null,
    "finish_reason": "stop"
  }]
}
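
As a quick illustration, the reply text and finish reason can be pulled out of a parsed completion object like so (the response body is the hypothetical example above):

import json

# Hypothetical raw response body, matching the example above.
raw = '''
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "Llama-3-8b-chat",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello there, how may I assist you today?"},
    "logprobs": null,
    "finish_reason": "stop"
  }]
}
'''

completion = json.loads(raw)
choice = completion["choices"][0]    # the single chat completion choice
print(choice["message"]["content"])  # the assistant's reply text
print(choice["finish_reason"])       # "stop" when generation completed normally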

Chat completion chunk object

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

Reference

id (String): A unique identifier for the chat completion.

choices (Array): A list containing a single chat completion choice.

created (Integer): The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

model (String): The model used to generate the completion.

object (String): The object type, which is always chat.completion.chunk.

usage (Object): An optional field present when stream_options: {"include_usage": true} is set. When present, it contains a null value except for the last chunk, which contains the token usage statistics for the entire request. Values returned are:

  • throughput_after_first_token: The rate (as tokens per second) at which output tokens are generated after the first token has been delivered.

  • time_to_first_token: The time (in seconds) the model takes to generate the first token.

  • model_execution_time: The time (in seconds) to generate a complete response, or all tokens.

  • output_tokens_count: The number of tokens generated in the response.

  • input_tokens_count: The number of tokens in the input prompt.

  • total_tokens_count: The sum of input and output token counts.

  • queue_time: The time (in seconds) a request spends waiting in the queue before being processed by the model.

Example

{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "created": 1694268190,
  "model": "Llama-3-8b-chat",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [
    {
      "index": 0,
      "delta": {},
      "logprobs": null,
      "finish_reason": "stop"
    }
  ]
}
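
Putting the chunk format together, the sketch below consumes a stream with the OpenAI client, concatenating deltas as they arrive and reading the usage statistics from the final chunk. This is a minimal sketch, assuming the placeholder URL and key are filled in for your deployment; since the usage field names follow the SambaStudio list above rather than the standard OpenAI names, the object is printed whole.

from openai import OpenAI

client = OpenAI(
    base_url="https://<your-sambastudio-domain>/openai/v1/<project-id>/<endpoint-id>",
    api_key="YOUR ENDPOINT API KEY"
)

stream = client.chat.completions.create(
    model="Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Share a happy story with me"}],
    stream=True,
    stream_options={"include_usage": True}
)

text = []
for chunk in stream:
    # Content chunks carry a delta; the final usage chunk may have an
    # empty choices list, so guard before indexing.
    if chunk.choices and chunk.choices[0].delta.content:
        text.append(chunk.choices[0].delta.content)
    if chunk.usage is not None:
        print("usage:", chunk.usage)

print("".join(text))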

Batch API

You can send a batch of queries in one request using the batch API.

curl --location 'https://<your-sambastudio-domain>/v1/<project-id>/<endpoint-id>/chat/completions' \
--header 'Content-Type: application/json' \
--header 'key: <your-endpoint-api-key>' \
--data '[
  {
    "model": "Meta-Llama-3-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are an AI assistant that helps with answering questions and providing information."
      },
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ],
    "process_prompt": true,
    "max_tokens": 50,
    "stream": true
  },
  {
    "model": "Meta-Llama-3-8B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are an AI assistant that helps with answering questions and providing information."
      },
      {
        "role": "user",
        "content": "What is the capital of India?"
      }
    ],
    "process_prompt": true,
    "max_tokens": 50,
    "stream": true
  }
]'
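
The same batch can be sent from Python with the requests library. This is a sketch under the same assumptions as the curl example: the URL, the key header, and the body are copied from above, with the placeholders filled in for your deployment.

import requests

url = "https://<your-sambastudio-domain>/v1/<project-id>/<endpoint-id>/chat/completions"
headers = {"Content-Type": "application/json", "key": "<your-endpoint-api-key>"}

def make_query(question):
    # Build one element of the batch; every query here uses the same
    # system prompt and generation settings as the curl example.
    return {
        "model": "Meta-Llama-3-8B-Instruct",
        "messages": [
            {"role": "system", "content": "You are an AI assistant that helps with answering questions and providing information."},
            {"role": "user", "content": question}
        ],
        "process_prompt": True,
        "max_tokens": 50,
        "stream": True
    }

batch = [make_query("What is the capital of France?"),
         make_query("What is the capital of India?")]

# stream=True on the HTTP call because the queries request streaming;
# print each line of the streamed response as it arrives.
with requests.post(url, headers=headers, json=batch, stream=True) as response:
    for line in response.iter_lines():
        if line:
            print(line.decode())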

Example requests using the OpenAI client

Example requests for streaming and non-streaming are shown below.

Streaming

from openai import OpenAI

# The OpenAI client appends /chat/completions to base_url, so the base
# URL should stop at the endpoint ID.
client = OpenAI(
    base_url="https://<your-sambastudio-domain>/openai/v1/<project-id>/<endpoint-id>",
    api_key="YOUR ENDPOINT API KEY"
)

completion = client.chat.completions.create(
    model="Meta-CodeLlama-70b-Instruct",
    messages=[
        {"role": "system", "content": "You are intelligent"},
        {"role": "user", "content": "Tell me a story in 3 lines"}
    ],
    stream=True
)

for chunk in completion:
    print(chunk.choices[0].delta)

Non-streaming

from openai import OpenAI

# As above, the client appends /chat/completions to base_url.
client = OpenAI(
    base_url="https://<your-sambastudio-domain>/openai/v1/<project-id>/<endpoint-id>",
    api_key="YOUR ENDPOINT API KEY"
)

response = client.chat.completions.create(
    model="Meta-Llama-3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "Answer the question in a couple sentences."},
        {"role": "user", "content": "Share a happy story with me"}
    ]
)
print(response.choices[0].message)