SambaNova vision models support multimodal inputs, allowing users to process both text and images. These models analyze images and generate context-aware text responses. Learn how to query SambaNova vision models using either the SambaNova or OpenAI Python client.

Make a query with an image

On SambaNova, the vision model request follows OpenAI’s multimodal input format, which accepts both text and image inputs in a structured payload. The call is similar to Text Generation, but it additionally includes an encoded image file, referenced via the image_path variable. A helper function converts this image into a base64 string so it can be passed alongside the text in the request.

Step 1

Create a new Python file and copy the code below.
This example uses the Llama-4-Maverick-17B-128E-Instruct model.
from sambanova import SambaNova
import base64

client = SambaNova(
    base_url="your-sambanova-base-url",
    api_key="your-sambanova-api-key",
)

# Helper function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# The path to your image
image_path = "sample.JPEG"

# The base64 string of the image
image_base64 = encode_image(image_path)

print(image_base64)

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this image?"},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
            ]
        }
    ]
)

print(response.choices[0].message.content)
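
Since the payload follows OpenAI’s chat completions format, the same query can also be made with the OpenAI Python client. The following is a minimal sketch, assuming the same base URL, API key, image file, and model as above:

import base64
from openai import OpenAI

# Assumes the same SambaNova base URL and API key used in the example above
client = OpenAI(
    base_url="your-sambanova-base-url",
    api_key="your-sambanova-api-key",
)

# Encode the image as a base64 string
with open("sample.JPEG", "rb") as image_file:
    image_base64 = base64.b64encode(image_file.read()).decode("utf-8")

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this image?"},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
            ]
        }
    ]
)

print(response.choices[0].message.content)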

Step 2

Replace the string fields "your-sambanova-api-key" and "your-sambanova-base-url" in the construction of the client with your SambaNova API key and base URL from the API keys and URLs page.
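
If you prefer not to hardcode the key, it can be read from an environment variable instead. A minimal sketch, assuming the key is exported in a hypothetical SAMBANOVA_API_KEY variable:

import os
from sambanova import SambaNova

# Assumes the API key was exported as SAMBANOVA_API_KEY (hypothetical variable name)
client = SambaNova(
    base_url="your-sambanova-base-url",
    api_key=os.environ["SAMBANOVA_API_KEY"],
)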

Step 3

Select an image and move it to a path that you can specify in the following lines:
# The path to your image
image_path = "sample.JPEG"
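
The request in Step 1 hardcodes image/jpeg in the data URL, so if you choose a different format (for example PNG), the MIME type should be updated to match. A minimal sketch, assuming the image_path and image_base64 variables from Step 1, that derives the MIME type with Python's standard mimetypes module:

import mimetypes

# Guess the MIME type from the file extension (e.g., image/png for .png); fall back to JPEG
mime_type = mimetypes.guess_type(image_path)[0] or "image/jpeg"

# Build the data URL with the matching MIME type for the image_url field
image_url = f"data:{mime_type};base64,{image_base64}"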

Step 4

Verify the text prompt that is paired with the image in the content portion of the user message.
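
For example, to ask a different question about the same image, only the text entry in the content list needs to change; a minimal sketch with an alternative question (the wording is illustrative):

# Only the text part changes; the image_url part stays the same
content = [
    {"type": "text", "text": "List the objects visible in this image."},
    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
]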

Step 5

Run the Python file to receive the text output.
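
For example, assuming you saved the script as vision_query.py (a hypothetical file name):

python vision_query.py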