Audio
SambaNova’s first speech reasoning model on SambaNova Cloud extends our multimodal AI capabilities beyond vision to advanced audio processing and understanding. The model offers OpenAI-compatible endpoints that enable real-time reasoning, transcription, and translation.
The Whisper-Large-v3 model
- Model: Whisper-Large-v3
- Description: State-of-the-art automatic speech recognition (ASR) and translation model. Developed by OpenAI and trained on more than 5 million hours of labeled audio. Excels at multilingual and zero-shot speech tasks across diverse domains.
- Model ID: `Whisper-Large-v3`
- Supported languages: Multilingual
Core capabilities
- Transcribes and translates extended audio inputs (files up to 25 MB).
- Delivers high accuracy on speech recognition and translation tasks.
- Provides OpenAI-compatible endpoints for transcriptions and translations.
Request parameters
| Parameter | Type | Description | Default | Endpoints |
|---|---|---|---|---|
| `model` | String | The ID of the model to use. | Required | `transcriptions`, `translations` |
| `file` | File | Audio file in FLAC, MP3, MP4, MPEG, MPGA, M4A, Ogg, WAV, or WebM format. File size limit: 25 MB. | Required | `transcriptions`, `translations` |
| `prompt` | String | Prompt to influence transcription style or vocabulary. Example: "Please transcribe carefully, including pauses and hesitations." | Optional | `transcriptions`, `translations` |
| `response_format` | String | Output format: either `json` or `text`. | `json` | `transcriptions`, `translations` |
| `language` | String | The language of the input audio. Using ISO-639-1 format (e.g., `en`) improves accuracy and latency. | Optional | `transcriptions`, `translations` |
| `stream` | Boolean | Enables streaming responses. | `false` | `transcriptions`, `translations` |
| `stream_options` | Object | Additional streaming configuration (e.g., `{"include_usage": true}`). | Optional | `transcriptions`, `translations` |
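As a sketch of how these parameters fit together, the hypothetical helper below validates an audio upload against the format and size limits in the table and assembles the non-file form fields for a transcription request. The helper name and structure are illustrative, not part of the SambaNova API.

```python
# Client-side validation for the transcriptions/translations endpoints,
# based on the parameter table above.
ALLOWED_FORMATS = {"flac", "mp3", "mp4", "mpeg", "mpga", "m4a", "ogg", "wav", "webm"}
MAX_BYTES = 25 * 1024 * 1024  # 25 MB file-size limit

def build_transcription_fields(filename, size_bytes, language=None,
                               prompt=None, response_format="json"):
    """Validate an upload and assemble the form fields for a request."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported audio format: {ext}")
    if size_bytes > MAX_BYTES:
        raise ValueError("audio file exceeds the 25 MB limit")
    fields = {"model": "Whisper-Large-v3", "response_format": response_format}
    if language:
        fields["language"] = language  # ISO-639-1 code, e.g. "en"
    if prompt:
        fields["prompt"] = prompt
    return fields
```

The returned fields, together with the audio file itself, would then be sent as a multipart form to the OpenAI-compatible transcriptions or translations endpoint.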
The Qwen2-Audio Instruct model
- Model: Qwen2-Audio Instruct
- Description: Instruction-tuned large audio language model. Built on Qwen-7B with a Whisper-large-v3 audio encoder (8.2B parameters in total).
- Model ID: `qwen2-audio-7b-instruct`
- Supported languages: Multilingual

This model is currently available as a beta release.
Core capabilities
- Transforms audio into intelligence: build GPT-4-style voice applications quickly.
- Provides direct question answering for any audio input.
- Offers comprehensive audio processing: real-time conversation, transcription, translation, and analysis through a single unified model.
Customization and control
- System-level prompts: Use the Assistant Prompt in the request to customize model behavior for specific requirements, such as:
  - Brand-specific formatting (e.g., BrandName vs. brandname).
  - Domain-specific terminology.
  - Response style and tone control.

  See the `messages` parameter in the Request parameters section for more details.
- View the Audio reasoning, Translation, and Transcription API endpoint documents for more details.
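To make system-level prompting concrete, here is a hedged sketch of a messages list that pins brand formatting, terminology, and tone before an audio question. The brand name is invented, and the exact content schema is an assumption based on the `messages` parameter described in the Request parameters section below.

```python
# A system message steers formatting and tone; the user turn pairs a
# base64-encoded audio clip with a text instruction.
system_prompt = (
    "You are a support assistant for AcmeSound (always written 'AcmeSound', "
    "never 'acmesound'). Use a formal tone and expand industry acronyms "
    "on first use."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": [
        {"type": "audio_content", "audio_content": "<base64-encoded audio>"},
        {"type": "text", "text": "Summarize the caller's issue."},
    ]},
]
```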
Audio processing
- Silence detection: Intelligent identification of meaningful pauses and gaps in speech.
- Noise cancellation: Advanced noise filtering and clean audio processing.
- Multilingual processing: Support for multiple languages with automatic language detection.
Analysis capabilities
- Sentiment analysis: Detects and analyzes emotional content in speech.
- Multi-speaker handling: Processes conversations with multiple participants.
- Mixed audio understanding: Comprehends speech, music, and environmental sounds.
Speech recognition performance numbers
- Metrics are taken from the published Qwen2-Audio paper benchmarks.
- Values are word error rate (WER, %); lower is better.
| Language | Dataset | Qwen2-Audio | Whisper-large-v3 | Improvement |
|---|---|---|---|---|
| English | Common Voice 15 | 8.6% | 9.3% | +7.5% |
| Chinese | Common Voice 15 | 6.9% | 12.8% | +46.1% |
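The Improvement column appears to be the relative WER reduction versus the Whisper-large-v3 baseline, which can be checked directly:

```python
def rel_improvement(baseline_wer, model_wer):
    """Relative WER reduction (%) versus a baseline."""
    return (baseline_wer - model_wer) / baseline_wer * 100

# English: (9.3 - 8.6) / 9.3 -> ~7.5%; Chinese: (12.8 - 6.9) / 12.8 -> ~46.1%
english = round(rel_improvement(9.3, 8.6), 1)
chinese = round(rel_improvement(12.8, 6.9), 1)
```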
Request parameters
| Parameter | Type | Description | Default | Endpoints |
|---|---|---|---|---|
| `model` | String | The ID of the model to use. Only `qwen2-audio-7b-instruct` is currently available. | Required | All |
| `messages` | Array | A list of messages, each with a `role` (`user`, `system`, `assistant`) and content of type `text` or `audio_content` (base64-encoded audio). | Required | All |
| `response_format` | String | The output format: either `json` or `text`. | `json` | All |
| `temperature` | Number | Sampling temperature between 0 and 1. Higher values (e.g., 0.8) increase randomness; lower values (e.g., 0.2) make output more focused. | 0 | All |
| `max_tokens` | Number | The maximum number of tokens to generate. | 1000 | All |
| `file` | File | Audio file in FLAC, MP3, MP4, MPEG, MPGA, M4A, Ogg, WAV, or WebM format. Each file must not exceed 30 seconds in duration. | Required | All |
| `language` | String | The target language for transcription or translation. | Optional | `transcriptions`, `translations` |
| `stream` | Boolean | Enables streaming responses. | `false` | All |
| `stream_options` | Object | Additional streaming configuration (e.g., `{"include_usage": true}`). | Optional | All |
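Putting the parameters together, the hypothetical builder below assembles a complete request body for `qwen2-audio-7b-instruct`, base64-encoding the audio clip into an `audio_content` part. The function name is illustrative, and the exact content schema is an assumption based on the table above.

```python
import base64

def build_audio_request(audio_bytes, question, system_prompt=None, max_tokens=1000):
    """Assemble a request body pairing a short (<= 30 s) audio clip with a question."""
    content = [
        {"type": "audio_content",
         "audio_content": base64.b64encode(audio_bytes).decode("ascii")},
        {"type": "text", "text": question},
    ]
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": content})
    return {
        "model": "qwen2-audio-7b-instruct",
        "messages": messages,
        "response_format": "json",
        "temperature": 0,
        "max_tokens": max_tokens,
    }
```

The resulting dictionary would be serialized as JSON and posted to the OpenAI-compatible chat endpoint, optionally with `stream` and `stream_options` added for streaming responses.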