Playground

The Playground lets you interact with your live endpoints and compare and evaluate models using your prompts.

The Playground landing screen is shown in Figure 1. The Playground interactive screen, with two model response panes, is shown in Figure 2.

  • A live endpoint is required to use the Playground.

  • You can access the Playground from the left menu or by clicking Try It from a generative endpoint window.

Playground landing screen
Figure 1. Playground landing screen
Playground interactive window
Figure 2. Playground interactive window with two model response panes

1. Select a live endpoint to use in the Playground.

2. Select a Composition of Experts (CoE) expert, or filter by supported prompt type.

3. Adjust the Tuning Parameters to maximize the performance and output of the response.

4. Input a prompt for your selected expert or model.

5. Click to access the System Prompt box.

6. Click to add an image to your input, if the model supports it.

7. Add a model response pane. Compare responses to your prompt by adding up to six panes.

8. Click to mirror your prompt input across multiple model response panes.

9. Submit your prompt input to generate a response from the selected expert(s) or model(s).

10. Click to access the View Code box.

11. Clear the prompt(s) and response(s) from the dialog box.

12. Click to remove the corresponding model response pane.

13. Click to download the results of your inputs in JSON file format.

Select a CoE expert

Composition of Experts (CoE) endpoints provide a list of specialized model experts to choose for your task. Additionally, you can choose one of the Samba-1 routers for your task.

  • Select a model expert for your CoE endpoint from the drop-down.

  • Select a Samba-1 router from the drop-down.

  • Enter a name to quickly locate experts.

  • Filter by Chat or Single Turn prompt types.

    • Chat provides an initial response to your prompt and allows ongoing iterative exchanges. Prompts are kept within the context of your conversation, so the Playground can understand your follow-on prompts without requiring you to restate preceding information.

      A chat icon next to the model name indicates that it supports chat functionality.

    • Single Turn provides quick, complete statement responses to a prompt. Single Turn is straightforward and does not require additional context or clarification. Unlike Chat, follow-on prompts are not kept within the context of previous prompts.

Select drop-down
Figure 3. Select a CoE expert drop-down

Tuning Parameters

The Playground Tuning Parameters provide additional flexibility and options. Adjusting these parameters allows you to search for the optimal values to maximize the performance and output of the response.

  • Tuning Parameters can be adjusted independently for the selected models in each model response pane of the Playground.

  • Hover over a parameter name to view additional information about it. Click the > (right arrow) to open the Understanding Parameters tuning box to view specific information about your chosen model or expert.

Tuning Parameters
Figure 4. Tuning Parameters

System Prompt

System prompts are unique instruction messages used to steer the behavior of models and their resulting outputs. From the System Prompt box, input your system prompt and click Apply Changes.

  • System prompts can be adjusted independently in each model response pane.

  • For CoE endpoints, the system prompt applies to the selected expert.

  • For non-CoE endpoints, the system prompt applies to the selected endpoint.

  • Edits to the system prompt remain in effect only for your current session. When you log out of the platform, the system prompt will revert to its unedited state.

  • An edited system prompt is denoted by a red dot on the icon (item 5 in Figure 1 and Figure 2).

Example System Prompt box with input
Figure 5. Example System Prompt box with input
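As an illustrative sketch of how a system prompt pairs with a user prompt, the snippet below builds a common chat-style request body. The field and role names ("messages", "system", "user") are assumptions, not the platform's documented schema; the View Code box shows the exact format for your endpoint.

```python
# Illustrative sketch only: a chat-style request body that applies a system
# prompt ahead of the user prompt. Field and role names are assumptions.

def build_chat_payload(system_prompt: str, user_prompt: str) -> dict:
    """Return a chat request body with the system prompt applied first."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]
    }

payload = build_chat_payload(
    "You are a concise assistant. Answer in one sentence.",
    "Please summarize the previous article:",
)
```

Because the system prompt leads the message list, every follow-on user prompt in the session is interpreted under its instructions.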

Prompt guidelines

We recommend using the following guidelines when submitting prompts to the Playground.

Prompt structure

End your prompts with a colon (:), a question mark (?), or another signal that lets the model know it is time to start generating. For example, Please summarize the previous article: (with a colon) is a better prompt than Please summarize the previous article (without a colon). These annotations tend to lead to better generations because they indicate to the model that you’ve finished your question and are expecting an answer.
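The guideline above can be sketched as a small helper that appends a terminator when a prompt lacks one (the helper name and default terminator are illustrative):

```python
# Sketch of the prompt-structure guideline: make sure a prompt ends with a
# colon, question mark, or period before submitting it to the Playground.

def terminate_prompt(prompt: str, terminator: str = ":") -> str:
    """Append a terminator if the prompt does not already end with one."""
    prompt = prompt.rstrip()
    if prompt and prompt[-1] not in ":?.":
        prompt += terminator
    return prompt

print(terminate_prompt("Please summarize the previous article"))
# → Please summarize the previous article:
```

A prompt that already ends with a terminator, such as a question mark, is left unchanged.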

Resubmitting prompts

Ensure that you do not submit an <|endoftext|> token in your prompt. This can happen if you click Submit twice after the model returns its generation.
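A defensive check like the one below can strip stray end-of-text tokens before a prompt is resubmitted (the helper is a sketch, not part of any SDK):

```python
# Sketch of a defensive check before resubmitting a prompt: remove any
# <|endoftext|> tokens left over from an earlier generation.

END_TOKEN = "<|endoftext|>"

def clean_prompt(prompt: str) -> str:
    """Remove stray end-of-text tokens and surrounding whitespace."""
    return prompt.replace(END_TOKEN, "").strip()

print(clean_prompt("Please summarize the previous article:<|endoftext|>"))
# → Please summarize the previous article:
```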

View Code

The View Code box allows you to view and copy the code generated in each model response pane from the current prompt input. You can then make a request programmatically using the copied code.

  • Click the CURL, CLI, or Python SDK tab to view the corresponding code block.

  • Click Copy Code to copy the selected code block to your clipboard.

View code window
Figure 6. View Code
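The snippet below sketches how code copied from the View Code box might be used to make a request with Python's standard library. The URL, authorization scheme, and body fields are placeholders (assumptions); substitute the exact values shown in the View Code box for your endpoint.

```python
# Hedged sketch of making a request programmatically. The endpoint URL,
# header names, and body fields below are placeholders; copy the real values
# from the View Code box for your endpoint.

import json
from urllib import request

def build_request(url: str, api_key: str, prompt: str) -> request.Request:
    """Build a JSON POST request for a generative endpoint (illustrative)."""
    body = json.dumps({"prompt": prompt}).encode()
    return request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
    )

req = build_request(
    "https://example.com/api/v1/predict",  # placeholder URL
    "YOUR_API_KEY",
    "Please summarize the previous article:",
)
# request.urlopen(req) would send the request; it is not executed here.
```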