Query LLM model
Query our LLM model. The input/output format is the same as the OpenAI API.
Supported languages are English, German, Spanish, French, and Italian, with limited capabilities in Portuguese, Polish, Dutch, Romanian, Czech, and Swedish.
By default, the number of requests per minute is rate limited. To learn the current rate limit or to increase it, please contact our support team.
Path parameters
product_id: LLM API product identifier. Use the product listing endpoint to retrieve your product identifier.
Body parameters
application/json
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
logit_bias: UNUSED. Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
logprobs: Whether or not to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the content of message.
max_tokens: Maximum number of tokens to generate.
messages: A list of messages comprising the conversation so far.
model: Name of the model to use.
n: UNUSED. How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n at 1 to minimize costs.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens that resemble those already present in the text so far, encouraging the model to discuss new topics. We recommend values between -1.5 and -0.5; high values combined with a high temperature setting may cause a hallucination loop.
profile_type: Parameter profile to apply according to your usage. Creativity encourages greater diversity in text generation, standard offers well-balanced chatbot output, and strict produces highly predictable generation, suitable for tasks such as translation or text classification labeling.
seed: Random sampling seed.
stop: Up to 4 sequences where the API will stop generating further tokens.
stream: Enable streaming via server-sent events (SSE); a streaming sketch follows the example request below.
temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
top_logprobs: An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
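To illustrate how these parameters fit together, here is a minimal sketch of a request body combining several of them. The values are arbitrary examples rather than recommendations, and temperature is set without top_p, per the note above.
<?php
// Illustrative request body; values are arbitrary examples.
$body = json_encode([
    'model'       => 'mixtral',
    'messages'    => [
        ['role' => 'user', 'content' => 'Summarize this text in one sentence.'],
    ],
    'max_tokens'  => 256,      // cap on generated tokens
    'temperature' => 0.2,      // focused, mostly deterministic output
    'stop'        => ["\n\n"], // up to 4 stop sequences
    'seed'        => 42,       // random sampling seed
]);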
Response body
model: The model used.
id: This attribute is always empty.
object: chat.completion, or chat.completion.chunk in stream mode.
system_fingerprint: This fingerprint represents the backend configuration that the model runs with.
created: The Unix timestamp (in seconds) of when the response was created.
choices: A list of completion choices.
Example request
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$client = new Client();
$headers = [
'Authorization' => 'Bearer YOUR-TOKEN-HERE',
'Content-Type' => 'application/json'
];
$body = '{
"messages": [
{
"content": "Write a letter to your future self",
"role": "user"
}
],
"model": "mixtral"
}';
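// Replace {product_id} with your LLM API product identifier (see Path parameters).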
$request = new Request('POST', 'https://api.infomaniak.com/1/ai/{product_id}/openai/chat/completions', $headers, $body);
$res = $client->sendAsync($request)->wait();
echo $res->getBody();
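When stream is set to true, the server replies with server-sent events whose data payloads are chat.completion.chunk objects. The following is a minimal sketch of reading the stream with Guzzle; it reuses the endpoint and token placeholders above, and the chunk handling is an assumption based on the SSE format rather than a verbatim sample.
<?php
use GuzzleHttp\Client;

$client = new Client();
// 'stream' => true in the Guzzle options keeps the body unbuffered
// so SSE events can be read as they arrive.
$response = $client->request('POST', 'https://api.infomaniak.com/1/ai/{product_id}/openai/chat/completions', [
    'headers' => [
        'Authorization' => 'Bearer YOUR-TOKEN-HERE',
    ],
    'json' => [
        'model' => 'mixtral',
        'messages' => [['role' => 'user', 'content' => 'Write a letter to your future self']],
        'stream' => true,
    ],
    'stream' => true,
]);

$body = $response->getBody();
while (!$body->eof()) {
    // Each SSE event arrives as a "data: {...}" line containing a
    // chat.completion.chunk object with a "delta" fragment.
    echo $body->read(1024);
}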
Example response
application/json
{"model":"mixtral","id":"example","object":"example","system_fingerprint":"example","created":90771,"choices":[{"index":77551,"message":{"role":"assistant","content":"example"},"delta":{"role":"example","content":"example"},"logprobs":{"content":[{"token":"example","bytes":[],"top_logprobs":[{"token":"example","bytes":[]}]}]},"finish_reason":"eos_token"}],"usage":{"input_tokens":94925,"output_tokens":89240,"total_tokens":13299}}