Query LLM model (deprecated)
Deprecated: use /1/ai/{product_id}/openai/chat/completions instead.
Query our LLM model. The input format is the same as the OpenAI API.
Supported languages are English, German, Spanish, French, and Italian, with limited capabilities in Portuguese, Polish, Dutch, Romanian, Czech, and Swedish.
By default, the number of requests per minute is rate limited. To learn the current rate limit or request an increase, please contact our support team.
Path parameters
LLM API product identifier - Use the following endpoint to retrieve your product identifier.
Body Parameters
application/json
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Maximum number of generated tokens.
A list of messages comprising the conversation so far. Roles must alternate user/assistant/user/assistant/... and the list must always end with the user role (see the request-body sketch after this parameter list).
Model name to use. Caution: only mixtral8x7b and mixtral8x22b support this endpoint. Please use the latest endpoint /1/ai/{product_id}/openai/chat/completions to access all new models.
Define parameter profiles according to your usage preferences. Creativity encourages greater diversity in text generation. Standard settings offer a well-balanced chatbot output. Strict settings result in highly predictable generation, suitable for tasks like translation or text classification labeling.
The repetition penalty parameter defaults to 1.0, meaning no penalty. It penalizes tokens according to their frequency in the text, including the input prompt: tokens that have appeared five times or more are penalized more heavily than tokens that have appeared only once. A value of 1 means no penalty, while values greater than 1 discourage repetition. The usual range is 0.7 to 1.3.
Random sampling seed.
Up to 4 sequences where the API will stop generating further tokens.
Enable streaming via server-sent events (SSE); see the streaming sketch after the example request.
System prompt at the beginning of the conversation with the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
The number of highest-probability vocabulary tokens to keep for top-k filtering. A typical value of 50 introduces more diversity into the generated text, while 20 produces more conservative, higher-quality samples.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Truncate input tokens to the given size.
Typical decoding mass (typical_p) influences the distribution of token probabilities during decoding and controls the balance between diversity and coherence in the generated text: the value guides how likely "typical" tokens are to be selected during generation. In typical chatbot conversations, a value of 0.9 is often used.
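As a rough illustration, here is a minimal request body combining the parameters above. This is a sketch, not a definitive schema: messages, model, and top_p are named in this documentation, but the remaining key names (temperature, top_k, typical_p, seed, stop, repetition_penalty) are assumptions inferred from the parameter descriptions and common LLM API conventions, so verify them against the API schema before use.
<?php
// Hedged sketch of a request body. Only "messages", "model", and "top_p"
// are confirmed names in this documentation; the other keys are assumptions.
$body = json_encode([
    'model' => 'mixtral8x7b',             // only mixtral8x7b / mixtral8x22b support this endpoint
    'messages' => [                       // roles alternate and the list ends with "user"
        ['role' => 'user',      'content' => 'Summarize our last exchange.'],
        ['role' => 'assistant', 'content' => 'Sure, which exchange do you mean?'],
        ['role' => 'user',      'content' => 'The one about rate limits.'],
    ],
    'temperature' => 0.7,                 // alter this or top_p, but not both
    'top_k' => 50,                        // 50 = more diverse, 20 = more conservative
    'typical_p' => 0.9,                   // common chatbot value per the description above
    'repetition_penalty' => 1.1,          // >1 discourages repetition; usual range 0.7-1.3
    'seed' => 42,                         // random sampling seed
    'stop' => ["\n\nUser:"],              // up to 4 stop sequences
], JSON_PRETTY_PRINT);

echo $body;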
Response Body
Result of the HTTP request
Represents a completion response returned by the model, based on the provided input.
Example request
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$client = new Client();
$headers = [
    'Authorization' => 'Bearer YOUR-TOKEN-HERE',
    'Content-Type' => 'application/json',
];
$body = '{
    "messages": [
        {
            "content": "Write a letter to your future self",
            "role": "user"
        }
    ]
}';
// Replace {product_id} with your LLM API product identifier.
$request = new Request('POST', 'https://api.infomaniak.com/1/llm/{product_id}', $headers, $body);
$res = $client->sendAsync($request)->wait();
echo $res->getBody();
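If streaming is enabled, the response arrives as server-sent events rather than a single JSON document. Below is a minimal consumption sketch using Guzzle's stream option; note that "stream" as the request-body field name is an assumption inferred from the streaming parameter description above, not a confirmed name.
<?php

use GuzzleHttp\Client;

$client = new Client();
$res = $client->post('https://api.infomaniak.com/1/llm/{product_id}', [
    'headers' => [
        'Authorization' => 'Bearer YOUR-TOKEN-HERE',
        'Content-Type'  => 'application/json',
    ],
    'json' => [
        'messages' => [['role' => 'user', 'content' => 'Write a haiku']],
        'stream'   => true, // assumed field name for the SSE streaming option
    ],
    'stream' => true, // Guzzle option: do not buffer the whole response in memory
]);

$bodyStream = $res->getBody();
while (!$bodyStream->eof()) {
    echo $bodyStream->read(1024); // raw SSE chunks; parse "data:" lines as needed
}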
Example response
application/json
{"result":"success","data":{"model":"mixtral","created":7983,"choices":[{"index":14354,"message":{"role":"assistant","content":"example"},"delta":{"role":"example","content":"example"},"finish_reason":"eos_token"}],"usage":{"input_tokens":97665,"output_tokens":95569,"total_tokens":21918}}}