Query LLM model (deprecated)

v1
Deprecated
POST
/1/llm/{product_id}

Deprecated: use /1/ai/{product_id}/openai/chat/completions instead.
Query our LLM model. The input format is the same as the OpenAI API.
Supported languages are English, German, Spanish, French, and Italian, with limited capabilities in Portuguese, Polish, Dutch, Romanian, Czech, and Swedish.

By default, the number of requests per minute is rate limited. To learn the current rate limit or to increase it, please contact our support team for assistance.
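For migration, the sketch below issues the same request against the replacement endpoint. It assumes the same Bearer-token authentication and an OpenAI-style JSON body; verify the accepted models and payload against that endpoint's own documentation.

<?php
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

// Migration sketch (assumptions: same auth scheme, OpenAI-style body).
// {product_id} is a placeholder for your LLM API product identifier.
$client = new Client();
$headers = [
    'Authorization' => 'Bearer YOUR-TOKEN-HERE',
    'Content-Type' => 'application/json'
];
$body = '{
    "model": "mixtral",
    "messages": [
        {
            "content": "Write a letter to your future self",
            "role": "user"
        }
    ]
}';

$request = new Request('POST', 'https://api.infomaniak.com/1/ai/{product_id}/openai/chat/completions', $headers, $body);
echo $client->sendAsync($request)->wait()->getBody();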

Path parameters

product_id (required, integer)

LLM API product identifier - Use the following endpoint to retrieve your product identifier.

Examples: 10775

Body Parameters

application/json
frequency_penalty (number)

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

Examples: 0.1
max_new_tokens (integer)
Min: 1, Max: 5000

Maximum number of generated tokens.

Examples: 1024
messages (required, array of objects)

A list of messages comprising the conversation so far. Roles must alternate user/assistant/user/assistant/..., and the list must always end with the user role; see the example request body at the end of this parameter list.

model (string)
Possible values: mixtral, mixtral8x22b

Model name to use. Caution: only mixtral8x7b and mixtral8x22b are supported by this endpoint. Please use the latest endpoint /1/ai/{product_id}/openai/chat/completions to access all new models.

Examples: mixtral
profile_type (string)
Possible values: creative, standard, strict

Defines a parameter profile according to your usage preferences. Creative encourages greater diversity in text generation. Standard offers well-balanced chatbot output. Strict results in highly predictable generation, suitable for tasks like translation or text classification labeling.

Examples: standard
repetition_penalty (number)

Defaults to 1.0, which applies no penalty. This parameter penalizes tokens according to their frequency within the text, including the input prompt: tokens that have appeared five times or more receive a heavier penalty than tokens that have appeared only once. A value of 1 means no penalty, while values greater than 1 discourage the repetition of tokens. The usual range is 0.7 to 1.3.

Examples: 1.2
seed (integer)

Random sampling seed.

Examples: 66580
stop (array)

Up to 4 sequences where the API will stop generating further tokens.

stream (boolean)

Enable streaming via server-sent events (SSE). A streaming variant of the example request is sketched below.

system_prompt (string)

System prompt inserted at the beginning of the conversation with the model.

Examples: You are a helpful assistant
temperature (number)

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

Examples: 0.5
top_k (integer)
Min: 1, Max: 100

The number of highest-probability vocabulary tokens to keep for top-k filtering. A typical value of 50 introduces more diversity into the generated text; 20 produces more conservative, higher-quality samples.

Examples: 50
top_p (number)

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Examples: 0.95
truncate (integer)
Min: 1, Max: 30000

Truncate input tokens to the given size.

Examples: 4000
typical_p (number)

Typical decoding mass (typical_p) influences the distribution of token probabilities during decoding, shaping the diversity and coherence of the generated text. Adjusting it controls the balance between the two, guiding how likely individual tokens are to be selected during generation. In typical chatbot conversations, a value of 0.9 is often used.

Examples: null
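Example request body

To illustrate how these parameters combine, here is an illustrative request body (the values are examples, not recommendations). Note how the message roles alternate and the list ends with the user role.

{
    "messages": [
        {"role": "user", "content": "Suggest a title for my travel blog"},
        {"role": "assistant", "content": "Postcards from Everywhere"},
        {"role": "user", "content": "Give me three more options"}
    ],
    "model": "mixtral",
    "profile_type": "standard",
    "max_new_tokens": 1024,
    "temperature": 0.5
}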

Response Body

application/json
result (required, string)
Possible values: success, error, asynchronous

Result of the HTTP request

Examples: success
data (Llm Response)

Represents a completion response returned by the model, based on the provided input.

Example request

<?php
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$client = new Client();
$headers = [
    'Authorization' => 'Bearer YOUR-TOKEN-HERE',
    'Content-Type' => 'application/json'
];

$body = '{
    "messages": [
        {
            "content": "Write a letter to your future self",
            "role": "user"
        }
    ]
}';

$request = new Request('POST', 'https://api.infomaniak.com/1/llm/{product_id}', $headers, $body);
$res = $client->sendAsync($request)->wait();
echo $res->getBody();
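When stream is true, the endpoint responds with server-sent events instead of a single JSON document. The sketch below shows one way to consume the stream with Guzzle; it assumes OpenAI-style SSE framing ("data: {json}" lines terminated by "data: [DONE]"), which should be verified against a live response.

<?php
use GuzzleHttp\Client;

// Streaming sketch (assumption: OpenAI-style SSE framing).
$client = new Client();
$response = $client->post('https://api.infomaniak.com/1/llm/{product_id}', [
    'headers' => [
        'Authorization' => 'Bearer YOUR-TOKEN-HERE',
        'Content-Type' => 'application/json'
    ],
    'json' => [
        'stream' => true,
        'messages' => [
            ['role' => 'user', 'content' => 'Write a letter to your future self']
        ]
    ],
    'stream' => true // tell Guzzle not to buffer the response body
]);

$body = $response->getBody();
$buffer = '';
while (!$body->eof()) {
    $buffer .= $body->read(1024);
    // Process complete lines as they arrive.
    while (($pos = strpos($buffer, "\n")) !== false) {
        $line = rtrim(substr($buffer, 0, $pos));
        $buffer = substr($buffer, $pos + 1);
        if (strncmp($line, 'data: ', 6) !== 0) {
            continue;
        }
        $payload = substr($line, 6);
        if ($payload === '[DONE]') {
            break 2;
        }
        $chunk = json_decode($payload, true);
        // Streamed chunks carry incremental text in choices[0].delta.
        echo $chunk['choices'][0]['delta']['content'] ?? '';
    }
}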

Example response

application/json

{
    "result": "success",
    "data": {
        "model": "mixtral",
        "created": 7983,
        "choices": [
            {
                "index": 14354,
                "message": {
                    "role": "assistant",
                    "content": "example"
                },
                "delta": {
                    "role": "example",
                    "content": "example"
                },
                "finish_reason": "eos_token"
            }
        ],
        "usage": {
            "input_tokens": 97665,
            "output_tokens": 95569,
            "total_tokens": 21918
        }
    }
}
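Building on the example request above, a minimal sketch for reading a non-streaming response: decode the JSON, then take the generated text from choices[0].message.content.

<?php
// Sketch: $res is the response object from the example request above.
$decoded = json_decode($res->getBody()->getContents(), true);

if (($decoded['result'] ?? null) === 'success') {
    echo $decoded['data']['choices'][0]['message']['content'], PHP_EOL;
    // Token accounting is available under data.usage.
    echo 'Total tokens: ', $decoded['data']['usage']['total_tokens'], PHP_EOL;
} else {
    echo 'Request failed: ', json_encode($decoded), PHP_EOL;
}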