Thyris API (1.0.0)

Chat interaction with various AI models

Processes chat requests with the specified model and chat history, optionally using documents and images.

Authorizations:
ApiKeyAuth
Request Body schema: application/json
required
model
required
string <string>

Name of the model to use for the chat (e.g., "thyris1" or "thyris1-turbo").

input
required
string <string>

Input text to send to the model. Cannot exceed the model's maximum input character limit. To find the maximum input character limit either refer to our documentation, or programmatically get the model's metadata from the /api/models endpoint.

chat_history
Array of arrays of strings <= 10 items

Array of previous message exchanges, where each item is a two-element array containing the user message and the assistant reply. Cannot contain more than 10 items.

use_documents_from
string <string>

Name of the document collection to use for RAG if any.

n_documents
integer
Default: 5

Number of documents to retrieve if using a document collection. If the number is too small, the chat model will not have enough context to generate a response. Similarly, if the number is too large, the chat model will have too much context and may reach the maximum context size.

image
string <string>

Base64 encoded image to send to the model. The image should be in the following format: "data:image/<format>;base64,<data>" (e.g., "data:image/jpeg;base64,..."). Image can only be defined with vision models. Allowed image specification (size, format, dimension) depends on the model. To find out the allowed image specification, refer to the model's documentation or programmatically get the model's metadata from the /api/models endpoint.
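As a sketch, the expected data-URI can be built from raw image bytes like this (the media type here is an example; allowed formats depend on the model):

```python
import base64

def to_data_uri(image_bytes: bytes, media_type: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URI in the format the image field expects."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{media_type};base64,{encoded}"
```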

system_prompt
string <string>
Default: "You're an assistant created by ThyrisAI. Please help the user with their questions by providing concise and clear answers."

System prompt to be used for the chat model. This is a special instruction that can guide the model's behavior. The system prompt is not used for the embedding model.

rag_prompt
string

Prompt to be used for the RAG model. This is a special instruction that can guide the model's behavior. The rag_prompt is not used for the embedding model.

When doing RAG (use_documents_from parameter is set), the rag_prompt is used to generate the context for the chat model and is set to the following value by default: "You are given a user query, some textual context and rules, all inside xml tags. You have to answer the query based on the context while respecting the rules.\n\n{{Context}}\n\n\n- If you don't know, just say so.\n- If you are not sure, ask for clarification.\n- Answer in the same language as the user query.\n- If the context appears unreadable or of poor quality, tell the user then answer as best as you can.\n- If the answer is not in the context but you think you know the answer, explain that to the user then answer with your own knowledge.\n- Answer directly and without using xml tags.\n\n\n{{UserInput}}\n"

The '{{Context}}' and '{{UserInput}}' placeholders in the rag_prompt are replaced with the context (contents of the relevant documents) and user query respectively. These placeholders are required in the rag_prompt.

Only to be used when doing RAG (use_documents_from parameter is set).

This prompt is passed as a user message and not as a system message.
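The placeholder substitution can be pictured as a simple string replacement. This is a client-side sketch for intuition, not the server's actual implementation:

```python
def render_rag_prompt(rag_prompt: str, context: str, user_input: str) -> str:
    """Replace the required placeholders with the retrieved context and the user query."""
    if "{{Context}}" not in rag_prompt or "{{UserInput}}" not in rag_prompt:
        raise ValueError("rag_prompt must contain {{Context}} and {{UserInput}}")
    return (rag_prompt
            .replace("{{Context}}", context)
            .replace("{{UserInput}}", user_input))

rendered = render_rag_prompt(
    "Answer the query based on the context.\n{{Context}}\n{{UserInput}}\n",
    context="The sky is blue because of Rayleigh scattering.",
    user_input="Why is the sky blue?",
)
```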

frequency_penalty
number <float> [ -2 .. 2 ]
Default: 0

Frequency penalty to apply to the model's output. This can help reduce repetition in the generated text. The value should be between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence_penalty
number <float> [ -2 .. 2 ]
Default: 0

Presence penalty to apply to the model's output. This can help increase the diversity of the generated text. The value should be between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

temperature
number <float> [ 0 .. 2 ]
Default: 1

Temperature to apply to the model's output. This can help control the randomness of the generated text. The value should be between 0.0 and 2.0. Lower values make the output more deterministic, while higher values make it more random. It is recommended to alter this or top_p but not both.

top_p
number <float> [ 0 .. 1 ]
Default: 1

Top-p sampling to apply to the model's output. This can help control the randomness of the generated text. The value should be between 0.0 and 1.0. Lower values make the output more deterministic, while higher values make it more random. It is recommended to alter this or temperature but not both.

Responses

Request samples

Content type
application/json
{
  "model": "thyris1",
  "input": "Why is the sky blue?",
  "chat_history": [],
  "use_documents_from": "my-collection",
  "n_documents": 5,
  "image": "data:image/jpeg;base64,/9j/4AAQS...KVyj//Z",
  "system_prompt": "You are a helpful assistant.",
  "rag_prompt": "string",
  "frequency_penalty": 0.5,
  "presence_penalty": 0.5,
  "temperature": 0.5,
  "top_p": 0.5
}
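A minimal client-side sketch for assembling the request body, enforcing the documented limits (at most 10 chat_history items, each a [user, assistant] pair) before sending. The POST itself is omitted here since the chat endpoint's path is not shown in this reference:

```python
def build_chat_payload(model, user_input, chat_history=None, **options):
    """Assemble and sanity-check a chat request body before sending it."""
    history = chat_history or []
    if len(history) > 10:
        raise ValueError("chat_history cannot exceed 10 items")
    for pair in history:
        if len(pair) != 2:
            raise ValueError("each chat_history item is a [user, assistant] pair")
    payload = {"model": model, "input": user_input, "chat_history": history}
    payload.update(options)  # e.g. temperature, top_p, use_documents_from
    return payload

payload = build_chat_payload("thyris1", "Why is the sky blue?", temperature=0.5)
```

Send the resulting dictionary as the application/json body, with your API key in the request per the ApiKeyAuth scheme.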

Response samples

Content type
application/json
{
  "model": "thyris1",
  "created_at": "2023-10-15T10:30:00Z",
  "message": "The weather is sunny and warm.",
  "sources": [],
  "usage": {}
}

Get a document collection

Gets a document collection with the specified key.

Authorizations:
ApiKeyAuth
path Parameters
collectionKey
required
string <string>
Example: my-collection

Key of the document collection.

Responses

Response samples

Content type
application/json
{
  "id": "00000000-0000-0000-0000-000000000001",
  "key": "my-collection",
  "model": "thyris1-embedding",
  "description": "A collection of documents about the world."
}

Create a document collection

Creates a document collection with the specified key. A collection is a logical group of documents.

Authorizations:
ApiKeyAuth
path Parameters
collectionKey
required
string <string> <= 32 characters
Example: my-collection

Key of the document collection.

Request Body schema: application/json
required
model
required
string <string>

The model to use for creating embeddings for the collection. Any document added to the collection will be embedded using this model.

description
string <string> <= 256 characters

Description of the collection.

Responses

Request samples

Content type
application/json
{
  "model": "thyris1-embedding",
  "description": "A collection of documents about the world."
}

Response samples

Content type
application/json
{
  "id": "00000000-0000-0000-0000-000000000001"
}
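A hedged sketch of constructing this request with the standard library, without sending it. The base URL, the /api/collections/{collectionKey} route shape, the POST method, and the X-API-Key header name are assumptions not stated in this reference; adjust them to your deployment:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical; use your deployment's URL
body = {
    "model": "thyris1-embedding",
    "description": "A collection of documents about the world.",
}
req = urllib.request.Request(
    url=f"{BASE_URL}/api/collections/my-collection",  # assumed route shape
    data=json.dumps(body).encode("utf-8"),
    method="POST",  # assumed; the HTTP method is not shown in this reference
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "YOUR_API_KEY",  # header name assumed for ApiKeyAuth
    },
)
# urllib.request.urlopen(req) would send the request; omitted here.
```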

Delete a document collection

Deletes a document collection with the specified key.

Authorizations:
ApiKeyAuth
path Parameters
collectionKey
required
string <string>
Example: my-collection

Key of the document collection.

Responses

Response samples

Content type
application/json
{
  "error": "Invalid input parameters."
}

Get all document collections

Gets all document collections.

Authorizations:
ApiKeyAuth

Responses

Response samples

Content type
application/json
{
  "collections": []
}

Add documents to a collection

Adds documents to a document collection.

Authorizations:
ApiKeyAuth
path Parameters
collectionKey
required
string <string> <= 32 characters
Example: my-collection

Key of the document collection.

documentKey
required
string <string> <= 64 characters
Example: my-document

Key of the document.

Request Body schema: application/json
required
description
string <string> <= 256 characters

Description of the document.

content
required
string <string>

Document text. Cannot exceed 1 MiB (a hard limit, irrespective of the model). The effective maximum may be smaller, depending on the maximum document size of the collection's model. To find the maximum document size, refer to the model's documentation or programmatically get the model's metadata from the /api/models endpoint.

chunkSize
integer >= 1

Chunk size (currently in characters) used when splitting the document for tokenization. Defaults to the embedding model's maximum and cannot be larger than it. The resulting chunk count (derived from the document size and chunkSize) cannot exceed the model's maximum chunk count in one request. For best results, it is recommended to choose the chunk size based on the chat model and not just the embedding model: a small chunk size can give the chat model a very fragmented context, while a large chunk size can give it a very broad context and even exceed the chat model's context size. The maximum chunk size depends on the model. To find the maximum chunk size, refer to the model's documentation or programmatically get the model's metadata from the /api/models endpoint.
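A rough client-side sketch of the constraints above: checking the 1 MiB hard limit and estimating how many chunks a document will produce for a given chunkSize. The model-specific maxima are not hard-coded here since they vary per model (fetch them from /api/models):

```python
import math

MAX_DOCUMENT_BYTES = 1024 * 1024  # 1 MiB hard limit, irrespective of the model

def estimate_chunks(content: str, chunk_size: int) -> int:
    """Estimate the number of chunks a document will be split into."""
    if len(content.encode("utf-8")) > MAX_DOCUMENT_BYTES:
        raise ValueError("document exceeds the 1 MiB hard limit")
    if chunk_size < 1:
        raise ValueError("chunkSize must be >= 1")
    # Chunk size is currently measured in characters, so divide character count.
    return math.ceil(len(content) / chunk_size)
```

Compare the estimate against the model's maximum chunk count per request before uploading.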

Responses

Request samples

Content type
application/json
{
  "description": "A document about the world",
  "content": "Hello, world!",
  "chunkSize": 1536
}

Response samples

Content type
application/json
{
  "documentId": "00000000-0000-0000-0000-000000000001",
  "usage": {}
}

Delete a document from a collection

Deletes a document from a document collection.

Authorizations:
ApiKeyAuth
path Parameters
collectionKey
required
string <string>
Example: my-collection

Key of the document collection.

documentKey
required
string <string>
Example: my-document

Key of the document.

Responses

Response samples

Content type
application/json
{
  "error": "Invalid input parameters."
}

Get available models

Gets a list of available models with their parameters.

Authorizations:
ApiKeyAuth

Responses

Response samples

Content type
application/json
{
  "object": "string",
  "data": []
}
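Since several parameters above refer to model metadata (maximum input length, image specification, chunk limits), a sketch of calling this endpoint follows. The /api/models path is confirmed elsewhere in this reference, but the base URL and the X-API-Key header name are assumptions; the request is constructed without being sent:

```python
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical; use your deployment's URL
req = urllib.request.Request(
    url=f"{BASE_URL}/api/models",
    headers={"X-API-Key": "YOUR_API_KEY"},  # header name assumed for ApiKeyAuth
)
# To fetch the metadata:
# import json
# with urllib.request.urlopen(req) as response:
#     models = json.loads(response.read())["data"]
```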