POST /api/v1/collection/template
curl --request POST \
  --url https://app.twilix.io/api/v1/collection/template \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "collection": "<string>",
  "prompt": "<string>",
  "promptFields": [
    {}
  ],
  "fields": [
    {}
  ],
  "conversationID": "<string>",
  "minimumScore": 123,
  "limit": 123,
  "includeModeration": true,
  "stream": true
}'

If you want more flexible control over prompting and outputs as part of your RAG solution, you can do this with our template endpoint; a request sketch follows the list below.

You want to use this endpoint if you are looking for:

  • More fine-grained control over what the model outputs, such as requiring specifically HTML or Markdown.
  • More steerable inputs, where you want to provide an example response before adding references into the prompt.
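As a sketch, here is the same request as the curl example above made from Python with the requests library; the collection name, template text, and TWILIX_API_KEY environment variable are illustrative assumptions:

import os
import requests

# Illustrative template request; the collection name, template text,
# and TWILIX_API_KEY environment variable are assumptions.
response = requests.post(
    "https://app.twilix.io/api/v1/collection/template",
    headers={
        "Authorization": f"Bearer {os.environ['TWILIX_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "collection": "security-reports",
        "prompt": (
            "You are a cybersecurity consultant, can you help provide users "
            "with a clearer understanding of what is happening? Return this "
            "in Markdown with clear headings to separate it out.\n"
            "{reference}\n"
            "Markdown:"
        ),
        "limit": 5,
    },
)
print(response.json())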
collection
string
required

The collection to query.

prompt
string
required

The template that you want to use. This template uses a {reference} magic variable to give you more flexible control over your LLM outputs.

An example template that is looking for just the returned Markdown could be:

You are a cybersecurity consultant, can you help provide users
with a clearer understanding of what is happening? Return
this in Markdown with clear headings to separate it out.
{reference}
Markdown:

On our backend, we replace {reference} with the content of the promptFields that you supply. If none are supplied, all fields in the collection are used.
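For intuition, the substitution behaves roughly like a string replacement over the retrieved documents. This is a simplified sketch only; how the backend actually formats and joins each reference is not documented here, so that logic is an assumption:

# Simplified sketch of how {reference} is filled in; the joining and
# formatting logic is an assumption made for illustration.
template = "Summarise the incident.\n{reference}\nMarkdown:"
retrieved_docs = [
    {"title": "Phishing report", "body": "Credential-harvesting email..."},
    {"title": "Firewall alert", "body": "Blocked outbound traffic..."},
]
prompt_fields = ["title", "body"]  # promptFields from the request

reference_text = "\n\n".join(
    "\n".join(f"{field}: {doc[field]}" for field in prompt_fields if field in doc)
    for doc in retrieved_docs
)
final_prompt = template.replace("{reference}", reference_text)
print(final_prompt)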

promptFields
array

The fields that you want to feed into the prompt template.

fields
array

The fields that you want returned as references. If not specified, all fields are returned as references.

conversationID
string

The conversation ID. This is returned in the response, so you can either reuse the one that has been automatically generated for you or supply your own to keep track of the conversation on your side.
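A sketch of keeping a multi-turn conversation going by passing the ID back; this assumes the response body echoes the conversationID field, and the collection name and prompts are illustrative:

import os
import requests

URL = "https://app.twilix.io/api/v1/collection/template"
HEADERS = {
    "Authorization": f"Bearer {os.environ['TWILIX_API_KEY']}",
    "Content-Type": "application/json",
}

# First request: let the API generate a conversation ID for us.
first = requests.post(URL, headers=HEADERS, json={
    "collection": "security-reports",  # illustrative collection name
    "prompt": "Summarise the latest incident.\n{reference}\nMarkdown:",
}).json()
conversation_id = first["conversationID"]  # assumed to be echoed back

# Follow-up request: supply the same ID to continue the conversation.
follow_up = requests.post(URL, headers=HEADERS, json={
    "collection": "security-reports",
    "prompt": "What remediation steps should we take?\n{reference}\nMarkdown:",
    "conversationID": conversation_id,
})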

minimumScore
float

The minimum rerank score a document must meet to be returned.

limit
integer

The maximum number of documents to return.

includeModeration
boolean

If true, a moderation layer is applied both when the user inputs a query and when the AI produces output, to ensure that the generated content is not harmful or violent.

stream
boolean

Whether or not a streamed response should be returned. See the example below for details.
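A sketch of consuming a streamed response with Python's requests; treating the stream as newline-delimited chunks is an assumption about the wire format:

import os
import requests

# Sketch of reading a streamed response; newline-delimited chunks are
# an assumption about the wire format.
with requests.post(
    "https://app.twilix.io/api/v1/collection/template",
    headers={
        "Authorization": f"Bearer {os.environ['TWILIX_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "collection": "security-reports",  # illustrative collection name
        "prompt": "Summarise the incident.\n{reference}\nMarkdown:",
        "stream": True,
    },
    stream=True,  # tell requests not to buffer the whole body
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)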

Content-Type
string
default: "application/json"
required

Requires a JSON Content-Type header.