AI Provider
The AI Provider module allows you to integrate AI models into your Crowdin App. You can use it to provide translations, suggestions, or any other AI-powered features to your users.
Sample
```js
const crowdinModule = require('@crowdin/app-project-module');

const configuration = {
  baseUrl: 'https://123.ngrok.io',
  clientId: 'clientId',
  clientSecret: 'clientSecret',
  name: 'Sample App',
  identifier: 'sample-app',
  description: 'Sample App description',
  dbFolder: __dirname,
  imagePath: __dirname + '/' + 'logo.png',
  aiProvider: {
    settingsUiModule: {
      fileName: 'setup.html',
      uiPath: __dirname + '/' + 'public'
    },
    getModelsList: ({ client, context }) => {
      return [
        {
          id: 'gpt-3.5-turbo',
          supportsJsonMode: false,
          supportsFunctionCalling: false,
          contextWindowLimit: 4096,
          outputLimit: 4096
        },
        {
          id: 'gpt-4-turbo',
          supportsJsonMode: true,
          supportsFunctionCalling: false,
          contextWindowLimit: 4096,
          outputLimit: 4096
        },
        {
          id: 'gpt-5-power',
          supportsVision: true,
          supportsStreaming: true,
          outputLimit: 4096
        }
      ];
    },
    chatCompletions: ({ messages, model, action, responseFormat, client, context, isStream, sendEvent, tools, toolChoice, req }) => {
      const result = {
        "strings": [
          {
            "id": 3135046,
            "text": "zeile eins",
            "key": null,
            "context": "",
            "maxLength": null,
            "pluralForm": null
          },
          {
            "id": 3135048,
            "text": "zeile zwei",
            "key": null,
            "context": "",
            "maxLength": null,
            "pluralForm": null
          }
        ]
      };

      return [{ content: JSON.stringify(result) }];
    }
  }
};

crowdinModule.createApp(configuration);
```

Configuration
The AI Provider module configuration object has the following properties:
| Parameter | Description | Allowed values | Default value |
|---|---|---|---|
| settingsUiModule | Object with the settings UI module configuration | - | - |
| getModelsList | Function that returns the list of AI models the app supports | - | - |
| chatCompletions | Function that generates a response to a chat completion request | - | - |
getModelsList Function
This function is called when the app is loaded. It should return an array of AI models that the app supports.
Parameters
- `client` - Crowdin API client.
- `context` - Context object.
Return Value
Returns an array of AI models that the app supports. Each model object should have the following properties:

```ts
{
  id: string,
  supportsJsonMode?: boolean,
  supportsFunctionCalling?: boolean,
  supportsStreaming?: boolean,
  supportsVision?: boolean,
  contextWindowLimit?: number,
  outputLimit?: number
}
```

| Parameter | Description | Default value |
|---|---|---|
| id | Model identifier. | - |
| supportsJsonMode | Indicates whether the model supports JSON mode. | false |
| supportsFunctionCalling | Indicates whether the model supports function calling. | false |
| supportsVision | Indicates whether the model supports vision. | false |
| supportsStreaming | Indicates whether the model supports response streaming. | false |
| contextWindowLimit | Maximum number of tokens in the context window. | 4096 |
| outputLimit | Maximum number of tokens in the model response. | 4096 |
Read more about JSON mode: OpenAI, Azure OpenAI.
Read more about function calling: OpenAI, Azure OpenAI, Google Generative AI.
Read more about Vision: OpenAI, Azure OpenAI.
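Putting the table above together, a hypothetical getModelsList implementation that advertises every capability flag might look like this (the model ID and token limits are illustrative, not real provider values):

```javascript
// Hypothetical getModelsList implementation. Report the capability flags and
// limits your AI provider actually supports.
const getModelsList = ({ client, context }) => {
  return [
    {
      id: 'example-model',           // model identifier (illustrative)
      supportsJsonMode: true,        // can be forced to produce valid JSON
      supportsFunctionCalling: true, // can call tools
      supportsVision: false,         // does not accept image inputs
      supportsStreaming: true,       // chunks can be streamed via sendEvent
      contextWindowLimit: 128000,    // max tokens in the context window
      outputLimit: 4096              // max tokens in a single response
    }
  ];
};
```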
chatCompletions Function
This function is called when the app receives a request to generate a response.
Parameters
- `messages` - Array of messages.
- `model` - Model identifier.
- `action` - Action object.
- `responseFormat` - Response format.
- `client` - Crowdin API client.
- `context` - Context object.
- `isStream` - Indicates whether a streamed response is expected.
- `sendEvent` - Callback to handle and send `delta` object chunks. Used for response streaming.
- `tools` - A list of tools the model may call. Uses the OpenAI-compatible format.
- `toolChoice` - Controls which (if any) tool is called by the model.
- `req` - Request object.
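To illustrate how these parameters fit together, here is a minimal non-streaming sketch; the `callModel` helper is hypothetical and stands in for a real provider SDK call (e.g. an OpenAI or Azure OpenAI request):

```javascript
// Hypothetical helper standing in for a real provider call.
const callModel = async ({ model, messages }) => {
  const lastUserMessage = messages.filter((m) => m.role === 'user').pop();
  return `[${model}] ${lastUserMessage ? lastUserMessage.content : ''}`;
};

// Minimal non-streaming chatCompletions: forward the request to the model
// and return the text in the expected [{ content }] shape.
const chatCompletions = async ({ messages, model }) => {
  const content = await callModel({ model, messages });
  return [{ content }];
};
```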
Return Value
Returns an array of strings. Each string should represent a part of the response:

```ts
{
  id: number,
  text: string,
  key: string,
  context: string,
  maxLength: number,
  pluralForm: number
}
```

Response Streaming
To enable response streaming, the AI model must support it, and the `supportsStreaming` property should be set to `true` in the model configuration.
```js
getModelsList: ({ client, context }) => {
  return [
    {
      id: 'gpt-5-power',
      supportsStreaming: true
    }
  ];
}
```

To implement streaming, the chatCompletions method has the following parameters:
- `isStream` - boolean, indicating whether the response should be streamed.
- `sendEvent` - callback function to send a response chunk. It accepts the following parameters:
  - `role` - string (optional), default value is `assistant`.
  - `content` - string, representing a small part of the message.
  - `tool_calls` - array of the tool calls generated by the model (optional).
If you use the `sendEvent` callback, the return value of the chatCompletions method is ignored. Additionally, there is no need to implement different logic based on the `isStream` parameter.
The framework will handle sendEvent calls and internally determine how to send the response to the client based on the request.
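For example, the streaming contract can be sketched without any real provider by pushing hand-made chunks through `sendEvent` (purely illustrative; a real implementation would forward chunks from the provider's streaming API):

```javascript
// Illustrative streaming chatCompletions: splits a canned answer into chunks
// and pushes each one through sendEvent.
const chatCompletions = async ({ messages, sendEvent }) => {
  const answer = 'This is a streamed answer.';
  for (const word of answer.split(' ')) {
    await sendEvent({ role: 'assistant', content: word + ' ' });
  }
  // The return value is ignored when sendEvent is used.
  return [];
};
```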
```js
chatCompletions: async ({ messages, model, sendEvent }) => {
  const openai = new OpenAI({ apiKey: 'OPEN_AI_API_KEY' });

  const completion = await openai.chat.completions.create({
    model,
    messages,
    stream: true
  });

  for await (const chunk of completion) {
    await sendEvent({
      content: chunk?.choices?.[0]?.delta?.content || '',
      tool_calls: chunk?.choices?.[0]?.delta?.tool_calls
    });
  }
}
```

Handle Rate Limit Errors
The getModelsList and chatCompletions methods are monitored for rate limit errors. An error is treated as a rate limit error if its status or code equals 429.
You may use the special RateLimitError object:
```js
const { RateLimitError } = require('@crowdin/app-project-module/out/modules/ai-provider/util');

throw new RateLimitError({
  error: e, // optional, it's good to pass the actual error object if it's available
  message: 'You have reached the rate limit' // optional, you may set your own message
});
```

Or use your own object like this:
```js
throw {
  status: 429,
  message: 'You have reached the rate limit'
};
```

Restricting AI Provider and AI Prompt Provider modules to use the same application
If your application implements both AI Provider and AI Prompt Provider modules and you need to restrict them to work only in pairs, you can use the restrictAiToSameApp configuration property. When this property is enabled, prompts that use the application’s AI Provider can be saved only if they use the same application’s AI Prompt Provider, and vice versa.
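A configuration sketch with this restriction enabled might look like the following; the top-level placement of `restrictAiToSameApp` and the `aiPromptProvider` key name are assumptions here, so check the AI Prompt Provider module reference for the exact shape:

```javascript
const crowdinModule = require('@crowdin/app-project-module');

crowdinModule.createApp({
  // ...other configuration properties...
  restrictAiToSameApp: true, // assumption: top-level flag pairing the two modules
  aiProvider: { /* AI Provider module configuration */ },
  aiPromptProvider: { /* AI Prompt Provider module configuration (key name assumed) */ }
});
```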