Making API Requests
Learn how to make requests to different AI models via Wictz API.
Wictz API provides a unified interface for making requests to various AI models. This section explains the general request structure and provides examples for popular providers.
Request Structure
All API requests are made via POST (unless the underlying provider's original API design specifies otherwise for a particular endpoint) to the following URL pattern:
https://wictz.com/api/v1/{provider}/{endpoint_path}
{provider}: The identifier for the AI provider you wish to use (e.g., openai, anthropic, google).
{endpoint_path}: The path to the specific API endpoint for that provider, mirroring the provider's own API structure (e.g., chat/completions for OpenAI, v1/messages for Anthropic).
The request body, headers (other than Authorization), and parameters should generally match what the underlying provider expects for the chosen endpoint. Wictz API forwards these to the provider after handling authentication and applying any relevant Wictz-specific logic (such as rate limiting or usage tracking).
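For illustration, the same pattern in Python might look like the following minimal sketch. The wictz_post helper is hypothetical (not part of any Wictz SDK); it simply builds the URL from the provider and endpoint path, then forwards the JSON body and any extra provider-specific headers.

import requests

WICTZ_BASE = "https://wictz.com/api/v1"
API_KEY = "YOUR_API_KEY"  # your Wictz API key

def wictz_post(provider, endpoint_path, payload, extra_headers=None):
    """POST a provider-formatted payload to Wictz and return the parsed JSON response."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    if extra_headers:
        headers.update(extra_headers)  # provider-specific headers, e.g. anthropic-version
    url = f"{WICTZ_BASE}/{provider}/{endpoint_path}"
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()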
OpenAI Requests
To make requests to OpenAI models (like GPT-4, GPT-3.5 Turbo), use the standard OpenAI API format.
Example Endpoint: /api/v1/openai/chat/completions
curl -X POST https://wictz.com/api/v1/openai/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant that translates English to French."
      },
      {
        "role": "user",
        "content": "Translate the following English text to French: \"Hello, how are you?\""
      }
    ],
    "max_tokens": 60,
    "temperature": 0.7
  }'
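If you already use the official OpenAI Python SDK, you may be able to point it at Wictz by overriding its base URL. The following is a sketch, assuming the /api/v1/openai prefix is compatible with the SDK's default request paths:

from openai import OpenAI

# Sketch: route the official OpenAI SDK through Wictz by overriding base_url.
client = OpenAI(
    api_key="YOUR_API_KEY",                      # your Wictz API key
    base_url="https://wictz.com/api/v1/openai",  # assumed Wictz OpenAI-compatible base URL
)

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that translates English to French."},
        {"role": "user", "content": "Translate the following English text to French: \"Hello, how are you?\""},
    ],
    max_tokens=60,
    temperature=0.7,
)
print(completion.choices[0].message.content)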
Anthropic Requests
For Anthropic's Claude models, use the standard Anthropic Messages API format.
Example Endpoint: /api/v1/anthropic/v1/messages
curl -X POST https://wictz.com/api/v1/anthropic/v1/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-opus-20240229",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Write a short poem about the beauty of code."
      }
    ]
  }'
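The same request from Python, reusing the hypothetical wictz_post helper sketched in the Request Structure section; note how the provider-specific anthropic-version header is passed through:

# Reusing the hypothetical wictz_post helper from the Request Structure sketch.
reply = wictz_post(
    provider="anthropic",
    endpoint_path="v1/messages",
    payload={
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": "Write a short poem about the beauty of code."}
        ],
    },
    extra_headers={"anthropic-version": "2023-06-01"},  # forwarded to Anthropic unchanged
)
print(reply["content"][0]["text"])  # Anthropic Messages responses return a list of content blocks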
Provider-Specific Headers
Note the anthropic-version header in the Anthropic example. Wictz API passes through most headers to the underlying provider. Always refer to the original provider's documentation for required headers beyond standard ones like Content-Type.
Google Requests
For Google's Gemini models, use the standard Google Generative Language API format.
Example Endpoint: /api/v1/google/v1beta/models/gemini-pro:generateContent
(Note: v1beta and other version specifiers are part of the endpoint_path.)
curl -X POST https://wictz.com/api/v1/google/v1beta/models/gemini-pro:generateContent \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "parts": [
          {
            "text": "Explain the concept of a Large Language Model in one sentence."
          }
        ]
      }
    ]
  }'
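The equivalent Python call, again using the hypothetical wictz_post helper and assuming the standard generateContent response shape (a list of candidates, each with content parts):

# Again reusing the hypothetical wictz_post helper from the Request Structure sketch.
reply = wictz_post(
    provider="google",
    endpoint_path="v1beta/models/gemini-pro:generateContent",
    payload={
        "contents": [
            {"parts": [{"text": "Explain the concept of a Large Language Model in one sentence."}]}
        ]
    },
)
# The Generative Language API returns candidates, each carrying content parts.
print(reply["candidates"][0]["content"]["parts"][0]["text"])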
Response Format
Wictz API returns responses in the original format provided by the underlying AI provider. This ensures maximum compatibility with existing client libraries and code written for those providers.
For example, a successful response from an OpenAI chat completion request via Wictz API will look identical to a direct OpenAI API response:
{
  "id": "chatcmpl-xxxxxxxxxxxxxxxxxxxxxx",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4-xxxxxxxx",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Bonjour, comment ça va ?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 22,
    "completion_tokens": 5,
    "total_tokens": 27
  }
}
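Because the format is passed through unchanged, parsing code written for the provider keeps working. A minimal sketch, assuming the OpenAI-style response above has been loaded into a dict named data (for example, the return value of the hypothetical wictz_post helper):

# data is the parsed JSON response; the fields below match the OpenAI chat completion format.
answer = data["choices"][0]["message"]["content"]  # "Bonjour, comment ça va ?"
total_tokens = data["usage"]["total_tokens"]       # 27
print(answer, total_tokens)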
Always consult the specific provider's API documentation for details on their response structures.