Spafe API Documentation


Free access to OpenAI (and other) models, with no API token required.

Introduction

This page documents how to use the Spafe API.

Endpoints

POST https://api.spafe.ru/api/v1/chat/completions

Description

This endpoint accepts a list of chat messages from the user and returns generated text or images, depending on the selected model.

Request

{
  "messages": [
    {
      "role": "user",
      "content": "Hi"
    }
  ],
  "stream": false,
  "model": "gpt-4o",
  "temperature": 0.5,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "top_p": 1
}
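
The request body mirrors the OpenAI Chat Completions format: messages holds the conversation history, model selects one of the models listed below, stream toggles streaming responses, and temperature, top_p, presence_penalty, and frequency_penalty are the usual sampling controls.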

Response for Text Generation

{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1730888458,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": null
}
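
A minimal sketch of reading the generated text out of this payload, assuming the JSON response has already been parsed into a Python dict named data (as in the Python example further below):

# Extract the assistant's reply from a non-streaming response
reply = data["choices"][0]["message"]["content"]
print(reply)  # "Hello! How can I assist you today?"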

Response for Image Generation

{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1730888745,
  "model": "flux",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "![Generated Image](https://storage.googleapis.com/a1aa/image/image.jpg)"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": null
}
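
For image models the content field is a Markdown image link. A minimal sketch of pulling the URL out of it, again assuming the parsed response dict is named data:

import re

# The content looks like "![Generated Image](https://...)" — grab the URL in parentheses
content = data["choices"][0]["message"]["content"]
match = re.search(r"\((https?://[^)]+)\)", content)
image_url = match.group(1) if match else None
print(image_url)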

Available Models

  • gpt-4o
  • o1-preview
  • flux
  • 01-ai/Yi-34B-Chat
  • glm4-9B-chat
  • blackbox
  • claude-sonnet-3.5
  • gemini-pro
  • llama-3.1-405b
  • llama-3.1-70b
  • llama-3.1-8b
  • gemini-1.5-flash
  • google/codegemma-7b-it
  • google/gemma-1.1-7b-it
  • google/gemma-2-27b-it
  • google/gemma-2-9b-it
  • llama3.1-8b
  • llama3.1-70b
  • meta-llama/Llama-3.2-90B-Vision-Instruct
  • codellama/CodeLlama-34b-Instruct-hf
  • codellama/CodeLlama-70b-Instruct-hf
  • meta-llama/Llama-2-70b-chat-hf
  • meta-llama/Llama-2-7b-chat-hf
  • meta-llama/Meta-Llama-3-8B-Instruct
  • meta-llama/Meta-Llama-3.1-70B-Instruct
  • meta-llama/Meta-Llama-3.1-8B-Instruct
  • bigcode/starcoder2-15b
  • bigcode/starcoder2-15b-instruct-v0.1
  • cognitivecomputations/dolphin-2.6-mixtral-8x7b
  • cognitivecomputations/dolphin-2.9.1-llama-3-70b
  • databricks/dbrx-instruct
  • mattshumer/Reflection-Llama-3.1-70B
  • microsoft/Phi-3-medium-4k-instruct
  • microsoft/WizardLM-2-7B
  • microsoft/WizardLM-2-8x22B
  • mistralai/Mistral-7B-Instruct-v0.1
  • mistralai/Mistral-7B-Instruct-v0.2
  • mistralai/Mistral-7B-Instruct-v0.3
  • mistralai/Mistral-Nemo-Instruct-2407
  • mistralai/Mixtral-8x22B-Instruct-v0.1
  • mistralai/Mixtral-8x22B-v0.1
  • mistralai/Mixtral-8x7B-Instruct-v0.1
  • openchat/openchat-3.6-8b
  • Phind/Phind-CodeLlama-34B-v2
  • Qwen/Qwen2-72B-Instruct
  • Qwen/Qwen2.5-72B-Instruct
  • Sao10K/L3-70B-Euryale-v2.1

Example Requests

JavaScript

const requestData = {
  messages: [
    {
      role: "user",
      content: "Hi"
    }
  ],
  stream: false,
  model: "gpt-4o",
  temperature: 0.5,
  presence_penalty: 0,
  frequency_penalty: 0,
  top_p: 1
};

// Send the request and log the parsed JSON response
fetch('https://api.spafe.ru/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(requestData)
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Request failed:', error));

Python

import requests

url = 'https://api.spafe.ru/api/v1/chat/completions'
request_data = {
    "messages": [
        {
            "role": "user",
            "content": "Hi"
        }
    ],
    "stream": False,
    "model": "gpt-4o",
    "temperature": 0.5,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "top_p": 1
}

# Send the request and print the parsed JSON response
response = requests.post(url, json=request_data)
response.raise_for_status()
data = response.json()
print(data)

Example Streaming Response

data: {"id": "chatcmpl-1234567890", "object": "chat.completion.chunk", "created": 1730889622, "model": "gpt-4o", "choices": [{"index": 0, "delta": {"content": "Hello! How can I assist you today?", "role": "assistant"}, "finish_reason": null}], "usage": null} 

data: {"id": "chatcmpl-1234567890", "object": "chat.completion.chunk", "created": 1730889622, "model": "gpt-4o", "choices": [{"index": 0, "delta": {"content": "", "role": "assistant"}, "finish_reason": "stop"}], "usage": null} 

data: [DONE]
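
A minimal sketch of consuming the stream in Python, assuming the endpoint emits Server-Sent-Events-style "data:" lines terminated by [DONE] as shown above:

import json
import requests

url = "https://api.spafe.ru/api/v1/chat/completions"
request_data = {
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": True,
    "model": "gpt-4o"
}

with requests.post(url, json=request_data, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream marker
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        # Print each content fragment as it arrives
        print(delta.get("content", ""), end="", flush=True)
    print()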