Base URL

All API requests should be made to:
https://api.mysafecache.com

Authentication

All requests require authentication using an API key in the Authorization header:
Authorization: Bearer YOUR_API_KEY

Core Endpoints

MySafeCache provides a simple yet powerful API built around just three core endpoints: checking the cache, storing responses, and retrieving analytics (see the Quick Example below for /check and /store in action).

Request Format

All requests must include:
  • Content-Type: application/json header
  • Valid JSON body (for POST requests)
  • Authorization header with API key
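
For example, a request meeting all three requirements, using Python's requests library (the /check endpoint is the one shown in the Quick Example below; passing json= supplies the JSON body):

import requests

response = requests.post(
    "https://api.mysafecache.com/api/v1/check",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={"messages": [{"role": "user", "content": "What is Docker?"}]},
)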

Response Format

All responses are returned in JSON format with consistent structure:
{
  "status": "success|error",
  "data": {...},
  "message": "Human readable message",
  "timestamp": "2025-01-15T10:30:45Z"
}
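
Client code can branch on the top-level status field; a minimal sketch, assuming the envelope above:

result = response.json()
if result["status"] == "success":
    payload = result["data"]          # endpoint-specific payload
else:
    print(result["message"])          # human-readable error context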

Error Handling

MySafeCache uses standard HTTP status codes:
Status Code   Meaning
200           Success
400           Bad Request - Invalid input
401           Unauthorized - Invalid API key
404           Not Found - Endpoint doesn’t exist
429           Too Many Requests - Rate limit exceeded
500           Internal Server Error
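
A simple client-side dispatch on these codes might look like this (a sketch; the retry and backoff policy is up to you):

import time

def handle(response):
    """Return parsed JSON on success; rate-limited requests return None for retry."""
    if response.status_code == 200:
        return response.json()
    if response.status_code == 429:
        # Rate limited: wait briefly and let the caller retry
        # (the X-RateLimit-Reset header below gives a precise wait time).
        time.sleep(1)
        return None
    # 400 / 401 / 404 / 500: surface the structured error body described next.
    raise RuntimeError(f"Request failed with {response.status_code}: {response.text}")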

Error Response Format

{
  "error": {
    "code": "INVALID_INPUT",
    "message": "The messages array is required",
    "details": {
      "field": "messages",
      "expected": "array",
      "received": "undefined"
    }
  },
  "status": 400,
  "timestamp": "2025-01-15T10:30:45Z"
}
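
The structured fields make programmatic handling straightforward; given a failed response:

error = response.json()["error"]
print(error["code"])      # machine-readable code, e.g. INVALID_INPUT
print(error["message"])   # human-readable summary
if "details" in error:
    print(error["details"]["field"])  # the offending field, when provided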

Rate Limits

API calls are rate limited based on your plan:
Plan         Requests/minute   Burst
Free         60                100
Pro          600               1000
Enterprise   6000              10000
Rate limit headers are included in all responses:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1642348800
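
Clients can read these headers to avoid hitting the limit; X-RateLimit-Reset is a Unix timestamp in seconds. A minimal sketch:

import time

remaining = int(response.headers["X-RateLimit-Remaining"])
reset_at = int(response.headers["X-RateLimit-Reset"])
if remaining == 0:
    # Sleep until the rate-limit window resets before sending the next request.
    time.sleep(max(0, reset_at - time.time()))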

Message Format

MySafeCache uses the OpenAI message format for consistency:
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user", 
      "content": "What is artificial intelligence?"
    },
    {
      "role": "assistant",
      "content": "Artificial intelligence (AI) refers to..."
    }
  ]
}

Supported Roles

  • system: Sets the behavior of the assistant
  • user: Messages from the user
  • assistant: Previous responses from the AI
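
In Python, the same structure is a plain list of dicts built in conversation order:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},    # behavior
    {"role": "user", "content": "What is artificial intelligence?"},  # user turn
]
# Prior assistant replies can be appended as additional conversation context.
messages.append({"role": "assistant", "content": "Artificial intelligence (AI) refers to..."})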

Cache Types

MySafeCache returns different cache types based on how the match was found:
Cache Type   Description               Response Time
exact        Perfect hash match        1-5ms
semantic     Vector similarity match   5-20ms
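
If you need to distinguish the two in client code, branch on the cache type reported in the check response (the field name cache_type is an assumption here, for illustration only):

result = check_response.json()
if result.get("cache_hit"):
    # "exact" hits come from a hash lookup; "semantic" hits from vector similarity.
    if result.get("cache_type") == "semantic":
        print("Answer came from a semantically similar cached prompt")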

Pagination

For endpoints that return lists (like analytics), pagination is supported:
{
  "data": [...],
  "pagination": {
    "page": 1,
    "per_page": 20,
    "total": 150,
    "total_pages": 8
  }
}
Add pagination parameters to your requests:
  • page: Page number (default: 1)
  • per_page: Items per page (default: 20, max: 100)
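
A typical loop walks every page until total_pages is reached (a sketch reusing base_url and headers from the Quick Example; the analytics path is illustrative):

import requests

page = 1
while True:
    resp = requests.get(
        f"{base_url}/analytics",  # illustrative list endpoint
        headers=headers,
        params={"page": page, "per_page": 100},
    )
    body = resp.json()
    for item in body["data"]:
        ...  # process each item
    if page >= body["pagination"]["total_pages"]:
        break
    page += 1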

Timestamps

All timestamps are returned in ISO 8601 format with UTC timezone:
2025-01-15T10:30:45.123Z
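
In Python these parse directly with the standard library (Python 3.11+ accepts the trailing Z; older versions need the replace shown):

from datetime import datetime

ts = datetime.fromisoformat("2025-01-15T10:30:45.123Z".replace("Z", "+00:00"))
print(ts.year, ts.tzinfo)  # 2025 UTC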

Testing

Use our test endpoint to verify your integration:
curl -X GET https://api.mysafecache.com/api/v1/health \
  -H "Authorization: Bearer YOUR_API_KEY"

SDK Support

Official SDKs are available for popular languages:

Python

pip install mysafecache

JavaScript/Node.js

npm install mysafecache

Go

go get github.com/mysafecache/go-sdk

PHP

composer require mysafecache/php-sdk

OpenAPI Specification

A complete, machine-readable OpenAPI specification is available for download.

Quick Example

Here’s a complete example showing the basic API flow:
import requests

api_key = "your-api-key"
base_url = "https://api.mysafecache.com/api/v1"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

# 1. Check cache
check_response = requests.post(
    f"{base_url}/check",
    headers=headers,
    json={
        "messages": [
            {"role": "user", "content": "What is Docker?"}
        ]
    }
)

result = check_response.json()

if result["cache_hit"]:
    print(f"Cache hit! Answer: {result['answer']}")
else:
    # 2. Call your LLM (not shown)
    llm_answer = "Docker is a containerization platform..."
    
    # 3. Store the response
    store_response = requests.post(
        f"{base_url}/store",
        headers=headers,
        json={
            "messages": [
                {"role": "user", "content": "What is Docker?"}
            ],
            "answer": llm_answer,
            "model": "gpt-4",
            "tokens_used": 150
        }
    )
    
    print(f"Stored: {store_response.json()['stored']}")

Next Steps