## Overview

This quickstart guide will get you up and running with MySafeCache in just a few minutes. You’ll learn how to:

- Set up your MySafeCache account
- Make your first cache check
- Store your first response
- Monitor your usage
## Prerequisites

Before you begin, you’ll need:

- A MySafeCache account (sign up at mysafecache.com)
- An API key from your dashboard
- Basic knowledge of REST APIs
## Step 1: Get Your API Key

1. Sign up for a MySafeCache account at mysafecache.com
2. Navigate to your dashboard
3. Go to the “API Keys” tab
4. Click “Generate New Key”
5. Copy your API key (keep it secure!)

Keep your API key secure and never commit it to version control. Use environment variables or secure credential management.
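For example, you might load the key from an environment variable in Python. This is a minimal sketch; the variable name `MYSAFECACHE_API_KEY` is just a convention used in this guide, not something the service requires:

```python
import os

# Read the API key from the environment instead of hard-coding it.
# MYSAFECACHE_API_KEY is an example name chosen for this guide.
api_key = os.environ.get("MYSAFECACHE_API_KEY")
if not api_key:
    raise RuntimeError("Set the MYSAFECACHE_API_KEY environment variable first")
```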
## Step 2: Check Cache

First, let’s check if we have a cached response for a query:

```bash
curl -X POST https://api.mysafecache.com/api/v1/check \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is Docker?"
      }
    ]
  }'
```
Cache Hit Response:

```json
{
  "cache_hit": true,
  "answer": "Docker is a containerization platform...",
  "cache_type": "exact",
  "lookup_time_ms": 2.5,
  "tokens_saved": 150,
  "created_at": "2025-01-15T10:30:45",
  "similarity_score": null
}
```
Cache Miss Response:

```json
{
  "cache_hit": false,
  "message": "No cached response found",
  "prompt_hash": "abc123def456",
  "lookup_time_ms": 8.2,
  "suggested_action": "Call your LLM and use /store endpoint to cache the response"
}
```
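If you’re calling the API from Python, the same check looks roughly like this. This is a minimal sketch using the `requests` library; the `MYSAFECACHE_API_KEY` variable name is just the example convention from Step 1, and error handling is omitted:

```python
import os

import requests

API_KEY = os.environ["MYSAFECACHE_API_KEY"]  # example variable name
messages = [{"role": "user", "content": "What is Docker?"}]

resp = requests.post(
    "https://api.mysafecache.com/api/v1/check",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={"messages": messages},
)
result = resp.json()

if result["cache_hit"]:
    print(result["answer"])            # reuse the cached answer
else:
    print(result["suggested_action"])  # call your LLM, then POST to /store
```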
## Step 3: Store Response

When you get a cache miss, call your LLM provider and then store the response:

```bash
curl -X POST https://api.mysafecache.com/api/v1/store \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What is Docker?"
      }
    ],
    "answer": "Docker is a containerization platform that allows you to package applications and their dependencies into lightweight, portable containers...",
    "model": "gpt-4",
    "tokens_used": 150
  }'
```
Response:

```json
{
  "stored": true,
  "message": "Response cached successfully",
  "cache_types_stored": ["exact", "semantic"],
  "storage_time_ms": 15.3
}
```
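The equivalent store call from Python mirrors the curl request above. This is a sketch only; in practice `answer` and `tokens_used` come from your LLM call rather than being hard-coded:

```python
import os

import requests

API_KEY = os.environ["MYSAFECACHE_API_KEY"]  # example variable name

store_resp = requests.post(
    "https://api.mysafecache.com/api/v1/store",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={
        "messages": [{"role": "user", "content": "What is Docker?"}],
        "answer": "Docker is a containerization platform...",  # from your LLM
        "model": "gpt-4",
        "tokens_used": 150,
    },
)
print(store_resp.json()["stored"])  # True if the response was cached
```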
## Step 4: Complete Integration Example

Here’s a complete Python example showing the full flow:
```python
import requests
from openai import OpenAI  # or your preferred LLM client


class MySafeCacheClient:
    def __init__(self, api_key, base_url="https://api.mysafecache.com/api/v1"):
        self.api_key = api_key
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
        self.llm = OpenAI()  # reads OPENAI_API_KEY from the environment

    def get_response(self, messages):
        # 1. Check cache first
        check_response = requests.post(
            f"{self.base_url}/check",
            headers=self.headers,
            json={"messages": messages},
        )
        result = check_response.json()

        if result["cache_hit"]:
            print(f"✅ Cache hit! ({result['cache_type']}, {result['lookup_time_ms']:.1f} ms)")
            print(f"💰 Saved {result['tokens_saved']} tokens")
            return result["answer"]

        # 2. Cache miss - call LLM
        print("❌ Cache miss - calling LLM...")
        # Replace with your LLM call
        llm_response = self.llm.chat.completions.create(
            model="gpt-4",
            messages=messages,
        )
        answer = llm_response.choices[0].message.content
        tokens_used = llm_response.usage.total_tokens

        # 3. Store the response for future requests
        store_response = requests.post(
            f"{self.base_url}/store",
            headers=self.headers,
            json={
                "messages": messages,
                "answer": answer,
                "model": "gpt-4",
                "tokens_used": tokens_used,
            },
        )
        if store_response.json()["stored"]:
            print("💾 Response cached for future use")

        return answer


# Usage
client = MySafeCacheClient("your-api-key")
messages = [{"role": "user", "content": "What is Docker?"}]
response = client.get_response(messages)
print(f"Answer: {response}")
```
## Step 5: Monitor Usage

Track your cache performance and cost savings:

```bash
curl -X GET https://api.mysafecache.com/api/v1/usage \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Response:

```json
{
  "total_requests": 1250,
  "cache_hits": 875,
  "cache_misses": 375,
  "hit_rate_percentage": 70.0,
  "exact_hits": 650,
  "semantic_hits": 225,
  "average_lookup_time_ms": 5.2,
  "performance": {
    "fast_responses": 650,
    "smart_responses": 225,
    "estimated_cost_savings": "$45.20"
  }
}
```
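To keep an eye on your hit rate programmatically, you can poll the same endpoint from Python. This is a small sketch; the field names match the sample response above, and `MYSAFECACHE_API_KEY` is the example variable name from Step 1:

```python
import os

import requests

API_KEY = os.environ["MYSAFECACHE_API_KEY"]  # example variable name

usage = requests.get(
    "https://api.mysafecache.com/api/v1/usage",
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

print(f"Hit rate: {usage['hit_rate_percentage']:.1f}%")
print(f"Avg lookup: {usage['average_lookup_time_ms']} ms")
print(f"Estimated savings: {usage['performance']['estimated_cost_savings']}")
```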
## Next Steps

Now that you have MySafeCache working, explore these advanced features:

## Need Help?