🤖 Cappy Local AI

✅ Server is running successfully!

Your local Ollama (cappycare) API is now accessible through ngrok.

Available Endpoints:

GET /

This page - API status and documentation

POST /chat

Chat with your local Ollama (cappycare model)

{
  "message": "Hello, how are you?",
  "model": "cappycare",
  "system_prompt": "You are a helpful AI assistant."
}

POST /chat (OpenAI-style format)

Chat using the OpenAI-compatible messages array format; the default Cappycare system prompt is applied automatically

{
  "model": "cappycare",
  "messages": [
    {
      "role": "user",
      "content": "Hi there."
    },
    {
      "role": "assistant",
      "content": "Hi! How can I assist you today?"
    },
    {
      "role": "user",
      "content": "My elderly mother needs help with daily activities in Singapore"
    }
  ],
  "n_predict": 150,
  "temperature": 0.7,
  "stream": false
}

POST /chat (with custom system prompt)

Override the default Cappycare system prompt

{
  "model": "cappycare",
  "messages": [
    {
      "role": "system",
      "content": "You are a medical assistant specializing in dementia care."
    },
    {
      "role": "user",
      "content": "My father has early-stage dementia. What should I know?"
    }
  ],
  "n_predict": 200,
  "temperature": 0.7,
  "stream": false
}

POST /chat (legacy format)

Chat with conversation history for backward compatibility

{
  "message": "What did I ask you about earlier?",
  "model": "cappycare:latest",
  "conversation_history": [
    {
      "role": "user",
      "content": "My mother needs help with daily activities"
    },
    {
      "role": "assistant", 
      "content": "I understand you're looking for help with your mother's daily activities..."
    }
  ]
}

POST /chat (with streaming)

Chat with streaming support for real-time responses

{
  "model": "cappycare",
  "messages": [
    {
      "role": "user",
      "content": "Tell me a long story about Vitamin C"
    }
  ],
  "n_predict": 1000,
  "temperature": 0.7,
  "stream": true
}

POST /chat/stream

Dedicated streaming endpoint for real-time chat responses

{
  "model": "cappycare",
  "messages": [
    {
      "role": "user",
      "content": "Tell me a long story about Vitamin C"
    }
  ],
  "n_predict": 1000,
  "temperature": 0.7
}
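With "stream": true (or via /chat/stream), the response arrives as a sequence of chunks rather than one JSON object. The sketch below assembles the full reply from newline-delimited JSON chunks; it assumes each chunk carries a "content" field with a text fragment (Ollama-style streaming), so adjust the key if the actual chunk schema differs.

```python
import json

def join_stream_chunks(lines):
    """Assemble a full reply from newline-delimited JSON stream chunks.

    Assumes each chunk carries a "content" field holding a text fragment,
    similar to Ollama's streaming format; adjust the key to match the
    server's actual chunk schema.
    """
    parts = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("content", ""))
    return "".join(parts)

# Example with mock chunks (a real client would iterate over the HTTP
# response body line by line instead of a hard-coded list):
sample = [
    '{"content": "Vitamin C is "}',
    '{"content": "an essential nutrient."}',
]
print(join_stream_chunks(sample))
```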

POST /analyze-image

General image analysis using LLaVA-1.5 13B

{
  "image": "base64_encoded_image_data",
  "message": "Please analyze this image",
  "model": "llava:13b",
  "system_prompt": "Custom system prompt (optional)"
}
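The "image" field expects the raw image bytes encoded as base64 text. A minimal sketch of building this request body from a file on disk (field names mirror the example above; "system_prompt" can be added optionally):

```python
import base64
import json

def build_image_payload(image_bytes, message="Please analyze this image"):
    """Build the /analyze-image request body from raw image bytes.

    Base64-encodes the bytes into the "image" field; the other fields
    follow the documented example.
    """
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "message": message,
        "model": "llava:13b",
    }

# Usage: read an image file and serialize the payload for an HTTP POST.
# with open("meal.jpg", "rb") as f:
#     body = json.dumps(build_image_payload(f.read()))
```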

POST /nutrition-analysis

Specialized nutrition analysis with structured JSON response

{
  "image": "base64_encoded_food_image"
}

Response Format (calories as an estimated kcal range; protein, carbs, fat, and fiber as estimated gram ranges):

{
  "id": "nutrition_20250927160000",
  "object": "nutrition.analysis",
  "created": 1758959000,
  "model": "llava:13b",
  "meal_description": "A plate of chicken rice with steamed chicken breast, fragrant rice, cucumber slices, and light soy sauce",
  "macronutrients": {
    "calories": "450-500",
    "protein": "35-40",
    "carbs": "45-50",
    "fat": "8-12",
    "fiber": "2-3"
  },
  "health_score": 7,
  "overall_assessment": "Good protein source with moderate carbs, suitable for elderly but could benefit from more vegetables",
  "insights": [
    "Excellent protein content for muscle maintenance in elderly",
    "Moderate sodium content - consider reducing soy sauce for heart health",
    "Low fiber content - adding vegetables would improve nutritional value",
    "Good choice for elderly with diabetes due to moderate carbohydrate content"
  ],
  "image_info": {
    "size": "1024x768",
    "format": "jpeg"
  },
  "analysis_type": "structured_nutrition",
  "target_audience": "elderly_care"
}
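Because the macronutrient fields come back as range strings (e.g. "450-500") rather than numbers, client code that logs or charts the results needs to parse them. A small helper that converts a range to its midpoint, shown under the assumption that ranges always use a plain hyphen:

```python
def range_midpoint(value):
    """Convert a range string like "450-500" to its numeric midpoint.

    Single values (e.g. "7") are returned as-is as floats. Assumes
    ranges are hyphen-separated, as in the documented response.
    """
    low, _, high = value.partition("-")
    if high:
        return (float(low) + float(high)) / 2
    return float(low)

macros = {"calories": "450-500", "protein": "35-40", "fiber": "2-3"}
midpoints = {k: range_midpoint(v) for k, v in macros.items()}
print(midpoints)  # e.g. calories -> 475.0
```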

GET /health

Health check endpoint

GET /models

List available models

Test Your API:

You can test the API using curl:

curl -X POST http://llm.cappycare.com/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "cappycare"}'
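The same request can be sent from Python using only the standard library. A minimal sketch (the hostname and model name are taken from the examples above; the network call itself needs the server to be reachable):

```python
import json
import urllib.request

def build_chat_body(message, model="cappycare"):
    """Serialize the simple-format /chat request body."""
    return json.dumps({"message": message, "model": model}).encode("utf-8")

def chat(message, base_url="http://llm.cappycare.com"):
    """POST a chat request and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=build_chat_body(message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # requires a live server
        return json.loads(resp.read())

# chat("Hello!")  # returns the model's reply as a dict
```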