feat: Implement comprehensive RESTful API for LLM Gateway

Adds 21 new API endpoints in 4 phases:

Phase 1 - Foundation (Providers & Models):
- GET /api/providers - List all providers
- GET /api/providers/{provider} - Provider details
- GET /api/models - List all models with filtering/sorting
- GET /api/models/{provider}/{model} - Model details

Phase 2 - Core Features (Credentials, Budget, Pricing):
- GET/POST/PUT/DELETE /api/credentials - Credential management
- POST /api/credentials/{id}/test - Connection testing
- GET /api/budget - Budget status with projections
- GET /api/budget/history - Budget history
- GET /api/pricing - Model pricing list
- GET /api/pricing/calculator - Cost calculator
- GET /api/pricing/compare - Price comparison
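The cost arithmetic behind a pricing calculator like this can be sketched as simple per-million-token math. The rates below ($1.00 / $3.00 per million prompt/completion tokens) are illustrative values inferred from the example response in the diff, not official provider prices:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_m: float, completion_price_per_m: float) -> float:
    """Estimate request cost in USD from token counts and
    per-million-token prices (sketch, not the gateway's actual code)."""
    return (prompt_tokens * prompt_price_per_m
            + completion_tokens * completion_price_per_m) / 1_000_000
```

With the example response's usage (23 prompt / 18 completion tokens), this reproduces its `total_cost` of 0.000077.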

Phase 3 - Analytics (Usage Statistics):
- GET /api/usage/summary - Comprehensive statistics
- GET /api/usage/requests - Request history with pagination
- GET /api/usage/requests/{id} - Request details
- GET /api/usage/charts - Chart data (4 types)
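A minimal sketch of the page/offset arithmetic behind paginated request history (assuming conventional 1-based `page` and `per_page` parameters; the actual parameter names are not shown in this commit):

```python
def page_bounds(page: int, per_page: int, total: int) -> tuple[int, int]:
    """Zero-based slice bounds for one page of a paginated collection.

    page is 1-based; the last page may be shorter than per_page.
    """
    start = (page - 1) * per_page
    end = min(start + per_page, total)
    return start, end
```
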

Phase 4 - Account (Account Info & Activity):
- GET /api/account - User information
- GET /api/account/activity - Activity log

Features:
- Complete Scramble/Swagger documentation
- Consistent error handling
- API key authentication
- Filtering, sorting, pagination
- Budget tracking with alerts
- Provider breakdown
- Performance metrics
- Chart-ready data
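"Budget tracking with alerts" and "budget status with projections" suggest logic along these lines — a hedged sketch under the assumption of a simple linear month-end projection and a fixed alert threshold, neither of which is confirmed by this commit:

```python
import calendar
from datetime import date

def project_month_end_spend(spent_so_far: float, today: date) -> float:
    """Linearly project month-end spend from spend to date (sketch)."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spent_so_far / today.day * days_in_month

def budget_alert(spent: float, limit: float, threshold: float = 0.8) -> bool:
    """Fire an alert once spend crosses a fraction of the budget limit.

    The 80% default threshold is an illustrative assumption.
    """
    return spent >= threshold * limit
```
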

Controllers created:
- ProviderController
- ModelController
- CredentialController
- BudgetController
- PricingController
- UsageController
- AccountController

Documentation:
- API_KONZEPT.md - Complete API concept
- API_IMPLEMENTATION_STATUS.txt - Implementation tracking
- API_IMPLEMENTATION_SUMMARY.md - Summary and workflows
Author: wtrinkl
Date:   2025-11-19 12:33:11 +01:00
Parent: c149bdbdde
Commit: b6d75d51e3
12 changed files with 4556 additions and 11 deletions


@@ -18,13 +18,103 @@ class ChatCompletionController extends Controller
     /**
      * Create a chat completion
      *
-     * Accepts OpenAI-compatible chat completion requests and routes them to the appropriate
-     * LLM provider (OpenAI, Anthropic, DeepSeek, Google Gemini, or Mistral AI).
+     * Send messages to an LLM provider and receive completions. This endpoint accepts OpenAI-compatible
+     * requests and routes them to the appropriate LLM provider (OpenAI, Anthropic, DeepSeek, Google Gemini,
+     * or Mistral AI).
      *
-     * The request uses the authenticated user's API keys for the specified provider.
-     * Cost tracking, budget checking, and rate limiting are applied automatically.
+     * The request uses the authenticated gateway user's provider credentials (API keys). Cost tracking,
+     * budget checking, and rate limiting are applied automatically based on the gateway user's configuration.
      *
-     * Returns an OpenAI-compatible response with usage statistics and cost information.
+     * ## Authentication
+     *
+     * Requires a valid API key in the Authorization header:
+     * ```
+     * Authorization: Bearer llmg_your_api_key_here
+     * ```
+     *
+     * ## Supported Providers
+     *
+     * - `openai` - OpenAI models (GPT-4, GPT-3.5-turbo, etc.)
+     * - `anthropic` - Anthropic Claude models (Claude 4, Claude 3.5 Sonnet, etc.)
+     * - `google` - Google Gemini models (Gemini Pro, Gemini Flash, etc.)
+     * - `deepseek` - DeepSeek models (DeepSeek Chat, DeepSeek Coder)
+     * - `mistral` - Mistral AI models (Mistral Large, Mistral Medium, etc.)
+     *
+     * ## Example Request
+     *
+     * ```json
+     * {
+     *   "provider": "openai",
+     *   "model": "gpt-4o-mini",
+     *   "messages": [
+     *     {
+     *       "role": "system",
+     *       "content": "You are a helpful assistant."
+     *     },
+     *     {
+     *       "role": "user",
+     *       "content": "Hello, how are you?"
+     *     }
+     *   ],
+     *   "temperature": 0.7,
+     *   "max_tokens": 150
+     * }
+     * ```
+     *
+     * ## Example Response
+     *
+     * ```json
+     * {
+     *   "success": true,
+     *   "request_id": "req_abc123xyz",
+     *   "provider": "openai",
+     *   "model": "gpt-4o-mini",
+     *   "content": "Hello! I'm doing well, thank you for asking. How can I help you today?",
+     *   "role": "assistant",
+     *   "finish_reason": "stop",
+     *   "usage": {
+     *     "prompt_tokens": 23,
+     *     "completion_tokens": 18,
+     *     "total_tokens": 41
+     *   },
+     *   "cost": {
+     *     "prompt_cost": 0.000023,
+     *     "completion_cost": 0.000054,
+     *     "total_cost": 0.000077
+     *   },
+     *   "response_time_ms": 1245
+     * }
+     * ```
+     *
+     * ## Error Responses
+     *
+     * ### Budget Exceeded (402)
+     * ```json
+     * {
+     *   "success": false,
+     *   "error": "budget_exceeded",
+     *   "message": "Monthly budget limit exceeded"
+     * }
+     * ```
+     *
+     * ### Rate Limit Exceeded (429)
+     * ```json
+     * {
+     *   "success": false,
+     *   "error": "rate_limit_exceeded",
+     *   "message": "Rate limit exceeded. Please try again later.",
+     *   "retry_after": 3600
+     * }
+     * ```
+     *
+     * ### Provider Error (400-500)
+     * ```json
+     * {
+     *   "success": false,
+     *   "error": "provider_error",
+     *   "message": "Invalid API key for provider"
+     * }
+     * ```
+     *
      * @tags Chat
      *
@@ -65,7 +155,7 @@ class ChatCompletionController extends Controller
         } catch (ProviderException $e) {
             Log::error('Provider error in chat completion', [
-                'user_id' => $request->user()->id,
+                'user_id' => $request->user()->user_id,
                 'provider' => $request->input('provider'),
                 'error' => $e->getMessage(),
             ]);
@@ -78,7 +168,7 @@ class ChatCompletionController extends Controller
         } catch (\Exception $e) {
             Log::error('Unexpected error in chat completion', [
-                'user_id' => $request->user()->id,
+                'user_id' => $request->user()->user_id,
                 'error' => $e->getMessage(),
                 'trace' => $e->getTraceAsString(),
             ]);
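The documented error shapes above (`budget_exceeded`, `rate_limit_exceeded` with `retry_after`, `provider_error`) lend themselves to a simple client-side retry decision. A hedged sketch, using only fields shown in the docblock; the retry policy itself is an assumption, not part of this commit:

```python
import time

def handle_error(payload: dict, sleep=time.sleep) -> bool:
    """Decide whether a failed gateway request should be retried.

    Returns True after sleeping for retry_after on a 429-style error;
    budget and provider errors are treated as non-retryable here.
    """
    if payload.get("success", True):
        return False  # not an error payload
    if payload.get("error") == "rate_limit_exceeded":
        sleep(payload.get("retry_after", 60))
        return True
    # budget_exceeded: retrying will not help until the budget resets;
    # provider_error: usually a configuration problem (e.g. bad API key)
    return False
```
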