
Show HN: Open-source proxy server for Llama2, GPT-4, Claude2 with Logging, Cache https://ift.tt/OGflQ0P

Hello Hacker News, I'm the maintainer of liteLLM, a package that simplifies input/output to the OpenAI, Azure, Cohere, Anthropic, and Hugging Face API endpoints: https://ift.tt/L12DkH9

We're open-sourcing our implementation of the liteLLM proxy: https://ift.tt/oZ1yQl4...

TL;DR: It has one API endpoint, /chat/completions, standardizes input/output for 50+ LLM models, and handles logging, error tracking, caching, and streaming.

What can the liteLLM proxy do?

- It's a central place to manage all LLM provider integrations.
- Consistent input/output format: call all models using the OpenAI format, completion(model, messages); text responses are always available at ['choices'][0]['message']['content'].
- Error handling using model fallbacks (if GPT-4 fails, try llama2).
- Logging: log requests, responses, and errors to Supabase, Posthog, Mixpanel, Sentry, Helicone.
- Token usage & spend: track input + completion tokens used and spend per model.
- Caching: an implementation of semantic caching.
- Streaming & async support: return generators to stream text responses.

You can deploy liteLLM to your own infrastructure using Railway, GCP, AWS, or Azure.

Happy completion()!

https://ift.tt/3RjK0ZE
August 12, 2023 at 05:38AM
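Since the proxy exposes a single OpenAI-format /chat/completions endpoint, calling it from any HTTP client should look roughly like the sketch below. This is a minimal illustration, not the project's documented quickstart: the host, port, and model name are assumptions, so adjust them for your own deployment.

```python
import requests

# Sketch of a request to the proxy's /chat/completions endpoint.
# Assumption: the proxy is running locally on port 8000 and "gpt-4"
# is one of the models it is configured to route to.
PROXY_URL = "http://localhost:8000/chat/completions"

payload = {
    "model": "gpt-4",  # any of the 50+ supported models
    "messages": [
        {"role": "user", "content": "What can the liteLLM proxy do?"}
    ],
}

response = requests.post(PROXY_URL, json=payload, timeout=60)
response.raise_for_status()
data = response.json()

# Output is standardized to the OpenAI format, so the text is always here:
print(data["choices"][0]["message"]["content"])
```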
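The post also mentions error handling via model fallbacks ("if GPT-4 fails, try llama2"). A generic client-side version of that idea might look like the following; this is an illustration of the fallback pattern written against the proxy's OpenAI-format endpoint, not liteLLM's own fallback API, and the model names and URL are again assumptions.

```python
import requests

PROXY_URL = "http://localhost:8000/chat/completions"  # assumed local deployment

def completion_with_fallbacks(messages, models=("gpt-4", "llama2")):
    """Try each model in order until one returns a response.

    Illustrates the fallback idea from the post; not liteLLM's actual API.
    """
    last_error = None
    for model in models:
        try:
            resp = requests.post(
                PROXY_URL,
                json={"model": model, "messages": messages},
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except requests.RequestException as err:
            last_error = err  # remember the failure, fall through to the next model
    raise RuntimeError("All models failed") from last_error

print(completion_with_fallbacks([{"role": "user", "content": "Hi!"}]))
```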


