🦙
Ollama
Run LLMs locally
AI Infrastructure
Open-source tool that lets developers download, run, and manage large language models locally on their machines, exposing a simple API for local LLM inference.
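As a minimal sketch of that local inference API: Ollama serves a REST endpoint at `http://localhost:11434/api/generate` by default, which accepts a JSON payload with a model name and prompt. The snippet below uses only the Python standard library; the model name `llama3` is an example and assumes you have already pulled a model and have the Ollama server running.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes the server is running on this machine)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # "stream": False requests a single JSON response instead of streamed chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a pulled model, e.g. `ollama pull llama3`,
# and the Ollama server running locally):
#   reply = generate("llama3", "Why is the sky blue?")
```

Because everything stays on localhost, prompts and responses never leave the machine, which is what makes the private/offline use cases listed below possible.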
Key Features
- ✓ Run LLMs locally
- ✓ Model management
- ✓ Local inference API
- ✓ Private, offline models
#local-LLM #run-models-locally #private-inference #offline-models
Quick Info
- Category
- AI Infrastructure
- Pricing
- Free
More AI Infrastructure Tools
Inferless
AI Infrastructure: Serverless AI model deployment platform with GPU auto-scaling and cold start optimization
Colossal AI
AI Infrastructure: Open-source system for efficient large-scale AI model training and fine-tuning
Neural Magic
AI Infrastructure: Software-defined AI inference engine that runs LLMs at GPU speed on CPUs
Weaviate Cloud
AI Infrastructure: Fully managed cloud service for the Weaviate open-source vector database