Neural Chat (Intel)
Intel's optimized open-source LLM for efficient CPU and GPU inference
Neural Chat is Intel's open-source large language model, optimized for efficient inference on Intel CPUs, GPUs, and Gaudi accelerators. Built on Mistral-7B and fine-tuned with Intel Extension for Transformers, Neural Chat scores strongly on instruction-following benchmarks while delivering significantly faster inference than comparable models on Intel hardware. Intel also publishes quantized (INT4/INT8) variants for deployment on edge and enterprise hardware without NVIDIA GPUs.
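To make the INT8 claim concrete, here is a minimal sketch of what symmetric weight quantization does: each float weight is mapped to an 8-bit integer plus a shared scale factor, shrinking storage roughly 4x versus FP32 at the cost of a small rounding error. This is an illustrative toy, not Intel's actual quantization pipeline (which uses Intel Extension for Transformers / Neural Compressor):

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats to integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int values and shared scale."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.95]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

The round-trip error per weight is bounded by half the scale, which is why quantization degrades accuracy only slightly while cutting memory and bandwidth, the main bottlenecks for CPU inference.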
Key Features
- ✓ Intel hardware optimized
- ✓ INT4/INT8 quantization
- ✓ CPU inference
- ✓ Mistral base
- ✓ Enterprise edge
- ✓ Apache 2.0
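Since the model follows a plain-text instruction format, prompts are easy to construct by hand. The helper below sketches the `### System` / `### User` / `### Assistant` template commonly shown on the Neural Chat Hugging Face model cards; the exact template string is an assumption here, so verify it against the model card for the specific checkpoint you deploy:

```python
def build_prompt(user_msg, system_msg="You are a helpful assistant."):
    """Assemble a Neural Chat style instruction prompt.

    The section markers below are assumed from the neural-chat-7b-v3
    model card; confirm against the card before production use.
    """
    return (
        f"### System:\n{system_msg}\n"
        f"### User:\n{user_msg}\n"
        f"### Assistant:\n"
    )

prompt = build_prompt("Summarize the benefits of INT8 quantization.")
```

The resulting string is passed to the tokenizer as-is; generation is then stopped when the model emits the next `###` section marker.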
Quick Info
- Category
- Code & Development
- Pricing
- Free
More Code & Development Tools
GitHub Copilot
The AI pair programmer trusted by millions of developers
Cursor
The code editor built around AI from the ground up
Tabnine
Privacy-first AI code completion
Codeium
Free AI coding assistant with no usage limits