Neural Chat (Intel)

Intel's optimized open-source LLM for efficient CPU and GPU inference

Neural Chat is Intel's open-source large language model, optimized for efficient inference on Intel CPUs, GPUs, and Gaudi accelerators. Built on Mistral and fine-tuned with Intel Extension for Transformers, Neural Chat achieves strong performance on instruction-following benchmarks while delivering significantly faster inference than comparable models on Intel hardware. Intel also publishes quantized versions (INT4/INT8) for deployment on edge and enterprise hardware without NVIDIA GPUs.
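As a rough illustration of how the model is typically prompted, here is a minimal sketch of a single-turn instruction prompt. The `### System / ### User / ### Assistant` layout follows the template commonly shown on Intel's Neural Chat model cards; verify against the model card for the specific release you deploy.

```python
# Sketch: build a single-turn prompt in Neural Chat's instruction format.
# The exact template is an assumption based on Intel's published model cards.

def build_prompt(system: str, user: str) -> str:
    """Format a system message and user turn for Neural Chat."""
    return (
        f"### System:\n{system}\n"
        f"### User:\n{user}\n"
        f"### Assistant:\n"
    )

prompt = build_prompt(
    "You are a helpful assistant.",
    "Summarize the benefits of INT8 quantization.",
)
print(prompt)
```

The resulting string can be passed to any Hugging Face `transformers` generation pipeline loaded with the Neural Chat weights.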

Key Features

  • Intel hardware optimized
  • INT4/INT8 quantization
  • CPU inference
  • Mistral base
  • Enterprise edge
  • Apache 2.0
Tags: #llm #intel #cpu-inference #quantized #edge #open-source

Get Started

Visit Neural Chat (Intel)
Free
Completely free to use

Quick Info

Category
Code & Development
Pricing
Free
