
d-Matrix

AI inference chip for LLMs

AI Hardware
AI semiconductor startup developing in-memory compute chips optimized for large language model (LLM) inference, delivering high throughput with low latency and cost.

Key Features

  • In-memory compute
  • LLM inference chip
  • Low latency
  • High throughput
#AI chip #in-memory compute #LLM inference #semiconductor

Get Started

Visit d-Matrix
Enterprise pricing: contact sales

Quick Info

Category
AI Hardware
Pricing
Enterprise
