d-Matrix
AI inference chip for LLMs
AI Hardware
An AI semiconductor startup developing in-memory compute chips optimized for large language model (LLM) inference, delivering high throughput at low latency and cost.
Key Features
- ✓ In-memory compute
- ✓ LLM inference chip
- ✓ Low latency
- ✓ High throughput
#AI chip #in-memory compute #LLM inference #semiconductor
Quick Info
- Category: AI Hardware
- Pricing: Enterprise