Expedera AI
Configurable neural network inference IP for SoCs
AI Infrastructure
Semiconductor IP provider offering configurable neural processing unit cores for system-on-chip designs targeting smart home, automotive, and surveillance applications.
Key Features
- ✓ Neural network IP cores
- ✓ Configurable NPU design
- ✓ SoC integration
- ✓ Low-latency inference
#semiconductor #NPU #IP #SoC
Quick Info
- Category: AI Infrastructure
- Pricing: Paid
More AI Infrastructure Tools
- Inferless: Serverless AI model deployment platform with GPU auto-scaling and cold-start optimization
- Colossal-AI: Open-source system for efficient large-scale AI model training and fine-tuning
- Neural Magic: Software-defined AI inference engine that runs LLMs at GPU speed on CPUs
- Weaviate Cloud: Fully managed cloud service for the Weaviate open-source vector database