intel/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
Star & Fork Trend (45 data points)
Multi-Source Signals
Growth Velocity
intel/ipex-llm has +4 stars this period. 7-day velocity: 0.1%.
| Metric | ipex-llm | Bert-VITS2 | jetson-inference | NarratoAI |
|---|---|---|---|---|
| Stars | 8.8k | 8.7k | 8.8k | 8.7k |
| Forks | 1.4k | 1.3k | 3.1k | 1.2k |
| Weekly Growth | +4 | +0 | +0 | +14 |
| Language | Python | Python | C++ | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | Apache-2.0 | AGPL-3.0 | MIT | NOASSERTION |
Capability Radar vs Bert-VITS2
Last code push 70 days ago.
Fork-to-star ratio: 16.1%, indicating an active community that forks and contributes.
Issue data not yet available.
+4 stars this period — 0.05% growth rate.
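The ratio and growth figures above can be reproduced from the raw counts. A minimal sketch, using the rounded values from the comparison table (8.8k stars, 1.4k forks, +4 stars this period); the dashboard's exact percentages come from unrounded counts, so the decimals differ slightly:

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars."""
    return forks / stars * 100

def growth_rate(new_stars: int, total_stars: int) -> float:
    """Stars gained this period as a percentage of the total."""
    return new_stars / total_stars * 100

# Rounded table values; exact counts yield the 16.1% shown above.
ratio = fork_to_star_ratio(1_400, 8_800)   # ~15.9%
growth = growth_rate(4, 8_800)             # ~0.045%, displayed as 0.05%
print(f"{ratio:.1f}% fork-to-star, {growth:.2f}% growth")
```

With rounded inputs the ratio comes out near 15.9% rather than 16.1%, which is consistent with the table truncating star and fork counts to one decimal of a thousand.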
Licensed under Apache-2.0. Permissive — safe for commercial use.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.