
intel/ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.

Stars: 8.8k · Forks: 1.4k · Weekly growth: +4
Source: GitHub
Topics: gpu, llm, pytorch, transformers
Trend rank: 3

[Chart: Star & Fork Trend (45 data points) — stars and forks over time]

Multi-Source Signals

Growth Velocity

intel/ipex-llm gained +4 stars this period; 7-day velocity: 0.1%.
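As a rough check, the velocity figure can be reproduced from the raw counts. A minimal sketch, assuming velocity is simply new stars as a percentage of the current star total (the dashboard's exact window and rounding are not specified):

```python
def star_velocity(new_stars: int, total_stars: int) -> float:
    """Growth over the period, as a percentage of the current star count."""
    return 100.0 * new_stars / total_stars

# intel/ipex-llm: +4 stars on a base of roughly 8,800
print(round(star_velocity(4, 8800), 2))  # 0.05
```

The rounded counts give about 0.05%; the reported 0.1% likely reflects a different window or the exact (unrounded) star total.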


| Metric | ipex-llm | Bert-VITS2 | jetson-inference | NarratoAI |
|---|---|---|---|---|
| Stars | 8.8k | 8.7k | 8.8k | 8.7k |
| Forks | 1.4k | 1.3k | 3.1k | 1.2k |
| Weekly growth | +4 | +0 | +0 | +14 |
| Language | Python | Python | C++ | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | Apache-2.0 | AGPL-3.0 | MIT | NOASSERTION |

Capability Radar vs Bert-VITS2

[Radar chart comparing ipex-llm and Bert-VITS2 across the metrics below]
Maintenance Activity 63

Last code push 70 days ago.

Community Engagement 81

Fork-to-star ratio: 16.1%. Active community forking and contributing.
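The ratio itself is straightforward to recompute. A sketch using the rounded headline counts (the reported 16.1% presumably comes from the exact star and fork counts rather than the rounded 8.8k/1.4k shown above):

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars; a crude proxy for hands-on engagement."""
    return 100.0 * forks / stars

# Rounded counts give ~15.9%, close to the reported 16.1%
print(round(fork_to_star_ratio(1400, 8800), 1))  # 15.9
```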

Issue Burden 70

Issue data not yet available.

Growth Momentum 43

+4 stars this period (0.05% growth rate).

License Clarity 95

Licensed under Apache-2.0. Permissive — safe for commercial use.
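A license-clarity check like this usually reduces to bucketing the repository's SPDX identifier. A minimal sketch (the bucket sets are illustrative assumptions, not the dashboard's actual rules):

```python
# Illustrative buckets only; real tooling should consult the full SPDX list.
PERMISSIVE = {"Apache-2.0", "MIT", "BSD-3-Clause"}
COPYLEFT = {"AGPL-3.0", "GPL-3.0", "LGPL-3.0"}

def classify_license(spdx_id: str) -> str:
    """Map an SPDX license identifier to a coarse risk bucket."""
    if spdx_id in PERMISSIVE:
        return "permissive"
    if spdx_id in COPYLEFT:
        return "copyleft"
    return "unknown"  # e.g. GitHub's NOASSERTION

print(classify_license("Apache-2.0"))  # permissive
```

Under this scheme, ipex-llm's Apache-2.0 scores as permissive, while NarratoAI's NOASSERTION would land in the unknown bucket.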

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.