
lucidrains/linear-attention-transformer

Transformer based on a variant of attention that is linear in complexity with respect to sequence length.

Stars: 824 · Forks: 75 · Weekly growth: +0 · Source: GitHub
Topics: artificial-intelligence, attention-mechanism, deep-learning, pytorch, transformer
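The linear-complexity claim in the description refers to the kernel trick behind linear attention: with a positive feature map φ, attention can be computed as φ(Q)(φ(K)ᵀV) instead of softmax(QKᵀ)V, so cost scales as O(n·d²) rather than O(n²·d) in sequence length n. Below is a minimal non-causal PyTorch sketch of that general idea, not this repository's exact implementation; the elu(x)+1 feature map is one common choice from the linear-attention literature, and the function name is ours:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: O(n * d^2) instead of O(n^2 * d)."""
    # Positive feature map phi(x) = elu(x) + 1; exact maps vary by implementation.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associativity: compute q @ (k^T @ v) instead of (q @ k^T) @ v,
    # so sequence length n never appears squared.
    kv = torch.einsum('bnd,bne->bde', k, v)                    # (batch, d, d_v)
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(1)) + eps)  # row normalizer
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)

# Toy usage: batch of 2, sequence length 1024, head dimension 64.
q = torch.randn(2, 1024, 64)
k = torch.randn(2, 1024, 64)
v = torch.randn(2, 1024, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 1024, 64])
```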

[Chart: Star & Fork Trend, 31 data points; series: Stars and Forks]

Multi-Source Signals

Growth Velocity

lucidrains/linear-attention-transformer has gained +0 stars this period. Velocity data will be available once more historical data has been collected.

Deep, signal-backed technical analysis is still being generated for this repository and will be available soon.

| Metric | linear-attention-transformer | FedLab | kur | taichi-nerfs |
|---|---|---|---|---|
| Stars | 824 | 824 | 823 | 826 |
| Forks | 75 | 142 | 108 | 56 |
| Weekly Growth | +0 | +1 | +0 | +0 |
| Language | Python | Jupyter Notebook | Python | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | MIT | Apache-2.0 | Apache-2.0 | Apache-2.0 |

[Chart: Capability Radar, linear-attention-transformer vs FedLab]
Maintenance Activity: 0

Last code push was 703 days ago.

Community Engagement: 46

Fork-to-star ratio: 9.1% (75 forks to 824 stars). A lower fork ratio may indicate passive usage.
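For reference, the 9.1% figure is simply forks divided by stars from the table above:

```python
stars, forks = 824, 75
print(f"fork-to-star ratio: {forks / stars:.1%}")  # -> 9.1%
```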

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under MIT. Permissive and safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
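The page does not publish its scoring formulas. As a purely hypothetical illustration of the kind of heuristic such a dashboard might use, a maintenance score could decay with days since the last push; the function name and the two-year horizon below are assumptions, not the site's actual method:

```python
def maintenance_score(days_since_push: int, horizon_days: int = 730) -> int:
    """Hypothetical heuristic: 100 for a push today, tapering to 0 at the horizon."""
    return max(0, round(100 * (1 - days_since_push / horizon_days)))

print(maintenance_score(703))  # -> 4 under this toy decay; the page reports 0,
                               # so its real formula is evidently stricter.
```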