guolinke/TUPE
Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". TUPE can be used to improve existing pre-trained models such as BERT (see the sketch below the repository card).
253 stars · 27 forks · +0 stars/wk
GitHub
bert language-model pretraining transformer
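The untied positional encoding that gives TUPE its name replaces the usual practice of adding position embeddings to word embeddings: content-to-content and position-to-position attention terms are computed with separate projection matrices and then summed. The PyTorch sketch below is a simplified illustration under assumptions (the class and parameter names are invented here, and the paper's [CLS]-specific untying and relative-position bias are omitted); it is not the repository's actual implementation.

```python
import math
import torch
import torch.nn as nn


class UntiedPositionalAttention(nn.Module):
    """Sketch of TUPE-style untied attention logits.

    Logits are the sum of a content-to-content term and a
    position-to-position term, each with its own projections.
    """

    def __init__(self, d_model: int, n_heads: int, max_len: int = 512):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Projections for the token (content) stream.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        # Separate embeddings and projections for absolute positions
        # (the "untied" part: positions never mix with word embeddings).
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.pos_q_proj = nn.Linear(d_model, d_model)
        self.pos_k_proj = nn.Linear(d_model, d_model)

    def _split_heads(self, x: torch.Tensor) -> torch.Tensor:
        # (B, T, d_model) -> (B, n_heads, T, d_head)
        b, t, _ = x.shape
        return x.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, d_model) token embeddings; positions are NOT added to the input.
        t = x.size(1)
        scale = math.sqrt(2 * self.d_head)

        # Content-to-content attention term.
        q = self._split_heads(self.q_proj(x))
        k = self._split_heads(self.k_proj(x))
        content_logits = q @ k.transpose(-2, -1) / scale  # (B, H, T, T)

        # Position-to-position term, shared across the batch.
        pos = self.pos_emb(torch.arange(t, device=x.device)).unsqueeze(0)  # (1, T, d_model)
        pq = self._split_heads(self.pos_q_proj(pos))
        pk = self._split_heads(self.pos_k_proj(pos))
        pos_logits = pq @ pk.transpose(-2, -1) / scale  # (1, H, T, T)

        # Softmax over the summed logits gives the attention weights.
        return content_logits + pos_logits
```

Scaling by sqrt(2 * d_head) rather than sqrt(d_head) keeps the variance of the summed logits comparable to standard scaled dot-product attention.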
[Chart: Star & Fork Trend (18 data points); series: Stars, Forks]
Multi-Source Signals
Growth Velocity
guolinke/TUPE has gained +0 stars this period. Velocity data will become available once more historical data has been collected.
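As a hypothetical illustration of what a star-velocity figure measures, the sketch below computes net stars gained per week from periodic snapshots; the function name and data format are assumptions, not the dashboard's actual pipeline.

```python
from datetime import datetime


def stars_per_week(snapshots: list[tuple[datetime, int]]) -> float:
    """Net stars gained per week between the first and last snapshot."""
    (t0, s0), (t1, s1) = snapshots[0], snapshots[-1]
    weeks = (t1 - t0).total_seconds() / (7 * 24 * 3600)
    return (s1 - s0) / weeks if weeks else 0.0


# Example: two snapshots one week apart with no new stars -> 0.0 stars/wk.
print(stars_per_week([(datetime(2024, 6, 1), 253), (datetime(2024, 6, 8), 253)]))
```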
Deep analysis is being generated for this repository.
Signal-backed technical analysis will be available soon.
| Metric | TUPE | docGPT-langchain | astrbot_plugin_self_learning | pyribs |
|---|---|---|---|---|
| Stars | 253 | 253 | 254 | 255 |
| Forks | 27 | 55 | 28 | 46 |
| Weekly Growth (stars) | +0 | -2 | +5 | +0 |
| Language | Python | Python | Python | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | MIT | MIT | GPL-3.0 | MIT |
Capability Radar vs docGPT-langchain
[Radar chart; series: TUPE, docGPT-langchain]
Maintenance Activity: 0
Last code push was 1612 days ago.
Community Engagement: 53
Fork-to-star ratio: 10.7% (27 forks / 253 stars; see the worked example below). Indicates an active community that forks and contributes.
Issue Burden: 70
Issue data is not yet available.
Growth Momentum: 30
No measurable growth in the current period (a first-day cold start is expected).
License Clarity: 95
Licensed under MIT. Permissive and safe for commercial use.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
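For instance, the fork-to-star ratio reported above follows directly from the figures in the comparison table. The snippet below is a hypothetical illustration (the function name is invented; it is not the dashboard's actual scoring code):

```python
def fork_to_star_ratio(stars: int, forks: int) -> float:
    """Fork-to-star ratio as a percentage (0 if the repository has no stars)."""
    return 100.0 * forks / stars if stars else 0.0


# TUPE: 27 forks / 253 stars -> about 10.7%, matching the figure above.
print(f"{fork_to_star_ratio(253, 27):.1f}%")  # 10.7%
```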