TUPE

guolinke/TUPE

Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". It improves existing models such as BERT.
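The core idea of the paper is to untie word and position correlations in self-attention: word-word and position-position attention terms are computed with separate projections and summed, instead of adding absolute position embeddings to the token embeddings. Below is a minimal, single-head PyTorch sketch of that untied attention term; the class name, tensor shapes, and the omission of multi-head splitting, the relative-position variant, and the [CLS] untying are simplifications for illustration, not the repository's actual implementation.

```python
import math
import torch
import torch.nn as nn

class UntiedPositionAttention(nn.Module):
    """Hypothetical single-head sketch of TUPE-style untied attention.

    Word-word and position-position correlations use separate Q/K
    projections and are summed before the softmax, rather than adding
    absolute position embeddings to the word embeddings.
    """

    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        # Projections for the word (content) part of attention.
        self.word_q = nn.Linear(d_model, d_model)
        self.word_k = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        # Separate projections for the positional part of attention.
        self.pos_q = nn.Linear(d_model, d_model)
        self.pos_k = nn.Linear(d_model, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Scale both terms jointly, as in the paper's formulation.
        self.scale = 1.0 / math.sqrt(2 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) word embeddings, positions NOT added in.
        seq_len = x.size(1)
        pos = self.pos_emb(torch.arange(seq_len, device=x.device))  # (seq_len, d_model)

        word_scores = self.word_q(x) @ self.word_k(x).transpose(-2, -1)   # (B, L, L)
        pos_scores = self.pos_q(pos) @ self.pos_k(pos).transpose(-2, -1)  # (L, L)

        # Untied attention: content term plus position term, then softmax.
        attn = torch.softmax((word_scores + pos_scores) * self.scale, dim=-1)
        return attn @ self.value(x)
```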

253 stars · 27 forks · +0/wk
Source: GitHub
Topics: bert, language-model, pretraining, transformer
Trend: 0

Star & Fork Trend chart (18 data points; series: Stars, Forks)

Multi-Source Signals

Growth Velocity

guolinke/TUPE has gained 0 stars this period. Velocity data will be available once more historical data has been collected.

Deep, signal-backed technical analysis is still being generated for this repository and will be available soon.

Metric         TUPE    docGPT-langchain  astrbot_plugin_self_learning  pyribs
Stars          253     253               254                           255
Forks          27      55                28                            46
Weekly Growth  +0      -2                +5                            +0
Language       Python  Python            Python                        Python
Sources        1       1                 1                             1
License        MIT     MIT               GPL-3.0                       MIT

Capability Radar vs docGPT-langchain (series: TUPE, docGPT-langchain)
Maintenance Activity: 0

Last code push 1612 days ago.

Community Engagement: 53

Fork-to-star ratio: 10.7% (27 forks / 253 stars). The community is actively forking and contributing.
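As a quick check, the ratio follows directly from the star and fork counts shown above; a minimal sketch (the function name is an illustration, not the dashboard's code):

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Return the fork-to-star ratio as a percentage."""
    return 100.0 * forks / stars if stars else 0.0

# 27 forks and 253 stars give roughly 10.7%.
print(f"{fork_to_star_ratio(27, 253):.1f}%")  # -> 10.7%
```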

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under MIT, a permissive license that is safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
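The exact scoring formulas are not shown on this page. As a purely hypothetical illustration of how a raw metric such as days since the last push could be mapped to a 0-100 health score, one might clamp and rescale it; the thresholds and function name below are assumptions, not the dashboard's actual method.

```python
def health_score(days_since_push: float, fresh: float = 30.0, stale: float = 730.0) -> int:
    """Hypothetical 0-100 score: 100 when pushed within `fresh` days,
    0 when older than `stale` days, linear in between."""
    if days_since_push <= fresh:
        return 100
    if days_since_push >= stale:
        return 0
    return round(100 * (stale - days_since_push) / (stale - fresh))

# 1612 days since the last push would score 0 under these example thresholds.
print(health_score(1612))  # -> 0
```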