Daniel Byrne
realdanielbyrne
3 followers · 14 following
AI & ML interests
Deep Learning, LLMs, Time Series Analysis
Recent Activity
liked a model 29 days ago: WeiboAI/VibeThinker-1.5B
liked a dataset 3 months ago: 1-800-LLMs/physics
replied to Kseniase's post 3 months ago:
10 awesome advanced LoRA approaches

Low-Rank Adaptation (LoRA) is the go-to method for efficient model fine-tuning: instead of retraining the full model, it adds small trainable low-rank matrices (a minimal sketch follows this post). The field isn’t standing still – new LoRA variants keep pushing the limits of efficiency, generalization, and personalization. So we’re sharing 10 of the latest LoRA approaches you should know about:

1. Mixture-of-LoRA-experts → https://huggingface.co/papers/2509.13878
Adds multiple low-rank adapters (LoRAs) into a model’s layers, and a routing mechanism activates the most suitable ones for each input. This lets the model adapt better to new, unseen conditions (a routing sketch also follows this post).

2. Amortized Bayesian Meta-Learning for LoRA (ABMLL) → https://huggingface.co/papers/2508.14285
Balances global and task-specific parameters within a Bayesian framework to improve uncertainty calibration and generalization to new tasks without high memory or compute costs.

3. AutoLoRA → https://huggingface.co/papers/2508.02107
Automatically retrieves and dynamically aggregates public LoRAs for stronger text-to-image (T2I) generation.

4. aLoRA (Activated LoRA) → https://huggingface.co/papers/2504.12397
Applies LoRA only after invocation, letting the model reuse the base model’s KV cache instead of recomputing the full turn’s KV cache. Efficient in multi-turn conversations.

5. LiLoRA (LoRA in LoRA) → https://huggingface.co/papers/2508.06202
Shares the LoRA matrix A across tasks and additionally low-rank-decomposes matrix B to cut parameters in continual vision-text MLLMs.

6. Sensitivity-LoRA → https://huggingface.co/papers/2509.09119
Dynamically assigns ranks to weight matrices based on their sensitivity, measured using second-order derivatives.

Read further below ↓
Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
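For readers new to the topic, here is a minimal sketch of the core LoRA idea the post builds on: a frozen pretrained linear layer plus a trainable low-rank update scaled by alpha / r. This is illustrative PyTorch, not code from any of the cited papers; the class name and the rank and alpha defaults are assumptions for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        # A projects down to rank r, B projects back up; B is zero-initialized
        # so the wrapped layer starts out exactly equal to the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B receive gradients; for a 768x768 layer that is
# 2 * 8 * 768 trainable parameters instead of 768 * 768.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```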
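And a sketch of the routing idea behind item 1 (Mixture-of-LoRA-experts): several small adapters share one frozen base layer, and a learned gate mixes their outputs per input. The dense softmax gate below is a simplification for illustration, not necessarily the routing scheme the linked paper uses; all names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> scale * B(A(x))."""
    def __init__(self, in_dim: int, out_dim: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(in_dim, r, bias=False)
        self.B = nn.Linear(r, out_dim, bias=False)
        nn.init.zeros_(self.B.weight)  # each expert starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.B(self.A(x))

class MoLoRALinear(nn.Module):
    """Frozen base linear layer plus a learned gate over several LoRA experts."""
    def __init__(self, base: nn.Linear, n_experts: int = 4, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.experts = nn.ModuleList(
            LoRAExpert(base.in_features, base.out_features, r)
            for _ in range(n_experts)
        )
        self.router = nn.Linear(base.in_features, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.softmax(self.router(x), dim=-1)    # per-input mixing weights
        out = self.base(x)
        for i, expert in enumerate(self.experts):
            out = out + gates[..., i:i + 1] * expert(x)  # weight each adapter's update
        return out

layer = MoLoRALinear(nn.Linear(768, 768), n_experts=4)
out = layer(torch.randn(2, 768))  # routing happens independently per input row
```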
Organizations
None yet
realdanielbyrne's activity
liked a model 29 days ago
WeiboAI/VibeThinker-1.5B (Text Generation · 2B params · updated 16 days ago · 28.6k downloads · 501 likes)
liked a dataset 3 months ago
1-800-LLMs/physics (Viewer · updated Apr 19 · 20k rows · 18 downloads · 2 likes)
liked a dataset 11 months ago
PowerInfer/QWQ-LONGCOT-500K (Viewer · updated Dec 26, 2024 · 286k rows · 134 downloads · 125 likes)
liked a model about 1 year ago
Sao10K/I_am_alive_yay (updated Nov 30, 2024 · 64)
liked 2 datasets about 1 year ago
realdanielbyrne/AgathaChristieText (Viewer · updated Dec 6, 2024 · 14.7k rows · 66 downloads · 2 likes)
rombodawg/Everything_Instruct (Viewer · updated Oct 8, 2024 · 4.05M rows · 60 downloads · 54 likes)
liked a model about 1 year ago
ValiantLabs/Llama3.1-8B-Enigma (Text Generation · 8B params · updated Mar 12 · 41 downloads · 11 likes)
liked a dataset about 1 year ago
sequelbox/Tachibana (Viewer · updated Sep 27, 2024 · 104k rows · 46 downloads · 10 likes)
liked a model about 1 year ago
Undi95/Meta-Llama-3.1-8B-Claude (Text Generation · 8B params · updated Jul 31, 2024 · 72 downloads · 56 likes)
liked a dataset over 1 year ago
Weyaxi/sci-datasets (Viewer · updated Sep 28, 2024 · 971k rows · 3.55k downloads · 27 likes)
liked a model over 1 year ago
TinyLlama/TinyLlama-1.1B-Chat-v1.0 (Text Generation · 1B params · updated Mar 17, 2024 · 1.84M downloads · 1.48k likes)