Ben's Bites
daily AI product launches & news
QLoRA 4-bit quantization - allows finetuning LLaMA-sized models on a consumer GPU!
twitter.com
by
altryne
2 years ago
•
discuss
QLoRA is a new method for finetuning large models that is efficient enough to finetune up to 100 LLaMA-sized models per day on a small cluster.
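The efficiency comes from keeping the frozen base model's weights in 4-bit precision while training only small low-rank adapters. A minimal sketch of blockwise absmax 4-bit quantization in NumPy (a simplification for illustration; QLoRA itself uses the NF4 data type plus double quantization of the scales):

```python
import numpy as np

def quantize_4bit(w, block_size=64):
    """Blockwise absmax 4-bit quantization: each block of weights is
    scaled by its largest magnitude and rounded to one of 15 signed
    integer levels in [-7, 7], storable in 4 bits."""
    blocks = w.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)  # one scale per block
    q = np.round(blocks / scales * 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q, scales):
    """Recover approximate float weights from 4-bit codes and scales."""
    return (q.astype(np.float32) / 7) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s).reshape(-1)
# Rounding error is at most scale/14 per element within each block.
print(np.abs(w - w_hat).max())
```

Storing weights this way cuts memory roughly 4x versus fp16; during QLoRA training, blocks are dequantized on the fly for the forward pass while gradients flow only into the LoRA adapter weights.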