torque-mcclyde (OP) · r/LocalLLaMA · Tool to quickly iterate when fine-tuning open-source LLMs
This means a lot! Thank you.
Yes, our datasets usually have a few hundred examples. We do support arbitrarily large datasets, though; the fine-tuning just takes a little longer.
For deploying and scaling we’re using Modal, a “serverless” GPU provider that we found to be very user-friendly.
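For anyone curious what that looks like in practice, here's a minimal sketch of running a job on Modal's serverless GPUs. This is not our actual setup; the app name, GPU type, dependencies, and the `finetune()` body are placeholders for illustration.

```python
# Minimal sketch of a serverless GPU job on Modal (placeholder names throughout).
import modal

app = modal.App("finetune-sketch")

# Container image with the training dependencies installed.
image = modal.Image.debian_slim().pip_install("torch", "transformers", "peft", "datasets")

@app.function(gpu="A10G", image=image, timeout=60 * 60)
def finetune(dataset_url: str) -> str:
    # Hypothetical training routine: load the base model, run the fine-tune,
    # and upload the resulting weights somewhere durable.
    ...
    return "s3://your-bucket/adapter"  # placeholder artifact location

@app.local_entrypoint()
def main():
    # Modal spins up a GPU container on demand and tears it down when the call returns.
    print(finetune.remote("https://example.com/dataset.jsonl"))
```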
Glad to hear that we’re not the only ones!
Fine-tuning runs online, but you can download the weights and run them wherever you like (including on your own computer).
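As a rough sketch of what running the downloaded weights locally could look like (assuming they're in a standard Hugging Face format; the directory path below is a placeholder):

```python
# Load downloaded fine-tuned weights locally and generate a completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./my-finetuned-model"  # wherever you saved the downloaded weights

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```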