wojcech@alien.top to LocalLLaMA · English · 1 year ago
EPFL releases an open Medical Llama 2 finetune, including weights and training data, within 5%/10% of GPT-4/Med-PaLM-2 (arxiv.org) · 1 comment
wojcech@alien.top (OP) in LocalLLaMA • Is anyone experimenting with non-instruction tuned models? · 1 year ago
Just to be clear, you aren’t doing fine tuning here as in gradient updates, you are using the base model + ICL?

wojcech@alien.top (OP) in LocalLLaMA • Is anyone experimenting with non-instruction tuned models? · 1 year ago
Fine tune as in gradient updates or as in ICL?
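The two comments above draw a distinction between fine-tuning (gradient updates that change the model's weights) and in-context learning, where a frozen base model is steered purely by examples in the prompt. A minimal sketch of that distinction, assuming the Hugging Face transformers library; the checkpoint name, prompt, and hyperparameters are illustrative and not from the thread:

```python
# Illustrative sketch: ICL vs. gradient-update fine-tuning.
# The model name and training example below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base (non-instruction-tuned) model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# In-context learning: no weights change. Few-shot examples are simply
# prepended to the prompt and the frozen model continues the pattern.
icl_prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "bread =>"
)
inputs = tokenizer(icl_prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# Fine-tuning: a gradient update actually modifies the parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = tokenizer("bread => pain", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()  # the model's weights are now different
```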
wojcech@alien.top to LocalLLaMA · English · 1 year ago
Is anyone experimenting with non-instruction tuned models? · 5 comments