A nice blog post about this: https://simonwillison.net/2023/Nov/29/llamafile/
Excited to try this thing!
Just tried this out on my Windows machine and got this:
warning: couldn't find nvcc (nvidia c compiler) try setting $CUDA_PATH if it's installed
warning: GPU offload not supported on this platform; GPU related options will be ignored
warning: you might need to install xcode (macos) or cuda (windows, linux, etc.) check the output above to see why support wasn't linked
So I can't use my GPU the way I can with standard llama.cpp… and I don't want to install anything; I'd like a portable solution I can copy onto my external SSD…
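For what it's worth, the warning itself hints at the fix: llamafile looks for an existing CUDA install via $CUDA_PATH rather than bundling GPU support. A rough sketch of what that looks like on Windows, assuming the CUDA Toolkit is already installed (the version folder below is illustrative, and per the llamafile README the file has to be renamed to end in .exe on Windows):

rem Point llamafile at the CUDA Toolkit so it can find nvcc
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3
rem -ngl is llama.cpp's GPU-offload flag (number of layers to offload)
your-model.llamafile.exe -ngl 35

That does mean installing CUDA on the host machine, though, so it doesn't help with the fully portable, copy-to-SSD use case.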