I know the typical answer is “no, because all the libs are in Python”… but I’m kind of baffled that more porting isn’t going on, especially to Go, given that Go, like Python, is stupidly easy to learn, yet much faster to run. Truly not trying to start a flame war or anything. I’m just a bigger fan of Go than Python, and I was thinking that coming into 2024, especially with all the huge money in AI now, we’d see a LOT more movement toward Go’s much faster runtime, with code that stays largely as easy (if not easier) to write and maintain. Not sure about Rust… it may run a little faster than Go, but it’s a much harder language to learn and use. Still, it has been growing in popularity, so I was curious whether it’s a potential option.

There are some Go libs out there, but the few I’ve found seem to be 3, 4, or more years old. I was hoping things like PyTorch would have been ported to Go.

I was even curious: with the power of GPT-4 or DeepSeek Coder or similar, how hard would it be to convert Python libraries to Go? Is anyone working on that, or is it pretty much impossible?

  • thewayupisdown@alien.topB · 1 year ago

    Could you elaborate on the Perl part, if possible? I don’t mind learning as much Python as necessary as I go along, but I’d much rather do everything that’s convenient in Perl.

    • ttkciar@alien.topB · 1 year ago

      Sure! I’ve been doing a few LLM’ing things in Perl:

      • A previous project, implemented in Perl, indexes a local Wikipedia dump with Apache Lucy and lets me search for pages. I’ve been reusing that project for RAG inference.

      • My “infer” utility is written in Perl. It wraps llama.cpp’s “main” utility with IPC::Open3 (see the sketch just after this list) and I’m using it for inference, for RAG, for stop-words, for matching prompt templates to models, and for summarization. It’s gloriously broken at the moment and in dire need of a refactor.

      • I recently started writing a “copilot” utility in Perl, to better streamline using inference for research and code-writing copilots. It also wraps llama.cpp’s “main”, but in a much simpler way than “infer” (blocking I/O, no stop words, not trying to detect when the LLM infers the prompt text, etc.).
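
      A minimal sketch of that IPC::Open3 wrapping pattern, in case it’s useful — the model path, flags, and prompt here are placeholders, not my actual setup:

          use strict;
          use warnings;
          use IPC::Open3;
          use Symbol qw(gensym);

          my $prompt = "Explain retrieval-augmented generation in one paragraph.";
          my $err    = gensym;  # open3 wants a pre-created glob for stderr

          # Spawn llama.cpp's "main" with its stdin/stdout/stderr on our handles.
          my $pid = open3(my $in, my $out, $err,
                          './main', '-m', 'model.gguf', '-p', $prompt, '-n', '256');

          close $in;  # the prompt went in via -p, nothing more to send

          # Stream generated text as it arrives.
          # NB: a real version should drain $err too, or a chatty stderr
          # can fill its pipe buffer and block the child.
          while (my $line = <$out>) {
              print $line;
          }

          waitpid($pid, 0);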

      If you’re more interested in using the existing Python libraries and not wrapping llama.cpp, you should take a look at the Inline::Python module. I’ve only dabbled with LangChain, but if/when I get back to it, I will probably implement Perl bindings with a simple Inline::Python wrapper. It makes it pretty easy.
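
      If it helps, the Inline::Python pattern looks roughly like this — the Python function is a trivial stand-in, not actual LangChain code:

          use strict;
          use warnings;

          # The indented here-doc (<<~) needs Perl 5.26+.
          use Inline Python => <<~'END_PY';
              def summarize(text):
                  # stand-in for whatever Python library you actually want to call
                  return text[:100] + "..."
              END_PY

          # The Python function is now callable as an ordinary Perl sub.
          print summarize("Some long document text ..."), "\n";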

      If you do decide to wrap llama.cpp, you might be more comfortable with IPC::Run rather than IPC::Open3. It’s considered the more modern module. I’m just using IPC::Open3 out of familiarity.
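
      For comparison, the same kind of wrap with IPC::Run might look like this — again, the flags and the stop pattern are placeholders:

          use strict;
          use warnings;
          use IPC::Run qw(start pump finish);

          my $prompt = "Explain retrieval-augmented generation in one paragraph.";
          my ($in, $out) = ('', '');

          # start() returns a harness; pump() shuttles data until a condition holds.
          my $h = start ['./main', '-m', 'model.gguf', '-p', $prompt, '-n', '256'],
                        \$in, \$out;

          # Stream output, bailing early if a stop pattern appears or the
          # child has nothing more to give.
          pump $h until $out =~ /\n###/ or not $h->pumpable;

          finish $h;
          print $out;

      That pump-until-condition loop is a big part of why IPC::Run is nicer for stop-words than blocking reads through IPC::Open3.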