  • I am really looking forward to Zig maturing. I find the memory-model stuff a bit odd, but the overall language looks easy enough for most things, and everything I’ve read so far… even at the 0.11 release… says it’s as fast as, if not faster than, C code in most cases. I don’t know how well it would compare to higher-level languages like Go, Java, etc. for back end web stuff… but I’d imagine that with some time and a few good frameworks, similar to Go’s Chi, JWTAuth and Casbin libraries, it would be very good.
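
    For reference, the kind of minimal setup I mean with Chi looks something like this (a from-memory sketch… the route and port are made up, so treat it as illustrative rather than copy/paste ready):

    ```go
    package main

    import (
        "log"
        "net/http"

        "github.com/go-chi/chi/v5"
        "github.com/go-chi/chi/v5/middleware"
    )

    func main() {
        r := chi.NewRouter()
        r.Use(middleware.Logger) // request logging middleware

        // A trivial route; real services hang auth (jwtauth),
        // authz (casbin) and handlers off the same router.
        r.Get("/health", func(w http.ResponseWriter, _ *http.Request) {
            w.Write([]byte("ok"))
        })

        log.Fatal(http.ListenAndServe(":8080", r))
    }
    ```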




  • Well, that is interesting. Do you know what libraries they are using in Go? Or are they building their own from scratch? I would imagine there would be some movement to translate Python to Go for some situations. But there are a couple of examples within this thread that show some good use of Go with OpenAI (and a local Llama as well).

    I am thinking that I could stand up a model locally that exposes the OpenAI API, and then write some code in Go that calls those OpenAI APIs against the local model… and later it could swap over to the ChatGPT APIs, or to our own larger model if we decide to run one in the cloud.
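
    Something like this rough Go sketch is what I have in mind… assuming the local runner exposes an OpenAI-compatible endpoint. The base URL, port and model name are placeholders for whatever the local server actually uses; swapping to ChatGPT would mostly be a base URL + API key change:

    ```go
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    type chatRequest struct {
        Model    string    `json:"model"`
        Messages []message `json:"messages"`
    }

    type chatResponse struct {
        Choices []struct {
            Message message `json:"message"`
        } `json:"choices"`
    }

    func main() {
        // Point at a local OpenAI-compatible server; swap the base URL
        // (and API key) to use ChatGPT or a cloud-hosted model instead.
        baseURL := "http://localhost:11434" // placeholder local port
        body, _ := json.Marshal(chatRequest{
            Model:    "llama3", // placeholder model name
            Messages: []message{{Role: "user", Content: "Hello!"}},
        })

        req, _ := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out chatResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        if len(out.Choices) > 0 {
            fmt.Println(out.Choices[0].Message.Content)
        }
    }
    ```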


  • WOW… fantastic. This is the first (only?) response I’ve seen of the dozens here that says any of this. As a long-time back end dev, switching to Go for that was amazing. I was FAR faster and more productive than in Java, Python or NodeJS, especially once I built my own set of frameworks that I quickly copy/paste (or import). I love how fast/easy it is to stand up a Docker wrapper as well. Very tiny images with very fast runtimes.

    My point in this post partly stemmed from the amount of runtime you need to install to use Python, and the various speed/memory/etc. costs of running Python apps. If it is just basic glue code, like some have said, then it’s not a big deal. But the fact that I know Go and love the language makes me want to use it over anything else if I can.




  • Yah… the thing is… do I have to learn very fancy, advanced Python to do this… or can I use simpler Python that, as you said, makes use of the more optimized libraries? I am wondering how much time it’s going to take to figure out Python well enough to be of use, and hence was thinking Go and Rust might work well, as I know those well enough.

    If it’s just calling APIs, even to a locally running model, I can do that in Go just fine. If it’s writing advanced AI code in Python, then I likely have to spend months or longer learning the language to use it well enough for that sort of work. In which case I am not sure I am up to that task. I am terrible with math/algos, so I’m not sure how much of all that I am going to have to somehow “master” to be a decent-to-good AI engineer.

    I think the question is… am I a developer using AI (via APIs or CLI tools) in some capacity, or am I writing the AI code itself that will be used for training models, etc.? I don’t know which path to go down, but I’d lean towards using models and API calls over having to become a deep AI expert.



  • I agree with you here. I suspect, from most answers I’ve read/seen here and in other places, that Python was chosen way back when for its dynamic and creative capabilities, was plenty fast/good enough for most cases, and just stuck. It’s also “easier” for the not-so-code-capable folks to learn and dabble with because it’s dynamic… it’s easier to grasp assigning any value to a variable than declaring specific types and being stuck using each variable for just that type.

    But I would think… and thus my reason for asking this… that today, nearing 2024, with all our experience in languages, threading, ever-growing CPU core counts, etc… we’d want a faster binary/native runtime, rather than an interpreted, single-threaded, resource-heavy language like Python or NodeJS, to really speed things up and take advantage of today’s hardware.

    But some have said that the underlying “guts” are C/C++ binaries, and Python (or any other language) more or less just calls those bits… hence the “glue” code (see the toy sketch below). In those cases, I can agree that it may not matter as much.

    I was thinking (and am still trying to learn a ton more about) the training aspect… if that is done on the GPU and the code that runs on it is as fast as it can be, it would reduce the time to train, thus making it possible to train much more specialized models much faster. What do I know, though. I started my AI journey literally a week ago and am trying to grasp what I imagine has taken years of development, into my old brain.
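
    A toy example of the “glue” idea as I currently understand it… the host language just dispatches into compiled C, which is where the actual work happens. Go can do the same trick via cgo (hypothetical toy that needs a C toolchain; not an ML library):

    ```go
    package main

    // Python + numpy/torch work the same way at a much bigger scale:
    // the interpreted language dispatches, compiled code does the math.

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // sqrt() here runs as native compiled C code, not Go.
        fmt.Println(C.sqrt(C.double(2.0)))
    }
    ```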


  • I’ve no clue what that means… I’ll take your word for it. :). Just started on this AI journey, and so far I’m trying to learn how to run a model locally and figure out how to maximize what little hardware I have, but I’m interested in the whole shebang… how you gather data, what sort of data, what it looks like, how you format it (is that inference?) so it can then be ingested during the training step. What code is used for training… what does training do, how does training result in a model, and what does running the model do… is the model a binary (code) that you just pass input to and get output from? So much to learn.



  • I hear you… that’s what I am trying to understand. I guess going back to when AI dev started, maybe Go wasn’t around much (too early), and Rust as well. But I questioned why the AI blokes would choose a dynamic, slow-runtime language for things like training AI, which seems to be the massive bulk of the CPU/GPU work, over much faster native binary languages like Go/Rust, or even C. But you said something that maybe is what I was missing, and others have said it too: Python is more or less “glue code” that uses underlying C (native) binary libraries. If that is the case, then I get it. I had assumed the super crazy long training times and expensive GPUs needed were due in part to Python’s much slower runtime… and that using Go/Rust/C would cut the large training times by quite a bit. But I am gathering from all the responses that the Python code just pushes the bulk of the work onto the GPU using native binary libs… and thus the code done in Python does not have to be super fast at runtime. So you pick up the “creative” side of Python and benefit from using it in ways that might be harder to do in Go or Rust.

    But some have replied that they are using Rust for inference, data prep, etc… I’ll have to learn what inference is… I’m not sure what that part is, nor do I fully understand what data prep entails. Is it just turning gobs of all sorts of data in various formats into a specific structure (I gather from some reading, a vector database) whose layout the training part understands… so you’re basically gathering data (scraping the web, reading CSV files, GitHub, etc.) and putting it into a very specific key/value (or similar) structure that the training bit then uses to train with?
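
    From what I’ve pieced together so far, the simplest form of data prep is just turning raw text into arrays of integer token IDs for the training code to consume. A toy Go version of my current understanding (real pipelines use subword tokenizers like BPE, not whitespace splitting):

    ```go
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corpus := "the cat sat on the mat"

        // Build a vocabulary: each unique word gets an integer ID.
        vocab := map[string]int{}
        var ids []int
        for _, word := range strings.Fields(corpus) {
            if _, ok := vocab[word]; !ok {
                vocab[word] = len(vocab)
            }
            ids = append(ids, vocab[word])
        }

        fmt.Println(ids) // [0 1 2 3 0 4]

        // Training examples are (context, next-token) pairs sliced
        // from this stream: the model learns to predict ids[i+1]
        // from ids[:i+1].
        for i := 0; i < len(ids)-1; i++ {
            fmt.Printf("context %v -> next %d\n", ids[:i+1], ids[i+1])
        }
    }
    ```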



  • So are you (your company) building models very fine-tuned for the specific needs of your company (products)? That is what I am trying to learn about… but knowing Go and not being a big fan of Python (I don’t know it well), I was hoping I could utilize my knowledge of Go + its runtime speed/memory/etc. to train my own custom models. I am NOT sure how all that works, though. I feel like it’s just some loop that reads in the prepared data and puts it in a new format, and that’s it. lol. I don’t quite understand what “training” does. How it works. Or the code behind training. Is it some iterative process… like it keeps repeating the same thing so the AI “learns”… so like… you ask it “What is 2+2” and it says 8, 7, 9, 13, 6, 5, 4, 2, 3, 4, 5, 4, 4, 5, 4, 4, 4, 4, 4, 4, … ?? So eventually, on some iteration, it gets the right answer consistently… and at that point you say “trained”, next question?
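
    Here’s my current (possibly wrong) mental model in code: training is indeed an iterative loop, but instead of repeating until the right answer pops out, it measures how wrong the output is and nudges the weights a little in the direction that reduces the error. A toy one-weight version in Go:

    ```go
    package main

    import "fmt"

    func main() {
        // Toy dataset: y = 2*x, so the "right" weight is 2.
        xs := []float64{1, 2, 3, 4}
        ys := []float64{2, 4, 6, 8}

        w := 0.0   // start from an arbitrary weight
        lr := 0.01 // learning rate: how big each nudge is

        for step := 0; step < 200; step++ {
            // Measure how wrong we are (gradient of squared error)...
            grad := 0.0
            for i := range xs {
                pred := w * xs[i]
                grad += 2 * (pred - ys[i]) * xs[i]
            }
            // ...and nudge the weight downhill a little.
            w -= lr * grad / float64(len(xs))
        }

        fmt.Printf("learned w ≈ %.3f\n", w) // converges to ~2.000
    }
    ```

    As I understand it (not gospel), a real model is this same loop with billions of weights and the heavy math pushed onto the GPU… and the resulting model file (gguf etc.) is basically those learned weights written to disk.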


  • OK… so that’s fair, but I would counter with… if Go/Rust were going to increase the runtime performance of training/using the AI models by a factor of 2, 3 or more, and the time to learn Go is a couple of weeks and Rust a couple of years (kidding… sort of), and your job for years to come is doing this sort of work, with the end result that, say, Go trains much faster or does data prep much faster… wouldn’t the benefits of learning Go or even Rust be worth it for the big savings in training/running time, as well as memory efficiency, fewer resources needed, etc.?

    Not saying you should, because I don’t even know if Go/Rust/Zig would result in much faster training/etc. I would assume that if that were the case, then companies like OpenAI would have been using these languages already, since they had the money and time to do so.



  • I don’t know why you say “nor as powerful”. In which way? For a Go developer who knows Go, performance/memory-wise it’s much more powerful. Maintaining code… it’s on par… a Go dev and a Python dev are very likely going to be “equal” in their ability to read/maintain code in their language of choice. Go lacks some things, sure, and is more verbose in some areas, sure, but I don’t see how that makes it a poor excuse for a modern language. On the contrary, it is very fast/easy to learn, produces small, fast, (largely) memory-efficient binaries that can be built on all platforms for all platforms, and has probably some of the best threading capabilities of any language (toy sketch below).

    I would argue that for the use case, perhaps Python IS better suited for AI… and I am just not at all knowledgeable about how that may be. So I’ll give you that… if the runtime and training bits of AI do NOT need the much greater performance of a language like Go or Rust, then so be it. But if it’s the usual “there are just good libraries in Python that have been written over many years and would have to be rewritten in Go/Rust/etc.” excuse… then that doesn’t tell me that Python is better for AI, just that converting those very good existing Python libs to Go/Rust/etc. would require work that nobody wants to do.
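
    On the threading point, this is the kind of thing I mean… fanning work out across every core takes a handful of lines (toy sketch, where the “work” is just squaring numbers):

    ```go
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        var wg sync.WaitGroup

        // Four workers pulling from a shared channel.
        for n := 0; n < 4; n++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    results <- j * j // stand-in for real work
                }
            }()
        }

        // Feed the jobs, then close the channel to stop the workers.
        go func() {
            for i := 1; i <= 10; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        // Close results once every worker has finished.
        go func() {
            wg.Wait()
            close(results)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }
    ```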


  • Burn looks interesting. So I think I am just lacking an understanding of how this all works. I assumed there is code used in AI that handles the models… some way to use the models as “data”, while the NLP, the AI “logic” brain, etc. would be done in code, and that that is largely the Python code. I assumed models were more or less data that a runtime AI engine uses to find answers to the questions asked, and thus that the model runners handled the NLP work, turning incoming queries into some model-specific format that lets the algorithms of the model do what they do… e.g. return responses and answers as if a human replied. I assumed ALL of that was tons and tons of code done in Python, and thus I was thinking: if that is the runtime “brain” the AI uses, then wouldn’t it run even faster in Go or Rust or something?

    I am sadly not sure if I am explaining this right. I just assumed there were likely millions of lines of code behind the AI “brain” and that the model was basically gobs of data in some sort of… for lack of a better word… compressed database format. So when “training” occurs… I am not entirely clear on what is going on, other than that it takes a ton of compute and results in a single .gguf or similar file, which is the model that can then be loaded by the likes of ollama, etc. and queried against by users in plain English. The code behind the training, the code behind running a model… that is what I am foggy on (I take a stab at my current guess below). Is there code IN the model… in binary format or something, along with ALL the data it draws from?

    I originally thought AI would use the internet in real time… but that would clearly take a LOT longer, since the AI would have to search the web for answers and then formulate some sort of intelligent response rather than just pasting snippets it finds.
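
    The closest I’ve come to a working mental model… the model file IS mostly just numbers (weights), and the “runner” is a comparatively small amount of code that multiplies your (tokenized) input through them. A toy single layer in Go… a real LLM is billions of these weights stacked into many layers (again, my rough understanding, not gospel):

    ```go
    package main

    import "fmt"

    func main() {
        // The "model": just numbers. A gguf file is essentially
        // billions of these, plus a little metadata.
        weights := [3][2]float64{
            {0.1, 0.2},
            {0.3, 0.4},
            {0.5, 0.6},
        }
        input := [2]float64{1.0, -1.0} // your (tokenized) query

        // The "runner": multiply the input through the weights.
        var output [3]float64
        for i := range weights {
            for j := range input {
                output[i] += weights[i][j] * input[j]
            }
        }
        fmt.Println(output) // ≈ [-0.1 -0.1 -0.1], one layer of "inference"
    }
    ```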


  • That’s an interesting response. I responded elsewhere that I am just missing how all this comes together. I assumed the runtime (e.g. the Python glue in this case) handles the NLP query you type in and turns it into some meaningful structure that is then applied to the model to find responses. I am unclear whether the model is the binary code/logic/brain of the AI… or just a ton of data in a compressed format that the model runner uses to find stuff. I assume the former, since Python is apparently glue code.

    But also… the training stuff… I am unclear whether that is gobs and gobs of Python code… if so, and it were converted to Go or Rust… wouldn’t it train MUCH faster, given the big runtime speedup of something like Go or Rust? Or does that ALSO not matter, since most of the training is done on GPUs/ASICs and, again, Python is just glue code using those GPU/ASIC libraries (which are likely in C/C++)? E.g. I assume TensorFlow and the like, sitting on top of NVIDIA’s libraries, are used for training.

    But the code that actually does the training… that is what I am really trying to understand. Code somehow results in an AI that can “think” (not AGI or sentient… but it seems like it can think) like a human… and respond, often with much better details and data than any human. ALL of the data it is trained on… is basically rows and rows of structures that are like key/values (or something like that), I assume, and somehow that results in a single file (gguf or whatever format it is) that a very small amount of Python code can then execute…

    I am just baffled by how all the different parts work… and the code behind them. I always assumed Python was used in the early days for “ease of learning”, and that nobody gave a shit about the slow runtime speed because back then it was just starting out, but now that it’s big, it’s too late to “rewrite” it in Go or Rust or whatnot. But it sounds like a lot of the training stuff uses native NVIDIA/ASIC/etc. binary libraries (likely done in C/C++), and the majority of the Python code doesn’t need the speed of Go/Rust/C… it is, again, glue code that just uses the underlying C/C++ libraries provided by the hardware vendors for training?