• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: October 17th, 2023

  • You can try something like Claude.ai, which has a long context window and is free to use.

    You can use a Python script to load the model, split the text into chunks, and ask the model to translate chunk by chunk (see the sketch below); then you don’t need a model with a 64K context window (which takes up a lot of memory and isn’t that common).

    It also depends on the language you are trying to translate. It is best to find a model that has been trained on the source language; most models have a large English corpus, with many fine-tuned on Chinese data, but there are specialty models for German/Arabic/Japanese. Try a Google search or look on Hugging Face.
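    Here is a minimal sketch of that chunk-and-translate approach, assuming a German-to-English job with the Helsinki-NLP/opus-mt-de-en model from Hugging Face; the file names, chunk size, and model are placeholders to swap for your own language pair. Splitting at paragraph boundaries keeps each piece coherent for the model.

    ```python
    # Chunk-and-translate sketch: split a long text into pieces that fit the
    # model's context, translate each piece, and join the results.
    # The model, file names, and chunk size below are illustrative assumptions.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

    def chunk_text(text, max_chars=1000):
        """Group paragraphs into chunks of roughly max_chars characters each."""
        chunks, current = [], ""
        for paragraph in text.split("\n\n"):
            if current and len(current) + len(paragraph) > max_chars:
                chunks.append(current.strip())
                current = ""
            current += paragraph + "\n\n"
        if current.strip():
            chunks.append(current.strip())
        return chunks

    with open("book_de.txt", encoding="utf-8") as f:  # hypothetical input file
        source = f.read()

    translated = []
    for chunk in chunk_text(source):
        # Each chunk fits the model's window, so no 64K context is needed.
        result = translator(chunk)
        translated.append(result[0]["translation_text"])

    with open("book_en.txt", "w", encoding="utf-8") as f:  # hypothetical output file
        f.write("\n\n".join(translated))
    ```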

  • Cost is really the main issue. You can train a local LLM, or you can train ChatGPT as well. I wouldn’t be surprised if someone is already making a custom GPT for helping with Unity or Unreal Engine projects.

    For privacy, companies with money will use a private instance on Azure. It is like 2-3 times the cost, but your data is safe, as you have a contract with MS to keep it safe and private, with large financial penalties if it isn’t.

    Also, running an LLM locally isn’t zero cost, depending on the electricity price in your area. GPUs consume a LOT of power; the 4090 draws around 450 watts. A rough estimate is sketched below.
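    As a back-of-the-envelope illustration, assuming 8 hours of use per day and an electricity price of $0.30/kWh (both made-up numbers to replace with your own):

    ```python
    # Rough electricity cost of running a GPU locally; all inputs are assumptions.
    gpu_watts = 450          # approximate RTX 4090 board power
    hours_per_day = 8        # assumed daily usage
    price_per_kwh = 0.30     # assumed electricity price in USD/kWh

    kwh_per_day = gpu_watts / 1000 * hours_per_day
    cost_per_day = kwh_per_day * price_per_kwh
    print(f"{kwh_per_day:.1f} kWh/day -> ${cost_per_day:.2f}/day, ${cost_per_day * 30:.2f}/month")
    # -> 3.6 kWh/day -> $1.08/day, $32.40/month
    ```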