The simple answer is that you can't parallelize LLM work.
An LLM generates its answer word by word (or token by token, to be more precise), so it's impossible to split the task into 10, 100, or 1000 pieces that you could send out to a distributed network.
Each word in the LLM's answer also serves as part of the input for calculating the next one, so LLMs are actually counter-distributed systems.
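The sequential dependency above can be sketched in a few lines. This is a toy illustration, not a real model: `next_token` is a hypothetical stand-in for an LLM's forward pass, but the shape of the loop is the point.

```python
def next_token(context: list[str]) -> str:
    # Hypothetical stand-in for a model call. The key property it shares
    # with a real LLM: the next token depends on ALL tokens so far.
    vocab = ["the", "cat", "sat", "."]
    return vocab[len(context) % len(vocab)]

def generate(prompt: list[str], n: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n):
        # Step k needs the full output of steps 0..k-1 before it can start,
        # so the iterations of this loop cannot run in parallel.
        tokens.append(next_token(tokens))
    return tokens

print(generate(["hello"], 3))  # each token was computed strictly in order
```

No matter how many machines you have, iteration k of this loop is blocked on iteration k-1, which is exactly why you can't fan the work out to a distributed network.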