Not sure, but it seems they fine-tuned gpt-3.5-turbo-16k, which is faster than GPT-4, hence the claim of GPT-3.5 speed with a 16K context limit.
They’re dubiously naming it Phind V7. Also, they’ve ripped off WizardLM’s code in the past and rebranded it to secure seed funding.
I doubt it’s based on CodeLlama 34B, unless they trained it on a dataset that makes the model hallucinate that it’s GPT-3.5 Turbo.
If it’s not local, it goes in the bin anyway. Don’t worry about it.
If the training data contains statements to the effect that the model was extracted from the brain of a living walrus, that’s what it will tell you when you ask where it came from. These things aren’t self-aware in any sense. They don’t contemplate themselves or ask “who am I?”
They trained their model on synthetic GPT-3.5-turbo data mixed with their own data. So it’s expected that V7 says “I am gpt-3.5”; what isn’t fine is Phind using synthetic OpenAI GPT data in the first place, since that violates OpenAI’s terms.
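For context, “synthetic GPT-3.5-turbo data” usually means distillation: prompting the OpenAI API and saving the responses as training examples. A rough sketch of the pattern, assuming the openai Python client (>= 1.0) and a hypothetical list of prompts:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompts; a real distillation set would have many thousands.
prompts = ["Write a binary search in Python.", "Explain Big-O notation."]

with open("synthetic.jsonl", "w") as out:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo-16k",
            messages=[{"role": "user", "content": prompt}],
        )
        # Each (prompt, completion) pair becomes one training row.
        row = {"prompt": prompt, "completion": resp.choices[0].message.content}
        out.write(json.dumps(row) + "\n")
```

Any self-references in those completions (“As an AI language model developed by OpenAI…”) get baked into the fine-tune, which is exactly why the resulting model later claims to be GPT-3.5.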
OpenAI’s terms only mean they might ban your account if they catch you gathering it. The data itself isn’t copyrightable in any way; OpenAI has no legal right to control its use.
There is so much investor money flowing into AI startups that it’s completely unsurprising that somebody would do that.
Aren’t language models well known for bold-faced bullshitting?
Yeah, but at the very least Phind should have cleaned the training dataset rows that mention “gpt-3.5-turbo”/“gpt-3”… lol
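For what it’s worth, that cleaning pass is a few lines of scripting. A minimal sketch, assuming the training set is JSONL with a `text` field (the filenames and field name here are hypothetical):

```python
import json
import re

# Patterns that give away the data's origin; extend as needed.
LEAK_PATTERN = re.compile(r"gpt-?3(\.5)?(-turbo)?|openai", re.IGNORECASE)

def clean(in_path: str, out_path: str) -> None:
    kept = dropped = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            row = json.loads(line)
            # Drop any row whose text mentions the upstream model.
            if LEAK_PATTERN.search(row.get("text", "")):
                dropped += 1
                continue
            dst.write(line)
            kept += 1
    print(f"kept {kept}, dropped {dropped}")

clean("train.jsonl", "train.cleaned.jsonl")
```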
Yeah, and it baffles me how many people, even in the tech community, take LLM output as hard fact.
You can’t ask an LLM about itself; it has no privileged self-knowledge beyond whatever its training data or system prompt says about it.
People seem to forget that language models are text prediction engines, not actual intelligence.
GPT-3.5 Turbo apparently has 20 billion parameters, significantly fewer than the previous best Phind models. Given how bad GPT-3.5 is, I think it’s more likely they just fine-tuned some other base model on GPT-3.5 outputs.
Isn’t it 175B?
The recent Microsoft paper on CodeFusion leaked it.