Not sure, but it seems they fine-tuned gpt-3.5-turbo-16k, which is faster than GPT-4; hence the claim of GPT-3.5 speed with a 16K context limit.

They’re dubiously naming it Phind V7. Also, they’ve ripped off WizardLM’s code in the past and rebranded it to secure seed funding.

I doubt it’s based on CodeLlama 34B, unless they trained on a dataset that makes the model hallucinate that it’s GPT-3.5 Turbo.

  • api@alien.topB · 1 year ago

    If the training data contains statements to the effect that the model was extracted from the brain of a living walrus, that’s what it will tell you when you ask where it came from. These things aren’t self-aware in any sense. They don’t contemplate themselves or ask “who am I?”

  • kristaller486@alien.topB · 1 year ago

    They trained their model on synthetic GPT-3.5-turbo data plus a mix of their own data. It’s normal that V7 says “I am GPT-3.5,” but it’s not normal that Phind uses synthetic OpenAI GPT data, because that violates OpenAI’s terms.

    • cuyler72@alien.topB · 1 year ago

      OpenAI’s terms only mean that they might ban your account if they catch you gathering it. The data itself is not copyrightable in any way; OpenAI has no legal right to control its use.

  • Soc13In@alien.topB · 1 year ago

    There is so much investor money flowing into AI startups that it is completely unsurprising that somebody would do that.

    • Xhehab_@alien.topOPB · 1 year ago

      Yeah, but at least Phind should have cleaned the training dataset rows that mention gpt-3.5-turbo/gpt-3…lol
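      A cleaning pass like the one described above could be as simple as a regex scan over the training rows. This is a hypothetical sketch; the pattern, the `text` field, and the row format are all assumptions, not Phind’s actual pipeline:

      ```python
      import re

      # Hypothetical contamination filter: drop rows whose text names the
      # teacher model, so the fine-tuned model doesn't claim to be GPT-3.5.
      CONTAMINATION = re.compile(r"gpt-?3(\.5)?(-turbo)?|openai", re.IGNORECASE)

      def clean_rows(rows):
          """Keep only rows whose text doesn't mention the teacher model."""
          return [r for r in rows if not CONTAMINATION.search(r["text"])]

      rows = [
          {"text": "def add(a, b): return a + b"},
          {"text": "As an AI developed by OpenAI, I cannot..."},
          {"text": "I am GPT-3.5-turbo, a language model."},
      ]
      print(len(clean_rows(rows)))  # 1 row survives
      ```

      Real pipelines tend to go further (fuzzy matching, paraphrase detection), since the model can echo its teacher without using these exact strings.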

    • donotdrugs@alien.topB · 1 year ago

      Yeah and it baffles me how many people, even in the tech community, take LLM output as hard facts.

  • lakolda@alien.topB · 1 year ago

    GPT-3.5 Turbo apparently has 20 billion parameters, significantly fewer than the previous best Phind models. Given how bad GPT-3.5 is, I think they more likely just fine-tuned some other base model on GPT-3.5 outputs.