Hello after a long time :)

I am TokenBender.
Some of you may remember my previous model - codeCherryPop
It was very kindly received, so I’m hoping I won’t be killed this time either.

Releasing EvolvedSeeker-1.3B v0.0.1
A 1.3B model with 68.29% on HumanEval.
The base model is quite cracked; I just did with it what I usually try to do with every coding model.

Here is the model - https://huggingface.co/TokenBender/evolvedSeeker_1_3
I will post this in TheBloke’s server for GGUF, but I find that Deepseek coder’s GGUF sucks for some reason, so let’s see.

EvolvedSeeker v0.0.1 (First phase)

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs.

I have mostly curated instructions from evolInstruct datasets and some portions of glaive coder.

Around 3k answers were modified via self-instruct.

Recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token.
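For reference, here is a minimal sketch of ChatML-style prompting with Hugging Face transformers. The `<|im_start|>`/`<|im_end|>` markers and the system prompt are assumptions on my part; check the model’s tokenizer config for the exact special tokens before relying on them.

```python
# Minimal sketch: load evolvedSeeker_1_3 and prompt it in a ChatML-style format.
# The <|im_start|>/<|im_end|> markers and the system prompt are assumptions;
# verify the special tokens in the model's tokenizer config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TokenBender/evolvedSeeker_1_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = (
    "<|im_start|>system\n"
    "You are a helpful coding assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```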

This is a very early version of a 1.3B-sized model in my major project, PIC (Partner-in-Crime).
Next I’m going to teach this model JSON/Markdown adherence.

https://preview.redd.it/jhvz3xoj7y1c1.png?width=1500&format=png&auto=webp&s=3c0ec081768293885a9953766950758e9bf6db7d

I will just focus on simple things that I can do for now, but anything you guys say will be taken into consideration for fixes.

  • AfterAte@alien.top · 1 year ago

    Try using the Alpaca template, turn the temperature down to 0.1 or 0.2, and set the repetition penalty to 1. I haven’t tested this yet, but those settings work for Deepseek-coder. If you’re using oobabooga, the StarChat preset works for me.
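
    A rough sketch of those settings with the Hugging Face transformers generate API (the model path, the Alpaca-style prompt, and the token budget are placeholders; only the temperature and repetition-penalty values come from the suggestion above):

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TokenBender/evolvedSeeker_1_3"  # placeholder model path
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Alpaca-style prompt, as suggested above.
    prompt = (
        "### Instruction:\n"
        "Write a Python function that checks whether a number is prime.\n\n"
        "### Response:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.1,         # low temperature, per the comment
        repetition_penalty=1.0,  # 1.0 means no repetition penalty
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```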