SillyTavern Settings for MythoMax 13B

I've been playing around with MythoMax for some time, and for a 13B model it's arguably one of the better options for role-playing. I won't say it's the best, because my experience isn't that in-depth, but I have messed around with the settings considerably to get something that seems consistent and doesn't generate junk.

Firstly, you'll want to set your token padding to 100. This is basically the bare minimum; with anything less, MythoMax will generate junk in short order as soon as you have filled up your context.

I set my response length to 500 tokens, but my overall context is 6144, so take that into account. You may want to reduce the response length, since it's taken directly out of your prompt limit: the more you reserve for the reply, the less backstory goes into the prompt (arguably a bad trade, as I can't seem to get it to produce more than around 400 tokens anyway).
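
To put numbers on that trade-off, here's a rough sketch of the prompt budget these settings leave; the exact accounting SillyTavern does may differ slightly, so treat it as an estimate:

```python
# Rough prompt-budget arithmetic for the numbers above; treat as an estimate,
# since SillyTavern's exact bookkeeping may differ slightly.
context_length  = 6144   # total context ("max_length" in the JSON below)
response_tokens = 500    # reserved for the reply ("genamt")
token_padding   = 100    # the padding discussed above

prompt_budget = context_length - response_tokens - token_padding
print(prompt_budget)     # 5544 tokens left for the card, persona, and chat history
```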

Settings

::: spoiler Settings
- Temperature: 0.74
- Top P: 0.66
- Top K: 90
- Top A: 0
- Typical P: 1
- Min P: 0
- Tail Free Sampling: 1
- Repetition Penalty: 1.1
- Repetition Penalty Range: 400
- Frequency Penalty: 0.07
- Presence Penalty: 0
- Dynamic Temperature: Off
- Mirostat Mode: 2
- Mirostat Tau: 5
- Mirostat Eta: 0.1
- Ban EOS Token: Disabled
- Skip Special Tokens: Disabled
- CFG: 1.2

Sampling Order:

1. Repetition Penalty
2. Top K
3. Top A
4. Tail Free Sampling
5. Typical P
6. Top P & Min P
7. Temperature
:::
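
For reference, the `sampler_order` array in the JSON below encodes that same order numerically. The name-to-ID mapping here is my assumption of the usual KoboldAI-style sampler IDs, so double-check it against your backend:

```python
# Assumed KoboldAI-style numeric sampler IDs (what the "sampler_order" field in
# the JSON preset below appears to use); verify against your backend if unsure.
SAMPLER_NAMES = {
    0: "Top K",
    1: "Top A",
    2: "Top P",
    3: "Tail Free Sampling",
    4: "Typical P",
    5: "Temperature",
    6: "Repetition Penalty",
}

sampler_order = [6, 0, 1, 3, 4, 2, 5]   # matches the preset below
print(" -> ".join(SAMPLER_NAMES[i] for i in sampler_order))
# Repetition Penalty -> Top K -> Top A -> Tail Free Sampling -> Typical P -> Top P -> Temperature
```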

JSON

The JSON is supplied so you can just save it as a text file and import it without needing to change anything manually.

::: spoiler JSON

```json
{
    "temp": 0.74,
    "temperature_last": true,
    "top_p": 0.66,
    "top_k": 90,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0,
    "rep_pen": 1.1,
    "rep_pen_range": 400,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.07,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0,
    "max_temp": 2,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": false,
    "streaming": true,
    "mirostat_mode": 2,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1.15,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 500,
    "max_length": 6144
}
```

:::
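
If you paste the JSON above by hand, it's worth a quick round-trip through a parser before importing, just to catch a stray comma or missing brace; a minimal sketch in Python (the filename is just an example):

```python
import json
from pathlib import Path

# Quick sanity check before importing: round-trip the pasted preset through a
# JSON parser so SillyTavern's import doesn't choke on a malformed file.
path = Path("mythomax-13b.json")           # example filename, use whatever you like
settings = json.loads(path.read_text())    # raises ValueError if the paste is malformed

# Spot-check a couple of the values discussed above.
assert settings["genamt"] == 500 and settings["max_length"] == 6144

path.write_text(json.dumps(settings, indent=4))
```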