
Part 6 (LLM Prompt Checks)

Introduction

If you haven't seen the previous parts of this series, it might be worth going and reading those first.

Previously, I focused on splitting chapter generation into multiple stages, with mostly positive results, but that introduced some issues which I need to fix now.

Approach

I saw that the LLM was cheating a bit and getting around doing the work, which made me think it might be appropriate to add another step to chapter generation: after each generation, the LLM now checks whether its output is good or not.

This wound up being the following function:

import json

import Writer.Config
import Writer.OllamaInterface
import Writer.PrintUtils


def LLMDidWorkRight(_Client, _Messages: list):

    Prompt: str = f"""
Please write a JSON formatted response with no other content with the following keys.
Note that a computer is parsing this JSON so it must be correct.

Did the LLM mostly address the prompt in the previous generation?

Please indicate if it did or did not by responding:

"DidAddressPromptFully": true/false

For example, if the previous response was "Good luck!" or something similar that doesn't *actually* do what is needed by the system, that would be an automatic fail.
Make sure to double check for things like that - sometimes the LLM is tricky and tries to sneak around doing what is needed.

Again, remember to make your response JSON formatted with no extra words. It will be fed directly to a JSON parser.
"""

    Writer.PrintUtils.PrintBanner("Prompting LLM To Check Previous Output", "green")
    Messages = _Messages
    Messages.append(Writer.OllamaInterface.BuildUserQuery(Prompt))
    Messages = Writer.OllamaInterface.ChatAndStreamResponse(_Client, Messages, Writer.Config.CHECKER_MODEL)
    Writer.PrintUtils.PrintBanner("Finished Getting Check Feedback", "green")

    # Keep asking for revisions until the checker's response parses as valid JSON.
    while True:

        RawResponse = Writer.OllamaInterface.GetLastMessageText(Messages)
        # Strip markdown code fences (```json ... ```) that models often add
        # despite being told not to.
        RawResponse = RawResponse.replace("`", "")
        RawResponse = RawResponse.replace("json", "")

        try:
            Dict = json.loads(RawResponse)
            return Dict["DidAddressPromptFully"]
        except Exception as E:
            # Parsing failed - feed the parser error back to the LLM and ask for a fix.
            Writer.PrintUtils.PrintBanner("Error Parsing JSON Written By LLM, Asking For Edits", "red")
            EditPrompt: str = f"Please revise your JSON. It encountered the following error during parsing: {E}."
            Messages.append(Writer.OllamaInterface.BuildUserQuery(EditPrompt))
            Writer.PrintUtils.PrintBanner("Asking LLM To Revise", "red")
            Messages = Writer.OllamaInterface.ChatAndStreamResponse(_Client, Messages, Writer.Config.CHECKER_MODEL)
            Writer.PrintUtils.PrintBanner("Done Asking LLM To Revise", "red")

I've written a bit of code, not shown here, that implements these interfaces, but the general gist is above. If you want to check out the full code, head over to the git repo (see the main page for this project).
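For reference, here's a rough sketch of what those interface helpers might look like, assuming the ollama Python client. This is my reconstruction for illustration, not the repo's actual code:

import ollama

def BuildUserQuery(_Query: str):
    # Ollama's chat API takes messages as {"role": ..., "content": ...} dicts.
    return {"role": "user", "content": _Query}

def GetLastMessageText(_Messages: list):
    # The last message in the conversation is the LLM's most recent response.
    return _Messages[-1]["content"]

def ChatAndStreamResponse(_Client: ollama.Client, _Messages: list, _Model: str):
    # Stream the response chunk by chunk, printing as we go, then append
    # the assembled assistant message to the conversation history.
    Response: str = ""
    for Chunk in _Client.chat(model=_Model, messages=_Messages, stream=True):
        Response += Chunk["message"]["content"]
        print(Chunk["message"]["content"], end="", flush=True)
    _Messages.append({"role": "assistant", "content": Response})
    return _Messages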

So now, chapter generation calls this function after every stage, checking that the LLM didn't drop the ball at any point in the process.
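In practice, that means wrapping each stage in a loop along these lines (a simplified sketch; GenerateStage2Dialogue is a hypothetical stand-in for the real per-stage functions):

# Regenerate the stage until the checker says the LLM actually did the work.
while True:
    Messages = GenerateStage2Dialogue(Client, Messages)  # hypothetical stage function
    if LLMDidWorkRight(Client, Messages):
        break
    Writer.PrintUtils.PrintBanner("Stage Output Failed Check, Retrying", "red")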

Prior to this change, the system would sometimes generate a story where, halfway through, the entire story would change - totally different setting, characters, plot, genre, etc. I later figured out that this was happening because an LLM in the chapter generation stage wrote "good luck" or something similar as its entire output for that stage. Thus, subsequent stages would be given something that looks like this as a prompt:

Prompt = f"""
{ContextHistoryInsert}


Please add dialogue to the following chapter {_ChapterNum} of {_TotalChapters} based on the following criteria and any previous chapters.
Pay attention to the previous chapters, and make sure you continue seamlessly from them. It's okay to deviate from the outline a bit if needed, just make sure to somewhat follow it (take creative liberties).

Don't take away content, instead expand upon it to make a longer and more detailed output.

Here's what I have so far for this chapter:
<CHAPTER_CONTENT>
{Stage2Chapter}
</CHAPTER_CONTENT>

As a reminder, keep the following criteria in mind:
    - Dialogue: Does the dialogue make sense? Is it appropriate given the situation? Does the pacing make sense for the scene (e.g., is it fast-paced because they're running, or slow-paced because they're having a romantic dinner)?
    - Disruptions: If the flow of dialogue is disrupted, what is the reason for that disruption? Is it a sense of urgency? What is causing the disruption? How does it affect the dialogue moving forwards?
    - Pacing:
        - Are you skipping days at a time? Summarizing events? Don't do that, add scenes to detail them.
        - Is the story rushing over certain plot points and excessively focusing on others?

Don't answer these questions directly, instead make your writing implicitly answer them. (Show, don't tell)

Make sure that your chapter flows into the next and from the previous (if applicable).

Also, please remove any headings from the outline that may still be present in the chapter.

Remember, have fun, be creative, and add dialogue to chapter {_ChapterNum}!

"""

Since the chapter generation previously failed, the chapter content would wind up as this:

<CHAPTER_CONTENT>
"Good luck!"
</CHAPTER_CONTENT>

Then, the model would forget what the chapter was supposed to be about, since that snippet was all it saw, and it would start writing something random.

Results

So far, this seems to have really improved the quality of the output: it fixed the "good luck!" problem, which in turn massively improved between-chapter coherency.

I'm still testing and debugging as of this writing, but it seems very promising so far, and future work will hopefully improve it further.