PaLM 2 Model: How to Fine-tune the Chatbot (chat-bison@001)

Check out how you can train the bot so that it can respond like a human...
Aug 31 2023 · 6 min read

Background

In the previous blog, we saw the basics of how to create your own chatbot that talks with you and gives you suggestions when you need them.

It would be unfair to assume that you (the chatbot creator) are the only one who will ever use the bot. Suppose you want your bot to act as a Health and Fitness Advisor. In that role, it needs to handle user queries respectfully and give users tips to stay healthy.

In this blog, we will explore how we can instruct (tune) our bot to behave with certain manners and respond to users the way a human advisor would.

Let’s tune our bot to be a health and fitness advisor!


The PaLM 2 model (chat-bison@001) doesn't provide pipeline tuning, but it facilitates prompt tuning (via context and examples) as an alternative.

Using the Google Cloud Console, we will first see how model tuning works with test prompts, and then we will tune our bot using code.

Model Tuning with Prompts using Google Cloud Console

I'm assuming that you've already set up the project and enabled the Vertex AI API. If not, do that first and you're good to go!

  • From the Google Cloud Console Dashboard, search for Vertex AI, and it will redirect you to the Vertex AI dashboard.

You will see the dialog below if you haven't enabled the Vertex AI API.

Vertex AI API is not enabled
  • Go to Language -> MY PROMPTS -> CREATE CHAT PROMPT
Create a test prompt for fine-tuning the model
  • Ask your initial question to the chat model and it will respond to you!
Initiate conversation with chat model

On the right panel, you can see the parameters set to their default values, as we discussed in the how to create a chatbot blog. You can also change the chat model you want to test. Save the prompt with a name you like!

Change chat models

On the left side, you'll see the Context and Examples tabs.
Context is used for customizing the behavior of the chat model.
For example, in our use case of tuning the model as an advisor, we will instruct it to act as a health and fitness advisor and to avoid responding to queries outside health and fitness.

Examples are used to demonstrate sample input and output message pairs, showing the model how it should handle a conversation.
For example, acting as an advisor, our bot can't perform any physical action; it can only give tips and advice about health.
Therefore, if the user asks “Can you give me insulin?”, the bot should respond “I’m sorry! I can’t do that.”

You can set up such input and output messages using the Examples tab (add as many as you need).
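In the SDK, each such pair maps to an InputOutputTextPair object, which we'll use in the code section below. A minimal sketch:

from vertexai.preview.language_models import InputOutputTextPair

# one example pair: the user input and the desired bot output
example = InputOutputTextPair(
    input_text="Can you give me insulin?",
    output_text="I'm Sorry! I can't do that.",
)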

Let’s demonstrate it using the cloud console:

  • Add Context and Examples below
Fine-tune the chat model to act as a health & fitness advisor
  • Now, ask anything related or unrelated to health, and the model will answer keeping our instructions (Context + Examples) in mind.
Chat model responding as an advisor

Yayy! We have fine-tuned the chat-bison@001 model with the help of a prompt 🎊. But that was just a test prompt, giving us a hint of how we can fine-tune the model and our bot!

Now, we want to do the same thing using code, as we need to deploy our bot somewhere.

Model Tuning using code

In the right panel, click the <> VIEW CODE button and you will see the Context and Examples added to the chat_model.start_chat() method. Copy it and replace our start_chat() method with it. (Refer to How to create your own chatbot.)

Copy start_chat() content

Don’t forget to add the required imports. chat.py should look like this,

import vertexai
from vertexai.preview.language_models import ChatModel, InputOutputTextPair

def doChat():

    # initialize vertexai
    # e.g. project = "my-project"
    # e.g. location = "us-central1"
    vertexai.init(project="your-project-name", location="your-project-location")

    # load the model
    chat_model = ChatModel.from_pretrained("chat-bison@001")

    # define model parameters
    parameters = {
        "temperature": 0.2,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }

    # start a chat session with the model
    chat = chat_model.start_chat(
        context="""Act as a health and fitness advisor that gives tips and suggestions to the users.
                Don't respond to questions other than health and fitness, just respond like "I'm Sorry! I can't help with that.\"""",
        examples=[
            InputOutputTextPair(
                input_text="Can you give me insulin?",
                output_text="I'm Sorry! I can't do that.",
            )
        ],
    )

    # send a message to the language model and get a response
    response = chat.send_message("Can you give me an injection?", **parameters)

    return response


print(doChat())  # bot replies "I'm Sorry! I can't do that."

Run chat.py and ask the same question you asked the advisor prompt; it will answer in the same manner.

We can add as many Examples and as much Context as we want, but keep in mind that Google Vertex AI charges according to the number of input and output characters.
Therefore, at some point, you might need to pay attention to the billing section. Google provides a free trial worth $400 for 90 days, though!
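Since billing is character-based, it can help to roughly estimate how many characters each request sends. A minimal sketch (the counting logic is mine; actual rates come from the Vertex AI pricing page, not from anything hard-coded here):

# rough estimate of input characters sent per request:
# context + all example pairs + the current question
context = "Act as a health and fitness advisor..."
examples = [("Can you give me insulin?", "I'm Sorry! I can't do that.")]
question = "Can you give me an injection?"

input_chars = len(context) + sum(len(i) + len(o) for i, o in examples) + len(question)
print(f"~{input_chars} input characters per request (output characters billed separately)")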

Maintain message history to resume the conversation

There comes a time when we need to maintain the history of previous conversations, especially when we leave a conversation midway and want to resume it later.

The chat-bison model also provides support for this. Think of the model as just an input-processing unit that produces output based on its input. It doesn't store anything, such as what we previously asked or how it previously responded.

The conversation history can be provided to the model via the message_history parameter of the start_chat() method, so that the model goes through it and picks up the conversation where it left off.

Vertex AI uses the ChatMessage class for this. It consists of two key-value pairs: author and content.

For the user input, it has a format like,

{
 "author" : "user",
 "content" : "hello bot!"
}

For the bot response, it has a format like,

{
 "author" : "bot",
 "content" : "Hello! How can I help you today?"
}
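If you store the conversation as dicts like these (in a file or database), converting them back into ChatMessage objects is a one-liner. A minimal sketch, where stored_history is a hypothetical list of such dicts:

from vertexai.preview.language_models import ChatMessage

# hypothetical stored history in the dict format shown above
stored_history = [
    {"author": "user", "content": "hi"},
    {"author": "bot", "content": "Hello! How can I help you today?"},
]

# rebuild ChatMessage objects to pass as message_history
message_history = [ChatMessage(author=m["author"], content=m["content"]) for m in stored_history]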

Add the following code to the start_chat() method and import the ChatMessage class.

message_history=[
    ChatMessage(author="user", content="hi"),
    ChatMessage(author="bot", content="Hello! How can I help you?"),
]

The start_chat() method will now look like this,

chat = chat_model.start_chat(
    context="""Act as a health and fitness advisor that gives tips and suggestions to the users.
            Don't respond to questions other than health and fitness, just respond like "I'm Sorry! I can't help with that.\"""",
    examples=[
        InputOutputTextPair(
            input_text="Can you give me insulin?",
            output_text="I'm Sorry! I can't do that.",
        )
    ],
    message_history=[
        ChatMessage(author="user", content="hi"),
        ChatMessage(author="bot", content="Hello! How can I help you?"),
    ],
)

Remember, the total number of messages (history plus the current one) sent to the model should always be odd (1, 3, 5, ...). Otherwise, it will throw an error like:
400 There should be odd number of messages for correct alternating turn.
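One way to guard against this error is to trim the history so it always ends with a bot turn, making history plus the current message an odd count. A minimal sketch with a hypothetical helper (not part of the SDK):

def safe_history(history):
    # drop a trailing user message so the history has an even length
    # ending with a bot turn; adding the current message then gives
    # the odd total the API expects
    if history and history[-1].author == "user":
        history = history[:-1]
    return history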

Format the Chatbot responses

We can also instruct the chatbot to respond in a certain format when we want to display the conversation in a unique style.

For example, let's instruct our advisor prompt to wrap greeting messages inside <greeting></greeting>, so that we can easily identify and format the messages using regex.

Add While greeting users, give it in the format of <greeting></greeting> to the context and send the message “hi” or “hello” to the chatbot.
It will print <greeting>Hello! How can I help you?</greeting>.
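On the display side, a quick regex pass can pull the greeting text out of the tags. A minimal sketch (response_text stands in for the model's reply):

import re

response_text = "<greeting>Hello! How can I help you?</greeting>"

# extract the text between the <greeting> tags, if present
match = re.search(r"<greeting>(.*?)</greeting>", response_text, re.DOTALL)
if match:
    print(match.group(1))  # prints "Hello! How can I help you?"; style it however your UI needs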

The final code will look like this,

import vertexai
from vertexai.preview.language_models import ChatModel, InputOutputTextPair, ChatMessage

def doChat():

    # initialize vertexai
    # e.g. project = "my-project"
    # e.g. location = "us-central1"
    vertexai.init(project="your-project-name", location="your-project-location")

    # load the model
    chat_model = ChatModel.from_pretrained("chat-bison@001")

    # define model parameters
    parameters = {
        "temperature": 0.2,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }

    # start a chat session with the model
    chat = chat_model.start_chat(
        context="""Act as a health and fitness advisor that gives tips and suggestions to the users.
                Don't respond to questions other than health and fitness, just respond like "I'm Sorry! I can't help with that."
                While greeting users, give it in the format of <greeting></greeting>""",
        examples=[
            InputOutputTextPair(
                input_text="Can you give me insulin?",
                output_text="I'm Sorry! I can't do that.",
            )
        ],
        message_history=[
            ChatMessage(author="user", content="hi"),
            ChatMessage(author="bot", content="Hello! How can I help you?"),
        ],
    )

    question = "hi"  # replace "hi" with whatever you want to ask

    # send the message to the language model and get a response
    response = chat.send_message(question, **parameters)  # responds with <greeting>Hello! How can I help you?</greeting>

    return response


# invoke doChat()
print(doChat())

Final Thoughts

We have tuned the chat-bison@001 (PaLM 2) model to act as a health and fitness advisor, first with a test prompt and then from code.

Also, we explored how to provide message history so that the bot can easily resume the conversation, instead of starting it from scratch.

Note that the bot doesn't persist history; it simply takes input and gives output. So we need to store the history on our own and provide it to the bot when asking further questions.
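In practice, that means appending each turn to your own history list and passing it back the next time you call start_chat(). A minimal sketch, assuming chat_model, context, and parameters are set up as in the code above (the loop itself is illustrative, not from the SDK):

history = []  # our own store; the model itself remembers nothing

while True:
    question = input("You: ")

    # resume the session with everything said so far
    chat = chat_model.start_chat(context=context, message_history=history)
    response = chat.send_message(question, **parameters)
    print("Bot:", response.text)

    # persist both turns for the next round
    history.append(ChatMessage(author="user", content=question))
    history.append(ChatMessage(author="bot", content=response.text))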

We can also instruct the bot to respond in specific formats.

Tuning can be done in various ways:

  • What to respond to and what not to?
  • How to respond to spam queries?
  • What should the response format be?
  • What to avoid in user queries?
  • Which criteria need more focus?
  • What role should the bot play? And many more.
