It’s good to have a friendly companion with whom we can share our feelings and get consolation as well as suggestions for our problems. A bot can do such things for us, since we can’t be with our friends or mentors all the time!
To implement a bot that can chat with us like our friend, I have used Google’s PaLM 2 foundation model — chat-bison.
In this blog, we will explore how to implement a chatbot of your own, keeping Google’s PaLM 2 model under the hood. I’m assuming that you’re signed in to the Google Cloud Console and have already set up the project. Also, don’t forget to enable the Vertex AI API.
Google has tried its best to document things for Vertex AI, but at some points I found it a bit confusing. That’s why I came up with the Vertex AI Terminologies blog, which explains it in a more natural way.
chat-bison(chat-bison@001) is a better candidate when you want…
text-bison(text-bison@001) is suitable when you want…
You can use either; ultimately, both serve the same purpose. But the blocker is that the PaLM API is not yet Generally Available. If you want to access it early, you need to join the waitlist, and that’s only allowed for the US region.
That’s why in this blog, we will implement a chatbot using Vertex AI SDK for Python.
To use the Vertex AI SDK, we first need to authenticate the request. That tells the SDK which user the request is coming from.
Google Authentication can be done by setting ADC(Application Default Credentials) in different ways according to environments:
Refer to How to Configure Gcloud CLI for more details.
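If you have the gcloud CLI installed locally, setting up ADC is a single command; note that it opens a browser window for Google sign-in, so it only suits interactive environments:

```shell
# Set up Application Default Credentials (ADC) for local development
gcloud auth application-default login
```

After this, the Vertex AI SDK picks up the credentials automatically, with no extra configuration in code.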
NOTE — If you face an issue like gcloud: command not found, consider restarting the system once.
If you haven’t set up Application Default Credentials (ADC) using the Google Cloud CLI, consider adding an environment variable GOOGLE_APPLICATION_CREDENTIALS that points to the service account key file.
Use an existing service account from the project or create a new one.
NOTE: Don’t assign the content of the service account key; GOOGLE_APPLICATION_CREDENTIALS expects the location of the service account key file.
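For example, the variable can be exported in your shell (the key path below is a placeholder for wherever you saved the JSON key):

```shell
# GOOGLE_APPLICATION_CREDENTIALS must hold the *path* to the key file, not its content
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"
```

Add it to your shell profile (e.g. ~/.bashrc) if you want it to persist across sessions.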
If you have used a service account key for authentication, go through these steps (skip them if you authenticated using the Google Cloud CLI).
Once you have the service account key handy, make sure you add the Vertex AI User role to the service account. Otherwise, the model (chat-bison) won’t allow you to interact with it and will give you an error like the one below. That’s because the service account wants to access the Vertex AI service but doesn’t have sufficient permission for it.
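If you prefer the command line over the console, the role can also be granted with gcloud; the project ID and service-account address below are placeholders for your own values:

```shell
# Grant the Vertex AI User role (roles/aiplatform.user) to the service account
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"
```
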
Follow the steps below to add the Vertex AI User permission:
Click Edit Principal on the service account you want to add the permission to.
Click the ADD ANOTHER ROLE button and choose Vertex AI User.

Assuming you’ve already installed python3, let’s activate a virtual environment. It’s totally optional, but it’s the recommended way, as it keeps the project isolated and unaffected by external dependencies.
python3 -m venv .venv
source .venv/bin/activate
Let’s first install the required libraries for Vertex AI.
pip install google-cloud-aiplatform
pip install vertexai
Create a file chat.py and implement a doChat() function that will communicate with the chat-bison@001 model.

import vertexai
from vertexai.preview.language_models import ChatModel
def doChat():
    return ""
def doChat():
    # initialize vertexai
    # e.g. project = "my-project", location = "us-central1"
    vertexai.init(project="your-project-name", location="your-project-location")
    return ""
    # load model
    chat_model = ChatModel.from_pretrained("chat-bison@001")

    # model parameters
    parameters = {
        "temperature": 0.2,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }

    # starts a chat session with the model
    chat = chat_model.start_chat()

    # sends message to the language model and gets a response
    response = chat.send_message("hi", **parameters)
import vertexai
from vertexai.preview.language_models import ChatModel

def doChat():
    # initialize vertexai
    # e.g. project = "my-project", location = "us-central1"
    vertexai.init(project="your-project-name", location="your-project-location")

    # load model
    chat_model = ChatModel.from_pretrained("chat-bison@001")

    # define model parameters
    parameters = {
        "temperature": 0.2,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }

    # starts a chat session with the model
    chat = chat_model.start_chat()

    # sends message to the language model and gets a response
    response = chat.send_message("hi", **parameters)  # user says "hi"
    return response

# Invoke doChat()
print(doChat())  # bot replies "Hi there! How can I help you today?"
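A natural next step is to wrap this in a simple read–eval loop so you can chat interactively, reusing one chat session so the model keeps the conversation history across turns. This is a sketch assuming the same chat-bison@001 setup as above (the project name and location are placeholders):

```python
import vertexai
from vertexai.preview.language_models import ChatModel

def main():
    # initialize vertexai (placeholders: replace with your project and location)
    vertexai.init(project="your-project-name", location="your-project-location")
    chat_model = ChatModel.from_pretrained("chat-bison@001")
    parameters = {
        "temperature": 0.2,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }

    # one chat session keeps the conversation history across turns
    chat = chat_model.start_chat()
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() in ("quit", "exit"):
            break
        response = chat.send_message(user_input, **parameters)
        print("Bot:", response.text)

if __name__ == "__main__":
    main()
```

Because all messages go through the same chat object, a follow-up like “what did I just say?” will be answered in context, which is exactly what distinguishes chat-bison from text-bison.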
Run python3 chat.py, and you will see it print the response received from the model.
Voila! You have just implemented your first chatbot!🎉 🎉
In this blog, we have differentiated the use cases of the text-bison and chat-bison models.
It’s quite easy to integrate the built-in chat model provided by Google Cloud Vertex AI with a few steps of configuration. We have learned the basics of how to craft our own chatbot using what is already built. We will see how we can fine-tune it as per our requirements in an upcoming blog. Stay tuned!
Get started today
Let's build the next big thing!
Let's improve your business's digital strategy and implement robust mobile apps to achieve your business objectives. Schedule Your Free Consultation Now.
Get Free Consultation