Google Bard has been giving ChatGPT cutting-edge competition ever since it launched. That's because ChatGPT's training data ends in 2021, while Bard crawls the latest information from the internet, so most of the time it returns more up-to-date answers.
It's even fair to say that Bard arguably brings more capability than GPT-4 in some respects. Google is now making the AI model behind Bard, and behind other products like Gmail and Google Docs, publicly available.
Quite often, we want to implement a chatbot that can interact with humans the way a human would. For that purpose, Google makes our lives easier by providing pre-trained models that are ready to use, rather than requiring us to build them ourselves.
For chat models, Google provides a service named Vertex AI.
In this blog, we will introduce some of the terms used by Google Vertex AI and clarify the things that drove me crazy while going through it, so that you can grasp them easily. No doubt Google has documented everything, but sometimes it's hard to understand. So I'm trying to make it clearer, especially for developers!
Vertex AI is a service provided by Google Cloud that lets you train and deploy machine learning (ML) models and AI applications. For more information, visit Vertex AI.
Generative AI Studio is a Google Cloud console tool provided within the Vertex AI service.
It allows users to prototype and test the generative AI models offered by Google Cloud.
For more information, visit Generative AI Studio.
Model Garden is, as the name suggests, simply a collection of models. It's also included in the Vertex AI service.
It contains plenty of models, including Google's own foundation models, open-source models, and third-party models, many of which can be fine-tuned. For more information, visit Model Garden.
Foundation models are pre-trained models provided by Google Cloud. They give developers and data scientists more capabilities for building generative AI applications.
Google provides many foundation models that handle tasks like chat, text-to-code, speech-to-text, and image-to-text. For more information, visit Foundation Models.
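As a quick illustration, here's a minimal sketch of calling one of these foundation models with the Vertex AI Python SDK. It assumes the google-cloud-aiplatform package is installed and you are authenticated with Application Default Credentials; the project ID is a placeholder, and the import path may be vertexai.preview.language_models on older SDK versions:

```python
# Minimal sketch: querying the text-bison foundation model via the Vertex AI SDK.
# Assumes `pip install google-cloud-aiplatform` and Application Default Credentials.
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project ID; PaLM models are served from us-central1.
vertexai.init(project="your-gcp-project-id", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Explain in two sentences what a foundation model is.",
    temperature=0.2,
    max_output_tokens=128,
)
print(response.text)
```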
A custom model is a model trained by developers or data scientists. Vertex AI allows us to create custom-trained models for our specific use case.
If the foundation models don't cover our needs, we can opt for a custom model. For more information, visit Custom Model.
Foundation models are pre-trained on large datasets, but sometimes we want to train them further for our own needs.
Let's say we want to train one to act as a receptionist. For that purpose, we need to provide the instructions a receptionist should follow when a customer walks up to the reception desk and asks their questions.
Providing this set of rules to be used while responding to humans is called fine-tuning the model.
In order to fine-tune a foundation model, you need to provide a dataset in the format that the foundation model expects.
Among the latest PaLM 2 models, only text-bison@001 supports fine-tuning.
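To make the dataset format concrete, here's a hedged sketch of preparing tuning data for text-bison@001 in Python. The JSONL layout with input_text/output_text fields follows the Vertex AI tuning documentation; the receptionist examples themselves are made up for illustration:

```python
import json

# Hypothetical receptionist Q&A pairs: each line maps a customer query (input_text)
# to the answer we want the tuned model to give (output_text).
examples = [
    {
        "input_text": "What are your opening hours?",
        "output_text": "We are open Monday to Friday, from 9 AM to 6 PM.",
    },
    {
        "input_text": "Can I book an appointment for tomorrow morning?",
        "output_text": "Of course! Please share your name and a preferred time slot.",
    },
]

# Vertex AI tuning jobs expect one JSON object per line (JSONL).
with open("receptionist_tuning_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file is then uploaded to a Cloud Storage bucket and passed to the tuning job (for example via TextGenerationModel.tune_model(training_data="gs://...")); the exact call depends on your SDK version.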
Prompt tuning serves a similar purpose to fine-tuning, but it works with a much smaller set of data.
It lets us define a small set of patterns for the foundation model to follow while responding to user queries.
Use cases for prompt tuning:
1. Control what the bot should respond to and what it shouldn't. For example, instruct the bot to reply "I'm unable to answer your query" whenever it gets a question that isn't related to the study material.
2. Control the format of the answers given by the bot. For example, instruct the bot to "Provide the answer in a maximum of 3 lines".
Generative AI Studio provides a similar facility for adding prompts when we use foundation models. We just need to provide context and input/output examples, and we're ready to go 🚀.
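Outside the Studio UI, the same context and input/output examples can be passed programmatically. Below is a hedged sketch using the Vertex AI Python SDK with chat-bison@001; the project ID is a placeholder, and the import path may be vertexai.preview.language_models on older SDK versions:

```python
import vertexai
from vertexai.language_models import ChatModel, InputOutputTextPair

vertexai.init(project="your-gcp-project-id", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison@001")

# The context sets the rules (what to answer and in what format);
# the examples show the expected input/output pattern.
chat = chat_model.start_chat(
    context=(
        "You are a study assistant. Only answer questions related to studying. "
        "If a question is off-topic, reply exactly: \"I'm unable to answer your query\". "
        "Keep every answer to a maximum of 3 lines."
    ),
    examples=[
        InputOutputTextPair(
            input_text="How should I revise for my math exam?",
            output_text="Split the syllabus into small chunks, practice daily, and test yourself often.",
        ),
        InputOutputTextPair(
            input_text="Which pizza place do you recommend nearby?",
            output_text="I'm unable to answer your query",
        ),
    ],
)

response = chat.send_message("Can you suggest a study schedule for one week?")
print(response.text)
```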
Foundation models that don't support fine-tuning yet can still be tuned with prompt tuning.
The latest PaLM 2 models chat-bison@001, textembedding-gecko@001, etc. don't support fine-tuning yet, so they can be tuned with the prompt-tuning method.
PaLM 2 (Pathways Language Model) is a language model that Google has deployed to bring AI capabilities to all of its products, including Gmail, Google Docs, and Bard.
Similar to other language models like GPT-4, PaLM 2 is capable of powering AI-based chatbots. It provides out-of-the-box functionality like multilingual translation, multi-turn conversation, etc. For more information, visit PaLM 2 Model.
It comes in different sizes, named (from smallest to largest) Gecko, Otter, Bison, and Unicorn.
The PaLM API is a Google Cloud service covered within Vertex AI that enables the usage and training of generative models.
Generative AI Studio accesses the PaLM 2 model under the hood via the Vertex AI PaLM API.
Currently, the PaLM API is in public preview, and only in the US region, so if you want to give it a try, you must join the waitlist.
For more information visit PaLM API.
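For completeness, here's a rough sketch of calling the PaLM API directly over REST from Python. Treat the endpoint shape and request body as assumptions based on the Vertex AI predict format and check the official reference before relying on them; us-central1 is used since the preview is US-only:

```python
import google.auth
import requests
from google.auth.transport.requests import Request

# Obtain an OAuth token from Application Default Credentials.
# (project_id may be None depending on your environment; set it explicitly if so.)
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

# Assumed Vertex AI predict endpoint for the text-bison@001 publisher model.
endpoint = (
    "https://us-central1-aiplatform.googleapis.com/v1/"
    f"projects/{project_id}/locations/us-central1/"
    "publishers/google/models/text-bison@001:predict"
)

payload = {
    "instances": [{"prompt": "Summarize what Vertex AI is in one sentence."}],
    "parameters": {"temperature": 0.2, "maxOutputTokens": 256},
}

response = requests.post(
    endpoint,
    json=payload,
    headers={"Authorization": f"Bearer {credentials.token}"},
)
print(response.json())
```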