Google Assistant Integration
Assistant integration allows you to connect your voicebot to Google Assistant.


To do that, follow these steps:
Step 1
Click on Google Assistant and this window will appear.


Account Linking
Don't enable the Account Linking option if you are not using it; doing so may prevent your action from working at all.
Step 2
Go to Actions on Google.
It will show something like below:


Step 3
Click on Add/Import Project. This window will appear; fill in the project name.


Step 4
Once the project is created, you will be redirected to this page:


We're almost done 😅
Step 5
Click on Actions SDK on the page from Step 4. A window will appear showing your Google Project ID; copy it.


Step 6
Go back to Step 1 and paste the Project ID in your Assistant Integration like this:


The next step is getting the Google Assistant token.
Click here to generate a token.
You will be directed to a page asking you to authorise Google Home to use your email.


Click on AUTORISER (Authorise).
Once you're done, you will be redirected to this page.
Copy the token and paste it in Step 1.


Paste the token here:


Click on Deploy and you're done with that part.
How to send quick replies, cards and carousels?
If you want to present visual components to your users, you can make use of the advanced UI features available in Assistant.
For this, you simply have to add a code snippet with some JSON describing what you want to display.
Here is an example of what you can show on Assistant:


And here is a code sample showing how you can add UI components within the code snippet.
assistant_answer.card = {
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Here's an example of a basic card."
              }
            },
            {
              "basicCard": {
                "title": "Title: this is a title",
                "subtitle": "This is a subtitle",
                "formattedText": "This is a basic card. Text in a basic card can include \"quotes\" and\n most other unicode characters including emojis. Basic cards also support\n some markdown formatting like *emphasis* or _italics_, **strong** or\n __bold__, and ***bold itallic*** or ___strong emphasis___ as well as other\n things like line \nbreaks",
                "image": {
                  "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                  "accessibilityText": "Image alternate text"
                },
                "buttons": [
                  {
                    "title": "This is a button",
                    "openUrlAction": {
                      "url": "https://assistant.google.com/"
                    }
                  }
                ],
                "imageDisplayOptions": "CROPPED"
              }
            },
            {
              "simpleResponse": {
                "textToSpeech": "Which response would you like to see next?"
              }
            }
          ]
        }
      }
    }
  ]
};
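Quick replies (suggestion chips) follow the same pattern: the rich response accepts a suggestions array alongside items. The snippet below is a minimal sketch, assuming your code snippet exposes the same assistant_answer.card field shown above; carousels additionally require the actions.intent.OPTION intent, as described in the official documentation.
// Sketch: a simple response with two suggestion chips the user can tap.
assistant_answer.card = {
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Would you like to see a card or a carousel?"
              }
            }
          ],
          "suggestions": [
            { "title": "Card" },
            { "title": "Carousel" }
          ]
        }
      }
    }
  ]
};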
If you want to learn the exact syntax of all the elements you can use on Google Assistant, check the JSON code snippets in the official Google Actions documentation.
How to get the user's name and location?
You can get the name and the location of a user to improve the experience of your voice app.
Before leveraging this data, you have to request the user's permission to do so.
Here is how your dialog flow should look:
In the first code snippet, at the welcome state, you will need to add this code:
assistant_answer.SupportedPermissions = true;
assistant_answer.message = "Hello and welcome to the Uber app, in order to offer you a better service, may I access your name and address from your Google account?";
Then, the user can agree or not by saying yes or no.
If the user agrees, you can access the data in the "get the data" snippet with the following code:
let name = assistant_request.user.profile.displayName;
let longitude = assistant_request.device.location.coordinates.longitude;
let latitude = assistant_request.device.location.coordinates.latitude;
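Note that these fields are only filled in when the user has actually granted the permission. Here is a minimal sketch of a defensive version of the same snippet, assuming the same assistant_request and assistant_answer objects as above:
// Only read the profile and location if they were granted;
// otherwise fall back to generic values.
let name = "there";
let longitude = null;
let latitude = null;
if (assistant_request.user && assistant_request.user.profile) {
  name = assistant_request.user.profile.displayName;
}
if (assistant_request.device && assistant_request.device.location) {
  longitude = assistant_request.device.location.coordinates.longitude;
  latitude = assistant_request.device.location.coordinates.latitude;
}
assistant_answer.message = "Thanks " + name + ", let me help you with that.";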
If the user refuses, you will have to continue without the data and try to help them anyway!