The Google PaLM API (import google.generativeai as palm) works fine when I use a 2 CPU Colab runtime.

However, when I switch to an 8 CPU, 51 GB Colab runtime (via Colab Pro+), I get an error when running a simple PaLM API request. The error message says:

FailedPrecondition: 400 User location is not supported for the API use.

What I tried:

  • When I switch back to the 2 CPU Colab runtime, everything works. It only fails when I switch to the high-RAM 51 GB runtime.

  • After I pip install google-generativeai, I restart the runtime, so that is not the issue; the same install-and-restart steps work with the 2 CPU Colab runtime.

  • Note, some days it works and some days it seemingly randomly does not. I want to use the 8 CPU Colab runtime in order to speed up my notebook.


Here is a sample Colab notebook with the minimal code to reproduce the error: https://colab.research.google.com/drive/1fm4CZjj_axPssIOkBRi4V6JxX9q1Zt4p?usp=sharing

Note: to run the above notebook, you'll need to upload your own PaLM API key.
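
For reference, the request in the notebook is roughly of this shape (a minimal sketch, not the exact notebook code; the model name here is an assumption):

import google.generativeai as palm

# The key is the one uploaded to the notebook
palm.configure(api_key="YOUR_PALM_API_KEY")

# A simple text request; on the affected runtimes this call raises
# FailedPrecondition: 400 User location is not supported for the API use.
completion = palm.generate_text(model="models/text-bison-001", prompt="Hello, PaLM")
print(completion.result)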


If you have this same issue, you can +1 the bug I created in the Google issue tracker here.


There are 3 answers below.

BEST ANSWER

Use !curl ipinfo.io to check where your Colab instance is located. As a quick test, 2 of the 3 "high RAM" instances I created landed in Belgium, and Belgium is not in the "allowed regions" list.
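
The same check can be done from Python inside the notebook (a sketch; which country codes are allowed has to be verified against Google's own list):

import requests

# ipinfo.io returns the public IP and location of the Colab VM as JSON
info = requests.get("https://ipinfo.io/json", timeout=10).json()
print(info.get("ip"), info.get("country"), info.get("city"))

# If info["country"] is not in Google's list of supported regions
# (e.g. "BE" for Belgium), PaLM requests from this VM will fail with the 400 error.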

ANSWER

I made it work by disabling IPv6 for this endpoint on my server.

  1. Run
ping generativelanguage.googleapis.com -4
to get the IPv4 address of the domain name.

  2. Edit /etc/hosts and add a line like the one below, replacing the IP address with the one you found in step 1:
172.217.2.202 generativelanguage.googleapis.com
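
The same two steps can also be scripted from inside the notebook (a sketch; it assumes you can write to /etc/hosts, which is the case in Colab since the runtime runs as root, and that the endpoint has an IPv4 record):

import socket

host = "generativelanguage.googleapis.com"

# Step 1: resolve an IPv4 address for the endpoint (AF_INET forces IPv4)
ipv4 = socket.getaddrinfo(host, 443, socket.AF_INET)[0][4][0]
print("Resolved", host, "to", ipv4)

# Step 2: pin that address in /etc/hosts so lookups skip the IPv6 answer
with open("/etc/hosts", "a") as f:
    f.write(f"{ipv4} {host}\n")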
ANSWER

The PaLM API is currently only supported in a limited number of countries: available_regions

Most of the EU isn't supported.

If you are getting "400 User location is not supported for the API use.", it's because you're not in a supported region.

Swapping Colab runtimes does tend to move you to a different server, and sometimes it clicks in. I'm pretty sure this issue has since been fixed.
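
If you want the failure to be more self-explanatory, you can catch the exception in the notebook and print a hint to reconnect to a different runtime (a sketch; the model name is an assumption):

from google.api_core import exceptions as gexc
import google.generativeai as palm

try:
    palm.generate_text(model="models/text-bison-001", prompt="ping")
except gexc.FailedPrecondition as err:
    # 400 User location is not supported for the API use.
    print("This runtime appears to be in an unsupported region:", err)
    print("Disconnect and delete the runtime, then reconnect to get a new VM.")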