Can't access GCS object correctly from Cloud Functions


Whenever I try to get the Cloud Storage object and pass it as input to the Natural Language API using a method shown on the support sites, I get the error:

google.api_core.exceptions.InvalidArgument: 400 The GCS object specified in gcs_content_uri does not exist.

The GCS URI looks like this when printed:

gs://lang-docs-in/b'doc1.txt'

I have spent what feels like hours trying everything to get this to work (encoding, decoding, etc.), but to no avail. Any thoughts?

main.py

import sys
from google.cloud import language
from google.cloud import storage

storage_client = storage.Client()

DOCUMENT_BUCKET = 'lang-docs-out'

def process_document(data, context):
    # Look up the bucket and blob from the trigger event data
    bucket = storage_client.get_bucket(data['bucket'])
    blob = bucket.get_blob(data['name'])
    # Build the GCS URI (for debugging; analyze_document rebuilds it before sending)
    gcs_obj = 'gs://{}/{}'.format(bucket.name, blob.name.decode('utf-8'))
    print('LOOK HERE')
    print(gcs_obj)
    parsed_doc = analyze_document(bucket, blob)
    # Upload the parsed result to the output bucket
    bucket = storage_client.get_bucket(DOCUMENT_BUCKET)
    newblob = bucket.blob('parsed-' + data['name'])
    newblob.upload_from_string(parsed_doc)

def analyze_document(bucket, blob):
    language_client = language.LanguageServiceClient()
    gcs_obj = 'gs://{}/{}'.format(bucket.name, blob.name.decode('utf-8'))
    print(gcs_obj)
    document = language.types.Document(gcs_content_uri=gcs_obj, language='en', type='PLAIN_TEXT')
    response = language_client.analyze_syntax(document=document, encoding_type=get_native_encoding_type())
    return response

def get_native_encoding_type():
    """Returns the encoding type that matches Python's native strings."""
    # sys.maxunicode is 65535 only on narrow Unicode builds (pre-Python 3.3),
    # so on Python 3.3+ this always returns 'UTF32'.
    if sys.maxunicode == 65535:
        return 'UTF16'
    else:
        return 'UTF32'

requirements.txt

google-cloud-storage
google-cloud-language
google-api-python-client
grpcio
grpcio-tools

1 Answer


The name attribute of a google.cloud.storage.blob.Blob instance is already a str, so you shouldn't need to call .decode() on it at all (in Python 3, str has no .decode() method, so that call would raise an AttributeError anyway).

It seems likely that you literally have a file named "b'doc1.txt'" in the bucket, created by a bug in whatever is adding the files to GCS rather than by your Cloud Function (a sketch of how that can happen follows the snippets below), e.g.:

>>> blob.name
"b'doc1.txt'"
>>> type(blob.name)
<class 'str'>

and not:

>>> blob.name
b'doc1.txt'
>>> type(blob.name)
<class 'bytes'>
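
For illustration, here is one hypothetical way such a name could be created on the uploader side: if the filename is handled as bytes somewhere upstream and passed through str(), the b'...' repr ends up in the object name. This is just a sketch, not taken from your code:

>>> name = 'doc1.txt'.encode('utf-8')  # filename becomes bytes somewhere upstream
>>> str(name)                          # str() on bytes gives the repr, not a decoded string
"b'doc1.txt'"
>>> blob = bucket.blob(str(name))      # uploading this blob creates "b'doc1.txt'" in GCS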

This would be really hard to distinguish as they look the same when printed:

>>> print(b'hi')
b'hi'
>>> print("b'hi'")
b'hi'
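
Printing repr() instead makes the difference visible, since it shows the quoting explicitly:

>>> print(repr(b'hi'))
b'hi'
>>> print(repr("b'hi'"))
"b'hi'"

You can use the same trick to inspect the actual object names in your input bucket (assuming a reasonably recent google-cloud-storage, which provides Client.list_blobs()):

>>> for b in storage_client.list_blobs('lang-docs-in'):
...     print(repr(b.name))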