I'm trying to use v4beta1 of Google Cloud Talent Solution's search_jobs().
The docs: https://cloud.google.com/talent-solution/job-search/docs/reference/rest/v4beta1/projects.jobs/search
reference a pageToken parameter,
but in \google\cloud\talent_v4beta1\gapic\job_service_client.py
there is no such parameter in the function definition:
```python
def search_jobs(
    self,
    parent,
    request_metadata,
    search_mode=None,
    job_query=None,
    enable_broadening=None,
    require_precise_result_size=None,
    histogram_queries=None,
    job_view=None,
    offset=None,
    page_size=None,
    order_by=None,
    diversification_level=None,
    custom_ranking_info=None,
    disable_keyword_match=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
```
In the docstring comments, page_token is mentioned, e.g. for the offset parameter.
How do I specify the page token for job searches?
I've specified require_precise_result_size=False, but the return value doesn't contain a SearchJobsResponse.estimated_total_size. Is this a clue that search_jobs() isn't being set to the desired "mode"?
I believe the pageToken is abstracted away for you by the Python client library. If you go to the end of the search_jobs method in the source, you will see it builds an iterator that is aware of the pageToken and nextPageToken fields:
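The tail of the generated method looks roughly like this (paraphrased from the generated client source; exact names may vary between library versions):

```python
# End of JobServiceClient.search_jobs (paraphrased): the method returns a
# GRPCIterator that reads next_page_token from each response and sends it
# back as page_token on the following request.
iterator = google.api_core.page_iterator.GRPCIterator(
    client=None,
    method=functools.partial(
        self._inner_api_calls["search_jobs"],
        retry=retry,
        timeout=timeout,
        metadata=metadata,
    ),
    request=request,
    items_field="matching_jobs",
    request_token_field="page_token",
    response_token_field="next_page_token",
)
return iterator
```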
So all you should need to do is the following - copied from the docs at https://googleapis.github.io/google-cloud-python/latest/talent/gapic/v4beta1/api.html:
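A minimal sketch of that usage (assumes a configured project and credentials; "[PROJECT]" is a placeholder):

```python
from google.cloud import talent_v4beta1

client = talent_v4beta1.JobServiceClient()

parent = client.project_path("[PROJECT]")

# TODO: initialize `request_metadata` appropriately for your application
request_metadata = {}

# Iterate over all results; the iterator fetches additional pages
# (and handles page tokens) behind the scenes.
for element in client.search_jobs(parent, request_metadata):
    pass  # process element
```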
The default page size is apparently 10; you can modify this with the page_size parameter. Page iterator documentation can be found here:
Docs: https://googleapis.github.io/google-cloud-python/latest/core/page_iterator.html
Source: https://googleapis.github.io/google-cloud-python/latest/_modules/google/api_core/page_iterator.html#GRPCIterator
Probably the simplest way to deal with this is to consume all results using the iterator directly:
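For example (a sketch, assuming client, parent and request_metadata are set up as above):

```python
# Consuming the iterator walks every page for you, one request per page.
all_jobs = [job for job in client.search_jobs(parent, request_metadata)]
```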
If you have massive amounts of data and don't want to page through it in one go, I would do the following. The .pages attribute just returns a generator that you can work with as usual.
You would need to catch the StopIteration error for when you run out of items or pages:
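Here is a stand-alone illustration of the pattern; the pages() generator below is a made-up stand-in for the .pages generator the client returns (the real one performs a network request per page):

```python
def pages():
    # Stand-in for client.search_jobs(...).pages: yields one list per "page"
    yield ["job-1", "job-2"]
    yield ["job-3"]

gen = pages()
fetched = []
while True:
    try:
        page = next(gen)  # the real generator fires another request here
    except StopIteration:
        break  # no pages left
    fetched.extend(page)

# fetched is now ["job-1", "job-2", "job-3"]
```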
A good primer on Python iterators: https://anandology.com/python-practice-book/iterators.html
This is why:
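The relevant loop in google.api_core.page_iterator looks roughly like this (paraphrased; see the source link above for the exact code):

```python
def _page_iter(self, increment):
    page = self._next_page()
    while page is not None:
        self.page_number += 1
        if increment:
            self.num_results += page.num_items
        yield page
        page = self._next_page()  # fetch the next page, or None when exhausted
```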
See how, after the yield, it calls _next_page? This will check for more pages and perform another request for you if any exist.
If you want a sessionless option, you can use offset + page_size and pass the current offset back to the user on each AJAX request:
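A hypothetical sketch of that flow: get_jobs_page and the in-memory search() below are made up for illustration, and in a real app search() would be a call to client.search_jobs(..., offset=offset, page_size=page_size):

```python
def get_jobs_page(search_fn, offset, page_size=10):
    """Return one page plus the offset the client should send next time."""
    jobs = search_fn(offset, page_size)
    # A short page means we ran out of results, so there is no next offset.
    next_offset = offset + page_size if len(jobs) == page_size else None
    return {"jobs": jobs, "next_offset": next_offset}

# In-memory stand-in for the Talent API: 23 fake jobs sliced by offset/size.
fake_jobs = [f"job-{i}" for i in range(23)]

def search(offset, size):
    return fake_jobs[offset:offset + size]

first = get_jobs_page(search, offset=0)   # jobs 0-9, next_offset == 10
last = get_jobs_page(search, offset=20)   # jobs 20-22, next_offset is None
```

Each response hands the caller the next_offset to send back, so the server keeps no session state between requests.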