Best Practices for Translating Hive UDF Java Logic to BigQuery SQL

In the process of migrating from Hive to BigQuery, people often face the challenge of converting Hive UDFs written in Java into BigQuery SQL UDFs. What are the best practices or methodologies for translating complex Java logic (such as loops and conditionals) into equivalent SQL statements in BigQuery? Is there a structured approach or tool that can assist with this translation, especially for complex Java functions?
The journey begins with the fundamental question of whether the UDF is needed in the new BigQuery data warehouse at all. During the assessment we check whether the function is used in any transformation that will be migrated to the BigQuery side. If it is not, the resolution is simple: no migration is necessary. However, if the UDF is indeed needed, we next explore BigQuery's native function library to see if an existing built-in function can replicate the Hive UDF's capabilities. When a native BigQuery function is available, we adopt it, embracing BigQuery's built-in efficiency.
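For example, a Hive Java UDF that only normalises a string or parses a date in a fixed format often maps directly onto built-in functions. A minimal sketch, where the normalize_name() and to_date_yyyymmdd() UDF names, the table, and the columns are all hypothetical:

```sql
-- Hypothetical Hive UDFs replaced by native BigQuery functions;
-- table and column names are placeholders.
SELECT
  LOWER(TRIM(customer_name))           AS normalized_name,  -- was normalize_name(customer_name)
  PARSE_DATE('%Y%m%d', order_date_str) AS order_date        -- was to_date_yyyymmdd(order_date_str)
FROM `my-project.my_dataset.orders`;
```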
In instances where BigQuery doesn't offer a native alternative, we look at the type of function we're dealing with. For a standard UDF, we evaluate whether it's feasible to reimplement it in BigQuery SQL; if so, we proceed with this SQL-centric approach. If not, we turn to Google Cloud's serverless offerings and leverage BigQuery's Remote Functions feature. This option is appealing because we can keep using Java and leave the core function code largely as it is. If, for some reason, Remote Functions are not an option in our case, we can always fall back to JavaScript UDFs.
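The sketch below illustrates all three options for a hypothetical Hive Java UDF mask_email(); the project, dataset, connection, and endpoint names are placeholders, not real resources:

```sql
-- (a) Reimplemented as a pure BigQuery SQL UDF.
CREATE OR REPLACE FUNCTION `my-project.udfs.mask_email`(email STRING) AS (
  CONCAT(SUBSTR(email, 1, 1), '***', SUBSTR(email, STRPOS(email, '@')))
);

-- (b) Original Java logic kept behind a Cloud Function and exposed as a Remote Function
--     (the connection and endpoint below are placeholders).
CREATE OR REPLACE FUNCTION `my-project.udfs.mask_email_remote`(email STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my-project.us.remote_udf_connection`
OPTIONS (endpoint = 'https://us-central1-my-project.cloudfunctions.net/mask-email');

-- (c) JavaScript UDF as a fallback when Remote Functions are not an option.
CREATE TEMP FUNCTION mask_email_js(email STRING)
RETURNS STRING
LANGUAGE js AS r"""
  if (email === null) return null;
  const at = email.indexOf('@');
  return email.charAt(0) + '***' + email.substring(at);
""";
```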
When it comes to UDAFs, the decision hinges on the volume of data, specifically whether the aggregation operates within a bounded scope. For manageable data groupings, we can craft custom aggregations by collecting each group into an array with BigQuery's ARRAY_AGG function and post-processing it with a SQL UDF. For more unwieldy aggregations, we may need to refactor the approach entirely or shift the processing to Google Cloud Dataflow or Dataproc to ensure scalability and performance.
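As a sketch of the bounded case, a hypothetical Hive weighted_avg() UDAF can be rebuilt by collecting each group into an array with ARRAY_AGG and reducing it in a SQL UDF (table and column names are placeholders):

```sql
-- Hypothetical replacement for a Hive weighted_avg() UDAF.
-- ARRAY_AGG gathers the group into an array that the SQL UDF then reduces;
-- this only works while each group's array fits comfortably in memory.
CREATE TEMP FUNCTION weighted_avg(pairs ARRAY<STRUCT<val FLOAT64, weight FLOAT64>>) AS (
  (SELECT SUM(p.val * p.weight) / SUM(p.weight) FROM UNNEST(pairs) AS p)
);

SELECT
  customer_id,
  weighted_avg(ARRAY_AGG(STRUCT(
    CAST(unit_price AS FLOAT64) AS val,
    CAST(quantity   AS FLOAT64) AS weight))) AS avg_unit_price
FROM `my-project.my_dataset.order_items`   -- placeholder table
GROUP BY customer_id;
```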
Lastly, for UDTFs, if we would like to stay in the realm of SQL, the path is straightforward: we rewrite the function so that it generates its elements as an array, and use BigQuery's UNNEST operator to flatten the array into multiple rows. If this approach does not work, we can always fall back to Cloud Dataflow or Dataproc to implement the functionality.
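A minimal sketch of that pattern, assuming a hypothetical Hive UDTF explode_tags() that splits a comma-separated string into one row per tag (table and column names are placeholders):

```sql
-- Hypothetical replacement for a Hive explode_tags() UDTF: the function now
-- returns an ARRAY, and UNNEST in the FROM clause flattens it into rows.
CREATE TEMP FUNCTION split_tags(tags STRING) AS (
  SPLIT(tags, ',')
);

SELECT
  a.article_id,
  tag
FROM `my-project.my_dataset.articles` AS a,   -- placeholder table
UNNEST(split_tags(a.tags)) AS tag;
```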
This decision tree not only aids in methodically migrating Hive's custom functions to BigQuery, but also ensures that each step taken is aligned with BigQuery's best practices and architecture, making for a smooth and efficient transition.
Read this post for the wider context and more information.