We are looking to see if there is a tool within the Foundry platform that will allow us to have a list of field descriptions so that, when the dataset builds, it can populate those descriptions automatically. Does this exist, and if so, what is the tool called?
Is there a tool available within Foundry that can automatically populate column descriptions? If so, what is it called?
541 Views · Asked by Robert F
There is 1 best solution below.
If you upgrade your Code Repository to version 1.184.0+, this feature is released and available from that point onwards.

The way you push output column descriptions is to provide a new optional argument to your `TransformOutput.write_dataframe()` call, namely `column_descriptions`. This argument should be a `dict` with column names as keys and column descriptions as values (up to 200 characters in length, for stability reasons). The code will automatically compute the intersection of the column names available on your `pyspark.sql.DataFrame` and the keys in the `dict` you provide, so it won't try to put descriptions on columns that don't exist.

The code you use to run this process looks like this:
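(The original snippet was not captured in this copy of the answer. Below is a minimal sketch of such a transform, assuming the standard `transforms.api` decorators; the dataset paths and column names are hypothetical placeholders, and it only runs inside a Foundry Code Repository.)

```python
from transforms.api import transform, Input, Output


@transform(
    my_output=Output("/Org/project/datasets/flights_cleaned"),   # hypothetical path
    my_input=Input("/Org/project/datasets/flights_raw"),         # hypothetical path
)
def my_compute_function(my_input, my_output):
    # Keys are column names, values are descriptions (each up to 200 characters).
    # Keys that don't match a column on the DataFrame are ignored.
    column_descriptions = {
        "flight_id": "Unique identifier for the flight",
        "dep_ts": "Scheduled departure timestamp (UTC)",
    }
    my_output.write_dataframe(
        my_input.dataframe(),
        column_descriptions=column_descriptions,
    )
```

After the build completes, the descriptions appear on the matching columns of the output dataset's schema.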