Retrieving tasktracker logs for a particular job programmatically

Hi, I am working with the OozieClient API. I need to retrieve the task tracker logs for a particular workflow job using the OozieClient API. If that is not possible with the OozieClient API, any other programmatic approach is also fine. So far, with the OozieClient, I am able to get the job log using client.getJobLog(), but I need the task tracker logs, not the job logs. Kindly help.
278 Views, asked by dnivra
Try retrieving the YARN application ID from Oozie using the OozieClient API.
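For instance, with the Java OozieClient, each workflow action records the ID of the Hadoop job it launched as its "external id". A minimal sketch (the Oozie URL and workflow ID below are placeholders for your own values):

```java
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowAction;
import org.apache.oozie.client.WorkflowJob;

public class OozieActionIds {

    // A MapReduce job id such as "job_1428963091234_0012" maps to the
    // YARN application id "application_1428963091234_0012".
    static String toApplicationId(String hadoopJobId) {
        return hadoopJobId.replaceFirst("^job_", "application_");
    }

    // Prints the external (Hadoop) job id of every action in the workflow.
    static void printActionIds(OozieClient client, String workflowId) throws Exception {
        WorkflowJob job = client.getJobInfo(workflowId);
        for (WorkflowAction action : job.getActions()) {
            System.out.println(action.getName() + " -> " + action.getExternalId()
                    + " (" + toApplicationId(action.getExternalId()) + ")");
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder Oozie URL and workflow job id.
        OozieClient client = new OozieClient("http://oozie-host:11000/oozie");
        printActionIds(client, "0000001-150101000000000-oozie-oozi-W");
    }
}
```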
Once you have this ID, you can call the MapReduce History Server's REST API (or its client library) to fetch the log directory path using the "jobAttempts" API.
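A rough sketch of that call, using only the JDK's HTTP client. The History Server's "jobAttempts" endpoint returns JSON in which each attempt carries a "logsLink" field pointing at the logs for that attempt (the host and port here are placeholders; 19888 is the usual History Server web port):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HistoryServerLogs {

    // Builds the "jobAttempts" endpoint URL of the MapReduce History Server REST API.
    static String buildJobAttemptsUrl(String historyServer, String jobId) {
        return "http://" + historyServer
                + "/ws/v1/history/mapreduce/jobs/" + jobId + "/jobattempts";
    }

    // Fetches the raw JSON response; parse out "logsLink" with your JSON library.
    static String fetchJobAttempts(String historyServer, String jobId) throws Exception {
        URL url = new URL(buildJobAttemptsUrl(historyServer, jobId));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder History Server address and job id.
        System.out.println(fetchJobAttempts("historyserver:19888", "job_1428963091234_0012"));
    }
}
```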
You can then browse that directory with the Hadoop client.
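Browsing the log directory can be done with the standard Hadoop FileSystem API. A small sketch, assuming fs.defaultFS in your configuration already points at the cluster and the log directory path came from the previous step:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogDirBrowser {

    // Formats one directory entry for display.
    static String formatEntry(String path, long length) {
        return path + " (" + length + " bytes)";
    }

    // Lists every file under the given log directory on HDFS.
    static void listLogs(FileSystem fs, String logDir) throws Exception {
        for (FileStatus status : fs.listStatus(new Path(logDir))) {
            System.out.println(formatEntry(status.getPath().toString(), status.getLen()));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS must point at your cluster, e.g. hdfs://namenode:8020
        try (FileSystem fs = FileSystem.get(conf)) {
            listLogs(fs, args[0]); // pass the log dir path from the jobAttempts call
        }
    }
}
```

To read an individual log file rather than just list it, fs.open(path) returns a stream you can copy to stdout.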