Is there a way to capture the name of a task that has been executed in SnapLogic?

We have a lot of triggered Tasks that run on the same pipelines, but with different parameters.

My question regarding this - is there a possible way, like a function or an expression to capture the name of the triggered task so that we could use the information while writing the reports and e-mails of which Task started the error pipeline. I can't find anything even close to it.

Thank you in advance.

There are 2 best solutions below


This answer addresses the requirement of uniquely identifying the invoker task in the invoked pipeline.


For triggered Tasks, there isn't anything provided out of the box by SnapLogic. However, in the case of Ultra Tasks, you can get $['task_name'] from the input document of the pipeline.

Out of the box, SnapLogic provides the following headers that can be captured and used in the pipeline invoked by the triggered task:

  • PATH_INFO - The path elements after the Task part of the URL.
  • REMOTE_USER - The name of the user that invoked this request.
  • REMOTE_ADDR - The IP address of the host that invoked this request.
  • REQUEST_METHOD - The method used to invoke this request.

None of these contains the task-name.

In your case, as a workaround to uniquely identify the invoker task in the invoked pipeline, you could do one of the following three things:

  1. Pass the task-name as a parameter
  2. Add the task-name in the URL like https://elastic.snaplogic.com/.../task-name
  3. Add a custom header from the REST call

Any of the three methods above can help you capture the task name in the invoked pipeline.
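As an illustration, option 1 can be sketched as a small helper that appends the task name to the query string of the trigger URL. The URL and parameter name below are hypothetical placeholders, not values from your org:

```python
from urllib.parse import urlencode

# Hypothetical triggered-task URL -- substitute your own org/project/task path.
TASK_URL = ("https://elastic.snaplogic.com/api/1/rest/slsched/feed/"
            "MyOrg/MyProject/MyTask")

def build_invocation_url(task_name: str) -> str:
    """Append the task name as a query-string parameter (option 1).

    Query-string parameters are matched against Pipeline parameters, so a
    pipeline parameter named task_name receives this value and can be
    referenced in pipeline expressions as _task_name.
    """
    return TASK_URL + "?" + urlencode({"task_name": task_name})

print(build_invocation_url("daily-load-task"))
```

The downside, as noted below, is that this mixes a fixed identifier into a parameter list that is otherwise task-specific.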

In your case, I would suggest you go for a custom header, because the parameters you pass to the pipeline could be task-specific, and adding the task name to the URL again would be redundant.

Following is how you can add a custom header in your triggered task.

From SnapLogic Docs -

Custom Headers: To pass a custom HTTP header, specify a header and its value through the parameter fields in Pipeline Settings. The request matches any header with Pipeline arguments and passes those to the Task, while the Authorization header is never passed to the Pipeline.

Guidelines

  • The header must be capitalized in its entirety. Headers are case-sensitive.
  • Hyphens must be changed to underscores.
  • The HTTP custom headers override both the Task and Pipeline parameters, but the query string parameter has the highest precedence.

For example, if you want to pass a tenant ID (X-TENANT-ID) in a header, add the parameter X_TENANT_ID and provide a default or leave it blank. When you configure the expression, refer to the Pipeline argument following standard convention: _X_TENANT_ID. In the HTTP request, you add the header X-TENANT-ID: 123abc, which results in the value 123abc being substituted for the Pipeline argument X_TENANT_ID.

Creating a task-name parameter in the pipeline settings


Using the task-name parameter in the pipeline


Calling the triggered task


Note: Hyphens must be changed to underscores.
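Putting the pieces together, the call to the triggered task can be approximated with Python's standard library. The URL and token below are placeholders, and the pipeline is assumed to define a TASK_NAME parameter (hyphen changed to underscore) as described above:

```python
import json
import urllib.request

# Hypothetical triggered-task URL -- substitute your own org/project/task path.
TASK_URL = ("https://elastic.snaplogic.com/api/1/rest/slsched/feed/"
            "MyOrg/MyProject/MyTask")

def build_request(task_name: str, bearer_token: str) -> urllib.request.Request:
    """Build a POST request that carries the task name in a custom header.

    SnapLogic substitutes the TASK-NAME header value into the pipeline
    parameter TASK_NAME, which expressions can read as _TASK_NAME.
    """
    return urllib.request.Request(
        TASK_URL,
        data=json.dumps([{}]).encode(),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
            "TASK-NAME": task_name,  # read in the pipeline as _TASK_NAME
        },
        method="POST",
    )

req = build_request("daily-load-task", "<token>")
# Send with urllib.request.urlopen(req) once a real URL and token are in place.
```

Each invoking task (or REST Snap) then only differs in the header value it sends, while the pipeline-specific parameters stay untouched.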


I'm adding this as a separate answer because it addresses the specific requirement of logging an executed triggered task separately from the pipeline. This solution has to be a separate process (or pipeline) rather than part of the triggered pipeline itself.


The Pipeline Monitoring API doesn't have an explicit log entry for the task name of a triggered task; the invoker field is what you have to use.

However, the main API used by SnapLogic to populate the Dashboard is more verbose. You can inspect its response in your browser's developer tools (for example, Google Chrome DevTools).

You can use the invoker_name and pipe_invoker fields to identify a triggered task.

Following is the API that is being used.

POST https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime

Body:

{
  "state": "Completed,Stopped,Failed,Queued,Started,Prepared,Stopping,Failing,Suspending,Suspended,Resuming",
  "offset": 0,
  "limit": 25,
  "include_subpipelines": false,
  "sort": {
      "create_time": -1
  },
  "start_ts": null,
  "end_ts": null,
  "last_hours": 1
}

You can have a pipeline that periodically calls this API, parses the response, and populates a log table (or creates a log file).
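A minimal sketch of that polling logic in Python, assuming the Dashboard-style response nests the runs under response_map.entries (verify the exact shape in your developer tools; the field names invoker_name and pipe_invoker come from the response above):

```python
import json
import urllib.request

ORG_SNODE_ID = "<org snode id>"  # left as in the source; fill in your own
RUNTIME_URL = f"https://elastic.snaplogic.com/api/2/{ORG_SNODE_ID}/rest/pm/runtime"

# Same request body as shown above.
QUERY = {
    "state": ("Completed,Stopped,Failed,Queued,Started,Prepared,"
              "Stopping,Failing,Suspending,Suspended,Resuming"),
    "offset": 0,
    "limit": 25,
    "include_subpipelines": False,
    "sort": {"create_time": -1},
    "start_ts": None,
    "end_ts": None,
    "last_hours": 1,
}

def fetch_runs(auth_header: str) -> dict:
    """POST the query to the runtime API and return the parsed JSON."""
    req = urllib.request.Request(
        RUNTIME_URL,
        data=json.dumps(QUERY).encode(),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_invokers(response: dict) -> list:
    """Collect (invoker_name, pipe_invoker) pairs from the response.

    Assumes runs are listed under response_map.entries -- adjust to the
    payload you actually observe in the developer tools.
    """
    entries = response.get("response_map", {}).get("entries", [])
    return [(e.get("invoker_name"), e.get("pipe_invoker")) for e in entries]
```

Each pair can then be written to your log table or file, giving you a record of which task started which pipeline execution.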