Splunk Dashboard Studio Drilldown on one Table Column

I have created a dashboard using Splunk Dashboard Studio that contains a table visualization. I then added a "customUrl" drilldown and used one of the table column values as a token in the custom URL.
The drilldown URL works as expected, but the drilldown link is added to every column. I want to restrict the drilldown link so it is only available on the single column whose value is used as the token.
Below is the current code for the table visualization:
{
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_Yu8QvBzy"
    },
    "options": {
        "count": 25,
        "tableFormat": {
            "align": "> table |pick(alignment)",
            "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByTheme)"
        },
        "backgroundColor": "> themes.defaultBackgroundColor"
    },
    "context": {
        "alignment": [
            "left"
        ]
    },
    "showProgressBar": true,
    "showLastUpdated": true,
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://myURL/$row.myColumn.value$",
                "newTab": true
            }
        }
    ]
}

Related

Adobe Analytics - how to get report API 2.0 to get multi-level breakdown data using Java

I need help getting Adobe Analytics data when multiple IDs are passed for a multi-level breakdown using API 2.0.
I am getting first-level data for v124 with this request body:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc"
}
]
},
"dimension": "variables/evar124",
but I want to use the IDs returned in the above call's response in a second-level breakdown of v124 to get v125, like this:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc",
"filters": [
"0"
]
}
],
"metricFilters": [
{
"id": "0",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267831"
},
{
"id": "1",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267832"
}
]
},
"dimension": "variables/evar125",
This always returns data only for itemId 2629267831 and never for 2629267832.
I want to get data for thousands of IDs (returned from the first API call) in a single API call. What am I doing wrong?
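One thing that stands out in the second payload: the single metric column only references filter "0", so only itemId 2629267831 is ever applied. Based on Adobe's documented multiple-breakdowns pattern (a sketch, not verified against your report suite), each itemId would get its own metricFilter plus its own metric column that references that filter:
"metricContainer": {
    "metrics": [
        {
            "columnId": "0",
            "id": "metrics/event113",
            "sort": "desc",
            "filters": [
                "0"
            ]
        },
        {
            "columnId": "1",
            "id": "metrics/event113",
            "filters": [
                "1"
            ]
        }
    ],
    "metricFilters": [
        {
            "id": "0",
            "type": "breakdown",
            "dimension": "variables/evar124",
            "itemId": "2629267831"
        },
        {
            "id": "1",
            "type": "breakdown",
            "dimension": "variables/evar124",
            "itemId": "2629267832"
        }
    ]
},
"dimension": "variables/evar125",
Each metric/filter pairing comes back as its own column in the response. For thousands of itemIds that would mean thousands of metric columns, which will likely exceed per-request limits, so you would probably still have to batch the itemIds across multiple /reports calls.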

Is there a way to obtain pipeline activity run details for Azure Synapse from Log Analytics?

I have set up a Log Analytics workspace and added it to the diagnostic settings on my Synapse workspace. However, I am unable to write a query that extracts pipeline activity information such as dataRead, rowsCopied, etc.
I have tried using dataReadvar = parse_json(Output).dataRead to extract the JSON within SynapseIntegrationActivityRuns, but it doesn't seem to be able to find 'Output'.
Azure Data Factory and Azure Synapse Analytics have three groupings of activities: data movement activities, data transformation activities, and control activities. An activity can take zero or more input datasets and produce one or more output datasets.
Check this sample pipeline for how a pipeline and its activities are defined:
{
    "name": "PipelineName",
    "properties":
    {
        "description": "pipeline description",
        "activities":
        [
        ],
        "parameters": {
        },
        "concurrency": <your max pipeline concurrency>,
        "annotations": [
        ]
    }
}
Below is the JSON format for the top-level structure of execution activities:
{
    "name": "Execution Activity Name",
    "description": "description",
    "type": "<ActivityType>",
    "typeProperties":
    {
    },
    "linkedServiceName": "MyLinkedService",
    "policy":
    {
    },
    "dependsOn":
    {
    }
}
Activity Policy:
{
    "name": "MyPipelineName",
    "properties": {
        "activities": [
            {
                "name": "MyCopyBlobtoSqlActivity",
                "type": "Copy",
                "typeProperties": {
                    ...
                },
                "policy": {
                    "timeout": "00:10:00",
                    "retry": 1,
                    "retryIntervalInSeconds": 60,
                    "secureOutput": true
                }
            }
        ],
        "parameters": {
            ...
        }
    }
}
Here are a few MS docs, Docs1 and Docs2, which are related to your scenario and may help.

Importing Data to Contentful programmatically from a json file

I am trying to import some data programmatically into Contentful:
I am following the docs here
And running the command inside my integrated terminal
contentful space import --config config.json
Where the config file is
{
    "spaceId": "abc123",
    "managementToken": "112323132321adfWWExample",
    "contentFile": "./dataToImport.json"
}
And the dataToImport.json file is
{
    "data": [
        {
            "address": "11234 New York City"
        },
        {
            "address": "1212 New York City"
        }
    ]
}
The thing is, I don't understand what format my dataToImport.json should be in, or what is missing from this file or from my config file, so that the array of addresses from the .json file gets added as new entries to a content model already created in the Contentful UI (shown in the screenshot below).
I am not specifying the content model the data should go into, so I believe that is one issue, but I don't know how to do that. An example or repo would help me out greatly.
The types of data you can import are listed in their documentation.
Your JSON top level should say "entries", not "data", if new content of a content type is what you would like to import.
This is an example of a blog post as per the content model of the tutorial they provide.
The only thing I didn't work out yet is where the user id is :D so I substituted one of the content types, 'person', also provided in their tutorial (I think it's called Gatsby Starter).
{"entries": [
{
"sys": {
"space": {
"sys": {
"type": "Link",
"linkType": "Space",
"id": "theSpaceIdToReceiveYourImport"
}
},
"type": "Entry",
"createdAt": "2019-04-17T00:56:24.722Z",
"updatedAt": "2019-04-27T09:11:56.769Z",
"environment": {
"sys": {
"id": "master",
"type": "Link",
"linkType": "Environment"
}
},
"publishedVersion": 149, -- these are not compulsory, you can skip
"publishedAt": "2019-04-27T09:11:56.769Z", -- you can skip
"firstPublishedAt": "2019-04-17T00:56:28.525Z", -- you can skip
"publishedCounter": 3, -- you can skip
"version": 150,
"publishedBy": { -- this is an example of a linked content
"sys": {
"type": "Link",
"linkType": "person",
"id": "personId"
}
},
"contentType": {
"sys": {
"type": "Link",
"linkType": "ContentType",
"id": "blogPost" -- here should be your content type 'RealtorProperties'
}
}
},
"fields": { -- here should go your content type fields, i can't see it in your post
"title": {
"en-US": "Test 1"
},
"slug": {
"en-US": "Test-1"
},
"description": {
"en-US": "some description"
},
"body": {
"en-US": "some body..."
},
"publishDate": {
"en-US": "2016-12-19"
},
"heroImage": { -- another example of a linked content
"en-US": {
"sys": {
"type": "Link",
"linkType": "Asset",
"id": "idOfTHisImage"
}
}
}
}
},
--another entry, ...]}
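Stripped down to just the data in the question, a minimal dataToImport.json might look roughly like the sketch below. It assumes the content type id is RealtorProperties, the field id is address, and the default locale is en-US (all guesses from the question and screenshot); the importer may also expect additional sys fields such as a per-entry id.
{
    "entries": [
        {
            "sys": {
                "type": "Entry",
                "contentType": {
                    "sys": {
                        "type": "Link",
                        "linkType": "ContentType",
                        "id": "RealtorProperties"
                    }
                }
            },
            "fields": {
                "address": {
                    "en-US": "11234 New York City"
                }
            }
        },
        {
            "sys": {
                "type": "Entry",
                "contentType": {
                    "sys": {
                        "type": "Link",
                        "linkType": "ContentType",
                        "id": "RealtorProperties"
                    }
                }
            },
            "fields": {
                "address": {
                    "en-US": "1212 New York City"
                }
            }
        }
    ]
}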
Have a look at this repo. I am also trying to figure this out. It looks like there are quite a lot of fields that need to be included in the JSON file. I was hoping there'd be a simple solution, but it seems you (me too, actually) will need to create scripts to "convert" your JSON file into data Contentful can read and import.
I'll let you know if I find anything better.

How to update existing Knowledgebase using QnA Maker API v4.0?

I've successfully created my knowledgebase using the API.
But I forgot to add some alternative questions and metadata for one of the pairs.
I've noticed the PATCH method in the API to update the knowledgebase, so updating the KB is supported.
I've created a payload which looked like this:
{
    "add": {
    },
    "delete": {
    },
    "update": {
        "qnaList": [
            {
                "id": 1,
                "answer": "Answer",
                "source": "link_to_source",
                "questions": [
                    "Question 1?",
                    "Question 2?"
                ],
                "metadata": [
                    {
                        "name": "oldMetadata",
                        "value": "oldMetadata"
                    },
                    {
                        "name": "newlyAddedMetaData",
                        "value": "newlyAddedMetaData"
                    }
                ]
            }
        ]
    }
}
I get back the following HTTP 202 Accepted response:
{
    "operationState": "NotStarted",
    "createdTimestamp": "2018-05-21T07:46:52Z",
    "lastActionTimestamp": "2018-05-21T07:46:52Z",
    "userId": "user_uuid",
    "operationId": "operation_uuid"
}
So it looks like it worked. But in reality, the request has no effect.
When I check the operation details, it returns the following:
{
    "operationState": "Succeeded",
    "createdTimestamp": "2018-05-21T07:46:52Z",
    "lastActionTimestamp": "2018-05-21T07:46:54Z",
    "resourceLocation": "/knowledgebases/kb_uuid",
    "userId": "user_uuid",
    "operationId": "operation_uuid"
}
What am I doing wrong? And how should I update my kb via API properly?
Please help
I had the same problem. I discovered that it was necessary to include all of the JSON fields, even the ones that are not used.
In your case you need "name" and "urls" in the "update" section and "delete" in the "update/qnaList/questions" section:
{
    "add": {},
    "delete": {},
    "update": {
        "name": "nameofKbBase", // this
        "qnaList": [
            {
                "id": 2370,
                "answer": "DemoAnswerEdit",
                "source": "CustomSource",
                "questions": {
                    "add": [
                        "DemoQuestionEdit"
                    ],
                    "delete": [] // this
                },
                "metadata": { }
            }
        ],
        "urls": [] // this
    }
}

Azure Data Factory v2 If activity always fails

I'm currently struggling with the Azure Data Factory v2 If activity, which always fails.
I've designed two separate pipelines: one takes a full snapshot of the data (1333 records) from the on-premises SQL Server and loads it into the Azure SQL Database, and the other takes only the delta from the same source.
Both pipelines work fine when executed independently.
I then decided to wrap these two pipelines into one parent pipeline which would do this:
1. Execute a Lookup activity to check whether the target table in the Azure SQL Database has any records, with a basic Select Count(Request_ID) As record_count From target_table. The activity works fine; I can preview the returned record count.
2. Pass the output from the Lookup activity to the If activity, with the condition that if record_count = 0 the parent pipeline invokes the full load pipeline; otherwise it invokes the delta load pipeline.
This is the actual expression:
{#activity('lookup_sites_record_count').output.firstRow.record_count}==0
Whenever I try to execute this parent pipeline, it fails with the message "Activity failed: Activity failed because an inner activity failed."
Both inner activities, that is, full load and delta load pipelines, work just fine when triggered independently.
What am I missing?
Many thanks in advance :).
mikhailg
Pipeline's JSON definition below:
{
    "name": "pl_remedyreports_load_rs_sites",
    "properties": {
        "activities": [
            {
                "name": "lookup_sites_record_count",
                "type": "Lookup",
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "SqlSource",
                        "sqlReaderQuery": "Select Count(Request_ID) As record_count From mdp.RS_Sites;"
                    },
                    "dataset": {
                        "referenceName": "ds_azure_sql_db_sites",
                        "type": "DatasetReference"
                    }
                }
            },
            {
                "name": "If_check_site_record_count",
                "type": "IfCondition",
                "dependsOn": [
                    {
                        "activity": "lookup_sites_record_count",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "typeProperties": {
                    "expression": {
                        "value": "{#activity('lookup_sites_record_count').output.firstRow.record_count}==0",
                        "type": "Expression"
                    },
                    "ifFalseActivities": [
                        {
                            "name": "pl_remedyreports_invoke_load_sites_inc",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "pl_remedyreports_load_sites_inc",
                                    "type": "PipelineReference"
                                }
                            }
                        }
                    ],
                    "ifTrueActivities": [
                        {
                            "name": "pl_remedyreports_invoke_load_sites_full",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "pl_remedyreports_load_sites_full",
                                    "type": "PipelineReference"
                                }
                            }
                        }
                    ]
                }
            }
        ],
        "folder": {
            "name": "Load Remedy Reference Data"
        }
    }
}
Your expression should be:
@equals(activity('lookup_sites_record_count').output.firstRow.record_count,0)
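Plugged back into the pipeline JSON above, the If activity's expression block would then read roughly:
"expression": {
    "value": "@equals(activity('lookup_sites_record_count').output.firstRow.record_count, 0)",
    "type": "Expression"
},
Expression values in Data Factory v2 are only evaluated when they start with @; @equals(...) then returns the boolean the If activity expects, which the original {#...}==0 form does not.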