ADF v2 - possible to get input of activity?

I know ADF can get the output of a specific activity, like this:
activity('xxx').output
Can I also get the input of a specific activity?

There is no such provision according to the docs. The current activity's input is effectively the output of the previous activities, so you can store that output explicitly in a variable or parameter for further use.
You can only view or copy the inputs of an activity from the pipeline run output or its errors.
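For example, the workaround above can be sketched as a Set Variable activity in the pipeline JSON that captures the previous activity's output; the activity name CopyData and the variable name copyOutput are made-up placeholders:

```json
{
    "name": "StoreCopyOutput",
    "type": "SetVariable",
    "dependsOn": [
        { "activity": "CopyData", "dependencyConditions": [ "Succeeded" ] }
    ],
    "typeProperties": {
        "variableName": "copyOutput",
        "value": {
            "value": "@string(activity('CopyData').output)",
            "type": "Expression"
        }
    }
}
```

A later activity can then read @variables('copyOutput') as a stand-in for the earlier activity's input/output.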

Related

How to store and serve coupons with Google tools and javascript

I'll get a list of coupons by mail. That needs to be stored somewhere somehow (BigQuery?) where I can request it and send it to the user. Each user should only be able to get one unique code that was not used beforehand.
I need the ability to get a code and record that it was used, so the next request gets the next code...
I know it is a completely vague question, but I'm not sure how to implement that. Anyone have any ideas?
Thanks in advance
There can be multiple solutions for the same requirement; one of them is given below:
Step 1. Get the coupons into a file (CSV, JSON, etc.) as per your preference/requirement.
Step 2. Load the source file to GCS (storage).
Step 3. Write a Dataflow job that reads the data from GCS (the file) and loads it into a BigQuery table (tentative name: New_data).
Step 4. Create a Dataflow job that reads from the BigQuery table New_data, compares it with History_data to identify new coupons, and writes them to a file on GCS or to a BigQuery table.
Step 5. Schedule the entire process with an orchestrator, Cloud Scheduler, or a cron job.
Step 6. Once you have the data, you can send it to consumers through any communication channel.

Heat map visualization issue Microstrategy VI

I am creating a dashboard using a heat map visualization. Everything was OK until I changed the parameters of my metric; the chart disappeared and I got this message: 'Filter excludes all data'.
The only modification I made was to set Include Distinct Elements to true within the Count Parameter option of the metric.
What could be happening? Do I need to set another parameter to get the count of distinct elements that I need?
Regards.
Most likely the metric is set to the level of some attribute that is not inside the visualization. If the filter has a date, for example, include it in the visualization.
"Filter excludes all data" is a default warning message that you get in MicroStrategy when a report/visual/dashboard does not return any data.
https://community.microstrategy.com/s/article/KB47557-How-to-Properly-Suppress-the-Message-Filter-excludes-all?r=1&Component.reportDeprecationUsages=1&Headline.getInitData=1&ArticleView.getArticleHeaderDetail=1&Quarterback.validateRoute=1&RecordGvp.getRecord=1&ArticleRichContent.getArticleAuthor=1&ArticleTopicList.getTopics=1&ArticleRichContent.hasArticleAccess=1&ForceCommunityFeed.getModel=1&ArticleRichContent.getTopicsAssigned=1
There are a number of reasons why a report may not return data; please check the following steps to debug the issue:
Step 1. As per your first image, a date range is used as a filter. After changing to "Include only distinct", this date range might be affected, so put the objects in a grid, apply the date filter, and check whether it returns the correct data.
Step 2. If it does, check whether all the metrics return a value.
Step 3. Check whether the candidateID, date, and people attributes/metrics are properly related.
These steps will show where the problem is. If you still cannot figure it out, export the dashboard and share it with the MicroStrategy Tech Support team for debugging.
Hope it helps.

Talend, How to set up context value manually, and pass it to query

I am working with Talend Open Studio for Data Integration.
I want to create a simple job which shows all customers from a database for a specific city.
My job structure looks like this:
DbConnection -- onComponentOk -- DbInput -- row1-- tJavaRow -- row2 -- tLogRow
I have created a context parameter that contains the possible city ids. I want to set the city manually after the job starts and then pass it to the WHERE clause of my query. Is this scenario possible with Talend? What should my tJavaRow code look like?
If you want to manually input something in a running job, you can use a tMsgBox. In the Component view, set Buttons -> Question; the rest of the settings depend on your needs.
You will be able to input a value, which is retrievable from the component's RESULT variable.
Example with tMsgBox_1
(String)globalMap.get("tMsgBox_1_RESULT")
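Assuming the value typed into the tMsgBox is a city id, one way to use it is to concatenate it directly into the query field of the DbInput component; this is a sketch of that component setting, and the table and column names below are made up:

```
"SELECT id, name, city_id FROM customers WHERE city_id = " + (String)globalMap.get("tMsgBox_1_RESULT")
```

Since this is plain string concatenation, validate (or quote) the entered value before the query runs.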

How to add condition in splunk data model constraint

I have an outbound flow whose data is written by the App, mem, and cards APIs. The cards and mem APIs write logs into applog, but App writes logs into syslog.
In my data model I have sourcetype=app_log as the source type. So for all flows except App I get the right Splunk dashboard report, but for the application I am not getting any data.
So I want to add a condition in the data model "constraint" section like:
when api is applications then sourcetype=app_log OR sourcetype=sys_log
else sourcetype=app_log
Can anyone assist me with how to do this in Splunk?
If you need a dual sourcetype, it's usually best to make that part of the root search/event, to draw in all relevant data you would like to use in your data model.
A data model is like shearing away wood on a sculpture, so it's usually better to start with all of the data and then slowly pick away at what you want to see.
You can add | where clauses as constraints; however, you can't add more data if you don't start with it in the root events.
My suggestion would be something like this in your root search:
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) field=blah field=blah ...
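Applied to the sourcetypes in the question, the root event search could be broadened so the App flow's syslog data is included from the start; the index name your_index is an assumption:

```
index=your_index (sourcetype=app_log OR sourcetype=sys_log)
```

Child datasets can then narrow back down with a constraint such as | where api="applications".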

Data Lake Analytics: Custom Outputter to write to different files?

I am trying to write a custom outputter for U-SQL that writes rows to individual files based on the data in one column.
For example, if the column has the date "2016-01-01", that row is written to a file with that name, and the next row to a file named after the value in the same column.
I am aiming to do this by using the Data Lake Store SDK within the outputter: create a client and use the SDK functions to write to individual files.
Is this a viable solution?
I have seen that the function to override for outputters is
public override void Output (IRow row, IUnstructuredWriter output)
in which the IUnstructuredWriter is cast to a StreamWriter (I saw one such example), so I assume this IUnstructuredWriter is passed to the function by the U-SQL script. That leaves me no control over what is passed here; it will also remain constant for all rows and cannot change.
This is currently not possible, but we are working on this functionality in response to this frequent customer request. For now, please add your vote to the request here: https://feedback.azure.com/forums/327234-data-lake/suggestions/10550388-support-dynamic-output-file-names-in-adla
UPDATE (Spring 2018): This feature is now in private preview. Please contact us via email (usql at microsoft dot com) if you want to try it out.