Azure Dashboard - Metric: ADF pipeline limit - azure-data-factory-2

In Azure, we have a dashboard with Metrics for failed and succeeded Data Factory V2 pipelines in bar-graph form. The graph returns results when the number of pipelines selected in the filter is less than 32. The issue is that we are not able to add more than 32 pipelines to it: if more than 32 pipelines are selected, it shows the error "Error retrieving data" and no data is displayed.
We now have more pipelines in our ADF and want to add them to the dashboard Metrics. Any pointers on what the issue could be and how to get around this limit?

I tested this and I can add 33 pipelines. "Error retrieving data" can be caused by the Time range and Time granularity settings. You can change these and try again.
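If the portal chart keeps failing above that count, one workaround is to pull the metric outside the dashboard instead. A minimal sketch with the Azure CLI, assuming the documented ADF V2 metric name PipelineFailedRuns and your factory's resource ID (the placeholders are illustrative):
az monitor metrics list --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory-name> --metric PipelineFailedRuns --interval PT1H
The same call with PipelineSucceededRuns would cover the success side of the graph.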

Related

CloudWatch dashboard Insights graphs - can I set bin size dynamically?

I'm using dashboards to monitor various output stats on AWS.
Let's say it looks something like this:
stats avg(myfield1), min(myfield2), max(myfield3) by bin(1m)
This works fine. However, by default I am using a bin size of 1 minute, so the data retention period is only 3 days. If I want to look at a week or a month, I have to use a separate widget with a larger bin size. I still want the 1-minute resolution for the shorter time periods, and I'd rather not double up the graphs, as the dashboard is already very busy.
Obviously, all the built-in metrics graphs adjust the bin size they query dynamically as the date range being viewed changes.
Is it possible to do this within a CloudWatch Insights query, and if so, what is the syntax?
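For comparison, the coarser separate widget mentioned above would run the same query with a larger bin argument; a sketch, assuming the same field names, for an hourly bin suited to monthly views:
stats avg(myfield1), min(myfield2), max(myfield3) by bin(1h)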

ADF v2 - possible to get input of activity?

I know ADF can get the output of a specific activity, like this:
activity('xxx').output
Can I also get the input of a specific activity?
There is no such provision according to the docs. The current activity's input would be the output of the previous activities, so you can store that output in a variable or parameter explicitly for further use.
You can only view or copy the inputs of an activity from the pipeline run's output or errors.
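For example, to keep a previous activity's output around explicitly, you could assign it to a pipeline variable with a Set Variable activity. A minimal sketch, assuming a String variable and the activity name xxx from the question, using ADF's string() conversion function as the variable's value expression:
@string(activity('xxx').output)
A downstream activity can then read the stored value via variables('yourVariableName') (hypothetical variable name).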

How to get Moving Average and Stochastic data from user-attached indicators in MQL4?

How can I get data with an MQL4 script from the following chart?
(Screenshot: Moving Average and Stochastic in MetaTrader)
As you can see, I have attached two indicators: Moving Average and Stochastic.
I started to write a script, but I have no idea how to get data from the indicators attached to the chart so that I can start processing it.
Is there a handle somewhere that returns an array? Global arrays?
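One possible approach (an assumption, as this question has no answer here) is to re-create the indicator values in code with MQL4's built-in iMA() and iStochastic() functions, passing the same parameters as the indicators attached to the chart. A sketch, assuming a 14-period SMA on close and a default 5/3/3 Stochastic:
// Moving Average value on the current chart and timeframe, latest bar (assumed parameters)
double ma = iMA(Symbol(), 0, 14, 0, MODE_SMA, PRICE_CLOSE, 0);
// Stochastic %K (main line) on the latest bar (assumed 5/3/3 settings)
double sto = iStochastic(Symbol(), 0, 5, 3, 3, MODE_SMA, 0, MODE_MAIN, 0);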

How to add a condition in a Splunk data model constraint

I have an outbound flow that gets data written by the App, Mem, and Cards APIs. The Cards and Mem APIs write logs into applog, but App writes logs into syslog.
In my data model I have sourcetype=app_log as the source type. So for all flows except App I get the right Splunk dashboard report, but for App I am not getting any data.
So I want to add a condition in the data model "constraint" section like:
when api is applications then sourcetype=app_log OR sourcetype=sys_log
else sourcetype=app_log
Can anyone advise how to do this in Splunk?
If you need a dual sourcetype, it's usually best to make that part of the root search/event to draw in all the relevant data you would like to use in your data model.
A data model is like carving away wood on a sculpture, so it's usually better to start with all of the data and then slowly pick away at what you want to see.
You can add | where clauses as constraints; however, you can't add more data if you don't start with it in the root events.
My suggestion would be something like this in your root search:
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) field=blah field=blah ....
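Applied to the question's setup, the root event search could draw in both log types at once; a sketch, assuming api is an extracted field whose value is applications for the App flow:
sourcetype=app_log OR (sourcetype=sys_log api=applications)
This keeps every app_log event and adds only the syslog events belonging to the App flow, which matches the when/else condition above.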

Error: Not Found: Project <project-id>

I've recently signed up to Google BigQuery out of curiosity and saw that it allows one to play with sample data sets without enabling billing. I followed the installation steps, first creating a project named "Test Cloud Project" and then enabling BigQuery in the services tab of the Google APIs console.
I have tried running the following:
SELECT repository.url FROM [publicdata:samples.github_nested] LIMIT 1000
and receive the error Error: Not Found: Project p-testcloud-bren
Did I miss a setup step somewhere or do you have to enable billing to actually query the sample datasets?
You don't need billing enabled to run a query on a publicdata:samples table (the first 100 GB of data processed per month is free of charge).
If you are making your own API calls, double-check that you have the correct project ID. You should be able to use either the project number (a unique integer value) or the project ID (an alphanumeric value you can choose) in your requests to the BigQuery API.
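For example, with the bq command-line tool you can pass the project explicitly; a sketch, assuming the bq CLI is installed and using the legacy-SQL table syntax from the question:
bq --project_id=<project-id> query "SELECT repository.url FROM [publicdata:samples.github_nested] LIMIT 1000"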