"Value cannot be null.\r\nParameter name: endpoint" in Azure Data Factory V2 - azure-data-factory-2

I am getting the following error when I execute the Azure ML Batch Execution activity in ADF V2.
I have written the following JSON for the ML activity:
{
    "name": "MLBatchExecution1",
    "description": "",
    "type": "AzureMLBatchExecution",
    "linkedServiceName": {
        "name": "AzureMLLinkedservice2",
        "type": "AzureML"
    },
    "typeProperties": {
        "webServiceInputs": {
            "input1": {
                "LinkedServiceName": {
                    "name": "azureblobstoragelinkedservice",
                    "type": "AzureStorage"
                },
                "FilePath": "tutoial/Input/TraiData.csv"
            },
            "input2": {
                "LinkedServiceName": {
                    "name": "azureblobstoragelinkedservice",
                    "type": "AzureStorage"
                },
                "FilePath": "tutoial/Input/TestData.csv"
            }
        },
        "webServiceOutputs": {
            "output1": {
                "LinkedServiceName": {
                    "name": "AzureStorageLinkedService2",
                    "type": "AzureStorage"
                },
                "FilePath": "tutoial/Output/Output.csv"
            }
        }
    }
}
I have made use of the following link to create the linked service and activity:
https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-machine-learning
Can anyone help with this, please?
Any help will be appreciated.
Thanks,
Deepak

Try passing the argument when you trigger the pipeline. The error you are receiving occurs because a required parameter is being enforced.
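As a rough sketch of that idea (the parameter name endpointParam below is hypothetical, not taken from the question): declare a pipeline-level parameter in the pipeline definition and reference it from the activity with @pipeline().parameters.endpointParam, for example:

{
    "name": "MLBatchPipeline",
    "properties": {
        "parameters": {
            "endpointParam": {
                "type": "String"
            }
        },
        "activities": []
    }
}

A value for it is then supplied when the run is triggered, for example in the body of the create-run REST request (or the equivalent parameters argument of whatever client you use to trigger the run):

{
    "endpointParam": "<value supplied at trigger time>"
}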

Related

BigQuery Execute fails with no meaningful error on Cloud Data Fusion

I'm trying to use the BigQuery Execute function in Cloud Data Fusion (Google). The component validates fine and the SQL checks out, but I get this non-meaningful error with every execution:
02/11/2022 12:51:25 ERROR Pipeline 'test-bq-execute' failed.
02/11/2022 12:51:25 ERROR Workflow service 'workflow.default.test-bq-execute.DataPipelineWorkflow.<guid>' failed.
02/11/2022 12:51:25 ERROR Program DataPipelineWorkflow execution failed.
I can see nothing else to help me debug this. Any ideas? The SQL in question is a simple DELETE from dataset.table WHERE ds = CURRENT_DATE()
This was the pipeline:
{
    "name": "test-bq-execute",
    "description": "Data Pipeline Application",
    "artifact": {
        "name": "cdap-data-pipeline",
        "version": "6.5.1",
        "scope": "SYSTEM"
    },
    "config": {
        "resources": {
            "memoryMB": 2048,
            "virtualCores": 1
        },
        "driverResources": {
            "memoryMB": 2048,
            "virtualCores": 1
        },
        "connections": [],
        "comments": [],
        "postActions": [],
        "properties": {},
        "processTimingEnabled": true,
        "stageLoggingEnabled": false,
        "stages": [
            {
                "name": "BigQuery Execute",
                "plugin": {
                    "name": "BigQueryExecute",
                    "type": "action",
                    "label": "BigQuery Execute",
                    "artifact": {
                        "name": "google-cloud",
                        "version": "0.18.1",
                        "scope": "SYSTEM"
                    },
                    "properties": {
                        "project": "auto-detect",
                        "sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
                        "dialect": "standard",
                        "mode": "batch",
                        "dataset": "GCPQuickStart",
                        "table": "account",
                        "useCache": "false",
                        "location": "US",
                        "rowAsArguments": "false",
                        "serviceAccountType": "filePath",
                        "serviceFilePath": "auto-detect"
                    }
                },
                "outputSchema": [
                    {
                        "name": "etlSchemaBody",
                        "schema": ""
                    }
                ],
                "id": "BigQuery-Execute",
                "type": "action",
                "label": "BigQuery Execute",
                "icon": "fa-plug"
            }
        ],
        "schedule": "0 1 */1 * *",
        "engine": "spark",
        "numOfRecordsPreview": 100,
        "maxConcurrentRuns": 1
    }
}
I was able to catch the error using Cloud Logging. To enable Cloud Logging in Cloud Data Fusion, you can follow the GCP documentation, and then follow its steps to view the logs from Data Fusion in Cloud Logging. Replicating your scenario, this is the error I found:
"logMessage": "Program DataPipelineWorkflow execution failed.\njava.util.concurrent.ExecutionException: com.google.cloud.bigquery.BigQueryException: Cannot set destination table in jobs with DML statements\n at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)\n at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)\n at io.cdap.cdap.internal.app.runtime.distributed.AbstractProgramTwillRunnable.run(AbstractProgramTwillRunnable.java:274)\n at org.apache.twill.interna..."
}
What we did to resolve the error "Cannot set destination table in jobs with DML statements" was to leave the Dataset Name and Table Name empty inside the pipeline properties, since there is no need to specify a destination table for a DML statement.
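For illustration, taking the plugin properties from the pipeline JSON above and simply leaving out the dataset and table entries, the BigQuery Execute stage would look roughly like this:

"properties": {
    "project": "auto-detect",
    "sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
    "dialect": "standard",
    "mode": "batch",
    "useCache": "false",
    "location": "US",
    "rowAsArguments": "false",
    "serviceAccountType": "filePath",
    "serviceFilePath": "auto-detect"
}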

REST dataset for Copy Activity source gives me error Invalid PaginationRule

My Copy Activity is set up to use a REST GET API call as my source. I keep getting Error Code 2200 Invalid PaginationRule RuleKey=supportRFC5988.
I can call the GET REST URL using the Web Activity, but this isn't optimal as I then have to pass the output to a stored procedure to load the data into the table. I would much rather use the Copy Activity.
Any ideas why I would get an Invalid PaginationRule error on a call?
I'm using a REST Linked Service with the following properties:
Name: Workday
Connect via integration runtime: link-unknown-self-hosted-ir
Base URL: https://wd2-impl-services1.workday.com/ccx/service
Authentication type: Basic
User name: Not telling
Azure Key Vault for password
Server Certificate Validation is enabled
Parameters: Name: format, Type: String, Default value: json
Datasource:
"name": "Workday_Test_REST_Report",
"properties": {
"linkedServiceName": {
"referenceName": "Workday",
"type": "LinkedServiceReference",
"parameters": {
"format": "json"
}
},
"folder": {
"name": "Workday"
},
"annotations": [],
"type": "RestResource",
"typeProperties": {
"relativeUrl": "/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound"
},
"schema": []
}
}
Copy Activity
{
    "name": "Copy Test Workday REST API output to a table",
    "properties": {
        "activities": [
            {
                "name": "Copy data1",
                "type": "Copy",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "source": {
                        "type": "RestSource",
                        "httpRequestTimeout": "00:01:40",
                        "requestInterval": "00.00:00:00.010",
                        "requestMethod": "GET",
                        "paginationRules": {
                            "supportRFC5988": "true"
                        }
                    },
                    "sink": {
                        "type": "SqlMISink",
                        "tableOption": "autoCreate"
                    },
                    "enableStaging": false
                },
                "inputs": [
                    {
                        "referenceName": "Workday_Test_REST_Report",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "Destination_db",
                        "type": "DatasetReference",
                        "parameters": {
                            "schema": "ELT",
                            "tableName": "WorkdayTestReportData"
                        }
                    }
                ]
            }
        ],
        "folder": {
            "name": "Workday"
        },
        "annotations": []
    }
}
Well, after posting this, I noticed that in the Copy Activity code there is a nugget about "supportRFC5988": "true". I switched the true to false, and everything just worked for me. I don't see a way to change this in the Copy Activity GUI.
Editing the source code and setting this option to false helped!
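For reference, the corresponding change in the Copy Activity source (taken from the JSON above, with only the pagination flag flipped) would read:

"source": {
    "type": "RestSource",
    "httpRequestTimeout": "00:01:40",
    "requestInterval": "00.00:00:00.010",
    "requestMethod": "GET",
    "paginationRules": {
        "supportRFC5988": "false"
    }
}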

Is there a way to obtain pipeline activity run details for Azure Synapse from Log Analytics?

I have set up a log analytics workspace and added it to the diagnostic settings on my Synapse workspace. However, I am unable to write a query that extracts pipeline activity information such as dataRead, rowsCopied, etc.
I have tried using
dataReadvar = parse_json(Output).dataRead
to extract the JSON within SynapseIntegrationActivityRuns, but it doesn't seem to be able to find 'Output'.
Azure Data Factory and Azure Synapse Analytics have three groupings of activities: data movement activities, data transformation activities, and control activities. An activity can take zero or more input datasets and produce one or more output datasets.
Check the sample pipeline below for how a pipeline and its activities are defined:
{
    "name": "PipelineName",
    "properties": {
        "description": "pipeline description",
        "activities": [
        ],
        "parameters": {
        },
        "concurrency": <your max pipeline concurrency>,
        "annotations": [
        ]
    }
}
Below is the JSON format for the top-level structure of execution activities:
{
    "name": "Execution Activity Name",
    "description": "description",
    "type": "<ActivityType>",
    "typeProperties": {
    },
    "linkedServiceName": "MyLinkedService",
    "policy": {
    },
    "dependsOn": {
    }
}
Activity Policy:
{
    "name": "MyPipelineName",
    "properties": {
        "activities": [
            {
                "name": "MyCopyBlobtoSqlActivity",
                "type": "Copy",
                "typeProperties": {
                    ...
                },
                "policy": {
                    "timeout": "00:10:00",
                    "retry": 1,
                    "retryIntervalInSeconds": 60,
                    "secureOutput": true
                }
            }
        ],
        "parameters": {
            ...
        }
    }
}
Here are a few MS Docs (Docs1, Docs2) related to your scenario which can help.

GraphSON serialization in Gremlin.Net

I'm trying to query the TinkerPop server (hosted inside a Docker container) via the CosmosDB client library, which uses Gremlin.Net under the hood. I managed to connect to it and insert the data; here's the intercepted WebSocket request:
!application/vnd.gremlin-v1.0+json{
    "requestId": "b64bd2eb-46c3-4095-9eef-768bca2a14ed",
    "op": "eval",
    "processor": "",
    "args": {
        "gremlin": "g.addV(\"User\").property(\"UserId\",2).property(\"CustomerId\",1)"
    }
}
The response:
{
    "requestId": "b64bd2eb-46c3-4095-9eef-768bca2a14ed",
    "status": {
        "message": "",
        "code": 200,
        "attributes": {
            "host": "/172.19.0.1:38848"
        }
    },
    "result": {
        "data": [
            {
                "id": 0,
                "label": "User",
                "type": "vertex",
                "properties": {}
            }
        ],
        "meta": {}
    }
}
The problem is that I do see those properties when I'm connected via the Gremlin console:
gremlin> g.V().hasLabel("User").has("CustomerId",1).has("UserId",2).limit(1).valueMap()
==>{UserId=[2], CustomerId=[1]}
Also, I'm able to query the TinkerPop server with Gremlin.Net:
!application/vnd.gremlin-v1.0+json{
    "requestId": "de35909f-4bc1-4aae-aa5f-28361b3c0933",
    "op": "eval",
    "processor": "",
    "args": {
        "gremlin": "g.V().hasLabel(\"User\").has(\"CustomerId\",1).has(\"UserId\",2).limit(1)"
    }
}
But it returns a payload with zero-valued ID and without any properties included:
{
    "requestId": "de35909f-4bc1-4aae-aa5f-28361b3c0933",
    "status": {
        "message": "",
        "code": 200,
        "attributes": {
            "host": "/172.19.0.1:38858"
        }
    },
    "result": {
        "data": [
            {
                "id": 0,
                "label": "User",
                "type": "vertex",
                "properties": {}
            }
        ],
        "meta": {}
    }
}
I tried to swap between GraphSON v1, v2, and v3 with no luck. The documentation says that script serializers should include all the properties. Do I have to tweak the config somehow to make this work and return the properties?
So it seems that with version 3.4 of the Gremlin server, ReferenceElementStrategy was added by default to traversals to preserve compatibility between binary and script serializers. In our case we wanted to mimic the behavior of CosmosDB, so to get the desired behavior, just remove the strategy from the init script (in our case it was empty-sample.groovy), changing
globals << [g : graph.traversal().withStrategies(ReferenceElementStrategy.instance())]
to
globals << [g : graph.traversal()]

Azure Data Factory v2 If activity always fails

I'm currently struggling with the Azure Data Factory v2 If activity, which always fails with the error message "Activity failed: Activity failed because an inner activity failed."
I've designed two separate pipelines: one takes a full snapshot of the data (1333 records) from the on-premises SQL Server and loads it into the Azure SQL Database, and the other one just takes the delta from the same source.
Both pipelines work fine when executed independently.
I then decided to wrap these two pipelines into one parent pipeline which would do this:
1. Execute a Lookup activity to check whether the target table in the Azure SQL Database has any records, with a basic Select Count(Request_ID) As record_count From target_table - the activity works fine and I can preview the returned record count.
2. Pass the output from the Lookup activity to the If activity, with the condition that if record_count = 0 the parent pipeline invokes the full load pipeline, otherwise it invokes the delta load pipeline.
This is the actual expression:
{#activity('lookup_sites_record_count').output.firstRow.record_count}==0
Whenever I try to execute this parent pipeline, it fails with the above message of "Activity failed: Activity failed because an inner activity failed."
Both inner activities, that is, full load and delta load pipelines, work just fine when triggered independently.
What am I missing?
Many thanks in advance :).
mikhailg
Pipeline's JSON definition below:
{
    "name": "pl_remedyreports_load_rs_sites",
    "properties": {
        "activities": [
            {
                "name": "lookup_sites_record_count",
                "type": "Lookup",
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "SqlSource",
                        "sqlReaderQuery": "Select Count(Request_ID) As record_count From mdp.RS_Sites;"
                    },
                    "dataset": {
                        "referenceName": "ds_azure_sql_db_sites",
                        "type": "DatasetReference"
                    }
                }
            },
            {
                "name": "If_check_site_record_count",
                "type": "IfCondition",
                "dependsOn": [
                    {
                        "activity": "lookup_sites_record_count",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "typeProperties": {
                    "expression": {
                        "value": "{#activity('lookup_sites_record_count').output.firstRow.record_count}==0",
                        "type": "Expression"
                    },
                    "ifFalseActivities": [
                        {
                            "name": "pl_remedyreports_invoke_load_sites_inc",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "pl_remedyreports_load_sites_inc",
                                    "type": "PipelineReference"
                                }
                            }
                        }
                    ],
                    "ifTrueActivities": [
                        {
                            "name": "pl_remedyreports_invoke_load_sites_full",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "pl_remedyreports_load_sites_full",
                                    "type": "PipelineReference"
                                }
                            }
                        }
                    ]
                }
            }
        ],
        "folder": {
            "name": "Load Remedy Reference Data"
        }
    }
}
Your expression should be:
@equals(activity('lookup_sites_record_count').output.firstRow.record_count,0)
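Plugged into the pipeline JSON from the question, the If activity's expression block would then read:

"expression": {
    "value": "@equals(activity('lookup_sites_record_count').output.firstRow.record_count,0)",
    "type": "Expression"
}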