How to set Datatype in Additional Column in ADF - sql

I need to set the datatype for an additional column with dynamic content in the sink in ADF.
By default it is created as nvarchar(max) from the JSON object, but I need bigint.
Below is the copy activity JSON that creates the table with the additional column:
{
"source": {
"type": "SqlServerSource",
"additionalColumns": [
{
"name": "ApplicationId",
"value": 3604509277250831000
}
],
"sqlReaderQuery": "SELECT * from Table A",
"queryTimeout": "02:00:00",
"isolationLevel": "ReadUncommitted",
"partitionOption": "None"
},
"sink": {
"type": "AzureSqlSink",
"writeBehavior": "insert",
"sqlWriterUseTableLock": false,
"tableOption": "autoCreate",
"disableMetricsCollection": false
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"typeConversion": true,
"typeConversionSettings": {
"allowDataTruncation": true,
"treatBooleanAsNumber": false
}
}
}
ADF configuration (screenshot omitted).
After the table is created, the database column's datatype is nvarchar(max) (screenshot omitted).
If I convert the dynamic content to an integer,
@int(pipeline().parameters.application.applicationId)
then I get a warning (screenshot omitted).
Please let me know how I can set the datatype in ADF.

I also tried the same and got the same result: by default the additional column is created as nvarchar(max).
To resolve this, after you add the additional column on your source, go to the Mapping tab and click on Import schemas. This imports the schema of the source and also includes the additional column (it is marked "additional" after its name); change that column's type to Int64.
After this, run your pipeline. It will create the additional column with datatype bigint. The resulting pipeline JSON:
{
"name": "pipeline2",
"properties": {
"activities": [
{
"name": "Copy data1",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "JsonSource",
"additionalColumns": [
{
"name": "name",
"value": {
"value": "#pipeline().parameters.demo.age",
"type": "Expression"
}
}
],
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true,
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "JsonReadSettings"
}
},
"sink": {
"type": "AzureSqlSink",
"writeBehavior": "insert",
"sqlWriterUseTableLock": false,
"tableOption": "autoCreate",
"disableMetricsCollection": false
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"path": "$['taskId']"
},
"sink": {
"name": "taskId",
"type": "String"
}
},
{
"source": {
"path": "$['taskObtainedScore']"
},
"sink": {
"name": "taskObtainedScore",
"type": "String"
}
},
{
"source": {
"path": "$['multiInstance']"
},
"sink": {
"name": "multiInstance",
"type": "String"
}
},
{
"source": {
"path": "$['name']"
},
"sink": {
"name": "name",
"type": "Int64"
}
}
],
"collectionReference": ""
}
},
"inputs": [
{
"referenceName": "Json1",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "AzureSqlTable1",
"type": "DatasetReference"
}
]
}
],
"parameters": {
"demo": {
"type": "object",
"defaultValue": {
"name": "John",
"age": 30,
"isStudent": true
}
}
},
"annotations": []
}
}
OUTPUT: the additional column is created with datatype bigint (screenshot omitted).
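Distilled from the pipeline JSON above, the part that controls the auto-created column's type is the translator mapping. Applied to the original question's additional column it would look roughly like this (a sketch; the column name is taken from the question, and with a SQL source the mapping uses name rather than a JSON path):
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "ApplicationId" },
            "sink": { "name": "ApplicationId", "type": "Int64" }
        }
    ]
}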

Related

JSON Schema - How to narrow down possible array object values

How can the following requirements be configured in a JSON schema?
I have the following JSON:
{
"orderMethods": [
{
"requestType": "some service",
"deployType": "MANUAL",
"deployLabel": "some label"
},
{
"requestType": "some service",
"deployType": "MANUAL",
"deployLabel": "some label"
},
{
"requestType": "some service",
"deployType": "AUTO",
"deploySubType": "REST",
"deployLabel": "some label",
"deployConfig": "some config"
},
{
"requestType": "some service",
"deployType": "AUTO",
"deploySubType": "DB",
"deployLabel": "some label",
"deployConfig": "some config"
}
]
}
Requirements:
only one deployType = "MANUAL" is allowed
only one deployType = "AUTO" is allowed
additionally, only one deploySubType = "REST" or "DB" is allowed
Currently I have the following JSON schema, but unfortunately the above JSON is valid against it. How can these requirements be addressed? (One possible approach is sketched after the schema.)
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "object",
"properties": {
"orderMethods": {
"$ref": "#/definitions/OrderMethods"
}
},
"additionalProperties": false,
"definitions": {
"OrderMethods": {
"type": "array",
"items": {
"$ref": "#/definitions/OrderMethod"
}
},
"OrderMethod": {
"oneOf": [
{
"$ref": "#/definitions/OrderMethodManualDeploy"
},
{
"$ref": "#/definitions/OrderMethodAutoDeploy"
}
],
"uniqueItems": true
},
"OrderMethodAutoDeploy": {
"oneOf": [
{
"$ref": "#/definitions/OrderMethodAutoRestDeploy"
},
{
"$ref": "#/definitions/OrderMethodAutoDBDeploy"
}
],
"uniqueItems": true
},
"OrderMethodManualDeploy": {
"type": "object",
"properties": {
"requestType": {
"type": "string"
},
"deployType": {
"const": "MANUAL"
},
"deployLabel": {
"type": "string"
}
},
"required": [
"requestType",
"deployType",
"deployLabel"
],
"additionalProperties": false
},
"OrderMethodAutoRestDeploy": {
"type": "object",
"properties": {
"requestType": {
"type": "string"
},
"deployType": {
"const": "AUTO"
},
"deploySubType": {
"const": "REST"
},
"deployLabel": {
"type": "string"
},
"deployConfig": {
"type": "string"
}
},
"additionalProperties": false,
"required": [
"requestType",
"deployType",
"deploySubType",
"deployLabel",
"deployConfig"
]
},
"OrderMethodAutoDBDeploy": {
"type": "object",
"properties": {
"requestType": {
"type": "string"
},
"deployType": {
"const": "AUTO"
},
"deploySubType": {
"const": "DB"
},
"deployConfig": {
"type": "string"
},
"deployLabel": {
"type": "string"
}
},
"additionalProperties": false,
"required": [
"requestType",
"deployType",
"deploySubType",
"deployLabel",
"deployConfig"
]
}
}
}
Thanks for your help/ideas.
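A hedged sketch of one way to express "at most one MANUAL item and at most one AUTO item": JSON Schema draft 2019-09 adds minContains/maxContains, which pair with contains to bound how many array items may match a subschema. This assumes moving off draft-07 is an option; property names are taken from the question, and the same pattern extends to deploySubType:
{
    "$schema": "https://json-schema.org/draft/2019-09/schema",
    "type": "object",
    "properties": {
        "orderMethods": {
            "type": "array",
            "allOf": [
                {
                    "contains": {
                        "required": ["deployType"],
                        "properties": { "deployType": { "const": "MANUAL" } }
                    },
                    "minContains": 0,
                    "maxContains": 1
                },
                {
                    "contains": {
                        "required": ["deployType"],
                        "properties": { "deployType": { "const": "AUTO" } }
                    },
                    "minContains": 0,
                    "maxContains": 1
                }
            ]
        }
    }
}
Combined with the existing oneOf definitions via allOf, this rejects the sample document above, which has two MANUAL entries.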

Running data flows in Azure Data Factory 4 times slower than running in Azure SSIS data flow

Here are the details of this performance test (very simple). I'm trying to understand why running data flows in the cloud-native Azure Data Factory environment (Spark) is so much slower than running data flows hosted in the Azure SSIS IR. My results show that the latest ADFv2 is over 4 times slower than the exact same data flow in Azure SSIS (even with an IR cluster still warm from a previous run). I like all the new features of the v2 data flows, but it hardly seems worth the performance hit unless I'm completely missing something. Eventually I'll be adding more complex data flows, but first I wanted to understand the baseline performance behavior.
Source:
1GB CSV stored in blob storage
Destination:
Azure SQL Database (one table, truncated before each run)
When using control flow in ADFv2 with a simple Copy activity (no data flow):
91 seconds
When using a native SSIS package with a data flow (Azure Feature Pack pulling from the same blob storage), running on Azure SSIS with 8 cores:
76 seconds
Pure ADF cloud pipeline using a data flow on a warm Azure IR (cached from the previous run), 8 cores (+ 8 driver cores) with default Spark partitioning
(includes 96 seconds of cluster startup, which is another thing I don't understand, since the IR's TTL is 30 minutes and it had run just 10 minutes prior; see the IR sketch after these timings):
360 seconds
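As a side note on the startup time, the data flow cluster size and TTL live on the Azure integration runtime definition; a minimal sketch of a managed IR with an 8-core General-purpose cluster and a 30-minute TTL (names and values here are illustrative assumptions, not taken from the pipelines below):
{
    "name": "DataFlowIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 30
                }
            }
        }
    }
}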
Pipeline (LandWithCopy)
{
"name": "LandWithCopy",
"properties": {
"activities": [
{
"name": "CopyData",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings": {
"type": "AzureBlobStorageReadSettings",
"recursive": true,
"wildcardFileName": "data.csv",
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "AzureSqlSink",
"preCopyScript": "TRUNCATE TABLE PatientAR",
"disableMetricsCollection": false
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"name": "RecordAction",
"type": "String"
},
"sink": {
"name": "RecordAction",
"type": "String"
}
},
{
"source": {
"name": "UniqueId",
"type": "String"
},
"sink": {
"name": "UniqueId",
"type": "String"
}
},
{
"source": {
"name": "Type",
"type": "String"
},
"sink": {
"name": "Type",
"type": "String"
}
},
{
"source": {
"name": "TypeDescription",
"type": "String"
},
"sink": {
"name": "TypeDescription",
"type": "String"
}
},
{
"source": {
"name": "PatientId",
"type": "String"
},
"sink": {
"name": "PatientId",
"type": "String"
}
},
{
"source": {
"name": "PatientVisitId",
"type": "String"
},
"sink": {
"name": "PatientVisitId",
"type": "String"
}
},
{
"source": {
"name": "VisitDateOfService",
"type": "String"
},
"sink": {
"name": "VisitDateOfService",
"type": "String"
}
},
{
"source": {
"name": "VisitDateOfEntry",
"type": "String"
},
"sink": {
"name": "VisitDateOfEntry",
"type": "String"
}
},
{
"source": {
"name": "DoctorId",
"type": "String"
},
"sink": {
"name": "DoctorId",
"type": "String"
}
},
{
"source": {
"name": "DoctorName",
"type": "String"
},
"sink": {
"name": "DoctorName",
"type": "String"
}
},
{
"source": {
"name": "FacilityId",
"type": "String"
},
"sink": {
"name": "FacilityId",
"type": "String"
}
},
{
"source": {
"name": "FacilityName",
"type": "String"
},
"sink": {
"name": "FacilityName",
"type": "String"
}
},
{
"source": {
"name": "CompanyName",
"type": "String"
},
"sink": {
"name": "CompanyName",
"type": "String"
}
},
{
"source": {
"name": "TicketNumber",
"type": "String"
},
"sink": {
"name": "TicketNumber",
"type": "String"
}
},
{
"source": {
"name": "TransactionDateOfEntry",
"type": "String"
},
"sink": {
"name": "TransactionDateOfEntry",
"type": "String"
}
},
{
"source": {
"name": "InternalCode",
"type": "String"
},
"sink": {
"name": "InternalCode",
"type": "String"
}
},
{
"source": {
"name": "ExternalCode",
"type": "String"
},
"sink": {
"name": "ExternalCode",
"type": "String"
}
},
{
"source": {
"name": "Description",
"type": "String"
},
"sink": {
"name": "Description",
"type": "String"
}
},
{
"source": {
"name": "Fee",
"type": "String"
},
"sink": {
"name": "Fee",
"type": "String"
}
},
{
"source": {
"name": "Units",
"type": "String"
},
"sink": {
"name": "Units",
"type": "String"
}
},
{
"source": {
"name": "AREffect",
"type": "String"
},
"sink": {
"name": "AREffect",
"type": "String"
}
},
{
"source": {
"name": "Action",
"type": "String"
},
"sink": {
"name": "Action",
"type": "String"
}
},
{
"source": {
"name": "InsuranceGroup",
"type": "String"
},
"sink": {
"name": "InsuranceGroup",
"type": "String"
}
},
{
"source": {
"name": "Payer",
"type": "String"
},
"sink": {
"name": "Payer",
"type": "String"
}
},
{
"source": {
"name": "PayerType",
"type": "String"
},
"sink": {
"name": "PayerType",
"type": "String"
}
},
{
"source": {
"name": "PatBalance",
"type": "String"
},
"sink": {
"name": "PatBalance",
"type": "String"
}
},
{
"source": {
"name": "InsBalance",
"type": "String"
},
"sink": {
"name": "InsBalance",
"type": "String"
}
},
{
"source": {
"name": "Charges",
"type": "String"
},
"sink": {
"name": "Charges",
"type": "String"
}
},
{
"source": {
"name": "Payments",
"type": "String"
},
"sink": {
"name": "Payments",
"type": "String"
}
},
{
"source": {
"name": "Adjustments",
"type": "String"
},
"sink": {
"name": "Adjustments",
"type": "String"
}
},
{
"source": {
"name": "TransferAmount",
"type": "String"
},
"sink": {
"name": "TransferAmount",
"type": "String"
}
},
{
"source": {
"name": "FiledAmount",
"type": "String"
},
"sink": {
"name": "FiledAmount",
"type": "String"
}
},
{
"source": {
"name": "CheckNumber",
"type": "String"
},
"sink": {
"name": "CheckNumber",
"type": "String"
}
},
{
"source": {
"name": "CheckDate",
"type": "String"
},
"sink": {
"name": "CheckDate",
"type": "String"
}
},
{
"source": {
"name": "Created",
"type": "String"
},
"sink": {
"name": "Created",
"type": "String"
}
},
{
"source": {
"name": "ClientTag",
"type": "String"
},
"sink": {
"name": "ClientTag",
"type": "String"
}
}
]
}
},
"inputs": [
{
"referenceName": "PAR_Source_DS",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "PAR_Sink_DS",
"type": "DatasetReference"
}
]
}
],
"annotations": []
}
}
Pipeline Data Flow (LandWithFlow)
{
"name": "WriteData",
"properties": {
"type": "MappingDataFlow",
"typeProperties": {
"sources": [
{
"dataset": {
"referenceName": "PAR_Source_DS",
"type": "DatasetReference"
},
"name": "GetData"
}
],
"sinks": [
{
"dataset": {
"referenceName": "PAR_Sink_DS",
"type": "DatasetReference"
},
"name": "WriteData"
}
],
"transformations": [],
"script": "source(output(\n\t\tRecordAction as string,\n\t\tUniqueId as string,\n\t\tType as string,\n\t\tTypeDescription as string,\n\t\tPatientId as string,\n\t\tPatientVisitId as string,\n\t\tVisitDateOfService as string,\n\t\tVisitDateOfEntry as string,\n\t\tDoctorId as string,\n\t\tDoctorName as string,\n\t\tFacilityId as string,\n\t\tFacilityName as string,\n\t\tCompanyName as string,\n\t\tTicketNumber as string,\n\t\tTransactionDateOfEntry as string,\n\t\tInternalCode as string,\n\t\tExternalCode as string,\n\t\tDescription as string,\n\t\tFee as string,\n\t\tUnits as string,\n\t\tAREffect as string,\n\t\tAction as string,\n\t\tInsuranceGroup as string,\n\t\tPayer as string,\n\t\tPayerType as string,\n\t\tPatBalance as string,\n\t\tInsBalance as string,\n\t\tCharges as string,\n\t\tPayments as string,\n\t\tAdjustments as string,\n\t\tTransferAmount as string,\n\t\tFiledAmount as string,\n\t\tCheckNumber as string,\n\t\tCheckDate as string,\n\t\tCreated as string,\n\t\tClientTag as string\n\t),\n\tallowSchemaDrift: true,\n\tvalidateSchema: false,\n\twildcardPaths:['data.csv']) ~> GetData\nGetData sink(input(\n\t\tRecordAction as string,\n\t\tUniqueId as string,\n\t\tType as string,\n\t\tTypeDescription as string,\n\t\tPatientId as string,\n\t\tPatientVisitId as string,\n\t\tVisitDateOfService as string,\n\t\tVisitDateOfEntry as string,\n\t\tDoctorId as string,\n\t\tDoctorName as string,\n\t\tFacilityId as string,\n\t\tFacilityName as string,\n\t\tCompanyName as string,\n\t\tTicketNumber as string,\n\t\tTransactionDateOfEntry as string,\n\t\tInternalCode as string,\n\t\tExternalCode as string,\n\t\tDescription as string,\n\t\tFee as string,\n\t\tUnits as string,\n\t\tAREffect as string,\n\t\tAction as string,\n\t\tInsuranceGroup as string,\n\t\tPayer as string,\n\t\tPayerType as string,\n\t\tPatBalance as string,\n\t\tInsBalance as string,\n\t\tCharges as string,\n\t\tPayments as string,\n\t\tAdjustments as string,\n\t\tTransferAmount as string,\n\t\tFiledAmount as string,\n\t\tCheckNumber as string,\n\t\tCheckDate as string,\n\t\tCreated as string,\n\t\tClientTag as string,\n\t\tFileName as string,\n\t\tPractice as string\n\t),\n\tallowSchemaDrift: true,\n\tvalidateSchema: false,\n\tdeletable:false,\n\tinsertable:true,\n\tupdateable:false,\n\tupsertable:false,\n\tformat: 'table',\n\tpreSQLs:['TRUNCATE TABLE PatientAR'],\n\tmapColumn(\n\t\tRecordAction,\n\t\tUniqueId,\n\t\tType,\n\t\tTypeDescription,\n\t\tPatientId,\n\t\tPatientVisitId,\n\t\tVisitDateOfService,\n\t\tVisitDateOfEntry,\n\t\tDoctorId,\n\t\tDoctorName,\n\t\tFacilityId,\n\t\tFacilityName,\n\t\tCompanyName,\n\t\tTicketNumber,\n\t\tTransactionDateOfEntry,\n\t\tInternalCode,\n\t\tExternalCode,\n\t\tDescription,\n\t\tFee,\n\t\tUnits,\n\t\tAREffect,\n\t\tAction,\n\t\tInsuranceGroup,\n\t\tPayer,\n\t\tPayerType,\n\t\tPatBalance,\n\t\tInsBalance,\n\t\tCharges,\n\t\tPayments,\n\t\tAdjustments,\n\t\tTransferAmount,\n\t\tFiledAmount,\n\t\tCheckNumber,\n\t\tCheckDate,\n\t\tCreated,\n\t\tClientTag\n\t),\n\tskipDuplicateMapInputs: true,\n\tskipDuplicateMapOutputs: true) ~> WriteData"
}
}
}
We are having the same issue: a Copy activity without a data flow is much faster than a data flow. Our case is also Copy activity vs. data flow; not sure if I'm doing anything wrong.
Our scenario is just copying 13 tables from source to destination based on a WHERE clause. We currently have two Copy activities, which take 1.5 minutes, so I was thinking of creating a data flow with one source and two sinks instead. But it runs for 5 to 8 minutes depending on cluster startup time. Hope we get an answer.

After running my logic app, it shows an error message: "We couldn't convert to Number."

I am collecting data from sensors and uploading it to Azure Cosmos DB. My logic app on Azure keeps failing with the following message:
{
"status": 400,
"message": "We couldn't convert to Number.\r\n inner exception: We
couldn't convert to Number.\r\nclientRequestId:xxxxxxx",
"source": "sql-ea.azconn-ea.p.azurewebsites.net"
}
Below is the input data from Cosmos DB. I noticed that it contains "ovf" and "inf" values. I have tried converting the data type of that column to other types like bigint and numeric and resubmitting; that still did not fix it.
{
"Device": "DL084",
"Device_Group": "DLL",
"Time": "2019-09-04T11:45:20.0000000",
"acc_x_avg": "ovf",
"acc_x_stdev": "inf",
"acc_y_avg": "3832.88",
"acc_y_stdev": "2850.45",
"acc_z_avg": "13304.38",
"acc_z_stdev": "2289.86",
"cc_volt": "3.900293",
"cp_volt": "1.940371",
"fp_volt": "0.718475",
"id": "xxxxxxxxxxxxxxxxxxxx",
"ls_volt": "4.882698",
"millis": "1073760.00",
"rs_rpm": "0.00",
"smp_volt": "1.070381"
}
I have also tried converting the data to string, which instead shows
"Failed to execute query. Error: Operand type clash: numeric is incompatible with ntext"
The question is how I can eliminate this error, and how I can confirm that it is definitely caused by the "inf" and "ovf" values. (A workaround sketch follows the workflow definition below.)
The logic app definition is below:
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"For_each": {
"actions": {
"Delete_a_document": {
"inputs": {
"host": {
"connection": {
"name": "#parameters('$connections')['documentdb']['connectionId']"
}
},
"method": "delete",
"path": "/dbs/#{encodeURIComponent('iot')}/colls/#{encodeURIComponent('messages')}/docs/#{encodeURIComponent(items('For_each')['id'])}"
},
"runAfter": {
"Insert_row": [
"Succeeded"
]
},
"type": "ApiConnection"
},
"Insert_row": {
"inputs": {
"body": {
"Device": "#{item()['device']}",
"Device_Group": "#{items('For_each')['devicegroup']}",
"Time": "#{addHours(addSeconds('1970-01-01 00:00:00', int(items('For_each')['time'])), 8)}",
"acc_x_avg": "#items('For_each')['acc_x_avg']",
"acc_x_stdev": "#items('For_each')['acc_x_stdev']",
"acc_y_avg": "#items('For_each')['acc_y_avg']",
"acc_y_stdev": "#items('For_each')['acc_y_stdev']",
"acc_z_avg": "#items('For_each')['acc_z_avg']",
"acc_z_stdev": "#items('For_each')['acc_z_stdev']",
"cc_volt": "#items('For_each')['cc_volt']",
"cp_volt": "#items('For_each')['cp_volt']",
"fp_volt": "#items('For_each')['fp_volt']",
"id": "#{guid()}",
"ls_volt": "#items('For_each')['ls_volt']",
"millis": "#items('For_each')['millis']",
"rs_rpm": "#items('For_each')['rs_rpm']",
"smp_volt": "#items('For_each')['smp_volt']"
},
"host": {
"connection": {
"name": "#parameters('$connections')['sql']['connectionId']"
}
},
"method": "post",
"path": "/datasets/default/tables/#{encodeURIComponent(encodeURIComponent('[dbo].[RCD]'))}/items"
},
"runAfter": {},
"type": "ApiConnection"
}
},
"foreach": "#body('Query_documents')?['Documents']",
"runAfter": {
"Query_documents": [
"Succeeded"
]
},
"type": "Foreach"
},
"Query_documents": {
"inputs": {
"body": {
"query": "SELECT \t\t * \nFROM \t\t\tc \nWHERE \t\t\tc.devicegroup = 'DLL' ORDER BY c._ts"
},
"headers": {
"x-ms-max-item-count": 1000
},
"host": {
"connection": {
"name": "#parameters('$connections')['documentdb']['connectionId']"
}
},
"method": "post",
"path": "/dbs/#{encodeURIComponent('iot')}/colls/#{encodeURIComponent('messages')}/query"
},
"runAfter": {},
"type": "ApiConnection"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {
"$connections": {
"defaultValue": {},
"type": "Object"
}
},
"triggers": {
"Recurrence": {
"recurrence": {
"frequency": "Minute",
"interval": 30
},
"type": "Recurrence"
}
}
}
}
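One hedged way to both confirm the hypothesis and work around it is to null out the non-numeric sensor readings before the insert, instead of passing "ovf"/"inf" through. A sketch for a single field of the Insert_row body (the same pattern would apply to the other numeric fields, and it assumes the target SQL column is nullable):
"Insert_row": {
    "inputs": {
        "body": {
            "acc_x_avg": "@if(or(equals(items('For_each')?['acc_x_avg'], 'ovf'), equals(items('For_each')?['acc_x_avg'], 'inf')), null, float(items('For_each')?['acc_x_avg']))"
        }
    }
}
If rows stop failing once the guard is in place, that confirms the error is caused by the "ovf"/"inf" values.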

JSONSchema validation failure with $ref (Draft v3)

I have created a JSON schema following the draft v3 specification. The schema looks like this:
{
"$schema": "http://json-schema.org/draft-03/schema#",
"additionalProperties": false,
"type": "object",
"properties": {
"ExecutionPlanList": {
"type": "array",
"items": [{
"type": "object",
"properties": {
"model": {
"required": true,
"properties": {
"featureList": {
"required": true,
"items": {
"properties": {
"featureName": {
"type": ["string", "null"]
},
"featureType": {
"type": "string"
}
},
"type": "object"
},
"type": "array"
},
"modelId": {
"required": true,
"type": "string"
}
},
"type": "object"
},
"cascadeSteps": {
"required": false,
"items": {
"properties": {
"binaryModel": {
"$ref": "#/properties/ExecutionPlanList/items/properties/model",
"required": true
},
"threshold": {
"required": true,
"default": "0.0",
"maximum": 100.0,
"type": "number"
},
"backupModel": {
"$ref": "#/properties/ExecutionPlanList/items/properties/model",
"required": true
}
}
},
"type": "array"
},
"marketplaceId": {
"required": true,
"type": "integer"
}
}
}]
}
},
"required": true
}
Essentially, ExecutionPlanList contains a list of entries, each with a model and cascadeSteps, and each cascade step contains two models plus a number. So I'm trying to re-use the model schema inside cascadeSteps, but validation (https://www.jsonschemavalidator.net/) fails with Could not resolve schema reference '#/properties/ExecutionPlanList/items/properties/model'.
I would appreciate any pointers on what's wrong with this schema.
What about adding a 'definitions' section and referencing it like this:
{
"$schema": "http://json-schema.org/draft-03/schema#",
"additionalProperties": false,
"type": "object",
"definitions": {
"model": {
"required": true,
"properties": {
"featureList": {
"required": true,
"items": {
"properties": {
"featureName": {
"type": ["string", "null"]
},
"featureType": {
"type": "string"
}
},
"type": "object"
},
"type": "array"
},
"modelId": {
"required": true,
"type": "string"
}
},
"type": "object"
}
},
"properties": {
"ExecutionPlanList": {
"type": "array",
"items": [{
"type": "object",
"properties": {
"model": {
"$ref" : "#/definitions/model"
},
"cascadeSteps": {
"required": false,
"items": {
"properties": {
"binaryModel": {
"$ref" : "#/definitions/model",
"required": true
},
"threshold": {
"required": true,
"default": "0.0",
"maximum": 100.0,
"type": "number"
},
"backupModel": {
"$ref" : "#/definitions/model",
"required": true
}
}
},
"type": "array"
},
"marketplaceId": {
"required": true,
"type": "integer"
}
}
}]
}
},
"required": true
}
The reference should be '#/properties/ExecutionPlanList/items/0/properties/model': since items is given as an array (tuple form), the item schema sits under index 0.
By the way, the schema as written validates only the first array item. (A sketch of the fix follows.)
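On that last point: in draft v3, as in later drafts, the array form "items": [ ... ] is tuple validation and only applies to the element at the matching index, here index 0. To validate every element of ExecutionPlanList against the same schema, use a single schema object instead (a sketch based on the corrected schema above; cascadeSteps and marketplaceId omitted for brevity):
"ExecutionPlanList": {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "model": { "$ref": "#/definitions/model" }
        }
    }
}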

Failed to execute 'send' on 'XMLHttpRequest' for only one url in Loopback

A StrongLoop/LoopBack Node.js server used with the 'ng-admin' editor and a SQLite DB. I need to get the count of entities:
var Httpreq = new XMLHttpRequest();
Httpreq.open('GET', yourUrl, false);
Httpreq.send(null);
return Httpreq.responseText;
where yourUrl is like http://localhost:3000/api/v1/entity/count.
All URLs work except the one for the entity named 'Advertisement', where I get angular.js:12783 Error: Failed to execute 'send' on 'XMLHttpRequest': Failed to load http://localhost:3000/api/v1/advertisement/count. The same URL works in the API explorer.
Advertisement.json:
{
"name": "Advertisement",
"base": "EntityBase",
"plural": "Advertisement",
"options": {
"validateUpsert": true,
"sqlite3": {
"table": "advertisement"
}
},
"properties": {
"name": {
"type": "string",
"required": true
},
"category_id": {
"type": "number"
},
"due_date": {
"type": "string"
},
"from_date": {
"type": "string"
},
"phone": {
"type": "string"
},
"site": {
"type": "string"
}
},
"validations": [],
"relations": {
"category": {
"type": "belongsTo",
"model": "Category",
"foreignKey": "category_id"
},
"photos": {
"type": "hasMany",
"model": "Photo",
"foreignKey": "advertisement_id"
}
},
"acls": [],
"methods": {},
"mixins": {
"Timestamp": {},
"SoftDelete": {},
"GenderAge": {},
"Descripted": {}
}
}
Renamed the entity to Commercial to fix this; the request was most likely being blocked by a browser ad blocker, which filters URLs containing 'advertisement'.
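If renaming the model is undesirable, LoopBack's model definition JSON should also allow keeping the model name while remapping only the REST path via http.path, so the URL no longer contains 'advertisement'. A hedged sketch (unverified against this exact LoopBack version; the rest of Advertisement.json stays unchanged):
{
    "name": "Advertisement",
    "base": "EntityBase",
    "plural": "Advertisement",
    "http": { "path": "/commercial" }
}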