BigCommerce / Skyvia SQL data integration

I need to export a table of shipped orders from SQL Server 2016 to the BigCommerce OrderShipments table.
The BigCommerce API requires the order line items as an array of objects.
I have set the SQL table's items column data type to nvarchar(max).
This is the final items array in my SQL shipped orders table:
[
{"order_product_id":16,"product_id":1920,"quantity":1},
{"order_product_id":17,"product_id":1921,"quantity":1}
]
This fails with an error.
Is this array text correct? Any suggestions?
Thanks

It looks like the product_id field is read-only; have you tried removing this field? Could you also share the error you're seeing after making this request? I've included an example request I made that worked on a sandbox below.
Sent as a POST request to /v2/orders/{order_id}/shipments:
{
"order_address_id":"1",
"shipping_provider":"",
"items": [
{
"order_product_id": 1,
"quantity":1
},
{
"order_product_id": 2,
"quantity":1
}
]
}
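If it helps, here is the same request as a minimal Python sketch using the requests library; the store hash, access token, and order ID below are placeholders, not values from your setup.
import requests

# Placeholder values: substitute your own store hash, access token, and order ID.
STORE_HASH = "your-store-hash"
ACCESS_TOKEN = "your-access-token"
ORDER_ID = 123

url = f"https://api.bigcommerce.com/stores/{STORE_HASH}/v2/orders/{ORDER_ID}/shipments"
payload = {
    "order_address_id": "1",
    "shipping_provider": "",
    # product_id is omitted, since it appears to be read-only on this endpoint.
    "items": [
        {"order_product_id": 1, "quantity": 1},
        {"order_product_id": 2, "quantity": 1},
    ],
}
response = requests.post(
    url,
    json=payload,
    headers={"X-Auth-Token": ACCESS_TOKEN, "Accept": "application/json"},
)
print(response.status_code, response.json())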


Is it correct to do 1-to-1 mapping in Update API request param

I need to do a bulk update of user details.
Say the user object has the following fields:
User First Name
User ID
User Last Name
User Email ID
User Country
An admin can upload the updated user data through a CSV file. Values with mismatched data need to be updated. The most likely request format for this bulk update is the following (Method 1):
"data" : {
"userArray" : [
{
"id" : 2343565432,
"f_name" : "David",
"email" : "david#testmail.com"
},
{
"id" : 2344354351,
"country" : "United States",
}
.
.
.
]
}
Method 2: I would send the details as paired arrays, one with the user IDs and one with the corresponding field values:
"data" : {
"userArray" : [
{
"ids" : [23234323432, 4543543543, 45654543543],
"country" : ["United States", "Israel", "Mexico"]
},
{
"ids" : [2323432334543, 567676565],
"email" : ["groove#drivein.com", "zara#foobar.com"]
},
.
.
.
]
}
In Method 1, I need to query the database for every user update, and the number of queries grows with the number of users edited. In contrast, with Method 2 I query the database only once per param (I add the array to the query and fetch all rows whose user ID is in the given array in a single query), and then I can update each row with its respective details.
Most update APIs I see on the internet use the format in Method 1, which gives good readability. But what is the advantage of going with Method 1 rather than Method 2? (Method 2 saves some query time when the number of users is large, which can improve performance.)
I almost always see it done in the Method 1 style.
With that said, I don't understand why your DB performance depends on the way the input data is structured. That's just the way information gets into your code.
You can have the client send the data as Method 1 and then shim it to Method 2 on the backend if that helps you structure the DB queries better.
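For illustration, a minimal sketch of that shim in Python, assuming Method 1 payloads where each entry carries an id plus any subset of updatable fields (field names are the ones from the question):
from collections import defaultdict

def shim_method1_to_method2(user_array):
    """Group Method 1 entries by field so each field can be updated in bulk.

    Returns a mapping like {"country": [(id, value), ...]}, letting the
    backend issue one query per field instead of one query per user.
    """
    grouped = defaultdict(list)
    for entry in user_array:
        user_id = entry["id"]
        for field, value in entry.items():
            if field != "id":
                grouped[field].append((user_id, value))
    return grouped

# Example with the two Method 1 entries from the question.
payload = [
    {"id": 2343565432, "f_name": "David", "email": "david@testmail.com"},
    {"id": 2344354351, "country": "United States"},
]
print(dict(shim_method1_to_method2(payload)))
# {'f_name': [(2343565432, 'David')],
#  'email': [(2343565432, 'david@testmail.com')],
#  'country': [(2344354351, 'United States')]}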

Selecting the latest document for each "Group"

I am using the Azure Cosmos DB SQL API to try to achieve the following:
We have device data stored within a collection, and we would like to retrieve the latest event data per device serial efficiently, without having to run N separate queries, one per device.
SELECT *
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1') ORDER BY c.EventEnqueuedUtcTime DESC
I'm assuming I would need to use GROUP BY: https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-group-by
Any assistance would be greatly appreciated.
Rough example of the data:
[
{
"temperature": 25.22063251827873,
"humidity": 71.54208429695204,
"serial": "V55555555",
"testid": 1,
"location": {
"type": "Point",
"coordinates": [
30.843687,
-29.789895
]
},
"EventProcessedUtcTime": "2020-09-07T12:04:34.5861918Z",
"PartitionId": 0,
"EventEnqueuedUtcTime": "2020-09-07T12:04:34.4700000Z",
"IoTHub": {
"MessageId": null,
"CorrelationId": null,
"ConnectionDeviceId": "V55555555",
"ConnectionDeviceGenerationId": "637323979596346475",
"EnqueuedTime": "2020-09-07T12:04:34.0000000"
},
"Name": "admin",
"id": "6dac491e-1f28-450d-bf97-3a15a0efaad8",
"_rid": "i2UhAI7ofAo3AQAAAAAAAA==",
"_self": "dbs/i2UhAA==/colls/i2UhAI7ofAo=/docs/i2UhAI7ofAo3AQAAAAAAAA==/",
"_etag": "\"430131c1-0000-0100-0000-5f5621d80000\"",
"_attachments": "attachments/",
"_ts": 1599480280
}
]
UPDATE:
So the following returns the correct data, but sadly you can only return fields that appear in your GROUP BY or in an aggregate function (i.e., you can't do SELECT *):
SELECT c.serial, MAX(c.EventProcessedUtcTime)
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1')
GROUP BY c.serial
[
{
"serial": "synap-aim-g1",
"$1": "2020-09-09T06:29:42.6812629Z"
},
{
"serial": "V55555555",
"$1": "2020-09-07T12:04:34.5861918Z"
}
]
Thanks to @AnuragSharma-MSFT for the help:
I am afraid there is no direct way to achieve it using a query in Cosmos DB. However, you can refer to the link below for the same topic. If you are using any SDK, this would help in achieving the desired functionality: https://learn.microsoft.com/en-us/answers/questions/38454/index.html
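As a rough illustration of the SDK route, here is a hypothetical Python sketch using the azure-cosmos package; the connection values are placeholders, and it assumes your SDK version supports cross-partition GROUP BY. It runs the GROUP BY to find the latest timestamp per serial, then fetches each full document in a second query:
from azure.cosmos import CosmosClient

# Placeholder connection values.
client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("<db>").get_container_client("<coll>")

# Step 1: latest EventProcessedUtcTime per serial.
latest = container.query_items(
    query="SELECT c.serial, MAX(c.EventProcessedUtcTime) AS latest "
          "FROM c WHERE c.serial IN ('V55555555','synap-aim-g1') "
          "GROUP BY c.serial",
    enable_cross_partition_query=True,
)

# Step 2: fetch the full document for each (serial, timestamp) pair.
for row in latest:
    docs = container.query_items(
        query="SELECT * FROM c WHERE c.serial = @serial "
              "AND c.EventProcessedUtcTime = @ts",
        parameters=[
            {"name": "@serial", "value": row["serial"]},
            {"name": "@ts", "value": row["latest"]},
        ],
        enable_cross_partition_query=True,
    )
    for doc in docs:
        print(doc["serial"], doc["EventProcessedUtcTime"])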
We're glad that you resolved it in this way, thanks for sharing the update.
If the question is really about an efficient approach to this particular query scenario, we can consider denormalization in cases where the query language itself doesn't offer an efficient solution. This guide on partitioning and modeling has a relevant section on getting the latest items in a feed.
We just need to get the 100 most recent posts, without the need to paginate through the entire data set.
So to optimize this last request, we introduce a third container to our design, entirely dedicated to serving this request. We denormalize our posts to that new feed container.
Following this approach, you could create a "Feed" or "LatestEvent" container dedicated to the "latest" query. Using the device serial as the id, within a single partition key, guarantees that there is only one (the most recent) event item per device, and that it can be fetched by device serial or listed at the lowest possible cost with a simple query:
SELECT *
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1')
The change feed could be used to upsert the latest event, so that the latest event is created or overwritten in the "LatestEvent" container whenever its source item is created in the main container.
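A minimal sketch of that upsert loop, assuming the azure-cosmos Python SDK's change feed reader; the container names and the serial-as-id keying are the hypothetical choices from the design above:
from azure.cosmos import CosmosClient

# Placeholder connection values and assumed container names.
client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
db = client.get_database_client("<db>")
events = db.get_container_client("Events")       # main container (assumed name)
latest = db.get_container_client("LatestEvent")  # denormalized container

# Read items from the main container's change feed and upsert each one into
# LatestEvent keyed by serial, so only the most recent event per device survives.
for item in events.query_items_change_feed(is_start_from_beginning=True):
    doc = {k: v for k, v in item.items() if not k.startswith("_")}
    doc["id"] = doc["serial"]  # one document per device serial
    latest.upsert_item(doc)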

Cumulocity Inventory API filter by Creation Date

I'm currently trying to implement a simple date filter for the Inventory API using the query language. The filter should return a list of managed objects which were created after a given date. For some reason I always receive an empty list as the result, but the example in the query language documentation looks the same as my query:
GET {{url}}/inventory/managedObjects?query=creationTime+gt+'2018-12-01T09:00:53.351Z'
gives me
{
"managedObjects": [],
"next": "{{url}}/inventory/managedObjects?query=creationTime+gt+'2018-12-01T09:00:53.351Z'&pageSize=5&currentPage=2",
"statistics": {
"currentPage": 1,
"pageSize": 5
},
"self": "{{url}}/inventory/managedObjects?query=creationTime+gt+'2018-12-01T09:00:53.351Z'&pageSize=5&currentPage=1"
}
And if I try this structure for the timestamp I even receive an error:
GET {{url}}/inventory/managedObjects?query=creationTime+gt+'2018-12-01T09:00:53.351%2B01:00'
{
"error": "inventory/Invalid Data",
"info": "https://www.cumulocity.com/guides/reference-guide/#error_reporting",
"message": "Find by filter query failed : Query 'creationTime gt '2018-12-01T09:00:00'' could not be understood. Please try again."
}
Try filtering by creationTime.date instead.
The background is that the timestamps are stored as MongoDB dates.
You can also check the device list filter in Device Management, which filters on creationTime as well.
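For example, a quick check from Python (a sketch; the tenant URL and credentials are placeholders, and requests URL-encodes the query for you, which sidesteps the +/%2B pitfall above):
import requests

# Placeholder tenant URL and credentials.
resp = requests.get(
    "https://<tenant>.cumulocity.com/inventory/managedObjects",
    params={"query": "creationTime.date gt '2018-12-01T09:00:53.351Z'"},
    auth=("<tenant>/<user>", "<password>"),
    headers={"Accept": "application/json"},
)
print(resp.status_code)
for mo in resp.json().get("managedObjects", []):
    print(mo["id"], mo.get("creationTime"))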

Square Connect API: Retrieving all items within a category

I have been reading over the Square Connect API and experimenting with the catalog portion.
I am unable to find how to retrieve all the items and their data associated with a particular category. Can someone please point me in the right direction?
I thought it was the BatchRetrieveCatalogObjects endpoint. I was using the category ID, but it was only returning the category object's data; I need the IDs of each of the items so I can retrieve their individual data.
I was looking to populate a list of all the items and their data in one JSON request.
JSON data to be passed to endpoint:
data = {
"object_ids": [
"category id"
],
"include_related_objects": True
}
My connection to the API:
category_item_endpoint = self.connection.post('/v2/catalog/batch-retrieve', data)
I am using python3 and the requests library.
In order to list items in a category, I found it easiest to use the /v2/catalog/search endpoint. Simply follow the documentation on what parameters are accepted. Below are the search parameters that I used to list items by category ID.
let sParams: JSON = [
"object_types": [
"ITEM"
],
"include_related_objects": true,
"include_deleted_objects": false,
"query": [
"exact_query": [
"attribute_name": "category_id",
"attribute_value": id
]
],
"limit": 1000
]
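Since the question uses Python 3 and the requests library, here is a rough equivalent of that search; the access token and category ID are placeholders:
import requests

ACCESS_TOKEN = "your-access-token"  # placeholder
CATEGORY_ID = "your-category-id"    # placeholder

search_params = {
    "object_types": ["ITEM"],
    "include_related_objects": True,
    "include_deleted_objects": False,
    "query": {
        "exact_query": {
            "attribute_name": "category_id",
            "attribute_value": CATEGORY_ID,
        }
    },
    "limit": 1000,
}
resp = requests.post(
    "https://connect.squareup.com/v2/catalog/search",
    json=search_params,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
for obj in resp.json().get("objects", []):
    print(obj["id"], obj["item_data"]["name"])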
You'd probably have the most luck listing your entire catalog with GET /v2/catalog/list and then filtering (in this case on specific category_ids) after you get the data. Based on the documentation, doing what you want doesn't seem possible with a single endpoint/query combination.

Marketo API error 1006: ignore fields that don't match

I am trying to do a simple lead create/update via the Marketo API from a web form. I am posting data to multiple sources, not just Marketo, so I have other fields that don't match any fields during the Marketo update. This throws API error 1006: http://developers.marketo.com/documentation/rest/error-codes/
Here is an example JSON:
{
"action": "createOrUpdate",
"lookupField": "email"
"input": [
{
"firstName": "Matthew Edward",
"campaign_id": "testingCID",
"lastName": "King",
"email": "mking#umbel.com"
"message": "",
}
]
}
Since "campaign_id" and "message" aren't fields in the Lead capture, it throws the error and won't import anything. I would rather not write a function that cleans this data JUST for the the Marketo import. It would make future web forms more scalable if we didn't have to create a "blacklist" of fields that can't be imported into Marketo.
Is there anyway to avoid this error? Thanks.
This is by design. As you mentioned, the Marketo API will return the 1006 error code if a lead field you attempt to update does not exist in Marketo.
If writing a function that excludes this data is not an option, another option would be to create custom fields in Marketo for each custom field you need to update via the API.
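If you do end up filtering after all, the cleaning function can stay small. A sketch, assuming you maintain a whitelist of the Marketo field names your form actually uses (field names below are taken from the question):
# Hypothetical whitelist of the Marketo lead fields this form actually uses.
MARKETO_FIELDS = {"firstName", "lastName", "email"}

def clean_for_marketo(lead):
    """Drop any keys Marketo doesn't know about, to avoid error 1006."""
    return {k: v for k, v in lead.items() if k in MARKETO_FIELDS}

# Example payload from the question; the non-Marketo fields are dropped.
web_form_leads = [
    {
        "firstName": "Matthew Edward",
        "campaign_id": "testingCID",  # not a Marketo field; dropped
        "lastName": "King",
        "email": "mking@umbel.com",
        "message": "",                # not a Marketo field; dropped
    }
]
payload = {
    "action": "createOrUpdate",
    "lookupField": "email",
    "input": [clean_for_marketo(lead) for lead in web_form_leads],
}
print(payload)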