How to create an upsert bulk job for the 'Account' object in Salesforce? What should the externalIdFieldName be?

I tried the below body (payload) to create an upsert bulk job that pushes Account records to Salesforce:
{
  "object" : "Account",
  "externalIdFieldName" : "Website",
  "contentType" : "CSV",
  "operation" : "upsert",
  "lineEnding" : "LF"
}
However, I receive the error below and am unable to find a way out. Could you please help with the correct 'externalIdFieldName'?
[
  {
    "errorCode": "INVALIDJOB",
    "message": "InvalidJob : Field name provided, website does not match an External ID, Salesforce Id, or indexed field for Account"
  }
]

As the message states, Account.Website does not meet the qualifications to be used for upsert matching. A field used for upsert matching must be the Id field, be indexed, or have the idLookup property, none of which this field possesses.
You can look up these properties for standard fields in the SOAP Reference. Other than Id, there aren't any standard fields you can upsert against on Account; you'll be limited to custom fields that have the External Id property set (making them indexed).
For contrast, see Contact, where Email has the idLookup property and can be an upsert target.
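For illustration, here is a minimal sketch of creating the ingest job against a custom External ID field via Bulk API 2.0, assuming a hypothetical Account_Ref__c custom field marked as External ID; the instance URL, API version, and access token are placeholders for your org:

import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # assumption: your org's instance URL
ACCESS_TOKEN = "00D...your_access_token"                 # assumption: OAuth access token

# Bulk API 2.0: create an ingest job that upserts Accounts on a custom External ID field.
job_body = {
    "object": "Account",
    "externalIdFieldName": "Account_Ref__c",  # hypothetical custom field with the External ID attribute
    "contentType": "CSV",
    "operation": "upsert",
    "lineEnding": "LF",
}

response = requests.post(
    f"{INSTANCE_URL}/services/data/v52.0/jobs/ingest",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=job_body,
)
response.raise_for_status()
print(response.json())  # contains the job id used for the subsequent CSV upload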

How do I insert into a user column in a SharePoint list using Graph API?

I am trying to create an item in a SharePoint list using the Microsoft Graph API. All the fields insert correctly, except that when I add a user column I get the following error:
"code": "generalException",
"message": "General exception while processing".
Based on research, to insert into a user column the user's LookupId is required. My request body for the user column is as follows:
{
  "fields": {
    "[ColumnName]LookupId": "12"
  }
}
If anybody could advise what I'm doing wrong, or whether I can insert using the user's email instead (which would be even better), I'd appreciate it.
Cheers.
Your request is fine, but this body only works for lookup/user columns where the "Allow multiple selections" setting is false. I guess in your case it's true.
You can check it with the endpoint
GET https://graph.microsoft.com/v1.0/sites/{{SiteId}}/lists/{{ListName}}/contentTypes?expand=columns(select=name,type,personOrGroup)
where personOrGroup.allowMultipleSelection will show the flag.
For a user or lookup column where multiple selection is allowed, use the following body (you can of course pass multiple values in the array):
{
  "fields": {
    "[columnName]LookupId@odata.type": "Collection(Edm.String)",
    "[columnName]LookupId": ["12"]
  }
}
As for referring to user fields by email, I don't think it's possible with the Graph API, but you may check whether the SharePoint REST API v1 supports that.
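As a minimal sketch, creating the item could look like this (assuming a valid Graph access token, placeholder site and list IDs, and a hypothetical multi-select user column named Approvers):

import requests

ACCESS_TOKEN = "eyJ0eXAi...your_graph_token"  # assumption: Graph API bearer token
SITE_ID = "{SiteId}"                          # assumption: your site id
LIST_ID = "{ListId}"                          # assumption: your list id

# "Approvers" is a hypothetical user column with "Allow multiple selections" enabled.
item_body = {
    "fields": {
        "Title": "New item",
        "ApproversLookupId@odata.type": "Collection(Edm.String)",
        "ApproversLookupId": ["12", "34"],
    }
}

response = requests.post(
    f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}/lists/{LIST_ID}/items",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=item_body,
)
response.raise_for_status()
print(response.json()["id"])  # id of the newly created list item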

Is it correct to do 1-to-1 mapping in Update API request param

I need to do a bulk update of user details.
Say the user object has the following fields:
User First Name
User ID
User Last Name
User Email ID
User Country
An admin can upload updated user data through a CSV file. Values with mismatching data need to be updated. The most probable request format for this bulk update is (Method 1):
"data" : {
"userArray" : [
{
"id" : 2343565432,
"f_name" : "David",
"email" : "david#testmail.com"
},
{
"id" : 2344354351,
"country" : "United States",
}
.
.
.
]
}
Method 2: I would send the details in grouped arrays, each grouping a list of user ids with the corresponding values of a single field:
"data" : {
"userArray" : [
{
"ids" : [23234323432, 4543543543, 45654543543],
"country" : ["United States", "Israel", "Mexico"]
},
{
"ids" : [2323432334543, 567676565],
"email" : ["groove#drivein.com", "zara#foobar.com"]
},
.
.
.
]
}
In Method 1, I need to query the database for every user update, and the number of queries grows with the number of users edited. In contrast, with Method 2 I query the database only once per field (I put the id array into the query and fetch all rows whose user id is in that array in a single query), and then update each row with its respective details.
Most update APIs I have seen on the internet take params in the Method 1 format, which gives good readability. But I need to know: what is the advantage of going with Method 1 rather than Method 2? (Method 2 saves some query time when the number of users is large, which can improve performance.)
I almost always see it done in the Method 1 style.
With that said, I don't understand why your DB performance should depend on the way the input data is structured. That's just the way information gets into your code.
You can have the client send the data as Method 1 and then shim it to Method 2 on the backend if that helps you structure the DB queries better.
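A minimal sketch of that shim in Python (framework-agnostic; the field names simply mirror the question's payload): it regroups the per-user patches by field so each field can then be applied with a single id-list query.

from collections import defaultdict

def shim_to_batches(user_array):
    """Regroup Method 1 style per-user patches into Method 2 style per-field batches."""
    batches = defaultdict(lambda: {"ids": [], "values": []})
    for patch in user_array:
        user_id = patch["id"]
        for field, value in patch.items():
            if field == "id":
                continue
            batches[field]["ids"].append(user_id)
            batches[field]["values"].append(value)
    return dict(batches)

# Example with the payload from the question:
user_array = [
    {"id": 2343565432, "f_name": "David", "email": "david@testmail.com"},
    {"id": 2344354351, "country": "United States"},
]
print(shim_to_batches(user_array))
# -> {'f_name': {'ids': [2343565432], 'values': ['David']},
#     'email': {'ids': [2343565432], 'values': ['david@testmail.com']},
#     'country': {'ids': [2344354351], 'values': ['United States']}}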

Change/Update Field name in the NiFi Schema Text property Across various parallel flows

I have a few identical parallel flows (as shown in the screenshot). I have a ConvertRecord processor in each of them, and in the Record Reader I have used "Schema Text Field Property" as the access strategy and specified the "Schema Text". For example:
{
  "type": "record",
  "name": "AVLRecord0",
  "fields": [
    { "name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis" },
    { "name": "Field1", "type": "double" },
    { "name": "Field2", "type": "double" },
    { "name": "Field3", "type": "double" },
    { "name": "Filename", "type": "string" }
  ]
}
Let's say I have used the above schema across the ConvertRecord processors of the various parallel flows, and now I want to update one field name from Field to Field_Name. Is there any way to do this in one go across all the ConvertRecord Schema Text properties?
If I want to change/update one of the fields in the Schema Text, do I have to change the field name in each processor manually? Or is there a global way that will change the field name across all the parallel flows I have?
Is there any way I can update the Schema Text across the various processors in one go?
Any help is much appreciated! Thanks
As you are using the Schema Text Field Property, you would need to change the schema in every ConvertRecord processor manually.
Try this approach instead:
In the ConvertRecord processor's reader, set the Schema Access Strategy to
Use Schema Name Property
Then set up an AvroSchemaRegistry controller service and define your schema by adding a new property.
I have added a property named sch (which the schema.name attribute will reference) and defined the Avro schema as its value.
After the GetFile processor, use an UpdateAttribute processor to add a schema.name attribute (for example, with value sch) to the flowfile.
Now, in the reader controller service, set the Schema Access Strategy to Use Schema Name Property and the Schema Registry to the AvroSchemaRegistry that has already been set up.
By following this approach we are not defining the schema in every ConvertRecord processor; instead we refer to the same schema defined in the AvroSchemaRegistry, so if you want to change one field name you can simply go into the registry and change its value.
Flow:
1.GetFile
2.UpdateAttribute //add schema.name attribute
3.ConvertRecord //use the AvroSchemaRegistry and the Use Schema Name Property access strategy
..other processors
Refer to this link for more details regarding defining/using the AvroSchemaRegistry.

"Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)" in GraphQL SDL( aws appsync)

I am new to GraphQL. I am implementing a React Native app using AWS AppSync. Following is the code I have written in my schema:
type Messages {
  id: ID!
  createdAt: String!
  updateAt: String!
  text: String!
  sendBy: Person! @relation(name: "UserMessages")
}

type Person {
  id: ID!
  createdAt: String!
  updateAt: String!
  name: String!
  messages: [Messages!]! @relation(name: "UserMessages")
}
When I try to query the sendBy value, it gives me an error:
query getMessages {
  getMessages(id: "a0546b5d-1faf-444c-b243-fab5e1f47d2d") {
    id
    text
    sendBy {
      name
    }
  }
}
{
  "data": {
    "getMessages": null
  },
  "errors": [
    {
      "path": [
        "getMessages",
        "sendBy"
      ],
      "locations": null,
      "message": "Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)"
    }
  ]
}
I don't understand that error. Please help me. Thanks in advance!
This might sound silly, but developers still make this kind of mistake, and so did I. In a subscription, the client can retrieve only those fields that are included in the mutation's selection set. For example, if your mutation query looks like this:
mutation newMessage {
  addMessage(input: {
    field_1: "",
    field_2: "",
    field_n: ""
  }) {
    field_1
    field_2
  }
}
In the above mutation we are returning only field_1 and field_2, so a client can retrieve only a subset of these fields.
So if in the schema you have defined field_3 as required (!) for a subscription, and you are not returning field_3 in the above mutation, this will throw an error saying Cannot return null for non-nullable type: field_3.
Looks like the path [getMessages, sendBy] is resolving to a null value, and your schema definition (sendBy: Person!) says sendBy field cannot resolve to null. Please check if a resolver is attached to the field sendBy in type Messages.
If there is a resolver attached, please enable CloudWatch logs for this API (this can be done on the Settings page in the console; select the ALL option). You should then be able to check what the resolved request/response mapping was for the path [getMessages, 0, sendBy].
I encountered a similar issue while working on my setup with CloudFormation. In my particular situation I didn't configure the Projection correctly for the Global Secondary Indexes. Since the attributes weren't projected into the index, I was getting an ID in the response but null for all other values. Updating the ProjectionType to 'ALL' resolved my issue. Not to say that is the "correct" setting but for my particular implementation it was needed.
More on Global Secondary Index Projection for CloudFormation can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html
Attributes that are copied (projected) from the source table into the index. These attributes are additions to the primary key attributes and index key attributes, which are automatically projected.
I had a similar issue.
What happened to me was a problem with an update resolver. I was updating a field that was used as a GSI (Global Secondary Index) key, but I was not updating the GSI itself, so when querying by the GSI the index entry still existed but the key for that attribute had changed.
If you are using DynamoDB, you can start debugging there: check the item and see whether it still references the primary key and index attributes you expect.
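For example, a quick sketch with boto3 to see what the index actually returns (the table name MessagesTable and index name sendBy-index are hypothetical; substitute the ones backing your AppSync resolver):

import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table and GSI names; substitute your own.
response = dynamodb.query(
    TableName="MessagesTable",
    IndexName="sendBy-index",
    KeyConditionExpression="senderId = :sid",
    ExpressionAttributeValues={":sid": {"S": "828949592937598976"}},
)

# If the projected attributes are missing, or the key no longer matches the item,
# the resolver will hand GraphQL a null for the non-nullable field.
for item in response["Items"]:
    print(item)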
I had a similar issue.
For me, the problem lay with the return type in the schema. I was doing a query by PK on a DynamoDB table, so it was returning a list of items, but in my schema I had defined the return type as a single object.
The error was resolved when I made the return type in the schema a list of items, e.g. (the query field name here is just illustrative)
getMySchema(id: ID!): [MySchema]
instead of a single object
getMySchema(id: ID!): MySchema
where
type MySchema {
  id: ID!
  name: String!
  details: String!
}
This error is thrown for multiple reasons, so your cause could be something else; I have just posted one of the scenarios.

How to update old data with new data in Firebase?

I'm developing a chat app. In my app there are 4 nodes called User, Recent, Message, and Group. I'm using Objective-C. My message object looks like:
{
  "createdAt" : 1.486618017521277E9,
  "groupId" : "-KcWKeXXQ9tjYsYfCknx",
  "objectId" : "-KcWKftK8GiMxxAnarL5",
  "senderId" : "828949592937598976",
  "senderImage" : "http://hairstyleonpoint.com/wp-content/uploads/2014/10/marcello-alvarez.png",
  "senderName" : "John Doee",
  "status" : "Seen",
  "text" : "Hi all",
  "type" : "text",
  "updatedAt" : 1.486622011467733E9
}
When I update a User, every message's senderName should be updated accordingly. Is there a way to do this in code, or do I need to write a rule? I'm a newbie to Firebase, so please suggest a way to do this; if it's possible with rules, please guide me on that.
It's not possible to do this via rules, so you have to manually iterate over all your data and update the senderName.
Anyway, I think you would probably be better off saving {senderID: $someUserID} instead, like you would do in a relational database. The user ID is static, so you can change the user without having to update all the instances where you use it.
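If you do need the fan-out update, here is a minimal sketch assuming the Firebase Admin SDK for Python, a top-level Message node, and an ".indexOn": "senderId" rule; adjust the paths and credentials to your project:

import firebase_admin
from firebase_admin import credentials, db

# Assumption: a service-account key file and your project's database URL.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project.firebaseio.com"})

def update_sender_name(sender_id, new_name):
    """Rewrite senderName on every message written by this sender."""
    messages_ref = db.reference("Message")
    # Requires ".indexOn": "senderId" in the security rules for an efficient query.
    matches = messages_ref.order_by_child("senderId").equal_to(sender_id).get() or {}
    for key in matches:
        messages_ref.child(key).update({"senderName": new_name})

update_sender_name("828949592937598976", "John Doe")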