Mule Salesforce Create

With the Mule Salesforce Connector using sfdc:create, the documentation says we can send up to 200 records at a time (in a single round-trip call). If that's the case, what benefit do we get from using a Mule Batch flow with Batch Commit and Salesforce (sfdc:create) within the Batch Commit?
Example create below.
<sfdc:create type="Account">
    <sfdc:objects>
        <sfdc:object>
            <Name>MuleSoft</Name>
            <BillingStreet>I live here</BillingStreet>
            <BillingCity>My City</BillingCity>
            <BillingState>MA</BillingState>
            <BillingPostalCode>32423</BillingPostalCode>
            <BillingCountry>US</BillingCountry>
        </sfdc:object>
        <!-- ... 200 such objects ... -->
    </sfdc:objects>
</sfdc:create>

Keep in mind that in Salesforce, the SOAP API limits a client application to 200 records in a single create() call. If a create request exceeds 200 objects, the entire operation fails. See the Salesforce reference for details.
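The benefit of a Batch flow with Batch Commit is therefore not a larger single call: the batch engine splits an input of any size into commit groups of up to 200 records, so each sfdc:create stays within the limit, and failures are tracked per record instead of failing the whole payload. A minimal Mule 3 sketch (assuming the Batch module and a configured Salesforce connector; the names here are illustrative):

<batch:job name="accountLoadBatch">
    <batch:process-records>
        <batch:step name="createAccountsStep">
            <!-- groups incoming records so each sfdc:create sends at most 200 objects -->
            <batch:commit size="200">
                <sfdc:create config-ref="Salesforce" type="Account">
                    <sfdc:objects ref="#[payload]"/>
                </sfdc:create>
            </batch:commit>
        </batch:step>
    </batch:process-records>
</batch:job>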

Related

Trying to use adminp.DeleteReplicas followed by adminp.ApproveReplicaDeletion gives error "Invalid Approval Request note"

I am trying to delete a database and any associated replicas using LotusScript adminp calls. This is basically the code:
Dim session As New NotesSession
Dim adminp As NotesAdministrationProcess
Set adminp = session.CreateAdministrationProcess("Software_Server")
noteid$ = adminp.DeleteReplicas("Software_Server", "Guys1")
If noteid$ <> "" Then
    Call adminp.ApproveReplicaDeletion(noteid$) 'This is where the error is thrown
End If
The first adminp call is successful and returns a noteid, and if I look in the admin requests database I can see the document. The next call, to ApproveReplicaDeletion, results in the error "Invalid Approval Request note".
The documentation doesn't contain any examples for the adminp approve methods. I have a feeling that maybe the second request cannot be called until much later, when adminp has processed the first request?
A related question: do I only have to make this request on a single server (and it will remove replicas on all other servers), or do I need to make the request for each server?
So this is a bit more complicated than the help gives any indication of, which might explain why I couldn't find any examples on the internet of how to do it. The workflow for using AdminP is as follows:
1. Create the initial request: noteid$ = adminp.DeleteReplicas("Software_Server", "Guys1"). The returned noteID identifies the initial DeleteReplicas request; it is not the one used to approve the replica deletion.
2. This creates a document for the initial request in the Administration Requests database (admin4.nsf), but the approval documents don't exist yet; the adminp process must run before the server creates those.
3. So from the code, send a console command to the server: "tell adminp process now".
4. Sleep the agent for a few seconds to give adminp time to process the request (unfortunately this also fires off any other waiting adminp requests).
5. New documents, awaiting approval by an admin, will now have been created in the admin database. These documents contain the noteids that should be passed to ApproveReplicaDeletion.
6. To get them, first look up the document for the noteID obtained in step 1.
7. From that document, read the field ProxyOriginatingRequestUNID.
8. Using this UNID value, perform a GetAllDocumentsByKey on the view ($AllRequestsbyOriginatingUNID).
9. If a returned document has a ProxyAction field with a value of "82", it is an approval request document. That document's noteid can be passed to ApproveReplicaDeletion to have adminp remove the database the next time it processes requests.
10. You can either send another console command to make adminp process its queue again, or just wait for the database deletions to happen next time around. A complete sketch of these steps is below.
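Putting the steps together, a minimal LotusScript sketch (the server name, database title, and the 10-second wait are illustrative; admin4.nsf is the standard Administration Requests database):

' Sketch of the full DeleteReplicas / ApproveReplicaDeletion workflow
Dim session As New NotesSession
Dim adminp As NotesAdministrationProcess
Dim admindb As New NotesDatabase("Software_Server", "admin4.nsf")
Dim reqdoc As NotesDocument
Dim view As NotesView
Dim col As NotesDocumentCollection
Dim doc As NotesDocument
Dim unid As String

Set adminp = session.CreateAdministrationProcess("Software_Server")

' Step 1: create the initial DeleteReplicas request
noteid$ = adminp.DeleteReplicas("Software_Server", "Guys1")

' Steps 3-4: make adminp run now, then wait for the approval docs to appear
Call session.SendConsoleCommand("Software_Server", "tell adminp process now")
Sleep 10 ' give adminp time to generate the approval documents

' Steps 6-7: read the originating UNID from the initial request document
Set reqdoc = admindb.GetDocumentByID(noteid$)
unid = reqdoc.GetItemValue("ProxyOriginatingRequestUNID")(0)

' Steps 8-9: find the approval documents and approve each one
Set view = admindb.GetView("($AllRequestsbyOriginatingUNID)")
Set col = view.GetAllDocumentsByKey(unid, True)
Set doc = col.GetFirstDocument
Do Until doc Is Nothing
    If doc.GetItemValue("ProxyAction")(0) = "82" Then
        Call adminp.ApproveReplicaDeletion(doc.NoteID)
    End If
    Set doc = col.GetNextDocument(doc)
Loop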

I am automating a login script for performance testing in JMeter

I want to make a script using JMeter for performance testing of a login page. The authorization type is code and the code challenge method is S256. How can I fetch the code challenge, code verifier, and state or nonce values dynamically?
The script is successful for a single user but fails for multiple users. Can anyone help? Also, I am using BlazeMeter to record the script.
The process of "fetching" dynamic values is known as correlation, and there is a lot of information on the topic on the Internet, e.g. How to Handle Correlation in JMeter.
The main steps are:
Use a suitable JMeter Post-Processor to extract a dynamic value from the response into a JMeter Variable
Replace recorded hard-coded value with the JMeter Variable from the previous step
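Note that correlation covers values the server sends back in responses. The PKCE code_verifier and code_challenge, by contrast, are generated on the client side, so in JMeter you can create them yourself, for example in a JSR223 PreProcessor (Groovy is the default JSR223 language). A minimal sketch, with illustrative variable names:

import java.security.MessageDigest
import java.security.SecureRandom

def encoder = Base64.getUrlEncoder().withoutPadding()

// code_verifier: random bytes, base64url-encoded without padding
byte[] random = new byte[32]
new SecureRandom().nextBytes(random)
String verifier = encoder.encodeToString(random)

// code_challenge for the S256 method: base64url(SHA-256(code_verifier))
byte[] digest = MessageDigest.getInstance('SHA-256').digest(verifier.getBytes('US-ASCII'))
String challenge = encoder.encodeToString(digest)

// reference these in the HTTP samplers as ${codeVerifier} and ${codeChallenge}
vars.put('codeVerifier', verifier)
vars.put('codeChallenge', challenge)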

Auth0: How to retrieve over 1000 users (and make this call via a python script that can be run as a cron job)

I am trying to use Auth0 to get a list of users when my user list is >1000 (approx 2000)
So I understand a bit better now how this works after following the steps at:
https://auth0.com/docs/manage-users/user-migration/bulk-user-exports
There are three steps:
Use a POST call to the https://MY_DOMAIN/oauth/token endpoint to get an auth token (done)
Then take this token and insert it into the next POST call to the endpoint: https://MY_DOMAIN/api/v2/jobs/users-exports
Then take the job_id and insert it into the 3rd GET call to the endpoint: https://MY_DOMAIN/api/v2/jobs/MY_JOB_ID
But this just gives me a link to a document that I download. Essentially it is the same end result as using the User Import / Export extension.
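For reference, those three calls can be scripted in Python roughly like this (a sketch using the requests library; MY_DOMAIN and the client credentials are placeholders, and the job fields follow the bulk-export docs linked above):

import gzip
import json
import time

import requests

DOMAIN = "MY_DOMAIN"  # placeholder

# 1. Get a Management API token
token = requests.post(
    f"https://{DOMAIN}/oauth/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "MY_CLIENT_ID",          # placeholder
        "client_secret": "MY_CLIENT_SECRET",  # placeholder
        "audience": f"https://{DOMAIN}/api/v2/",
    },
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Start a user-export job
job = requests.post(
    f"https://{DOMAIN}/api/v2/jobs/users-exports",
    headers=headers,
    json={"format": "json", "fields": [{"name": "user_id"}, {"name": "email"}]},
).json()

# 3. Poll the job until it completes (a real script would also handle "failed")
while job["status"] != "completed":
    time.sleep(2)
    job = requests.get(f"https://{DOMAIN}/api/v2/jobs/{job['id']}", headers=headers).json()

# The completed job points to a gzipped newline-delimited JSON file
raw = requests.get(job["location"]).content
users = [json.loads(line) for line in gzip.decompress(raw).splitlines() if line]
print(len(users))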
But this export-file flow is NOT what I want. I want to be able to call an endpoint and have it return a list of all the users (similar to Retrieve Users with the Get Users Endpoint). I need it done this way so that I can write a python script and run it as a cron job.
However, since I have over 1000 users, I am getting the error below when I call the GET /api/v2/users endpoint.
auth0.v3.exceptions.Auth0Error: 400: You can only page through the first 1000 records. See https://auth0.com/docs/users/search/v3/view-search-results-by-page#limitation
Can anyone help? Can this be done all the way I wish it to be?

How to get results from a completed process instance in Flowable?

I have a simple Service task that sets a variable "foo" to "bar".
When a process contains just that one task, and I initiate it using "runtime/process-instances", I can see variable "foo" in the response.
When I add a user task before the service task, and finish the task using action: complete on "runtime/tasks", I just get a 200 result code.
How do I get the resulting variables?
Flowable has two sets of services:
RuntimeService, which provides information about runtime data only.
HistoryService, which provides information about all available data (runtime and completed).
In order to access completed tasks/processes you'll need to use the HistoryService. The corresponding REST endpoints live under /history.
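For example, a completed instance's variables can be read over REST from the historic variable instances resource (a sketch; the base URL, credentials, and process instance id are illustrative, with rest-admin/test being the flowable-rest defaults):

import requests

base = "http://localhost:8080/flowable-rest/service"

resp = requests.get(
    f"{base}/history/historic-variable-instances",
    params={"processInstanceId": "12345"},
    auth=("rest-admin", "test"),
)
resp.raise_for_status()

# each entry wraps the variable's name, type and value
for item in resp.json()["data"]:
    var = item["variable"]
    print(var["name"], "=", var["value"])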

Batch Request to Google API making calls before the batch executes in Python?

I am working on sending a large batch request to the Google API through the Admin SDK, which will add members to certain groups based upon the groups in our in-house servers (a realignment script). I am using Python, and to access the Google API I am using the apiclient library. When I create my service and batch objects, the creation of the service object requests a URL.
batch_count = 0
batch = BatchHttpRequest()
service = build('admin', 'directory_v1')
logs
INFO:apiclient.discovery:URL being requested:
https://www.googleapis.com/discovery/v1/apis/admin/directory_v1/rest
which makes sense as the JSON object returned by that HTTP call is used to build the service object.
Now I want to add multiple requests into the batch object, so I do this:
for email in add_user_list:
    if batch_count != 999:
        add_body = dict()
        add_body[u'email'] = email.lower()
        for n in range(0, 5):  # retry with exponential backoff
            try:
                batch.add(service.members().insert(groupKey=groupkey, body=add_body), callback=batch_callback)
                batch_count += 1
                break
            except HttpError, error:
                logging_message('Quota exceeded, waiting to retry...')
                time.sleep(2 ** n)
                continue
Every time it iterates through the outermost for loop it logs (group address redacted)
INFO:apiclient.discovery:URL being requested:
https://www.googleapis.com/admin/directory/v1/groups/GROUPADDRESS/members?alt=json
Isn't the point of the batch to not send out any calls to the API until the batch request has been populated with all the individual requests? Why does it make the call to the API shown above every time? Then when I call batch.execute() it carries out all the requests, which I thought was the only place calls to the API would be made.
For a second I thought that the call was to construct the object that is returned by .members() but I tried these changes:
service = build('admin', 'directory_v1').members()
batch.add(service.insert(groupKey=groupkey, body=add_body), callback=batch_callback)
and still got the same result. This is an issue because it doubles the number of requests to the API, and I have real quota concerns at the scale this is running at.