On 30 April 2020, Azure Data Factory Data Flows started showing a new option on the Optimize tab of the Join activity in the Data Flow. I get a validation error on the pipeline saying that at least one side should be part of the broadcast. When I fixed the validation issues and published the data factory, all of the data flows broke. Please find attached the snapshot.
Can you please clear your browser cache and reload your pipeline? There are some updated libraries that seem to have put your browser into an incomplete state.
The problem was resolved the next day, 5/1/2020. We tried Edge as well as Chrome with the same results, so we don't think it was the cache. The next day everything was working again.
I am working on a piece of the project where a report needs to be generated with all the flow details (memory used, number of records processed, processes that ran successfully, processes that failed, etc.). Most of the details are present on the Summary tab, but the requirement is to have separate reports.
Can anyone help me with a solution/steps/examples/screens/videos?
Thanks much.
Every underlying behavior of the UX/UI that Apache NiFi provides is also accessible through an API (in fact, the UI calls the API to perform each of these tasks). So you can invoke the GET /system-diagnostics API to return that information in JSON form, and then parse this data and present it in whatever form you like.
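As a minimal sketch of that approach (not an authoritative implementation), the snippet below calls GET /system-diagnostics and pulls out a few values. It assumes an unsecured NiFi instance reachable at http://localhost:8080/nifi-api and a fetch-capable JavaScript runtime (a browser or Node 18+); the field names read from the aggregate snapshot reflect a typical response and may differ by NiFi version.

// Sketch: query NiFi's system-diagnostics endpoint and extract a few metrics.
const NIFI_BASE = "http://localhost:8080/nifi-api"; // assumed host/port, no authentication

async function fetchSystemDiagnostics() {
  const res = await fetch(NIFI_BASE + "/system-diagnostics");
  if (!res.ok) throw new Error("NiFi returned HTTP " + res.status);
  const body = await res.json();

  // The aggregate snapshot holds heap usage, thread counts, processor load, etc.
  const snapshot = body.systemDiagnostics.aggregateSnapshot;
  return {
    heapUtilization: snapshot.heapUtilization,
    availableProcessors: snapshot.availableProcessors,
    totalThreads: snapshot.totalThreads
  };
}

fetchSystemDiagnostics().then(console.log).catch(console.error);

The same pattern works for the other reporting endpoints: call the API, parse the JSON, and feed the values into whatever report format you need.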
Has anyone else seen the following problem?
I use the Zendesk API and the Pipeline Deals API.
The code has been in use for 2 months (no issues, all working).
As of this week (with no changes to the code), both APIs fail on POST with create calls (GETs work fine and authentication is also working fine for both APIs).
The execution log shows the correct data being encoded; example below (actual values removed):
UrlFetchApp.fetch([https://supernahelp.zendesk.com/api/v2/organizations.json, {headers={Authorization=Basic someencodedauthdata, Content-Type=application/json}, method=post, payload={"organization":{"name":"somecustomer","domain_names":"xyc.edu","organization_fields":{"supernauniqueid":"Sup-2308233814","crmdashboard":"someurladdedhere"}}}, muteHttpExceptions=true}])
The payload is passed through JSON.stringify before being added to the API call and has been working fine forever.
Error returned in the execution log: "call to make to ZD {"error":"RecordInvalid","description":"Record validation errors","details":{"name":[{"description":"Name: cannot be blank","error":"BlankValue"}"
This basically means the API could not parse the body correctly for the name value that was sent.
I opened a case with Zendesk; they pulled their logs and showed me what they received (not the same record), only a snippet:
{"{\"organization\":{\"name\":\"customer name here \"
I noticed a \ added to the payload (not by my code); this was added by GAS.
AND
The Pipeline Deals API has the same issue: POST commands are rejected with a bad payload.
Both failed on the same day, and neither works at all any more.
This tells me others must be having issues with POST commands?
I'm looking for help, as the code worked fine and then stopped, and it looks like GAS is adding escape codes out of the blue.
Andrew
GAS was broken; it seems that encoding the content type into the headers stopped working, and the syntax for setting the content type was changed (this broke many other scripts as well).
https://code.google.com/p/google-apps-script-issues/issues/detail?id=5585&can=6&colspec=Stars%20Opened%20ID%20Type%20Status%20Summary%20Component%20Owner
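For reference, a minimal sketch of the workaround (not an official fix): move the content type out of the headers map and into the top-level contentType option of UrlFetchApp.fetch. The URL, credentials, and payload below are placeholders.

// Sketch: set the content type via the contentType option instead of a Content-Type header.
var payload = JSON.stringify({
  organization: { name: "somecustomer", domain_names: "xyc.edu" }
});

var response = UrlFetchApp.fetch("https://example.zendesk.com/api/v2/organizations.json", {
  method: "post",
  contentType: "application/json", // previously sent as Content-Type inside headers
  headers: { Authorization: "Basic " + Utilities.base64Encode("user@example.com/token:YOUR_API_TOKEN") },
  payload: payload,
  muteHttpExceptions: true
});
Logger.log(response.getContentText());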
Andrew
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful web services; however, I have not seen much discussion of how multiple REST APIs interact with data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a database level destructive change to a database table that causes you to have to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and filter out all queries to it, replying with a 301 and the new location.
2. If 1. is impossible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should only be done for a limited time, and the previous version should eventually be turned off.
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at the same time is only temporary. But as I wrote earlier, the DB model should never be exposed directly to the clients; this can help you avoid a lot of problems.
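To illustrate that last point, here is a minimal sketch; the table columns and the nature of the v2 change are invented for illustration. Both API versions read the same row, but each maps it to its own response shape, so neither exposes the DB model directly.

// Hypothetical shared DB row: in this sketch, v2 split the customer's name into two columns.
var row = { id: 42, first_name: "Ada", last_name: "Lovelace" };

// v1 keeps its original contract by mapping the new columns back to the old shape.
function toV1Response(row) {
  return { id: row.id, name: row.first_name + " " + row.last_name };
}

// v2 exposes the new structure directly.
function toV2Response(row) {
  return { id: row.id, firstName: row.first_name, lastName: row.last_name };
}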
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change, it doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit the saving of a field, return an unchecked exception, or apply some computational logic that keeps the v1 API compatible with the shared data source. Then implement a better, cleaner, more idealistic implementation in v2.
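For instance, a hypothetical write-path sketch (the status field and handler names are invented for illustration): suppose the v2 schema added a required status column; the v1 create handler can keep its old request shape and fill in a default before writing to the shared table.

// v1 create handler: the v1 request body has no "status", so the implementation supplies a
// default that keeps v1 writes valid against the v2-era schema. saveCustomer is an assumed
// shared persistence function.
function createCustomerV1(requestBody, saveCustomer) {
  return saveCustomer({
    name: requestBody.name,
    email: requestBody.email,
    status: "active" // default added so v1 writes stay valid for v2 readers
  });
}

// v2 create handler: clients are expected to send the new field themselves.
function createCustomerV2(requestBody, saveCustomer) {
  return saveCustomer({
    name: requestBody.name,
    email: requestBody.email,
    status: requestBody.status
  });
}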
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, you have this request and response:
request POST: a, b, c, d
res: {a, b, c+d}
and you are going to add 'e' to your response, fetched from the database.
If nothing in the current client versions depends on 'e', you can add it under your current API version.
But if your new changes are going to change the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent breaking clients.
Changes to the request inputs work the same way.
EMV Book 2 (v4.3), page 49, states:
If all of the above steps were executed successfully, SDA was successful. The Data Authentication Code recovered in Table 7 shall be stored in tag '9F45'.
How do I store the Data Authentication Code recovered in tag '9F45'?
So far I am stuck at this stage; the only thing I have come up with is that I have to issue a PUT DATA command APDU.
Any help will be greatly appreciated.
If all of the above steps were executed successfully, SDA was successful. The Data Authentication Code recovered in Table 7 shall be stored in tag '9F45'.
This does not mean that you should store the recovered data authentication code on the card (so no need for any APDU). Instead, it means that you should store the DAC in the data element 9F45 on your reader (POS terminal) for further processing (you will eventually have to send that to the acquirer for clearing the transaction).
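As a purely illustrative sketch of the terminal-side bookkeeping (the data structure and function names are invented, not from any EMV kernel): the reader simply records the recovered value under tag '9F45' in its own set of data elements so it can be included later in the data sent to the acquirer.

// Hypothetical terminal-side store of EMV data elements (tag -> hex value string).
var terminalDataElements = {};

// Called after SDA has completed successfully with the DAC recovered from Table 7.
function onSdaSuccess(recoveredDacHex) {
  terminalDataElements["9F45"] = recoveredDacHex; // Data Authentication Code, kept for clearing
}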
Worklight 6.1 on both Windows (colleague) and Mac (me), building a Hybrid app destined for an Android device, but to speed up development we do initial testing as a Mobile Web App in the Chrome browser on the desktop.
We get a weird symptom that I'm trying to narrow down to a reproducible test case. I think I see different behaviours when stepping through in the debugger versus just letting it run. I want to check whether a certain coding pattern could be the cause of the symptom before I go any further.
Fundamental question: should we wait for the resolution of a promise returned by a JSONStore request for an action on a collection before issuing another request? More explanation below.
The overall intent is to load some data into the JSONStore, with some intelligent replace/merge action if a record is already present. Pseudo code:
for each record retrieved from back-end
    if ( record already present in Store )
        do some data merging
        replace record
    else
        add record
The application code actually works like this, considering just the add() case; the problem manifests when the store is empty and all records need to be added:
for each record to add
    addPromise = store.get().add(record);
    listOfPromises.insert(addPromise);

examine the list of promises, recording any errors
That is, there is no "wait" for the add to finish before issuing the next add request. Hence, in effect, we've initiated a set of adds "in parallel", whatever that might mean in JavaScript in Chrome.
The code appears to run just fine, with no errors reported. On an Android device it works reliably. In Chrome under normal running (no stepping in the debugger) we end up with no reported errors but only one record inserted; it is as though a snapshot of the initial "empty" store had been taken and each add is working on that "empty" copy.
After writing this I'm now pretty convinced that the coding pattern described above is vulnerable to a kind of race, and that the better approach is to build a list of documents to be added and insert them in a single operation.
A more detailed answer will be coming later, but I now know that this
the coding pattern described above is vulnerable to a kind of race and that the better approach is to build a list of documents to be added and insert them in a single operation
is true. In the browser, JSONStore does require that we wait for the result of one request before issuing another one. The recommended approach is:
// Issue the two requests sequentially: replace() starts only after add() resolves.
var dataToAdd = buildArrayOfDataToAdd(responseFromServer);
var dataToReplace = buildArrayOfDataToReplace(responseFromServer);
jsonstore.add(dataToAdd).then(function () { jsonstore.replace(dataToReplace); });
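For completeness, a sketch of the same sequencing with explicit error handling; the collection name and the builder helpers are placeholders, and it assumes the JSONStore promises chain in the usual way when a callback returns another promise.

// Sketch: each JSONStore request is issued only after the previous promise resolves,
// and .fail() catches the first error anywhere in the chain.
WL.JSONStore.get(collectionName)
    .add(buildArrayOfDataToAdd(responseFromServer))
    .then(function () {
        return WL.JSONStore.get(collectionName).replace(buildArrayOfDataToReplace(responseFromServer));
    })
    .then(function () {
        // both operations finished successfully
    })
    .fail(function (errorObject) {
        // handle the first failure
    });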