I am using the new OneDrive API to do a resumable upload, following this doc:
http://onedrive.github.io/items/upload_large_files.htm#create-an-upload-session
Even though I set the parameter @name.conflictBehavior: replace, the API still returns a 409 Conflict error after the last fragment is uploaded. Even when I change the behavior to rename, I get the same response.
I wonder if I am passing bad parameters while creating the upload session:
import requests

headers = {'Authorization': 'Bearer <access_token>',
           'Content-Type': 'application/json'}
params = {'@name.conflictBehavior': 'replace'}
try:
    res = requests.post(url, headers=headers, data=params)
except Exception as e:
    print(e)
After doing that I do get the upload URL, with the param appended to it, but after the last fragment is uploaded the 409 error still occurs.
Is this a bug, or are the parameters I pass wrong? I have no idea.
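For comparison, this is the create-session request as I read it from the page linked above (the annotation sits inside a JSON "item" body rather than in form data, so maybe that is relevant):

POST /drive/root:/{path_to_file}:/upload.createSession
Content-Type: application/json

{
  "item": {
    "@name.conflictBehavior": "rename",
    "name": "largefile.dat"
  }
}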
Looks like we have a bug here where we don't honor the @name.conflictBehavior annotation in that scenario. We're working on a fix.
Related
I am trying to migrate one of my API tests to the Karate framework. However, I am unable to write the correct steps as defined in the Karate documentation. Maybe I am missing some basic syntax, but does anyone have an idea how to write the following steps in a Karate feature?
requestPostDoc.header("x-api-key", "FG6dcYHN9N7PYKfWCUlGo5QGTwZhv2Re1MrDSOTV"); // New changes
requestPostDoc.contentType("multipart/form-data")
        .multiPart("part2-file", file)
        .formParam("part1-json", objDocumentWrite.toJSONString());
requestPostDoc.queryParam("loadProperties", true); // New changes
responseForNewCaseDocFile = requestPostDoc.post("https://vrh0oox3hl.execute-api.eu-central-1.amazonaws.com/default/"); // New changes
filterableRequestSpecification = (FilterableRequestSpecification) requestPostDoc;
filterableRequestSpecification.removeQueryParam("loadProperties");
I have written the following feature file in Karate:
Given url 'https://vrh0oox3hl.execute-api.eu-central-1.amazonaws.com/default/'
And header x-api-key = 'FG6dcYHN9N7PYKfWCUlGo5QGTwZhv2Re1MrDSOTV'
And header Authorization = 'Bearer ' + jwt
And param loadProperties = true
And multipart file info = { read: 'classpath:testData/documentWrite.json', filename: 'documentWrite.json' }
And multipart file Uploading = { read: 'classpath:testData/TextFile.txt', filename: 'TextFile.txt' }
When method post
Then print response
Then status 200
When I execute this test I get a 400 response code:
status code was: 400, expected: 200, response time in milliseconds: 252, url: https://vrh0oox3hl.execute-api.eu-central-1.amazonaws.com/default/?loadProperties=true, response:
Based on the cURL command in the comments, this is my best guess. The rest is up to your research. Read the docs and tweak the Content-Type and other sub-headers if needed. You need to figure this out depending on what your server wants: https://github.com/karatelabs/karate#multipart-file
* multipart file part1-json = { read: 'documentWrite.json' }
* multipart file part2-file = { read: 'TextFile.txt' }
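Putting that together with the headers and query param from your Java code, the whole scenario might look like this (a sketch only; the part names come from your RestAssured calls, and the contentType values are guesses to tweak against what the server wants):

Given url 'https://vrh0oox3hl.execute-api.eu-central-1.amazonaws.com/default/'
And header x-api-key = 'FG6dcYHN9N7PYKfWCUlGo5QGTwZhv2Re1MrDSOTV'
And header Authorization = 'Bearer ' + jwt
And param loadProperties = true
And multipart file part1-json = { read: 'classpath:testData/documentWrite.json', contentType: 'application/json' }
And multipart file part2-file = { read: 'classpath:testData/TextFile.txt', filename: 'TextFile.txt' }
When method post
Then status 200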
For anyone coming across this question in the future and if you are stuck, get a friend if needed and go through this exercise together: https://github.com/karatelabs/karate/issues/1645#issuecomment-862502881
This stuff can be hard and needs time. There are no shortcuts.
I am using Google Apps Script to call the UPS API and generate a shipping label. However, the API response is truncated, and I am unable to decode the base64-encoded image that is part of the JSON response object because it is cut off.
I am also not getting any truncation error messages or responses from the UPS servers, nor is Google Apps Script throwing an error.
I have contacted UPS support with the JSON request, and it seems to work fine at their end.
// Here is the code for the API call.
function getLabel() {
  var userName = "myUPS_username";
  var password = "*********";
  var accessKey = "my_access_key";
  var transId = "Trans123";
  var transactionSrc = "upstest";
  var url = "https://wwwcie.ups.com/ship/v1807/shipments";

  var header = {
    'AccessLicenseNumber': accessKey,
    'password': password,
    'transId': transId,
    'transactionsrc': transactionSrc,
    'username': userName
  };

  // parameters for URL fetch
  var params = {
    'method': 'POST', // the Shipment API takes the request in the body, so POST rather than GET
    'contentType': 'application/json',
    'headers': header,
    'payload': JSON.stringify(payload) // the JSON request body, omitted below
  };

  // call the UPS Shipment API
  var response = UrlFetchApp.fetch(url, params);
}
I am not including the JSON payload here.
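For context, this is roughly how I intend to pull out and decode the label once a complete response arrives, continuing inside getLabel() above (a sketch; the exact JSON path is my assumption about UPS's response shape):

var data = JSON.parse(response.getContentText());
// Path to the base64 label image is an assumption; adjust it to the real response.
var encoded = data.ShipmentResponse.ShipmentResults.PackageResults.ShippingLabel.GraphicImage;
var bytes = Utilities.base64Decode(encoded);
// Save the decoded label somewhere inspectable, e.g. Drive.
DriveApp.createFile(Utilities.newBlob(bytes, 'image/gif', 'label.gif'));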
Answer:
UrlFetchApp has built-in functionality and response limitations, including on POST and response sizes.
More Information:
As per the Apps Script documentation, URL Fetch has limitations which are enforced in the methods themselves. The limitations are as follows:
URL Fetch response size: 50MB/call
URL Fetch headers: 100/call
URL Fetch header size: 8kB/call
URL Fetch POST size: 50MB/call
URL Fetch URL length: 2kB/call
Unfortunately, there is no way to get around this.
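If you want to confirm that it is this cap you are hitting rather than something on the UPS side, a quick check (a sketch reusing the url and params from the code above) is to log how much of the body actually arrived:

var response = UrlFetchApp.fetch(url, params);
// If this consistently stops at the same size, the response is being
// truncated by the UrlFetch quota rather than by UPS.
Logger.log(response.getContentText().length);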
Feature Request:
There is a Feature Request on Google's Issue Tracker asking for an increase to the UrlFetch response size limit. This Feature Request can be found here; you can give it a star (☆) in the top left to let Google know more people wish for it. There is already a response from them saying 'We'll consider raising the quota if there is enough interest from the developer community.', so letting them know this is a wanted feature is a good way to go.
References:
Google's Issue Tracker
Increase the UrlFetch Total Bytes quota Feature Request
Quotas for Google Services
Current Quotas
Current Limitations
I'm building my own WebhookClient for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
const { WebhookClient } = require('dialogflow-fulfillment');

module.exports = async function(context, req) {
  const agent = new WebhookClient({ request: context.req, response: context.res });

  function welcome(agent) {
    agent.add(`Welcome to my agent!!`);
  }

  let intentMap = new Map();
  intentMap.set("Look up person", welcome);
  agent.handleRequest(intentMap);
};
I tested the query and the response payload looks like this:
{
"fulfillmentText": "Welcome to my agent!!",
"outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in dialog flow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ紀".
There's Chinese symbols!? Here's a video of me testing it out in DialogFlow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
This occurs with Azure Functions and WebAPI (so it's not that)
The JSON payloads are identical (so it's not that)
Working payload isn't chunked
Working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
    // Buffer the response body so its final length is known before it is sent.
    var original = context.Response.Body;
    var memory = new MemoryStream();
    context.Response.Body = memory;

    await next();

    memory.Seek(0, SeekOrigin.Begin);
    if (!context.Response.Headers.ContentLength.HasValue)
    {
        // Setting Content-Length stops Kestrel from using chunked transfer encoding.
        context.Response.Headers.ContentLength = memory.Length;
        context.Response.ContentType = null;
    }
    await memory.CopyToAsync(original);
});
This code disables response chunking, which now causes a new and slightly more interesting error for me in the Google console:
Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that json.net IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. In order to prove this, I added this hard-coded response to my POST controller and it worked! :)
return new ContentResult()
{
Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
ContentType = "application/json",
StatusCode = 200
};
Despite the HTTP header saying the charset is utf-8, that is definitely using the utf-16le character set, and then the receiving side is treating them as utf-16be. Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to represent the output as UTF-8 instead of using UTF-16 strings.
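You can reproduce exactly that corruption with a small Node sketch: encode the JSON as UTF-16LE, then read the bytes back as UTF-16BE (Node has no big-endian decoder, so swapping each byte pair and decoding as LE stands in for one):

const json = '{"fulfillmentText": "Welcome to my agent!!"}';
// Bytes as a UTF-16LE encoder would emit them.
const bytes = Buffer.from(json, 'utf16le');
// Read those bytes as UTF-16BE: swap each byte pair, then decode as LE.
console.log(bytes.swap16().toString('utf16le'));
// Prints CJK-looking text starting with 笀 (U+7B00), matching the error above.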
I'm trying to create a REST API from a SOAP service using IBM API Connect 5. I have followed all the steps described in this guide (https://www.ibm.com/support/knowledgecenter/en/SSFS6T/com.ibm.apic.apionprem.doc/tutorial_apionprem_expose_SOAP.html).
So, after dragging the web service block from the palette, ensuring the endpoint is correct, and publishing the API, I tried to call the API from the browser. Unfortunately, the API returns the following message:
<errorResponse>
    <httpCode>500</httpCode>
    <httpMessage>Internal Server Error</httpMessage>
    <moreInformation>Error attempting to read the urlopen response data</moreInformation>
</errorResponse>
For testing purposes, I logged the request and tried it in SoapUI; there the service returns the response correctly.
What is the problem?
In my case, the problem was the backend charset (Content-Type: text/xml;charset=iso-8859-1).
For example, the backend returns text/xml in German (or French). API Connect cannot process a character such as ü; it needs Content-Type: text/xml;charset=UTF-8.
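If you cannot change the backend, one thing to try (a sketch only; it assumes the APIC v5 GatewayScript apim module is available in your assembly) is overwriting the header after the invoke so APIC parses the payload as UTF-8:

// GatewayScript policy placed after the invoke in the assembly
var apim = require('apim');
// Re-declare the backend payload as UTF-8 before APIC processes it
apim.setvariable('message.headers.Content-Type', 'text/xml; charset=UTF-8');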
I had a similar issue; in my case it was the Accept header. If you have an Invoke and the content-type or the accept does not match the request you send or the response you get back, APIC gets mad.
Please check that the formats you send (contentType) and receive (accept) are the ones your API expects. In my case the error occurred because the API returns a plain string while my client code was configured to receive a JSON body.
// Define a JSON (request) / plain-text (response) protocol
private HttpEntity<String> httpEntityWithBody(Object objToParse) {
    HttpHeaders headers = new HttpHeaders();
    headers.set("Authorization", "Bearer " + "xxx token xxx");
    headers.set("Accept", MediaType.TEXT_PLAIN_VALUE);
    headers.setContentType(MediaType.APPLICATION_JSON);
    Gson gson = new GsonBuilder().create();
    String json = gson.toJson(objToParse);
    return new HttpEntity<String>(json, headers);
}

// Calling the API through APIC ("rest" is a RestTemplate, "builder" a UriComponentsBuilder)
ParameterizedTypeReference<String> responseType =
        new ParameterizedTypeReference<String>() {};
ResponseEntity<String> result =
        rest.exchange(builder.buildAndExpand(urlParams).toUri(), HttpMethod.PUT,
                httpEntityWithBody(myDTO), responseType);
int statusCode = result.getStatusCodeValue();
String message = result.getBody();
I'm trying to send a PDF to a Tika server for content extraction, but I always get the error: "Cannot convert text from stream using the source encoding"
This is how Tika is expecting the files:
"All services that take files use HTTP "PUT" requests. When "PUT" is used, the original file must be sent in request body without any additional encoding (do not use multipart/form-data or other containers)." Source https://wiki.apache.org/tika/TikaJAXRS#Services
What is the correct way of sending the file with XMLHttpRequest()?
Code:
var response, error, file, blob, url, xhr;
// Wakanda server-side APIs: load the PDF and wrap it in a typed blob
file = new File("/PROJECT/web/dateien/ai/pdf.pdf");
blob = file.toBuffer().toBlob("application/pdf");
url = "http://localhost:9998/tika";
// send the raw file body via PUT, as the Tika docs require (no multipart)
try {
    xhr = new XMLHttpRequest();
    xhr.open("PUT", url);
    xhr.setRequestHeader("Accept", "text/plain");
    xhr.send(blob);
} catch (e) {
    error = e;
}
({
    response: xhr.responseText,
    status: xhr.statusText,
    error: error,
    type: xhr.responseType,
    blob: blob
});
Error: "Cannot convert text from stream using the source encoding"
I suspect the PUT request is being converted into a POST request by Wakanda when there is a blob in the XHR body. Can you capture your XHR request with Wireshark and add the details? If so, you can probably file an issue with Wakanda (https://github.com/Wakanda/wakanda-issues/issues).
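In the meantime, you can sanity-check the Tika endpoint outside Wakanda with curl, which PUTs the raw file body without any multipart container (the file name is taken from your code):

curl -T pdf.pdf -H "Accept: text/plain" http://localhost:9998/tika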
Hope it helps,
Yann