_rewrite section stringified function isn't working properly - couchdb-2.0

I tried CouchDB's rewrite function (see http://docs.couchdb.org/en/2.0.0/api/ddoc/rewrites.html, section "Rewrite section is a stringified function"), but it doesn't seem to work. I used the example from the docs as a base.
This is the rewrites function of my _design/router document in mydb:
function (req2) {
  var path = req2.path.slice(4);
  return {path: "../../../" + path.join("/")};
}
The design document in mydb:
{
  "_id": "_design/router",
  "_rev": "1-ff8b2d9e12f41de38495d3460e8c10ad",
  "rewrites": "function (req2) {\r\n var path = req2.path.slice(4);\r\n\r\n return {path:\"../../../\"+path.join(\"/\")};\r\n}"
}
This code is supposed to pass through all requests made to the endpoint mydb/_design/router/_rewrite/*.
Example:
GET localhost:5984/mydb/_design/router/_rewrite/mydb/_all_docs
reroutes to mydb/_all_docs
GET requests work fine (as expected), but POST, PUT, and DELETE requests hang (no response!).
Example:
POST localhost:5984/mydb/_design/router/_rewrite/mydb
Content-Type:application/json
body:{"foo": "bar"}
No error message is returned; the request just hangs.
The same request works fine without the rewrite! (POST localhost:5984/mydb inserts a new document {"foo": "bar"})
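For reference, here are the same requests expressed with fetch (a minimal sketch using the host, database, and design document names from above):
var base = 'http://localhost:5984';

// Works: rerouted to GET /mydb/_all_docs
fetch(base + '/mydb/_design/router/_rewrite/mydb/_all_docs')
  .then(function (res) { return res.json(); })
  .then(function (docs) { console.log(docs); });

// Hangs: should be rerouted to POST /mydb and create {"foo": "bar"}
fetch(base + '/mydb/_design/router/_rewrite/mydb', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ foo: 'bar' })
})
  .then(function (res) { return res.json(); })
  .then(function (doc) { console.log(doc); });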
Is this a bug, or am I doing something wrong here? If it is a bug, where can I report it?
My specs: Win7 64bit, CouchDB 2.0.0
Thanks!

Related

CORS blocking requests in Kotlin lambda but not in identically setup Node lambda

I have a Lambda written in Kotlin with Serverless, and CORS just is not working. I feel like I've tried everything. I deployed a Node Lambda with identical sls.sh command and YAML files. The function looks like this:
hello:
  handler: handler.hello
  events:
    - http:
        path: hello
        method: post
        cors: true
My responses look like this in both Node and Kotlin:
{
  "statusCode": 200,
  "headers": {
    "Access-Control-Allow-Origin": "*"
  },
  "body": "{\"id\": \"f9f76590-xxxx-xxxx-xxxx-9c8e99238f40\"}"
}
In the Node case this all works great. I make a fetch call like this and it works (omitted the Promise resolutions for brevity):
var makeRequest = function (data) {
  fetch('https://{lambda URL}/hello', {
    'headers': {
      'content-type': 'application/json'
    },
    'body': JSON.stringify({ data }),
    'method': 'POST'
  })
}
In the Kotlin case I get this CORS error back
Access to fetch at 'https://{lambda URL}/hello' from origin
'http://127.0.0.1:8080' has been blocked by CORS policy: No
'Access-Control-Allow-Origin' header is present on the requested
resource. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
I try to "enable CORS" in the API Gateway panel but I get that it's already enabled:
And hit submit I get the error (invalid response status code)
When I hover over the error icon it says "Invalid Response status code specified".
Under Gateway Responses, every sub-item (Default 4XX, Default 5XX, etc.) has response headers set. This is the same across my Node and Kotlin lambdas.
I'm completely out of ideas at this point.
The only potentially odd thing I'm noticing is that for my Node request I see access-control-allow-origin: * in the response headers in the browser network panel, but for the Kotlin one I don't.
From your screenshots, I can see that you haven't created an Integration Response in your POST method. Try setting that up.
I discovered my CORS issue was because of server errors. If your server has an error and API Gateway can't get a valid response, you get a CORS error because the gateway's own error response doesn't carry the CORS headers.
While the fix is easy (just handle that server error), it was hard to uncover. I wish this were documented better somewhere, so hopefully this helps others :)
For my case specifically (and why it didn't show up in Node but did in Kotlin), it came down to types: the browser was sending a number, Node silently coerced it to the expected string, but Kotlin enforced the type and threw a type error.
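To make that concrete, here is a minimal sketch of the "handle the server error" idea written as a Node handler (the same pattern applies to the Kotlin handler; the parsing and type coercion shown are illustrative assumptions, not the actual code from the question):
exports.hello = async (event) => {
  const corsHeaders = { 'Access-Control-Allow-Origin': '*' };
  try {
    // Illustrative: parse the incoming body and coerce types explicitly
    const data = JSON.parse(event.body || '{}');
    return {
      statusCode: 200,
      headers: corsHeaders,
      body: JSON.stringify({ id: String(data.id) })
    };
  } catch (err) {
    // Without this branch an uncaught error becomes a gateway-level response
    // that has no CORS headers, which the browser reports as a CORS failure.
    return {
      statusCode: 500,
      headers: corsHeaders,
      body: JSON.stringify({ error: err.message })
    };
  }
};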

Webhook call failed. Error: Failed to parse webhook JSON response: Expect message object but got: [Chinese letters]

I'm building my own webhook with WebhookClient for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
// Assuming WebhookClient comes from the dialogflow-fulfillment package
const { WebhookClient } = require('dialogflow-fulfillment');

module.exports = async function(context, req) {
  const agent = new WebhookClient({ request: context.req, response: context.res });

  function welcome(agent) {
    agent.add(`Welcome to my agent!!`);
  }

  let intentMap = new Map();
  intentMap.set("Look up person", welcome);
  agent.handleRequest(intentMap);
}
I tested the query and the response payload looks like this:
{
  "fulfillmentText": "Welcome to my agent!!",
  "outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in Dialogflow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ਀  ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ਀  ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ਀紀".
There are Chinese symbols!? Here's a video of me testing it out in Dialogflow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
- This occurs with Azure Functions and WebAPI (so it's not that)
- The JSON payloads are identical (so it's not that)
- The working payload isn't chunked
- The working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
    var original = context.Response.Body;
    var memory = new MemoryStream();
    context.Response.Body = memory;

    await next();

    memory.Seek(0, SeekOrigin.Begin);
    if (!context.Response.Headers.ContentLength.HasValue)
    {
        context.Response.Headers.ContentLength = memory.Length;
        context.Response.ContentType = null;
    }
    await memory.CopyToAsync(original);
});
This code disables response chunking, which is now causing a new and slightly more interesting error for me in the google console:
Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that json.net IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. In order to prove this, I added this hard-coded response to my POST controller and it worked! :)
return new ContentResult()
{
    Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
    ContentType = "application/json",
    StatusCode = 200
};
Despite the HTTP header saying the charset is utf-8, that response is definitely using the UTF-16LE character set, and the receiving side is treating it as UTF-16BE. Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to emit the output as UTF-8 instead of UTF-16 strings.
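For the Node Azure Function in the original question, one hedged way to act on this is to serialize the payload yourself and return it as an explicit UTF-8 buffer, so no formatter re-encodes it on the way out (isRaw is the Functions option for skipping response formatting, but verify the behaviour against your runtime version; this is a sketch, not a confirmed fix):
module.exports = async function (context, req) {
  const payload = {
    fulfillmentText: "Welcome to my agent!!",
    outputContexts: []
  };

  context.res = {
    status: 200,
    headers: { "Content-Type": "application/json; charset=utf-8" },
    // Make the encoding explicit so nothing downstream re-encodes it as UTF-16
    body: Buffer.from(JSON.stringify(payload), "utf8"),
    isRaw: true // skip the runtime's automatic content negotiation/formatting
  };
};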

aurelia-http-client connects to wrong address

I have a problem with the Aurelia HTTP client.
My API (http://localhost:3000/api/posts) works fine. The output of a GET call (in Postman or in the browser) is:
[
  {
    "_id": "58a5f4f635c3ab643c74d97a",
    "text": "Foo",
    "name": "Fooo",
    "__v": 0
  },
  {
    "_id": "58a5fcc32586d0683455f78d",
    "text": "Bar",
    "name": "Baar",
    "__v": 0
  }
]
This is my GET call in the Aurelia app:
getPosts() {
  return client.get('http//localhost:3000/api/posts', 'callback')
    .then(data => {
      console.log(data);
      return data.response;
    });
}
And this is the output:
As you can see in the image, the response contains something with "Aurelia", but my API never touched Aurelia, so I think there is something wrong with the URL.
Update 1:
The missing : pointed out by GManProgram was indeed the problem.
Update 2:
I have changed the client to the aurelia-fetch-client as GManProgram suggested.
Here is the new output:
It seems to put the API address behind its own address. How can I force it to use only the API address?
So first things first, in the example you posted, you are missing the : character after http in the URL.
If that doesn't fix it, and you are using the HttpClient from aurelia-fetch-client, then you may want to try using the .fetch method instead of the .get method
http://aurelia.io/hub.html#/doc/api/aurelia/fetch-client/1.1.0/class/HttpClient
In your case, since it looks like you are expecting json, the typical fetch call would look like:
return this.httpClient.fetch('http://localhost:3000/api/posts')
  .then(response => response.json())
  .then(response => new CaseModel(response));
Where you can also import the json method from aurelia-fetch-client.
Otherwise, maybe the HttpClient has already been configured in the application with a base URL and it is screwing you up?
What about:
return client.get('posts','callback')
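If a base URL really has been configured somewhere (a guess, not something visible in the question), the aurelia-fetch-client setup would look roughly like this, and relative paths such as 'posts' would then resolve against it:
import {HttpClient} from 'aurelia-fetch-client';

const httpClient = new HttpClient();
httpClient.configure(config => {
  config
    .useStandardConfiguration()
    .withBaseUrl('http://localhost:3000/api/');
});

// With the base URL in place, a relative path is enough:
httpClient.fetch('posts')
  .then(response => response.json())
  .then(posts => console.log(posts));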

Second request is not working in dojo.io.iframe.send in dojo 1.7

I am uploading a file with a dojo.io.iframe.send AJAX call using the code below.
I am using Dojo 1.7 and WebSphere Portal Server 8.0.
dojo.io.iframe.send({
  form: "workReqIDFormWBS",
  handleAs: "text/html",
  url: "<portlet:actionURL/>",
  load: function(response, ioArgs) {
    console.log(response, ioArgs);
    return response;
  },
  error: function(response, ioArgs) {
    console.log(response, ioArgs);
    return response;
  }
});
When I upload the file for the first time it works fine, whereas from the second time onwards nothing happens. Any solution for this issue?
Action URLs are only valid to be invoked once by default. Portal protects against form submission replay incidents by internally assigning an ID within the action URL produced.
You should also be seeing some logging on those subsequent action url requests: http://www-01.ibm.com/support/docview.wss?uid=swg21613334
I suggest either using a resource URL and serveResource() in your portlet, or ensuring that the response from the render phase following the action URL processing regenerates the action URL value and updates a variable that your posted JavaScript reads and uses in subsequent send() calls.
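As a rough sketch of the second suggestion (the hidden-field id and the wrapper function below are hypothetical, not taken from your portlet): have the render phase write the freshly generated action URL into the markup, and re-read it right before each send() so every upload uses a new URL.
// Rendered by the JSP on every render phase (hypothetical field id):
//   <input type="hidden" id="wbsActionUrl" value="<portlet:actionURL/>"/>

function uploadWorkReqForm() {
  // Re-read the action URL on every call instead of baking it in once
  var freshUrl = dojo.byId("wbsActionUrl").value;

  dojo.io.iframe.send({
    form: "workReqIDFormWBS",
    handleAs: "html",
    url: freshUrl,
    load: function (response, ioArgs) {
      console.log(response, ioArgs);
      return response;
    },
    error: function (response, ioArgs) {
      console.log(response, ioArgs);
      return response;
    }
  });
}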

JayData oData request with custom headers

I need to send custom headers to my WCF OData service, but with the following function the headers don't get modified.
entities.onReady(function () {
  entities.prepareRequest = function (r) {
    r[0].headers['APIKey'] = 'ABC';
  };
  entities.DataServiceClient.toArray(function (cli) {
    cli.forEach(function (c) {
      console.log(c.Name);
    });
  });
});
The headers are not affected. Any clue?
Thanks!
It seems that the marked answer is incorrect. I was suffering from a similar issue, but got it working without changing datajs.
My issue was that I was doing a cross domain (CORS) request, but didn't explicitly allow the headers. After I added the correct CORS header to the webservice, it worked.
EDIT
On second thought, it seems like there is still something broken in JayData for MERGE requests.
This is NOT CORS and has nothing to do with it!
See JayData oData request with custom headers - ROUND 2.
The "hack" below works, but the question above should take this problem to a new level.
----------
Old answer
Never mind, I found a solution.
It seems like prepareRequest is broken in JayData 1.3.2 (ODataProvider).
As a hack, I added an extraHeaders object to the providerConfiguration (oDataProvider.js):
this.providerConfiguration = $data.typeSystem.extend({
  // Leave content unchanged and add the following:
  extraHeaders: {}
}, cfg);
Then, at line 865 of oDataProvider.js, modify requestData like this:
var requestData = [
  {
    requestUri: this.providerConfiguration.oDataServiceHost + sql.queryText,
    method: sql.method,
    data: sql.postData,
    headers: _.extend({
      MaxDataServiceVersion: this.providerConfiguration.maxDataServiceVersion
    }, this.providerConfiguration.extraHeaders)
  },
NOTE: I am using lodash for convenience; any JS extend should do the trick.
Then you just create your client like this:
var entities = new Entities.MyEntities({
  name: 'oData',
  oDataServiceHost: 'http://myhost.com/DataService.svc',
  maxDataServiceVersion: "2.0",
  //enableJSONP: true,
  extraHeaders: { apikey: 'f05d1c1e-b1b9-5a2d-2f44-da811bd50bd5', Accept: 'application/json;odata=verbose' }
});