Migrating to self-hosted Parse Server isn't giving me the logged-in user - parse-server

I'm trying to migrate my Parse server to my own server instance on DigitalOcean. After deploying my parse-server, I'm running into an issue I can't understand.
When you make a call to Cloud Code, you can retrieve your user as request.user if you have revocable sessions enabled.
Everything is OK, but sometimes (at random times) I get this strange behaviour: request.user doesn't appear in Cloud Code.
I thought it could be a bad session token, so I guarded against it:
if (!request.user) {
    response.error("INVALID_SESSION_TOKEN");
    return;
}
and forced the user to log in again.
This wasn't working; I was getting INVALID_SESSION_TOKEN every time I logged in, so I decided to debug. These are my steps:
1.- Log in my user, so a _Session object is created; its sessionToken is r:a425239d4184cd98b9b693bbdedfbc9c
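For reference, this is roughly how a session token comes out of a login. The sketch below uses the Parse JS SDK for brevity (the sniffed request in step 2 actually comes from the Android SDK), and the credentials are placeholders:
// Log in and read back the revocable session token (JS SDK sketch).
Parse.User.logIn("someUser", "somePassword").then(function(user) {
    // With revocable sessions enabled, this is the "r:..." token the
    // client then sends in the X-Parse-Session-Token header.
    console.log(user.getSessionToken());
}, function(error) {
    console.error("Login failed:", error);
});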
2.- Call a cloud function and sniff the request:
POST /parse-debug/functions/getHomeAudios HTTP/1.1
X-Parse-OS-Version: 6.0.1
X-Parse-App-Build-Version: 17
X-Parse-Client-Key: **** (hidden)
X-Parse-Client-Version: a1.13.0
X-Parse-App-Display-Version: 1.15.17
X-Parse-Installation-Id: d7ea4fa0-b4dc-4eff-9b7d-ff53a1424dcb
User-Agent: Parse Android SDK 1.13.0 (com.pronuntiapp.debug.uat/17) API Level 23
X-Parse-Session-Token: r:a425239d4184cd98b9b693bbdedfbc9c
X-Parse-Application-Id: **** (hidden)
Content-Type: application/json
Content-Length: 346
Host: 46.101.89.192:1338
Connection: Keep-Alive
Accept-Encoding: gzip
3.- request.user is still not appearing in Cloud Code.
EDIT: Restarting the parse-server worked in this case, but not in some others.

A few days ago I found the solution.
When you have successfully deployed your Parse server, you will get request.user at any cloud endpoint called from a client, but if you call a cloud function from the cloud itself, you won't get request.user unless you pass the sessionToken:
Parse.Cloud.define("foo", function(request, response) {
    if (!request.user) {
        response.error("INVALID_SESSION_TOKEN");
        return;
    }
    var countResponses = 0;
    var responsesNeeded = 1;
    var result;
    // Pass the caller's session token along, otherwise `bar`
    // will run without request.user.
    Parse.Cloud.run('bar', request.params, {
        sessionToken: request.user.getSessionToken(),
        success: function(c) {
            countResponses++;
            result = c;
            if (countResponses >= responsesNeeded) {
                response.success(result);
            }
        },
        error: function(error) {
            response.error(error);
        }
    });
});
In this case, foo will have request.user and bar won't, unless you pass the sessionToken as shown.
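For what it's worth, the same call reads more simply in promise style, which the JS SDK of that era also supports; this is just a sketch of the foo/bar example above, not new behaviour:
Parse.Cloud.define("foo", function(request, response) {
    if (!request.user) {
        response.error("INVALID_SESSION_TOKEN");
        return;
    }
    // Omitting the success/error callbacks makes Parse.Cloud.run return a promise.
    Parse.Cloud.run('bar', request.params, {
        sessionToken: request.user.getSessionToken()
    }).then(function(result) {
        response.success(result);
    }, function(error) {
        response.error(error);
    });
});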

Related

Chrome Extension - Migration to Manifest v3 - chrome.permissions user gesture issue

I have built a Chrome extension with manifest version 2 and am now looking at migrating to version 3. As part of this migration I have come across an issue when trying to toggle an optional permission for the chrome.notifications API.
Since the chrome.permissions API is not accessible from a content script, you have to send a message to the background script to perform the request and return the response to the content script. This worked as expected with version 2; now I am receiving this error:
Unchecked runtime.lastError: This function must be called during a user gesture
This means Chrome wants the permission request to be initiated during a user gesture, such as a click. But the click happens in the content script, where (as stated above) the request cannot be made, and by the time the message reaches the background script the gesture context is apparently lost.
Could anyone enlighten me if I'm missing something?
Content Script:
chrome.runtime.sendMessage(
    {message: 'requestPermissions', permissions: ['notifications']},
    (res) => console.log(res)
);
Background Script:
// Registered elsewhere as a chrome.runtime.onMessage listener.
export function requestPermissions(request, sender, sendResponse) {
    const {permissions} = request;
    new Promise((resolve) => {
        chrome.permissions.request(
            {
                permissions
            },
            (granted) => resolve(granted)
        );
    }).then((res) => sendResponse(res));
    return true; // keep the message channel open for the async sendResponse
}
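One direction worth trying (a sketch, not a verified fix): since the gesture doesn't survive the message hop in MV3, make the request from a page the extension itself owns, such as the popup or options page, where the click is a direct user gesture in extension context. The file name and button id below are illustrative, and the manifest still needs "notifications" under optional_permissions:
// popup.js, loaded from popup.html which contains <button id="enable-notifications">
document.getElementById('enable-notifications').addEventListener('click', () => {
    // Called synchronously inside the click handler, so the
    // user-gesture requirement is satisfied.
    chrome.permissions.request({permissions: ['notifications']}, (granted) => {
        console.log('notifications permission granted?', granted);
    });
});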

CORS blocking requests in Kotlin lambda but not in identically setup Node lambda

I have a lambda written in Kotlin and deployed with Serverless, and CORS just is not working. I feel like I've tried everything. I deployed a Node lambda with an identical sls.sh command and yaml files. The function config looks like this:
hello:
  handler: handler.hello
  events:
    - http:
        path: hello
        method: post
        cors: true
My responses look like this in both Node and Kotlin:
{
    "statusCode": 200,
    "headers": {
        "Access-Control-Allow-Origin": "*"
    },
    "body": "{\"id\": \"f9f76590-xxxx-xxxx-xxxx-9c8e99238f40\"}"
}
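For context, with lambda-proxy integration (the Serverless default for http events) the handler itself returns that status/headers/body shape. A minimal Node sketch matching the handler.hello config above (the id value is illustrative):
// handler.js
module.exports.hello = async (event) => {
    return {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': '*'
        },
        body: JSON.stringify({ id: 'f9f76590-xxxx-xxxx-xxxx-9c8e99238f40' })
    };
};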
In the Node case this all works great. I make a fetch call like this and it works (Promise resolutions omitted for brevity):
var makeRequest = function (data) {
    fetch('https://{lambda URL}/hello', {
        headers: {
            'content-type': 'application/json'
        },
        body: JSON.stringify({ data }),
        method: 'POST'
    })
}
In the Kotlin case I get this CORS error back
Access to fetch at 'https://{lambda URL}/hello' from origin
'http://127.0.0.1:8080' has been blocked by CORS policy: No
'Access-Control-Allow-Origin' header is present on the requested
resource. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
I tried to "enable CORS" in the API Gateway panel, but it reports that it's already enabled. When I hit submit I get an error, and hovering over the error icon shows "Invalid Response status code specified".
Under Gateway Responses, every sub-item (Default 4XX, Default 5XX, etc.) has response headers set. This is the same across my Node and Kotlin lambdas.
I'm completely out of ideas at this point.
The only potentially odd thing I'm noticing: in my Node request I see access-control-allow-origin: * in the response headers in the browser network panel, but in the Kotlin one I don't.
From your screenshot, I can see that you haven't created an Integration Response in your POST method. Try creating one, with the CORS headers mapped, in the API Gateway console.
I discovered my CORS issue was caused by server errors. If your server throws an error and API Gateway can't get a response, you get a CORS error, because the Gateway's own error response doesn't carry the CORS headers.
While the fix is easy (just handle that server error), it was hard to uncover. I wish this were documented better somewhere, so hopefully others will find this :)
For my case specifically, and why it showed up in Kotlin but not in Node: it was types. The browser was sending a number where a string was expected; Node silently coerced it, but Kotlin enforced the declared type and threw a type error.
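In other words, make sure the handler returns a well-formed response, CORS headers included, even when it fails. A sketch of the idea in Node terms (processPayload and the exact failure mode are illustrative; the Kotlin fix is the same pattern):
module.exports.hello = async (event) => {
    const corsHeaders = { 'Access-Control-Allow-Origin': '*' };
    try {
        const payload = JSON.parse(event.body); // e.g. the parse/validation that was throwing
        return {
            statusCode: 200,
            headers: corsHeaders,
            body: JSON.stringify(processPayload(payload))
        };
    } catch (err) {
        // Returning an error response (instead of throwing) means API Gateway
        // still relays our CORS headers, so the browser shows the real error.
        return {
            statusCode: 400,
            headers: corsHeaders,
            body: JSON.stringify({ error: err.message })
        };
    }
};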

Lyft-API - GET from Localhost

I have been trying to figure out how to get this Vue project to work with the Lyft API. I was able to successfully create an auth token via the three-legged procedure, but I am unable to hit the available ride types endpoint (https://api.lyft.com/v1/ridetypes) from localhost:8080. It does work in Postman.
It keeps stating:
Access to XMLHttpRequest at
'https://api.lyft.com/v1/ridetypes?lat=37.7752315&lng=-122.418075'
from origin 'http://localhost:8080' has been blocked by CORS policy:
Response to preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource.
I tried proxying with a vue.config.js file:
module.exports = {
    devServer: {
        proxy: {
            '/lyftapi': {
                target: 'https://api.lyft.com/v1',
                ws: true,
                changeOrigin: true
            }
        }
    }
}
I've looked around other parts of Stack Overflow, and this is the closest thing to my problem, but it has no answers:
CORS error in Lyft API started recently
Any suggestions?
Axios Get Call
axios.get('/ridetypes', {
    baseURL: 'https://api.lyft.com/v1',
    headers: {
        'Authorization': this.lyftToken,
    },
    params: {
        lat: lat.toString(),
        lng: long.toString()
    }
})
If it means anything, I am able to make successful GET calls to retrieve Uber products, but not so much the auth token (unless it's from Postman).
The Lyft API doesn't send CORS headers, which means browsers will block calls to api.lyft.com.
Vue won't be able to do anything about this, as it is a browser security policy.
Luckily, there is nothing stopping you from making this call from your own server.
One solution is to forward the request and response through your own server: you make a call to your server, the server makes the call to Lyft, waits for the response, and then responds to your request.
This is not a Vue-only solution.
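A minimal sketch of such a pass-through in Node/Express (the route prefix, port, and error handling are assumptions; the devServer proxy above follows the same idea, but only kicks in if the axios call targets the '/lyftapi' path on the dev server rather than api.lyft.com directly):
// server.js - relays /lyftapi/* to the Lyft API from your own origin
const express = require('express');
const axios = require('axios');

const app = express();

app.get('/lyftapi/*', async (req, res) => {
    try {
        const lyftPath = req.path.replace('/lyftapi', '');
        const response = await axios.get('https://api.lyft.com/v1' + lyftPath, {
            headers: { Authorization: req.get('Authorization') },
            params: req.query
        });
        res.json(response.data); // same-origin for the browser, so no CORS block
    } catch (err) {
        res.status(err.response ? err.response.status : 500).json({ error: err.message });
    }
});

app.listen(3000);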

Webhook call failed. Error: Failed to parse webhook JSON response: Expect message object but got: [Chinese letters]

I'm building my own webhook client for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
const { WebhookClient } = require('dialogflow-fulfillment');

module.exports = async function(context, req) {
    const agent = new WebhookClient({ request: context.req, response: context.res });
    function welcome(agent) {
        agent.add(`Welcome to my agent!!`);
    }
    let intentMap = new Map();
    intentMap.set("Look up person", welcome);
    agent.handleRequest(intentMap);
};
I tested the query and the response payload looks like this:
{
    "fulfillmentText": "Welcome to my agent!!",
    "outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in Dialogflow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ਀  ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ਀  ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ਀紀".
There are Chinese symbols!? Here's a video of me testing it out in Dialogflow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
- It occurs with both Azure Functions and WebAPI (so it's not that)
- The JSON payloads are identical (so it's not that)
- The working payload isn't chunked
- The working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
    var original = context.Response.Body;
    var memory = new MemoryStream();
    context.Response.Body = memory;
    await next();
    memory.Seek(0, SeekOrigin.Begin);
    if (!context.Response.Headers.ContentLength.HasValue)
    {
        context.Response.Headers.ContentLength = memory.Length;
        context.Response.ContentType = null;
    }
    await memory.CopyToAsync(original);
});
This code disables response chunking, which now causes a new and slightly more interesting error for me in the Google console:
*Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that Json.NET IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. To prove this, I added this hard-coded response to my POST controller and it worked! :)
return new ContentResult()
{
    Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
    ContentType = "application/json",
    StatusCode = 200
};
Despite the HTTP header saying the charset is utf-8, that body is definitely encoded as UTF-16LE, and the receiving side is treating it as UTF-16BE (that's why '{' = 0x7B followed by a 0x00 byte comes out as the CJK character 笀, U+7B00). Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to emit the output as UTF-8 instead of UTF-16 strings.
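One way to keep the serializer out of the loop (a sketch for the Node worker on Azure Functions; the point is pre-serializing the body yourself and pinning the charset, the rest is boilerplate):
module.exports = async function(context, req) {
    const payload = {
        fulfillmentText: 'Welcome to my agent!!',
        outputContexts: []
    };
    context.res = {
        status: 200,
        headers: { 'Content-Type': 'application/json; charset=utf-8' },
        // Serialize ourselves so no outgoing formatter re-encodes the body.
        body: JSON.stringify(payload)
    };
};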

downloading a file that comes as an attachment in a POST request response in PhantomJs

I want to download a CSV file that is generated on a button click through a POST request. I researched as best I could on the CasperJS and PhantomJS forums and came back empty-handed. In a normal browser like Firefox, a download dialog window appears after the POST request. How do I handle this case in PhantomJS?
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
Content-disposition: attachment;filename=ExportData.csv
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Fri, 19 Apr 2013 23:26:40 GMT
Content-Length: 65183
I've found a way to do this using CasperJS (it should work with PhantomJS alone if you implement the download function using XMLHttpRequest, but I've not tried).
I'll leave you a working example that tries to download the most recent PDF from this page. When you click the download link, some JavaScript code is triggered that generates some hidden input fields that are then POSTed.
What we do is replace the form's onsubmit function so that it cancels the submission, and grab the form destination (action) and all its fields. We use this information later to do the actual download.
var casper = require('casper').create();

casper.start("https://sede.gobcan.es/tributos/jsf/publico/notificaciones/comparecencia/ultimosanuncios.jsp", function() {
    var theFormRequest = this.page.evaluate(function() {
        var request = {};
        var formDom = document.forms["resultadoUltimasNotif"];
        formDom.onsubmit = function() {
            // Iterate the form fields
            var data = {};
            for (var i = 0; i < formDom.elements.length; i++) {
                data[formDom.elements[i].name] = formDom.elements[i].value;
            }
            request.action = formDom.action;
            request.data = data;
            return false; // Stop form submission
        }
        // Trigger the click on the link.
        var link = $("table.listado tbody tr:first a");
        link.click();
        return request; // Return the form data to casper
    });
    // Start the download
    casper.download(theFormRequest.action, "downloaded_file.pdf", "POST", theFormRequest.data);
});
casper.run();
Note: you have to run it with --ignore-ssl-errors, as the CA they use isn't in your browser's default CA list.
casperjs --ignore-ssl-errors=true downloadscript.js
You can listen to the page.resource.received event and download() the file when received:
casper.on('page.resource.received', function(resource) {
    if (resource.stage !== "end") {
        return;
    }
    if (resource.url.indexOf('ExportData.csv') > -1) {
        this.download(resource.url, 'ExportData.csv');
    }
});
@julianjm's approach is almost the solution, but in my case I did not have the correct form name to hook the form submission.
So I found another solution using a PhantomJS beta:
there is a beta version of PhantomJS 2.0 that includes an event handler that solves this issue.
It is still a beta version, so there is no debugging.
So I developed the clicks and the page handling on the release version, and then switched PhantomJS versions to make the download work.
casper.start('http://www.website.com.br/', function() {
    this.page.onFileDownload = function(status) {
        console.log('onFileDownload(' + status + ')');
        // The system will detect the download, but you will have to name the file yourself!
        return "ContactList_08-25-14.csv";
    };
});
casper.then(function() {
    // Do your stuff here to click on the download link.
});
casper.run();
Download: PhantomJS 2.0 BETA
Download the exe, rename the release version phantomjs.exe to phantomjs.bkp.exe, and put the 2.0 version in its place.
Then, in CasperJS, you will need to add some lines at the beginning of casperjs/bin/bootstrap.js, right after the license header:
var system = require('system');
var argsdeprecated = system.args;
argsdeprecated.shift();
phantom.args = argsdeprecated;
Also comment out the version check (in the same file):
(function(version) {
    // required version check
    /* if (version.major !== 1) {
        return __die('CasperJS needs PhantomJS v1.x');
    }
    if (version.minor < 8) {
        return __die('CasperJS needs at least PhantomJS v1.8 or later.');
    }
    if (version.minor === 8 && version.patch < 1) {
        return __die('CasperJS needs at least PhantomJS v1.8.1 or later.');
    } */
})(phantom.version);
Remember, this is a hack!
These lines in bootstrap.js will cause problems if you want to run the PhantomJS release version or SlimerJS.
So develop on the release version, then switch to this tweaked setup to be able to download.
If you need to debug, you will have to remove those lines from bootstrap.js.
I have to deal with a site written with some kind of ASP.NET framework that sends a remarkable amount of POST data with each request (some 100 KB, of which about 95 KB never seem to change between requests; apparently it is viewstate-related).
However, no method I could find worked for me. I've looked into intercepting XHR, and I even found someone tackling the very same framework (at least judging from the selectors) but with a simpler case, inspired by this very question. I found out that back in the day this couldn't be done with PhantomJS.
My problem is that a click on a button starts a chain of AJAX requests, culminating in the sending of this enormous POST form, to which the server finally replies with a Content-Disposition: attachment.
In the end, I found the approach below, which works for me even if it is network-inefficient:
...setting up everything, until I just need to click on a button...
phantomData = null;
phantomRequest = null;
// Here, I just recognize the form being submitted and copy it.
casper.on('resource.requested', function(requestData, request) {
    for (var h in requestData.headers) {
        if (requestData.headers[h].name === 'Content-Type') {
            if (requestData.headers[h].value === 'application/x-www-form-urlencoded') {
                phantomData = requestData;
                phantomRequest = request;
            }
        }
    }
});
// Here, I recognize when the request has FAILED, because PhantomJS does
// not support straight downloading.
casper.on('resource.received', function(resource) {
    for (var h in resource.headers) {
        if (resource.headers[h].name === 'content-disposition') {
            if (resource.stage === 'end') {
                if (phantomData) {
                    // To do: get the file name from resource.headers[h].value
                    casper.download(
                        resource.url,
                        "output.pdf",
                        phantomData.method,
                        phantomData.postData
                    );
                } else {
                    // Something went wrong.
                }
                // Possibly, remove listeners?
            }
        }
    }
});
// Now, click on the button and initiate the dance.
casper.click(pdfLinkSelector);
The download works flawlessly, even if I can see that the file gets requested (and sent) twice.
[debug] [phantom] Navigation requested: url=https://somesite/SomePage.aspx, type=FormSubmitted, willNavigate=true, isMainFrame=true
[debug] [application] GOT FORM, REQUEST DATA SAVED
[warning] [phantom] Loading resource failed with status=fail (HTTP 200): https://somesite/SomePage.aspx
[debug] [application] END STAGE REACHED, PHANTOMDATA PRESENT
[debug] [application] ATTEMPTING CASPERJS.DOWNLOAD
[debug] [remote] sendAJAX(): Using HTTP method: 'POST'
[debug] [phantom] Downloaded and saved resource in output.pdf
[debug] [application] TERMINATING SUCCESSFULLY
[debug] [phantom] Navigation requested: url=about:blank, type=Other, willNavigate=true, isMainFrame=true
[debug] [phantom] url changed to "about:blank"
(Next, I'll probably modify the script to try invoking request.abort() from inside the resource.requested listener, set a semaphore, and invoke the downloader again; I won't be able to get the attachment filename, but that matters little to me.)
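For completeness, a sketch of that follow-up idea, untested and with the same hard-coded filename caveat as above (the semaphore guards against the duplicate request noted earlier):
var downloading = false; // semaphore: only redirect the first matching POST

casper.on('resource.requested', function(requestData, request) {
    for (var h in requestData.headers) {
        if (requestData.headers[h].name === 'Content-Type' &&
                requestData.headers[h].value === 'application/x-www-form-urlencoded' &&
                !downloading) {
            downloading = true;
            request.abort(); // skip the throwaway in-page request entirely
            casper.download(
                requestData.url,
                "output.pdf",
                requestData.method,
                requestData.postData
            );
        }
    }
});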