With the newly available community version of Parse server (https://github.com/parse-community/parse-server) there does not seem to be a configuration option to disable the /files endpoints which allow for file upload and hosting. I would very much like to disable this feature, and Cloud Code server-side hooks are not a good option (not currently supported in parse-dashboard, among other problems). What's the best way to disable these endpoints?
Using a little middleware works for me. Add this to your parse app config:
{
"middleware": "disableFilesMiddleware",
}
And then for your middleware module disableFilesMiddleware.js:
module.exports = function( req , res , next ){
if( req.path.substring( 0 , 12 ) === '/parse/files' ) {
res.status(400).send({ code: 119 , message: 'files endpoints are disabled' });
return;
}
next();
};
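In case you mount parse-server yourself as Express middleware rather than via a config file, the same guard can be applied with app.use before the Parse mount point. This is only a minimal sketch; the require path and the ParseServer options are placeholders, not part of the original answer:
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var disableFilesMiddleware = require('./disableFilesMiddleware');

var app = express();
// reject any /parse/files/* request before Parse Server ever sees it
app.use(disableFilesMiddleware);
app.use('/parse', new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev', // placeholder
    appId: 'myAppId',                             // placeholder
    masterKey: 'myMasterKey'                      // placeholder
}));
app.listen(1337);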
For anyone using Parse 5+, you can configure this in your Parse Server config to disable all uploads:
fileUpload: {
enableForPublic: false,
enableForAnonymousUser: false,
enableForAuthenticatedUser: false
}
You can read about it in the docs here.
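For context, here is a minimal sketch of where that block sits in the Parse Server options object; everything besides fileUpload is a placeholder value, not taken from the original answer:
const config = {
    databaseURI: 'mongodb://localhost:27017/dev',
    appId: 'myAppId',
    masterKey: 'myMasterKey',
    serverURL: 'http://localhost:1337/parse',
    fileUpload: {
        enableForPublic: false,
        enableForAnonymousUser: false,
        enableForAuthenticatedUser: false
    }
};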
I am using DotenvEditor to save the env parameters, but after redirecting I get the following error:
This site can’t be reached. The connection was reset.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_RESET
What is the mistake in my code? The rest of the controller works properly.
if (isset($request->APP_DEBUG)) {
$env_update = DotenvEditor::setKeys(['APP_DEBUG' => 'true']);
} else {
$env_update = DotenvEditor::setKeys(['APP_DEBUG' => 'false']);
}
if (isset($request->COOKIE_CONSENT_ENABLED)) {
$env_update = DotenvEditor::setKeys(['COOKIE_CONSENT_ENABLED' => 'true']);
} else {
$env_update = DotenvEditor::setKeys(['COOKIE_CONSENT_ENABLED' => 'false']);
}
$env_update = DotenvEditor::setKeys([
'APP_NAME' => preg_replace('/\s+/', '', $request->title),
'APP_URL' => preg_replace('/\s+/', '', $request->APP_URL),
]);
$env_update->save();
Try to update your .env file using Notepad++ as administrator. I think it is much easier and more user friendly. When you make the necessary changes, save the file. Afterwards, I think you must reboot the Virtual Machine (if you are using one) or restart the service in order for the change to take effect in the application.
Regarding Laravel-Dotenv-Editor, please visit Dotenv editor to find more information.
Example of a .env file:
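For illustration, a minimal sketch of a typical Laravel .env with placeholder values; the keys shown mirror the ones used in the question above:
APP_NAME=MyApp
APP_ENV=local
APP_KEY=base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
APP_DEBUG=true
APP_URL=http://localhost

COOKIE_CONSENT_ENABLED=false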
I just implemented my first backend file where I fetch some user data, messages and so on.
Now I wanted to include error handling for the case that no network is available.
I don't know if I did it right, but this was my approach so far:
import axios from 'axios'
const host = process.env.VUE_APP_URL
export default {
person: async function (currentPerson) {
let params = {
currentPerson: localStorage.getItem("person"),
};
// use the explicitly passed person, otherwise keep the localStorage value
if (currentPerson) {
params['currentPerson'] = currentPerson;
}
return axios.get(`${host}/api/currentPerson`, {
params: params
})
//catching network errors
.catch (error => {
if (error.response) {
/*
* The request was made and the server responded with a
4xx/5xx error
*/
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
/*
* The request was made but no response was received
*/
console.log(error.request);
} else {
// Something happened in setting up the request and triggered an Error
console.log('Error', error.message);
}
console.log(error)
});
},
In the mounted() function of my main view I fetch the data from the backend file above:
backend.matches().then(function (response) {
self.contacts = response.data.persons;
});
I tried to check in the console if it is working, but all I get is the following:
In the catch block I check for
response errors: like 4xx/5xx
request errors: if my network is not responding in time
and any other errors
Would this be the right approach to check whether a network connection is available or not? Or does it degrade the user experience when the user is shown the error?
My backend file includes more methods. Do I have to write this kind of error handling for each method?
In your backend file you don't react to whether there is a network connection or not, I think.
And just for reference: that is not the backend, but the code that communicates with the backend - the backend is the part you communicate with, e.g. Laravel code, an API, ...
Try adding the following at the beginning of your catch part:
if (!error.response) {
//network error
console.log('No network connection');
} else if (error.response) {
//the rest of your code
}
This should print out No network connection in your console.
Run your application, turn off the internet connection and check the console.
This kind of code should always be located in your backend part.
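To avoid repeating the same catch block in every method of the backend file, one option is a shared axios instance with a response interceptor. This is only a sketch, assuming the backend file already uses axios and VUE_APP_URL as shown in the question:
import axios from 'axios'

const api = axios.create({ baseURL: process.env.VUE_APP_URL })

api.interceptors.response.use(
  response => response,
  error => {
    if (!error.response) {
      // no response at all: offline, DNS failure, server unreachable, timeout
      console.log('No network connection')
    } else {
      // the server answered with a 4xx/5xx status
      console.log(error.response.status, error.response.data)
    }
    return Promise.reject(error) // callers can still handle the error themselves
  }
)

export default api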
My answer may be different from your question.
When I created a .NET Core API with Angular, I used three things to check whether there is a network connection or not:
subscribe to the window's offline/online events (a minimal sketch follows below)
create a SignalR hub from the layout component to the API server
an API request failed (that alone can mean a lot of things, but if case 1 or 2 is true, I know what caused case 3)
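A minimal browser-side sketch of point 1, listening to the window's online/offline events so the app knows about connectivity changes independently of any request (the log messages are placeholders):
let online = navigator.onLine;

window.addEventListener('online', () => {
  online = true;
  console.log('Connection restored');
});

window.addEventListener('offline', () => {
  online = false;
  console.log('Connection lost');
});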
I'm running both a Vue CLI 3 app and a Google Cloud Function (CF) locally.
I have changed the response headers in CF as follows:
res.set('Access-Control-Allow-Origin', "*")
res.set('Access-Control-Allow-Methods', 'GET, POST')
and it serves me well when I call the CF from a browser.
For some reason, the same call is CORS blocked when invoked inside the Vue app.
I tried with Firefox (CORS enabled by settings as well as using a plugin).
I also added the following to vue.config.js as described here:
// vue.config.js
module.exports = {
devServer: {
proxy: 'http://localhost:8010', //<-- my CFs are running on 8010
}
}
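If I go the proxy route, my understanding is that the app has to request a relative path so the dev server can forward it to port 8010; a hedged sketch, where the function name is just a placeholder:
// instead of axios.get('http://localhost:8010/helloWorld')
axios.get('/helloWorld')
  .then(response => console.log(response.data));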
Not sure how to proceed as the whole point of CFs is to not have any servers running (including a proxy).
Any pointers are much appreciated, cheers.
The problem was with the local Cloud Function emulator.
I got it working when I altered the Cloud Function headers in the live environment.
// Set CORS headers for preflight requests
function setCorsHeaders(req, res){
res.set('Access-Control-Allow-Origin', CLIENT_URL);
res.set('Access-Control-Allow-Credentials', 'true');
if (req.method === 'OPTIONS') {
// Send response to OPTIONS requests
res.set('Access-Control-Allow-Methods', 'GET,POST');
res.set('Access-Control-Allow-Headers', 'Content-Type');
res.set('Access-Control-Max-Age', '3600');
res.status(204).send('');
}
}
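For completeness, a hedged sketch of how that helper might be called inside an HTTP Cloud Function; helloWorld and the CLIENT_URL value are placeholders, not part of the original answer:
const CLIENT_URL = 'http://localhost:8080'; // placeholder for the allowed origin

exports.helloWorld = (req, res) => {
  setCorsHeaders(req, res);
  if (req.method === 'OPTIONS') {
    return; // the 204 preflight response was already sent by the helper
  }
  res.status(200).json({ message: 'Hello from the Cloud Function' });
};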
I'm a beginner with Amazon's Lambda and API Gateway.
I'm deploying a very simple API: a Python 2.7 Lambda function that prints "Hello World", which I trigger with API Gateway. However, when I click on the Invoke URL link, it tells me {"message": "Internal server error"}.
So, trying to see what is wrong, I click on the API itself and I can see the following greyed out in my Method Execution: "Integration Response: Proxy integrations cannot be configured to transform responses."
I have tested many different configurations but I still face the same error. I have no idea why this step is greyed out.
I had the same problem when trying to integrate API Gateway and a Lambda function. After spending a couple of hours, I figured it out.
When you create a new resource or method, the Use Lambda Proxy integration option is set by default.
You need to remove this: go to Integration Request and untick Use Lambda Proxy integration.
Then, in your Resources, open the Actions tab and choose Enable CORS.
Once this is done, deploy your API once again and test the function. Also, this topic explains what's happening under the hood.
Good luck...
The Lambda response should be in a specific format for API Gateway to process it. You can find details in this post: https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
exports.handler = (event, context, callback) => {
var responseBody = {
"key3": "value3",
"key2": "value2",
"key1": "value1"
};
var response = {
"statusCode": 200,
"headers": {
"my_header": "my_value"
},
"body": JSON.stringify(responseBody),
"isBase64Encoded": false
};
callback(null, response);
};
My API was working in Postman but not locally when I was developing the front end. I was getting the same errors when trying to enable CORS on my resources for GET, POST and OPTIONS, and after searching all over, #aditya's answer got me on the right track, but I had to tweak my code slightly.
I needed to add res.statusCode and the two headers, and then it started working.
// GET
// get all myModel
app.get('/models/', (req, res) => {
const query = 'SELECT * FROM MyTable'
pool.query(query, (err, results, fields) => {
//...
const models = [...results]
const response = {
data: models,
message: 'All models successfully retrieved.',
}
//****** needed to add the next 3 lines
res.statusCode = 200;
res.setHeader('content-type', 'application/json');
res.setHeader('Access-Control-Allow-Origin', '*');
res.send(response)
})
})
If you're using Terraform for AWS resource provisioning, you can set the
"aws_api_gateway_integration" type to "AWS" instead of "AWS_PROXY", and that should resolve your problem.
I am trying to write a simple electron app to interface with a REST server. The server doesn't have the appropriate certificates. When I try to make a 'GET' request (using fetch()), I get the following error message:
Failed to load resource: net::ERR_BAD_SSL_CLIENT_AUTH_CERT
Fixing the certs is not currently an option. I tried to use the 'ignore-certificate-errors' flag (see below). It seems like it should allow me to skip over this error, but it doesn't.
var electron = require('electron');
var app = electron.app
app.commandLine.appendSwitch('ignore-certificate-errors');
...
The result is the same error.
Questions:
Am I correct in assuming this option is supposed to help here?
If so, any ideas what I am doing wrong?
Electron version: 1.2.8
Thanks!
You can update your version of Electron and use this callback:
app.on('certificate-error', (event, webContents, link, error, certificate, callback) => {
if (link.indexOf('yourURL/api/') !== -1) {
// Verification logic.
event.preventDefault();
callback(true);
} else {
callback(false);
}
});
Then you can do the fetch to your API over HTTPS.