Parse server useMasterKey syntax - parse-server

I am using a simple query to increment the stock of a product. The query works when the class-level permissions are set to public read and write; however, I cannot work out how to make the query use the master key so that the class can be restricted from client-side changes. How should this be done?
itemQuery.equalTo('productName', items[count]);
itemQuery.first({
  success: function(object) {
    // Successfully retrieved the object.
    object.increment('stock', 1);
    object.save();
  }
});

Set the class-level permissions to restrict access as you see fit, then in Cloud Code you have two options: (1) use the master key for the whole cloud function:
Parse.Cloud.useMasterKey();
itemQuery.equalTo('productName', items[count]);
// and so on...
Or (2), better, apply the master key as an option only to the action that would otherwise be restricted:
// etc
object.save(null, { useMasterKey: true });
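For completeness, here is a minimal sketch of option (2) as a whole Cloud Code function (the function name 'incrementStock' and the 'Item' class are placeholders, and it assumes the pre-3.0 Cloud Code signature used in the question's era):
// Hypothetical Cloud Code function; names are illustrative
Parse.Cloud.define('incrementStock', function(request, response) {
  var itemQuery = new Parse.Query('Item');
  itemQuery.equalTo('productName', request.params.productName);
  itemQuery.first({ useMasterKey: true })
    .then(function(object) {
      object.increment('stock', 1);
      // Only this save bypasses the restricted class-level permissions
      return object.save(null, { useMasterKey: true });
    })
    .then(function() {
      response.success();
    }, function(error) {
      response.error(error);
    });
});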

Related

Firebase Realtime DB: How to setup rule to only write to DB if value being passed to be written is not already stored in DB

I am looking for syntax to setup a rule in my Firebase Realtime DB to confirm that the value being passed from my app does not already exist in that particular location before writing another duplicate entry. I am passing an "ExponentPushToken" value for push notifications. This happens anonymously. That is, the write is happening without any user authentication, the DB is simply storing a list of "ExponentPushToken" like so:
{
  "users" : {
    "push_token" : {
      "auto-generated key" : "ExponentPushToken-value"
    }
  }
}
I attempted to use the following, but it still allows the duplicate value to be written. I am wondering if this has anything to do with the random auto-generated key that is created each time a value is written to the DB, or if I am not thinking through this properly.
{
  "rules": {
    ".read": "false",
    ".write": "true && newData.val() !== data.child('push_token').val()"
  }
}
For context, each ExponentPushToken is a unique value for a given device that is used to open my app. It is assigned to that device indefinitely, will never change for that device, and is written to the DB. Each time the app is opened, a push/set command in my app's code writes that token to the DB. I would like to set up a Firebase Realtime DB rule so that the value being passed from the app does NOT get written if it already exists. As it stands, I find myself having to manually parse the DB to remove duplicate values, both to keep the DB clean and to avoid sending more than one push notification to app users.
Any help or direction you can provide would be greatly appreciated!
Thanks!
This cannot be done using Firebase security rules, as they cannot search for values (that wouldn't scale). The check for existing values at a particular location in the database has to be done in the app that performs the push/write.
The app would need to query the database for values and have a condition in place to prevent writing duplicates.
An example of working code that can be implemented within the app to accomplish this can be found here:
Expo and Firebase Realtime DB - Read values and confirm a particular value does not exist before writing to database
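As a rough illustration of that client-side check, here is a sketch using the Firebase web SDK (the ref path follows the JSON above; token is a placeholder for the ExponentPushToken, and concurrent writes could still race past this check):
var ref = firebase.database().ref('users/push_token');
// Look for an existing child whose value equals the token
ref.orderByValue().equalTo(token).once('value', function(snapshot) {
  if (!snapshot.exists()) {
    ref.push(token); // no duplicate found, safe to write
  }
});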
Rules are evaluated in the context of the node/path on which you declare them. So the .write rule in your question is evaluated on the root, and both data and newData contain the data/new data at the root.
The easiest way to verify that there's no data at the path is to define the rules at that path:
{
  "rules": {
    ".read": "false",
    "push_token": {
      "$pushId": {
        ".write": "!data.exists()"
      }
    }
  }
}
The $pushId here means that the rules under it apply to every child node of push_token, which seems to match your JSON. For more on this, see the documentation on wildcard variables.
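Note that with these rules, deduplication only works if the token itself is used as the key: a push ID is freshly generated on every push, so !data.exists() would always pass for it. A sketch of a write keyed by the token, under that assumption (keys may not contain '.', '#', '$', '[', ']' or '/', so sanitize the token if needed):
// Write the token as the key; a second write to the same path is rejected by the rule
firebase.database().ref('push_token/' + token).set(true);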

Integrate BigQuery SubPub and Cloud Functions

I'm on a project where we need to use BigQuery, Pub/Sub, Logs Explorer and Cloud Functions.
The project:
Every time a certain event occurs (like a user accepting cookies), a system inserts a new row into BigQuery with a lot of columns (params) like: utm_source, utm_medium, consent_cookies, etc...
Once I have this new row in my table I need to read the columns and get the values to use in a Cloud Function.
In the Cloud Function I want to use those values to make API calls.
What I manage to do so far:
I created a log routing sink that filters the new entries and sends the log to my Pub/Sub topic.
Where I'm stuck:
I want to create a Cloud Function that triggers every time a new log comes in, and in that function I want to access the information contained in the log, such as utm_source, utm_medium, consent_cookies, etc., and use those values to make API calls.
Can anyone help me? Many MANY thanks in advance!
I made a project to illustrate the flow:
1. Insert into the table.
2. From this insertion, create a sink in Logging (filtering).
Now every time I run a new query it goes to Pub/Sub and I get the log of the query.
What I want to do is trigger a function on this topic and use the values I have in the query to do operations like calling APIs, etc.
So far I was able to write this code:
"use strict";
function main() {
// Import the Google Cloud client library
const { BigQuery } = require("#google-cloud/bigquery");
async function queryDb() {
const bigqueryClient = new BigQuery();
const sqlQuery = `SELECT * FROM \`mydatatable\``;
const options = {
query: sqlQuery,
location: "europe-west3",
};
// Run the query
const [rows] = await bigqueryClient.query(options);
rows.forEach((row) => {
const username = row.user_name;
});
}
queryDb();
}
main();
Now I'm stuck again. I don't know how to get the correct query from the sink I created and use the info to make my calls...
You have 2 solutions for calling your Cloud Function from a PubSub message:
HTTP functions: you can set up an HTTP call. Create your Cloud Function with an HTTP trigger (trigger-http), and create a push subscription on your PubSub topic that calls the Cloud Function. Don't forget to add security (make your function private and enable authentication on PubSub), because your function is publicly accessible.
Background functions: you can bind your Cloud Function directly to the PubSub topic. A subscription is automatically created and linked to the Cloud Function. The security is built in.
And because there are 2 types of functions, there are 2 different function signatures. I provide both below; the processing is (almost) the same.
function extractQuery(pubSubMessage) {
  // Decode the base64-encoded PubSub message
  let logData = Buffer.from(pubSubMessage, 'base64').toString();
  // Parse it as JSON
  let logMessage = JSON.parse(logData);
  // Extract the query from the log entry
  let query = logMessage.protoPayload.serviceData.jobInsertRequest.resource.jobConfiguration.query.query;
  console.log(query);
  return query;
}

// For HTTP functions
exports.bigqueryQueryInLog = (req, res) => {
  console.log(req.body);
  const query = extractQuery(req.body.message.data);
  res.status(200).send(query);
};

// For background functions
exports.bigqueryQueryInLogTopic = (message, context) => {
  extractQuery(message.data);
};
The query logged is the insert into... statement that you have in your log entry. Then you have to parse that SQL request to extract the part you want.
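For instance, here is a rough way to pull individual values out of that logged INSERT statement (a naive sketch: the regex assumes a simple INSERT INTO ... (cols) VALUES (...) shape and would break on commas or parentheses inside values):
function extractValues(query) {
  // Naive parse of "INSERT INTO `table` (col1, col2) VALUES ('v1', 'v2')"
  const match = query.match(/\(([^)]+)\)\s*VALUES\s*\(([^)]+)\)/i);
  if (!match) return null;
  const columns = match[1].split(',').map((c) => c.trim());
  const values = match[2].split(',').map((v) => v.trim().replace(/^'|'$/g, ''));
  // Build an object such as { utm_source: '...', utm_medium: '...' }
  return Object.fromEntries(columns.map((col, i) => [col, values[i]]));
}
For anything beyond trivial statements, a proper SQL parser would be safer than a regex.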

setting permissions on redux actions

I am creating a redux-react web app which will have a number of different permission levels. Many users may be interacting with one piece of data, but some may have limitations on what they can do.
To me, the obvious way to set permissions on interactions with the data (held behind the app server) would be to associate certain permissions with different redux actions. Then, when a user saves their state, the client-side app would bundle up the user's action history and send it back to the server. These actions could then be applied to the data on the server, and permissions could be checked, action by action, against a user JWT.
This would mean lots of our reducer code could be used isomorphically on the server.
I cannot find any resources/discussions on this. What is the normal way of handling complex permissions in a redux app? Having auth purely at the endpoint seems cumbersome; it would require rewriting a ton of code that already exists in client-side reducers. Is there any reason not to go ahead and create a reducer which checks auth on each action?
Points:
We must assume actions sent to the server are authenticated, but may be sent by users who do not have permission to dispatch these actions
If the permissions have been checked and are inside the actions, then the reducer can check permissions and remain pure (see the sketch after this list)
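A minimal sketch of that second point, assuming each action arrives with the acting user's permissions already verified (for example, server-side from the JWT) and attached under action.meta (all names here are illustrative):
// The reducer stays pure: it only reads its arguments.
function documentsReducer(state = {}, action) {
  switch (action.type) {
    case 'EDIT_DOCUMENT':
      // Permissions are assumed to have been checked and embedded upstream
      if (!action.meta.permissions.includes('EDIT_DOCUMENT')) {
        return state; // ignore unauthorized actions
      }
      return { ...state, [action.id]: action.text };
    default:
      return state;
  }
}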
I think it's not the responsibility of action creators to check the permissions, but using a reducer and a selector is definitely the way to go. Here is one possible implementation.
The following component requires some ACL checks:
/**
 * Display a user record.
 *
 * A deletion link is added if the logged user has sufficient permissions to
 * delete the record.
 */
function UserRecord({ username, email, avatar, isGranted, deleteUser }) {
  return (
    <div>
      <img src={avatar} />
      <b>{username}</b>
      {isGranted("DELETE_USER")
        ? <button onClick={deleteUser}>{"Delete"}</button>
        : null
      }
    </div>
  )
}
We need to connect it to our store to properly hydrate all props:
export default connect(
  (state) => ({
    isGranted: (perm) => state.loggedUser.permissions.has(perm),
  }),
  {deleteUser},
  (stateProps, dispatchProps, ownProps) => ({
    ...stateProps,
    ...ownProps,
    deleteUser: () => dispatchProps.deleteUser(ownProps.user)
  })
)(UserRecord)
The first argument of connect will create isGranted for the logged user. This part could be done using reselect to improve performance.
The second argument will bind the actions. Nothing fancy here.
The third argument will merge all props and will pass them to the wrapped component. deleteUser is bound to the current record.
You can now use UserRecord without dealing with ACL checks since it will auto-update depending on what is stored in loggedUser.
<UserRecord user={someUser} />
In order to get the above example to work, you need to store the logged user in Redux's store as loggedUser. You don't need to check ACL on actions since the UI won't trigger them if the current user lacks permissions. Moreover, ACL has to be checked server-side.
You can set up a helper function built into your actions for checking user rights (locally or remotely), where you would also provide a callback action creator on error. Of course, redux-thunk or similar would be needed so you can dispatch actions from other actions.
The key rule you should observe here is:
Reducers are pure functions.
That means reducers always return the same value given the same arguments; action creators, on the other hand, can be impure. Checking ACL rights in a reducer would violate that rule.
Let's say you need to fetch the list of contacts. Your action is REQUEST_CONTACTS. The action creator would first dispatch something like:
// ACL test function; `user` is assumed to come from your app's state
function canAccessContacts(dispatch, user) {
  if (user !== 'cool') {
    dispatch({type: 'ACCESS_DENIED'});
    return false;
  }
  return true;
}

// Action creator (requires redux-thunk)
function fetchContacts() {
  return (dispatch, getState) => {
    if (!canAccessContacts(dispatch, getState().user)) {
      return false;
    }
    dispatch({type: 'REQUEST_CONTACTS'});
    // your logic for retrieving contacts goes here
    dispatch({
      type: 'RECEIVE_CONTACTS',
      data: your_contacts_data_here
    });
  };
}
RECEIVE_CONTACTS will be fired once you have the data back. The time between REQUEST_CONTACTS and RECEIVE_CONTACTS (which is likely an async call) is an opportunity to show your loading indicator.
Of course, this is a very raw example, but it should get you going.

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
$('#fine-uploader').fineUploaderS3({
  request: {
    endpoint: "http://mybucket.s3.amazonaws.com",
    accessKey: "changeme"
  },
  signature: {
    endpoint: "endpoint.php"
  },
  uploadSuccess: {
    endpoint: "success.html"
  },
  template: 'qq-template'
});
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
// If you are relying on CORS, you will need to adjust the allowed domain here.
header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for clientPrivateKey. Is that the Secret Access Key I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key pair in S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should have created two distinct IAM users: one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.

Accessing Meteor application as another user

I've recently updated some parts of the code and want to check if they play well with production database, which has different data sets for different users. But I can only access the application as my own user.
How to see the Meteor application through the eyes of another user?
UPDATE: The best way to do this is to use a method
Server side
Meteor.methods({
  logmein: function(user_id_to_log_in_as) {
    this.setUserId(user_id_to_log_in_as);
  }
});
Client side
Meteor.call("logmein", "<some user_id of who you want to be>");
This is kept simple for the sake of clarity; feel free to add your own security measures (one example follows below).
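For example, one such measure could be restricting the method to admins (a sketch using the standard check package; the isAdmin flag is an assumption about how your user documents are shaped):
Meteor.methods({
  logmein: function(user_id_to_log_in_as) {
    check(user_id_to_log_in_as, String);
    // Only allow admins to impersonate other users (isAdmin is hypothetical)
    var caller = Meteor.users.findOne(this.userId);
    if (!caller || !caller.isAdmin) {
      throw new Meteor.Error("not-authorized");
    }
    this.setUserId(user_id_to_log_in_as);
  }
});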
I wrote a blog post about it. But here are the details:
On the server. Add a method that only an admin can call that would change the currently logged-in user programmatically:
Meteor.methods(
  "switchUser": (username) ->
    user = Meteor.users.findOne("username": username)
    if user
      idUser = user["_id"]
      this.setUserId(idUser)
      return idUser
)
On the client. Call this method with the desired username and override the user on the client as well:
Meteor.call("switchUser", "usernameNew", function(idUser) {
Meteor.userId = function() { return idUser;};
});
Refresh client to undo.
This may not be a very elegant solution but it does the trick.
Slightly updated version of the accepted answer that logs the client in as the new user on the client as well as on the server.
logmein: function(user_id_to_log_in_as) {
  if (Meteor.isServer) {
    this.setUserId(user_id_to_log_in_as);
  }
  if (Meteor.isClient) {
    Meteor.connection.setUserId(user_id_to_log_in_as);
  }
},
More info here: http://docs.meteor.com/api/methods.html#DDPCommon-MethodInvocation-setUserId