Custom user permissions on a table in Hasura

I'm developing an app which requires custom permissions on a specific table. I have the following data structure:
Users
-> id
-> name
Accounts
-> id
-> name
UserAccounts
-> userId
-> accountId
-> permissionLevel
permissionLevel is an enum and can be one of: Owner, ReadAndWrite, or ReadOnly.
What I'd like to have is the following:
1) If you're an Owner of a UserAccount, you can invite other users.
2) If you want to create a new Account, you'll get the Owner permission in UserAccounts.
3) You cannot add yourself to UserAccounts when you are not the Owner of said Account.
The issue I'm having is that I'm not sure how to solve this in Hasura. I've tried the following Hasura permission, but I'm missing an option to expand the where clause (see the comment below):
{
  "_or": [
    {
      "_and": [
        {
          "accountId": {
            "_is_null": false
          }
        },
        {
          "Account": {
            "UserAccounts": {
              "_and": [
                {
                  "userId": {
                    "_eq": "X-Hasura-User-Id"
                  }
                },
                {
                  "permissionLevel": {
                    "_eq": "Owner"
                  }
                }
              ]
            }
          }
        }
      ]
    },
    {
      "_not": {
        "_exists": {
          "_table": {
            "name": "Account",
            "schema": "public"
          },
          "_where": {
            "UserAccounts": {
              "accountId": {
                "_eq": "$accountIdFromQuery" // <-- this does not exist AFAIK
              }
            }
          }
        }
      }
    }
  ]
}
So I'm at a loss as to which direction to go. Maybe I'm just missing something, maybe I need to use a custom view, or maybe I need to try a custom PostgreSQL function. Any help is greatly appreciated!

For now, my solution is to use an event trigger (which invokes an AWS Lambda function) to write into the UserAccounts table with admin privileges. Hasura has documentation on how to achieve this: https://hasura.io/docs/1.0/graphql/manual/event-triggers/serverless.html
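For completeness, here is a minimal sketch of what such a Lambda handler could look like. It assumes Node.js 18+ (global fetch), an insert event trigger on the Accounts table, uuid primary keys, that permissionLevel is exposed as a plain string, and hypothetical HASURA_ENDPOINT / HASURA_ADMIN_SECRET environment variables; treat it as an illustration rather than a drop-in implementation:
exports.handler = async (event) => {
  // Hasura delivers the event trigger payload as the request body
  const body = JSON.parse(event.body);
  const account = body.event.data.new; // the freshly inserted Account row
  const userId = body.event.session_variables["x-hasura-user-id"];

  // Write the Owner row with admin privileges, bypassing the permission rules
  const res = await fetch(process.env.HASURA_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET,
    },
    body: JSON.stringify({
      query: `mutation ($userId: uuid!, $accountId: uuid!) {
        insert_UserAccounts_one(object: {
          userId: $userId,
          accountId: $accountId,
          permissionLevel: "Owner"
        }) { userId }
      }`,
      variables: { userId, accountId: account.id },
    }),
  });
  return { statusCode: res.ok ? 200 : 500 };
};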

Related

Databricks Job API create job with single node cluster

I am trying to figure out why I get the following error when I use the Databricks Job API.
{
  "error_code": "INVALID_PARAMETER_VALUE",
  "message": "Cluster validation error: Missing required field: settings.cluster_spec.new_cluster.size"
}
What I did:
I created a job running on a single node cluster using the Databricks UI.
I copied the job config JSON from the UI.
I deleted my job and tried to recreate it by sending a POST to the Job API with the copied JSON, which looks like this:
{
  "new_cluster": {
    "spark_version": "7.5.x-scala2.12",
    "spark_conf": {
      "spark.master": "local[*]",
      "spark.databricks.cluster.profile": "singleNode"
    },
    "azure_attributes": {
      "availability": "ON_DEMAND_AZURE",
      "first_on_demand": 1,
      "spot_bid_max_price": -1
    },
    "node_type_id": "Standard_DS3_v2",
    "driver_node_type_id": "Standard_DS3_v2",
    "custom_tags": {
      "ResourceClass": "SingleNode"
    },
    "enable_elastic_disk": true
  },
  "libraries": [
    {
      "pypi": {
        "package": "koalas==1.5.0"
      }
    }
  ],
  "notebook_task": {
    "notebook_path": "/pathtoNotebook/TheNotebook",
    "base_parameters": {
      "param1": "test"
    }
  },
  "email_notifications": {},
  "name": " jobName",
  "max_concurrent_runs": 1
}
The API documentation does not help (I can't find anything about settings.cluster_spec.new_cluster.size). The JSON is copied from the UI, so I assumed it would be correct.
Thanks for your help.
Source: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/clusters#--create
To create a Single Node cluster, include the spark_conf and custom_tags entries shown in the example and set num_workers to 0.
{
  "cluster_name": "single-node-cluster",
  "spark_version": "7.6.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 0,
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "local[*]"
  },
  "custom_tags": {
    "ResourceClass": "SingleNode"
  }
}
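Applied to the payload from the question (which already has the required spark_conf and custom_tags entries), the fix should simply be adding "num_workers": 0 to the new_cluster block, i.e.:
"new_cluster": {
  "spark_version": "7.5.x-scala2.12",
  "num_workers": 0,
  "spark_conf": {
    "spark.master": "local[*]",
    "spark.databricks.cluster.profile": "singleNode"
  },
  "azure_attributes": {
    "availability": "ON_DEMAND_AZURE",
    "first_on_demand": 1,
    "spot_bid_max_price": -1
  },
  "node_type_id": "Standard_DS3_v2",
  "driver_node_type_id": "Standard_DS3_v2",
  "custom_tags": {
    "ResourceClass": "SingleNode"
  },
  "enable_elastic_disk": true
}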

How not to expose duplicated (normalize?) nodes via GraphQL?

Given "user has many links" (what means a link was created by a user) DB entities relations, I want to develop API to fetch links along with users so that the returned data does not contain duplicated users.
In other words, instead of this request:
query {
  links {
    id
    user {
      id
      email
    }
  }
}
that returns the following data:
{
  "data": {
    "links": [
      {
        "id": 1,
        "user": {
          "id": 2,
          "email": "user2@example.com"
        }
      },
      {
        "id": 2,
        "user": {
          "id": 2,
          "email": "user2@example.com"
        }
      }
    ]
  }
}
I want to make a request like this (note the "references" field):
query {
  links {
    id
    userId
  }
  references {
    users {
      id
      email
    }
  }
}
that returns associated users without duplicates:
{
  "data": {
    "links": [
      {
        "id": 1,
        "userId": 2
      },
      {
        "id": 2,
        "userId": 2
      }
    ],
    "references": {
      "users": [
        {
          "id": 2,
          "email": "user2@example.com"
        }
      ]
    }
  }
}
That should reduce the amount of data transferred between the client and the server, which adds a bit of a speed boost.
Is there a ready, commonly used implementation of this idea in any language? (Ideally, I'm looking for Ruby.)
Normalizing data is not the role of the query or the server:
there are no such possibilities in the GraphQL spec;
the server must return all requested fields within the queried [response] structure;
... but you can implement something yourself:
standardized (commonly used) pagination (Relay-style edges/nodes, nodes only, or better both);
query [complexity] weights to promote this optimized querying style - a separate problem;
a reference dictionary field within the queried type:
links {
  edges {
    node {
      id
      title
      url
      authorId
      # possible but limited usage with heavy weights
      # author {
      #   id
      #   email
      # }
    }
  }
  pageInfo {
    hasNextPage
  }
  referencedUsers {
    id
    email
  }
}
where:
User has id and email props;
referencedUsers is of type [User!];
node.author is of type User.
A normalizing GraphQL client like Apollo can then easily access the cached user fields without making separate requests.
You can render (in React, say) a <User/> component (within a <Link /> component), passing node.authorId as an argument like <User id={authorId} />. The User component can use the useQuery hook with a cache-only fetch policy to read the user's props/fields.
See the Apollo docs for details. You have to implement this yourself and document it to help/guide your API users.
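Server-side, the referencedUsers field then boils down to deduplicating the author ids of the current page of edges before fetching the users. A hypothetical graphql-js style sketch (the LinkConnection type name and the loaders.user DataLoader are assumptions, not part of any existing schema):
const resolvers = {
  LinkConnection: {
    referencedUsers: (connection, _args, { loaders }) => {
      // collect the distinct author ids from the already resolved edges
      const ids = [...new Set(connection.edges.map((e) => e.node.authorId))];
      // batch-load every distinct user exactly once
      return loaders.user.loadMany(ids);
    },
  },
};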

Creating a couchdb view to index if item in an array exists

I have the following sample documents in my CouchDB. The original database in production has about 2M records.
[
  {
    "_id": "someid|goes|here",
    "collected": {
      "tags": ["abc", "def", "ghi"]
    }
  },
  {
    "_id": "someid1|goes|here",
    "collected": {
      "tags": ["abc", "klm", "pqr"]
    }
  },
  {
    "_id": "someid2|goes|here",
    "collected": {
      "tags": ["efg", "hij", "klm"]
    }
  }
]
Based on my previous question here (how to search for values when the selector is an array), I currently have an index on the collected.tags field, but the search is still taking a long time. Here is the search query I have:
{
  "selector": {
    "collected.tags": {
      "$elemMatch": {
        "$regex": "abc"
      }
    }
  }
}
There are about 300k records matching the above condition, and the search seems to take a long time. So I want to create an indexed view for faster retrieval and lookup instead of a find/search. I am new to CouchDB and am not sure how to set up the map function to create the indexed view.
Figured the map function out myself. Now all the documents are indexed and retrievals are faster:
function (doc) {
  // guard against documents without a collected.tags array
  if (doc.collected && doc.collected.tags && doc.collected.tags.indexOf('abc') > -1) {
    emit(doc._id, doc);
  }
}
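Note that the function above only indexes the hardcoded tag 'abc'. A more reusable variation (an untested sketch; the design document and view names here are hypothetical) is to emit one row per tag and then query the view with GET /db/_design/tags/_view/by_tag?key="abc"&include_docs=true:
function (doc) {
  // skip documents that lack a collected.tags array
  if (doc.collected && Array.isArray(doc.collected.tags)) {
    for (var i = 0; i < doc.collected.tags.length; i++) {
      // one row per tag; emit null and rely on include_docs=true at query
      // time, instead of copying the whole doc into the index
      emit(doc.collected.tags[i], null);
    }
  }
}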

How to count the number of keys in an embedded MongoDB document

I have a MongoDB query (give me the settings where account = "test1"):
db.collection_name.find({"account" : "test1"}, {settings : 1}).pretty();
where I get the following sample output:
{
  "_id" : ObjectId("49830ede4bz08bc0b495f123"),
  "settings" : {
    "clusterData" : {
      "us-south-1" : "cluster1",
      "us-east-1" : "cluster2"
    }
  }
}
What I'm looking for now is the accounts where clusterData has more than one key; I'm only interested in listing accounts with two (2) or more keys.
I've tried this (but it doesn't work):
db.collection_name.find({'settings.clusterData.1': {$exists: true}}, {account : 1}).pretty();
Is this possible to do with the current data structure? I don't have the option to redesign this schema.
Your clusterData field is not an array, which is why you cannot simply filter on the number of elements it has. There is a way, though, to get what you want via the aggregation framework. Try this:
db.collection_name.aggregate([
  // keep only the account we are interested in
  { $match: { "account": "test1" } },
  // turn the clusterData object into an array so its keys can be counted
  { $project: {
      "settingsAsArraySize": { $size: { $objectToArray: "$settings.clusterData" } },
      "settings.clusterData": 1
  } },
  // keep only documents with more than one key
  { $match: { "settingsAsArraySize": { $gt: 1 } } },
  // shape the output
  { $project: { "_id": 0, "settings.clusterData": 1 } }
]).pretty();
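For what it's worth, on MongoDB 3.6+ the same check can also be expressed directly in a find() via $expr (a sketch, untested against your data):
db.collection_name.find(
  {
    "account": "test1",
    // true when clusterData has more than one key
    "$expr": { "$gt": [ { "$size": { "$objectToArray": "$settings.clusterData" } }, 1 ] }
  },
  { "settings.clusterData": 1 }
).pretty();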

How to manage fine grain permissions in Elasticsearch?

I need to store, in a consistent way, the roles/groups that can access the information, but I'm not sure what the best way to do it is.
Summary: I have 2 kinds of docs, "tweet" and "blog":
At the tweet level, I store the group names allowed to access the information.
blog is more complex: there is metadata (title, description, nature, ...), but some of that information can be restricted to certain groups of users (only admin, or logged_in users).
What is the best way to map this in Elasticsearch?
As of today, I end up with documents like:
/tweet/455
{
  id: 112,
  ugroups: [ "restricted_user", "admin" ],
  description: "foo"
},
{
  id: 113,
  ugroups: [ "anonymous" ],
  description: "foo"
}
and
/blog/500
{
  id: 5,
  fields: [
    {
      "nature": {
        "value": "foo",
        "ugroup": [ "admin" ]
      }
    }
  ]
},
{
  id: 6,
  fields: [
    {
      "comment": {
        "value": "foo",
        "ugroup": [ "anonymous" ]
      }
    }
  ]
}
When a user wants to search in tweets, that's easy: I build a term query with the words submitted by the user and append the groups the user belongs to.
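For the tweet case, that combined query could look something like the following bool query (a sketch; it assumes ugroups is indexed as an exact, not-analyzed field, and the match on description stands in for the user's search words):
{
  "query": {
    "bool": {
      "must": [
        { "match": { "description": "foo" } }
      ],
      "filter": [
        { "terms": { "ugroups": [ "anonymous" ] } }
      ]
    }
  }
}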
But how do I make a query that takes this "ugroup" thing into account at the various levels?
Ideally I could issue a query like:
search in tweet with tweet.ugroup: "anonymous" and in blog with blog.fields.*.ugroup: "anonymous"
Is there a way to write such a query?