GraphQL best practice on redacting fields that require authorization - error-handling

Consider a User type, with an email field that, for certain "anonymous" users, should only be accessible if the request was properly authorized. Other users are fine with having their email publicly displayed.
type User {
  name: String!
  "Anonymous users have the email only accessible to admins"
  email: String!
}
What are the pros and cons of these two approaches for handling that field in the email resolver for anonymous users?
Option 1: throw if the request is unauthorized. In that case,
email needs to be declared as a nullable String instead of String!, or the entire query will error.
The client may want to match the errors to the data. Because non-anonymous users will have their email accessible without authorization, errors and data may have different numbers of elements, so this matching seems impossible, at least with apollo-server, which doesn't return anything in each errors element indicating which user it belongs to.
email will be misleadingly null for anonymous users, making it indistinguishable from an email that was never set in the first place. Remember that email then needs to be String, not String!, so a null email is schema-valid.
There is a clear errors array in the response, so this feels like the "proper" approach?
Option 2: return the email in a redacted form, e.g. [NOT AUTHORIZED TO ACCESS THIS USER'S EMAIL].
This keeps the objects intact with clear errors for sensitive emails, instead of misleading "null" emails.
email can stay non-nullable
No need to try to match errors with data.
There is no explicit errors array in the response, so this feels like a hack.
Note that for arrays, returning a REDACTED form is not an option, because the query may ask for a field within the array (e.g. { anonUsers { email } }). The only option there is to return [] (directly or by throwing).
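For concreteness, here is how the two approaches might look in a JavaScript resolver (a sketch; `user.isAnonymous` and `context.isAdmin` are hypothetical fields, not part of the schema above):

```javascript
// Approach 1: throw when unauthorized. This requires `email: String`
// (nullable) in the schema; apollo-server nulls the field and appends
// the error to the response's `errors` array.
function emailResolverThrowing(user, _args, context) {
  if (user.isAnonymous && !context.isAdmin) {
    throw new Error("Not authorized to access this user's email");
  }
  return user.email;
}

// Approach 2: redact. `email` can stay non-nullable (String!).
function emailResolverRedacting(user, _args, context) {
  if (user.isAnonymous && !context.isAdmin) {
    return "[NOT AUTHORIZED TO ACCESS THIS USER'S EMAIL]";
  }
  return user.email;
}
```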
Am I missing anything? Is there prior work on this topic? How can I make a decision?

I've done something similar recently; what seemed to work best was to create an interface for the base user type.
interface UserInformation {
  id: ID!
  userName: String
  firstName: String!
  lastName: String
  avatarImage: String
  city: String
  ...
}
Then you'd have two separate implementations of it:
type UserPublic implements UserInformation {
  id: ID!
  userName: String
  firstName: String!
  lastName: String
  avatarImage: String
  ...
}
type UserPrivate implements UserInformation {
  id: ID!
  userName: String
  firstName: String!
  lastName: String
  avatarImage: String
  friends: [UserInformation!]
  pendingFriends: [UserInformation!]
  phone: String
  email: String
  birthday: DateTime
  ...
}
All your other queries and types use the UserInformation base interface type when exposing users. Your GraphQL server then simply returns either UserPublic or UserPrivate, depending on what type of access the requesting user has.
On the client side you query it like this:
query SomeQuery {
  getSomeData {
    user {
      id
      userName
      firstName
      lastName
      avatarImage
      ...on UserPrivate {
        email
        phone
      }
      ...
You can then check client-side whether the returned type of a field (__typename) was UserPublic or UserPrivate and act accordingly.
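Server-side, the interface needs a type resolver that picks the concrete implementation. A minimal sketch in apollo-server style (`viewerCanSeePrivateFields` is a hypothetical access check, here simply "the viewer is looking at their own profile"):

```javascript
// Hypothetical access check: a viewer only sees their own private fields.
function viewerCanSeePrivateFields(context, user) {
  return context.viewerId === user.id;
}

const resolvers = {
  UserInformation: {
    // Decides which concrete type (and thus which __typename) the client sees.
    __resolveType(user, context) {
      return viewerCanSeePrivateFields(context, user) ? 'UserPrivate' : 'UserPublic';
    },
  },
};
```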

Redis Stack JSON: Nested JSON vs namespace usage recommendation

The redisinsight workbench use namespaces to store JSON objects, such as:
school_json:1 -> {...}
school_json:2 -> {...}
...
But I am asking myself if that is the way to go when dealing with JSON documents. The JSON examples at https://redis.io/docs/stack/json/path/ showcase how to store items in a nested JSON object called store.
In my case I would like to store users. At first I had a structure with a top-level key users, such as
users -> {
  1: { // actually I'm using a uuid here
    username: "Peter",
    email: ... // etc.
  },
  2: {
    username: "Marie",
    email: ...
  }
}
Or should I use namespaces here as well which would look somewhat like:
users:1 -> {
  username: "Peter",
  email: ...
},
users:2 -> {
  username: "Marie",
  email: ...
}
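For reference, the two layouts map to RedisJSON commands roughly like this (a sketch; the example values are made up):

```
# Namespaced keys: one JSON document per user
JSON.SET users:1 $ '{"username": "Peter", "email": "peter@example.com"}'
JSON.SET users:2 $ '{"username": "Marie", "email": "marie@example.com"}'
JSON.GET users:1 $.username

# Single nested document: every user lives under one key
JSON.SET users $ '{"1": {"username": "Peter"}, "2": {"username": "Marie"}}'
JSON.GET users '$["1"].username'
```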
I assume that using namespaces would have performance benefits over a single nested JSON document, but the example in the Redis documentation, which uses one nested JSON object to store several items, left me unsure whether that is actually true.
I found this answer, but it discusses plain Redis, not Redis Stack with JSON (which may come with other optimizations).
Thanks in advance!

Is it possible to use request header as a parameter inside FaunaDB user defined function (UDF)?

Can I get the header's value somehow inside the UDF's body?
The idea is to implement our own ABAC based on custom header parameters (like userId, role, key, UDF, etc.).
Fauna queries do not have access to request headers.
You can use Fauna to generate tokens that you can use in your headers when querying data. When creating a user, you store a username/email and a password as a "credential". You can create a login UDF and a user_by_email index, which will verify the user and issue a token.
The UDF that creates a user (storing the password as a credential) looks something like this:
Query(
  Lambda(
    ["data"],
    Create(Collection("User"), {
      credentials: { password: Select("password", Var("data")) },
      data: {
        firstName: Select("firstName", Var("data")),
        lastName: Select("lastName", Var("data")),
        email: Select("email", Var("data")),
        role: Select("role", Var("data")),
        phone: Select("phone", Var("data")),
      }
    })
  )
)
And the user_by_email index will look like this:
Source Collection: User
Index Name: user_by_email
Terms: data.email
Values: Serialized: true
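The login UDF itself (not shown above) would then use that index together with FQL's Login function to verify the password and issue a token; a sketch:

```
Query(
  Lambda(
    ["data"],
    Login(
      Match(Index("user_by_email"), Select("email", Var("data"))),
      { password: Select("password", Var("data")) }
    )
  )
)
```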
Here is a link to Fauna's User Authentication document: https://docs.fauna.com/fauna/current/tutorials/authentication/user.html
Hope this helps.

Why am I seeing the _entities request in one service when the entity is native to another?

I'm working on implementing services compatible with Apollo GraphQL federation; my providing services are written in Lacinia (GraphQL library for Clojure).
I have one service that defines Users:
type User @key(fields: "id") {
  id: String!
  name: String!
}
type Query {
  user_by_id(id: String!): User
}
schema { query: Query }
and a second that defines Products and extends Users:
type User @extends @key(fields: "id") {
  id: String! @external
  favorite_products: [Product]
}
type Product @key(fields: "upc") {
  upc: String!
  name: String!
  price: Int!
}
type Query {
  product_by_upc(upc: String!): Product
}
schema { query: Query }
When I execute a query that spans services:
{
  user_by_id(id: "me") {
    id
    name
    favorite_products {
      upc
      name
      price
    }
  }
}
I get a failure; the following request is sent to the products service:
INFO products.server - {:query "query($representations:[_Any!]!){_entities(representations:$representations){...on User{favorite_products{upc name price}}}}", :vars {:representations [{:__typename "User", :id "me"}]}, :line 52}
and that fails, because the products service shouldn't, as far as I know, have to provide the equivalent of __resolveReference for type User (which it extends); just type Product.
This is very unclear in the documentation; I'll experiment with providing a kind of stub reference resolver in the products service for User.
Yes, indeed, you must provide the __resolveReference (or equivalent) for each type the service schema extends. In retrospect, it makes sense, as it provides the "kernel" of a raw value to be passed down the resolver tree.
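Such a stub reference resolver for the extended User type, written in apollo-server terms (Lacinia exposes an equivalent hook; `favoriteProductsByUserId` is a hypothetical data-access helper), might look like:

```javascript
// Hypothetical helper: look up the user's favorite products in this
// service's own data store.
function favoriteProductsByUserId(userId) {
  return []; // stub
}

const resolvers = {
  User: {
    // Receives the representation { __typename: "User", id: "..." } sent in
    // the _entities request, and returns the "kernel" value that the rest of
    // the resolver tree (favorite_products) hangs off.
    __resolveReference(reference) {
      return { id: reference.id };
    },
    favorite_products(user) {
      return favoriteProductsByUserId(user.id);
    },
  },
};
```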

Proper error handling when performing multiple mutations in graphql

Given the following GraphQL mutations:
type Mutation {
  updateUser(id: ID!, newEmail: String!): User
  updatePost(id: ID!, newTitle: String!): Post
}
The Apollo docs state that it's totally possible to perform multiple mutations in one request, say
mutation($userId: ID!, $newEmail: String!, $postId: ID!, $newTitle: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    id
    email
  }
  updatePost(id: $postId, newTitle: $newTitle) {
    id
    title
  }
}
1. Does anyone actually do this? And if you don't do this explicitly, will batching cause this kind of mutation merging?
2. If you run multiple mutations in one request, how would you handle errors properly?
I've seen a bunch of people recommend throwing errors on the server, so that the server responds with something that looks like this:
{
  errors: [
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updateUser'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newEmail: 'The new email is not a valid email address.'
        }
      },
    },
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updatePost'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newTitle: 'The given title is too short.'
        }
      },
    }
  ],
  data: {
    updateUser: null,
    updatePost: null,
  }
}
But how do I know which error belongs to which mutation? We can't assume that the first error in the errors array belongs to the first mutation, because if updateUser succeeds, the array would simply contain one entry. Would I then have to iterate over all errors and check if the path matches my mutation name? :D
Another approach is to include the error in a dedicated response type, say UpdateUserResponse and UpdatePostResponse. This approach enables me to correctly address errors.
type UpdateUserResponse {
  error: Error
  user: User
}
type UpdatePostResponse {
  error: Error
  post: Post
}
But I have a feeling that this will bloat my schema quite a lot.
In short, yes, if you include multiple top-level mutation fields, utilize the path property on the errors to determine which mutation failed. Just be aware that if an error occurs deeper in your graph (on some child field instead of the root-level field), the path will reflect that field. That is, an execution error that occurs while resolving the title field would result in a path of updatePost.title.
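That matching can be sketched as a small client-side helper that filters the errors array by the first path segment:

```javascript
// Collect the errors belonging to one top-level mutation field, using the
// `path` property present on each GraphQL error.
function errorsForField(errors, fieldName) {
  return (errors || []).filter(
    e => Array.isArray(e.path) && e.path[0] === fieldName
  );
}
```

With a response like the one above, `errorsForField(response.errors, 'updateUser')` returns only the updateUser errors, regardless of how many other mutations succeeded or failed.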
Returning errors as part of the data is an equally valid option. There are other benefits to this approach too:
Errors sent like this can include additional metadata (a "code" property, information about specific input fields that may have generated the error, etc.). While this same information can be sent through the errors array, making it part of your schema means that clients will be aware of the structure of these error objects. This is particularly important for clients written in typed languages, where client code is often generated from the schema.
Returning client errors this way lets you draw a clear distinction between user errors that should be made visible to the user (wrong credentials, user already exists, etc.) and something actually going wrong with either the client or server code (in which case, at best, we show some generic messaging).
Creating a "payload" object like this lets you append additional fields in the future without breaking your schema.
A third alternative is to utilize unions in a similar fashion:
type Mutation {
  updateUser(id: ID!, newEmail: String!): UpdateUserPayload!
}
union UpdateUserPayload = User | Error
This enables clients to use fragments and the __typename field to distinguish between successful and failed mutations:
mutation($userId: ID!, $newEmail: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    __typename
    ... on User {
      id
      email
    }
    ... on Error {
      message
      code
    }
  }
}
You can even create specific types for each kind of error, allowing you to omit any sort of "code" field:
union UpdateUserPayload = User | EmailExistsError | EmailInvalidError
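Server-side, a union payload needs a type resolver. A sketch for the simpler User | Error variant (`isError` is a hypothetical discriminator set by the mutation resolver, and the validation is deliberately naive):

```javascript
const resolvers = {
  // Decide which member of `union UpdateUserPayload = User | Error`
  // a returned object represents.
  UpdateUserPayload: {
    __resolveType(obj) {
      return obj.isError ? 'Error' : 'User';
    },
  },
  Mutation: {
    updateUser(_root, { id, newEmail }) {
      if (!newEmail.includes('@')) {
        // A user error, returned as data rather than thrown.
        return { isError: true, code: 'EMAIL_INVALID', message: 'Invalid email.' };
      }
      return { id, email: newEmail }; // resolves as a User
    },
  },
};
```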
There's no right or wrong answer here. While there are advantages to each approach, which one you take ultimately comes down to preference.

Auth0 Get userId in response payload?

When a user logs in using the Auth0 lock on my client side, I get an idToken, but also an idTokenPayload which looks like this:
idTokenPayload = {
  audience: "AUTH0CLIENTID",
  exp: 1494190538,
  iat: 1494154538,
  iss: "AUTH0DOMAIN",
  sub: "USERNAME"
};
Would it be possible to return the userId in Auth0's database instead of the username in the sub field?
The reason I want to do this is that I want to keep Auth0's db for users, and I have on my server-side some Profile, Post, Comment etc entities which have a userId column. Right now before each request on my entities I need to populate the user by doing an extra request: let id = Profile.find("... where username === auth0.sub").getId(); (pseudo-code of course).
With the C# Lock SDK, you get back an Auth0User after the call to the LoginAsync method on the Auth0 client. Let's call this variable auth0User. If I look at auth0User.Profile, a JObject (a JSON object if you're not using C#), it contains a JSON array named "identities". My identities variable initialization looks like:
var identities = (JArray)auth0User.Profile["identities"];
This array contains all the identity providers associated with the user. If, like me, you haven't attached any other sign-in besides Auth0, there will be just one entry here. Each object in this JSON array will contain a "provider" string and a "user_id" string. If the provider says "auth0", then it's from Auth0. Since I don't use FB or other account types, I'm not exactly sure what they say. Here's my C# code to get the UserID:
var identities = (JArray)auth0User.Profile["identities"];
if (identities != null)
{
    foreach (var identity in identities)
    {
        var provider = (string)identity["provider"];
        if (string.Equals(provider, "auth0"))
        {
            UserID = (string)identity["user_id"];
            break;
        }
    }
}
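For non-C# readers, the same extraction might look like this in JavaScript (a sketch, given an already-parsed profile object):

```javascript
// Pull the Auth0 user_id out of a parsed profile's identities array.
// Returns null if no auth0-provided identity is present.
function auth0UserId(profile) {
  const identities = profile.identities || [];
  const auth0Identity = identities.find(i => i.provider === 'auth0');
  return auth0Identity ? auth0Identity.user_id : null;
}
```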
I believe this is all provided as standard, without needing to add any rules or webhooks. This article explains it in more detail and also gives examples in JavaScript: auth0 normalized user profile