The RedisInsight workbench uses namespaced keys to store JSON objects, for example:
school_json:1 -> {...}
school_json:2 -> {...}
...
But I am wondering whether that is the way to go when dealing with JSON documents. The JSON examples at https://redis.io/docs/stack/json/path/ show how to store several items in a single nested JSON object called store.
In my case I would like to store users. At first I had a structure with a single top-level key users, such as
users -> {
  1: {                 // actually I'm using a uuid here
    username: "Peter",
    email: ...         // etc.
  },
  2: {
    username: "Marie",
    email: ...
  }
}
Or should I use namespaces here as well, which would look something like:
users:1 -> {
  username: "Peter",
  email: ...
},
users:2 -> {
  username: "Marie",
  email: ...
}
I assume that using namespaces has performance benefits over a single nested JSON object, but the example in the Redis documentation, which stores several items inside one nested JSON object, left me unsure whether that is actually true.
I found this answer, but it discusses plain Redis, not Redis Stack with JSON (which may come with other optimizations).
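For illustration, here is how the two layouts could look with the node-redis client (a minimal sketch; the client setup, field values, and JSONPath expressions are my assumptions, not from the post):

import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Layout A: all users nested inside one JSON document under a single key
await client.json.set('users', '$', {
  '1': { username: 'Peter', email: 'peter@example.com' },
  '2': { username: 'Marie', email: 'marie@example.com' },
});
// Reading one user means addressing a path inside the big document
const peterNested = await client.json.get('users', { path: '$["1"]' });

// Layout B: one namespaced key per user
await client.json.set('users:1', '$', { username: 'Peter', email: 'peter@example.com' });
await client.json.set('users:2', '$', { username: 'Marie', email: 'marie@example.com' });
// Reading one user touches only that key
const peterKeyed = await client.json.get('users:1');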
Thanks in advance!
I want to achieve automatic serialization/deserialization of JSON request/response bodies for NestJS controllers; to be precise, I want snake_case request body JSON keys automatically converted to camelCase by the time they reach my controller handler, and vice versa for responses.
What I found is class-transformer's @Expose({ name: 'selling_price' }), as in the example below (I'm using MikroORM):
// recipe.entity.ts
@Entity()
export class Recipe extends BaseEntity {
  @Property()
  name: string;

  @Expose({ name: 'selling_price' })
  @Property()
  sellingPrice: number;
}
// recipe.controller.ts
@Controller('recipes')
export class RecipeController {
  constructor(private readonly service: RecipeService) {}

  @Post()
  async createOne(@Body() data: Recipe): Promise<Recipe> {
    console.log(data);
    return this.service.createOne(data);
  }
}
// example request body
{
  "name": "Recipe 1",
  "selling_price": 50000
}
// log on the RecipeController.createOne handler method
{ name: 'Recipe 1',
selling_price: 50000 }
// what I wanted on the log
{ name: 'Recipe 1',
sellingPrice: 50000 }
As can be seen, the @Expose annotation works perfectly, but going further I want the key converted to the attribute's name on the entity, sellingPrice, so I can pass the parsed request body directly to my service and on to my repository method this.recipeRepository.create(data). As it stands, the sellingPrice field would be null because only the selling_price field exists. If I don't use @Expose, the request JSON would need to be written in camelCase, and that's not what I prefer.
I could write DTOs with constructors and assign fields by hand, but that's rather repetitive, and I'll have a lot of fields to convert due to my naming preference: snake_case for JSON and database columns, camelCase for all of the JS/TS parts.
Is there a way to do this cleanly? Maybe there's a solution already, perhaps a global interceptor that converts all snake_case keys to camelCase, but I'm not really sure how to implement one either.
Thanks!
You could use the mapResult() method from the ORM, which is responsible for mapping raw DB results (snake_case in your case) to entity property names (camelCase in your case):
const meta = em.getMetadata().get('Recipe');
const data = {
  name: 'Recipe 1',
  selling_price: 50000,
};
const res = em.getDriver().mapResult(data, meta);
console.log(res); // dumps `{ name: 'Recipe 1', sellingPrice: 50000 }`
This method operates on the entity metadata, converting keys from fieldName (which defaults to a value derived from the selected naming strategy) to the corresponding property names.
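If you want this applied framework-wide, as the question suggests, one option is a global NestJS pipe that rewrites request bodies before they reach the handler. This is a minimal sketch of that idea using a generic key converter instead of the ORM metadata; the pipe name and helper function are hypothetical:

import { Injectable, PipeTransform, ArgumentMetadata } from '@nestjs/common';

// Hypothetical helper: recursively convert snake_case keys to camelCase
function camelizeKeys(value: any): any {
  if (Array.isArray(value)) return value.map(camelizeKeys);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, val]) => [
        key.replace(/_([a-z])/g, (_, c) => c.toUpperCase()),
        camelizeKeys(val),
      ]),
    );
  }
  return value;
}

@Injectable()
export class SnakeToCamelPipe implements PipeTransform {
  transform(value: any, metadata: ArgumentMetadata) {
    // Only rewrite request bodies; leave route params and query strings alone
    return metadata.type === 'body' ? camelizeKeys(value) : value;
  }
}

// registered globally in main.ts: app.useGlobalPipes(new SnakeToCamelPipe());

The response direction (camelCase back to snake_case) could be handled symmetrically in an interceptor.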
Given the following GraphQL mutations:
type Mutation {
  updateUser(id: ID!, newEmail: String!): User
  updatePost(id: ID!, newTitle: String!): Post
}
The Apollo docs state that it's totally possible to perform multiple mutations in one request, say
mutation($userId: ID!, $newEmail: String!, $postId: ID!, $newTitle: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    id
    email
  }
  updatePost(id: $postId, newTitle: $newTitle) {
    id
    title
  }
}
1. Does anyone actually do this? And if you don't do this explicitly, will batching cause this kind of mutation merging?
2. If you run multiple things within one mutation, how would you handle errors properly?
I've seen a bunch of people recommend throwing errors on the server, so that the server responds with something that looks like this:
{
  errors: [
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updateUser'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newEmail: 'The new email is not a valid email address.'
        }
      }
    },
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updatePost'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newTitle: 'The given title is too short.'
        }
      }
    }
  ],
  data: {
    updateUser: null,
    updatePost: null
  }
}
But how do I know which error belongs to which mutation? We can't assume that the first error in the errors array belongs to the first mutation, because if updateUser succeeds, the array would simply contain one entry. Would I then have to iterate over all errors and check whether the path matches my mutation name? :D
Another approach is to include the error in a dedicated response type, say UpdateUserResponse and UpdatePostResponse. This approach enables me to correctly address errors.
type UpdateUserResponse {
  error: Error
  user: User
}

type UpdatePostResponse {
  error: Error
  post: Post
}
But I have a feeling that this will bloat my schema quite a lot.
In short, yes, if you include multiple top-level mutation fields, utilize the path property on the errors to determine which mutation failed. Just be aware that if an error occurs deeper in your graph (on some child field instead of the root-level field), the path will reflect that field. That is, an execution error that occurs while resolving the title field would result in a path of updatePost.title.
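A minimal client-side sketch of that path lookup (the response shape follows the GraphQL spec; the helper and variable names are my own):

// Shape of an entry in the response's errors array, per the GraphQL spec
interface GraphQLErrorLike {
  message: string;
  path?: Array<string | number>;
}

// Collect the errors that belong to one top-level mutation field
function errorsFor(errors: GraphQLErrorLike[] | undefined, field: string): GraphQLErrorLike[] {
  return (errors ?? []).filter((err) => err.path?.[0] === field);
}

// With a response like the example above:
const response = {
  errors: [
    { message: 'Validation failed', path: ['updateUser'] },
    { message: 'Validation failed', path: ['updatePost'] },
  ],
};
console.log(errorsFor(response.errors, 'updateUser')); // only the updateUser error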
Returning errors as part of the data is an equally valid option. There are other benefits to this approach too:
Errors sent like this can include additional metadata (a "code" property, information about specific input fields that may have generated the error, etc.). While this same information can be sent through the errors array, making it part of your schema means that clients will be aware of the structure of these error objects. This is particularly important for clients written in typed languages where client code is often generated from the schema.
Returning client errors this way lets you draw a clear distinction between user errors that should be made visible to the user (wrong credentials, user already exists, etc.) and something actually going wrong with either the client or server code (in which case, at best, we show some generic messaging).
Creating a "payload" object like this lets you append additional fields in the future without breaking your schema.
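A minimal resolver sketch for this payload pattern (the validation and persistence helpers are hypothetical placeholders, not a real API):

// Hypothetical helpers standing in for real validation and data access
declare function isValidEmail(email: string): boolean;
declare function updateUserEmail(id: string, email: string): Promise<{ id: string; email: string }>;

const resolvers = {
  Mutation: {
    async updateUser(_parent: unknown, args: { id: string; newEmail: string }) {
      if (!isValidEmail(args.newEmail)) {
        // A user error travels inside data, typed by the schema
        return {
          error: { message: 'The new email is not a valid email address.', code: 'EMAIL_INVALID' },
          user: null,
        };
      }
      const user = await updateUserEmail(args.id, args.newEmail);
      return { error: null, user };
    },
  },
};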
A third alternative is to utilize unions in a similar fashion:
type Mutation {
  updateUser(id: ID!, newEmail: String!): UpdateUserPayload!
}

union UpdateUserPayload = User | Error
This enables clients to use fragments and the __typename field to distinguish between successful and failed mutations:
mutation($userId: ID!, $newEmail: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    __typename
    ... on User {
      id
      email
    }
    ... on Error {
      message
      code
    }
  }
}
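On the server, the union needs a type resolver so the executor knows which member a given result is. A sketch for Apollo Server, assuming the presence of an email field marks a User:

const resolvers = {
  UpdateUserPayload: {
    // Called by the executor to pick the concrete type of a union result
    __resolveType(obj: { email?: string }) {
      return obj.email !== undefined ? 'User' : 'Error';
    },
  },
};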
You can even create specific types for each kind of error, allowing you to omit any sort of "code" field:
union UpdateUserPayload = User | EmailExistsError | EmailInvalidError
There's no right or wrong answer here. While there are advantages to each approach, which one you take ultimately comes down to preference.
I am creating a Memory store like this:
var someData = [
  { id: 1, name: "One" },
  { id: 2, name: "Two" }
];
store = new Memory({
  data: someData,
  id: "userStore"
});
I was wondering if there is a way to look up a Memory store instance by its id, like
var storePresent = Memory.getById("userStore")
something similar to
dijit.registry.byId();
which returns the dijit instance with the given id.
To my knowledge, there is not a store registry as you describe. You will need to code this yourself in your application's controller code.
A store is a simple Object.
You could:
Pass the store manually around your code.
Code a registry AMD module (caution, here be dragons).
The only exception to this rule is if you're already using dojox/app as your controller layer. That has some named store abilities. If not, I would not recommend refactoring to use it.
There's no built-in static repository of memory stores in the dojo/store/Memory module. If you need something like that, the easiest way is to write a custom factory of memory stores that holds static references to all stores it creates:
define(["dojo/store/Memory"], function(Memory) {
  var repository = {};
  return {
    getStore: function(id) {
      return repository[id];
    },
    createStore: function(id, params) {
      var memory = new Memory(params);
      repository[id] = memory;
      return memory;
    }
  };
});
The usage:
require(["modules/MemoryRepository"], function(MemoryRepository) {
  MemoryRepository.createStore("userStore", { data: someData });
  ...
  var userStore = MemoryRepository.getStore("userStore");
});
If you are going to create a lot of stores on demand, you should think about deregistering them (removing the references from the factory) as well; memory issues are probably the reason something like this is not provided out of the box.
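A deregistration method for the factory above could look like this (a sketch; the method name is my own):

// Added to the object returned by the factory module, next to getStore/createStore
removeStore: function(id) {
  // Drop the factory's reference so the store can be garbage collected
  delete repository[id];
}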
As the other answerers already said, there's no specific repository or registry for stores. However, dijit/registry can be used to store the reference as well, by using the dijit/registry::add() function, for example:
// Add to registry
registry.add(new Memory({
  id: "userStore",
  data: [{
    name: "Smith",
    firstname: "John"
  }, {
    name: "Doe",
    firstname: "John"
  }]
}));
Then you can retrieve it by using the dijit/registry::byId() function, for example:
// Query the store by using the registry
registry.byId("userStore").query({
  firstname: "John"
}).forEach(function(person) {
  console.log(person.firstname + " " + person.name);
});
A full example can be found on JSFiddle: http://jsfiddle.net/mn94f/
I see that the RestKit documentation is quite nice and has a variety of examples on object modelling. There is also an example of nested mapping, but I find my scenario a little different. The RestKit documentation provides an example mapping for nested attributes with the following JSON format.
Sample JSON structure from the RestKit documentation:
{
  "blake": {
    "email": "blake@restkit.org",
    "favorite_animal": "Monkey"
  },
  "sarah": {
    "email": "sarah@restkit.org",
    "favorite_animal": "Cat"
  }
}
Suppose that my JSON is a bit different, like this.
My JSON structure:
{
  "id": 1,
  "author": "RestKit",
  "blake": {
    "email": "blake@restkit.org",
    "favorite_animal": "Monkey"
  },
  "sarah": {
    "email": "sarah@restkit.org",
    "favorite_animal": "Cat"
  }
}
I created two managed object models with the following attributes and a to-many relation.
Two entities, Product and Creator, map the above JSON object:
Product                             Creator
identifier  <-------------------->> name
author                              email
                                    favouriteAnimal
This is how I map the Product entity:
[mapping mapKeyPath:@"id" toAttribute:@"identifier"];
[mapping mapKeyPath:@"author" toAttribute:@"author"];
But note that here, mapping the nested dictionary attribute does not work for me:
// [mapping mapKeyOfNestedDictionaryToAttribute:@"creators"];
And for the creators themselves, I could not figure out the usual way to map the above JSON structure.
If you have control over the web service, I would strongly recommend reorganizing your response data like this:
{
  product: {
    id: 1,
    author: 'RestKit',
    creators: [
      {
        id: 1,
        name: 'Blake',
        email: '...',
        favorite_animal: 'Monkey'
      },
      {
        id: 2,
        name: 'Sarah',
        email: '...',
        favorite_animal: 'Cat'
      }
    ]
  }
}
Following this structure, you'd be able to use RestKit's nested mapping features, and the relationship would be correctly reflected in the deserialized objects received by the object loader delegate. RestKit relies on naming and structure standards to simplify the code required to achieve the task. Your example deviates from key-value coding standards, so RK doesn't provide an easy way to interact with your data format.
If you don't have access or you can't change it, I think you'll need to map known key-value pairs with a mapping and perform the remaining assignments with a custom evaluator. You'd need to assume the unknown keys are actually name values for associated creators and their associated values contain the attribute hash for each. Using that, you'd then reconstruct each object manually.
I currently have a Sproutcore app setup with the following relationships on my models:
App.Client = SC.Record.extend({
  name: SC.Record.attr(String),
  brands: SC.Record.toMany('App.Brand', { isMaster: YES, inverse: 'client' })
});

App.Brand = SC.Record.extend({
  name: SC.Record.attr(String),
  client: SC.Record.toOne('App.Client', { isMaster: NO, inverse: 'brands' })
});
When I was working with fixtures my fixture for a client looked like this:
{
  guid: 1,
  name: 'My client',
  brands: [1, 2]
}
And my fixture for a brand looked like this:
{
  guid: 1,
  name: 'My brand',
  client: 1
}
All of which worked fine for getting a client's brands and a brand's client.
My question regards how datasources then fit into this, and how the server response should be formatted.
Should the data returned from the server mirror exactly the format of the fixtures file? So clients should always contain a brands property containing an array of brand ids? And vice versa.
Say I have a source list view which displays clients grouped with their brands below them. How would I go about loading that data for the source view with my datasource? Should I make a call to the server to get all the clients and then follow that up with a call to fetch all the brands?
Thanks
Mark
The JSON you return will mostly mirror the fixtures. I recently had pretty much the same question as you, so I built a backend in Grails and a front end in SC, just to explore the store and datasources. My models are:
Scds.Project = SC.Record.extend(
  /** @scope Scds.Project.prototype */ {

  primaryKey: 'id',
  name: SC.Record.attr(String),
  tasks: SC.Record.toMany("Scds.Task", {
    isMaster: YES,
    inverse: 'project'
  })
});

Scds.Task = SC.Record.extend(
  /** @scope Scds.Task.prototype */ {

  name: SC.Record.attr(String),
  project: SC.Record.toOne("Scds.Project", {
    isMaster: NO
  })
});
The JSON returned for projects is
[{"id":1,"name":"Project 1","tasks":[1,2,3,4,5]},{"id":2,"name":"Project 2","tasks":[6,7,8]}]
and the JSON returned for tasks, when I select a project, is
{"id":1,"name":"task 1"}
Obviously, this is the JSON for one task only. If you look at the projects JSON, you see that I put a "tasks" array with ids in it -- that's how the internals know which tasks to get. So, to answer your first question: you don't need the id from child to parent; you need the parent to load with all its children's ids, so the JSON does not match the fixtures exactly.
Now, it gets a bit tricky. When I load the app, I do a query to get all the Projects. The store calls the fetch method on the datasource. Here is my implementation.
Scds.PROJECTS_QUERY = SC.Query.local(Scds.Project);
var projects = Scds.store.find(Scds.PROJECTS_QUERY);
...
fetch: function(store, query) {
  console.log('fetch called');
  if (query === Scds.PROJECTS_QUERY) {
    console.log('fetch projects');
    SC.Request.getUrl('scds/project/list').json().
      notify(this, '_projectsLoaded', store, query).
      send();
  } else if (query === Scds.TASKS_QUERY) {
    console.log('tasks query');
  }
  return YES; // return YES if you handled the query
},

_projectsLoaded: function(response, store, query) {
  console.log('projects loaded....');
  if (SC.ok(response)) {
    var recordType = query.get('recordType'),
        records = response.get('body');
    store.loadRecords(recordType, records);
    store.dataSourceDidFetchQuery(query);
    Scds.Statechart.sendEvent('projectsLoaded');
  } else {
    console.log('oops...error loading projects');
    // Tell the store that your server returned an error
    store.dataSourceDidErrorQuery(query, response);
  }
}
This will get the projects, but not the tasks. Sproutcore knows that as soon as I access the tasks array on a Project, it needs to fetch them. What it does is call retrieveRecords on the datasource; that method in turn calls retrieveRecord for every id in the tasks array. My retrieveRecord method looks like this:
retrieveRecord: function(store, storeKey) {
  var id = Scds.store.idFor(storeKey);
  console.log('retrieveRecord called with [storeKey, id] [%@, %@]'.fmt(storeKey, id));
  SC.Request.getUrl('scds/task/get/%@'.fmt(id)).json().
    notify(this, "_didRetrieveRecord", store, storeKey).
    send();
  return YES;
},

_didRetrieveRecord: function(response, store, storeKey) {
  if (SC.ok(response)) {
    console.log('successfully loaded task %@'.fmt(response.get('body')));
    var dataHash = response.get('body');
    store.dataSourceDidComplete(storeKey, dataHash);
  } ...
},
Note that you should use sc-gen to generate your datasource, because it provides a fairly well fleshed-out stub that guides you towards the methods you need to implement. It does not provide a retrieveRecords implementation, but you can provide your own if you don't want a single request for each child record you load.
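A minimal sketch of such a bulk retrieveRecords override (the bulk endpoint URL, handler name, and the assumption that the server returns hashes in request order are mine, not from the original answer):

retrieveRecords: function(store, storeKeys, ids) {
  // Resolve ids from the store keys if they were not passed in
  ids = ids || storeKeys.map(function(key) { return Scds.store.idFor(key); });
  // One request for all requested tasks instead of one per record
  SC.Request.getUrl('scds/task/list?ids=%@'.fmt(ids.join(','))).json().
    notify(this, '_didRetrieveRecords', store, storeKeys).
    send();
  return YES;
},

_didRetrieveRecords: function(response, store, storeKeys) {
  if (SC.ok(response)) {
    var hashes = response.get('body');
    // Assumes the server returns one hash per requested id, in the same order
    storeKeys.forEach(function(storeKey, i) {
      store.dataSourceDidComplete(storeKey, hashes[i]);
    });
  }
},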
Note that you always have options. If I wanted to, I could have created a tasks query and loaded all the tasks data up front, so that I wouldn't need to go to my server when I clicked a project. So, in answer to your second question: it depends. You can either load the brands when you click on the client, or you can load all the data up front, which is probably a good idea if there isn't that much data.