Sequelize - Incorporate stores into retailers after obtaining the latter - sql

I'm really new to Sequelize and I find the docs confusing, especially for my case: I already had a Postgres DB set up and used sequelize-auto to generate all the models from the existing DB.
Now I have the following:
Retailers
Stores
Stores have a retailer_id FK, since a Retailer has several Stores but a single Store belongs to a single Retailer.
I want to retrieve from my Node API a JSON with the following format:
[{
  id: "1",
  name: "RetailerName",
  stores: [{
    id: "1",
    name: "StoreName",
    ...
  }]
}]
I was thinking of getting all of the retailers, iterating through them, fetching the stores for the "current" retailer id, and attaching them to each retailer before replying with the result.
However, that isn't possible without some kind of promise handling for every iteration, and since Sequelize surely has better tools for this, I would like to know how to do it properly!

Use Sequelize's associations (One-To-Many in this particular case).
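A minimal sketch of how that could look, assuming the models generated by sequelize-auto are called Retailer and Store (your generated names may differ) and that res is an Express-style response object:

// Declare the One-To-Many association; the foreign key comes from the question.
Retailer.hasMany(Store, { foreignKey: 'retailer_id', as: 'stores' });
Store.belongsTo(Retailer, { foreignKey: 'retailer_id' });

// Eager-load the stores so a single query returns the nested shape shown above.
Retailer.findAll({
  include: [{ model: Store, as: 'stores' }]
})
  .then(retailers => {
    // retailers is already [{ id, name, stores: [{ id, name, ... }] }]
    res.json(retailers);
  })
  .catch(err => res.status(500).json({ error: err.message }));

The include option replaces the manual loop from the question: Sequelize performs the join (or a second query) for you and nests the stores under each retailer.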


Algolia relationship one to many

I have a dilemma and need your technical help. I want to store 2 tables in Algolia.
One table that contains the offers for an office.
One table that contains every booking for those offers.
In my MySQL DB there is a One-To-Many relationship, but since I'm new to document databases, I don't know how to handle relations in Algolia.
The way I see it, I have two options:
I replicate my schema: 2 indices, and I need to make 2 search queries each time.
I have one index in which each record contains a booking together with the offer data (the offer data is then duplicated).
Which seems to be the better choice, in terms of good practice and pricing?
Glad we were able to answer you directly. Re-posting the gist of the response here:
Here's how you could format your data:
[{
  objectID: '120',
  officeName: 'Super office',
  availabilityStart: timestampLastIndexing,
  availabilityEnd: timestampOneYearInTheFutureFromLastIndexing
}]
This would be the state when you first build your index, before any booking: there is one record per office, availabilityStart is the time of the last indexing, and availabilityEnd is one year in the future.
Then if someone makes a booking for July 12 for this office (120, Super Office), you would have to update your index this way:
[{
  objectID: '120',
  officeName: 'Super office',
  availabilityStart: timestampLastIndexing,
  availabilityEnd: timestampJuly11EndOfTheDay
}, {
  objectID: '120',
  officeName: 'Super office',
  availabilityStart: timestampJuly13StartOfTheDay,
  availabilityEnd: timestampOneYearInTheFutureFromJuly13StartOfTheDay
}]
Basically, you have to update your index every time a booking is made, merging adjacent free days back together, while also making sure that someone in the process of booking does not get a bad surprise when their booking completes because somebody else completed one first (the classic booking design issue).
On your frontend, you would have to make requests this way:
index.search({
  query: ..,
  filters: 'userSelectedTimestamp >= availabilityStart AND userSelectedTimestamp <= availabilityEnd'
})
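As a rough sketch of the two moving parts described above, assuming the record shape from the example and a client whose search call mirrors the snippet just shown; splitAvailability, the booking timestamps and userSelectedTimestamp are illustrative names, not Algolia APIs:

// Replace one availability window with the windows before and after a booking.
// In practice each stored record also needs its own unique objectID.
function splitAvailability(record, bookingStart, bookingEnd) {
  const before = { ...record, availabilityEnd: bookingStart - 1 };
  const after = { ...record, availabilityStart: bookingEnd + 1 };
  // Drop empty windows (e.g. a booking at the very start or end of the range).
  return [before, after].filter(r => r.availabilityStart <= r.availabilityEnd);
}

// On the frontend, interpolate the user's chosen timestamp so both numeric
// attributes are compared against a concrete value in the filter string.
index.search({
  query: 'super office',
  filters: 'availabilityStart <= ' + userSelectedTimestamp +
           ' AND availabilityEnd >= ' + userSelectedTimestamp
});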

Podio API filtering of returned fields without using a view

Based on the information provided by Pavlo, it appears there is no way to exclude certain fields, and no way to include only specific fields, in the Podio data returned from a JSON POST query. For the purpose of this question, the fields involved are (for example) text, category, date, calculation fields and others which can be added using 'modify template'.
The best workaround is a redesign of the app to reduce the amount of field data.
(original question)
Is there a way to limit the amount of Podio data returned from a JSON POST query so that it includes only a few specific fields instead of every field?
I understand how to use a Podio view or filtering in the POST query to limit how many items are returned, but my question has to do with reducing the amount of data returned for each item by preventing data in unnecessary fields from being returned.
(The following is an example of the query I currently use; as stated above, I'm looking for a way to limit the fields returned to a small subset.)
Example JSON query: https://api.podio.com/item/app/14773320/filter
Example JSON body:
{
  "filters": {
    "created_on": {
      "from": "{date.addMonths(-6).format()}",
      "to": "{date.today}"
    }
  },
  "limit": 250,
  "offset": {props.offSet}
}
You can use the fields parameter for that.
More details on how it works and how else it can be used are here: https://developers.podio.com/index/api. Scroll down to the 'Bundling responses using fields parameter' section.
Most likely you are looking for the fields=items.view(micro) parameter. The Podio API will then return only 5 values for each item:
app_item_id
item_id
title
link
revision
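As a hedged sketch of what that could look like from Node, with the fields parameter appended to the filter URL as a query-string parameter (the endpoint and body come from the question; the fetch-based client, token placeholder and offset value are assumptions, not Podio-documented code):

fetch('https://api.podio.com/item/app/14773320/filter?fields=items.view(micro)', {
  method: 'POST',
  headers: {
    // Podio expects an OAuth2-style Authorization header; the token is a placeholder.
    'Authorization': 'OAuth2 YOUR_ACCESS_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    filters: {
      created_on: { from: '{date.addMonths(-6).format()}', to: '{date.today}' }
    },
    limit: 250,
    offset: 0 // replace with your own paging value
  })
})
  .then(response => response.json())
  .then(data => console.log(data.items)); // each item should now carry only the micro-view values listed above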

Custom user-supplied filters, display number of matches for each

I have an app where users can track their website visitors in real time. A user can create Groups; a Group is basically an array of JSON objects (filters) that they can use to filter a resource (here, a website visitor).
Group(user_id:id, name:string, filters: JSONB[type, field, value])
Example of a group:
name: "my group"
filters: [
{field: "sessions", type: "greater_than", value: 5},
{field: "email", type: "contains", value: "#example.com}
]
I am displaying each of a user's groups in the interface, but I'd like to also show the amount of records (visitors) matching each group.
As you can see, website visitors can be dynamically included in or excluded from a user's group, depending on their behavior.
I've thought of using a materialized view to keep a mapping of all groups and the count of matches, refreshed every 30 seconds. I fear that this would be very inefficient, however.
Is there a better approach?
Thanks
It really depends on the number of records involved and on how much of an impact regenerating the materialized view every 30 seconds has on your system: if the materialized view regenerates in 5 seconds or so, it wouldn't be much of an issue; if it takes 20 seconds while maxing out processors and disks, then it's a really bad idea.
An alternative is to implement a trigger on your table (or triggers on all involved tables) to increase/decrease the counters where appropriate, plus a trigger on your Groups table to calculate the current value whenever a new group is added or its condition is changed.
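For the materialized-view option, a minimal sketch of the 30-second refresh loop in Node, assuming node-postgres and a materialized view named group_match_counts that already joins each group to its matching visitors (the view name and its definition are hypothetical):

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the usual PG* env vars

setInterval(async () => {
  // CONCURRENTLY keeps the view readable while it is rebuilt;
  // it requires a unique index on the materialized view.
  await pool.query('REFRESH MATERIALIZED VIEW CONCURRENTLY group_match_counts');
}, 30 * 1000);

Timing that refresh under production load is what tells you whether the trigger-based counters are worth the extra complexity.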

Saving a list of strings in sql

In our system we have an API call that returns a list of stations. Each station among other things has a name and a corresponding code.
For example, the response looks like:
[
  {
    name: "New York C",
    code: "0001:074"
  },
  {
    name: "Oslo C",
    code: "0002:078"
  },
  ...
]
This list is quite big and contains approximately 3500 stations.
What we need to do is create a widget that can be configured with at most 50 stations to choose from. These stations are a subset of those returned by the call mentioned above.
Basically, we don't even need to save the names, just codes will be enough.
The question is how do we save the subset of stations (codes) in DB?
I know about 1NF and I have read this how-to-store-a-list-in-a-column-of-a-database-table.
The thing is that there is no need to import all 3500 stations into the database, because the widget already has access to the call. But we still need to save the configured subset of data.
Any help would be appreciated.
Your "list" is in JSON format, and you can export it to any database (let's assume it is MySQL):
So you will have a table called "Stations" for example in your DB, and your table will have two columns: name and code.
To export your Json data to MySQL, you need to export it to CSV first, using: http://www.danmandle.com/blog/json-to-csv-conversion-utility/ or https://github.com/danmandle/JSON2CSV
Then export your CSV file to MySQL using:
Then :
LOAD DATA INFILE 'filepath/your_file.csv' INTO TABLE Stations;
There are plenty of ways to achieve the same result, you can for example use PHP to do that: http://www.kodingmadesimple.com/2014/12/how-to-insert-json-data-into-mysql-php.html
Or using an ETL (like Talend, SSIS...).
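For the JSON-to-CSV step specifically, a small Node sketch under the assumption that stations.json is a dump of the API response and that only name and code are kept (note that a plain LOAD DATA INFILE expects tab-separated fields by default, so you would add FIELDS TERMINATED BY ',' ENCLOSED BY '"' to load this comma-separated, quoted output):

const fs = require('fs');

// Hypothetical dump of the API response shown at the top of the question.
const stations = require('./stations.json');

// Keep only the two columns the Stations table needs, quoting the values.
const csv = stations
  .map(s => `"${s.name.replace(/"/g, '""')}","${s.code}"`)
  .join('\n');

fs.writeFileSync('stations.csv', csv);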

How to design the schema for an author/user model in mongodb

I have looked through most of the MongoDB schema design articles on MongoDB's website and most of the questions here on SO. There is still one use case which I haven't figured out. When looking at these tutorials, they usually reference the article-comments problem and the products/categories problem. I want to figure out how to model the one-to-many relationship (author to posts) when querying a list of posts. Here are the example schemas:
Users: {
  _id: ObjectID
  Name: String
  Email: String
}

Posts: {
  _id: ObjectID
  user_id: ObjectID
  body: String
  comments: [
    body: String
  ]
}
Now, let's say you want to run a query for the latest 10 posts. A pretty simple query, but now you have posts, each potentially with a different user ObjectID pointing to its author. How should you go about getting the name and email of the author for each post?
Should you build an array of the user ObjectIDs from the posts query, then run db.users.find({ _id: { $in: PostsUserIDArray } }), and afterwards use application logic to match the right user information to the correct post?
Should you keep a copy of the data in posts, i.e. keep the user ID, name, and email in the posts collection, and then have a hook that updates all of that information in posts whenever a user changes it?
Or is there an option which my friend and I have not thought of?
I appreciate all help as I try to wrap my head around mongo data modeling.
In the few videos I have seen from the MongoDB creators, they advocate the second solution. If your user document has more data than just a name and email, and you only display the name and email when displaying a post, then it's not really bad to store them in the post. That way you don't have to perform additional queries when querying for posts. And since a user doesn't normally change their name every day, it's more efficient to run one update over all their posts when the name changes than to perform extra queries to retrieve that information every time posts are displayed.
Edit: link to a video http://lacantine.ubicast.eu/videos/3-mongodb-deployment-strategies/
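A minimal sketch of that second approach with the standard MongoDB Node driver, following the schemas above; db, user, newName and newEmail are illustrative variables, and the embedded user sub-document is just one way you might shape the copy:

// Creating a post with a denormalized copy of the author's name and email.
db.collection('posts').insertOne({
  user_id: user._id,
  user: { name: user.Name, email: user.Email }, // copied from the Users document
  body: 'Hello world',
  comments: []
});

// Latest 10 posts: no second query is needed to display the author info.
db.collection('posts').find().sort({ _id: -1 }).limit(10).toArray();

// The "hook" from option 2: fan the change out whenever the user edits their profile.
db.collection('posts').updateMany(
  { user_id: user._id },
  { $set: { 'user.name': newName, 'user.email': newEmail } }
);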