Short Version: Is there a package for Node.js that offers functionality similar to Python SQLAlchemy's backref?
What I actually want to achieve:
I have three SQL tables: article, chapter and subchapter. One article has multiple chapters, and one chapter can contain zero or more subchapters.
With SQLAlchemy it's quite simple; in models.py:
class Article(db.Model):
...
chapters = db.relationship('Chapter', backref='mainArticle', lazy=True)
class Chapter(db.Model):
...
articleID = db.Column(db.Integer, db.ForeignKey('article.id'), nullable=False)
subChapters = db.relationship('subChapter', backref='mainChapter', lazy=True)
class subChapter(db.Model):
...
chapterID = db.Column(db.Integer, db.ForeignKey('chapter.id'), nullable=False)
And then I can access even the attributes of Article from subChapter:
subchapter = subChapter.query.first()
subchapter.mainChapter.id # Returns the chapter ID
subchapter.mainChapter.mainArticle.id # Returns the article ID
I've been relying on SQLAlchemy, so I'm not sure how to write the SELECT with SQLite directly. I tried:
app.get('/test/:articleID', (req, res) => {
let article_id = req.params.articleID;
let sql = `SELECT article.*, chapter.*, subchapter.*
FROM article
LEFT JOIN chapter ON article.id = chapter.articleID
LEFT JOIN subchapter ON chapter.id = subchapter.chapterID
WHERE article.id = ?`;
db.get(sql, [article_id], (err, article) => {
res.send(article)
});
})
But it just spat out a bunch of nulls...
Unfortunately my current situation forces me to use Node.js instead of Python, so is there any way to achieve a similar result in Node.js?
OK, after some trial and error I figured out a not-so-elegant (maybe even ugly) way to get the job done without any third-party library.
First we select ALL the rows linked to an article with LEFT JOINs. The null problem seems to be caused by identical column names in the different tables (i.e. article.title, chapter.title, subchapter.title), so we distinguish them with AS aliases.
SELECT article.title,
chapter.title AS cT, chapter.chapterNumber AS cN,
subchapter.title AS scT, subchapter.subChapterNumber AS scN
FROM article
LEFT JOIN chapter ON article.id = chapter.articleID
LEFT JOIN subchapter ON chapter.id = subchapter.chapterID
WHERE article.id = 1
ORDER BY
chapterNumber ASC,
subChapterNumber ASC
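One gotcha before the iteration: with node-sqlite3 this query needs db.all rather than db.get, since db.get only ever returns the first row. A minimal sketch, with illustrative error handling:
db.all(sql, [article_id], (err, entries) => {
    if (err) return res.status(500).send(err.message);
    // `entries` now holds one row per chapter/subchapter combination,
    // ready for the grouping step below.
});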
This returns a row for every subchapter, and at least one row for every chapter, depending on how many subchapters it has.
Now we can iterate over the rows to organise the data. The fundamental idea is to build an article object with a chapters property holding an array of chapter objects, each of which has a subchapters property populated with an array of its own subchapter objects:
article = {
"title": "Article Title",
"chapters": [
{
"chapterNumber": 1,
"subchapters": [
{
"subchapterNumber": 1,
},
{
"subchapterNumber": 2,
}
]
},
{
"chapterNumber": 2,
"subchapters": []
},
]
}
Then we can simply use article.chapters[0] to access chapter 1, and article.chapters[0].subchapters[0] to get chapter 1.1.
To achieve this, the iteration to organise the data would be:
const article = { "title": entries[0].title, "chapters": [] };
// First create all the chapter entries for the subchapters to depend on.
for (const entry of entries) {
    // Use the underscore.js utility to determine whether this chapter has already been put in.
    if (_.findWhere(article.chapters, { "chapterNumber": entry.cN }) == null) {
        // Create a new chapter entry.
        article.chapters.push({ "chapterNumber": entry.cN, "subchapters": [] });
    }
}
// Then put all the subchapters in place.
for (const entry of entries) {
    // Only analyse the rows that contain a subchapter.
    if (entry.scN) {
        // Find the corresponding chapter.
        const chapter = _.findWhere(article.chapters, { "chapterNumber": entry.cN });
        // Determine whether this subchapter has already been put in.
        if (_.findWhere(chapter.subchapters, { "subchapterNumber": entry.scN }) == null) {
            // Create a new subchapter entry.
            chapter.subchapters.push({ "chapterNumber": entry.cN, "subchapterNumber": entry.scN });
        }
    }
}
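For what it's worth, the same grouping can also be done in a single pass with a plain lookup object instead of the repeated findWhere scans. A sketch, assuming the aliased rows from the query above:
const article = { "title": entries[0].title, "chapters": [] };
const chaptersByNumber = {};
for (const row of entries) {
    let chapter = chaptersByNumber[row.cN];
    if (!chapter) {
        chapter = { "chapterNumber": row.cN, "subchapters": [] };
        chaptersByNumber[row.cN] = chapter;
        article.chapters.push(chapter);
    }
    // Each row carries at most one subchapter; rows from chapters without
    // subchapters have a null scN and are skipped here.
    if (row.scN) {
        chapter.subchapters.push({ "subchapterNumber": row.scN });
    }
}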
The same principle still applies if the database is more complicated, for example if every subchapter contains zero or more secondary subchapters.
Now a single article object contains all the information we could possibly need to display an article in order. A simple Pug template to display the schema would be:
html
head
title= article.title
body
each chapter in article.chapters
li= chapter.title
each subchapter in chapter.subchapters
li(style="padding-left:30px")= subchapter.title
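To wire this into the route, here is a sketch assuming Express with Pug already set as the view engine, an article.pug template, and a hypothetical buildArticle helper standing in for the grouping loop above:
app.get('/article/:articleID', (req, res) => {
    db.all(sql, [req.params.articleID], (err, entries) => {
        if (err) return res.status(500).send(err.message);
        const article = buildArticle(entries); // hypothetical helper: the grouping shown above
        res.render('article', { article });
    });
});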
This might not be the most efficient approach, but it at least gets the job done. Please tell me if you know a better solution. Happy coding, everyone!
I know that with GraphQL you have to implement the backend resolvers for the queries. So if you are using PostgreSQL, you might have a query like this:
query {
authors {
id
name
posts {
id
title
comments {
id
body
author {
id
name
}
}
}
}
}
The naive solution would be to do something like this:
const resolvers = {
Query: {
authors: () => {
// somewhat realistic sql pseudocode
return knex('authors').select('*')
},
},
Author: {
posts: (author) => {
return knex('posts').where('author_id', author.id)
},
},
Post: {
comments: (post) => {
return knex('comments').where('post_id', post.id)
},
},
};
However, this has a pretty big problem. Essentially it would do the following:
Make 1 query for all authors.
For each author, make a query for all their posts. (n + 1 queries)
For each post, make a query for all its comments. (n + 1 queries)
So it's a fan-out of queries. If there were 20 authors, each with 20 posts, that would be 21 db calls. If each post also had 20 comments, that would be 421 db calls! 20 authors resolve 400 posts, which resolve 8000 comments. Not that you would really do it this way, but it demonstrates the point: 1 -> 20 -> 400 db calls.
If we add the comments.author calls, that's another 8000 db calls (one for each comment)!
How would you batch this into let's say 3 db calls (1 for each type)? Is that what optimized GraphQL query resolvers do essentially? Or what is the best that can be done for this situation?
This is the GraphQL N+1 loading issue.
Basically there are two ways to solve it (for simplicity, assume we only need to load the authors and their posts):
Use the DataLoader pattern. The idea is to defer the actual loading of each author's posts to a later point, so that the posts for N authors can be batch-loaded together with a single SQL query. It also provides caching to further improve performance within the same request. (See the first sketch after this list.)
Use the "look-ahead" pattern (a Java example is described here). The idea is that when resolving the authors, you look ahead to see whether the query includes posts among the sub-fields. If it does, you can use a SQL join to fetch the authors together with their posts in a single statement. (See the second sketch after this list.)
Also, to prevent a malicious client from requesting a very large graph, some GraphQL servers analyse the query and impose a depth limit on it.
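For illustration, with apollo-server this is commonly done through a validation rule such as the graphql-depth-limit package:
const depthLimit = require('graphql-depth-limit');
const { ApolloServer } = require('apollo-server');

const server = new ApolloServer({
    typeDefs,
    resolvers,
    // Reject any query nested more than 5 levels deep.
    validationRules: [depthLimit(5)],
});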
I query cars from an API with a single query but two resolvers, listing and listings (hopefully "resolver" is the right name for it). One car I fetch by its id via listing, and the other cars I fetch without filters via listings. The resolvers output the data in slightly different structures on the server side; I get the same fields, just in different "places". I want to merge the structures so that I get a single array I can simply loop over in Vue.js. For the API calls I use vue-apollo.
I couldn't find any information about merging data client-side inside GraphQL queries. All I found was about handling it server-side with resolvers, but this is an API I do not own.
Is it possible with GraphQL, or do I have to merge it inside my Vue component, and if so, what would be the best way to do so?
The output will be a grid of cars showing the car of the week (requested by id) together with the newest cars of the corresponding car dealer.
Full screenshot including response: https://i.imgur.com/gkCZczY.png
Stripped-down example with just the id to show the problem:
query CarTeaser ($guid: String! $withVehicleDetails: Boolean!) {
search {
listing(guid: $guid){
details{
identifier{
id #for example: here I get the id under details->identifier
}
}
}
listings( metadata: { size: 2 sort:{ field: Age order: Asc}}) {
listings{
id #here it's right under listings
details{
…
}
}
}
}
}
Ideally you're right, it should be handled server-side, but if it's not your API the only solution is to manipulate the data on the client side, meaning in your component.
It's probably a lot simpler to leave the listings array untouched and to just merge the listing element with it, like this for instance:
// assuming 'search' holds the entire data queried from the api
const fullListing = [
// car of the week, data reformatted to have an identical structure as
// the 'other' cars
{
id: search.listing.details.identifier.id,
details: {
vehicle: search.listing.details.vehicle,
},
},
...search.listings.listings, // the 'other' cars
]
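In a Vue component this merge would typically live in a computed property so the template can just v-for over it. A sketch (the vue-apollo smart-query setup and the CarTeaserQuery document are assumed; names follow the query above):
export default {
    apollo: {
        // vue-apollo writes the query result onto `this.search`;
        // CarTeaserQuery is the gql document for the query shown earlier.
        search: { query: CarTeaserQuery },
    },
    computed: {
        // Single array for the template: car of the week first, then the rest.
        fullListing() {
            if (!this.search) return [];
            return [
                {
                    id: this.search.listing.details.identifier.id,
                    details: { vehicle: this.search.listing.details.vehicle },
                },
                ...this.search.listings.listings,
            ];
        },
    },
};
The template can then simply do v-for="car in fullListing".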
This is contrary to another post where I asked how NOT to use dynamic queries, but in preparation for the possibility that I may need them, I am attempting to learn more.
I have situations where sometimes I want a field to be analyzed, and sometimes I do not. I am solving this right now by having two separate indexes.
/// <summary>
/// Query for an entity by its identity or name, using full indexing to search for specific parts of either.
/// </summary>
public class [ENTITY]__ByName : AbstractIndexCreationTask<[ENTITY]> {
public [ENTITY]__ByName() {
Map = results => from result in results
select new {
Id = result.Id,
Name = result.Name
};
Index(n => n.Name, FieldIndexing.Analyzed);
}
}
/// <summary>
/// Query for an entity by its full name, or full identity without any analyzed results, forcing
/// all matches to be absolutely identical.
/// </summary>
public class [ENTITY]__ByFullName : AbstractIndexCreationTask<[ENTITY]> {
public [ENTITY]__ByFullName() {
Map = results => from result in results
select new {
Id = result.Id,
Name = result.Name
};
}
}
However, I am being told that I should be using "dynamic indexes" (which to me defeats the purpose of having indexes, but this comment came from a senior developer whom I greatly respect, so I am entertaining it).
So, I need to figure out how to pass my analyzer preference to a dynamic query. Right now my query looks something like this:
RavenQueryStatistics statistics;
var query = RavenSession
.Query<[ENTITY], [INDEX]>()
.Customize(c => c.WaitForNonStaleResultsAsOfNow())
.Statistics(out statistics)
.Search(r => r.Name, [name variable])
.Skip((request.Page - 1) * request.PageSize)
.Take(request.PageSize)
.ToList();
var totalResults = statistics.TotalResults;
Alright, so since I am told that having so many indexes isn't what I should do, I need to move to dynamic queries? So it would be more like this?
RavenQueryStatistics statistics;
var query = RavenSession
.Query<[ENTITY]>()
.Customize(c => c.WaitForNonStaleResultsAsOfNow())
.Statistics(out statistics)
.Search(r => r.Name, [name variable])
.Skip((request.Page - 1) * request.PageSize)
.Take(request.PageSize)
.ToList();
var totalResults = statistics.TotalResults;
But the problem is that sometimes I want an analyzer, and sometimes I don't. For example:
On a grid of 6000 results, if the user does a "search" for One, I want it to find everything that has One anywhere in the name. The analyzer allows for this.
On a validator that is designed to ensure the user does not add a new entity with the exact same name as another, I do not want such flexibility. If the user is typing in Item Number as the name, and Item Number 2 exists, I do not want it to match just because one or two of the words match. I want it to match only if they type in exactly the same name.
So, is there a way to incorporate this into dynamic queries? or am I smart to just keep using different queries?
From RavenDB's documentation:
The indexes each RavenDB server instance uses to facilitate fast queries are powered by Lucene, the full-text search engine.
Lucene takes a Document, breaks it down into Fields, and then splits all text in a Field into tokens (Terms) in a process called Tokenization. Those tokens are what will be stored in the index, and be later searched upon.
{...}
After the tokenization and analysis process is complete, the resulting tokens are stored in an index, which is now ready to be searched.
{...}
The default values for each field are FieldStorage.No in Stores and FieldIndexing.Default in Indexes.
Setting FieldIndexing.No causes values to not be available in where clauses when querying (similarly to not being present in the original projection). FieldIndexing.NotAnalyzed causes whole properties to be treated as a single token and matches must be exact, similarly to using a KeywordAnalyzer on this field. The latter is useful for product Ids, for example. FieldIndexing.Analyzed allows to perform full text search operations against the field. FieldIndexing.Default will index the field as a single term, in lower case.
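Reading the quoted docs back against the two scenarios, the two behaviours map onto two query styles rather than two indexes. A hedged sketch against dynamic indexes (searchTerm and candidateName are placeholders, and this is not verified against your RavenDB version):
// Grid search: Search() performs an analyzed, full-text match on Name.
var searchResults = RavenSession
    .Query<[ENTITY]>()
    .Search(r => r.Name, searchTerm)
    .ToList();

// Duplicate-name validator: plain equality is evaluated against the whole
// value as a single lower-cased term (FieldIndexing.Default), so
// "Item Number" will not match "Item Number 2".
var duplicateExists = RavenSession
    .Query<[ENTITY]>()
    .Any(r => r.Name == candidateName);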
As I understand it, to create a RavenDB index, you simply need to specify the Map property, like the following:
public class PlayersIndex : AbstractIndexCreationTask<Player>
{
public PlayersIndex()
{
Map = players => from doc in players
select new { doc.PlayerId, doc.TeamId, doc.PositionId };
}
}
Here is my question:
If you assume that PlayerId is a Guid, TeamId is an int, and PositionId is an enum, should I:
Refrain from specifying any indexing options?
Configure each field as FieldIndexing.NotAnalyzed?
In other words, should I entertain the idea of specifying the fields like the following?
public class PlayersIndex : AbstractIndexCreationTask<Player>
{
public PlayersIndex()
{
Map = players => from doc in players
select new { doc.PlayerId, doc.TeamId, doc.PositionId };
Indexes.Add(x => x.PlayerId, FieldIndexing.NotAnalyzed);
Indexes.Add(x => x.TeamId, FieldIndexing.NotAnalyzed);
Indexes.Add(x => x.PositionId, FieldIndexing.NotAnalyzed);
}
}
Jim,
For your needs, you aren't going to have to specify any indexing options: FieldIndexing.Default already indexes each of those values as a single term, which is exactly what equality queries against ids and enums need.
This has been a two-week battle for me so far, with no luck. :(
Let me first state my objective: to be able to search for entities which are tagged "foo" and "bar". You wouldn't think that was too hard, right?
I know this can be done easily with HQL, but since this is a dynamically built search query, that is not an option. First, some code:
public class Foo
{
public virtual int Id { get;set; }
public virtual IList<Tag> Tags { get;set; }
}
public class Tag
{
public virtual int Id { get;set; }
public virtual string Text { get;set; }
}
Mapped as a many-to-many because the Tag class is used on many different types. Hence no bidirectional reference.
So I build up my detached criteria using an abstract filter class. Let's assume for simplicity that I am just searching for Foos with the tags "Apples" (TagId 1) && "Oranges" (TagId 3); this would look something like:
SQL:
SELECT ft.FooId
FROM Foo_Tags ft
WHERE ft.TagId IN (1, 3)
GROUP BY ft.FooId
HAVING COUNT(DISTINCT ft.TagId) = 2; /*Number of items we are looking for*/
Criteria
var idsIn = new List<int>() {1, 3};
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("f.Id"))
        .Add(Projections.RowCount(), "RowCount")
        .Add(Projections.GroupProperty("f.Id")))
    .ProjectionCriteria.Add(Restrictions.Eq("RowCount", idsIn.Count));
var c = Session.CreateCriteria(typeof(Foo)).Add(Subqueries.PropertyIn("Id", dc));
Basically this is creating a DC that projects a list of Foo Ids which have all the tags specified.
This compiled in NH 2.0.1 but didn't work as it complained it couldn't find Property "RowCount" on class Foo.
After reading this post I was hopeful that this might be fixed in 2.1.0 so I upgraded. To my extreme disappointment I discovered that ProjectionCriteria has been removed from DetachedCriteria and I cannot figure out how to make the dynamic query building work without DetachedCriteria.
So I tried to think of a way to write the same query without the infamous HAVING clause. It can be done with multiple joins on the tag table. Hooray, I thought, that's pretty simple. So I rewrote it to look like this:
var idsIn = new List<int>() {1, 3};
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t1").Add(Restrictions.Eq("t1.Id", idsIn[0]))
    .CreateCriteria("Tags", "t2").Add(Restrictions.Eq("t2.Id", idsIn[1]));
This was a vain attempt to produce the SQL below, which would do the job (I realise it's not quite correct):
SELECT f.Id
FROM Foo f
JOIN Foo_Tags ft1
ON ft1.FooId = f.Id
AND ft1.TagId = 1
JOIN Foo_Tags ft2
ON ft2.FooId = f.Id
AND ft2.TagId = 3
Unfortunately I fell at the first hurdle with this attempt, receiving the exception "Duplicate Association Path". Reading around, this seems to be an ancient and still very real bug/limitation.
What am I missing?
I am starting to curse NHibernate's name for making what you would think is such a simple and common query so difficult. Please help if you have done this before: how did you get around NHibernate's limitations?
Forget reputation and a bounty. If someone does me a solid on this I will send you a 6-pack for your trouble.
I managed to get it working like this:
var dc = DetachedCriteria.For<Foo>("f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.SqlGroupProjection(
        "{alias}.FooId",
        "{alias}.FooId having count(distinct t1_.TagId) = " + idsIn.Count,
        new[] { "Id" },
        new IType[] { NHibernateUtil.Int32 }));
The only problem here is the hard-coded alias in count(distinct t1_.TagId), but I think the alias should be generated the same way every time for this DetachedCriteria, so you should be on the safe side hard-coding it.
Ian,
Since I'm not sure what db backend you are using, can you do some sort of a trace against the produced SQL query and take a look at the SQL to figure out what went wrong?
I know I've done this in the past to understand how Linq-2-SQL and Linq-2-Entities have worked, and been able to tweak certain cases to improve the data access, as well as to understand why something wasn't working as initially expected.
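For NHibernate specifically, one easy way to get that trace (a sketch, assuming programmatic configuration) is the show_sql setting, which echoes every generated statement to the console regardless of the db backend:
// Enable SQL logging before building the session factory.
var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure(); // reads hibernate.cfg.xml as usual
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");
var sessionFactory = cfg.BuildSessionFactory();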