Complex SQL query or complex processing in controller?

I am using Rails 3 and building an app in which a user can list objects for rent. For each object, he can decide:
the object can be rented by the entire world (potentially), or not (object access is restricted)
the object can only be rented by his friends (in the app, via a has_many :friends, :through => :friendships association)
the object can be rented by members of the following groups (the user then selects either all the groups he belongs to, or a subset of them)
My question is how to display the list of objects the current_user can see. I see two options, and I would like to know which is better, or whether they are equivalent:
1) In the object controller, build the collection of objects the user can see (@objects = Object.can_see(current_user)), pass it to the index view, and display the whole list.
2) In the controller, collect ALL objects (@objects = Object.all), pass them to the index view, and in the view, for each object, run a series of tests to determine whether the object should be displayed.

In the end, it is really up to you to decide based on where you see the project headed. You really have three options though:
Handle the entire query via SQL statements.
Handle some of the initial filtering via SQL statements, then filter the returned data.
Return all of the data via SQL statements, and filter the returned data.
Keep in mind that SQL databases are built for fast data access. I would be very surprised if your solution did no initial filtering at all, unless you were returning very small data sets. In the end, it comes down to how complex your logic is (can it be expressed in SQL and still be understood?), how performant each option would be, and how scalable the solution needs to be.
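As a concrete illustration of what "expressed in SQL" could mean here, below is a hedged sketch of the visibility filter as a single query. All table and column names (objects.visibility, friendships, object_groups, group_memberships) are assumptions for illustration, not taken from the question:

-- hedged sketch; names below are illustrative assumptions
SELECT o.*
FROM   objects o
WHERE  o.visibility = 'public'
       -- friends-only: the owner (o.user_id) counts current_user as a friend
   OR  (o.visibility = 'friends'
        AND o.user_id IN (SELECT f.user_id
                          FROM   friendships f
                          WHERE  f.friend_id = :current_user_id))
       -- group-restricted: current_user belongs to one of the object's groups
   OR  (o.visibility = 'groups'
        AND EXISTS (SELECT 1
                    FROM   object_groups og
                    JOIN   group_memberships gm ON gm.group_id = og.group_id
                    WHERE  og.object_id = o.id
                      AND  gm.user_id   = :current_user_id));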
There are also a number of good ORMs out there that can help blur the line between your business logic and data access. Many of them will dynamically generate the complex SQL from your code, so there isn't as much concern about how ugly it will be.

Related

REST API Architecture: How to Represent Joined Tables

Question
I have a complex query that joins three tables and returns a set of rows, with each row having data from its sibling tables. How is it possible to represent this in a RESTful way?
FWIW I know there is not necessarily a "right" way to do it, but I'm interested in learning about what might be the most extensible and durable solution for this situation.
Background
In the past I've represented single tables that more or less mirror the literal structure of the url. For example, the url GET /agents/1/policies would result in a query like select * from policies where agent_id = 1.
Assumption
It seems like the url doesn't necessarily have to be so tightly coupled to the structure of the database layer. For example, if the complex query was something like:
select
agent.name as agent_name,
policy.status as policy_status,
vehicle.year as vehicle_year
from
policies as policy
join agents as agent on policy.agent_id = agent.id
join vehicles as vehicle on vehicle.policy_id = policy.id
where 1=1
and policy.status = 'active';
# outputs something like:
{ "agent_name": "steve", "policy_status": "single", "vehicle_year": "1999" }
I could represent this QUERY as a url instead of TABLES as urls. The url for this could be /vehicles, and if someone wanted to query it (by id or some other parameter like /vehicles?vehicle_color=red), I could just pass that value into a prepared statement.
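For instance, a hedged sketch of what the prepared statement behind /vehicles?vehicle_color=red might look like, in PostgreSQL syntax (the vehicle.color column is an assumption; it does not appear in the schema above):

PREPARE vehicle_search (text) AS
    SELECT agent.name    AS agent_name,
           policy.status AS policy_status,
           vehicle.year  AS vehicle_year
    FROM   policies AS policy
    JOIN   agents   AS agent   ON policy.agent_id   = agent.id
    JOIN   vehicles AS vehicle ON vehicle.policy_id = policy.id
    WHERE  policy.status = 'active'
      AND  vehicle.color = $1;  -- the query-string value is bound here

EXECUTE vehicle_search('red');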
Bonus Questions
Is this an antipattern?
Should my queries always be run against EXISTING tables instead of prepared statements?
Thanks for your help!
You want to step back from the database tables and queries and think about the basic resources. In your examples, these are clearly agent, customer, vehicle, and policy.
Resources vs Collections
One misstep I see in your examples is that you don't separate collections from resources using plurals, which is useful when you are dealing with searching and, logistically, for your controller routes. In your example you have:
GET /agents/1/policies
Suppose instead that this was GET /agent/1/policies.
Now you have a clear differentiation between the location of an idempotent resource, /agent/1, and finding/searching for a collection of agents, /agents.
Following this train of thought, you start to disassociate enumerating relationships from each side of the relationship in your API, which is inherently redundant.
In your example, policies are clearly not owned by an agent specifically. A policy should be a resource that stands on its own, identifiable via some idempotent url using whatever ID uniquely identifies that policy, i.e. /policy/{id}
Searching Collections
What this now does for you is let you separate out finding a policy through /policies, where returning only the policies for a specific agent is but one of a number of different ways you might access that collection.
So rather than having GET /agents/1/policies you would instead find the policies associated with an agent via: GET /policies?agent=1
The expected result of this would be a collection of resource identifiers for the matching policies:
{ "policies" : ["/policy/234234", "/policy/383282"] }
How do you then get the final result?
For a given policy, you would expect a complete return of associated information, as in your query, only without the limitations of the select clause. Since what you want is a filtered version, a way to handle that would be to include filter criteria.
GET /policy/234234?filter=agentName,policyStatus,vehicleYear
With that said, this approach has pitfalls, and I question it for a number of reasons. If you look at your original list of resources, each one can be considered an object. If you are building an object graph in the client, then the complete information for a policy would instead include resource locators for all the associated resources:
{ ... Policy data + "customer": "/customer/2834", "vehicle": "/vehicle/88328", "agent": "/agent/32" }
It is the job of the client to access the data for an agent, a vehicle, and a customer, and not your job to regurgitate all that data redundantly any time you need some view of it.
This is better because it is RESTful and supports many of the aims of REST: idempotency, caching, etc.
It also lets the client cache the data for an agent locally, and determine whether it needs to fetch that data or can use what it has already cached. In the worst case there are maybe 3 or 4 REST calls that need to be made.
Bonus questions
REST has some grey areas. You have to interpret Fielding, and for that reason there are frequently different opinions about how to do things. While providing an api like GET /agents/1/policies to list the policies associated with an agent is a frequently used approach, in my experience there is a point where it becomes limiting and redundant, as it requires end users to become familiar with the way you model relationships between the underlying resources.
As for your question on queries, it makes no difference how you access and organize the underlying data, so long as you are consistent. What often happens (for performance reasons) is that the api stops returning resource identifiers and starts returning the data, as I illustrated previously. This is a slippery slope where you are just turning your REST api into a frontend for a bunch of queries, and at that point your API might as well be: GET /query?filter=agent.name,policy.status,vehicle.year&from=policies&join=agents,vehicles&where=...

Limiting type of SELECT queries in SQL expression

I would like to know if there is any way to prevent a SQL database from executing queries that do not contain any aggregate functions. The reason is to prevent a user from fetching the data of a particular record (e.g. personal information) while still giving him the option to query the population (e.g. average age). I would prefer not to write any wrapper/processor/parser if there is an out-of-the-box solution.
There is no real (or at least easy) way to do this.
My recommendation would be to restrict access to the base tables, but create one or more views over the top that provide the aggregate data, and then allow access to those.
Alternatively, create 1-to-1 mapped views that simply select only non-personal data from the base table. This way you can let someone use these views to run aggregate functions without worrying about exposing personal or sensitive information.
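A minimal sketch of the first approach (table, view, and role names are illustrative assumptions):

-- expose only aggregates; the base table stays locked down
CREATE VIEW population_stats AS
    SELECT city,
           AVG(age) AS avg_age,
           COUNT(*) AS population
    FROM   people
    GROUP  BY city;

REVOKE ALL    ON people           FROM analyst_role;
GRANT  SELECT ON population_stats TO   analyst_role;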
The Wikipedia page for views states:
Views can represent a subset of the data contained in a table. Consequently, a view can limit the degree of exposure of the underlying tables to the outer world: a given user may have permission to query the view, while denied access to the rest of the base table.
and
Views can act as aggregated tables, where the database engine aggregates data (sum, average, etc.) and presents the calculated results as part of the data.

SQL many value in one var [duplicate]

So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list.
AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization.
Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.
... just a little more info to let you know where I'm coming from. Just as I was beginning to understand SQL and databases in general, I was turned on to LINQ to SQL, so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database.
Thanks All!
John
UPDATE: So in the first flurry of answers I'm getting, I see "you can go the CSV/XML route... but DON'T!". So now I'm looking for explanations of why. Point me to some good references.
Also, to give you a better idea of what I'm up to: In my database I have a Function table that will have a list of (x,y) pairs. (The table will also have other information that is of no consequence for our discussion.) I will never need to see part of the list of (x,y) pairs. Rather, I will take all of them and plot them on the screen. I will allow the user to drag the nodes around to change the values occasionally or add more values to the plot.
No, there is no "better" way to store a sequence of items in a single column. Relational databases are designed specifically to store one value per row/column combination. In order to store more than one value, you must serialize your list into a single value for storage, then deserialize it upon retrieval. There is no other way to do what you're talking about (because what you're talking about is a bad idea that should, in general, never be done).
I understand that you think it's silly to create another table to store that list, but this is exactly what relational databases do. You're fighting an uphill battle and violating one of the most basic principles of relational database design for no good reason. Since you state that you're just learning SQL, I would strongly advise you to avoid this idea and stick with the practices recommended to you by more seasoned SQL developers.
The principle you're violating is called first normal form, which is the first step in database normalization.
At the risk of oversimplifying things, database normalization is the process of defining your database based upon what the data is, so that you can write sensible, consistent queries against it and be able to maintain it easily. Normalization is designed to limit logical inconsistencies and corruption in your data, and there are a lot of levels to it. The Wikipedia article on database normalization is actually pretty good.
Basically, the first rule (or form) of normalization states that your table must represent a relation. This means that:
You must be able to differentiate one row from any other row (in other words, your table must have something that can serve as a primary key). This also means that no row may be duplicated.
Any ordering of the data must be defined by the data, not by the physical ordering of the rows (SQL is based upon the idea of a set, meaning that the only ordering you should rely on is that which you explicitly define in your query)
Every row/column intersection must contain one and only one value
The last point is obviously the salient one here. SQL is designed to store your sets for you, not to provide you with a "bucket" in which to store a set yourself. Yes, it's possible to do. No, the world won't end. You have, however, already handicapped yourself in understanding SQL and the best practices that go along with it by jumping straight into an ORM. LINQ to SQL is fantastic, just like graphing calculators are. In the same vein, however, they should not be used as a substitute for knowing how the processes they employ actually work.
Your list may be entirely "atomic" now, and that may not change for this project. You will, however, get into the habit of doing similar things in other projects, and you'll eventually (and likely quickly) run into a scenario where you're trying to fit your quick-n-easy list-in-a-column approach somewhere it is wholly inappropriate. There is not much additional work in creating the correct table for what you're trying to store, and you won't be derided by other SQL developers when they see your database design. Besides, LINQ to SQL is going to see your relation and give you the proper object-oriented interface to your list automatically. Why give up the convenience offered by the ORM in order to perform nonstandard and ill-advised database hackery?
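For the questioner's concrete case, a hedged sketch of what the recommended, normalized design might look like (table and column names are made up; a functions table with an id is assumed):

CREATE TABLE function_points (
    function_id int    NOT NULL REFERENCES functions(id),
    seq         int    NOT NULL,   -- ordering defined by the data, per rule 2
    x           float8 NOT NULL,
    y           float8 NOT NULL,
    PRIMARY KEY (function_id, seq) -- rows are distinguishable, per rule 1
);

-- fetching the whole list for plotting is a single query
SELECT x, y FROM function_points WHERE function_id = 1 ORDER BY seq;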
You can just forget SQL altogether and go with a "NoSQL" approach. RavenDB, MongoDB, and CouchDB jump to mind as possible solutions. With a NoSQL approach you are not using the relational model... you aren't even constrained by schemas.
What I have seen many people do is this (it may not be the best approach; correct me if I am wrong):
The table I am using in the example is given below (it holds nicknames that you have given to your girlfriends; each girlfriend has a unique id):
nicknames(id,seq_no,names)
Suppose you want to store many nicknames under one id. That is why we have included a seq_no field.
Now, fill these values into your table:
(1,1,'sweetheart'), (1,2,'pumpkin'), (2,1,'cutie'), (2,2,'cherry pie')
If you want to find all the names that you have given to girlfriend id 1, you can use:
select names from nicknames where id = 1;
Simple answer: if, and only if, you're certain that the list will always be used as a list, then join the list together on your end with a character (such as '\0') that will never appear in the text, and store that. When you retrieve it, you can split on '\0'. There are of course other ways of going about this, but those depend on your specific database vendor.
As an example, you can store JSON in a Postgres database. If your list is text, and you just want the list without further hassle, that's a reasonable compromise.
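A hedged Postgres sketch of that compromise (table and column names are illustrative; the ->> operator returns an array element as text):

CREATE TABLE docs (
    id      serial PRIMARY KEY,
    payload jsonb
);

INSERT INTO docs (payload) VALUES ('["red", "green", "blue"]');

SELECT payload->>0 AS first_item FROM docs;  -- 'red'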
Others have suggested serializing, but I don't really think serializing is a good idea: part of the neat thing about databases is that several programs written in different languages can talk to one another, and data serialized using Java's format would not be of much use to a Lisp program that wanted to load it.
If you want a good way to do this sort of thing, there are usually array-or-similar types available. Postgres, for instance, offers array as a type and lets you store an array of text if that's what you want; there are similar tricks for MySQL and MS SQL using JSON, and IBM's DB2 offers an array type as well (documented in their own helpful documentation). This would not be so common if there weren't a need for it.
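A minimal sketch of the Postgres array route, using the questioner's (x, y) pairs (table and column names are made up for illustration):

CREATE TABLE plots (
    id     serial PRIMARY KEY,
    points float8[][]          -- a list of (x, y) pairs as a 2-D array
);

INSERT INTO plots (points)
VALUES (ARRAY[[1.0, 2.0], [3.5, 4.25]]);

-- the whole list comes back in one value, ready to plot
SELECT points FROM plots WHERE id = 1;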
What you do lose by going that road is the notion of the list as a bunch of things in sequence. At least nominally, databases treat fields as single values. But if that's all you want, then you should go for it. It's a value judgement you have to make for yourself.
In addition to what everyone else has said, I would suggest you analyze your approach over a longer term than just the present. It is currently the case that the items are unique. It is currently the case that resorting the items would require a new list. It is almost certainly the case that the lists are currently short. Even though I don't have the domain specifics, it is not much of a stretch to think those requirements could change. If you serialize your list, you are baking in an inflexibility that is not necessary in a more normalized design. By the way, that does not necessarily mean a full many-to-many relationship; you could just have a single child table with a foreign key to the parent and a character column for the item.
If you still want to go down the road of serializing the list, you might consider storing it as XML. Some databases, such as SQL Server, even have an XML data type. The only reason I'd suggest XML is that, almost by definition, this list needs to be short. If the list is long, then serializing it in general is an awful approach. If you go the CSV route, you need to account for values containing the delimiter, which means you are compelled to use quoted identifiers. Assuming the lists are short, it probably will not make much difference whether you use CSV or XML.
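A hedged SQL Server sketch of the XML route (table and column names are illustrative; the xml type's query() method is its standard XQuery support):

CREATE TABLE function_data (
    id     int PRIMARY KEY,
    points xml
);

INSERT INTO function_data
VALUES (1, '<points><p x="1" y="2"/><p x="3" y="4"/></points>');

-- XQuery lets you reach inside the blob when you occasionally need to
SELECT points.query('/points/p') FROM function_data WHERE id = 1;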
If you need to query on the list, then store it in a table.
If you always want the list, you could store it as a delimited list in a column. Even in this case, unless you have VERY specific reasons not to, store it in a lookup table.
Many SQL databases allow a table to contain a subtable as a component. The usual method is to allow the domain of one of the columns to be a table. This is in addition to using some convention like CSV to encode the substructure in ways unknown to the DBMS.
When E.F. Codd was developing the relational model in 1969-1970, he specifically defined a normal form that would disallow this kind of nesting of tables. This normal form was later called first normal form. He then went on to show that for every database, there is a database in first normal form that expresses the same information.
Why bother with this? Well, databases in first normal form permit keyed access to all data. If you provide a table name, a key value into that table, and a column name, the database will contain at most one cell containing one item of data.
If you allow a cell to contain a list or a table or any other collection, now you can't provide keyed access to the sub items, without completely reworking the idea of a key.
Keyed access to all data is fundamental to the relational model. Without this concept, the model isn't relational. As to why the relational model is a good idea, and what its limitations might be, you have to look at the 50 years' worth of accumulated experience with it.
I'd just store it as CSV; if it's simple values then that should be all you need (XML is very verbose, and serializing to/from it would probably be overkill, but that would be an option as well).
Here's a good answer for how to pull out CSVs with LINQ.
One option not yet mentioned in the answers: you can de-normalize your DB design. You need two tables: one table contains the proper list, one item per row, and another table contains the whole list in one column (comma-separated, for example).
Here is the 'traditional' DB design:
List(ListID, ListName)
Item(ItemID,ItemName)
List_Item(ListID, ItemID, SortOrder)
Here is the de-normalized table:
Lists(ListID, ListContent)
The idea here is that you maintain the Lists table using triggers or application code. Every time you modify List_Item content, the appropriate rows in Lists get updated automatically. If you mostly read lists, this could work quite well. Pros: you can read a list in one statement. Cons: updates take more time and effort.
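A hedged PostgreSQL sketch of the trigger half of this idea, against the schema above (the function and trigger names are made up; string_agg is a real Postgres aggregate; DELETE handling is omitted for brevity):

CREATE OR REPLACE FUNCTION refresh_list() RETURNS trigger AS $$
BEGIN
    UPDATE Lists
    SET    ListContent = (SELECT string_agg(i.ItemName, ',' ORDER BY li.SortOrder)
                          FROM   List_Item li
                          JOIN   Item i ON i.ItemID = li.ItemID
                          WHERE  li.ListID = NEW.ListID)
    WHERE  ListID = NEW.ListID;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- PostgreSQL 11+; older versions say EXECUTE PROCEDURE
CREATE TRIGGER list_item_changed
AFTER INSERT OR UPDATE ON List_Item
FOR EACH ROW EXECUTE FUNCTION refresh_list();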
I was very reluctant to choose the path I finally decided to take, because of many of these answers. While they add more understanding of what SQL is and of its principles, I decided to become an outlaw. I was also hesitant to post my findings, since for some it's more important to vent frustration at someone breaking the rules than to understand that there are very few universal truths.
I have tested it extensively and, in my specific case, it was far more efficient than either using the array type (generously offered by PostgreSQL) or querying another table.
Here is my answer:
I have successfully implemented a list in a single field in PostgreSQL by making use of the fixed length of each item of the list. Let's say each item is a color as an ARGB hex value; that means 8 characters. So you can create your array of at most 10 items by multiplying the item length by 10:
ALTER TABLE product ADD color varchar(80);
In case your list items' lengths differ, you can always pad with \0.
NB: obviously this is not necessarily the best approach for hex numbers, since a list of integers would consume less storage, but this is just to illustrate the idea of an array built on a fixed length allocated to each item.
The reasons why:
1/ Very convenient: retrieve item i as the substring at [i*n, (i+1)*n) (see the sketch after this list).
2/ No overhead from cross-table queries.
3/ More efficient and cost-saving on the server side. The list is like a mini blob that the client has to split.
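A hedged sketch of point 1 in SQL, against the color column added above (standard substring syntax is 1-based; n = 8 here, and items are counted from 0):

-- item 0 and item 1 of the packed color list
SELECT substring(color FROM 0 * 8 + 1 FOR 8) AS item_0,
       substring(color FROM 1 * 8 + 1 FOR 8) AS item_1
FROM   product;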
While I respect people who follow rules, many explanations are very theoretical and often fail to acknowledge that, in some specific cases, especially when aiming for cost-optimal, low-latency solutions, some minor tweaks are more than welcome.
"God forbid that it is violating some holy sacred principle of SQL": adopting a more open-minded and pragmatic approach before reciting the rules is always the way to go. Else you might end up like a candid fanatic reciting the Three Laws of Robotics before being obliterated by Skynet.
I don't pretend that this solution is a breakthrough, nor that it is ideal in terms of readability and database flexibility, but it can certainly give you an edge when it comes to latency.
What I do is this: if the list to be stored is small, I just convert it to a string, then split it later when required.
example in python -

# b is the list to be stored; '~' is assumed never to appear in the items
b = ["apple", "banana", "cherry"]
text1 = ""
for y in b:
    if text1 == "":
        text1 = y
    else:
        text1 = text1 + f"~{y}"
# text1 is now "apple~banana~cherry", ready to store in the db
then when I require it, I just read it back from the db and -

query = "apple~banana~cherry"  # the string read back from the db
out = query.split('~')
print(out)  # ['apple', 'banana', 'cherry']
This will return a list, while a plain string is what gets stored in the db. But if you are storing a lot of data in the list, then creating a table is the best option.
If you really want to store it in a column and have it queryable, a lot of databases support XML now. If you're not querying, you can store the values comma-separated and parse them out with a function when you need them separated. I agree with everyone else, though: if you are looking to use a relational database, a big part of normalization is separating out data like that. I am not saying that all data fits a relational database, though; you can always look into other types of databases if a lot of your data doesn't fit the model.
I think in certain cases you can create a FAKE "list" of items in the database. For example: a piece of merchandise has a few pictures to show its details; you can concatenate all the IDs of the pictures, separated by commas, and store the string in the DB, then just parse the string when you need it. I am working on a website now and I am planning to use this approach.
You can store it as text that looks like a list and create a function that can return its data as an actual list. Example:
database:
 ______________________
|  word  |   letters   |
|  me    | '[m, e]'    |
|  you   | '[y, o, u]' |
|  for   | '[f, o, r]' |
|__in____|_'[i, n]'____|

(note that the letters column is of type TEXT)
And the list-compiler function (written in Python, but it should be easily translatable to most other programming languages). It takes the text loaded from the SQL table and returns a list of strings parsed from the string containing the list. If you want it to return ints instead of strings, make mode equal to 'int'; likewise with 'bool' or 'float' ('string' returns the items as-is).
def string_to_list(string, mode):
    # strip the surrounding brackets, split on commas, and drop padding spaces
    items = [item.strip() for item in string.strip("[]").split(",") if item.strip()]
    if mode == "string":
        return items
    elif mode == "int":
        return [int(i) for i in items]
    elif mode == "float":
        return [float(i) for i in items]
    elif mode == "bool":
        # unrecognized values become None
        return [True if i in ("true", "True")
                else False if i in ("false", "False")
                else None
                for i in items]
    else:
        raise Exception("the 'mode'/second parameter of string_to_list() must be one of: 'int', 'string', 'bool', or 'float'")
Also here is a list-to-string function in case you need it.
def list_to_string(lst):
    # wrap the items in brackets, comma-separated, mirroring string_to_list
    return "[" + ", ".join(str(i) for i in lst) + "]"
Imagine your grandmother's box of recipes, all written on index cards. Each of those recipes is a list of ingredients, which are themselves ordered pairs of items and quantities. If you create a recipe database, you wouldn't need to create one table for the recipe names and a second table where each ingredient was a separate record. That sounds like what we're saying here. My apologies if I've misread anything.
From Microsoft's T-SQL Fundamentals:
Atomicity of attributes is subjective in the same way that the definition of a set is subjective. As an example, should an employee name in an Employees relation be expressed with one attribute (fullname), two (firstname and lastname), or three (firstname, middlename, and lastname)? The answer depends on the application. If the application needs to manipulate the parts of the employee's name separately (such as for search purposes), it makes sense to break them apart; otherwise, it doesn't.
So, if you needed to manipulate your list of coordinates via SQL, you would need to split the elements of the list into separate records. But if you just wanted to store a list and retrieve it for use by some other software, then storing the list as a single value makes more sense.

Database architecture in mongodb with ruby on rails

I am using MongoDB and Ruby on Rails to build a web service. I have around 10GB of data. Collections (similar to tables in an RDBMS) in the data are divided by states in a country, and the fields in the collections differ slightly from collection to collection. I have 60 collections. I won't have problems if I combine 2-3 collections with different fields, since I am using a NoSQL database.
My problem
If I don't combine my collections, I will have 60 models in my Rails application. If I combine them all, I will have one very large collection and performance will suffer. What would be the optimum choice, given that my server resources are limited? I will query my database based on 3-4 different parameters. For example, I may search only for a particular area, or for a particular license type a person owns, or sometimes both.
As Sergio said, one large collection with indexes on the 3-4 fields you query on will probably work best.
However, you don't have to have 60 models - just use dynamic fields. This is one of the main benefits of using MongoDB. You can read about them in the Mongoid documentation. Basically, just define the fields that are common to all of the documents in the collection, and then set and get the dynamic fields as needed.
The one gotcha here is that the dot (".") attribute accessor doesn't work until that attribute is set. So you can't say model.attribute until you have set it via model[:attribute] = "blah" or model.attribute = "blah".

ElasticSearch Mappings & Related Objects

Excuse the potential n00bness of this question - still trying to get my head around this non-relational NoSQL stuff.
I've been super impressed with the performance and simplicity of ElasticSearch, but I've got a mapping (borderline NoSQL theory) question to answer before I dive too deeply into the implementation.
Let's continue to use the Twitter examples ElasticSearch has in its documentation.
Basically, we know a tweet belongs to a user, and a user has many tweets.
The objects look something like this:
user = {'screen_name':'d2kagw', 'id_str':'1234567890', 'favourites_count':'15', ...}
tweet = {'message':'lorem lipsum...', 'user_id_str':'1234567890', ...}
What I'm wondering is, can the tweet object have a reference to the user object?
Since I want to be able to write queries like:
{'query': {
    'term':{'message':'lipsum'},
    'range':{'user.favourites_count':{'from':10, 'to':30}}
}}
I would like this to return the matching tweets with their user objects as part of the response (vs. having to lazy-load them later).
Am I asking too much of it?
Should I be expected to throw all the user data into the tweet object if I want to query the data in that way?
In my implementation (which doesn't use Twitter; that was just a convenient example) I need to keep the two datasets in different indexes because of the various ways I have to query the data, so I'm not sure I can use an object type AND have the index structure I require.
Thanks in advance for your help.
ElasticSearch doesn't really support the table joins we are so used to in the SQL world. The closest it gets is the Has Child Query, which allows limiting results in one table based on the presence of a record in another table, and even that is limited to a 1-to-many (parent-children) relationship.
So, a common approach in this world would be to denormalize everything and query one index at a time.