Pymongo array field: are elements guaranteed to stay in order?

Python guarantees that list elements stay in order.
As I understand it, MongoDB arrays also keep their order.
Does PyMongo likewise preserve that order when inserting, retrieving, and updating?
I have no strong reason to doubt it, but I can't find any reference that confirms it!
pymongo==3.3.1
MongoDB 3.4.9
Thanks!

https://api.mongodb.com/python/current/api/bson/index.html provides the mapping of Python types to BSON types. Since both sides maintain order, insertion, retrieval, etc. should maintain order as well.
Since Python dicts do not maintain key order, the BSON package also provides an ordered dict subclass, SON: http://api.mongodb.com/python/current/api/bson/son.html
If array order weren't maintained, they would probably have provided a similar tool to deal with that too.
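A quick sanity check is easy to run yourself. A minimal sketch, assuming a local mongod and a throwaway test collection (the collection name is made up):

from pymongo import MongoClient

client = MongoClient()  # assumes mongod on localhost:27017
coll = client.test.order_check  # hypothetical throwaway collection

coll.insert_one({"_id": 1, "values": [3, 1, 4, 1, 5, 9]})
doc = coll.find_one({"_id": 1})
assert doc["values"] == [3, 1, 4, 1, 5, 9]  # same order as inserted

# $push appends at the end, so this update preserves order too
coll.update_one({"_id": 1}, {"$push": {"values": 2}})
assert coll.find_one({"_id": 1})["values"][-1] == 2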

Related

Can RethinkDB efficiently handle lots of sparseness?

Use case: as part of a data infrastructure I'm contemplating storing many* entities of various schema.org types in the same RethinkDB table.
Given the inherent type hierarchy of schema.org, some properties are shared by all types, some are available on only one type, and everything in between.
For example: Person, Organization, and LocalBusiness share properties like name, description, and postalAddress, while some, such as firstName, are used only by Person.
Mapping this to a RethinkDB table will result in many properties (fields in RethinkDB-speak) being empty for many entities. As a guess, I'd say a field will be empty about 90% of the time on average. Roughly 150 fields exist.
Would RethinkDB be able to handle such a sparse layout efficiently? This is a broad question, I realize, but I'm looking for specifics like:
If I were to build indexes on some (not all) of these fields, would empty values consume space in those indexes?
What would the performance penalty (CPU and memory) be if these fields were all allowed to be multivalued, i.e. arrays?
*) a couple of million to start with
RethinkDB works well with sparse data. Indexes are currently always sparse indexes, so your index won't be cluttered up by documents that don't have the indexed field.
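As a rough sketch with the RethinkDB Python driver (the server address, table, and field names are assumptions for illustration, not part of the original answer):

import rethinkdb as r

conn = r.connect("localhost", 28015)  # assumes a local RethinkDB server

# The index only contains documents that actually have a firstName field,
# so entities without it add nothing to the index.
r.table("entities").index_create("firstName").run(conn)
r.table("entities").index_wait("firstName").run(conn)

# Query via the (sparse) secondary index:
people = list(r.table("entities").get_all("Ada", index="firstName").run(conn))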

Java 8 Stream vs JPA PostgreSQL ORDER BY: which is better for performance?

I'm using the JPA EntityManager with PostgreSQL and Java 8.
I need to show some data ordered by name.
Which is faster and has better performance:
making a query to the database like
@Query("select t from Table t order by t.someField")
or just getting all records from the database and sorting them with the Java 8 Stream API, like
someCollection.stream()
        .sorted((e1, e2) -> e1.getSomeField().compareTo(e2.getSomeField()))
        .collect(Collectors.toList());
In general, if you can sort with SQL, just go ahead. If your sorting column is indexed, sorting will be trivial: PostgreSQL will just read the index, which already contains the resulting order. Even if your sorting column is not indexed, the DBMS may do it more efficiently. For example, it's not necessary to hold whole rows in memory while sorting inside the DBMS; you just need the values from the sorted column and a row ID. After you get the properly ordered list of row IDs, you can send the rows to the client in a streaming fashion. Also, when sorting really big tables, the DBMS may spill some data to disk to reduce memory usage.
Note that DBMS sort is performed on DBMS side which can be completely different server, thus the resulting speed also depends on whether DBMS server or application server is more powerful or has more free resources right now.
If you want to sort the results in Java, it would probably be better to do an in-place sort using someCollection.sort(Comparator.comparing(e -> e.getSomeField())) (assuming that your someCollection is a List). This reduces memory consumption and the number of times your data is copied. In-place sorting is most effective for array-based lists like ArrayList.
Also, it should be noted that the sorted results may differ, as they may depend on the current DBMS collation (in Java you sort strings by UTF-16 code point values unless a custom Collator is used).

SQL: many values in one var [duplicate]

So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list.
AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization.
Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.
... just a little more info to let you know where I'm coming from. Just as I was beginning to understand SQL and databases in general, I was introduced to LINQ to SQL, so now I'm a little spoiled: I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database.
Thanks All!
John
UPDATE: So in the first flurry of answers I'm getting, I see "you can go the CSV/XML route... but DON'T!". So now I'm looking for explanations of why. Point me to some good references.
Also, to give you a better idea of what I'm up to: In my database I have a Function table that will have a list of (x,y) pairs. (The table will also have other information that is of no consequence for our discussion.) I will never need to see part of the list of (x,y) pairs. Rather, I will take all of them and plot them on the screen. I will allow the user to drag the nodes around to change the values occasionally or add more values to the plot.
No, there is no "better" way to store a sequence of items in a single column. Relational databases are designed specifically to store one value per row/column combination. In order to store more than one value, you must serialize your list into a single value for storage, then deserialize it upon retrieval. There is no other way to do what you're talking about (because what you're talking about is a bad idea that should, in general, never be done).
I understand that you think it's silly to create another table to store that list, but this is exactly what relational databases do. You're fighting an uphill battle and violating one of the most basic principles of relational database design for no good reason. Since you state that you're just learning SQL, I would strongly advise you to avoid this idea and stick with the practices recommended to you by more seasoned SQL developers.
The principle you're violating is called first normal form, which is the first step in database normalization.
At the risk of oversimplifying things, database normalization is the process of defining your database based upon what the data is, so that you can write sensible, consistent queries against it and be able to maintain it easily. Normalization is designed to limit logical inconsistencies and corruption in your data, and there are a lot of levels to it. The Wikipedia article on database normalization is actually pretty good.
Basically, the first rule (or form) of normalization states that your table must represent a relation. This means that:
You must be able to differentiate one row from any other row (in other words, your table must have something that can serve as a primary key). This also means that no row should be duplicated.
Any ordering of the data must be defined by the data, not by the physical ordering of the rows (SQL is based upon the idea of a set, meaning that the only ordering you should rely on is that which you explicitly define in your query)
Every row/column intersection must contain one and only one value
The last point is obviously the salient point here. SQL is designed to store your sets for you, not to provide you with a "bucket" for you to store a set yourself. Yes, it's possible to do. No, the world won't end. You have, however, already crippled yourself in understanding SQL and the best practices that go along with it by immediately jumping into using an ORM. LINQ to SQL is fantastic, just like graphing calculators are. In the same vein, however, they should not be used as a substitute for knowing how the processes they employ actually work.
Your list may be entirely "atomic" now, and that may not change for this project. But you will, however, get into the habit of doing similar things in other projects, and you'll eventually (likely quickly) run into a scenario where you're now fitting your quick-n-easy list-in-a-column approach where it is wholly inappropriate. There is not much additional work in creating the correct table for what you're trying to store, and you won't be derided by other SQL developers when they see your database design. Besides, LINQ to SQL is going to see your relation and give you the proper object-oriented interface to your list automatically. Why would you give up the convenience offered to you by the ORM so that you can perform nonstandard and ill-advised database hackery?
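For what it's worth, here is a minimal sketch of what that "correct table" could look like for the (x, y) pairs described in the question, using sqlite3 so it runs standalone; the table and column names are illustrative, not prescribed:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE function (
        function_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE function_point (
        function_id INTEGER NOT NULL REFERENCES function(function_id),
        seq         INTEGER NOT NULL,  -- the explicit ordering the list needs
        x           REAL NOT NULL,
        y           REAL NOT NULL,
        PRIMARY KEY (function_id, seq)
    );
""")
conn.execute("INSERT INTO function VALUES (1, 'demo')")
conn.executemany("INSERT INTO function_point VALUES (1, ?, ?, ?)",
                 [(0, 0.0, 0.0), (1, 1.0, 2.5), (2, 2.0, 3.1)])

# "Sorting every time" is a single ORDER BY over the primary key:
points = conn.execute("""SELECT x, y FROM function_point
                         WHERE function_id = 1 ORDER BY seq""").fetchall()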
You can just forget SQL altogether and go with a "NoSQL" approach. RavenDB, MongoDB and CouchDB jump to mind as possible solutions. With a NoSQL approach, you are not using the relational model... you aren't even constrained to schemas.
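As a hedged illustration of that route, a document database stores the whole list as an ordered array inside one document; with PyMongo (assuming a local mongod, and made-up names) it could look like:

from pymongo import MongoClient

functions = MongoClient().test.functions  # hypothetical collection
functions.insert_one({
    "name": "demo",
    "points": [[0.0, 0.0], [1.0, 2.5], [2.0, 3.1]],  # stored in order, as-is
})
doc = functions.find_one({"name": "demo"})  # doc["points"] is a list again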
What I have seen many people do is this (it may not be the best approach, correct me if I am wrong):
The table which I am using in the example is given below (the table holds the nicknames you have given to your girlfriends; each girlfriend has a unique id):
nicknames(id, seq_no, names)
Suppose you want to store several nicknames under one id; this is why we have included a seq_no field.
Now, fill these values to your table:
(1,1,'sweetheart'), (1,2,'pumpkin'), (2,1,'cutie'), (2,2,'cherry pie')
If you want to find all the names that you have given to girlfriend id 1, you can use:
select names from nicknames where id = 1;
Simple answer: If, and only if, you're certain that the list will always be used as a list, then join the list together on your end with a character (such as '\0') that will not be used in the text ever, and store that. Then when you retrieve it, you can split by '\0'. There are of course other ways of going about this stuff, but those are dependent on your specific database vendor.
As an example, you can store JSON in a Postgres database. If your list is text, and you just want the list without further hassle, that's a reasonable compromise.
Others have ventured suggestions of serializing, but I don't really think serializing is a good idea: part of the neat thing about databases is that several programs written in different languages can talk to one another. And data serialized using Java's format would not do all that well if a Lisp program wanted to load it.
If you want a good way to do this sort of thing, there are usually array-or-similar types available. Postgres, for instance, offers array as a type and lets you store an array of text if that's what you want; there are similar tricks for MySQL and MS SQL using JSON, and IBM's DB2 offers an array type as well (in their own helpful documentation). This would not be so common if there weren't a need for it.
What you do lose by going that road is the notion of the list as a bunch of things in sequence. At least nominally, databases treat fields as single values. But if that's all you want, then you should go for it. It's a value judgement you have to make for yourself.
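As a self-contained sketch of the JSON compromise (sqlite3 plus the json module here, purely for illustration; in Postgres you would reach for a real json/jsonb or array column instead):

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, tags TEXT)")
conn.execute("INSERT INTO product VALUES (1, ?)",
             (json.dumps(["red", "large", "cotton"]),))  # list -> JSON text

row = conn.execute("SELECT tags FROM product WHERE id = 1").fetchone()
tags = json.loads(row[0])  # back to a real Python list, order intact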
In addition to what everyone else has said, I would suggest you analyze your approach over a longer term than just the present. It is currently the case that items are unique. It is currently the case that re-sorting the items would require a new list. It is almost a requirement that the lists be short. Even though I don't have the domain specifics, it is not much of a stretch to think those requirements could change. If you serialize your list, you are baking in an inflexibility that is not necessary in a more normalized design. By the way, that does not necessarily mean a full many-to-many relationship; you could just have a single child table with a foreign key to the parent and a character column for the item.
If you still want to go down the road of serializing the list, you might consider storing it in XML. Some databases, such as SQL Server, even have an XML data type. The only reason I'd suggest XML is that, almost by definition, this list needs to be short. If the list is long, then serializing it is an awful approach in general. If you go the CSV route, you need to account for values containing the delimiter, which means you are compelled to use quoted identifiers. Presuming the lists are short, it probably will not make much difference whether you use CSV or XML.
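On the delimiter point: a standard CSV library handles the quoting for you. A small Python sketch (illustrative only):

import csv
import io

items = ["plain", "has, a comma", 'has "quotes"']

buf = io.StringIO()
csv.writer(buf).writerow(items)           # quotes fields as needed
serialized = buf.getvalue().strip()       # one CSV line to store in a column

restored = next(csv.reader(io.StringIO(serialized)))
assert restored == items                  # embedded delimiters survive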
If you need to query on the list, then store it in a table.
If you always want the list, you could store it as a delimited list in a column. Even in this case, unless you have VERY specific reasons not to, store it in a lookup table.
Many SQL databases allow a table to contain a subtable as a component. The usual method is to allow the domain of one of the columns to be a table. This is in addition to using some convention like CSV to encode the substructure in ways unknown to the DBMS.
When Ed Codd was developing the relational model in 1969-1970, he specifically defined a normal form that would disallow this kind of nesting of tables. Normal form was later called First Normal Form. He then went on to show that for every database, there is a database in first normal form that expresses the same information.
Why bother with this? Well, databases in first normal form permit keyed access to all data. If you provide a table name, a key value into that table, and a column name, the database will contain at most one cell containing one item of data.
If you allow a cell to contain a list or a table or any other collection, now you can't provide keyed access to the sub items, without completely reworking the idea of a key.
Keyed access to all data is fundamental to the relational model. Without this concept, the model isn't relational. As to why the relational model is a good idea, and what its limitations might be, you have to look at the 50 years' worth of accumulated experience with it.
I'd just store it as CSV; if it's simple values, that should be all you need (XML is very verbose, and serializing to/from it would probably be overkill, but it would be an option as well).
Here's a good answer for how to pull out CSVs with LINQ.
One option hasn't been mentioned in the answers yet: you can de-normalize your DB design. You need two tables: one table contains the proper list, one item per row; the other contains the whole list in one column (comma-separated, for example).
Here is the 'traditional' DB design:
List(ListID, ListName)
Item(ItemID,ItemName)
List_Item(ListID, ItemID, SortOrder)
Here is the de-normalized table:
Lists(ListID, ListContent)
The idea here: you maintain the Lists table using triggers or application code. Every time you modify List_Item content, the appropriate rows in Lists get updated automatically. If you mostly read lists, this can work quite well. Pros: you can read a list in one statement. Cons: updates take more time and effort.
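If you maintain the Lists table from application code rather than triggers, the sync step might look like this sketch (assuming a DB-API connection and the tables above):

def rebuild_list(conn, list_id):
    # Re-read the properly ordered items and rewrite the de-normalized row.
    rows = conn.execute(
        """SELECT i.ItemName
           FROM List_Item li JOIN Item i ON i.ItemID = li.ItemID
           WHERE li.ListID = ? ORDER BY li.SortOrder""",
        (list_id,),
    ).fetchall()
    content = ",".join(name for (name,) in rows)
    conn.execute("UPDATE Lists SET ListContent = ? WHERE ListID = ?",
                 (content, list_id))

# Call rebuild_list(conn, list_id) after every change to List_Item.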
I was very reluctant to choose the path I finally decided to take, because of the many answers. While they add more understanding of what SQL is and its principles, I decided to become an outlaw. I was also hesitant to post my findings, as for some it's more important to vent frustration at someone breaking the rules than to understand that there are very few universal truths.
I have tested it extensively and, in my specific case, it was far more efficient than either using the array type (generously offered by PostgreSQL) or querying another table.
Here is my answer:
I have successfully implemented a list in a single field in PostgreSQL by making use of the fixed length of each item of the list. Say each item is a color as an ARGB hex value; that means 8 characters per item. So you can create an array of at most 10 items by multiplying 10 by the length of each item:
ALTER TABLE product ADD COLUMN color varchar(80);
In case your list items' lengths differ, you can always pad them with \0.
NB: Obviously this is not necessarily the best approach for hex numbers, since a list of integers would consume less storage, but this is just to illustrate the idea of an array built on a fixed length allocated to each item.
The reason why:
1/ Very convenient: retrieve item i as the substring from i*n to (i+1)*n.
2/ No overhead of cross tables queries.
3/ More efficient and cost-saving on the server side. The list is like a mini blob that the client will have to split.
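A sketch of this packing scheme in Python (ITEM_LEN and the helper names are mine, for illustration):

ITEM_LEN = 8  # each ARGB color is exactly 8 hex characters

def pack_colors(colors):
    assert all(len(c) == ITEM_LEN for c in colors)
    return "".join(colors)  # store this string in the varchar column

def unpack_color(packed, i):
    # Item i lives at the fixed offset [i*n : (i+1)*n]
    return packed[i * ITEM_LEN:(i + 1) * ITEM_LEN]

packed = pack_colors(["FF0000FF", "FF00FF00", "FFFF0000"])
assert unpack_color(packed, 1) == "FF00FF00"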
While I respect people who follow the rules, many explanations are very theoretical and often fail to acknowledge that, in some specific cases, especially when aiming for cost-optimal, low-latency solutions, some minor tweaks are more than welcome.
"God forbid that it violates some holy, sacred principle of SQL": adopting an open-minded and pragmatic approach before reciting the rules is always the way to go. Otherwise you might end up like a naive fanatic reciting the Three Laws of Robotics before being obliterated by Skynet.
I don't pretend that this solution is a breakthrough, nor that it is ideal in terms of readability and database flexibility, but it can certainly give you an edge when it comes to latency.
What I do: if the list to be stored is small, I just convert it to a string and split it later when required.
Example in Python:
# b is the list of strings to store; "~" must not appear in any item
text1 = ""
for y in b:
    if text1 == "":
        text1 = y
    else:
        text1 = text1 + f"~{y}"
Then when I need it, I just read it back from the db and:
out = query.split('~')  # query holds the string fetched from the db
print(out)
This will give back a list, while a plain string is what gets stored in the db. But if you are storing a lot of data in the list, creating a table is the better option.
If you really want to store it in a column and have it be queryable, many databases support XML now. If you are not querying, you can store the values comma-separated and parse them out with a function when you need them separated. I agree with everyone else, though: if you are going to use a relational database, a big part of normalization is separating out data like that. I am not saying that all data fits a relational database, however. You could always look into other types of databases if a lot of your data doesn't fit the model.
I think in certain cases you can create a FAKE "list" of items in the database. For example, a piece of merchandise has a few pictures to show its details; you can concatenate all the IDs of the pictures, separated by commas, and store the string in the DB. Then you just need to parse the string when you need it. I am working on a website now, and I am planning to use this approach.
You can store it as text that looks like a list and create a function that can return its data as an actual list. Example:
database:

| word | letters     |
| me   | '[m, e]'    |
| you  | '[y, o, u]' |
| for  | '[f, o, r]' |
| in   | '[i, n]'    |

(note that the letters column is of type TEXT)
And here is the list-compiler function (written in Python, but it should be easy to translate to most other programming languages). string is the text loaded from the SQL table; the function returns the list of strings encoded in it. If you want it to return ints instead of strings, pass 'int' as the mode; likewise for 'string', 'bool', or 'float'.
def string_to_list(string, mode):
    # Strip the surrounding brackets, split on commas, and trim whitespace.
    items = [part.strip() for part in string.strip("[]").split(",") if part.strip()]
    if mode == "int":
        return [int(i) for i in items]
    elif mode == "float":
        return [float(i) for i in items]
    elif mode == "bool":
        # Unrecognized values become None.
        booleans = {"true": True, "True": True, "false": False, "False": False}
        return [booleans.get(i) for i in items]
    elif mode == "string":
        return items
    else:
        raise ValueError("the 'mode'/second parameter of string_to_list() "
                         "must be one of: 'int', 'string', 'bool', or 'float'")
Also here is a list-to-string function in case you need it.
def list_to_string(lst):
    # Build "[a,b,c]"; an empty list yields "[]".
    string = "["
    for i in lst:
        string += str(i) + ","
    if string[-1] == ",":
        string = string[:-1] + "]"
    else:
        string += "]"
    return string
Imagine your grandmother's box of recipes, all written on index cards. Each of those recipes is a list of ingredients, which are themselves ordered pairs of items and quantities. If you create a recipe database, you wouldn't need to create one table for the recipe names and a second table where each ingredient was a separate record. That sounds like what we're saying here. My apologies if I've misread anything.
From Microsoft's T-SQL Fundamentals:
Atomicity of attributes is subjective in the same way that the definition of a set is subjective. As an example, should an employee name in an Employees relation be expressed with one attribute (fullname), two (firstname and lastname), or three (firstname, middlename, and lastname)? The answer depends on the application. If the application needs to manipulate the parts of the employee's name separately (such as for search purposes), it makes sense to break them apart; otherwise, it doesn't.
So, if you needed to manipulate your list of coordinates via SQL, you would need to split the elements of the list into separate records. But if you just want to store a list and retrieve it for use by some other software, then storing the list as a single value makes more sense.

Does Collection preserve ordering?

While searching for a way to update an entry in a Collection, I found that I should use a Dictionary instead. In a comment, somebody noted that the Dictionary does not preserve order. But does the Collection preserve order, even if you delete elements and so on?
Yes, the Collection preserves the order of items.
A Dictionary is "optimised" to provide fast access to members via an arbitrary key. There is no particular order to either keys or values, because content might get reorganised at any time.
A "simple" Collection is a plain list of objects: they are stored in the same sequence in which you inserted them. Only if you remove items or insert them at a given position (as opposed to appending them) do item positions shift.
There's a great article over at Experts Exchange that presents the commonalities and differences of Collections and Dictionaries and when to use one or the other.
Just for completeness, this SO question discusses the merits of Arrays vs. Collections (I thought I should mention Arrays, as they are often neglected in such discussions).

EAV vs Serialized Object vs SQL with XPath?

I'm trying to implement a badge system; the badges are based on a user's metadata, which is subject to change.
That metadata is variable and is set on the fly.
Example of metadata :
commentCount
hasCompletedProfile
isActiveMember
etc. Later I might want to add a hasGravatar metadata field; for this reason, I can't easily design and normalize a table up front.
This metadata, while an important part of the application, is not 'sensitive': almost all of it could be re-computed, which means data integrity is not a hard constraint.
Currently I know of three options, though I haven't used any of them:
EAV
Serialized Objects
XML field (I read somewhere that it is possible to store XML in a column and use XPath or something similar to query the data)
All of these options seem to have pros & cons, but since I've never experimented with any of them, I don't really know which to choose.
Do you have any feedback or advice?
I'm currently working with Zend Framework & Doctrine 2 with a MySQL server
XML and serialized objects are both very similar, in that you would likely be using one column to store this arbitrary data. That quickly becomes very messy and hard to filter on in SQL WHERE clauses (though some DBMSs have XPath support).
EAV, on the other hand, gives you a separate row for every key => value pair, which you can easily extract with a JOIN or subquery. The major downfall is that it can be a performance hit if you have a lot of data in there. Another drawback is that, to keep things simple, you would store all keys and values as text in the db. You could create an EAV table for every type, but that's not practically needed in most languages, as what you fetch comes back as a string (or can be converted there) anyway. Simply storing user configuration/properties should be perfectly fine with EAV.
So you might have a table user_metadata with four fields:
metadata_id INTEGER
user_id INTEGER
key CHAR
value CHAR
You could then fetch this data all at once for a user:
SELECT * FROM user_metadata WHERE user_id = $user_id
Or you could fetch individual metadata along with your user data
SELECT user.*, meta_gravatar.value AS hasGravatar
FROM user
LEFT JOIN user_metadata AS meta_gravatar
ON meta_gravatar.user_id = user.user_id AND meta_gravatar.key = 'hasGravatar'
WHERE user.user_id = $user_id
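On the application side, turning those EAV rows into something usable is one query plus a dict. A sketch in Python (DB-API style; placeholder syntax varies by driver, and conn/user_id are assumed to exist):

rows = conn.execute(
    "SELECT key, value FROM user_metadata WHERE user_id = ?", (user_id,))
metadata = dict(rows)  # e.g. {"commentCount": "42", "hasGravatar": "1"}
# Per the EAV caveat above, everything comes back as text, so the
# application must cast values ("42" -> 42) itself.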
EAV: It is complicated and slow; it is an example of how not to use an SQL database. You cannot have an index on properties in EAV, and you need some nontrivial logic to get the data from the database into business-logic objects. Your SQL queries also become difficult to optimize.
Serialized objects: Serialization often depends on the language or platform. There is no way to index on some property or to search for anything, but it is a simple way to store data of undefined structure.
XML field: Using a standardized representation is better than ad-hoc serialization. There may also be support for such data structures in your SQL server.
JSON field: The same as an XML field; however, JSON supports primitive data types (int, bool, null) and is faster and easier to parse and serialize than XML. Some SQL servers provide support for it as well.
All three serialization options share the same disadvantage: no indexes on the properties. In most applications this is acceptable, because the data is not processed by the database anyway; it is simply a blob for the application. The good thing is that this blob does not complicate the database schema and operations.
There is one more way to implement an EAV alternative: a plain old SQL table. If a new property requires a change in the application code anyway, you can add the SQL column as well. If you have a user interface and application logic to define properties at run time, you can teach your application to issue ALTER TABLE queries. Then you simply add or remove columns as needed. In the end, it will be much easier and more effective than implementing EAV, as long as you have a good query builder.
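A rough sketch of that run-time ALTER TABLE idea (sqlite3 here; the table name and the validation rule are illustrative assumptions):

import re
import sqlite3

def add_property(conn, name, sql_type="TEXT"):
    # Identifiers cannot be bound as parameters, so validate before
    # interpolating the column name into the statement.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe column name: {name!r}")
    conn.execute(f'ALTER TABLE user_properties ADD COLUMN "{name}" {sql_type}')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_properties (user_id INTEGER PRIMARY KEY)")
add_property(conn, "hasGravatar", "INTEGER")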