How do I set up relationships between tables using Firebase?

I want to create a relational table on the Google console screen using Firebase. How can I make the relationship between the user table and the user's posts table on the Google console screen?

The two databases offered by Firebase, Realtime Database and Cloud Firestore, are both NoSQL databases. By definition, this means that they are non-relational: they do not have tables, and there are no joins. You can't set up relationships between collections or nodes.
With Cloud Firestore, you can use a DocumentReference type field to have one document point to another document, but that doesn't help you with constructing a query that joins those two documents together.
If you want to use either of the databases that Firebase offers, please take some time to get acquainted with NoSQL data modeling techniques. There are plenty of tutorials out there, as well as the product documentation for each product.
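To make the advice above concrete, here is a minimal sketch (plain Python dicts standing in for Firestore documents, with illustrative collection and field names) of how a user-to-posts "relationship" is typically modeled without joins: the link is just a field, and the join happens in application code.

```python
# Hypothetical sketch: a user -> posts relationship without server-side joins.
# In a real app these dicts would be documents in "users" and "posts"
# collections; names here are illustrative.
users = {
    "u1": {"name": "Alice"},
}
posts = {
    "p1": {"author_id": "u1", "title": "Hello"},
    "p2": {"author_id": "u1", "title": "Second post"},
}

def posts_for_user(user_id):
    # The "relationship" is just the author_id field; filtering happens
    # in application code (or as a separate filtered query per collection).
    return [p for p in posts.values() if p["author_id"] == user_id]

print([p["title"] for p in posts_for_user("u1")])
```

In Firestore the same shape is usually achieved with a field filter on the posts collection, or by nesting posts as a subcollection under each user document.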

NoSQL databases don't have relationships between tables. They also save documents in JSON format.
A NoSQL e-commerce data model may help you.

Related

Firebase Database in SQL

I would like to know if it is possible to use Firebase as a SQL database. I have trouble with relations in NoSQL.
For example: a user belongs to a team, and a team has users.
It is not possible to use Firebase in this way. Firebase is a real-time object store. It is not a SQL database and is not intended to be a replacement for one. It completely lacks mechanisms such as JOINs, WHERE query filters, foreign keys, and other tools relational databases all provide. That doesn't mean it isn't a valuable tool. But any attempt to treat it "like" a SQL replacement is going to fail. That's not its purpose.
I think what you need is Supabase.
It uses an open-source Postgres database, and they're trying to be a Firebase alternative.
It's backed by Y Combinator but is still in beta as of February 2021.
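For contrast with what Firebase lacks, here is a minimal sketch of the team/users example from the question in a relational database, using Python's built-in sqlite3 (table and column names are illustrative): a foreign key models "a user belongs to a team", and a JOIN answers "which users are on a team" in one query.

```python
import sqlite3

# Hypothetical sketch of the user/team relationship in a relational
# database. Schema and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teams (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT,
        team_id INTEGER REFERENCES teams(id)  -- user belongs to a team
    );
    INSERT INTO teams VALUES (1, 'Red');
    INSERT INTO users VALUES (1, 'Alice', 1), (2, 'Bob', 1);
""")
# A JOIN resolves the relationship server-side in a single query --
# exactly the mechanism Firebase does not provide.
rows = conn.execute("""
    SELECT users.name FROM users
    JOIN teams ON users.team_id = teams.id
    WHERE teams.name = 'Red'
    ORDER BY users.id
""").fetchall()
print(rows)  # [('Alice',), ('Bob',)]
```

Supabase exposes this same relational model (Postgres) behind a Firebase-like API.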

Creating a blog service or a persistent chat with Table Storage

I'm trying azure storage and can't come up with real life scenarios when I would use it. As far as I understand the only index Table Storage has is Partition Key and Row Key. I can't sort or query on other columns without doing a full partition scan, right?
If I migrated my blog service from a traditional SQL Server or a richer NoSQL database like Mongo, I would probably be all right, considering users don't blog that much in one year (I would partition all blog posts per user per year, for example). Even if someone wrote around a thousand blog posts a year, I would be OK loading all their metadata into memory. I could do smarter partitioning if this didn't work well.
If I migrated my persistent chat service to Table Storage, how would I do that? Users post thousands of messages a day and query history pretty often from desktop clients, mobile devices, the web site, etc. I don't want to lose out here and only return one day of history with paging (which can be slow as well).
Any ideas or patterns or what am I missing here?
btw I can always use different database, however considering Table Storage is so cheap I don't want to.
PartitionKey and RowKey are the only two indexed properties. To work around the lack of secondary indexes, you can store multiple copies of each entity, with each copy using a different RowKey value. For instance, one entity will have PartitionKey=DepartmentName and RowKey=EmployeeID, while the other will have PartitionKey=DepartmentName and RowKey=EmailAddress. That will allow you to look up an employee either by EmployeeID or by EmailAddress. The Azure Storage Table Design Guide (http://azure.microsoft.com/en-us/documentation/articles/storage-table-design-guide/) has a more detailed example and all the information you need to design scalable and performant tables.
We will need more information to answer your second question about how you would migrate the contents of your chat service to Table Storage. We need to understand the format and structure of the data you currently store there.
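The dual-entity pattern above can be sketched as follows, with an in-memory dict keyed by (PartitionKey, RowKey) standing in for a table; entity fields and the RowKey prefixes are illustrative.

```python
# Hypothetical sketch of the "multiple copies, different RowKey" workaround
# for Table Storage's lack of secondary indexes.
table = {}  # simulates a table keyed by (PartitionKey, RowKey)

def insert_employee(department, employee_id, email, name):
    entity = {"EmployeeID": employee_id, "Email": email, "Name": name}
    # Copy 1: RowKey based on EmployeeID
    table[(department, "id_" + employee_id)] = entity
    # Copy 2: same data, RowKey based on email address
    table[(department, "email_" + email)] = entity

insert_employee("Sales", "42", "ada@example.com", "Ada")

# Both lookups are direct point reads on indexed keys --
# no full partition scan needed.
by_id = table[("Sales", "id_42")]
by_email = table[("Sales", "email_ada@example.com")]
assert by_id["Name"] == by_email["Name"] == "Ada"
```

The cost of this pattern is write amplification: every update must keep all copies consistent, which in real Table Storage you would do with an entity group transaction within the partition.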

PET technology with Fluent NHibernate

For a web application (with some real private data) we want to use privacy-enhancing technology to limit the damage if someone gains access to our database.
The application is built in different layers, and (as the title says) we use Fluent NHibernate to connect to our database; we've created our own wrapper class to create queries.
Security is a big issue for the kind of application we're building. I'll try to explain the setting by a simple example:
Our customers have clients in their application (each installation of the application uses its own database), for whom some sensitive data is stored; there is a client table and a person table, which are linked.
The base table, which links to the other tables (there will soon be hundreds of them) that probably contain sensitive data, is the client table.
At this moment, the client has a client_id and a table_id in the database. Our customer only knows the client_id; the system links the data by the table_id, which is unknown to the user.
What we want to ensure:
A hacker who gained access to our database should not be able to see the link between the customer and the other tables just by opening the database. So there should be some kind of "hidden link" between the customer and the other tables; the personal data and all the other sensitive tables should not be obviously linked together.
Because of the data's sensitivity, we're looking for a more robust solution than "statically hash the table_id and use it in the other tables", so that when one person is linked to the corresponding client, the other clients' data is not all compromised too.
Ultimately, the customer table should not be linkable to the other tables at all from inside the database; the application code should be needed to link the tables.
To accomplish this we've been looking into different methods, but because of the multiple tables linked to this client, and further development (and thus probably even more tables), we're looking for a centralised solution. That's why we concluded this should be handled in the database connector. Searching the internet and here on Stack Overflow did not point us in the right direction; perhaps we couldn't find it because of the wrong search terms (PET and privacy-enhancing technology, combined with NHibernate, did not give us any directions).
How can we accomplish our goals in this specific situation, or where should we search for help with this?
We have a similar requirement for our application, and we ended up using database schemas.
We have one database, and each customer has a separate schema where all the data for that customer is stored. It is possible to link from the schema to the rest of the database, but not to other schemas.
Security can be set for each schema separately, so you can make a hacker's life harder.
That being said, I can also imagine a solution where you let NHibernate encrypt every piece of data it sends to the database and decrypt everything it gets back. The data will be stored safely, but it will be very difficult to query over it.
So there is probably not a single answer to this question; you have to decide what is better: not being able to query, or just making it more difficult for a hacker to get to the data.
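One middle ground between those two extremes, sketched here with Python's stdlib only, is a keyed hash (HMAC) of the table_id instead of a static hash: the key lives only in the application layer, so an attacker with database access alone cannot reproduce or reverse the links. The secret name and function below are illustrative assumptions, not a specific NHibernate feature.

```python
import hashlib
import hmac

# Hypothetical sketch: a keyed (rather than static) hash for the link
# between the client table and its related tables. The key must live
# outside the database (app config, key vault); value is illustrative.
APP_SECRET = b"kept-outside-the-database"

def link_token(table_id: str) -> str:
    # Unlike a plain hash, an HMAC cannot be recomputed without the key,
    # so discovering one client's link does not expose the others.
    return hmac.new(APP_SECRET, table_id.encode(), hashlib.sha256).hexdigest()

# Related tables store link_token(table_id) instead of table_id itself;
# only the application, which holds the key, can recreate the mapping.
print(link_token("client-17"))
```

The tokens remain deterministic, so equality lookups (and thus ORM joins done in application code) still work, which is the property full per-value encryption sacrifices.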

Geocoding database provider (SQL, NoSQL) and schema

Companies like Yahoo, Google, and MS provide geocoding services. I'd like to know the best way to organize the backend for such services: what is the optimal solution in terms of database provider (SQL vs. NoSQL) and database schema?
Some providers use Extensible Address Language (xAL) to describe an entity in the geocoding response. xAL has more than 30 data elements. Google geocoding API has about 20 data elements.
So in the case of a SQL database, there would be 20-30 tables with mostly one-to-many relationships via foreign keys?
What about NoSQL databases, like MongoDB? How would one organize such a database? Lots of collections, one for each data element, similar to SQL? Or one collection where each document completely describes a given entity in the address space?
It's hard to say... It depends on what you need to do with the data in terms of analysis and caching.
I had to deal with geo coordinates, but our app is very simple and we don't need to manipulate the geolocations in the DB, simply store and retrieve them. So I store the start and end points of each route in two columns and a polyline in a binary column, with a few milestones saved in a dedicated SQL table.
But for more advanced use of our app we considered using this: https://simplegeo.com/
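To picture the "one document per entity" option from the question, here is a minimal sketch of a nested address document (a plain Python dict; in MongoDB this would be one document in an addresses collection). The field names loosely follow the xAL/Google-style elements mentioned above and are illustrative.

```python
# Hypothetical sketch of the embedded-document model for a geocoding
# backend: one nested document per address instead of 20-30 normalized
# tables. Field names are illustrative.
address_doc = {
    "formatted": "1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA",
    "components": {
        "street_number": "1600",
        "route": "Amphitheatre Pkwy",
        "locality": "Mountain View",
        "administrative_area": "CA",
        "postal_code": "94043",
        "country": "USA",
    },
    "location": {"lat": 37.4224, "lng": -122.0842},
}

# Everything needed for a geocoding response is a single lookup,
# with no joins across per-element tables.
print(address_doc["components"]["locality"])
```

The trade-off mirrors the SQL-vs-NoSQL question itself: the embedded form is fast to serve whole, while the normalized 20-30-table form is easier to query and deduplicate element by element.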

Can I create domain schema only (without any data) in Amazon SimpleDB?

I am evaluating Amazon SimpleDB at this time. SimpleDB is very flexible in the sense that it does not have to have table (or domain) schemas; the schema evolves as the create/update commands flow in. All this is good, but while I am using a modeling tool (evaluating MindScape LightSpeed) I require the schema upfront, in order for the tool to generate models based on it. I can handcraft domains in SimpleDB, and that does help, but for that I have to perform at least one create operation on the domain. I am looking for the ability to create the domain schema only. Any clues?
There is no schema in SimpleDB.
This is the reason the NoSQL people suggest "unlearning" relational databases before shifting your paradigm to these non-relational data stores.
So you cannot do what you describe: without data, there will be nothing.
While it's true that SimpleDB has no schema support, keeping some type information turns out to be crucial if you run queries on numeric data or dates*. Most NoSQL products have both queries and types, or else no queries and no types, but SimpleDB has chosen queries and no types.
As a result, integrating with any tool outside of your main application will require you to either:
store duplicate type information in different places
create your own simple schema system to store the type information
Option 2 seems much better and choosing it, despite what some suggest, does not mean that you "don't have your mind right."
S3 can be a good option for this data, you can keep it in a file with the same name as your domain and it will be accessible from anywhere with the same AWS credentials as your SimpleDB account.
Storing the data as a list of attributename=formatname pairs is the extent of what I have needed to do. You can, in fact, store all of this in an item in your domain. The only issue is that this special item could unintentionally come back from a domain query where you are expecting live data, not type information.
I'm not familiar with MindScape LightSpeed, but this is a general strategy I have found beneficial when using SimpleDB, and if the product is able to load/store a file in S3 then all the better.
*Note: just to be clear, I'm not talking about reinventing the wheel or trying to use SimpleDB as a relational database. I'm talking about the fact that numeric data must be stored with both zero padding (to a length of your choosing) and an offset value (depending on whether it is signed or unsigned) in order to work with SimpleDB's string-based query language. Once you decide on a format, or a set of formats to be used in your application, it would be folly to leave that information hidden in and scattered across your source files when it is needed by source code tools, query tools, reporting tools, or any other code.
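The zero-padding-plus-offset encoding described in the note can be sketched as follows; the width and offset are illustrative choices that must be fixed per attribute across the whole dataset.

```python
# Hypothetical sketch of the zero-padding + offset encoding for storing
# numbers in SimpleDB's string-based query language. WIDTH and OFFSET
# are illustrative; pick them to cover your attribute's value range.
WIDTH = 10          # total digits after padding
OFFSET = 10**9      # shift so signed values become non-negative

def encode(n: int) -> str:
    return str(n + OFFSET).zfill(WIDTH)

def decode(s: str) -> int:
    return int(s) - OFFSET

# Lexicographic order on the encoded strings matches numeric order,
# so SimpleDB's string comparisons (e.g. in range queries) work correctly.
values = [-5, 0, 42, 1000]
encoded = [encode(v) for v in values]
assert encoded == sorted(encoded)
assert [decode(s) for s in encoded] == values
```

This is exactly the per-attribute format information the answer suggests recording centrally (e.g. in S3 or a special domain item) rather than leaving implicit in application code.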