I am using CouchDB, and each user has their own database.
However, I have a web app that should be able to look up an _id in any database and return the document. With thousands of users, querying across thousands of CouchDB databases would be impractical.
How can I replicate my data to a single master database so that I can query by _id?
What I ended up doing is replicating every user database to a master database at creation time, using the _replicate API endpoint. The master database then contains a read-only copy of all the other databases.
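A minimal sketch of that call, assuming a CouchDB server on localhost:5984 with admin credentials and a target database named master (all of these names are placeholders):

```python
import requests

COUCH = "http://admin:password@localhost:5984"  # assumed server URL/credentials

def replicate_to_master(user_db: str, master_db: str = "master") -> None:
    """Start continuous replication from one user database into the master copy."""
    resp = requests.post(
        f"{COUCH}/_replicate",
        json={
            "source": user_db,
            "target": master_db,
            "continuous": True,  # keep pushing changes as they happen
        },
    )
    resp.raise_for_status()

# Call this right after creating each user's database, e.g.:
replicate_to_master("userdb-alice")
```

One caveat: continuous replications started through _replicate do not survive a server restart; for a persistent setup, the same source/target/continuous document can be saved into the _replicator database instead.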
Related
I need to create a database solely for analytical purposes. It should start off as a 1:1 replica of a current SQL Server database, to which we will then add additional tables. The goal is to have read-write access to a database without the risk of inadvertently dropping anything in production.
We would ideally like to set a daily refresh schedule to update all tables in the new db to match the tables in the live environment.
In terms of the DBMS for the new database, I am flexible: MySQL, SQL Server, or PostgreSQL would be great. I am not hugely familiar with the Google Cloud Storage/BigQuery stack, but if this is an easy option, I'm open to it.
You could use a standard HA/DR solution with a readable secondary (Availability Groups, mirroring, or log shipping), and then have a second database on the new server for your additional tables.
Cloud Storage and BigQuery are not RDBMS services themselves, but they could be used in this case to store the backups/exports/dumps from the replica, with the analytical work then performed on those copies.
Here is an example workflow:
1. Perform a backup and restore into a different database
2. Add the new tables in the new database
3. Export the database as a CSV file on your local machine
4. Either load the CSV file directly into BigQuery, or upload it to a previously created Cloud Storage bucket (see the sketch after this list)
5. Query the data
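As a rough sketch of steps 4 and 5, loading the exported CSV from a Cloud Storage bucket into BigQuery with the Python client library could look like this (the project, dataset, table, and bucket names are all placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project and credentials

# Assumed names: dataset "analytics", table "orders", and a CSV already
# uploaded to a Cloud Storage bucket in step 4.
table_id = "my-project.analytics.orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema from the file
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders.csv", table_id, job_config=job_config
)
load_job.result()  # wait for the load to finish

print(client.get_table(table_id).num_rows, "rows loaded")
```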
I suggest taking a look at the multiple methods for loading data into BigQuery, as well as the methods for querying against external data sources, which may help you determine which database replication/export method is best for your use case.
What are the disadvantages of using the master database in SSMS when querying?
One of the managers here was running his query against the master database and it took 56 minutes to finish, but when we ran it directly against the database (sdbfile.dbo.) it only took 32 seconds.
Usually users only have the public role for connecting to the master database. The disadvantage is that SQL Server has to delegate again after authentication to the specific database referenced in the query.
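Whatever the underlying cause, the practical fix is to make sure the session runs in the context of the target database rather than master. A sketch of both options, with placeholder server, table, and credential names:

```python
import pyodbc

# Option 1: connect with the right default database in the first place.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=sdbfile;"  # not DATABASE=master
    "UID=user;PWD=password"
)
cur = conn.cursor()

# Option 2: switch context explicitly in an existing session...
cur.execute("USE sdbfile;")
# ...and/or fully qualify objects rather than relying on the current context.
cur.execute("SELECT TOP 10 * FROM sdbfile.dbo.SomeTable;")
```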
I've realized that Azure SQL Database does not support doing an insert/select from one database into another, even if they're on the same server. We receive data files from clients, and we process and load them into a "load database". Once the load is complete, based upon various rules, we then move the data into one of about 20 production databases, all clones of each other (the data only goes into one of them).
I'm looking for a solution that will allow us to move the data. There can be 500,000 records in a load file, so moving them one by one is not really feasible.
Have you tried Elastic Query? Here is the Getting Started guide for it. Currently you cannot perform remote writes, but you can always read data from remote tables.
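As a sketch of how that could work here, assuming a load database named LoadDb and production databases that pull from it (every name and credential below is a placeholder): since remote reads are allowed, each production database can define an external table over the load database's staging table and then do a set-based INSERT ... SELECT locally.

```python
import pyodbc

# Elastic Query setup, run once in EACH production database that needs to
# read from the load database.
SETUP = [
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'",
    """CREATE DATABASE SCOPED CREDENTIAL LoadDbCred
           WITH IDENTITY = 'loaduser', SECRET = 'L0adUser!Secret'""",
    """CREATE EXTERNAL DATA SOURCE LoadDbSrc WITH (
           TYPE = RDBMS,
           LOCATION = 'myserver.database.windows.net',
           DATABASE_NAME = 'LoadDb',
           CREDENTIAL = LoadDbCred)""",
    # Mirrors the staging table's schema in the load database:
    """CREATE EXTERNAL TABLE dbo.StagedRecords (
           Id INT,
           Payload NVARCHAR(MAX)
       ) WITH (DATA_SOURCE = LoadDbSrc)""",
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=ProdDb01;"
    "UID=admin;PWD=Adm1n!Secret"
)
conn.autocommit = True  # commit each statement as it runs
cur = conn.cursor()
for stmt in SETUP:
    cur.execute(stmt)

# Remote writes are not supported, but remote reads are, so pull the whole
# batch from the load database into the production database in one statement:
cur.execute("INSERT INTO dbo.Records (Id, Payload) "
            "SELECT Id, Payload FROM dbo.StagedRecords")
```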
Hope this helps!
I'm trying to sync between two SQL Azure databases as a workaround for the inability to do cross-database queries. Basically I have 5 tables in a small database which is updated very frequently, and I want the contents of those 5 tables in my main application database, so I've created them with identical schemas in each.
The sync SEEMS to be working, but what I end up with is a load of tables in another schema and nothing in my own tables. E.g. my tables are dbo.ad, dbo.adgroup, etc., but I get datasync.ad, datasync.adgroup, etc.
And what we learn from this is patience.
SQL Azure Data Sync created those datasync schema tables as a tracking mechanism for the sync. It can take a little while (approx. 30 minutes for me) before your data starts to appear, but appear it does.
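If you want to watch for the moment the rows land in your own schema rather than refreshing by hand, a small polling loop does it (the connection details are placeholders; dbo.ad is one of the tables from the question):

```python
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=MainAppDb;UID=user;PWD=password"
)
cur = conn.cursor()

while True:
    count = cur.execute("SELECT COUNT(*) FROM dbo.ad").fetchval()
    print(f"dbo.ad currently has {count} rows")
    if count > 0:
        break       # the sync has started populating your tables
    time.sleep(60)  # the first pass can take around 30 minutes
```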
We've got the following scenario:
Central Database (replicated across multiple servers)
Client Database 1
Client Database 2
The Central db has Users and Roles, amongst other things.
The Client dbs have similar tables to each other, but with some fields tweaked: contact, address, etc.
At present, each client db has its own user/role information which is copied from the central db by a scheduled process. I want to retrieve the user/role information directly from the central db instead (bearing in mind tables in the client db make reference to the user entity)
Is this even possible? If not, what's a better approach for having central user configuration across multiple databases?
Does this mean that you have referential integrity between tables?
"bearing in mind tables in the client db make reference to the user entity"
If yes: as long as you have referential integrity between tables, they must be in the same database. That points to your current solution being the best.
If no, then linked tables would be the way to go: the tables would appear to be local, but the data would be retrieved from the central database each time (see the sketch below).
Note also that EF4 will not generate linked tables.
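SQL Server's closest equivalent to Access-style linked tables is a linked server plus synonyms (or views) over the remote objects. A sketch, with placeholder server, database, and table names:

```python
import pyodbc

# Run against the client database's server.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=clientserver;DATABASE=ClientDb1;Trusted_Connection=yes"
)
conn.autocommit = True
cur = conn.cursor()

# Register the central server once per client server:
cur.execute(
    "EXEC sp_addlinkedserver @server = N'CENTRAL', @srvproduct = N'', "
    "@provider = N'SQLNCLI', @datasrc = N'centralserver'"
)

# A synonym (or a view) makes the remote table look local to the client db:
cur.execute("CREATE SYNONYM dbo.Users FOR CENTRAL.CentralDb.dbo.Users")

# Queries against dbo.Users now hit the central database each time:
for row in cur.execute("SELECT TOP 5 * FROM dbo.Users"):
    print(row)
```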
Your other option would be to go for a more service-oriented architecture, creating a user service exposed as a web service. But this is probably a lot of work.
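For a feel of the shape of that, here is a deliberately bare-bones sketch of such a user service using Flask and pyodbc; every name in it is a placeholder, and a real version would need authentication, caching, and error handling:

```python
from flask import Flask, jsonify
import pyodbc

app = Flask(__name__)

def central():
    # One connection per request keeps the sketch simple.
    return pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=centralserver;DATABASE=CentralDb;Trusted_Connection=yes"
    )

@app.get("/users/<int:user_id>")
def get_user(user_id: int):
    # Client apps call this endpoint instead of keeping a copied Users table.
    row = central().cursor().execute(
        "SELECT Id, Name FROM dbo.Users WHERE Id = ?", user_id
    ).fetchone()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row.Id, name=row.Name)

if __name__ == "__main__":
    app.run()
```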