I am working with a client that has data in an MSSQL database. I only have read access, over a remote ODBC connection, and cannot modify the database in any way.
I'd like to replicate a subset of the data locally in an open-source alternative, syncing once per day or so. This is largely to eliminate reads against the data during peak hours. The local data will be used in a Rails 4 application. Note that syncing only needs to be one-way, as I don't have write access.
How can I best accomplish this?
FreeTDS?
Are there any libraries that will help with the syncing, or can I expect to write all the glue code myself?
I would advise creating a Ruby script that can be scheduled to do the data retrieval.
To connect to the MSSQL database, take a look at this simple project I've created.
Then you only need to write the code that selects the data you want and stores it locally.
I prefer keeping this decoupled from your Rails application, although you could use a scheduler like rufus-scheduler or Sidekiq and run it within the application.
Related
We have a multi-tenant, single db application where some customers have expressed the desire to get direct access to their own data.
It has been suggested that I look into Azure Data Sync to achieve a setup where each customer gets their own Azure SQL instance, to which we set up a one-way synchronization of their data from the master database.
I managed to find some documentation on this, but once I got around to trying it out in a lab setup, it looks like the ability to filter rows in the sync job has been removed in a later iteration of the Azure Data Sync service.
Am I wrong or is that feature really gone? If so, what would be your suggestions to achieve something similar on Azure?
You cannot filter rows using Azure SQL Data Sync. However, you can build a custom solution based on Sync Framework as explained here.
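If you go that route, the filtered provisioning looks roughly like the sketch below; the Orders table, TenantId column, and scope name are placeholders, and a real solution needs error handling and batching on top of it.

    using System.Data.SqlClient;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    // Sketch: provision a per-tenant filtered scope on the master database,
    // then download only that tenant's rows into their own Azure SQL database.
    public static class TenantSync
    {
        public static void Run(SqlConnection master, SqlConnection tenantDb, int tenantId)
        {
            string scope = "tenant_" + tenantId + "_orders";

            // Describe and provision the scope on the master, filtered by TenantId.
            var scopeDesc = new DbSyncScopeDescription(scope);
            scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", master));

            var serverProvisioning = new SqlSyncScopeProvisioning(master, scopeDesc);
            if (!serverProvisioning.ScopeExists(scope))
            {
                serverProvisioning.Tables["Orders"].AddFilterColumn("TenantId");
                serverProvisioning.Tables["Orders"].FilterClause = "[side].[TenantId] = " + tenantId;
                serverProvisioning.Apply();
            }

            // Provision the tenant database for the same scope.
            var clientProvisioning = new SqlSyncScopeProvisioning(
                tenantDb, SqlSyncDescriptionBuilder.GetDescriptionForScope(scope, master));
            if (!clientProvisioning.ScopeExists(scope))
                clientProvisioning.Apply();

            // One-way sync: master -> tenant.
            var orchestrator = new SyncOrchestrator
            {
                RemoteProvider = new SqlSyncProvider(scope, master),
                LocalProvider = new SqlSyncProvider(scope, tenantDb),
                Direction = SyncDirectionOrder.Download
            };
            orchestrator.Synchronize();
        }
    }

This creates one scope per tenant with a static filter clause, which is the simplest setup; it does mean provisioning a new scope whenever a tenant is added.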
I have a SQL Azure database instance which provides data to a Windows 8 app. The data in my database should be updated periodically (weekly). Is there any way to do this? I'm thinking of writing an app which will run weekly and update the database, but I still don't know how to make it run on Windows Azure. Please help!
Thank you,
There are a number of ways to achieve this. Does the data need to come from a different source, or can it be calculated?
Either way, seeing as you're already knee-deep in SQL Azure, I would suggest putting your logic into a worker role that can be scheduled to run your updates once a week. This would give you a great opportunity to do calculations and/or fetch data externally.
Azure also gives you the flexibility to scale this worker role out to numerous instances, depending on the workload.
Here is a nice intro tutorial on creating a worker role on Azure: link
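As a rough sketch (the UpdateDatabase body is yours to fill in), the worker role's Run loop can simply wake up once a week:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Minimal weekly worker role. UpdateDatabase() is a placeholder for your own
    // logic: fetch or calculate the new data, then write it to the SQL Azure database.
    public class WeeklyUpdateRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                UpdateDatabase();
                // If the role instance is recycled the timer starts over, so a more
                // robust version stores a "last run" timestamp in a table and polls
                // on a shorter interval instead of sleeping for a full week.
                Thread.Sleep(TimeSpan.FromDays(7));
            }
        }

        private void UpdateDatabase()
        {
            // e.g. open a SqlConnection to the SQL Azure database and apply the updates
        }
    }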
Write the application and set it to run through your cron job manager on a weekly time schedule. The cron job manager should be provided by your host.
This might be too broad, but it's a problem I'm having a bear of a time dealing with. We have an application that we distribute to our end users. It's running on top of a Derby back end. We can push out code changes fairly easily: it'll go out to our server, see there's a new version, download it, overwrite the old code, and reboot.
But as we change our code, we also alter the schema of the Derby database. We don't have a great method to update this. Currently we can push SQL updates via FTP. When the program is connected to the Internet, it looks for new SQL files, downloads them, and runs them.
Unfortunately a lot of our clients have limited Internet access, so they get these updates intermittently. Sometimes, because the changes are big enough, their local DB schema gets out of sync with what we want. Or they get the code changes via CD but not the SQL changes (someone mails them the CD).
What I've been trying to do is create a SOAP service that can serve up XML representations of the schema. It's been a huge PITA to develop so far.
What are some methods people are currently using to maintain databases like this? I feel like I'm not the first to do this, so there might be better ways than what I'm doing.
Based on some comments here, here's an update:
Basically, I think we screwed ourselves early on by not adhering to strict versioning of the DB, so I don't know what state everyone's DB is in. A lot of people got custom installs built (groan at will). I need a tool that can tell the differences between their DB and an "official" copy.
I have a tool built, and it kind of works, but there are so…many…things to keep track of.
Can you distribute the DB changes as part of the code changes? Then, when the app restarts, it checks if it needs to run any updates on the DB.
Obviously, you'll need to version the DB schema to avoid applying the same update more than once.
I know some applications that do this (mostly in Ruby, but also in Java).
If you already have an update mechanism in place in your application that can download a program to alter the installed source code, why not package and run the schema changes as part of that upgrade process? I would just run the updates as part of the Java application then.
My team at work handles these changes by using the MyBatis Migration tool, which represents each schema change as a single migration script containing both the "make change" and "rollback" steps. A changelog table stored in the database lists which updates have been applied, which makes it easy for the migrate command to determine which updates it still needs to apply when run. This specific tool is probably only really useful when you control the database and can run shell commands and scripts to alter it, but you can use the same concepts in your approach: package each schema change as an atomic unit, and run the changes from within your program to bring the schema up to the current version, which you can track in the DB itself.
You'll need a table containing the version of the database that the user is running, and then you'll need code to upgrade from version n to version n+1. Assuming you have a database user that has access to do schema changes, you can apply schema changes the same way you're now applying code changes.
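A minimal sketch of that pattern, using a single-row schema_version table and an ordered set of upgrade scripts shipped inside the application (shown in C# over plain ADO.NET; the same shape works from the Java client against Derby via JDBC, and the table and script names are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Data;

    // scripts maps a schema version to the SQL that upgrades the previous version to it.
    // schema_version is assumed to be a single-row table holding the current version.
    public static class SchemaUpgrader
    {
        public static void Upgrade(IDbConnection conn, IDictionary<int, string> scripts)
        {
            int current;
            using (IDbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT version FROM schema_version";
                current = Convert.ToInt32(cmd.ExecuteScalar());
            }

            // Apply each missing upgrade (n -> n+1) in order, bumping the version as we go.
            for (int next = current + 1; scripts.ContainsKey(next); next++)
            {
                using (IDbTransaction tx = conn.BeginTransaction())
                {
                    using (IDbCommand cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText = scripts[next];
                        cmd.ExecuteNonQuery();
                    }
                    using (IDbCommand cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText = "UPDATE schema_version SET version = " + next;
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
            }
        }
    }

The important part is that the version bump and the schema change commit together, so a partially applied update can't leave a client looking up to date when it isn't.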
I'm working on a VB.Net 3.5 WinForms application that uses Access 2003 (JET 4.0) as the database backend through ADO.Net.
I'd like to check the database for changes, before the application decides to refresh the data from the server. Are there any best practices for this, or should I trust the ADO.Net environment to optimise/handle this?
I was thinking of using a limited log on the server, which gets updated by every change. Pulling this log could tell whether or not a certain table has changed data. Any good?
The easiest way is to use a file-based cache, which would simply be invalidated whenever anything is written to the database.
This won't give you any table-specific caching, so it isn't the most efficient cache imaginable.
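Since a JET database is just a single .mdb file, one cheap way to implement that whole-database invalidation (a sketch; the path and the Orders table are made up) is to compare the file's last-write time before re-querying:

    using System;
    using System.Data;
    using System.Data.OleDb;
    using System.IO;

    // Whole-database cache for an Access/JET backend: re-query only when the .mdb
    // file's last-write time has changed since the data was last loaded.
    public class MdbCache
    {
        private readonly string _mdbPath;
        private DateTime _loadedAt = DateTime.MinValue;
        private DataTable _orders;   // "Orders" is a placeholder table name

        public MdbCache(string mdbPath) { _mdbPath = mdbPath; }

        public DataTable GetOrders()
        {
            DateTime stamp = File.GetLastWriteTimeUtc(_mdbPath);
            if (_orders == null || stamp > _loadedAt)
            {
                using (var conn = new OleDbConnection(
                    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + _mdbPath))
                using (var adapter = new OleDbDataAdapter("SELECT * FROM Orders", conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);   // Fill opens and closes the connection itself
                    _orders = table;
                    _loadedAt = stamp;
                }
            }
            return _orders;
        }
    }

Any write touches the file's timestamp, so this invalidates everything at once, which matches the "not table-specific" caveat above.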
I have the following scenario: our desktop application talks to SQL Server on another machine. We are using NHibernate 2.1.2. Now we want to use SQLite on the client machine to store data which could not be uploaded. For example, if the Order table has not been updated on SQL Server, we want to save the data to SQLite and then later try to upload it to SQL Server. So we are thinking of using NHibernate for storing data in SQLite as well. How do I configure NHibernate to achieve this?
thanks
You will need to create a whole new session/session source; NHibernate cannot simply switch contexts at the push of a button. Your best bet is to spin up a separate repository and session that point at that specific second database.
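Concretely, that means building a second Configuration and ISessionFactory that point at the SQLite file while reusing the same mappings. A sketch against NHibernate 2.1 (the connection string and the mapping assembly name are assumptions):

    using NHibernate;
    using NHibernate.Cfg;

    // One session factory per database; both share the same entity mappings.
    public static class SessionFactories
    {
        public static readonly ISessionFactory SqlServer = BuildSqlServerFactory();
        public static readonly ISessionFactory Sqlite = BuildSqliteFactory();

        private static ISessionFactory BuildSqlServerFactory()
        {
            // Your existing setup, e.g. read from hibernate.cfg.xml.
            return new Configuration().Configure().BuildSessionFactory();
        }

        private static ISessionFactory BuildSqliteFactory()
        {
            var cfg = new Configuration();
            cfg.SetProperty("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
            cfg.SetProperty("connection.driver_class", "NHibernate.Driver.SQLite20Driver");
            cfg.SetProperty("connection.connection_string", "Data Source=offline.db;Version=3;");
            cfg.SetProperty("dialect", "NHibernate.Dialect.SQLiteDialect");
            cfg.AddAssembly("MyApp.Domain");   // assumed: the assembly holding the Order mappings
            return cfg.BuildSessionFactory();
        }
    }

When an upload to SQL Server fails, open a session from the SQLite factory and save the order there; a later retry reads it back and pushes it through the SQL Server factory. The SQLite file needs a matching schema, which you can generate from the same mappings with NHibernate's SchemaExport.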