Cache MySQL connection in Rails, no ActiveRecord - ruby-on-rails-3

I'm working on a Rails app with MongoDB, using Mongoid as the ORM; I am not using ActiveRecord in this project. However, I do need to connect to a couple of MySQL databases to read and write some data. In total there are 3 queries I need to run against these databases.
For what I need to do, there is no reason to pull in ActiveRecord; it would be overkill.
The problem is that every time I need data from MySQL, I open a new connection to the server and close it a couple of seconds later. These queries run frequently, about every 5 seconds. Is there a way to keep a MySQL connection around and reuse it instead of closing and reopening one each time? That seems like less overhead. I'd also like a way to re-open the connection if for some reason it closes unexpectedly. I remember there was a fix some time ago for the "MySQL server has gone away" error, and I suspect I'll run into it if I'm not using ActiveRecord.
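One approach, shown here only as a minimal sketch, is to memoize a single client from the mysql2 gem and rebuild it when a query fails; the module name, credentials, and table names below are placeholders, not part of the original question:
require 'mysql2'

module LegacyDb
  # Placeholder settings; point these at your real MySQL servers.
  CONFIG = {
    host: 'localhost',
    username: 'app',
    password: 'secret',
    database: 'legacy',
    reconnect: true   # ask the client to re-open dropped connections
  }.freeze

  # Reuse one client instead of opening a fresh connection for every query.
  def self.connection
    @connection ||= Mysql2::Client.new(CONFIG)
  end

  def self.query(sql)
    connection.query(sql)
  rescue Mysql2::Error
    # The cached handle has gone away; drop it and retry once on a new one.
    @connection = nil
    connection.query(sql)
  end
end
Usage from the code that runs every 5 seconds would look like LegacyDb.query("SELECT id, total FROM orders"), and the same TCP connection is reused across calls instead of being opened and closed each time.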

Related

How to use sqlite database for repeated calls

In my project I need to hit the SQLite database constantly. I was getting the "database is locked" error, which I mostly solved by giving SQLite a timeout of 500 ms, but I still get the error sometimes. Is there any way to avoid the database locked error altogether?
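If the calls are made from Ruby, a rough sketch with the sqlite3 gem (the file name and table are placeholders) would be to raise the busy timeout and switch the journal to WAL mode so readers and a single writer can overlap:
require 'sqlite3'

db = SQLite3::Database.new('app.sqlite3')   # placeholder path
db.busy_timeout = 2000                      # wait up to 2 seconds instead of raising "database is locked"
db.execute('PRAGMA journal_mode = WAL')     # readers no longer block the writer

db.execute('SELECT count(*) FROM readings') do |row|
  puts row.first
end
Even with these settings only one writer can proceed at a time, so the timeout mainly gives contending writers room to queue instead of failing immediately.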

Remote MSSQL/ODBC Syncing With Rails

I am working with a client that has data in an MSSQL database. I only have read access to a remote ODBC connection and cannot modify the database in any form.
I'd like to replicate a subset of the data locally in an open-source alternative, syncing once per day or so. This is largely to eliminate reads against the data during peak hours. The local data will be used in a Rails 4 application. Note that syncing only needs to be one-way, as I don't have write access.
How can I best accomplish this?
FreeTDS?
Are there any libraries that will help with the syncing, or can I expect to write all the glue code myself?
I would advise you to write a Ruby script that can be scheduled to retrieve the data.
To connect to the MSSQL database, take a look at this simple project I've created.
Then you only need to write the code that pulls the data you want and stores it however you like.
I prefer the approach of keeping this decoupled from your Rails application, although you could use a scheduler like rufus-scheduler or sidekiq and run it with your application.
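As a rough illustration only (this is not the answerer's project; it assumes the tiny_tds gem, which builds on FreeTDS, plus rufus-scheduler, and uses made-up credentials and table names), such a script could look like this:
require 'json'
require 'tiny_tds'
require 'rufus-scheduler'

def sync_accounts
  # Read-only connection to the remote MSSQL server (placeholder credentials).
  remote = TinyTds::Client.new(
    username: 'readonly', password: 'secret',
    host: 'mssql.example.com', database: 'crm'
  )
  rows = []
  remote.execute('SELECT id, name, updated_at FROM accounts').each { |row| rows << row }
  remote.close

  # Store the snapshot locally however you like: a local database, a file, etc.
  File.write('accounts.json', JSON.pretty_generate(rows))
end

scheduler = Rufus::Scheduler.new
scheduler.every('24h') { sync_accounts }   # one-way pull, once per day
scheduler.join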

How do you translate old SQL database data to a new table layout?

We have an old database with a poorly thought out table structure, virtually no relationships set up, and no naming scheme. I've created a new database with a clean relational data structure that implements proper design practices.
I'm looking for advice on different methods to migrate the old data over to the new format. This will require a lot of data re-shaping which won't be fun. The data is heavily accessed and the challenge will be to keep both databases in sync for all relevant data (accounts, important services etc).
I thought triggers might be the way to go here - but maybe there is a different method that I am unaware of (maybe MS Sync Framework, or a code-level data adapter which will be more work because there is so much data access code spread all over the place, classic ASP and .Net over dozens of projects). The database in question is SQL Server 2005, running in SQL Server 2000 compatibility mode.
I think the way to go is to write a stored procedure in the new database that pulls only the delta changes (the modifications made between the last run and the moment the stored procedure executes), and to put that stored procedure in a SQL Agent job.
Configure the SQL Agent job to run every 15 minutes and let the data sync in.
Disadvantages of using triggers in this scenario:
Triggers reduce performance, because SQL Server executes the trigger code along with every update/insert/delete statement and includes it in the execution each time. For example, if your trigger code takes 2 seconds and the update statement alone takes 2 seconds, the update will take 4 seconds with the trigger in place. Employing triggers here could therefore become a serious performance bottleneck.
I'm dealing with the same situation at my work, and I'm currently writing an application to do the migration. The original database has no established relationships, so it's really like a set of disconnected spreadsheets. By building my own application, I'm able to migrate the data using newly-established foreign keys, and assign data-specific defaults in place of nulls.

SQLite, open one permanent connection or not?

I have been under the impression that database connections are best opened, used, and closed. However, with SQLite I'm not sure that this applies. I run all my queries inside a Using Connection statement, so as I understand it I open a connection and then close it every time. When it comes to SQLite and optimal usage, is it better to open one permanent connection for the duration of the program, or should I continue with the method I currently use?
I am using the database in a VB.NET Windows program with a fairly large DB of about 2 GB.
Here is an example of my current connection method:
Using oMainQueryR As New SQLite.SQLiteCommand
    oMainQueryR.CommandText = "SELECT * FROM CRD"
    Using connection As New SQLite.SQLiteConnection(conectionString)
        Using oDataSQL As New SQLite.SQLiteDataAdapter
            oMainQueryR.Connection = connection
            oDataSQL.SelectCommand = oMainQueryR
            connection.Open()
            oDataSQL.FillSchema(crd, SchemaType.Source)
            oDataSQL.Fill(crd)
            connection.Close()
        End Using
    End Using
End Using
As with all things database, it depends. In this specific case of sqlite, there are two "depends" you need to look at:
Are you the only user of the database?
When are implicit transactions committed?
For the first item, you probably want to open and close separate connections frequently if there are other users of the database, or if it's at all possible that more than one process will be hitting your sqlite database file at the same time.
For the second item, I'm not sure how sqlite specifically behaves. Some database engines don't commit implicit transactions until the connection is closed. If this is the case for sqlite, you probably want to be closing your connection a little more often.
The idea that connections should be short-lived in .NET applies mainly to Microsoft SQL Server, because the .NET provider for SQL Server can take advantage of a feature known as connection pooling. Outside of SQL Server this advice is not entirely without merit, but it's not as much of a given.
If it is a local application used by only one user, I think it is fine to keep one connection open for the life of the application.
I think with most databases the "open, use, and close" idea comes from the perspective of saving memory by ensuring you only have the minimum number of connections open that you need.
In reality, opening a connection can involve a large amount of overhead and should only be done when needed. This is why managed server infrastructure (WebLogic etc.) promotes the use of connection pooling: you have N connections usable at any given time, so you never waste resources, but you also aren't left with the responsibility of managing them at a global level.
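To make the pooling idea concrete in the Ruby world of the original question (an illustrative sketch only, assuming the connection_pool and mysql2 gems and placeholder credentials):
require 'connection_pool'
require 'mysql2'

# Keep up to 5 ready-made connections; callers borrow one and hand it back.
POOL = ConnectionPool.new(size: 5, timeout: 5) do
  Mysql2::Client.new(host: 'localhost', username: 'app',
                     password: 'secret', database: 'legacy')
end

POOL.with do |client|
  client.query('SELECT 1')
end
The pool opens connections lazily, so you pay the connection overhead at most 5 times rather than once per query.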

Rails question: SQL statements in logs

I am new to Rails, so pardon the basic question.
I see Rails spit this out (in the console) for every request I make to my controller. Why? I'm not even doing any DB operations; I just started writing a HelloWorld Rails app. I did pick MySQL as the DB when creating this Rails project (rails -d mysql helloworld).
SQL (0.1ms) SET NAMES 'utf8'
SQL (0.1ms) SET SQL_AUTO_IS_NULL=0
So I noticed that Rails attempts to establish a DB connection for every request, regardless of whether you perform a DB/ActiveRecord operation. It does this right after action_controller/dispatch. This seems like a waste of DB resources to me; why establish a connection to the DB when I may not even do an ActiveRecord operation?
You're seeing this on every request because you're in development mode. In production mode (or with class caching turned on), this only happens once, when the connection is added to the connection pool.
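For reference, the class-caching flag mentioned above lives in the per-environment config; this is a sketch assuming a Rails 3-style app whose module name (Helloworld) is taken from the rails -d mysql helloworld command in the question:
# config/environments/development.rb
Helloworld::Application.configure do
  # false (the development default) reloads application classes on every
  # request, which is why the connection setup and those SET NAMES /
  # SET SQL_AUTO_IS_NULL statements repeat; true mimics production and
  # runs them only once, when the connection joins the pool.
  config.cache_classes = true
end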