SQL database copied and updated

I have a main system that I use to add records to and run multiple routines on an SQL database using MS Access. Some routines take days to run.
I want to build a second PC where I can hopefully easily update its copy of the database and then run the long routines on it, while continuing to keep up the day-to-day activities on the main system.
How easy (or feasible) is it to take a copy of an SQL database from one computer and update it on another?

Those processing times sound rather large. I would suggest that you consider building some server-side routines that run on SQL Server – they will run much faster and, more importantly, reduce if not eliminate most network traffic. Keep in mind that working with lots of data from Access against SQL Server can, and often will, run SLOWER than just using Access without SQL Server.
As for moving the SQL database to another computer? The idea and concept are very much like moving an Access file to another computer. From SQL Server Management Studio you simply create a backup file on the first computer (it will be a single file – choose a device, and then add a file location). You will also find that such .bak files zip REALLY well, so if you are using FTP or some other internet transfer, zip the file before you transmit it.
Regardless, you can even transfer that .bak file with a jump drive or whatever. You then restore that database on the other computer – and you are off to the races. (On the target computer, simply choose "restore" and restore the database from the .bak file you transferred to the machine running SQL Server.) So the whole database, with its many tables etc., is turned into a single file – much like an Access file is.
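A minimal T-SQL sketch of that backup/restore round trip (the database name, file paths and logical file names here are placeholders – check the real logical names with RESTORE FILELISTONLY before restoring):

-- On the main system: write the whole database to a single .bak file
BACKUP DATABASE MyAppDb
TO DISK = 'C:\Backups\MyAppDb.bak'
WITH INIT;

-- On the second PC: restore from the transferred .bak file,
-- relocating the data and log files to that machine's data folder
RESTORE DATABASE MyAppDb
FROM DISK = 'C:\Transfer\MyAppDb.bak'
WITH MOVE 'MyAppDb' TO 'C:\SQLData\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO 'C:\SQLData\MyAppDb_log.ldf',
     REPLACE;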
So moving and making copies of a SQL Server database is a common task, and once you get the hang of this process, it is not really much harder than moving an Access file between computers.
I would, however, question why the processes are taking so long – they may well be complex, but the use of stored procedures and pass-through queries would substantially speed up your application compared with processing the data in a local client like Access + VBA. So try adopting more T-SQL and stored procedures – they will run many times faster; often 1 hour can be cut down to, say, 1 minute or less. So moving is easy, but you might cut that whole "day" of processing time down to a few minutes if you can run the processing routines server side rather than client side, which is what happens when Access is the client. (The Access client can most certainly call T-SQL routines that run server side – the main trick here is to get those processing routines running server side.)
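To make the "server side" idea concrete, here is a hypothetical stored procedure (the table and column names are made up) that does one set-based update entirely on the server; from Access you would call it with a pass-through query whose SQL is simply EXEC dbo.ProcessDailyRecords:

CREATE PROCEDURE dbo.ProcessDailyRecords
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based statement running on the server,
    -- instead of looping over a DAO/ADO recordset in VBA
    UPDATE dbo.Records
    SET Processed = 1,
        ProcessedOn = GETDATE()
    WHERE Processed = 0;
END;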

Last time I used MS Access it was just a file, like an Excel file, which you could simply copy to the other machine and do whatever you want with.
If you are fairly comfortable with SQL/administration, and only need a one-way copy (main system to second PC), then you could migrate MS Access to MySQL:
http://www.kitebird.com/articles/access-migrate.html
(Telling Microsoft Access to Export Its Own Tables)
This process should be fairly easy to automate if you need to do this regularly.

Related

UnixODBC driver issues for .MDF databases. OR: is there a way to easily extract a bunch of tables without an SQL server?

Disclaimer: I am somewhat of a n00b when it comes to database programming, so bear with me.
I've been attempting to batch process a rather large amount (~20 GB) of data all contained in .MDF SQL database files. The files contain meteorological data obtained through weather balloons, with each table consisting of ~1-second observations of winds, pressure, height, temperature, etc., and are created with our radiosonde tracking software on an unnetworked Windows machine. It is possible (and quite easy) to load the files using the associated software and export the tables as ASCII text files...however, this process involves manually loading each one. As I'm performing a study that requires as many soundings as possible (we have over 2000), doing this process over and over for several years of twice-daily observations is extremely time-prohibitive.
I've been taking the files off of the computer and putting them on my laptop running Linux Mint, and consider myself to be fluent with Perl...I do most of my data analysis with Perl scripts. That said, I've had the darndest time trying to get into the database files!
I've tried to connect to one of the files using the DBI package, using variants of
$dbh = DBI->connect("DBI:ODBC:$filename") or die "blahblahblah";
I have unixODBC installed and configured, have downloaded "libmyodbc.so" and "libodbcmyS.so", and keep getting the error
DBI connect('','',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified (SQL-IM002) at dumpsql.pl line 6.
I've tried remedying this a number of ways over the past couple days, and I won't post them here for the sake of brevity. My odbcinst.ini file is as follows:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/sib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
I'm seriously confused. I THINK I'm doing everything that various online tutorials are suggesting, but everyone else is connecting to servers and these files are all local and in the same directory! Could anyone attempt to point me in the right direction? All I want is to calculate meteorological values using vertical sounding data! Am I missing something totally obvious?
Any help would be greatly appreciated!
It seems the original database server was a Microsoft SQL Server (MDF files). I am afraid these files alone are useless on a Linux machine. You need a Microsoft SQL Server on a Windows machine to get access to the contained data.
You described that you are able to attach an MDF file to a SQL Server manually and then export the needed data as text files. Try to automate that. I'm not an MS SQL Server expert, but it should be possible.
E.g. here is a tutorial on attaching and detaching an MDF file via T-SQL. So my approach would be to write a script which iterates over the 2000 MDF files, attaches each one to the SQL Server, executes a query to export your data, and then detaches the MDF.
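Roughly along these lines, run once per file from a driving script (the paths, database name and table name below are assumptions – the real table names come from the radiosonde software):

-- Attach one sounding file to the local SQL Server instance
-- (ATTACH_REBUILD_LOG rebuilds the log when only the .mdf is available)
CREATE DATABASE SoundingTemp
ON (FILENAME = 'C:\Soundings\sounding_19990101_00Z.mdf')
FOR ATTACH_REBUILD_LOG;

-- Export the observations (hypothetical table name); this SELECT could
-- equally be a bcp out command writing straight to a text file
SELECT * FROM SoundingTemp.dbo.Observations;

-- Detach again before moving on to the next file
EXEC sp_detach_db 'SoundingTemp';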

How to migrate shared database from Access to SQL Express

I have been using MS Access databases via DAO for many years, but feel that I ought to embrace newer techniques.
My main application runs on end user PCs (no server) and uses a shared database that is created and updated on-the-fly. When the application is first run it detects the absence of a database and creates a new empty one.
Any local user running the application is allowed to add or update records in this shared database. We have a couple of other shared databases, that contain templates, regional information, etc., but these are not updated directly by the application.
Updates of the application are released from time to time and each new update checks the main database version and if necessary executes code to bring the database up to the latest specification. This may involve the creation or deletion of tables and/or columns. New copies of the template databases are also included as part of the update.
Our users are not required to be computer-literate and should not need to run any sort of database management software beyond those facilities provided by the application.
It all works very nicely with DAO/Access, but I'm struggling to find how to do it with SQL Express. The databases seem to be squirrelled away in locations that are user-specific and database creation and update seems at best awkward to do by program code alone.
I came across some references to "Xcopy deployment" that look like they could be promising, but there also seem to be references to "user instances" that sound suspiciously like something that's not shared. I'd appreciate advice from anyone who has done it.
It sounds to me like you haven't fully absorbed the fundamental difference between the Access Database Engine (ACE/Jet) and SQL Server:
When your users launch your Access application it connects to the Access Database Engine that has been installed on their machine. Their copy of ACE/Jet opens the shared database file (.accdb or .mdb) in the network folder. The various instances of ACE/Jet work together to manage concurrent updates, record locking, and so on. This is sometimes called a "peer-to-peer" or "shared-file" database architecture.
With an application that uses a SQL Server back-end, the copies of your application on each user's machine connect over the network to the same instance of SQL Server (that's why it's called "SQL Server"), and that instance of SQL Server manipulates the database (which is stored on its local hard drive) on behalf of all of the clients. This is called "client-server" or "server-based" database architecture.
Note that for a multi-user database you do not install SQL Server on the client machines, you only install the SQL Server Client components (OleDb and ODBC drivers). SQL Server itself is only installed in one place: the machine that will act as the SQL... Server.
re: "database creation and update seems at best awkward to do by program code alone" -- Not at all, it's just "different". Once again, you pass all of your commands to the SQL Server and it takes care of creating the actual database files. For example, once you've connected to the SQL Server if you tell it to
CREATE DATABASE NewDatabase
it will create the database files (NewDatabase.mdf and NewDatabase_log.LDF) in whatever local folder it uses to store such things, which is usually something like
C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA
on the server machine.
Note that your application never accesses those files directly. In fact it almost certainly cannot do so, and indeed your application does not even care where those files reside or what they are called. Your app simply talks to the SQL Server (e.g. ServerName\SQLEXPRESS) and the server takes care of the details.
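For instance, the version-upgrade code described in the question can simply send DDL batches like this sketch over the connection (the table and column names are purely illustrative), and the server applies them to its own files:

-- Bring the shared database up to the new specification
ALTER TABLE dbo.Customers ADD Region nvarchar(50) NULL;
DROP TABLE dbo.ObsoleteLookup;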
Just to update on my progress. Inspired by suggestions here and this article on code project:
http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily,
I've created a wrapper for the ADO.NET methods that looks quite similar to the DAO stuff that I am familiar with.
I have a class that I can use just like a DAO Database. It wraps ADO methods like ExecuteReader, ExecuteNonQuery, etc. with overloads that can accept a SQL parameter. This allows me to directly replace DAO Recordsets with readers, OpenRecordset with ExecuteReader and Execute with ExecuteNonQuery.
Each method obtains and releases the connection from its parent class instance. These in turn open or close the underlying connection as required depending on the transaction state, if any. So a connection is held open for method calls that are part of a transaction, but closed immediately for a single call.
This has greatly simplified the migration of my program since much of the donkey work can be done by a simple "find and replace". The remaining issues are then relatively easy to find and sort out.
Thanks, once again to Gord and Maxwell for your advice.
This answer is too long to write down... but go to the Microsoft page where they explain how to do it: http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I hope this helps you!!

Lotus Notes ODBC Connection

I need to connect and send/receive information from an MS SQL Server in my Lotus Notes app using @formulas in real time (I can connect using an agent, but I need to use inline code for this).
The commands themselves seem pretty straightforward, but setting up the configuration seems to be a topic with scarce documentation. Apparently I need to install an ODBC driver. Where would I find that, and do I install it onto the server or onto the workstations that will run this app?
If any Lotus gurus could step me through setting this up, it would be greatly appreciated.
Thanks
You'll need to install the ODBC driver on the workstations that run this app, if the users will be triggering the ODBC connections. If at all possible, I highly suggest setting this up on the server side, and having it run via an agent. That'll save you from a few headaches, including having to maintain the ODBC connections on each workstation and worrying if each workstation has access to the data and server.
You first just want to make sure your ODBC setup is correct. You'll need the appropriate driver, of course, and the connection information. This site has a walkthrough to give you an idea of how to set up an ODBC database connection.
If you have MS Access you can use it to test querying from the ODBC data source. Once you've tested that the connection works, you'll just refer to the data source name (DSN) in your @DbColumn, @DbLookup, or @DbCommand formulas.
Back to my suggestion on setting this up on the server side, that would mean you'd keep a copy of the data you're querying within the Notes database itself, and then users would be interacting with read-only data in Notes. You could schedule updates regularly on the server side of that read-only data and effectively create a cache of the data in your Notes environment. Then that data would replicate around to other replicas of the database, but remove the trouble of the ODBC connection being needed everywhere.
If you need realtime data, though, that solution is out the window and you'll have to go with a local solution. In that case, you might want to look at the LCConnection class or using an ADODB.Connection from script, as both will allow you to create DSN-less connections to data sources. You'd then save the trouble of requiring ODBC data sources on each workstation, and only have to worry about whether they can access the server from their workstation.
I would add another option to Ken's list. It involves having the server do the queries of the external database (therefore you are only setting up ODBC on the server - you don't have to deal with it on the workstations). You create an agent that is launched on the server using the 'run on server' technique. When the workstation needs to query the external data, the code creates a throw-away document in the database, puts the query criteria into the temporary document, saves the document, then calls the 'run on server' agent, passing a reference to the temporary document. The server launches the agent, reads the criteria from the temporary document, does the query, and writes the results back to the temporary document. Then the workstation can access the query results from the temporary document. A scheduled agent can delete the temp docs on a regular basis.
It sounds complicated, and it all has to be done in script, but I've done this in many applications and it is fast, flexible, easy to administer, and gives your applications a lot of power. Note that end users must have the ACL rights to create a document in the db (the temp doc) in order for this to work.
Good luck!

ADO query to MS Access Database, Performance Increase?

I'll try and keep this simple. I have worked on projects in the past where we use either Oracle or MS SQL Server as the data store with Access as the front end. Rather than linking in the tables, I tend to use an ADO connection to the respective database in order to open my recordsets, as in most cases this is faster: the query is executed against the server and the results are returned, rather than the work being done on the local PC.
My question, now I've finally got there, is this: if I place an Access .mdb file on a server machine with more processing power than my local PC and then run queries against it using an ADO connection (as with Oracle/MS SQL), will it provide better performance because the .mdb is on the server? Or, because Access is a file-based database rather than a database server, will the work still be done by the local PC?
No it will be slower - the queries will still run client side, and you will have network activity on top.
Access applications always run on the client side. Locking takes place by using Windows filesystem byte range locks on the LDB file, to allow multiple instances of Access to modify the same MDB file.
All the code runs on the client, and you will be having to send the data across the network. The only work the server will be doing with an MDB file is acting as a file server.
Just use SQL Server Express if Access is not fast enough. Since SQL Server is a client-server system, putting it on a fast server will help.

Modifying SQL Database on Shared Hosting

I have a live database on a shared hosting server. I am making some major changes to my site's code and I would like to fix some stupid mistakes I made in initially designing the database. These changes involve altering the size of a large number of fields, and enforcing referential integrity between tables properly. I would like to make the changes on both my local test server and the remote server if possible.
I should note that while I'm fairly comfortable with writing complex queries to handle data, I have very little experience modifying database structure without a graphical interface.
I can access the remote database in the Visual Studio database explorer, but I cannot use that for anything other than data manipulation. I installed SQL Server Management Studio Express last night and after 40+ crashes I gave up - I couldn't even patch the damn thing.
The remote server is SQL Server 2005, and the myLittleAdmin web interface is available.
So my question is: what is the best way to accomplish these changes? Is there a graphical interface I can use on the remote server? If not, is there an easy way to copy the database to my local machine, fix it, and re-upload it? Finally, if none of the above are viable, does anyone have links to decent info on fixing referential integrity via queries?
Sorry for the somewhat general question - I feel like I am making this far harder than it should be, but after searching/trying all night I haven't gotten anywhere. Thanks in advance for the help. I really appreciate it.
...Also does anyone have a time machine I can borrow- I need to go kick my past self's ass for this.
Usually hosting providers allow you to back up and restore your database, so the easiest way to accomplish the move is to back up your live DB, download the backup file, restore it locally, make all the changes, take a backup of the local DB, upload it, then restore it in the live service. Your site should be placed in an administrative shutdown during this time so it does not continue to update the data while you're doing these operations. You have to make sure your local SQL instance is at exactly the same build version (@@VERSION) as the hosting provider's, otherwise your local SQL Server may upgrade the database structure and you'll be unable to restore it back on the hosting provider (or you'll be unable to restore it on your local server if your version is earlier than the host's). The MSDN BOL has a detailed guide on how to Copy Databases using Backup/Restore.
An alternative to backup/restore is to detach/attach the database, but I do not recommend this because you need to move both the MDF and the LDF in sync, and they're also larger in size than a backup.
This assumes you can do all the schema changes on your local copy in a wizardly manner, i.e. fast and correct. Of course, that is not easy. The recommended way is to prepare in advance a script that applies all the transformations needed to reach the new schema. There are tools like SQL Diff, SQL Compare, SQL Delta and others that can generate such a script. Visual Studio Database Edition can also do this.
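For the kinds of changes described in the question (widening fields and enforcing referential integrity), such a script is mostly plain ALTER TABLE statements; a rough sketch with made-up table and column names:

-- Widen a column that was originally sized too small
ALTER TABLE dbo.Orders ALTER COLUMN CustomerName nvarchar(200) NOT NULL;

-- Enforce referential integrity that was missing from the original design
ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);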
How I would do this would be like this:
1. Ensure I have exactly the same schema on my dev machine as on the live host. If not sure, I can take a backup of the live server and restore it locally. This is my reference, the v1 schema.
2. Keep the backup of v1 for reference.
3. Start developing a script that changes the schema to my target. Sometimes I need to refresh my memory on script syntax myself, and what I do is go to the SQL Server Management Studio wizard for the operation I want, select all the options in the UI, and then use the 'script' option, which shows me exactly the script SSMS would run to accomplish my desired change.
4. For each change I add to the script, I can test it by restoring the v1 reference backup from step 1 and running the script.
5. Keep iterating on the script, adding one change at a time, until all the needed schema changes are done. After each change, I can test it again as in step 4.
6. Your script should do not only DDL changes to the schema, but also any DML changes needed (modifying reference data, changing values, moving columns between tables, etc.).
7. When the script is ready, I can download a newer backup, apply the script, and upload the updated backup and restore it on the live host. Alternatively, you can simply run the script on the live host (after, of course, you have backed it up in case something goes horribly wrong).
In my projects I always rely on scripts to deploy and upgrade the database. In fact I use the database extended properties to store a 'version' of my application's deployed schema, and in my code I simply roll forward all the scripts that bring the schema up to the latest version. I have an article on my blog describing this technique: Version Control and your Database.
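A rough illustration of that idea, assuming a database-level extended property (the property name is arbitrary):

-- Stamp the database with its schema version after an upgrade script runs
-- (use sp_updateextendedproperty to change an existing stamp)
EXEC sys.sp_addextendedproperty @name = N'SchemaVersion', @value = N'2.0';

-- On startup, the application reads the stamp to decide which upgrade
-- scripts still need to be rolled forward
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'SchemaVersion';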