What would happen if a SQL Server instance becomes offline/inaccessible while you have an Entity Data Model pointing to one of the instance's databases? - sql

I am currently writing an application in VS2010 which will access many different SQL Server databases spread across a couple of servers on our network. This is a dynamic environment, however, and the servers may be subject to decommissioning. I have a couple of entity data models which point to custom information-gathering databases on those servers, and these models will become useless to me once the servers are decommissioned. My worry is that if one of these servers is decommissioned, my application will fail because the entity data models will no longer be able to reach the databases. I cannot go back every two weeks or so and change the application's source code to match new server arrangements, as that would waste development time.
Are my suspicions right that my application would fail to work if the data models point to databases which may no longer exist? Is there a workaround that caters for my need to "ignore" a connection to a non-existent database?

You will get an exception when you attempt the first operation that actually connects to the database.
The exception will note that the underlying provider failed on Open, and will have a SqlException as the InnerException giving the details.
Probably the best thing for you to do is to create and open the connection manually and pass it to the context in the constructor, using the overload that accepts an existing connection.
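As a minimal sketch of that approach (assuming an EF 4 / VS2010-era EDMX model whose generated context is called MyEntities - substitute your own context name), you can open the EntityConnection yourself, catch the failure, and only construct the context when the server is reachable:

    using System;
    using System.Data;                // EntityException
    using System.Data.EntityClient;   // EntityConnection
    using System.Data.SqlClient;      // SqlException

    static class ModelProbe
    {
        // Returns a ready-to-use context, or null when the server is unreachable.
        public static MyEntities TryCreateContext(string entityConnectionString)
        {
            var connection = new EntityConnection(entityConnectionString);
            try
            {
                connection.Open();   // a decommissioned server fails here
            }
            catch (EntityException ex)   // "the underlying provider failed on Open"
            {
                var sqlError = ex.InnerException as SqlException;
                Console.WriteLine("Skipping data source: " +
                    (sqlError != null ? sqlError.Message : ex.Message));
                connection.Dispose();
                return null;
            }
            // Generated ObjectContext constructor overload that accepts an open connection
            return new MyEntities(connection);
        }
    }

The application can then simply skip any data model whose TryCreateContext call returns null, instead of failing outright.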

Related

How to prevent errors when removing tables in the database used in Azure Mobile Services?

When I remove tables used in my Azure database (of course after removing the entities), I just use DROP TABLE TABLENAME. This has a bad effect. When I run the mobile service by just starting the browser, I get an Error 500 when I add a new record (to an existing table, of course) with my TableControllers. Apparently, I did something wrong. It can be "solved" by creating a completely new database and using that one in my mobile service. The Seed method ensures that the right tables (and only the right tables) exist, and everything works fine.
What is the best way to remove tables (without causing errors) from a database used in Azure Mobile Services? Creating a completely new database seems a bit overdone and unnecessary.
My first instinct is that it's an issue with Entity Framework. It doesn't generally play nicely with people touching the database. If you looked through your log, you'd probably see Entity Framework issues.
Take a look at this Azure Doc: http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-how-to-use-code-first-migrations/
It discusses how to enable code first migrations - I won't elaborate here because there are a couple of steps.
Essentially, the problem is that Entity Framework takes a number of dependencies and when those dependencies change, it just falls over on itself. Let me know if that doesn't help you.
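For reference, the shape of the fix looks roughly like this: enable migrations from the Package Manager Console (Enable-Migrations, then Add-Migration for each schema change such as a dropped table), and have the service apply pending migrations at startup instead of relying on the initializer's recreate/Seed behaviour. A sketch only, with type names taken from the default .NET backend template and the Add-Migration scaffold:

    // WebApiConfig.Register() in the Mobile Services .NET backend project
    using System.Data.Entity.Migrations;

    public static class WebApiConfig
    {
        public static void Register()
        {
            // Replace the Database.SetInitializer(...) call from the template with:
            var migrator = new DbMigrator(new Migrations.Configuration());  // scaffolded by Add-Migration
            migrator.Update();   // applies pending migrations, including dropped tables

            // ... remaining Web API / Mobile Services configuration ...
        }
    }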

Querying multiple database servers?

I am working on a database for a monitoring application, and I have all the business logic sorted out. It's all well and good, but one of the requirements is that the monitoring data must be completely stand-alone.
I'm using a local database on my web server to do some event handling and caching of notifications. Since there is one event row per system in my monitoring database, it's easy to just get the id and query the monitoring data if needed, and since this is something only my web server uses, integrity can be enforced externally. Querying is not an issue either, as all the relationships are one-to-one, so it's very straightforward.
My problem comes with user administration. My original plan had it in yet another database (to meet the requirement of leaving the monitoring database alone), but I don't think I was thinking straight when I came up with that. I can get all the ids of the systems a user has access to easily enough, but how can I then pass those efficiently to a query on the other database? Is there a solution for this? Building a chain of ORs seems like an ugly and buggy solution.
I assume this kind of problem isn't that uncommon. What do most developers do when they have to integrate different database servers? In any case, I am leaning towards just talking my employer into putting the user administration data in the same database, but I want to know whether this kind of thing can be done.
There are a few ways to accomplish what you are after:
Use concepts like linked servers (SQL Server - http://msdn.microsoft.com/en-us/library/ms188279.aspx)
Individual connection strings within your front end driving the database layer
Use things like replication to duplicate the data
Also, having multiple databases on a single database server instance does not seem like it would violate your business requirements, and I would investigate that as a starting point, given the details you have provided.
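To illustrate the second option (separate connection strings driven from the front end), here is a rough C# sketch; the table names, column names and connection strings are made up, a table-valued parameter would be more robust for very large id lists, and an empty id list needs guarding against:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;

    static class CrossServerQueries
    {
        // Step 1: read the permitted system ids from the user-administration database.
        public static List<int> GetPermittedSystemIds(string adminConnStr, int userId)
        {
            var ids = new List<int>();
            using (var conn = new SqlConnection(adminConnStr))
            using (var cmd = new SqlCommand(
                "SELECT SystemId FROM UserSystems WHERE UserId = @userId", conn))
            {
                cmd.Parameters.AddWithValue("@userId", userId);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read()) ids.Add(reader.GetInt32(0));
            }
            return ids;
        }

        // Step 2: query the monitoring database on the other server with a
        // parameterised IN list instead of a hand-built chain of ORs.
        public static SqlCommand BuildMonitoringQuery(SqlConnection monitoringConn, IList<int> ids)
        {
            var names = ids.Select((id, i) => "@p" + i).ToArray();   // "@p0,@p1,..."
            var cmd = new SqlCommand(
                "SELECT * FROM MonitoringEvents WHERE SystemId IN (" + string.Join(",", names) + ")",
                monitoringConn);
            for (int i = 0; i < ids.Count; i++)
                cmd.Parameters.AddWithValue("@p" + i, ids[i]);
            return cmd;
        }
    }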

How to limit the number of connections to a SQL Server instance from my Tomcat-deployed Java application?

I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of SQL Server databases on a server B.
I am concerned that my application could overload this SQL Server database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections are already open and unclosed.
I am looking at using connection pooling, but am under the impression that this will only pool connections to a specific database on the SQL Server instance. I want to control the combined total of connections made to many different databases (incidentally, I can only find out the names of the individual databases dynamically, as they change from day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective?
I have no access to the configuration of the SQL Server instance.
Links to tutorials or working examples of your suggested solution are most welcome!
Have you considered using DBCP (Apache Commons connection pooling)? See its documentation for more info.
You're correct. A database pool will limit connections to either the database, or to all databases, depending on how the pool is configured.
There are several open source packages that implement database connection pools. One of them probably does what you want.
connection pooling ... this will only pool connections to a specific database on the mssql server
AFAIK, this is correct. Your connection pool will only limit the number of connections to the specific database it is defined for.
want to control the total of these combined connections that will occur to many different databases
I don't think you can control the number of connections to all databases from the pool. You've written that you don't have access to change things on the MSSQL server, so creating SYNONYMs to the various databases on MSSQL itself is not an option.
You can write your own class within the application, say a ConnPoolManager, which keeps an internal counter that is updated whenever a Connection is taken from or returned to any of the pools.
This class should cache all the JNDI lookups to each pool.
For the app to get a connection from ANY pool, it goes through the ConnPoolManager, and only if the counter shows the maximum limit has not yet been crossed does it fetch the connection.
Otherwise it throws some exception for you to catch and retry later.
There might be a design pattern for this along the lines of Business Delegate.
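To make the counting idea concrete: the real implementation in this case would be a Java class working against the JNDI-registered pools, but the shared gate itself looks roughly like this sketch (shown in C#, with made-up names and limits):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class ServerWideConnectionGate
    {
        private readonly SemaphoreSlim gate;

        public ServerWideConnectionGate(int maxTotalConnections)
        {
            // One shared counter across every per-database pool.
            gate = new SemaphoreSlim(maxTotalConnections, maxTotalConnections);
        }

        // Blocks (or fails) when the server-wide limit is reached,
        // regardless of which database the connection targets.
        public SqlConnection Acquire(string connectionString, TimeSpan timeout)
        {
            if (!gate.Wait(timeout))
                throw new TimeoutException("Server-wide connection limit reached; try again later.");
            try
            {
                var conn = new SqlConnection(connectionString);
                conn.Open();
                return conn;
            }
            catch
            {
                gate.Release();   // failed to open, give the slot back
                throw;
            }
        }

        public void Release(SqlConnection conn)
        {
            conn.Dispose();
            gate.Release();
        }
    }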
Having said that, I think a bigger problem for you will be
incidentally I can only find out the names of individual db's dynamically as they change day to day
since you will be expected to create or edit connection pool settings in Tomcat every day. That is a maintenance nightmare waiting to happen.

What strategies are available for migrating Access databases to SQL Server-based applications?

I'm considering undertaking a project to migrate a very large MS Access application to a new system based on SQL Server. The existing system is essentially an ERP application with a couple of dozen users, all sharing the Access database over the network. The database has around 300 tables and lots of messy VBA code. This system is beginning to break down (actually, it's amazing it has worked as long as it has).
Due to the size and complexity of the Access application, a 'big bang' approach is not really feasible. It seems sensible to rope off chunks of functionality and migrate them piecemeal to the new system. During the migration process, which I expect to take several months, there may be a need for both databases to be in operation and be able to query and modify data in both systems.
I have considered using something like the ADO.NET Entity Framework to implement a data abstraction layer to handle this, but as far as I can tell, the Entity Framework has no Access provider.
Does my approach seem reasonable? What other strategies have people used to accomplish similar goals?
You may find that the main problem is using the MS Access JET engine as the backend. I'm assuming that you do have an Access FE (frontend) with all objects except tables, and a BE (backend - tables only).
You may find that migrating the data to SQL Server, and linking the Access FE to that, would help alleviate problems immediately.
Then, if you don't want to continue to use MS Access as the FE, you could consider breaking it up into 'modules', and redesign modules one by one using a separate development platform.
We faced a similar situation a few years ago, but we knew from the beginning that we would have to switch to SQL Server one day, so all the code was written to work from an Access client against both Access AND SQL Server databases.
The idea of a 'one-step' migration to SQL Server is certainly the easier way to manage this on the database side, and there are many tools for that. But depending on the way your client app talks to the database, your code might then not work properly. If, for example, your code includes a lot of SQL instructions (or generates them on the fly by, for example, adding filters to SELECT instructions), your syntax might not be SQL Server compatible: Access wildcards, dates, and functions will not work on SQL Server.
In addition to this, and as noted by @mjv, the other drawback of a one-time switch to MS SQL is that you will inherit many of the problems of the original database: wrong or inappropriate field names, inappropriate primary/foreign key policies, hidden one-to-many relations that you would like to implement in the new database model, etc.
I'll propose here some principles and rules for implementing a 'soft transition' solution, which clearly fits you best. Just to say that it's not going to be easy, but it's definitely very interesting, particularly when dealing with 300 tables! Lucky you!
I assume here that you have the ability to update the client code, and that you'd prefer to keep the same client interface at all times. It is of course possible to have two different interfaces at transition time, one for each database, but this would be very confusing for the users and a permanent source of frustration for them.
In my view, the best solution strongly depends on:
- The original connection technology, and the way data is managed in your client's code: Access linked tables, ODBC, ADODB, recordsets, local tables, form recordsources, batch updating, etc.
- The possibility of splitting your tables and your app into 'mostly independent' modules.
And you will not be spared the following mandatory activities:
- Setting up a transfer procedure from the Access database to SQL Server. You can use existing tools (the Access Upsizing Wizard is very poor, so do not hesitate to buy a real one, such as SSW or EMS SQL Manager, both very powerful) or build your own with Visual Basic. If your plan is to make some changes to the data definition, you'll definitely have to write some code. Keep in mind that you will run this code many, many times, so make sure that it includes all the time-saving instructions that will allow you to restart the process from the start as many times as you want. You will have to choose between two basic strategies when importing data (a rough sketch of such a transfer routine follows this list):
  a - DELETE the existing record, then INSERT the imported record
  b - UPDATE the existing record from the imported record
  If you plan to switch to new primary/foreign key types, you'll have to keep track of the old identifiers in your new database model during the transition period. Do not hesitate to switch to GUID primary keys at this stage, especially if the plan is to replicate data across multiple sites one of these days.
  This transfer procedure will be divided into modules corresponding to the 'logical' modules defined previously, and you should be able to run any of these modules independently (keeping in mind, of course, that they'll probably have to be run in a specific order, where the 'customers' module has to run before the 'invoicing' module).
- Implementing in your client's code the ability to connect to both the original MS Access database and the new MS SQL Server. Ideally, you should be able to manage both connections from within your code for displaying and validating data.
  This will be implemented module by module, where each module has a 'trial period', i.e. the possibility to choose at testing time between the Access connection and the SQL connection when using the module. Once testing is done and complete, the module can then be run in exclusive SQL Server mode.
  During the transfer period, which can last a few months, you will have to manage programmatically the database constraints that exist between 'SQL Server' modules and 'Access' modules. Going back to our customers/invoicing example, the customers module will be switched to MS SQL first. Before the invoicing module can be switched, you'll have to implement programmatically the one-to-many relation between Customers and Invoices, where each of the tables is in a different database. Such a constraint can be implemented on the invoice form by populating the Customers combo box with the Customers recordset from the SQL server.
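As an illustration of what the 'build your own' transfer routine might look like: the answer mentions Visual Basic, but the same thing in C# with plain ADO.NET (connection strings and table name are placeholders, and a real routine would also map renamed columns, translate key types, and so on) boils down to a restartable delete-then-insert per table:

    using System.Data.OleDb;
    using System.Data.SqlClient;

    static class TableTransfer
    {
        // Rough sketch of "strategy a" (delete then insert) for one table.
        public static void CopyTable(string accessConnStr, string sqlConnStr, string tableName)
        {
            using (var source = new OleDbConnection(accessConnStr))
            using (var target = new SqlConnection(sqlConnStr))
            {
                source.Open();
                target.Open();

                // Strategy a: wipe the target so the module can be re-run at will.
                using (var wipe = new SqlCommand("DELETE FROM [" + tableName + "]", target))
                    wipe.ExecuteNonQuery();

                using (var read = new OleDbCommand("SELECT * FROM [" + tableName + "]", source))
                using (var reader = read.ExecuteReader())
                using (var bulk = new SqlBulkCopy(target, SqlBulkCopyOptions.KeepIdentity, null))
                {
                    bulk.DestinationTableName = tableName;
                    bulk.WriteToServer(reader);   // streams all rows into SQL Server
                }
            }
        }
    }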
My proposal is to build your modules following your database model, always beginning with the 'one' tables of your 'one-to-many' relations: basic lists like 'Units', 'Currencies' and 'Countries' shall be switched first. You'll get some initial hands-on experience in writing data transfer code and managing a second connection in your client interface. You'll then be able to 'go up' in your database model, switching the 'products' and 'customers' tables (where units, countries and currencies are foreign keys) to the new server.
Good luck!
I would second the suggestion to upsize the back end to SQL Server as step 1.
I would never go to the suggested Step 2, though (i.e., replacing the Access front end with something else). I would instead suggest investing the effort in fixing the flaws of the schema, and adjusting the Access app to work with the new schema.
Obviously, it is never the case that everything just works hunky dory when you upsize -- some things that were previously quite fast will be dogs, and some things that were previously quite slow will be fast. And I've found that it is often the case that the problems are very often not where you anticipate that they will be. You can only figure out what needs to be fixed by testing.
Basically, anything that works poorly gets re-architected, or moved entirely server-side.
Leverage the investment in the existing Access app rather than tossing all that out and starting from scratch. Access is a fine front end for a SQL Server back end as long as you don't assume it's going to work just the same way as it would with a Jet/ACE back end.
...thinking out loud... I think this may work.
It appears that the complexity of the application resides in the various VBA modules rather than in the database tables/schema themselves. A possible migration path could therefore be to first migrate the data storage to SQL Server, exactly as-is, as follows:
prevent any change to the data for a few hours
duplicate all tables to the SQL server; be sure to create the same indexes as well.
create linked tables via an ODBC source pointing to the newly created tables on SQL Server
these linked tables should have the very same names as the original tables (the originals may therefore need to be renamed, say with a leading underscore, if they are kept for reference).
Now, the application can be restarted and should be using the SQL tables rather than the Access tables. All logic should work as previously (right...), possible slowness to be expected, depending on the distance between the two machines.
All the above could be tested in about a day's work or so; the most tedious being the creation of the tables on SQL server (much of that can be automated, I'm sure). The next most tedious task is to assert that the application effectively works as previously, but with its storage on SQL.
EDIT: As suggested in a comment, I should stress that there is a [fair?] possibility that the application would not readily work so smoothly with a SQL Server back end, and could require weeks of hard work in testing and fixing. However, unless some of these difficulties can be anticipated because of insight into the application not expressed in the question, I propose that attempting the "as-is" migration to SQL Server should be considered; after all, it may just work with minimal effort, and if it doesn't, we'd know this very quickly. This is therefore a high-return, low-risk proposal.
The main advantage sought with this approach is that there will be a single data store during the longer period [as the OP expects] during which the old Access application will co-exist with the new application.
The drawback of this approach is that, at least at first, the schema of the original database is reproduced verbatim, i.e. including some of its known quirks and legacy-inherited idiosyncrasies. These schema issues (and the underlying application logic) can be corrected over time, but this is of course less easy than if the new application starts ab initio, with its own, separate storage and distinct schema.
After the storage is moved to SQL Server, the most used and/or most independent modules of the Access application can be rewritten in the new application, and as significant portions of the original application are ported, effective usage, by select beta testers or by actual users, can begin to be switched to the new application.
Possibly, some kind of screen-scraping-based logic or some other mechanism could be used to produce a hybrid application which would give end users a single comprehensive application that sometimes works from the new logic and sometimes from the original MS Access program.

What ORMs are developers using to connect to Azure?

I'm interested to find out what techniques developers are using to connect to a Windows Azure instance running in the cloud.
From what I understand it is very similar to SQL Server, with two of the key differences being that Multiple Active Result Sets (MARS) are not supported and that idle/long-running connections are automatically terminated by Azure. For this, Microsoft suggests incorporating retry logic in your application to detect a closed connection and then attempt to complete the interrupted action. Does anyone have example code that they are currently using for this?
To build out the data layer I was looking at various ORMs. Since I'm going to be accessing SQL Azure from Windows Azure (i.e. separate boxes), it would seem key to me that any ORM would need to support asynchronous methods so as not to block any Windows Azure instances.
Any suggestions as to which ORM to use, or comments on what you are currently using?
I have successfully used NHibernate with Azure, and we are in the process of building a commercial app on top of NHibernate. The only problem I had was with the connection pools when running locally and connecting to SQL Azure in the cloud - this was fixed by turning connection pooling off.
You may find similar problems with other ORMs... SQL Azure is less patient (for obvious reasons) than most people are used to. Connections time out more quickly, get recycled sooner, and so on.
Test first!
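As a rough illustration of the retry logic Microsoft recommends for dropped SQL Azure connections (a simplified sketch only; production code would normally inspect the SqlException error numbers and retry only known transient errors), you can wrap each unit of work:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    static class Retry
    {
        // Re-runs the whole unit of work when the connection is dropped mid-way.
        public static T Execute<T>(Func<T> work, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return work();
                }
                catch (Exception ex)
                {
                    // ORMs often wrap the SqlException (e.g. as an InnerException).
                    var sqlEx = ex as SqlException ?? ex.InnerException as SqlException;
                    if (sqlEx == null || attempt >= maxAttempts) throw;
                    Thread.Sleep(TimeSpan.FromSeconds(attempt));   // simple linear back-off
                }
            }
        }
    }

    // Usage with any ORM: create the session/context inside the delegate so a
    // retry starts from a fresh connection, e.g.
    // var total = Retry.Execute(() => { using (var ctx = new MyEntities()) return ctx.Orders.Count(); });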
Here's one specifically designed for Azure:
"Telerik recently announced the
availability of Open Access, the first
ORM that works seamlessly with SQL
Azure relational databases in the
Windows Azure cloud."
And a few commenters at the Azure User Group recommend LLBLGen and Entity Framework.
I've been using Entity Framework - it runs without any problems, just a different connection string.
What you do have to think about is your connection strategy, and how efficient your queries are. There's a method that's easy to write in EF - I've got a new record that could be a duplicate, so I check whether it's already there, and if not, I add it.
EF makes it really easy to do this, as if you're just accessing a local collection. BUT... if you're paying for your DB access because it's in Azure and not on your local network, hmm, maybe there's a better (aka cheaper) way of doing that.
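The pattern being described is roughly the sketch below (entity and property names are invented; AddObject is the EF 4 ObjectContext call, the later DbContext API uses Add). Note that it costs one query plus one insert round trip to SQL Azure per record, which is what the "cheaper way" remark is about:

    using System;
    using System.Linq;

    static class ReadingWriter
    {
        // Sketch only: MyEntities, Reading and its properties are made-up names.
        public static void AddIfMissing(int deviceId, DateTime takenAt, double value)
        {
            using (var ctx = new MyEntities())
            {
                // Round trip 1: check for a duplicate.
                bool exists = ctx.Readings.Any(r => r.DeviceId == deviceId && r.TakenAt == takenAt);
                if (!exists)
                {
                    // Round trip 2: insert the new record.
                    ctx.Readings.AddObject(new Reading { DeviceId = deviceId, TakenAt = takenAt, Value = value });
                    ctx.SaveChanges();
                }
            }
        }
    }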
According to Ayende, NHibernate "just works" with SQL Azure.
We have been using NHibernate without any customization on Azure (indeed, it just works), you can check Lokad.Translate as an open source example of such use.