Just some background.
This question is to help my team and me make a decision on how to handle an app we are working on. Currently the application is hosted locally in ASP.NET C#, and the complex calculations are handled in SQL Server as a stored procedure: the data is pulled into queries, the server performs all the calculations and returns a recordset, which is written to a table that the front end can then access.
So moving forward, we're thinking of moving the application to the web with Django and deciding how to handle the server side of things. What we really want to understand is: how do others handle situations like these? Do we continue with the same idea and build a SQL server to do the same thing, or is there a better way to solve this problem? We want the user to be able to kick the calculations off and do other things while they run in the background. The user shouldn't have to sit on the page, because these calculations can take an hour or more depending on the complexity. We'd appreciate any thoughts and ideas.
Thank you so much.
That is exactly what Celery does. And the integration with Django is very easy. Read about it.
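For example, assuming the project already has a Celery app configured (as described in the Celery/Django integration docs), a long-running calculation can be wrapped in a task and kicked off from a view; the names here (run_complex_calculation, dataset_id, the view functions) are only placeholders:

```python
# tasks.py -- a minimal sketch; assumes a Celery app is already configured for the project
from celery import shared_task

@shared_task
def run_complex_calculation(dataset_id):
    # Placeholder for the heavy work (e.g. calling the existing stored procedure)
    # and writing the results to a table the front end can read.
    ...

# views.py
from django.http import JsonResponse
from celery.result import AsyncResult
from .tasks import run_complex_calculation

def start_calculation(request):
    # Queue the job and return immediately so the user can navigate away.
    result = run_complex_calculation.delay(request.GET["dataset_id"])
    return JsonResponse({"task_id": result.id})

def calculation_status(request, task_id):
    # The front end can poll this endpoint to see whether the job has finished.
    return JsonResponse({"state": AsyncResult(task_id).state})
```

Celery runs the task in a separate worker process, so the web request returns right away; the results can still be written to a database table, exactly as the stored procedure does today.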
When I remove tables from my Azure database (after removing the corresponding entities, of course), I just use DROP TABLE TABLENAME. This has a bad side effect: when I then run the mobile service from the browser and add a new record (to a table that still exists, of course) through my TableControllers, I get an Error 500. Apparently, I did something wrong. It can be "solved" by creating a completely new database and using that one in my mobile service; the Seed method ensures that the right tables exist (and only the right tables) and everything works fine.
What is the best way (to prevent errors) to remove tables from a database used by Azure Mobile Services? Creating a completely new database seems excessive and unnecessary.
My first instinct is that it's an issue with Entity Framework. It doesn't generally play nicely with people touching the database. If you looked through your log, you'd probably see Entity Framework issues.
Take a look at this Azure Doc: http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-how-to-use-code-first-migrations/
It discusses how to enable code first migrations - I won't elaborate here because there are a couple of steps.
Essentially, the problem is that Entity Framework takes a number of dependencies and when those dependencies change, it just falls over on itself. Let me know if that doesn't help you.
Our team (QA) is facing the following problem:
We have a database that is accessed only by our Core application which is a WCF services app. Our client applications are using the Core to access the database.
At some point we were provided with a new version of our Core application and of our database. The Dev department also gave us an SQL script which alters a big part of our database's core data. The core data is used by the Core application to describe the logic of our system, so every change to that data may affect the functionality of any of our client applications.
My questions are:
Should we test all of our applications again (even if they are already fully tested), or is there a more efficient way to test the SQL script?
Is there a testing technique/tool for data integrity/migration testing?
I am looking for a quick validity/integrity test of the database after running a migration script, so that we don't lose time testing it through the applications. If the validity/integrity testing is successful, then we can test the apps.
There are unit testing frameworks available for T-SQL. The TSQLUnit project is one such framework. You can use it to set up automated testing, just like you would in the applications.
As @Tim Lentine already posted, I would recommend testing the full application. As you commented, the new SQL script your team received makes important changes to the core of your database, according to your description, both to the structure and to the data itself. So in order to be sure that everything is still in one piece, I would preferably do a full application test. As for a tool or technique, I can recommend the new Red Gate (no, I do not work for them) add-in for SSMS called "SQL Test". It uses the open-source unit testing framework tSQLt under the hood. Its only drawback is that someone will first need to learn how to work with tSQLt, but it is pretty straightforward.
From the description you gave:
We have a database that is accessed only by our Core application ...
we were provided with a new Version of our Core application and of our Database ...
it sounds like it is not your team's responsibility to test the database in isolation, but you can test the Core service from your clients' perspective and therefore assume the database is correct.
Treat the Core application and the database as a black box and test using unit tests. These tests should not require you to go poking around in the database, as for all intents and purposes any application using your Core application doesn't know, nor should care, that the information is actually stored in a database. Your development team could decide in 6 months that they are going to store the data in the cloud, in which case all your database tests would break.
If you do have to look in the database to check that data has been stored correctly, then there is a problem with your Core service's interface, as any data you put in should be retrievable via the same interface (I just know someone is going to comment that their app does store data which cannot be read back, but without a more detailed description of your app it's easier to generalise).
Back in the real world I am assuming you are part of the QA team and unless the database developers are doing some testing (they are, aren't they?) you are more than likely going to have to validate the database changes.
To that end, you may be interested to read a question I posted on the DBA Stack Exchange site about performing a data comparison between two different schemas. Spoiler: there's no easy answer.
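That said, as a crude, automatable first pass at the quick validity check asked about above, something like the following sketch can compare per-table row counts between a restored pre-migration copy and the migrated database. The connection strings are placeholders, and this only catches gross breakage (dropped tables, mass deletes), not subtle data changes:

```python
# Crude post-migration sanity check: compare per-table row counts between
# a reference (pre-migration) database and the migrated one, using pyodbc.
# The DSNs below are placeholders; adapt them to your environment.
import pyodbc

def row_counts(conn_str):
    """Return {schema.table: row count} for every base table in the database."""
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute(
        "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
        "WHERE TABLE_TYPE = 'BASE TABLE'"
    )
    tables = cur.fetchall()
    counts = {}
    for schema, table in tables:
        cur.execute(f"SELECT COUNT(*) FROM [{schema}].[{table}]")
        counts[f"{schema}.{table}"] = cur.fetchone()[0]
    conn.close()
    return counts

before = row_counts("DSN=CoreDbBeforeScript")  # placeholder DSN
after = row_counts("DSN=CoreDbAfterScript")    # placeholder DSN

# Report any table whose row count changed or which appears/disappears.
for table in sorted(set(before) | set(after)):
    if before.get(table) != after.get(table):
        print(f"{table}: {before.get(table)} -> {after.get(table)}")
```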
See the links below:
http://www.simple-talk.com/sql/t-sql-programming/sql-server-unit-testing-with-tsqlt/
http://www.red-gate.com/products/sql-development/sql-test/
I'm considering undertaking a project to migrate a very large MS Access application to a new system based on SQL Server. The existing system is essentially an ERP application with a couple of dozen users, all sharing the Access database over the network. The database has around 300 tables and lots of messy VBA code. This system is beginning to break down (actually, it's amazing it has worked as long as it has).
Due to the size and complexity of the Access application, a 'big bang' approach is not really feasible. It seems sensible to rope off chunks of functionality and migrate them piecemeal to the new system. During the migration process, which I expect to take several months, there may be a need for both databases to be in operation and be able to query and modify data in both systems.
I have considered using something like the ADO.NET Entity Framework to implement a data abstraction layer to handle this, but as far as I can tell, the Entity Framework has no Access provider.
Does my approach seem reasonable? What other strategies have people used to accomplish similar goals?
You may find that the main problem is using the MS Access JET engine as the backend. I'm assuming that you do have an Access FE (frontend) with all objects except tables, and a BE (backend - tables only).
You may find that migrating the data to SQL Server, and linking the Access FE to that, would help alleviate problems immediately.
Then, if you don't want to continue to use MS Access as the FE, you could consider breaking it up into 'modules', and redesign modules one by one using a separate development platform.
We faced a similar situation a few years ago, but we knew from the beginning that we would have to switch to SQL Server one day, so all the code was written so that the Access client could work against both Access AND SQL Server databases.
The idea of a 'one-step' migration to SQL Server is certainly the easier way to manage this on the database side, and there are many tools for that. But, depending on the way your client app talks to the database, your code might then not work properly. If, for example, your code includes a lot of SQL instructions (or generates them on the fly, say by adding filters to SELECT statements), your syntax might not be SQL Server compatible: Access wildcards (* and ? instead of % and _), #-delimited date literals, and Access/VBA functions such as Nz() will not work on SQL Server.
In addition to this, and as said by @mjv, the other drawback of a one-time switch to MS SQL is that you will inherit many of the problems of the original database: wrong or inappropriate field names, inappropriate primary/foreign key policies, hidden one-to-many relations that you'd like to implement in the new database model, etc.
I'll propose here some principles and rules to implement a 'soft transition' solution, which clearly fits you best. Just to say that it's not going to be easy, but it's definitely very interesting, particularly when dealing with 300 tables! Lucky you!
I assume here that you have the ability to update the client code, and that you'd prefer to keep the same client interface at all times. It is of course possible to have two different interfaces during the transition, one for each database, but this would be very confusing for the users and a permanent source of frustration for them.
In my view, the best solution strongly depends on:
The original connection technology, and the way data is managed in your client's code: Access linked tables, ODBC, ADODB, recordsets, local tables, form recordsources, batch updating, etc.
The possibilities to split your tables and your app into 'mostly independent' modules.
And you will not be spared the following mandatory activities:
Setting up a transfer procedure from the Access database to SQL Server. You can use existing tools (the Access Upsizing Wizard is very poor, so do not hesitate to buy a real one, like SSW or EMS SQL Manager, which are very powerful) or build your own with Visual Basic. If your plan is to make some changes to the data definition, you'll definitely have to write some code. Keep in mind that you will run this code many, many times, so make sure that it includes all the time-saving instructions that will allow you to restart the process from scratch as many times as you want (a rough sketch of such a restartable routine is given below, after this list of activities). You will have to choose between two basic strategies when importing data:
a - DELETE existing record, then INSERT imported record
b - UPDATE existing record from imported record
If you plan to switch to new primary/foreign key types, you'll have to keep track of the old identifiers in your new database model during the transition period. Do not hesitate to switch to GUID primary keys at this stage, especially if the plan is to replicate data across multiple sites one of these days.
This transfer procedure will be divided into modules corresponding to the 'logical' modules defined previously, and you should be able to run any of these modules independently (keeping in mind, of course, that they'll probably have to be run in a specific order, where the 'customers' module has to run before the 'invoicing' module).
Implementing in your client's code the possibility to connect to both the original MS Access database and the new SQL Server database. Ideally, you should be able to manage both connections from within your code for displaying and validating data.
This will be implemented module by module: each module will have a 'trial period', i.e. the possibility to choose at testing time between the Access connection and the SQL connection. Once testing is done and complete, the module can then be run in exclusive SQL Server mode.
During the transfer period, which can last a few months, you will have to manage programmatically the database constraints that exist between 'SQL Server' modules and 'Access' modules. Going back to our customers/invoicing example, the customers module will be switched to MS SQL first. Before the invoicing module can be switched, you'll have to implement programmatically the one-to-many relation between Customers and Invoices, where each of the tables is in a different database. Such a constraint can be implemented on the Invoice form by populating the Customers combobox with the Customers recordset from the SQL server.
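To make the transfer procedure mentioned above a little more concrete: the suggestion there is to build it in Visual Basic, but purely as an illustration of the same idea, here is a minimal sketch of the 'DELETE then INSERT' strategy (a) in Python with pyodbc. The connection strings, table and column names (including LegacyCustomerID) are placeholders, not taken from the question:

```python
# Sketch of a re-runnable "DELETE then INSERT" transfer routine (strategy a).
# Connection strings, table and column names are illustrative placeholders.
import pyodbc

ACCESS = r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\erp_be.accdb"
SQLSRV = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=erp;Trusted_Connection=yes"

def transfer_customers():
    src = pyodbc.connect(ACCESS)
    dst = pyodbc.connect(SQLSRV)

    rows = src.cursor().execute(
        "SELECT CustomerID, Name, CountryCode FROM Customers"
    ).fetchall()

    cur = dst.cursor()
    # Strategy a: wipe the target, then reload it, so the module can be re-run at will
    # (foreign keys referencing this table would need to be handled first).
    cur.execute("DELETE FROM dbo.Customers")
    cur.fast_executemany = True
    # A hypothetical LegacyCustomerID column keeps the old Access key so dependent
    # modules can still be mapped during the transition (e.g. before moving to GUID keys).
    cur.executemany(
        "INSERT INTO dbo.Customers (LegacyCustomerID, Name, CountryCode) VALUES (?, ?, ?)",
        [tuple(r) for r in rows],
    )
    dst.commit()
    src.close()
    dst.close()

if __name__ == "__main__":
    transfer_customers()  # safe to run as many times as needed
```

The point is simply that each module's transfer is self-contained and repeatable, and that the old key travels with the data so cross-module references can be resolved later.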
My proposal is to build your modules following your database model, always beginning with the 'one' tables of your 'one-to-many' relations: basic lists like 'Units', 'Currencies', and 'Countries' should be switched first. You'll get a first hands-on experience in writing data transfer code and in managing a second connection in your client interface. You'll then be able to 'go up' in your database model, switching the 'products' and 'customers' tables (where units, countries and currencies are foreign keys) to the new server.
Good luck!
I would second the suggestion to upsize the back end to SQL Server as step 1.
I would never go to the suggested Step 2, though (i.e., replacing the Access front end with something else). I would instead suggest investing the effort in fixing the flaws of the schema, and adjusting the Access app to work with the new schema.
Obviously, it is never the case that everything just works hunky dory when you upsize -- some things that were previously quite fast will be dogs, and some things that were previously quite slow will be fast. And I've found that the problems are very often not where you anticipate they will be. You can only figure out what needs to be fixed by testing.
Basically, anything that works poorly gets re-architected, or moved entirely server-side.
Leverage the investment in the existing Access app rather than tossing all that out and starting from scratch. Access is a fine front end for a SQL Server back end as long as you don't assume it's going to work just the same way as it would with a Jet/ACE back end.
...thinking out loud... I think this may work.
It appears that the complexity of the application resides in the various VBA modules rather than in the database tables/schema themselves. A possible migration path could therefore be to first migrate the data storage to SQL Server, exactly as-is, as follows:
prevent any change to the data for a few hours
duplicate all tables to the SQL server; be sure to create the same indexes as well.
create linked tables through an ODBC data source pointing to the newly created tables on SQL Server
these tables should have the very same name as the original tables (which therefore may require being renamed, say with a leading underscore, for possible reference).
Now, the application can be restarted and should be using the SQL tables rather than the Access tables. All logic should work as previously (right...), possible slowness to be expected, depending on the distance between the two machines.
All the above could be tested in about a day's work or so; the most tedious being the creation of the tables on SQL server (much of that can be automated, I'm sure). The next most tedious task is to assert that the application effectively works as previously, but with its storage on SQL.
EDIT: As suggested by a comment, I should stress that there is a [fair?] possibility that the application would not readily work so smoothly with a SQL Server back end, and could require weeks of hard work in testing and fixing. However, unless some of these difficulties can be anticipated from insight into the application not expressed in the question, I propose that attempting the "as-is" migration to SQL Server should be considered; after all, it may just work with minimal effort, and if it doesn't, we'd know this very quickly. This is therefore a high-return, low-risk proposal...
The main advantage sought with this approach is that there will be a single data store during the [as the OP expects] longer period in which the old Access application will co-exist with the new application.
The drawback of this approach is that, at least at first, the schema of the original database is reproduced verbatim, i.e. including some of its known quirks and legacy-inherited idiosyncrasies. These schema issues (and the underlying application logic) can be corrected over time, but this is of course less easy than if the new application starts ab initio with its own, separate storage and a distinct schema.
After the storage is moved to SQL Server, the most used and/or most independent modules of the Access application can be re-written in the new application, and as significant portions of the original application are ported, effective usage, by select beta testers or by actual users, can start to be switched to the new application.
Possibly, some kind of screen-scraping-based logic or some other system could be used to produce a hybrid application which would provide the end users with a comprehensive application that sometimes works from the new logic and sometimes from the original MS Access program.
We have a setup where we have multiple instances of an application - one instance for each customer.
We call a lot of our reports via URL, passing in parameters on the querystring.
Early on, when we were on 2005, we identified a problem with this: I could change my querystring a bit and get into someone else's data.
We got around the problem by spoofing a user.
Now, due to some intermittent instability in our 2005 Reporting Services install, we are taking the opportunity to upgrade to 2008. However, the spoofing approach doesn't seem to work any more.
The technet articles that appear relevant seem to say that we need to create a very large security extension (article). This seems like overkill. Surely there is an easier way to call a URL-based report.
How are you accomplishing this in your applications?
Note: This is a repost (paraphrased) of my colleague's question. He didn't get any answers, and since he doesn't have any reputation he couldn't try out the bounty system. I reworded it and decided to give it a whirl. Please be tolerant - we really need an answer to this one. :)
I'm curious, what were you doing in 2005 for user spoofing that doesn't work in 2008?
But to the real question, it sounds to me like you probably should use custom authentication. The sample you linked to actually does a pretty good job of explaining what is going on and guides you through the basics. I had a similar problem to yours where we have many clients accessing reports and it's extremely important that there is no way for one client to get access to another client's data.
I ended up writing a custom authentication extension that creates client-specific folders and grants only the client-specific user (which I set via the custom authentication) Browser access to all reports in that folder.
I'd also suggest that you look at http://www.gotreportviewer.com/ if you're writing an application that lives outside of the /Reports/ area. I unfortunately learned that this existed after I'd invested too much time in my custom authentication scheme.
Good luck!