How to create multi-database support software? - sql

I am creating software for retail shops and I want my software to support both SQL Server and SQLite: if the user is standalone (one PC), it should use the SQLite database, and if it runs over a network, it should use SQL Server.
I am developing this software in Visual Studio 2010 with VB.NET.
From my research, there are three types of connections in Visual Studio: ODBC, OLE DB, and the native SQL Server client (SqlClient).
OLE DB can support MS Access databases as well as SQL Server.
Any comments and ideas are highly appreciated.

The best way to code your applications is to abstract functionality into different tiers or layers. This can mean lots of things and can get quite complex, but the general idea is to keep your application's parts separated. Let's assume you have an inventory form in your program where you can look up current inventory. The form that displays the inventory doesn't need to know what database your customer is running. Generally you're better served by it not knowing. Likewise, your code that accesses the respective database, whether it be SQL Server, SQLite, or Access, doesn't really need to know what your Inventory form is going to do with the data it is retrieving. All your Inventory code should be doing is displaying your inventory in a way that's most useful to your customer, and all your data coding should be doing is getting the data that is requested of it.
The route I would probably take in your situation is to create a data provider class. Inside that class is where you would encapsulate logic for the different database functions you may have, as well as the different database systems your customers may have. Say for instance a store owner just received a shipment of products and needs to add one to his store's inventory. Ideally, your program should simply be able to perform a call like DataProvider.AddInventory(). Inside the DataProvider class, you would write code to keep track of which database solution the customer is using as well as an implementation of logic for each of the database solutions you'd like to support. Ideally, you should implement every data function you may need your application to perform so that it can be called very simply like the AddInventory() example.
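A minimal sketch of that data provider idea in VB.NET, assuming an Inventory table and connection strings of your own (all names here are made up for illustration, and the SQLite side assumes the third-party System.Data.SQLite ADO.NET provider):

```vb
' Sketch only: the interface your forms code against, plus one implementation
' per database engine. Table, column, and connection details are hypothetical.
Public Interface IDataProvider
    Sub AddInventory(ByVal productCode As String, ByVal quantity As Integer)
End Interface

Public Class SqlServerDataProvider
    Implements IDataProvider

    Public Sub AddInventory(ByVal productCode As String, ByVal quantity As Integer) _
            Implements IDataProvider.AddInventory
        Using cn As New System.Data.SqlClient.SqlConnection(
                "Server=MYSERVER;Database=Retail;Trusted_Connection=True;")
            cn.Open()
            Using cmd As New System.Data.SqlClient.SqlCommand(
                    "UPDATE Inventory SET Quantity = Quantity + @qty WHERE ProductCode = @code", cn)
                cmd.Parameters.AddWithValue("@qty", quantity)
                cmd.Parameters.AddWithValue("@code", productCode)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Class

Public Class SQLiteDataProvider
    Implements IDataProvider

    Public Sub AddInventory(ByVal productCode As String, ByVal quantity As Integer) _
            Implements IDataProvider.AddInventory
        ' Same logic, different provider; only this class knows it is SQLite.
        Using cn As New System.Data.SQLite.SQLiteConnection("Data Source=retail.db")
            cn.Open()
            Using cmd As New System.Data.SQLite.SQLiteCommand(
                    "UPDATE Inventory SET Quantity = Quantity + @qty WHERE ProductCode = @code", cn)
                cmd.Parameters.AddWithValue("@qty", quantity)
                cmd.Parameters.AddWithValue("@code", productCode)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Class
```

Your forms only ever see IDataProvider; a small factory (or a setting read at startup) decides whether to hand back the SQL Server or the SQLite implementation.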
Implementations of data providers can be as simple or complex as you like. In some cases where you're going to have multiple applications written in multiple different languages on multiple different platforms accessing your data source from multiple locations, it may make sense to write some sort of middleware. In your case, it sounds like this is the type of application that will reside "in house" and should be served fine by abstracting the data access to a separate class.

Related

How to generate a RowPointer that works with Infor's Syteline software's SQL database

We currently use Infor Syteline to manage our manufacturing and customer information. I am working on an alternative front-end (VB.NET) application that would work seamlessly with our Syteline software to add custom document generation and other functions to a SQL database, and the only problem I am running into is generating the RowPointers that Syteline uses. An example of one of these pointers is below:
F5025224-046D-40DC-95F6-C517303A0D26
Can someone fill me in on what these pointers do, how I can replicate them, and why standard IDs aren't used to reference rows? The goal is to be able to add entries to the SQL database that are identical to those created by Syteline, so that both applications can be used to manage customer data, production data, etc.
Thanks
You can generate them with SELECT NEWID(). They are GUIDs (SQL Server uniqueidentifier values), so they are always unique and you can use them to filter on one specific row; they will never have a duplicate.
Not sure what you mean by standard ID.
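If you need to generate the value from VB.NET instead of in T-SQL, Guid.NewGuid() produces the same kind of value as NEWID(). A minimal sketch (the Syteline table and column names below are hypothetical, so check your actual schema):

```vb
Imports System.Data.SqlClient

Module RowPointerDemo
    Sub Main()
        ' Client-side equivalent of SELECT NEWID() on the server.
        Dim rowPointer As Guid = Guid.NewGuid()

        Using cn As New SqlConnection("Server=.;Database=Syteline;Trusted_Connection=True;")
            cn.Open()
            ' Hypothetical insert that supplies the RowPointer explicitly so the
            ' row looks like one Syteline created itself.
            Using cmd As New SqlCommand(
                    "INSERT INTO customer (cust_num, name, RowPointer) VALUES (@num, @name, @rp)", cn)
                cmd.Parameters.AddWithValue("@num", "C000123")
                cmd.Parameters.AddWithValue("@name", "Acme Ltd")
                cmd.Parameters.AddWithValue("@rp", rowPointer)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module
```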

How to create a SQL database within VB.net?

I am currently creating a VB.NET program in which users upload a song file to the program, and it is then saved within the program's files. I have set up the actual saving of the files, but would also like to store some metadata for each one in a SQL database within my program.
I have looked online and although I now understand the basics of SQL, I'm still a little fuzzy on how you actually implement this within VB.NET. I have already added the import (Imports System.Data.SqlClient) but failed to work out how to begin coding in SQL.
The basics of what I'm trying to achieve is an If statement that will determine whether or not a SQL database has been created in a specific location, and if it hasn't, it should create it.
All constructive answers appreciated, thanks.
There are a number of different database engines available. The namespace that you have chosen contains the ADO.NET client classes for Microsoft SQL Server. You would use a connection string to specify how to connect to the database. This would often contain connection information, such as server name, user name, password etc, but it sounds like you want to store data locally.
There is a local version of SQL Server called LocalDB, but I think you would still need quite a lot of the SQL Server components installed for that to work. Although you can package these with your application, they may be too large for you, so you may want to look at SQL Server Compact Edition, which is much smaller, allows you to package the whole engine as part of your application, and is useful for storing data locally. Compact Edition doesn't have quite all of the features that LocalDB does, so you may want to compare the features available for each.
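To give a feel for the "create it if it doesn't exist" requirement from the question, here is a minimal sketch using SQL Server Compact Edition; the file path and table definition are placeholders:

```vb
Imports System.IO
Imports System.Data.SqlServerCe

Module DatabaseSetup
    Sub Main()
        Dim dbPath As String = "C:\MyApp\Songs.sdf"   ' hypothetical location
        Dim connStr As String = "Data Source=" & dbPath

        ' Create the database file only if it is not already there.
        If Not File.Exists(dbPath) Then
            Using engine As New SqlCeEngine(connStr)
                engine.CreateDatabase()
            End Using

            ' Create a table for the song metadata.
            Using cn As New SqlCeConnection(connStr)
                cn.Open()
                Using cmd As New SqlCeCommand(
                        "CREATE TABLE Songs (Id INT IDENTITY PRIMARY KEY, Title NVARCHAR(200), Artist NVARCHAR(200))", cn)
                    cmd.ExecuteNonQuery()
                End Using
            End Using
        End If
    End Sub
End Module
```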
Although you can use the ADO.NET objects to connect to a database, I think most people these days would use a layer on top that transfers data back and forth between objects in memory and the database. This also allows you to use LINQ to query the database in most cases. I personally use Entity Framework, so you might want to look into that. There are different ways of configuring EF, so you may want to look at a tutorial. Once you have it set up, you will probably find it much easier and safer to work with than writing SQL manually.
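As a rough sketch of what the Entity Framework route can look like in VB.NET (this assumes the EF "Code First" DbContext API from the EntityFramework package, and the entity/property names are invented; by default EF also creates the database on first use if it doesn't exist, which covers your If-statement requirement):

```vb
Imports System.Data.Entity

' Hypothetical entity and context for the song metadata.
Public Class Song
    Public Property Id As Integer
    Public Property Title As String
    Public Property Artist As String
End Class

Public Class MusicContext
    Inherits DbContext
    Public Property Songs As DbSet(Of Song)
End Class

Module EfDemo
    Sub Main()
        ' EF creates the database on first use if it doesn't exist yet.
        Using db As New MusicContext()
            db.Songs.Add(New Song With {.Title = "Some Song", .Artist = "Some Artist"})
            db.SaveChanges()

            ' LINQ instead of hand-written SQL.
            Dim found = From s In db.Songs
                        Where s.Artist = "Some Artist"
                        Select s.Title
        End Using
    End Sub
End Module
```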

SQL vs. NoSQL database for 'tags-heavy' CRM application

I'm building a talent management CRM application and I'm having trouble choosing between a SQL or NoSQL database for my data.
The application will only have a few 'core' entities (Person, Job, Company, Interview), and will rely heavily on 'tagging' of those entities. You can add Tags and Notes to a Person, a Job, a Company, and then sort/search data by those tags.
What I have learned about NoSQL is that I can just have a Person object (document) with an array of Tags and Notes, whereas in SQL I would need separate Tags and Notes tables and would have to construct joins to gather all my data for a Person.
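Schematically, the difference I mean looks something like this (all names invented):

```vb
Imports System.Collections.Generic

' Document-style (NoSQL): tags and notes live inside the Person document.
Public Class PersonDocument
    Public Property Name As String
    Public Property Tags As New List(Of String)
    Public Property Notes As New List(Of String)
End Class

' Relational (SQL): tags are rows in separate tables, joined at query time:
'
'   SELECT t.Name
'   FROM Tag t
'   INNER JOIN PersonTag pt ON pt.TagId = t.Id
'   WHERE pt.PersonId = @personId
```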
Could anyone give me some pointers on what would be the way to go for my particular scenario?
Thanks!
Our ERP system is based on UniData (NoSQL). It is okay for performing the standard tasks needed to do business, like entering customers, creating sales orders, invoicing, etc. But when it comes to creating reports that were not originally foreseen, it is quite cumbersome. The system only lets you create reports off of one table; if you need data from another table, you have two options: 1. create what is called a virtual attribute for every field you need to look up from a different table, or 2. write a UniBasic program to retrieve the data needed.
To meet most of our business needs on the reporting front, it is more beneficial for us to export the data to SQL and then build the reports there. The result is that the reports run quicker from SQL, and most of the time a reporting tool can be used to create them; this can usually be done by a power user, as opposed to someone who has to have quite a high level of programming ability just to build a report.
It would have been nice if it had already been in SQL in the first place.
But maybe some other NoSQL database has better functionality than UniData. That said, third-party support for NoSQL database engines usually comes at a higher premium than for SQL engines, because fewer specialists are available.

What is the fastest way for me to take a query and turn it into a refreshable graph of the results set?

I often find myself writing one-off queries to answer someone's question or troubleshoot something, and I would like to be able to quickly expose the on-demand, refreshable results of the query graphically, so that I can share those results with others without having to go through the process of creating an SSRS report and publishing it to a Reporting Services server.
I have thought about using Excel to do this, or maybe running a local SSRS server, but both of these options are still labor intensive, and I cannot justify the time they would take since no one has officially requested that I turn this data into a report.
The way I see it, the business I work for has invested money in me creating these queries, which often return potentially useful data that other people in the organization might want. But since the results aren't exposed in any way, and people may not even realize they want this data, the potential value of the queries is not realized. I want to increase the company's return on investment in all these one-off queries that I and other developers write by exposing the results graphically, so that they can be browsed by others and then potentially turned into more formalized SSRS reports if they provide enough value to justify the development.
What is the fastest way for me to take a query and turn it into a refreshable graph of the results set?
Why don't you simply use what you may already have: Excel. You can import data via an ODBC / Oracle / SQL connection. Get Data... and bam, you can run the query, format it right in the spreadsheet, and provide sorting etc. All you need to supply is the database name and a user name and password to connect to the db.
JonH is right regarding Excel's built-in ODBC support, but I have had tons of trouble with this. In my case, the ODBC connection required the client software to be installed so that it could use the encryption methods, etc. Also, even if that were not the case, the user (I believe) would still have to manually install and set up an ODBC connection.
Now if you just want something on your machine to do the queries and refresh them, JonH's solution is great and my caveats are probably irrelevant. But if you want other users to have access, you should consider having a middle-man app (basically a PHP script, assuming a web server is an option for you) that runs a query, transforms the results into XML, and outputs it as "report-xyz.xml". You can then point anybody running a newer version of Excel to that address and they can very easily import the data into Excel with no overhead (basically a kind of web service).
Keep in mind, I don't think you should have a web script that allows users to make arbitrary queries against your database server! You would have some admin page where you pass the query in and a new XML file with the results gets made. So my idea is also based on the assumption that you want to run the same queries over and over without any specifics passed in. (If you did need parameters passed in, I'd look into finding a pre-built web services bridge for your database that already has security features built in; then you could let users make the limited changes allowed.)
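The suggestion above is a PHP script, but the same middle-man idea in VB.NET (to match the rest of this thread) is just a query run through a DataAdapter and dumped with DataSet.WriteXml; the connection string, query, and output path below are placeholders:

```vb
Imports System.Data
Imports System.Data.SqlClient

Module ReportExporter
    Sub Main()
        Dim connStr As String = "Server=.;Database=Reports;Trusted_Connection=True;"
        Dim query As String = "SELECT OrderId, CustomerName, Total FROM Orders"  ' hypothetical

        ' Run the query into a DataSet, then write it out as XML.
        Dim ds As New DataSet("Report")
        Using da As New SqlDataAdapter(query, connStr)
            da.Fill(ds, "Results")
        End Using

        ' Excel users can then import this file as an XML data source.
        ds.WriteXml("C:\inetpub\wwwroot\report-xyz.xml", XmlWriteMode.WriteSchema)
    End Sub
End Module
```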

What strategies are available for migrating Access databases to SQL server-based applications?

I'm considering undertaking a project to migrate a very large MS Access application to a new system based on SQL Server. The existing system is essentially an ERP application with a couple of dozen users, all sharing the Access database over the network. The database has around 300 tables and lots of messy VBA code. This system is beginning to break down (actually, it's amazing it has worked as long as it has).
Due to the size and complexity of the Access application, a 'big bang' approach is not really feasible. It seems sensible to rope off chunks of functionality and migrate them piecemeal to the new system. During the migration process, which I expect to take several months, there may be a need for both databases to be in operation and be able to query and modify data in both systems.
I have considered using something like the ADO.NET Entity Framework to implement a data abstraction layer to handle this, but as far as I can tell, the Entity Framework has no Access provider.
Does my approach seem reasonable? What other strategies have people used to accomplish similar goals?
You may find that the main problem is using the MS Access JET engine as the backend. I'm assuming that you do have an Access FE (frontend) with all objects except tables, and a BE (backend - tables only).
You may find that migrating the data to SQL Server, and linking the Access FE to that, would help alleviate problems immediately.
Then, if you don't want to continue to use MS Access as the FE, you could consider breaking it up into 'modules', and redesign modules one by one using a separate development platform.
We faced a similar situation a few years ago, but we knew from the beginning that we'd have to switch to SQL Server one day, so all the code was written to work from an Access client against both Access AND SQL Server databases.
The idea of having a 'one-step' migration to SQL Server is certainly the easier way to manage this on the database side, and there are many tools for that. But, depending on the way your client app talks to the database, your code might then not work properly. If, for example, your code includes a lot of SQL instructions (or generates them on the fly by, for example, adding filters to SELECT instructions), your syntax might not be SQL Server compatible: Access wildcards, dates, and functions will not work on SQL Server.
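For example (a made-up query, just to show the kind of dialect differences involved):

```vb
' Access/Jet dialect: * wildcard, # date delimiters, IIf/Nz functions.
Dim jetSql As String = "SELECT * FROM Customers WHERE Name LIKE 'A*' AND Created >= #01/01/2010#"

' SQL Server (T-SQL) equivalent: % wildcard, quoted dates, CASE/ISNULL instead.
Dim tSql As String = "SELECT * FROM Customers WHERE Name LIKE 'A%' AND Created >= '2010-01-01'"
```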
In addition to this, and as said by @mjv, the other drawback of a one-time switch to MS SQL is that you will inherit many of the problems from the original database: wrong or inappropriate field names, inappropriate primary/foreign key policies, hidden one-to-many relations that you'd like to implement in the new database model, etc.
I'll propose here some principles and rules to implement a 'soft transition' solution, which clearly fits your situation best. Just to say that it's not going to be easy, but it's definitely very interesting, particularly when dealing with 300 tables! Lucky you!
I assume here that you have the ability to update the client code, and that you'd prefer to keep the same client interface at all times. It is of course possible to have two different interfaces at transition time, one for each database, but this will be very confusing for the users and a permanent source of frustration for them.
In my view, the best solution strongly depends on:
1. The original connection technology, and the way data is managed in your client's code: Access linked tables, ODBC, ADODB, recordsets, local tables, form recordsources, batch updating, etc.
2. The possibilities to split your tables and your app into 'mostly independent' modules.
And you will not be spared the following mandatory activities:
Set up a transfer procedure from the Access database to SQL Server. You can use already existing tools (the Access Upsizing Wizard is very poor, so do not hesitate to buy a real one, like SSW or EMS SQL Manager, which are very powerful) or build your own with Visual Basic. If your plan is to make some changes to the data definition, you'll definitely have to write some code. Keep in mind that you will run this code many, many times, so make sure that it includes all the time-saving instructions that will allow you to restart the process from the start as many times as you want. You will have to choose between two basic strategies when importing data:
a - DELETE the existing record, then INSERT the imported record
b - UPDATE the existing record from the imported record
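A minimal sketch of strategy (a) in VB.NET, wrapped in a transaction so a failed import can be re-run cleanly (the table, columns, and connection string are hypothetical):

```vb
Imports System.Data.SqlClient

Module ImportStep
    ' Strategy (a): delete any existing copy of the record, then insert the
    ' imported one, so the step can be restarted as many times as needed.
    Sub ImportCustomer(ByVal connStr As String, ByVal legacyId As Integer, ByVal name As String)
        Using cn As New SqlConnection(connStr)
            cn.Open()
            Using tx As SqlTransaction = cn.BeginTransaction()
                Using del As New SqlCommand("DELETE FROM Customers WHERE LegacyId = @id", cn, tx)
                    del.Parameters.AddWithValue("@id", legacyId)
                    del.ExecuteNonQuery()
                End Using
                Using ins As New SqlCommand(
                        "INSERT INTO Customers (LegacyId, Name) VALUES (@id, @name)", cn, tx)
                    ins.Parameters.AddWithValue("@id", legacyId)
                    ins.Parameters.AddWithValue("@name", name)
                    ins.ExecuteNonQuery()
                End Using
                tx.Commit()
            End Using
        End Using
    End Sub
End Module
```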
If you plan to switch to new primary/foreign key types, you'll have to keep track of the old identifiers in your new database model during the transition period. Do not hesitate to switch to GUID primary keys at this stage, especially if the plan is to replicate data on multiple sites one of these days.
This transfer procedure will be divided into modules corresponding to the 'logical' modules defined previously, and you should be able to run any of these modules independently (keeping in mind, of course, that they'll probably have to be run in a specific order, where the 'customers' module has to run before the 'invoicing' module).
Implement in your client's code the possibility to connect to both the original MS Access database and the new SQL Server database. Ideally, you should be able to manage both connections from within your code for displaying and validating data.
This possibility will be implemented module by module, where you will have, for each of them, a 'trial period', i.e. the possibility to choose at testing time between the Access connection and the SQL connection when using the module. Once testing is done and complete, the module can then be run in exclusive SQL Server mode.
During the transfer period, which can last a few months, you will have to manage programmatically the database constraints that exist between 'SQL Server' modules and 'Access' modules. Going back to our customers/invoicing example, the customers module will be switched to SQL Server first. Before the invoicing module can be switched, you'll have to implement programmatically the one-to-many relation between Customers and Invoices, where each of the tables will be in a different database. Such a constraint can be implemented on the Invoice form by populating the Customers combo box with the Customers recordset from the SQL server.
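For instance, during the transition the (still Access-based) Invoice form could fill its Customers combo straight from SQL Server, along these lines in Access VBA (an ADO reference is required, and all names are placeholders):

```vb
' Access VBA sketch: populate a combo box on an Access form from the
' Customers table that has already moved to SQL Server.
Sub FillCustomerCombo()
    Dim rs As New ADODB.Recordset
    rs.Open "SELECT CustomerId, CustomerName FROM Customers ORDER BY CustomerName", _
            "Provider=SQLOLEDB;Data Source=MYSERVER;" & _
            "Initial Catalog=ERP;Integrated Security=SSPI;"

    Dim items As String
    Do While Not rs.EOF
        items = items & rs!CustomerId & ";" & rs!CustomerName & ";"
        rs.MoveNext
    Loop
    rs.Close

    Me.cboCustomer.RowSourceType = "Value List"
    Me.cboCustomer.RowSource = items
End Sub
```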
My proposal is to build your modules following your database model, always beginning with the 'one' side tables of your 'one-to-many' relations: basic lists like 'Units', 'Currencies', and 'Countries' shall be switched first. You'll get a first hands-on experience in writing data transfer code and managing a second connection in your client interface. You'll then be able to 'go up' in your database model, switching the 'products' and 'customers' tables (where units, countries and currencies are foreign keys) to the new server.
Good luck!
I would second the suggestion to upsize the back end to SQL Server as step 1.
I would never go to the suggested Step 2, though (i.e., replacing the Access front end with something else). I would instead suggest investing the effort in fixing the flaws of the schema, and adjusting the Access app to work with the new schema.
Obviously, it is never the case that everything just works hunky-dory when you upsize: some things that were previously quite fast will be dogs, and some things that were previously quite slow will be fast. And I've found that the problems are very often not where you anticipate they will be. You can only figure out what needs to be fixed by testing.
Basically, anything that works poorly gets re-architected, or moved entirely server-side.
Leverage the investment in the existing Access app rather than tossing all that out and starting from scratch. Access is a fine front end for a SQL Server back end as long as you don't assume it's going to work just the same way as it would with a Jet/ACE back end.
...thinking out loud... I think this may work.
It appears that the complexity of the application resides in the various VBA modules rather than in the database tables/schema themselves. A possible migration path could therefore be to first migrate the data storage to SQL Server, exactly as-is, as follows:
1. Prevent any change to the data for a few hours.
2. Duplicate all tables to the SQL server; be sure to create the same indexes as well.
3. Create linked tables via an ODBC source pointing to the newly created tables on SQL Server (see the VBA sketch after this list).
4. These linked tables should have the very same names as the original tables (which therefore may need to be renamed, say with a leading underscore, so they can be kept around for reference).
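If the linking step is scripted rather than done by hand, it can look something like this in Access VBA (server, database, and table names are placeholders):

```vb
' Access VBA sketch: link one SQL Server table into the Access front end
' under the original table name. Run once per table (loop over TableDefs).
Sub LinkSqlServerTable()
    Dim connect As String
    connect = "ODBC;Driver={SQL Server};Server=MYSERVER;" & _
              "Database=MyErpDb;Trusted_Connection=Yes;"

    ' The local table "Customers" was renamed to "_Customers" beforehand.
    DoCmd.TransferDatabase acLink, "ODBC Database", connect, _
                           acTable, "dbo.Customers", "Customers"
End Sub
```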
Now, the application can be restarted and should be using the SQL tables rather than the Access tables. All logic should work as before (right...), with possible slowness to be expected depending on the distance between the two machines.
All the above could be tested in about a day's work or so; the most tedious being the creation of the tables on SQL server (much of that can be automated, I'm sure). The next most tedious task is to assert that the application effectively works as previously, but with its storage on SQL.
EDIT: As suggested by a comment, I should stress that there is a [fair?] possibility that the application would not readily work so smoothly with a SQL Server back end, and could require weeks of hard work in testing and fixing. However, unless some of these difficulties can be anticipated because of insight into the application not expressed in the question, I propose that attempting the "as-is" migration to SQL Server should be considered; after all, it may just work with minimal effort, and if it doesn't, we'd know this very quickly. This is therefore a high-return, low-risk proposal.
The main advantage sought with this approach is that there will be a single storage location during the (as the OP expects) rather long period in which the old Access application will co-exist with the new application.
The drawback of this approach is that, at least at first, the schema of the original database is reproduced verbatim, i.e. including some of its known quirks and inherited legacy idiosyncrasies. These schema issues (and the underlying application logic) can be corrected in time, but this is of course less easy than if the new application started ab initio with its own separate storage and distinct schema.
After the storage is moved to SQL Server, the most used and/or most independent modules of the Access application can be re-written in the new application, and as significant portions of the original application are ported, effective usage by select beta testers or by actual users can start to be switched to the new application.
Possibly, some kind of screen-scraping-based logic or some other system could be used to produce a hybrid application that would provide the end users with a comprehensive application which sometimes works from the new logic and sometimes from the original MS Access program.