I need some clarification on databases and backend.
I am currently using Microsoft SQL Server with a database connection. Does that mean the SQL server uses a SQL database? Or is it using a proprietary SQL Server database?
I don't understand the concept of a database and its associated platforms. SQL Server uses T-SQL to talk to the database. But can I use SQLite or MySQL to connect to the same database?
Applications are often split into a "front-end" and a "back-end"; this is often an informal distinction, but it can also refer to a "client/server" architecture or an "n-tier" architecture.
The "front-end" is typically the user interface; the "backend" is everything else.
One of the key tasks of "backend" components is storing information.
There are lots of ways to do that; if your application domain is a good fit for relational data, it's common to use a relational database. The most common technology for relational databases (RDBMS) is SQL, a standardized language for defining relational databases (data definition language, DDL) and for retrieving and modifying data (data manipulation language, DML).
There are many implementations of SQL - SQLite, MySQL, Postgres, Oracle, and MS SQL Server are all examples. Each implementation has its strengths and weaknesses; they all adhere to the SQL specification to some degree, and many have extensions beyond the SQL specification such as stored procedures, support for XML, free-text search, etc.
Theoretically, it is possible to migrate between SQL implementations (e.g. from Oracle to MS SQL Server); in practice, this is usually very hard. To the best of my knowledge, there is no RDBMS which adheres 100% to the standard; one probably wouldn't be hugely useful anyway, because the standard defines the language rather than the implementation, and the implementation is what makes the database useful. For instance, the way the database implements indexes has a huge influence on performance.
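To make the dialect point concrete, here is a hedged sketch: the same "what time is it on the server?" query written for a few common implementations (treat the exact forms as an illustration rather than a reference):

```sql
-- ANSI/ISO standard form: CURRENT_TIMESTAMP is defined by the SQL standard
SELECT CURRENT_TIMESTAMP;                 -- works as-is on SQL Server, PostgreSQL, MySQL

-- Vendor-specific equivalents
SELECT GETDATE();                         -- MS SQL Server (T-SQL extension)
SELECT SYSDATE FROM DUAL;                 -- Oracle (requires the DUAL dummy table)
SELECT NOW();                             -- MySQL / PostgreSQL
SELECT datetime('now');                   -- SQLite
```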
T-SQL is a proprietary language, originally developed by Sybase (MS SQL Server was originally based on Sybase code); it supports things like stored procedures, variables, and control flow. As far as I know, MS SQL Server and Sybase (now SAP ASE) are the only platforms with support for T-SQL.
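For a flavour of what T-SQL adds on top of plain SQL, here is a small hedged sketch (the Orders table is made up):

```sql
-- T-SQL (MS SQL Server / Sybase) procedural extensions: variables, control flow, PRINT.
-- These generally will not run on other RDBMSs.
DECLARE @total INT;

SELECT @total = COUNT(*)
FROM   Orders;            -- hypothetical table

IF @total > 100
    PRINT 'Busy day';
ELSE
    PRINT 'Quiet day';
```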
Your database connection specifies a few things:
- which driver to use to connect to the database. The driver understands how to connect across the physical hardware (e.g. network), and how to convert your instructions into something the server understands.
- the location of the server, and which particular database to use (one server can contain multiple databases)
- credentials for authentication and authorization.
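For example (a hedged sketch; the server name, database, and credentials below are invented), a typical ADO.NET-style connection string for SQL Server bundles those pieces of information together:

```
Server=db01.example.com,1433;Database=SalesDb;User Id=app_user;Password=s3cret;
```

The driver is implied by which client library you hand the string to (e.g. the SqlClient provider for SQL Server); with other mechanisms such as ODBC or JDBC, the driver is named explicitly.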
Related
I have a SQL query I want to optimize, so I asked the database owner what version of SQL they were using (since ordinary methods didn't seem to have support). They answered that my version of SQL is not decided by them but by my local SQL client. They claimed to use a system called "DB2", with support for multiple SQL dialects.
I then went on to ask our IT department which version of SQL our client was using (that client being Squirrel SQL). After some fiddling around they logged on to the database, queried it and reported the version of SQL to be DB2.[sequence of decimals].
This is probably a stupid question, but would someone mind clarifying this?
Is the version of SQL I'm using determined by the database I'm accessing or by my SQL client?
If the version of SQL is decided by the database, then which version of SQL is DB2 associated with? Does it use its own version?
"They answered that my version of SQL is not decided by them but by my local SQL client"
That's complete and utter nonsense - those people apparently have no idea what they are talking about.
The SQL dialect that is understood by the server is only defined by that server. The client has absolutely nothing to do with that. If the database server doesn't support some specific feature, no SQL client will change that.
There is an industry standard called ANSI SQL that database vendors implement. Then on top of that they tack on non-standard proprietary stuff, extra commands, keywords, procedural stuff like stored procedures and triggers and cursors, that are not covered by a standard but which they expect will provide useful features that will differentiate them from the competition.
For Db2 11’s compliance with standards see https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.common.doc/doc/c0011215.html. The actual spec is behind a paywall so this is not that helpful. See https://www.whoishostingthis.com/resources/ansi-sql-standards/#sql-ansi-standards-for-database-administration for an explanation of ANSI SQL standards.
Different Db2 products (z/OS, LUW) might have different extensions. z/OS has to do horrible mainframey stuff that LUW can do without. But you wouldn't be given a choice; you have to use the commands implemented by the database you are connected to. The SQL client doesn't have a role in this.
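To illustrate (a hedged sketch, not something from the original question): the statement below is written in Db2's dialect, and whether it runs is decided entirely by the Db2 server it is sent to, no matter which client (Squirrel SQL or anything else) submits it:

```sql
-- Db2 special registers and Db2's one-row dummy table (analogous to Oracle's DUAL)
SELECT CURRENT SERVER, CURRENT TIMESTAMP
FROM   SYSIBM.SYSDUMMY1
FETCH FIRST 1 ROW ONLY;   -- row-limiting syntax Db2 shares with the SQL standard
```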
After many years of working with SQL databases, it feels uncomfortable to work with a database that doesn't rely on a schema to model the data.
I understand that SQL and NoSQL solutions have their places for different business needs and goals, but I don't have any experience with NoSQL databases.
But since I discovered that Microsoft SQL Server also has support for working with JSON data (https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-2017), I wonder:
Can I always default to SQL Server for any (new) application I might need to create and use this flexibility of JSON querying when needed?
That would mean I don't have to wrap my head around considering between SQL Server OR MongoDB OR both. I could just use SQL Server always and be good to go.
A similar consideration of mine concerns graph databases: SQL Server vs Neo4j (https://learn.microsoft.com/en-us/sql/relational-databases/graphs/sql-graph-architecture?view=sql-server-2017).
Sure, SQL Server's graph support is inferior to Neo4j, which is specialized for that task, but it seems that Microsoft is trying to create a one-size-fits-all database solution that every project could rely on.
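For reference, here is a hedged sketch of what SQL Server's graph syntax (2017+) looks like; the tables and data are made up, and I'm not claiming parity with Neo4j's Cypher:

```sql
-- Node and edge tables are ordinary tables created AS NODE / AS EDGE
CREATE TABLE Person (id INT PRIMARY KEY, name NVARCHAR(100)) AS NODE;
CREATE TABLE FriendOf AS EDGE;

-- Traversal uses MATCH inside a normal SELECT
SELECT p2.name
FROM   Person p1, FriendOf f, Person p2
WHERE  MATCH(p1-(f)->p2)
  AND  p1.name = N'Alice';
```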
Nowadays most databases offer a JSON data type for columns in a table.
But a relational database still doesn't provide the same solutions as a NoSQL database.
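As a hedged sketch of what SQL Server's JSON support looks like in practice (table and column names are invented):

```sql
-- JSON is stored in an ordinary NVARCHAR(MAX) column; SQL Server 2016+ adds functions over it
SELECT OrderId,
       JSON_VALUE(OrderDetails, '$.customer.name') AS CustomerName
FROM   Orders                         -- hypothetical table
WHERE  ISJSON(OrderDetails) = 1;

-- OPENJSON shreds a JSON array into a relational rowset
SELECT *
FROM OPENJSON(N'[{"id":1,"name":"Ann"},{"id":2,"name":"Bob"}]')
     WITH (id INT '$.id', name NVARCHAR(50) '$.name');
```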
I am working on an application that will need to communicate with many different applications running on different database platforms. I will know the table schema before runtime, but I won't know the database platform (MS SQL 200x, Oracle 9i/10g, MySQL 4.0.1/5.x, Sybase, etc.) until runtime.
It's my understanding that each of these systems has a slightly different dialect. Do I need to use NHibernate to handle the differences when connecting to these systems, or can I use ADO.NET and pass raw SQL strings (select * from table)?
If you only need to use ANSI SQL statements, which should be implemented by all of the databases, then yes, you can just use ADO.NET.
In my experience the main problem with database-agnostic code is the use of surrogate keys, like sequences or autonumber fields, as all databases implement these differently.
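As a hedged illustration of those differences (table names are made up), the same auto-generated surrogate key looks quite different per vendor:

```sql
-- MS SQL Server: IDENTITY column
CREATE TABLE customers (id INT IDENTITY(1,1) PRIMARY KEY, name VARCHAR(100));

-- MySQL: AUTO_INCREMENT
CREATE TABLE customers (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));

-- PostgreSQL: SERIAL (a sequence under the hood)
CREATE TABLE customers (id SERIAL PRIMARY KEY, name VARCHAR(100));

-- Oracle (classic approach): an explicit sequence used at insert time
CREATE SEQUENCE customers_seq;
-- INSERT INTO customers (id, name) VALUES (customers_seq.NEXTVAL, 'Ann');
```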
If you do need to use features that differ across databases then I don't think that it is reason enough to go to an object relational mapper like NHibernate - only do that if you have other reasons to do so. You can implement your own handling of syntax differences by generating different SQL for different databases easily enough.
SQL is supposed to be standardized across all DBs, but they don't all use the same syntax, so it really depends on what SQL you're calling. For example, SQL Server uses TOP while Oracle uses ROWNUM. Even for statements the standard nominally covers, syntactic differences between DBMSes can be an issue.
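To make that concrete, here is a hedged example (the orders table is invented) of "give me the first 10 rows" in a few dialects:

```sql
SELECT TOP 10 * FROM orders;                     -- MS SQL Server / Sybase
SELECT * FROM orders WHERE ROWNUM <= 10;         -- Oracle (pre-12c style)
SELECT * FROM orders LIMIT 10;                   -- MySQL / PostgreSQL / SQLite
SELECT * FROM orders FETCH FIRST 10 ROWS ONLY;   -- SQL:2008 standard syntax (Db2, PostgreSQL, Oracle 12c+)
```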
If select * from table is all you want, then there shouldn't be a problem, other than performance hits.
Is there a Windows XP utility to create a database in such a way that it's supported by SQL Server, Oracle, and other DB management systems?
The database schema is very large, so I would like to know what to use to keep it portable from SQL Server to Oracle if future demands require that change.
In short, what you seek is nearly impossible to do successfully. Every database product has enough quirks that such a database would not perform well and would be too limiting in terms of the features you could use. That is, you have to play the lowest-common-denominator game with respect to the features implemented by all of the products you want to support. A far better solution is to abstract the data layer into its own library accessed via interfaces so that you can swap out your data layer. ORMs, as Rafael E. Belliard suggested, make this simpler, but it can also be done manually.
I would recommend building your database using an ORM like Hibernate for Java (or NHibernate for .NET). This would allow you to seamlessly transition from one database type to the other with little to no issues. They would allow you to logically create the database schema without a specific database in mind, which you could then move from one database to the other.
I have created applications which change from SQL Server to MySQL to Oracle to MS Access to SQLite easily (clients love that flexibility).
However, you would need to know your way around programming...
My team is looking into geospatial features offered by different database platforms.
Are all of the implementations database-specific, or is there an ANSI SQL standard, or a similar type of standard, which is being offered or will be offered in the future?
I ask, because I would like the implemented code to be as database agnostic as possible (our project is written to be ANSI SQL standard).
Is there any known plan for standardization of this functionality in the future?
Currently, there is more than one specification followed by popular proprietary and open-source implementations of spatial databases:
The OpenGIS - Simple Features for SQL
ISO SQL Multimedia Specification for Spatial - ISO/IEC 13249-3:2006 - Information technology -- Database languages -- SQL multimedia and application packages -- Part 3: Spatial
PostGIS, Oracle, Microsoft SQL Server and, to a more limited degree, MySQL all implement the standard interfaces for manipulating spatial data. However, in spite of these fairly standardized features, the databases usually differ at the plain SQL level, which may make a database-agnostic implementation of your solution tricky. You will likely need to survey the features you are interested in and compare what the various vendors provide.
For example, the GIS extensions for MySQL and for PostgreSQL both follow the OpenGIS "Simple Features Specification for SQL" standard.
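As a hedged sketch of what that OpenGIS-style interface looks like (the places table, geometry column, and coordinates are made up):

```sql
-- OGC Simple Features functions (ST_*), as exposed by PostGIS and MySQL.
-- SQL Server offers similar functionality as methods, e.g. geom.STDistance(@point).
SELECT name
FROM   places                          -- hypothetical table with a geometry column "geom"
WHERE  ST_Distance(geom, ST_GeomFromText('POINT(13.4 52.5)', 4326)) < 0.01;
       -- note: the distance units depend on the spatial reference system in use
```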
I haven't tried it, but Google tells me FDO is "an open-source API for manipulating, defining and analyzing geospatial information regardless of where it is stored". It's listed on osgeo.org - a point in its favour in my opinion.
There are providers for MySQL and Oracle. Disappointingly, though, SQL Server and PostGIS aren't listed on the FDO providers page.
The only standard I know of is http://www.opengeospatial.org/standards/sfs and I don't know how well all the spatial database extensions implement it.
There are a number of geo-databases that are accessible with Hibernate Spatial:
- Oracle 10g
- PostgreSQL
- MySQL
Using an abstraction layer like Hibernate is a good idea anyway if you plan to write a database-agnostic application. Hibernate Spatial fills this gap for geo features.