I am looking for a performant default policy for dealing with the .dbo prefix.
I realize that the dbo. prefix is more than syntactic noise; however, I got through the past 8 years of MS-based development skipping typing the dbo. prefix and ignoring its function.
Apart from a performance issue with stored proc compile locks, is there a downside to skipping typing the "dbo." prefix in SQL queries and stored procedures?
Further background: All my development is web middle-tier based with integrated security based on a middle tier service account.
[dbo].[xxx]
The SQL Server engine always parses the query into pieces; if you don't use the prefix, it will first look for an object of that name under the caller's default schema/user before it falls back to [dbo]. I would suggest you follow the prefix convention, not only to satisfy best practice but also to avoid performance glitches and keep the code scalable.
I don't know whether this answers your question, but that is just my knowledge to share.
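To make that concrete, here is a minimal sketch of the resolution behaviour (the schema and table names are hypothetical): if the caller's default schema is not [dbo], an unqualified name is resolved against that schema first, and only then against [dbo].
-- Hypothetical illustration; assumes a user whose default schema is [sales].
CREATE SCHEMA sales AUTHORIZATION dbo;
GO
CREATE TABLE dbo.Orders   (OrderID int NOT NULL);
CREATE TABLE sales.Orders (OrderID int NOT NULL);
GO
SELECT * FROM Orders;      -- for that user this resolves to sales.Orders, not dbo.Orders
SELECT * FROM dbo.Orders;  -- schema-qualifying removes the ambiguity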
Most of the time you can ignore it. Sometimes you will have to type it. Sometimes when you have to type it you can just type an extra '.':
SELECT * FROM [LinkedServer].[Database]..[Table]
You'll need to start watching for it if you start using extra schemas a lot more, where you might have two schemas in the same database that both have tables with the same name.
The main issue is not security but name-conflict resolution, in case your application is ever deployed side by side with another application that uses the same names in the database.
If you package and sell your product, I would strongly advise using schemas, for the sake of your customers. If you develop for one particular shop, then it is not so much of a concern.
Yes, you can ignore it - for the most part - if you never create anything outside the (default) "dbo" schema. One place you can't ignore it is when calling a user-defined function - that always has to use the "two-part" notation:
select * from dbo.myFunc
However, it is considered a best practice to always use the "dbo." prefix (or other schema prefixes, if your database has several schemas).
Marc
"however I got through the past 8 years of MS based development skipping typing the dbo. prexfix and ignoring its function."
This is your answer. If your DB is performing fine you are OK. Best practices don't replace real testing on your actual system. If your performance is fine and maintenance is OK, your time is better spent elsewhere where you can get better bang for your proverbial buck.
After working in the Oracle world, I would advise against skipping the schema declaration. Remember that SQL Server versions after 7.0 support multiple schemas per database - it is important to distinguish between them to be sure that you are grabbing the proper tables.
If you can ensure that you'll never have two separate schema namespaces per database, then you can ignore it. The dbo. prefix should do nothing to affect performance by itself - parsing is such a small part of the SQL query as to be insignificant.
Here in the company I work, we have a support tool that, among other things, provides a page that allows the user to run SELECT queries. It should prevent the user from running UPDATE, INSERT, DELETE, DROP, etc. Besides that, every select statement is accepted.
The way it works is by executing
SELECT * FROM (<query>)
so any statement besides a SELECT should fail due to a syntax error.
In my opinion, this approach is not enough to prevent an attack, since anything could change the outer query and break the security. I argued that, along with that solution, it should also check the syntax of the inner query. My colleagues asked me to prove that the current solution is unsafe.
To test it, I tried to write something like
SELECT * from dual); DROP table users --
But it failed because of the ; character that is not accepted by the SQL connector.
So, is there any way to append a modification statement in a query like that?
By the way, it is Oracle SQL.
EDIT:
Just to make it clear: I know this is not a good approach. But I must prove it to my colleagues to justify a code change. Theoretical answers are good, but I think a working injection would be more convincing.
The protection is based on the idea/assumption that "update queries" are never going to produce a result table (which is what it would take to make it a valid sub-expression to your SELECT FROM (...) ).
Proprietary engines with proprietary language extensions might undermine that assumption. And although admittedly it still seems unlikely, in the world of proprietary extensions there really is some crazy stuff flying around so don't assume too lightly.
Maybe also beware of expression compilers that coerce "does not return a table" into "an empty table of some kind". You know. Because any system must do anything it can to make the user action succeed instead of fail/crash/...
And maybe also consider that if "query whatever you like" is really the function that is needed, then your DBMS most likely already has some tool or component that actually allows that ... (and is even designed specifically for the purpose).
I'm going to assume that it's deemed acceptable for users to see any data accessible from that account (as that is what this seems designed to do).
It's also fairly trivial to perform a Denial of Service with this, either with an inefficient query, or with select for update, which could be used to lock critical tables.
Oracle is a feature-rich DB, and that means there are likely a variety of ways to run DML from within a query. You would need to find an inline PL/SQL function that allows you to perform DML or has other side effects. It will depend on the specific schema as to what packages are available - the XML DB packages have some mechanisms to run arbitrary SQL, the UTL_HTTP packages can often be used to launch network attacks, and the Java functionality is quite powerful.
The correct way to protect against this is to use the DB security mechanisms - run this against a read-only schema (one with query privs only on the tables).
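As a rough sketch of that last point (the account, schema, and table names here are hypothetical), the tool would connect as an account that can only query:
-- Hypothetical Oracle sketch: a reporting account with query privileges only.
CREATE USER report_ro IDENTIFIED BY "a_strong_password";
GRANT CREATE SESSION TO report_ro;
GRANT SELECT ON app.users TO report_ro;
GRANT SELECT ON app.invoices TO report_ro;
-- Even a successful injection then has no INSERT/UPDATE/DELETE/DROP privileges to abuse.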
I have recently started working on a web application at a new company and am starting to bring in a few changes; for instance, all my new code uses stored procedures rather than building query strings in the code-behind.
I recently discussed using database schemas with my supervisor to tidy up our database and keep things manageable. The application I am working with is an intranet ASP.Net site with multiple sections for each department. I would like to implement a schema for each department so it is clear which procedures / tables belong to which parts of the application.
My question is would it be bad practice to add new tables / procedures etc into a database schema but leave everything else as it is then slowly add the existing tables to schemas? Or should we go through and add everything to schemas in one go?
Also is there any risk (performance or otherwise) to having some tables within a schema and some just within the database as they are now?
Sorry for what is probably a rather simple question but I'm not a DBA and have struggled to find any answers about this so far.
Thanks
I think the incremental/refactoring approach of moving to schemas as you go is a fine approach. I see no performance issues from it. The biggest issue will be remembering to start schema-qualifying your objects (which you should be doing anyway, even today, as it is a best practice for performance):
SELECT columns
FROM dbo.Table
instead of
SELECT columns
FROM Table
If you are consistent with your schema qualifying you should be fine. If you get lazy with it, you'll end up having more troubleshooting to do.
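If it helps, here is a rough sketch of the incremental approach (the schema and table names are hypothetical): new objects are created in the departmental schema straight away, and existing dbo objects are transferred as you get to them.
-- Hypothetical sketch only.
CREATE SCHEMA hr AUTHORIZATION dbo;
GO
-- New objects go straight into the new schema:
CREATE TABLE hr.LeaveRequests (RequestID int IDENTITY(1,1) PRIMARY KEY, EmployeeID int NOT NULL);
GO
-- Existing objects can be moved later, one at a time:
ALTER SCHEMA hr TRANSFER dbo.Employees;
-- Remember to update any code that still references dbo.Employees.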
Schemas are not widely used. You'll find that some database source control tools and code generators don't play well with schemas. Even the database itself is uncomfortable with schemas and will, for example, refuse to reuse a cached execution plan if there's a chance of a security mismatch (schemas can have their own security settings).
Like triggers, schemas are best avoided altogether.
Well, it seems like such a simple solution to the many problems that can arise from insecure services and applications. But I'm not sure if it's possible, or maybe nobody's thought of this idea yet...
Instead of leaving it up to programmers/developers to ensure that their applications use stored procedures, parameterised queries, escaped strings, etc. to help prevent SQL injection and other attacks - why don't the people who make the databases just build these security features into the database, so that when an UPDATE or INSERT is performed, the database secures/sanitizes the string before it is inserted?
The database would not necessarily know the context of what is going on. What is malicious for one application is not malicious for another. Sometimes the intent IS to
drop table users--
It is much better to let the database do what it does best, arranging data. And let the developers worry about the security implementations.
The problem is that the database cannot readily tell whether the command it is requested to execute is legitimate or not - it is syntactically valid and there could be a valid reason for the user to request that it be executed.
There are heuristics that the DBMS could apply. For example, if a single request combined both a SELECT operation and a DELETE operation, it might be possible to infer that this is more likely to be illegitimate than legitimate - and the DBMS could reject that combined operation. But it is hard to deal with a query where the WHERE condition has been weakened to the point that it shows more data than it was supposed to. A UNION query can deliberately select from multiple tables. It is not sufficient to show that there is a weak condition and a strong condition OR'd together - that could be legitimate.
Overall, then, the problem is that the DBMS is supposed to be able to execute a vast range of queries - so it is essentially impossible to be sure that any query it is given to execute is, or is not, legitimate.
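For instance, both of the following are syntactically valid, and only the application knows whether they are legitimate (the table and column names are hypothetical):
SELECT * FROM orders WHERE customer_id = 42 OR 1 = 1;   -- weakened WHERE clause
SELECT name, email FROM customers
UNION ALL
SELECT username, password_hash FROM app_users;          -- UNION pulling in another table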
The proper way to access the database is with stored procedures. If you were using SQL Server and C#/VB.NET you could use LINQ to SQL, which allows you to build the query in the language, which then gets turned into a parameterized query. Good stuff.
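For what it's worth, here is a hedged T-SQL sketch (the table and parameter names are hypothetical) of the parameterized form such a layer ends up sending, where the value travels as data rather than as part of the SQL text:
DECLARE @sql nvarchar(200) = N'SELECT * FROM dbo.Users WHERE UserName = @name';
EXEC sp_executesql @sql, N'@name nvarchar(50)', @name = N'O''Brien';
-- The parameter value cannot change the structure of the statement.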
Is using MS SQL IDENTITY columns good practice in enterprise applications? Doesn't it create difficulties for business logic and for migrating the database from one platform to another?
Personally I couldn't live without identity columns and use them everywhere; however, there are some reasons to think about not using them.
Originally the main reason not to use identity columns, AFAIK, was distributed multi-database schemas (disconnected) using replication and/or various middleware components to move data. There was simply no distributed synchronization machinery available and therefore no reliable means to prevent collisions. This has changed significantly, as SQL Server now supports distributing identity ranges. However, their use still may not map onto more complex application-controlled replication schemes.
They can leak information. Account IDs, invoice numbers, etc. If I get an invoice from you every month, I can ballpark the number of invoices you send or customers you have.
I run into issues all the time with merging customer databases and all sides still wanting to keep their old account numbers. This sometimes makes me question my addiction to identity fields :)
Like most things, the ultimate answer is "it depends"; the specifics of a given situation should carry a lot of weight in your decision.
Yes, they work very well, are reliable, and perform the best. One big benefit of using identity fields versus not is that they handle all of the complex concurrency issues of multiple callers attempting to reserve new IDs. This may seem trivial to code, but it's not.
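For example (a minimal sketch; the table name is hypothetical), the engine hands out the new value and lets you read back the one generated in your own scope:
CREATE TABLE dbo.Orders (OrderID int IDENTITY(1,1) PRIMARY KEY, CustomerID int NOT NULL);
INSERT INTO dbo.Orders (CustomerID) VALUES (42);
SELECT SCOPE_IDENTITY() AS NewOrderID;  -- the identity value generated by this session and scope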
These links below offer some interesting information about identity fields and why you should use them whenever possible.
DB: To use identity column or not?
http://www.codeproject.com/KB/database/AgileWareNewGuid.aspx?display=Print
http://www.sqlmag.com/Article/ArticleID/48165/sql_server_48165.html
The question is always:
What are the chances that you're realistically going to migrate from one database to another? If you're building a multi-db app it's a different story, but most apps don't ever get ported over to a new db midstream - especially when they start out with something as robust as SQL Server.
The identity construct is excellent, and there's really very few reasons why you shouldn't use it. If you're interested, I wrote a blog article on some of the common myths surrounding identity values.
The IDENTITY Property: A Much-Maligned Construct in SQL Server
Yes.
They generally work as intended, and you can use the DBCC CHECKIDENT command to manipulate and work with them.
The most common idea of an identity is to provide an ordered list of numbers on which to base a primary key.
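For example (a sketch with a hypothetical table name), you can check or reseed the current identity value:
DBCC CHECKIDENT ('dbo.Orders', NORESEED);      -- report the current identity value without changing it
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000);  -- reseed; the next row normally gets 1001 (seed + increment)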
Edit: I was wrong about the fill factor; I didn't take into account that all of the inserts would happen on one side of the B-tree.
Also, In your revised question, you asked about migrating from one DB to another:
Identities are perfectly fine as long as the migration is one-way replication. If you have two databases that need to replicate to each other, a uniqueidentifier column may be your best bet.
See: When are you truly forced to use UUID as part of the design? for a discussion on when to use a UUID in a database.
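As a rough sketch (the table and constraint names are hypothetical), such a column can default to a sequential GUID so that rows created in either database never collide:
CREATE TABLE dbo.Customers
(
    CustomerID uniqueidentifier NOT NULL
        CONSTRAINT DF_Customers_ID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Customers PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);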
Good article on identities, http://www.simple-talk.com/sql/t-sql-programming/identity-columns/
IMO, migrating to another RDBMS is rarely needed these days. Even if it is needed, the best way to develop portable applications is to develop a layer of stored procedures isolating your application from proprietary features:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/02/24/writing-ansi-standard-sql-is-not-practical.aspx
Is there an incantation of mysqldump or a similar tool that will produce a piece of SQL2003 code to create and fill the same databases in an arbitrary SQL2003 compliant RDBMS?
(The one I'm trying right now is MonetDB)
DDL statements are inherently database-vendor specific. Although they have the same basic structure, each vendor has their own take on how to define types, indexes, constraints, etc.
DML statements on the other hand are fairly portable. Therefore I suggest:
1. Dump the database without any data (mysqldump --no-data) to get the schema.
2. Make the necessary changes to get the schema loaded on the other DB - these need to be done by hand, but some search/replace may be possible (see the example below).
3. Dump the data with extended inserts off and no create table (--extended-insert=0 --no-create-info).
4. Run the resulting script against the other DB.
This should do what you want.
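As an illustration of the hand edits in step 2 (the table is hypothetical, and whether the target accepts this exact syntax depends on the RDBMS), MySQL-specific DDL such as
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
would typically be rewritten closer to SQL:2003 as
CREATE TABLE users (
  id INTEGER GENERATED ALWAYS AS IDENTITY,
  name VARCHAR(100),
  PRIMARY KEY (id)
);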
However, when porting an application to a different database vendor, many other things will be required; moving the schema and data is the easy bit. Checking for bugs introduced, different behaviour and performance testing is the hard bit.
At the very least test every single query in your application for validity on the new database. Ideally do a lot more.
This one is kind of tough. Unless you've got a very simple DB structure with vanilla types (varchar, integer, etc.), you're probably going to get the best results by writing a migration tool. In a language like Perl (via the DBI), this is pretty straightforward. The program is basically an echo loop that reads from one database and inserts into the other. There are examples of this sort of code that Google knows about.
Aside from the obvious problem of moving the data is the more subtle problem of how some datatypes are represented. For instance, MS SQL's datetime field is not in the same format as MySQL's. Other datatypes like BLOBs may have a different capacity in one RDBMS than in another. You should make sure that you understand the datatype definitions of the target DB system very well before porting.
The last problem, of course, is getting application-level SQL statements to work against the new system. In my work, that's by far the hardest part. Date math seems especially DB-specific, while annoying things like quoting rules are a constant source of irritation.
Good luck with your project.
In SQL Server 2000 or 2005 you can have it generate scripts for your objects, but I am not sure how well they will transfer to other RDBMSs.
The generate script option is probably the easiest way to go. You'll undoubtedly have to do some search/replace on a few data types though.