This stems from an earlier SO question.
If you have to perform actions on the file system, are you usually better off writing an application to handle those actions and making calls to SQL Server from that app? In what situations is using xp_cmdshell a good idea?
It is just another tool to be used. As with all tools, use it when it fits. Some people may have very strong opinions one way or another, but at the end of the day, it is there.
SQL Server 2005 introduced sp_xp_cmdshell_proxy_account, which alleviates the privilege issue somewhat, so xp_cmdshell becomes more useful.
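For reference, enabling xp_cmdshell and configuring the proxy account comes down to a couple of system procedures; a minimal sketch (the Windows account name and password are placeholders):

-- Enable xp_cmdshell (it is off by default; requires sysadmin)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Create the proxy account that non-sysadmin callers will run xp_cmdshell under
EXEC sp_xp_cmdshell_proxy_account 'CONTOSO\cmdshell_proxy', 'placeholder-password';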
Consider the powder-keg question: Is it generally bad to allow people to carry guns (the correlation being that guns are dangerous)? Cue arguments...
When dealing with 3rd party apps where you don't have access to their source code, SQL Server may be the only or at least the most convenient place to put the logic needing access to the file system. Creating another app is just one more thing to worry about.
Security does become an issue, but additional privileges can be granted to users: http://msdn.microsoft.com/en-us/library/ms175046.aspx
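Concretely, letting a non-sysadmin user call xp_cmdshell is a GRANT in the master database (the user name below is hypothetical):

USE master;
GO
-- The caller's login needs a user in master that can be granted the permission
GRANT EXECUTE ON xp_cmdshell TO [app_batch_user];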
I am new to Yii. I want to know whether SQL injection or any other kind of hacking is possible in a Yii web application. If it is possible, how do I avoid the problem?
Yes. Any "hacking" is possible in any web application.
Because no software makes an application safe; a programmer does. Yii is only a tool, and how it is used is entirely up to the person using it.
So you have to learn how to use Yii, and learn technology and security basics in general. Without that education, which cannot be gained by asking and answering a single question, you cannot create a safe application.
To make this answer not entirely off topic: as long as you're using Yii ActiveRecord, you can consider your code safe from SQL injection, because AR takes care of building the SQL queries for you.
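For what it's worth, the principle behind that safety is parameter binding: user input travels as data, never as part of the SQL text. A minimal illustration of the same idea in T-SQL (this is the database-side equivalent, not Yii code; the table and values are made up):

DECLARE @UserInput nvarchar(50) = N'O''Brien; DROP TABLE Users;--';

-- The input is bound to @Name and treated purely as a value, never as SQL
EXEC sp_executesql
     N'SELECT Id, Name FROM dbo.Users WHERE Name = @Name',
     N'@Name nvarchar(50)',
     @Name = @UserInput;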
Yes. It depends on how the programmer uses the code and whether it is executed correctly. Try reading the Yii documentation; it shows how to use the framework properly and guard against SQL injection.
Yes. As the saying goes, "security is insecurity", and that is a big issue in web security.
Everything is hackable, but it depends on the security of the system and the power of the machine doing the attacking. A hacker trying to break into a website with an ordinary PC might need millions of years, while a quantum computer might break it within a second.
A web application built with the Yii PHP framework may also be hackable, although the framework does provide strong security measures.
This question might not be perfect for this platform but I trust people here a lot.
I am coding a data transfer application for an ERP system; many people build C# projects for this that are slow, inflexible, and full of errors. I found a way to do it in SQL Server with stored procedures. But I don't want my SPs to be stolen by other programmers, since SPs are effectively open source. I create my SPs with encryption, but I found a small application on the internet that can show the contents of encrypted SPs.
Is there any other way of encrypting and securing my SPs in sql server?
Is your application entirely written in Stored Procedures with no other code? How likely do you think it is that someone really will want to steal your code? I only ask because usually, if someone wants the code badly enough, they will find a way to get it. If they will go so far as to counterfeit an entire Apple store then someone can always crack any encryption scheme.
The best way to protect your intellectual property (i.e. the code) is to simply not give it to anyone: host it as a service. Outside of that obvious, simple, yet not always possible option, here are some things to consider:
For T-SQL stored procedures you can certainly deter the lower-end thieves by using the built-in encryption. Yes, you found a way to decrypt it by searching around, but not everyone will do that or know what to look for. This is not much of a barrier, but again, it is a very easy step that weeds out folks who are just poking around.
You can put some of your code into SQLCLR (i.e. .NET C# or VB via SQL Server's "CLR Integration" feature/API), though this won't work for everything, nor would it be a good choice for everything. But for any code that is more efficient to do in SQLCLR, it is also harder to get at the source, since it lives in an assembly. Again, it is not impossible, and there are free tools out there to decompile assemblies, but it does raise the bar a bit: someone would have to extract the assembly to a DLL and then decompile it (I believe one tool will even extract it from SQL Server, but that is still harder to come by than "how to decrypt an encrypted stored procedure").
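For context, the T-SQL side of wiring up a SQLCLR procedure looks roughly like this; the assembly, namespace, class, and method names are hypothetical:

-- Register the compiled .NET assembly inside the database
CREATE ASSEMBLY TransferLogic
FROM 'C:\deploy\TransferLogic.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Expose one of its static methods as a regular stored procedure
CREATE PROCEDURE dbo.usp_RunTransfer
AS EXTERNAL NAME TransferLogic.[Erp.Transfer.StoredProcedures].RunTransfer;
GO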
With regards to .NET code (definitely for a stand-alone app, and possibly also for SQLCLR code), it is also possible to obfuscate the assemblies such that they are very difficult, at best, to decompile. There are products such as Red-Gate's SmartAssembly that can do this.
Be better than your competition:
Innovate and offer better features (i.e. a better product). Listen to customers and make their lives easier. Even if someone does end up getting your code, they can't steal you. Stolen code might teach them something, but it is essentially stagnant compared to what you should be producing.
Offer better service. Be personable and answer questions thoroughly, respectfully, and with a smile (even silly / stupid questions--if you need to, vent to a friend, but never in writing). Sure, some customers decide to purchase purely on price, but service/support is usually a large factor in both getting and retaining customers.
So, if you can find a quick and easy means of doing this, then great. But don't spend too much time on it when your time is better spent improving your product. Besides, unless you came up with some highly complex algorithm, most things can be reverse-engineered if someone is smart enough. But if the folks you are worried about were that smart, would their software be "slow, not flexible and full of errors"? And along those same lines (and just to have it stated), the other software being "slow, not flexible and full of errors" has nothing to do with it being written in C# (especially not the "not flexible and full of errors" part): those projects simply aren't written well ;-).
Yes, there is a way: simply use the WITH ENCRYPTION option in your stored procedure's definition.
Example
-- WITH ENCRYPTION obfuscates the procedure's source text in the catalog
CREATE PROCEDURE spEncryptedProc
WITH ENCRYPTION
AS
BEGIN
    SELECT 1
END
GO
Now try to see the definition of that procedure.....
exec sp_helptext 'spEncryptedProc'
Result: The text for object 'spEncryptedProc' is encrypted.
Make sure you personally keep a copy of the stored procedure's source somewhere else, because once it is encrypted you cannot see the procedure definition yourself.
I am in the planning stages of writing a new program, and there's a feature I'd like to include, but I was wondering if it's too much for non-technical users to handle, or if it invites potential problems.
My program is a C# app with a SQL db. I'd like to use a version of SQL Server that would allow the db to be accessed from multiple computers (btw, I'd definitely build it so that only one computer at a time could have the db open). The user would be able to install the program on multiple computers, but if they tried to open it while it's open on another computer, they'd get a message that it can't be opened at this computer until it's closed on the other one (and I don't feel bad about that restriction).
For non-technical users, even a standard next-next-next setup can be confusing and intimidating. I was wondering if including this ability might result in the install being too complicated, or if there are too many other things that could go wrong, making the feature not worth the potential downside. (I want to keep support and troubleshooting as low as possible.) I appreciate your opinions.
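For what it's worth, one way a one-computer-at-a-time rule is sometimes enforced on the SQL Server side is an application lock; a minimal sketch the app could run right after connecting (the lock name is made up):

DECLARE @result int;

-- Try to take an exclusive, session-scoped lock; fail immediately if another
-- connection already holds it (i.e. the db is already "open" elsewhere)
EXEC @result = sp_getapplock
     @Resource    = N'MyApp.SingleInstance',
     @LockMode    = 'Exclusive',
     @LockOwner   = 'Session',
     @LockTimeout = 0;

IF @result < 0
    RAISERROR(N'The database is already open on another computer.', 16, 1);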
I really can't envision a scenario where you would have complete, singular access to a database unless your users were very technical and performing surgical-like techniques with the database.
If this is a regular application for multiple users, then the application should be coded to handle multiple users without the expense of queuing them serially.
If you have no problems with this restriction of one-user-at-a-time, then a database is really a bad idea, as almost all modern database systems are meant for access by multiple users at one time.
If I understand correctly, to get the multiple-computer feature you need to add a "next-next-next" setup, to install the database?
I don't think that's going to be an issue for non-technical users; most programs do that anyway. However, those same non-technical users might freak out when their application firewall tells them that "application SQL Server is trying to accept connections from the outside".
Time and again, I've seen people here and everywhere else advocating avoidance of nonportable extensions to the SQL language, this being the latest example. I recall only one article stating what I'm about to say, and I don't have that link anymore.
Have you actually benefited from writing portable SQL and dismissing your dialect's proprietary tools/syntax?
I've never seen a case of someone taking pains to build a complex application on MySQL and then saying "You know what would be just peachy? Let's switch to (PostgreSQL|Oracle|SQL Server)!"
Common libraries in, say, PHP do abstract the intricacies of SQL, but at what cost? You end up unable to use efficient constructs and functions, for a presumed glimmer of portability you most likely will never use. This sounds like textbook YAGNI to me.
EDIT: Maybe the example I mentioned is too snarky, but I think the point remains: if you are planning a move from one DBMS to another, you are likely redesigning the app anyway, or you wouldn't be doing it at all.
Software vendors who deal with large enterprises may have no choice (indeed that's my world) - their customers may have policies of using only one database vendor's products. To miss out on major customers is commercially difficult.
When you work within an enterprise you may be able to benefit from the knowledge of the platform.
Generally speaking, the DB layer should be well encapsulated, so even if you had to port to a new database the change should not be pervasive. I think it's reasonable to take a YAGNI approach to porting unless you have a specific requirement for immediate multi-vendor support. Make it work with your current target database, but structure the code carefully to enable future portability.
The problem with extensions is that you need to update them when you're updating the database system itself. Developers often think their code will last forever, but most code will need to be rewritten within 5 to 10 years. Databases tend to survive longer than most applications, since administrators are smart enough not to fix things that aren't broken, so they often don't upgrade their systems with every new version.
Still, it's a real pain when you upgrade your database to a newer version yet the extensions aren't compatible with that one and thus won't work. It makes the upgrade much more complex and demands more code to be rewritten.
When you pick a database system, you're often stuck with that decision for years. When you pick a database and a few extensions, you're stuck with that decision for much, much longer!
The only case where I can see it necessary is when you are creating software the client will buy and use on their own systems. By far the majority of programming does not fall into this category. To refuse to use vendor-specific code is to ensure that you have a poorly performing database, as vendor-specific code is usually written to improve the performance of certain tasks over ANSI standard SQL and is written to take advantage of the specific architecture of that database. I've worked with databases for over 30 years and never yet have I seen a company change their backend database without a complete application rewrite as well. Avoiding vendor-specific code in this case means that you are harming your performance for no reason whatsoever most of the time.
I have also used a lot of different commercial products with database backends through the years. Without exception, every one of them was written to support multiple backends and, without exception, every one of them was a miserable, slow dog of a program to actually use on a daily basis.
In the vast majority of applications I would wager there is little to no benefit, and even a negative effect, from trying to write portable SQL; however, in some cases there is a real use case. Let's assume you are building a time-tracking web application and you'd like to offer a self-hosted solution.
In this case your clients will need to have a DB server. You have some options here. You could force them to use a specific DBMS, which could limit your client base. If you can support multiple DBMSs, then a wider pool of potential clients can use your web application.
If you're corporate, then you use the platform you are given
If you're a vendor, you have to plan for multiple platforms
Longevity for corporate:
You'll probably rewrite the client code before you migrate DBMS
The DBMS will probably outlive your client code (Java or C# against a 1980s mainframe)
Remember:
SQL within a platform is usually backward compatible, but client libraries are not. You are forced to migrate if the OS cannot support an old library, or security environment, or driver architecture, or a 16-bit library, etc.
So, assume you had an app on SQL Server 6.5. It still runs, with a few tweaks, on SQL Server 2008. I bet you're not using the same client code...
There are always some benefits and some costs to using the "lowest common denominator" dialect of a language in order to safeguard portability. I think the dangers of lock-in to a particular DBMS are low, when compared to the similar dangers for programming languages, object and function libraries, report writers, and the like.
Here's what I would recommend as the primary way of safeguarding future portability. Make a logical model of the schema that includes tables, columns, constraints and domains. Make this as DBMS independent as you can, within the context of SQL databases. About the only thing that will be dialect dependent is the datatype and size for a few domains. Some older dialects lack domain support, but you should make your logical model in terms of domains anyway. The fact that two columns are drawn from the same domain, and don't just share a common datatype and size, is of crucial importance in logical modelling.
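In SQL Server terms, the closest thing to a domain is an alias (user-defined) type, which at least makes "drawn from the same domain" explicit in the physical schema; a small sketch with made-up names:

-- One definition, reused wherever the domain applies
CREATE TYPE dbo.CustomerCode FROM char(8) NOT NULL;
GO

CREATE TABLE dbo.Customer (
    CustomerCode dbo.CustomerCode PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Invoice (
    InvoiceId    int IDENTITY PRIMARY KEY,
    CustomerCode dbo.CustomerCode REFERENCES dbo.Customer (CustomerCode)
);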
If you don't understand the distinction between logical modeling and physical modeling, learn it.
Make as much of the index structure portable as you can. While each DBMS has its own special index features, the relationship between indexes, tables, and columns is just about DBMS independent.
In terms of CRUD SQL processing within the application, use DBMS specific constructs whenever necessary, but try to keep them documented. As an example, I don't hesitate to use Oracle's "CONNECT BY" construct whenever I think it will do me some good. If your logical modeling has been DBMS independent, much of your CRUD SQL will also be DBMS independent even without much effort on your part.
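For example, a hierarchy query that Oracle handles with CONNECT BY can be written in the more widely portable recursive-CTE form; a minimal sketch against a hypothetical employee table:

WITH EmployeeTree AS (
    -- Anchor: the root(s) of the hierarchy
    SELECT EmployeeId, ManagerId, 0 AS Depth
    FROM dbo.Employee
    WHERE ManagerId IS NULL
    UNION ALL
    -- Recursive step: walk down one level at a time
    SELECT e.EmployeeId, e.ManagerId, t.Depth + 1
    FROM dbo.Employee AS e
    JOIN EmployeeTree AS t ON e.ManagerId = t.EmployeeId
)
SELECT EmployeeId, ManagerId, Depth
FROM EmployeeTree;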
When it comes time to move, expect some obstacles, but expect to overcome them in a systematic way.
(The word "you" in the above is to whom it may concern, and not to the OP in particular.)
I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance.
There are really two parts to your question:
Is native log shipping good enough?
If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full time DBA. In environments like this, the native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
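If you do stay on the native feature, the built-in monitor tables at least give you the raw numbers for that kind of alerting; a rough sketch of a "how far behind are we" check, run on the monitor or secondary server:

SELECT  secondary_server,
        secondary_database,
        last_restored_date,
        DATEDIFF(MINUTE, last_restored_date, GETDATE()) AS minutes_behind,
        restore_threshold   -- the alert threshold configured for this database
FROM    msdb.dbo.log_shipping_monitor_secondary;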
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it.
Have you considered mirroring instead? Here is some documentation to help you determine whether you could do that.
If you decide to roll your own, here's a nice guide.
I'm assuming you're going this route because Enterprise Edition is so costly?
If you don't need a "live-backup", but really just want a frequently updated backup, I think this approach makes a lot of sense.
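At its core, hand-rolled log shipping is just a backup/copy/restore loop; a minimal sketch (the database name and file paths are made up):

-- On the primary: take a transaction log backup
BACKUP LOG MyAppDb
    TO DISK = N'\\backupshare\MyAppDb\MyAppDb_202401011200.trn';

-- On the secondary, after copying the file over: restore it without recovering,
-- so that subsequent log backups can still be applied
RESTORE LOG MyAppDb
    FROM DISK = N'D:\logship\MyAppDb_202401011200.trn'
    WITH NORECOVERY;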
One more thing:
Make sure you regularly verify that your backup strategy is working.
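A cheap first-level check is that the backup files are at least readable, though nothing replaces an actual test restore; a sketch (the path is made up):

RESTORE VERIFYONLY
    FROM DISK = N'\\backupshare\MyAppDb\MyAppDb_202401011200.trn';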
I'm pretty sure it's available in Standard, since we're doing some shipping, but I'm not sure about the Workgroup edition - it's pretty stripped down.
I'm always in favor of the packaged solution, but mostly because I trust a whole team of MSFT developers more than I trust myself, but that comes with a price for sure. I'd second that any solution you roll on your own has to come with a lag-notification piece so that you'll know immediately if it isn't working - how many times do we only find out backup solutions aren't working when somebody needs a backup? Also, think honestly about how much time it will take you to design and roll your own solution, including bug fixes and maintenance - can you really do it more cheaply? Maybe you can, but maybe not.
Also, one problem we ran into with Workgroup edition is that it only supports 5 connections at once, and it seems to start dropping connections if you get more users than that, so we had to upgrade to Standard. We were getting ASP.NET errors that our connections were closed if we left them unattended for even a few seconds, which caused us all kinds of problems.
I would expect this to be close to the last place you'd want to save a few bucks, especially given the likely consequences if you screw up. Would you rather have your job on the line? I don't think I'd even take this on unless I felt I had a good chance of getting it right.
What's your personal upside benefit in this?
I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it here.
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only thing started.