A question for which I already know there is no pretty answer.
I have a third-party application that I cannot change. The application's database has been converted from MS Access to SQL Server 2012. The program connects via ODBC and does not care about the backend; it sends pretty straightforward SQL that seems to work nicely on SQL Server as well.
There is, however, a problem with one table named "PLAN", which I already know is a SQL Server reserved keyword.
I know that you would normally access such a table with square brackets, but since I'm not able to change the SQL, I was wondering if there is any "ugly" hack that can either override a keyword or transform the SQL on the fly.
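For example (not the vendor's actual SQL, just an illustration), the first statement below fails while the bracketed form works, but the application will never send the brackets:
SELECT * FROM PLAN      -- fails: incorrect syntax near the keyword 'PLAN'
SELECT * FROM [PLAN]    -- fine, but I can't make the application emit this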
You could try to edit the third-party application with a hex editor. If you find the string PLAN, change it to something like PPAN and then rename the table, views, etc. to match. If you catch every occurrence it could work, but of course it is an ugly hack.
I think you are screwed, I'm afraid. The only other approaches I could suggest are:
Intercepting the network packets before they hit SQL Server, which is clearly quite complicated. See https://reverseengineering.stackexchange.com/questions/1617/server-side-query-interception-with-ms-sql-server and in particular this answer: https://reverseengineering.stackexchange.com/a/1816
Decompiling the program in order to change it, if it's a Java or .NET app for instance.
I suspect you're hosed. You could
Wire up the 3rd party app to a shim MS Access database that uses linked tables, where the Access table is nothing but a pass-through to the underlying SQL Server table. What you want to do is:
Rename the offending table (and any keyword-named columns) in the SQL Server schema, as sketched below.
Create the linked tables in Access.
Create a set of views/queries in Access that present the same schema the 3rd party app expects.
Having done that, the 3rd party app should be able to speak "Access SQL" like it always has. Access takes care of the "translation" to T-SQL. Life is good. I suspect you'll take something of a performance hit, since you're proxying everything through Access, but I don't think it'll be huge.
That would be my preferred solution.
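For step 1, the rename on the SQL Server side might look something like this (the replacement names are just placeholders):
EXEC sp_rename 'dbo.[PLAN]', 'PLAN_DATA';                       -- rename the keyword-named table
-- and, if any column is also named PLAN:
-- EXEC sp_rename 'dbo.PLAN_DATA.[PLAN]', 'PLAN_CODE', 'COLUMN';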
The other option would be to write a "shim" DLL that implements the ODBC API and simply wraps the calls to the true ODBC driver. You would capture the requests and rewrite them as necessary before invoking the wrapped DLL method. The tricky part is that your 3rd party app might be going after columns by ordinal position, or by column name, or a mix. That means you might also need to transform the column names on the way back, which could be more difficult than it seems.
WSO2 Identity Server 5.0.0
I am wondering what the full chain of SQL queries would be if I wanted to create and update Service Providers via SQL directly. It's more than just inserting into the SP_APP table, since an entry added that way doesn't show up in the UI. I was looking through the identity-core code and got a little lost, since it seems to abstract away some intricate registry logic.
Anyway, I'd love to know how I could navigate the database to look at stuff by these means.
I would suggest moving your WSO2 IS completely onto SQL Server. You can then see all the tables and watch the statements that are run against them (using something like SQL Server Profiler). To do this, follow the instructions here: https://docs.wso2.com/display/ML111/Setting+up+Microsoft+SQL.
Note that at the end of the document you must run two scripts (not just the one they show):
<PRODUCT_HOME>/dbscripts/mssql.sql
<PRODUCT_HOME>/dbscripts/identity/mssql.sql
I have to admit that the documentation on WSO2 IS is not very good or complete, but they have been good at answering questions. SQL Server was hard to set up as some of the steps were old (as you see they reference SQL Server 2005), but if you know your way around SQL Server pretty well, you can figure out the minor things. The largest issue was the second .sql script that needed to be run.
That should set you up well enough to see what is being called when setting up Service Providers, or anything else that goes into the database.
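For example, after you register a Service Provider through the management console, a trivial starting point is to see what landed in the SP_APP table you mentioned (and to diff the other SP_* tables before and after; I have not mapped the full schema, so treat the related tables as something to explore):
SELECT * FROM SP_APP;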
In SQL Server 2008 R2, I would like to execute a statement that I want to be invisible to the SQL Profiler or other means of observing user queries. Is there a way to control what is displayed by SQL profiler?
I would like to execute something like:
SELECT 'MyPassword' INTO #passwordTable
I don't want to show 'MyPassword' through SQL Server Profiler or other means. Any ideas?
Essentially, no, you can't. You used to be able to do this by adding a comment like this into the batch or statement:
-- sp_password
But this no longer works. Why aren't you hashing your password?
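For reference, the old trick looked something like the line below; it only ever affected what the trace displayed (the batch text was replaced with a notice roughly saying that 'sp_password' was found and the text was hidden for security reasons), and it never protected the value itself:
SELECT 'MyPassword' INTO #passwordTable -- sp_password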
Well, you have to be a server administrator to run SQL Profiler, so even if you could prevent it from seeing the command, that user could just go grab the password table anyway. Ideally you would be storing hashes of the passwords rather than the passwords themselves, making anything seen in the Profiler useless.
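A minimal sketch of the salted-hash approach (the table and column names are made up; SHA1 is the strongest algorithm HASHBYTES offers on 2008 R2, and ideally the hash would be computed in the application so the clear-text value never appears in a trace at all):
DECLARE @password nvarchar(128) = N'MyPassword';
DECLARE @salt varbinary(16) = CRYPT_GEN_RANDOM(16);
INSERT INTO dbo.Credentials (UserName, Salt, PasswordHash)
VALUES (N'someuser', @salt, HASHBYTES('SHA1', @salt + CONVERT(varbinary(256), @password)));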
If you really want to try and keep the profiler from seeing the statements, you could try a third party tool like this: http://www.dbdefence.com/support/dbdefence-documentation/
I have no idea if it works though, or how reputable that company is.
Denis, Aaron is correct: there is nothing like an "invisible statement", and you can't tweak SQL Profiler to NOT show statements. Once a trace is running, it can see every statement executed in the database.
You need to obfuscate this sensitive data before submitting it to the DB. There are several obfuscation methods available (one-way hashes, symmetric algorithms, home-made schemes); you need to choose the one most suitable to your needs and implement it. Unfortunately, there is no free lunch in your case...
I have seen a product called DBDefence.
It hides SQL statements from the Profiler completely. I do not know how they do it.
I use the free version because I have a small database.
In earlier versions of SQL Server it was possible to hide a statement from the trace by adding the comment --sp_password, but this no longer works in SQL Server 2008 and above.
I don't see the point, really. If one is able to view a query with SQL profiler, surely he could access the database to view the actual data.
The key is to not store sensitive data (like passwords) in clear text.
Preventing people from using SQL Profiler comes down to applying the proper security configuration on your SQL Server.
Background
The main application where I work is based heavily on the MUMPS-esque Caché database engine from InterSystems. Everything is stored in global arrays. Getting data out of the system for external reporting ranges from simply being a pain to being egregiously slow and painful.
Caché provides an ODBC driver for the database but unless the global arrays involved happen to be keyed by the selection criteria, it resorts to scans and a simple query will take hours to run. For scale, the entire Caché production namespace is about 100GB. I can write ObjectScript (Intersystems' dialect of MUMPS) programs that pull data much faster than the ODBC driver in these cases.
Part of the problem I think is that the application vendor doesn't use Caché's object persistence support but instead has the SQL tables defined as a façade over the global arrays, and it often doesn't work well for batch requests.
I built a reporting database in MS SQL Server that pulls the most common data (2.5GB worth) and even if it has to scan every table, all results are returned within 3 seconds. Unfortunately, it takes a long time to refresh the data so I can only do a full refresh once a week and an active refresh once a day. This is good enough for most needs, but I want to do better.
I'm on Caché 2007, SQL Server 2008 R2, VS2010 on Windows 7 and Windows Server 2008 R2.
Scope of question
I need a way to integrate live data from the source Caché database with other data on SQL Server. I want to be able to integrate views or table valued functions into a SQL query and have it pull live data from the source db.
The live data must be available within SQL Server for processing. Doing it with a secondary application would be a huge pain and wouldn't work with reporting tools that just expect to push a query over ODBC and get a final dataset in the right format.
I understand that there are ways to get data into SQL Server or accomplish the same general things I want to do. That's not what this question is about.
The data needs to come from ObjectScript programs run on Caché since not all the data I need is exposed through the SQL defined tables and I get the control I need to make the performance usable with ObjectScript.
I am looking for advice on any new options, on how I can improve one of the options I've tried or considered, or on other pros and cons of those approaches.
What I've tried so far
This project has been an exercise in frustration where each promising avenue I've looked into is either terrible or doesn't work for some reason. Often the reason is some unnecessary restriction on SQLCLR assemblies.
Pulling everything through InterSystems' Caché ODBC driver via a linked server. SQL Server often resorts to scans if it can't push conditions to the remote server or has to perform a join locally. A scan of any nontrivial table takes many hours and is unacceptable. Also, the length of many columns is incorrectly defined by the SQL table definitions in Caché; SQL Server doesn't like that and aborts the query. See this SO question. I can't change the table defs, and the vendor doesn't think it's a problem because it works with MS Access.
Using OPENQUERY on demand. This works to some extent, but I can still hit the column length problem from the previous item, and there's no way to parameterize OPENQUERY queries, which makes it pretty useless for pulling contextual data.
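To illustrate the parameterization problem (the linked server name CACHE and the table here are placeholders): the pass-through text has to be a literal, so the only way to get a runtime value into it is to build the whole statement as a string and EXEC that.
SELECT * FROM OPENQUERY(CACHE, 'SELECT AccountId, Balance FROM SQLUser.Account WHERE Branch = ''01''')  -- works
DECLARE @q nvarchar(500) = N'SELECT AccountId, Balance FROM SQLUser.Account WHERE Branch = ''01''';
SELECT * FROM OPENQUERY(CACHE, @q)  -- does not compile; OPENQUERY only accepts a string literal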
Using SQLCLR to call the ODBC data provider through CLR table-valued functions. This takes care of the parameterization and data length issues, although it does require me to define or modify a function each time I need a new piece of data. Unfortunately, not all the data elements I'm interested in are available through SQL. For some things, I need to access the global arrays directly.
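For reference, the T-SQL side of that approach looks roughly like this once the assembly is catalogued (the assembly, class, parameter, and column names here are hypothetical):
CREATE FUNCTION dbo.GetActiveAccounts(@branch nvarchar(10))
RETURNS TABLE (AccountId int, AccountName nvarchar(100))
AS EXTERNAL NAME FacsAccess.[FacsAccess.ActiveAccounts].GetActiveAccounts;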
InterSystems provides an ActiveX control that lets you run ObjectScript programs over TCP on the server and get the results. This works great in a stand-alone C# app, but as soon as I try to make a connection from a SQLCLR assembly I get a ridiculous URI error:
A .NET Framework error occurred during execution of user-defined routine or aggregate "GetActiveAccounts":
System.UriFormatException: Invalid URI: The URI is empty.
System.UriFormatException:
at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
at System.Uri..ctor(String uriString)
at System.ComponentModel.Design.RuntimeLicenseContext.GetLocalPath(String fileName)
at System.ComponentModel.Design.RuntimeLicenseContext.GetSavedLicenseKey(Type type, Assembly resourceAssembly)
at System.ComponentModel.LicenseManager.LicenseInteropHelper.GetCurrentContextInfo(Int32& fDesignTime, IntPtr& bstrKey, RuntimeTypeHandle rth)
at FacsAccess.GetActiveAccounts.Client.connect()
at FacsAccess.GetActiveAccounts.Client..ctor()
at FacsAccess.GetActiveAccounts.E1.GetEnumerator()
See this unanswered SO question. There are other postings about it on the net but no one seems to have a clue. This is an extremely simple COM wrapper over a C++ DLL; it's not doing anything with licensing and has no reason to be in the managed licensing libraries. I wonder if this is some kind of boilerplate that's trying to get the name for an assembly that doesn't have a name because it's been loaded into the SQL database.
InterSystems also provides a more direct unmanaged interface, but it is all C++, which I can't use through P/Invoke, and I can't load a C++/CLI mixed-mode (impure) assembly in SQLCLR.
Options I've considered but seem kind of terrible
I've considered trying the ActiveX control through SQL Server's COM support (the OLE Automation stored procedures), but that's terribly slow and really cumbersome.
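For the record, going that route means something like the sketch below; the ProgID and method name are placeholders, since I'm not even sure what the control registers itself as:
DECLARE @hr int, @obj int, @result nvarchar(4000);
EXEC @hr = sp_OACreate 'InterSystems.SomeControl', @obj OUT;            -- hypothetical ProgID
EXEC @hr = sp_OAMethod @obj, 'Execute', @result OUT, 'MYTAG^MYROUTINE'; -- hypothetical method and argument
EXEC sp_OADestroy @obj;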
I could create an out-of-process service to proxy the traffic, but I can't use .NET Remoting from SQLCLR, you're not supposed to use WCF, and it would be really heavyweight for such a simple interface anyway. I'd sooner roll my own IPC interface.
I could write some kind of extra unmanaged wrapper with a C style interface for the VisM or CacheDirect interfaces and access THAT through P/Invoke.
It doesn't seem like this should be so hard but it's really driving me up the wall and I need some perspective.
I think you can use ODBC via a linked server accessing stored procedures on the Cache database that are visible to the ODBC driver and that return result sets, but are not implemented using SQL.
I am 100% certain you can create such stored procedures and access them via ODBC, but I have never tried accessing them from SQL Server as a linked server. Even if the linked server doesn't work, it seems like it would be preferable to go through the InterSystems ODBC driver rather than the ActiveX control or CacheDirect.
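From the SQL Server side, calling such a procedure could look roughly like this; the linked server name CACHE and the projected procedure name are assumptions, since the name depends on the package and class you define it in:
SELECT * FROM OPENQUERY(CACHE, 'CALL SQLUser.MyClass_OneGlobal(''ACCT'')')
EXEC ('CALL SQLUser.MyClass_OneGlobal(?)', 'ACCT') AT CACHE  -- allows a parameter placeholder if RPC Out is enabled on the linked server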
I have an example of such a procedure for this question.
In case that link dies, here is the code:
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [SqlProc]
{
}
ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
// Seed the cursor state with a reference to the requested global
S qHandle="^"_GlobalName
Quit $$$OK
}
ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
// Nothing to clean up
Quit $$$OK
}
ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
S Q=qHandle
// Walk to the next node of the global, using indirection on the saved reference
S Q=$Q(@Q)
I Q="" S Row="",AtEnd=1 Q $$$OK
S Depth=$QL(Q)
// Column 1 is the node value; columns 2-10 are the subscripts, padded with ""
S $LI(Row,1)=$G(@Q)
F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
F I=Depth+1:1:9 S $LI(Row,I+1)=""
S AtEnd=0
S qHandle=Q
Quit $$$OK
}
FYI, that is not an example to use in production, since it exposes all the data from all the globals.
However, it sounded like you were prepared to write Caché ObjectScript to get at the data directly anyway; this is a template for doing so. The main thing to understand is that qHandle will be passed back by the ODBC driver on each call, so you can use it to store state. If there is a lot of state, make qHandle an integer index into a temporary global holding the "real" state, and clean that up in the Close method.
Since you are concerned about performance, you may want to also implement a
MyQueryFetchRows (ByRef qHandle As %Binary, FetchCount As %Integer = 0, ByRef RowSet As %List, ByRef ReturnCount As %Integer, ByRef AtEnd As %Integer) As %Status
method - see the documentation for %Library.Query for more details.
If you really need this to appear to ODBC as a (read-only) table rather than a stored procedure I think it might be possible - but I've never before tried to see if an arbitrary stored procedure can be exposed as a read-only table and I'm not sure how easy it is, or if it's actually always possible.
I've written my website using ASP.NET MVC and SQL Server (using a SQL Server instance running locally on my machine).
I'm about to upload my site to a hosting provider; however, their database server runs MySQL. I don't care about the data already in the DB itself; it's mostly mock data and a few tables I don't mind rewriting. But how do I go about the transition from SQL Server to MySQL? How does this influence the queries inside my code? Is the syntax the same? Will I have to recreate the table definitions? In my project I used LINQ to SQL.
Am I forced to look for a host with SQL Server capabilities (i.e. licenses)? (I hope not...)
Thanks!
You may be able to transition smoothly, but I greatly doubt this will be the case.
The differences are many, and whether you can make the transition depends on what features you used when developing.
If you kept to one of the standards, you may be in luck.
See a comparison sheet on Wikipedia.
In regards to the LINQ aspect of your question, you should be able to use a LINQ provider for MySQL instead of MS SQL without a problem.
Here is a link to one: http://code2code.net/DB_Linq/
If you do decide to go with the MySQL hosting, I suggest you test all aspects of your application to ensure they are working as expected.
LINQ to SQL works with MS SQL Server only... so if you want to keep using it, you need to find a host with an MS SQL database.
I have an application that requires SQL Server 2000 for database storage.
I do not really want to use SQL Server 2000, but I could use MySQL Server instead.
The application uses ODBC to connect to the SQL Server database.
I would like to know if it is possible to make a fake SQL Server driver which will send and receive data to/from a MySQL Server:
application <---> odbc manager <---> fake SQL Server driver <---> mysql server
Does anyone know if such a thing is possible to make?
If your application simply uses vanilla SQL via the ODBC driver, you should be able to use MySQL with few problems. If it uses specific features of SQL Server, then you need SQL Server; you cannot realistically fake it.
I wouldn't.
You're going to spend so long persuading the two to play nicely, for no real benefit. You'd have to write most of your code the SQL Server way for this scenario to work anyway. Given that, you might as well just bite the bullet and learn to use SQL Server directly rather than trying to tie the two together somehow, I'm afraid.
You can use a provider model and just switch out which provider you're using at run time.
Of course, the biggest issue will be the differing SQL dialect support. You will have to take care that all of your SQL lives inside each provider and stay away from embedding it in your application logic, which you should be doing anyway.
Another way is to simply change the ODBC data source at deployment time, but again, you will have to make sure the SQL code actually works in both environments, which is tough.
Typically, supporting multiple database back ends is an art form in itself. Simple things like SELECT TOP 100 for SQL Server 2000 versus MySQL's LIMIT clause are enough to keep people from doing this, as the example below shows.
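The same "first 100 rows" query in the two dialects (the table and column names are just for illustration):
SELECT TOP 100 * FROM Orders ORDER BY OrderDate;    -- SQL Server
SELECT * FROM Orders ORDER BY OrderDate LIMIT 100;  -- MySQL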
There's no real way of "faking" it, because the database servers are fundamentally different. You would end up writing a fair amount of code just to translate a SQL call from one to the other, which is a waste of time.
I'd suggest you just bite the bullet and learn MS SQL Server.
This site shows a very simple example of how SQL Server, Oracle, and MySQL differ on even a single SELECT statement.
Not sure why you "do not really want to use SQL Server 2000" but, if you decide you need to and you have a Windows PC available, you can use Microsoft SQL Server 2000 Desktop Engine Release A (MSDE2000A.exe). It is the real thing and free to use on a desktop.
http://msdn.microsoft.com/en-us/library/ms811304.aspx
I do not think it is available for download from Microsoft anymore but you might be able to find it somewhere else. If you can't find it, your next best option may be to use the 2005 version (SQL Server 2005 Express Edition) and make sure you do not use any new features since 2000:
http://www.microsoft.com/Sqlserver/2005/en/us/express.aspx