I've got a large number of old Delphi apps accessing a remote SQL Server database using ADO. I would like to redirect those queries to a middleware layer instead of the database itself. The Delphi clients must run unchanged; I am not the owner of most of them.
Is it possible to do this? If so, how should I go about it?
Don't worry about parsing the T-SQL (the clients send both raw T-SQL and stored proc calls, incidentally).
Create a new SQL database, and use a combination of views, T-SQL and managed code to fake up enough database objects for the application to work.
Technique 1: Use tables, but populate them asynchronously from the new data source.
Technique 2: Fake the tables and procedures
For example, you can replace an existing stored procedure with one that calls managed code that talks to your middleware.
Where the application reads directly from a table, you can use a view, which references a managed table-valued function.
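To make this concrete, here is a rough T-SQL sketch of technique 2; the assembly name, class names and object names are invented purely for illustration, not taken from any real application:
-- Hypothetical sketch: MiddlewareBridge, its classes and the object names are assumptions.
-- A SQLCLR table-valued function fetches the rows from the middleware layer...
CREATE FUNCTION dbo.fn_GetCustomers()
RETURNS TABLE (CustomerID INT, Name NVARCHAR(100))
AS EXTERNAL NAME MiddlewareBridge.[Bridge.CustomerFunctions].GetCustomers;
GO
-- ...and a view with the table name the client expects hides the indirection.
CREATE VIEW dbo.Customers
AS
SELECT CustomerID, Name
FROM dbo.fn_GetCustomers();
GO
-- An existing stored procedure name can be re-pointed at managed code the same way.
CREATE PROCEDURE dbo.usp_GetCustomerOrders @CustomerID INT
AS EXTERNAL NAME MiddlewareBridge.[Bridge.CustomerProcedures].GetCustomerOrders;
GO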
You should have no trouble wherever stored procedures are used. If the application sends dynamic SQL, however, you have more of an uphill struggle.
I am thinking of using some Azure services to share SQL reports with external users, and now I am a little confused because there seems to be no good option for what I need.
I have an on-premises SQL database with a parametrised stored procedure which returns a set of rows, like a select statement. The end result should be a solution that gives an external user the ability to execute this stored procedure with their own input parameters and save the results to an Excel file.
So I've thought about migrating the tables used by this procedure to an Azure SQL database and then maybe building an application using PowerApps or Functions, but at the moment I am just stuck and don't know where to start.
When I troubleshoot a large .NET app which uses only stored procedures, I capture the SQL (which includes the SP name) from SQL Server Profiler, and then it's easy to do a global search for the SP in the source files and find the exact line that produced the SQL.
When using Entity Framework, this is not possible due to the dynamic creation of SQL statements. However, there are times when I capture problematic SQL statements from production and want to know where in the code they were generated.
I know one can have EF generate logs and tracing on demand, but this would probably be taxing on a busy server and produce too many logs. I have read a little about using MiniProfiler, but I am not sure it fits my needs as I don't have access to the production server. I do, however, have access to attach SQL Server Profiler to the database server.
My idea is to find a way to have EF attach/inject a unique code into the generated SQL without affecting its outcome. I can then use it to cross-reference the statement to the line of code that injected it. The unique code is static, meaning a distinct static code is used for every EF LINQ statement. It could be sent as a dummy SQL statement or as a comment along with the real statement.
I know this will add some extra traffic, but in my case it will add flexibility and cut a lot of troubleshooting time.
Any ideas of how to do this or any alternatives?
One very simple approach would be to execute something via ExecuteStoreCommand() (see "Refresh data from stored procedure"). I'm not sure if you can "execute" just a comment, but at the very least you should be able to do something like:
ExecuteStoreCommand("DECLARE @MyTag VARCHAR(100) = 'some_unique_id';");
This is very simple, but you would have to find the association in two steps:
Get the SessionID (i.e. SPID) from the poorly performing query in SQL Server Profiler
Search the Profiler entries for the prior SQL statement for that same SPID
Another option that might be a little more complicated but would remove that additional step when it comes to making that association is to "intercept" the commands before they get executed and inject a comment with your unique id. Please see the following S.O. Answer for details. You shouldn't need the full extent of what they did, but even if you do, it seems like all of the code (or all the relevant stuff) is there:
Adding a query hint when calling Table-Valued Function
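Either way, the end result in the Profiler capture would be something along these lines; the statement, tag format and names below are just an assumption, the only requirement being that the tag is easy to grep for in your source:
-- Hypothetical example of a tagged statement as it might appear in a Profiler trace;
-- the trailing comment is ignored by SQL Server but survives into the captured text.
SELECT [o].[OrderId], [o].[Total]
FROM [dbo].[Orders] AS [o]
WHERE [o].[CustomerId] = @p__linq__0
/* EFTAG: OrderRepository.GetOrdersForCustomer */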
By the way, this situation is a point in favor of using Stored Procedures instead of an ORM. And, what do you expect to be able to do in terms of performance tuning once you do find the offending app code? (another point in favor of using Stored Procedures instead of an ORM ;-).
Scenario:
Our application database (in SQL Server 2012) contains entire business logic in stored procedures. Every time we have to publish the database to the client, it unnecessarily results in copying the stored procedures to the client database.
Problem:
All the business logic gets copied to the client side and results in proprietary issues.
Solutions tried earlier:
Using WITH ENCRYPTION
CREATE PROCEDURE Proc_Name WITH ENCRYPTION
This method results in encrypted and non-maintainable stored procedure code. We cannot judge which version of code is running at the client side, hence cannot debug. Version control cannot be applied. Moreover, client cannot perform database replication since encrypted stored procedures do not get replicated.
Creating synonyms
CREATE SYNONYM SchemaName.Proc_Name FOR LinkedServerDB.SchemaName.Proc_Name
This allows for the creation of references (synonyms) at Client_DB which access the actual stored procedures residing on a Remote_Linked_Server_DB. For each stored procedure call, all the data is read from Client_DB and transmitted to Remote_Linked_Server_DB, where the calculations are done and the result is sent back. This results in acute performance issues, and it requires 24x7 internet connectivity to the remote linked server.
Requirement
We are looking for a solution whereby the stored procedure code could be compiled (secured) and separated from the client database. The compiled stored procedure code should also be available at the client end, so that the client does not require a 24x7 connection to a remote location to access the stored procedures. Perhaps Visual Studio database projects could be used to compile the stored procedure code, or something else.
[Edit]
I have learned that SQL Server 2014 allows for natively compiled stored procedure code (MSDN link). Can that be of help? And is the SQL Server 2014 RTM version stable enough?
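From what I have read, the syntax looks roughly like this (the names here are made up; note that such a procedure can only reference memory-optimized tables):
-- Hypothetical sketch of the SQL Server 2014 native compilation syntax;
-- the table and procedure names are invented, and dbo.OrderLines would
-- have to be a memory-optimized table.
CREATE PROCEDURE dbo.usp_GetOrderLines @OrderID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT OrderID, Quantity, Price
    FROM dbo.OrderLines
    WHERE OrderID = @OrderID;
END;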
An assignment I have as part of my PL/SQL studies requires me to create a remote database connection, copy all my tables to it from the local database, and then also copy my other objects that reference data, such as my views and triggers.
The idea is that at the remote end, the views etc. should reference the local tables provided the local database is online; if it is not, they should reference the tables stored on the remote database.
So I've created a connection, and a script that creates the tables at the remote end.
I've also written a PL/SQL block to create all the views and triggers at the remote end. It runs a simple select query against the local database to check whether it is online. If it is online, a series of EXECUTE IMMEDIATE statements creates the views etc. with references to table_name@local; if it isn't, the block drops to the exception section, where a similar series of EXECUTE IMMEDIATE statements creates the same views referencing the remote tables.
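In case it helps, a simplified sketch of that block looks something like this (the view, table and link names here are made up):
-- Simplified, hypothetical sketch of the block described above;
-- 'local' is the database link name and the object names are invented.
DECLARE
  l_probe NUMBER;
BEGIN
  -- probe the local database; this raises an exception if it is unreachable
  EXECUTE IMMEDIATE 'SELECT 1 FROM dual@local' INTO l_probe;

  -- local database is online: build the views against the local tables
  EXECUTE IMMEDIATE
    'CREATE OR REPLACE VIEW emp_summary_v AS
       SELECT deptno, COUNT(*) AS headcount
       FROM   employees@local
       GROUP  BY deptno';
EXCEPTION
  WHEN OTHERS THEN
    -- local database is offline: build the same views against the remote copies
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE VIEW emp_summary_v AS
         SELECT deptno, COUNT(*) AS headcount
         FROM   employees
         GROUP  BY deptno';
END;
/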
OK so this is where I become unsure.
I have a package that contains a few procedures and a function, and I'm not sure what's the best way to create that at the remote end so that it behaves in a similar way in terms of where it picks up its reference tables from.
Is it simply a case of enclosing the whole package-creating block within an EXECUTE IMMEDIATE, in the same way as I did for the views, or should I create two different packages and call them something like pack1 and pack1_remote?
Or is there, as I suspect, a more efficient method of achieving the goal?
cheers!
This is absolutely not how any reasonable person in the real world would design a system. Suggesting something like what I suggest here in the real world will, in the best case, get you laughed out of the room.
The least insane approach I could envision would be to have two different schemas. Schema 1 would own the tables. Schema 2 would own the code. At install time, create synonyms for every object that schema 2 needs to reference. If the remote database is available when the code is installed, create synonyms that refer to objects in the remote database. Otherwise, create synonyms that refer to objects in the local database. That lets you create a single set of objects without using dynamic SQL by creating an extra layer of indirection between your code and your tables.
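As a sketch of what that indirection looks like (the schema names, table name and database link are invented):
-- Hypothetical sketch; schema names, table name and database link are made up.
-- Run ONE of these per referenced table at install time, depending on whether
-- the remote database is reachable:

-- remote database available: schema 2's synonym points at the remote table
CREATE SYNONYM code_schema.employees FOR data_schema.employees@remote_db;

-- remote database unavailable: the same synonym points at the local table
CREATE SYNONYM code_schema.employees FOR data_schema.employees;
The packages, views and triggers in the code schema then reference plain employees and compile unchanged in either case.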
We have a SQL server that we are using for our data warehouse.
We want to give a department the ability to update the data when they want (and not just on schedule).
What is the best way to do this? We have a stored procedure that we are thinking of calling from a batch script, but is there a more elegant way?
The data will eventually go into Palo Jedox for BI.
I do this sort of thing by writing a ColdFusion web page that the user can run. It could also be done with .NET, PHP, Java, etc.
Do not give users the ability to change the tables directly.
Instead, create one or more stored procedures to do the updates/inserts/deletes that you want. If it is one record, you can just pass in the values as arguments. If it is a bunch of records, you need a mechanism to transfer the larger data set into the database, either by reading from a text file or by getting it into a table in the database in some way.
Be sure the stored procedure has the same owner as the underlying tables. Using owner chaining, the stored procedure will be able to make changes to the tables. At no time can a user make a change to the data directly, only through the stored procedure.
Then, log, log, log everything that gets done. You want to know every time this stored procedure is called to change the data.
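A minimal sketch of what such a procedure might look like, assuming made-up table, column and role names:
-- Hypothetical sketch (table, procedure, column and role names are invented).
-- The procedure and dbo.SalesTarget share the same owner, so ownership
-- chaining lets callers run the procedure without direct UPDATE rights.
CREATE PROCEDURE dbo.usp_UpdateSalesTarget
    @Region      VARCHAR(50),
    @TargetValue DECIMAL(18, 2)
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.SalesTarget
    SET    TargetValue = @TargetValue
    WHERE  Region = @Region;

    -- log every call: who changed what, and when
    INSERT INTO dbo.SalesTargetAudit (Region, TargetValue, ChangedBy, ChangedAt)
    VALUES (@Region, @TargetValue, SUSER_SNAME(), SYSUTCDATETIME());
END;
GO
-- grant EXECUTE only; no direct table permissions are needed
GRANT EXECUTE ON dbo.usp_UpdateSalesTarget TO DataWarehouseUpdaters;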