Cross-database DML query in Azure SQL

Is it possible to use DML commands like INSERT and UPDATE across databases in Azure SQL?
In my scenario, I run a stored procedure on DB1, and after it finishes I want to update a status column in a table that belongs to DB2. I'm using Azure SQL. Is there any way to call stored procedures across databases?

Yes, you can leverage the elastic database query feature, which supports cross-database queries. Check sp_execute_remote for details on executing T-SQL queries against remote Azure SQL databases.
Here is a similar thread for your reference: Call stored procedure from Elastic Database in Azure
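A minimal sketch of the elastic query approach, assuming DB1 and DB2 are Azure SQL databases; all object names and credentials below are hypothetical:
-- One-time setup on DB1: a credential and an external data source pointing at DB2
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL Db2Cred
    WITH IDENTITY = 'remote_user', SECRET = '<password>';
CREATE EXTERNAL DATA SOURCE Db2Source WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',  -- hypothetical server name
    DATABASE_NAME = 'DB2',
    CREDENTIAL = Db2Cred
);

-- After the stored procedure on DB1 completes, push the status change to DB2
EXEC sp_execute_remote N'Db2Source',
    N'UPDATE dbo.JobStatus SET Status = @s WHERE JobId = @id',  -- dbo.JobStatus is illustrative
    N'@s varchar(20), @id int',
    @s = 'Completed', @id = 42;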

Related

How to orchestrate data lake activities?

How do we orchestrate the execution of stored procedures in data lake?
Example
1. execute sproc dbo.abc
2. execute sproc dbo.xyz
3. execute sproc dbo.aaa
The question could be restated more specifically: what integrations does Azure provide for executing U-SQL stored procedures? Azure Functions? Events?
I recommend using Azure Data Factory. It's easy and powerful.
You can create a pipeline of U-SQL activities.
Check this:
https://learn.microsoft.com/pt-pt/azure/data-factory/transform-data-using-data-lake-analytics
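For example, a single Data Factory U-SQL activity could run one script that invokes the three procedures in order. A sketch, assuming the procedures live in a U-SQL catalog database (MyCatalogDb is hypothetical; the procedure names come from the question):
// One U-SQL script, executed by a single Data Factory U-SQL activity,
// calling the three stored procedures in sequence
MyCatalogDb.dbo.abc();
MyCatalogDb.dbo.xyz();
MyCatalogDb.dbo.aaa();
If each step must be a separate activity instead, the pipeline can chain three U-SQL activities with dependencies between them.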

How to execute INSERT queries in one database from another database in Azure

I need to execute UPDATE and INSERT queries against multiple Azure databases from one stored procedure. Please help.
If you want to execute the same stored procedure in two different databases, you can keep a copy of it in both databases. If you are referring to cross-database queries, they are not possible directly. However, you can make use of ADF/Databricks to achieve the same result.

ORACLE XE: clear Top SQL statistics

The "top" SQL statements represent the SQL statements that are executed most often, that use more system resources than other SQL statements, or that use system resources more frequently than other SQL statements. Viewing the top SQL statements report that is available in the Oracle Database XE graphical user interface enables you to focus your SQL tuning efforts on the statements that can have the most impact on database performance.
But how do I clear the information currently held?
Try flushing the shared pool:
ALTER SYSTEM FLUSH SHARED_POOL;
Because the shared pool contains the SQL cache, flushing it also clears that cached information.
Of course, be careful, because it flushes other useful data as well:
Cached data dictionary information, and
Shared SQL and PL/SQL areas for SQL statements, stored procedures, functions, packages, and triggers.
References:
ALTER SYSTEM - Oracle® Database SQL Language Reference 11g Release 2 (11.2)
Shared pool - Oracle® Database Concepts 11g Release 2 (11.2)

Is it possible to create a temp table on a linked server?

I'm doing some fairly complex queries against a remote linked server, and it would be useful to be able to store some information in temp tables and then perform joins against it - all with the remote data. Creating the temp tables locally and joining against them over the wire is prohibitively slow.
Is it possible to force the temp table to be created on the remote server? Assume I don't have sufficient privileges to create my own real (permanent) tables.
This works from SQL 2005 SP3 linked to SQL 2005 SP3 in my environment. However, if you inspect tempdb you will find that the table is actually on the local instance, not the remote one. I have seen this offered as a resolution on other forums and wanted to steer you away from it.
create table SecondServer.#Doll
(
    name varchar(128)
)
GO
insert SecondServer.#Doll
select name from sys.objects where type = 'U'

select * from SecondServer.#Doll
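One way to verify where the table actually lives (a sketch; the point is that #Doll shows up only in the local tempdb):
-- Run on the local instance: the table is here
select name from tempdb.sys.tables where name like '#Doll%'

-- Check the remote instance's tempdb through the linked server: nothing comparable appears
select name from SecondServer.tempdb.sys.tables where name like '#Doll%'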
I am two years late to the party, but you can accomplish this using sp_executesql, feeding it a dynamic query to create the table remotely:
EXEC RemoteServer.RemoteDatabase.sys.sp_executesql N'Create Table here'
This will execute the table creation at the remote location.
It's not possible to directly create temporary tables on a linked remote server. In fact, you can't run any DDL against a linked server.
For more info on the guidelines and limitations of using linked servers see:
Guidelines for Using Distributed Queries (SQL 2008 Books Online)
One workaround (off the top of my head; this only works if you have permissions on the remote server), sketched below:
on the remote server, have a stored procedure that creates a persistent table whose name is based on an IN parameter
the remote stored procedure runs a query and inserts the results into this table
you then query that table locally, performing any joins against local tables as required
call another stored procedure on the remote server to drop the remote table when you're done
Not ideal, but a possible workaround.
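A minimal sketch of that workaround; all names are hypothetical, it assumes RPC Out is enabled on the linked server, and a real version must whitelist @Suffix to avoid SQL injection:
-- On the remote server: create/drop a work table named from a parameter
CREATE PROCEDURE dbo.CreateWorkTable @Suffix sysname
AS
BEGIN
    DECLARE @sql nvarchar(max) =
        N'CREATE TABLE dbo.Work_' + @Suffix + N' (name sysname);'
      + N' INSERT dbo.Work_' + @Suffix + N' SELECT name FROM sys.objects;';
    EXEC sys.sp_executesql @sql;
END
GO

CREATE PROCEDURE dbo.DropWorkTable @Suffix sysname
AS
    EXEC (N'DROP TABLE dbo.Work_' + @Suffix);
GO

-- On the local server: create, join, then clean up
EXEC RemoteServer.RemoteDb.dbo.CreateWorkTable @Suffix = N'Job42';

SELECT r.name
FROM RemoteServer.RemoteDb.dbo.Work_Job42 AS r
JOIN dbo.LocalObjects AS l ON l.name = r.name;  -- hypothetical local table

EXEC RemoteServer.RemoteDb.dbo.DropWorkTable @Suffix = N'Job42';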
Yes, you can, but it only lasts for the duration of the connection.
You need to use the EXECUTE ... AT syntax:
EXECUTE('SELECT * INTO ##example FROM sys.objects; WAITFOR DELAY ''00:01:00''') AT [SERVER2]
On SERVER2 the following will work (for 1 minute):
SELECT * FROM ##example
but it will not work on the local server.
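That said, while ##example still exists on SERVER2 you can read it from the local server with a pass-through query:
SELECT * FROM OPENQUERY([SERVER2], 'SELECT * FROM ##example')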
Incidentally, if you open a transaction on the second server that uses ##example, the object remains until the transaction is closed. It also stops the creating statement on the first server from completing; i.e. run the following on SERVER2 and the statement on SERVER1 will continue indefinitely.
BEGIN TRAN
SELECT * FROM ##example WITH (TABLOCKX)
This is more academic than of practical use!
If memory is not much of an issue, you could also use table variables as an alternative to temporary tables. This worked for me when running a stored procedure that needed temporary data storage against a linked server.
More info: e.g. this comparison of table variables and temporary tables, including the drawbacks of using table variables.
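A minimal sketch of that pattern, assuming a linked server named REMOTE and a remote table dbo.Orders (all names and the column shape are hypothetical):
DECLARE @Recent TABLE (OrderId int PRIMARY KEY, Amount money);

-- Pull the remote rows once into the table variable...
INSERT @Recent (OrderId, Amount)
SELECT OrderId, Amount
FROM REMOTE.SalesDb.dbo.Orders
WHERE OrderDate >= DATEADD(day, -7, GETDATE());

-- ...then join against it locally as often as needed
SELECT r.OrderId, r.Amount, o.Owner
FROM @Recent AS r
JOIN dbo.OrderOwners AS o ON o.OrderId = r.OrderId;  -- hypothetical local table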

SQL Server, Remote Stored Procedure, and DTC Transactions

Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures".
What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set.
The execution of the Natural proc from SQL works fine when we're just selecting from it; however, when we insert the result into a table variable, SQL seems to start a distributed transaction that in turn wreaks havoc with our connections.
Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior?
Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?
Check out SET REMOTE_PROC_TRANSACTIONS OFF, which should disable it.
Or use sp_serveroption to configure the linked server generally, rather than per batch.
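For the per-batch route, a minimal sketch (the linked-server and procedure names are hypothetical):
SET REMOTE_PROC_TRANSACTIONS OFF;  -- don't promote this session's work to a DTC transaction
EXEC MAINFRAME.RemoteDb.dbo.NaturalLookup;  -- hypothetical call through the ODBC linked server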
Because you are writing on the MS SQL side, you start a transaction. By default, it escalates whether it needs to or not, even though the table variable does not participate in the transaction.
I've had similar issues before where the MS SQL side behaves differently depending on whether MS SQL writes, whether it's in a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls to my Sybase linked server...
The following code sets the "Enable Promotion of Distributed Transactions" option to false for a linked server:
USE [master]
GO
EXEC master.dbo.sp_serveroption
    @server = N'REMOTE_SERVER',
    @optname = N'remote proc transaction promotion',
    @optvalue = N'false'
GO
This will allow you to insert the results of a linked server stored procedure call into a table variable.
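With that option in place, a sketch of the pattern from the question (the table shape and names are hypothetical):
DECLARE @results TABLE (name varchar(100));

INSERT INTO @results
EXEC REMOTE_SERVER.RemoteDb.dbo.SomeNaturalProc;  -- remote call no longer escalates to DTC

SELECT * FROM @results;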
I'm not sure about DTC, but SSIS (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then just write a simple query to populate your table based on a select from the linked server's table.
That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.