Select Query on 2 tables, on different database servers - sql

I am trying to generate a report by querying 2 databases (Sybase) in classic ASP.
I have created 2 connection strings:
connA for databaseA
connB for databaseB
Both databases are present on the same server (don't know if this matters)
Queries:
q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz = 'A'
q2 = SELECT columnA, columnB, ..., columnZ FROM table2 a, #temp b WHERE b.column1 = a.columnB
followed by:
response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
When I try to open this page in a browser, I get this error message:
Microsoft OLE DB Provider for ODBC Drivers error '80040e37'
[DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Could anyone please help me understand what the problem is and help me fix it?
Thanks.

With both queries, it looks like you are trying to use #temp. #temp is created on one of the connections (for argument's sake, the one pointing at databaseA). So when you try to reference #temp from the databaseB connection, it reports that it does not exist.
Try changing INTO #temp FROM to INTO databaseA.dbo.#temp FROM in both statements.
Also, make sure that the connection strings have permissions on the other DB, otherwise this will not work.
Update: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.

Your temp table is out of scope: it is only 'alive' during the first connection and will not be available in the second connection.
Just move all of it into one block of code and execute it over a single connection.
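For example, a minimal sketch using the table and column names from the question (and assuming table2 lives in databaseB and that the single connection has rights on both databases):
SELECT column1
INTO #temp
FROM databaseA..table1
WHERE xyz = 'A'

SELECT a.columnA, a.columnB, ..., a.columnZ
FROM databaseB..table2 a, #temp b
WHERE b.column1 = a.columnB

DROP TABLE #temp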

#temp is out of scope in q2.
All your work can be done in one query:
SELECT a.columnA, a.columnB,..., a.columnZ
FROM table2 a
INNER JOIN (SELECT databaseA..table1.column1
FROM databaseA..table1
WHERE databaseA..table1.xyz = 'A') b
ON a.columnB = b.column1

Related

SQL: temp table "invalid object name" after "USE" statement

I do not fully understand the "USE" statement in Transact-SQL and how it affects the scope of temp tables. I have a user-defined table type in one database but not another, and I've found I need to "USE" that database in order to declare a table of that type. Earlier in the query, I define a temporary table. After the "USE" statement, SSMS does not recognize the temp table as a valid object name; however, I can still query it without error.
The skeleton of my SQL query is as follows:
USE MYDATABASE1
[... a bunch of code I did not write...]
SELECT * INTO #TEMP_TABLE FROM #SOME_EARLIER_TEMP_TABLE
USE MYDATABASE2
DECLARE @MYTABLE MyUserDefinedTableType -- this table type only exists in MYDATABASE2
INSERT INTO @MYTABLE(Col1, Col2)
SELECT Col1, Col2 FROM (SELECT * FROM MYDATABASE2.dbo.SOME_TABLE_VALUED_FUNCTION(param1, param2)) T
SELECT A.*, B.Col2
FROM #TEMP_TABLE A
CROSS APPLY MYDATABASE2.dbo.SOME_OTHER_TABLE_VALUED_FUNCTION(@MYTABLE, A.SomeColumn) B
In the last SELECT statement, SSMS has red squiggly lines under "A.*" and "#TEMP_TABLE", however there is no error running the query.
So my question is: am I doing something "wrong" even though my query still works? Assuming the initial "USE MYDATABASE1" is necessary, what is the correct way to switch databases while still having #TEMP_TABLE available as a valid object name? (Note that moving the definition of #TEMP_TABLE to after "USE MYDATABASE2" would just shift the problem to #SOME_EARLIER_TEMP_TABLE.)
In SQL, USE basically tells the batch which database is the "default" (current) database.
Temp tables can play tricks on IntelliSense: unless they're explicitly defined using the CREATE TABLE #MyTempTable route, IntelliSense often doesn't know what to do with them (a quick sketch of that route follows this answer). Don't worry, though: temp tables are scoped to your connection (session), not to a database, so USE does not affect them.
Although I do feel it's worth pointing out: while UDTs are database-specific, you can create an assembly to use across databases.
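For what it's worth, a minimal sketch of the explicit CREATE TABLE route mentioned above (the column names are invented for illustration; adapt them to what #SOME_EARLIER_TEMP_TABLE actually holds):
CREATE TABLE #TEMP_TABLE
(
    SomeColumn INT,
    SomeOtherColumn NVARCHAR(50)
);

INSERT INTO #TEMP_TABLE (SomeColumn, SomeOtherColumn)
SELECT SomeColumn, SomeOtherColumn
FROM #SOME_EARLIER_TEMP_TABLE;

USE MYDATABASE2;

SELECT * FROM #TEMP_TABLE;  -- still in scope after the USE, and now recognized by IntelliSense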

Insert statement from a linked server won't insert into the table. What am I missing here?

I'm using SQL Server 2016 to attempt to insert records from a couple of tables located on a linked server. I can run the query and pull the data that I'm looking for, but when I attempt to insert it into the table, it runs successfully yet no data appears in the SQL Server table. Here's my code:
insert into BTGroup (authorizedgroup, itemno)
select custno, prod
from OPENQUERY(NxtTest, '
select s.custno, p.prod, p.zauthorized
from pub.zics s
join pub.csp p on s.prod = p.prod
where p.zauthorized = 1
')
I feel like I'm missing something obvious here, but I'm new to working with linked servers so I'm a bit lost. Any help is greatly appreciated.
If you didn't get an error message and you see something like "(20 rows affected)" in the results window, then the insert itself worked.
Check which database you were connected to (the one that contains the BTGroup table) when you executed the query, or fully qualify the target table (e.g. MyDatabase.dbo.BTGroup).
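A minimal sketch of the fully qualified version, keeping the query from the question (MyDatabase is a placeholder for whatever database actually holds BTGroup):
INSERT INTO MyDatabase.dbo.BTGroup (authorizedgroup, itemno)
SELECT custno, prod
FROM OPENQUERY(NxtTest, '
select s.custno, p.prod, p.zauthorized
from pub.zics s
join pub.csp p on s.prod = p.prod
where p.zauthorized = 1
');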

Query across multiple databases on same server

I am looking for a way of dealing with the following situation:
We have a database server with multiple databases on it (all have the same schema, different data).
We are looking for a way to query across all the databases (and for it to be easy to configure, as more databases may be added at any time). This data access must be real-time.
Say, as an example, you have an application that inserts orders - each application has its own DB etc. What we are then looking for is an efficient way for a single application to then access the order information in all the other databases in order to query it and subsequently action it.
My searches to date have not revealed very much; however, I think I may just be missing the appropriate keywords to find the correct info...
You must qualify each object with its database name.
Single database:
SELECT * FROM [dbo].[myTable]
Multiple databases:
SELECT * FROM [DB01].[dbo].[myTable]
UNION ALL
SELECT * FROM [DB02].[dbo].[myTable]
UNION ALL
SELECT * FROM [DB03].[dbo].[myTable]
It's not going to be the cleanest solution ever, but you could define a view on a "Master database" (if your individual databases are not going to stay constant) that includes the data from the individual databases, and allows you to execute queries on a single source.
For example...
CREATE VIEW vCombinedRecords AS
SELECT * FROM DB1.dbo.MyTable
UNION ALL
SELECT * FROM DB2.dbo.MyTable
Which allows you to do...
SELECT * FROM vCombinedRecords WHERE....
When your databases change, you just update the view definition to include the new tables.
You can build the union dynamically:
select name from sys.databases
and then check if the database has the table:
select name from [dbname_from_above].sys.tables where name = 'YourTable'
That gives you all the databases for the union. You can build the query client-side or in dynamic SQL.
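A rough dynamic-SQL sketch of that approach (dbo.MyTable is a placeholder, and the database_id > 4 filter is just a simple way to skip the system databases):
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ' + QUOTENAME(d.name, '''') + N' AS SourceDb, t.* FROM '
    + QUOTENAME(d.name) + N'.dbo.MyTable AS t'
FROM sys.databases AS d
WHERE d.database_id > 4
  AND OBJECT_ID(QUOTENAME(d.name) + N'.dbo.MyTable', N'U') IS NOT NULL;

EXEC sys.sp_executesql @sql;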
Note: my answer below received a couple of down-votes, but only one comment gave a reason: that this answer is very similar to the accepted answer, only less performant. I disagree, and I reproduce my response here, in the answer itself, so that anyone reading it has a better chance of seeing why this is not the same as the accepted answer and in fact better addresses the original question.
My response to the suggestion that this is similar to the accepted answer:
On the contrary: the original question notes that new databases are added regularly. The accepted solution requires maintenance each time a new database is added, whereas the solution here keeps working as databases are added (in line with the original question, which states they all share the same schema). Furthermore, the accepted answer requires you to duplicate the query once per database; if the query is complex, that gets ugly fast. The approach here keeps a single source of truth for the query logic.
And the answer itself:
Shooting from the hip here.
use master;
go
create table #Temp (sourceDBName varchar(128), colA varchar(128), colB varchar(128));
exec sp_MSforeachDB ' USE [?];
insert into #Temp
SELECT DISTINCT
''?'',
tableA.colA,
tableB.colB
FROM tableA JOIN tableB on some_conditions
WHERE someCol LIKE ''%some_term%''
'
select sourceDBName, colA, colB from #Temp order by 1, 2, 3;
drop table #Temp;
This logic should let you apply a single query to all databases. To use it, though, you will want to add logic to filter out system databases, or explicitly include only the databases you specify. To achieve that, you might put this logic into a stored procedure that returns a result set, so that in the end your call to this logic is a select statement returning a rowset you can join, filter, etc.
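For example, a lightly adapted variant of the snippet above that skips the system databases (tableA, tableB, and the filter columns are still placeholders):
exec sp_MSforeachdb '
IF ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN
    USE [?];
    INSERT INTO #Temp
    SELECT DISTINCT ''?'', tableA.colA, tableB.colB
    FROM tableA JOIN tableB ON some_conditions
    WHERE someCol LIKE ''%some_term%''
END'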
Check out https://www.mssqltips.com/sqlservertip/2855/sql-server-multi-database-query-with-registered-servers/
SELECT * FROM (
SELECT
@@SERVERNAME as [ServerName],
@@VERSION [Version],
Format(@@CONNECTIONS,'N0') [Connections],
Format(@@CPU_BUSY,'N0') [CPUBusy]
) SQLInfo
LEFT JOIN (
SELECT
@@SERVERNAME AS [ServerName],
SERVERPROPERTY('ProductVersion') [Version Build],
SERVERPROPERTY ('Edition') AS [Edition],
SERVERPROPERTY('ProductLevel') AS [Service Pack],
CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
WHEN 0 THEN 'SQL Server and Windows Authentication mode'
WHEN 1 THEN 'Windows Authentication mode'
END AS [Server Authentication],
CASE SERVERPROPERTY('IsClustered')
WHEN 0 THEN 'False'
WHEN 1 THEN 'True'
END AS [Is Clustered?],
SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [Current Node Name],
SERVERPROPERTY('Collation') AS [SQL Collation],
[cpu_count] AS [CPUs],
[physical_memory_kb]*0.00000095367432 AS [RAM (GB)]
FROM [sys].[dm_os_sys_info]
) SQLInfo2 on SQLInfo.[ServerName]=SQLInfo2.[ServerName]
LEFT JOIN (
SELECT
@@SERVERNAME as [ServerName],
NodeName,
Status_Description,
is_Current_Owner
FROM [MASTER].[sys].[fn_virtualservernodes]()
)Clusterinfo on SQLInfo.[ServerName]=Clusterinfo.[ServerName]

How to get data from two databases in two servers with one SELECT statement?

I don't actually want to modify either database, just get the data.
I know how to connect to each database individually, with these connection strings:
Provider=SQLOLEDB.1;Data Source={0};Initial Catalog={1};Integrated Security=SSPI;Persist Security Info=False;
Provider=OraOLEDB.Oracle.1;Data Source={0};User ID={1};Password={2};Persist Security Info=True;
But how can I get this overlapping data together? Is that even possible, especially considering that one is Oracle and one is SQL Server? Or would it be better to do the SELECT statements on each database individually and then match them after?
For example, how would I get all students that are 10 years old and like the color blue?
Notice that all items in DatabaseB have an ID that maps to DatabaseA, but not the other way around.
I have done this with MySQL, Oracle, and SQL Server. You can create linked servers from a central MSSQL server to your Oracle and other MSSQL servers. You can then either query the objects directly through the linked server or create a synonym for the linked-server tables in your database.
Steps around creating and using a linked server are:
On your "main" MSSQL server create two linked servers to the servers that contains the two databases or as you said database A and database B.
You can then query the tables on the linked servers directly using plain TSQL select statements.
To create a linked server to Oracle see this link: http://support.microsoft.com/kb/280106
A little more about synonyms: if you are going to use these linked-server tables in a LOT of queries, it might be worth the effort to use synonyms to help maintain the code. A synonym allows you to reference something under a different name.
So for example when selecting data from a linked server you would generally use the following syntax to get the data:
SELECT *
FROM Linkedserver.database.schema.table
If you created a synonym for Linkedserver.database.schema.table as DBTable1, the syntax would be:
SELECT *
FROM DBTable1
It saves a bit of typing, and if your linked server ever changes you will not need to make changes all over your code. As I said, this can really be of benefit if you use linked servers in a lot of code.
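For instance, a minimal sketch (all names here are placeholders):
CREATE SYNONYM dbo.DBTable1 FOR LinkedServer.RemoteDatabase.dbo.RemoteTable;

SELECT *
FROM dbo.DBTable1;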
On a more cautionary note, you CAN do a join between two tables on different servers. However, this is normally painfully slow. I have found that selecting the data from the different servers into temp tables and then joining the temp tables generally speeds things up (a rough sketch follows). Your mileage may vary, but if you are going to join tables on different servers this technique can help.
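A rough sketch of that staging technique, with invented linked-server and table names (note the catalog part of the Oracle four-part name is typically left empty):
SELECT s.StudentID, s.Age
INTO #LocalStudents
FROM dbo.Students AS s
WHERE s.Age = 10;

SELECT c.STUDENT_ID, c.FAVORITE_COLOR
INTO #RemoteColors
FROM ORACLELINK..SCOTT.STUDENT_COLORS AS c;

SELECT a.StudentID, a.Age, b.FAVORITE_COLOR
FROM #LocalStudents AS a
JOIN #RemoteColors AS b ON b.STUDENT_ID = a.StudentID
WHERE b.FAVORITE_COLOR = 'Blue';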
Let me know if you need more details.
Which database are you using? Most databases have a concept called a database link (dblink). You create a dblink to database B in database A, optionally create a synonym for convenience, and then use it as if it were a table in database A.
Looks like a heterogeneous join (data on disparate servers/technologies etc).
As such, not straightforward. If you can make Namphibian's method work, go that way.
Otherwise, you need to gather the data from both tables to a common location (one or other of the servers 'in play', or a third server/technology solely for the purpose of co-locating the data). Then you can join the data happily. Many ETL Tools work this way, and this situation (almost) always involves redistribution of one or more of the tables to a common location before joining.
Oracle Data Integrator ETL tool does this, so does Talend Open Studio's tJoin component.
HTH
Try creating three LINQ queries in Visual Studio: one for SQL Server, one for Oracle, and one to combine the two result sets.
SELECT (things)
FROM databaseA.dbo.table t1
INNER JOIN databaseB.dbo.table t2 ON t1.Col1 = t2.Col2
WHERE t1.Col1 = 'something'
EDIT - This statement should meet the new requirements:
SELECT *
FROM databaseA.dbo.table t1
INNER JOIN databaseB.dbo.table t2 ON t1.ID = t2.ID
WHERE t1.Age = 10 AND t2.FavoriteColor = 'Blue'
If you want to select data from two different servers and databases, I would do a union rather than a join, as the data from one may be apples and the other oranges. You still need to set up linked servers, and I believe you can link Oracle and SQL Server after certain versions, but you could do something like this:
select ColA, ColB, ColC
from [ServerASQLServer].[DatabaseA].[schema].[table]
UNION
select ColA, ColB, ColC
from [ServerBOracleServer].[DatabaseB].[schema].[table]
If you perform inner joins, the rows must share matching values (of compatible data types) to bind on, or they will be omitted from the returned dataset. A union only needs the columns to share compatible data types and does not care about matching logic. You are in essence saying: "Put these two sets of rows together as long as their columns line up."
But since you mentioned connection strings, I was curious whether you want to do this in code (e.g. .NET)? I could possibly provide an idea for that too.
Assuming the databases are on the same server, you should be able to do something like this:
SELECT t.field1, t.field2
FROM database.schema.table t
JOIN database2.scheme.table2 t2
on t.id = t2.id
WHERE t2.field3 = ...
If the databases are on separate servers, look into using Linked Servers.
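If it helps, a hedged sketch of registering a linked server between two SQL Server instances with the stock system procedures (the server names and the table names from the example above are placeholders; an Oracle source would use an Oracle OLE DB provider such as OraOLEDB.Oracle instead):
EXEC sp_addlinkedserver
     @server = N'REMOTESRV',
     @srvproduct = N'',
     @provider = N'SQLNCLI',
     @datasrc = N'remotehost';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'REMOTESRV',
     @useself = N'True';

SELECT t.field1, t2.field3
FROM database.schema.table t
JOIN REMOTESRV.database2.schema.table2 t2
  ON t.id = t2.id;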
While I was having trouble joining those two tables, I got exactly what I wanted by opening both remote databases at the same time: one MySQL 5.6 (PHP 7.1) and the other MySQL 5.1 (PHP 5.6).
//Open a new connection to the MySQL server
$mysqli1 = new mysqli('server1','user1','password1','database1');
$mysqli2 = new mysqli('server2','user2','password2','database2');
//Output any connection error
if ($mysqli1->connect_error) {
die('Error : ('. $mysqli1->connect_errno .') '. $mysqli1->connect_error);
} else {
echo "DB1 open OK<br>";
}
if ($mysqli2->connect_error) {
die('Error : ('. $mysqli2->connect_errno .') '. $mysqli2->connect_error);
} else {
echo "DB2 open OK<br><br>";
}
If you get those two OKs on screen, then both databases are open and ready, and you can proceed with your queries.
For your specific question, I would do something like this: first select all the 10-year-old kids from database A, then match them to the colors by ID in database B. It should work; I haven't tested this exact code on my server, but the sample below it does work on mine. You can query by anything (color, age, whatever) and even group the results as you require.
$results = $mysqli1->query("SELECT * FROM DatabaseTableA where age=10");
while($row = $results->fetch_array()) {
$theColorID = $row[0];
$theName = $row[1];
$theAge = $row[2];
echo "Kid Color ID : ".$theColorID." ".$theName." ".$theAge."<br>";
$doSelectColor = $mysqli2->query("SELECT * FROM DatabaseTableB where favorite_color=".$theColorID." ");
while($row = $doSelectColor->fetch_assoc()) {
echo "Kid Favorite Color : ".$row["favorite_color"]."<br>";
}
}
I have used this to switch back and forth in our programs without joining tables across remote servers, with no problems so far.
$results = $mysqli1->query("SELECT * FROM video where video_id_old is NULL");
while($row = $results->fetch_array()) {
$theID = $row[0];
echo "Original ID : ".$theID." <br>";
$doInsert = $mysqli2->query("INSERT INTO video (...) VALUES (...)");
$doGetVideoID = $mysqli2->query("SELECT video_id, user_id, time_stamp from video where user_id = '".$row[13]."' and time_stamp = ".$row[28]." ");
while($row = $doGetVideoID->fetch_assoc()) {
echo "New video_id : ".$row["video_id"]." user_id : ".$row["user_id"]." time_stamp : ".$row["time_stamp"]."<br>";
$sql = "UPDATE video SET video_id_old = video_id, video_id = ".$row["video_id"]." where user_id = '".$row["user_id"]."' and video_id = ".$theID.";";
$sql .= "UPDATE video_audio SET video_id = ".$row["video_id"]." where video_id = ".$theID.";";
// Execute multi query if you want
if (mysqli_multi_query($mysqli1, $sql)) {
// Query successful do whatever...
}
}
}
// close connection
$mysqli1->close();
$mysqli2->close();
I was trying to do some joins, but since I have those two DBs open I can go back and forth running queries by just switching between the connections $mysqli1 and $mysqli2.
It worked for me, I hope it helps... Cheers
As long as both databases are on the same server, you can refer to tables with the database name :)
SELECT * FROM db1.table1
JOIN db2.table2
ON db1.table1.col1 = db2.table2.col1;

Operation must use an updatable query. (Error 3073)

I have written this query:
UPDATE tbl_stock1 SET
tbl_stock1.weight1 = (
select (b.weight1 - c.weight_in_gram) as temp
from
tbl_stock1 as b,
tbl_sales_item as c
where
b.item_submodel_id = c.item_submodel_id
and b.item_submodel_id = tbl_stock1.item_submodel_id
and b.status <> 'D'
and c.status <> 'D'
),
tbl_stock1.qty1 = (
select (b.qty1 - c.qty) as temp1
from
tbl_stock1 as b,
tbl_sales_item as c
where
b.item_submodel_id = c.item_submodel_id
and b.item_submodel_id = tbl_stock1.item_submodel_id
and b.status <> 'D'
and c.status <> 'D'
)
WHERE
tbl_stock1.item_submodel_id = 'ISUBM/1'
and tbl_stock1.status <> 'D';
I got this error message:
Operation must use an updatable query. (Error 3073) Microsoft Access
But if I run the same query in SQL Server, it executes fine.
Thanks,
dinesh
I'm quite sure the JET DB Engine treats any query with a subquery as non-updateable. This is most likely the reason for the error and, thus, you'll need to rework the logic and avoid the subqueries.
As a test, you might also try to remove the calculation (the subtraction) being performed in each of the two subqueries. This calculation may not be playing nicely with the update as well.
Consider this very simple UPDATE statement using Northwind:
UPDATE Categories
SET Description = (
SELECT DISTINCT 'Anything'
FROM Employees
);
It fails with the error 'Operation must use an updateable query'.
The Access database engine simply does not support the SQL-92 syntax of a scalar subquery in the SET clause.
The Access database engine has its own proprietary UPDATE..JOIN..SET syntax, but it is unsafe because, unlike a scalar subquery, it does not require values to be unambiguous. If values are ambiguous, the engine silently 'picks' one arbitrarily, and it is hard (if not impossible) to predict which one will be applied even if you are aware of the problem.
For example, consider the existing Categories table in Northwind and the following daft (non-)table as a target for an update (daft but simple to demonstrate the problem clearly):
CREATE TABLE BadCategories
(
CategoryID INTEGER NOT NULL,
CategoryName NVARCHAR(15) NOT NULL
)
;
INSERT INTO BadCategories (CategoryID, CategoryName)
VALUES (1, 'This one...?')
;
INSERT INTO BadCategories (CategoryID, CategoryName)
VALUES (1, '...or this one?')
;
Now for the UPDATE:
UPDATE Categories
INNER JOIN (
SELECT T1.CategoryID, T1.CategoryName
FROM Categories AS T1
UNION ALL
SELECT 9 - T2.CategoryID, T2.CategoryName
FROM Categories AS T2
) AS DT1
ON DT1.CategoryID = Categories.CategoryID
SET Categories.CategoryName = DT1.CategoryName;
When I run this I'm told that two rows have been updated, which is funny because there's only one matching row in the Categories table. The result is that the Categories row with the matching CategoryID now has the '...or this one?' value. I suspect it was a race to see which value got written to the table last.
The SQL-92 scalar subquery is verbose when there are multiple clauses in the SET and/or the WHERE clause matches the SET's clauses, but at least it eliminates ambiguity (plus a decent optimizer should be able to detect that the subqueries are close matches). The SQL-99 Standard introduced MERGE, which can be used to eliminate the aforementioned repetition, but needless to say Access doesn't support that either.
The Access database engine's lack of support for the SQL-92 scalar subquery syntax is for me its worst 'design feature' (read 'bug').
Also note that the Access database engine's proprietary UPDATE..JOIN..SET syntax cannot be used with set functions ('totals queries' in Access-speak) at all. See Update Query Based on Totals Query Fails.
Keep in mind that if you copy a query that originally included other queries or summary queries, then even after you delete those sub-queries and reference only linked tables, the copy will (mistakenly) act as if it still has non-updatable fields and will give you this error. Simply re-create the query from scratch as you want it; it is an insidious little glitch.
You are updating weight1 and qty1 with values that are in turn derived from weight1 and qty1 (respectively). That's why MS-Access is choking on the update. It's probably also doing some optimisation in the background.
The way I would get around this is to dump the calculations into a temporary table, and then update the first table from the temporary table.
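A rough sketch of that idea in Access SQL, using the names from the question (tbl_stock1_tmp is an invented staging table; run these as two separate queries, and note it assumes item_submodel_id is unique in the staging table, otherwise the ambiguity caveat from the answer above applies):
Query 1, stage the calculated values:
SELECT b.item_submodel_id,
       b.weight1 - c.weight_in_gram AS new_weight1,
       b.qty1 - c.qty AS new_qty1
INTO tbl_stock1_tmp
FROM tbl_stock1 AS b, tbl_sales_item AS c
WHERE b.item_submodel_id = c.item_submodel_id
  AND b.status <> 'D'
  AND c.status <> 'D';
Query 2, update the base table from the staged (real) table, which Jet treats as updatable:
UPDATE tbl_stock1
INNER JOIN tbl_stock1_tmp AS t
  ON t.item_submodel_id = tbl_stock1.item_submodel_id
SET tbl_stock1.weight1 = t.new_weight1,
    tbl_stock1.qty1 = t.new_qty1
WHERE tbl_stock1.item_submodel_id = 'ISUBM/1'
  AND tbl_stock1.status <> 'D';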
There is no error in the code itself; the error is thrown for the following reason: check whether you have read-write permission on the MS Access database file.
Is the folder where the database file is stored (say Folder1) read-only?
If the database (MS Access file) sits in a read-only folder, the connection cannot be opened for writing when your application runs. For example, almost everything under C:\Program Files is set read-only by default, so changing the permission on the file or its containing folder solves this problem.
In the query properties, try changing the Recordset Type to Dynaset (Inconsistent Updates)