Linked Server Performance Issues - sql-server-2005

I've created a linked server to an Access database from MSSQL, and it takes a very long time to return rows. I'm using SQL Server 2005 and the Access database is Access 2003. A SELECT statement in SSMS that returns 8 columns and 80,000 rows takes 60 seconds to run.
I provided SQL-authenticated credentials (the 'sa' user) when creating the linked server, so I don't think the problem is permissions, and I'm not sure what it is. I get exactly the same running time with OPENQUERY as without it. Any guidance on why such a simple query would take so long to run is very much appreciated.
My query:
SELECT * FROM AccessDB12...AP
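For reference, here is a minimal sketch of the two forms being compared; the four-part name is handled by the local query processor, while OPENQUERY sends the query text straight to the Access (Jet/ACE) provider. AccessDB12 and AP are the names from the question.

-- four-part name against the linked server
SELECT * FROM AccessDB12...AP

-- pass-through form; the inner query text is executed by the Access provider
SELECT * FROM OPENQUERY(AccessDB12, 'SELECT * FROM AP')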

Related

SSIS performance vs OpenQuery with Linked Server from SQL Server to Oracle

We have a linked server (OraOLEDB.Oracle) defined in the SQL Server environment. Oracle 12c, SQL Server 2016. There is also an Oracle client (64 bit) installed on SQL Server.
When retrieving data from Oracle (a simple query, getting all columns from a 3M row, fairly narrow table, with varchars, dates and integers), we are seeing the following performance numbers:
sqlplus (select from Oracle into an OS file on the SQL Server machine itself): less than 2k rows/sec
SSMS (insert into a SQL Server table, select from Oracle using OpenQuery, i.e. pass-through to Oracle with remote execution): less than 2k rows/sec
SQL Export/Import tool (in essence, SSIS; insert into a SQL Server table using the OLEDB Oracle source and OLEDB SQL Server target): over 30k rows/sec
Looking for ways to improve throughput using OpenQuery/OpenResultSet to match the SSIS throughput. There is probably some buffer or flag somewhere that would allow achieving the same?
Please advise...
Thank you!
--Alex
There is probably some buffer or flag somewhere that would allow achieving the same?
You are probably looking for the FetchSize parameter.
FetchSize - specifies the number of rows the provider will fetch at a time (fetch array). It must be set on the basis of data size and the response time of the network. If the value is set too high, this could result in more wait time during the execution of the query. If the value is set too low, this could result in many more round trips to the database. Valid values are 1 to 429,496,296. The default is 100.
e.g.
exec sp_addlinkedserver @server = N'MyOracle', @srvproduct = N'Oracle', @provider = N'OraOLEDB.Oracle',
    @datasrc = N'//172.16.8.119/xe', @provstr = N'FetchSize=2000'   -- FetchSize goes in the provider string
See, e.g., https://blogs.msdn.microsoft.com/dbrowne/2013/10/02/creating-a-linked-server-for-oracle-in-64bit-sql-server/
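As a rough way to check the effect of FetchSize, a pull through the new linked server can be timed before and after changing the value. The table names below are placeholders, not from the thread.

-- MyOracle is the linked server defined above; SOME_SCHEMA.SOME_TABLE is a hypothetical source table
SELECT *
INTO dbo.OracleStaging          -- staging table created on the SQL Server side for the test
FROM OPENQUERY(MyOracle, 'SELECT * FROM SOME_SCHEMA.SOME_TABLE');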
I think there are many ways to enhance the performance of the INSERT query; I suggest reading the following article for more information about data loading performance.
The Data Loading Performance Guide
One method you can try is minimizing logging by using a clustered index. Check the link below for more information:
New update on minimal logging for SQL Server 2008
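As a sketch of the minimal-logging pattern those articles describe (assuming the bulk-logged or simple recovery model and the hypothetical staging table from the example above):

-- TABLOCK is one of the prerequisites for a minimally logged INSERT ... SELECT
-- (empty target, or SQL Server 2008+ under the conditions described in the guide)
INSERT INTO dbo.OracleStaging WITH (TABLOCK)
SELECT *
FROM OPENQUERY(MyOracle, 'SELECT * FROM SOME_SCHEMA.SOME_TABLE');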

KDB: Connection Error Between ODBC and SQL Server; Crashes Midway Through Running Query

I am using odbc.eval in KDB to run a SQL stored procedure that generates tens of millions of rows of data. We have two different RDBs (SQL Server 2012 and SQL Server 2016) set up with the same data, allocated memory, etc. The KDB code works against one of them but not against the other; for the newer server, KDB crashes midway through the query. The stored procedure runs fine in SQL Server Management Studio 2016, though it takes a long time to execute fully - around an hour or so. Could this be a timeout error? Any suggestions for running a SQL query that returns this much data from KDB without hitting memory or timeout issues?

Saving or passing results of SQL query to another table on another server

How can I save the results of a query to a SQL Server database that is on a different server?
I have the following problem. I have only READ access to a HugeDatabase. I can execute a SELECT command, and then I want to JOIN the results with another table, 'TableB', which is on my local server. I cannot create views on the HugeDatabase server (no rights, READ ONLY), and I cannot make a linked server from the HugeDatabase server to my local server (the admin of that server does not allow it). Note that I only have sufficient permissions to change the settings of my local SQL Server. Copying the whole HugeDatabase to my local server is pointless, since it is huge and my query takes only seconds.
Of course I can save the results as CSV and import them into a table on my local server, but that is a Middle Ages approach. I need to execute the query every 5 minutes and JOIN against the results, so dumping the results to CSV is a poor solution. Can you suggest a better way?
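To illustrate the workflow the question describes (this is a sketch, not an answer from the thread): since only the local server's settings can be changed, a linked server defined on the local instance could point at HugeDatabase, and a small staging table refreshed on a schedule would give TableB something to join against. All names other than TableB are hypothetical.

-- HugeSrv: hypothetical linked server created on the local instance, pointing at HugeDatabase
TRUNCATE TABLE dbo.RemoteResults;                 -- local staging table, refreshed e.g. every 5 minutes

INSERT INTO dbo.RemoteResults (Id, Amount)
SELECT Id, Amount
FROM OPENQUERY(HugeSrv, 'SELECT Id, Amount FROM dbo.SomeHugeTable');

-- the local join from the question
SELECT r.Id, r.Amount, b.SomeColumn
FROM dbo.RemoteResults r
JOIN dbo.TableB b ON b.Id = r.Id;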

Measuring MS Access SQL query duration

I'm trying to compare MS Access SQL queries against a local table vs a linked table
(linked to an Oracle database and to a SQL Server database).
I can get query duration when running the SQL command directly on Oracle or SQL Server, but when running the SQL in MS Access, I don't know how to capture the query duration.
Is there a way to get the query duration when running a SQL command inside MS Access?
Thanks. :-)
Yes, there is.
1. Record the current time in a variable.
2. Create a recordset whose data source points to your query/view/table.
3. Open the recordset (optionally, check the RecordCount).
4. Record the current time in another variable.
5. DateDiff between the variables from steps 1 and 4.
Access does not provide that sort of information, unlike server databases.
You could use a Form Timer and get an idea of the duration, but with linked tables a lot of that depends on the network, server overhead, etc.

SQL Select Query Timing Out

I have a table in my SQL Server 2008 database called dbo.app_additional_info which contains approximately 130,000 records. The structure of the table is shown below.
When I run a query like the one below in SQL Server Management Studio 2008
select app_additional_text
from app_additional_info
where application_id = 2665 --Could be any ID here
My query takes a long time to execute (up to 5 minutes) and sometimes it times out. The database is also connected to a web application, and when it runs the above query I always get a timeout error.
Is there anything I can do to speed up the performance of my query?
Your help with this would be greatly appreciated as this is grinding my web application to a halt.
Thanks.
Update
My execution plan from SSMS is shown below (I apologise for the poor quality).
Based on the limited info in the question, it looks like you are doing a table scan because there is no index on application_id. So, try this:
CREATE INDEX IX_app_additional_info_application_id
ON app_additional_info (application_id)
Your query should run much faster now.
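If the plan still shows a key lookup for app_additional_text after adding that index, a covering variant is sometimes used instead. This is a sketch, not part of the original answer, and it assumes app_additional_text is not an oversized LOB column:

-- covering index: the INCLUDE column lets the SELECT be satisfied without a key lookup
CREATE INDEX IX_app_additional_info_application_id_covering
ON app_additional_info (application_id)
INCLUDE (app_additional_text);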