SQL Express Database size limit via query

We know the maximum size of SQL data files allowed in the SQL Server Express editions is 2 GB, 4 GB, and 10 GB for SQL Server 2000, 2005, and 2008 Express respectively.
Is there any way to see the maximum allowed database size via a SQL query?

The Max size is exactly that. I'm not aware of any query you can run to get this information directly.
You could get the SQL Server version:
SELECT @@VERSION
and infer from that.
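Alternatively, SERVERPROPERTY can return the edition and version directly; a minimal sketch (you still have to map the result to the documented size limits yourself):
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version
An Express instance reports something like "Express Edition" here, and the version number tells you which size cap applies.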

Related

SSIS performance vs OpenQuery with Linked Server from SQL Server to Oracle

We have a linked server (OraOLEDB.Oracle) defined in the SQL Server environment. Oracle 12c, SQL Server 2016. There is also a 64-bit Oracle client installed on the SQL Server machine.
When retrieving data from Oracle (a simple query, getting all columns from a 3M row, fairly narrow table, with varchars, dates and integers), we are seeing the following performance numbers:
- sqlplus: select from Oracle > OS file on the SQL Server itself: less than 2k rows/sec
- SSMS: insert into a SQL Server table, select from Oracle using OpenQuery (pass-through to Oracle, so remote execution): less than 2k rows/sec
- SQL Export/Import tool (in essence, SSIS): insert into a SQL Server table, using OLEDB Oracle for the source and OLEDB SQL Server for the target: over 30k rows/sec
Looking for ways to improve throughput using OpenQuery/OpenResultSet to match the SSIS throughput. There is probably some buffer/flag somewhere that allows us to achieve the same?
Please advise...
Thank you!
--Alex
There is probably some buffer/flag somewhere that allows us to achieve the same?
You are probably looking for the FetchSize parameter:
FetchSize - specifies the number of rows the provider will fetch at a time (fetch array). It must be set on the basis of data size and the response time of the network. If the value is set too high, then this could result in more wait time during the execution of the query. If the value is set too low, then this could result in many more round trips to the database. Valid values are 1 to 4,294,967,296. The default is 100.
e.g. (written with named parameters to make it explicit that FetchSize belongs in the provider string, @provstr):
exec sp_addlinkedserver
    @server = N'MyOracle',
    @srvproduct = 'Oracle',
    @provider = 'OraOLEDB.Oracle',
    @datasrc = N'//172.16.8.119/xe',
    @provstr = N'FetchSize=2000'
See, e.g., https://blogs.msdn.microsoft.com/dbrowne/2013/10/02/creating-a-linked-server-for-oracle-in-64bit-sql-server/
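Once the linked server exists with a larger FetchSize, you can rerun the pass-through load and compare throughput; a minimal sketch (the target table and column names are hypothetical):
insert into dbo.OracleStaging (col1, col2)
select col1, col2
from openquery(MyOracle, 'select col1, col2 from big_table')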
I think there are many ways to enhance the performance of the INSERT query; I suggest reading the following article for more information about data loading performance:
The Data Loading Performance Guide
One method you can try is minimizing the logging by using a clustered index; check the link below for more information:
New update on minimal logging for SQL Server 2008
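As a simple illustration of minimal logging, loading into a brand-new table with SELECT ... INTO is minimally logged under the SIMPLE or BULK_LOGGED recovery model; a minimal sketch (linked-server, table, and column names are hypothetical):
select col1, col2
into dbo.OracleStaging_new   -- SELECT ... INTO a new table is minimally logged
from openquery(MyOracle, 'select col1, col2 from big_table')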

Copy a large amount of data from an MS Access table (with 2.3 million recs) to SQL Server 2000 using ADO in VB6

I have an MS Access table (2.3 million recs) and need to copy it to a SQL Server 2000 DB; every day I have 15,000 new recs to be imported to SQL Server 2000,
so I need a SQL statement, NOT a loop, to copy the data from Access to SQL Server.
Using VB6 and ADO.
Copying 15,000 rows from Access to SQL Server daily, even a row at a time, should not be a significant problem in terms of performance.
However, you could connect to the Jet provider from inside SQL Server and then treat the Access data just like any other linked server.
Just in case you see references to an ACE driver: the ACE driver replaces the Jet driver in newer versions of Access.
Personally, I would still use a client-based app to simply load the 15K records and not connect SQL Server to Jet. If I needed more performance, maybe, but not for 15K records daily.
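If you do want the single SQL statement the question asks for, SQL Server can read the Access file directly through the Jet provider with OPENROWSET; a minimal sketch (the file, table, and column names are hypothetical, and ad hoc distributed queries must be permitted on the server):
INSERT INTO dbo.ImportTarget (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\data\source.mdb'; 'Admin'; '',
                'SELECT Col1, Col2 FROM AccessTable')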

How many FileTables can we create in SQL Server 2012?

I am using SQL Server 2012 and would like to implement the FileTables feature in an application. How many FileTables can we create in one database in SQL Server 2012?
The number of tables allowed in a single database is limited by the total number of objects, which can't exceed 2,147,483,647. I've never run across any article that differentiates between regular tables and FileTables.
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.110).aspx
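To get a sense of how far a database is from that cap, you can simply count its objects; a minimal sketch:
SELECT COUNT(*) AS current_object_count
FROM sys.objects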

Could changing the compatibility level on SQL Server 2005 be dangerous?

We need to execute a query on a SQL Server 2005 database to get some stats about the longest-running executions on it.
We've found the following query:
select top 10
    source_code,
    stats.total_elapsed_time/1000000 as seconds,
    last_execution_time
from sys.dm_exec_query_stats as stats
cross apply (
    select text as source_code
    from sys.dm_exec_sql_text(sql_handle)
) as query_text
order by total_elapsed_time desc
It works fine, but it requires the database to be at compatibility level 90 (SQL Server 2005). This database is at level 80 (SQL Server 2000). If we change it to 90... could it be dangerous for the daily tasks? I mean, could our applications crash if we change it?
Thanks, and sorry for my English.
In the end I didn't need it: there was another database that had compatibility level 90, and I used that one.
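For anyone who does need to make the change on SQL Server 2005, the level can be checked and set like this (the database name is hypothetical; ALTER DATABASE ... SET COMPATIBILITY_LEVEL only exists from SQL Server 2008 onwards):
-- check the current level
SELECT name, compatibility_level FROM sys.databases WHERE name = 'YourDb'
-- set it to 90 (SQL Server 2005)
EXEC sp_dbcmptlevel 'YourDb', 90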

SQL Server 2000 vs SQL Server 2008 Query Performance

I'm working with a client whose SQL Server 2008 database was converted from SQL Server 2000, and one of the queries has increased in execution time quite dramatically since it ran on SQL Server 2000.
However, if I change the compatibility level to 2008 in the DB, the query goes like a rocket (40-50 times faster).
The query does use a number of UDFs.
My questions:
- are there issues with running in SQL Server 2000 compatibility mode on SQL Server 2008?
- has SQL Server 2008 improved performance when using UDFs?
There are some other things you might want to do after upgrading. See the "After upgrading..." section here: http://msdn.microsoft.com/en-us/library/bb933942.aspx
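That section is a checklist of maintenance steps; a hedged sketch of the kinds of commands commonly recommended after such an upgrade, not a verbatim copy of that page (the database name is hypothetical):
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 100  -- 100 = SQL Server 2008
USE YourDb
EXEC sp_updatestats           -- refresh statistics after the upgrade
DBCC UPDATEUSAGE (YourDb)     -- correct page/row counts carried over from SQL Server 2000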