Currently I am working on a migration project. I found the query below in a procedure. I can get the size of a database from the sys.master_files table, but the WHERE condition uses segmap, and I cannot find a similar column in sys.master_files. Please help me with this.
SELECT sum(size) * 2
FROM master..sysusages U
WHERE U.segmap = 3
AND U.dbid = db_id(@db_name)
Sybase and SQL Server used to share the same code base, so going by the Sybase docs, below is the definition of segmap:
The values of master..sysusages.segmap mean the following:
3: Data stored on this segment
4: Log stored on this segment
7: Since 7=4+3, both log and data stored on this segment
So the equivalent would be type = 0, which means data space only.
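For reference, a rough SQL Server equivalent could look like the sketch below (assuming @db_name is the procedure's parameter). Note that sys.master_files.size is counted in 8 KB pages, so the multiplier changes from the Sybase query's * 2 (2 KB pages) to * 8 to get KB:
SELECT SUM(size) * 8 AS data_size_kb   -- size is in 8 KB pages on SQL Server
FROM sys.master_files
WHERE [type] = 0                       -- 0 = ROWS (data files), matching segmap = 3
  AND database_id = DB_ID(@db_name);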
Is there any way within Snowflake/SQL to see which tables are being queried the most, as well as which columns? I want to know what data is of most value to my users and am not sure how to do this programmatically. Any thoughts are appreciated - thank you!
2021 update
The new ACCESS_HISTORY view has this information (in preview right now, enterprise edition).
For example, if you want to find the most used columns:
select obj.value:objectName::string objName
, col.value:columnName::string colName
, count(*) uses
, min(query_start_time) since
, max(query_start_time) until
from snowflake.account_usage.access_history
, table(flatten(direct_objects_accessed)) obj
, table(flatten(obj.value:columns)) col
group by 1, 2
order by uses desc
Ref: https://docs.snowflake.com/en/sql-reference/account-usage/access_history.html
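The question also asks about tables, not only columns; the same idea at table granularity would be along these lines (a sketch against the same view, skipping the column flatten):
select obj.value:objectName::string objName
, count(*) uses
from snowflake.account_usage.access_history
, table(flatten(direct_objects_accessed)) obj
group by 1
order by uses desc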
2020 answer
The best I found (for now):
For any given query, you can find which tables are scanned by looking at the plan generated for it:
SELECT *, "objects"
FROM TABLE(EXPLAIN_JSON(SYSTEM$EXPLAIN_PLAN_JSON('SELECT * FROM a.b.any_table_or_view')))
WHERE "operation"='TableScan'
You can find all of your previously run queries too:
select QUERY_TEXT
from table(information_schema.query_history())
So the natural next step would be to combine both - but that's not straightforward, as you'll get an error like:
SQL compilation error: argument 1 to function EXPLAIN_JSON needs to be constant, found 'SYSTEM$EXPLAIN_PLAN_JSON('SELECT * FROM a.b.c')'
The solution would be to take the queries from query_history() and run them through SYSTEM$EXPLAIN_PLAN_JSON outside of a single combined statement (so the strings become constants), and then you will be able to find the most queried tables.
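One way to script that combination (just a sketch: it generates the EXPLAIN statements as text so you can run them in a second step, and it assumes none of the stored query texts contain the $$ delimiter):
select 'select "objects" from table(explain_json(system$explain_plan_json($$'
    || query_text
    || '$$))) where "operation" = ''TableScan'';' as explain_stmt
from table(information_schema.query_history())
where execution_status = 'SUCCESS'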
I am trying to select the maximum value of the last 6 digits in a list of strings
This is for creating an Inbox Query in Infor EAM
OBJ_CODE is the column and R5OBJECTS is the table. I have tried the following code but the number returned is 0.
SELECT MAX(RIGHT(OBJ_CODE,6)) FROM R5OBJECTS
My list looks like this
AAAA100000
AAAA100001
AAAA100002
AAAA100003
AAAA100004
AAAA100005
...
AAAA100999
...
BBBB100006
BBBB100007
BBBB100008
BBBB100009
BBBB100010
So the expected output would be 100999
It seems this table R5OBJECTS is too big, and your SQL query's performance score exceeded the base configuration parameter.
If using Inbox -> Set your INBXSCOR to 50 and try your query again.
If using KPI -> Set your KPISCOR to 50
SQL Statement
Enter the SQL statement to calculate the number of applicable records
for the inbox entry. The system automatically populates SQL Statement
Text. Note: SQL Statement cannot exceed the performance score limit
defined in the INBXSCOR installation parameter.
https://docs.infor.com/eam/11.3.2/en-us/eamolh/cdh1498150395934.html
Although this code works perfectly for me in SQL Server 2016, I added an additional conversion of the string to int to be sure:
SELECT MAX(CONVERT(INT,RIGHT(OBJ_CODE,6))) FROM R5OBJECTS
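If some OBJ_CODE values do not end in six digits, the CONVERT can fail with a conversion error; a defensive variant (a sketch, assuming the database behind R5OBJECTS is SQL Server 2012 or later, where TRY_CONVERT exists) would be:
SELECT MAX(TRY_CONVERT(INT, RIGHT(OBJ_CODE, 6)))
FROM R5OBJECTS
WHERE OBJ_CODE LIKE '%[0-9][0-9][0-9][0-9][0-9][0-9]'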
There seems to be a problem with ODBC Driver 13 for SQL Server (running on local Ubuntu 16.04) with the RODBC package (version 1.3-15) in R (version 3.4.1, 2017-06-30). First off, we make a query to see the size of the SQL table called TableName.
library(RODBC)
connectionString <- "Driver={ODBC Driver 13 for SQL Server};Server=tcp:<DATABASE-URL>,<NUMBER>;Database=<DATABASE NAME>;Uid=<USER ID>;Pwd=<PASSWORD>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
connection <- odbcDriverConnect(connectionString)
count <- sqlQuery(connection, 'select count(*) from TableName')
odbcGetErrMsg(connection)
Output of the above gives a count value of 200,000 (odbcGetErrMsg returns no errors), which is known to be the correct size of the SQL table called TableName.
Now comes the strange part.
TableName <- sqlQuery(connection, 'select * from TableName')
count = dim(TableName)[1]
odbcGetErrMsg(connection)
Output of the above first gives a count value of 700 (odbcGetErrMsg returns no errors). But when the code is executed again, it returns a different count value of 2,300, i.e. it appears random. Repeating the code multiple times, I see the returned count value ranges between approximately 700 and 8,000 (TableName has 8 columns).
None of the above outputs changes when setting Connection Timeout to either 0 or some absurdly high number.
Does anybody know what is going on here? The goal is to store the full SQL table called TableName as a dataframe in R for further data processing.
Any help is much appreciated.
Note for others with a similar problem:
I did not solve this bug, but switching to Microsoft JDBC Driver 6.2 for SQL Server with the R package RJDBC returns the correct result. With this setup I am now able to load the full SQL table (200,000 rows and counting) into R as a dataframe for further processing.
We just recently converted to SQL Server 2012 (yes, our company is very behind!) from 2008. In processing some very basic queries, I am finding that the results are different and it appears to be related to the order of the items in the where clause.
For example this query:
select
source, count(*), sum(book_value)
from
repository.CURRENT_MONTH_14_1
where
NET_WAC = 0 or gross_wac = 0
and As_Of_Date = '2/29/2016'
and Argus_Entity = 'TEST'
group by
source
In SQL Server 2008, the results would give me only items that satisfy both the as_of_date and the argus_entity requirements and have net_wac or gross_wac = 0.
In SQL Server 2012, I am getting all items that have net_wac or gross_wac = 0, even if they have other argus_entity or as_of_date values.
Is there a certain order required now? Does SQL Server 2012 no longer satisfy all where requirements before giving results?
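For what it's worth, the result I expected corresponds to grouping the OR explicitly, something like this (shown only to illustrate the grouping I had in mind):
select
source, count(*), sum(book_value)
from
repository.CURRENT_MONTH_14_1
where
(NET_WAC = 0 or gross_wac = 0)
and As_Of_Date = '2/29/2016'
and Argus_Entity = 'TEST'
group by
source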
FYI, I do not have access to the server or any backend information. I'm lucky they allow us to query the information.
Thanks for any help you can provide.
I would like to clean up my database by identifying and removing the views and stored procedures which are not in use or have not been accessed for a long period (maybe the last 6 months or 1 year) in SQL Server 2005.
Please help.
You can't do this 100% unless you're running a trace on your system 24/7 and keeping the data, or unless you're using the auditing mechanisms of 2008.
Note that the data below is lost whenever the instance restarts; with that caveat, you can find the last-used time for a specific object with a query like this:
select
DB_NAME(us.[database_id]) as [db],
OBJECT_NAME(us.[object_id],us.[database_id]) as [object],
MAX(us.[last_user_lookup]) as [last_user_lookup],
MAX(us.[last_user_scan]) as [last_user_scan],
MAX(us.[last_user_seek]) as [last_user_seek]
from sys.dm_db_index_usage_stats us
where us.[database_id] = DB_ID()
AND us.[object_id] = OBJECT_ID('tblname')
group by us.[database_id], us.[object_id];
Based on @Koushick's answer, I resolved it by using this:
SELECT DB_NAME(us.[database_id]) AS [db],
OBJECT_NAME(us.[object_id], us.[database_id]) AS [object],
MAX(us.[last_user_lookup]) AS [last_user_lookup],
MAX(us.[last_user_scan]) AS [last_user_scan],
MAX(us.[last_user_seek]) AS [last_user_seek]
FROM sys.dm_db_index_usage_stats AS us
WHERE DB_NAME(us.[database_id]) = 'your database name'
AND OBJECT_NAME(us.[object_id], us.[database_id]) = 'your object'
GROUP BY us.[database_id], us.[object_id];
You can then quickly sort and play with the dates to see when objects were last used, as opposed to when they were last modified.
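As a small variation (a sketch along the same lines), you can drop the single-object filter and sort by the most recent access so that objects which have not been read in a long time sink to the bottom; note that this DMV only tracks tables and indexed views, so ordinary views and stored procedures will not appear in it:
SELECT OBJECT_NAME(us.[object_id], us.[database_id]) AS [object],
       MAX(us.[last_user_lookup]) AS [last_user_lookup],
       MAX(us.[last_user_scan]) AS [last_user_scan],
       MAX(us.[last_user_seek]) AS [last_user_seek]
FROM sys.dm_db_index_usage_stats AS us
WHERE DB_NAME(us.[database_id]) = 'your database name'
GROUP BY us.[database_id], us.[object_id]
ORDER BY MAX(us.[last_user_seek]) DESC;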
To find out the views/stored procedures older than a particular date, you can use the following query:
SELECT [name], create_date, modify_date
FROM sys.views -- or sys.procedures
WHERE modify_date <= 'date_older_than_you_want'
To find out unused views, you can use the following query:
SELECT [name], create_date, modify_date
FROM sys.views
WHERE create_date = modify_date