A view's column contains data that looks like a UUID, but the column type is int. Why?

I've access to a view on a SQL Server 2016 database.
The column named 'id_key' contains data such as:
id_key
D93F37FC-3C2A-EB11-B813-00505690E502
B03D37FC-3C2A-EB11-B813-00505690E502
AC644CFC-3C2A-EB11-B813-00505690E502
I've checked the type of the column: it's int.
Indeed, the result of:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'yourTableName' AND
COLUMN_NAME = 'yourColumnName'
returns just int.
I've not found any explanation for that in the SQL Server 2016 docs.
Have I missed something?
How can an int column store data that looks like strings/UUIDs?

If the view was not created using the WITH SCHEMABINDING option, then the underlying tables that it references are free to change.
It is possible that the problematic column was of int data type when the view was created but has subsequently been changed to uniqueidentifier, e.g.:
drop view if exists dbo.yourViewName;
drop table if exists dbo.yourTableName;
go
create table dbo.yourTableName (
ignore int,
yourColumnName int
);
go
create view dbo.yourViewName --with schemabinding
as
select yourColumnName as id_key
from dbo.yourTableName
go
alter table dbo.yourTableName
drop column yourColumnName
go
alter table dbo.yourTableName
add yourColumnName uniqueidentifier
go
insert dbo.yourTableName (yourColumnName) values
('D93F37FC-3C2A-EB11-B813-00505690E502'),
('B03D37FC-3C2A-EB11-B813-00505690E502'),
('AC644CFC-3C2A-EB11-B813-00505690E502')
go
select * from dbo.yourViewName
go
select data_type
from information_schema.columns
where table_name = 'yourViewName'
and column_name = 'id_key'
Which yields:
id_key
------------------------------------
D93F37FC-3C2A-EB11-B813-00505690E502
B03D37FC-3C2A-EB11-B813-00505690E502
AC644CFC-3C2A-EB11-B813-00505690E502
data_type
----------
int
See the CREATE VIEW (Transact-SQL) documentation for more information.
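If this metadata drift is indeed the cause, one common remedy (a hedged suggestion, since the real table and view definitions aren't shown in the question) is to refresh the view's cached metadata with sp_refreshview, or to recreate the view WITH SCHEMABINDING so the base table can no longer change underneath it:
exec sp_refreshview 'dbo.yourViewName';
go
select data_type
from information_schema.columns
where table_name = 'yourViewName'
and column_name = 'id_key'
After the refresh, the metadata query should then report uniqueidentifier for id_key.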

Related

Change data type of the attribute Roll-No of the table STUDENT from Number (10) to Varchar (10) [duplicate]

I want to change the data type of multiple columns from float to int. What is the simplest way to do this?
There is no data to worry about, yet.
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
ALTER TABLE tablename MODIFY columnname INTEGER;
This will change the datatype of given column
Depending on how many columns you wish to modify it might be best to generate a script, or use some kind of mysql client GUI
alter table table_name modify column_name int(5)
You can also use this:
ALTER TABLE [tablename] CHANGE [columnName] [columnName] DECIMAL (10,2)
If you want to change all columns of a certain type to another type, you can generate queries using a query like this:
select distinct concat('alter table ',
table_name,
' modify ',
column_name,
' <new datatype> ',
if(is_nullable = 'NO', ' NOT ', ''),
' NULL;')
from information_schema.columns
where table_schema = '<your database>'
and column_type = '<old datatype>';
For instance, if you want to change columns from tinyint(4) to bit(1), run it like this:
select distinct concat('alter table ',
table_name,
' modify ',
column_name,
' bit(1) ',
if(is_nullable = 'NO', ' NOT ', ''),
' NULL;')
from information_schema.columns
where table_schema = 'MyDatabase'
and column_type = 'tinyint(4)';
and get an output like this:
alter table table1 modify finished bit(1) NOT NULL;
alter table table2 modify canItBeTrue bit(1) NOT NULL;
alter table table3 modify canBeNull bit(1) NULL;
Note: this does not keep unique constraints, but that should be easily fixed with another if-parameter to the concat. I'll leave it up to the reader to implement that if needed.
Alter TABLE `tableName` MODIFY COLUMN `ColumnName` datatype(length);
Ex :
Alter TABLE `tbl_users` MODIFY COLUMN `dup` VARCHAR(120);
To change a column's data type there are two methods: the CHANGE method and the MODIFY method.
ALTER TABLE student_info CHANGE roll_no roll_no VARCHAR(255);
ALTER TABLE student_info MODIFY roll_no VARCHAR(255);
To also change the field name, use the CHANGE method:
ALTER TABLE student_info CHANGE roll_no identity_no VARCHAR(255);
You use the alter table ... change ... method, for example:
mysql> create table yar (id int);
Query OK, 0 rows affected (0.01 sec)
mysql> insert into yar values(5);
Query OK, 1 row affected (0.01 sec)
mysql> alter table yar change id id varchar(255);
Query OK, 1 row affected (0.03 sec)
Records: 1 Duplicates: 0 Warnings: 0
mysql> desc yar;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| id    | varchar(255) | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
1 row in set (0.00 sec)
If you want to alter the column details, set a default value, and add a comment, use this:
ALTER TABLE [table_name] MODIFY [column_name] [new data type]
DEFAULT [VALUE] COMMENT '[column comment]'
https://dev.mysql.com/doc/refman/8.0/en/alter-table.html
You can also set a default value for the column; just add the DEFAULT keyword followed by the value.
ALTER TABLE [table_name] MODIFY [column_name] [NEW DATA TYPE] DEFAULT [VALUE];
This also works for MariaDB (tested on version 10.2).

BigQuery Drop Table Column - DDL Bug

After removing a column from a table by:
ALTER TABLE MyTable
DROP COLUMN IF EXISTS MyColumn
In the BigQuery UI I can see that the column was deleted successfully and I can't query the specific column, but when I query the DDL I can see that the column still exists in the schema:
SELECT DDL FROM MyDataSet.INFORMATION_SCHEMA.TABLES
WHERE DDL LIKE '%MyTable%'
What am I doing wrong?
This is a nasty, undocumented side effect of BigQuery's Time Travel. Time Travel makes it unsafe to use ALTER TABLE statements in BigQuery.
Demonstration of problem:
create table apu.time_travel_problem
( id int64
, name string
);
select column_name, data_type
FROM apu.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'time_travel_problem';
column_name    data_type
id             INT64
name           STRING
This is all normal so far, but after an ALTER TABLE everything goes odd:
alter table apu.time_travel_problem drop column name;
select column_name, data_type
FROM apu.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'time_travel_problem';
column_name    data_type
id             INT64
name           STRING
The column we just dropped is still there!
Now try this:
alter table apu.time_travel_problem add column name string;
Column `name` was recently deleted in the table `time_travel_problem`. Deleted column name is reserved for up to the time travel duration, use a different column name instead.
Solution:
Do not use ALTER TABLE in BigQuery. Instead, DROP and re-CREATE using a temporary table.
This is a jinja template which I use:
/* {{TABLE}} */
CREATE TABLE IF NOT EXISTS {{DATASET}}.{{TABLE}}_migration
OPTIONS (expiration_timestamp = timestamp_add(CURRENT_TIMESTAMP(), INTERVAL 8 HOUR))
AS SELECT * FROM {{DATASET}}.{{TABLE}};
DROP TABLE {{DATASET}}.{{TABLE}};
CREATE TABLE {{DATASET}}.{{TABLE}}
(
{{COLUMN_DDL}}
);
INSERT INTO {{DATASET}}.{{TABLE}}
(
{{COLUMN_LIST}}
)
SELECT
{{COLUMN_LIST}}
FROM {{DATASET}}.{{TABLE}}_migration;
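For instance, rendered against the time_travel_problem table above (with the name column dropped, so {{COLUMN_DDL}} and {{COLUMN_LIST}} contain only id), the template would come out roughly like this:
/* time_travel_problem */
CREATE TABLE IF NOT EXISTS apu.time_travel_problem_migration
OPTIONS (expiration_timestamp = timestamp_add(CURRENT_TIMESTAMP(), INTERVAL 8 HOUR))
AS SELECT * FROM apu.time_travel_problem;
DROP TABLE apu.time_travel_problem;
CREATE TABLE apu.time_travel_problem
(
id int64
);
INSERT INTO apu.time_travel_problem
(
id
)
SELECT
id
FROM apu.time_travel_problem_migration;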

is it possible to "clone" a table variable?

I have a table variable with about 20 columns. I'd like to essentially reuse a single table variable structure for 2 different result sets. The 2 result sets should be represented in different table variables so I can't reuse a single table variable. Therefore, I was wondering if there was a way to clone a single table variable for reuse. For example, something like this:
DECLARE @MyTableVar1 TABLE(
Col1 INT,
Col2 INT
)
DECLARE @MyTableVar2 TABLE = @MyTableVar1
I'd like to avoid creating duplicate SQL if I can reuse existing SQL.
That is not possible, use temp table instead
if object_id('tempdb..#MyTempTable1') is not null drop table #MyTempTable1
Create TABLE #MyTempTable1 (
Col1 INT,
Col2 INT
)
if object_id('tempdb..#MyTempTable2') is not null drop table #MyTempTable2
select * into #MyTempTable2 from #MyTempTable1
Update:
As suggested by Eric in a comment, if you are looking for just the table schema and not the data inside the first table, then
select * into #MyTempTable2 from #MyTempTable1 where 1 = 0
You can create a user-defined table type, which is typically meant for use with table-valued parameters for stored procedures. Once the type is created, you can use it to declare any number of table variables just like built-in types. This comes closest to your requirement.
Ex:
CREATE TYPE MyTableType AS TABLE
( COL1 int
, COL2 int )
GO
DECLARE @MyTableVar1 AS MyTableType
DECLARE @MyTableVar2 AS MyTableType
A few things to note with this solution:
MyTableType becomes a database-level type. It is not local to a specific stored procedure.
If you ever have to change the definition of the table type, you have to drop the code/sprocs using the TVP type, then recreate the table type with the new definition and the related sprocs (see the sketch below). Typically this is a non-issue as the code and the type are created/recreated together.
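For example, a rough sketch of that redefinition workflow, where dbo.MyProcUsingType is a hypothetical proc that takes the type as a parameter:
DROP PROCEDURE dbo.MyProcUsingType   -- hypothetical dependent proc; a table type cannot be altered in place
DROP TYPE MyTableType
GO
CREATE TYPE MyTableType AS TABLE
( COL1 int
, COL2 int
, COL3 int )   -- new definition
GO
-- then recreate dbo.MyProcUsingType against the new definition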
You could use a temp table and select into... they perform better since their statistics are better.
create table #myTable(
Col1 INT null,
Col2 INT null
)
...
select *
into #myTableTwo
from #myTable
You can create one table variable, add a type column to the table, and use the type column in your queries to filter the data.
This way you are using one table to hold more than one type of data.
Hope this helps.
declare @myTable table(
Col1 INT null,
Col2 INT null,
....
Type INT NULL
)
insert into @myTable(...,type)
select ......,1
insert into @myTable(...,type)
select ......,2
select * from @myTable where type = 1
select * from @myTable where type = 2

Get the max length of SQL Server column using an array

I have four columns in my table: Surname, FirstName, MiddleName, and CurrAddress. Is there a way that I can store column names dynamically using an array and get the max length of each column? Say, for example, out of the four fields I only need the Surname and FirstName maximum lengths. My code below will only display one column per transaction. Any help is greatly appreciated. Thank you!
ALTER PROCEDURE [dbo].[sp_getColumnLength]
@colval nvarchar(50),
@tblval nvarchar(50)
AS
BEGIN
SELECT
character_maximum_length as 'Max Length'
FROM
information_schema.columns
WHERE
column_name = @colval
AND table_name = @tblval
END
GO
You could use a table-valued parameter in place of an array; see https://msdn.microsoft.com/en-gb/library/bb510489(v=sql.110).aspx
The example by Microsoft did not work for me, but the following code does:
USE AdventureWorksdw2012;
GO
/* Create a table type. */
drop type t1
go
CREATE TYPE t1 AS TABLE
( tabname CHAR(50)
, colname char(50) );
GO
/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP [db_datareader].[t1] readonly
AS
BEGIN
SELECT
character_maximum_length as 'Max Length'
FROM
@TVP
join information_schema.columns on table_name = tabname and column_name = colname
END
GO
/* Declare a variable that references the type. */
DECLARE @TVP AS t1;
/* Add data to the table variable. */
insert into @TVP values ('dimcustomer','title'),('dimcustomer','firstname')
/* Pass the table variable data to a stored procedure. */
EXEC usp_InsertProductionLocation @TVP;
GO
The problem with Microsoft's example seemed to be that t1 needs to be fully qualified within the stored procedure.
You have a native function for this:
select COL_LENGTH('TABLENAME', 'COLUMNNAME')
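For example (dbo.Contacts is only a placeholder name, since the question doesn't give the table name); note that COL_LENGTH reports the defined length in bytes, so an nvarchar(50) column returns 100:
select COL_LENGTH('dbo.Contacts', 'Surname')   as SurnameMaxBytes,
       COL_LENGTH('dbo.Contacts', 'FirstName') as FirstNameMaxBytes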

Select all values from all tables with specific table name

EDIT: original question:
Our UDW is broken out into attribute and attribute list tables.
I would like to write a data dictionary query that dynamically pulls in all column values from all tables that are like %attr_list%, without having to write a series of unions and update or add to the view every time a new attribute list is created in our UDW.
All of our existing attribute list tables follow the same format (number of columns, most column names, etc). Below is the first two unions in our existing view which I want to avoid updating each time a new attribute list table is added to our UDW.
CREATE VIEW [dbo].[V_BI_DATA_DICTIONARY]
( ATTR_TABLE
,ATTR_LIST_ID
,ATTR_NAME
,ATTR_FORMAT
,SHORT_DESCR
,LONG_DESCR
,SOURCE_DATABASE
,SOURCE_TABLE
,SOURCE_COLUMN
,INSERT_DATETIME
,INSERT_OPRID
)
AS
SELECT 'PREAUTH_ATTR_LIST' ATTR_TABLE
,[PREAUTH_ATTR_LIST_ID] ATTR_LIST_ID
,[ATTR_NAME] ATTR_NAME
,[ATTR_FORMAT] ATTR_FORMAT
,[SHORT_DESCR] SHORT_DESCR
,[LONG_DESCR] LONG_DESCR
,[SOURCE_DATABASE] SOURCE_DATABASE
,[SOURCE_TABLE] SOURCE_TABLE
,[SOURCE_COLUMN] SOURCE_COLUMN
,[INSERT_DATETIME] INSERT_DATETIME
,[INSERT_OPRID] INSERT_OPRID
FROM [My_Server].[MY_DB].[dbo].[PREAUTH_ATTR_LIST]
UNION
SELECT 'SAVINGS_ACCOUNT_ATTR_LIST'
,[SAVINGS_ACCOUNT_ATTR_LIST_ID]
,[ATTR_NAME]
,[ATTR_FORMAT]
,[SHORT_DESCR]
,[LONG_DESCR]
,[SOURCE_DATABASE]
,[SOURCE_TABLE]
,[SOURCE_COLUMN]
,[INSERT_DATETIME]
,[INSERT_OPRID]
FROM [My_Server].[MY_DB].[dbo].[SAVINGS_ACCOUNT_ATTR_LIST]
Something like this might work for you if all tables contain the same columns.
Just change the temp table and the selected columns to match your own columns.
CREATE TABLE #results (
ATTR_TABLE SYSNAME,
ATTR_LIST_ID INT,
ATTR_NAME NVARCHAR(50),
ATTR_FORMAT NVARCHAR(50),
SHORT_DESCR NVARCHAR(50),
LONG_DESCR NVARCHAR(255),
SOURCE_DATABASE NVARCHAR(50),
SOURCE_TABLE NVARCHAR(50),
SOURCE_COLUMN NVARCHAR(50),
INSERT_DATETIME DATETIME,
INSERT_OPRID INT
);
INSERT INTO #results
EXEC sp_MSforeachtable @command1 =
'
SELECT ''?''
, *
FROM ?
WHERE ''?'' LIKE ''%ATTR_LIST%''
'
SELECT *
FROM #results
DROP TABLE #results
EDIT: Updated my example with your columns. Because you use a different column name for ATTR_LIST_ID in each table, I changed the select to SELECT *. Obviously I don't know the data types of your columns, so you have to change them.
This won't work in a view, but you could create a stored procedure.
For SQL Server you should be able to use something like this:
SELECT c.name AS ColName, t.name AS TableName
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE t.name LIKE '%attr_list%'
And this will include views as well as tables
SELECT COLUMN_NAME, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%attr_list%'
If using MS SQL Server, check out the sys catalog views. You can use sys.tables and join to sys.columns to get your tables and columns. sys.extended_properties can get you description information, if it has been entered.
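A rough sketch of that approach (assuming descriptions were entered under the conventional MS_Description property name, as SSMS does):
SELECT t.name AS TableName,
       c.name AS ColName,
       ep.value AS Description
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
LEFT JOIN sys.extended_properties ep
       ON ep.class = 1                      -- object/column-level properties
      AND ep.major_id = c.object_id
      AND ep.minor_id = c.column_id
      AND ep.name = 'MS_Description'
WHERE t.name LIKE '%attr_list%'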