Insert into a table the outcome of a select on that table using ROW_NUMBER - SQL

I am creating a query in which I select a batch of rows from a table, insert those rows into an identical table in another database, and then repeat the process to select the next batch of rows from the original table.
For reference, this is what I am trying to do (already built for Oracle):
$" INSERT INTO {destination-table}
SELECT * FROM {original-table}
WHERE ROWID IN (SELECT B.RID
FROM (SELECT ROWID AS RID, rownum as RID2
FROM {original-table}
WHERE {where-clause}
AND ROWNUM <= {recordsPerStatement * iteration}
) B WHERE RID2 > {recordsPerStatement * (iteration - 1)})"
This is put through a loop in .NET.
For SQL Server, however, I fail to get this done. The data I retrieve with:
$" Select B.* from (Select A.* from (Select Row_NUMBER()
OVER (order by %%physloc%%) As RowID, {original-table}.* FROM
{original-table} where {where-clause})
A Where A.RowID between {recordsPerStatement * (iteration - 1)}
AND {recordsPerStatement * iteration} B"
The problem here is that the above select produces an extra column (RowID), which prevents me from inserting the data into the destination table.
I have been looking for ways to get rid of the RowID column in the top select, or to insert data from the original table based on the data retrieved
(something like: insert into destination-table select * from original-table where exists (rest of select query))... but to no avail.
TLDR: get rid of a RowID column used in calculations, to then be able to insert rows into an identical table.
Specifications:
A LOT of data (millions of rows), which is why it is processed in batches.
Unknown tables (so I cannot reference specific column names; they are unknown).
Needs to have an order (thus the row_number), so the same data is not copied twice.
Insert using a select query (first retrieving the data and doing some magic locally would severely impact performance).
If necessary, additional variables can be added (like an order-clause variable); however, any reference to data in the query will ALWAYS be a variable, and if I can find a way to not add more variables to the query, that would be preferable.
I hope that someone has an idea on what I could look at further.

This approach uses a temporary table to save the paginated data before processing it page by page. It has worked for me, but I'm not sure whether you might have problems with very large data sets. You could put the whole thing in an SP and then call the SP with parameters from .NET. You will need to add a parameter for the destination table name and construct/execute an INSERT statement in the final loop (see the sketch after the script).
-- Parameters
DECLARE @PageSize integer = 100;
DECLARE @TableName nVarchar(200) = 'WRD_WordHits';
DECLARE @OrderBy nVarchar(3000) = 'WordID'
STEP_010: BEGIN
-- Get the column definitions for the table
DECLARE @Cols int;
SELECT TABLE_NAME, ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
, IS_NULLABLE
INTO #Tspec
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName;
-- Number of columns
SET @Cols = @@ROWCOUNT;
END;
STEP_020: BEGIN
-- Create the temporary table that will hold the paginated data
CREATE TABLE #TT2 ( PageNumber int, LineNumber int, SSEQ int )
DECLARE @STMT nvarchar(3000);
END;
STEP_030: BEGIN
-- Add columns to #TT2 using the column definitions
DECLARE @Ord int = 0;
DECLARE @Colspec nvarchar(3000) = '';
DECLARE @AllCols nvarchar(3000) = '';
DECLARE @ColName nvarchar(200) = '';
WHILE @Ord < @Cols BEGIN
SELECT @Ord = @Ord + 1;
-- Get the column name and specification
SELECT @ColName = Column_Name
, @Colspec =
Column_Name + ' ' + DATA_TYPE + CASE WHEN CHARACTER_MAXIMUM_LENGTH IS NULL THEN ''
ELSE '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(30) ) + ')' END
FROM #Tspec WHERE ORDINAL_POSITION = @Ord;
-- Create and execute statement to add the column and the columns list used later
SELECT @STMT = ' ALTER TABLE #TT2 ADD ' + @Colspec + ';'
, @AllCols = @AllCols + ', ' + @ColName ;
EXEC sp_ExecuteSQL @STMT;
END;
-- Remove leading comma from columns list
SELECT @AllCols = SUBSTRING(@AllCols, 3, 3000);
PRINT @AllCols
-- Finished with the source table spec
DROP TABLE #Tspec;
END;
STEP_040: BEGIN -- Create and execute the statement used to fill #TT2 with the paginated data from the source table
-- The first two cols are the page number and row number within the page
-- The sequence is arbitrary but could use a key list for the order by clause
SELECT @STMT =
'INSERT #TT2
SELECT FLOOR( CAST( SSEQ as float) /' + CAST(@PageSize as nvarchar(10)) + ' ) + 1 PageNumber, (SSEQ) % ' + CAST(@PageSize as nvarchar(10)) + ' + 1 LineNumber, * FROM
(
SELECT ROW_NUMBER() OVER ( ORDER BY ' + @OrderBy + ' ) - 1 AS SSEQ, * FROM ' + @TableName + '
)
A; ' ;
EXEC sp_ExecuteSQL @STMT;
-- *** Test only to show that the table contains the data
--SELECT * FROM #TT2;
--SELECT @STMT = 'SELECT NULL AS EXECSELECT, ' + @AllCols + ' FROM #TT2;' ;
--EXEC sp_ExecuteSQL @STMT;
-- ***
END;
STEP_050: BEGIN -- Loop through paginated data, one page at a time.
-- Variables to control the paginated loop
DECLARE @PageMAX int;
SELECT @PageMAX = MAX(PageNumber) FROM #TT2;
PRINT 'Generated ' + CAST( @PageMAX AS varchar(10) ) + ' pages from table';
DECLARE @Page int = 0;
WHILE @Page < @PageMax BEGIN
SELECT @Page = @Page + 1;
-- Create and execute the statement to get one page of data - this could be any statement to process data page by page
SELECT @STMT = 'SELECT ' + @AllCols + ' FROM #TT2 WHERE PageNumber = ' + CAST(@Page AS Varchar(10 )) + ' ORDER BY LineNumber '
-- Execute the statement.
PRINT @STMT -- For testing
--EXEC sp_EXECUTESQL @STMT;
END;
-- Finished with Paginated data
DROP TABLE #TT2;
END;
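For the INSERT variant mentioned at the top of this answer, the final loop's statement could be built along these lines (a minimal sketch; @DestTable is an assumed extra parameter holding the destination table name, declared with the other parameters):
DECLARE @DestTable nvarchar(200) = 'WRD_WordHits_Copy'; -- hypothetical destination table
-- Inside the STEP_050 loop, in place of the page SELECT above:
SELECT @STMT = 'INSERT INTO ' + @DestTable + ' (' + @AllCols + ')'
             + ' SELECT ' + @AllCols + ' FROM #TT2'
             + ' WHERE PageNumber = ' + CAST(@Page AS varchar(10))
             + ' ORDER BY LineNumber;';
EXEC sp_executesql @STMT;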

The solution I came up with:
First read the column names from the database and store them locally, then use them again in building up the insert/select query, selecting only those columns from the inner select (which is all of them apart from RowID).
commandText = $"SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'{table}'"
columnNames = "executionfunction with commandText"
columnNamesCount = columnNames.Rows.Count
Dim counter As Int16 = 0
commandText = String.Empty
commandText = $"INSERT INTO {destination} SELECT "
For Each row As DataRow In columnNames.Rows
If counter = columnNamesCount - 1 Then
commandText += $"B.{row("column_name")} "
Else
commandText += $"B.{row("column_name")}, "
End If
counter = counter + 1
Next
commandText += $"FROM
(Select A.* FROM (Select Row_NUMBER()
OVER(order by %%physloc%%) AS RowID, {table}.*
FROM {table} where {filter}) A
WHERE A.RowID between ({recordsPerStatement} * ({iteration}-1)) + 1
AND ({recordsPerStatement} * {iteration})) B"
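For illustration, with a hypothetical Customers table (columns Id and Name), a destination in an Archive database, a filter of IsActive = 1, 100 records per statement, and iteration 3, the code above generates:
INSERT INTO Archive.dbo.Customers SELECT B.Id, B.Name FROM
(Select A.* FROM (Select Row_NUMBER()
OVER(order by %%physloc%%) AS RowID, Customers.*
FROM Customers where IsActive = 1) A
WHERE A.RowID between (100 * (3 - 1)) + 1
AND (100 * 3)) B
Listing B.Id, B.Name instead of B.* is what drops the RowID column, so the column count matches the destination table.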
EDIT: To remove the %%physloc%% clause, an OFFSET ... FETCH NEXT part has been built in. New approach:
commandText += $"INSERT INTO {destination} SELECT * FROM {table} WHERE {filter} "
For i As Int16 = 1 To columnNamesCount
If i = 1 Then
commandText += $"ORDER BY {columnNames.Rows(i - 1)("column_name")} ASC"
Else
commandText += $"{columnNames.Rows(i - 1)("column_name")} ASC"
End If
If i <> columnNamesCount Then
commandText += ", "
End If
Next
commandText += $" OFFSET ({recordsPerStatement} * ({iteration} -1)) ROWS FETCH Next {recordsPerStatement} ROWS ONLY"

Related

SQL return values if row count > X

DECLARE @sql_string varchar(7000)
set @sql_string = (select top 1 statement from queries where name = 'report name')
EXECUTE (@sql_string)
@sql_string holds another SQL statement. This query works for me: it returns all the values from the query stored in the statement column of the queries table. From this, I need to figure out how to only return the results IF the number of rows returned exceeds a threshold (in my particular case, 25), and otherwise return nothing. I can't quite figure out how to get this conditional to work.
Any direction on this is much appreciated.
If all the queries return the same columns, you could simply store the data in a temporary table or table variable and then use logic such as:
select t.*
from #t t
where (select count(*) from #t) > 25;
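A minimal sketch of that temp-table variant, assuming for illustration that the stored query's columns are known (here a and b):
CREATE TABLE #t (a int, b varchar(100));
INSERT INTO #t EXEC (@sql_string);
SELECT t.* FROM #t t WHERE (SELECT COUNT(*) FROM #t) > 25;
DROP TABLE #t;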
An alternative is to try constructing a new query from the existing query. I don't recommend trying to parse the existing string, if you can avoid that. Assuming that the query does not use CTEs or have an ORDER BY clause, for instance, something like this should work:
set @sql = '
with q as (
' + @sql + '
)
select q.*
from q
where (select count(*) from q) > 25
';
That did the trick, @Gordon. Here is my final version:
DECLARE @report_name varchar(100)
DECLARE @sql_string varchar(7000)
DECLARE @sql varchar(7000)
DECLARE @days int
set @report_name = 'Complex Pass Failed within 1 day'
set @days = 5
set @sql_string = (select top 1 statement from queries where name = @report_name )
set @sql = 'with q as (' + @sql_string + ') select q.* from q where (select count(*) from q) > ' + convert(varchar(100), @days)
EXECUTE (@sql)
This worked with two nuances:
The SQL returned could not include a trailing ";" character.
The statement cannot include an "ORDER BY" clause.
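If the stored statements can't be guaranteed to meet the first condition, the trailing semicolon is easy to strip before wrapping the query in the CTE (a small sketch; the ORDER BY restriction is harder to work around generically):
set @sql_string = rtrim(@sql_string)
if right(@sql_string, 1) = ';'
    set @sql_string = left(@sql_string, len(@sql_string) - 1)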

Dynamic SQL Procedure with Pivot displaying counts based on Date Range

I have a table which contains multiple user entries.
I want to pull counts of user entries based on a date range passed to a stored procedure.
start date: 11/9/2017
end date: 11/11/2017
However, the response needs to be dynamic based on the number of days in the date range.
The desired format is one row per user, with a count column for each date in the range.
Now that you have provided examples, I have updated my answer, which provides you with a solution based on the data you supplied.
Note that you are able to change the date range and the query will update accordingly.
Bear in mind that this SQL query is for SQL Server:
create table #tbl1 (
[UserId] int
,[UserName] nvarchar(max)
,[EntryDateTime] datetime
);
insert into #tbl1 ([UserId],[UserName],[EntryDateTime])
values
(1,'John Doe','20171109')
,(1,'John Doe','20171109')
,(1,'John Doe','20171110')
,(1,'John Doe','20171111')
,(2,'Mike Smith','20171109')
,(2,'Mike Smith','20171110')
,(2,'Mike Smith','20171110')
,(2,'Mike Smith','20171110')
;
-- declare variables
declare
@p1 date
,@p2 date
,@diff int
,@counter1 int
,@counter2 int
,@dynamicSQL nvarchar(max)
;
-- set variables
set @p1 = '20171109'; -- ENTER THE START DATE IN THE FORMAT YYYYMMDD
set @p2 = '20171111'; -- ENTER THE END DATE IN THE FORMAT YYYYMMDD
set @diff = datediff(dd,@p1,@p2); -- used to calculate the difference in days
set @counter1 = 0; -- first counter to be used in while loop
set @counter2 = 0; -- second counter to be used in while loop
set @dynamicSQL = 'select pivotTable.[UserId] ,pivotTable.[UserName] as [Name] '; -- start of the dynamic SQL statement
-- to get the dates into the query in a dynamic way, you need to do a while loop (or use a cursor)
while (@counter1 < @diff)
begin
set @dynamicSQL += ',pivotTable.[' + convert(nvarchar(10),dateadd(dd,@counter1,@p1),120) + '] '
set @counter1 = (@counter1 +1)
end
-- continuation of the dynamic SQL statement
set @dynamicSQL += ' from (
select
t.[UserId]
,t.[UserName]
,cast(t.[EntryDateTime] as date) as [EntryDate]
,count(t.[UserId]) as [UserCount]
from #tbl1 as t
where
t.[EntryDateTime] >= ''' + convert(nvarchar(10),@p1,120) + ''' ' +
' and t.[EntryDateTime] <= ''' + convert(nvarchar(10),@p2,120) + ''' ' +
'group by
t.[UserId]
,t.[UserName]
,t.[EntryDateTime]
) as mainQuery
pivot (
sum(mainQuery.[UserCount]) for mainQuery.[EntryDate]
in ('
;
-- the second while loop which is used to create the columns in the pivot table
while (@counter2 < @diff)
begin
set @dynamicSQL += ',[' + convert(nvarchar(10),dateadd(dd,@counter2,@p1),120) + ']'
set @counter2 = (@counter2 +1)
end
-- continuation of the SQL statement
set @dynamicSQL += ')
) as pivotTable'
;
-- this is the easiest way I could think of to get rid of the leading comma in the query
set @dynamicSQL = replace(@dynamicSQL,'in (,','in (');
print @dynamicSQL -- included this so that you can see the SQL statement that is generated
exec sp_executesql @dynamicSQL; -- this will run the generated dynamic SQL statement
drop table #tbl1;
Let me know if that's what you were looking for.
If you are using MySQL, this will do what you want:
SELECT UserID,
UserName,
SUM(Date = '2017-11-09') '2017-11-09',
SUM(Date = '2017-11-10') '2017-11-10',
SUM(Date = '2017-11-11') '2017-11-11'
FROM src
GROUP BY UserID, UserName
If you are using SQL Server, you could try it with PIVOT:
SELECT *
FROM
(SELECT userID, userName, EntryDateTime
FROM t) src
PIVOT
(COUNT(userID)
FOR EntryDateTime IN ([2017-11-09], [2017-11-10], [2017-11-11])) pvt
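For the sample data given earlier, these last two queries should produce counts along these lines (tallied by hand from the eight inserted rows):
UserId  Name        2017-11-09  2017-11-10  2017-11-11
1       John Doe    2           1           1
2       Mike Smith  1           3           0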

I want to know in how many tables data exists in a database

I need a SQL script to check, for a given database, how many tables have data and how many tables are empty.
Try this one:
USE dbName
SELECT COUNT(*) from information_schema.tables
WHERE table_type = 'base table'
As suggested by @James Z in the comments, you can use the standard reports in SSMS:
Right-click on the database instance, then
REPORTS-->STANDARD REPORTS-->DISK USAGE BY TABLES
Please try the following...
CREATE PROCEDURE EmptyFullTableCounter AS
BEGIN
DECLARE @fldNameValue VARCHAR( 64 );
DECLARE @sqlStatementString varchar( 200 );
DECLARE @loopIndex INT = 1;
DECLARE @recordCount INT;
DROP TABLE IF EXISTS tempTblTableNames;
CREATE TABLE tempTblTableNames
(
fldName VARCHAR( 64 ),
fldCount INT
);
INSERT INTO tempTblTableNames ( fldName,
fldCount )
SELECT table_name,
0
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'
AND table_catalog = 'UserAccessAccounts001'
AND table_name != 'tempTblTableNames';
SET @recordCount = ( SELECT COUNT( * )
FROM tempTblTableNames );
WHILE @loopIndex <= @recordCount
BEGIN
SET @fldNameValue = ( SELECT fldName
FROM ( SELECT fldName,
ROW_NUMBER() OVER ( ORDER BY fldName ) AS recordNumber
FROM ( SELECT fldName
FROM tempTblTableNames
) AS fldNamesFinder
) AS fldNamesWithRowNumber
WHERE recordNumber = @loopIndex );
SET @sqlStatementString = 'UPDATE tempTblTableNames ' +
'SET fldCount = ( SELECT COUNT( * ) ' +
' FROM ' +
@fldNameValue +
' ) ' +
'WHERE fldName = ''' +
@fldNameValue +
''';';
EXEC ( @sqlStatementString );
SET @loopIndex = @loopIndex + 1;
END
SELECT SUM( IIF( fldCount > 0, 1, 0 ) ) AS Haves,
SUM( IIF( fldCount = 0, 1, 0 ) ) AS HaveNots
FROM tempTblTableNames;
DROP TABLE tempTblTableNames;
END
This procedure starts by creating a table to hold the names of each table, then populates it using the following statement...
INSERT INTO tempTblTableNames ( fldName,
fldCount )
SELECT table_name,
0
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'
AND table_catalog = 'UserAccessAccounts001'
AND table_name != 'tempTblTableNames';
Please note that the above statement excludes tempTblTableNames from our list of tables.
The procedure then stores a count of the total number of records in the variable @recordCount. This value is used as a sentinel value for a WHILE loop that extracts each table name from tempTblTableNames and constructs around it a statement that will update that table name's associated count in tempTblTableNames. This statement is then executed and the loop index incremented.
Once the loop has completed, a final SELECT statement is performed that uses SUM() in conjunction with IIF() to count the number of tables that have records and the number of tables that do not.
If you have any questions or comments, then please feel free to post a Comment accordingly.
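For comparison, here is a set-based sketch that avoids the dynamic COUNT(*) loop entirely by reading row counts from sys.partitions (the counts there can momentarily drift under heavy concurrent writes, but they distinguish empty from non-empty tables reliably):
SELECT SUM( CASE WHEN rc.row_count > 0 THEN 1 ELSE 0 END ) AS Haves,
       SUM( CASE WHEN rc.row_count = 0 THEN 1 ELSE 0 END ) AS HaveNots
FROM ( SELECT t.object_id,
              SUM( p.rows ) AS row_count
       FROM sys.tables AS t
       INNER JOIN sys.partitions AS p
               ON p.object_id = t.object_id
              AND p.index_id IN ( 0, 1 ) -- heap or clustered index only, to avoid double counting
       GROUP BY t.object_id
     ) AS rc;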

How to UPDATE all columns of a record without having to list every column

I'm trying to figure out a way to update a record without having to list every column name that needs to be updated.
For instance, it would be nice if I could use something similar to the following:
// the parts inside braces are what I am trying to figure out
UPDATE Employee
SET {all columns, without listing each of them}
WITH {this record with id of '111' from other table}
WHERE employee_id = '100'
If this can be done, what would be the most straightforward/efficient way of writing such a query?
It's not possible.
What you're trying to do is not part of the SQL specification and is not supported by any database vendor. See the specifications of the SQL UPDATE statement for MySQL, PostgreSQL, MSSQL, Oracle, Firebird, and Teradata. Every one of them supports only the syntax below:
UPDATE table_reference
SET column1 = {expression} [, column2 = {expression}] ...
[WHERE ...]
This is not possible as such, but you can approximate it:
begin tran
delete from table where CONDITION
insert into table select * from EqualDesingTabletoTable where CONDITION
commit tran
Be careful with identity fields.
Here's a hardcore way to do it with SQL Server. Carefully consider security and integrity before you try it, though.
This uses the schema views to get the names of all the columns and then puts together a big update statement to update all columns except the ID column, which it uses to join the tables.
This only works for a single column key, not composites.
usage: EXEC UPDATE_ALL 'source_table','destination_table','id_column'
CREATE PROCEDURE UPDATE_ALL
@SOURCE VARCHAR(100),
@DEST VARCHAR(100),
@ID VARCHAR(100)
AS
DECLARE @SQL VARCHAR(MAX) =
'UPDATE D SET ' +
-- Google 'for xml path stuff'. This gets the rows from the query results and
-- turns them into a comma-separated list.
STUFF((SELECT ', D.'+ COLUMN_NAME + ' = S.' + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @DEST
AND COLUMN_NAME <> @ID
FOR XML PATH('')),1,1,'')
+ ' FROM ' + @SOURCE + ' S JOIN ' + @DEST + ' D ON S.' + @ID + ' = D.' + @ID
--SELECT @SQL
EXEC (@SQL)
In Oracle PL/SQL, you can use the following syntax:
DECLARE
r my_table%ROWTYPE;
BEGIN
r.a := 1;
r.b := 2;
...
UPDATE my_table
SET ROW = r
WHERE id = r.id;
END;
Of course that just moves the burden from the UPDATE statement to the record construction, but you might already have fetched the record from somewhere.
How about using MERGE?
https://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx
It gives you the ability to run INSERT, UPDATE, and DELETE in one statement. One other piece of advice: if you're going to be updating a large data set with indexes, and the source subset is smaller than your target but both tables are very large, move the changes to a temporary table first. I tried to merge two tables that were nearly two million rows each, and 20 records took 22 minutes. Once I moved the deltas over to a temp table, it took seconds.
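For illustration, a minimal MERGE sketch against a hypothetical Employee/Employee_Staging pair (the staging table and column names are assumptions; note that MERGE still requires listing the columns, it just combines the insert and update passes into one statement):
MERGE INTO Employee AS D
USING Employee_Staging AS S
   ON D.employee_id = S.employee_id
WHEN MATCHED THEN
   UPDATE SET D.first_name = S.first_name,
              D.last_name = S.last_name
WHEN NOT MATCHED BY TARGET THEN
   INSERT ( employee_id, first_name, last_name )
   VALUES ( S.employee_id, S.first_name, S.last_name );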
If you are using Oracle, you can use %ROWTYPE:
declare
var_x TABLE_A%ROWTYPE;
Begin
select * into var_x
from TABLE_B where rownum = 1;
update TABLE_A set row = var_x
where ID = var_x.ID;
end;
/
given that TABLE_A and TABLE_B have the same schema.
It is possible. As npe said, it's not standard practice, but if you really have to:
1. First, a scalar function
CREATE FUNCTION [dte].[getCleanUpdateQuery] (@pTableName varchar(40), @pQueryFirstPart VARCHAR(200) = '', @pQueryLastPart VARCHAR(200) = '', @pIncludeCurVal BIT = 1)
RETURNS VARCHAR(8000) AS
BEGIN
DECLARE @pQuery VARCHAR(8000);
WITH cte_Temp
AS
(
SELECT
C.name
FROM SYS.COLUMNS AS C
INNER JOIN SYS.TABLES AS T ON T.object_id = C.object_id
WHERE T.name = @pTableName
)
SELECT @pQuery = (
CASE @pIncludeCurVal
WHEN 0 THEN
(
STUFF(
(SELECT ', ' + name + ' = ' + @pQueryFirstPart + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
)
)
ELSE
(
STUFF(
(SELECT ', ' + name + ' = ' + @pQueryFirstPart + name + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
)
) END)
RETURN 'UPDATE ' + @pTableName + ' SET ' + @pQuery
END
2. Use it like this
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery(<your table name>, <query part before current value>, <query part after current value>, <1 if current value is used, 0 if updating everything to a static value>);
EXEC (@pQuery)
Example 1: make all employee columns 'Unknown' (you need to make sure the column type matches the intended value):
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', '', 'Unknown', 0);
EXEC (@pQuery)
Example 2: remove an undesired text qualifier (e.g. #):
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', 'REPLACE(', ', ''#'', '''')', 1);
EXEC (@pQuery)
This query can be improved; this is just the one I saved and sometimes use. You get the idea.
Similar to an upsert, you could check whether the item exists in the table; if so, delete it and insert it with the new values (technically updating it). You would, however, lose your rowid, if that's something sensitive in your case.
Behold, the updelsert:
IF NOT EXISTS (SELECT * FROM Employee WHERE ID = @SomeID)
INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
ELSE
BEGIN -- group both statements so they run only when the row already exists
DELETE FROM Employee WHERE ID = @SomeID
INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
END
You could do it by deleting the column from the table and adding the column back in with a default value of whatever you need it to be; saving this will require rebuilding the table.

Export data from a non-normalized database

I need to export data from a non-normalized database, where there are multiple repeating columns, to a new normalized database.
One example is the Products table, which has 30 boolean columns (ValidSize1, ValidSize2, etc.), and every record has a foreign key pointing to a Sizes table with 30 columns holding the size codes (XS, S, M, etc.). In order to get the valid sizes for a product, I have to scan both tables and take the value SizeCodeX from the Sizes table only if ValidSizeX on the product is true. Something like this:
Products Table
--------------
ProductCode <PK>
Description
SizesTableCode <FK>
ValidSize1
ValidSize2
[...]
ValidSize30
Sizes Table
-----------
SizesTableCode <PK>
SizeCode1
SizeCode2
[...]
SizeCode30
For now I am using a "template" query which I repeat 30 times:
SELECT
Products.Code,
Sizes.SizesTableCode, -- I need this code because different codes can have same size codes
Sizes.Size_1
FROM Products
INNER JOIN Sizes
ON Sizes.SizesTableCode = Products.SizesTableCode
WHERE Sizes.Size_1 IS NOT NULL
AND Products.ValidSize_1 = 1
I am just putting this query inside a loop and replacing the "_1" with the loop index:
DECLARE @counter int, @max int, @sql varchar(max);
SET @counter = 1;
SET @max = 30;
SET @sql = '';
WHILE (@counter <= @max)
BEGIN
SET @sql = @sql + ('[...]'); -- Here goes my query with dynamic indexes
IF @counter < @max
SET @sql = @sql + ' UNION ';
SET @counter = @counter + 1;
END
INSERT INTO DestDb.ProductsSizes EXEC(@sql); -- Insert statement
GO
Is there a better, cleaner, or faster method to do this? I am using SQL Server and I can only use SQL/T-SQL.
You can prepare a dynamic query using the SYS.Syscolumns table to get all the values in a row:
DECLARE @SqlStmt Varchar(MAX)
SET @SqlStmt=''
SELECT @SqlStmt = @SqlStmt + 'SELECT ''' + name + ''' AS column_name UNION ALL '
FROM SYS.Syscolumns WITH (READUNCOMMITTED)
WHERE Object_Id('dbo.Products')=Id AND ([Name] like 'SizeCode%' OR [Name] like 'ProductCode%')
IF REVERSE(@SqlStmt) LIKE REVERSE('UNION ALL ') + '%'
SET @SqlStmt = LEFT(@SqlStmt, LEN(@SqlStmt) - LEN('UNION ALL '))
print ( @SqlStmt )
Well, it seems that a "clean" (and much faster!) solution is the UNPIVOT function.
I found a very good example here:
http://pratchev.blogspot.it/2009/02/unpivoting-multiple-columns.html
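For reference, here is a sketch of the same unpivoting idea using CROSS APPLY (VALUES ...), which handles the paired flag/code columns from the question more naturally than a plain UNPIVOT; the column names are taken from the question, and the VALUES list would be extended through all 30 pairs:
SELECT p.ProductCode,
       s.SizesTableCode,
       v.SizeCode
FROM Products AS p
INNER JOIN Sizes AS s
        ON s.SizesTableCode = p.SizesTableCode
CROSS APPLY ( VALUES
    (p.ValidSize1, s.SizeCode1),
    (p.ValidSize2, s.SizeCode2),
    (p.ValidSize3, s.SizeCode3) -- ... continue through (p.ValidSize30, s.SizeCode30)
) AS v (ValidSize, SizeCode)
WHERE v.ValidSize = 1
  AND v.SizeCode IS NOT NULL;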