I'm looking for a way to add my own data to some columns:
select co1, col2, col3 from tbl
I want col3 to exist, but to show only my data:
select co1, col2, col3=3 from tbl
The output should be:
1, 0, 3
I have a problem with CR9, and this is the only way, I guess.
If you want to call the third column col3, just do:
select co1,col2, '3' as col3 from tbl
By the way,
select co1, col2, col=3 from tbl
was valid (though not recommended by Microsoft) up to SQL Server 2008 R2; as of SQL Server 2012 it is no longer accepted.
Just use "select co1,col2,'3' from tbl".
Try this:
select co1,col2, '3' as col3 from tbl
Related
I have a design concern which I need to implement without using a cursor.
There is a source table 'A' in which every column is of varchar data type. I want to iterate over the rows, convert each column to the destination table's data type, and, if conversion/parsing fails, log that row in a separate error table.
Any suggestions on how to proceed would be helpful.
In SQL Server, you would use try_convert():
insert into t2 ( . . . )
    select . . .
    from (select try_convert(?, col1) as col1,
                 try_convert(?, col2) as col2
          from staging_t
         ) t
    where col1 is not null and col2 is not null and . . .;
Then run a second query to get the rows where the value is NULL.
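That second query can be a mirror-image insert into the error table; a minimal sketch, assuming a hypothetical error table t2_errors with the same varchar columns as staging_t, and with int/date standing in for the real target types:

-- rows where any conversion failed go to the error table
insert into t2_errors (col1, col2, . . .)
    select col1, col2, . . .
    from staging_t
    where try_convert(int, col1) is null or
          try_convert(date, col2) is null or
          . . .;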
If NULL is a permitted value in the staging column, then this is a bit more complex:
insert into t2 ( . . . )
    select new_col1, new_col2, . . .
    from (select try_convert(?, col1) as new_col1, col1,
                 try_convert(?, col2) as new_col2, col2
          from staging_t
         ) t
    where (new_col1 is not null or col1 is null) and
          (new_col2 is not null or col2 is null) and
          . . .;
In SQL Server 2012 and up, each of these will return NULL when the conversion fails instead of raising an error:
try_convert(datatype,val)
try_cast(val as datatype)
try_parse(val as datatype [using culture])
Example of finding all varcharcol values that would fail conversion to int:
select id, varcharcol
from a
where try_convert(int,varcharcol) is null
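For completeness, try_cast() and try_parse() behave the same way; a quick sanity check (the literal values and the en-US culture are just illustrations):

select try_cast('123' as int) as good_cast,                     -- 123
       try_cast('abc' as int) as bad_cast,                      -- NULL
       try_parse('01/02/2015' as date using 'en-US') as parsed; -- 2015-01-02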
You can also use the above query with a user-defined table type: create the table type, then declare it as a read-only table-valued parameter in your procedure:

@UserDefineTypeTable UserDefineTypeTable readonly

insert into table1 (col1, col2, col3)
select t.col1, t.col2, t.col3
from @UserDefineTypeTable as t
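A fuller sketch of the same idea, with the type and the procedure spelled out (the type definition, procedure name, and column types here are hypothetical):

-- hypothetical table type matching table1's columns
CREATE TYPE UserDefineTypeTable AS TABLE (
    col1 int,
    col2 varchar(50),
    col3 varchar(50)
);
GO

-- procedure that accepts the table type as a read-only parameter
CREATE PROCEDURE dbo.InsertIntoTable1
    @UserDefineTypeTable UserDefineTypeTable READONLY
AS
BEGIN
    INSERT INTO table1 (col1, col2, col3)
    SELECT t.col1, t.col2, t.col3
    FROM @UserDefineTypeTable AS t;
END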
So I have this query:
select id, col1, len(col1)
from tableA
From there, I wanted to grab all data in col1 that has exactly 5 characters and starts with 15:
select id, col1, len(col1)
from tableA
where col1 like '15___' -- underscore 3 times
Now, col1 is an nvarchar(192), and there is data that starts with 15 and is of length 5, but the second query always returns no rows.
Why is that?
Could it be that the field is padded out with trailing spaces, such as "15123 "? You could also try another approach:
select id, col1, len(col1)
from tableA
where col1 like '15%' AND Len(col1)=5
EDIT - FOR FUTURE REFERENCE:
For the sake of comprehensiveness: char and nchar use the full field size, so a char(10) would be 15________ ("15" + 8 padding characters) long, because the type implicitly pads to its declared size, whereas a varchar resizes based on what it is supplied, so 15 is simply 15 (see the verification sketch after the options below).
To get around this, you could:
A) Do an LTRIM/RTRIM to cut off all extra spaces
select id, col1, len(col1)
from tableA
where rtrim(ltrim(col1)) like '15___'
B) Do a LEFT() to only grab the left 5 characters
select id, col1, len(col1)
from tableA
where left(col1,5) like '15___'
C) Cast as a varchar, a rather sloppy approach
select id, col1, len(col1)
from tableA
where CAST(col1 AS Varchar(192)) like '15___'
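To see the padding behaviour from the edit above for yourself, here is a minimal sketch (the char(10)/varchar(10) sizes are just for illustration):

declare @c char(10) = '15';
declare @v varchar(10) = '15';

select len(@c)        as len_char,       -- 2  (len ignores trailing spaces)
       datalength(@c) as bytes_char,     -- 10 (char pads to its declared size)
       len(@v)        as len_varchar,    -- 2
       datalength(@v) as bytes_varchar;  -- 2  (varchar stores only what you supply)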
Does this query return anything?
select id, col1, len(col1)
from tableA
where len(col1) = 5 and
left(col1, 2) = '15';
If not, then there are no values that match that pattern, and my best guess would be spaces, in which case this might work:
select id, col1, len(col1)
from tableA
where ltrim(rtrim(col1)) like '15___';
I ran the following query in MS Access 2007 and got the expected results:
SELECT Col1
FROM tblA
GROUP BY Col1
HAVING ((Count(Col1))>1);
But after adding an additional column from the same table to the grouping, as below, it gives 0 records:
SELECT Col1, Col2
FROM tblA
GROUP BY Col1, Col2
HAVING ((Count(Col1))>1);
Col1 Col2
19570304 180243268
19570304 180243269
19570304 180243270
26984406 422233864
26984951 796883002
26985060 594201758
19700070 150814697
19700070 430871349
19700070 670755019
19700070 883583086
19700070 963146318
19990910 715835415
19990910 715835416
19990910 799844489
20123527 957714629
20123527 957714630
22000508 376790722
26981961 637378887
What could be the issue here?
Thanks.
Try this way:
SELECT t.Col1, t.Col2
FROM tblA t
inner join (
SELECT Col1
FROM tblA
GROUP BY Col1
HAVING ((Count(Col1))>1)
) tbl on tbl.col1=t.col1
I believe there are no duplicate (Col1, Col2) pairs: grouping by both columns makes every group a single row, so Count(Col1) is 1 for each group and the HAVING clause filters them all out.
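A quick way to confirm that is to compare the total row count with the number of distinct pairs; a sketch (run as two separate queries in Access):

SELECT Count(*) AS total_rows
FROM tblA;

SELECT Count(*) AS distinct_pairs
FROM (SELECT DISTINCT Col1, Col2 FROM tblA) AS d;

If the two counts are equal, every (Col1, Col2) pair occurs exactly once, and the grouped query correctly returns nothing.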
I need to copy data from an original table and add a custom column specified in the query.
Original table structure: col1, col2, col3
Insert table structure: x, col1, col2, col3
INSERT INTO newtable
SELECT *
FROM original
WHERE cond
and I'm getting this error
Column count doesn't match value count at row 1
How can I insert the x value in this single query? I thought something like this would work:
INSERT INTO newtable
SELECT 'x' = NULL, *
FROM original
WHERE cond
Any ideas? Is it possible to use *? That table has so many columns, and x has to be the first value.
I know this is all bad, but I have to edit an unbelievably ugly db with even worse PHP code.
The second statement is almost correct, but instead of 'x' = null, use null x (I'm assuming you want to store a null value in a column named x):
INSERT INTO newtable
SELECT null x, o.* FROM original o WHERE cond
Select Null as X, *
into newtable
from original
where ...
INSERT INTO newtable
SELECT null as x, col1, col2, col3 FROM original WHERE cond
I have a database table that has a structure like the one shown below:
CREATE TABLE dated_records (
  recdate DATE NOT NULL,
  col1 DOUBLE PRECISION NOT NULL,
  col2 DOUBLE PRECISION NOT NULL,
  col3 DOUBLE PRECISION NOT NULL,
  col4 DOUBLE PRECISION NOT NULL,
  col5 DOUBLE PRECISION NOT NULL,
  col6 DOUBLE PRECISION NOT NULL,
  col7 DOUBLE PRECISION NOT NULL,
  col8 DOUBLE PRECISION NOT NULL
);
I want to write an SQL statement that will allow me to return a record containing the changes between two supplied dates, for specified columns - e.g. col1, col2 and col3
For example, I might want to see how much the values in col1, col2, and col3 have changed during the interval between two dates. A dumb way of doing this would be to select the rows (separately) for each date and then difference the fields outside the db server:
SQL1 = "SELECT col1, col2 col3 FROM dated_records WHERE recdate='2001-01-01'";
SQL1 = "SELECT col1, col2 col3 FROM dated_records WHERE recdate='2001-02-01'";
However, I'm sure there is a smarter way of performing the differencing using pure SQL. I am guessing that it will involve a self join (and possibly a nested subquery), but I may be overcomplicating things, so I decided it would be better to ask the SQL experts on here how they would solve this problem most efficiently.
Ideally the SQL should be DB-agnostic, but if it needs to be tied to a particular db, then it would have to be PostgreSQL.
Just select the two rows, join them into one, and subtract the values:
select d1.recdate, d2.recdate,
(d2.col1 - d1.col1) as delta_col1,
(d2.col2 - d1.col2) as delta_col2,
...
from (select *
from dated_records
where recdate = <date1>
) d1 cross join
(select *
from dated_records
where recdate = <date2>
) d2
I think that if what you want is to return the rows where the two select queries don't intersect, you can use the EXCEPT operator:
The EXCEPT operator returns the rows that are in the first result set
but not in the second.
So your two queries become one single query, with the EXCEPT operator joining them:
SELECT col1, col2, col3 FROM dated_records WHERE recdate='2001-01-01'
EXCEPT
SELECT col1, col2, col3 FROM dated_records WHERE recdate='2001-02-01'
SELECT COALESCE(
         a.col1 -
         (SELECT b.col1
          FROM dated_records b
          WHERE b.id = a.id + 1),
         a.col1)
FROM dated_records a
WHERE recdate='2001-01-01';
(Note: this assumes the table also has a sequential id column.)
You could use window functions plus DISTINCT:
SELECT DISTINCT
first_value(recdate) OVER () AS date1
,last_value(recdate) OVER () AS date2
,last_value(col1) OVER () - first_value(col1) OVER () AS delta1
,last_value(col2) OVER () - first_value(col2) OVER () AS delta2
...
FROM dated_records
WHERE recdate IN ('2001-01-01', '2001-01-03')
This works for any two days. It uses a single index or table scan, so it should be fast.
I did not order the window, but all calculations use the same window, so the values are consistent.
This solution can easily be generalized for calculations between n rows. You may want to use nth_value() from the Postgres arsenal of window functions in this case.
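For instance, a sketch for three dates, with an explicitly ordered window frame so that nth_value() is well defined (the dates are just illustrations):

SELECT DISTINCT
       nth_value(col1, 2) OVER w - first_value(col1) OVER w AS delta_1_2
      ,nth_value(col1, 3) OVER w - first_value(col1) OVER w AS delta_1_3
FROM   dated_records
WHERE  recdate IN ('2001-01-01', '2001-01-02', '2001-01-03')
WINDOW w AS (ORDER BY recdate
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING);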
This seemed like a quicker way to write it if you are looking for a simple delta.
SELECT first(col1) - last(col1) AS delta_col1
, first(col2) - last(col2) AS delta_col2
FROM dated_records WHERE recdate IN ('2001-02-01', '2001-01-01')
You may not know whether the first row or the second row comes first, but you can always wrap the answer in abs(): abs(first(col1) - last(col1)). (Note that first() and last() are not built-in PostgreSQL aggregates, so they would have to be defined as custom aggregates first.)