I am trying to get the max value from a table, but the query is not giving me the correct (maximum) value.
I used this query:
SELECT MAX(column_name1) FROM table_name WHERE column_name2 = 'some_value'
The data type of column_name1 is VARCHAR. Is this the cause of the unexpected result?
I think this is due to the data type. Please change the data type to INT or FLOAT and try again.
Hmmm. If column_name1 and column_name2 are the same column, the value would always be 'some_value'. If you want the max(), remove the where clause:
SELECT MAX(column_name)
FROM table_name;
The data type has nothing to do with the issue.
If, say, you want the maximum salary for a given name, then you would use:
SELECT MAX(salary)
FROM table_name
WHERE name = 'John';
EDIT:
If you store numbers and dates as character strings instead of using native SQL types, then you are bound to have problems. MySQL does have easy conversion from strings to numbers, so you can try this:
SELECT MAX(salary + 0)
FROM table_name
WHERE name = 'John';
But the real solution is to fix the data types.
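A minimal sketch of that fix in MySQL (a sketch only; it assumes every existing value in the column parses as an integer):
-- Permanently convert the string column to a numeric type
ALTER TABLE table_name MODIFY salary INT;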
MAX() on a VARCHAR column compares values as text, so lexicographically '9' is greater than '10'; for a numeric maximum, the value needs a numeric type.
I tried casting that varchar to INT and it's working now. I changed the query like this:
SELECT MAX(CAST(column_name1 AS INT)) FROM table_name WHERE column_name2 = 'some_value'
I have a phone number column in my table whose values should contain only digits and no special characters. For one of the rows, a value came in as ":1212121212".
I need to filter out this record, and any records coming in with special characters, in Teradata. Can anyone help with this?
I have tried the solution below, but it is not working:
where (REGEXP_SUBSTR(column_name, '[0-9]+')<>1 or column_name is null )
In MS SQL Server databases, you can use TRY_CAST to find the entries containing non-numeric characters:
SELECT column_name
FROM yourtable
WHERE TRY_CAST(column_name AS INT) IS NULL;
In Teradata DB's, you can use TO_NUMBER:
SELECT column_name
FROM yourtable
WHERE TO_NUMBER(column_name) IS NULL;
If you want to stay close to your attempt, you can use LIKE to find non-numeric entries (note that the [^0-9] bracket pattern in LIKE is a SQL Server extension):
SELECT column_name
FROM yourtable
WHERE column_name LIKE '%[^0-9]%';
Note this could get slow when your table has very many rows.
Thanks Jonas. Since I need only numeric values and the length should be 10, I tried the condition below and it worked. It filters out all records with additional special characters:
(regexp_similar(Column,'[0-9]{10}')=1)
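For reference, here is that condition in a complete query (a sketch; the table name phone_numbers is an assumption):
SELECT column_name
FROM phone_numbers
-- REGEXP_SIMILAR returns 1 only when the whole value matches: exactly 10 digits
WHERE REGEXP_SIMILAR(column_name, '[0-9]{10}') = 1;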
As a little bit of background: I want to fill a column with jsonb values built from other columns of the same table. Initially, I used this query:
UPDATE myTable
SET column_name =
row_to_json(rowset)
FROM (SELECT column1, column2 FROM myTable) rowset
However, this query seems to run for way too long (a few hours before I stopped it) on a dataset with 9 million records. So I went looking for a solution without the second FROM clause and found the jsonb_insert function. To test it, I first ran this sample query:
SELECT jsonb_insert('{}','{column1}','500000')
This gives {"column1": 500000} as output. Perfect, so I tried to fill the value using the actual column:
SELECT jsonb_insert('{}','{column1}',column1) FROM myTable WHERE id = <test_id>
This gives a syntax error and a suggestion to add argument types, which leads me to the following:
SELECT jsonb_insert('{}','{column1}','column1')
FROM myTable WHERE id = <test_id>
SELECT jsonb_insert('{}'::jsonb,'{column1}'::jsonb,column1::numeric(8,0))
FROM myTable WHERE id = <test_id>
Both of these queries give an invalid input syntax error: Token 'column1' is invalid.
I really can not seem to find the correct syntax for these queries using documentation. Does anyone know what the correct syntax would be?
The jsonb_insert function needs a jsonb value for the new_value parameter:
jsonb_insert(target jsonb, path text[], new_value jsonb [, insert_after boolean])
If we want a JSON number, we can cast the column to text and then to jsonb.
If we want a JSON string, we can use the concat function to wrap the value in double quotes before the cast.
CREATE TABLE myTable (column1 varchar(50),column2 int);
INSERT INTO myTable VALUES('column1',50000);
SELECT jsonb_insert('{}','{column1}',concat('"',column1,'"')::jsonb) as JsonStringType,
jsonb_insert('{}','{column2}',coalesce(column2::TEXT,'null')::jsonb) as JsonNumberType
FROM myTable
Note
If the column value might be NULL, we can pass the string 'null' as the fallback to coalesce, as in coalesce(column2::TEXT,'null').
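Applying this back to the original UPDATE might look like the sketch below (it assumes the target jsonb column is the column_name from the question, and that column1 contains no embedded double quotes):
UPDATE myTable
SET column_name = jsonb_insert(
        -- insert column1 as a JSON string, then column2 as a JSON number
        jsonb_insert('{}', '{column1}', concat('"', column1, '"')::jsonb),
        '{column2}', coalesce(column2::TEXT, 'null')::jsonb);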
I've got a field in a table that has a DataType of varchar(10). This field contains numeric values that are formatted as a varchar, for the sole purpose of being used to join two tables together. Some sample data would be:
AcctNum AcctNumChar
2223333 2223333
3324444 3324444
For some records, the table sometimes thinks this field (AcctNumChar) is numeric and the join doesn't work properly. I then have to use an Update statement to re-enter the value as a varchar.
Is there any way to determine whether or not the field has a varchar or numeric value in it, using a query? I'm trying to narrow down which records are faulty without having to wait for one of the users to tell me that their query isn't returning any hits.
You can use isnumeric() for a generic comparison, for instance:
select (case when isnumeric(acctnum) = 1 then cast(acctnum as decimal(10, 0))
end)
In your case, though, you only seem to want integers:
(case when acctnum not like '%[^0-9]%' then cast(acctnum as decimal(10, 0))
end)
However, I would strongly suggest that you update the table to change the data type to a number, which appears to be the correct type for the value. You can also add a computed column as:
alter table t add AcctNum_Number as
(case when acctnum not like '%[^0-9]%' then cast(acctnum as decimal(10, 0))
end)
Then you can use the computed column rather than the character column.
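Hypothetical usage for the join (other_table and its AcctNum column are assumptions, not from the question):
SELECT t.*
FROM t
-- join on the computed numeric column instead of the varchar field
JOIN other_table o
    ON o.AcctNum = t.AcctNum_Number;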
There are several ways to check whether a varchar column contains a numeric value, but I recommend the TRY_CONVERT function.
It returns NULL if the value cannot be converted to a number. For example, to get all records that have numeric values, you can do this:
SELECT *
FROM [table]
WHERE TRY_CONVERT(INT, [value]) IS NOT NULL
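Since the goal here is to find the faulty records, the same test can be inverted (a sketch using the placeholder names above):
SELECT *
FROM [table]
-- keep genuinely NULL values out of the faulty list
WHERE [value] IS NOT NULL
  AND TRY_CONVERT(INT, [value]) IS NULL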
You can use the CAST and CONVERT (Transact-SQL) functions here to solve this.
Reference: https://msdn.microsoft.com/en-IN/library/ms187928.aspx
IsNumeric worked; TRY_CONVERT didn't (SQL Server wouldn't recognize it as a built-in function, likely because TRY_CONVERT requires SQL Server 2012 or later). Anyway, for the record, I ran the following query and got all of my suspect records:
SELECT *
FROM ACCT_LIST
where IsNumeric([ACCT_NUM_CHAR]) = 0
Use the PATINDEX function:
DECLARE @s VARCHAR(20) = '123123'
SELECT PATINDEX('%[^0-9]%', @s)
If the @s variable contains anything outside the range 0-9, this function returns the index of the first occurrence of a non-digit character. If all characters are digits, it returns 0.
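Applied to the ACCT_LIST table from above, the same check becomes (a sketch):
SELECT *
FROM ACCT_LIST
-- a result > 0 means at least one non-digit character is present
WHERE PATINDEX('%[^0-9]%', ACCT_NUM_CHAR) > 0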
I am trying to concatenate multiple columns in a query in SQL Server 11.00.3393.
I tried the new function CONCAT() but it's not working when I use more than two columns.
So I wonder if that's the best way to solve the problem:
SELECT CONCAT(CONCAT(CONCAT(COLUMN1,COLUMN2),COLUMN3),COLUMN4) FROM myTable
I can't use COLUMN1 + COLUMN2 because of NULL values.
EDIT
If I try SELECT CONCAT('1','2','3') AS RESULT I get an error
The CONCAT function requires 2 argument(s)
Through discourse it's clear that the problem lies in using VS2010 to write the query, as it uses the canonical CONCAT() function which is limited to 2 parameters. There's probably a way to change that, but I'm not aware of it.
An alternative:
SELECT '1'+'2'+'3'
This approach requires non-string values to be cast/converted to strings, as well as NULL handling via ISNULL() or COALESCE():
SELECT ISNULL(CAST(Col1 AS VARCHAR(50)),'')
+ COALESCE(CONVERT(VARCHAR(50),Col2),'')
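Extended to the asker's four columns, that might look like the sketch below (column names taken from the question's example; VARCHAR(50) is an assumed width):
-- each column is cast and NULL-guarded so NULLs don't blank the result
SELECT ISNULL(CAST(COLUMN1 AS VARCHAR(50)), '')
     + ISNULL(CAST(COLUMN2 AS VARCHAR(50)), '')
     + ISNULL(CAST(COLUMN3 AS VARCHAR(50)), '')
     + ISNULL(CAST(COLUMN4 AS VARCHAR(50)), '')
FROM myTable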
SELECT LOWER(LAST_NAME) || UPPER(LAST_NAME) || INITCAP(LAST_NAME) || HIRE_DATE AS up_low_init_hdate
FROM EMPLOYEES
WHERE EXTRACT(YEAR FROM HIRE_DATE) = 1995
Try using the query below:
SELECT
(RTRIM(LTRIM(col_1))) + (RTRIM(LTRIM(col_2))) AS Col_newname,
col_1,
col_2
FROM
s_cols
WHERE
col_any_condition = ''
;
Using concatenation in Oracle SQL is very easy and interesting, but I don't know much about MS SQL.
Here we go for Oracle:
Syntax:
SQL> select First_name||Last_Name as Employee
from employees;
Result: EMPLOYEE
EllenAbel
SundarAnde
MozheAtkinson
Here the AS keyword is used for the alias.
We can concatenate with NULL values,
e.g.: column1||NULL.
If any of your columns contains a NULL value, the result will show only the values of the columns that are not NULL.
You can also use literal character string in concatenation.
e.g.
select column1||' is a '||column2
from tableName;
Result: column1 is a column2.
A literal in between should be enclosed in single quotation marks; you can omit the quotes for numbers.
NOTE: This applies only to Oracle SQL.
For anyone dealing with Snowflake:
Try using CONCAT with multiple columns like so:
SELECT
CONCAT(col1, col2, col3) AS all_string_columns_together
, CONCAT(CAST(col4 AS VARCHAR(50)), col1) AS string_and_int_column
FROM my_table
If the fields are nullable, then you'll have to handle those NULLs. Remember that NULL is contagious with the + operator ('foo' + NULL is simply NULL), whereas CONCAT() treats NULL as an empty string:
SELECT CONCAT(ISNULL(column1, ''),ISNULL(column2,'')) etc...
Basically test each field for nullness, and replace with an empty string if so.
I would like to create a SQL query (or plpgsql) that will md5() every row regardless of the column types. However, in the query below, if one column is NULL then the whole hash is NULL:
UPDATE thetable
SET hash = md5(accountid || accounttype || createdby || editedby);
I am later using the hash to check uniqueness, so a NULL hash does not work for this use case.
The problem is the way concatenation handles NULLs. For example:
thedatabase=# SELECT accountid || accounttype || createdby || editedby
FROM thetable LIMIT 5;
1Type113225
<NULL>
2Type11751222
3Type10651010
4Type10651
I could use coalesce or CASE statements if I knew the type; however, I have many tables, and I will not know the type of every column ahead of time.
There is a much more elegant solution for this.
In Postgres, using the table name in a SELECT list is permitted, and it has the table's composite ROW type. If you cast this to TEXT, it gives you all columns concatenated together in a single record-literal string.
Having this, you can get md5 of all columns as follows:
SELECT md5(mytable::TEXT)
FROM mytable
If you want to use only some columns, use the ROW constructor and cast it to TEXT:
SELECT md5(ROW(col1, col2, col3)::TEXT)
FROM mytable
Another nice property about this solution is that md5 will be different for NULL vs. empty string.
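A quick way to see that property (a sketch):
-- ROW(NULL)::TEXT yields '()' while ROW('')::TEXT yields '("")',
-- so the two hashes differ
SELECT md5(ROW(NULL)::TEXT) AS hash_null,
       md5(ROW('')::TEXT) AS hash_empty;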
You can also use something similar to mvp's solution, since the ROW() function is not supported by Amazon Redshift:
Invalid operation: ROW expression, implicit or explicit, is not supported in target list;
My proposition is to use the NVL2 and CAST functions to cast columns of different types to CHAR, since that type is compatible with all Redshift data types according to the documentation. Below is an example of how to achieve a NULL-proof MD5 in Redshift.
SELECT md5(NVL2(col1, col1::char, '') ||
           NVL2(col2, col2::char, '') ||
           NVL2(col3, col3::char, ''))
FROM mytable
This might work without casting the second NVL2 argument to char, but it would definitely fail if you tried to get the md5 of a date column containing a NULL value.
I hope this is helpful for someone.
Have you tried using CONCAT()? I just tried it in my PG 9.1 install:
SELECT CONCAT('aaaa',1111,'bbbb'); => aaaa1111bbbb
SELECT CONCAT('aaaa',null,'bbbb'); => aaaabbbb
Therefore, you can try:
SELECT MD5(CONCAT(column1, column2, column3, column_n)) => md5_hash string here
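Applied to the original UPDATE from the question, that would be something like this sketch (note that, unlike the ROW()::TEXT approach, CONCAT cannot distinguish NULL from an empty string):
UPDATE thetable
-- CONCAT treats NULL as '', so a NULL column no longer nulls out the hash
SET hash = md5(CONCAT(accountid, accounttype, createdby, editedby));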
select MD5(cast(p as text)) from fiscal_cfop as p