Convert a column from base 36 to base 10 using Snowflake [duplicate] - sql

This question already has an answer here:
CONV() function in snowflake
(1 answer)
Closed 2 years ago.
I'm trying to convert two or three columns in a huge table from base 36 to base 10. I know how to do it in Python, but I'm looking to do it in SQL (Snowflake). Is there a better way?

I wrote a JavaScript UDF to convert from any base to another base. Just call CONV(x, 36, 10) to go from base 36 to base 10.
CONV() function in snowflake
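
A minimal sketch of such a UDF, assuming a JavaScript body built on parseInt/toString (the name and signature mirror the linked answer, but this body is illustrative, not the original code):

CREATE OR REPLACE FUNCTION CONV(x VARCHAR, from_base FLOAT, to_base FLOAT)
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
AS
$$
  // Snowflake exposes JavaScript UDF arguments in uppercase.
  // parseInt reads X in the source base; toString renders the result in the target base.
  return parseInt(X, FROM_BASE).toString(TO_BASE);
$$;

SELECT CONV('ZZ', 36, 10);  -- returns '1295'

One caveat: JavaScript numbers lose integer precision above 2^53, so very long base-36 strings would need a big-integer implementation instead.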

SQL Server: dynamic columns based on row values (Date) [duplicate]

This question already has answers here:
T-SQL dynamic pivot
(5 answers)
Closed 4 years ago.
I have already spent an hour on this problem.
I want to dynamically generate columns based on the values in the AttendanceDate column.
I have found some similar questions, but unfortunately the examples were too complicated for me to follow.
Data:
Expected output:
This can be done with the STUFF()-based dynamic pivot mentioned in the comments, or with a WHILE EXISTS implementation:
http://rextester.com/FPU47008
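
A hedged sketch of the STUFF() route, since the original data isn't reproduced here; the table and column names (dbo.Attendance, StudentId, AttendanceDate, Status) are assumptions:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Assemble the distinct dates into a quoted, comma-separated column list
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(CONVERT(varchar(10), AttendanceDate, 120))
    FROM dbo.Attendance
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

-- Build and run a PIVOT that turns each date into its own column
SET @sql = N'SELECT StudentId, ' + @cols + '
FROM (SELECT StudentId,
             CONVERT(varchar(10), AttendanceDate, 120) AS AttendanceDate,
             Status
      FROM dbo.Attendance) src
PIVOT (MAX(Status) FOR AttendanceDate IN (' + @cols + ')) p;';

EXEC sp_executesql @sql;

The STUFF() call only strips the leading comma from the generated list; the real work is done by FOR XML PATH, which concatenates the rows.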

Identify missing numbers in an Oracle Table [duplicate]

This question already has answers here:
How to check any missing number from a series of numbers?
(11 answers)
SQL - Find missing int values in mostly ordered sequential series
(6 answers)
Closed 5 years ago.
I have a number sequence field in a table that has some gaps/skipped numbers in it. I need to identify the skipped numbers. The only solution I can think of is to use iterative/cursor-based loops, and I suspect that will be fairly slow. Is there a faster method?
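A set-based analytic query avoids the cursor loop entirely. A minimal sketch, assuming the sequence column is called seq in a table called t; each output row describes one gap:

SELECT seq + 1      AS gap_starts_at,
       next_seq - 1 AS gap_ends_at
FROM (
    -- LEAD pairs each value with its successor in sequence order
    SELECT seq,
           LEAD(seq) OVER (ORDER BY seq) AS next_seq
    FROM t
)
WHERE next_seq - seq > 1;  -- a jump bigger than 1 brackets a gap

Oracle reads and sorts the table once, which is typically far faster than row-by-row cursor logic.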

Case classes in Scala do not accept more than 22 parameters [duplicate]

This question already has answers here:
How to get around the Scala case class limit of 22 fields?
(5 answers)
Closed 8 years ago.
This is a challenge specific to Spark SQL, and I'm unable to apply the two highlighted answers.
I'm writing complex data-processing logic in Spark SQL.
Here is the process I follow:
1. Define a case class for a table, with all of its attributes.
2. Register that as a table.
3. Use SQLContext to query it.
I'm encountering an issue because Scala allows only 22 parameters, whereas my table has 50 columns. The only approach I can think of is to break the dataset up so that each piece has at most 22 parameters and then combine them at the end. That does not look like a clean approach. Is there a better way to handle this?
Switch to Scala 2.11 and the case class field limit is gone. Release notes. Issue.
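
If upgrading is not an option, the usual Spark 1.x workaround is to drop the case class and build the schema programmatically, so the 22-field limit never applies. A hedged sketch, assuming a spark-shell session (sc and sqlContext in scope), a comma-delimited input file, and illustrative column names:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Describe all 50 columns without a case class
val schema = StructType((1 to 50).map(i => StructField(s"col$i", StringType, nullable = true)))

// Turn each input line into a Row carrying 50 values
val rowRDD = sc.textFile("data.txt").map(_.split(",")).map(fields => Row.fromSeq(fields))

// Build the DataFrame, register it, and query it exactly as before
val df = sqlContext.createDataFrame(rowRDD, schema)
df.registerTempTable("myTable")
sqlContext.sql("SELECT col1, col2 FROM myTable").show()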

SQL Server parsing function? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Split Function equivalent in tsql?
I have a column that contains data in the form:
CustomerZip::12345||AccountId::1111111||s_Is_Advertiser::True||ManagedBy::3000||CustomerID::5555555||
Does SQL have any sort of built-in function to easily parse out this data, or will I have to build my own complicated mess of PATINDEX/SUBSTRING calls to pull each value into its own field?
I don't believe there is anything built in. Look at the comments posted against your original question.
If this is something you're going to need on a regular basis, consider writing a view.
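
For a single key, a CHARINDEX/SUBSTRING expression is enough. A hedged sketch extracting AccountId, assuming the column is called raw in a table called dbo.Customers:

SELECT SUBSTRING(
           raw,
           -- start just past the 'AccountId::' marker
           CHARINDEX('AccountId::', raw) + LEN('AccountId::'),
           -- length runs to the next '||'; appending '||' guards a missing trailing delimiter
           CHARINDEX('||', raw + '||', CHARINDEX('AccountId::', raw))
               - CHARINDEX('AccountId::', raw) - LEN('AccountId::')
       ) AS AccountId
FROM dbo.Customers
WHERE CHARINDEX('AccountId::', raw) > 0;  -- skip rows without the key

Wrapping one such expression per key in a view keeps the mess in one place, per the suggestion above.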

Situation with SQL query [duplicate]

This question already has answers here:
How to search for a comma separated value
(3 answers)
Closed 8 years ago.
I have data in a table in the below format
id brand_ids
--------------
2 77,2
3 77
6 3,77,5
8 2,45,77
--------------
(Note: the brand ids are stored as comma-separated values; this is common for values in this field.)
Now I am trying to write a query that returns only the rows which have '77' in that list.
I know I can use LIKE in three forms, LIKE '77,%' OR LIKE '%,77,%' OR LIKE '%,77', combined with OR conditions, to achieve it. But I suspect this will increase the load on the database.
Is there a more straightforward method to achieve this? If so, please suggest one.
Thanks,
Balan
A strict answer to your question would be: no. Your suggestion of using LIKE is your best option with this data model. However, as mentioned, it is strongly recommended that you move to a more normalized model.
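
A hedged sketch of the LIKE approach, padding both sides with commas so one pattern covers every position (including a row whose whole value is just 77); the table name products is assumed, and CONCAT assumes MySQL:

SELECT id, brand_ids
FROM products
-- padding both ends makes every id, including the first and last, comma-delimited
WHERE CONCAT(',', brand_ids, ',') LIKE '%,77,%';

If this is MySQL, FIND_IN_SET('77', brand_ids) expresses the same test natively. Either way the predicate cannot use an index, which is one more argument for splitting brand_ids into a junction table.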