SQL Server parsing function? [duplicate]

This question already has answers here:
Split Function equivalent in tsql?
Closed 10 years ago.
I have a column that contains data in the form:
CustomerZip::12345||AccountId::1111111||s_Is_Advertiser::True||ManagedBy::3000||CustomerID::5555555||
Does SQL have any sort of built-in function to easily parse out this data, or will I have to build my own complicated mess of PATINDEX/SUBSTRING calls to pull each value into its own field?

I don't believe there is anything built in. Look at the comments posted against your original question.
If this is something you're going to need on a regular basis, consider writing a view.
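If you only need a handful of known keys, a view built from CHARINDEX and SUBSTRING can do the parsing. Here is a minimal sketch; the table and column names (dbo.CustomerData, RawValue) are assumptions, every key is assumed present in every row, and each value is found by locating 'Key::' and reading up to the next '||':
CREATE VIEW dbo.ParsedCustomerData AS
SELECT
    -- value starts right after 'CustomerZip::' (13 characters)
    SUBSTRING(RawValue,
              CHARINDEX('CustomerZip::', RawValue) + 13,
              CHARINDEX('||', RawValue + '||',
                        CHARINDEX('CustomerZip::', RawValue) + 13)
              - (CHARINDEX('CustomerZip::', RawValue) + 13)) AS CustomerZip,
    -- same pattern for 'AccountId::' (11 characters)
    SUBSTRING(RawValue,
              CHARINDEX('AccountId::', RawValue) + 11,
              CHARINDEX('||', RawValue + '||',
                        CHARINDEX('AccountId::', RawValue) + 11)
              - (CHARINDEX('AccountId::', RawValue) + 11)) AS AccountId
FROM dbo.CustomerData;
Appending '||' before the inner CHARINDEX guards against a final value that isn't followed by the delimiter. The remaining keys repeat the same pattern, which is tedious but keeps all the parsing in one place.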

Related

How to generate SQL WHERE IN or = ANY() condition in Firestore Database [duplicate]

This question already has answers here:
Google Firestore - How to get several documents by multiple ids in one round-trip?
(17 answers)
Closed 5 years ago.
I have tried to figure out how to return a query based on whether the values are in an array I have client-side. So far I have found nothing regarding the issue. Is this possible?
Firestore now supports "IN" queries:
Announcement
Documentation
Example:
let citiesRef = db.collection("cities")
citiesRef.whereField("country", in: ["USA", "Japan"])
Before November 2019
In Firestore, there is no "where in" like you might be used to with SQL.
If you know the values you want to query, perform a separate query for each one and call getDocument() on each of the DocumentReference objects. You would typically do this in a loop and collect the results yourself.

I'm currently looking for a dynamic way to convert rows to columns in a specific way in SQL server [duplicate]

This question already has answers here:
T-SQL dynamic pivot
(5 answers)
Closed 6 years ago.
I'm currently looking for a dynamic way to convert rows to columns in a specific way in SQL Server (I was able to do it in Excel VBA, but Excel's limitations pushed me to SQL).
Abstract: I am running a daily analysis over 10 years covering 1315 stocks; for each stock we have daily returns for the period from 29/12/2009 to 30/12/2016.
As you can see, every 2614 rows there's a new stock, with the three following rows showing text.
Table on SQL
And I would like to obtain this result, so I'm looking for a good insight to help me work through this!
Desired solution draft
I am doing this for a quantitative department in Luxembourg to implement a dynamic model allocation of smart betas. (First time with SQL)
Thank you for your help! Feel free to ask if you need any other detail.
R.H.
This is also known as a dynamic pivot; for SQL Server, see https://stackoverflow.com/a/10404455/1327961
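The idea is to build the column list at runtime, then execute a generated PIVOT statement. A minimal sketch, assuming a hypothetical table dbo.StockReturns(StockName, ReturnDate, DailyReturn):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- build a comma-separated, quoted list of stock names
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(StockName)
    FROM dbo.StockReturns
    GROUP BY StockName
    ORDER BY StockName
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

-- generate and run the pivot: one row per date, one column per stock
SET @sql = N'SELECT ReturnDate, ' + @cols + N'
FROM (SELECT ReturnDate, StockName, DailyReturn FROM dbo.StockReturns) src
PIVOT (MAX(DailyReturn) FOR StockName IN (' + @cols + N')) AS p
ORDER BY ReturnDate;';

EXEC sp_executesql @sql;
A SELECT list in SQL Server is capped at 4,096 columns, so 1315 stock columns plus a date column fits, though the result will be very wide.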

Case classes in Scala do not accept more than 22 parameters [duplicate]

This question already has answers here:
How to get around the Scala case class limit of 22 fields?
(5 answers)
Closed 8 years ago.
This is a challenge specific to Spark SQL, and I'm unable to apply the two highlighted answers.
I'm writing complex data-processing logic in Spark SQL.
Here is the process I follow:
Define a case class for a table with all attributes.
Register it as a table.
Use SQLContext to query it.
I'm encountering an issue because Scala allows only 22 parameters, whereas my table has 50 columns. The only approach I could think of is to break the dataset up so that each piece has at most 22 parameters and combine them at the end, which does not look clean. Is there a better approach to this issue?
Switch to Scala 2.11 and the case class field limit is gone; see the release notes and the issue.

SQL: Splitting first, last and middle name into their own columns? [duplicate]

This question already has answers here:
Splitting the full name and writing it to another table in SQL Server 2008
(2 answers)
Closed 9 years ago.
How would I go about splitting one column that contains the first, last, and middle name into their own separate columns in a SQL Server 2008 query?
The column is called NAME
NAME(char(25),null)
Mctasrren ,David Max
Cressler ,Patti L
Basil ,Vessen Eddie
Chapplestait ,Victoy
This is what I've used so far; my main issue is the middle name. Also, does anyone have a better way to shorten the first-name code?
--last name code
left([NAME],charindex(' ,',[NAME]))
--first name code
substring([NAME],charindex(',',[NAME])+1,charindex(' ',substring([NAME],charindex(',',[NAME])+1,25-charindex(',',[NAME])+1)))
Do you want to split it into columns as part of a result set, create computed columns on the table, or actually update the schema to have the data split in the source?
In any case, the basic nuts and bolts can be done in either of two ways:
Use a combination of CHARINDEX, SUBSTRING, and LEFT or RIGHT to find the commas or spaces and split around them. If you are sure your data will always be 'L_NAME ,F_NAME M_NAME_OR_INITIAL', that will be pretty easy (see the sketch below). I am actually surprised I didn't find a similar question near the top of a Google search, but there is an example of something similar on SQLServerCentral.
Use a RegEx via the CLR, which can be more robust if there is any variety in the data. If you are familiar with RegEx, this should be a straightforward parse. Again, a simplified example can be found on MSDN.
Whatever you choose, you'll probably quickly run into names that don't follow that format. In that case you'll want to build more logic into a function to handle the different types of names.
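Here is a sketch of the first option for the 'LAST ,FIRST MIDDLE' shape in your sample data. It assumes every row contains the comma and that everything after the first space in the remainder is the middle name; the table name dbo.People is hypothetical:
SELECT
    -- everything before the comma, minus the trailing space
    RTRIM(LEFT(t.[NAME], CHARINDEX(',', t.[NAME]) - 1)) AS LastName,
    -- first word after the comma
    CASE WHEN CHARINDEX(' ', r.Rest) = 0 THEN r.Rest
         ELSE LEFT(r.Rest, CHARINDEX(' ', r.Rest) - 1) END AS FirstName,
    -- whatever remains after that word, if anything
    CASE WHEN CHARINDEX(' ', r.Rest) = 0 THEN NULL
         ELSE SUBSTRING(r.Rest, CHARINDEX(' ', r.Rest) + 1, 25) END AS MiddleName
FROM dbo.People AS t
CROSS APPLY (SELECT LTRIM(RTRIM(SUBSTRING(t.[NAME],
             CHARINDEX(',', t.[NAME]) + 1, 25)))) AS r(Rest);
The CROSS APPLY computes the after-the-comma remainder once, so the first- and middle-name expressions stay short; RTRIM also strips the padding that char(25) adds.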

Situation with SQL query [duplicate]

This question already has answers here:
How to search for a comma separated value
(3 answers)
Closed 8 years ago.
I have data in a table in the below format:
id brand_ids
--------------
2 77,2
3 77
6 3,77,5
8 2,45,77
--------------
(Note: the brand ids are stored as comma-separated values; this is common for values in this field.)
Now I am trying to write a query that returns only the rows which have '77' in that column.
I know I can use LIKE in three forms, LIKE '77,%' OR LIKE '%,77,%' OR LIKE '%,77', combined in one OR condition. But I suspect this will increase the query's load time.
Is there any straightforward method to achieve this? If so, please suggest one.
Thanks,
Balan
A strict answer to your question would be: no. Your suggestion of using LIKE is your best option with this data model. However, as mentioned, it is strongly recommended that you use a more normalized model.
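One refinement over the three OR'd patterns: pad the column with delimiters on both sides so a single LIKE covers the first, middle, and last positions. A sketch, assuming a hypothetical table dbo.Items (the string concatenation here is T-SQL syntax):
-- ',77,' can only match a whole id once the value is wrapped in commas
SELECT id, brand_ids
FROM dbo.Items
WHERE ',' + brand_ids + ',' LIKE '%,77,%';
This still cannot use an index, which is the real cost of the comma-separated design; a normalized junction table such as (item_id, brand_id) would turn this into an indexable equality search.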