This question already has answers here:
T-SQL dynamic pivot
(5 answers)
Closed 4 years ago.
I have already spent an hour on this problem.
I want to dynamically generate columns based on the values from the column AttendanceDate.
I have found some similar questions, but unfortunately the examples were too complicated for me to comprehend.
Data:
Expected output:
This can be done with the STUFF approach mentioned in the comments, or with a WHILE EXISTS implementation:
http://rextester.com/FPU47008
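For reference, here is a minimal sketch of the STUFF/FOR XML PATH approach. Only AttendanceDate comes from the question; the table and other column names (Attendance, EmployeeName, Status) are assumptions for illustration:

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build a bracketed column list from the distinct AttendanceDate values
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(CONVERT(varchar(10), AttendanceDate, 120))
    FROM Attendance
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 1, '');

-- Pivot each date into its own column
SET @sql = N'SELECT EmployeeName, ' + @cols + N'
FROM (SELECT EmployeeName,
             CONVERT(varchar(10), AttendanceDate, 120) AS AttendanceDate,
             Status
      FROM Attendance) AS src
PIVOT (MAX(Status) FOR AttendanceDate IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;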
This question already has answers here:
How to check any missing number from a series of numbers?
(11 answers)
SQL - Find missing int values in mostly ordered sequential series
(6 answers)
Closed 5 years ago.
I have a number sequence field in a table that has some gaps/skipped numbers in it. I need to identify the skipped numbers. The only solution I can think of is to use iterative/cursor-based loops, and I suspect that will be fairly slow. Is there a faster method?
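A set-based query avoids the cursor entirely. A minimal sketch, assuming a table seq with an int column num holding the sequence (both names are hypothetical):

-- Each number whose successor is missing marks the start of a gap
SELECT s.num + 1 AS gap_starts_at
FROM seq AS s
WHERE NOT EXISTS (SELECT 1 FROM seq AS s2 WHERE s2.num = s.num + 1)
  AND s.num < (SELECT MAX(num) FROM seq);  -- ignore the end of the sequence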
This question already has answers here:
T-SQL dynamic pivot
(5 answers)
Closed 6 years ago.
I'm currently looking for a dynamic way to convert rows to columns in a specific way in SQL Server (I was able to do it in Excel VBA, but Excel's limitations made me move to SQL).
Abstract: I am running a daily analysis over 10 years covering 1315 stocks; for each stock we have daily returns for a period going from 29/12/2009 to 30/12/2016.
As you can see, every 2614 rows there is a new stock, with the 3 following rows showing text.
Table on SQL
And I would like to obtain this result, so I am looking for a good insight to help me work through this!
Desired solution draft
I am doing this for a quantitative department in Luxembourg to implement a dynamic model allocation of smart betas. (First time with SQL)
Thank you for your help! Feel free to ask any questions if you need any other detail.
R.H.
This is also known as a dynamic pivot -- for SQL Server see https://stackoverflow.com/a/10404455/1327961
This question already has an answer here:
SQL Fuzzy Matching
(1 answer)
Closed 6 years ago.
Attempting to match a list of names in one very long column to another list that is close but often varies due to missing letters and punctuation. Is there a simple solution via a macro and/or SQL?
Using a Levenshtein function can help.
Check the function and algorithm here: Levenshtein distance in T-SQL
After you create the function, compare the distance, for example:
select ..... from ....
where dbo.Levenstein(str1, str2) > 0.9 -- means the match between str1 and str2 is above 90%
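Note that many T-SQL Levenshtein implementations return the raw edit count rather than a ratio; if yours does, a similarity in the 0-1 range can be derived by normalizing against the longer string. A hedged sketch, with NamesA and NamesB as hypothetical tables and dbo.Levenstein assumed to return the integer distance:

SELECT a.Name, b.Name
FROM NamesA AS a
JOIN NamesB AS b
  ON 1.0 - dbo.Levenstein(a.Name, b.Name)
           / NULLIF(CAST(CASE WHEN LEN(a.Name) >= LEN(b.Name)
                               THEN LEN(a.Name) ELSE LEN(b.Name) END AS float), 0)
     > 0.9  -- keep pairs that are at least 90% similar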
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Split Function equivalent in tsql?
I have a column that contains data in the form:
CustomerZip::12345||AccountId::1111111||s_Is_Advertiser::True||ManagedBy::3000||CustomerID::5555555||
Does SQL have any sort of built in function to easily parse out this data, or will I have to build my own complicated mess of patindex/substring functions to pull each value into its own field?
I don't believe there is anything built in. Look at the comments posted against your original question.
If this is something you're going to need on a regular basis, consider writing a view.
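On today's versions there is a built-in that gets most of the way there: SQL Server 2016+ ships STRING_SPLIT. A minimal sketch, splitting on a single '|' and filtering the empty pieces the '||' delimiter produces (everything other than the sample string is an assumption):

DECLARE @s varchar(500) =
    'CustomerZip::12345||AccountId::1111111||s_Is_Advertiser::True||ManagedBy::3000||CustomerID::5555555||';

-- Split on '|' (the '||' delimiter yields empty pieces), then split each piece on '::'
SELECT LEFT(value, CHARINDEX('::', value) - 1)                  AS KeyName,
       SUBSTRING(value, CHARINDEX('::', value) + 2, LEN(value)) AS KeyValue
FROM STRING_SPLIT(@s, '|')
WHERE value <> '';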
This question already has answers here:
How to search for a comma separated value
(3 answers)
Closed 8 years ago.
I have data in a table in the below format:
id brand_ids
--------------
2 77,2
3 77
6 3,77,5
8 2,45,77
--------------
(Note: the brand ids are stored as comma-separated values; this is common for values in this field.)
Now I am trying to write a query that returns only the rows which have '77' in that list.
I know I can use the LIKE operator in three forms, LIKE '77,%' OR LIKE '%,77,%' OR LIKE '%,77', combined with an OR condition, to achieve this. But I fear this will increase the load time of the SQL.
Is there any straightforward method to achieve this? If so, please suggest one.
Thanks,
Balan
A strict answer to your question would be: no. Your suggestion of using LIKE is your best option with this data model. However, as mentioned, it is highly suggested that you use a more normalized model.
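One common refinement collapses the three LIKE patterns into a single one by padding the column with delimiters on both sides. A minimal T-SQL sketch (the table name is assumed, since the question doesn't give one):

SELECT id, brand_ids
FROM products  -- hypothetical table name
WHERE ',' + brand_ids + ',' LIKE '%,77,%';

This still cannot use an index, so the advice to normalize the model stands.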