How to join SQL column elements together?

I have an SQL table with a column named expenses like this:
expenses
hotel
taxi
food
movie
I want to write a query that returns all the values of this column separated by commas. For the above example, the output should be:
hotel,taxi,food,movie
How can I do this?

You can use the GROUP_CONCAT aggregate function (available in MySQL and SQLite):
SELECT GROUP_CONCAT(expenses)
FROM my_table
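GROUP_CONCAT is not available everywhere. On PostgreSQL and SQL Server 2017+, the equivalent aggregate is STRING_AGG; a minimal sketch against the same my_table (the separator argument is required there):
SELECT STRING_AGG(expenses, ',')
FROM my_table;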

Related

How to select column names that "start with" a prefix in a PROC SQL query in SAS

I am looking for a way to select all column names that "start with" a specific string. My data contains the same column name multiple times with a digit at the end, and I want the code to always select all of those columns regardless of the trailing digit.
For example, if I have 3 kinds of apple in my column names, the dataset will contain the columns "apple_1", "apple_2" and "apple_3". So I want to select all columns that start with "apple_" in a PROC SQL statement.
Thank you
In regular SAS code you can use : as a wildcard to create a variable list. You normally cannot use variable lists in SQL code, but you can use them in dataset options.
proc sql;
  create table want as
  select *
  from mydata(keep= id apple_:);
quit;
Use the LIKE operator:
proc sql;
  select t.*
  from t
  where col like 'apple%';
quit;
If you want the _ character as well, you need to use the ESCAPE clause, because _ is a wildcard character for LIKE:
proc sql;
  select t.*
  from t
  where col like 'apple$_%' escape '$';
quit;
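Note that the two LIKE answers above filter the values stored in a column rather than the column names. If you need the list of matching column names themselves, PROC SQL exposes them through DICTIONARY.COLUMNS; a sketch, assuming the dataset MYDATA lives in the WORK library (lowcase() guards against mixed-case names):
proc sql noprint;
  /* collect the matching column names into a macro variable */
  select name into :apple_cols separated by ' '
  from dictionary.columns
  where libname = 'WORK'
    and memname = 'MYDATA'
    and lowcase(name) like 'apple$_%' escape '$';
quit;
%put &apple_cols;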

Is it possible to aggregate each BigQuery column by some expression like SUM(*)

I have a BQ table that contains 1 String column and many (>2000) Int columns. I want to write a simple query that aggregates all records by the first string column and sums the other integer columns. Something like this:
SELECT
string_col,
SUM(* EXCEPT(string_col))
FROM
project_id.dataset_id.table_id
GROUP BY
string_col
Is there any way to do something like this? The goal is to avoid declaring every Int column in the query, e.g. SUM(int1_col), SUM(int2_col), ... , SUM(int2000_col).
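SUM(* EXCEPT(...)) is not valid BigQuery syntax, but one workaround (not from the original thread; project_id, dataset_id, table_id and string_col are the question's placeholders) is to generate the SUM list from INFORMATION_SCHEMA.COLUMNS and run it with EXECUTE IMMEDIATE:
DECLARE sum_list STRING;

-- build "SUM(col) AS col" for every INT64 column
SET sum_list = (
  SELECT STRING_AGG(FORMAT('SUM(%s) AS %s', column_name, column_name), ', ')
  FROM `project_id.dataset_id.INFORMATION_SCHEMA.COLUMNS`
  WHERE table_name = 'table_id'
    AND data_type = 'INT64'
);

-- run the generated query
EXECUTE IMMEDIATE FORMAT("""
  SELECT string_col, %s
  FROM `project_id.dataset_id.table_id`
  GROUP BY string_col
""", sum_list);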

Is there any way to select columns containing a substring from a table in SQL or PySpark?

Is there any way to select columns containing a substring from the table?
I want to select columns starting with 'device_' from a database table.
You can use a regex to filter the column names before selecting:
import re

# keep only the columns whose names start with 'device_'
df.select([c for c in df.columns if re.match(r'^device_', c)])
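For the SQL side of the question, most engines that implement information_schema (MySQL, PostgreSQL, SQL Server) let you list the matching column names first and build your query from them; a sketch assuming a table named my_table (note the escape, since _ is a LIKE wildcard):
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'my_table'
  AND column_name LIKE 'device!_%' ESCAPE '!';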

DAX: How to get distinct values from a column

This is the query I'm trying:
EVALUATE
SELECTCOLUMNS('MyTable',"col1",DISTINCT(VALUES('MyTable'[Email])))
If you are trying to simply create a new, single column table with the distinct values of an existing table, you can use the formula below.
For example, starting with a Fruit table that has a Location column, simply create a new table with this formula to get a list of the distinct values:
Locations = DISTINCT(Fruit[Location])
This will work:
EVALUATE
VALUES('Table'[Column])

Joining table A and table B, both having column1 as common

I have 2 tables - TableA and TableB. Both have column1 as common column.
But in TableA data in column1 is numeric like 201 and in TableB data in column1 is in words like two hundred one.
None of the other columns is common.
How can I join these tables? Can I use to_char(to_date(column1, 'j'), 'jsp') for TableA?
In Oracle this is possible with the to_char function, but in MySQL you will need to write a function for this conversion and then call it in your query. One such example is the number_to_string function found in this post:
Converting Numbers to Words in MYSQL result! Using Query
Using this function you can write a condition like
number_to_string(TableA.numValue) = TableB.stringValue in your JOIN to obtain the desired results.
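For the Oracle route the question hints at, the classic trick is to_char(to_date(n, 'j'), 'jsp'), which spells a number out in words ("two hundred one" for 201). A sketch of the join, assuming TableB stores the words in lowercase and the values stay within the valid Julian-date range (1 to 5373484):
-- 'j' turns the number into a Julian date; 'jsp' spells that day number out
SELECT a.*, b.*
FROM TableA a
JOIN TableB b
  ON TO_CHAR(TO_DATE(a.column1, 'j'), 'jsp') = LOWER(b.column1);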
First, you need some function that converts an integer to words; then you can use a standard join clause.