How can I write a BigQuery query so the result looks like this?
Below is sample data that has repeated values in the "FeatureName" column.
https://i.stack.imgur.com/RJkQa.png
Expected Result
https://i.stack.imgur.com/lyZaU.png
I'm not sure if my question is really stupid, but I found nothing on the internet...
Is it possible to insert a specific value in a cell of a matrix?
For example, I have a dataset like the one below:
Month Prod Amount
2 X 34$
11 Y 12$
7 Z 150$
and a matrix like:
-------| Month |
Product|SUM(Amount)|
So the row groups are products and the column groups are the months of a specific year.
If I want to add an extra column, with a specific value chosen dynamically from the amounts (for example 150$), so as to have
-------| Month |columnName
Product|SUM(Amount)| 150
is that possible? Also, can the value be repeated down the column? (It would be useful if the new column repeated this specific value on every row.)
thanks a lot!! :D
You can insert a value directly in your matrix but it will be repeated for each record.
The best way to add a new column with conditional values is to do it in your dataset query, probably with a CASE statement if you are using SQL.
EDIT: If you can't adjust the query for whatever reason, you can add the new column and use the SWITCH function inside your textbox to achieve the same result.
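As a rough sketch of the dataset-query approach: the table and column names below (Sales, Month, Prod, Amount) are only assumptions taken from the example data, and the CASE condition is a placeholder for whatever rule picks the "specific value":

```sql
-- Hypothetical sketch: table name `Sales` and the CASE condition are assumptions.
SELECT
    Month,
    Prod,
    Amount,
    CASE WHEN Prod = 'Z' THEN Amount ELSE NULL END AS SpecialAmount
FROM Sales;
```

You would then bind SpecialAmount to the extra matrix column; the matrix repeats it per row group, which matches the behavior described above.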
I have a column "numbers" with array values. If I select the column in a query, the result looks like:
["40432","83248","1"]
["40432","8923","7723"]
["2340","837","20309"]
["290348","83248","20309","187"]
["98184897","98234","20309"]
["40432","83248"]
["2340"]
Now, I'd like to group the results on only the first value in the array and count them. The result should look like:
value amount
40432 3
2340 2
290348 1
98184897 1
How do I arrange this? What should the query look like?
I tried things like:
SELECT.... WHERE split(TO_JSON_STRING(numbers), ',')[ordinal(1)] as firstNumber ......
But this did not result in the desired data.
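One pattern that may get closer (a sketch only, assuming numbers is an ARRAY&lt;STRING&gt; column and the table is called t, which the question doesn't confirm) is to index the array directly instead of string-splitting its JSON form:

```sql
-- Sketch: table name `t` and the ARRAY<STRING> type of `numbers` are assumptions.
SELECT
  numbers[SAFE_OFFSET(0)] AS value,  -- first element; NULL for empty arrays
  COUNT(*) AS amount
FROM t
GROUP BY value
ORDER BY amount DESC;
```

SAFE_OFFSET avoids an error on empty arrays; plain OFFSET(0) would raise one.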
When I type this search query in splunk search head:
index=main sourcetype=mySrcType | top fieldA fieldB
Splunk automatically adds a count column to the resulting table. Now, what is this count? Is it a simple sum of each field's count?
The count shows you the number of times that field-value pair shows up in the time range and query you ran. If you want to exclude it, you can add
| fields - count
top counts the 10 most common values of each of the fields you list after the command.
You can read more about it on its documentation page
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Top
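Per that documentation page, top also accepts a showcount option, so (as a sketch, reusing the query from the question) you can suppress the column at the source instead of removing it afterwards:

```
index=main sourcetype=mySrcType | top fieldA fieldB showcount=false
```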
TestTable
inputsCOLUMN
3-300-150-150-R
3-200-100-100-A
5-500-00-500-A
output
3_open 3_spent 3_closing 3_type 5_open 5_spent 5_closing 5_type
-------- --------- ----------- -------- -------- --------- ----------- --------
300 150 150 R 500 00 500 A
200 100 100 A
Above is the input table, called TestTable. It has two columns that contain rows of string data.
There is also a desired output table whose column names are based on the input string:
each column name is the first number in the string plus another string, like CONCAT(split(inputsCOLUMN,'\\-')[0],'-','type'),
so the table above is the desired output. The query below does not work as desired because, I think, concatenating an expression into a column alias is not allowed. Please help me find a way to produce that output.
SELECT split(inputsCOLUMN,'\\-')[1] as CONCAT(split(inputsCOLUMN,'\\-')[0],'-','open'),
split(inputsCOLUMN,'\\-')[2] as CONCAT(split(inputsCOLUMN,'\\-')[0],'-','spent'),
split(inputsCOLUMN,'\\-')[3] as CONCAT(split(inputsCOLUMN,'\\-')[0],'-','closing'),
split(inputsCOLUMN,'\\-')[4] as CONCAT(split(inputsCOLUMN,'\\-')[0],'-','type')
Hive cannot have a dynamic number of columns, and it cannot have dynamic column names. It must be able to determine the entire schema (column count, types, and names) at query planning time, without looking at any data.
It's also not clear to me how exactly you're matching up input records into a single row. For example, how do you know which "3" record corresponds to which "5" record?
If you knew that, for example, there would always be a "3" record and a "5" record and you could commit to those being the only column names, and if you had a consistent way of matching up records to "flatten" this data, then it is possible, but difficult. I've done almost this exact operation before, and it involved a custom UDTF and a custom UDAF, and some code to auto-generate the actual query, which ended up being hundreds of lines long in some cases. I would re-evaluate why you want to do this in the first place and see if you can come up with another approach.
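For illustration only: if there were a hypothetical row_id column that matched records up (the original table has no such key), and "3" and "5" were guaranteed to be the only prefixes, a conditional-aggregation sketch would look like this. Note the column names are still fixed at planning time, consistent with the limitation above:

```sql
-- Sketch: `row_id` is a hypothetical matching key, not in the original table.
SELECT
  MAX(CASE WHEN split(inputsCOLUMN,'\\-')[0] = '3'
           THEN split(inputsCOLUMN,'\\-')[1] END) AS `3_open`,
  MAX(CASE WHEN split(inputsCOLUMN,'\\-')[0] = '3'
           THEN split(inputsCOLUMN,'\\-')[4] END) AS `3_type`,
  MAX(CASE WHEN split(inputsCOLUMN,'\\-')[0] = '5'
           THEN split(inputsCOLUMN,'\\-')[1] END) AS `5_open`,
  MAX(CASE WHEN split(inputsCOLUMN,'\\-')[0] = '5'
           THEN split(inputsCOLUMN,'\\-')[4] END) AS `5_type`
FROM TestTable
GROUP BY row_id;
```

The remaining `_spent`/`_closing` columns follow the same pattern with indexes [2] and [3].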
I'm trying to replicate an Access formula that populates the "Distributions" column with incrementing counts leading up to the value in the "Count" column.
For example, if I have an order number that recurs in the data sheet for three lines, I would like the output in the "Distributions" column to count 1,2,3 for each line in the sheet. If there is one line to the order, I need only an output of 1 in the column, if there are 70 lines, I need an output of 1-70 for every matching Order Number.
I already have the "Count" column sorted out, but I can't wrap my head around the necessary code to make the output increment up in the "Distributions" column. The image below details the sort of output I'm looking for with Sample Data.
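If the data sheet is an Excel worksheet, a common way to get this kind of running per-group count is an expanding COUNTIF range (a sketch only; it assumes the order numbers sit in column A starting at row 2, which the question doesn't specify):

```
=COUNTIF($A$2:A2, A2)
```

Entered in the first data row of "Distributions" and copied down, it yields 1, 2, 3, ... for each repeated order number, restarting at 1 for each new one.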