I am trying to change the data type of a column I created with SQL in Metabase. I used the following query to split the JSON column:
select *,
       substring(key from '_([^_]+)$') as Volume,
       substring(outgoing::varchar from ':"([a-z]*)') as Status
from table
cross join lateral json_object_keys(outgoing) as j(key);
Upon splitting, I realized that the volume field has the type text. I am trying to change this to integer.
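In Postgres a plain cast is usually enough here, assuming the captured group is always numeric (a minimal sketch of the same query; it will fail on rows where the substring is not a valid integer):

select *,
       substring(key from '_([^_]+)$')::integer as Volume,
       substring(outgoing::varchar from ':"([a-z]*)') as Status
from table
cross join lateral json_object_keys(outgoing) as j(key);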
In BigQuery we have a dictionary table which specifies some mappings, for example:

field | key | value
------|-----|------
aaa   | 1   | 2
aaa   | 2   | 5
aaa   | 4   | 15
bbb   | 1   | 23
bbb   | 2   | 36

But this structure (flat, typically relational) is not so handy for working with mappings: when we write a query, multiple joins are required if we want to get mappings for several fields. Our idea is to transform the dictionary table into a one-row nested table with repeated fields, so that mappings can be applied with a single join. So the desired structure looks like:

aaa.key | aaa.value | bbb.key | bbb.value
--------|-----------|---------|----------
1       | 2         | 1       | 23
2       | 5         | 2       | 36
4       | 15        |         |

Any idea how to transform the flat table into the nested one via standard SQL? Essentially, values from the field attribute should become new attributes, and the key/value pairs should become repeated entries for each attribute. So the whole operation is similar to a pivot.
P.S.
I know that BigQuery recently introduced JSON structures. We considered this solution, but JSON_QUERY doesn't support passing concatenated values as function parameters. As a result we are unable to get values dynamically, so we abandoned this solution as more complicated.
Error when trying to use a variable path name: JSONPath must be a string literal or query parameter
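For reference, this is the kind of call that raises that error (a sketch; json_col and field are placeholder names):

select json_query(json_col, concat('$.', field)) -- error: JSONPath must be a string literal or query parameter
from your_table;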
Consider the simple option below:
select * from your_table pivot (
  array_agg(struct(key, value) ignore nulls)
  for field in ('aaa','bbb')
)
If applied to the sample data in your question, the output is the nested one-row table shown in the question.
You can use the pivot operator along with the array_agg() function to achieve this.
Here is a query that does this:
with temp as
(
  select 'aaa' as field, 1 key, 2 value union all
  select 'aaa' as field, 2 key, 5 value union all
  select 'aaa' as field, 4 key, 15 value union all
  select 'bbb' as field, 1 key, 23 value union all
  select 'bbb' as field, 2 key, 36 value
)
select *
from
(
  select field, key, value
  from temp
)
pivot
(
  array_agg(struct(key, value) ignore nulls)
  for field in ('aaa','bbb')
)
In the pivot operator, as you can see, I have specified the field values as 'aaa' and 'bbb'. If you want to load them dynamically, you will need to store them in a variable first and then pivot, as sketched below.
For more details on storing the values in a variable and then using pivot, see this link: https://towardsdatascience.com/pivot-in-bigquery-4eefde28b3be
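As a rough sketch of the dynamic variant using BigQuery scripting (it assumes the data lives in a real table, here the hypothetical mydataset.mapping, since a CTE is not visible inside EXECUTE IMMEDIATE):

declare field_list string;

-- collect the distinct field values as a quoted, comma-separated list
set field_list = (
  select string_agg(distinct format("'%s'", field))
  from mydataset.mapping
);

-- build and run the pivot with the dynamic field list
execute immediate format("""
  select *
  from mydataset.mapping
  pivot (array_agg(struct(key, value) ignore nulls) for field in (%s))
""", field_list);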
Knowing that in a Postgres server the SQL Data Definition Language (DDL) statements that can be used are:
CREATE
ALTER
DROP
And the SQL Data Manipulation Language (DML) statements:
SELECT
INSERT
UPDATE
DELETE
When we run a SELECT, the projected attributes come back in the data type established in the DDL. And when performing an operation on two attributes of the same type to produce a calculated attribute in a SELECT, the result type will be the same as that of the operands, but I need to change it.
My problem is that I want to calculate a percentage, A / B * 100, between two bigint columns, but when I do A/B the calculated field is 0, because I also need the decimal part in order to multiply it by 100.
The following SQL statement does not give the result I want with bigint:
SELECT id, a/b
FROM <mytable>;
In the calculated field this returns 0 values, which is what I want to avoid.
And what I would like to have is something like this (pseudocode; the TYPE <newtype> part is what I don't know how to write):
SELECT id, first_operand * 100
FROM (SELECT id, a/b TYPE <newtype> AS first_operand
FROM <mytable>) firstTable
NATURAL JOIN <mytable>;
So what I would like to know is: how can I change the data type in a SELECT projection, or what other way is there to do this?
In Postgres you can easily cast by appending ::<type to cast to> to an expression. So in your case you can use a::decimal / b::decimal to get a decimal result from a / b. (Actually, casting only one of the operands would already be enough here, if you prefer it less verbose.)
You can do type casting using:
the CAST function, i.e. CAST(expression AS type)
or also:
the PostgreSQL cast operator ::, i.e. expression::type
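Applied to the question's calculation, a minimal sketch (assuming a table mytable with bigint columns id, a and b; casting one operand is enough to make the whole division numeric):

SELECT id,
       a::numeric / b * 100 AS pct -- a is cast first, so no integer division
FROM mytable;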
I have run a query using Eclipse against a Sybase db. I need to eliminate duplicate entries, but the results have mixed types: INT and TEXT. Sybase will not do DISTINCT on TEXT fields. When I Save All results and paste them into Excel, some of the TEXT field bleeds into the INT field columns, which makes Excel's Remove Duplicates tough to use.
I am thinking I might create an alias for my query, add a temp table, select the distinct INT column values into it from the alias, and then query the alias again, this time including the TEXT values. Then when I export the data I save it into Word instead. It would look like this:
CREATE VIEW stuff AS -- "stuff" as a named stand-in for the original query
SELECT id, text
FROM tableA, TableB
WHERE (various joins here...)

CREATE TABLE #id_values
(alt_id CHAR(8) NULL)

INSERT INTO #id_values
SELECT DISTINCT id
FROM stuff

SELECT id, text
FROM stuff a
WHERE EXISTS (SELECT 1 FROM #id_values b WHERE b.alt_id = a.id)
If there was a way to format the data better in Excel, I would not have to do all this manipulation on the db side. I have tried different formats in the Excel import dialog (tab-delimited, space-delimited) with the same end result.
Additional information: I converted the TEXT to VARCHAR, but I now need a new column which sometimes has up to 5 entries per id, so ID -> TYPE is 1-to-many. The DISTINCT worked on the original list, but now I need to figure out how to show all the new column values in one row for each id. The new column is CHAR(4).
Now my original select looks like this:
SELECT DISTINCT id, CONVERT(VARCHAR(8192), text), type_cd
FROM TableA, TableB
...etc
And I get multiple rows again for each type_cd attached to an id. I also realized I don't think I need the 'b.' alias in front of alt_id.
Also, regardless of how I format the query (TEXT or VARCHAR), Excel continues to bleed the text into the id rows. Maybe this is not a SQL problem but rather one with Excel, or maybe Eclipse.
You are limited in how much data you can paste into an Excel cell anyway, so convert your text to a varchar:
SELECT distinct id, cast(text as varchar(255)) as text
FROM tableA, TableB
WHERE (various joins here...)
I'm using 255 because that is the default width Excel shows. You can have longer values in Excel cells, but this may be sufficient for your purposes. If not, just make the value bigger.
Also, as a comment, you should be using the proper syntax for joins, which uses the ON clause (or CROSS JOIN in place of a comma), as sketched below.
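For example (the join column b.a_id is hypothetical, since the real join condition wasn't shown):

SELECT DISTINCT a.id, CAST(a.text AS VARCHAR(255)) AS text
FROM tableA a
JOIN TableB b ON b.a_id = a.id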
I have a table categories (c) and another table (x) with a column which can contain category IDs separated by commas, stored as the varchar data type. I want to select the related categories, but I'm getting the error "Conversion failed when converting the varchar value '5,' to data type int." when trying to select:
SELECT ID, Title FROM c WHERE ID IN (SELECT catIDs FROM x WHERE ID=any);
The subquery returns data like "1,3,4"
You need to split the '1,3,4' string returned by the subquery into separate int values. SQL Server does not have a built-in function to do this (STRING_SPLIT only arrived in SQL Server 2016), but you can use this user-defined function.
Create the function dbo.Split in your database and then re-write your query as follows:
SELECT ID, Title
FROM c
WHERE ID IN
(
    SELECT s
    FROM dbo.Split(',', '1,3,4')
)
I replaced the subquery with its example result '1,3,4' to shorten the query and make it easier to understand, but see the sketch below for wiring the real subquery back in.
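Because arguments to a table-valued function in a plain FROM clause cannot be subqueries, CROSS APPLY is one way to feed it the real column (a sketch; @any stands in for whatever value you filter x on, and the column s comes from dbo.Split as above):

SELECT c.ID, c.Title
FROM c
WHERE c.ID IN
(
    SELECT parts.s
    FROM x
    CROSS APPLY dbo.Split(',', x.catIDs) AS parts
    WHERE x.ID = @any
)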
If I get it right, you actually have values like '1,3,4' in your catIDs column. So you should extract a substring in the SELECT of your subquery.
By the way, I'm not an MS SQL Server expert, but storing IDs this way is probably bad database design. I'm not sure the RDBMS engine will be able to use indexes in such a case...
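A common pattern that sidesteps splitting entirely is to match the ID inside the delimited list (a sketch; it assumes catIDs contains no spaces, and note that it prevents index use on the comparison, as mentioned above; @any is again a placeholder):

SELECT c.ID, c.Title
FROM c
JOIN x ON ',' + x.catIDs + ',' LIKE '%,' + CAST(c.ID AS varchar(10)) + ',%'
WHERE x.ID = @any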
I have a column of type 'nvarchar(max)' that should now hold XML information instead of just a string.
Say: col1 has value 'abc'
Now it holds that value plus additional info:
<el1>abc</el1>
<el2>someotherinfo</el2>
Storing the information in the column is fine, since it can still be pushed in as a string.
However, extracting that same information, and also using/replacing the value 'abc' from this column in the various joins with other tables, is something I'm not able to figure out.
How can I also push this information in as 'abcd' when it comes from another table's value 'abcd', without losing the other information?
I am building the XML on the application side and updating it in a column of type nvarchar(). All the columns have been replaced to hold XML, so it is safe to assume that col1 only holds XML similar to the above. Pushing the XML in as-is works fine. However, how should I extract the information to use it in joins?
How do I extract a particular element from this nvarchar() string to use it in a join?
Previously, this column 'Col1' was just used as a string, and a check was done like this:
where tablex.colx = table1.col1
or
Update Table2 where
Once you cast the NVARCHAR data to the XML data type, you can use the XML functions to get element/attribute values to join on:
WITH xoutput AS (
    SELECT CONVERT(xml, t.nvarchar_column) AS col
    FROM YOUR_TABLE t
)
SELECT x.*
FROM TABLE x
JOIN xoutput y ON y.col.value('(/path/to/your/element)[1]', 'int') = x.id
It won't be able to use indexes, because of the data type conversion...
Alternate version, using IN:
WITH xoutput AS (
    SELECT CONVERT(xml, t.nvarchar_column) AS col
    FROM YOUR_TABLE t
)
SELECT x.*
FROM TABLE x
WHERE x.id IN (SELECT y.col.value('(/path/to/your/element)[1]', 'int')
               FROM xoutput y)
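For the other direction (pushing a new value such as 'abcd' back into the element without losing the rest of the XML), one option is XML DML on an xml variable, then converting back. A sketch for a single row; YOUR_TABLE, the id filter, and the /el1 path are placeholders carried over from the examples above:

DECLARE @x xml;

SELECT @x = CONVERT(xml, t.nvarchar_column)
FROM YOUR_TABLE t
WHERE t.id = 1;

-- replace only the element's text; sql:variable("@newval") could be used
-- instead of the literal if the value comes from a variable
SET @x.modify('replace value of (/el1/text())[1] with "abcd"');

UPDATE YOUR_TABLE
SET nvarchar_column = CONVERT(nvarchar(max), @x)
WHERE id = 1;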