I am using an MDM that has a custom reporting feature that allows some SQL. I'd like to get a list of SIM card ICCID numbers, but the ICCID is returned inside a JSON string along with other device information.
Running:
SELECT DeviceDetails
FROM Device
Returns (no whitespace, formatted for readability):
{
"BadgeNumber": 0,
"DeviceLocale": "en-US",
"ICCID": "0000000000000004720",
"InstalledPoliciesSignedBy": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1690F",
"AvailableDeviceCapacity": 00000000080,
"OSSdkVersion": 24,
"ModemFirmwareVersion": "angler-03.72",
"DeviceCapacity": 00000000184,
"Product": "Nexus 6P/angler",
"WiFiMAC": "02:00:00:00:00:00"
}
I don't know what flavor of SQL it's running unfortunately. Any idea on how I can just return the ICCID value?
Edit: found this in the reporting docs:
Admin Portal uses a subset of SQL-92 that only supports SELECT statements. SQL commands that change database values are not valid (CREATE, ALTER, DELETE, DROP, INSERT, SELECT INTO, TRUNCATE, UPDATE, and so forth).
Try:
MySQL:
SELECT DeviceDetails->'$.ICCID' AS ICCID
FROM Device;
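The -> operator returns the extracted value as JSON, so the result keeps its double quotes. If your MySQL version has it (5.7.13 and later), ->> unquotes the value. A minimal sketch, assuming the same table and column:
SELECT DeviceDetails->>'$.ICCID' AS ICCID
FROM Device;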
Oracle:
SELECT d.DeviceDetails.ICCID FROM Device d;
Note that Oracle's simple dot notation needs a table alias and an IS JSON check constraint on the JSON column.
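If the column has no IS JSON constraint, JSON_VALUE (available since Oracle 12c) should still work; a sketch assuming the same table and column names:
SELECT JSON_VALUE(DeviceDetails, '$.ICCID') AS ICCID
FROM Device;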
Microsoft SQL Server:
If you are fortunate enough to have SQL Server 2016, you can use the new JSON functionality MS has finally put into SQL Server.
DECLARE @Device TABLE (value VARCHAR(4000));
-- leading zeros removed from the numeric values so the literal is valid JSON
INSERT INTO @Device (value) VALUES (
'{
"BadgeNumber": 0,
"DeviceLocale": "en-US",
"ICCID": "0000000000000004720",
"InstalledPoliciesSignedBy": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1690F",
"AvailableDeviceCapacity": 80,
"OSSdkVersion": 24,
"ModemFirmwareVersion": "angler-03.72",
"DeviceCapacity": 184,
"Product": "Nexus 6P/angler",
"WiFiMAC": "02:00:00:00:00:00"
}');
SELECT
JSON_VALUE(value, '$.ICCID') AS ICCID
FROM @Device;
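Against the MDM's table itself, the same function applied directly to the column should work (a sketch, assuming each DeviceDetails value is one well-formed JSON document):
SELECT JSON_VALUE(DeviceDetails, '$.ICCID') AS ICCID
FROM Device;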
If not, you may want to either roll your own CLR or take a look at what this author created here: https://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/
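If you are stuck on an older version and only need this one key, plain string functions can also pull it out. A rough sketch, assuming the raw string has no whitespace (as the question describes), so the key appears literally as "ICCID":"...":
SELECT SUBSTRING(
    DeviceDetails,
    CHARINDEX('"ICCID":"', DeviceDetails) + 9,
    CHARINDEX('"', DeviceDetails, CHARINDEX('"ICCID":"', DeviceDetails) + 9)
      - (CHARINDEX('"ICCID":"', DeviceDetails) + 9)
  ) AS ICCID
FROM Device;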
Related
I want to remove the array wrapper surrounding a query result, as I'm running a for loop to push each object into an array. This is my query:
"SELECT * FROM jobs WHERE id = ? FOR JSON PATH, WITHOUT_ARRAY_WRAPPER"
but I'm getting this result in postman
{
"status": "Failed",
"message": "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'JSON PATH, WITHOUT_ARRAY_WRAPPER' at line 1"
}
FOR JSON PATH is a feature of Microsoft SQL Server, and your error message shows you are on MariaDB. There is a standard for JSON in SQL, but don't expect most database engines to follow it.
You can get a single JSON object for each row with json_object.
-- {"id": 2, "name": "Bar"}
select
json_object('id', id, 'name', name)
from jobs
where id = 2
Rather than querying each job individually and then appending to an array, you can do this in a single query: use the IN operator to fetch all the desired rows at once, then json_arrayagg to aggregate them into a single array.
-- [{"id": 1, "name": "Foo"},{"id": 3, "name": "Baz"}]
select
json_arrayagg( json_object('id', id, 'name', name) )
from jobs
where id in (1, 3)
This is much more efficient. In general, if you're querying SQL in loops, there's a better way.
Summary:
I am trying to extract values from a JSON document stored as a SUPER column in Redshift.
Context
This issue is nearly identical to the question posted here for T-SQL.
My schema:
user_id VARCHAR
properties SUPER
Sample data:
{
"$os": "Mac OS X",
"$browser": "Chrome",
"token": "123x5"
}
I have this as a column in my table called properties.
Desired behavior
I want to be able to retrieve the value Mac OS X from the $os key and store it in a VARCHAR column.
What I've tried
I am able to retrieve the value for keys that do not have special characters in the following way:
SELECT properties.token from clean
I have referenced the following aws docs:
https://docs.aws.amazon.com/redshift/latest/dg/JSON_EXTRACT_PATH_TEXT.html
https://docs.aws.amazon.com/redshift/latest/dg/r_SUPER_type.html
https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html
Attempting to do the same
I have tried the following which haven't worked for me:
SELECT properties.'\$os' from clean
SELECT properties.'$os' from clean
SELECT properties[$os] from clean
SELECT properties['\$os'] from clean
Referencing the following docs: https://docs.aws.amazon.com/redshift/latest/dg/query-super.html#unnest
I have also attempted to iterate over the SUPER type using PartiQL:
select b.*
, pr
from base b, b.properties pr;
But this returns no rows.
I also tried the following:
select
properties
, properties.token
, properties[0] praw0
, properties[0].os os
, properties[0][0] praw00
, properties[0][0][0] praw000
from base
And this returned rows with value in the properties and token columns but nulls in all the other columns.
What am I missing? What else should I be trying?
You have to use double quotes "" around key names that contain special characters:
CREATE TEMP TABLE test_json
(
user_id VARCHAR,
properties SUPER
);
INSERT INTO test_json VALUES (1,JSON_PARSE('{"$os": "Mac OS X", "$browser": "Chrome", "token": "123x5"}'));
SELECT properties."$os" from test_json
-- Output
"Mac OS X"
I have a JSON table which was created by:
CREATE TABLE `normaldata_source`(
`column1` int,
`column2` string,
`column3` struct<column4:string>)
A sample data is:
{
"column1": 9,
"column2": "Z",
"column3": {
"column4": "Y"
}
}
If I do
SELECT column3
FROM normaldata_source
it will produce a result {column4=y}. However, I want it to be in JSON form: {"column4": "y"}
Is this possible?
Edit: This query gives me the following result:
SELECT CAST(column3 AS JSON) as column3_json
FROM normaldata_source
As of Trino 357 (formerly known as Presto SQL), you can now cast a row to JSON and it will preserve the column names:
WITH normaldata_source(column1, column2, column3) AS (
VALUES (9, 'Z', cast(row('Y') as row(column4 varchar)))
)
SELECT cast(column3 as json)
FROM normaldata_source
=>
_col0
-----------------
{"column4":"Y"}
(1 row)
I encountered this same problem and was thoroughly stumped on how to proceed in light of deep compositional nesting/structs. I'm using Athena (managed Presto with a Hive connector, from AWS). In the end, I worked around it by doing a CTAS (CREATE TABLE AS SELECT) in which I selected the complex column I wanted, under the conditions I wanted, and wrote it to an external table with an underlying SerDe format of JSON. Then, via the Hive connector's $path magic column (or by listing the files under the external table location), I obtained the resulting files and streamed the JSON out of them.
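A rough sketch of that CTAS workaround in Athena, assuming a hypothetical S3 location and that only column3 needs to be written out as JSON:
CREATE TABLE normaldata_json
WITH (
  format = 'JSON',
  external_location = 's3://my-bucket/normaldata-json/' -- hypothetical bucket/path
) AS
SELECT column3
FROM normaldata_source;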
I know this isn't a direct answer to the question at hand - I believe we have to wait for https://github.com/trinodb/trino/pull/3613 in order to support arbitrary struct/array compositions -> json. But maybe this will help someone else who kind of assumed they'd be able to do this.
Although I originally saw this as an annoying workaround, I'm now starting to think it was the right call for my application anyway.
I am attempting to insert data into a table in my database. I am using an Oracle Apache Derby DB. I have the following code:
Insert into P2K_DBA.ODS_CNTRL
(ODS_LOAD_ID, ODS_STATUS, USR_WWID, USR_FIRST_NM,
USR_LAST_NM, USR_DISPLAY_NM, USR_NT_ID,TOT_AMT,
TOT_RCD_CNT, TOT_QTY, LAST_UPD_DT, ODS_ADJ_TYP,
ODS_ADJ_DESC, APRV_WWID, APRV_FIRST_NM,APRV_LAST_NM,
APRV_DISPLAY_NM, APRV_NT_ID, APRV_DT
)
values
(6,'avail','64300339', 'Travis',
'Taylor', 'TT', '3339', 33,
15, 40, '7/10/2012', 'test',
'test', '64300337', 'Travis',
'Taylor', 'TT', '3339', '2/06/2013');
I ran this SQL command and received the following error:
"Error code -1, SQL state 21000: Scalar subquery is only allowed to return a single row.
Line 1, column 1"
I ran this code successfully a few days ago. On top of that, I have tried to manually enter data into this table (using NetBeans) and have it auto-generate the SQL, which resulted in the same error.
What is causing this error and how can I solve/bypass it?
One way in which you could run into this would be to do something like
CREATE FUNCTION F(...) ...
F((SELECT COL FROM T))
where the scalar subquery passed to F returns more than one row. But you could instead write
... (SELECT F(COL) FROM T)
provided the new context permits a subquery, that is.
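A minimal sketch of that rewrite, assuming a hypothetical single-argument function F and a table T with several rows:
VALUES F((SELECT COL FROM T)); -- fails with state 21000: the scalar subquery must return exactly one row
SELECT F(COL) FROM T;          -- F is evaluated once per row instead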
From Java I am doing the following query on DB2:
SELECT * FROM PRV_PRE_ACTIVATION WHERE TRANSACTION_ID = ?
The field TRANSACTION_ID is a VARCHAR of length 32. I set the parameter on the PreparedStatement using the setString method.
I get the error:
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63, DRIVER=3.59.81
at com.ibm.db2.jcc.am.dd.a(dd.java:676)
at com.ibm.db2.jcc.am.dd.a(dd.java:60)
at com.ibm.db2.jcc.am.dd.a(dd.java:127)
at com.ibm.db2.jcc.am.bn.c(bn.java:2546)
at com.ibm.db2.jcc.am.bn.d(bn.java:2534)
at com.ibm.db2.jcc.am.bn.a(bn.java:2026)
at com.ibm.db2.jcc.t4.cb.g(cb.java:140)
at com.ibm.db2.jcc.t4.cb.a(cb.java:40)
at com.ibm.db2.jcc.t4.q.a(q.java:32)
at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
at com.ibm.db2.jcc.am.bn.gb(bn.java:1997)
at com.ibm.db2.jcc.am.cn.pc(cn.java:3009)
at com.ibm.db2.jcc.am.cn.b(cn.java:3786)
at com.ibm.db2.jcc.am.cn.bc(cn.java:678)
at com.ibm.db2.jcc.am.cn.executeQuery(cn.java:652)
That SQLSTATE means "Capability is not supported by this version of the DB2 application requester, DB2 application server, or the combination of the two." But I don't use any unusual functionality.
I have tried running the query from an SQL client:
SELECT * FROM PRV_PRE_ACTIVATION where transaction_id='A'
And it works fine.
What is the cause of the problem?
UPDATE: The code where the statement is prepared:
s = con.prepareStatement(sSQL,
ResultSet.TYPE_SCROLL_INSENSITIVE,
ResultSet.CONCUR_UPDATABLE);
Try changing to a specified list of columns in the select list -- my guess is you have a user defined column type (or some other type) which is not supported by your driver. For example, does the statement
SELECT TRANSACTION_ID FROM PRV_PRE_ACTIVATION WHERE TRANSACTION_ID = ?
work? If so, then start adding columns back in and you will find the problem column.
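You can also check the column types in the catalog to spot LOB or distinct-type columns directly; a sketch assuming DB2 for LUW, where the SYSCAT.COLUMNS view is available:
SELECT COLNAME, TYPENAME
FROM SYSCAT.COLUMNS
WHERE TABNAME = 'PRV_PRE_ACTIVATION';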
I came across this problem lately, and after some searching on the web, I found this link:
DB2 SQL error: SQLCODE: -270, SQLSTATE: 42997, SQLERRMC: 63
which specifies this:
A column with a LOB type, distinct type on a LOB type, or
structured type cannot be specified in the select-list of an
insensitive scrollable cursor.
With help from a colleague, we came to this conclusion:
1. Q: When will you get this "SQLCODE=-270, SQLSTATE=42997" exception?
A: When a scrollable Statement or PreparedStatement is prepared and executed while [B|C]LOB fields exist in the select list, e.g.:
String strQuery = "SELECT NUMBER_FIELD, CHAR_FIELD, CLOB_FIELD FROM TABLE_NAME WHERE CONDITION IS TRUE";
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery(strQuery); // and this exception will be thrown here
2. Q: So what's the solution if we want to get rid of it when [B|C]LOB fields are queried?
A: Try to use ResultSet.TYPE_FORWARD_ONLY while creating the query statement, e.g.:
stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
Or simply try this one:
stmt = conn.createStatement();
Note that the same rules apply to conn.prepareStatement() too. You may refer to the Java API docs for more information.