Cast to long datatype - BigQuery - sql

BigQuery and SQL noob here. I was going through the data types BigQuery supports here. I have a column in Bigtable which is of type bytes; its original data type is a Scala Long, which my application code converted to bytes before storing it in Bigtable. I am trying to do CAST(itemId AS integer) (where itemId is the column name) in the BigQuery UI, but the output of CAST(itemId AS integer) is 0 instead of the actual value. I have no idea how to do this. If someone could point me in the right direction, I would greatly appreciate it.
EDIT: Adding more details
Sample itemId is 190007788462
Following is the code that writes itemId to Bigtable (only the relevant method is included). I am using the HBase client to write to Bigtable.
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.util.Bytes

def toPut(key: String, itemId: Long): Put = {
  val TrxColumnFamily = Bytes.toBytes("trx")
  val ItemIdColumn = Bytes.toBytes("itemId")
  new Put(Bytes.toBytes(key))
    .addColumn(TrxColumnFamily,
               ItemIdColumn,
               Bytes.toBytes(itemId))
}
Following is the entry in Bigtable produced by the above code:
ROW COLUMN+CELL
foo column=trx:itemId, value=\x00\x00\x00\xAFP]F\xAA
Following is the relevant Scala code that reads the entry from Bigtable; this works correctly. Result is an org.apache.hadoop.hbase.client.Result.
private def getItemId(row: Result): Long = {
  val key = Bytes.toString(row.getRow)
  val TrxColumnFamily = Bytes.toBytes("trx")
  val ItemIdColumn = Bytes.toBytes("itemId")
  val itemId =
    Bytes.toLong(row.getValue(TrxColumnFamily, ItemIdColumn))
  itemId
}
The getItemId function above returns the itemId correctly, because Bytes.toLong is part of org.apache.hadoop.hbase.util.Bytes, which correctly converts the byte array back to a Long.
I am using the BigQuery UI (similar to this one) and CAST(itemId AS integer) because BigQuery doesn't have a Long data type. This casts the itemId byte string incorrectly and the resulting value is 0.
Is there any way I can get a Bytes.toLong equivalent from hbase-client in the BigQuery UI? If not, is there any other way I can go about this issue?

Try this:
SELECT CAST(CONCAT('0x', TO_HEX(itemId)) AS INT64) AS itemId
FROM YourTable;
It converts the bytes into a hex string, then casts that string into an INT64. Note that the query uses standard SQL, as opposed to legacy SQL. If you want to try it with some sample data, you can run this query:
WITH `YourTable` AS (
  SELECT b'\x00\x00\x00\xAFP]F\xAA' AS itemId UNION ALL
  SELECT b'\xFA\x45\x99\x61'
)
SELECT CAST(CONCAT('0x', TO_HEX(itemId)) AS INT64) AS itemId
FROM YourTable;

Related

How can I get the last element of an array? SQL Bigquery

I'm working on building a follow network from GitHub's available data on Google BigQuery, e.g.: https://bigquery.cloud.google.com/table/githubarchive:day.20210606
The key data is contained in the "payload" field, STRING type. I managed to unnest the data contained in that field and convert it to an array, but how can I get the last element?
Here is what I have so far...
select type,
array(select trim(val) from unnest(split(trim(payload, '[]'))) val) payload
from `githubarchive.day.20210606`
where type = 'MemberEvent'
Which outputs:
How can I get only the last element, "Action":"added"} ?
I know that
select array_reverse(your_array)[offset(0)]
should do the trick, however I'm unsure how to combine that in my code. I've been trying different options without success, for example:
with payload as (
  select array(select trim(val) from unnest(split(trim(payload, '[]'))) val) payload
  from `githubarchive.day.20210606`
)
select type, ARRAY_REVERSE(payload)[ORDINAL(1)]
from `githubarchive.day.20210606` where type = 'MemberEvent'
The desired output should look like:
To get the last element in an array you can use the approach below:
select array_reverse(your_array)[offset(0)]
As for "I'm unsure how to combine that in my code":
select type, array_reverse(array(
select trim(val)
from unnest(split(trim(payload, '[]'))) val
))[offset(0)]
from `githubarchive.day.20210606`
where type = 'MemberEvent'
There is a solution without reversing the array:
SELECT event[OFFSET(ARRAY_LENGTH(event) - 1)]
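Applied to the query from the question, that approach might look like the following sketch (assuming the same split/trim parsing and the githubarchive.day.20210606 table as above):
-- Sketch only: same parsing as in the question, but taking the last element
-- via ARRAY_LENGTH instead of ARRAY_REVERSE.
SELECT type,
       payload_arr[OFFSET(ARRAY_LENGTH(payload_arr) - 1)] AS last_element
FROM (
  SELECT type,
         ARRAY(SELECT TRIM(val) FROM UNNEST(SPLIT(TRIM(payload, '[]'))) val) AS payload_arr
  FROM `githubarchive.day.20210606`
  WHERE type = 'MemberEvent'
) AS t;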

U-sql call data in json array

I have browsed the web and this forum trying to read the data from a JSON file, but my script does not work.
I have a problem extracting the list of objects in rates. Can someone please help? I cannot find the fault.
{"table":"C","no":"195/C/NBP/2016","tradingDate":"2016-10-06","effectiveDate":"2016-10-07","rates":
[
{"currency":"dolar amerykański","code":"USD","bid":3.8011,"ask":3.8779},
{"currency":"dolar australijski","code":"AUD","bid":2.8768,"ask":2.935},
{"currency":"dolar kanadyjski","code":"CAD","bid":2.8759,"ask":2.9339},
{"currency":"euro","code":"EUR","bid":4.2493,"ask":4.3351},
{"currency":"forint (Węgry)","code":"HUF","bid":0.013927,"ask":0.014209},
{"currency":"frank szwajcarski","code":"CHF","bid":3.8822,"ask":3.9606},
{"currency":"funt szterling","code":"GBP","bid":4.8053,"ask":4.9023},
{"currency":"jen (Japonia)","code":"JPY","bid":0.036558,"ask":0.037296},
{"currency":"korona czeska","code":"CZK","bid":0.1573,"ask":0.1605},
{"currency":"korona duńska","code":"DKK","bid":0.571,"ask":0.5826},
{"currency":"korona norweska","code":"NOK","bid":0.473,"ask":0.4826},
{"currency":"korona szwedzka","code":"SEK","bid":0.4408,"ask":0.4498},
{"currency":"SDR (MFW)","code":"XDR","bid":5.3142,"ask":5.4216}
],
"EventProcessedUtcTime":"2016-10-09T10:48:41.6338718Z","PartitionId":1,"EventEnqueuedUtcTime":"2016-10-09T10:48:42.6170000Z"}
This is my U-SQL script:
@trial =
    EXTRACT jsonString string
    FROM @"adl://kamilsepin.azuredatalakestore.net/ExchangeRates/2016/10/09/10_0_c60d8b8895b047c896ce67d19df3cdb2.json"
    USING Extractors.Text(delimiter:'\b', quoting:false);

@json =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(jsonString) AS rec
    FROM @trial;

@columnized =
    SELECT
        rec["table"] AS table,
        rec["no"] AS no,
        rec["tradingDate"] AS tradingDate,
        rec["effectiveDate"] AS effectiveDate,
        rec["rates"] AS rates
    FROM @json;

@rateslist =
    SELECT
        table, no, tradingDate, effectiveDate,
        Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(rates) AS recl
    FROM @columnized;

@selectrates =
    SELECT
        recl["currency"] AS currency,
        recl["code"] AS code,
        recl["bid"] AS bid,
        recl["ask"] AS ask
    FROM @rateslist;

OUTPUT @selectrates
TO "adl://kamilsepin.azuredatalakestore.net/datastreamanalitics/ExchangeRates.tsv"
USING Outputters.Tsv();
You need to look at the structure of your JSON and identify what constitutes the first path inside the JSON that you want to map to correlated rows. In your case, you are really only interested in the array in rates, where you want one row per array item.
Thus, you use the JsonExtractor with a JSONPath expression that gives you one row per array element (e.g., rates[*]) and then project each of its fields.
Here is the code (with slightly changed paths):
REFERENCE ASSEMBLY JSONBlog.[Newtonsoft.Json];
REFERENCE ASSEMBLY JSONBlog.[Microsoft.Analytics.Samples.Formats];
@selectrates =
    EXTRACT currency string, code string, bid decimal, ask decimal
    FROM @"/Temp/rates.json"
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor("rates[*]");

OUTPUT @selectrates
TO "/Temp/ExchangeRates.tsv"
USING Outputters.Tsv();

Setting a Clob value in a native query

Oracle DB.
Spring JPA using Hibernate.
I am having difficulty inserting a Clob value into a native sql query.
The code calling the query is as follows:
@SuppressWarnings("unchecked")
public List<Object[]> findQueryColumnsByNativeQuery(String queryString, Map<String, Object> namedParameters)
{
    List<Object[]> result = null;
    final Query query = em.createNativeQuery(queryString);
    if (namedParameters != null)
    {
        Set<String> keys = namedParameters.keySet();
        for (String key : keys)
        {
            final Object value = namedParameters.get(key);
            query.setParameter(key, value);
        }
    }
    query.setHint(QueryHints.HINT_READONLY, Boolean.TRUE);
    result = query.getResultList();
    return result;
}
The query string is of the format
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , (:clobValue), null ) = 1 )
where "(exact ( , (:clobValue), null ) = 1 )" is a function and "clobValue" is a Clob.
I can adjust the query to work as follows:
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , to_clob((:stringValue)), null ) = 1 )
where "stringValue" is a String but obviously this only works up to the max sql string size (4000) and I need to pass in much more than that.
I have tried to pass the Clob value as a java.sql.Clob using the method
final Clob clobValue = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
This results in a java.io.NotSerializableException: org.hibernate.engine.jdbc.ClobProxy
I have tried to Serialize the Clob using
final Clob clob = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
final Clob clobValue = SerializableClobProxy.generateProxy(clob);
But this appears to provide the wrong type of argument to the "exact" function, resulting in:
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:144) - SQL Error: 29900, SQLState: 99999
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:146) - ORA-29900: operator binding does not exist
ORA-06553: PLS-306: wrong number or types of arguments in call to 'EXACT'
After reading some posts about using Clobs with entities I have tried passing in a byte[], but this also provides the wrong argument type:
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:144) - SQL Error: 29900, SQLState: 99999
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:146) - ORA-29900: operator binding does not exist
ORA-06553: PLS-306: wrong number or types of arguments in call to 'EXACT'
I can also just pass in the value as a String, as long as it doesn't exceed the maximum string size.
I have seen a post (Using function in where clause with clob parameter) which seems to suggest that the only way is to use "plain old JDBC". This is not an option.
I am up against a hard deadline so any help is very welcome.
I'm afraid your assumptions about CLOBs in Oracle are wrong. In Oracle a CLOB locator is something like a file handle, and such a handle can be created only by the database. So you cannot simply pass a CLOB as a bind variable. A CLOB must somehow be related to database storage, because it can occupy up to 176 TB and something of that size cannot be held in the Java heap.
So the usual approach is to call either the DB function empty_clob() or dbms_lob.createtemporary (in some form). Then you get a CLOB from the database, even if you think of it as an "IN" parameter. Then you can write as much data as you want into that locator (handle, CLOB), and then you can use this CLOB as a parameter for a query.
If you do not follow this pattern, your code will not work. It does not matter whether you use JPA, Spring Batch or plain JDBC. This constraint is imposed by the database.
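A minimal PL/SQL sketch of that pattern (illustrative only; the text value is made up, and the locator would in practice be bound from your application code):
DECLARE
  l_clob CLOB;
  l_text VARCHAR2(32767) := 'first chunk of a value longer than 4000 characters ...';
BEGIN
  -- The database allocates the temporary LOB locator.
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
  -- Write data into the locator (call WRITEAPPEND again for further chunks).
  DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_text), l_text);
  -- l_clob can now be bound wherever :clobValue is used in the original query.
  DBMS_LOB.FREETEMPORARY(l_clob);
END;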
It seems that it is necessary to set the parameter type explicitly for Hibernate in such cases. The following code worked for me:
Clob clob = entityManager
    .unwrap(Session.class)
    .getLobHelper()
    .createClob(reader, length);

int inserted = entityManager
    .unwrap(org.hibernate.Session.class)
    .createSQLQuery("INSERT INTO EXAMPLE (UUID, TYPE, DATA) VALUES (:uuid, :type, :data)")
    .setParameter("uuid", java.util.UUID.randomUUID(), org.hibernate.type.UUIDBinaryType.INSTANCE)
    .setParameter("type", java.util.UUID.randomUUID().toString(), org.hibernate.type.StringType.INSTANCE)
    .setParameter("data", clob, org.hibernate.type.ClobType.INSTANCE)
    .executeUpdate();
A similar workaround is available for Blob.
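For example, the Blob variant might look like the sketch below (illustrative only; the table, column names, input stream and length are placeholders, and the type constants depend on your Hibernate version):
// Hypothetical sketch: create the Blob through Hibernate's LobHelper and
// pass its type explicitly, exactly as done for the Clob above.
Blob blob = entityManager
    .unwrap(org.hibernate.Session.class)
    .getLobHelper()
    .createBlob(inputStream, length);   // stream over the binary payload

int inserted = entityManager
    .unwrap(org.hibernate.Session.class)
    .createSQLQuery("INSERT INTO EXAMPLE (UUID, TYPE, DATA) VALUES (:uuid, :type, :data)")
    .setParameter("uuid", java.util.UUID.randomUUID(), org.hibernate.type.UUIDBinaryType.INSTANCE)
    .setParameter("type", "example-type", org.hibernate.type.StringType.INSTANCE)
    .setParameter("data", blob, org.hibernate.type.BlobType.INSTANCE)
    .executeUpdate();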
THE ANSWER: Thank you both for your answers. I should have updated this when I solved the issue some time ago. In the end I used JDBC and the problem disappeared in a puff of smoke!

Converting xml node string to strip out nodes

I have a table that has a column called RAW DATA of type NVARCHAR(MAX), which is a dump from a web service. Here is a sample of one data row:
<CourtRecordEventCaseHist>
<eventDate>2008-02-11T06:00:00Z</eventDate>
<eventDate_TZ>-0600</eventDate_TZ>
<histSeqNo>4</histSeqNo>
<countyNo>1</countyNo>
<caseNo>xxxxxx</caseNo>
<eventType>WCCS</eventType>
<descr>Warrant/Capias/Commitment served</descr>
<tag/>
<ctofcNameL/>
<ctofcNameF/>
<ctofcNameM/>
<ctofcSuffix/>
<sealCtofcNameL/>
<sealCtofcNameF/>
<sealCtofcNameM/>
<sealCtofcSuffix/>
<sealCtofcTypeCodeDescr/>
<courtRptrNameL/>
<courtRptrNameF/>
<courtRptrNameM/>
<courtRptrSuffix/>
<dktTxt>Signature bond set</dktTxt>
<eventAmt>0.00</eventAmt>
<isMoneyEnabled>false</isMoneyEnabled>
<courtRecordEventPartyList>
<partyNameF>Name</partyNameF>
<partyNameM>A.</partyNameM>
<partyNameL>xxxx</partyNameL>
<partySuffix/>
<isAddrSealed>false</isAddrSealed>
<isSeal>false</isSeal>
</courtRecordEventPartyList>
</CourtRecordEventCaseHist>
It was supposed to go into a table, with the node names representing the column names. The table it's going to has already been created; I just need to extract the data from this row into the table. I have hundreds of thousands of records like this. I was going to copy the data to an XML file and then import it, but there is so much data that I would rather do the work within the DB.
Any ideas?
First, create the table with all the required columns.
Then, use your favorite scripting language to load the table! Mine being Groovy, here is what I'd do:
import groovy.sql.Sql

def sql = Sql.newInstance(/* SQL connection here */)
sql.eachRow("select RAW_DATA from TABLE_NAME") { row ->
    String xmlData = row."RAW_DATA"
    def root = new XmlSlurper().parseText(xmlData)
    def date = root.eventDate
    def histSeqNo = root.histSeqNo
    // Pull out all the data and insert it into the new table!
}
I did find an answer to this; I'm sure there is more than one way of doing it, but this is what I got to work. Thanks for everyone's help.
SELECT
    pref.value('(caseNo/text())[1]', 'varchar(20)') AS CaseNumber,
    pref.value('(countyNo/text())[1]', 'int') AS CountyNumber
FROM
    dbo.CaseHistoryRawData_10
    CROSS APPLY RawData.nodes('//CourtRecordEventCaseHist') AS CourtRec(pref)
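Extending that query to load the target table might look like the following sketch (the target table name and the extra column names/types are illustrative, not from the original post):
-- Hypothetical target table; adjust the names and types to the real schema.
INSERT INTO dbo.CourtRecordEventCaseHist (CaseNumber, CountyNumber, HistSeqNo, EventType, DktTxt)
SELECT
    pref.value('(caseNo/text())[1]', 'varchar(20)'),
    pref.value('(countyNo/text())[1]', 'int'),
    pref.value('(histSeqNo/text())[1]', 'int'),
    pref.value('(eventType/text())[1]', 'varchar(10)'),
    pref.value('(dktTxt/text())[1]', 'nvarchar(max)')
FROM
    dbo.CaseHistoryRawData_10
    CROSS APPLY RawData.nodes('//CourtRecordEventCaseHist') AS CourtRec(pref);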

NHibernate Like with integer

I have an NHibernate search function where I receive an integer and want to return results where at least the beginning of the value coincides with the received digits, e.g.
received integer: 729
returns: 729445, 7291 etc.
The database column is of type int, as is the property "Id" of Foo.
But
int id = 729;
var criteria = session.CreateCriteria(typeof(Foo));
criteria.Add(NHibernate.Criterion.Expression.InsensitiveLike("Id", id.ToString() + "%"));
return criteria.List<Foo>();
results in an error (could not convert parameter string to Int32). Is there something wrong in the code, is there a workaround, or is there another solution?
How about this:
int id = 729;
var criteria = session.CreateCriteria(typeof(Foo));
criteria.Add(Expression.Like(Projections.Cast(NHibernateUtil.String, Projections.Property("Id")), id.ToString(), MatchMode.Anywhere));
return criteria.List<Foo>();
Have you tried something like this:
int id = 729;
var criteria = session.CreateCriteria(typeof(Foo));
criteria.Add(NHibernate.Criterion.Expression.Like(Projections.SqlFunction("to_char", NHibernate.NHibernateUtil.String, Projections.Property("Id")), id.ToString() + "%"));
return criteria.List<Foo>();
The idea is to convert the column to a string first, using the to_char function. Some databases do this conversion automatically.
AFAIK, you'll need to store your integer as a string in the database if you want to use the built-in NHibernate functionality for this (I would recommend this approach even without NHibernate - the minute you start doing 'like' searches you are dealing with a string, not a number - think US Zip Codes, etc...).
You could also do it mathematically in a database-specific function (or convert to a string as described in Thiago Azevedo's answer), but I imagine these options would be significantly slower, and also have potential to tie you to a specific database.