HSQLDB: Convert NUMERIC to HEX

I am using HSQLDB 2.3.3 and I'm trying to convert a number into its hexadecimal representation.
For example,
input: 10 output: A
input: 100 output: 64
Is it possible to do this in HSQLDB?

You can use the standard Java static methods directly.
For example:
call "java.lang.Long.toHexString"(123)
SELECT "java.lang.Long.toHexString"(id) FROM Customer

How to get text bytes used by a string in Hive?

I have some data in a Hive 1.2.1 table and I need the raw byte count of a specific column. The column data is raw HTML in multiple languages. To get the character length, I can use a simple query like the one below:
select baseurl, LENGTH(content) from clss limit 30;
The above query is fine for character length, but for text that is not English the value is wrong: Arabic characters, for example, are stored as Unicode, so the character count differs from the byte count, with some characters taking two bytes and others one.
Is there any built-in function that returns the number of bytes of a text value instead of the number of characters?
The function character_length(string str) was added in HIVE-15979, with fix version 2.3.0. If you cannot upgrade your Hive (and upgrading is quite risky), try downloading the UDF source code and building it yourself, then add the jar and create a temporary function.
Download code: GenericUDFCharacterLength.java
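A rough sketch of those last steps once the jar is built; the jar path and the fully qualified class name below are assumptions, so check the package declaration in the downloaded source:
ADD JAR /path/to/character-length-udf.jar;
CREATE TEMPORARY FUNCTION character_length
  AS 'org.apache.hadoop.hive.ql.udf.generic.GenericUDFCharacterLength';
SELECT baseurl, character_length(content) FROM clss LIMIT 30;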

Is there an EBCDIC_STR function on IBM i?

I'm running into an issue with character encoding, and I found the functions EBCDIC_STR and ASCII_STR in Db2 for z/OS. Are there similar functions in Db2 for IBM i?
Starting with v7.2, there is a similar function in Db2 for i: CHAR. It is not an exact replacement, though. While EBCDIC_STR returns a string in the system EBCDIC CCSID and provides a UTF-16 encoding for unknown characters, CHAR takes a string and converts it to a provided CCSID. CHAR has no defined behavior for characters that cannot be converted to the new CCSID.
I believe you will have to use a CAST specification in your SQL statement, specifying in it the desired CCSID, rather than using a built-in function.
This documentation page gives the syntax of a CAST specification, but it does not have a precisely relevant example. The Db2 for z/OS CAST page gives an example that should work the same on IBM i:
CAST(MYDATA AS CHAR(10) CCSID 367)
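For the opposite direction, the same kind of CAST can target an EBCDIC CCSID. A minimal sketch, where MYCOL and MYLIB.MYTABLE are placeholder names and 37 is the common US English EBCDIC CCSID:
SELECT CAST(MYCOL AS CHAR(10) CCSID 37) FROM MYLIB.MYTABLE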

Reading just 1 column from a file using NumPy's loadtxt() function

I want to read in data from multiple files that I want to use for plotting (matplotlib).
I found a function loadtxt() that I could use for this purpose. However, I only want to read in one column from each file.
How would I do this?
The following command works for me if I read in at least 2 columns, for example:
numpy.loadtxt('myfile.dat', usecols=(2,3))
But
numpy.loadtxt('myfile.dat', usecols=(3))
would throw an error.
You need a comma after the 3 in order to tell Python that (3,) is a tuple. Python interprets (3) to be the same value as the int 3, and loadtxt wants a sequence-type argument for usecols.
numpy.loadtxt('myfile.dat', usecols=(3,))
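For reference, a short sketch of both forms; myfile.dat stands in for the questioner's file, and the bare-integer form relies on newer NumPy releases (roughly 1.11 onward) accepting a single int for usecols:
import numpy as np

# Tuple form: the trailing comma makes (3,) a one-element tuple.
col = np.loadtxt('myfile.dat', usecols=(3,))

# Newer NumPy also accepts a bare integer for a single column.
col = np.loadtxt('myfile.dat', usecols=3)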

Fortran 90: How to correctly read an integer among other real

I have written a Fortran 90 program to filter the text output of another program and convert it to CSV form. The file contains a table with columns of various types (character, real, integer). One column generally contains decimal values (probabilities). BUT, in some rows where the value should be the decimal "1.000", it is actually the integer "1".
I use an "F5.3" specifier to read this column, and I use the same format statement for every row of the table. So when the code finds "1", it reads ".001", because it does not find a decimal point.
What ways could I use to correctly (and generally) read integers among other decimals?
Could I specify "unformatted" input only for a number of "spaces"?
The data edit descriptor Fw.d for floating-point input is normally used with zero d (the d cannot be omitted). A nonzero d is used in the rare case where the floating-point data is stored as scaled integers, or when you apply some unit conversion to the integer values.
You could try list-directed input: use a * instead of a format specifier. This applies to the entire read, not to selected items. Or you could read each line into a string and test its contents to decide how to read it: if the sub-string has a decimal point, use read (string(M:N), '(F5.3)') value; if it doesn't, use a different format, e.g. read it as F5.0. A sketch of this approach follows the P.S. below.
P.S. "Unformatted" means reading binary data without conversion ... it is a direct copy of the data from the file to the data item. "List-directed" is the Fortran term for reading and converting data without using a format specification.
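Here is a minimal sketch of the read-into-a-string approach, assuming the file name table.txt, unit 10, and that the probability field occupies columns m to n (set to 21 and 25 here purely as an example):
program read_mixed
   implicit none
   character(len=200) :: line
   real :: value
   integer, parameter :: m = 21, n = 25   ! assumed column range of the probability field
   integer :: ios

   open (unit=10, file='table.txt', status='old', action='read')
   do
      read (10, '(A)', iostat=ios) line
      if (ios /= 0) exit
      if (index(line(m:n), '.') > 0) then
         read (line(m:n), '(F5.3)') value   ! "1.000" -> 1.000 (a point in the field overrides d)
      else
         read (line(m:n), '(F5.0)') value   ! "    1" -> 1.0 rather than 0.001
      end if
      print *, value
   end do
   close (10)
end program read_mixed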
Well, here's something new to me: Fortran 90 allows a mix of comma and space delimiters for a simple list-directed read:
read(unit,*)v1,v2,v3,v4
with input
1.222 2 , 3.14 , 4
yields
1.222000 2.000000 3.140000 4.000000

Two characters with the same ASCII Code?

I'm trying to clean a recently imported SQL Server 2008 database that has too many invalid characters for my application, and I found different characters with the same ASCII code. Is that possible?
If I execute this query:
select ASCII('║'), ASCII('¦')
I get:
166 166
I need to do a similar work, but with .net code.
If I ask for these char in .net:
? ((int)'║').ToString() + ", " + ((int)'¦').ToString()
I get:
"9553, 166"
Can anybody explain what happens?
Instead of ASCII, use the UNICODE function.
Neither ║ nor ¦ is an ASCII character, so calling ASCII on them converts them incorrectly and returns the wrong value.
Additionally, you need to use Unicode strings when calling the UNICODE function, using the N prefix:
SELECT UNICODE(N'║'), UNICODE(N'¦')
-- Results in: 9553, 166