I'm trying to extract data (using SPUFI) from a DB2 table to a file, with one of the output fields converting a decimal field to the same format as a COBOL comp field.
So e.g. today's date (20141007) would be ..ëõ
The SQL HEX function converts 20141007 to 013353CF, and doing a SELECT of x'013353CF' gives me the desired result, but obviously that's a constant, I'm trying to find an equivalent function.
Basically an inverse of the HEX function.
I've come across a couple of suggestions using user defined functions. Problem is, we've only recently upgraded to DB2 10 and new function mode isn't enabled yet, which means I don't have access to any control functions in a UDF.
I suspect I'm out of luck, but wondering if anyone has any suggestions.
I appreciate this is completely the wrong tool for the job, and it would be easier to just write a COBOL program to do it, but various constraints are preventing that. I'm limited to just SQL functions (and possibly JCL).
I thought I had a solution using a recursive UDF to get around the lack of control functions, but that's not allowed either.
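For reference, here is a minimal restatement of the two pieces described above as already working in SPUFI (SYSIBM.SYSDUMMY1 is just the usual one-row dummy table, and the INT() cast is an assumption about how the value was fed to HEX):
SELECT HEX(INT(20141007)) FROM SYSIBM.SYSDUMMY1;   -- returns the readable characters '013353CF'
SELECT X'013353CF' FROM SYSIBM.SYSDUMMY1;          -- returns the raw COMP bytes, but only as a hard-coded constant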
I am using MS Query to pull data from SQL Server and all is good.
The problem starts when the data comes from the server: I am stuck with the General data type for everything, and there is no way to change the data type in Excel.
The main issue is numbers: in the database the data type is decimal, yet I can do no calculations on it in Excel. Any help would be appreciated.
I am using Excel to execute a stored procedure on the server.
This pulls the data into the following table
Even though the price column in SQL Server is defined as decimal, it becomes the General data type after getting to Excel.
Changing it to number/currency etc. does not change anything.
Also, no errors appear. The data simply comes down, and no matter what changes I apply in Excel, nothing helps: it is all treated as text.
You can do these things.
Select Column
Click Data -> Text to Columns
Follow the wizard
Set the format
Use this official support ticket from Microsoft
The problem in this case was created by myself.
But I suppose it could easily happen to others who are just starting on their path with SQL and Excel.
Here is what happened, as I established after a few days of going in circles.
As there were loads of trailing spaces in the data coming down from the server, I decided to tidy things up.
Without considering the implications, I stuck an RTRIM() on everything.
This caused Excel to treat everything as strings, since RTRIM is a built-in string function and therefore returns strings.
What made things worse is the fact that when using Power Query I was able to transform the data to the desired formats.
Unfortunately MS Query does not seem to be quite as clever as Power Query, hence the issues.
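To illustrate the pitfall with a hedged sketch (the column, table and precision here are made up): RTRIM() forces an implicit decimal-to-varchar conversion, so the driver reports the column as text to Excel; casting back, or simply not trimming numeric columns at all, keeps it numeric.
SELECT RTRIM(price)                         AS price_as_text,    -- implicit conversion to varchar, Excel sees text
       CAST(RTRIM(price) AS DECIMAL(10, 2)) AS price_as_number,  -- cast back, Excel sees a number again
       price                                AS price_untouched   -- simplest fix: only trim the character columns
FROM dbo.Products;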
I have a 64-bit integer field in my Postgres database, which is populated with 64 bit integer numbers. (Non) coincidentally, those numbers are actually 8-chars strings in ASCII format, little endian. For example, a number 5208208757389214273 is a numeric representation of a string "ABCDEFGH": it is 0x4847464544434241 in hex, where 0x41 is A, 0x42 is B, 0x43 is C and so forth.
I would like to convert those numbers purely for display purposes - i.e. find a way to leave them as numbers in the database, but be able to see them as strings when querying. Is there any way to do it in SQL? If not in SQL, is there anything I can do on the server side (install extensions, stored procedures, anything at all) which would allow this? This problem is trivially solvable with any script or programming language, but I do not know how to solve it with SQL.
P.S. And just one more time for some of the trigger-happy, duplicate-hammer-wielding bunch - this is not a question of translating a number like 5208208757389214273 to the string "5208208757389214273" (we have a lot of answers on how to do this, but this is not what I am looking for).
Use to_hex() to get a hexadecimal representation of the number. Then use decode() to turn it into a bytea. (Unfortunately I did not find any direct way from bigint to bytea.) Cast that to text and reverse() it, because of the endianness.
reverse(decode(to_hex(5208208757389214273), 'hex')::text)
ABCDEFGH
The bytea_output must be set to 'escape' for this to work properly -- use SET bytea_output = 'escape';.
(Tested on versions 9.4 and 9.6.)
An alternative way to achieve the same result without using SET is the following:
select reverse(encode(decode(to_hex(5208208757389214273),'hex'),'escape'))
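If this is needed often, the same expression can be wrapped in a small SQL function purely for display (a sketch only, using a made-up function name; it assumes the encoded bytes are plain ASCII and always fill all 8 bytes, since to_hex() drops leading zero bytes and decode() then fails on an odd-length hex string):
CREATE OR REPLACE FUNCTION bigint_to_ascii(n bigint) RETURNS text AS $$
  SELECT reverse(encode(decode(to_hex(n), 'hex'), 'escape'));   -- same steps as above, bundled for reuse
$$ LANGUAGE sql IMMUTABLE;
SELECT bigint_to_ascii(5208208757389214273);   -- ABCDEFGH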
I have a database which has a view created off other views, which are themselves created off other views (a data engineer built the views, not me).
In Hive I can do this, but it's slow, so I want to use Impala:
select * from table limit 5;
In Impala I get an error; I have tried invalidate metadata and refresh with no luck.
"ERROR: AnalysisException: No matching function with signature: lower(BIGINT)."
Why could this happen? I've never seen this type of error before. Is there a way to do this recursively?
show create table;
To begin with, be aware that Hive and Impala are distinct solutions, with distinct SQL parsers, supporting a distinct set of functions and features. A syntax that is valid in Hive may not be valid in Impala. Some table formats defined with Hive may not be supported by Impala (e.g. ORC, or Parquet with a BINARY column).
In this specific case, the Hive documentation appears to match the Impala documentation for function lower() (caveat: check what versions you are using).
But there's a big catch: lower() takes a String and produces a String. It is not a number function. That smells like a gross mistake such as confusing lower() -- convert some text to lowercase -- and floor() -- get the integer value that is equal to or less than a decimal value.
Check with your so-called Data Engineer what he/she was trying to do, and make sure the views were properly tested (or are properly tested after the correction is made). Hive clearly applies some implicit type conversions that enable such queries to run, even though they make no sense and produce goofy results.
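A hedged illustration of the suspected mix-up (the column and table names here are made up): lower() is only defined for strings, so in Impala it must be given text, while floor() is what you would use if numeric truncation was the intent.
SELECT lower(CAST(some_bigint_col AS STRING)) FROM some_table;   -- valid in Impala: lower() now receives a STRING
SELECT floor(some_decimal_col) FROM some_table;                  -- valid: numeric truncation, probably what was meant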
Today I had to use a SQLite database for the first time and I really wondered about the display of a DATETIME column like 1411111200. Of course, internally it has to be stored as some integer value to be able to do math with it. But who wants to see that in a grid output, which is clearly for human eyes?
I even tried two programs, SQLiteStudio and SQLite Manager, and both don't even have an option to change this (at least I couldn't find it).
Of course with my knowledge about SQL it didn't take long to find out what the values mean - this query displays it like I expected:
select datetime(timestamp, 'unixepoch', 'localtime'), * from MyTable
But that's very uncomfortable when working with a GUI tool. So why? Just because? Unix nerds? Or did I just get a wrong impression because I accidentally tried the only two tools that are bad?
(I also appreciate comments on which tools to use or where I can find the hidden settings.)
Probably because sqlite doesn't have a first-class date type — how would a GUI tool know which columns are supposed to contain dates?
The question implies that a column of datatype DATETIME can only hold valid datetimes. But that's not true in SQLite: you can put any number or string value and it will be stored and displayed like it is.
To find out what the most "natural" way for a timestamp in SQLite would be, I created a table like this:
CREATE TABLE test ( timestamp DATETIME DEFAULT ( CURRENT_TIMESTAMP ) );
The result is a display in human-readable format (2014-09-22 10:56:07)! But in fact it is saved as a string, and I cannot imagine any serious software developer who would like that. Any comments?
The original database from the question has datetimes as unix epoch values not because of its table definition, but because the inserted data was like that. And that was probably the best possible way to do it.
So, the answer is, those tools cannot display the datetime in human readable format, because they cannot know how it was encoded. It can be the number of seconds since 1970 or anything else, and it could even be different from row to row. What a mess.
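A small sketch of why the tools cannot know (using a throwaway table): the same DATETIME column happily accepts an integer, an ISO-8601 string, or arbitrary text, and typeof() reveals the per-row storage class.
CREATE TABLE demo (timestamp DATETIME);
INSERT INTO demo VALUES (1411111200);              -- unix epoch seconds
INSERT INTO demo VALUES ('2014-09-22 10:56:07');   -- ISO-8601 text, as CURRENT_TIMESTAMP would produce
INSERT INTO demo VALUES ('not a date at all');     -- also accepted without complaint
SELECT timestamp, typeof(timestamp) FROM demo;     -- integer, text, text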
From Wikipedia:
A common criticism is that SQLite's type system lacks the data
integrity mechanism provided by statically typed columns in other
products. [...] However, it can be implemented with constraints
like CHECK(typeof(x)='integer').
From the authors:
[...] most other SQL database engines are statically typed and so some
people feel that the use of manifest typing is a bug in SQLite. But
the authors of SQLite feel very strongly that this is a feature. The
use of manifest typing in SQLite is a deliberate design decision which
has proven in practice to make SQLite more reliable and easier to use,
especially when used in combination with dynamically typed programming
languages such as Tcl and Python.
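For completeness, the CHECK(typeof(x)='integer') workaround quoted above would look roughly like this, forcing the column to hold only integer (e.g. unix epoch) values:
CREATE TABLE event (ts DATETIME NOT NULL CHECK (typeof(ts) = 'integer'));
INSERT INTO event VALUES (1411111200);             -- accepted
INSERT INTO event VALUES ('2014-09-22 10:56:07');  -- rejected: CHECK constraint failed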
I am looking for an efficient way to use a wild card search on a text (blob) column.
I have seen that it is internally stored as bytes...
The data amount will be limited, but unfortunately my vendor has decided to use this stupid datatype. I would also consider moving everything into a temp table if there is an easy system-side function to modify it - unfortunately something like rpad does not work...
I can see the text value correctly when using the column in the select list or when reading the data via Perl's DBI module.
Unfortunately, you are stuck - there are very few operations that you can perform on TEXT or BYTE blobs. In particular, none of these work:
+ create table t (t text in table);
+ select t from t where t[1,3] = "abc";
SQL -615: Blobs are not allowed in this expression.
+ select t from t where t like "%abc%";
SQL -219: Wildcard matching may not be used with non-character types.
+ select t from t where t matches "*abc*";
SQL -219: Wildcard matching may not be used with non-character types.
Depending on the version of IDS, you may have options with BTS - Basic Text Search (requires IDS v11), or with other text search data blades. On the other hand, if the data is already in the DB and cannot be type-converted, then you may be forced to extract the blobs and search them client-side, which is less efficient. If you must do that, ensure you filter on as many other conditions as possible to minimize the traffic that is needed.
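Purely as a sketch of the BTS route (IDS 11+): the search itself is done with bts_contains() over a BTS index. The exact operator class, and whether the legacy TEXT type must first be migrated to, say, LVARCHAR or CLOB in a work table, should be checked against your IDS version's documentation; the table and column names below are made up.
CREATE INDEX t_work_bts ON t_work (txt bts_lvarchar_ops) USING bts;   -- assumes txt was migrated to LVARCHAR
SELECT txt FROM t_work WHERE bts_contains(txt, 'abc');                -- server-side text search instead of LIKE/MATCHES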
You might also notice that DBD::Informix has to go through some machinations to make blobs appear to work - machinations that it should not, quite frankly, have to go through. So far, in a decade of trying, I've not persuaded people that these things need fixing.