Why does an SDI flowgraph fail when a filter is introduced in a projection? - hana

I am developing a flowgraph in native HANA and I am receiving error ORA-00972 after introducing a filter containing single quotes into the projection node.
The filter is as follows:
"VALID_FROM" >= to_timestamp(to_nvarchar($$MaxDT$$),'yyyymmddhh24miss')
When I change the filter to e.g:
"ID" IN (1,5,6,7,34)
it's working just fine.
I had the same error previously while I was querying a virtual table. The solution there was to shorten the namespace so that namespace + table name + field name did not exceed 30 characters. But I am not sure what the solution is when this error occurs in the flowgraph.
Any help appreciated!
Cheers

The error message is not from HANA but from an Oracle DB.
ORA-00972 means “Identifier too long” - so it may well be that the single-quoted string from the filter condition is mis-interpreted as an identifier in the remote Oracle DB.
Try to escape the single quotes by using two consecutive single quotes ('').
"VALID_FROM" >= to_timestamp(to_nvarchar($$MaxDT$$),''yyyymmddhh24miss'')
Also, reconsider the actual data type of VALID_FROM - it looks as if a character input gets converted to nvarchar and then to timestamp.
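As a quick sanity check (a minimal sketch in HANA SQL, assuming $$MaxDT$$ resolves to a string such as '20240101120000' - the literal below is only a stand-in), you can evaluate the expression against DUMMY first to confirm that the conversion itself works before the filter is pushed down to the remote Oracle source:
SELECT to_timestamp(to_nvarchar('20240101120000'),'yyyymmddhh24miss') FROM DUMMY;
If this returns a proper timestamp, the remaining problem is how the remote source interprets the quoted format string, not the expression itself.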

Related

Data filtered differently in sql and crystal reports

The problem arises when filtering string columns that contain the '-' symbol.
For example, the query below returns ~280 rows:
"SELECT code FROM client WHERE code >= 'M-SOLUTIONS' AND code <= 'MUZIKOS'"
but CR with the record selection below returns only 20 rows:
{client.code} >= 'M-SOLUTIONS' AND {client.code} <= 'MUZIKOS'
If I put 'Lxxx' instead of 'M-SOLUTIONS', the returned data is correct. Any ideas how to overcome this issue? I use a PostgreSQL database over an ODBC connection.
Apparently they use different collations. Some collations will ignore punctuation on a first pass, using it only if the values are otherwise equal. Figure out which collation you want to use, then make sure both CR and PostgreSQL use that one.
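If a plain byte-order comparison is what you want on the PostgreSQL side, one sketch (assuming PostgreSQL 9.1 or later and the client/code names from the question) is to pin the collation directly in the query:
SELECT code
FROM client
WHERE code >= 'M-SOLUTIONS' COLLATE "C"
  AND code <= 'MUZIKOS' COLLATE "C";
The "C" collation compares byte by byte, so the '-' is no longer skipped on the first pass; note that Crystal Reports still applies its own comparison rules whenever the record selection is evaluated locally instead of being pushed down to the server.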

Teradata handling single digit month and day problem

I have the values below coming from a flat file, which may contain single-digit month and day fields:
9/14/2020 07:20:18.630000
7/7/2020 16:24:57.700000
10/24/2019 03:40:52.380000
11/9/2020 20:21:32.420000
Now I need to load this to a column having TIMESTAMP(6) as the data type.
Can someone please help on this? I am using TD SQL Assistant version 16.
SQL Assistant is not a load utility; a load utility such as TPT fully supports dealing with input like this.
Your other post shows that you already use a RegEx to add the missing zeroes and apply the correct format. This indicates bad data in your input. You might try to spot the error in the input file (check how many rows have been loaded and look at the following lines).
Or you can apply TRYCAST, which doesn't fail but returns NULL for bad dates. Unfortunately, it doesn't support FORMAT, so you must rearrange the MDY order to YMD first:
trycast(RegExp_Replace(RegExp_Replace(x,'\b([0-9])\b', '0\1'), '(..).(..).(....)(.*)','\3-\1-\2\4') as timestamp(6))
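For illustration only, a sketch of how that expression might be applied to a staging column (staging_table and src_ts are placeholder names, not from the original post):
SELECT src_ts,
       TRYCAST(RegExp_Replace(RegExp_Replace(src_ts,'\b([0-9])\b', '0\1'),  /* pad single-digit month/day */
                              '(..).(..).(....)(.*)','\3-\1-\2\4')          /* MM/DD/YYYY... -> YYYY-MM-DD... */
               AS TIMESTAMP(6)) AS ts_value
FROM staging_table;
A row like '9/14/2020 07:20:18.630000' becomes '2020-09-14 07:20:18.630000' before the cast; anything that still doesn't fit comes back as NULL instead of failing the request.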

HANA: Unknown Characters in Database column of datatype BLOB

I need help resolving characters of an unknown type from a database field into a readable format, because I need to overwrite this value at database level with another valid value (in the exact format the application stores it in) in order to automate system copy activities.
I have a proprietary application that also allows users to configure it via the frontend. This configuration data gets stored in a table, and the values of a configuration property are stored in a column of type "BLOB". For the value in question, I provide a valid URL in the application frontend (like http://myserver:8080). However, what gets stored in the database is not readable (some square characters). I tried all sorts of HANA conversion functions (HEX, binary), both simple and cascaded (e.g. first to binary, then to varchar), to make it readable. I also tried it the other way around, to make the value that I want to insert appear in the stored format (conversion to BLOB via hex or binary), but this does not work either. I copied the value to the clipboard and compared it against all sorts of character set tables (although I am not sure whether this can work at all).
My conversion tries look somewhat like this:
SELECT TO_ALPHANUM('') FROM DUMMY;
where the quotes would contain the characters in question. I can't even print them here.
How can one approach this and maybe find out the character set that is used by this application? I would be grateful for some more ideas.
What you have in your BLOB column is a series of bytes. As you mentioned, these bytes have been written by an application that uses an unknown character set.
In order to interpret those bytes correctly, you need to know the character set as this is literally the mapping of bytes to characters or character identifiers (e.g. code points in UTF).
Now, HANA doesn't come with a whole lot of options to work on LOB data in the first place and for C(haracter)LOB data most manipulations implicitly perform a conversion to a string data type.
So, what I would recommend is to write a custom application that is able to read out the BLOB bytes and perform the conversion in that custom app. Once successfully converted into a string, you can store the data in a new NCLOB field that keeps it in UTF-8 encoding.
You will have to know the character set in the first place, though. No way around that.
I assume you are on Oracle. You can convert BLOB to CLOB as described here.
http://www.dba-oracle.com/t_convert_blob_to_clob_script.htm
In case of your example try this query:
select UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(<your_blob_value>)) from dual;
Obviously this only works for values below 32767 characters.
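For values larger than that, a sketch along the lines of the linked script (my_table, my_blob_col and the WHERE clause are placeholders, and it assumes the bytes really are in the database character set) is to convert with DBMS_LOB.CONVERTTOCLOB in a PL/SQL block:
DECLARE
  v_blob         BLOB;
  v_clob         CLOB;
  v_dest_offset  INTEGER := 1;
  v_src_offset   INTEGER := 1;
  v_lang_context INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  v_warning      INTEGER;
BEGIN
  SELECT my_blob_col INTO v_blob FROM my_table WHERE id = 1;  -- placeholder names
  DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);
  DBMS_LOB.CONVERTTOCLOB(v_clob, v_blob, DBMS_LOB.LOBMAXSIZE,
                         v_dest_offset, v_src_offset,
                         DBMS_LOB.DEFAULT_CSID, v_lang_context, v_warning);
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(v_clob, 4000, 1));  -- print the first chunk
END;
/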

Why am I getting a "[SQL0802] Data conversion of data mapping error" exception?

I am not very familiar with iSeries/DB2. However, I work on a website that uses it as its primary database.
A new column was recently added to an existing table. When I view it via AS400, I see the following data type:
Type: S
Length: 9
Dec: 2
This tells me it's a numeric field with 7 digits before the decimal point and 2 digits after the decimal point.
When I query the data with a simple SELECT (SELECT MYCOL FROM MYTABLE), I get back all the records without a problem. However, when I try using a DISTINCT, GROUP BY, or ORDER BY on that same column I get the following exception:
[SQL0802] Data conversion of data mapping error
I've deduced that at least one record has invalid data - what my DBA calls "blanks" or "4 O". How is this possible though? Shouldn't the database throw an exception when invalid data is attempted to be added to that column?
Is there any way I can get around this, such as filtering out those bad records in my query?
"4 O" means 0x40 which is the EBCDIC code for a space or blank character and is the default value placed into any new space in a record.
Legacy programs or operations can introduce the decimal data error, for example if the new file was created and filled using the CPYF command with the FMTOPT(*NOCHK) option.
The easiest way to fix it is to write an HLL program (RPG) to read the file and correct the records.
The only solution I could find was to write a script that checks for blank values in the column and then updates them to zero when they are found.
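A rough sketch of that kind of cleanup on DB2 for i (MYTABLE and MYCOL are the placeholder names from the question; HEX() looks at the stored bytes rather than the numeric value, though whether the comparison itself avoids SQL0802 can depend on the release and file definition):
SELECT RRN(t) AS relative_record, HEX(MYCOL) AS stored_bytes
FROM MYTABLE t
WHERE HEX(MYCOL) = '404040404040404040';  -- nine EBCDIC blanks (0x40) for a 9-digit zoned field

UPDATE MYTABLE
SET MYCOL = 0
WHERE HEX(MYCOL) = '404040404040404040';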
If the file has record format level checking turned off [i.e. LVLCHK(*NO)] or is overridden to that, then an HLL program (e.g. RPG, COBOL, etc.) that was not recompiled with the new record format might write out records with invalid data in this column, especially if the new column is not at the end of the record.
Make sure that all programs that use native I/O to write or update records on this file are recompiled.
I was able to solve this error by force-casting the key columns to integer. I changed the join from this...
FROM DAILYV INNER JOIN BXV ON DAILYV.DAITEM=BXV.BXPACK
...to this...
FROM DAILYV INNER JOIN BXV ON CAST(DAILYV.DAITEM AS INT)=CAST(BXV.BXPACK AS INT)
...and I didn't have to make any corrections to the tables. This is a very old, very messy database with lots of junk in it. I've made many corrections, but it's a work in progress.

Error Inserting Entry With Text Column That Contains New Line And Quotes

I have an Informix 11.70 database. I am unable to successfully execute this insert statement on a table.
INSERT INTO some_table(
col1,
col2,
text_col,
col3)
VALUES(
5,
50,
CAST('"id","title1","title2"
"row1","some data","some other data"
"row2","some data","some other"' AS TEXT),
3);
The error I receive is:
[Error Code: -9634, SQL State: IX000] No cast from char to text.
I found that I should add this statement in order to allow newlines in text literals, so I added it above the same query I had already written:
EXECUTE PROCEDURE IFX_ALLOW_NEWLINE('t');
Still, I receive the same error.
I have also read the IBM documentation, which says that I could alternatively allow newlines by setting the ALLOW_NEWLINE parameter in the ONCONFIG file. I suppose that requires administrative access to the server to alter the config file, which I do not have, and I would prefer not to rely on this setting.
Informix's TEXT (and BYTE) columns pre-date any standard, and are in many ways very peculiar types. TEXT in Informix is very different from TEXT found in other DBMS. One of the long-standing (over 20 years) problems with them is that there isn't a string literal notation that can be used to insert data into them. The 'No cast from char to text' is saying there is no explicit conversion from string literal to TEXT, either.
You have a variety of options:
Use LVARCHAR in the table (good if your values won't be longer than a few KiB, because the total row length is approximately 32 KiB). Maximum size of an LVARCHAR column is just under 32 KiB.
Use a programming language which can handle Informix 'locator' structures — in ESQL/C, the type used to hold a TEXT is loc_t.
Consider using CLOB instead. However, this has the same limitation (no string-to-CLOB conversion), though you'd be able to use the FILETOCLOB() function to get the information from a file on the client into the database (and LOTOFILE transfers information from the DB to a file on the client).
If you can use LVARCHAR, that is by far the simplest alternative.
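As a rough sketch of the LVARCHAR route (assuming text_col can be defined as LVARCHAR instead of TEXT - say LVARCHAR(8192) - and that the session still runs the IFX_ALLOW_NEWLINE('t') call from the question so the embedded newlines are accepted), the INSERT then works with an ordinary quoted string and no cast:
INSERT INTO some_table(col1, col2, text_col, col3)
VALUES(5, 50, '"id","title1","title2"
"row1","some data","some other data"
"row2","some data","some other"', 3);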
I forgot to mention an important detail in the question - I use Java and the Hibernate ORM to access my Informix database, so some of the approaches suggested in Jonathan Leffler's answer (the loc_t handling in particular) are unfortunately not applicable. Also, I need to store large data of dynamic length, and I fear an LVARCHAR column would not be sufficient to hold it.
The way I got it working was to follow Michał Niklas's suggestion from his comment and use a PreparedStatement. This could potentially be explained by Informix handling the TEXT data type in its own manner.