While I was selecting a unit from a database table I noticed, via transaction SE16N, that there are two different values for the same field. An unconverted and a converted value. With my SELECT statement, I receive the unconverted one. Do I need to convert this value in order to continue working with it?
First of all, it's probably worth explaining the concept of a "converted value" and an "unconverted value" (better known as "external value" and "internal value").
Internal values are the actual values used by programs and stored in the database; external values are only calculated at display time: on screen, in printouts, and so on.
It's very practical to show a meaningful code, as Legxis explained. For the internal unit-of-measure value "ST" (a unit of measure which indicates that the number is a count of pieces), an English user would prefer to see PCS (from the English "pieces"), while a German user would prefer to see ST (from the German "Stück").
The conversion algorithm is defined at the DDIC domain level (transaction code SE11) via the "conversion routine" field, a 5-character code which determines the conversion function modules that are called automatically at display time. For instance, the unit of measure is tied to the domain MEINS, which has the routine CUNIT, corresponding to the function modules CONVERSION_EXIT_CUNIT_INPUT and CONVERSION_EXIT_CUNIT_OUTPUT.
CONVERSION_EXIT_CUNIT_INPUT converts the external value (displayed) to the internal value (program and database).
CONVERSION_EXIT_CUNIT_OUTPUT converts the internal value (program and database) to the external value (displayed).
These function modules are automatically called by SAP rendering technologies like SAP GUI, SAPscript, Smart Forms, SAP Adobe Forms, BSP, Web Dynpro, etc. The "OUTPUT" function module is also called if you use the ABAP statement WRITE.
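If you ever need to call the exits yourself, they are ordinary function modules. A minimal sketch of the CUNIT pair (the unit values are just examples):

DATA lv_unit_int TYPE meins.        " internal value, e.g. 'ST'
DATA lv_unit_ext TYPE c LENGTH 10.  " external value, e.g. 'PCS'

" External -> internal, e.g. after reading user input
CALL FUNCTION 'CONVERSION_EXIT_CUNIT_INPUT'
  EXPORTING
    input          = 'PCS'
    language       = sy-langu
  IMPORTING
    output         = lv_unit_int
  EXCEPTIONS
    unit_not_found = 1.

" Internal -> external, e.g. before displaying in a custom UI
CALL FUNCTION 'CONVERSION_EXIT_CUNIT_OUTPUT'
  EXPORTING
    input          = lv_unit_int
    language       = sy-langu
  IMPORTING
    output         = lv_unit_ext.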
Note that the "output length" defined for a DDIC domain may be of some importance, because one may define an output length (displayed) larger than the internal length. For instance, the language code is stored internally on one character but displayed on two characters. For instance, in English, the language code "V" (Sweden) is displayed "SW" (Sweden), and the language code "S" (Spain) is displayed "SP" (Spain).
Finally, if you understand the concept well, you should conclude that you usually don't need to convert anything yourself. Doing so is only useful if you want to build an interface which is not one of the SAP-supported technologies mentioned above.
The table rows you SELECT in ABAP only contain the unconverted values. Use these as-is, e.g. to JOIN with other tables or to call methods/function modules. Conversion is only relevant when displaying the data.
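For example, a sketch of a join against the internal value (tables and fields from the material master, purely for illustration):

SELECT m~matnr, m~meins, t~maktx
  FROM mara AS m
  INNER JOIN makt AS t
    ON t~matnr = m~matnr
  WHERE m~meins = 'ST'          " internal value, not the external 'PCS'
    AND t~spras = @sy-langu
  INTO TABLE @DATA(lt_result).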
By the way: these well-intentioned conversions can nonetheless cause problems. Values of type NUMC (numeric characters), for example, often have their leading zeros trimmed/stripped during conversion. But some function modules do not work when these leading zeros are missing.
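A typical workaround is to re-apply the field's conversion routine on the input side before calling such a function module. A minimal sketch with the widely used ALPHA routine and a material number (the values are examples only):

DATA lv_matnr TYPE matnr.

" Pad the external value back to the internal, zero-filled format
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = '4711'
  IMPORTING
    output = lv_matnr.
" lv_matnr now holds '000000000000004711'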
I need help resolving characters of an unknown type from a database field into a readable format, because I need to overwrite this value at the database level with another valid value (in the exact format the application stores it in) to automate system copy activities.
I have a proprietary application that also allows users to configure it via the frontend. This configuration data gets stored in a table, and the values of a configuration property are stored in a column of type "BLOB". For the value in question, I provide a valid URL in the application frontend (like http://myserver:8080). However, what gets stored in the database is not readable (some square characters). I tried all sorts of HANA conversion functions (HEX, binary), both simple and cascaded (e.g. first to binary, then to varchar), to make it readable. I also tried it the other way around, making the value that I want to insert appear in the correct format (conversion to BLOB over hex or binary), but this does not work either. I copied the value to the clipboard and compared it to all sorts of character set tables (although I am not sure whether this can work at all).
My conversion attempts look something like this:
SELECT TO_ALPHANUM('') FROM DUMMY;
where the quotes would contain the characters in question. I can't even print them here.
How can one approach this, and how might one find out which character set this application uses? I would be grateful for some more ideas.
What you have in your BLOB column is a series of bytes. As you mentioned, these bytes have been written by an application that uses an unknown character set.
In order to interpret those bytes correctly, you need to know the character set as this is literally the mapping of bytes to characters or character identifiers (e.g. code points in UTF).
Now, HANA doesn't come with a whole lot of options to work on LOB data in the first place and for C(haracter)LOB data most manipulations implicitly perform a conversion to a string data type.
So, what I would recommend is to write a custom application that is able to read out the BLOB bytes and perform the conversion in that custom app. Once successfully converted into a string, you can store the data in a new NCLOB field that keeps it in Unicode encoding.
You will have to know the character set in the first place, though. No way around that.
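Just to illustrate the approach: if your custom app happens to run on an ABAP stack, the decode step could be sketched like this (the codepage name is pure guesswork; you would try your candidate character sets one by one):

DATA lv_blob TYPE xstring.
" ... fill lv_blob with the raw bytes read from the BLOB column ...

" Try decoding with one candidate character set after another;
" 'UTF-16LE' is only an example guess, not a known answer
DATA(lv_text) = cl_abap_codepage=>convert_from(
                  source   = lv_blob
                  codepage = `UTF-16LE` ).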
I assume you are on Oracle. You can convert BLOB to CLOB as described here.
http://www.dba-oracle.com/t_convert_blob_to_clob_script.htm
In case of your example try this query:
select UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(<your_blob_value>)) from dual;
Obviously this only works for values below 32767 characters.
As is commonly known, SAP does not recommend using fields longer than 255 characters in transparent tables. One should instead use several 255-character fields, wrap the text in LCHR, LRAW, or STRING, or use SO10 texts, etc.
However, while maintaining legacy (and ugly) developments, such a problem often arises: how do you view what is stored in a char500 or char1000 field in the database?
The real-life scenario:
we have a development where some structure is written to and read from a char1000 field in a transparent table
we know the field structure, and parsing the field through CL_ABAP_CONTAINER_UTILITIES=>FILL_CONTAINER_C or SO_STRUCT_TO_CHAR works fine; all fields are filled wonderfully (see the sketch after this list)
displaying the field via SE11/SE16/SE16N shows nothing, as the field is truncated to 255 characters, and to 132 in the debugger, AFAIR
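For illustration, a minimal sketch of that pack/unpack step with a hypothetical structure (the real field layout is of course development-specific):

TYPES: BEGIN OF ty_data,         " hypothetical layout of the char1000 field
         matnr TYPE c LENGTH 18,
         text  TYPE c LENGTH 100,
       END OF ty_data.

DATA ls_data      TYPE ty_data.
DATA lv_container TYPE c LENGTH 1000.

" Structure -> flat character field (before writing to the table)
CALL METHOD cl_abap_container_utilities=>fill_container_c
  EXPORTING
    im_value     = ls_data
  IMPORTING
    ex_container = lv_container.

" Flat character field -> structure (after reading from the table)
CALL METHOD cl_abap_container_utilities=>read_container_c
  EXPORTING
    im_container = lv_container
  IMPORTING
    ex_value     = ls_data.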
Is there any standard tool, transaction, or FM we can use to display such a long field?
In the DBA cockpit (ST04), there is an SQL command line where you can enter "native" SQL commands directly and display the result as an ALV view. With a substring function, you can split a field into several sections, for example:

select substr(sql_text,1,100) s1,
       substr(sql_text,101,100) s2,
       substr(sql_text,201,100) s3,
       substr(sql_text,301,100) s4
from dba_hist_sqltext
where sql_id = '0cuyjatkcmjf0'

PS: every ALV cell is 128 characters maximum.
Not sure whether this tool is available for all supported database software.
There is also an equivalent program named RSDU_EXEC_SQL (in all ABAP-based systems?)
Unfortunately, they won't work for SAP's ersatz tables (clustered tables and so on), as those can be queried only with ABAP Open SQL.
If you have an ERP system at hand, check out transaction PP01 with infotype 1002. Basically, the texts are stored in tables HRP1002 and HRT1002, and a special view with a text editor is created. It looks like this: http://www.sapfunctional.com/HCM/Positions/Page1.13.jpg
In the debugger you can switch the view to e.g. HTML and you should see the whole string, but editing is limited, as far as I know, to a certain number of characters.
Since Keen is not strongly typed, I've noticed it is possible to send data of different types into the same property. For instance, some events may have a property whose value is a String (sent surrounded by quotes), and some whose value is an integer (sent without quotes). In the case of mathematical operations, what is the expected behavior?
Our comparator will only compute mathematical operations on numbers. If you have a property whose values are mixed, the operation will only apply to the numbers, strings will be ignored. You can see the values in your property by running a select_unique query on that property as the target_property, then (if you're using the Explorer) selecting JSON from the drop-down in the top-right. Any values you see there that are surrounded by quotes will be ignored by a mathematical query type (minimum, maximum, median, average, percentile, and sum).
If you are just starting out, and you know you want to be able to do mathematical operations on this property, we recommend making sure that you always send integers as numbers (without quotes). If you really want to keep your dataset clean, you can even start a new collection once you've made sure you are no longer sending any strings.
Yes, you're correct: Keen can accept data of different types as the value for your properties. An example of Keen's lenient data typing is that a property such as VisitorID can contain both numbers (e.g. 14558) and strings (e.g. "14558").
This article from the Keen site is useful for seeing where you can check data types: https://keen.io/docs/data-collection/data-modeling-guide-200/#check-for-data-type-mismatch
I'm trying to import a CREATE TABLE statement in NexusDB.
The table name contains some German umlauts, and so do some field names, but I receive an error saying there are invalid characters in my statement (obviously the umlauts...).
My question now is: can somebody offer a solution or any ideas for solving my problem?
It's not so easy to just change the umlauts into equivalent transliterations like ä -> ae or ö -> oe, since our application has fixed table names that every customer currently uses.
It is not a good idea to use characters outside what is normally permitted in the SQL standard. This will bite you not only in NexusDB, but in many other databases as well. Take special note that there is a good chance you will also run into problems when you want to access data via ODBC etc, as other environments may also have similar standard restrictions. My strong recommendation would be to avoid use of characters outside the SQL naming standard for tables, no matter which database is used.
However... having said all that, given that NexusDB is one of the most flexible database systems for the programmer (it comes with full source), there is already a solution. If you add an "extendedliterals" define to your database server project, a larger set of characters is considered valid. For the exact change this enables, see the nxcValidIdentChars constant in the nxllConst.pas unit. The constant can also be changed if required.
I have the following XSD/XML type definition. It is used by a number of business units/applications.
<xsd:simpleType name="NAICSCodeType">
  <xsd:annotation>
    <xsd:documentation>NAICSCode</xsd:documentation>
  </xsd:annotation>
  <xsd:restriction base="xsd:integer">
    <xsd:minInclusive value="000001"/>
    <xsd:maxInclusive value="999000"/>
  </xsd:restriction>
</xsd:simpleType>
As this one is defined with the "integer" data type, it strips the leading zeros of the input. E.g.: 0078 becomes 78 after parsing.
We need the input passed through as-is, without stripping the leading zeros, e.g. 0078 remains 0078 after parsing.
The ideal fix would be to change the restriction base from integer to string, but that is a non-starter due to the buy-in needed from other groups.
Is there a way to redefine the above data type to achieve the desired outcome?
How do I do it?
Books and the net don't seem to have helped much either, so I am starting to question whether this is theoretically possible at all.
It sounds as if the values in question are not in fact integers, but strings consisting only of numeric digits. Why does the schema say that they are integers if 78 and 078 and 0078 are three distinct values instead of three ways of naming the same value?
You can of course restrict xs:integer by requiring leading zeroes in the lexical space, or a fixed number of digits (for instance with a facet like <xsd:pattern value="[0-9]{6}"/>). But that is unlikely to have any effect on the way software reading the document re-serializes it or passes values on to other software.
In theory, there shouldn't be; and as far as I know, there are no out-of-the-box XML serializers that could be configured to produce what you described. Leading zeroes and padding whitespace are remnants from the fixed-length-record era (your example would be a PIC 9(6) in a COBOL copybook).
Depending on your platform, you might be able to create custom serializers. In my shop, I would argue against that as just plain wrong.
If I were forced to do it, I would simply use a "private" variation of the XSD (based on string), implement whatever formatting is needed on my side, and be done with it. "Private" means not sharing the XSD artifact you use internally to generate your code with the other groups; this could produce the "input" you refer to, and the "refactoring" of the schema could be done with minimal overhead.
I am suggesting it simply because having to put up with this is an indication that in your environment there are obviously bigger problems to deal with, starting with not necessarily understanding how to properly bridge XML with legacy systems (a wild guess, of course).