I'm creating a simple app which takes in an XML file, maps it to another schema, converts it to a text file, and drops it in a folder.
I have created a schema using the Flat File Wizard and set up a send pipeline.
My problem is that in the flat file I want to add a pad character before each element, e.g. "Helloworld,"hey,"12
What's the best way to do this? Is it best to do it during the map from the source schema to the destination schema (add a " before each element during the map)? In that case, wouldn't all elements in the destination schema need to be of type 'string'? Is there a better way to do this?
That's a rather odd requirement; I would double-check it.
A simple way to do this would be to set the infix field delimiter of the row elements to ," (a comma followed by a quote) instead of just a comma.
To handle the leading ", I think a dummy field with a default value of " would be the best option.
I want to delete the rest of a loaded CSV file based on the occurrence of a string.
Remove(Row, RowCnd(Interval, Pos(Top, findMeThePositionOfaGivenString('TeddyBear')),
Pos(Bottom, 1), Select(1, 0)))
Or just any approach to dynamically delete a range of rows!
If you're doing this during the data import stage, then I would recommend something like:
load yourstuff
from yourfile
where index(givenstring, 'TeddyBear') = 0;
index() returns the position of the search string within the larger string, e.g. index('ABC','BC') = 2, so index() = 0 means the string does not exist in the searched text. Be careful with capitalisation, as it will honour that; use upper() or lower() to remove that kind of confusion.
I hope I understood your request.
When we started using a Dojo List Text Box in one of our applications, I came across the problem that this Dojo control seems to have a built-in delimiter, automatically splitting every string that contains a comma into extra array items.
Code to verify this behaviour:
<xe:djextListTextBox id="djextListTextBox1"></xe:djextListTextBox>
<xe:valuePicker id="valuePicker1" for="djextListTextBox1">
    <xe:this.dataProvider>
        <xe:simpleValuePicker>
            <xe:this.valueList><![CDATA[11111
222,22
33333]]></xe:this.valueList>
        </xe:simpleValuePicker>
    </xe:this.dataProvider>
</xe:valuePicker>
I managed to resolve the situation by manually defining another delimiter,
multipleSeparator="|"
which seems to override the default delimiter, but I would still be very interested in verification of this finding and in experts' tips on how to handle this control properly, for future reference.
Yes, it uses "," as the default delimiter.
It is defined in the Dojo widget source code of _ListTextBox.js (in the com.ibm.xsp.extlib.controls package, \resources\web\extlib\dijit folder). This is the base widget for several components (e.g. ListTextBox, NameTextBox, etc.), and the multiple-item separator (msep) defaults to ",".
Basically, these components keep the value in a hidden input box and submit that value. Internally, they convert the submitted value into a vector and store it into the data binding. So as long as the declared delimiter does not appear in your value list, you may use any delimiter.
One problem I had is with "\n", because I experienced some issues with it in the past. Using ";" or "," is no problem with ListTextBox. However, NameTextBox doesn't work with any delimiter other than ",". That's no big deal, because it only holds name elements. If you use ",", this component keeps values correctly but does not render them well.
I want to manipulate a large text file, which arrives as plain TEXT, and I want to use Smooks to do it. The text file contains a large number of lines, and from each line I have to split out characters and get information from them.
E.g. I do the following in Java:
row.substring(0, 4)
row.substring(4, 64)
I have to convert the text content to a CSV file.
Can I do the exact same string manipulation in Smooks too (that is, can I do it in the Smooks configuration)? I believe I can use fixed-length processing for that?
How do I add an IF/ELSE condition in a Smooks configuration?
Like this in Java:
if (row.length() == 900) {
    //DO
} else {
    //DO
}
I can do the string manipulation using the fixed-length reader [1], but I still cannot find a way to do a condition check, e.g. if/else.
[1]http://www.smooks.org/mediawiki/index.php?title=V1.4:Smooks_v1.4_User_Guide#XML
If the format does not fit the flatfile reader, then you might be able to use the regex reader: https://github.com/smooks/smooks/tree/v1.5.1/smooks-examples/flatfile-to-xml-regex/
As for the conditional stuff... you really need to bind the data fragments into a Java model of some sort (real or virtual) and then conditionally process those fragments, either by adding condition elements to the visitors being applied, or by routing the fragments to another process that processes them in parallel, which is a far better way of handling a huge data stream.
I am new to ABAP.
I have a requirement in ABAP. On my presentation server there is a header text file, and I want to upload the data from that text file into a header table. However, the custom table has a different structure from the text file.
It includes 4 extra fields: PO_CREATED_DATE, PO_CREATED_BY, PO_CHANGED_DATE, PO_CHANGED_BY.
These fields have to be populated from our report program using sy-datum and sy-uname.
In this scenario, we have to check: if the data already exists, then populate PO_CHANGED_DATE and PO_CHANGED_BY; if the data is not there, then populate PO_CREATED_DATE and PO_CREATED_BY.
Please let me know the logic...
First load the file into an internal table with only one very long field (long enough to hold at least the longest possible line in the file). Then loop over that itab and split each line using the separator that is used in the file. Split the contents into a work area that contains all of your fields, including the 4 extra fields that may or may not be present in the file. Make sure to clear the work area before splitting the line into it, then append the work area to an itab with the same structure and continue with the next line.
After that, loop over that second itab and check for lines where your 4 extra fields are initial; those are the lines where you need to add the data in code. After that, do whatever you need to do with the data in the itab. A rough sketch of the split loop is shown below.
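A minimal sketch of that split loop, assuming a tab-delimited file; the type, table and field names (ty_raw, lt_raw, lt_header, zpo_header, ebeln, bukrs, lifnr) are placeholders only and not from the original post:

types: begin of ty_raw,
         line type c length 1000,  " long enough for the longest possible line
       end of ty_raw.

data: lt_raw    type standard table of ty_raw,
      ls_raw    type ty_raw,
      lt_header type standard table of zpo_header,  " same structure as the custom header table
      ls_header type zpo_header.

* lt_raw is assumed to have been filled by gui_upload already
loop at lt_raw into ls_raw.
  clear ls_header.
  " split one raw line into the fields of the work area
  split ls_raw-line at cl_abap_char_utilities=>horizontal_tab
        into ls_header-ebeln ls_header-bukrs ls_header-lifnr.
  append ls_header to lt_header.
endloop.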
I uploaded the text file header data to it_input1 using gui_upload, but it_input1 does not have the 4 extra fields. I declared another internal table, it_header, which has the same structure as the header custom table. Now I want to check whether the data in it_input1 already exists or not. If it exists, populate it_header-po_changed_date and it_header-po_changed_by; otherwise, populate it_header-po_created_date and it_header-po_created_by.
Take a look at the "Pattern" button at the top. Select ABAP Objects and press Enter.
Now you can supply the class and method you want to call.
CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
GUI_UPLOAD is a static method. If you are new, that is the easiest way to see which parameters must be supplied. With forward navigation (double-click) you can check the signature for typing the parameter variables.
Then you just need to convert your data (e.g. with SPLIT). I can only recommend using the F1 help. A rough sketch of the call and of the created/changed logic follows.
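A minimal sketch under a few assumptions: the file path, the custom table zpo_header and its key field ebeln are placeholder names, and the exact GUI_UPLOAD parameters should be verified via the forward navigation mentioned above.

data: lv_filename type string value 'C:\temp\header.txt',  " placeholder path
      lt_raw      type standard table of string,
      ls_header   type zpo_header,                          " placeholder structure
      lv_ebeln    type zpo_header-ebeln.

call method cl_gui_frontend_services=>gui_upload
  exporting
    filename = lv_filename
    filetype = 'ASC'
  changing
    data_tab = lt_raw
  exceptions
    others   = 1.
if sy-subrc <> 0.
  message 'Upload failed' type 'E'.
endif.

" after splitting a raw line into ls_header, decide between the created and changed fields
select single ebeln from zpo_header into lv_ebeln
       where ebeln = ls_header-ebeln.
if sy-subrc = 0.
  " record already exists
  ls_header-po_changed_date = sy-datum.
  ls_header-po_changed_by   = sy-uname.
else.
  " new record
  ls_header-po_created_date = sy-datum.
  ls_header-po_created_by   = sy-uname.
endif.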
Kind regards!
Is there any way of programmatically getting the value of a Text Symbol at runtime?
The scenario is that I have a simple report that calls a function module. I receive an exported parameter in variable LV_MSG of type CHAR1. This indicates a certain status message created in the program, for instance F (Fail), X (Match) or E (Error). I currently use a CASE statement to switch on LV_MSG and fill another variable with a short description of the message. These descriptions are maintained as text symbols that I retrieve at compile time with text-MS# where # is the same as the possible returns of LV_MSG, for instance text-MSX has the value "Exact Match Found".
Now it seems to me that the entire CASE statement is unnecessary, as I could just assign to my description variable the value of the text symbol with ID 'MS' + LV_MSG (pseudocode; I would use CONCATENATE). My issue is how I can find a text symbol based on the string representation of its ID at runtime. Is this even possible?
If it is, my code would look cleaner and I wouldn't have to update my actual code when new messages are added in the function module, as I would simply have to add a new text symbol. But would this approach be any faster or would it in fact degrade the report's performance?
Personally, I would probably define a domain and use the fixed values of the domain to represent the values. This way, you would even get around the string concatenation. You can use the function module DD_DOMVALUE_TEXT_GET to easily access the language-dependent text of a domain value.
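A rough sketch of such a lookup, assuming a hypothetical domain ZSTATUS that carries the fixed values F/X/E; the parameter names are from memory, so verify them against the function module signature in SE37:

data ls_dd07v type dd07v.

call function 'DD_DOMVALUE_TEXT_GET'
  exporting
    domname  = 'ZSTATUS'   " hypothetical domain holding the fixed values
    value    = lv_msg
    langu    = sy-langu
  importing
    dd07v_wa = ls_dd07v.

" ls_dd07v-ddtext now holds the language-dependent text of the domain value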
To access the text elements of a program, use a function module like READ_TEXT_ELEMENTS.
Be aware that generic programming like this will definitely slow down your program. Whether it would make your code look cleaner is in the eye of the beholder - if the values change rarely, I don't see why a simple CASE statement should be inferior to some generic text access.
Hope I understand you correctly but here goes. This is possible with a little trickery, all the text symbols in a report are defined as variables in the program (with the name text-abc where abc is the text ID). So you can use the following:
data: lt_all_text type standard table of textpool with default key,
lsr_text type ref to textpool.
"Load texts - you will only want to do this once
read textpool sy-repid into lt_all_text language sy-langu.
sort lt_all_text by key.
"Find a text symbol; the field KEY is the text ID without TEXT-,
"so pass e.g. 'MSX' - the text itself is then in lsr_text->entry
read table lt_all_text with key key = i_wanted_id
     reference into lsr_text binary search.
If you want the address you can add:
field-symbols: <l_text> type any.
data l_name type string.
data lr_address type ref to data.
concatenate 'TEXT-' lsr_text->key into l_name.
assign (l_name) to <l_text>.
if sy-subrc = 0.
  get reference of <l_text> into lr_address.
endif.
As vwegert pointed out, this is probably not the best solution; for error handling, rather use message classes or exception objects. It is useful in other cases though, so now you know how.