Read spreadsheet data with cell length > 32 characters into itab

I am trying to read an Excel file into my internal table with cell values longer than 32 characters. I am using the function KCD_EXCEL_OLE_TO_INT_CONVERT to read the files in.
I have tried copying the function module and its table using SE37 and SE80, but it will not let me save the table with a field named ROW.
Is there a better function module I'm not seeing, or is there a way to make a Z_ copy of the table and FM so I can change the length of the value column in a kcde_cells-formatted table?
My program does everything correctly except reading the 33rd and subsequent characters of each cell, so I know the rest of the functionality is fine. I just need the read-in value to be longer to accommodate longer cell contents.
Edit: Adding the code I used to upload the file.
CALL FUNCTION 'KCD_EXCEL_OLE_TO_INT_CONVERT'
  EXPORTING
    filename                = infile "Input file.
    i_begin_col             = st_col
    i_begin_row             = st_row
    i_end_col               = e_col
    i_end_row               = e_row
  TABLES
    intern                  = ttab "Internal table for storing the Excel data.
  EXCEPTIONS
    inconsistent_parameters = 1
    upload_ole              = 2
    OTHERS                  = 3.

IF sy-subrc <> 0.
  FORMAT COLOR COL_BACKGROUND INTENSIFIED.
  WRITE: / 'Error Uploading file'.
  EXIT.
ENDIF.

IF ttab[] IS INITIAL. "Internal table is empty.
  FORMAT COLOR COL_BACKGROUND INTENSIFIED.
  WRITE: / 'No Data Uploaded'.
  EXIT.
ELSE.
  SORT ttab BY row col.
  LOOP AT ttab.
    MOVE ttab-col TO index.
    ASSIGN COMPONENT 'ROW' OF STRUCTURE itab TO <fs>.
    MOVE ttab-row TO <fs>.
    ASSIGN COMPONENT 'COL' OF STRUCTURE itab TO <fs>.
    MOVE ttab-col TO <fs>.
    ASSIGN COMPONENT 'VALUE' OF STRUCTURE itab TO <fs>.
    MOVE ttab-value TO <fs>.
    APPEND itab.
    CLEAR itab.
  ENDLOOP.
ENDIF.

As I stated in a similar question, use FM FILE_READ_AND_CONVERT_SAP_DATA. It allows reading cells with a length of up to 256 characters.
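A rough sketch of the call follows. I have not verified this FM's exact interface, so treat every parameter name below as an assumption and check the actual signature in SE37 first:

* Unverified sketch: the parameter names (i_filename, i_fileformat,
* i_tab_receiver) are assumptions - confirm the interface in SE37.
DATA: BEGIN OF ls_line,
        text TYPE c LENGTH 256,
      END OF ls_line,
      lt_data LIKE STANDARD TABLE OF ls_line.

CALL FUNCTION 'FILE_READ_AND_CONVERT_SAP_DATA'
  EXPORTING
    i_filename     = 'C:\upload.xls' "example path
    i_fileformat   = 'XLS'
  TABLES
    i_tab_receiver = lt_data
  EXCEPTIONS
    OTHERS         = 1.
IF sy-subrc <> 0.
  WRITE: / 'Error reading file'.
ENDIF.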

Related

I want to write the contents of row 0 as the column name. What should I do? (pandas columns problem)

In the df structure below (screenshot omitted), I want to write the contents of row 0 as the column names. What should I do?
The actual number of columns is very large (more than 50).
This might help
header = df.iloc[0]
new_df = pd.DataFrame(df.values[1:], columns=header)
It looks like the input file you read your DataFrame from contained column names
in its first row.
You probably read your DataFrame by calling either read_csv or read_fwf.
Both of these functions have a header parameter, which you probably set to
None, in which case:
column names are created as consecutive numbers, starting from 0,
the first row of the input file is read as a regular data row.
If I am right, change the way you read your DataFrame.
Maybe it is enough to remove the header parameter (its default value
of infer is sufficient).
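If that is the case, here is a minimal sketch of the difference, using a hypothetical data.csv whose first row holds the column names:

import pandas as pd

# header=None: the first file row becomes a regular data row and
# columns are numbered 0, 1, 2, ...
df_numbered = pd.read_csv("data.csv", header=None)

# Default header="infer" (equivalent to header=0 here): the first
# file row supplies the column names.
df_named = pd.read_csv("data.csv")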

Importing Excel to internal table with same layout

I know of function module ALSM_EXCEL_TO_INTERNAL_TABLE. This FM creates an internal table with three columns (row, column, value). But I want to create an internal table which has the same layout as my Excel sheet. How can I achieve this?
You can use the class cl_mass_spreadsheet_service if you are uploading the Excel file in the foreground. See my example code below:
DATA:
  lv_file   TYPE if_mass_spreadsheet_types=>file_name,
  lt_result TYPE STANDARD TABLE OF zsd_salesorder_create. "your result table

lv_file = 'C:\some_file.xlsx'.

cl_mass_spreadsheet_service=>fill_table(
  EXPORTING
    iv_file           = lv_file    "full path + name of the file
    iv_from_file      = abap_true  "use to upload from Excel/CSV
    iv_from_clipboard = abap_false "use to copy directly from the clipboard
    iv_tabname        = 'Order_Create' "can be whatever
  CHANGING
    ct_table          = lt_result  "if ct_table has the same column names as the Excel file, the order of the columns does not matter
).
If you upload the data with FM ALSM_EXCEL_TO_INTERNAL_TABLE, you can LOOP through the internal table this FM uses (the one with row, column, value as you mentioned) and fill your own internal table (which looks like the Excel sheet) accordingly, as sketched below.
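A minimal sketch of that loop; alsmex_tabline is the FM's row/col/value line type, while zmy_sheet_line, lt_target, and wa_target are hypothetical names for a structure whose component order matches the sheet's columns:

DATA: lt_excel  TYPE STANDARD TABLE OF alsmex_tabline, "filled by the FM
      wa_excel  TYPE alsmex_tabline,
      lt_target TYPE STANDARD TABLE OF zmy_sheet_line, "hypothetical target
      wa_target TYPE zmy_sheet_line,
      lv_index  TYPE i.
FIELD-SYMBOLS <lv_comp> TYPE any.

SORT lt_excel BY row col.
LOOP AT lt_excel INTO wa_excel.
  "Component position in the work area = Excel column number.
  lv_index = wa_excel-col.
  ASSIGN COMPONENT lv_index OF STRUCTURE wa_target TO <lv_comp>.
  IF sy-subrc = 0.
    <lv_comp> = wa_excel-value.
  ENDIF.
  AT END OF row. "last cell of this Excel row reached
    APPEND wa_target TO lt_target.
    CLEAR wa_target.
  ENDAT.
ENDLOOP.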
You could use cl_mass_spreadsheet_service=>import_from_file as well, but without a DDIC structure.
Unfortunately, these methods will actually open and show Excel...

modify (encrypt/decrypt) cell values in rows of datagridview

I've followed this article to use a datagridview to manage data that will be saved into an XML file: http://www.codeproject.com/Articles/32542/Using-XML-as-datagridview-Source
The data will be a list of usernames and passwords.
As such, I need to step through each cell in the columns for 'username' and 'password', and replace the value of the cell with the result of a call to a function that would encrypt or decrypt the value of the cell.
On form_load, after I populate the table with data from the XML file, I want to cycle through these cells to decrypt them, and on form close/save, I want to cycle through each cell and encrypt the values before they are written to file.
I have a function written to encrypt the data; the part I'm stuck on is how to step through every cell in the 'username' and 'password' columns of DataGridView1 (as an example) and update each cell with the value returned by a function.
I apologize for not having a code example for this question; I do not know how to do this, so I haven't been able to put together a bit of code to try and fail at.
I imagine it will be something simple like 'For each cell in (whatever statement returns the cells in a given column of the DataGridView), ...', but I'm not sure.
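For reference, stepping through the cells of named columns directly on the grid would look something like this (the column names username/password and the Decrypt function are assumed from the question):

' Decrypt the username and password cells after loading from XML.
For Each row As DataGridViewRow In DataGridView1.Rows
    If Not row.IsNewRow Then
        row.Cells("username").Value = Decrypt(CStr(row.Cells("username").Value))
        row.Cells("password").Value = Decrypt(CStr(row.Cells("password").Value))
    End If
Next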
Create a DataSet and use the DataSet.ReadXml method to read the XML data,
then bind the relevant DataTable from the DataSet as the data source for the DataGridView; DataTable rows are easier to manipulate.
Use the decryption function in the DataGridView's CellFormatting event,
and before closing, loop over DataTable.Rows:
For Each row As DataRow In dtDataTable.Rows
    row("Pass") = Encrypt(CStr(row("Pass")))
Next
Encrypt the password before saving it, then save the DataSet to the XML file using the WriteXml method.
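A minimal sketch of that round trip, assuming the file name users.xml, the column names username/password, and the Encrypt/Decrypt functions from the question:

Dim ds As New DataSet()

' Load: read the XML file, decrypt the sensitive columns, then bind.
ds.ReadXml("users.xml")
Dim dt As DataTable = ds.Tables(0)
For Each row As DataRow In dt.Rows
    row("username") = Decrypt(CStr(row("username")))
    row("password") = Decrypt(CStr(row("password")))
Next
DataGridView1.DataSource = dt

' Save: encrypt the same columns, then write the XML back out.
For Each row As DataRow In dt.Rows
    row("username") = Encrypt(CStr(row("username")))
    row("password") = Encrypt(CStr(row("password")))
Next
ds.WriteXml("users.xml")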

Format cell data type when data added with worksheet.add_table

First, I love xlsxwriter. I use both the Python and Perl modules. Thanks so much to John M.
When creating a table using add_table(), all the data cells are formatted as text, even if the data contains only integers. An integer viewed as text causes a small green triangle to appear in the upper left of each cell when viewed in Excel.
Is there any way to go back and modify cell data types after adding data with add_table()?
Here is the tidbit of code doing the add_table() :
worksheet = self.add_worksheet(name)
worksheet.header = header
tableinfo = {
    'data': data,
    'columns': columns
}
lastcol = scol + (len(header) - 1)
lastrow = srow + len(data)
worksheet.add_table(srow, scol, lastrow, lastcol, tableinfo)
When creating a table using add_table(), all the data cells get data formatted as text. Even if only integers in the data.
That shouldn't be the case. The add_table() method uses the write() method which writes the correct Excel data type based on the Python data type.
You can see that it works as expected using the table example in the docs.
So, if you are seeing green warning triangles, it is probably because you have numeric data stored as strings in your Python code.
If you convert your sample code to a working example we could verify that.
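As a standalone illustration of that point (output.xlsx is an arbitrary name): integers written through add_table() come out as Excel numbers, while the same values as Python strings come out as text and trigger the warning triangles.

import xlsxwriter

workbook = xlsxwriter.Workbook("output.xlsx")
worksheet = workbook.add_worksheet()

# Integers are written as Excel numbers: no green warning triangles.
numeric_data = [[1, 2], [3, 4]]
# The same values as strings are written as text and will warn.
text_data = [["1", "2"], ["3", "4"]]

columns = [{"header": "A"}, {"header": "B"}]
worksheet.add_table(0, 0, 2, 1, {"data": numeric_data, "columns": columns})
worksheet.add_table(4, 0, 6, 1, {"data": text_data, "columns": columns})
workbook.close()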

Parse Text File with Variable Fields Vb.Net

A text file that I process has changed in the way data is formatted, so it's time to update the code that parses it. The old file had a fixed number of lines and fields per record, so parsing it by position was easy; of course, that is no longer the case (the ~ indicates a new line, and the * is the field separator):
~ENT*1*2J*34*111223333
~NM1*IL*1*SMITH*JOHN*A***N*123456789
~RMR*IK*H62XX/PAY/1234567/20150103**12345.67
~REF*ZZ*MEDPM/M/12345.67
~REF*LU*40/CSWI
~DTM*582****RD8*20150101-20150131
~ENT*2*2J*34*222334444
~NM1*IL*1*DOE*JANE*S***N*234567891
~RMR*IK*H62XX/PAY/1234567/345678901**23456.78
~REF*LU*40/CSWI
~DTM*582****RD8*20141211-20141231
~ENT*3*2J*34*333445555
~NM1*IL*1*DOE*JOHN****N*3456789012
~RMR*IK*H62XX/PAY/200462975/20150103**45678.90
~REF*ZZ*MEDPM/M/3456.78
~REF*LU*40/CSWI
~DTM*582****RD8*20150101-20150131
~ENT*4*2J*34*444556666
~NM1*IL*1*SMITH*JANE*D***N*456789012
~RMR*IK*H62XX/PAY/567890123/678901234**6789.01
~REF*ZZ*MEDPM/M/6789.01
~REF*LU*40/CSWI
~DTM*582****RD8*20150101-20150131
~ENT*5*2J*34*666778888
~NM1*IL*1*SMITH*JON*J***N*8901234
~RMR*IK*H62XX/PAY/56789012/67890123**5678.90
~REF*ZZ*MEDPM/M/5678.90
~REF*LU*40/CSWI
~DTM*582****RD8*20150101-20150131
~ENT*6*2J*34*777889999
~NM1*IL*1*DOE*BOB*E***N*567890123
~RMR*IK*H62XX/PAY/34567890/45678901*5678.90
~REF*LU*40/CSWI
~DTM*582****RD8*20141210-20141231
~RMR*IK*H62XX/PAY/1234567890/2345678901**6789.01
~REF*ZZ*MEDPM/M/6789.01
~REF*LU*40/CSWI
~DTM*582****RD8*20150101-20150131
What is the best way to parse this data? Is there a better way than using StreamReader?
String.Split is your friend.
If the file is not too large, the simplest approach would be to:
Read the file contents into a string variable (File.ReadAllText).
Split the "lines" (lines = allText.Split("~"c)).
Loop through the lines. For each line:
Split the line into fields (fields = line.Split("*"c))
Process the field values. You'll probably want a big Select Case statement on fields(0), branching on the first field of each line (see the sketch below).
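A minimal sketch of that dispatch; the handler bodies are placeholders:

Dim allText As String = System.IO.File.ReadAllText("data.txt")
For Each line As String In allText.Split("~"c)
    If line.Trim().Length = 0 Then Continue For
    Dim fields As String() = line.Split("*"c)
    Select Case fields(0)
        Case "ENT"
            ' Start of a new record; fields(4) holds the ID.
        Case "NM1"
            ' Name segment: the name parts follow in later fields.
        Case "RMR", "REF", "DTM"
            ' Remaining segment types for the current record.
    End Select
Next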
You can get this into a jagged array fairly easily:
' Dynamic structure to hold the data as we go.
Dim data As New List(Of String())

' Break the text into lines at each ~ delimiter.
Dim lines = System.IO.File.ReadAllText("data.txt").Split("~"c)

' Process each line.
For Each line As String In lines
    ' Break down the components of each line.
    data.Add(line.Split("*"c))
Next

' Produce a jagged array. Not really needed, as you can just use data if you want.
Dim dataArray = data.ToArray()

Now just iterate through the structure and process the data accordingly.
If you need to ensure your data always has a specific number of indexes (for example, some lines have 5 fields supplied, but you expect there to always be 8), you can adjust the data.Add command like so:

' Ensure there are always at least 8 indexes for each line.
' This will insert blank (String.Empty) values into the array indexes if a line of data omits certain values.
data.Add((line & Space(8).Replace(" ", "*")).Split("*"c))