Fixing SQL length error - sql

I'm trying to use the OpenEdge JDBC connector to pull data from an existing Progress DB, but I'm having column width issues.
I already know about the dbtool option to fix the width; I need to call dbtool from a 4GL script.
All the input values must be defined in the script.
Is this possible? If it is, please provide a sample script.

Here's an example from the official knowledge base (see the link below for a complete description).
========== PROGRAM LISTING FOLLOWS ==============
FOR EACH _file NO-LOCK WHERE _Tbl-Type = "T":
    /* Write the answers to dbtool's prompts (SQL Width Scan w/ Fix Option);
       the fifth value is the current table's number. */
    OUTPUT TO VALUE("input.txt").
    PUT UNFORMATTED "9~n2~n1~n20~n" + STRING(_file-number) + "~n0".
    OUTPUT CLOSE.
    /* Feed the answer file to dbtool, then keep its output per table. */
    OS-COMMAND SILENT VALUE("dbtool Sports2000 < input.txt").
    OS-RENAME VALUE("dbtool.out") VALUE("dbtool_" + _file-name).
END.
========= example of the input file created by the above script =======
9
2
1
20
20
0
========= example output for a single table ===========
Total records read: 0
SQLWidth errors found: 0, Date errors found: 0
SQLWidth errors fixed: 0
See the complete example and a fuller description in the Progress Knowledge Base.

Why is FF_5 not posting EBS records to subledgers?

I'm trying to post documents through tcode FF_5 (electronic bank statements) in SWIFT MT940 international format, with the immediate posting parameter. Bank accounting posting works fine, but subledger posting doesn't work correctly.
After debugging, I found that the document is posted by FM 'POSTING_INTERFACE_DOCUMENT'. In the return table t_bapiret2 I get the message "Batch Input for screen SAPLFCPD 0100 does not exist" (Type: S, ID: 00, NR: 344). When I post without background processing, I have to enter the customer's name into field BSEC-NAME1 on this screen, and then it posts fine.
I want to automate this process. How should I pass data to the ftpost[] or bdcdata[] tables to inject the customer name? I tried various approaches in the debugger, but none of them worked for me.
Sample BDCDATA[] record that I created:
ft-program  = 'SAPLFCPD'.
ft-dynpro   = '0100'.
ft-dynbegin = 'X'.
APPEND ft.
CLEAR ft.
ft-fnam = 'BSEC-NAME1'.
ft-fval = 'TEST'.
APPEND ft.
EDIT:
Sample bank statement:
:20:MT940
:25:/PL22112110212000180204832110
:28C:56
:60F:C220525PLN89107,30
:61:2205250525D269,98N152NONREF//6450501100324535
152 0
:86:020~00152
~20ZAM.PL111111111, FVKOR/0022
~2111/2205/2401120
~22˙
~23˙
~24˙
~25˙
~3010202964
~310000620200678839
~32CUSTOMER NAME
~33˙
~38PL23102029640000620200678839
~60˙
~63˙
:62F:C220525PLN88837,32
:64:C220525PLN88837,32
-
This is a one-time client; he has no master data, which is why I want to inject it.
I would really appreciate any help.
I added some code to process it as BDC; right now the entries are available in SM35.
The code looks like this:
ENHANCEMENT 1 ES_BDC_FEBAN. "active version
  DATA lv_session TYPE apqi-groupid.
  lv_session = |{ sy-datum }{ sy-timlo(4) }|.
  DATA lv_name1 LIKE bsec-name1.
  GET PARAMETER ID 'FEBAN_NAME1' FIELD lv_name1.

  IF lv_name1 IS NOT INITIAL.
    CALL FUNCTION 'BDC_OPEN_GROUP'
      EXPORTING
        client              = sy-mandt   " Client
        group               = lv_session " Session name
        keep                = 'X'        " Indicator to keep processed sessions
        user                = sy-uname   " Batch input user
      EXCEPTIONS
        client_invalid      = 1          " Client is invalid
        destination_invalid = 2          " Target system is invalid/no longer relevant
        group_invalid       = 3          " Batch input session name is invalid
        group_is_locked     = 4          " Batch input session is protected elsewhere
        holddate_invalid    = 5          " Lock date is invalid
        internal_error      = 6          " Internal error of batch input (see SYSLOG)
        queue_error         = 7          " Error reading/writing the queue (see SYSLOG)
        running             = 8          " Session is already being processed
        system_lock_error   = 9          " System error when protecting BI session
        user_invalid        = 10         " BI user is not valid
        OTHERS              = 11.
    IF sy-subrc <> 0.
    ENDIF.

    mode = 'Q'.
    CLEAR: funct, sgfunct.
*   funct   = 'B'.
*   sgfunct = 'B'.

    ft-program  = 'SAPLFCPD'.
    ft-dynpro   = '0100'.
    ft-dynbegin = 'X'.
    APPEND ft TO ft[].
    CLEAR: ft-program, ft-dynpro, ft-dynbegin.
    ft-fnam = 'BSEC-NAME1'.
    ft-fval = lv_name1.
    APPEND ft TO ft[].

    CALL FUNCTION 'BDC_INSERT'
      EXPORTING
        tcode     = tcode
      TABLES
        dynprotab = ft.

    CALL FUNCTION 'BDC_CLOSE_GROUP'.
    COMMIT WORK AND WAIT.

    SUBMIT rsbdcsub EXPORTING LIST TO MEMORY
      WITH mappe    EQ lv_session
      WITH von      EQ sy-datum
      WITH bis      EQ sy-datum
      WITH z_verarb EQ 'X'
      WITH fehler   EQ ''
      WITH logall   EQ 'X'
      AND RETURN.
  ENDIF.
ENDENHANCEMENT.
Variable entries:
Tcode = 'FB01'
FT[]:
<asx:abap version="1.0" xmlns:asx="http://www.sap.com/abapxml"><asx:values><_--5CTYPE_--3D_--25_T00004S00000371O0000147040><item><PROGRAM>SAPMF05A</PROGRAM><DYNPRO>0100</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_CURSOR</FNAM><FVAL>RF05A-NEWKO</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-BLDAT</FNAM><FVAL>25.05.2022</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-BLART</FNAM><FVAL>WB</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-BUKRS</FNAM><FVAL>1700</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-BUDAT</FNAM><FVAL>25.05.2022</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-WAERS</FNAM><FVAL>PLN</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-XBLNR</FNAM><FVAL>PBE01PL41022056</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BKPF-BKTXT</FNAM><FVAL>0000375800001</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>RF05A-NEWBS</FNAM><FVAL>40</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>RF05A-NEWKO</FNAM><FVAL>1232000000</FVAL></item><item><PROGRAM>SAPMF05A</PROGRAM><DYNPRO>0300</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-WRBTR</FNAM><FVAL>269,98</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-VALUT</FNAM><FVAL>25.05.2022</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-ZUONR</FNAM><FVAL>0000375800001PLN</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-SGTXT</FNAM><FVAL>NONREF 020152 ZAM.PL146751217, FVKOR/002211/2205/2</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_CURSOR</FNAM><FVAL>RF05A-NEWKO</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>RF05A-NEWBS</FNAM><FVAL>50</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>RF05A-NEWKO</FNAM><FVAL>1430101010</FVAL></item><item><PROGRAM>SAPLKACB</PROGRAM><DYNPRO>0002</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_OKCODE</FNAM><FVAL>/00</FVAL></item><item><PROGRAM>SAPMF05A</PROGRAM><DYNPRO>0300</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-WRBTR</FNAM><FVAL>269,98</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-VALUT</FNAM><FVAL>25.05.2022</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-ZUONR</FNAM><FVAL>PL1467512</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEG-SGTXT</FNAM><FVAL>NONREF 020152 ZAM.PL111111111, FVKOR/002211/2205/2</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_CURSOR</FNAM><FVAL>RF05A-NEWKO</FVAL></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_OKCODE</FNAM><FVAL>/11</FVAL></item><item><PROGRAM>SAPLKACB</PROGRAM><DYNPRO>0002</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BDC_OKCODE</FNAM><FVAL>/00</FVAL></item><item><PROGRAM>SAPLFCPD</PROGRAM><DYNPRO>0100</DYNPRO><DYNBEGIN>X</DYNBEGIN><FNAM/><FVAL/></item><item><PROGRAM/><DYNPRO>0000</DYNPRO><DYNBEGIN/><FNAM>BSEC-NAME1</FNAM><FVAL>CUSTOMER NAME</FVAL></item></_--5CTYPE_--3D_--25_T00004S00000371O0000147040></asx:values></asx:abap>
The data may look slightly different between the debugger dump and the bank statement.
There are two entries in SM35; the first is processed correctly, but the second one has error entries in its log.
Can somebody help me please?
Most likely you are confusing the working principles of FEBAN and FF_5.
In SM35 you will see the BI sessions created by FF_5. You need to process them to create the real postings.
I also recommend retrying the failed postings via the FEBP transaction, which FF_5 calls under the hood. It does almost the same as FF_5 and uses FF_5's data, but it can repost the failed records.
One interesting parameter FEBP has is Bk Pstg Only ("Only post to G/L"), which may be set silently by FF_5 and could prevent you from posting to subledgers. I can't confirm this, though; it's only an assumption.
P.S. I also recommend never modifying automatically generated batch sessions the way you are doing, neither SAPLFCPD nor any others.
Problem solved. I passed the records in ft[] in the wrong order.
A very useful technique is to use tcode SHDB to simulate how the records should be passed. In my case, the FT[] table should contain:
SAPMF05A scr. 0100
[... required fields ...]
SAPLFCPD scr. 0100
BSEC-NAME1 <-- Injected missing field
SAPMF05A scr. 0300
[... required fields ...]
SAPMF05A scr. 0301
[... required fields ... -> SAVE]
Topic can be closed. Thank you.

Snowflake COPY INTO from JSON - ON_ERROR = CONTINUE - Weird Issue

I am trying to load a JSON file from a staging area (S3) into a stage table using the COPY INTO command.
Table:
create or replace TABLE stage_tableA (
RAW_JSON VARIANT NOT NULL
);
Copy Command:
copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json')
I got the error below when executing the above (sample provided):
SQL Error [100069] [22P02]: Error parsing JSON: document is too large, max size 16777216 bytes.
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.
When I put "ON_ERROR=CONTINUE", records were partially loaded, i.e. up to the record exceeding the max size; no records after the error record were loaded.
Wasn't "ON_ERROR=CONTINUE" supposed to skip only the oversized record and load the records before and after it?
Yes, ON_ERROR=CONTINUE skips the offending line and continues loading the rest of the file.
To help us provide more insight, can you answer the following:
How many records are in your file?
How many got loaded?
At what line was the error first encountered?
You can find this information using the COPY_HISTORY() table function.
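For example, a quick sketch of such a query (assuming the stage table from the question and a load within the last 24 hours; column names are per the COPY_HISTORY documentation):
select file_name, row_count, row_parsed, first_error_message, first_error_line_number, status
from table(information_schema.copy_history(
    table_name => 'STAGE_TABLEA',
    start_time => dateadd(hours, -24, current_timestamp())));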
Try setting strip_outer_array = true on the file format and attempt the load again.
The considerations for loading large semi-structured data are documented in the article below:
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
I partially agree with Chris. The ON_ERROR=CONTINUE option only helps if there is in fact more than one JSON object in the file. If it is one massive object, then with ON_ERROR=CONTINUE you would simply get neither an error nor the record loaded.
If you know your JSON payload is smaller than 16 MB, then definitely try strip_outer_array = true. Also, if your JSON has a lot of nulls ("NULL") as values, use STRIP_NULL_VALUES = TRUE, as this will slim down your payload as well. Hope that helps.
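As a sketch, assuming the file_json format and stage names from the question, the two suggestions combined would look something like this (STRIP_OUTER_ARRAY makes Snowflake load each element of a top-level JSON array as its own row, so no single row has to carry the whole document):
create or replace file format file_json
  type = 'JSON'
  strip_outer_array = true
  strip_null_values = true;

copy into stage_tableA
  from @stgS3/filename_45.gz
  file_format = (format_name = 'file_json')
  on_error = 'CONTINUE';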

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV file, adding a field Y, setting Y=X, and finally writing it back to another CSV file.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
    at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
    ... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
    at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
    at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
    ... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited it out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the Set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the Set field value step that causes the problem.
If I replace the CSV file input with a Data Grid step containing the same data (1,2,3), everything works just fine.
If I replace the file output step with a Dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error, and the field Y has the value <null> on all three rows.
Before I created this MCVE, I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed - although admittedly the error and user experience could be improved.
Lazy conversion must not be used when you need to access the field value in your transformation - and that is exactly what the Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed but then use the Select Values step to "un-lazify" the fields you want to access, while the remainder stay lazy.
Cunning huh?

Invalid operation: result set is closed, ERRORCODE 4470, SQLSTATE null - DB2 data extract

I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below: I am selecting everything from one single table, with one piece of WHERE criteria limiting the data to one month's worth. After it has extracted around 1.2 GB, this error shows up. Is there any way to work around this other than extracting smaller date ranges? I am trying to pull a couple of years' worth of data, so if I can only get it a few days at a time, it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool (RazorSQL), if that makes a difference; I can probably purchase different software if it would help. I am trying to get IBM's tool, but for some reason it freezes during the download, so I am still working on that. I have searched for this error, but everything I see seems much more complex than what I am doing, and I can't tell whether it applies. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
I stumbled on this error too and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: if there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle this in the application; in my case, I started checking if (rs.next()), but otherwise there is a workaround: check out the source link below for how to set some properties on the data source and avoid the exception.
Source :
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
Sometimes you also need to check whether the resultSetHoldability property exists; for details, refer to the link here.
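If you are not on WebSphere (for example, in a desktop query tool such as RazorSQL that lets you edit the JDBC URL), the same property can usually be appended to the IBM Data Server Driver URL instead; a sketch, with host, port, and database name as placeholders:
jdbc:db2://yourhost:50000/YOURDB:allowNextOnExhaustedResultSet=1;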
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar).
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement is closed, as follows.
Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection();
    statement.close();
}
Creating the property below with type Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
In case the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the answer: https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool has lost its session to DB2. If you are using Tomcat, the connection pool property 'testOnBorrow' may re-establish the connection to DB2.

CF9 Error Executing Database Query

I am getting this error and don't understand why:
Error Executing Database Query. [Macromedia][SQLServer JDBC Driver][SQLServer]Invalid column name 'buildno'. The error occurred in C:/data/wwwroot/webappsdev/cfeis/redbook/redbook_bio_load.cfm: line 10

8 : select *
9 : from redbook_bio
10 : where build_num = '#session.build_num#'
11 : </cfquery>
12 :

VENDORERRORCODE: 207
SQLSTATE: 42S22
SQL: select * from redbook_bio where buildno = '4700'
DATASOURCE: xxxx
******"
It says buildno is an invalid column name, but I do not have that name in my query. I used to, but I changed both the column in the database and the column name in the query to build_num. You can see my exact code with line numbers, and there is no 'buildno' in there. Yet looking at the SQL statement below it, it is still trying to use 'buildno'.
I had my editor search the directory for anywhere it says buildno, and no results came back. I have restarted the CF Service and cleared the cache. Why would it still be trying to run with buildno instead of build_num, as the code says?
There was a cfquery cache setting in the Administrator; we had it set to 100. Apparently clearing the template and component caches doesn't clear the cfquery cache. I changed the query name, and that fixed the problem. It most likely could also have been fixed by setting the cfquery cache value to 0.