Invalid datetime format - SQL

I have a question about PowerCenter message code RR-4035. I have a mapping in which I am using a SQL override query, and this error occurs in the SQL override. The mapping is failing with the error:
'[IBM][CLI Driver] CLI0113E SQLSTATE 22007: An invalid datetime format
was detected; that is, an invalid string representation or value was
specified'.
> Database driver error:
Function name:Fetch
SQL STMNT:
select s.employee_record_id,s.employee_id,s.record_origin,
cnt.employee_contract_id,cnt.employee_contract_efctv_dt,cnt.employee_contract_term_dt,club.employee_club
from
employee_main_info s
inner join
(select
employee_id,record_origin,employee_contract_term_dt,employee_contract_efctv_dt
from employee_perm
union
select
employee_id,record_origin,employee_contract_term_dt,employee_contract_efctv_dt
from employee_temp
) cnt on s.employee_id=cnt.employee_id,
employee_club_data club
where
club.employee_id=s.employee_id
and (cnt.employee_contract_efctv_dt <=sysdate or cnt.employee_contract_efctv_dt<'2016-01-01')
and s.employee_record_term_dt>sysdate;
native error code= -99999
I have tried everything; my previous mappings have run fine with the same datetime formats, but this one is failing. One thing I have noticed is that if I remove all the transformations between the source qualifier and the target, the mapping succeeds and data gets loaded to the target, but as soon as I put any lookups or expressions between the source qualifier and the target (other than a pass-through expression), the mapping fails again.
Any suggestions or help regarding this would be appreciated.

We've seen this error occurring when SELECTing from a table with a timestamp column via the IBM Data Server ODBC/CLI driver. It only happened on one Windows machine and we were able to make the error disappear by changing the regional setting main selection from Israel to USA.
While not tested yet, it may be that the IBM DB2 ODBC configuration option DateTimeStringFormat or the attributes SQL_ATTR_DATE_FMT and SQL_ATTR_TIME_FMT can be used to force a specific format (such as JIS). See https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0011525.html
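If the configuration-option route turns out to work, a minimal sketch of what such a db2cli.ini entry might look like follows; the data source alias below is a placeholder and this has not been verified against the RR-4035 failure above:
; db2cli.ini - hypothetical data source section; the alias name is a placeholder
[SAMPLEDB]
; Force JIS-style date/time strings (yyyy-mm-dd, hh:mm:ss) instead of the
; locale-dependent default picked up from the Windows regional settings
DateTimeStringFormat=JIS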


Oracle SQL Developer connect issue

I'm accessing an Oracle database from a Java application. When I run my application I get the following error:
java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
You may also want to check the versions of the Oracle JDBC driver and the Oracle database. Just today I had this issue when using ojdbc6.jar (version 11.2.0.3.0) to connect to an Oracle 9.2.0.4.0 server. Replacing it with ojdbc6.jar version 11.1.0.7.0 solved the issue.
I also managed to make ojdbc6.jar version 11.2.0.3.0 connect without error by adding oracle.jdbc.timezoneAsRegion=false to the file oracle/jdbc/defaultConnectionProperties.properties (inside the jar). I found this solution here (broken link).
Then, one can add -Doracle.jdbc.timezoneAsRegion=false to the command line, or AddVMOption -Doracle.jdbc.timezoneAsRegion=false in config files that use this notation.
You can also do this programmatically, e.g. with System.setProperty.
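For illustration, a minimal sketch of that programmatic route, assuming the property only needs to be set before the Oracle driver opens its first connection (the URL and credentials below are placeholders, not from the original answer):
import java.sql.Connection;
import java.sql.DriverManager;

public class TimezoneAsRegionExample {
    public static void main(String[] args) throws Exception {
        // Must be set before the first Oracle connection is created,
        // otherwise the driver has already decided how to send the timezone.
        System.setProperty("oracle.jdbc.timezoneAsRegion", "false");

        // Placeholder connection details for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@127.0.0.1:1521:tap", "username", "pw")) {
            System.out.println("Connected without ORA-01882: " + conn.isValid(5));
        }
    }
}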
In some cases you can add the property on a per-connection basis if that's allowed (SQL Developer allows this in the "Advanced" connection properties; I verified it works when connecting to a database that doesn't have the problem and using a database link to one that does).
In a plain SQL Developer installation under Windows, go to the directory
C:\Program Files\sqldeveloper\sqldeveloper\bin
and add
AddVMOption -Duser.timezone=CET
to file sqldeveloper.conf.
Error I got:
Error from db_connection.java -->> java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
Previous code:
public Connection getOracle() throws Exception {
    Connection conn = null;
    Class.forName("oracle.jdbc.driver.OracleDriver");
    conn = DriverManager.getConnection("jdbc:oracle:thin:@127.0.0.1:1521:tap", "username", "pw");
    return conn;
}
New code:
public Connection getOracle() throws Exception {
    // Set the JVM default time zone before connecting
    TimeZone timeZone = TimeZone.getTimeZone("Asia/Kolkata");
    TimeZone.setDefault(timeZone);
    Connection conn = null;
    Class.forName("oracle.jdbc.driver.OracleDriver");
    conn = DriverManager.getConnection("jdbc:oracle:thin:@127.0.0.1:1521:tap", "username", "pw");
    return conn;
}
Now it is working!
Update the file oracle/jdbc/defaultConnectionProperties.properties in whatever version of the library (i.e. inside your jar) you are using to contain the line below:
oracle.jdbc.timezoneAsRegion=false
What happens is that the JDBC client sends the timezone ID to the server, and the server needs to know that zone. You can check with:
SELECT DISTINCT tzname FROM V$TIMEZONE_NAMES where tzname like 'Etc%';
I have some DB servers which know about 'Etc/UTC' and 'UTC' (tzfile version 18), but others only know 'UTC' (tz version 11).
SELECT FILENAME,VERSION from V$TIMEZONE_FILE;
There is also different behavior on the JDBC client side. Starting with 11.2, the driver sends the zone ID if it is "known" to Oracle, whereas before it sent the time offset. The problem with this "sending of known IDs" is that the client does not check which timezone version/content is present on the server, but has its own list.
This is explained in Oracle Support Article [ID 1068063.1].
It also seems to depend on the client OS: Etc/UTC was more likely to fail on Ubuntu than on RHEL or Windows. I guess this is due to some normalization, but I haven't figured out exactly what.
In Eclipse, go to Run -> Run Configurations.
There, go to the JRE tab in the right-side panel.
In the VM Arguments section, paste this:
-Duser.timezone=GMT
Then Apply -> Run.
I had this problem when running automated tests from a continuous integration server. I tried adding the VM argument "-Duser.timezone=GMT" to the build parameters, but that didn't solve the problem. However, adding the environment variable "TZ=GMT" did fix it for me.
I ran into this problem with Tomcat. Setting the following in $CATALINA_BASE/bin/setenv.sh solved the issue:
JAVA_OPTS=-Doracle.jdbc.timezoneAsRegion=false
I'm sure that using one of the Java parameter suggestions from the other answers would work in the same way.
Error:
ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone region not found
Solution (CIM setup on CentOS):
In
/opt/oracle/product/ATG/ATG11.2/home/bin/dynamoEnv.sh
add this Java argument:
JAVA_ARGS="${JAVA_ARGS} -Duser.timezone=EDT"
In Netbeans,
Right-click your project -> Properties
Go to Run (under Categories)
Enter -Duser.timezone=UTC or -Duser.timezone=GMT under VM Options.
Click Ok, then re-run your program.
Note: You can also set other time zones besides UTC and GMT.
If this problem is in JDeveloper:
Change the project properties for both the model and the view project -> Run/Debug -> Default Profile -> Edit
add the following run option:
-Duser.timezone=Asia/Calcutta
Make sure that the above time zone value is fetched from your database as follows:
select TZNAME from V$TIMEZONE_NAMES;
Along with that, you'd want to check the time zone settings in your jdev.conf as well as in JDeveloper -> Application Menu -> Default Project Properties -> Run/Debug -> Default Profile -> Run Options.
I also faced a similar issue.
Environment:
Linux, Hibernate project, ojdbc6 driver, querying an Oracle 11g database.
Resolution
The TZ parameter was not set on the Linux machine; it basically tells Oracle which timezone to use.
So adding the export statement "export TZ=UTC" at application start solved my problem.
UTC: change according to your timezone.
I had the same problem when trying to make a connection from OBIEE to an Oracle DB.
I changed my Windows timezone from (GMT+01:00) West Central Africa to (GMT+01:00) Brussels, Copenhagen, Madrid, Paris. Then I rebooted my computer and it worked just fine.
It seems Oracle was not able to recognize the West Central Africa timezone.
This issue happens because the code that is trying to connect to the DB has a timezone which is not in the DB.
It can also be resolved by setting the time zone as below, or to any valid time zone available in the Oracle DB.
A valid time zone can be found with select * from V$TIMEZONE_NAMES;
System.setProperty("user.timezone", "America/New_York");
TimeZone.setDefault(null);
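A minimal sketch of that approach follows (the zone and connection details are placeholders): setting user.timezone and clearing the cached default makes the JVM re-read the property, so the driver then sends a zone ID the database should know.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.TimeZone;

public class UserTimezoneExample {
    public static void main(String[] args) throws Exception {
        // Pick a zone that exists in the database's V$TIMEZONE_NAMES.
        System.setProperty("user.timezone", "America/New_York");
        // Clearing the cached default forces the JVM to re-read user.timezone.
        TimeZone.setDefault(null);

        // Placeholder connection details for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@127.0.0.1:1521:tap", "username", "pw")) {
            System.out.println("Default zone now: " + TimeZone.getDefault().getID());
        }
    }
}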
I too had the same problem when I tried to create a connection in JDeveloper. Our server is located in a different timezone, and hence it raised the errors below:
ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
I referred to many forums which said to include the timezone in the Java Options (Run/Debug/Profile) of the Project Properties and Default Project Properties as -Duser.timezone="+02:00", but it didn't work for me. Finally the following solution worked for me.
Add the following line to the JDeveloper's configuration file (jdev.conf).
AddVMOption -Duser.timezone=UTC+02:00
The file is located in "<oracle installation root>\Middleware\jdeveloper\jdev\bin\jdev.conf".
In my case I could get the query working by changing "TZR" to "TZD":
String query = "select * from table1 where to_timestamp_tz(origintime,'dd-mm-yyyy hh24:mi:ss TZD') between ? and ?";
I was able to solve the same issue by setting the timezone on my Linux system (CentOS 6.5).
Reposting from
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
Set the timezone in /etc/sysconfig/clock, e.g. ZONE="America/Los_Angeles"
sudo ln -sf /usr/share/zoneinfo/America/Phoenix /etc/localtime
To figure out the timezone value, try
ls /usr/share/zoneinfo
and look for the file that represents your timezone.
Once you've set these, reboot the machine and try again.
Facing the same issue using Eclipse and a remote Oracle database, changing my system time zone to match the time zone of the database server fixed the problem.
Restart the machine after changing the system time zone.
I hope this can help someone.
java.sql.SQLException: ORA-00604: error occurred at recursive SQL
level 1 ORA-01882: timezone region not found
For this type of error, just change your system time zone to your country's standard time zone;
e.g. the Indian time zone is Asia/Kolkata (Chennai, Kolkata).
This happens when you use the wrong version of the OJDBC jar.
You need to use
11.2.0.4
In my case, I set the timezone at the OS level (Ubuntu) with this command.
timedatectl set-timezone {timezone}
For example,
timedatectl set-timezone Africa/Kampala
This might be a bit late, but it may help someone.
I encountered this issue while working on a Spring Boot application that failed to connect to the Oracle DB.
Error:
java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region not found
SELECT DBTIMEZONE FROM dual; -- This returns: +00:00
timedatectl # The OS, on the other hand, returned the correct time and timezone
Since I am not a system admin and am not allowed to change the system config, I had to apply a workaround in the code.
spring.datasource.hikari.data-source-properties.oracle.jdbc.timezoneAsRegion=false
# Added the above to the application properties file
The working solution came from the below link
Working solution URL
Edit
After the DBA applied the missing timezone patches to Oracle, the above (...timezoneAsRegion=false) was no longer needed.
For a Spring Boot application, add the two marked lines:
import java.util.TimeZone;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // add the two lines below
        System.out.println("Setting the timezone " + TimeZone.getTimeZone("GMT+9:00").getID());
        TimeZone.setDefault(TimeZone.getTimeZone("GMT+9:00"));
        SpringApplication.run(Application.class, args);
    }
}

Importing data from multi-value D3 database into SQL issues

I'm trying to use mv.NET by BlueFinity. I made some integration packages with it for importing data from a D3 multi-value database into MS SQL 2012, but I seem to be having some trouble with the mapping.
The VOYAGES table has some commentX fields in the D3 application that are quite unwieldy, and the INSERT fails after a certain number of rows with the following message:
>Error: 0xC0047062 at INSERT, mvNET Source[354]: System.Exception: Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(The value is too large to fit in the column data area of the buffer.))
at mvNETDataSource.mvNETSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at INSERT, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.The PrimeOutput method on mvNET Source returned error code 0x80131500.The component returned a failure code when the pipeline engine called PrimeOutput().The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.There may be error messages posted before this with more information about the failure.
The value is too large to fit in the column data area of the buffer. I tried changing the input/output types but can't seem to get it right.
In the SQL table the columns are of type ntext.
In the .dtsx job the data type for the columns is Unicode String [DT_WSTR] with length 4000; I guess these are auto-detected.
The import worked for other D3 files like this; I'm not sure why it fails for these comment fields.
Running the query in the mv.NET Data Manager (on the D3 server) times out after 240 seconds, so maybe this is the underlying issue?
Any ideas how to proceed? Thank you.
The most likely reason is that column COMMGROUP does not have the correct data type, or that some record in the source does not fit in the output type.
To find the offending record, set the failing component's error output to redirect rows and send the redirected rows to a .txt/.csv/.tsv file,
then check the data.
The exception is being thrown from mv.NET so I suggest you call (or ask your reseller) to call Bluefinity support and ask them about this. You're paying for support, might as well use it. Those programs shouldn't be allowed to throw exceptions like that.
D3 doesn't export Unicode, so that might be one issue. But if the Data Manager times out, then I suspect something is wrong in the connectivity into D3. Open a Connection Monitor from the Session Monitor and watch the connection when you make the request. I'm guessing it's either hanging or, more probably, falling into BASIC Debug.
Make sure all D3-side programs related to this are either all Flash-compiled, or all Not Flashed. Your app code will fall into Debug if it's not Flashed but MVNET.BP is.
If it's your program that's in Debug, fix it. If you're not sure which program it is, LIST-RUNTIME-ERRORS in DM.
If it's a MVNET.BP program, again work with Bluefinity. If you are using MVSP for connectivity then the Connection Monitor may be useless, you'll need to change that to an IP (Telnet) connection to see the raw data exchange.

Invalid operation: result set is closed, errorcode 4470, sqlstate null - DB2 data extract

I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below, I am selecting everything from one single table with one piece of where criteria which is limiting the data to one month's worth. After it has extracted around 1.2 gig this error shows up. Is there any way that I can work around this other than extracting smaller date ranges? I am trying to pull a couple of years worth of data so if I can only get it a few days at a time it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool - Razor SQL if that makes a difference, I can probably purchase different software if it would help. I am trying to get IBM's tool but for some reason it freezes during the download so I am still working on that. I have searched about this error but everything I see seems much more complex than what I am doing and I can't tell if it applies or not. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
I stumbled on this error too and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: if there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle it in the application; in my case I started checking if (rs.next()), but otherwise there is a workaround. Check out the source link below for how you can set some properties on the data source and avoid the exception.
Source :
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
Sometimes you also need to check whether the resultSetHoldability property exists. For details, refer here.
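Outside of WAS, the same setting can presumably be passed as a connection property of the JCC driver; a minimal sketch under that assumption (connection details are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class AllowNextOnExhaustedExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "user");   // placeholder credentials
        props.setProperty("password", "pw");
        // Assumption: the JCC driver accepts the same custom property that
        // WAS exposes on the data source (value 1 = yes).
        props.setProperty("allowNextOnExhaustedResultSet", "1");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/SAMPLE", props)) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}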
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar).
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement is closed, as follows.
Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection();
    statement.close();
}
Creating the property below with type Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
In case the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the Answer : https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool has lost its session to Db2. If using Tomcat, the connection pool property 'testOnBorrow' may re-establish the connection to Db2.
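As a rough illustration of the kind of check testOnBorrow performs, and only a sketch rather than the Tomcat pool mechanism itself (connection details are placeholders), a stale Db2 session can be detected with the standard JDBC Connection.isValid call before a cached connection is reused:
import java.sql.Connection;
import java.sql.DriverManager;

public class StaleConnectionCheckExample {
    // Returns a usable connection, reconnecting if the cached one went stale.
    static Connection ensureValid(Connection cached) throws Exception {
        if (cached != null && cached.isValid(5)) {
            return cached; // session to Db2 is still alive
        }
        // Placeholder connection details for illustration only.
        return DriverManager.getConnection(
                "jdbc:db2://localhost:50000/SAMPLE", "user", "pw");
    }
}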

updateblob fails in Powerbuilder

In PowerBuilder I am trying to update a table (Oracle) with a blob but get a SQL error, "Database statement must refer to blob variable". My declaration and UPDATEBLOB statements are as follows:
blob lblob_newxml
long llong_subid
UPDATEBLOB RP_XML_FORMS SET XML_DOC = :lblob_newxml
WHERE SUBMISSION_ID = :llong_subid
USING SQLCA;
Does anybody know why it is happening and or how to solve this problem? Thanks.
To get more information on this problem and the possible causes, I'd run with one of the database traces turned on. (You can check out database trace options in the Connecting to Your Database manual; link may not be appropriate for your PB version, which you haven't mentioned yet.) This may or may not tell you more, but it tracks everything between the app and when the PB drivers pass the commands "over the wall" to the database's driver.
Good luck,
Terry.
"The PowerBuilder VM can get the SQL syntax for the following types of errors, and passes it to the Transaction object’s DBError event for the following types of errors: ..." (see this page).
If your lblob_newxml is null then use this update statement instead:
UPDATE RP_XML_FORMS SET XML_DOC = NULL
WHERE SUBMISSION_ID = :llong_subid
USING SQLCA;

SQL STATE 37000 [Microsoft][ODBC Microsoft Access Driver] Syntax Error or Access Violation

Good day!
I get this error:
SQL STATE 37000 [Microsoft][ODBC Microsoft Access Driver] Syntax Error
or Access Violation, when trying to run an embedded SQL statement on
Powerscript.
I am using MS SQL Server 2008 and PowerBuilder 10.5; the OS is Windows 7. I was able to determine one of the queries that is causing the problem:
SELECT top 1 CONVERT(DATETIME,:ls_datetime)
into :ldtme_datetime
from employee_information
USING SQLCA;
if SQLCA.SQLCODE = -1 then
Messagebox('SQL ERROR',SQLCA.SQLERRTEXT)
return -1
end if
I was able to come up with a solution to this by just using the DateTime() function of PowerBuilder. But there are other parts of the program that cause this, and I am having a hard time identifying which part of the program causes it. I find this very weird because I am running the same scripts on my dev PC with no problems at all, but when trying to run the program on my client's workstation I get this error. I haven't found any differences between the workstation and my dev PC. I also tried following the instructions here, but the problem still occurs.
UPDATE: I was able to identify the other script that is causing the problem:
/////////////////////////////////////////////////////////////////////////////
// f_datediff
// Computes the time difference (in number of minutes) between adtme_datefrom and adtme_dateto
////////////////////////////
decimal ld_time_diff
SELECT top 1 DATEDIFF(MINUTE,:adtme_datefrom,:adtme_dateto)
into :ld_time_diff
FROM EMPLOYEE_INFORMATION
USING SQLCA;
if SQLCA.SQLCODE = -1 then
Messagebox('SQL ERROR',SQLCA.SQLERRTEXT)
return -1
end if
return ld_time_diff
Seems like passing datetime variables causes the error above. Other scripts are working fine.
Create a transaction user object inherited from transaction.
Put logic in the SQLPreview event of your object to capture and log the SQL statement being sent to the DB.
Instantiate it, connect to the DB, and use it in your embedded SQL.
Assuming the user gets the error, you can then check what was being sent to the DB and go from there.
The error in your first statement is likely the second parameter to the CONVERT function.
Its type is not a string; it must be a valid expression:
https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql
So I would expect that your
CONVERT(DATETIME,:ls_datetime)
would evaluate to
CONVERT(DATETIME, 'ls_datetime')
but it should be
CONVERT(DATETIME, DateTimeColumn)
The error in your second statement could be that you're providing a wrong datetime format.
So please check whether the error still occurs when you use SET DATEFORMAT
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-dateformat-transact-sql
with the correct datetime format you're using.