iBATIS parameter initialization issue - SQL

My application throws an exception while applying a parameter map to an SQL statement. The error is:
Caused By: com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in /com/***/cusman/cusbilman/postpaid/main/product/data/ibatis/sqlMap/THSSqlMap.xml.
--- The error occurred while applying a parameter map.
--- Check the invoicing.invoice.ths.paymentInfoMap.
--- Check the statement (query failed).
--- Cause: java.sql.SQLException: ORA-00904: : invalid identifier
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:201)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForList(MappedStatement.java:139)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:567)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:541)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:298)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:209)
at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:249)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:296)
The definitions are completely consistent with each other (the Java side and the XML side, I mean).
Any ideas?

I found it. The problem is that Oracle's error does not point at the real cause. I was using a function in a SELECT, but my DB user did not have the EXECUTE grant on it, so Oracle tried to resolve the function name as a column name. Since no such column exists, it raised ORA-00904 and hid the real problem...
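For reference, a minimal sketch of the situation and the fix; the function, schema, user, and table names here are hypothetical:

-- This SELECT fails with ORA-00904 when app_user lacks EXECUTE rights,
-- because Oracle then resolves format_amount as a (non-existent) column name.
SELECT owner_schema.format_amount(i.amount)
FROM invoices i;

-- Granting EXECUTE to the application user lets the identifier resolve as a function again.
GRANT EXECUTE ON owner_schema.format_amount TO app_user;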

Related

An unhandled exception occurred while processing the request: SqlException

After creating a new project (Web API) in Visual Studio 2019, I tried to run it, but I got the following error:
An unhandled exception occurred while processing the request.
SqlException: Invalid object name 'Books'.
Microsoft.Data.SqlClient.SqlCommand+<>c.b__169_0(Task result)
Did you check the spelling of the object name used by your SqlDataReader? In my case it was misspelled. If the spelling is correct, is there any chance you are re-using the name of an existing environment variable for a different database on the same SQL Server? If so, that can be the issue too.
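To verify which database you are actually connected to and whether the object exists there, a quick check like this can help; 'Books' is taken from the error message, everything else is standard SQL Server catalog views:

-- Confirm the current database, then look for the table the error complains about.
SELECT DB_NAME() AS current_database;

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name = 'Books';

If the second query returns no rows, the table was never created in that database (for example, migrations were not applied) or the connection string points at a different database than you expect.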

Runtime error : DBSQL_SQL_ERROR exception : CX_SY_OPEN_SQL_DB, where to focus?

More info from stms:
Short Text
SQL error "SQL code: -10108" occurred while accessing table "CDHDR".
What happened?
Database error text: "SQL message: Session has been reconnected."
Return value of the database layer: "SQL dbsl rc: 99"
The problem is that the client runs this code through a job, so I am not able to debug it. All I know is that it works for some selected countries but fails for certain others, so the question is:
Was it a temporary connection error between the two servers, or a data-overload issue caused by the SELECT statement?
At the moment I am not sure what to look at on the system. The same program once produced an error for exceeding the limit on the DB query (> 10 minutes), so might this be related to system configuration, or something else?
Thanks in advance

Invalid predecessor when editing task in Snowflake

I keep getting an error when trying to edit a task in Snowflake. Whenever I want to edit the task, the following error message appears:
SQL error [91085] [42601]: Invalid predecessor
TableNameA_001_update_newdata was specified.
The task itself looks like this:
CREATE OR REPLACE TASK "TableNameA_001_update_newdata"
WAREHOUSE = marketing_wh
AFTER "TableNameA_001_delete" AS
INSERT INTO tableA
...
So far I do not understand what is triggering the error.
Thanks for your help!
You are not using qualified task names (with database and schema in the task name).
If you inadvertently change context (switch database or schema), the new context will not contain any task named "TableNameA_001_delete".
That will result in the error message "Invalid predecessor <TASK name> was specified." A fix is sketched below.
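A minimal sketch of the fix, fully qualifying both tasks; the database and schema names are hypothetical, the task names are taken from the question, and the INSERT body is invented since the original was elided:

CREATE OR REPLACE TASK my_db.my_schema."TableNameA_001_update_newdata"
WAREHOUSE = marketing_wh
AFTER my_db.my_schema."TableNameA_001_delete"
AS
INSERT INTO my_db.my_schema.tableA
SELECT * FROM my_db.my_schema.tableA_staging; -- hypothetical body

With both task names qualified, the statement no longer depends on the session's current database and schema, so editing the task from any context resolves the predecessor correctly.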

[Amazon](500310) Invalid operation: Assert

I am using spark-redshift and querying Redshift data with PySpark for processing.
The query works fine if I run it on Redshift using a workbench etc., but spark-redshift unloads the data to S3 and then retrieves it, and it throws the following error when I run it:
py4j.protocol.Py4JJavaError: An error occurred while calling o124.save.
: java.sql.SQLException: [Amazon](500310) Invalid operation: Assert
Details:
-----------------------------------------------
error: Assert
code: 1000
context: !AmLeaderProcess -
query: 583860
location: scheduler.cpp:642
process: padbmaster [pid=31521]
-----------------------------------------------;
at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:822)
at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:647)
at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1080)
at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1048)
at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2524)
at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2465)
at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1420)
at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:126)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: Assert
The query that gets generated:
UNLOAD ('SELECT "x","y" FROM (select x,y from table_name where
((load_date=20171226 and hour>=16) or (load_date between 20171227 and
20171226) or (load_date=20171227 and hour<=16)))')
TO 's3:s3path'
WITH CREDENTIALS 'aws_access_key_id=xxx;aws_secret_access_key=yyy'
ESCAPE MANIFEST
What is the issue here, and how can I resolve it?
An Assert error usually happens when something goes wrong in interpreting data types, for example in the two parts of a UNION query where column N is a varchar in one part and an integer or null in the other. Maybe your assertion error happens for data that comes from different nodes (just like in a UNION query). Try adding explicit data formatting for each column, like x::integer; see the sketch below.
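A minimal sketch of that suggestion applied to the inner SELECT of the generated UNLOAD; the target types for x and y are assumptions:

-- Explicit casts keep the column types consistent regardless of which node the rows come from.
SELECT x::integer AS x,
       y::varchar(256) AS y
FROM table_name
WHERE (load_date = 20171226 AND hour >= 16)
   OR (load_date BETWEEN 20171227 AND 20171226) -- range copied verbatim from the generated query
   OR (load_date = 20171227 AND hour <= 16);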

Difference between Oracle CALL and EXECUTE in the context of error throwing

I have a procedure written in Oracle. Unfortunately I can't show its code. Somewhere inside it is a SELECT where execution crashes because the required data is absent. It looks like this:
select value into l_value
from config
where code = upper(p_code);
So when I call this procedure like this (in SQL Developer):
execute some_package.some_procedure('CODE');
it throws
Error report -
ORA-01403: no data found
ORA-06512: at "XXXXXXXXXXXXXXXXXXX", line 111
ORA-06512: at "XXXXXXXXXXXXXXXXXXX", line 111
ORA-06512: at line 1
01403. 00000 - "no data found"
*Cause: No data was found from the objects.
*Action: There was no data from the objects which may be due to end of fetch.
But when I call it like this:
call some_package.some_procedure('CODE');
it crashes at the same place (as I can infer from the result stored in the DB), but it does not throw an exception:
some_package.some_procedure('CODE') succeeded.
What happens? And why is there such a difference?
The behavior of the NO_DATA_FOUND exception is special. It is handled by default in a SQL context, but not in PL/SQL. In SQL, "no data found" is not considered an error; it happens all the time that no data meets a certain condition.
CALL is a SQL command, whereas EXECUTE is a shortcut for BEGIN <code> END;, which is PL/SQL.
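A minimal sketch that reproduces the difference; the procedure name is hypothetical, while the config table comes from the question:

CREATE OR REPLACE PROCEDURE demo_ndf AS
  l_value config.value%TYPE;
BEGIN
  -- No matching row, so SELECT INTO raises NO_DATA_FOUND.
  SELECT value INTO l_value
  FROM config
  WHERE code = 'DOES_NOT_EXIST';
END;
/

EXECUTE demo_ndf    -- PL/SQL context: reports ORA-01403: no data found
CALL demo_ndf();    -- SQL context: NO_DATA_FOUND is not treated as an error

Run from SQL Developer, the first invocation shows the ORA-01403 error report, while the second just prints "demo_ndf() succeeded.", matching the behavior described in the question.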