Docker PostgreSQL 12.3: how to debug failed transaction / incomplete message - sql

I'm using the PostgreSQL 12.3 Docker image, and I have an application that stopped working: some query/transaction gets closed or left incomplete partway through, and the application just doesn't work.
My question is: how can I see, or make PostgreSQL save, the incomplete transaction?
I want to see where the query came from, when it ran, and what the query and the issue were. Is there any way to debug it?
Or maybe save it to an error table?
My PostgreSQL log shows:
LOG: incomplete message from client
Thanks!
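
One way to surface the statement that was in flight is to raise the server's own logging level and read the log around the "incomplete message" entry. A minimal sketch, assuming you can connect to the container as a superuser and that these settings are not already forced on the docker command line:

-- log every statement the server receives, plus the statement behind each error
ALTER SYSTEM SET log_statement = 'all';
ALTER SYSTEM SET log_min_error_statement = 'error';
-- prefix each log line with time, pid, user, database and application_name
ALTER SYSTEM SET log_line_prefix = '%m [%p] %u@%d app=%a ';
SELECT pg_reload_conf();

With this in place, the statements logged just before an "incomplete message from client" entry (which usually means the client disconnected or timed out mid-send) show what the application was doing at that point; there is no built-in error table, but the log itself can be shipped wherever needed.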

Related

Azure Data Factory Failing with Bulk Load

I am trying to extract data from an Azure SQL Database; however, I'm getting the following error:
Operation on target Copy Table to EnrDB failed: Failure happened on 'Source' side. ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Cannot bulk load because the file "https://xxxxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/appointment/2022-03.csv" could not be opened. Operating system error code 12(The access code is invalid.).
You might be thinking this is a permissions issue, but if you take a look at error code 12 you will see the issue is related to bulk load. A related answer can be found here:
https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
I thought I might be able to fix the issue by selecting Bulk lock (see image).
But I still get the error.
Any thoughts on how to resolve this issue?
Since the error refers to the source side (2022-03.csv), I am not sure why you are making changes on the sink side. As explained in the thread you referred to, it appears the CSV file is being updated by some other process once your pipeline starts executing. Referring back to the same thread: https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
The changes suggested below should be made on the pipeline/process that is writing to 2022-03.csv.
(Screenshot: https://i.stack.imgur.com/SSzwt.png)
HTH

Tomcat server SQL exception

I have an app that runs on a Tomcat server. I use IntelliJ on my machine and run it from there when I do tests.
It has run many times without problems, but suddenly the server does not start up properly, and I see the following in the log:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1: BasicResourcePool$AcquireTask: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#1ab64513 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: Unexpected exception encountered during query.
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2664)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2568)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1557)
at com.mysql.jdbc.ConnectionImpl.loadServerVariables(ConnectionImpl.java:3868)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3407)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2384)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2153)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source)
I have no clue what happened, since I did not change anything related to JDBC or SQL. I tried upgrading the Kotlin version and then reverted right away, but I don't know whether that has anything to do with the exception or how to solve it.
Checking the connection to the database from the IntelliJ database pane connects successfully.
I will be thankful if someone has a clue about what could be going wrong.
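
Since the failure happens inside the MySQL driver while c3p0 is trying to acquire new connections, one quick check is whether the server still has connection headroom. A minimal sketch, assuming you can open a MySQL client session with the same credentials the pool uses (these are standard MySQL commands, nothing specific to this app):

-- how many connections the server allows, and how many are in use right now
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- see who is holding connections; leftover sessions from earlier test runs often show up here
SHOW FULL PROCESSLIST;

If the server is out of connections (for example because repeated IDE runs never shut their pools down), acquisition attempts from a fresh pool will keep failing until those sessions are closed.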

Talend tCreateTable error: NullPointerException

I'm trying to create a table in a local instance of SQL Server (via SQL Server Mgmt Studio) using Talend, with the ultimate goal of setting up a direct Salesforce-SSMS connection for ETL.
I've managed to load the data from SFDC into SSMS, but only by first manually creating the tables, manually mapping the schema in a tMap, and then running my job.
I'd like to now create the tables in SSMS with a tCreateTable component, and then use the AutoMap feature to map fields.
However, I'm getting a NullPointerException that makes no sense to me. Debugging line 370 shows that my dbSchema_tCreateTable_1 object is null, but I don't understand why; I've defined it from the repository. Below are some pics of my setup:
Sample Schema
Error Message and Job Design
Line 370 and suspect in Red
I know my DB connection is good because I've already pushed data to existing tables, but for the life of me (and two of my Java engineers) I can't figure this out. I've got 5 days of experience with Talend, so apologies if I'm making a dumb mistake. Any help would be appreciated!
edit: Component view of tCreateTable
edit 2: Component view of tFixedFlowInput
edit 3: Component view of tMSSqlOutput
edit 4: tMSSqlConnection
In the first job (shown in "Error Message and Job Design"), the NPE occurs because the connection has not been created yet (it is null) at the moment tCreateTable tries to call executeStatement() on it.
You can modify your first job to use tMSSqlConnection -> OnSubjobOK -> tCreateTable,
or remove the connection component altogether and set the connection parameters directly on tCreateTable.
If that doesn't help, please answer the following questions:
Please share the exception stack trace and error message that occur when you use the second job (connection -> tFixedFlowInput -> tMSSqlOutput).
What version of the Studio are you using (Open Studio or Enterprise, and which release)?
If it is not the latest (6.5.1), could you upgrade?
If it is, could you export your job and share it (e.g. on the Talend bug tracker)?
P.S. You can try to debug the job yourself: select Run Job -> Debug Run -> Java Debug.
Using the Eclipse debug view you can find out why the NPE occurs.

Oracle Shutdown error ORA-01033

I installed Oracle 11g on Windows 7 and everything was working fine. But today it is giving me the error ORA-01033: ORACLE initialization or shutdown in progress. I followed the steps mentioned in different communities but was unable to solve it. After connecting with sqlplus sys/sys as sysdba and executing the statement below, I got the following error. How can I solve this?
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 9: 'C:\APP\MKHATAL\ORADATA\ORCL\VELODBDATA.DBF'
ORA-01157: cannot identify/lock data file 9 - see DBWR trace file
ORA-01110: data file 9: 'C:\APP\MKHATAL\ORADATA\ORCL\VELODBDATA.DBF'
Thanks in advance!!
First, check your ALERT.LOG file, typically located in D:\app\oracle\diag\rdbms\DBNAME\SID\trace\.
The latest entries in your ALERT.LOG should give you an indication of what is going on. Is the database starting? Shutting down?
Should the database be stuck in the shutdown process, you can try to kill the oracle.exe process and then restart the database via the Windows service.
Often, when the database seems to hang on startup, it is actually applying REDO to get to a consistent state, so pay close attention to what the ALERT.LOG tells you.
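
The instance state can also be queried directly rather than inferred from the log; a minimal sketch from SQL*Plus, assuming you can still connect as SYSDBA:

-- STARTED (nomount), MOUNTED or OPEN
SELECT status FROM v$instance;
-- only answers once the database is at least mounted
SELECT open_mode FROM v$database;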
More information is needed to be able to solve issues like this. Questions like these are interesting to get answered:
To what level can you start the database? None, nomount, mount or open (exclude the last one in this case)?
What does the ALERT.LOG file show, from the moment the first issue is suspected to have appeared?
An obvious question is simply: what happened? What action were you performing when you saw the first error message?
Can you shut down the database? If yes, with which methods: normal, immediate, or abort only? Know that stopping the database with the ABORT option may make existing problems bigger.
Did you try rebooting the server? This is particularly an issue on Windows, which is your case.
Depending on how you stopped the database, do you have a cold backup?
Depending on whether RMAN is still working: can you take a backup? The RECOVER you were trying is not a bad idea, but it would be interesting to know what is happening at the OS level with that file: is it there? If yes, is it locked or not? Does the OS think it's a valid file?
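
To answer those file-level questions from inside the database, the mounted instance's own view of data file 9 can be checked before picking a recovery path; a minimal sketch, run in SQL*Plus as SYSDBA (the commented OFFLINE DROP is destructive and only an option if the data in that tablespace is expendable):

-- which files need recovery, and why
SELECT file#, online_status, error FROM v$recover_file;
-- how the control file currently sees data file 9
SELECT file#, name, status FROM v$datafile WHERE file# = 9;
-- last resort if the file is truly lost and its contents can be sacrificed:
-- ALTER DATABASE DATAFILE 9 OFFLINE DROP;
-- ALTER DATABASE OPEN;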

Cron is not Working in Alfresco

I have written a cron job that runs every 30 minutes in the scheduled-action-services-context.xml file.
However, I see that it is not working; when I check the log I can find only this error.
For my cron job I have also used a Lucene search, so I believe this error is related to that. Kindly help me fix it. Here is the error:
ERROR [quartz.core.JobRunShell] [DefaultScheduler_Worker-8] Job jobGroup.jobD threw an unhandled Exception:
org.alfresco.repo.search.impl.lucene.LuceneQueryParserException: 03020086
The error log you show is most likely the reason your scheduled action is not working properly. In fact, it seems that the action is properly scheduled, but it then fails to complete because you provided an invalid Lucene query. Without the query itself or any other detail, such as the relevant Spring config or the action implementation, I can only tell you to:
double-check the Lucene query
verify that the error log appears precisely when you would expect your action to run