HSQLDB error writing an object into the database - hsqldb

I have a server that should write to an HSQLDB database hosted on another server, but the writing process fails with the error shown in the log trace below.
Apparently the "current / latest database version" messages in the log are not about upgrading the database engine itself; they refer to some data object that the task is trying to write into a database table, without success.
I have no idea what causes this error or how to solve it, especially since another node that hits the same server database, with apparently the same configuration, runs perfectly.
Any help would be appreciated.
Many thanks.
2022-11-08 12:45:00,001 INFO [.maudit.task.BackupLogProcessorTask] Task finished: Backup Log Proccesor.
2022-11-08 12:45:00,001 INFO [.common.operations.rest.client.CommonOperationsClient] Starting the call to the getLastDDBBVersion method of the CommonOperations rest service. Current database version: 167.
2022-11-08 12:45:03,080 INFO [.gob.afirma.common.operations.rest.client.CommonOperationsClient] The latest version of the database available is 329. The update proceeds.
2022-11-08 12:45:04,155 ERROR [.common.operations.rest.client.CommonOperationsClient] Error updating database. Some statement has not been executed correctly.
java.sql.BatchUpdateException: integrity constraint violation: unique constraint or index violation; "PSC_UNIQUE_IDENTIFICATOR" table: "PSC"
at org.hsqldb.jdbc.JDBCStatement.executeBatch(Unknown Source)
at .persistence.utils.UtilsDataBase.updateDataBase(UtilsDataBase.java:411)
at .common.operations.rest.client.CommonOperationsClient.getLastDDBBVersion(CommonOperationsClient.java:142)
at sun.reflect.GeneratedMethodAccessor576.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at .rest.utils.UtilsCommonOperationsClientOperations.getLastDDBBVersion(UtilsCommonOperationsClientOperations.java:228)
at .ddbb.version.rest.task.DDBBVersionTask.doActionOfTheTask(DDBBVersionTask.java:86)
at .mplanificacion.Task.doTask(Task.java:52)
at .quartz.job.AbstractAfirmaTaskQuartzJob.execute(AbstractAfirmaTaskQuartzJob.java:105)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
2022-11-08 12:45:04,157 WARN [.malarm.AlarmsModuleManager] It has been generated the AL055 alarm in the module [MODGENERAL]: Not possible upgrade to the latest local database version...
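Since the stack trace shows the upgrade statements going through Statement.executeBatch(), one way to narrow this down is to catch the BatchUpdateException and inspect getUpdateCounts() to see which batched statement hits the PSC_UNIQUE_IDENTIFICATOR constraint. A minimal sketch against an in-memory HSQLDB, with a placeholder schema and statements (the real upgrade script and the column behind the constraint are not shown in the log):

```java
import java.sql.*;

public class BatchDiagnostics {
    public static void main(String[] args) throws Exception {
        // Placeholder in-memory database; the real case runs against the remote HSQLDB server.
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:diag", "SA", "");
             Statement st = con.createStatement()) {
            // Placeholder table with a unique constraint named like the one in the log.
            st.execute("CREATE TABLE PSC (ID INT, NAME VARCHAR(50),"
                    + " CONSTRAINT PSC_UNIQUE_IDENTIFICATOR UNIQUE (ID))");
            st.addBatch("INSERT INTO PSC (ID, NAME) VALUES (1, 'a')");
            st.addBatch("INSERT INTO PSC (ID, NAME) VALUES (1, 'b')"); // duplicate key -> violation
            try {
                st.executeBatch();
            } catch (BatchUpdateException e) {
                int[] counts = e.getUpdateCounts();
                // Per JDBC, a driver either stops at the first failure (short array)
                // or keeps going and marks failed statements with EXECUTE_FAILED.
                for (int i = 0; i < counts.length; i++) {
                    if (counts[i] == Statement.EXECUTE_FAILED) {
                        System.out.println("Batch statement #" + i + " failed");
                    }
                }
                System.out.println(counts.length + " update counts returned; cause: " + e.getMessage());
            }
        }
    }
}
```

Logging the offending statement (or at least its index) this way should reveal which row the upgrade script is trying to insert twice into PSC.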

Related

Hive: using the Flink SQL Client to sync data from Hive throws this bug

org.apache.flink.table.client.SqlClient [] - SQL Client must stop. Unexpected exception. This is a bug. Please consider filing an issue.
java.lang.NoSuchMethodError: com.facebook.fb303.FacebookService$Client.sendBaseOneway(Ljava/lang/String;Lorg/apache/thrift/TBase;)V
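A NoSuchMethodError like this usually means two different versions of the fb303/libthrift classes end up on the SQL Client's classpath, for example one bundled inside a Hive connector jar and another pulled in by hive-exec. A hypothetical diagnostic sketch (only the class name is taken from the error; everything else is illustrative) to see which jar the class is actually loaded from:

```java
// Prints the jar that com.facebook.fb303.FacebookService$Client is loaded from,
// which helps spot a conflicting fb303/libthrift copy on the classpath.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("com.facebook.fb303.FacebookService$Client");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
```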

Symfony 4 - SQLSTATE[42000] on ALTER TABLE for unknown reasons

I'm here because I need your help. I'm working on a Symfony project and I just finished modifying my database with MySQL Workbench (I did this because I have to present the database at the end of the school year). After that, I ran "Forward Engineer" to push all the tables and columns to the database.
Then I entered this line to create my entities:
php bin/console doctrine:mapping:import "App\Entity" annotation --path=src/Entity
Then I ran this line to generate my setters/getters and my repositories:
php bin/console make:entity --regenerate App
When that was done, I used this line:
php bin/console d:m:diff
And I migrated everything to the database:
php bin/console d:migrations:execute --up 20190321194410
After this command, I got a ton of errors, but they all come down to the same thing.
This query is executed, and I don't know why:
ALTER TABLE school_has_level RENAME INDEX fk_school_has_school_level_school1_idx TO IDX_102C1F10C32A47EE
And then there are these errors:
In AbstractMySQLDriver.php line 79:
An exception occurred while executing 'ALTER TABLE school_has_level RENAME INDEX fk_school_has_school_level_school1_idx TO IDX_102C1F10C32A47EE':
SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'INDEX fk_school_has_school_level_school1_idx TO IDX_102C1F10C32A47EE' at line 1
I searched for tips on the Internet but didn't find anything about this. The problem is that I don't understand where this query comes from...
Thank you in advance for your answers, and sorry for my English.
Best regards,
Mathieu

setColumnRemarks is not supported on mysql using Liquibase

I am getting the following error while running the command-line version of Liquibase v3.5.5:
Unexpected error running Liquibase: Validation Failed:
1 changes have validation failures
setColumnRemarks is not supported on mysql, PosMS1-V0.0.1-104.xml::1531369854279-1::otalio (generated)
liquibase.exception.ValidationFailedException: Validation Failed:
1 changes have validation failures
setColumnRemarks is not supported on mysql, PosMS1-V0.0.1-104.xml::1531369854279-1::otalio (generated)
I was able to use Liquibase to add remarks to columns while renaming and creating columns, but it throws this error when I compute the diff of two databases whose only difference is an added remark.

Adding a new parameter throws an error in the Pentaho Report Designer

I have designed a report which works well. I am trying to add another SQL query to it. Previewing the SQL query works without any hiccups.
But when I try to add a parameter for the same query, I get two different errors depending on which MySQL connector I am using.
Earlier I was using mysql-connector-java-5.0.8-bin and the error was:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query: (a few lines down )
Caused by: java.sql.SQLException: Stopped by user.
Then I changed the connector to mysql-connector-java-5.1.36-bin, and the error changed to:
org.pentaho.reporting.engine.classic.core.ReportDataFactoryException: Failed at query:
Caused by: com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
Any suggestions would be helpful. I am using Pentaho 5.3.0.0-213 on Windows 8.1 Pro, although the same problem exists when I run Pentaho 5.3.0.0-213 on Ubuntu 14.04.
Thanks
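Both errors boil down to the statement being cancelled (the 5.1.x connector reports it as a timeout), so one way to isolate the problem is to run the same parameterized query directly over JDBC with the same connector and a generous timeout, outside the report designer. A minimal sketch where the URL, credentials, query and parameter are placeholders, not the real report query:

```java
import java.sql.*;

public class ParamQueryTimeoutCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and query; substitute the report's data source and parameter query.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/reportdb", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                "SELECT id, name FROM some_table WHERE region = ?")) {
            ps.setQueryTimeout(300);   // generous timeout, in seconds
            ps.setString(1, "EU");     // placeholder parameter value
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}
```

If the query finishes quickly here but still fails in the designer, the cancellation is coming from the report side rather than from MySQL.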

Insufficient data written when inserting rows

I am facing this error today when running my unit test that inserts some rows into my BigQuery table:
Caused by: java.io.IOException: insufficient data written
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.close(HttpURLConnection.java:3213)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:81)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:960)
at com.google.api.client.googleapis.media.MediaHttpUploader.executeCurrentRequest(MediaHttpUploader.java:482)
at com.google.api.client.googleapis.media.MediaHttpUploader.executeCurrentRequestWithBackOffAndGZip(MediaHttpUploader.java:504)
at com.google.api.client.googleapis.media.MediaHttpUploader.executeUploadInitiation(MediaHttpUploader.java:456)
at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:348)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:418)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
I thought it was due to the new version of google-http-client (1.16.0.rc), because I updated it just before running the test, but rolling back to 1.15.0-rc has no effect.
Any idea?
Me too. It also seems like a sign that BigQuery just stops receiving any data: if you query your table with count(*) after this exception, the result won't change any more. If I keep my program running for a while, it gives me errors such as:
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1343)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
waiting for answers...
These errors usually happen when there is a communication failure, especially with large files.
The way to avoid them is to use resumable upload.
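With the Java client shown in the stack trace, forcing resumable upload comes down to disabling direct upload on the request's MediaHttpUploader. A minimal sketch, assuming an already-authorized Bigquery client and a pre-built load Job (names, MIME type and chunk size here are illustrative, not taken from the question):

```java
import com.google.api.client.googleapis.media.MediaHttpUploader;
import com.google.api.client.http.FileContent;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.Job;

import java.io.File;

public class ResumableLoad {

    // bigquery: an authorized com.google.api.services.bigquery.Bigquery instance (construction omitted).
    static Job loadResumable(Bigquery bigquery, String projectId, Job job, File data) throws Exception {
        FileContent content = new FileContent("application/octet-stream", data);
        Bigquery.Jobs.Insert insert = bigquery.jobs().insert(projectId, job, content);

        MediaHttpUploader uploader = insert.getMediaHttpUploader();
        uploader.setDirectUploadEnabled(false);                      // force resumable (chunked) upload
        uploader.setChunkSize(MediaHttpUploader.MINIMUM_CHUNK_SIZE); // smaller chunks make retries cheaper

        return insert.execute();
    }
}
```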