SonarQube 5.1.1 analysis results for different branches giving timeouts with MySQL DB

We are using SonarQube 5.1.1 with a MySQL database and are running into database timeouts. We ran the MySQL tuning primer script and increased the InnoDB lock wait timeout in /etc/my.cnf, but it made no difference. One of the observations from the mysqltuner output is:
"of 7943 temp tables, 40% were created on disk"
Note: BLOB and TEXT columns are not allowed in memory tables.
Are there any suggestions for dealing with Sonar analysis results for a bunch of different branches?
Perhaps using Postgres instead of MySQL?
We get errors as shown below:
Failed to process analysis report 8 of project "X"
org.apache.ibatis.exceptions.PersistenceException:
Error committing transaction. Cause: org.apache.ibatis.executor.BatchExecutorException:
org.sonar.core.issue.db.IssueMapper.insert (batch index #1) failed.
Cause: java.sql.BatchUpdateException: Lock wait timeout exceeded; try
restarting transaction
Cause: org.apache.ibatis.executor.BatchExecutorException: org.sonar.core.issue.db.IssueMapper.insert (batch index #1) failed.
Cause: java.sql.BatchUpdateException: Lock wait timeout exceeded; try
restarting transaction
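Before blaming the database engine, it may be worth confirming what is actually blocking while a report is processed. A minimal diagnostic sketch for MySQL 5.x (these information_schema tables were removed in MySQL 8.0; the timeout and size values are examples, not recommendations, and any permanent change still belongs in /etc/my.cnf):

-- Current lock wait limit (the InnoDB default is 50 seconds).
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Raise it for the running server; mirror the change in /etc/my.cnf.
SET GLOBAL innodb_lock_wait_timeout = 300;
-- Transactions currently running, and the lock waits between them.
SELECT * FROM information_schema.INNODB_TRX;
SELECT * FROM information_schema.INNODB_LOCK_WAITS;
-- For the temp-tables-on-disk observation: raising these only helps result
-- sets without BLOB/TEXT columns, which always spill to disk.
SET GLOBAL tmp_table_size = 67108864;
SET GLOBAL max_heap_table_size = 67108864;

Note that SET GLOBAL only affects connections opened after the change, so SonarQube's pooled connections will need to be re-established (for example by restarting the server).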

Related

SQL server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator

Against two of the tables, no INSERT, SELECT, DELETE, or DROP TABLE command will execute; every attempt shows the error below:
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO
A quick search on Google finds a similar thread; I've extracted the possible solution below for easy reference.
Error 829 means there's an I/O subsystem problem - something called a 'hard I/O error'. SQL Server asks the OS to read a page and the OS says no: the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot that it uses to get a transactionally-consistent point-in-time view of the database. There are a number of different causes of this:
There may not be any free space on the volume(s) storing the data files for the database
The SQL service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
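If you do need to create the snapshot yourself, a sketch of what that looks like (the snapshot name, logical file name, and path are placeholders - look up the logical name in sys.database_files; database snapshots also require an edition that supports them):

CREATE DATABASE yourdbname_snap
ON (NAME = yourdbname_data, FILENAME = 'D:\Snapshots\yourdbname_snap.ss')
AS SNAPSHOT OF yourdbname;
GO
-- Run the consistency check against the snapshot instead of the live database.
DBCC CHECKDB (yourdbname_snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO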
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data loss. You're also going to have to do some root-cause analysis to figure out what caused the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?
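For reference, both of those checks can be run from T-SQL; a quick sketch (assumes access to master and msdb):

-- Page verify setting per database; you want CHECKSUM here.
SELECT name, page_verify_option_desc FROM sys.databases;
-- Pages that have previously failed reads are recorded here (SQL 2005+).
SELECT * FROM msdb.dbo.suspect_pages;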

SparkSQL JDBC writer fails with "Cannot acquire locks error"

I'm trying to insert 50 million rows from a Hive table into a SQL Server table using the SparkSQL JDBC writer. Below is the line of code I'm using to insert the data:
mdf1.coalesce(4).write.mode(SaveMode.Append).jdbc(connectionString, "dbo.TEST_TABLE", connectionProperties)
The Spark job fails after processing 10 million rows with the error below:
java.sql.BatchUpdateException: The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.
But the same job succeeds if I use the line of code below:
mdf1.coalesce(1).write.mode(SaveMode.Append).jdbc(connectionString, "dbo.TEST_TABLE", connectionProperties)
I'm trying to open 4 parallel connections to SQL Server to optimize performance, but the job keeps failing with the "cannot acquire locks" error after processing 10 million rows. Also, if I limit the dataframe to just a few million rows (less than 10 million), the job succeeds even with four parallel connections.
Can anybody tell me whether SparkSQL is suited to exporting huge volumes of data into an RDBMS, and whether I need to make any configuration changes on the SQL Server table?
Thanks in Advance.
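Not an authoritative fix, but before changing anything it is worth confirming on the SQL Server side that lock memory really is the bottleneck. A diagnostic sketch (assumes VIEW SERVER STATE permission; dbo.TEST_TABLE is the table from the question):

-- Count the locks each of the four Spark connections is holding.
SELECT request_session_id, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id;
-- A value of 0 means lock memory is sized dynamically (the default).
SELECT name, value_in_use FROM sys.configurations WHERE name = 'locks';
-- Memory currently held by the lock manager.
SELECT * FROM sys.dm_os_memory_clerks WHERE type = 'OBJECTSTORE_LOCK_MANAGER';

If the counts confirm lock exhaustion, committing in smaller batches from Spark (for example via the JDBC writer's batchsize option) is usually more effective than raising server-side limits.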

Is there a size limit on appending ORC data files to Vora tables?

I created a table in Vora 1.3 and tried to append data to it from ORC files that I got from an SAP BW archiving process (NLS on Hadoop). I had 20 files, in total containing approximately 50 million records.
When I tried to use the "files" setting in the APPEND statement as "/path/*", Vora returned this error message after about an hour:
com.sap.spark.vora.client.VoraClientException: Could not load table F002_5F: [Vora [eba156.extendtec.com.au:42681.1640438]] java.lang.RuntimeException: Wrong magic number in response, expected: 0x56320170, actual: 0x00000000. An unsuccessful attempt to load a table might lead to an inconsistent table state. Please drop the table and re-create it if necessary. with error code 0, status ERROR_STATUS
The next thing I tried was appending the data from each file using separate APPEND statements. On the 15th append (of 20) I got the same error message.
The error indicates that the Vora engine on node eba156.extendtec.com.au is not available. I suspect it either crashed or ran into an out-of-memory situation.
You can check the log directory for a crash dump. If you find one, please open a customer message for further investigation.
If you do not find a crash dump, it is likely an out-of-memory situation. You should find confirmation in either the engine log file or in /var/log/messages (if the OOM killer ended the process). In that case, the available memory is not sufficient to load the data.

SQL Server 2008 R2: The transaction log for database 'MGR' is full due to 'ACTIVE_TRANSACTION'

I ran a query in which I wanted to update more than 130 million records. After a few hours I got an error:
The transaction log for database 'MGR' is full due to 'ACTIVE_TRANSACTION'.
Now I've got 70 MB free on my C: drive.
I suppose the problem was too little disk space and that's why the query failed, but how can I now regain the disk space that was lost before the query?
I'm using SQL Server 2008 R2.
Thanks for any hints.
The problem has to do with how SQL Server logs all changes during an active transaction. While a transaction is active the log cannot be flushed, so one huge transaction keeps the log growing until it exceeds its capacity. The amount of logging depends on several factors, most notably the recovery model (FULL recovery generates the most logging activity). You can break the transaction down into small chunks so the log can be flushed in between, as shown below; also look into the TABLOCK table hint. The lost disk space has almost certainly gone to the log file - check that.
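As an illustration of the chunking idea, a hedged sketch (the table and column names are made up; tune the batch size to your log throughput):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- Update at most 50,000 rows per transaction; each batch commits on its own.
    UPDATE TOP (50000) dbo.BigTable
    SET SomeColumn = 'new value'
    WHERE SomeColumn <> 'new value';  -- predicate keeps finished rows out of later batches
    SET @rows = @@ROWCOUNT;
END

Under the SIMPLE recovery model the log can truncate after each batch commits; under FULL you still need regular log backups before the space can be reused.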

Prevent MS SQL 2005 master DB from being corrupted

I am trying to work out what caused the following corruption.
2011-06-29 10:47:26.42 spid5s Starting up database 'master'.
2011-06-29 10:47:26.53 spid5s Error: 9003, Severity: 20, State: 1.
2011-06-29 10:47:26.53 spid5s The log scan number (216:72:1) passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.
2011-06-29 10:47:26.53 spid5s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
I can find plenty of threads and information on how to recover databases when the master DB is corrupt, and I can recover them successfully.
HOWEVER, it is not very satisfactory for customers to have to perform these operations. I have been able to examine the event logs from when the corruption occurs: the server is working fine, then the computer is shut down; a few hours later the computer is switched on and the master DB is corrupted.
Any help greatly appreciated
One of:
disk corruption. Run chkdsk etc. with SQL Server shut down
someone has been playing with the MDF/LDF files
The master DB is started once, when SQL Server starts up: so why did this happen? Patch? BSOD? PEBKAC? Note: the MDF/LDF files won't be locked while SQL Server is shut down...
I can't recall a corrupt master, ever, unless it's one of the 3 reasons above
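Once the instance is back up, it may also be worth scanning the SQL error log for earlier I/O trouble, since errors 823/824 often precede this kind of failure. A sketch - note that xp_readerrorlog is undocumented, though it has shipped since SQL 2005:

-- Search the current error log for 823/824 I/O error messages.
EXEC sys.xp_readerrorlog 0, 1, N'823';
EXEC sys.xp_readerrorlog 0, 1, N'824';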