Why does my hybris server get stuck at the "Localizing Types" step after starting?
I click on initialize in the HAC.
The process gets stuck here as shown.
I get no logs either.
I'm using hybris's default HSQL database.
This is what I get when I stop the server:
tenant <<master>> not yet started - got no active datasource
This happened to me as well. I noticed that the message was:
[main] [TypeLocalization] 1 threads will be used to localize type system.
The system hangs forever.
I found that the virtual machine had only 1 CPU.
When I increased the CPU count to 16, the message became:
[main] [TypeLocalization] 16 threads will be used to localize type system.
and the step took no more than 30 seconds!
Therefore, the solution is to increase the number of CPUs.
It looks like your database connection was interrupted.
Over the past day I have been experiencing this error very frequently, which results in the cloud instance needing to be reset in order to continue serving connections:
ERROR: unrecognized configuration parameter "cloudsql.enable_instance_password_validation"
This is operating on a PostgreSQL 14 GCP Cloud SQL community shared 1 vCPU, 0.614 GB instance but was also tested on the standard 1 vCPU, 3.7 GB instance where the problem persisted.
The only code that has changed since this started occurring is a listen/notify call using the Go pgx pool interface, which has been reverted, and the problem persists.
The problem hits regularly with any database calls (within 30 mins of a reset) and I have not set anything involving "enable_instance_password_validation" - I am also unable to find any parameters involving this name.
This was an interesting one; @enocom was correct in pointing out that the error reported by PostgreSQL was a red herring. Throughout the process there were several red herrings, because the problem spanned both the Go server instance and the PostgreSQL server.
The real reason for the crashes was that the Go pgx pool interface for interacting with the Postgres DB had a maximum 'DBMaxPools' cap of 14 connections by default. Once that number of connections is reached, the server hangs without any error log and all further connections are refused indefinitely.
To solve this problem all that is required is adding a 'pool_max_conns' within the dbURI when calling:
dbPool, err := pgxpool.Connect(context.Background(), dbURI)
This can be done with something like:
dbURI = fmt.Sprintf(`postgresql://%s:%s@%s:%s/%s?pool_max_conns=%s`, config.DBUser, config.DBPassword, config.DBAddress, config.DBPort, config.DBName, config.DBMaxPools)
Detailed instructions can be found here: pgxpool package info
When setting the pgx max conns, be sure to set this number to the same as (or close to) the maximum number of concurrent connections your server instance handles, to avoid this happening again. For GCP Cloud Run instances, this can be set within Deployment Revisions, under 'Maximum requests per container'.
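As a rough sketch (assuming pgx v4 and the same config fields as above; the AppConfig type and newPool function are made-up names for illustration), the same cap can also be set programmatically on the parsed pool config instead of via the URI:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/jackc/pgx/v4/pgxpool"
)

// AppConfig mirrors the question's config object (hypothetical field names).
type AppConfig struct {
    DBUser, DBPassword, DBAddress, DBPort, DBName string
}

func newPool(ctx context.Context, cfg AppConfig) (*pgxpool.Pool, error) {
    dbURI := fmt.Sprintf("postgresql://%s:%s@%s:%s/%s",
        cfg.DBUser, cfg.DBPassword, cfg.DBAddress, cfg.DBPort, cfg.DBName)

    poolCfg, err := pgxpool.ParseConfig(dbURI)
    if err != nil {
        return nil, err
    }
    // Cap the pool to match the serving instance's concurrency limit
    // (e.g. Cloud Run's 'Maximum requests per container').
    poolCfg.MaxConns = 14

    return pgxpool.ConnectConfig(ctx, poolCfg)
}

func main() {
    pool, err := newPool(context.Background(), AppConfig{
        DBUser: "user", DBPassword: "pass", DBAddress: "127.0.0.1", DBPort: "5432", DBName: "app",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer pool.Close()
}

Either approach (the pool_max_conns URI parameter or Config.MaxConns) sets the same limit; the point is to keep it in line with the instance's maximum concurrent requests.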
We are currently upgrading a TYPO3 installation with about 60,000 pages to v9.
The upgrade wizard "Introduce URL parts ("slugs") to all existing pages" does not finish. In the browser (Install Tool) I get a timeout.
Calling it via
./vendor/bin/typo3cms upgrade:wizard pagesSlugs
results in the following error:
[ Symfony\Component\Process\Exception\ProcessSignaledException ]
The process has been signaled with signal "9".
After using my favourite internet search engine, I think that most likely means "out of memory".
Sadly the database doesn't seem to be touched at all, so no pages got a slug after that. That means simply running this process several times will not help. Watching the process, the PHP process takes all the memory it can get, then starts filling the swap. When the swap is full, the process crashes.
Tested so far on a local Docker setup with a 16 GB RAM host and on a server with 8 cores but 8 GB RAM (the DB is on an external machine).
Any ideas to fix that?
After debugging I found out that the reason for this is messed-up relations in the database: there are non-deleted pages which point to non-existing parents. This was mainly caused by a heavy clean-up of the database beforehand. While the wizard does not check for that and could be improved in that respect, the main problem in this case is my database.
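For anyone who wants to check for the same situation, a query along these lines (assuming the standard TYPO3 pages schema with uid, pid and deleted columns) lists non-deleted pages whose parent record no longer exists:

-- Non-deleted pages whose parent (pid) is missing.
-- Root pages (pid = 0) are excluded on purpose.
SELECT p.uid, p.pid, p.title
FROM pages AS p
LEFT JOIN pages AS parent ON parent.uid = p.pid
WHERE p.deleted = 0
  AND p.pid <> 0
  AND parent.uid IS NULL;

If the parents were soft-deleted rather than removed, check for parent.deleted = 1 as well.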
Can someone help me with how to make the service selected in the following image get into wait mode after starting the server?
Please let me know if developer trace is required to be posted for resolving this issue.
That particular process is a BATCH process, i.e. one that runs scheduled background tasks (maintained via transactions SM36/SM37). If the process is busy right after starting the server, that means there were scheduled tasks with status released waiting for execution, and as soon as the server was up, it started those tasks.
If you want to make sure the system doesn't immediately start released background tasks, you'll have to set the status back to scheduled (which, thanks to a bit of weird translation, means they won't be executed because they are not released).
If you want to start the server without having a chance to first change the job status in SM37, you would either have to reset the status at database level (likely not officially supported by SAP) or first start the server without any BATCH processes (which would give you a number of great big warning messages upon login) and change the job status before then restarting the server with the BATCH processes. You can set the number of processes for each type in the profile of your instance (parameter rdisp/wp_no_btc).
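For example, a sketch of the relevant line in the instance profile to start without any BATCH work processes (set the value back and restart once the job statuses have been changed in SM37):

rdisp/wp_no_btc = 0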
I have created a new replication. Here is the issue I am facing:
When I start the 'View Snapshot Agent Status', it just starts working; the first line shows "Starting Agent" and it keeps working continuously.
..
After some time it shows the following message:
"The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active."
I tried the following solution that I found: I increased the value of the @heartbeat_interval property of the distributor from 10 to 30, but with no success.
I am using SQL Server 2008 R2.
Any help would be really appreciated.
Maybe this will help someone else:
I made the following changes and my replication is working perfectly.
1 - The job username and password must have full Windows access and permissions.
2 - You must be logged in as the user that you will use in the replication script to create the replication.
That's all.
Thanks!!
I had the same behavior.
Some of my articles are huge. When the replica's sync was done, the agent hung with the same message as yours.
After ~20 minutes it began running as expected.
I thought it was not normal behavior, but after creating my second subscription the error appeared again. It was gone after approximately 20 minutes.
I believe it encounters a high load of data (if that is the case) and hangs for a while.
Hope it helps.
We have recently upgraded our NServiceBus project from version 4 to version 5. We are using NHibernate for data storage to an SQL server database. Since the upgrade we have started to encounter an error around connection timeouts and the TimeoutEntity table. The NServiceBus services run fine for a while - at least a couple of hours and then they stop.
When investigating the cause of this, it seems to be down to the polling query on the TimeoutEntity table - the query runs every minute, and if it takes more than 2 seconds to complete, an error is raised and CriticalError.Raise is called - this causes NServiceBus to stop the service.
One route of investigation is to find out the cause of the timeouts, but we would also like to know why this functionality was changed - in the previous version of NServiceBus, Logger.Warn was called rather than CriticalError.Raise. Would anybody know why this change was made in NServiceBus 5 and what we can do to mitigate it?
You can configure the time to wait before raising a critical error, see http://docs.particular.net/nservicebus/errors/critical-exception-for-timeout-outages on how to do it.
You can also define your own critical error action by using
config.DefineCriticalErrorAction((message, exception) =>
{
    // handle the critical error here, e.g. log it and decide whether to stop the endpoint
});