When I start my SAP MMC EC6 server, one service does not reach wait mode (ABAP)

Can someone help me get the service selected in the image into wait mode after starting the server?
Please let me know if a developer trace is needed to resolve this issue.

That particular process is a BATCH process, i.e. one that runs scheduled background tasks (maintained via transactions SM36/SM37). If the process is busy right after the server starts, there were scheduled tasks in status released waiting for execution, and as soon as the server was up, it started them.
If you want to make sure the system doesn't immediately start released background tasks, you'll have to set the status back to scheduled (which, thanks to a bit of weird translation, means they won't be executed because they are not released).
If you want to start the server without having a chance to first change the job status in SM37, you would either have to reset the status at database level (likely not officially supported by SAP) or first start the server without any BATCH processes (which will greet you with a number of big warning messages upon login), change the job status, and then restart the server with the BATCH processes. You can set the number of processes of each type in the instance profile (parameter rdisp/wp_no_btc).
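If you go the "no BATCH processes" route, that comes down to a single line in the instance profile (a sketch; the value to restore afterwards depends on how many batch work processes your system normally runs, e.g. 3):
rdisp/wp_no_btc = 0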

Related

Any way to cancel a request in SSMS?

I say cancel a "request" rather than a query because I can typically continue using Management Studio while queries are running. However, I have remote access to an Azure database that shuts off after an hour of no activity. When I send it a query while it's shut down, it takes a really long time to start up, and I'm not able to continue working during this time because Management Studio completely freezes.
I am literally still waiting for the request that prompted me to write this post to complete; it has been several minutes now. In this case I actually ran the query on the wrong connection and never meant to hit the Azure database, so it's especially annoying that I have to wait this long, lol.
If I didn't know better I would think it was permanently locked, but this has happened enough times now that I know it will eventually return control.
By your description, you have a SERVERLESS version of Azure SQL Database.
When it shuts off the compute due to lack of activity, it completely removes the compute portion of the service, leaving just your database on storage. When you then query it again for the first time, it needs to allocate compute to the database, start that up (with the redundant copies that Azure SQL provides), connect to your database, and ensure the gateway is up to date to direct your connection; only THEN will it accept a connection.
So Management Studio is waiting on that response, and I believe it also has some degree of retry, so it keeps checking until the connection is established.
You could change to a PROVISIONED service tier, where the database is available all the time (your billing will change, so be sure it is what you need) and the delay will stop; or you could run a PowerShell script or similar to ensure the database is available before connecting from SSMS.
While it is waiting for the response from the service that the database has started OK, there isn't a session available to KILL, so your only option on that front is to kill the client, i.e. use Task Manager to shut down SSMS.
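As a sketch of that last approach, even a trivial query sent with a generous login timeout is enough to trigger the resume (server, database, and credentials are placeholders):
sqlcmd -S yourserver.database.windows.net -d YourDatabase -U youruser -P yourpassword -l 120 -Q "SELECT 1;"
Once this returns, the compute is back and SSMS will connect immediately.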

TFS 2015 - Team Project won't delete, stays as Queued in status

I am trying to delete projects from the Admin console in TFS 2015. The state changes to "deleting" and the job is listed as "Queued" in the Status tab, but it never completes.
There are currently several jobs listed as "queued" in the Status tab, some from many months back. Selecting View Log never gets past "Waiting for the job to start", which makes sense because the job is queued.
I am not sure if or why one of the older jobs is stuck or otherwise blocking the later ones. Can delete requests be cancelled, to see whether the later ones will run properly?
TFS uses SQL jobs to do all kinds of maintenance. Can you check if they are running as scheduled?
Looking further, it seems to be a Windows service:
https://www.visualstudio.com/en-us/docs/setup-admin/tfs/architecture/background-job-agent
You can try deleting the team project with the TFSDeleteProject command:
TFSDeleteproject [/q] [/force] [/excludewss] /collection:URL TeamProjectName
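For example (the collection URL and project name are placeholders):
TFSDeleteproject /force /collection:http://your-tfs-server:8080/tfs/DefaultCollection BadProject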
Also increase the time-out period. By default, each Web service call that the TFSDeleteProject command issues to delete a component must complete within 10 minutes.
Another option is restarting the Team Foundation Background Job Agent service on your TFS server.
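For example, from an elevated command prompt on the TFS application tier (TfsJobAgent is the usual service name, but verify yours in services.msc):
net stop TfsJobAgent
net start TfsJobAgent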

Spring Batch restart crashed jobs

Hi Spring Batch users,
Regarding the documentation, http://docs.spring.io/spring-batch/reference/htmlsingle/#d5e1320:
"If the process died ("kill -9" or server failure) the job is, of course, not running, but the JobRepository has no way of knowing because no-one told it before the process died."
I try to find and restart stale job executions like this:
Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(jobName);
for (JobExecution jobExecution : jobExecutions) {
    // mark the stale execution as failed and finished so it becomes restartable
    jobExecution.setStatus(BatchStatus.FAILED);
    jobExecution.setEndTime(new Date());
    jobRepository.update(jobExecution);
    // then launch a new execution that resumes from the last committed point
    jobOperator.restart(jobExecution.getId());
}
But this seems very inconvenient:
1) I have to do this before any other (new) jobs can be started.
2) I have to handle multiple running server instances, so findRunningJobExecutions alone will not do the trick.
You can find other questions regarding this topic:
https://jira.spring.io/browse/BATCH-2433?jql=project%20%3D%20BATCH%20AND%20status%20%3D%20Open%20ORDER%20BY%20priority%20DESC
Spring Batch after JVM crash
I would love to see a solution that registers a "start-up clean jobs listener". That would still not fix the problems caused by the multi-server environment, because Spring Batch cannot tell whether a JobExecution marked STARTED is actually running on another instance.
Thanks for any advice
Alex
Your job cannot and should not recover "automatically" from a kill -9 scenario. A kill -9 is treated very differently from your application throwing a caught Exception. The reason is that you've effectively pulled the rug out from under the application without giving it a chance to reach a synchronization point with the database to commit any necessary information to the ExecutionContext or update the job/step status(es). Therefore, the last status touchpoint with the database remains, and the job will still look STARTED.
"OK, fine," you say, "but if I start another execution, I want it to find that STARTED execution and pick up where it left off." The problem is that there is no clean way for the framework to distinguish a job that is ACTUALLY RUNNING from one that failed but couldn't update the database. The framework correctly errs on the side of caution and prevents you from starting a job that already appears to be running, and this is a GOOD thing.
Why? Because let's assume your job was actually still running and you restarted it by accident. As coded, the framework will start to spin up, see your running execution, and fail with the message "A job execution for this job is already running". I can't tell you how many times we've been saved by this because someone accidentally launched a job twice!
If you were to implement the listener you suggest, the 2nd execution would instead be allowed to start and you'd have 2 different JVMs repeating the same work, possibly writing to the same files/tables and causing a huge data mess that could be impossible to clean up.
Trust me, in the event the Linux terminal kills your job or your job dies because the connection to the database has been severed, you WANT human eyes on those execution states before you attempt a restart.
Finally, on the off chance you actually wanted to kill your job, you can leverage several other standard patterns for stopping jobs:
Stop by throwing an Exception
Stop via JobOperator.stop()

SQL Server LCK_M_S only happens in production

I have a stored procedure, called by a SQL Server 2012 report, that takes an age to run in production compared to development because of a blocking session (wait type LCK_M_S).
The stored procedure runs instantaneously when executed in SQL Server Management Studio and also works well when called as part of the report from a dev laptop via Visual Studio.
When the report is uploaded to the production server this blocking issue appears.
How can I find out what is causing the LCK_M_S waits in production?
Execute this query when the problem happens again:
select *
from sys.dm_os_waiting_tasks t
inner join sys.dm_exec_connections c on c.session_id = t.blocking_session_id
cross apply sys.dm_exec_sql_text(c.most_recent_sql_handle) as h1
It will give you the spid of the session that caused the blocking, the resource on which it was blocked, and the text of the most recent query for that session. This should give you a solid starting point.
You have a couple of options:
Set up the blocked process report. Essentially, you set the blocked process threshold (s) system configuration option to a non-zero number of seconds and set up an event notification on the BLOCKED_PROCESS_REPORT event. You'll get an XML report each time any process is blocked for more than the threshold you've set. The downside is that it fires for anything getting blocked, not just your procedure; the upside is that the report tells you who is holding the incompatible lock.
Set up an extended events session for your procedure to capture the lock_released event where the duration is longer than you care to wait. The upside is that this is extremely targeted, and you can define the session so that you get very little noise. The downside is that you won't know which process is holding the incompatible lock (though you will get a fairly detailed description of the locked resource to further your investigation).
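A minimal sketch of the first option; the answer describes an event notification, but an extended events session is another common way to capture the same report (session and file names are placeholders):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 10; -- report anything blocked longer than 10 seconds
RECONFIGURE;
CREATE EVENT SESSION blocked_processes ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file (SET filename = N'blocked_processes');
ALTER EVENT SESSION blocked_processes ON SERVER STATE = START;
And a sketch of the second option, assuming the lock_released event exposes a duration field (in microseconds) as the answer describes:
CREATE EVENT SESSION long_lock_waits ON SERVER
ADD EVENT sqlserver.lock_released (
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE duration > 5000000) -- locks held longer than ~5 seconds, assuming microseconds
ADD TARGET package0.event_file (SET filename = N'long_lock_waits');
ALTER EVENT SESSION long_lock_waits ON SERVER STATE = START;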

SQL Server Agent Job is not running

I have a job that is supposed to run every day at 11 AM and 8 PM. About two weeks ago, it started ignoring the schedule. The "fix" I found was to start the job manually; the job would then respect the schedule for a while, but eventually the issue reappears.
The big problem is that there is no error message whatsoever. If the job fails, I am supposed to get a notification email, which I do not. There are no errors in the SQL Server Agent logs or the job history. In the job history, I can see clearly that the job skipped the schedule, since there are no entries; it looks like it never even started, as if the scheduled time had not arrived.
The schedule is set to run every day, and there is no limit on how long the job may run. SQL Server Agent is set to restart automatically if it stops unexpectedly.
Has anyone run into this problem before?
Check the user that is used to run the job. Maybe the user's password has expired, or the user itself is no longer active.
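A sketch for checking job owners (requires access to msdb; LOGINPROPERTY only reports password expiry for SQL logins):
SELECT j.name AS job_name,
       p.name AS owner_login,
       p.is_disabled,
       LOGINPROPERTY(p.name, 'IsExpired') AS password_expired
FROM msdb.dbo.sysjobs AS j
JOIN sys.server_principals AS p ON p.sid = j.owner_sid;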