What happens if one page of a process is paged out of RAM? - process

I would like to understand what exactly happens when a page is paged out of RAM.
If there is a process A and one of its pages gets paged out, does that mean process A is automatically "blocked", or is it still possible that process A keeps running?
So I would like to know what the process status of that process is.
It is not about a specific problem, just about understanding memory paging.
So far I have only read about how the system decides which page should get paged out.
Sorry for my English, btw.

Related

TYPO3 9 install wizard pagesSlugs crashes on an installation with many pages

We are currently upgrading a TYPO3 installation with about 60,000 pages to V9.
The upgrade wizard "Introduce URL parts ("slugs") to all existing pages" does not finish. In the browser (Install Tool) I get a time-out.
Calling it via
./vendor/bin/typo3cms upgrade:wizard pagesSlugs
results in the following error:
[ Symfony\Component\Process\Exception\ProcessSignaledException ]
The process has been signaled with signal "9".
After using my favourite internet search engine, I think that most likely means "out of memory".
Sadly, the database doesn't seem to be touched at all - so no pages got their slug after that. That means just running this process several times will not help. Watching the process, the PHP process takes all the memory it can get, then starts filling the swap. When the swap is full, the process crashes.
Tested so far on a local Docker setup with a 16 GB RAM host and on a server with 8 cores but 8 GB RAM (the DB is on an external machine).
Any ideas how to fix that?
After debugging I found out that the reason for this is messed-up relations in the database: there are non-deleted pages which point to non-existing parents. This was mainly caused by a heavy clean-up of the database beforehand. Aside from the fact that the wizard does not check for this (which could be an improvement), the main problem in this case is my database.
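In case someone else runs into the same thing: a quick way to spot such orphaned records is a self-join on the pages table. This is only a sketch against the default TYPO3 schema (uid/pid/deleted columns); adjust it to your own setup before relying on the result.
-- Find non-deleted pages whose parent (pid) no longer exists.
-- pid = 0 is the root level in TYPO3, so it is excluded on purpose.
SELECT p.uid, p.pid, p.title
FROM pages AS p
LEFT JOIN pages AS parent ON parent.uid = p.pid
WHERE p.deleted = 0
  AND p.pid <> 0
  AND parent.uid IS NULL;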

SSAS Tabular Cube Reload (Seems to need a user to trigger the load of the data from disk)

We are seeing some odd behaviour on our SSAS instances. We process our cubes as part of an overnight job on different environments, on our prod environment we process the cube on a separate server and then sync it out to a set of user facing servers. We are however seeing this behaviour even on environments where we process and query on a single instance.
The first user that hits any environment with fresh data seems to trigger a reload of the cube data from disk. Given we have 2 cubes that run to some 20 GB, this takes a while. During this we are seeing low CPU utilisation, but we can see the memory footprint of the SSAS instance spooling up; this is very visible if the instance has just been started, as it seems to start using a couple of hundred MB initially and then spools up to 22 GB, at which point it becomes responsive for end users. During the spool-up, DAX Studio/Excel/SSMS all seem to hang as far as the end user is concerned. Profiler isn't showing anything useful other than very slow responses to metadata discover requests.
Is there a setting somewhere that can change this? Or do I have to run some DAX against the cube to "prewarm" it?
Is this something I've missed in the past because all my models were pretty small (sub 1 GB)?
This is SQL 2016 SP2 running Tab Models at compat 1200.
Many thanks
Steve
I see that you are suffering from an acute OLAP cube cold. :)
You need to get it warmer (as you've guessed, you need to issue a command against it after (re)starting the service).
What you want to do, is issue a discover command - a query like this one should be enough:
SELECT * FROM $System.DBSCHEMA_CATALOGS
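As a side note, if you want to verify that the warm-up actually pulled the model back into memory, SSAS exposes a memory-usage DMV you can query in the same way; treat this as an optional sanity check rather than part of the fix.
-- Lists memory consumed per object; useful to confirm the model is resident.
SELECT * FROM $System.DISCOVER_OBJECT_MEMORY_USAGE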
If you want the full story, and a detailed explanation on how to automate this warming, you can find my post here: https://fundatament.com/2018/11/07/moments-before-disaster-ssas-tabular-is-not-responding-after-a-server-restart/
Hope it helps.
Have fun. :)

How and why do we test BufferPool HIT RATIO graphs

We are testing buffer pool HIT RATIO graphs for DB calls that happen through stored procedures (DB2).
Now what I don't understand is why and how we test them.
When I googled BUFFERPOOL, I learned that when pages (our data, I guess) have not changed, those pages are read from the buffer pool; otherwise they are read from the hard disk.
So my questions are:
1. How do we test this for a DB call?
2. If I am requesting some data continuously, and the data might not be changing, what should the buffer pool HIT RATIO graph look like?
3. If the graph is at 100 percent, or drops to 0, what does that mean? Basically, how do we check which level in the graph is good or bad?
On the whole I want to understand the concept of looking at buffer pool HIT RATIO graphs.
4. Which scenario is right for testing this? If my database is not changing, can I still look at the results and compare them with a DB which changes frequently?
If anyone can give some links for this, that will also help.
Buffer Cache Hit Ratio shows how SQL Server utilizes buffer cache
“Percent of page requests satisfied by data pages from the buffer pool”
It gives the ratio of the data pages found and read from the SQL Server buffer cache and all data page requests. The pages that are not found in the buffer cache are read from the disk, which is significantly slower and affects performance.
For more info : http://www.sqlshack.com/sql-server-memory-performance-metrics-part-4-buffer-cache-hit-ratio-page-life-expectancy/
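If you want to read the value yourself on SQL Server rather than from a monitoring graph, the ratio can be computed from two performance counters exposed through a DMV (the question is about DB2, where the MON_GET_BUFFERPOOL table function gives the equivalent pool hit data, but the idea is the same):
-- The hit ratio must be derived from the counter and its base value.
SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.object_name LIKE '%Buffer Manager%';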
This link answers my questions 1, 2 & 3.
Now if someone can share their experience for question 4, that would be a big help.
Actually, we are trying to monitor performance issues that could be caused in the actual application's DB by testing our stored procedures in another DB which we use for test/development purposes.

How to share the APC user cache between CLI and Web Server instances?

I am using PHP's APC to store a large amount of information (with apc_fetch(), etc.). This information occasionally needs analyzed and dumped elsewhere.
The story goes, I'm getting several hundred hits/sec. These hits increase various counters (with apc_inc(), and friends). Every hour, I would like to iterate over all the values I've accumulated, and do some other processing with them, and then save them on disk.
I could do this as a random or time-based switch in each request, but it's a potentially long operation (may require 20-30 sec, if not several minutes) and I do not want to hang a request for that long.
I thought a simple PHP cronjob would do the task. However, I can't even get it to read back the cache information.
<?php
print_r(apc_cache_info());
?>
Yields a seemingly different APC memory segment, with:
[num_entries] => 1
(The single entry seems to be an opcode cache of the script itself.)
While my webserver, powered by nginx/php5-fpm, yields:
[num_entries] => 3175
So, they are obviously not sharing the same chunk of memory. How can I either access the same chunk of memory in the CLI script (preferred), or if that is simply not possible, what would be the absolute safest way to execute a long running sequence on say, a random HTTP request every hour?
For the latter, would using register_shutdown_function() and immediately set_time_limit(0) and ignore_user_abort(true) do the trick to ensure execution completes and doesn't "hang" anyone's browser?
And yes, I am aware of redis, memcache, etc that would not have this problem, but I am stuck to APC for now as neither could demonstrate the same speed as APC.
This is really a design issue and a matter of selecting preferred costs vs. payoffs.
You are thrilled by the speed of APC since you do not spend time to persist the data. You also want to persist the data but now the performance hit is too big. You have to balance these out somehow.
If persistence is important, take the hit and persist (file, DB, etc.) on every request. If speed is all you care about, change nothing - this whole question becomes moot. There are cache systems with persistent storage that can optimize your disk writes by aggregating what gets written to disk and when but you will generally always have a payoff between the two with varying tipping points. You just have to choose which of those suits your objectives.
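Purely as an illustration of what "take the hit and persist on every request" could look like on the DB side (the table and counter names are made up; MySQL upsert syntax):
-- Hypothetical counters table: one row per counter, value is the running total.
CREATE TABLE IF NOT EXISTS counters (
    name  VARCHAR(191) PRIMARY KEY,
    value BIGINT NOT NULL DEFAULT 0
);
-- One cheap write per hit instead of an hourly bulk dump.
INSERT INTO counters (name, value)
VALUES ('pageviews', 1)
ON DUPLICATE KEY UPDATE value = value + 1;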
There will probably never exist an enduring, wholesome technological solution to the wolf being sated and the lamb being whole.
If you really must do it your way, you could have a cron that CURLs a special request to your application which would trigger persisting your cache to disk. That way you control the request, its timeout, etc., and don't have to worry about everything users might do to kill their requests.
Potential risks in this case, however, are data integrity (as you will be writing the cache to disk while it is being updated by other requests in the meantime) as well as requests being served while you are persisting the cache paying the performance hit of your server being busy.
Essentially, we introduced a bundle of hay to the wolf/lamb dilemma ;)

WinForms ReportViewer: slow initial rendering

UPDATE 2.4.2010
Yeah, this is an old question but I thought I would give an update. So, I'm working with the ReportViewer again and it's still rendering slowly on the initial load. The only difference is that the SQL database is on the reporting server.
UPDATE 3.16.2009
I have done profiling and it's not the SQL that is making the ReportViewer render slowly on the first call. On the first call, the ReportViewer control locks up the UI thread and makes the program unresponsive. After about 5 seconds the ReportViewer will unlock the UI thread and display "Report is being generated" and then finally show the report. I know 5 seconds is not much but this shouldn't be happening. My coworker does the same thing in a program of his and the ReportViewer immediately displays the "Report is being generated" upon any request.
The only difference is that the reporting server is on one server and the data is on another server. However, when I am developing the reports within SSRS, there is no delay.
UPDATE
I have noticed that only the first load of the ReportViewer takes a long time; each subsequent load of the same or different reports loads fast.
I have a WinForms ReportViewer that I'm using in Remote processing mode that can take up to 30 seconds to render when the ReportViewer.RefreshReport() method is called. However, the report itself runs fast.
This is the code to setup my ReportViewer:
rvReport.ProcessingMode = ProcessingMode.Remote
rvReport.ShowParameterPrompts = False
rvReport.ServerReport.ReportServerUrl = New Uri(_reportServerURL)
rvReport.ServerReport.ReportPath = _reportPath
This is where the ReportViewer can take up to 30 seconds to render:
rvReport.RefreshReport()
I found the answer on other forums. MSDN explains that a DLL is searching for some Verisign web server and it takes forever... there are two ways to turn it off: one is a checkbox in Internet Explorer, and the other is adding some lines to the app.config file of the app.
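For reference, the app.config lines usually meant here are the ones that disable the Authenticode publisher-evidence check (the commonly cited fix for that Verisign/CRL lookup delay; check that it applies to your .NET Framework version before shipping it):
<configuration>
  <runtime>
    <!-- Skips the certificate revocation lookup that can stall managed app start-up. -->
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>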
You can pull a report in two modes, local and server. If you're running in local mode, it's going to pull both the data and the report definition onto your machine, then render them both. In server mode, it's going to just let SSRS do all the work, then pull back the information to render.
If you're using local mode, it could be a hardware issue. If you've got a huge dataset, that's a lot of data to store in memory.
Other than that, that's not a lot of info to go on...
Update: since you've noticed it's only the first call that takes a while, have you done any profiling to determine if the bulk of the work is done on the backend SQL calls or is spent in the actual report render?
If it's faster on subsequent calls, it's possible you're (incidentally) caching at one level or another. You can cache reports (http://www.sqlservercurry.com/2007/12/configure-report-to-be-cached-ssrs-2005.html) or it could be that the execution plan to return the data is being cached deep in SQL Server.
In summary of the various ideas already presented, it could be
startup time for the report viewer infrastructure on the client
cache loading time on the client
query execution time at the server
report rendering time at the server
Try running the report, closing down the client, restarting the client and running the report again. If the report is much faster the second time, repeat this experiment but load, run and unload another large application in between report runs.
If the second report run continues to be much quicker, then the difference you are seeing has more to do with the SQL Server's I/O cache than what's happening on the client. You can further test this by deliberately displacing the MSSQL cache by running a query that pulls a lot of data from tables that aren't used in the report.
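If you have a test box where you can be destructive, a more direct way to displace the cache is to drop SQL Server's clean buffers; this requires sysadmin rights and should never be run against a production instance.
-- Flush dirty pages to disk first, then empty the buffer pool of clean pages.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;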
All of the above is interesting but unimportant. If you want to ensure snappy report response Reporting Services provides extensive support for scheduled generation of reports, so that when the consumer requests the report, the only delay is network delivery.
If your users insist on reporting on up to the minute (live) data they'll either have to specify tighter constraint parameters or get used to waiting.
ReportServer always takes a while to wake up because it's running under IIS. There is a process timeout on each AppPool. We have the same issue with our ASP.NET application's report viewer. You could try increasing the AppPool keep-alive times in the IIS settings.
See here:
http://www.sqlreportingservices.net/Ask/5536.aspx
http://www.developmentnow.com/g/115_2005_9_0_0_597422/First-run-of-reports-is-SLOW.htm
I'm assuming you're running SQL2005 SSRS of course.
One option is to upgrade to 2008 where SSRS no longer depends on IIS.
Thinking way out of the box: is the report server on a different machine from the one running the application? The network could be taking a long time to resolve "reportServerURL". Once resolved, the name would be cached, and hence subsequent calls would be quicker.
I have had this problem before with badly configured DNS servers. Try replacing "reportServerUrl" with "reportServerIPAddress" and see if the initial call to ReportViewer is any faster.
I was having this same problem.
I found out that changing the default printer (slow network here) fixed the problem.
The ReportViewer gets some information from the default printer,
and since the network here is very slow, I was getting a 10-second delay.
Hope it helps
UPDATE
I have noticed that only the first load of the ReportViewer takes a long time; each subsequent load of the same or different reports loads fast.
You are set to run in server mode, which means the SSRS server needs to do the rendering, so the first time there will be a delay for one or all of the following reasons (these are the slowest of the bunch; there are others, but they are quicker):
DNS resolution: the URL needs to be resolved to an IP address. Once this is done it is cached locally, which speeds it up.
ASP.NET/IIS needs time to warm up. There is all kinds of compilation and initial loading that must occur - once loaded, it will remain in the server's memory until you restart IIS or the default clean-up time occurs.
Reporting Services needs time to warm up in the same way ASP.NET/IIS does.
To test for this, use a network monitor such as Netmon (if you are a Microsoft fan) or Wireshark (my recommendation) and watch the traffic from your machine to the server. You'll see the DNS request go out, then the HTTP requests, and the delay will be in the returning data. On the second call you will see that the speed of the response and of the DNS checks is vastly different.
What you could do to prevent this is a warm-up script - I don't know of one for SSRS, but here is a link to a SharePoint one which would not be hard to adapt, since it has the exact same issues.
It seems as though you are going after the SSRS report directly. You may want to hit the SSRS web service instead. That may improve your performance.
Here is a possible resolution for your problem:
Try to access the first report from the web before accessing any report with the application.
If the problem doesn't appear, you could make an application that will "preload" the first report, in order to allow Reporting Services to do its start-up.
I've seen this kind of solution in some demo applications from Microsoft. The applications were using Analysis Services and Reporting Services.
Good luck otherwise
To my knowledge, I think it's a problem Microsoft is finding tough to resolve.
Initially, the report viewer is only slow at first-time rendering of a report; subsequent reports load normally (a bit faster).
To help counter this, place a startup form with a label (Label1) and a timer (Timer1) control. Set Label1.Text = "Please, wait (about 15 secs)". Set Timer1.Interval = 3.
In the Form_Load event of the startup form, call Timer1.Start.
In the Tick event of Timer1, place "frmMyReportForm.reportViewer1.SetDisplayMode(Microsoft.Reporting.WinForms.DisplayMode.Normal)".
"frmMyReportForm" is any of the forms in your project containing a ReportViewer control.
All the delays will be caught here, so that when you generate the actual report there will be no delays.
I hope this might be helpful to my fellow developers.