As the title says, I'm trying to figure out how much RAM is needed to generate a large report and export it to Excel using SQL Server Reporting Services on Windows Server 2003.
Upgrading to SQL Server 2008 is not an option, and neither is exporting to CSV.
Strictly from a hardware point of view what is a good configuration for a high load server?
(CPU's, RAM, Storage)
You've got problems - the maximum memory size that SSRS2005 can handle is 2GB. (There is a dodge to enable it to handle 3GB, but it's not recommended for production servers.)
SSRS2008 has no such limitation, which is why the normal response in this situation is to recommend an upgrade to 2008.
If your large report won't run on a machine with 2GB available, it doesn't matter how much RAM (or other resources) you put on your server - the report still won't run.
Your only option (given the restrictions stated above) would be to break the report up into smaller pieces and run them one at a time.
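For example (a rough sketch; the table, column, and parameter names are invented for illustration), you could parameterize the report's dataset by date range and export one slice at a time:

-- Hypothetical report dataset, sliced by a date-range parameter pair
-- so that each render/export stays inside the 2GB process ceiling.
SELECT *
FROM dbo.SalesDetail            -- hypothetical source table
WHERE OrderDate >= @StartDate   -- report parameter
  AND OrderDate <  @EndDate;    -- report parameter

Running the report once per month (or per region, or whatever slices your data evenly) and combining the workbooks by hand is ugly, but it keeps each render inside the memory ceiling.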
I'm using the DevArt dotConnect for Oracle performance counters in my dev/test environment. By adding Use Performance Monitor=True to my database connection string, I can capture useful information such as number of connections, etc.
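For reference, the option sits directly in the connection string; mine looks roughly like this (the server name and credentials are placeholders):

User Id=scott;Password=tiger;Server=MyOracleServer;Use Performance Monitor=True;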
However, on my production box, I cannot see the DevArt performance counters in Resource Monitor.
I believe that the performance counters are installed, since they appear in the registry, and I imagine that adding Use Performance Monitor=True to my database connection string would cause an error if the relevant DLLs etc. were not present.
What else do I need to do to make the performance counters appear in Resource Monitor?
I'm experiencing poor performance from Azure PostgreSQL PaaS and need help with how to proceed.
I'm trying out Azure PostgreSQL PaaS in a project, and I'm experiencing intolerable performance from the database (or at least it seems like the database is the problem).
Our application runs in an Azure VM, and both the VM and the database are located in western Europe.
The network between the VM and the database seems to perform OK. (Using psping from Sysinternals against the database port 5432, I get latencies between 2 ms and 4 ms.)
PostgreSQL ships with a benchmark tool called pgbench. It runs a sequence of simple SQL statements against a test dataset and reports timings.
I ran pgbench on the VM against the database; pgbench reports latencies between 800 ms and 1600 ms.
If I run the same test with pgbench on our local network against an in-house database, I typically get latencies below 10 ms.
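For a cruder check than pgbench, psql's \timing switch can be used to time a trivial statement's round trip (the figure below is only an illustration of the output format, not a measurement):

\timing on
SELECT 1;
Time: 2.134 ms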
I tried to contact Microsoft support about this, but I've basically been told that since the network seems to perform OK, this must be a PostgreSQL software problem and not related to Microsoft.
Since the database is PostgreSQL PaaS, I've only got limited access to logs and metrics.
Can anyone please help or advise me on how to proceed?
Performance of the Azure PostgreSQL PaaS offering depends on several server and client configuration choices, including the provisioned SKU and the storage IOPS. Microsoft engineering has published a series of performance blog posts that help customers obtain measurable, empirical gains for their workload. Please review these blog posts:
Performance best practices for Azure PostgreSQL
Performance tuning of Azure PostgreSQL
Performance quick tips for Azure PostgreSQL
Is your in-house Postgres setup similar to the setup in Azure?
I had the same issue. We moved from a dedicated VM (Ubuntu, size Standard B2s, 2 vCPUs, 4 GiB memory, ~35€ p.m.) running PostgreSQL to the Azure managed PostgreSQL instance (General Purpose, single server, 2 vCPUs, 10 GB memory, ~130€ p.m.).
I first noticed the bad performance when the main API request of our web application suddenly took 3s instead of 1.7s/2s.
I ran some very simple timing tests on the old setup with the dedicated VM:
select count(*) from mytable;
count
-------
4686
Time: 0.940 ms
And those are the timings of the new setup with Azure managed PostgreSQL:
select count(*) from mytable;
count
-------
4686
Time: 21.353 ms
I think I do not have to explain these numbers :)
I created a support ticket and got some insights:
"In Azure PostgreSQL single server, we have a gateway to manage and route connections and there are always 3 copies of the data to ensure your data is not lost, and all of this will create latency."
I also asked what the benefits are of the managed database:
A: "Being an instance running on Azure, you benefit from:
- Automatic patching; your instance is automatically upgraded.
- Crash recovery; in case our system detects the instance is not running, it tries to perform a restart/switchover to a new host. If all this fails, an on-call engineer is activated to manually restore the instance.
- Automatic backups and one-click point-in-time restore.
- Redundancy of data."
They suggested that I switch from Single Server to Flexible Server, where the gateway is ditched and performance should apparently be better, though still not as good as on a self-managed instance:
"In several tests we’ve made, the performance comparing to single server is much better. But to setup the right expectactions, you will not get 1 to 1 performance as having PostgreSQL running in a dedicated virtual machine."
I asked for the results of those tests, I will post them here as soon as I get them.
I think you have to decide whether the benefits mentioned above are worth paying at least four times more compared to a dedicated VM, and whether you can live with the worse performance. We will now switch back to a master/slave configuration with two dedicated VMs.
Does anyone know how I can watch my SQL Server resource usage on my Windows server?
I'm using SQL Server 2016 Express, and in particular I want to watch my RAM usage if possible.
For example, there is a maximum RAM value of 1410 MB per instance for SQL Server 2016 Express. How can I tell whether I am close to the limits of my SQL Server or not?
Thank you.
If you are running Microsoft Windows server operating system, use the System Monitor graphical tool to measure the performance of SQL Server. You can view SQL Server objects, performance counters, and the behavior of other objects, such as processors, memory, cache, threads, and processes. Each of these objects has an associated set of counters that measure device usage, queue lengths, delays, and other indicators of throughput and internal congestion.
Benefits of System Monitor
System Monitor can be useful to monitor Windows operating system and SQL Server counters at the same time to determine any correlation between the performance of SQL Server and Windows. For example, monitoring the Windows disk input/output (I/O) counters and the SQL Server Buffer Manager counters at the same time can reveal the behavior of the entire system.
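If you prefer to stay inside SQL Server, the same Buffer Manager counters are also exposed through a DMV, so you can read them with a plain query (a minimal sketch; substitute whichever counter names you care about):

-- Buffer Manager counters straight from the engine, no perfmon needed
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';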
System Monitor allows you to obtain statistics on current SQL Server activity and performance. Using System Monitor, you can:
View data simultaneously from any number of computers.
View and change charts to reflect current activity, and show counter values that are updated at a frequency that the user defines.
Export data from charts, logs, alert logs, and reports to spreadsheet or database applications for further manipulation and printing.
Add system alerts that list an event in the alert log and can notify you by issuing a network alert.
Run a predefined application the first time or every time a counter value goes over or under a user-defined value.
Create log files that contain data about various objects from different computers.
Append to one file selected sections from other existing log files to form a long-term archive.
View current-activity reports, or create reports from existing log files.
Save individual chart, alert, log, or report settings, or the entire workspace setup for reuse.
Note: System Monitor replaced the Performance Monitor after Windows NT 4.0. You can use either the System Monitor or Performance Monitor to do these tasks.
System Monitor Performance
When you monitor SQL Server and the Microsoft Windows operating system to investigate performance-related issues, concentrate your initial efforts in three main areas:
Disk activity
Processor utilization
Memory usage
Monitoring a computer on which System Monitor is running can affect computer performance slightly. Therefore, either log the System Monitor data to another disk (or computer) so that it reduces the effect on the computer being monitored, or run System Monitor from a remote computer. Monitor only the counters in which you are interested. If you monitor too many counters, resource usage overhead is added to the monitoring process and affects the performance of the computer that is being monitored.
To start System Monitor in Windows
On the Start menu, point to Run, type perfmon in the Run dialog box, and then click OK.
Here's what I was looking for:
SELECT COUNT(*) AS buffer_cache_pages, COUNT(*) * 8 AS buffer_cache_used_KB FROM sys.dm_os_buffer_descriptors;
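To compare that against the Express edition ceiling, you can also ask the engine how much memory the whole process is using (a sketch using sys.dm_os_process_memory, available in SQL Server 2008 and later; it requires VIEW SERVER STATE permission):

-- Overall process memory, to judge distance from the edition limit
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_MB,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;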
At my current workplace, the production SQL server and web servers are also used as development and test servers. I've asked for dedicated servers, but been refused as I can't justify it to satisfaction (the reasons against being cost of software, software licenses and hardware resources).
So, what justifications are there for a dedicated test/development server (a combined server at the moment - I don't want to push my luck and ask for 6 servers!)?
Summarised list
Resource usage
Prevention of errors
DR purposes
The list doesn't seem as extensive as I'd hoped.
Consider using Virtual Machines to reduce costs.
Well, for starters, the resources available to the production database are restricted.
Also, rogue/accidental developer SQL scripts could play havoc with the production data.
Could there be issues with production data sensitivity (e.g. personal data)? See the sketch below.
Just a few to get started :)
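On the sensitivity point: if developers must work from a copy of production data, the personal fields are usually scrambled on the way into dev (a hypothetical sketch; the table and column names are invented):

-- Scramble personal data in the dev copy (hypothetical schema)
UPDATE dbo.Customers
SET Email = 'user' + CAST(CustomerId AS varchar(10)) + '@example.com',
    Phone = NULL;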
Try to calculate the cost of downtime if you take the production system down due to a mistake in development.
Try also to calculate the cost of slow response times in production if/when you are doing performance testing.
As a cost benefit the test/dev hardware can be used as a spare if something bad happens to the production hardware.
Explain how often developers have fat-fingered moments and hit Enter too soon while editing statements starting...
DROP TABLE ...
UPDATE veryImportantTable SET veryImportantField = '' WHERE 1 = 1 --TODO: make proper condition
This'd be reason enough for me. :)
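(As an aside, a habit that softens this anywhere, though it is no substitute for a separate environment: wrap ad-hoc edits in an explicit transaction and check the row count before committing.)

BEGIN TRANSACTION;
UPDATE veryImportantTable SET veryImportantField = '' WHERE 1 = 1;
-- @@ROWCOUNT just reported every row in the table; nothing is committed yet
ROLLBACK TRANSACTION;   -- or COMMIT once the WHERE clause is right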
I hope you have at least separate databases and are not developing on production data.
Check the Data Protection Act, and also look into PCI DSS (the Payment Card Industry Data Security Standard) if you want to be really secure.
I think it's livable to have a test database on the same physical machine as your production DB. Performance is often not an issue (assuming it's a multi-core machine with plenty of memory, even a heavy query on test will often not noticeably slow down production), and as long as the DB connections are separate, the chance of accidental damage is very, very low.
As for a web server, almost any machine can run one (Apache is free, and even IIS is free for 10 simultaneous connections or fewer) - you could install a test web server on any old machine, configure it to use your test DB, and have a decent, low-cost solution.
Of course a separate machine is "cleaner" - but the difference isn't huge.
One strong argument is availability / reduce downtime / disaster recovery.
i.e. to have another machine on standby to replace the production machine should anything bad happen to it hardware-wise (e.g. disk controllers or motherboards or power supplies dying).
Ideally the additional machine should be identical to the production one so it can be swapped in directly, or individual parts swapped in as required. The two machines can also back each other up, or each can hold a local copy of its counterpart's last backup so a restore is quick.
Of course, how much value the business sees in this depends on how critical uptime is to them. If you can roughly work out how much money they'd lose to lost business with and without a 'hot spare' server, and present your case from a dollars-saved viewpoint (hopefully a lot more than the cost of the server), they might go for it.
I just rolled a small web application into production on a remote host. After it ran a while, I noticed that MS SQL Server started to consume most of the RAM on the server, starving out IIS and my application.
I have tried changing the "max server memory" setting, but eventually the memory usage begins to creep above it. Using the Activity Monitor I have determined that I am not leaving connections open or anything else obvious, so I am guessing it's in cache, materialized views and the like.
I am beginning to believe that this setting doesn't mean what I think it means. I do note that if I simply restart the server, the process of memory consumption starts over without any adverse impact on the application - pages load without incident, all retrievals work.
There has got to be a better way to control SQL Server, or to force it to release some memory back to the system.
From the MS knowledge base:
Note that the max server memory option only limits the size of the SQL Server buffer pool. The max server memory option does not limit a remaining unreserved memory area that SQL Server leaves for allocations of other components such as extended stored procedures, COM objects, non-shared DLLs, EXEs, and MAPI components. Because of the preceding allocations, it is normal for the SQL Server private bytes to exceed the max server memory configuration.
Are you using any extended stored procedures, .NET code, etc that could be leaking memory outside of SQL Server's control?
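If you want to see where the memory under SQL Server's control is actually going, the memory clerk DMV gives a breakdown (a sketch; on SQL Server 2012 and later the column is pages_kb, while older versions split it into single_pages_kb and multi_pages_kb):

-- Top memory consumers inside the SQL Server process
SELECT TOP (5) type, SUM(pages_kb) / 1024 AS memory_MB
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_MB DESC;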
Typically I recommend setting max server memory to a couple of GB below the physical memory installed.
If you also need to run IIS and an application server on the same box, you'll want to give SQL Server even less memory.
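For completeness, max server memory is set through sp_configure (a sketch; 2048 MB is an arbitrary example value, pick one that leaves the OS, IIS, and your application enough headroom):

-- Cap the buffer pool; remember this does NOT cap the allocations
-- outside the buffer pool described in the KB excerpt above
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;   -- example value
RECONFIGURE;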