I am enabling performance counters using profiles in our development environment, and I am thinking of using them in production.
Would enabling these counters in production impact performance?
The overhead would be insignificant compared to any real work your system does.
I have an SVN repository behind an Apache HTTPS server that stores small and large (>1 GB) files. When I commit a large file, the transfer speed is about 10 MB/s (over a 1 Gbit network line). When I look at CPU utilization on the server, it is saturated, with about 85% consumed by apache2 and some 15% by the disk driver.
I have already tried disabling Apache logging and SSL, but that didn't improve the transfer speed, which makes me think mod_dav_svn is using most of the CPU. I have also tried increasing the number of available cores on the server (default = 1 core), but this mysteriously slows down commits while httpd keeps using only 1 core. Setting SVNCompressionLevel 0 also didn't produce any noticeable speed improvement.
Is there any way to significantly increase the transfer speed through parallelization or some other optimization?
Server:
Debian 9.3
Apache 2.4.25
libapache2-mod-svn 1.9.5
svn repository: default FSFS config (i.e., everything commented out in fsfs.conf). The HDD can write up to 30 MB/s (hardware limited) without saturating the CPU (tested by copying). The FS is NTFS, mounted via ntfs-3g with big_writes enabled, which uses some 10-15% CPU while writing at ~10 MB/s.
Client:
svn 1.8.13
CPU: first-generation Intel Core @ 3.20 GHz
Obviously, I would be very pleased if I could transfer at 25-30 MB/s.
Is there any way to significantly increase the transfer speed through parallelization or some other optimization?
Yes, there is. However, the question lacks necessary details about the SVN client and server versions, the server and FSFS repository configuration, and the hardware it runs on, so it is hard to tell which optimizations will help in your case. You may want to upgrade your server and client to the latest versions and disable compression in the server's config.
FYI: VisualSVN Server in my tests can deliver 1Gbps speed.
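For illustration, disabling compression in the Apache config is a one-line change. A minimal sketch (the Location prefix and repository path are assumptions; SVNCompressionLevel is the directive the question already tried):

    <Location /svn>
        DAV svn
        SVNParentPath /srv/svn   # hypothetical repository parent path
        # Skip zlib compression on the wire; for large binary files the
        # CPU cost of compressing often outweighs the bandwidth saved.
        SVNCompressionLevel 0
    </Location>

Remember to restart Apache after the change, and verify with a large test commit whether the CPU bottleneck actually moves.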
I don't actually have a problem; I'm just a bit curious about something.
I am making a Python web framework based on Bottle (http://bottlepy.org/). Today I tried a comparison of the Bottle WSGI server against Apache. I work on Lubuntu 12.04, using Apache 2, Python 2.7, and the Bottle development version (0.12), and got this surprising result: the Bottle development server answered in about 13 ms per request, while Apache took roughly 200 ms.
As stated in the Bottle documentation, the included WSGI server is only intended for development purposes. The question is: why is the development server faster than the deployment one (Apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I never get a response in less than 100 ms when developing PHP applications. But look: it is just 13 ms in Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
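For reference, a minimal Bottle app of the kind presumably being benchmarked (the route and port are assumptions); run() starts the built-in wsgiref-based development server:

    # minimal_app.py - hypothetical app served both by Bottle's built-in
    # server and, separately, through Apache for the comparison.
    from bottle import route, run

    @route('/')
    def index():
        return 'Hello, world!'

    # Starts the single-threaded, wsgiref-based development server.
    run(host='localhost', port=8080)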
Development servers are not necessarily faster than production-grade servers, so such an answer is a bit misleading.
The real reason in this case is likely going to be due to lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you could hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go watch my PyCon talk which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
Especially make sure you aren't using prefork MPM. Use mod_wsgi daemon mode in preference.
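A minimal sketch of daemon mode, assuming a hypothetical app path and process group name (WSGIDaemonProcess and WSGIProcessGroup are the standard mod_wsgi directives; the sizing is an assumption):

    # Hypothetical mod_wsgi daemon-mode setup.
    WSGIDaemonProcess myapp processes=2 threads=15 display-name=%{GROUP}
    WSGIProcessGroup myapp
    WSGIScriptAlias / /var/www/myapp/app.wsgi

With daemon mode, the application is loaded once into long-lived worker processes instead of being lazily loaded per Apache child, which avoids paying the startup cost on stray first requests.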
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested 1 request at a time--certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency AND throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: At 10, 20, 100 requests per second, you might still see ~200ms latency from Apache, but you'd see much worse latency from Bottle's server.
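As a sketch, here is one way to run that kind of concurrent test from Python 3 (the target URL, request count, and concurrency level are all placeholders); tools like ab or siege will give you the same numbers with less code:

    # bench.py - a crude concurrent latency test (Python 3).
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = 'http://localhost:8080/'  # hypothetical endpoint under test

    def one_request(_):
        start = time.time()
        urlopen(URL).read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=20) as pool:  # 20 concurrent clients
        latencies = sorted(pool.map(one_request, range(1000)))

    print('median: %.1f ms' % (latencies[len(latencies) // 2] * 1000))
    print('p95:    %.1f ms' % (latencies[int(len(latencies) * 0.95)] * 1000))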
Incidentally, the Bottle docs do refer to concurrency:
The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, disabling .htaccess checks.
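For example, turning off per-directory .htaccess lookups is a one-line change (the directory path is an assumption):

    # Disables .htaccess processing so Apache skips the per-request
    # filesystem checks; only do this if you don't rely on .htaccess.
    <Directory /var/www/>
        AllowOverride None
    </Directory>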
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.
In David Fowler's blog, SQL Server has been added to the list of scale out providers for service bus.
I am in the process of implementing Redis on our Windows servers. Based on what I know about Redis, I'm guessing it will be significantly faster than using SQL Server - is that a fair assumption?
If so, how does the Windows version of Redis implement fail-over?
Redis is roughly 200x faster than SQL, mainly because it's in-memory and its protocol is designed for speed.
If that helps, Redis Cloud is now offered on Windows Azure, and HA is a built-in capability of the service.
Disclosure - I'm the Co-Founder & CTO of Garantia Data, the company behind the Redis Cloud service.
Based on what I know about Redis, I'm guessing it will be significantly faster than using SQL Server - is that a fair assumption?
It will be faster than SQL Server, since it's optimized for in-memory operations; however, its speed isn't the only advantage. Its support for advanced data structures offers a great deal of flexibility when dealing with various scenarios.
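For instance, a leaderboard that would take a table, an index, and an ORDER BY query in SQL Server is a single sorted-set call in Redis. A sketch using the redis-py client (host and key names are made up):

    # Hypothetical leaderboard with a Redis sorted set (redis-py 3.x).
    import redis

    r = redis.Redis(host='localhost', port=6379)
    r.zadd('leaderboard', {'alice': 4200, 'bob': 3100, 'carol': 5000})
    # Top 3 players, highest score first:
    print(r.zrevrange('leaderboard', 0, 2, withscores=True))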
If so, how does the Windows version of Redis implement fail-over?
There is a link in the download section to an unofficial Windows port of Redis, which, however, isn't meant for production use. The official version of Redis supports replication, and Sentinel provides automatic failover, but it's hard to say how mature these features are in the Windows port. In general, I wouldn't recommend running Redis on a Windows machine; rather, use a virtual machine with a Linux distro and run it there.
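On the official (Linux) builds, client-side failover via Sentinel looks roughly like this with redis-py (the hosts and the master name 'mymaster' are assumptions):

    # Hypothetical Sentinel-aware connection; Sentinel promotes a replica
    # automatically if the master dies, and the client re-discovers it.
    from redis.sentinel import Sentinel

    sentinel = Sentinel([('10.0.0.1', 26379), ('10.0.0.2', 26379)],
                        socket_timeout=0.5)
    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    master.set('key', 'value')    # writes always go to the current master
    replica = sentinel.slave_for('mymaster', socket_timeout=0.5)
    print(replica.get('key'))     # reads can be served by a replica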
My Grails application is running in a development environment. I haven't gone into production yet, but in any case, is it normal that my Grails application requires 230 MB at startup alone (with an empty BootStrap and no requests handled so far)?
Do you know why this is the case, how to improve memory usage in development mode, and, most importantly, whether it is reduced in a production environment?
To answer your questions: yes, it is normal. It's especially normal if you have a lot of GSPs in your application. GSPs are compiled at runtime, so you can speed up their generation by increasing your permgen space.
You can improve memory use and performance in general by making sure that you pass the '-server' flag when you launch your server JVM, for example:
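A sketch of the environment line that launches the app (the heap and permgen sizes are placeholders to tune for your machine; -XX:MaxPermSize applies to pre-Java-8 JVMs, where permgen holds the compiled GSPs):

    # Hypothetical JVM options for a Grails app: -server selects the
    # server HotSpot compiler, -Xmx caps the heap, -XX:MaxPermSize
    # bounds the permanent generation.
    export JAVA_OPTS="-server -Xmx512m -XX:MaxPermSize=256m"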
I wouldn't blame all that memory usage just on Grails. Because it uses an embedded Tomcat (Jetty in older versions) there will be a decent amount of overhead even when running an empty application.
IMO, 230 MB is a lot of memory for a Java application, but high memory usage is just part of life when writing JVM-based applications.
My online Grails applications run in a VPS with only 512 MB (which also hosts a Drupal CMS, Apache, the email services, ... and the Tomcat that runs Grails), so you can definitely tune your application to use less memory.
We used to have a dedicated server (1&1) and very infrequently ran into server issues.
Recently, we migrated to a VPS (Wiredtree.com) with similar specs to our old dedicated server, but we notice frequent problems: running out of memory, MySQL having to restart, etc., both when knowingly running intensive scripts and also just randomly during normal use.
Because of this, we're considering migrating to another VPS - this time at Slicehost - to see if it performs better.
My question is twofold...
Are there straightforward ways we could stress-test a VPS at Slicehost to see whether the same issues occur, without having to actually migrate everything over?
Also, is it possible that the issues we're facing aren't just down to the provider (Wiredtree) but to the difference between a dedicated box and a VPS (despite similar specs)?
The best way to stress test an environment is to put it under load. If this VPS is hosting a web application, use one of the many available web server benchmark tools: ab, httperf, Siege or http_load. You don't necessarily care that much about the statistics from the tool itself, but more that it puts a predictable load on the server so that you can tune Apache to handle it, or at least not crash and burn.
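For example, an ab run might look like this (the URL, request count, and concurrency level are placeholders):

    # 1,000 requests total, 50 concurrent, against a hypothetical test slice
    ab -n 1000 -c 50 http://test-slice.example.com/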
The one problem you have with testing against Slicehost is that you are at the mercy of the Internet and your bandwidth to Slicehost. You may not be able to put enough load on the server to reach a meaningful conclusion.
Instead, you might find it just as valuable to run one of the many virtualization products on the market and set up a VM with comparable specs to the VPS plan you're considering. Local testing over your LAN will allow you to put a higher and more predictable load on the server.
In either case, you don't need to migrate everything, but you will need to set up an environment for your application to run in, with representative data in your database.
A VPS with similar specs to a dedicated server should perform approximately the same, but in order to get good performance, you still need to tune Apache, MySQL and any other long-lived server processes. In my experience, the out-of-the-box configuration of Apache in many Linux distributions is not ideal and will allow far too many child processes, overcommitting memory and sending the server into a swap-death spiral.
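As a rough sketch for a small VPS, using the Apache 2.4 prefork directive names (older 2.2 installs call these MaxClients and MaxRequestsPerChild; every number below is an assumption to adjust against your actual per-child memory footprint):

    <IfModule mpm_prefork_module>
        StartServers            2
        MinSpareServers         2
        MaxSpareServers         5
        # Cap child processes so worst-case Apache memory stays well
        # under the VPS's RAM, leaving headroom for MySQL.
        MaxRequestWorkers      20
        MaxConnectionsPerChild 1000
    </IfModule>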