Justifications for a test/development server

At my current workplace, the production SQL server and web servers are also used as development and test servers. I've asked for dedicated servers, but have been refused because I can't justify it to their satisfaction (the arguments against being the cost of software, software licenses and hardware resources).
So, what justifications are there for a dedicated test/development server (a combined server at the moment - I don't want to push my luck and ask for 6 servers!)?
Summarised list:
Resource usage
Prevention of errors
DR purposes
The list doesn't seem as extensive as I'd hoped.

Consider using Virtual Machines to reduce costs.

Well, for starters, the resources available to the production database are restricted.
Also, rogue/accidental developer SQL scripts could play havoc with the production data.
Could there be issues with production data sensitivity? (e.g. personal data)
Just a few to get started :)

Try to calculate the cost of downtime if you take the production system down due to a mistake in development.
Try also to calculate the cost of slow response times in production if/when you are doing performance testing.
As an added benefit, the test/dev hardware can be used as a spare if something bad happens to the production hardware.

Explain how often developers have fat-fingered moments and hit Enter too soon while editing statements starting...
drop table...

UPDATE veryImportantTable SET veryImportantField = '' WHERE 1 = 1 --TODO: make proper condition
This'd be reason enough for me. :)
I hope you have at least separate databases and are not developing on production data.
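If ad-hoc statements against production data can't be avoided entirely, one cheap habit that limits the damage is to wrap them in an explicit transaction and check the row count before committing. A minimal T-SQL sketch, reusing the hypothetical table from the example above:

BEGIN TRANSACTION;

UPDATE veryImportantTable
SET veryImportantField = ''
WHERE importantId = 42;            -- hypothetical condition

-- @@ROWCOUNT reports how many rows the UPDATE just touched;
-- if the number looks wrong, roll back instead of committing.
SELECT @@ROWCOUNT AS rows_affected;

ROLLBACK TRANSACTION;              -- replace with COMMIT once the count looks right

It doesn't replace a proper test server, but it softens the blow of a fat-fingered WHERE clause.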

Check the Data Protection Act, and also look into PCI DSS (Payment Card Industry Data Security Standard) if you want to be really secure.

I think it's liveable to have a test database on the same physical machine as your production DB. Performance is often not an issue (assuming it's a multi-core machine with plenty of memory, even a heavy query on test will often not noticeably slow down production), and as long as the DB connections are separate, the chance of accidental damage is very low.
As for a web server, almost any machine can run one (Apache is free, and even IIS is free for 10 simultaneous connections or fewer) - you could install a test web server on any old machine, configure it to use your test DB, and have a decent, low-cost solution.
Of course a separate machine is "cleaner" - but the difference isn't huge.
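If you do share the production box, here is a minimal sketch of keeping developers away from the live data (assuming SQL Server; all names and the password are placeholders):

-- Separate test database on the same instance
CREATE DATABASE MyApp_Test;
GO

-- Developer login that only gets rights on the test database
CREATE LOGIN dev_user WITH PASSWORD = 'ChangeMe!123';
GO

USE MyApp_Test;
GO
CREATE USER dev_user FOR LOGIN dev_user;
EXEC sp_addrolemember 'db_owner', 'dev_user';   -- full rights on the test DB only
GO
-- Deliberately create no user for dev_user in the production database.

Point the test web server's connection string at MyApp_Test and production stays untouched.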

One strong argument is availability / reduced downtime / disaster recovery.
i.e. to have another machine on standby to replace the production machine should anything bad happen to it hardware-wise (e.g. disk controllers or motherboards or power supplies dying).
Ideally the additional machine should be identical to the production one so it can be swapped in directly, or individual parts swapped in as required. They can also back each other up, or each keep a local copy of its counterpart's last backup so it can be restored quickly.
Of course, how much value they'll see in this depends on how critical uptime is to the business. If you can roughly work out how much they'd lose in $ due to lost business with and without a 'hot spare' server, and present your case from a $-saved viewpoint (hopefully a lot more than the cost of the server), they might go for it. For example (purely illustrative numbers): if an outage costs $5,000 an hour in lost orders and a hot spare turns a day of downtime into an hour, a single incident more than pays for the second machine.

Related

How to analyze poor performance from Azure PostgreSQL PaaS

I'm experiencing poor performance from Azure PostgreSQL PaaS and need help with how to proceed.
I'm trying out Azure PostgreSQL PaaS in a project. I'm experiencing intolerably poor performance from the database (or at least it seems like the database is the problem).
Our application is running in an Azure VM, and both the VM and the database are located in Western Europe.
The network between the VM and the database seems to perform OK. (Using psping (from Sysinternals) against the database port 5432, I get latencies between 2 ms and 4 ms.)
PostgreSQL ships with a benchmark tool called pgbench. This tool runs a sequence of simple SQL statements on a test dataset and provides timings.
I ran pgbench on the VM against the database. pgbench reports latencies between 800 ms and 1600 ms.
If I do the same test with pgbench in-house, on our local network against an in-house database, I typically get latencies below 10 ms.
I tried to contact Microsoft support regarding this, but I've basically been told that since the network seems to perform OK, this must be a PostgreSQL software problem and not related to Microsoft.
Since the database is PostgreSQL PaaS, I only have limited access to logs and metrics.
Can anyone please help or advise me on how to proceed with this?
Performance of the Azure PostgreSQL PaaS offering depends on various server and client configuration settings, including the provisioned SKU and storage IOPS. Microsoft engineering has published a series of performance blog posts that help customers achieve measurable, empirical gains based on their workload. Please review these blog posts:
Performance best practices for Azure PostgreSQL
Performance tuning of Azure PostgreSQL
Performance quick tips for Azure PostgreSQL
Is your in-house Postgres setup similar to the setup in Azure?
I had the same issue. We moved from a dedicated VM (Ubuntu, size Standard B2s, 2 vCPUs, 4 GiB memory, ~35 € p.m.) running PostgreSQL to the Azure managed PostgreSQL instance (General Purpose, single server, 2 vCPUs, 10 GB memory, ~130 € p.m.).
I first noticed the bad performance when the main API request of our web application suddenly took 3 s instead of 1.7-2 s.
I ran some very simple timing tests on my old setup with the dedicated VM:
select count(*) from mytable;
count
-------
4686
Time: 0.940 ms
And those are the timings of the new setup with Azure managed PostgreSQL:
select count(*) from mytable;
count
-------
4686
Time: 21,353 ms
I think I do not have to explain these numbers :)
I created a support ticket and got some insights:
"In Azure PostgreSQL single server, we have a gateway to manage and route connections and there are always 3 copies of the data to ensure your data is not lost, and all of this will create latency."
I also asked what the benefits are of the managed database:
A: "Being an instance running on Azure, you benefit from:
- Automatic patching: your instance is automatically upgraded.
- Crash recovery: in case our system detects the instance is not running, it tries to perform a restart/switchover to a new host. If all this fails, an on-call engineer is activated to manually restore the instance.
- Automatic backups and one-click point-in-time restore.
- Redundancy of data."
They suggested that I switch from Single Server to Flexible Server, where the gateway is ditched and performance should apparently be better, though still not as good as on a dedicated VM:
"In several tests we've made, the performance compared to single server is much better. But to set the right expectations, you will not get 1-to-1 performance as with PostgreSQL running in a dedicated virtual machine."
I asked for the results of those tests, I will post them here as soon as I get them.
I think you have to decide if the benefits mentioned above are so high that you are willing to pay at least 4 times more compared to a dedicated VM and if you can live with the worse performance. We will now switch back to a master / slave configuration with 2 dedicated VMs.

Two Microsoft SQL Servers working as one?

I am interested to know whether there is any solution to have two physical machines with Microsoft SQL Server 2012 working on the same database, something like a cluster but with both nodes active... Any ideas?
Microsoft SQL Server does not support a 'real' load balancing scheme out of the box. AFAIK, this is still true with SQL Server 2012. (Someone will enlighten me if I'm wrong.) It doesn't matter if we are talking about database mirroring or AlwaysOn or clusters.
(In order to hammer that point home, MS seems to call SQL Server clusters "SQL Server failover clusters" lately. Pedantics.)
If you want to load balance your databases, you have to do the hard work yourself with some sort of sharding, federation or replication. (Note that federation (by views) has been in the product since SQL Server 2000, it just wasn't very popular.) And, of course, that would mean modifying either your databases or the apps themselves, which is almost always either too much work or violates your vendor agreements. With 150 databases, it's just that much more insurmountable.
You can have an active-active cluster, but the thing is that you would have to carefully distribute your databases on your nodes to divvy up the load. With 150 databases, this might be more granular than if you just had five databases, but if you have one database that is a ton of load and 149 that are light-weight or rarely used, you might still find one machine bogged down and the other isn't. And, some databases are busy sometimes and hardly busy at other times. Which means that everything might come down to when a user decides to run some heavy process.
Of course, you have to be able to support all of that load on a single node when you fail over, for whatever reason, even if it is something mundane like patching Windows. If you only patch during known slow traffic periods, that's great. If you don't have slow periods, or if the failover occurs because the hardware actually has a fault, the other node might not take the load and your users will be out of luck. If you think about it like that, having the second machine "doing nothing" isn't quite so irritating. At least you know that it will take all of the traffic that the primary usually does.
Yes, you can have two databases active, sharing the same information and replicating it back and forth. This is referred to as "Merge Replication". In this configuration, both nodes can accept read and write transactions.
How Merge Replication Works
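For a rough idea of what setting that up involves, here is a heavily abridged T-SQL sketch (publication, table and server names are placeholders; a real configuration also needs a distributor, snapshot agent jobs and security settings):

-- On the publisher: enable the database for merge publication
EXEC sp_replicationdboption
    @dbname  = N'MyDatabase',
    @optname = N'merge publish',
    @value   = N'true';

-- Create a merge publication and add a table to it
EXEC sp_addmergepublication
    @publication = N'MyDatabase_MergePub',
    @retention   = 14;                          -- days of metadata retention

EXEC sp_addmergearticle
    @publication   = N'MyDatabase_MergePub',
    @article       = N'Orders',
    @source_object = N'Orders';

-- Subscribe the second node so both servers accept reads and writes
EXEC sp_addmergesubscription
    @publication       = N'MyDatabase_MergePub',
    @subscriber        = N'SQLNODE2',
    @subscriber_db     = N'MyDatabase',
    @subscription_type = N'push';

Be aware that merge replication also means dealing with conflict resolution when both nodes change the same row.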

SQL Azure - very slow compared to localhost database

I decided I wanted to try out Microsoft SQL Azure, as many people have talked very highly about it. It should be fast, flexible, cheap and many other things.
I got it up and running, migrated my data to Azure and hooked up the connection string. I tried to run some queries on the database and was shocked at how slow even simple queries were. A "SELECT *" from a table with 700 rows took 7 seconds. My page also seems extremely slow compared to when I used a local database through Management Studio or a database on shared hosting.
Now, when I set up my server, I couldn't pick a physical location. However, I live in Denmark, and I can see the server is in "South Central US". This might be the issue.
I don't use any stored procedures (so I guess no parameter sniffing). I can also see my indexes were transferred successfully.
Any ideas on what to do? Any performance things I am missing?
I ran into this very issue in the last few days. Change your database tier from Basic to Standard and you will see a HUGE increase in performance. I am working on a query-intensive dashboard at the moment; it took the response time down from 20 seconds to 2 seconds.
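Before (or after) changing tiers, it's worth checking whether you are actually hitting the current tier's resource limits. In Azure SQL Database the sys.dm_db_resource_stats view reports recent usage as a percentage of those limits (a minimal sketch; run it against the database itself):

-- Recent resource usage in 15-second slices, as a % of the current tier's limits.
-- Values consistently near 100% suggest the tier itself is the bottleneck.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;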
I've used Azure for many years now, and my original question is pretty much solved.
My main take-aways after dealing with Azure databases for a while:
It's extremely important that your application and database are placed in the same region. If not, you will have a slow application. Recently I had an API and an app running in two different regions - it took ~1 second for every response. After moving them to the same region, responses were instant.
If your application has a high load, it's often a good idea to upgrade your tier. This happens earlier than you might expect.
Pick the nearest region - it really matters.

Need hints to optimise Sybase access over a big fat pipe

I need to access a Sybase database (12.5) from overseas. The high latency is definitely a problem.
I have already optimized the connection parameters to make better use of the network and achieved a 20x performance increase, but it's still not enough: 1 minute to get 3 MB of data.
We need another 10x or 20x increase for our application.
Technical data:
the data flows through a single TCP connection using the TDS protocol
the client app is an Excel sheet with macros, using the default Sybase driver
the corporate environment makes it difficult to push big changes into the 10+ year-old architecture, so solutions need to be as unintrusive as possible. But some changes may be negotiable given the importance of this project.
Can anyone give me pointers?
I already thought of :
splitting SQL requests over several concurrent connections to the database. The problem is data consistency: what if records are modified in between, since the requests will not be executed at exactly the same time? Is there an existing mechanism to spread a request over several calls on different connections?
using some kind of database "cache" or "local replication" overseas, but I don't know what is possible.
Thanks.
Try installing a local database (ASE or ASA) and synchronizing it with Sybase MobiLink (or Sybase Replication Server if you need low replication latency and have a lot of money).
(I know I'm answering my own question.)
Eventually, we settled on designing our own database remote access protocol. It's not complicated since we only use a basic subset of SQL (SELECT and UPDATE), and the protocol doesn't have to understand SQL anyway.
By using our own protocol, we'll be able to use compression, have the client use several TCP links at the same time, maximize network utilisation and add some functional caching specific to our application.
The client will be our app and the server will be a "proxy" to the real database, sitting next to it (like #Tim suggested in the comments).
It's not the only solution, but we feel it's a good balance between the enormous cost of replication, development complexity and the expected benefits.

The ideal multi-server LAMP environment

There's a lot of information out there on setting up LAMP stacks on a single box, or perhaps moving MySQL onto its own box, but growing beyond that doesn't seem to be very well documented.
My current web environment is having capacity issues, and so I'm looking for best-practices regarding configuration tuning, determining bottlenecks, security, etc.
I presently host around 400 sites, with a fair need for redundancy and security, and so I've grown beyond the single-box solution - but I am not at the level of a full ISP or dedicated web-hosting company.
Can anyone point me in the direction of some good expertise on setting up a great Apache web farm with a view to security and future expansion?
My web environment consists of 2 redundant MySQL servers, 2 redundant web-content servers, 2 load-balancing front-end Apache servers that mount the content via NFS and share Apache config and session directories between them, and a single "developers" server which also mounts the web content via NFS and contains all the developer accounts.
I'm pretty happy with a lot of this setup, but it seems to be choking under the load prematurely.
Thanks!!
--UPDATE--
Turns out the "choking on the load" was related to mod_log_sql, which I use to send my Apache logs to a MySQL database. By reconfiguring the web servers to write their SQL statements to a disk file, and creating a separate process to submit those to the database, the web servers free up their threads much more quickly and handle a much greater load.
You need to be able to identify bottlenecks and test improvements.
To identify bottlenecks, you need to use your system's reporting tools. Some examples:
MySQL has a slow query log (a short sketch of enabling it follows this list).
Linux provides stats like load average, iostat, vmstat, netstat, etc.
Apache has the access log and the server-status page.
Programming languages have profilers, like PEAR Benchmark.
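For the first item, enabling the slow query log takes a minute (a minimal sketch, assuming MySQL 5.1 or later; the threshold and path are illustrative):

-- Log anything slower than 1 second, starting immediately.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
-- Mirror the same settings in my.cnf so they survive a restart.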
Use these tools to identify the slowest/biggest offenders and concentrate on them. Try an improvement and measure whether it actually improves performance.
This becomes a never-ending loop for two reasons: there's always something in a complex system that can be made faster, and as your system grows, different functions will start slowing down.
Based on the description of your system, my first hunch would be disk I/O and network I/O on the NFS servers; then I'd look at MySQL query times. I'd also check the performance of the shared sessions.
The schoolbook way of doing it would be to identify the bottlenecks with real empirical data.
Is it the database, Apache, the network, CPU, memory, or I/O? Do you need more RAM or sharding? Is it disk I/O, the NFS network load, or CPU spent on full table scans?
When you find out where the problem is, you might discover that scaling the infrastructure alone isn't enough because of the way the code works, and you end up needing to either create more instances of your current setup or change the code.
I would also recommend, as a first step in terms of scalability, off-loading your content to a CDN like Edgecast. Use your current two content servers as additional web servers.