Which tier should I choose on Uffizzi Cloud for a smaller but growing application?

My application currently has around 5k regular users and is growing by about 100 users per month. I am considering deploying it on Uffizzi Cloud so that I don't have to think so much about maintaining my deployment, but I am not sure what tier I need at my size, or whether I will outgrow the lower tier soon.

Much will depend on how your application is built and how it handles requests.
The Standard Tier is designed for dev-and-test purposes, or for anything that only a few people will be accessing. Good use cases are running your app in a Dev, QA, or Staging environment so you can see how it works before it goes live to an audience. (This process is very important: most features don't come out of the box working exactly as everyone on the team expected them to, or at least not as your product lead expected.)
The Performance Tier has auto-scaling and is a good choice for a public-facing app that you don't want to crash under load or lag when serving responses. If you have users in the tens to hundreds, this tier is appropriate.
The Enterprise Tier is a good choice for several hundred to hundreds of thousands of users. The key decision is how reliable you want your app to be. Obviously, an outage at any point may hurt your reputation, and the more people using your app, the worse it will be. The Enterprise Tier has automated High Availability: your app will be deployed and load-balanced across three data centers, known as Availability Zones ("AZs"). This dramatically reduces the risk that a hardware failure or other localized incident causes an outage, since all three zones would have to fail at once. For example, if each zone independently had a 1% chance of failing in a given period, the chance of all three failing together would be 0.01³, i.e. one in a million.

Related

Stress testing a desktop app system

If I want to stress test a 'classic' client-server (desktop app <-> LAN <-> database server) Windows Forms desktop application to see how it performs when many concurrent PC users are using it, how should I go about it? I want to simulate many PC users concurrently going through a workflow, to see if it all stands up and at what point the system degrades unacceptably. I've looked at many test tools, but they all seem to be skewed toward testing functionality or web app performance, which is quite different.
Clearly having many actual people on actual PCs is not practical, and lots of virtual machines on a few PCs is not representative either. 'Cloud' computing (EC2, Azure, etc.) looks promising, but the documentation and pricing information all seem to be skewed towards mobile apps or web servers, again not the same (though that could just be presentation, so I remain open to the idea). I need to be able to virtualise a small LAN of many client machines running the application, plus a database server.
Can anyone suggest how to do this, or recommend something?
TIA
IMHO the real question is: do you really need to do performance testing in your case? Consider this: where does your business and functional logic live?
Performance testing of desktop applications is an oxymoron in itself. A desktop application is made to be used by one person at a time, so if getting a response takes 5 seconds, it will take (pretty much) 5 seconds no matter how many users are clicking the button. The only real shared backend you have is the database, and databases by design support serious concurrent load. If that is not enough, just add a cluster.
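That said, if you decide the database is the shared piece worth testing, a minimal sketch of the idea (assuming a SQL Server backend; the connection string and query are placeholders, not from the question) is just to spin up increasing numbers of concurrent clients running the same workflow and watch where timings degrade:
```csharp
// Minimal DB load-test sketch: simulate increasing numbers of concurrent
// clients running the same workflow and report wall-clock time per batch.
// The connection string and query are placeholders.
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading.Tasks;

class DbLoadTest
{
    const string ConnStr = "Server=dbserver;Database=AppDb;Integrated Security=true";

    static void Main()
    {
        foreach (int clients in new[] { 10, 50, 100, 200 })
        {
            var sw = Stopwatch.StartNew();
            var workers = new Task[clients];
            for (int i = 0; i < clients; i++)
                workers[i] = Task.Run(() => RunWorkflow());
            Task.WaitAll(workers);
            Console.WriteLine("{0} concurrent clients: {1} ms", clients, sw.ElapsedMilliseconds);
        }
    }

    // One simulated user's workflow: replace with the queries your Forms app issues.
    static void RunWorkflow()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
                cmd.ExecuteScalar();
        }
    }
}
```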

Microsoft Azure Blob Storage Upload Performance

I am running an Azure web role, which is storing very small blobs into Azure storage. (Blob upload is being done from the server, not from the browser.) I have searched Stack Overflow and the rest of the internet for tips on optimizing blob storage performance, and I believe I've checked and implemented all of the usual suspects: uploading async, allowing unlimited outgoing web connections (which now seems to be the default setting on web roles and no longer needs to be explicitly set in web.config or in code).
Tweaking the number of concurrent uploads I allow makes some difference, but regardless of what I've tried, I seem to max out at around 1,000 blob uploads per second. This is when running in the Azure web role, in the same region as the storage account (East US). My rate when running this from home over a good internet connection isn't much lower, ~700 blobs/sec, which seems to tell me that it's not network latency that's limiting the rate but the actual processing time of the storage service.
I wouldn't normally consider these rates horrible for this kind of a service, but I've read that Microsoft boasts a rate of ~20,000 storage transactions per second, so I've been a little disappointed with these results.
I'd like to get some feedback from those who have really tried to push the limits of blob storage. Does ~1,000 small uploads per second sound about right? Or is there possibly something else I should be doing to improve this? I'll post the code if I need to, but I'd rather not receive speculative answers; I'd like to hear from developers who can either confirm that my results are reasonable or say that they've seen much higher throughput.
I should add that I'm currently running this in a small web role. I've tried it also in a medium web role, and didn't see any significant difference.
EDIT:
After a few days of development and testing, my upload rate seemed to suddenly increase. Not by a lot, but maybe by another ~200 per second. Looking around the web, I noticed a comment in the Azure documentation stating, "A storage account scales automatically as usage increases." So I'm wondering if it really is capable of much higher rates but will not scale up until it sees a sustained period of high volume. Confirmation of that would also be greatly appreciated.
Depending on how small your requests are, the problem might be Nagle's algorithm, which is not friendly towards small requests - although usually I see that with queue/table operations. Try disabling Nagle's and let me know if that makes any difference. As an FYI, you have to disable it prior to establishing the connection, otherwise the change will not take effect.
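For what it's worth, in a .NET web role the usual way to do that is through ServicePointManager, before any storage call opens a connection. A minimal sketch (the connection limit is just an example value, and the Expect100Continue tweak is a common companion change, not something from the answer above):
```csharp
using System.Net;

public static class StorageTuning
{
    // Call once at startup (e.g., Application_Start or the role's OnStart),
    // before the first connection to the storage endpoint is opened --
    // ServicePoint settings don't affect connections that already exist.
    public static void Apply()
    {
        ServicePointManager.UseNagleAlgorithm = false;     // stop batching small payloads
        ServicePointManager.Expect100Continue = false;     // skip the 100-Continue round trip
        ServicePointManager.DefaultConnectionLimit = 100;  // example value: allow many parallel uploads
    }
}
```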
Jason

How important is Audit Logging in a Standalone Application

My experience is mostly in developing web applications, and we do a lot of audit trails there. Literally every table is audited. I believe this is because user transactions are centralized on a server and they share the same tables, so it is important to know who did what.
But now I am assigned to a project developing a standalone application (specifically a mobile application with occasional server transactions). Some are suggesting adding audit logging, but I am not sure what the norm is for standalone applications. For those who have experience, kindly share whether you think it is mandatory or not. I'm leaning towards NO (that it is not that important) because it will only increase resource consumption (and mobile resources are limited). It may affect performance, stability, and usability.
Well, for the app itself it may not be so important; however, having automated error logs can be very useful when you are faced with angry customers and need to quickly debug the app. You can even have a special 'debug mode' to collect more info.
You should also log your server transactions; adding an extra query per request won't really affect performance.
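To illustrate the error-log idea, a rough sketch (all names and paths here are hypothetical): append errors to a small local file and ship it on the next server sync, with a debug mode that captures extra detail.
```csharp
using System;
using System.IO;

// Tiny client-side error logger sketch: append to a local file, ship it on
// the next server sync. The log path and flag names are placeholders.
static class ErrorLog
{
    public static bool DebugMode = false;  // flip on to capture extra detail
    static readonly string LogPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "errors.log");

    public static void Record(Exception ex, string context)
    {
        var line = string.Format("{0:o}\t{1}\t{2}", DateTime.UtcNow, context, ex.Message);
        if (DebugMode)
            line += Environment.NewLine + ex.StackTrace;  // extra detail in debug mode
        File.AppendAllText(LogPath, line + Environment.NewLine);
    }
}
```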

How to achieve high availability?

My boss wants a system that takes into account a continent-wide catastrophic event. He wants to have two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent).
In the event that an earthquake breaks the connection between the two continents, both should work alone. When the connection is restored, they should sync with each other back to normal.
An external cloud system is not allowed, as he has no confidence in it.
The system should take scalability into account, which means adding new servers should be easy to configure.
The servers should be load balanced.
The connection between the servers should be very secure (encrypted and sent through SSL, although SSL already takes care of the encryption).
The system should let one and only one user log in with one account. (Beware of latency between continents: two users sharing an account may reach both login servers at the same time.)
Please help. I'm already at my wit's end. Thank you in advance.
I imagine that these requirements (if properly analysed) are essentially incompatible, in that they cannot all hold, according to the CAP theorem.
If you have several datacentres, even if they are close by, partitions WILL happen. If a partition happens, either availability OR consistency MUST be lost, because either:
you have a pre-determined "master", which keeps working, and other "slave" DCs which fail (or go read-only). This keeps consistency at the expense of availability.
OR you lose consistency for the duration of the partition (this means that operations which depend on immediate consistency are also unavailable).
This is incompatible with your requirements, as far as I can see. What your boss wants is clearly impossible. He needs to understand the CAP theorem.
Now, in YOUR application's case, you may decide that you can bend the rules and redefine what consistency or availability mean, for convenience, and have a system which degrades into an inconsistent but temporarily acceptable state.
You probably want to get product management to have a look at the business case for these requirements. Dropping some of them is probably OK. Consistency is a good requirement to keep, as it makes things behave as people expect - which means dropping availability or partition-tolerance. Keeping consistency is definitely easier from an engineering perspective.
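To make that tradeoff concrete, here is an illustrative sketch (not a real design; all names are made up) of the single-login rule done consistency-first: each account is pinned to a "home" region that alone may grant its session, so duplicate logins are impossible, but accounts homed in an unreachable region simply cannot log in during a partition.
```csharp
using System.Collections.Generic;

// Consistency-over-availability sketch: only an account's "home" region may
// grant it a session, so one account can never be logged in twice, even
// across continents. The cost: during a partition, accounts homed in the
// unreachable region cannot log in at all.
public class RegionalLoginService
{
    readonly string localRegion;
    readonly IDictionary<string, string> homeRegionOf;   // account -> home region
    readonly HashSet<string> activeSessions = new HashSet<string>();
    readonly object gate = new object();

    public RegionalLoginService(string localRegion, IDictionary<string, string> homeRegionOf)
    {
        this.localRegion = localRegion;
        this.homeRegionOf = homeRegionOf;
    }

    public bool TryLogin(string account)
    {
        if (homeRegionOf[account] != localRegion)
            return ForwardToHomeRegion(account);   // denied while the link is down
        lock (gate)
        {
            return activeSessions.Add(account);    // false if already logged in
        }
    }

    public void Logout(string account)
    {
        lock (gate) activeSessions.Remove(account);
    }

    bool ForwardToHomeRegion(string account)
    {
        // Placeholder: RPC to the owning region's login server; return
        // false (deny the login) if that region cannot be reached.
        return false;
    }
}
```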
This is another one of those things where employers tend not to understand the benefits of using an off-the-shelf solution. If you as a programmer don't really even know where to start with this, then rolling your own is probably going to be a huge money and time sink. There's nothing wrong with not knowing this stuff either; high-availability, failsafe networking that takes into consideration catastrophic failure of critical components is a large problem domain that many people pour a lot of effort and money into. Why not take advantage of what providers have to offer?
Give talking to your boss about using existing cloud providers one more try.
You could contact one of the solid, experienced hosting providers (we use Rackspace) that have data centers in different regions worldwide and get their recommendations based on your requirements.
This will require expert assistance, a large budget, and serious planning.
A better option would be to contact a reputable provider with a global footprint, select a premium solution with a solid SLA backing their service, and let them tailor a solution that comes close to your needs.
Just realize that even the likes of Google, Yahoo, Microsoft, and Amazon (to name a few) have at one time or another had some issue that rendered segments of their systems offline to certain users.

How many users does a shared hosting website scale for?

I'm planning on hosting 3 or 4 WCF/.NET 3.5 REST webservices to be used by a new iPhone application.
I've heard good reviews about DiscountASP.NET (http://www.discountasp.net/index.aspx), but I'm pretty ignorant about shared hosting performance. At the same time, I think it's still too early to pay $90 a month for a scalable Amazon EC2 server instance.
So, any idea how many hits per month a shared hosting website can handle?
It depends on how good your shared hosting is. To determine simultaneous users, you can try to benchmark the performance of your server by hitting it with many concurrent requests, as in the sketch below.
Your host will likely yell at you if they see you using a lot of CPU. For web services, bandwidth isn't so much an issue as your CPU/memory availability.
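A crude way to do that benchmark (the URL and concurrency levels below are placeholders) is to fire batches of parallel GETs at one endpoint and watch where the wall-clock time starts climbing faster than the batch size:
```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

// Crude concurrency probe: fire batches of parallel GETs at one endpoint
// and report wall-clock time per batch. The URL is a placeholder.
class SharedHostProbe
{
    static void Main()
    {
        const string url = "http://example.com/service.svc/status";
        foreach (int concurrent in new[] { 5, 20, 50 })
        {
            var sw = Stopwatch.StartNew();
            var tasks = new Task[concurrent];
            for (int i = 0; i < concurrent; i++)
                tasks[i] = Task.Run(() =>
                {
                    var request = WebRequest.Create(url);
                    using (request.GetResponse()) { }  // discard the body; we only time it
                });
            Task.WaitAll(tasks);
            Console.WriteLine("{0} concurrent requests: {1} ms", concurrent, sw.ElapsedMilliseconds);
        }
    }
}
```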
You should be more concerned about simultaneous users instead of hits per month. You want to be able to handle any spikes in traffic. Shared hosting is less predictable because you don't control the usage of the other users on the machine.
I would say if you're starting out, shared hosting would probably be fine; just monitor it and upgrade when you see response times degrading. Your host will probably let you know when you are affecting the performance of everyone else on the server.
They are all different. It depends on how much bandwidth they give you (GB/month is a typical metric).
I would say that hits per month could range in the low hundred thousands without causing major issues on many managed hosts. It all depends on how intensive your WCF services are. The advantage is that many managed hosting facilities will allow you to dynamically add more hosting power to your site (for more $, obviously).