CloudBees instance sizing

I am preparing to configure our production CloudBees instance and would like some advice on app-cell sizing. We will be using multiple instances with auto-scaling enabled, but need to choose the app-cell size. This is obviously a compromise between horizontal and vertical scaling, and the best choice will be different for each app.
Our app is a shopping service written in GWT, so there is one JSP and an initial download of the application's HTML, JS, CSS, and image files. After that, everything runs in the client's browser, with only search calls hitting the server and returning plain JSON with no serialization. Also, we are not using sessions and do not need any kind of state on the server, so the memory footprint should be low.
Given all of this, my gut is to go with a more horizontally scaled deployment with a larger number of smaller instances. Any suggestions would be welcome.

There is no general answer to this question. Application design and internal processing come into the equation and change the result. Only a load test can tell you the best sizing for your application.

Statelessness implies you can scale horizontally very easily - so this opens up both ways to scale.
My suggestion is to find which "container size" works well enough for a single instance (usually a memory restriction), and once you are happy with that, you can scale horizontally based on usage.

Related

Huge images in iOS app without CATiledLayer?

I have an image of about 7000x6000px. I need this to be in a scroll view/image view in my app, but it is far too large to display. It is supposed to be a kind of map. I was hoping to keep the size of the app to a minimum, and the image is only about 13 MB as a .jpg. As a .png it is over 100 MB, which is unacceptable. Many have suggested CATiledLayer as an option, but I believe this would result in even bigger file sizes. Anyway, I tried to do it with CATiledLayer and created my own tiles in TileCutter (tiles as .jpg), and the size wasn't too bad. But I am having errors all over the place. The iOS version of CATiledLayer is a mystery to me, and I can't find a way to solve this. I get an error saying something like the Java equivalent of "index out of bounds of array", even though the array has content at that specific index.
It has a method which returns an array. The array contains data from a .plist. Before the return I log the contents of the array, which gives me good data. The call then tries to access
[array objectAtIndex:0]
and put it in a dictionary, but it throws an out-of-bounds exception. When I log the whole array I can clearly see the content, but when I log
NSLog(@"%@", [array objectAtIndex:0]); I get the same exception.
Anyway, CATiledLayer has given me nothing but problems. I have been reverse-engineering the PhotoScroller project with no luck. Anyone have any other solutions?
Thanks.
Apple has this really neat project, PhotoScroller, that uses CATiledLayer and lets you scroll through several images and zoom them. This seemed really neat until I found out that Apple "cheated" and pre-tiled the images (about 800 tiles saved as files in the bundle!).
I had a need for a similar capability, but had to download the images from the network. Thus came about PhotoScrollerNetwork. With the TiledImageBuilder you can download (or read from disk) massive images - I even tested an 18000x18000 image - and it works.
What this class does is start tiling the image as it downloads (when using libjpeg-turbo), or it can save the file first and then tile it (which takes longer). The class figures out how many levels of detail are needed to show the image at full resolution and sized to fit the containing view (a scroll view).
The class uses the disk cache to hold the tiles, but uses an old Unix trick of creating a file, opening it, then unlinking it, so that the tiles never really get saved: once the class is dealloced (and the file descriptor closed) the tiles are freed, and if your app crashes they get freed too.
Someone had problems on an iPad 1 with its quite limited memory, so the class now throttles its use of the file system when concurrently loading multiple images. I had a discussion with the iOS kernel manager at WWDC this year, and after explaining the throttling technique to him, he said the algorithm (on managing the amount of disk cache usage) was probably the best technique that could be used.
I think those who suggested CATiledLayer are right. You should really use it! If you need a sample project that displays a huge bitmap using that technology, look here: http://www.cimgf.com/2011/03/01/subduing-catiledlayer/
Many technologies we use as Cocoa/Cocoa Touch developers stand untouched by the faint of heart because often we simply don’t understand them and employing them can seem a daunting task. One of those technologies is found in Core Animation and is referred to as the CATiledLayer. It seems like a magical sort of technology because so much of its implementation is a bit of a black box and this fact contributes to it being misunderstood. CATiledLayer simply provides a way to draw very large images without incurring a severe memory hit. This is important no matter where you’re deploying, but it especially matters on iOS devices as memory is precious and when the OS tells you to free up memory, you better be able to do so or your app will be brought down. This blog post is intended to demonstrate that CATiledLayer works as advertised and implementing it is not as hard as it may have once seemed.

Bigger cookie-like files for local data storage (browser "caching" of complex structures)

I am developing a browser-based game, and I have a big map in it. The terrain of the map is static, so I have some thousands of tiles that will not change (whether they represent a forest, a desert, whatever); only the players on top of them can change.
Hence, I wanted to store the whole map on the player's computer. I am working with Ruby on Rails, and this map information is passed from the server to the JavaScript that runs in the user's browser in order to render a pretty map. But it makes me pretty sad to have a 200kb .html file containing all that map-related information.
What would be the simplest way to solve this issue? Cookies! Well, that's what I thought. A complete map's information can reach almost 200kb (they are pretty big), while a cookie can hold at most 4kb. I don't feel that the right way to achieve my objective is to create tons of cookies, one for each row of the map, for instance. Is there any more elegant way to keep this static information in the player's browser without creating lots of cookies? A way to cache it in the browser? I mean, I can cache a 400kb image, so why can't I cache a 200kb map structure?
Thanks in advance!
Fernando.
Well, HTML5 Local Storage gives you 5 MB (though data is stored as strings, so the actual amount of data you can fit in the container is likely a lot less than 5 MB).
This limit is oddly fluid. For one thing, it's just a recommended limit; for another, WebKit-based browsers, for instance, use UTF-16, which immediately cuts that in half (2.5 MB).
Browser support for Local Storage is good: IE, Firefox, Safari 4.0+, Chrome 4.0+, and Opera 10.5+. Both iPhone and Android are supported above versions 3.0 and 2.0, respectively.
Using Local Storage to preserve game state appears to be a prototypical use case.
Finally, Paul Kinlan published an excellent step-by-step tutorial on HTML5Rocks, which I highly recommend (though it's a little more than a year old).
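For illustration, here is a minimal sketch of that approach, assuming the map is already available as a plain JavaScript object; the 'staticMap' key name and the fallback behaviour are hypothetical. Local Storage only holds strings, so the structure has to be serialized:

// Minimal sketch - the 'staticMap' key and the map structure are hypothetical.
function saveMap(map) {
  try {
    localStorage.setItem('staticMap', JSON.stringify(map));
  } catch (e) {
    // Quota exceeded: the ~5 MB budget is full, so fall back to the server.
  }
}

function loadCachedMap() {
  var raw = localStorage.getItem('staticMap');
  return raw ? JSON.parse(raw) : null; // null means fetch the map from the server
}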
Have you considered storing it in a .js file? Most browsers will cache linked .js files, allowing you to serve it only once in a while. It would be very simple to deploy.
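As a rough sketch of that idea (the file name and variable are made up), the static terrain could be published as a plain script that the browser caches like any other asset and the page pulls in with a normal script tag:

// map_data.js - a static, cacheable asset (hypothetical name).
// Included with <script src="/assets/map_data.js"></script>;
// the game code simply reads the global it defines.
var STATIC_MAP = {
  width: 100,
  height: 100,
  tiles: ["forest", "desert", "forest" /* ... thousands more tile codes ... */]
};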

Ajax request to SQL returns hundreds of images to JSON -> CSS Sprites? Base64?

We are writing a Web-based events calendar with thousands of theatre shows, festivals, and concerts in a SQL database.
The user goes to the Website, performs a search, the SQL server returns JSON, and jQuery code on the client side displays the 200+ events.
Our problem is that each event has an image. If I return URLs, the browser has to make 200+ HTTP GET requests for these tiny (2-4Kb) images.
It is tempting to pack all the images into one CSS sprite, but since the user searches can return any combination of images, this would have to be done dynamically and we'd lose control over caching. Every search would have to return every image in a brand new CSS sprite. Also, we want to avoid any solution that requires dynamically reading or writing from a filesystem: that would be way too slow.
I'd rather return binary images as part of the JSON, and keep track of which images we already have cached and which we really need. But I can't figure out how. It seems to require converting to Base64? Is there any example code on this?
Thanks for any help you can give! :)
The web site you are using (Stack Overflow) has to provide 50 avatars for the 50 questions shown on the questions page. How does it do it? The browser makes 50 requests.
I would say you had better implement pagination so that the list does not get too big. You can also load images in the background, or when the user scrolls and gets to the image.
Keeping track of downloaded images is not our job; it is the browser's. Our job is to make sure the URL is unique and consistent and that the response headers allow caching.
Also, Base64 will make the stream at least 33% longer, while processing it on the client side is not trivial - I have never seen an implementation, but probably someone has written some JavaScript for it.
I believe all you need is just pagination.
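As a rough illustration of the "load images when the user scrolls to them" suggestion above, something along these lines works with plain jQuery; the data-src attribute convention and the 200px look-ahead are just assumptions for the sketch:

// Hypothetical sketch: each img carries its real URL in a data-src attribute
// and only gets a src (triggering the download) once it nears the viewport.
function loadVisibleImages() {
  $('img[data-src]').each(function () {
    var $img = $(this);
    var viewportBottom = $(window).scrollTop() + $(window).height();
    if ($img.offset().top < viewportBottom + 200) { // 200px look-ahead
      $img.attr('src', $img.attr('data-src')).removeAttr('data-src');
    }
  });
}
$(window).on('scroll resize', loadVisibleImages);
$(loadVisibleImages); // also run once when the page first loads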
It looks like the original poster has proceeded with essentially this solution on their own; however, based on their comment about the 33% space overhead, I don't think they have observed the unexpected phenomenon you get when base64-ing a bunch of images into one response entity and then gzipping it: believe it or not, this can easily produce an overall transfer size that is smaller than the total of the unencoded image files, even if each unencoded image was also separately gzipped.
I've covered this JSON technique for accelerated image presentation in depth here as an alternative to CSS sprites, complete with a couple of live samples. It aims to show that it is often a superior technique to CSS sprites.
Data is never random. For example, you could name your sprites
date_genre_location.jpg
or however you organise your searches. This might be worth it. Maybe.
You'll need to do the math.
Here is what we decided.
Creating sprites has some downsides:
To reduce image loss, we need to store the originals as PNGs instead of JPEGs on the server. This adds to the slowness, and there's already quite a bit of slowness in creating dynamic CSS sprites with ImageMagick.
To reduce image size, the final CSS sprite must be a JPG, but JPGs are lossy, and with a grid of images things get weird at the edges as JPG tries to blend and compress. We can fix that by putting a blank border around all the images, but even so, this adds complexity.
With CSS sprites, it becomes harder to do intelligent caching; we have to pack up all the images regardless of what we've sent in the past (or we have to keep all the old CSS sprites regardless of whether they contain many or few images we still need).
I'm afraid we have too many combinations of date, category, and location to precompute CSS sprites, although we could surely handle part of the data this way.
Thus our solution is to Base64 the images and actually send each image one by one within the JSON. Despite the 33% overhead, it is far less complex to code and manage, and when you take caching issues into account, it may even mean less data transfer.
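For anyone curious, here is a minimal sketch of the client side of that approach; the response field names and the container element are made up. Each image arrives as a Base64 string in the JSON and is displayed via a data URI, so no extra HTTP requests are needed:

// Hypothetical response shape: [{ id: 17, mime: "image/jpeg", b64: "..." }, ...]
function renderEvents(events) {
  $.each(events, function (i, ev) {
    var img = new Image();
    // A data URI embeds the image bytes directly in the src attribute.
    img.src = 'data:' + ev.mime + ';base64,' + ev.b64;
    $('#events').append(img); // '#events' is an assumed container element
  });
}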
Thank you all for your advice on this!

Optimal image size for browser rendering

The question
Is there a known benchmark or theoretical substantiation of the optimal (rendering-speed-wise) image size?
A little background
The problem is as follows: I have a collection of very large images, thousands of pixels wide in each dimension. These should be presented to the user and manipulated somehow. In order to improve the performance of my web app, I need to slice them. And here is where my question arises: what should the dimensions of these slices be?
You can only find out by testing; every browser will have different performance parameters, and your user base may have anything from a mobile phone to a 16-core Xeon desktop. The larger determining factor may actually be the network performance in loading new tiles, which is completely dependent on how you are hosting and who your users are.
As the others have already said, you can save a lot of research by duplicating the sizes already used by similar projects: Google Maps, Bing Maps, any other mapping system, not forgetting some of the gigapixel projects like GigaPan.
It's hard to give a definitive dimension, but I successfully used 256x256 tiles.
This is also the size used by Microsoft's Deep Zoom technology.
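For reference, a small sketch of the arithmetic behind such a tile pyramid; the 256px tile size is the one mentioned above, and the rest is just the standard halving per zoom level:

// Count zoom levels and tiles for a tile pyramid of a given image.
function pyramidStats(imageWidth, imageHeight, tileSize) {
  tileSize = tileSize || 256;
  var maxDim = Math.max(imageWidth, imageHeight);
  // Each level halves the image until it fits in a single tile.
  var levels = Math.ceil(Math.log(maxDim / tileSize) / Math.LN2) + 1;
  var tiles = 0;
  for (var level = 0; level < levels; level++) {
    var scale = Math.pow(2, level); // level 0 = full resolution
    tiles += Math.ceil(imageWidth / scale / tileSize) *
             Math.ceil(imageHeight / scale / tileSize);
  }
  return { levels: levels, tiles: tiles };
}
// e.g. pyramidStats(7000, 6000) -> { levels: 6, tiles: 899 }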
In absence of any other suggestions, I'd just use whatever Google Maps is using. I'd imagine they would have done such tests.

Optimizing for low bandwidth

I am charged with designing a web application that displays very large geographical data sets. One of the requirements is that it should be optimized so that PCs still on the dial-up connections common in the suburbs of my country can use it as well.
Now I am permitted to use Flash and/or Silverlight if that will help with the limited development time and user experience.
The heavy part of the geographical data is chunked into tiles and loaded like map tiles in Google Maps, but that means I need a lot of HTTP requests.
Should I go with just JavaScript + HTML? Would I end up with a faster application using Flash/Silverlight, since I can use some complex algorithms with those two technologies (like Deep Zoom)? Deploying a desktop app, though, is out of the question since we don't have that much maintenance funding.
It just needs to be fast... really fast..
p.s. faster is in the sense of "download faster"
I would suggest you look into Silverlight and DeepZoom
Is something like Gears acceptable? This will let you store data locally to limit re-requests.
I would also stay away from Flash and Silverlight and go straight to JavaScript/AJAX. jQuery is a ton-o'-fun.
I don't think you'll find Flash or Silverlight is going to help too much for this application. Either way you're going to be using tiled images, and the images are going to be the same size in both scenarios. Using Flash or Silverlight may allow you to add some neat animations to the application, but anything you gain here will be additional overhead for your clients on dial-up connections. I'd stick with plain JavaScript/HTML.
You may also want to look at downloading your tiles asynchronously via one of the available Ajax libraries. Let's say your user can view 9 tiles at a time and can scroll/zoom. On the first load, download the 9 tiles they can see plus whatever is needed to handle zooming those tiles; then you'll need to play around with caching strategies for prefetching other information asynchronously.
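A rough sketch of that idea follows; the tile URL scheme, the 3x3 grid, and the element IDs are all hypothetical. The visible tiles are requested first, and the surrounding ring is prefetched afterwards so the browser has it cached before the user scrolls:

// Hypothetical tile URL scheme: /tiles/{zoom}/{x}_{y}.png
function tileUrl(zoom, x, y) {
  return '/tiles/' + zoom + '/' + x + '_' + y + '.png';
}
// Preload a tile without inserting it into the page; the browser caches it.
function prefetchTile(zoom, x, y) {
  new Image().src = tileUrl(zoom, x, y);
}
function loadViewport(zoom, centerX, centerY) {
  // 1. Request the 3x3 block the user can actually see.
  for (var dx = -1; dx <= 1; dx++) {
    for (var dy = -1; dy <= 1; dy++) {
      document.getElementById('tile_' + dx + '_' + dy).src =
          tileUrl(zoom, centerX + dx, centerY + dy);
    }
  }
  // 2. Defer prefetching the surrounding ring so the visible tiles go first.
  setTimeout(function () {
    for (var x = -2; x <= 2; x++) {
      for (var y = -2; y <= 2; y++) {
        if (Math.abs(x) === 2 || Math.abs(y) === 2) {
          prefetchTile(zoom, centerX + x, centerY + y);
        }
      }
    }
  }, 0);
}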
At one place I worked, a rules engine was taking a bit too long to return a result, so they opted to present the user with a "confirm this" screen. The few seconds it took the users to review and click next was more than enough time to return the results. It made the app look lightning fast to the user when in reality it took a bit longer. You have to remember, user perception of performance is just as important in some cases as the actual performance.
I believe Microsoft's Seadragon is your answer. However, I am not sure if that is available to developers.
It looks like some of it has found its way into Silverlight