When to use vertical and horizontal scaling? (Mule)

What are some scenarios where we have to decide whether to go with horizontal or vertical scaling? Also, is there any other alternative to scaling?

When you’re choosing between horizontal scaling and vertical scaling, you also have to consider what’s at stake when you scale up versus scale out. Horizontal scaling is almost always more desirable than vertical scaling because you don’t get caught in a resource deficit.
Choosing between vertical and horizontal scaling depends on the application architecture. For instance, applications built on a serverless architecture are naturally suited to horizontal scaling.

Scaling can apply to various kinds of resources at various levels: the database, the message broker, the application runtime, and so on. The approach below is generally acceptable, but it will vary slightly depending on the kind of resource in question.
Basically, always choose an environment that supports horizontal scaling. Use vertical scaling to satisfy the benchmarked base (regular) load, and use horizontal scaling for any further variation in load, so that you can add or remove resources as demand changes.
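As a rough sketch of that split (illustrative capacity numbers only, and a made-up decide_extra_nodes helper; this is not tied to any particular platform):

```cpp
#include <cmath>
#include <iostream>

// How many extra (horizontally scaled) nodes are needed once the load exceeds
// the benchmarked base capacity served by the vertically sized node(s).
int decide_extra_nodes(double current_load, double baseline_capacity,
                       double per_node_capacity) {
    double excess = current_load - baseline_capacity;  // load beyond the base load
    if (excess <= 0.0) return 0;                        // base capacity is enough
    return static_cast<int>(std::ceil(excess / per_node_capacity));
}

int main() {
    // Illustrative numbers: a base load of 1000 req/s handled by one big node,
    // plus small nodes that handle 250 req/s each for the variable part.
    std::cout << decide_extra_nodes(1600.0, 1000.0, 250.0)
              << " extra node(s) needed\n";             // prints 3
}
```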

Related

What is the best method to optimize a PDF with cropped, rotated pictures?

I have noticed that with cropped and rotated pictures, which come up a lot with paperless-office techniques, no tool such as Adobe Acrobat or Nitro Pro can optimize away the removed parts of the photos. Attempting to do this with libraries in C# is an interesting challenge, but it could potentially save many megabytes of blank image space, or of space where, for example, multiple receipts were scanned and then cropped into separate pages while the original images are still stored in full.
So, to the point: there is potentially a solution in rotating, cropping, and permanently saving the image again (while adjusting offsets so as not to ruin the alignment of OCR or other relative data). This can work quite well in some cases, but rotation represents a loss of data.
So why would the rotated original look better in a PDF reader? Subpixels, e.g. ClearType. Rotating the original document at display time effectively increases the color resolution at the monitor's scale. So it is practically cheaper, disk-space-wise, to store it unrotated and then, on display, use more processing power to adjust it with subpixel approximations.
So, two proposals: 1) Crop it without rotating it and adjust offsets. This is a little wasteful (worst case at a 45-degree rotation), since you are storing a rotated rectangle within an axis-aligned rectangle; a rough calculation of that overhead follows these proposals.
2) Rotate and crop it while raising it to an appropriately higher color resolution, using ClearType-style subpixel enhancement. This seems a bit involved, and it would increase the resolution while decreasing the portion of the picture kept, which could defeat the purpose.
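To put a rough number on the waste in proposal 1 (illustrative dimensions and angle, and a made-up helper name):

```cpp
#include <cmath>
#include <iostream>

// Ratio of the axis-aligned bounding box area to the area of a w x h region
// rotated by angleDeg degrees.
double bounding_box_overhead(double w, double h, double angleDeg) {
    const double pi = 3.14159265358979323846;
    double a = angleDeg * pi / 180.0;
    double bbW = w * std::fabs(std::cos(a)) + h * std::fabs(std::sin(a));
    double bbH = w * std::fabs(std::sin(a)) + h * std::fabs(std::cos(a));
    return (bbW * bbH) / (w * h);
}

int main() {
    // Worst case for a square crop: 45 degrees, where the bounding box holds
    // roughly twice the pixels of the region you actually want to keep.
    std::cout << bounding_box_overhead(1000.0, 1000.0, 45.0) << "\n";  // ~2.0
}
```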
Does anyone have a better strategy or any suggestions for this interesting, yet seemingly common, problem? A very nice utility tool could perhaps be written implementing the naïve approach and the two proposals above, but perhaps I am missing an easier and better idea? Cutting this waste is potentially beneficial and cost-saving, with minimal downside, provided the cropping decisions in the PDFs have been properly finalized.

Effort Required to make 3D Game Engine?

For the sake of theory (and general understanding),
I would like a moderately exhaustive overview of all the things that must be done in order to create a "modern" 3D game engine (from a coder's perspective).
I seem to have a hard time finding this information anywhere else, so I think that you guys at Stack Overflow will have the knowledge I seek.
By "moderately exhaustive", I mean things like a general explanation of the design stages of such an engine, such as binary space partitioning, then the actual implementation of such an engine, and the uses of the software (it would be helpful if means of rendering other than BSP could be explained too).
I don't want to make a 3D engine; I simply want to understand the sheer amount of effort required to make one.
Focusing on 3D rendering alone:
Binary space partitioning, like many elements of 3d rendering, is optional. In this case, it is an optimization, allowing the computer to do less work to render a scene, by cutting out invisible sections.
At its core, rendering is a five-stage process. First, a list of triangles is generated. Second, the triangles are converted from 3-space to 2-space using matrix multiplication. Third, the triangles are filled in with pixels and meta-information. Fourth, the pixels are shaded individually using that meta-information. Finally, the pixels are drawn to the screen.
Most of those steps are partially or wholly done by a graphics card, meaning the programmer's job is to tell the card which step to perform and provide the input data.
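As a toy illustration of the 3-space-to-2-space step (not how any particular engine or API does it; the matrix values and struct names here are invented), multiplying each vertex by a projection matrix and then dividing by w looks roughly like this:

```cpp
#include <array>
#include <iostream>

struct Vec4 { double x, y, z, w; };
using Mat4 = std::array<std::array<double, 4>, 4>;

// Multiply a 4x4 matrix by a homogeneous vertex.
Vec4 multiply(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
        m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
        m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
        m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w,
    };
}

int main() {
    // A very simple perspective matrix: copies z into w so the later divide
    // shrinks distant points (a focal length is folded into the diagonal).
    Mat4 proj = {{ {2, 0, 0, 0},
                   {0, 2, 0, 0},
                   {0, 0, 1, 0},
                   {0, 0, 1, 0} }};
    std::array<Vec4, 3> triangle = {{ {0, 1, 5, 1}, {-1, -1, 5, 1}, {1, -1, 4, 1} }};
    for (const Vec4& v : triangle) {
        Vec4 c = multiply(proj, v);
        std::cout << c.x / c.w << ", " << c.y / c.w << "\n";  // 2-space coordinates
    }
}
```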
This bare bones engine is not even close to a modern engine, however. Modern engines will be filled with optimizations like binary space partitioning, mesh simplification, background loading and texture compression. They will also be filled with special features like shadows, mirrors, mist and particle effects.
Modern engines have to be able to load and interpret textures and meshes, and in some cases, deform and modify both at runtime. The most common example would be interpolating between keyframes.
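For instance, a minimal keyframe-interpolation sketch (linear interpolation of one vertex position between two poses; the struct, names, and numbers are illustrative, not from any particular engine):

```cpp
#include <iostream>

struct Vec3 { double x, y, z; };

// Linear interpolation: t = 0 gives `a`, t = 1 gives `b`.
Vec3 lerp(const Vec3& a, const Vec3& b, double t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

int main() {
    Vec3 keyA{0, 0, 0}, keyB{10, 2, 0};   // vertex position in two keyframes
    double frameTime = 0.25;               // 25% of the way from keyA to keyB
    Vec3 pos = lerp(keyA, keyB, frameTime);
    std::cout << pos.x << ", " << pos.y << ", " << pos.z << "\n";  // 2.5, 0.5, 0
}
```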
Engines may need to interact with game logic modules in order to reuse data for collision detection, collision detection being the thing that determines whether bullets hit something and also the thing that makes walls and floors feel real.
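As a tiny illustration of what that collision data gets used for, an axis-aligned bounding-box overlap test (types and values invented for the example) might look like:

```cpp
#include <iostream>

struct Box { double x, y, z, w, h, d; };   // position + size

// True if the two axis-aligned boxes overlap on all three axes.
bool overlaps(const Box& a, const Box& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h &&
           a.z < b.z + b.d && b.z < a.z + a.d;
}

int main() {
    Box bullet{1, 1, 0.2, 0.1, 0.1, 0.1};
    Box wall{0, 0, 0, 5, 3, 0.5};
    std::cout << (overlaps(bullet, wall) ? "hit" : "miss") << "\n";  // prints "hit"
}
```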

Cloudbees instance sizing

I am preparing to configure our production Cloudbees instance, and would like some advice on app-cell sizing. We will be using multiple instances with auto scaling enabled, but need to choose the app-cell size. This is obviously a compromise between horizontal and vertical scaling and the best choice will be different for each app.
Our app is a shopping service written in GWT, so there is one JSP and an initial download of the application's HTML, JS, CSS, and image files. After that, everything runs in the client's browser, with only search calls hitting the server and returning plain JSON with no serialization. Also, we are not using sessions and do not need any server-side state, so the memory footprint should be low.
Given all of this, my gut is to go with a more horizontally scaled deployment with a larger number of smaller instances. Any suggestions would be welcome.
There is no general answer to this question. Application design and internal processing enter into the equation and change the result. Only a load test can tell you the best sizing for your application.
Statelessness implies you can scale horizontally very easily - so this opens up both ways to scale.
My suggestion is to find which "container size" works well enough for a single instance (usually a memory restriction); once you are happy with that, you can then scale horizontally based on usage.

Optimal image size for browser rendering

The question
Is there a known benchmark or theoretical basis for the optimal image size (in terms of rendering speed)?
A little background
The problem is as follows: I have a collection of very large images, thousands of pixels wide in each dimension. These should be presented to the user and manipulated somehow. In order to improve performance of my web app, I need to slice them. And here is where my question arises: what should be the dimensions of these slices?
You can only find out by testing: every browser will have different performance parameters, and your user base may include anything from a mobile phone to a 16-core Xeon desktop. The larger determining factor may actually be the network performance of loading new tiles, which is completely dependent on how you are hosting and who your users are.
As the others already said, you can save a lot of research by duplicating the sizes already used by similar projects: Google Maps, Bing Maps, any other mapping system, not forgetting some of the gigapixel projects like gigapan.
It's hard to give a definitive dimension, but I successfully used 256x256 tiles.
This is also the size used by Microsoft Deep Zoom technology.
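If you go that route, the slicing itself is straightforward; here is a rough sketch (made-up source dimensions) of computing 256x256 tile regions, with smaller tiles at the right and bottom edges:

```cpp
#include <algorithm>
#include <iostream>

int main() {
    const int tileSize = 256;
    const int imageW = 6000, imageH = 4500;   // example source image

    for (int y = 0; y < imageH; y += tileSize) {
        for (int x = 0; x < imageW; x += tileSize) {
            int w = std::min(tileSize, imageW - x);   // clamp at the right edge
            int h = std::min(tileSize, imageH - y);   // clamp at the bottom edge
            // A real slicer would crop and save the region (x, y, w, h) here.
            std::cout << "tile at (" << x << "," << y << ") size "
                      << w << "x" << h << "\n";
        }
    }
}
```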
In absence of any other suggestions, I'd just use whatever Google Maps is using. I'd imagine they would have done such tests.

Planning a 2D tile engine - Performance concerns

As the title says, I'm fleshing out a design for a 2D platformer engine. It's still in the design stage, but I'm worried that I'll be running into issues with the renderer, and I want to avoid them if they will be a concern.
I'm using SDL for my base library, and the game will be set up to use a single large array of Uint16 to hold the tiles. These index into a second array of "tile definitions" that are used by all parts of the engine, from collision handling to the graphics routine, which is my biggest concern.
The graphics engine is designed to run at a 640x480 resolution, with 32x32 tiles. There are 21x16 tiles drawn per layer per frame (to handle the extra tile that shows up when scrolling), and there are up to four layers that can be drawn. Layers are simply separate tile arrays, but the tile definition array is common to all four layers.
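Roughly, the layout I have in mind looks something like this (a simplified sketch; the TileDef fields and the map size are placeholders, and Uint16 is just aliased here instead of pulled in from SDL):

```cpp
#include <cstdint>
#include <vector>

using Uint16 = std::uint16_t;   // same width as SDL's Uint16

// One entry per distinct tile, shared by all layers and by collision handling.
struct TileDef {
    int sheetX, sheetY;   // location of the 32x32 image in the tile sheet
    bool solid;           // used by collision handling
    bool animated;
    bool transparent;
};

// One layer: a width x height grid of indices into the shared TileDef table.
struct Layer {
    int width, height;
    std::vector<Uint16> tiles;
};

int main() {
    std::vector<TileDef> defs(512);   // shared tile definition array
    Layer layers[4];
    for (Layer& l : layers) {
        l.width = 210;                // example map size in tiles
        l.height = 160;
        l.tiles.assign(l.width * l.height, 0);
    }
    // Look up the definition of the tile at column 3, row 5 of layer 0.
    const TileDef& d = defs[layers[0].tiles[5 * layers[0].width + 3]];
    (void)d;
    return 0;
}
```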
What I'm worried about is that I want to be able to take advantage of transparencies and animated tiles with this engine, and as I'm not too familiar with renderer designs, I'm worried that my current solution is going to be too inefficient to work well.
My target FPS is a flat 60 frames per second, and with all four layers being drawn, I'm looking at 21x16x4x60 = 80,640 separate 32x32px tiles needing to be drawn every second, plus however many odd-sized blits are needed for sprites, and this seems just a little excessive. So, is there a better way to approach rendering the tilemap setup I have? I'm looking at the possibility of using hardware acceleration to draw the tilemaps, if it will help to improve performance much. I would also like the game to run well on slightly older computers.
If I'm looking for too much, then I don't think that reducing the engine's capabilities is out of the question.
I think the thing that will be an issue is the sheer number of draw calls, rather than the total "fill rate" of all the pixels you are drawing. Remember, that is over 80,000 calls per second that you must make. I think your biggest improvement will be to batch these together somehow.
One strategy to reduce the fill rate of the tiles and layers would be to composite static areas together. For example, if you know an area doesn't need updating, it can be cached. A lot depends on whether the layers are scrolled independently (parallax style).
Also, have a look on Google for "dirty rectangles" and see if any of those schemes fit your needs.
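A sketch of the caching idea, assuming SDL 1.2 since that's what you're using (buildStaticLayers is a placeholder for your normal tile loop): composite the layers that never change into one cached surface, then blit that single surface each frame and draw only animated tiles and sprites on top.

```cpp
#include <SDL.h>

SDL_Surface* g_staticCache = nullptr;

// Placeholder: a real engine would blit every non-animated tile of every
// static layer into `target` here, once.
void buildStaticLayers(SDL_Surface* target) {
    SDL_FillRect(target, nullptr, SDL_MapRGB(target->format, 40, 40, 80));
}

void renderFrame(SDL_Surface* screen) {
    if (!g_staticCache) {
        g_staticCache = SDL_DisplayFormat(screen);   // screen-sized, screen-format copy
        if (g_staticCache) buildStaticLayers(g_staticCache);  // expensive work happens once
    }
    if (g_staticCache)
        SDL_BlitSurface(g_staticCache, nullptr, screen, nullptr);  // one blit per frame
    // ...then draw animated tiles and sprites over the cached background...
}

int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen) {
        renderFrame(screen);
        SDL_Flip(screen);
        SDL_Delay(1000);   // keep the window up briefly
    }
    if (g_staticCache) SDL_FreeSurface(g_staticCache);
    SDL_Quit();
    return 0;
}
```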
Personally, I would just try it and see. This probably won't affect your overall game design, and if you have good separation between logic and presentation, you can optimise the tile drawing til the cows come home.
Make sure to use alpha transparency only on tiles that actually use alpha, and skip drawing blank tiles. Make sure the tile surfaces' color depth matches the screen color depth when possible (not really an option for tiles with an alpha channel), and store tiles in video memory so SDL will use hardware acceleration when it can. Color-key transparency will be faster than a full alpha channel for simple tiles where partial transparency, or blending antialiased edges with the background, isn't necessary.
On a 500 MHz system you'll get about 6.8 CPU cycles per pixel per layer, or 27 per screen pixel, which (I believe) isn't going to be enough if you have full alpha channels on every tile of every layer, but should be fine if you take shortcuts like those mentioned where possible.
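To make those suggestions concrete, here is a hedged SDL 1.2 sketch (the file name, the magenta key color, and the tile coordinates are placeholders): convert tiles to the display format once up front, and use a color key with RLE acceleration for tiles that only need on/off transparency.

```cpp
#include <SDL.h>

SDL_Surface* load_tile_sheet(const char* path) {
    SDL_Surface* raw = SDL_LoadBMP(path);
    if (!raw) return nullptr;

    // Treat magenta as transparent; RLE acceleration helps with large blank runs.
    Uint32 key = SDL_MapRGB(raw->format, 255, 0, 255);
    SDL_SetColorKey(raw, SDL_SRCCOLORKEY | SDL_RLEACCEL, key);

    // Convert to the display format once, so each blit skips per-pixel conversion.
    SDL_Surface* converted = SDL_DisplayFormat(raw);
    SDL_FreeSurface(raw);
    return converted;
}

int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
    SDL_Surface* tiles = load_tile_sheet("tiles.bmp");   // placeholder asset

    if (screen && tiles) {
        SDL_Rect src = {0, 0, 32, 32}, dst = {100, 100, 0, 0};
        SDL_BlitSurface(tiles, &src, screen, &dst);      // color-keyed pixels are skipped
        SDL_Flip(screen);
    }

    if (tiles) SDL_FreeSurface(tiles);
    SDL_Quit();
    return 0;
}
```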
I agree with Kombuwa. If this is just a simple tile-based 2D game, you really ought to lower the standards a bit, as this is not Crysis. 30 FPS is very smooth (see Command & Conquer 3, which is limited to 30 FPS). Even so, I once wrote a remote desktop viewer that ran at 14 FPS (1900 x 1200) using GDI+, and it was still pretty smooth. I think that for your 2D game you'll probably be okay, especially using SDL.
Could you just buffer each complete layer at its view size plus one additional tile on all four edges (if you have vertical scrolling), then reuse the buffer to create a new buffer minus the first column, drawing only a new end column?
This would reduce a lot of needless redrawing.
Additionally, if you want 60 FPS, you can look into frame-skip methods for slower systems, skipping every other or every third draw phase.
I think you will be pleasantly surprised by how many of these tiles you can draw a second. Modern graphics hardware can fill a 1600x1200 framebuffer numerous times per frame at 60 fps, so your 640x480 framebuffer will be no problem. Try it and see what you get.
You should definitely take advantage of hardware acceleration. This will give you 1000x performance for very little effort on your part.
If you do find you need to optimise, then the simplest way is to only redraw the areas of the screen that have changed since the last frame. Sounds like you would need to know about any animating tiles, and any tiles that have changed state each frame. Depending on the game, this can be anywhere from no benefit at all, to a massive saving - it really depends on how much of the screen changes each frame.
You might consider merging neighbouring tiles with the same texture into a larger polygon with texture tiling (sort of a build process).
What about decreasing the frame rate to 30 FPS? I think it will be good enough for a 2D game.