Is there a relation between available RAM and Ring size in OpenStack SWIFT?

I was reading about OpenStack SWIFT and its different components, but I have a question: if more RAM is available, can we afford to have rings of a bigger size? And how does ring size affect the system?

Ring size has nothing to do with the RAM size.
Following is the command to build the object Ring:
swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours>
I am quoting the explanation of the above command from the documentation on ring preparation:
This will start the ring build process, creating the <builder_file> with 2^<part_power> partitions. <min_part_hours> is the time in hours before a specific partition can be moved in succession (24 is a good value for this).
It means that if you choose <part_power> to be 10, then 2^10 = 1024 partitions are going to be created.
You can read in detail from the SWIFT Administrator’s Guide.
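As a rough back-of-the-envelope sketch (mine, not from the documentation), here is how <part_power> translates into a partition count and into partitions per disk; the replica and device counts below are made-up example values:

#include <stdio.h>

/* Rough sketch of how the ring's part_power translates into partitions.
 * All values (part_power = 10, 3 replicas, 100 devices) are made-up
 * illustrations, not taken from a real deployment. */
int main(void) {
    unsigned int part_power = 10;        /* 2^10 = 1024 partitions      */
    unsigned int replicas   = 3;         /* copies of each partition    */
    unsigned int devices    = 100;       /* disks (devices) in the ring */

    unsigned long partitions = 1UL << part_power;
    double per_device = (double)(partitions * replicas) / devices;

    printf("partitions            : %lu\n", partitions);
    printf("replica partitions    : %lu\n", partitions * replicas);
    printf("avg partitions/device : %.1f\n", per_device);
    return 0;
}

A commonly cited rule of thumb is to pick <part_power> so that each drive ends up with on the order of 100 partitions; the point is that ring size is about placement granularity across disks, not about RAM.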

Is there any connection between the segments created in memory by a microprocessor and the memory structure of a process in an operating system?

In the 8086 microprocessor, we divide memory into segments of 64K each because of the 16-bit registers (since a 20-bit address cannot be stored in a 16-bit register). These segments are categorized as code segment, data segment, stack segment and extra segment. This structure is similar to that created by a process in an operating system. Does that mean each process takes up memory equivalent to 4 segments, which would be 4*64K in the case of the 8086? And if this is true, then by doing some more math we can say that only 4 processes can be handled by an 8086 microprocessor at a time (i.e. one of the processes would be in the running state and the others would be in the blocked or ready state), since a maximum of 16 segments is possible (total memory size / size of each segment = 1MB/64K = 16).
I have just started studying this and saw this equivalence between processes and segments. Does any such connection between the segments of memory and the memory structure of a process actually exist, or is it just my crazy imagination?
A little history helps. Early UNIX(tm) ran on the Digital PDP minicomputer family. The first widely circulated versions were V6 & V7, which were exclusive to the PDP-11 family. That family could support a whopping 256K of RAM, but the general-purpose register set (used for address formation) was 16 bits wide. There was a limited memory protection scheme in the processor, which permitted the kernel (supervisor) to have a separate address space from user programs (user), and instructions (addresses generated by the PC) to be separate from data (addresses generated by other means). This will probably get edited into the dust by PDP-11 fanbois.
At around this time, Intel was rolling out what was to become the 8086. The 8-bit CPUs of the day were already straining at a 64K address space limitation, and were using a concept called bank switching to get around it. In bank switching, some sub-ranges of the 64K address space could be re-pointed into a larger memory bank, so with care you could address much more memory. The Hitachi 64180 was one of the CPUs that incorporated this into its silicon; most used external memory controllers.
The 8086 addressing scheme was an amalgamation of these notions. You could produce an operating system which supported dynamically relocated processes and shared text, with up to 64K of instructions + 64K of data per process. The general idea was that you take the segment registers out of the programming model; thus, if the OS has to relocate a process, it knows that the process has no saved copy of the old segment value. The commercial OS QNX 1.x, 2.x provided this as a model, the latter using the 286 extensions to protect against programs that played with the segment registers.
For programs that didn't care about such subtleties (Lotus 123, ...), you could use the segment registers to effectively create a 2^20 address space on the 8086. It is an ugly programming model in this mode because address formation is A=Seg*16+Base, so Seg=1,Base=0 and Seg=0,Base=16 resolve to the same address.
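To illustrate that formula with a couple of lines of C (my own example, not from the original answer):

#include <stdio.h>
#include <stdint.h>

/* 8086 real-mode address formation: physical = segment * 16 + offset.
 * Values are chosen only to show two seg:offset pairs aliasing the
 * same 20-bit physical address. */
static uint32_t phys(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;   /* Seg*16 + Base */
}

int main(void) {
    printf("0001:0000 -> %05X\n", (unsigned)phys(0x0001, 0x0000)); /* 00010 */
    printf("0000:0010 -> %05X\n", (unsigned)phys(0x0000, 0x0010)); /* 00010 */
    return 0;
}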
So, you aren't hallucinating, it was quite intentional, if more than a little half-arsed.

X-Plane 11 freezes while piloting an aircraft and loses control settings

I use X-Plane 11 on Windows 10 Home for professional flight practice, and sometimes the software freezes and stops running while I am flying a freighter. Recovering takes at least 5 minutes, as does restarting the 3D world to restore the view and the HD camera in the cockpit. After recovery, the aircraft's ground controls and the remote view have wrong parameters and angles. Is there anything in the latest (or an older) version that allows a quick restart, avoids the freezes, and keeps the cockpit equipment settings without reloading the saved file each time, so that a new crash doesn't cost me my settings in this software that I use daily and have known for years?
There is a possible solution: increase virtual memory and adjust the video/rendering preferences, as described here: https://www.x-plane.com/kb/configuring-x-plane-to-use-less-virtual-memory/
Reducing X-Plane’s Virtual Memory Use :
There are a few ways you can reduce the amount of virtual memory that X-Plane needs to operate.
The first thing you can do is remove add-ons. Plugins, custom airplanes, and custom scenery can all increase the amount of virtual memory that X-Plane uses. Try removing add-ons and see if the problem goes away. Add-ons can also increase the amount of memory that X-Plane itself uses. You may have to mix and match add-ons depending on your activities.
The second thing you can do is turn down X-Plane’s settings. Here are some settings that make a difference in virtual memory usage.
AI traffic: more planes use more memory, and more complex, higher-detail planes use more memory.
Texture resolution: turn texture compression on – it saves a lot of memory. Turn down texture res as needed. In particular, do not run with extreme res and uncompressed textures.
Trees – turn down the forest density to save memory.
4x SSAA – when in HDR mode, turn off 4x SSAA if you use a large monitor or a large window size.
If needed, turn down object density.

How to properly assign huge heap space for JVM

I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign a max heap space for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical data sets, which can eat up to several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software is supported on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory during launch, we set Xmx as large as we dare, currently based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors during runtime. Should I maybe also sniff for free virtual memory or paging file size prior to assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put it well, but my problem arose from the fact that I have zero knowledge of the hardware this software will be run on, but would nevertheless like to assign as much heap space to the software as possible.
I came to a solution of assigning a heap of 70% of physical memory IF there is a sufficient amount of virtual memory available, and less otherwise.
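To make that concrete, here is a minimal sketch of such a bootloader check in C on Windows, using GlobalMemoryStatusEx for the physical and commit sizes; the 70% ratio, the 1 GB cushion and the "viewer.jar" name are illustrative values, not the poster's actual bootloader:

#include <windows.h>
#include <stdio.h>

/* Sketch of the heap-sizing idea above: ask Windows for the physical
 * RAM and the available commit charge (page file headroom), take ~70%
 * of RAM for -Xmx, and scale down if the commit limit can't back it. */
int main(void) {
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    unsigned long long heapMb   = ms.ullTotalPhys / (1024 * 1024) * 70 / 100;
    unsigned long long commitMb = ms.ullAvailPageFile / (1024 * 1024);

    /* Leave ~1 GB of commit for the JVM's own native overhead (GC
     * structures, thread stacks, code cache) and other processes. */
    if (commitMb < heapMb + 1024)
        heapMb = commitMb > 2048 ? commitMb - 1024 : 1024;

    char cmd[256];
    snprintf(cmd, sizeof(cmd), "java -Xmx%llum -jar viewer.jar", heapMb);
    printf("launching: %s\n", cmd);
    /* A real bootloader would now spawn this command, e.g. with CreateProcess. */
    return 0;
}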
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have down sides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your physical memory. If your heap exceeds that, your application and your computer will run very slowly or become unusable.
A standard way around these issues with mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only display the portions of the image which are on the screen. If you need to be able to zoom in and out, you might need to store data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
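A minimal sketch of that tiling idea (my own illustration; the 256-pixel tile size and viewport numbers are arbitrary): compute which tiles intersect the viewport and keep only those in memory.

#include <stdio.h>

/* Minimal viewport-culling sketch for a fixed tile grid. */
#define TILE_SIZE 256

typedef struct { int x, y, w, h; } Rect;      /* viewport, in pixels */

static void visible_tiles(Rect v, int *x0, int *y0, int *x1, int *y1) {
    *x0 = v.x / TILE_SIZE;
    *y0 = v.y / TILE_SIZE;
    *x1 = (v.x + v.w - 1) / TILE_SIZE;
    *y1 = (v.y + v.h - 1) / TILE_SIZE;
}

int main(void) {
    Rect view = { 1000, 500, 1024, 768 };     /* example viewport */
    int x0, y0, x1, y1;
    visible_tiles(view, &x0, &y0, &x1, &y1);

    /* Only these tiles need to be resident; everything else can be
     * released and re-loaded when it scrolls back into view. */
    printf("load tiles x %d..%d, y %d..%d (%d tiles)\n",
           x0, x1, y0, y1, (x1 - x0 + 1) * (y1 - y0 + 1));
    return 0;
}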
Best to not set JVM max memory to greater than 60-70% of workstation memory, in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS has caches and buffers and so forth around the various IO devices from which it grabs these objects.

How do I set the UEFI memory map for a MacBook with Bad RAM?

I'm trying to fix a friend's MacBook Air. We detected bad/corrupt RAM with memtest, but since the RAM can't be replaced, I was thinking it must be possible to alter the memory map to avoid certain RAM sections, like the Linux kernel parameter memmap used to do on older (non-UEFI) machines. Someone kindly pointed me towards Clover, but I have been reading the docs and have not found any way to alter the memory map.
The best solution to the original problem is to replace the faulty RAM module; it can be done by any skilled repair technician with a BGA rework station.
As for the solution mentioned: you can develop a very simple UEFI application that will use gBS->AllocatePages to allocate the faulty memory block completely as EfiUnusableMemory, so it will automatically be added to the UEFI memory map, and then call the original Apple boot.efi loader.
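Below is a rough EDK II-style sketch of that suggestion (my own illustration, not the answerer's code): the faulty address range is a placeholder, chainloading Apple's boot.efi is omitted, and whether the firmware accepts allocations of type EfiUnusableMemory should be verified against the UEFI spec and the particular implementation.

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/UefiBootServicesTableLib.h>

/* Placeholder range covering the pages memtest flagged as bad.
 * These values are made up; substitute the real faulty addresses. */
#define BAD_RAM_BASE   0x0000000080000000ULL
#define BAD_RAM_PAGES  16                 /* 16 x 4 KiB = 64 KiB */

EFI_STATUS
EFIAPI
UefiMain (IN EFI_HANDLE ImageHandle, IN EFI_SYSTEM_TABLE *SystemTable)
{
  EFI_PHYSICAL_ADDRESS Addr = BAD_RAM_BASE;
  EFI_STATUS           Status;

  /* Pin the faulty region at its fixed address and mark it unusable, so
   * it appears in the UEFI memory map as EfiUnusableMemory and the OS
   * loader will not hand it to the kernel. */
  Status = gBS->AllocatePages (AllocateAddress, EfiUnusableMemory,
                               BAD_RAM_PAGES, &Addr);
  if (EFI_ERROR (Status)) {
    Print (L"AllocatePages failed: %r\n", Status);
    return Status;
  }

  /* A complete tool would now locate and start Apple's boot.efi via
   * LoadImage()/StartImage(); that part is omitted here. */
  return EFI_SUCCESS;
}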

recommended limit for memory management in Cocos2d?

Is there a recommended limit for images in Cocos2d, beyond which they are too big and take too much memory? Are there some rules, in dimensions or in KB, to avoid slowing the game down (for the background image, or the graphics of my characters, even if I use a batch node)?
Thanks for your answer
First of all, memory usage has very, very, very little to do with performance. You can fill up the entire memory with textures; the game won't care. It's when you render them that there will be a difference. And then it only matters how much of the screen area you're filling with textures, and how heavily they're overlaid, batched, rotated, scaled, shaded and alpha-blended. Those are the main factors in texture rendering performance. Memory usage plays a very insignificant role.
You may be interested in the cocos2d sprite-batch performance test I did and the general cocos2d performance analysis. Both come with test projects.
As for the maximum texture sizes have a look at the table from my Learn Cocos2D book:
Note that iPhone and iPhone 3G devices have a 24 MB texture memory limit. 3rd generation (iPhone 3GS) and newer devices don't have that limit anymore. Also keep in mind that while a device may have 256 MB of memory installed, significantly less memory will be available for use by apps.
For example, on the iPad (1st gen) it is recommended not to use more than 100 MB of memory, with a maximum available memory peaking at around 125 MB and memory warning starting to show as early as around 80-90 MB memory usage.
With iOS 5.1 Apple also increased the maximum texture size of the iPad 2. The safest and most commonly usable texture size is 2048x2048 for Retina textures, and 1024x1024 for standard resolution textures.
Not in the table are iPod touch devices because they're practically identical to the iPhone models of the same generation, but not as easily identifiable. For example the iPod touch 3rd generation includes devices with 8, 16 and 32GB of flash memory, but the 8GB model is actually 2nd generation hardware.
The dimensional size of images and textures depends on the device you are supporting. Older devices supported smaller textures, I think 2048x2048 in size. I don't think such a limitation exists on current devices.
For large images, you definitely want to use batch nodes as they have been tested to demonstrate the largest performance gain when dealing with large images. Though it is a good idea to use them for as much as possible in general.
As for how much you can load, it really depends on the device. The new iPad has 1 GB of memory and is designed to have much more available memory for large images. A first-gen iPad has 1/4 this amount of memory, and in my experience I start to see an app crash when it gets around 100 MB of memory used (as confirmed using Instruments).
The trick is to use only as much memory as you need for the current app's operation, then release it when you move to a new scene or new set of images/sprites/textures. You could for example have very large tiled textures where only the tiles nearest the viewport are loaded into memory. You could technically have an infinite sized view that stretches forever if you remove from memory those parts of the view that are not visible onscreen.
And of course when dealing with lots of resources, make sure your app delegate responds appropriately to its memory warnings.
As per my knowledge, a batch node of size 1024x1024 takes around 4 MB of space, which is texture memory only, and an application has a maximum limit of 24 MB. So the game slows down as you approach this 24 MB limit and crashes after that. To avoid slowness I used a maximum of 4 batch nodes at one time, i.e. 16 MB. The remaining 8 MB was left for variables and other data. Before using more batch nodes, I used to clean up memory and remove unused batch nodes. I don't know about the memory limit on the 4S, but in the case of the iPhone 4 this is what I learnt.
With this logic in mind, my game ran smoothly.
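For reference, the ~4 MB figure above is just the uncompressed texture size; here is a sketch of the arithmetic (assuming 4 bytes per pixel, i.e. RGBA8888):

#include <stdio.h>

/* Back-of-the-envelope texture memory math, assuming uncompressed
 * RGBA8888 textures (4 bytes per pixel). 16-bit or compressed (PVR)
 * formats use correspondingly less. */
static double texture_mb(unsigned w, unsigned h, unsigned bytes_per_pixel) {
    return (double)w * h * bytes_per_pixel / (1024.0 * 1024.0);
}

int main(void) {
    printf("1024x1024 RGBA8888 : %.1f MB\n", texture_mb(1024, 1024, 4));     /*  4.0 */
    printf("2048x2048 RGBA8888 : %.1f MB\n", texture_mb(2048, 2048, 4));     /* 16.0 */
    printf("4 x 1024x1024      : %.1f MB\n", 4 * texture_mb(1024, 1024, 4)); /* 16.0 */
    return 0;
}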