Is it possible to make a CPU work on a large persistent storage medium directly, without using RAM?

Is it possible to make a CPU work on a large persistent storage medium directly, without using RAM? I am fine if performance is low here.

You will need to specify the CPU architecture you are interested in. Most standard architectures (x86, Power, ARM) assume the existence of RAM on their data bus; I am afraid only a custom board for these processors would allow using something like an SSD instead of RAM.
Some numbers comparing RAM vs SSD latencies: https://gist.github.com/jboner/2841832
Also, RAM is there for a reason: it "smooths" CPU access to bigger, slower storage. Have a look at the memory-hierarchy image in this thread: https://www.reddit.com/r/hardware/comments/2bdnny/dram_vs_pcie_ssd_how_long_before_ram_is_obsolete/
As a side note, it is possible to access persistent storage without involving the CPU (although RAM is still needed); see https://www.techopedia.com/definition/2767/direct-memory-access-dma or https://en.wikipedia.org/wiki/Direct_memory_access
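The closest an ordinary program gets to "working on storage directly" is memory-mapping a file: the code addresses the file as if it were memory, although the OS still stages pages through RAM (the page cache). A minimal sketch using only Python's standard library:

```python
import mmap
import os

# Create a small file on persistent storage.
path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file into the process address space. Reads and writes go
# through the OS page cache, so RAM is still involved behind the scenes.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:5] = b"hello"      # write through the mapping
        first = bytes(mm[0:5])  # read back through the mapping

os.remove(path)
print(first)  # b'hello'
```

This illustrates the point in the answer: even when code "talks to the file directly", the hardware path still runs through RAM.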

Related

Is it possible to use system memory instead of GPU memory for processing Dask tasks

We have been running Dask clusters on Kubernetes for some time. Up to now we have been using CPUs for processing and, of course, system memory for storing our dataframe of around 1.5 TB (per Dask cluster, split across 960 workers). Now we want to update our algorithm to take advantage of GPUs. But it seems the available memory on GPUs will not be enough for our needs; it will be a limiting factor (with our current setup, we are using more than 1 GB of memory per virtual core).
I was wondering if it is possible to use GPUs (thinking of NVIDIA/AMD cards with PCIe connections and their own VRAM, not integrated GPUs that use system memory) for processing and system memory (not GPU memory/VRAM) for storing Dask dataframes. I mean, is it technically possible? Has anyone tried something like this? Can I schedule a Kubernetes pod such that it uses GPU cores and system memory together?
Another thing: even if it were possible to allocate system RAM as the GPU's VRAM, is there a limit on how much system RAM can be allocated that way?
Note 1. I know that using system RAM with the GPU (if it were possible) would create unnecessary traffic over the PCIe bus and would result in degraded performance, but I would still need to test this configuration with real data.
Note 2. GPUs are fast because they have many simple cores performing simple tasks at the same time, in parallel. If an individual GPU core is not superior to an individual CPU core, then maybe I am chasing the wrong dream? I am already running Dask workers on Kubernetes with access to hundreds of CPU cores. In the end, having a huge number of workers, each holding a small part of my data, won't mean better performance (increased shuffling); there is no use in infinitely increasing the number of cores.
Note 3. We are mostly manipulating python objects and doing math calculations using calls to .so libraries implemented in C++.
Edit 1: The Dask-CUDA library seems to support spilling from GPU memory to host memory, but spilling is not what I am after.
Edit 2: It is very frustrating that most of the components needed to utilize GPUs on Kubernetes are still experimental/beta:
Dask-CUDA: This library is experimental...
NVIDIA device plugin: The NVIDIA device plugin is still considered beta and...
Kubernetes: Kubernetes includes experimental support for managing AMD and NVIDIA GPUs...
I don't think this is possible directly as of today, but it's useful to mention why and reply to some of the points you've raised:
Yes, dask-cuda is what comes to mind first when I think of your use-case. The docs do say it's experimental, but from what I gather, the team has plans to continue to support and improve it. :)
Next, dask-cuda's spilling mechanism was designed that way for a reason: while doing GPU compute, your biggest bottleneck is data transfer (as you have also noted), so by design we want to keep as much data in GPU memory as possible.
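To see why that bottleneck dominates, here is a back-of-the-envelope estimate (the bandwidth figure is an assumption for illustration, not a measurement) for streaming the 1.5 TB dataframe from the question over the PCIe bus:

```python
# Rough illustration with assumed numbers: streaming a 1.5 TB dataframe
# over PCIe dwarfs most per-batch GPU compute times.
dataset_gb = 1.5 * 1024    # ~1536 GB of dataframe data
pcie_gb_per_s = 16         # assumed effective PCIe 3.0 x16 bandwidth

transfer_s = dataset_gb / pcie_gb_per_s
print(f"one full pass over the bus: ~{transfer_s:.0f} s")  # ~96 s
```

Every full pass over data held in host RAM would pay roughly that transfer cost again, which is exactly why dask-cuda prefers to keep working data resident in GPU memory and spill only under pressure.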
I'd encourage you to open a topic on Dask's Discourse forum, where we can reach out to some NVIDIA developers who can help confirm. :)
As a sidenote, there are ongoing discussions around improving how Dask manages GPU resources. That work is in its early stages, but we may see cool new features in the coming months!

Aerospike performance with flash disks

I have read that Aerospike provides better performance than other NoSQL key-value databases because it uses flash disks. But DRAM is faster than flash. So how can it perform better than Redis or Scalaris, which use only DRAM? Is it because of Aerospike's own system for accessing flash disks directly?
Thank you for your answers.
Aerospike gives you the flexibility to store all or part of your data (partitioned by "namespaces") in DRAM only; in Flash only, writing in blocks on the raw device without using a filesystem; in both DRAM and Flash simultaneously (not the best use of your spend); or in DRAM plus files on Flash or HDD (i.e. via a filesystem). DRAM with a filesystem on HDD gives you the performance of DRAM with the inexpensive persistence of spinning disks. Finally, for single-value integer or float data, there is an extremely fast option of storing the data in the Primary Index itself (the data-in-index option).
You can mix storage options in Aerospike, i.e. store one namespace's records purely in DRAM and another namespace's records purely in Flash, on the same horizontally scalable cluster of nodes. You can define up to 32 namespaces in Aerospike Enterprise Edition.
Flash / HDD .. etc options allow you to persist your data.
Pure DRAM storage in Aerospike will definitely give you better latency performance, relative to storing on persistent layer.
Storing on Flash in "blocks" (i.e. without using the filesystem) in Aerospike is the sweet spot that gives you "RAM-like" performance with persistence. Other NoSQL solutions that use pure DRAM storage don't give you persistence, and those that persist only via files through the filesystem will quite likely give you much higher latency.
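As a sketch of how those options look in practice, here is an illustrative aerospike.conf fragment defining one pure-DRAM namespace and one raw-device namespace (device paths and sizes are placeholders, not a recommendation):

```
# Illustrative fragment only -- paths and sizes are placeholders.
namespace in_memory {
    memory-size 4G
    storage-engine memory          # pure DRAM, fastest, no persistence
}

namespace on_flash {
    memory-size 8G                 # the index always lives in DRAM
    storage-engine device {
        device /dev/nvme0n1        # raw block device, no filesystem
        write-block-size 128K
        data-in-memory false       # records served from flash, not DRAM
    }
}
```

Both namespaces can coexist on the same cluster, which is the mixing of storage options described above.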

Spin Promela GPU

I am evaluating Spin with Promela for model checking, but processing time is an issue for me.
I have seen that I can use multiple cores to speed up the computation, but what about GPU/CUDA support? Can I do this at all?
regards
Adrian
GPU support is not included in Spin, but it is an active area of research. Most Spin problems that are slow enough to need a speed-up are also large enough to exceed the local memory on a GPU. As a result, CPU memory has to be used to store the explored state space, and the cost of moving data across the CPU <==> GPU memory bus swamps any computational speed increase. If, however, your state space is small, the GPU may be amenable to use; yet Spin does not include such support.
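A rough back-of-the-envelope (with assumed, purely illustrative numbers) shows how quickly an explored state space outgrows typical GPU memory:

```python
# Illustrative only: a modest model with 10^8 reachable states and
# ~100 bytes per stored state vector.
states = 10**8
bytes_per_state = 100

total_gb = states * bytes_per_state / 1024**3
print(f"~{total_gb:.1f} GB of state storage")  # ~9.3 GB
```

At roughly 9 GB for even this modest example, the state space already rivals or exceeds the memory of many GPUs, which is the bandwidth problem the answer describes.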

Expression Engine Apache and SSD

Recently I've been working on an ExpressionEngine project that has a performance problem. A test with 50 concurrent connections shows:
Extremely high (100%) CPU usage
Low RAM usage (2 gigs out of 8)
Low CPU/RAM usage on the database
And the web server has 4 CPUs. Now, if I turn on the cache, utilization is lower, but the content is such that dynamic caching had to be turned off. ExpressionEngine is built from templates that have to be read into memory and parsed. For those not familiar with it, ExpressionEngine is built on CodeIgniter.
My thinking is this: if Apache and the ExpressionEngine files were moved from HDD to SSD, template I/O would be much faster and Apache's CPU utilization would drop. Would this kind of performance improvement actually happen, or would an SSD make no difference?
An SSD will always be faster than spinny turny disks where disk I/O is concerned, but it doesn't sound like that's where your bottleneck is.
You're not using much RAM and, as you correctly stated, the templates have to be parsed. You have 4 CPUs, but they may be from 1998 (we don't know). If they are more recent, that should be more than enough for 50 concurrent connections, but you may be rendering the contents of the Library of Congress (again, we don't know).
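Before buying hardware, it is worth measuring whether you are actually I/O-bound or CPU-bound. A toy sketch (a regex substitution standing in for ExpressionEngine's real parser, which is much heavier) that times the disk read against the parse step:

```python
import os
import re
import time

# Toy stand-in for a template engine: compare disk-read time to parse time.
# If parsing dominates, a faster disk won't move the needle much.
template = "Hello {name}, you have {count} messages. " * 2000
with open("template.txt", "w") as f:
    f.write(template)

t0 = time.perf_counter()
with open("template.txt") as f:
    text = f.read()
t_read = time.perf_counter() - t0

values = {"name": "Ann", "count": "3"}
t0 = time.perf_counter()
rendered = re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], text)
t_parse = time.perf_counter() - t0

os.remove("template.txt")
print(f"read: {t_read * 1e3:.2f} ms, parse: {t_parse * 1e3:.2f} ms")
```

If the parse column dominates (and with a real template engine it usually will, since the OS caches hot files in RAM anyway), the SSD won't help much.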
You might get some benefit with tag caching or some of the other techniques mentioned in The Guide.
Also found this: http://eeinsider.com/articles/using-cache-wisely-with-expressionengine/

How does memory use affect battery life?

How does memory allocation affect battery usage? Does holding lots of data in variables consume more power than performing many iterations of basic calculations?
P.S. I'm working on a scientific app for mac, and want to optimize it for battery consumption.
The amount of data you hold in memory doesn't influence battery life, as the entire memory has to be refreshed all the time whether you store something there or not (the memory controller doesn't know whether a part is "unused", AFAIK).
By contrast, calculations do require power, especially if they wake the CPU from an idle or low-power state.
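Since RAM is refreshed either way, trading memory for computation usually favors the battery. A toy sketch (illustrative only; wall-clock time is only a rough proxy for CPU energy) comparing repeated recomputation against a precomputed lookup table:

```python
import math
import time

# Precompute once, then look up: holding the table in RAM is "free"
# power-wise, while recomputing keeps the CPU busy (and awake).
table = [math.sin(i / 1000.0) for i in range(1000)]

t0 = time.perf_counter()
recomputed = sum(math.sin(i / 1000.0) for i in range(1000) for _ in range(100))
t_compute = time.perf_counter() - t0

t0 = time.perf_counter()
looked_up = sum(table[i] for i in range(1000) for _ in range(100))
t_lookup = time.perf_counter() - t0

print(f"recompute: {t_compute * 1e3:.1f} ms, lookup: {t_lookup * 1e3:.1f} ms")
```

Both paths produce the same result, but the lookup path spends far fewer CPU cycles, which is the quantity that actually drains the battery.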
I believe RAM power consumption is identical regardless of whether it's full or empty. However, the more physical RAM you have in the machine, the more power it will consume.
On a Mac, you will want to avoid hitting the hard drive, so try not to read the disk very often, and definitely don't consume so much RAM that you start using virtual memory (or push other apps into virtual memory).
Most modern Macs will also partially power down the CPU(s) when they aren't very busy, so reducing CPU usage will actually reduce power consumption.
On the other hand, when your app uses more memory it pushes other apps' cached data out of memory, and reloading that data can have some battery cost if the user switches between apps, but I think that will be negligible.
It's best to minimize your application's memory footprint once it transitions to the background, simply to allow more applications to stay resident and not be terminated. Also, applications are terminated in descending order of memory size, so if your application is the largest one in the background, it will be killed first.