OpenCL defines a maximum work group size.
What is the reason for that? Is it a hardware limitation or just a software design decision?
Why can't I use an arbitrary work group size? In any case, the work group is divided into wavefronts. Each wavefront should have a number of work items equal to the number of processing elements (PEs) of a compute unit (CU). Thus a work group of size n will be executed in
ceil(n / wavefront_size)
wavefronts.
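To see the actual limits on a given device, you can query them at runtime. Here is a minimal sketch using pyopencl (assuming a working OpenCL driver and taking the first available device for illustration; the attribute names map to the CL_DEVICE_* queries):

import math
import pyopencl as cl

device = cl.get_platforms()[0].get_devices()[0]   # first device, for illustration
print(device.max_work_group_size)    # CL_DEVICE_MAX_WORK_GROUP_SIZE
print(device.max_work_item_sizes)    # per-dimension CL_DEVICE_MAX_WORK_ITEM_SIZES

n, wavefront_size = 100, 64          # hypothetical work group and wavefront sizes
print(math.ceil(n / wavefront_size)) # 2 wavefronts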
How can I calculate the global_allocation_limit parameter when I have SAP NetWeaver and SAP HANA DB installed on a server, and the current database size in RAM is 300 GB?
Many thanks
As you correctly mentioned, the global allocation limit is a parameter that can be set by the administrator. If the administrator has set it to an arbitrary value, there is no way for you to "calculate" it.
However, if your question is referring to the default value, the official documentation may be helpful:
The default value is 0 in which case the global allocation limit is
calculated as follows: 90% of the first 64 GB of available physical
memory on the host plus 97% of each further GB. Or, in the case of
small physical memory, physical memory minus 1 GB.
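As a worked example of that default, here is a small sketch in Python (the host size is hypothetical, and the exact cutover point between the two rules is an assumption, since the quoted documentation does not state it):

def default_global_allocation_limit(physical_gb):
    # small physical memory: physical memory minus 1 GB (cutover at 64 GB assumed)
    if physical_gb <= 64:
        return physical_gb - 1
    # 90% of the first 64 GB plus 97% of each further GB
    return 0.9 * 64 + 0.97 * (physical_gb - 64)

print(default_global_allocation_limit(512))  # 492.16 GB on a 512 GB host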
I have a query using too many containers and too much memory (97% of the memory is used).
Is there a way to set the number of containers used by the query and to limit the maximum memory?
The query is running on Tez.
Thanks in advance
Controlling the number of Mappers:
The number of mappers depends on various factors, such as how the data is distributed among nodes, the input format, the execution engine, and configuration parameters. See also How initial task parallelism works.
MR uses CombineInputFormat, while Tez uses grouped splits.
Tez:
set tez.grouping.min-size=16777216; -- 16 MB min split
set tez.grouping.max-size=1073741824; -- 1 GB max split
Increase these figures to reduce the number of mappers running.
Also, mappers run on the data nodes where the data is located, which is why manually controlling the number of mappers is not an easy task; it is not always possible to combine the input.
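As a rough back-of-the-envelope check (not the real grouping algorithm, which also considers data locality and wave count), the grouping sizes above bound the number of mappers roughly like this; the input size is hypothetical:

import math

input_bytes = 50 * 1024**3    # hypothetical 50 GB of input data
max_split = 1073741824        # tez.grouping.max-size (1 GB)
min_split = 16777216          # tez.grouping.min-size (16 MB)

# splits can be no larger than max_split, so at least this many mappers:
print(math.ceil(input_bytes / max_split))   # 50
# and no smaller than min_split, so at most this many:
print(math.ceil(input_bytes / min_split))   # 3200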
Controlling the number of Reducers:
The number of reducers is determined according to
mapreduce.job.reduces
The default number of reduce tasks per job. Typically set to a prime close to the number of available hosts. Ignored when mapred.job.tracker is "local". Hadoop sets this to 1 by default, whereas Hive uses -1 as its default value. By setting this property to -1, Hive will automatically figure out the number of reducers.
hive.exec.reducers.bytes.per.reducer - The default in Hive 0.14.0 and earlier is 1 GB.
Also hive.exec.reducers.max - Maximum number of reducers that will be used. If mapreduce.job.reduces is negative, Hive will use this as the maximum number of reducers when automatically determining the number of reducers.
Simply set hive.exec.reducers.max=<number> to limit the number of reducers running.
If you want to increase reducer parallelism, increase hive.exec.reducers.max and decrease hive.exec.reducers.bytes.per.reducer.
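Put together, when mapreduce.job.reduces is -1, Hive's estimate behaves roughly like the following sketch (the input size is hypothetical and the real logic has more special cases):

import math

input_bytes = 100 * 1024**3       # hypothetical data volume feeding the reducers
bytes_per_reducer = 1 * 1024**3   # hive.exec.reducers.bytes.per.reducer (1 GB)
reducers_max = 1009               # hive.exec.reducers.max

reducers = min(reducers_max, math.ceil(input_bytes / bytes_per_reducer))
print(reducers)  # 100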
Memory settings
set tez.am.resource.memory.mb=8192;
set tez.am.java.opts=-Xmx6144m;
set tez.reduce.memory.mb=6144;
set hive.tez.container.size=9216;
set hive.tez.java.opts=-Xmx6144m;
The default settings mean that the actual Tez task will use the mapper's memory settings:
hive.tez.container.size = mapreduce.map.memory.mb
hive.tez.java.opts = mapreduce.map.java.opts
Read this for more details: Demystify Apache Tez Memory Tuning - Step by Step
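One relationship worth keeping consistent is the Java heap (-Xmx) versus the container size: the heap must leave headroom for non-heap memory. A commonly cited rule of thumb is roughly 80% of the container (the settings above are even more conservative); a sketch:

def heap_opts(container_mb, heap_fraction=0.8):
    # leave headroom in the container for non-heap memory (metaspace, buffers)
    return "-Xmx%dm" % int(container_mb * heap_fraction)

# e.g. for hive.tez.container.size=9216:
print(heap_opts(9216))  # -Xmx7372m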
I would suggest optimizing the query first. Use map joins if possible, use vectorized execution, add distribute by partition key if you are writing to a partitioned table to reduce memory consumption on reducers, and of course write good SQL.
What is the meaning of the statistics.query.totalSlotMs value returned for a completed BigQuery job? Apart from giving an indication of the relative cost of one job versus another, it's not clear how else one should interpret the number. For example, how does the slot-milliseconds number relate to the Stackdriver-reported total slot usage for a given project (which needs to stay below 2000 for on-demand BigQuery usage)?
The docs are a bit terse ('[Output-only] Slot-milliseconds for the job.')
The idea is to have a 'slots' metric in the same units in which reserved slots are sold to customers.
For example, imagine that you have a 20-second query that is continuously consuming 4 slots. In that case, your query is using 80,000 totalSlotMs (4 * 20,000).
This way you can determine the average number of slots, even if the peak number of slots differs, since in practice the number of workers will fluctuate over the runtime of a query.
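In other words, dividing totalSlotMs by the job's elapsed time gives the average slot usage. A quick sketch mirroring the example above (the elapsed time would come from the job's startTime/endTime statistics):

total_slot_ms = 80000   # statistics.query.totalSlotMs
elapsed_ms = 20000      # job runtime: endTime - startTime

print(total_slot_ms / elapsed_ms)   # 4.0 slots on average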
We've been trying to figure out why we only achieve a write speed of ~53 MBps on UHS104 cards that claim 90 MBps.
Due to hardware constraints, the clock frequency supplied to the card is only 148.5 MHz (instead of 208 MHz).
Does that mean we should achieve a speed of (148.5 * 4) / 8 = 74.25 MBps?
Or is our calculation wrong, since it assumes that if the card guarantees a speed of 90 MBps at a frequency of 208 MHz, then it should guarantee a speed of 74.25 MBps at a frequency of 148.5 MHz?
The simplified physical layer spec states that for maximum performance you need to write full AU blocks, usually 2 or 4 MByte; otherwise the card will have to copy data around internally when writing across block boundaries. Unfortunately, most of the Speed Class specification is missing from chapter 4.13.
The first AUs may have a different wear-leveling strategy, as they are normally used for the FATs. This could make them slower to write to.
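For reference, the 74.25 MBps figure in the question is just the raw bus ceiling for a 4-bit UHS-I bus clocked at 148.5 MHz; protocol overhead and card internals (AU boundaries, wear leveling) push the achievable write speed below it. A quick sanity check:

clock_mhz = 148.5   # clock supplied to the card
bus_bits = 4        # UHS-I data bus width, one bit per line per clock

print(clock_mhz * bus_bits / 8)   # 74.25 (MB/s, theoretical bus maximum)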
Does anyone know what the maximum value size you can store in Redis is? I want to use Redis as a message queue with Celery to store some small documents that need to be processed by a worker on another server, and I want to make sure the documents aren't going to be too big.
I found one page with a reference to 1 GB, but when I followed the link on the page to where they got that answer, the link was no longer valid. Here is the link:
http://news.ycombinator.com/item?id=1182005
All string values are limited to 512 MiB. This is the size limit you probably care most about.
EDIT: Because keys in Redis are strings, the maximum key size is 512 MiB. The maximum number of keys is 2^32 - 1 = 4,294,967,295.
Values, on the other hand, can vary in size depending on their type. For aggregate data types (i.e. hash, list, set, and sorted set), the maximum value size is 512 MiB for each element, although the data structure itself can have up to 2^32 - 1 elements.
https://redis.io/topics/data-types
https://redis.io/topics/faq#what-is-the-maximum-number-of-keys-a-single-redis-instance-can-hold-and-what-is-the-max-number-of-elements-in-a-hash-list-set-sorted-set
http://groups.google.com/group/redis-db/browse_thread/thread/1c7e33fbc98734b3?fwc=2
The article about Redis Memory Usage can help you roughly determine how much memory your database would take.
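If you want to guard against oversized documents before they hit the queue, here is a minimal sketch using redis-py (connection details and key names are hypothetical):

import redis

REDIS_MAX_STRING = 512 * 1024 * 1024   # 512 MiB string limit

r = redis.Redis(host="localhost", port=6379)   # hypothetical connection

def safe_set(key, value):
    # refuse values that would exceed Redis' 512 MiB string limit
    if len(value) > REDIS_MAX_STRING:
        raise ValueError("value exceeds Redis' 512 MiB string limit")
    r.set(key, value)

safe_set("doc:1", b"small document payload")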
It's on the order of the amount of RAM you have, at least, so unless you plan on putting multi-gigabyte objects in there I wouldn't worry. I've had sets that were hundreds of megabytes big without a problem, but I don't know the exact limits.
A string value can hold at most 512 MB. But according to this link, the size can be increased.