Calculate HANA global allocation limit

How can I calculate the global_allocation_limit parameter when SAP NetWeaver and the SAP HANA DB are installed on the same server and the current database size in RAM is 300 GB?
Many thanks

As you correctly mentioned, the Global Allocation Limit is a parameter that can be set by the administrator. If the administrator has set it to an arbitrary value, there is no way for you to "calculate" it.
However, if your question is referring to the default value, the official documentation may be helpful:
The default value is 0, in which case the global allocation limit is calculated as follows: 90% of the first 64 GB of available physical memory on the host plus 97% of each further GB. Or, in the case of small physical memory, physical memory minus 1 GB.
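To make the rule concrete, here is a minimal sketch in plain Python (not an official SAP tool; the 512 GB host size is an assumed example) that applies the documented default formula:

def default_global_allocation_limit_gb(physical_memory_gb: float) -> float:
    # Documented default: 90% of the first 64 GB of physical memory
    # plus 97% of each further GB. The quoted text also mentions a
    # "physical memory minus 1 GB" rule for small hosts, but does not
    # define the exact cutoff, so that case is not modelled here.
    first_64 = min(physical_memory_gb, 64) * 0.90
    remainder = max(physical_memory_gb - 64, 0) * 0.97
    return first_64 + remainder

# Worked example for a host with 512 GB of physical RAM (assumed value):
# 0.90 * 64 + 0.97 * (512 - 64) = 57.6 + 434.56 = 492.16 GB
print(default_global_allocation_limit_gb(512))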

Related

Is there a way to get the allocated storage data percentage metric from Azure SQL database resource for creating an alert?

I'd like to create an alert that will monitor the allocated data storage for an SQL database in Azure, so that I know when it is about to reach its allocated data storage capacity. Ideally, something like storage_percent would be perfect, since it reports a percentage rather than bytes. But I want to track the allocated data storage.
Here is a list of metrics that can be monitored by an alert: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/metrics-supported#microsoftsqlserversdatabases
There is no metric that tracks a percentage, only bytes (allocated_data_storage is reported in bytes).
My workaround at the moment is to retrieve the allocated data storage in bytes and then multiply that value by the threshold I'd like to be alerted about.
e.g.
threshold to trigger alert is 75%
allocated_data_storage is 4 GB
alert me when database storage is greater than 4 GB * 0.75 = 3 GB
But this doesn't seem reliable, since a database is prone to being scaled up or down in size. If the allocated data storage is later increased to 10 GB, my alert will still fire above 3 GB, which is now well below 75% of the allocated data storage.
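A quick sketch (plain Python, illustrative numbers only) shows the drift: a byte threshold computed once no longer corresponds to 75% after the allocation changes.

ALERT_PERCENT = 75
GB = 1024 ** 3

def byte_threshold(allocated_bytes: float) -> float:
    # Static threshold computed once from the current allocation
    return allocated_bytes * ALERT_PERCENT / 100

threshold = byte_threshold(4 * GB)   # 3 GB while 4 GB is allocated

# After the database is scaled up to 10 GB, the stale threshold
# corresponds to only 30% of the new allocation, not the intended 75%.
print(threshold / (10 * GB) * 100)   # 30.0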
You can see the used space, allocated space, and maximum storage size in the Azure portal:
Or you can use the query below in the database:
-- Connect to the database
-- Get the database data space allocated in MB, the max database storage in MB, and the database data space used in MB
SELECT SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB,
       SUM(max_size/128.0) AS DatabaseDataSpaceMaxInMB,
       SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS DatabaseDataSpaceUsedInMB
FROM sys.database_files
GROUP BY type_desc
HAVING type_desc = 'ROWS'
You could build a new query on top of these values to compute the alert value for the alert rule, such as the percentage of used space relative to allocated space.
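For example, a minimal sketch (plain Python; the two values are made-up stand-ins for the query's result columns) of how the alert value could be derived:

# Hypothetical values returned by the query above (in MB)
database_data_space_allocated_mb = 4096.0   # DatabaseDataSpaceAllocatedInMB
database_data_space_used_mb = 3200.0        # DatabaseDataSpaceUsedInMB

# Alert value: used space as a percentage of allocated space
used_percent = database_data_space_used_mb / database_data_space_allocated_mb * 100
print(round(used_percent, 1))               # 78.1

ALERT_THRESHOLD_PERCENT = 75
print(used_percent > ALERT_THRESHOLD_PERCENT)   # True -> trigger the alert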
Since Azure SQL Database doesn't have a built-in send-email feature, we can use a Logic App to send the alert:
Create a Recurrence trigger: schedule the trigger to run periodically.
Add an Execute a SQL Query action: to get the alert value.
Add a Condition: check whether the alert value is greater than 75; if true, send the email.
Logic app example overview:

How to calculate redis memory used percentage on ElastiCache

I want to monitor my Redis cache cluster on ElastiCache. From AWS/ElastiCache I am able to get metrics like FreeableMemory and BytesUsedForCache. If I am not wrong, BytesUsedForCache is the memory used by the cluster (assuming there is only one node in the cluster). I want to calculate the percentage of memory used. Can anyone help me get the percentage of memory used in Redis?
We had the same issue since we wanted to monitor the percentage of ElastiCache Redis memory that is consumed by our data.
As you wrote correctly, you need to look at BytesUsedForCache - that is the amount of memory (in bytes) consumed by the data you've stored in Redis.
The other two important numbers are
The available RAM of the AWS instance type you use for your ElastiCache node, see https://aws.amazon.com/elasticache/pricing/
Your value for parameter reserved-memory-percent (check your ElastiCache parameter group). That's the percentage of RAM that is reserved for "nondata purposes", i.e. for the OS and whatever AWS needs to run there to manage your ElastiCache node. By default this is 25 %. See https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-memory-management.html#redis-memory-management-parameters
So the total available memory for your data in ElastiCache is
(100 - reserved-memory-percent) / 100 * instance-RAM-size
(In our case, we use instance type cache.r5.2xlarge with 52.82 GB RAM, and we have the default setting of reserved-memory-percent = 25%.
Checking with the INFO command in Redis, I see that maxmemory_human = 39.61 GB, which is equal to 75% of 52.82 GB.)
So the ratio of used memory to available memory is
BytesUsedForCache / ((100 - reserved-memory-percent) / 100 * instance-RAM-size)
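As a concrete illustration, here is a minimal sketch (plain Python; the BytesUsedForCache sample value is made up) that plugs the example numbers above into this formula:

GB = 1024 ** 3

# Values from the example above: a cache.r5.2xlarge node with
# 52.82 GB of RAM and the default reserved-memory-percent of 25
instance_ram_gb = 52.82
reserved_memory_percent = 25

available_for_data_gb = (100 - reserved_memory_percent) / 100 * instance_ram_gb
print(f"{available_for_data_gb:.3f}")   # 39.615, close to maxmemory_human = 39.61 GB

# BytesUsedForCache as reported by CloudWatch (made-up sample value)
bytes_used_for_cache = 12 * GB
used_percent = bytes_used_for_cache / (available_for_data_gb * GB) * 100
print(f"{used_percent:.1f}")            # about 30.3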
By combining the FreeableMemory and BytesUsedForCache metrics, you can derive the memory available to Redis for ElastiCache non-cluster mode (not sure whether this applies to cluster mode too).
Here is the NRQL we're using to monitor the cache:
SELECT Max(`provider.bytesUsedForCache.Sum`) / (Max(`provider.bytesUsedForCache.Sum`) + Min(`provider.freeableMemory.Sum`)) * 100 FROM DatastoreSample WHERE provider = 'ElastiCacheRedisNode'
This is based on the following:
FreeableMemory: The amount of free memory available on the host. This is derived from the RAM, buffers, and cache that the OS reports as freeable. (AWS CacheMetrics, host-level)
BytesUsedForCache: The total number of bytes allocated by Redis for all purposes, including the dataset, buffers, etc. This is derived from the used_memory statistic in Redis INFO. (AWS CacheMetrics, Redis)
So BytesUsedForCache (the memory used by Redis) + FreeableMemory (the memory Redis can still use) = the total memory that Redis can use.
With the release of the 18 additional CloudWatch metrics, you can now use DatabaseMemoryUsagePercentage to see the percentage of memory utilization in Redis.
View more about the metric in the memory section here
You would have to calculate this based on the size of the node you have selected. See these 2 posts for more information.
Pricing doc gives you the size of your setup.
https://aws.amazon.com/elasticache/pricing/
https://forums.aws.amazon.com/thread.jspa?threadID=141154

Neo4j Configuration for 4M Nodes 10M relationship

I am new to Neo4j and have run a few graph queries against 4M nodes and 10M relationships. So far I have been completely surprised by the performance of my queries.
SCHEMA
.......
(a:user{data:1})-[:follow]->(:user)-[:next*1..10]-(:activity)
Here the user with data:1 follows another 100,000 users. Each of those 100,000 users has 2-8 next nodes attached (let's say the users' activities). Now I want to fetch the activities of users up to next level 3 [:next*1..3]. Each activity has a numeric relevance property.
So now I have 100,000 * 3 nodes to traverse.
CYPHER
.......
match (u:user{data:1})-[:follow]-(:user)-[:next*1..3]-(a:activity)
return a order by a.relevance desc limit 50
This query takes about 72,000 ms almost every time. Since I am new to Neo4j, I am sure that I haven't tuned the OS.
I am using the following parameters:
Initial Java Heap Size (in MB)
wrapper.java.initmemory=2000
Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=2456
Default values for the low-level graph engine
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=50M
neostore.propertystore.db.mapped_memory=90M
neostore.propertystore.db.strings.mapped_memory=130M
neostore.propertystore.db.arrays.mapped_memory=130M
Please tell me where I am going wrong. I read all the documentation on the Neo4j website, but the query time didn't improve.
Please tell me how I can configure a high-performing cache. What should I do so that the whole graph loads into memory? When I look at my RAM usage, it is always around 1.8 GB out of 4 GB. I am using an enterprise license on Windows (Neo4j 2.0). Please help.
You are actually following not 100k * 3 paths but 100k * (2-10)^10, i.e. on the order of 10^15 paths.
More memory in your machine would make a lot of sense, so try to get 8 or more GB.
Then you can increase the heap, e.g. to 6GB:
wrapper.java.initmemory=6000
wrapper.java.maxmemory=6000
neo4j.properties
neostore.nodestore.db.mapped_memory=100M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=200M
neostore.propertystore.db.strings.mapped_memory=200M
neostore.propertystore.db.arrays.mapped_memory=10M
If you want to pull your data through, you would most probably want to invert your query.
match (a:activity), (u:user {data:1})
with a, u
order by a.relevance desc
limit 100
match (followed:user)-[:next*1..3]-(a)
where (followed)-[:follow]-(u)
return a
order by a.relevance desc
limit 50

Should I force MS SQL to consume x amount of memory?

I'm trying to get better performance out of our MS SQL database. One thing I noticed is that the instance is taking up about 20 GB of RAM, and the database in question is taking 19 GB of that 20. Why isn't the instance consuming most of the 32 GB on the box? Also, the size of the DB is a lot larger than 32 GB, so it being smaller than the available RAM is not the issue. I was thinking of setting the min server memory to 28 GB or something along those lines; any thoughts? I didn't find anything on the interwebs that threw up red flags about this idea. This is on a VM (VMware). I verified that the host is not overcommitting memory. Also, I do not have access to the host.
This is the query I ran to find out what each database was consuming
SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) * 8 / 1024 AS MBUsed
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY COUNT(*) * 8 / 1024 DESC
If data is sitting on disk, but hasn't been requested by a query since the service has started, then there would be no reason for SQL Server to put those rows into the buffer cache, thus the size on disk would be larger than the size in memory.

What is the maximum value size you can store in redis?

Does anyone know what the maximum value size you can store in redis? I want to use redis as a message queue with celery to store some small documents that need to be processed by a worker on another server, and I want to make sure the documents aren't going to be too big.
I found one page that referenced 1 GB, but when I followed the link the page cited for that answer, the link was no longer valid. Here is the page:
http://news.ycombinator.com/item?id=1182005
All string values are limited to 512 MiB. This is the size limit you probably care most about.
EDIT: Because keys in Redis are strings, the maximum key size is 512 MiB. The maximum number of keys is 2^32 - 1 = 4,294,967,295.
Values, on the other hand, can vary in size depending on their type. For aggregate data types (i.e. hash, list, set, and sorted set), the maximum value size is 512 MiB for each element, although the data structure itself can have up to 2^32 - 1 elements.
https://redis.io/topics/data-types
https://redis.io/topics/faq#what-is-the-maximum-number-of-keys-a-single-redis-instance-can-hold-and-what-is-the-max-number-of-elements-in-a-hash-list-set-sorted-set
http://groups.google.com/group/redis-db/browse_thread/thread/1c7e33fbc98734b3?fwc=2
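Given the question's use case (small documents queued through Celery), a quick sanity check like the sketch below (plain Python using json; the function name and serialization choice are just illustrative) can confirm a payload stays well under the 512 MiB string limit discussed above:

import json

# Redis string values are capped at 512 MiB, so a single serialized
# document (or task payload) must stay under that limit.
REDIS_MAX_STRING_BYTES = 512 * 1024 * 1024

def fits_in_redis(document: dict) -> bool:
    # Rough pre-check before enqueueing a document as a task payload.
    # Plain JSON is assumed here; adjust if your serializer differs.
    payload = json.dumps(document).encode("utf-8")
    return len(payload) < REDIS_MAX_STRING_BYTES

doc = {"id": 42, "body": "some small document text"}
print(fits_in_redis(doc))   # True for small documents like this one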
This article about Redis memory usage can help you roughly determine how much memory your database would take.
It's on the order of the amount of RAM you have, at least, so unless you plan on putting multi-gigabyte objects in there I wouldn't worry. I've had sets that were hundreds of megabytes big without a problem, but I don't know the exact limits.
A string value can accommodate at most 512 MB. But according to this link, the size can be increased.