Redis - Multi Datacenter Replication - Latency impact

I'm looking for information on the implications latency has on Redis replication and Sentinel in a cross-data-center setup for geo-HA.
Let's assume the following server/DC latencies:
A <> B: 5 ms
A <> C: 25 ms
B <> C: 25 ms
The database is used with real-time messaging brokers, so we cannot tolerate long latencies on reads and writes.
What impact do network latencies (5 ms, 25 ms) have on read and write operations within a replication setup?
How does Sentinel handle such latencies?
What would the effect be if C only runs a Sentinel instance in the setup above?

Related

Azure SQL sessions blocked in status PAGEIOLATCH_SH

I've got a Spring application with Hibernate that sporadically stops working because all 30 connections of its connection pool are blocked. While these connections were blocked, I executed a bunch of queries to find the cause.
Each connection executed the same join statement.
The execution plan of that join is shown in a screenshot (not reproduced here).
This query
SELECT *
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
returns for each connection something like:
status = suspended
command = SELECT
blocking_session_id = 0
wait_type = last_wait_type = PAGEIOLATCH_SH
wait_time = 3
cpu_time = 12909
total_elapsed_time = 3723943
logical_reads = 7986970
The 3 indexes involved have sizes of about 16 GB (the last one in the execution plan screenshot mentioned above), 1 GB and 500 MB, respectively.
This is happening in a SQL database in an elastic pool with 24 vCores (Gen5) and a max data size of 2418 GB.
The resource monitor of that elastic pool looked reasonable around the time in question (screenshot not reproduced here).
Anything else I could check? Any ideas what could be the reason for this?
When inserting data on an increasing key (like an identity column), many parallel threads go after the last data page of that table: if a good number of connections is trying to ingest data into the table, they are all trying to add rows to that same data page.
If you put that table in memory (In-Memory OLTP) you may not have this contention problem, but in this scenario the table is not in-memory, so the contention manifests itself as PAGELATCH_EX waits.
Another option is to make the clustered index key a GUID. That will distribute the inserts across all the data pages, but it may cause page splits.
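As a hedged sketch of that option (the table and column names below are hypothetical, not from the original post):

-- Hypothetical table using a GUID clustered key so that concurrent inserts land on
-- different data pages instead of all contending for the last page.
CREATE TABLE dbo.Orders
(
    OrderId uniqueidentifier NOT NULL CONSTRAINT DF_Orders_OrderId DEFAULT NEWID(),
    CreatedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload nvarchar(400) NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);
-- Trade-off: random GUIDs spread inserts across pages but cause page splits and
-- fragmentation, as noted above.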

postgres rds slow response time

We have an AWS RDS Postgres DB of type t3.micro.
I am running simple queries on a pretty small table and I get pretty high response times - around 2 seconds per query run.
Query example:
select * from package where id='late-night';
The CPU usage is not high (around 5%).
We tried creating a bigger RDS DB (t3.medium) from a snapshot of the original one and the performance did not improve at all.
Table size: 2,600 rows
We tested the connection with both the external IP and the internal IP.
Disk size: 20 GiB
Storage type: SSD
Is there a way to improve performance?
Thanks for the help!
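A diagnostic sketch that may help narrow this down (the table and column come from the query above; the index name is hypothetical, and whether an index on id already exists is an assumption):

-- See whether the time is spent in planning, execution, or I/O, and whether
-- the planner uses a sequential scan or an index scan.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM package WHERE id = 'late-night';

-- If the plan shows a sequential scan and id is filtered on frequently,
-- an index may help (hypothetical index name):
CREATE INDEX IF NOT EXISTS idx_package_id ON package (id);

If the EXPLAIN timing itself is much lower than 2 seconds, the delay is more likely spent outside the database (network round trips or connection setup) rather than in query execution.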

ERROR : FAILED: Error in acquiring locks: Error communicating with the metastore org.apache.hadoop.hive.ql.lockmgr.LockException

Getting "Error in acquiring locks" when trying to run count(*) on partitioned tables.
The table has 365 partitions; when the query is filtered to <= 350 partitions, it works fine.
When more partitions are included in the query, it fails with the error above.
We are working on Hive-managed ACID tables, with the following default values:
hive.support.concurrency=true // cannot be set to false; setting it to false throws "<table> is missing from the ValidWriteIdList config: null"; it should be true for ACID reads and writes.
hive.lock.manager=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.txn.strict.locking.mode=false
hive.exec.dynamic.partition.mode=nonstrict
Tried increasing/decreasing the values of the following settings in a Beeline session.
hive.lock.numretries
hive.unlock.numretries
hive.lock.sleep.between.retries
hive.metastore.batch.retrieve.max={default 300} //changed to 10000
hive.metastore.server.max.message.size={default 104857600} // changed to 10485760000
hive.metastore.limit.partition.request={default -1} //did not change as -1 is unlimited
hive.lock.query.string.max.length={default 10000} //changed to higher value
We are using an HDI 4.0 Interactive Query (LLAP) cluster; the metastore is backed by the default SQL Server database provided with it.
The problem is NOT due to the service tier of the Hive metastore database.
Based on the symptom, it is most probably due to too many partitions in one query.
I have met the same issue several times.
In hivemetastore.log, you should be able to see an error like this:
metastore.RetryingHMSHandler: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578)
This is because, in the Hive metastore, each partition involved in the Hive query requires up to 8 parameters to acquire a lock.
Some possible workarounds:
Decompose the query into multiple sub-queries that each read from fewer partitions (see the sketch after this list).
Reduce the number of partitions by setting different partition keys.
Remove partitioning if partition keys don't have any filters.
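For the first workaround above, a decomposition along these lines might work (a sketch only; the table name and the partition column part_date with its cut-off date are hypothetical, not from the original post):

-- Run as two separate queries so that each metastore lock request covers
-- fewer partitions, then add the two counts together on the client side.
SELECT COUNT(*) FROM my_acid_table WHERE part_date <= '2021-06-30';
SELECT COUNT(*) FROM my_acid_table WHERE part_date >  '2021-06-30';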
The following parameters manage the batch size for the INSERT queries generated by direct SQL. Their default value is 1000. Set both of them to 100 (as a good starting point) in the Custom hive-site section of the Hive configs via Ambari and restart ALL Hive-related components (including the Hive metastore).
hive.direct.sql.max.elements.values.clause=100
hive.direct.sql.max.elements.in.clause=100
We also faced the same error in HDInsight and, after making many configuration changes similar to what you have done, the only thing that worked was scaling up our Hive metastore SQL DB server.
We had to scale it all the way up to a P2 tier with 250 DTUs for our workloads to run without these lock exceptions. As you may know, as the tier and DTU count increase, the SQL server's IOPS and response time improve, so we suspected that metastore performance was the root cause of these lock exceptions as the workload increased.
The following link provides information about the DTU-based performance variation of SQL servers in Azure:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu
Additionally, as far as I know, the default Hive metastore that gets provisioned when you opt not to provide an external DB at cluster creation is just an S1 tier DB. This would not be suitable for any high-capacity workloads. At the same time, as a best practice, always provision your metastore external to the cluster and attach it at cluster provisioning time. This gives you the flexibility to connect the same metastore to multiple clusters (so that your Hive layer schema can be shared across clusters, e.g. Hadoop for ETL and Spark for processing / machine learning), and it gives you full control to scale your metastore up or down as needed at any time.
The only way to scale the default metastore is by engaging Microsoft support.
We faced the same issue in HDInsight and solved it by upgrading the metastore.
The default metastore had only 5 DTUs, which is not recommended for production environments, so we migrated to a custom metastore, spun up an Azure SQL Server (P2, above 250 DTUs) and set the properties below:
hive.direct.sql.max.elements.values.clause=200
hive.direct.sql.max.elements.in.clause=200
The above values are set because SQL Server cannot process more than 2,100 parameters. When you have more than 348 partitions, you hit this issue, as 1 partition creates up to 8 parameters for the metastore: 8 x 348 = 2,784, which exceeds the 2,100-parameter limit.

Migrating from Azure SQL Database to Data Warehouse

I currently have an Azure SQL database where the data is all in a star schema (fact/dim tables with columnstore indexes) and is used exclusively for a reporting app. We currently use a Premium database instance with 250 DTUs; it is about 150 GB in size, but growing all the time.
For a similar price I could create a SQL Data Warehouse instance with 100 DWUs. My concern is that, since it is only 100 DWUs vs 250 DTUs, I would actually see a performance reduction.
I know that DWUs and DTUs are not directly comparable, but can anyone tell me if I am likely to see a performance boost or reduction in these circumstances?
For what it's worth, 1 DWU = 7.5 DTU with respect to server capacity as explained here.
When you look at the server instance that you provision a DW instance on:
A 100 DWU instance consumes 750 DTUs of server capacity. This means you receive 500 DTUs more than the 250 DTUs associated with the Azure SQL Database Premium tier you currently have.
A 400 DWU instance consumes 3,000 DTUs of server capacity.
Take into consideration that you get lower concurrency with Azure SQL Data Warehouse.

Is Hazelcast async write transitive?

I am doing some simple benchmarking with Hazelcast to see if it might fit our needs for a distributed data grid. The idea is to have an odd number of servers (e.g. 5) with '> n/2' replication (e.g. 3).
With all servers and the client running on my local machine (no network latency) I get the following results:
5 x H/C server (sync backup = 2, async backup = 0); 100 Client Threads : 35,196 puts/second
5 x H/C server (sync backup = 1, async backup = 1); 100 Client Threads : 41,918 puts/second
5 x H/C server (sync backup = 0, async backup = 2); 100 Client Threads : 52,007 puts/second
As expected, async backups allow higher throughput than sync backups. For our use case we would probably opt for the middle option (1x sync and 1x async), as this gives us an acceptable balance between resilience and performance.
My question is: If Hazelcast is configured with 1x sync and 1x async, and the node crashes after the sync backup is performed (server returns 'OK' to client and client thread carries on) but before the async backup is performed (so the data is only on one replica and not the second), will the node that received the sync backup pick up the task of the async backup, or will it just wait until the entire cluster re-balances and the 'missing' data from the crashed node is re-distributed from copies? And if the latter, once the cluster re-balances will there be a total of 3 copies of the data, as there would have been if the node hadn't crashed, or will there only be 2 copies because the sync'd node assumes that another node already received its copy?
The partition owner is responsible for creating all backups.
In other words: the 1st backup does NOT create a new backup request for the 2nd backup - it is all the responsibility of the owner.
If a member holding a backup replica is stale, then the anti-entropy mechanism kicks in and the backup partition will be updated to match the owner.
When a member goes down, the 1st (= sync) backup is eventually promoted to be the new partition owner. It is the responsibility of the new owner to make sure the configured redundancy is honoured: a new backup will be created to make sure there are 2 backups, as configured.