Initially, when I used one JedisPool to communicate with a Redis server (a single Redis instance running on the server), I saw that my 200K queries took 15 seconds to complete. For each query I do the following:
Jedis jedis = pool.getResource();
Pipeline p = jedis.pipelined();
p.multi();
for (String ViewId : ViewIds) {
    for (Article aR : view.getEntriesList()) {
        p.rpush(ViewId, <article-in-json>);
    }
    p.ltrim(ViewId, -10, -1);
}
p.exec();
p.sync();
pool.returnResource(jedis);
JedisPool created using:
JedisPool pool = new JedisPool(new JedisPoolConfig(), getHostName(), getPort(), 0, "foobared");
However, when I used 8 JedisPools to the same server (same Redis instance) and rotated between them round-robin, the 200K queries were processed in around 8-10 seconds (versus 15 seconds with 1 JedisPool), which is a considerable improvement.
What is the reason for this, and what is the optimal number of JedisPools we should use?
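For context, the round-robin rotation over the 8 pools described above is just a counter modulo the number of pools; below is a minimal sketch of such a wrapper (the class and method names are illustrative, not from the original code), reusing the same JedisPool constructor arguments:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Illustrative wrapper: N independent pools to the same Redis instance, handed out round-robin.
public class RoundRobinJedisPools {
    private final List<JedisPool> pools = new ArrayList<>();
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinJedisPools(int n, String host, int port, String password) {
        for (int i = 0; i < n; i++) {
            pools.add(new JedisPool(new JedisPoolConfig(), host, port, 0, password));
        }
    }

    // Pick the next pool in rotation and borrow a connection from it.
    public Jedis getResource() {
        int i = Math.floorMod(counter.getAndIncrement(), pools.size());
        return pools.get(i).getResource();
    }
}

Note that a single pool with a larger maxTotal on its JedisPoolConfig may give a similar effect, so it is worth benchmarking both before settling on a pool count.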
We have an application that moves CSV files from HDFS to Hive. We are using a Storm topology for that process.
We are using 8 machines, each with 22 cores and 512 GB RAM. However, our code runs really slowly: it takes 10 minutes to transfer 6 million records.
About 10 MB of the 60 files is transferred to HDFS in one second. We are trying to optimize our code, but it is obvious that we are doing something very wrong.
For Hive table, we have 64 buckets.
In our topology, we have 1 Spout and 2 Bolts. Basically, our Spout picks up the CSV file and emits lines to the first Bolt, which is responsible for parsing the data; that Bolt then emits to the second Bolt, which is responsible for the HDFS process.
HDFS Spout:
HdfsSpout hdfsSpout = new HdfsSpout()
.withOutputFields(TextFileReader.defaultFields)
.setReaderType("text")
.setHdfsUri(hdfsUri)
.setSourceDir("/data/in")
.setArchiveDir("/data/done")
.setBadFilesDir("/data/bad")
.setClocksInSync(true) // NTP installed on all hosts
.setIgnoreSuffix("_COPYING_") // do not begin reading the file until it is completely copied to HDFS
.setMaxOutstanding(50_000);
Mapper:
DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
.withColumnFields(new Fields(TTDPIRecord.fieldsList))
.withPartitionFields(new Fields(TTDPIRecord.partitionFieldsList));
Hive Options:
HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
.withAutoCreatePartitions(true)
.withHeartBeatInterval(3)
.withCallTimeout(10_000) // default = 10,000
.withTxnsPerBatch(2)
.withBatchSize(50_000)
// doing this because it's most likely affecting Storm metrics
.withTickTupleInterval(1);
Config:
Config conf = new Config();
conf.setNumWorkers(6);
conf.setNumAckers(6);
conf.registerMetricsConsumer(LoggingMetricsConsumer.class);
Topology Builder:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("hdfsSpout", hdfsSpout, 8);
builder.setBolt("recordParserBolt", recordParserBolt, 8).localOrShuffleGrouping("hdfsSpout");
builder.setBolt("hiveBolt", hiveBolt, 8).localOrShuffleGrouping("recordParserBolt");
We are not sure about the following parameters:
In the HDFS Spout: .setMaxOutstanding(50_000)
In the Hive Options: .withTxnsPerBatch(2), .withBatchSize(50_000), .withTickTupleInterval(1)
In Config: .setNumWorkers(6), .setNumAckers(6)
Parallelism in the Spout and Bolts: we gave 8 for each.
What should be the values for those parameters? Thanks in advance.
Edit:
Here is our test result for 10 MB of 100 CSV files:
hdfsSpout Executors: 8 Complete Latency: 1834.209 ms
recordParserBolt Executors: 8 Complete Latency: 0.019 ms
hiveBolt Executors: 8 Complete Latency: 1092.624 ms
You are doing conf.setNumWorkers(6), which means you are only using 6 out of your 8 machines; you can set it to 8 to utilise all the hardware you have.
Another parameter you can change is the parallelism hint of your bolts, which is the initial number of executors (threads) of a component. You have given a parallelism of only 8; you can increase this to 100/200 and see how performance varies.
You can go through this to understand how parallelism works in Storm.
Can you also tell us what your config for max-spout-pending is?
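For what it's worth, if topology.max.spout.pending is left unset, the spout keeps emitting with no cap on un-acked tuples, which can overwhelm the HiveBolt. Below is a rough sketch of the tweaks suggested above applied to the original Config and TopologyBuilder; the specific numbers are only starting points to experiment with, not known-good values:

Config conf = new Config();
conf.setNumWorkers(8);                 // one worker per machine instead of 6
conf.setNumAckers(8);
conf.setMaxSpoutPending(10_000);       // cap on pending (un-acked) tuples per spout task; placeholder value
conf.registerMetricsConsumer(LoggingMetricsConsumer.class);

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("hdfsSpout", hdfsSpout, 8);
// try a larger parallelism hint for the bolts, e.g. 100, and compare
builder.setBolt("recordParserBolt", recordParserBolt, 100).localOrShuffleGrouping("hdfsSpout");
builder.setBolt("hiveBolt", hiveBolt, 100).localOrShuffleGrouping("recordParserBolt");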
I'm trying to track down intermittent "bursts" of timeouts using the StackExchange.Redis library. Here's a bit about our setup: our API is written in C# and runs on Windows 2008 and IIS. We have 4 API servers in production, and we have 4 Redis machines (running the latest Linux LTS), each with 2 instances of Redis (one master on port 7000, one slave on port 7001). I've looked at pretty much every aspect of the Redis servers and they look fantastic: no errors in the logs, CPU and network are great, everything on the server side of things seems fine. I can tail -f the Redis logs while this is happening and don't see anything out of the ordinary (such as rewriting AOF files). I don't think the problem is with Redis.
Here's what I know so far:
We see these timeout exceptions several times an hour. Usually between 40-50 timeouts in a minute, sometimes up to 80-90. Then, they'll go away for several minutes. There were about 5,000 of these events in the past 24 hours, and they happen in bursts from a single API client.
These timeouts only happen against Redis master nodes, never against slave nodes. However, they happen with various Redis commands such as GETs and SETs.
When a burst of these timeouts happen, the calls are coming from a single API server but happen talking to various Redis nodes. For example, API3 might have a bunch of timeouts trying to call Cache1, Cache2 and Cache3. This is strong evidence that the issue is related to the API servers, not the Redis servers.
The Redis master nodes have 108 connected clients. I log current connections, and this number remains stable. There are no big spikes in connections, and it doesn't look like there's any bad code creating too many connections or not sharing ConnectionMultiplexer instances (I have one and it's static)
The Redis slave nodes have 58 connected clients, and this also looks completely stable as well.
We're using StackExchange.Redis version 1.2.6
Redis is using AOF mode, and size on disk is about 195MB
Here's an example timeout exception. Most look pretty much the same as this:
Type=StackExchange.Redis.RedisTimeoutException, Message=Timeout performing GET limeade:allActivities,
inst: 1, mgr: ExecuteSelect, err: never, queue: 0, qu: 0, qs: 0, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
clientName: LIMEADEAPI4, serverEndpoint: 10.xx.xx.11:7000, keyHashSlot: 1295,
IOCP: (Busy=0,Free=1000,Min=2,Max=1000), WORKER: (Busy=9,Free=32758,Min=2,Max=32767)
(Please take a look at this article for some common client-side issues that can cause timeouts:
http://stackexchange.github.io/StackExchange.Redis/Timeouts)
StackTrace=
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisDatabase.StringGet(RedisKey key, CommandFlags flags)
at Limeade.Caching.Providers.RedisCacheProvider`1.Get[T](K cacheKey, CacheItemVersion& cacheItemVersion) in ...
I've done a bit of research on tracking down these timeout exceptions, but what's rather surprising is that all the numbers are zeros. Nothing in the queue, nothing waiting to be processed, and I have tons of threads free and not doing anything. Everything looks great.
Anyone have any ideas on how to fix this? The problem is these bursts of cache timeouts cause our database to be hit more, and in certain circumstances this is a bad thing. I'm happy to add any more info that anyone would find helpful.
Update: Connection Code
The code to connect to Redis is part of a fairly complex system that supports various cache environments and configuration, but I can probably boil it down to the basics. First, there's a CacheFactory class:
public class CacheFactory : ICacheFactory
{
    private static readonly ILogger log = LoggerManager.GetLogger(typeof(CacheFactory));
    private static readonly ICacheProvider<CacheKey> cache;

    static CacheFactory()
    {
        ICacheFactory<CacheKey> configuredFactory = CacheFactorySection.Current?.CreateConfiguredFactory<CacheKey>();
        if (configuredFactory == null)
        {
            // Some error handling, not important
        }
        cache = configuredFactory.GetDefaultCache();
    }

    // ...
}
The ICacheProvider is what implements a way to talk to a certain cache system, which can be configured. In this case, the configuredFactory is a RedisCacheFactory which looks like this:
public class RedisCacheFactory<T> : ICacheFactory<T> where T : CacheKey, ICacheKeyRepository
{
    private RedisCacheProvider<T> provider;
    private readonly RedisConfiguration configuration;

    public RedisCacheFactory(RedisConfiguration config)
    {
        this.configuration = config;
    }

    public ICacheProvider<T> GetDefaultCache()
    {
        return provider ?? (provider = new RedisCacheProvider<T>(configuration));
    }
}
The GetDefaultCache method is called once, in the static constructor, and returns a RedisCacheProvider. This class is what actually connects to Redis:
public class RedisCacheProvider<K> : ICacheProvider<K> where K : CacheKey, ICacheKeyRepository
{
    private readonly ConnectionMultiplexer redisConnection;
    private readonly IDatabase db;
    private readonly RedisCacheSerializer serializer;
    private static readonly ILog log = Logging.RedisCacheProviderLog<K>();
    private readonly CacheMonitor<K> cacheMonitor;
    private readonly TimeSpan defaultTTL;
    private int connectionErrors;

    public RedisCacheProvider(RedisConfiguration options)
    {
        redisConnection = ConnectionMultiplexer.Connect(options.EnvironmentOverride ?? options.Connection);
        db = redisConnection.GetDatabase();
        serializer = new RedisCacheSerializer(options.SerializationBinding);
        cacheMonitor = new CacheMonitor<K>();
        defaultTTL = options.DefaultTTL;
        IEnumerable<string> hosts = options.Connection.EndPoints.Select(e => (e as DnsEndPoint)?.Host);
        log.InfoFormat("Created Redis ConnectionMultiplexer connection. Hosts=({0})", String.Join(",", hosts));
    }

    // ...
}
The constructor creates a ConnectionMultiplexer based on the configured Redis endpoints (which are in some config file). I also log every time I create a connection. We don't see any excessive number of these log statements, and the connections to Redis remain stable.
In global.asax, try adding:
protected void Application_Start(object sender, EventArgs e)
{
    ThreadPool.SetMinThreads(200, 200);
}
For us, this reduced errors from ~50-100 daily to zero. I believe there is no general rule for what numbers to set, as it's system dependent (200 works for us), so it might require some experimenting on your end.
I also believe this has improved the performance of the site.
I am trying to use Redis Sentinel to get/set keys in Redis. I was trying to stress test my setup with about 2000 concurrent requests.
I used Sentinel to put a single key on Redis and then executed 1000 concurrent GET requests against Redis.
But the underlying Jedis used by my Sentinel pool blocks on getResource() (the pool size is 500), and the overall average response time I am achieving is around 500 ms, while my target was about 10 ms.
I am attaching a sample of a jvisualvm snapshot here:
redis.clients.jedis.JedisSentinelPool.getResource() 98.02227 4.0845232601E7 ms 4779
redis.clients.jedis.BinaryJedis.get() 1.6894469 703981.381 ms 141
org.apache.catalina.core.ApplicationFilterChain.doFilter() 0.12820946 53424.035 ms 6875
org.springframework.core.serializer.support.DeserializingConverter.convert() 0.046286926 19287.457 ms 4
redis.clients.jedis.JedisSentinelPool.returnResource() 0.04444578 18520.263 ms 4
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept() 0.035538 14808.45 ms 11430
Can anyone help me debug this issue further?
Here is the JedisSentinelPool implementation of getResource() from the Jedis sources (2.6.2):
@Override
public Jedis getResource() {
    while (true) {
        Jedis jedis = super.getResource();
        jedis.setDataSource(this);

        // get a reference because it can change concurrently
        final HostAndPort master = currentHostMaster;
        final HostAndPort connection = new HostAndPort(jedis.getClient().getHost(), jedis.getClient().getPort());

        if (master.equals(connection)) {
            // connected to the correct master
            return jedis;
        } else {
            returnBrokenResource(jedis);
        }
    }
}
Note the while (true) and the returnBrokenResource(jedis): the pool hands back a Jedis resource more or less at random, and the loop retries until it gets one that is actually connected to the current master. It is a dirty check and also a blocking call.
The super.getResource() call refers to the traditional JedisPool implementation, which is based on Apache Commons Pool (2.0). It does a lot of work to get an object from the pool, and I think it even repairs failed connections. With a lot of contention on your pool, as is probable in your stress test, it can take a long time to get a resource from the pool, only to find it is not connected to the correct master, so you call it again, adding contention, slowing down resource acquisition, and so on.
You should check all the Jedis instances in your pool to see if there are a lot of 'bad' connections.
Maybe you should give up using a common pool for your stress test (only create Jedis instances manually connected to the correct node, and close them nicely), or set up multiple pools to mitigate the cost of sifting through "dirty", unchecked Jedis resources.
Also, with a pool of 500 Jedis instances you can't serve 1000 concurrent queries; you need at least 1000.
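If you do keep a shared pool, it should at least be sized for the concurrency you are testing. Below is a minimal sketch of that; the sentinel addresses, master name, and numbers are placeholders, not values from the question:

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;

JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(1000);     // at least as many connections as concurrent requests
poolConfig.setMaxIdle(1000);      // keep connections around between bursts
poolConfig.setMaxWaitMillis(50);  // fail fast instead of queueing behind the pool

Set<String> sentinels = new HashSet<>();
sentinels.add("sentinel-host-1:26379");  // placeholder sentinel addresses
sentinels.add("sentinel-host-2:26379");

JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels, poolConfig);

Jedis jedis = pool.getResource();
try {
    String value = jedis.get("some-key");
} finally {
    pool.returnResource(jedis);   // or jedis.close() on newer Jedis versions
}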
I am running Spark in standalone mode on 2 machines with these configs:
500 GB storage, 4 cores, 7.5 GB RAM
250 GB storage, 8 cores, 15 GB RAM
I have created a master and a slave on the 8-core machine, giving 7 cores to the worker. I have created another slave on the 4-core machine with 3 worker cores. The UI shows 13.7 GB and 6.5 GB usable RAM for the 8-core and 4-core machines respectively.
Now on this I have to process an aggregate of user ratings over a period of 15 days. I am trying to do this using PySpark.
This data is stored as hour-wise files in day-wise directories in an S3 bucket; every file must be around 100 MB, e.g.
s3://some_bucket/2015-04/2015-04-09/data_files_hour1
I am reading the files like this
a = sc.textFile(files, 15).coalesce(7*sc.defaultParallelism) #to restrict partitions
where files is a string of this form 's3://some_bucket/2015-04/2015-04-09/*,s3://some_bucket/2015-04/2015-04-09/*'
Then I do a series of maps and filters and persist the result
a.persist(StorageLevel.MEMORY_ONLY_SER)
Then I need to do a reduceByKey to get an aggregate score over the span of days.
b = a.reduceByKey(lambda x, y: x+y).map(aggregate)
b.persist(StorageLevel.MEMORY_ONLY_SER)
Then I need to make a Redis call to get the actual terms for the items the user has rated, so I call mapPartitions like this:
final_scores = b.mapPartitions(get_tags)
The get_tags function creates a Redis connection on each invocation, calls Redis, and yields (user, item, rate) tuples.
(The Redis hash is stored on the 4-core machine.)
I have tweaked the SparkConf settings to:
conf = (SparkConf().setAppName(APP_NAME).setMaster(master)
.set("spark.executor.memory", "5g")
.set("spark.akka.timeout", "10000")
.set("spark.akka.frameSize", "1000")
.set("spark.task.cpus", "5")
.set("spark.cores.max", "10")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.set("spark.kryoserializer.buffer.max.mb", "10")
.set("spark.shuffle.consolidateFiles", "True")
.set("spark.files.fetchTimeout", "500")
.set("spark.task.maxFailures", "5"))
I run the job with a driver-memory of 2g in client mode, since cluster mode doesn't seem to be supported here.
The above process takes a long time for 2 days' worth of data (around 2.5 hours) and completely gives up on 14 days' worth.
What needs to improve here?
Is this infrastructure insufficient in terms of RAM and cores? (This is offline and can take hours, but it has to finish in 5 hours or so.)
Should I increase/decrease the number of partitions?
Redis could be slowing the system down, but the number of keys is just too huge to make a one-time call.
I am not sure where the task is failing, in reading the files or in reducing.
Should I stop using Python, given the better Spark APIs in Scala, and would that help with efficiency as well?
This is the exception trace
Lost task 4.1 in stage 0.0 (TID 11, <node>): java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:554)
at sun.security.ssl.InputRecord.read(InputRecord.java:509)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:198)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:200)
at org.apache.http.impl.io.ContentLengthInputStream.close(ContentLengthInputStream.java:103)
at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:164)
at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:227)
at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
at org.apache.http.util.EntityUtils.consume(EntityUtils.java:88)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.releaseConnection(HttpMethodReleaseInputStream.java:102)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.close(HttpMethodReleaseInputStream.java:194)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:152)
at org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:89)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:63)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:126)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:93)
at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:92)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
I could really use some help, thanks in advance
Here is what my main code looks like
def main(sc):
    f = get_files()
    a = (sc.textFile(f, 15)
         .coalesce(7 * sc.defaultParallelism)
         .map(lambda line: line.split(","))
         .filter(lambda line: len(line) > 0)
         .map(lambda line: (line[18], line[2], line[13], line[15]))
         .map(scoring)
         .map(lambda line: ((line[0], line[1]), line[2]))
         .persist(StorageLevel.MEMORY_ONLY_SER))
    b = a.reduceByKey(lambda x, y: x + y).map(aggregate)
    b.persist(StorageLevel.MEMORY_ONLY_SER)
    c = b.mapPartitions(get_tags)
    c.saveAsTextFile("f")
    a.unpersist()
    b.unpersist()
The get_tags function is
def get_tags(partition):
    rh = redis.Redis(host=settings['REDIS_HOST'], port=settings['REDIS_PORT'], db=0)
    for element in partition:
        user = element[0]
        song = element[1]
        rating = element[2]
        tags = rh.hget(settings['REDIS_HASH'], song)
        if tags:
            tags = json.loads(tags)
        else:
            tags = scrape(song, rh)
        if tags:
            for tag in tags:
                yield (user, tag, rating)
The get_files function is:
def get_files():
    paths = get_path_from_dates(DAYS)
    base_path = 's3n://acc_key:sec_key@bucket/'
    files = list()
    for path in paths:
        fle = base_path + path + '/file_format.*'
        files.append(fle)
    return ','.join(files)
The get_path_from_dates(DAYS) function is:
def get_path_from_dates(last):
    days = list()
    t = 0
    while t <= last:
        d = today - timedelta(days=t)
        path = d.strftime('%Y-%m') + '/' + d.strftime('%Y-%m-%d')
        days.append(path)
        t += 1
    return days
As a small optimization, I have created two separate tasks: one to read from S3 and compute the additive sum, and a second to read the transformations from Redis. The first task has a high number of partitions since there are around 2,300 files to read. The second one has a much smaller number of partitions to prevent Redis connection latency, and there is only one file to read, which is on the EC2 cluster itself. This is only partial; I am still looking for suggestions to improve...
I had a similar use case: doing coalesce on an RDD with 300,000+ partitions. The difference is that I was using s3a (SocketTimeoutException from S3AFileSystem.waitAysncCopy). The issue was finally resolved by setting a larger fs.s3a.connection.timeout (in Hadoop's core-site.xml). Hopefully this gives you a clue.
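If editing core-site.xml on every node is awkward, the same Hadoop property can usually be passed through the Spark configuration instead, since Spark forwards spark.hadoop.* entries into the Hadoop Configuration. A minimal sketch (shown here in Java; the property name is the same from PySpark, and the 200000 ms value is only an example, not a recommendation):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf conf = new SparkConf()
    .setAppName("s3a-timeout-example")
    // forwarded to Hadoop as fs.s3a.connection.timeout (milliseconds)
    .set("spark.hadoop.fs.s3a.connection.timeout", "200000");

JavaSparkContext sc = new JavaSparkContext(conf);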
I'm investigating some performance problems in an experimental scheduling application I'm working on. I found that calls to session.SaveChanges() were pretty slow, so I wrote a simple test.
Can you explain why the first iteration of the loop takes 200 ms and subsequent loops 1-2 ms? How can I leverage this in my application? (I don't mind the first call being this slow if all subsequent calls are quick.)
private void StoreDtos()
{
    for (int i = 0; i < 3; i++)
    {
        StoreNewSchedule();
    }
}

private void StoreNewSchedule()
{
    var sw = Stopwatch.StartNew();
    using (var session = DocumentStore.OpenSession())
    {
        session.Store(NewSchedule());
        session.SaveChanges();
    }
    Console.WriteLine("Persisting schedule took {0} ms.", sw.ElapsedMilliseconds);
}
Output is:
Persisting schedule took 189 ms. // first time
Persisting schedule took 2 ms. // second time
Persisting schedule took 1 ms. // ... etc
The above is for an in-memory database. Using an HTTP connection to a RavenDB instance (on the same machine), I get similar results; the first call takes noticeably more time:
Persisting schedule took 1116 ms.
Persisting schedule took 37 ms.
Persisting schedule took 14 ms.
On GitHub: RavenDB 2.0 test code and RavenDB 2.5 test code.
The very first time that you call RavenDB, there are several things that have to happen:
We need to prepare the serializers for your entities, which takes time.
We need to create the TCP connection to the server.
On subsequent calls, we can reuse the connection that is already open and the serializers we already created.