bitcoind -reindex or delete the blockchain and start over

The blockchain is on disk (139 GB) but not complete; it hit an error. Which is faster for getting it up and running again: deleting the blockchain and starting over with bitcoind -daemon, or running bitcoind -reindex?

In my experience: I calculated how long it would take to reindex and decided to delete and start again.
Watching the logs, you can see how long it takes to reindex each .dat file. The answer will depend on the speed of your machine and the speed of your connection.
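For reference, here is a quick way to time the reindex from the logs; this is a sketch that assumes the default data directory at ~/.bitcoin (each blk*.dat file is roughly 128 MB by default, and debug.log timestamps each line):
# Watch the per-file reindex rate; the gap between consecutive lines is the time per ~128 MB file
tail -f ~/.bitcoin/debug.log | grep 'Reindexing block file'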

Related

How to troubleshoot periodic CPU jumps in Redis

I use AWS ElastiCache Redis for our production environment. Every 30 minutes, on the hour and half hour, I see CPU jump from an average of 2-3% to 20%.
This is constant, which tells me it comes from a scheduled job.
From CloudWatch I suspect it is related to KEYS (and maybe SET) commands: their latency is the only metric that jumps at exactly the same time as the CPU.
I would like to see which KEYS (and maybe SET) commands run at a specific time, or find some other way to investigate this.
Thanks for any advice.
With redis-cli monitor I was able to stream most of the commands running on the server and pinpoint the source of the high usage.
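As a sketch of that approach (the endpoint name is a placeholder, and MONITOR adds noticeable load, so keep the capture window short and start it just before the round hour):
# Stream every command for two minutes around the suspected spike
timeout 120 redis-cli -h my-elasticache-endpoint monitor > /tmp/redis-monitor.log
# MONITOR prefixes each line with a timestamp, so you can count the suspects per second
grep -c '"KEYS"' /tmp/redis-monitor.log
grep -c '"SET"' /tmp/redis-monitor.log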

Efficient way to take hot snapshots from Redis in production?

We have a Redis cluster which holds more than 2 million keys, and these keys are updated at an interval of 1 minute. We now have a requirement to take a snapshot of the Redis DB at a particular interval, e.g. every 10 minutes. This snapshot must not pause Redis command execution.
Is there an async way of taking a snapshot from Redis?
It would be really helpful to get suggestions on open source tools or frameworks.
The Redis BGSAVE command is async: it takes a snapshot by calling the OS fork() function. According to the Redis manual,
Fork() can be time consuming if the dataset is big, and may result in Redis stopping to serve clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great.
Two million updates in one minute is 30K+ QPS, so you really have to try it out: run a benchmark that simulates your workload, issue BGSAVE, monitor the I/O and CPU usage of your system, and see whether there is a spike in your Redis call latency.
Then issue LASTSAVE, which tells you when your last successful snapshot happened, so you can adjust your backup schedule.
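A minimal sketch of that test loop (run it against a staging instance first):
# Kick off an async snapshot; this returns immediately while a forked child writes the RDB
redis-cli BGSAVE
# Unix timestamp of the last successful save; poll it to confirm the snapshot finished
redis-cli LASTSAVE
# Fork cost of the last BGSAVE in microseconds, reported under INFO stats
redis-cli INFO stats | grep latest_fork_usec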

rsync: how much time does rsync take to build the file list before starting the sync between source & destination?

I have been using rsync for more than a year to sync production data to a folder on an NFS volume; once the sync completes, our NDMP/tape backup schedule starts.
Situation:
Yesterday we observed that rsync was still syncing files from the production folder to the destination folder when the tape backup process completed, so the tape backup data is inconsistent.
Question 1) How do I find how much time rsync took to generate the list of files that need to be synchronized between the source & destination folders?
I used the command below to print timestamps, to identify how much time rsync took to generate the file list before the copying started:
rsync -avz --out-format="%t %f" --delete /opt/app_home/shared/data /opt/app_home/shared/plugins /opt/app_home/shared/tape-backup-rsync-shared_new/
However, I am seeking guidance on how to determine the time taken at each stage of the rsync process, so that I can tweak my scheduled cron job execution times.
Relying on timing is probably a poor idea because it will vary according to how much data needs to be moved, and how congested the machine and/or network might be. I suggest you either make your schedule very generous, or else implement some kind of locking mechanism.
Try this for your rsync:
flock /path/to/some/nfs/filename.lock rsync <args>
And this for your tape backup:
flock /path/to/some/nfs/filename.lock <mycmd> <args>
The flock command (f-lock) ensures that only one process can own the lock-file at once, and will sit and wait until it owns the lock-file before it launches the command you give it. As long as the sync and backup are launched in the right order then the backup will always wait until the sync is done.
The main gotcha is that if you ever have a power cut, network outage, or some other interruption that leaves a stale lock file behind, you may have to delete it manually before either job can run again (and if you don't notice quickly, a whole bunch of jobs can queue up).
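Putting it together, a crontab sketch (the times, lock path, and tape-backup script name are placeholders):
# Both jobs take the same lock, so the backup always waits for the sync to finish
0 1 * * * flock /opt/app_home/backup.lock rsync -avz --delete /opt/app_home/shared/data /opt/app_home/shared/tape-backup-rsync-shared_new/
0 3 * * * flock /opt/app_home/backup.lock /usr/local/bin/tape-backup.sh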

Redis runs out of memory, causing slow queries, but I can't find them in the slow log

Sometimes a query takes seconds to get a key from Redis.
Redis INFO shows used_memory is 2 times larger than used_memory_rss, and the OS starts to use swap.
After cleaning out the useless data, used_memory is lower than used_memory_rss and everything goes fine.
What confuses me: if any query cost ~10 seconds and blocked other queries to Redis, that would cause serious problems for other parts of the app, but the app seems fine.
And I cannot find any of these long queries in the slow log, so I checked the Redis SLOWLOG documentation, which says:
The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime)
So does this mean the execution of the query is normal and not blocking any other queries? What happens to a query when memory is not enough, that leads to this long query time? Which part of these queries takes so long, given that the time to "actually execute the command" is not long enough to get into the slow log?
Thanks!
When memory is not enough, Redis will definitely slow down, because the OS will start swapping. You can use INFO to report the amount of memory Redis is using, and you can even set a maximum limit on memory usage with the maxmemory option in the config file. If this limit is reached, Redis will start replying with an error to write commands (but will continue to accept read-only commands).
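For example, the cap can be set at runtime without a restart (the 2gb value is an assumption; size it below your physical RAM):
# Limit Redis to 2 GB so the OS never has to swap its pages
redis-cli CONFIG SET maxmemory 2gb
# noeviction (the default policy) makes writes fail with an error once the limit is hit
redis-cli CONFIG SET maxmemory-policy noeviction
# Persist the runtime change back to redis.conf, if one is loaded
redis-cli CONFIG REWRITE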

How can I stop Redis gracefully when memory and swap are full?

Last night I ran a job to insert data into a Redis set (because I want to keep my data unique). When I woke up this morning, I found the insert operations had become very slow.
htop shows memory usage at 1884/2015 MB and swap usage at 1019/1021 MB.
I realize that 2 GB of memory cannot hold Redis.
I then ran SHUTDOWN in redis-cli, but nothing happens; it just waits and waits...
I also tried service redis_6379 stop, but the terminal hangs at "stopping...".
What can I do to make Redis save all its data to dump.rdb and close gracefully?
Normally, a simple redis-cli shutdown should suffice.
Are you using periodic snapshots? If yes, you might be safe to reboot your machine. One important thing to note is that taking a snapshot can, in the worst case, double the memory usage, since the forked child shares pages copy-on-write and every page written during the save gets duplicated.
Another important thing is to follow the advice from the Redis setup hints, if you haven't already.
This might not answer your question, but it should help you avoid the problem happening again.
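If the instance ever gets stuck like this again, one hedged escape hatch (assuming snapshots are enabled, so only the writes since the last snapshot are at risk):
# Check whether a background save is already running or has recently failed
redis-cli INFO persistence | grep -E 'rdb_bgsave_in_progress|rdb_last_bgsave_status'
# Skip the final blocking save; anything written since the last snapshot is lost
redis-cli SHUTDOWN NOSAVE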