geth warns that eth_submitHashrate does not exist while mining with Claymore on Windows 10 with 2 GPUs

I am aiming to GPU-mine Ethereum on a Windows 10 PC with two Radeon RX 590 cards.
geth version is
1.9.9-stable-01744997
cmd call to start geth:
geth --rpc --syncmode "fast" --cache 4096 --etherbase [ADR] --datadir "[MyDataDir]" --mine --minerthreads 0
Blockchain is up to date and everything seems fine on the geth side.
The miner in use is
Claymore's Dual GPU Miner - v15.0
cmd to start miner:
EthDcrMiner64.exe -epool http://127.0.0.1:8545 -mode 1 -tt 75
The miner starts and appears to begin mining; the GPUs show heavy load.
Once initiated, the miner permanently outputs something like this (plus occasional GPU info):
ETH: 12/21/19-15:46:33 - New job from 127.0.0.1:8545
ETH - Total Speed: 21.345 Mh/s, Total Shares: 0, Rejected: 0, Time: 45:52
ETH: GPU0 10.665 Mh/s, GPU1 10.680 Mh/s
So this looks good.
In the geth console meanwhile I get this output:
INFO [12-21|15:46:35.446] Imported new chain segment blocks=1 txs=74 mgas=9.921 elapsed=159.999ms mgasps=62.007 number=9141165 hash=05972d…032349 dirty=1019.58MiB
INFO [12-21|15:46:35.459] Commit new mining work number=9141166 sealhash=35129c…59de27 uncles=0 txs=0 gas=0 fees=0 elapsed=999.3µs
INFO [12-21|15:46:35.720] Commit new mining work number=9141166 sealhash=3788e2…df83fc uncles=0 txs=39 gas=9922304 fees=0.0347883012 elapsed=261.998ms
WARN [12-21|15:46:36.032] Served eth_submitHashrate conn=127.0.0.1:54083 reqid=6 t=0s err="the method eth_submitHashrate does not exist/is not available"
INFO [12-21|15:46:38.548] Commit new mining work number=9141166 sealhash=7451f4…69a431 uncles=0 txs=72 gas=9911680 fees=0.04369322037 elapsed=89.942ms
WARN [12-21|15:46:41.120] Served eth_submitHashrate conn=127.0.0.1:54083 reqid=6 t=0s err="the method eth_submitHashrate does not exist/is not available"
There is this warning/error message:
err="the method eth_submitHashrate does not exist/is not available"
But it also states "Commit new mining work".
I am quite unsure now: am I actually mining, or am I just wasting electricity because the work is never committed?

You have not connected to any mining pool, only to your own geth node, which means you are mining solo and competing against the whole world. When mining solo there are no shares: you either mine a whole block or get nothing. It is extremely hard to mine all alone, so it is advisable to join a mining pool. Claymore's Dual Miner (CDM) ships with a list of available mining pool alternatives.
Also, when mining solo with CDM, you can miss "mined" messages because in this mode it uses the HTTP protocol instead of the Stratum pool protocol. You can manually check your balance on Etherscan at any time, though.
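For illustration, pointing CDM at a pool instead of the local geth node would look roughly like this; the stratum endpoint shown is just one example pool, and the wallet address placeholder must be replaced:
EthDcrMiner64.exe -epool eu1.ethermine.org:4444 -ewal [ADR] -epsw x -mode 1 -tt 75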

Use PhoenixMiner 5e with the -rate 2 option; it will stop showing this error.
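For example, a PhoenixMiner call against the same local geth node might look like the line below (a sketch only; -pool mirrors Claymore's -epool, and -rate 2 changes how the hashrate is reported to the node):
PhoenixMiner.exe -pool http://127.0.0.1:8545 -rate 2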

Related

Can an endpoint be connected to more than one router in a NoC topology in gem5 Garnet 3.0?

I am running gem5 version 22.0.0.2. I operate Garnet in standalone mode in conjunction with the Garnet Synthetic Traffic injector. I want to emulate a routerless NoC, so I guess I need to connect an endpoint (e.g., Cores, Caches, Directories) to more than one "local" router. I just use a Python configuration to configure the topology, but when I do this, there is a runtime error:
build/NULL/mem/ruby/network/garnet/GarnetNetwork.cc:125: info: Garnet version 3.0
build/NULL/base/stats/group.cc:121: panic: panic condition statGroups.find(name) != statGroups.end() occurred: Stats of the same group share the same name `power_state`.
Memory Usage: 692360 KBytes
Program aborted at tick 0
Here is a description from the gem5 documentation: "Each network interface is connected to one or more “local” routers which is could be connected through an “External” link." Here is the link: https://www.gem5.org/documentation/general_docs/ruby/heterogarnet/
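For context, a minimal sketch of the kind of custom topology the question describes is shown below. It is loosely modeled on the stock configs/topologies classes; the class name, router count and router assignment are made up for illustration, and it reproduces the setup that triggers the panic rather than fixing it.

# Hypothetical topology: every endpoint (controller) is attached to two
# "local" routers through two external links.
from m5.params import *
from m5.objects import *
from topologies.BaseTopology import SimpleTopology

class TwoLocalRouters(SimpleTopology):
    description = "TwoLocalRouters"

    def makeTopology(self, options, network, IntLink, ExtLink, Router):
        nodes = self.nodes                      # Cores, Caches, Directories, ...
        num_routers = options.num_cpus

        routers = [Router(router_id=i) for i in range(num_routers)]
        network.routers = routers

        ext_links = []
        link_id = 0
        for i, node in enumerate(nodes):
            # primary and secondary "local" routers for this endpoint
            for rtr in (routers[i % num_routers], routers[(i + 1) % num_routers]):
                ext_links.append(ExtLink(link_id=link_id, ext_node=node, int_node=rtr))
                link_id += 1
        network.ext_links = ext_links

        # internal router-to-router links omitted for brevity
        network.int_links = []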
Here is the constructor of Stats::Group
Group(Group *parent, const char *name = nullptr)
Here is a description from the gem5 documentation: "there are special cases where the parent group may be null. One such special case is SimObjects where the Python code performs late binding of the group parent."
Here is the link: https://www.gem5.org/documentation/general_docs/statistics/api.
I guess the error may be related to this, but I don't know the exact reason.
Any help would be appreciated.
Thank you.

Bitcoin Cash ABC - sendrawtransaction Error | Code : -26

I tried to make a single-signature transfer between two addresses generated using a node in regtest mode. During this, I got the following error:
Error -> mandatory-script-verify-flag-failed (Signature must be zero for failed CHECK(MULTI)SIG operation) (code 16)
The flow was as follows (a sketch of the equivalent bitcoin-cli calls is shown after the list):
createrawtransaction -> args: [ UTXO (txid,vout,scriptPubKey,amount), Receiver address, change address ] -> Success
signrawtransactionwithkey -> args: [Hex-Transaction (output of createrawtransaction), PrivateKey, UTXO (txid,vout,scriptPubKey,amount) ] -> Success
sendrawtransaction -> args: [Hex- Signed Transaction (output of signrawtransactionwithkey)] -> Failed
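For reference, a rough bitcoin-cli sketch of that flow is shown below; every txid, address, key and amount is a placeholder to be substituted:
bitcoin-cli -regtest createrawtransaction '[{"txid":"<utxo-txid>","vout":0}]' '{"<receiver-address>":0.1,"<change-address>":0.89}'
bitcoin-cli -regtest signrawtransactionwithkey "<raw-hex>" '["<private-key-wif>"]' '[{"txid":"<utxo-txid>","vout":0,"scriptPubKey":"<script-hex>","amount":1.0}]'
bitcoin-cli -regtest sendrawtransaction "<signed-hex>"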
From basic research, many suggested adding the amount field to the signrawtransactionwithkey call, which I did, but even after that I was getting the same error.
Note that this error appeared all of a sudden; the node setup had been working fine for months, and it happens only on one particular Linux machine. Are there any other factors on the host machine that can affect a Bitcoin Cash ABC node and cause this issue?
Bitcoin Cash ABC Node running in Regtest mode.
This error may also arise from running an older version, so try updating the node. I got this error on 0.20.8, but after updating the node to 0.21.8 it works fine. I am not sure what is happening, or whether there is some kind of expiry on Bitcoin ABC releases.

Wrong balance between Aerospike instances in cluster

I have an application with a high load for batch read operations. My Aerospike cluster (v 3.7.2) has 14 servers, each one with 7GB RAM and 2 CPUs in Google Cloud.
Looking at the Google Cloud Monitoring graphs, I noticed a very unbalanced load between servers: some servers have almost 100% CPU load, while others have less than 50% (image below). Even after hours of operation, the cluster's unbalanced pattern doesn't change.
Is there any configuration that I could change to make this cluster more homogeneous? How to optimize node balancing?
Edit 1
All servers in the cluster have the same aerospike.conf file:
# Aerospike database configuration file.
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
paxos-recovery-policy auto-reset-master
pidfile /var/run/aerospike/asd.pid
service-threads 32
transaction-queues 32
transaction-threads-per-queue 32
batch-index-threads 32
proto-fd-max 15000
batch-max-requests 200000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
#address any
port 3000
}
heartbeat {
mode mesh
mesh-seed-address-port 10.240.0.6 3002
mesh-seed-address-port 10.240.0.5 3002
port 3002
interval 150
timeout 20
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace test {
replication-factor 3
memory-size 5G
default-ttl 0 # 30 days, use 0 to never expire/evict.
ldt-enabled true
storage-engine device {
file /data/aerospike.dat
write-block-size 1M
filesize 180G
}
}
Edit 2:
$ asinfo
1 : node
BB90600F00A0142
2 : statistics
cluster_size=14;cluster_key=E3C3672DCDD7F51;cluster_integrity=true;objects=3739898;sub-records=0;total-bytes-disk=193273528320;used-bytes-disk=26018492544;free-pct-disk=86;total-bytes-memory=5368709120;used-bytes-memory=239353472;data-used-bytes-memory=0;index-used-bytes-memory=239353472;sindex-used-bytes-memory=0;free-pct-memory=95;stat_read_reqs=2881465329;stat_read_reqs_xdr=0;stat_read_success=2878457632;stat_read_errs_notfound=3007093;stat_read_errs_other=0;stat_write_reqs=551398;stat_write_reqs_xdr=0;stat_write_success=549522;stat_write_errs=90;stat_xdr_pipe_writes=0;stat_xdr_pipe_miss=0;stat_delete_success=4;stat_rw_timeout=1862;udf_read_reqs=0;udf_read_success=0;udf_read_errs_other=0;udf_write_reqs=0;udf_write_success=0;udf_write_err_others=0;udf_delete_reqs=0;udf_delete_success=0;udf_delete_err_others=0;udf_lua_errs=0;udf_scan_rec_reqs=0;udf_query_rec_reqs=0;udf_replica_writes=0;stat_proxy_reqs=7021;stat_proxy_reqs_xdr=0;stat_proxy_success=2121;stat_proxy_errs=4739;stat_ldt_proxy=0;stat_cluster_key_err_ack_dup_trans_reenqueue=607;stat_expired_objects=0;stat_evicted_objects=0;stat_deleted_set_objects=0;stat_evicted_objects_time=0;stat_zero_bin_records=0;stat_nsup_deletes_not_shipped=0;stat_compressed_pkts_received=0;err_tsvc_requests=110;err_tsvc_requests_timeout=0;err_out_of_space=0;err_duplicate_proxy_request=0;err_rw_request_not_found=17;err_rw_pending_limit=19;err_rw_cant_put_unique=0;geo_region_query_count=0;geo_region_query_cells=0;geo_region_query_points=0;geo_region_query_falsepos=0;fabric_msgs_sent=58002818;fabric_msgs_rcvd=57998870;paxos_principal=BB92B00F00A0142;migrate_msgs_sent=55749290;migrate_msgs_recv=55759692;migrate_progress_send=0;migrate_progress_recv=0;migrate_num_incoming_accepted=7228;migrate_num_incoming_refused=0;queue=0;transactions=101978550;reaped_fds=6;scans_active=0;basic_scans_succeeded=0;basic_scans_failed=0;aggr_scans_succeeded=0;aggr_scans_failed=0;udf_bg_scans_succeeded=0;udf_bg_scans_failed=0;batch_index_initiate=40457778;batch_index_queue=0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0;batch_index_complete=40456708;batch_index_timeout=1037;batch_index_errors=33;batch_index_unused_buffers=256;batch_index_huge_buffers=217168717;batch_index_created_buffers=217583519;batch_index_destroyed_buffers=217583263;batch_initiate=0;batch_queue=0;batch_tree_count=0;batch_timeout=0;batch_errors=0;info_queue=0;delete_queue=0;proxy_in_progress=0;proxy_initiate=7021;proxy_action=5519;proxy_retry=0;proxy_retry_q_full=0;proxy_unproxy=0;proxy_retry_same_dest=0;proxy_retry_new_dest=0;write_master=551089;write_prole=1055431;read_dup_prole=14232;rw_err_dup_internal=0;rw_err_dup_cluster_key=1814;rw_err_dup_send=0;rw_err_write_internal=0;rw_err_write_cluster_key=0;rw_err_write_send=0;rw_err_ack_internal=0;rw_err_ack_nomatch=1767;rw_err_ack_badnode=0;client_connections=366;waiting_transactions=0;tree_count=0;record_refs=3739898;record_locks=0;migrate_tx_objs=0;migrate_rx_objs=0;ongoing_write_reqs=0;err_storage_queue_full=0;partition_actual=296;partition_replica=572;partition_desync=0;partition_absent=3228;partition_zombie=0;partition_object_count=3739898;partition_ref_count=4096;system_free_mem_pct=61;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_inactivity_dur=0;sindex_gc_activity_dur=0;sindex_gc_list_creation_time=0;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=0;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;system_swapping=false;err_replica_null_node=0;err_r
eplica_non_null_node=0;err_sync_copy_null_master=0;storage_defrag_corrupt_record=0;err_write_fail_prole_unknown=0;err_write_fail_prole_generation=0;err_write_fail_unknown=0;err_write_fail_key_exists=0;err_write_fail_generation=0;err_write_fail_generation_xdr=0;err_write_fail_bin_exists=0;err_write_fail_parameter=0;err_write_fail_incompatible_type=0;err_write_fail_noxdr=0;err_write_fail_prole_delete=0;err_write_fail_not_found=0;err_write_fail_key_mismatch=0;err_write_fail_record_too_big=90;err_write_fail_bin_name=0;err_write_fail_bin_not_found=0;err_write_fail_forbidden=0;stat_duplicate_operation=53184;uptime=1001388;stat_write_errs_notfound=0;stat_write_errs_other=90;heartbeat_received_self=0;heartbeat_received_foreign=145137042;query_reqs=0;query_success=0;query_fail=0;query_abort=0;query_avg_rec_count=0;query_short_running=0;query_long_running=0;query_short_queue_full=0;query_long_queue_full=0;query_short_reqs=0;query_long_reqs=0;query_agg=0;query_agg_success=0;query_agg_err=0;query_agg_abort=0;query_agg_avg_rec_count=0;query_lookups=0;query_lookup_success=0;query_lookup_err=0;query_lookup_abort=0;query_lookup_avg_rec_count=0
3 : features
cdt-list;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 : cluster-generation
61
5 : partition-generation
11811
6 : edition
Aerospike Community Edition
7 : version
Aerospike Community Edition build 3.7.2
8 : build
3.7.2
9 : services
10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.41:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.23:3000
10 : services-alumni
10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.23:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.41:3000
I have a few comments about your configuration. First, transaction-threads-per-queue should be set to 3 or 4 (don't set it to the number of cores).
The second has to do with your batch-read tuning. You're using the (default) batch-index protocol, and the config params you'll need to tune for batch-read performance are:
batch-max-requests – you have this set very high, which is probably affecting both your CPU load and your memory consumption. Even a slight imbalance in the number of keys you're accessing per node will show up in the graphs you've shown, so this is possibly the issue. It's better to iterate over smaller batches than to try to fetch 200K records per node at a time.
batch-index-threads – by default its value is 4, and you set it to 32 (of a max of 64). You should do this incrementally by running the same test and benchmarking the performance. On each iteration adjust higher, then down if it's decreased in performance. For example: test with 32, +8 = 40 , +8 = 48, -4 = 44. There's no easy rule-of-thumb for the setting, you'll need to tune through iterations on the hardware you'll be using, and monitor the performance.
batch-max-buffer-per-queue – this is more directly linked to the number of concurrent batch-read operations the node can support. Each batch-read request will consume at least one buffer (more if the data cannot fit in 128K). If you do not have enough of these allocated to support the number of concurrent batch-read requests you will get exceptions with error code 152 BATCH_QUEUES_FULL . Track and log such events clearly, because it means you need to raise this value. Note that this is the number of buffers per-queue. Each batch response worker thread has its own queue, so you'll have batch-index-threads x batch-max-buffer-per-queue buffers, each taking 128K of RAM. The batch-max-unused-buffers caps the memory usage of all these buffers combined, destroying unused buffers until their number is reduced. There's an overhead to allocating and destroying these buffers, so you do not want to set it too low compared to the total. Your current cost is 32 x 256 x 128KB = 1GB.
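Taken together, a revised batch section of the service stanza might look like the sketch below; the specific numbers are starting points to benchmark and iterate on, not definitive values:
service {
# ... other settings unchanged ...
transaction-threads-per-queue 4 # 3-4, not the number of cores
batch-index-threads 32 # tune up or down iteratively while benchmarking
batch-max-requests 5000 # iterate over smaller batches instead of 200000
batch-max-buffer-per-queue 255 # raise if clients see error 152 BATCH_QUEUES_FULL
batch-max-unused-buffers 256 # caps the combined memory of unused 128K buffers
}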
Finally, you're storing your data on a filesystem. That's fine for development instances, but not recommended for production. In GCE you can provision either a SATA SSD or an NVMe SSD for your data storage, and those should be initialized, and used as block devices. Take a look at the GCE recommendations for more details. I suspect you have warnings in your log about the device not keeping up.
It's likely that one of your nodes is an outlier with regards to the number of partitions it has (and therefore number of objects). You can confirm it with asadm -e 'asinfo -v "objects"'. If that's the case, you can terminate that node, and bring up a new one. This will force the partitions to be redistributed. This does trigger a migration, which takes quite longer in the CE server than in the EE one.
For anyone interested, Aerospike Enterprise 4.3 introduced 'uniform-balance', which homogeneously balances data partitions. Read more here: https://www.aerospike.com/blog/aerospike-4-3-all-flash-uniform-balance/

ServerXmlHttpRequest hanging sometimes when doing a POST

I have a job that periodically does some work involving ServerXmlHttpRequest to perform an HTTP POST. The job runs every 60 seconds.
And normally it runs without issue. But there's about a 1 in 50,000 chance (every two or three months) that it will hang:
IXMLHttpRequest http = new ServerXmlHttpRequest();
http.open("POST", deleteUrl, false, "", "");
http.send(stuffToDelete); // <-- hangs here
When it hangs, not even the Task Scheduler (with the option enabled to kill the job if it takes longer than 3 minutes to run) can end the task. I have to connect to the remote customer's network, get on the server, and use Task Manager to kill the process.
And then it's good for another month or three.
Eventually I started using Task Manager to create a process dump so I could analyze where the hang is. After five crash dumps (over the last 11 months or so) I get a consistent picture:
ntdll.dll!_NtWaitForMultipleObjects#20()
KERNELBASE.dll!_WaitForMultipleObjectsEx#20()
user32.dll!MsgWaitForMultipleObjectsEx()
user32.dll!_MsgWaitForMultipleObjects#20()
urlmon.dll!CTransaction::CompleteOperation(int fNested) Line 2496
urlmon.dll!CTransaction::StartEx(IUri * pIUri, IInternetProtocolSink * pOInetProtSink, IInternetBindInfo * pOInetBindInfo, unsigned long grfOptions, unsigned long dwReserved) Line 4453 C++
urlmon.dll!CTransaction::Start(const wchar_t * pwzURL, IInternetProtocolSink * pOInetProtSink, IInternetBindInfo * pOInetBindInfo, unsigned long grfOptions, unsigned long dwReserved) Line 4515 C++
msxml3.dll!URLMONRequest::send()
msxml3.dll!XMLHttp::send()
Contoso.exe!FrobImporter.TFrobImporter.DeleteFrobs Line 971
Contoso.exe!FrobImporter.TFrobImporter.ImportCore Line 1583
Contoso.exe!FrobImporter.TFrobImporter.RunImport Line 1070
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.HandleFrobImport Line 433
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.CoreExecute Line 71
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.Execute Line 84
Contoso.exe!Contoso.Contoso Line 167
kernel32.dll!#BaseThreadInitThunk#12()
ntdll.dll!__RtlUserThreadStart()
ntdll.dll!__RtlUserThreadStart#8()
So I do a ServerXmlHttpRequest.send, and it never returns. It will sit there for days (causing the system to miss financial transactions, until come Sunday night I get a call that it's broken).
Probably of no help unless someone knows how to debug at this level, but the registers in the stalled thread at the time of the dump are:
EAX 00000030
EBX 00000000
ECX 00000000
EDX 00000000
ESI 002CAC08
EDI 00000001
EIP 732A08A7
ESP 0018F684
EBP 0018F6C8
EFL 00000000
Windows Server 2012 R2
Microsoft IIS/8.5
Default timeouts of ServerXmlHttpRequest
You can use serverXmlHttpRequest.setTimeouts(...) to configure the four classes of timeouts (see the sketch after this list):
resolveTimeout: The value is applied to mapping host names (such as "www.microsoft.com") to IP addresses; the default value is infinite, meaning no timeout.
connectTimeout: A long integer. The value is applied to establishing a communication socket with the target server, with a default timeout value of 60 seconds.
sendTimeout: The value applies to sending an individual packet of request data (if any) on the communication socket to the target server. A large request sent to a server will normally be broken up into multiple packets; the send timeout applies to sending each packet individually. The default value is 30 seconds.
receiveTimeout: The value applies to receiving a packet of response data from the target server. Large responses will be broken up into multiple packets; the receive timeout applies to fetching each packet of data off the socket. The default value is 30 seconds.
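Continuing the snippet from the question, explicit timeouts (all values in milliseconds) could be set before the send; the 10-second resolve value here is only an illustration, since its default is infinite:
// resolveTimeout, connectTimeout, sendTimeout, receiveTimeout (milliseconds)
http.setTimeouts(10000, 60000, 30000, 30000);
http.open("POST", deleteUrl, false, "", "");
http.send(stuffToDelete);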
KB305053 (a server that decides to keep the connection open will cause ServerXMLHTTP to wait for the connection to close) seems like it could plausibly be the issue, but the 30-second default receive timeout should have taken care of that.
Possible workaround - Add myself to a Job
The Windows Task Scheduler is unable to terminate the task, even though the option to do so is enabled.
I will look into using the Windows Job API to add my own process to a job, and use SetInformationJobObject to set a time limit on my process:
CreateJobObject
AssignProcessToJobObject
SetInformationJobObject
to limit my process to three minutes of execution time:
PerProcessUserTimeLimit
If LimitFlags specifies JOB_OBJECT_LIMIT_PROCESS_TIME, this member is the per-process user-mode execution time limit, in 100-nanosecond ticks. Otherwise, this member is ignored.
The system periodically checks to determine whether each process associated with the job has accumulated more user-mode time than the set limit. If it has, the process is terminated.
If the job is nested, the effective limit is the most restrictive limit in the job chain.
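For what it's worth, a minimal Win32 sketch of that idea would look roughly like the following (error handling omitted; the three-minute figure mirrors the Task Scheduler setting). As the Edit below notes, though, a user-time limit never fires for a thread that is sitting idle in a wait.

#include <windows.h>

/* Sketch: place the current process in a job object with a
   3-minute per-process user-mode execution time limit. */
static void LimitSelfToThreeMinutesUserTime(void)
{
    HANDLE job = CreateJobObjectW(NULL, NULL);

    JOBOBJECT_BASIC_LIMIT_INFORMATION limits;
    ZeroMemory(&limits, sizeof(limits));
    limits.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_TIME;
    limits.PerProcessUserTimeLimit.QuadPart = 3LL * 60 * 10000000; /* 100-ns ticks */

    SetInformationJobObject(job, JobObjectBasicLimitInformation,
                            &limits, sizeof(limits));
    AssignProcessToJobObject(job, GetCurrentProcess());
}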
Although, since Task Scheduler already uses Job objects to limit a task's time, I'm not hopeful that a Job object can limit the job either.
Edit: Job objects cannot limit a process by wall-clock time - only user time. And with a process sitting idle waiting on an object, it will not accumulate any user time - certainly not three minutes' worth.
Bonus Reading
How can a ServerXMLHTTP GET request hang? (GET, not POST)
KB305053: ServerXMLHTTP Stops Responding When You Send a POST Request (which says the timeout should expire; where mine does not)
MS Forums: oHttp.Send - Hangs (HEAD, not POST)
MS Forums: ASP to test SOAP WebService using MSXML2.ServerXMLHTTP Send hangs
CC to MS Support Forums
Consider switching to a newer, supported API.
msxml6.dll using MSXML2.ServerXMLHTTP.6.0
winhttpcom.dll using WinHttp.WinHttpRequest.5.1.
The msxml3.dll library is no longer supported and is only kept around for compatibility reasons. Plus, there were a number of security and stability improvements included with msxml4.dll (and newer) that you are missing out on.

In what scenarios can the blockchain size decrease for bitcoin?

I am running a private bitcoin network for which I changed the target time between two blocks to 12 seconds and the difficulty adjustment to a 25-block interval. I ran the network for about 4 hours with 50 nodes. In one of the nodes' logs I observed that the blockchain height increased up to a maximum of 181 and then started decreasing, all the way down to 38. What could explain such strange behaviour?
Please refer to the log below:
2015-11-04 01:58:47 receive version message: /Satoshi:0.11.99/: version 70011, blocks=181, us=0.0.0.0:0, peer=2, peeraddr=127.0.0.1:44117
2015-11-04 01:58:47 UpdateTip: new best=0000005265ca4ce01ad0d06f45cf475bf303de3d64e942c5cf1177e00f346c78 height=180 log2_work=37.083283 tx=30941 date=2015-11-04 01:53:17 progress=1.000000 cache=0.0MiB(1tx)
2015-11-04 01:58:47 UpdateTip: new best=00000052a34cedf3c5ddbeb46d36644654523db855c4cce984d2623e840dd219 height=179 log2_work=37.082953 tx=30940 date=2015-11-04 01:53:10 progress=1.000000 cache=0.0MiB(2tx)
2015-11-04 01:58:47 UpdateTip: new best=00000030fd7652affb883f05fe0c98e7fe3fbc3cfd74808e061ed05ec61c22e6 height=178 log2_work=37.082623 tx=30939 date=2015-11-04 01:52:55 progress=1.000000 cache=0.0MiB(3tx)
2015-11-04 01:58:47 AddToWallet c32bcbd8102c602a5e71ee717232e204435f331dce6fbfb9eb5d552698faa95b
2015-11-04 01:58:47 AddToWallet 1c91517aeadd12bcbcfdf4a1423b671d405543ae9abfbd87078969ce1971663f
2015-11-04 01:58:47 AddToWallet b11f9c2e3b1ab3d3983da63783bb95903d89405243d0716ea88272a9261b7a33
Are all 50 nodes mining? What might be happening is that some nodes are not in sync and keep mining on earlier blocks. If these forks end up with a higher difficulty than the current chain tip, the chain might roll back.
However, the log entries all share the same second, which might indicate a race condition between receiving the blocks and printing the log messages.
The log you provided shows you have two peers.
If those (2+1) are the only nodes in the network, then your chain is not going to be stable without more fine-tuning of the variables.
My guess is that you changed some rules and a chain split and reorganization happened; the extra blocks became orphans after the reorganization.
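To see this from the node itself, the competing branches can be inspected with standard RPCs, for example (an illustration to run against the node whose log is shown above):
bitcoin-cli getchaintips # lists the active tip plus any valid-fork or orphaned branches
bitcoin-cli getblockcount # current height of the active chain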