RxJava Grouped Flowable with Conflation - rx-java3

I'm trying to create a flow for a fast producer with a slow consumer for FX (foreign exchange) prices. The basic idea is that prices coming from the source should be sent to the consumer as fast as possible.
The following is important for the working of this flow:
While the consumer is busy submitting prices, new prices must be received from the price source (in other words we don't want to slow down the producer at any stage).
When the consumer is finished processing its current rate, it is then available to process another rate from the source.
The consumer is only ever interested in the latest published price for a ccy pair - in other words, we want prices to be conflated if the consumer does not keep up.
The consumer can (and should) process submissions in parallel, as long as these submissions are of a different ccy pair. For example, EUR/USD can be submitted in parallel with GBP/USD, but while EUR/USD is busy, other EUR/USD rates must be conflated.
Here is my current attempt:
public class RxTest {
    private static final Logger LOG = LoggerFactory.getLogger(RxTest.class);

    CcyPair EUR_USD = new CcyPair("EUR/USD");
    CcyPair GBP_USD = new CcyPair("GBP/USD");
    CcyPair USD_JPY = new CcyPair("USD/JPY");

    List<CcyPair> ALL_CCY_PAIRS = Empty.<CcyPair>vector()
            .plus(EUR_USD)
            .plus(GBP_USD)
            .plus(USD_JPY);

    @Test
    void testMyFlow() throws Exception {
        AtomicInteger rateGenerator = new AtomicInteger(0);
        Flowable.<Rate>generate(emitter -> {
                    MILLISECONDS.sleep(ThreadLocalRandom.current().nextInt(100));
                    final CcyPair ccyPair = ALL_CCY_PAIRS.get(ThreadLocalRandom.current().nextInt(3));
                    final String rate = String.valueOf(rateGenerator.incrementAndGet());
                    emitter.onNext(new Rate(ccyPair, rate));
                })
                .subscribeOn(Schedulers.newThread())
                .doOnNext(rate -> LOG.info("Process: {}", rate))
                .groupBy(Rate::ccyPair)
                .map(Flowable::publish)
                .doOnNext(ConnectableFlowable::connect)
                .flatMap(grp -> grp.map(rate -> rate))
                .onBackpressureLatest()
                .observeOn(Schedulers.io())
                .subscribe(onNext -> {
                    LOG.info("Long running process: {}", onNext);
                    MILLISECONDS.sleep(500);
                    LOG.info("Long running process complete: {}", onNext);
                });

        MILLISECONDS.sleep(5000);
    }

    record CcyPair(String name) {
        public String toString() {
            return name;
        }
    }

    record Rate(CcyPair ccyPair, String rate) {
        public String toString() {
            return ccyPair + "->" + rate;
        }
    }
}
Which gives me this output:
09:27:05,743 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->1
09:27:05,764 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: EUR/USD->1
09:27:05,805 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->2
...
09:27:06,127 INFO [RxTest] [RxNewThreadScheduler-1] - Process: GBP/USD->9
09:27:06,165 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->10
09:27:06,214 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->11
09:27:06,265 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process complete: EUR/USD->1
09:27:06,265 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: USD/JPY->2
09:27:06,302 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->12
09:27:06,315 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->13
...
09:27:06,672 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->23
09:27:06,695 INFO [RxTest] [RxNewThreadScheduler-1] - Process: GBP/USD->24
09:27:06,758 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->25
09:27:06,773 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process complete: USD/JPY->2
09:27:06,773 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: EUR/USD->3
There are a number of problems here:
The consumer is not getting the "latest" rate per ccy pair, but the next one. This is obviously because there is an internal buffer and the backpressure has not kicked in yet. I don't want to wait for this backpressure to kick in, I simply want whatever is the latest emission to go to the consumer next.
The consumer is not processing in parallel - while EUR/USD is processing, for example, the other ccy pairs are being buffered.
Some notes/thoughts:
I'm very new to RxJava and am struggling to find common patterns/idioms for tackling these kinds of problems, so please be patient :)
I'm not at all sure that backpressure is the right way to achieve this at all. RxJava has many interesting operators like window(), cache(), takeLast() etc., but none of them seem to work exactly the way I want. I would have liked an operator like "conflate" or such - I'm sure there is something that can achieve this, I'm just not sure what.
I struggle to understand how a slow consumer can tell the flow "I'm busy, just conflate everything until I'm done". Can this only be done via scheduling on threads? That seems worrisome, because what if the consumer is asynchronous - how will it tell the producer to hold on while it's busy?

You can use groupBy to create a subflow for each currency pair, but then you need to move those subflows onto their own threads. RxJava has many internal buffers, so skipping items while some part of the code is busy may be difficult.
source
    .groupBy(Rate::ccyPair)
    .flatMap(group -> {
        return group
                .onBackpressureLatest()
                // delay(0) hops each group onto the computation scheduler
                .delay(0, TimeUnit.MILLISECONDS)
                .doOnNext(rate -> {
                    LOG.info("Long running process: {}", rate);
                    MILLISECONDS.sleep(500);
                    LOG.info("Long running process complete: {}", rate);
                });
    }, false, 128, 1)
    .subscribe();
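If you prefer an explicit scheduler hop over delay(0, ...), a variant is to give each group its own observeOn with a prefetch of 1: the operator then requests only one rate at a time, and onBackpressureLatest() conflates whatever arrives while the consumer is busy. A minimal sketch, assuming the source flowable and the Rate/CcyPair records from the question:

source
    .groupBy(Rate::ccyPair)
    .flatMap(group -> group
            .onBackpressureLatest()
            // prefetch of 1: at most one in-flight rate per ccy pair, so
            // onBackpressureLatest() keeps only the newest while we are busy
            .observeOn(Schedulers.io(), false, 1)
            .doOnNext(rate -> {
                LOG.info("Long running process: {}", rate);
                MILLISECONDS.sleep(500);
                LOG.info("Long running process complete: {}", rate);
            }),
        false, 128, 1) // maxConcurrency 128, per-group buffer of 1
    .subscribe();

Each group gets its own io() worker, so EUR/USD and GBP/USD are processed in parallel while rates within one pair are conflated.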

Related

How to send a message with priority to RabbitMQ with StreamBridge

I'm using RabbitMQ. I've defined a queue with priority, and I can send messages to this queue with some priority value using the RMQ GUI, and consumers also get the messages in sorted order. But when I try to send the message from my Java code using StreamBridge, I don't know how to specify the priority with the message.
Here's what I have tried:
I have added x-max-priority: 10 to the queue while creating the queue.
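(For reference, a declaration along these lines with the plain RabbitMQ Java client - a sketch only, the queue name is illustrative:)

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Map;

void declarePriorityQueue() throws Exception {
    try (Connection conn = new ConnectionFactory().newConnection();
         Channel channel = conn.createChannel()) {
        // x-max-priority enables per-message priorities (here 0..10) on this queue
        channel.queueDeclare("test_queue", true, false, false,
                Map.of("x-max-priority", 10));
    }
}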
Consumer example =
@Bean
public Consumer<Message<String>> testListener() {
    return (m) -> {
        System.out.println("inside consumer with message : " + m);
        System.out.println("headers : " + m.getHeaders());
        System.out.println("payload : " + m.getPayload());
    };
}
Producer example =
@GET
@Path("test/")
public void test(@Context HttpServletRequest request) {
    System.out.println("inside test");
    try {
        String payload = "hello world";
        logger.info("going to send a message : {}", payload);
        int priority = 5;
        Message<String> message = MessageBuilder.withPayload(payload)
                .setHeader("priority", priority)
                .build();
        boolean res = STREAM_BRIDGE.send("testWriter-out-0", message);
        System.out.println(message);
        System.out.println(res);
    } catch (Exception e) {
        logger.error(e);
    }
}
The output of the Producer =
-> inside test
-> GenericMessage [payload=hello world, headers={priority=5, id=some_id, timestamp=epoch}]
-> true
The output of the Consumer =
-> inside consumer with message : GenericMessage [payload=hello world, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=some_tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=some_tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}]
-> headers : {amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}
-> payload : hello world
So the message goes to RMQ and the consumer also gets the message, but on RMQ GUI when I perform Get-message operation on the Queue, I get this result =>
Message 1
The server reported 0 messages remaining.
Exchange: test_exchange
Routing Key: test_exchange
Redelivered: ○
Properties:
  timestamp: timestamp
  message_id: some_id
  priority: 0
  delivery_mode: 2
  headers:
    content_type: application/json
Payload: hello world (11 bytes, encoding: string)
As we can see in the above result, the priority is set to 0 by RMQ (and hence the consumer gets the messages in FIFO order, not in priority order), and inside headers only one entry is present, content_type: application/json. So I think the priority is not part of the headers but part of the properties - in that case, how do I set message properties using StreamBridge?
To conclude, I am trying to figure out how to set the priority of a message dynamically while sending it using StreamBridge. Any help would be appreciated, thanks in advance!
Please consider using the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn.
Apparently your spring-cloud-starter-stream-rabbit = 3.0.3.RELEASE is old enough to suffer from this issue: https://github.com/spring-cloud/spring-cloud-stream/issues/1931.
I have just tested with the latest version, and I got the proper priority property on the message posted into the RabbitMQ queue by the mentioned StreamBridge.
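For reference, a sketch of the producer side after upgrading. It uses the AmqpHeaders.PRIORITY constant, which resolves to the same "priority" header the question already sets; on a recent binder that header is mapped onto the AMQP priority property:

import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public boolean sendWithPriority(StreamBridge streamBridge, String payload, int priority) {
    Message<String> message = MessageBuilder.withPayload(payload)
            // AmqpHeaders.PRIORITY == "priority"; mapped to the AMQP message property
            .setHeader(AmqpHeaders.PRIORITY, priority)
            .build();
    return streamBridge.send("testWriter-out-0", message);
}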

Retry is executed only once in Spring Cloud Stream Reactive

When I retry in Spring Cloud Stream Reactive, a situation arises that I don't understand, so I'm asking a question.
String-type data is sent once per second; after processing it in the s-c-stream Function, I intentionally throw a RuntimeException under a certain condition.
@Bean
fun test(): Function<Flux<String>, Flux<String>?> = Function { input ->
    input.map { sellerId ->
        if (sellerId == "I-ZICGO")
            throw RuntimeException("intentional")
        else
            log.info("do normal: {}", sellerId)
        sellerId
    }.retryWhen(Retry.from { companion ->
        companion.map { rs ->
            if (rs.totalRetries() < 3) { // retrying 3 times
                log.info("retry!!!: {}", rs.totalRetries())
                rs.totalRetries()
            } else
                throw Exceptions.propagate(rs.failure())
        }
    })
}
However, the result of running the above logic is:
2021-02-25 16:14:29.319 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 0 subscriber(s).
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] k.c.m.c.service.impl.ItemServiceImpl : retry!!!: 0
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 1 subscriber(s).
Retry is processed only once.
Should I change from reactive to imperative to fix this?
In short, yes. The retry settings are meaningless for reactive functions. You can see a more detailed explanation in the similar SO question here.
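That said, if you want to stay reactive, one option is to scope the retry to an inner publisher per element, so a failing element does not cancel the shared subscription. A sketch in Java/Reactor (not the binder's retry mechanism; the names mirror the question):

import java.util.function.Function;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public Function<Flux<String>, Flux<String>> test() {
    return input -> input.concatMap(sellerId ->
            Mono.fromCallable(() -> {
                        if ("I-ZICGO".equals(sellerId)) {
                            throw new RuntimeException("intentional");
                        }
                        return sellerId;
                    })
                    .retryWhen(Retry.max(3))          // up to 3 retries for this element only
                    .onErrorResume(e -> Mono.empty()) // then drop it; the outer Flux keeps running
    );
}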

RabbitMQ unacked messages not removed from queue after expiration

I have a RabbitMQ server (v.3.8.2) with a simple exchange fanout-queue binding running, with several producers and one consumer. The average delivery/ack rate is quite low, about 6 msg/s.
The queue is created at runtime by producers with the x-message-ttl parameter set at 900000 (15 minutes).
In very specific conditions (e.g. a rare error situation), messages are rejected by the consumer. These messages are then shown in the unacked counter on the RabbitMQ admin web page indefinitely. They never expire or get discarded, even after they time out.
There are no specific per-message overrides in ttl parameters.
I do not need any dead-letter processing, as these particular messages do not require high-reliability processing, and I can afford to lose some of them every now and then under those specific error conditions.
Exchange parameters:
name: poll
type: fanout
features: durable=true
bound queue: poll
routing key: poll
Queue parameters:
name: poll
features: x-message-ttl=900000 durable=true
For instance, this is what I am currently seeing in the RabbitMQ server queue admin page:
As you can see, there are 12 rejected/unack'ed messages in the queue, and they have been living there for more than a week now.
How can I have the nacked messages expire as per the ttl parameter?
Am I missing some pieces of configuration?
Here is an extract from the consumer source code:
// this code is executed during setup
...
consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, e) =>
{
    // Retrieve retry count & death list if present
    List<object> DeathList = ((e?.BasicProperties?.Headers != null) &&
        e.BasicProperties.Headers.TryGetValue("x-death", out object obj)) ? obj as List<object> : null;
    int count = ((DeathList != null) &&
                 (DeathList.Count > 0) &&
                 (DeathList[0] is Dictionary<string, object> values) &&
                 values.TryGetValue("count", out obj)
                ) ? Convert.ToInt32(obj) : 0;
    // call actual event method
    switch (OnRequestReceived(e.Body, count, DeathList))
    {
        default:
            channel.BasicAck(e.DeliveryTag, false);
            break;
        case OnReceivedResult.Reject:
            channel.BasicReject(e.DeliveryTag, false);
            break;
        case OnReceivedResult.Requeue:
            channel.BasicReject(e.DeliveryTag, true);
            break;
    }
};
...
// this is the actual "OnReceived" method
static OnReceivedResult OnRequestReceived(byte[] payload, int count, List<object> DeathList)
{
    OnReceivedResult retval = OnReceivedResult.Ack; // success by default
    try
    {
        object request = MessagePackSerializer.Typeless.Deserialize(payload);
        if (request is PollRequestContainer prc)
        {
            Log.Out(Level.Info, LogFamilies.PollManager, log_method, null,
                "RequestPoll message received did={0} type=={1} items#={2}",
                prc.DeviceId, prc.Type, prc.Items == null ? 0 : prc.Items.Length);
            if (!RequestManager.RequestPoll(prc.DeviceId, prc.Type, prc.Items)) retval = OnReceivedResult.Reject;
        }
        else if (request is PollUpdateContainer puc)
        {
            Log.Out(Level.Info, LogFamilies.PollManager, log_method, null,
                "RequestUpdates message received dids#={0} type=={1}", puc.DeviceIds.Length, puc.Type);
            if (!RequestManager.RequestUpdates(puc.DeviceIds, puc.Type)) retval = OnReceivedResult.Reject;
        }
        else Log.Out(Level.Error, LogFamilies.PollManager, log_method, null,
            "Message payload deserialization error length={0} count={1}", payload.Length, count);
    }
    catch (Exception e)
    {
        Log.Out(Level.Error, LogFamilies.PollManager, log_method, null, e,
            "Exception dequeueing message. Payload length={0} count={1}", payload.Length, count);
    }
    // message is rejected only if RequestUpdates() or RequestPoll() return false
    // message is always acked if an exception occurs within the try-catch or if a deserialization type check error occurs
    return retval;
}
This state occurs when the consumer neither acks nor rejects the message after receiving it.
A message in the unacked state does not expire.
After receiving a message, you must ack or reject it.
The issue isn't that the message fails to expire; the problem is that you don't ack or reject it.
x-message-ttl=900000 controls how long a message may stay in the queue without being delivered to a consumer.
In your situation the message has already been delivered to the consumer, so it needs to be acked/rejected.
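To make the contract concrete, here is a minimal sketch with the RabbitMQ Java client (the question's consumer is C#, but the rule is the same; process() and the queue name are hypothetical): every delivery must end in exactly one ack or reject, even on failure paths, otherwise it sits unacked forever and the TTL never applies to it.

import com.rabbitmq.client.Channel;
import java.io.IOException;

void consume(Channel channel) throws IOException {
    channel.basicConsume("poll", false /* manual ack */, (consumerTag, delivery) -> {
        long tag = delivery.getEnvelope().getDeliveryTag();
        try {
            process(delivery.getBody());     // hypothetical handler
            channel.basicAck(tag, false);
        } catch (Exception e) {
            channel.basicReject(tag, false); // discard: no requeue, no DLX configured
        }
    }, consumerTag -> { });
}

void process(byte[] body) { /* ... */ }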

BleGattCharacteristicException: GATT exception status 129 when trying to Indicate->write->notify to BLE glucometer

I am trying to read all the records stored in a glucometer using the RxAndroidBle library. According to several resources I have found, the process consists of three main steps after pairing/bonding and connecting to the device:
Set up indications on the Record Access Control Point (RACP) characteristic
Set up notifications on the Glucose Measurement characteristic
Write on the RACP characteristic two bytes 0x01, 0x01.
Then the notifications should flow if there are any records.
Now, this flow has worked fine a few times on an LG G5 with Android 7.0, but on the other phones I have access to it just won't work. It throws a gruesome GATT_INTERNAL_ERROR (status 129), which is kind of ambiguous. I found this article, which describes something close to what I may be facing.
My concern is that this may be firmware related, which would be weird, because I've seen another application that connects to the glucometer work flawlessly on any device.
Here's what my test code looks like right now:
fun loadRecords(rxBleDevice: RxBleDevice) {
    ...
    ...
    rxBleDevice.establishConnection(false)
        .flatMap { rxBleConnection: RxBleConnection ->
            rxBleConnection.setupIndication(racpUUID)
                .flatMapSingle {
                    Single.just(rxBleConnection)
                }
        }
        .flatMap { rxBleConnection ->
            writeAndReadOnNotification(racpUUID,
                glucoseUUID,
                byteArrayOf(0x01, 0x01),
                false,
                rxBleConnection)
        }
        .subscribe(
            { it: ByteArray ->
                decodeReading(it)
            }, Logger::logException)
}

private fun writeAndReadOnNotification(writeTo: UUID, readOn: UUID,
                                       bytes: ByteArray,
                                       isIndication: Boolean,
                                       rxBleConnection: RxBleConnection)
        : Observable<ByteArray> {
    val notifObservable = if (isIndication)
        rxBleConnection.setupIndication(readOn)
    else
        rxBleConnection.setupNotification(readOn)
    return notifObservable.flatMap { notificationObservable ->
        Observable.combineLatest(
            notificationObservable,
            rxBleConnection.writeCharacteristic(writeTo, bytes).toObservable(),
            BiFunction { readBytes: ByteArray, writeBytes: ByteArray -> readBytes })
    }
}
and here's what the log looks like for that piece of code:
18:28:58.058 D/BluetoothGatt: connect() - device: E0:7D:EA:FF:38:AB, auto: false
18:28:58.058 D/BluetoothGatt: registerApp()
18:28:58.058 D/BluetoothGatt: registerApp() - UUID=cca42db0-a88f-4b1c-acd0-f7fbe7be536d
18:28:58.065 D/BluetoothGatt: onClientRegistered() - status=0 clientIf=7
18:28:58.518 D/BluetoothGatt: onClientConnectionState() - status=0 clientIf=7 device=E0:7D:EA:FF:38:AB
18:28:58.527 D/BluetoothGatt: discoverServices() - device: E0:7D:EA:FF:38:AB
18:28:58.532 D/BluetoothGatt: onSearchComplete() = Device=E0:7D:EA:FF:38:AB Status=0
18:28:58.873 D/BluetoothGatt: setCharacteristicNotification() - uuid: 00002a52-0000-1000-8000-00805f9b34fb enable: true
18:28:58.965 D/BluetoothGatt: setCharacteristicNotification() - uuid: 00002a18-0000-1000-8000-00805f9b34fb enable: true
18:28:59.057 D/BluetoothGatt: setCharacteristicNotification() - uuid: 00002a18-0000-1000-8000-00805f9b34fb enable: false
18:28:59.061 D/BluetoothGatt: setCharacteristicNotification() - uuid: 00002a52-0000-1000-8000-00805f9b34fb enable: false
18:28:59.066 E/None: com.polidea.rxandroidble2.exceptions.BleGattCharacteristicException: GATT exception from MAC='XX:XX:XX:XX:XX:XX', status 129 (GATT_INTERNAL_ERROR), type BleGattOperation{description='CHARACTERISTIC_WRITE'}. (Look up status 0x81 here https://android.googlesource.com/platform/external/bluetooth/bluedroid/+/android-5.1.0_r1/stack/include/gatt_api.h)
at com.polidea.rxandroidble2.internal.connection.RxBleGattCallback.propagateErrorIfOccurred(RxBleGattCallback.java:243)
at com.polidea.rxandroidble2.internal.connection.RxBleGattCallback.access$800(RxBleGattCallback.java:35)
at com.polidea.rxandroidble2.internal.connection.RxBleGattCallback$2.onCharacteristicWrite(RxBleGattCallback.java:125)
at android.bluetooth.BluetoothGatt$1$7.run(BluetoothGatt.java:438)
at android.bluetooth.BluetoothGatt.runOrQueueCallback(BluetoothGatt.java:770)
at android.bluetooth.BluetoothGatt.access$200(BluetoothGatt.java:39)
at android.bluetooth.BluetoothGatt$1.onCharacteristicWrite(BluetoothGatt.java:433)
at android.bluetooth.IBluetoothGattCallback$Stub.onTransact(IBluetoothGattCallback.java:137)
at android.os.Binder.execTransact(Binder.java:731)
18:28:59.067 D/BluetoothManager: getConnectionState()
18:28:59.067 D/BluetoothManager: getConnectedDevices
18:28:59.074 D/BluetoothGatt: cancelOpen() - device: E0:7D:EA:FF:38:AB
18:28:59.080 D/BluetoothGatt: onClientConnectionState() - status=0 clientIf=7 device=E0:7D:EA:FF:38:AB
18:28:59.083 D/BluetoothGatt: close()
18:28:59.084 D/BluetoothGatt: unregisterApp() - mClientIf=7
18:28:59.507 V/FA: Inactivity, disconnecting from the service
Did I miss something in my code? Why does it work on some phones?
I managed to solve this myself. Looking into the logs I posted earlier, I saw that both the indications and the notifications were being disabled right after being set up, so I dug into the library to find out why. It turns out that you should not set up indications before setting up notifications on any characteristic (even though they are two different characteristics). So the main steps to read this should be in this order:
Set up Glucose characteristic notifications and keep observing.
Set up indications on the RACP.
Write 0x01, 0x01 into the RACP.
Profit
Also I found this note in the library code:
/*
*NOTE: due to stateful nature of characteristics if one will setupIndication() before setupNotification()
* the notification will not be set up and will emit an BleCharacteristicNotificationOfOtherTypeAlreadySetException
*/
which led me to move the notification part before the indication part.
Here is what it looks like right now:
fun loadRecords(rxBleDevice: RxBleDevice) {
    ...
    //Do stuff
    ...
    rxBleDevice.establishConnection(false)
        .flatMap { rxBleConnection: RxBleConnection ->
            rxBleConnection.setupNotification(glucoseUUID,
                NotificationSetupMode.QUICK_SETUP)
                .flatMapSingle {
                    Single.just(Pair(it, rxBleConnection))
                }
        }
        .flatMap { (observable, rxBleConnection) ->
            writeAndReadOnNotification(racpUUID,
                racpUUID,
                byteArrayOf(0x01, 0x01),
                true,
                rxBleConnection).subscribe()
            observable
        }
        .subscribe(
            {
                decodeReading(it)
            }, Logger::logException)
}
I know it looks ugly and it needs polishing.

Ethereum private network mining

1) I set up a private Ethereum network using the following command:
$ geth --genesis <genesis json file path> --datadir <some path to an empty folder> --networkid 123 --nodiscover --maxpeers 0 console
2) Created an account
3) Then, started the miner using the miner.start() command.
After a while, ethers were being added to my account automatically, but I don't have any pending transactions in my private network. So where are my miners getting the ethers from?
Even though I didn't initiate any transactions in my network, I could see some transactions being recorded in the logs once I started the miner.
The log is as follows:
I0118 11:59:11.696523 9427 backend.go:584] Automatic pregeneration of ethash DAG ON (ethash dir: /Users/minisha/.ethash)
I0118 11:59:11.696590 9427 backend.go:591] checking DAG (ethash dir: /Users/minisha/.ethash)
I0118 11:59:11.696728 9427 miner.go:119] Starting mining operation (CPU=4 TOT=5)
true
> I0118 11:59:11.703907 9427 worker.go:570] commit new work on block 1 with 0 txs & 0 uncles. Took 7.109111ms
I0118 11:59:11.704083 9427 ethash.go:220] Generating DAG for epoch 0 (size 1073739904) (0000000000000000000000000000000000000000000000000000000000000000)
I0118 11:59:12.698679 9427 ethash.go:237] Done generating DAG for epoch 0, it took 994.61107ms
I0118 11:59:15.163864 9427 worker.go:349]
And my genesis block code is as follows:
{
  "nonce": "0xdeadbeefdeadbeef",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x0",
  "gasLimit": "0x8000000",
  "difficulty": "0x400",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x3333333333333333333333333333333333333333",
  "alloc": {
  }
}
Since my network is isolated and has only one node (no peers), I am quite confused by this behaviour. Any insights would be greatly appreciated.
Your client is mining empty blocks (containing no transactions) and getting the reward for each mined block, which is 5 ETH per block.
If you want to prevent empty blocks in your private blockchain, you should consider using the eth client (the C++ implementation).
In the case of the geth client, you can use a JavaScript script that modifies the client's behavior. Any script can be loaded with the js command: geth js script.js.
var mining_threads = 1;

function checkWork() {
    if (eth.getBlock("pending").transactions.length > 0) {
        if (eth.mining) return;
        console.log("== Pending transactions! Mining...");
        miner.start(mining_threads);
    } else {
        miner.stop(0); // This param means nothing
        console.log("== No transactions! Mining stopped.");
    }
}

eth.filter("latest", function(err, block) { checkWork(); });
eth.filter("pending", function(err, block) { checkWork(); });
checkWork();
You can also try EmbarkJS, which can run the geth client with the mineWhenNeeded option on a private network. It will then only mine when new transactions come in.