Is there an API to query or subscribe to events on the Elrond blockchain? - smartcontracts

Elrond has events that can be emitted during smart contract execution: https://docs.elrond.com/developers/developer-reference/elrond-wasm-annotations/#events
How would I query for or subscribe to these events? On Ethereum, it would be possible to monitor/query such events as they execute using web3 or the Ethereum client node RPC. Is there something similar on Elrond?

You have a few ways to use those events.
You can use the API to fetch transactions for an account, then use the transaction endpoint to get the details of each transaction and read its events.
The endpoint would be:
http://testnet-gateway.elrond.com/transaction/<txhash>?withResults=true
(Note: Event data and smart contract results will only be returned if you add the ?withResults=true query param)
You can also use the transaction processor package to process all transactions notarized on the blockchain and, again, get the event data from the transaction endpoint.
Those events are also indexed in the Elasticsearch instance, so you can query Elasticsearch for the information: either by setting up your own observer squad with Elasticsearch, or by using the public indexer. (https://testnet-index.elrond.com/logs/_search)
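To make the first option concrete, here is a minimal sketch in Java of querying the transaction endpoint with `withResults=true`. The class and method names are invented for illustration, and the tx hash is a placeholder you'd replace with a real one:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: fetch a transaction, including its events and smart contract
// results, from the Elrond testnet gateway. Names here are illustrative.
public class TxEventsFetch {

    // Build the gateway URL; ?withResults=true is required, otherwise the
    // response omits event data and smart contract results.
    static String txUrl(String gateway, String txHash) {
        return gateway + "/transaction/" + txHash + "?withResults=true";
    }

    // Perform the GET; the JSON body contains the transaction's logs/events.
    static String fetch(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder hash: substitute a real transaction hash before fetching.
        String url = txUrl("https://testnet-gateway.elrond.com", "<txhash>");
        System.out.println(url);
        // System.out.println(fetch(url)); // uncomment with a real tx hash
    }
}
```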

Related

How to implement user specific real-time data on blazor server?

I've been trying for a few days, and struggling with a best practice for this - any ideas?
Contrived private message example:
Multiple users logged into blazor server
Server subscribes to an event bus/message queue to receive NewMessageEvent
Only the user that is the intended recipient should be updated.
I can create a singleton to subscribe to the message queue.
I can then use a singleton that I inject to the required blazor component to add the message to a list and issue a stateHasChanged event.
That would update all connected clients, which is not ideal; the service injected into the components should be scoped.
Options so far:
I could verify the recipient for the message inside the Blazor component, but that feels like the wrong place.
Subscribe to the queue once per circuit (the queue still holds all messages, though).
What I was hoping to do was create a service locator based on the circuit Id and connected userId using a circuit handler, and call a function like NewMessageReceivedFor(userId); if that finds a matching circuit, call the scoped service function.
That means calling a scoped service from a singleton (not allowed by constructor DI), presumably via some form of GetRequiredService, but can I resolve that scoped service for a specific circuit Id?
I currently feel I'm either 90% there, or in the wrong forest, let alone up the wrong tree.
You could have a Singleton service for dealing with all messages, and then a Scoped service that subscribes to an event on the Singleton and then only triggers its own event if the message is for the current user (you'd need a service registered as Scoped to get the current user ID).
That way each user will only get a notification when the message is meant for them.
Don't forget to implement IDisposable on the Scoped service, so you can unsubscribe from the Singleton service.
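The shape of that pattern can be sketched as follows. This is not Blazor C# (in Blazor you'd use C# events and StateHasChanged); it is a language-neutral sketch in Java with invented names, showing the singleton hub plus per-circuit scoped subscriber that filters on the current user:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical message type: who it's for, and the content.
record Message(String recipientUserId, String body) {}

// Registered as a singleton: receives every NewMessageEvent from the queue
// and fans it out to all subscribed scoped services.
class MessageHub {
    private final List<Consumer<Message>> listeners = new CopyOnWriteArrayList<>();
    void subscribe(Consumer<Message> l) { listeners.add(l); }
    void unsubscribe(Consumer<Message> l) { listeners.remove(l); }
    void publish(Message m) { listeners.forEach(l -> l.accept(m)); }
}

// Registered as scoped (one per circuit): forwards only the current user's
// messages. close() is the IDisposable part -- unsubscribe from the singleton.
class UserMessageService implements AutoCloseable {
    private final MessageHub hub;
    private final String currentUserId;        // from the circuit's auth state
    final List<Message> inbox = new CopyOnWriteArrayList<>();
    private final Consumer<Message> handler = m -> {
        if (m.recipientUserId().equals(currentUserId)) {
            inbox.add(m);                      // here you'd raise StateHasChanged
        }
    };
    UserMessageService(MessageHub hub, String currentUserId) {
        this.hub = hub;
        this.currentUserId = currentUserId;
        hub.subscribe(handler);
    }
    @Override public void close() { hub.unsubscribe(handler); }
}
```

Each circuit's scoped service sees every published message but only surfaces the ones addressed to its own user, which is exactly the filtering described above.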

Controlling Gemfire cache updates in the background

I will be implementing a Java program that acts as a GemFire client. The program will continuously process records that it receives on its port from a remote program. Each record is processed using the static data cached with my program. The cache may get updated behind the scenes when the data changes on the GemFire server. Processing one record may take a few seconds, so I run the risk of processing half the record with static data from before the change and the rest of the record with static data from after it. Is there a way to tell GemFire not to apply updates to the local client cache until I am done processing the ongoing record?
Regards,
Yash
Consider this approach: use a continuous query ("SELECT *") instead of event registration. A CQ does not update the client region the way a subscription does. Make your client region LOCAL. After receiving the CQ event on the client, execute your long-running process and then put the value you received from the CQ into your client region. Decoupling client and server in this way will allow your client to run long-running processes.
Alternatively: if you must have the client cache proxied with the server as an absolute requirement, then keep the interest registration AND register a CQ. Ignore the event callback from the subscription but handle your long-running process using the event callback from the CQ.
The following is from page 683 at http://gemfire.docs.pivotal.io/pdf/pivotal-gemfire-ug.pdf
CQs do not update the client region. This is in contrast to other server-to-client messaging like the updates sent to satisfy interest registration and responses to get requests from the client's Pool.
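The essence of this decoupling can be sketched without any real GemFire API (the class and method names below are invented; in a real client, onCqEvent would be driven by a CqListener). CQ events land on a queue instead of mutating the local region directly, and the client applies them to its LOCAL region only between records, so each record sees one consistent snapshot of the static data:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch only: models the CQ-plus-LOCAL-region pattern, not the GemFire API.
class LocalRegionProcessor {
    final Map<String, String> localRegion = new HashMap<>(); // the LOCAL client region
    final BlockingQueue<Map.Entry<String, String>> cqEvents =
            new ArrayBlockingQueue<>(1024);                  // fed by the CQ listener

    // Called from the CQ listener thread; does NOT touch the region.
    void onCqEvent(String key, String value) {
        cqEvents.offer(Map.entry(key, value));
    }

    void processRecord(String record) {
        // ... long-running work here, reading only localRegion ...
        drainPendingUpdates(); // apply server changes only after the record is done
    }

    // Move queued CQ events into the LOCAL region between records.
    void drainPendingUpdates() {
        Map.Entry<String, String> e;
        while ((e = cqEvents.poll()) != null) {
            localRegion.put(e.getKey(), e.getValue());
        }
    }
}
```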

How to create a single NServiceBus endpoint that uses different transports?

Background
We are trying to introduce a new architectural pattern in our company and are considering CQRS with Event Sourcing using a Service Bus. Technologies we are currently developing our POC with are NServiceBus, Event Store, and MSMQ. We would like to have a single endpoint in NServiceBus defined with two different transports, MSMQ for our commands and Event Store for our events. The current state of our enterprise does not permit us to easily switch everything to Event Store presently as we have significant investment in our legacy apps using MSMQ, which is a reason why we are considering the hybrid approach.
Question
Is it possible to create a single NServiceBus endpoint that uses different transports? If yes, how? If no, what alternatives are there?
Aaron,
I think the best option would be to use MSMQ as the transport in NServiceBus. Here's how it might look:
send a command via MSMQ
in a command handler (re)create an aggregate which is the target of the command
invoke the operations
store the resulting events in the EventStore along with the command's message id to ensure idempotence. The aggregate itself will be responsible for knowing which commands it has already processed
in a separate component (event processor), use the EventStore persistent subscription API to hook into the all-events stream. Some of these processed events should trigger sending a command. Such a command might be sent via an NServiceBus send-only endpoint hosted inside this event processor.
in that event processor you can also re-publish all events via NServiceBus and MSMQ. Such events should not be subscribed to by other services (see the note on autonomy below)
NServiceBus Sagas (process managers) should live inside your service boundary and react on commands and/or events sent or re-published by these event processors over MSMQ.
One remark regarding the service boundaries is that you have to decide what level of service autonomy suits you:
* Weak, where services can directly subscribe to other services' event streams. In this design, events that cross the service boundary are obviously allowed to carry data.
* Strong, where services use higher-level events for communication and these events only carry the identity of things and no data. If you want something like this, you can use your event processors to map from ES events to these "higher level" events.
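The idempotence step in the list above (storing events together with the command's message id) can be sketched like this. All names are invented, and a real implementation would persist to EventStore rather than an in-memory list:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: an aggregate that knows which command message ids it has already
// processed, so redelivered MSMQ messages don't produce duplicate events.
class Aggregate {
    final List<String> events = new ArrayList<>();      // stand-in for the event stream
    final Set<String> processedCommandIds = new HashSet<>();

    // Returns true if the command produced events, false if it was a duplicate.
    boolean handle(String messageId, String command) {
        if (!processedCommandIds.add(messageId)) {
            return false;                               // already processed: skip
        }
        events.add("Handled:" + command);               // store the resulting event(s)
        return true;
    }
}
```

Because the processed ids are stored alongside the events, replaying the same command message is harmless: the second delivery is detected and ignored.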

Practical examples of how correlation id is used in messaging?

Can anyone give me examples of how in production a correlation id can be used?
I have read it is used in request/response type messages but I don't understand where I would use it?
One example (which may be wrong) I can think of is a publish/subscribe scenario: if I have 5 subscribers and I get 5 replies with the same correlation id, then I could say all my subscribers have received the message. Not sure if this would be the correct usage of it.
Or, if I send a simple message, I could use the correlation id to confirm that the client received it.
Any other examples?
One example: a web application provides an HTTP API to outsiders for performing a processing task, and you want to return the result to the caller as the response to the HTTP request they made.
A request comes in, and a message describing the task is pushed to a queue by the frontend server. The frontend server then blocks, waiting for a response message with the same correlation id. A pool of worker machines listens on the queue; one of them picks up the task, performs it, and returns the result as a message. Once a message with the right correlation id comes in, the frontend server returns the response to the caller.
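That flow can be sketched with in-memory queues standing in for the broker (all names here are made up). The frontend tags the task with a fresh correlation id and waits for a reply carrying the same id; the worker echoes the id back:

```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of request/response correlation over a queue; not a real broker.
class CorrelatedRpc {
    record Msg(String correlationId, String payload) {}

    final BlockingQueue<Msg> requests = new LinkedBlockingQueue<>();
    final BlockingQueue<Msg> replies  = new LinkedBlockingQueue<>();

    // "Frontend" side: send the task, then block until OUR reply arrives.
    String callWorker(String task) throws InterruptedException {
        String correlationId = UUID.randomUUID().toString();
        requests.put(new Msg(correlationId, task));
        while (true) {
            Msg reply = replies.take();
            if (reply.correlationId().equals(correlationId)) {
                return reply.payload();   // this reply answers our request
            }
            replies.put(reply);           // someone else's reply; requeue it
        }
    }

    // "Worker" side: process tasks, copying the correlation id onto the reply.
    void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Msg req = requests.take();
                    replies.put(new Msg(req.correlationId(), "done:" + req.payload()));
                }
            } catch (InterruptedException e) { /* shut down */ }
        });
        worker.setDaemon(true);
        worker.start();
    }
}
```

The requeue-on-mismatch loop is a simplification; real brokers use per-caller reply queues or selectors so each frontend only ever sees its own replies.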
In the context of CQRS and Event Sourcing, a command message's correlation id will most likely get stored together with the corresponding events from the domain. This information can later be used to form an audit trail.
Streaming engines like Apache Flink use correlation ids, much like you said, to guarantee exactness of processing.

How a WCF request can be correlated with multiple Workflow instances?

The scenario is as follows:
I have multiple clients that can register themselves on a workflow server, using WCF requests, to receive some kind of notifications. The notification information will be received from an external system using another receive activity. The workflow should then take the notification information and call back all registered clients using a send activity and callback correlations (the clients expose callback interfaces implemented on their side, and the endpoint addresses are passed initially with the registration requests). The "long-running workflow service" approach is used with persistent storage.
Now, I'm looking for some way to correlate the incoming notification information received from the external system with the workflow instances persisted earlier when the registration requests were made, so that all clients are notified using the endpoints already passed with the registration requests. Is WF 4.0 capable of resuming and executing multiple workflow instances when the notification information is received, without my storing the endpoints manually and going through them? If yes, how can I do that?
Also, if my approach of doing so is not correct, then please advice me about the best practice of doing such system using WCF services.
Your help is highly appreciated.
When you use request correlation with workflow services, the correlation key must always match a single workflow instance; you can't have multiple workflow instances react to a single message. So you either need to multicast the message using all the different correlation keys, or resume your workflow instances in some other way. That other way could be to store the request somewhere, like a SQL table, and have the workflows periodically check that location to see if they need to notify the client.
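The "multicast" option amounts to turning one external notification into one correlated message per registered instance. A minimal sketch of that fan-out (names are illustrative, not the WF 4.0 API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: each registration has its own correlation key, and an incoming
// external notification becomes one message per key, so every waiting
// workflow instance is resumed with a key that matches exactly one instance.
class NotificationFanOut {
    final Map<String, String> registrations = new HashMap<>(); // correlationKey -> client endpoint
    final List<String> sent = new ArrayList<>();               // stand-in for the send activity

    void register(String correlationKey, String clientEndpoint) {
        registrations.put(correlationKey, clientEndpoint);
    }

    // One external notification -> N correlated messages, one per instance.
    void onExternalNotification(String payload) {
        registrations.forEach((key, endpoint) ->
                sent.add("to=" + endpoint + " key=" + key + " payload=" + payload));
    }
}
```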