I have a scenario where, when a key set in Redis reaches a certain count, say n, I need to send an HTTP POST request. Is this possible using Lua scripting in Redis?
Any help regarding this or alternate method to do this is appreciated.
PS: I have just started using Redis, so please correct me if my understanding is wrong.
No, you can't: Lua scripts in Redis run in a sandbox and cannot make network calls.
You should check out the Redis module RedisGears.
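For illustration, here is a rough sketch of what that could look like with RedisGears' Python API. Treat it as a hedged sketch: the counter:* prefix, the threshold of 100, and the webhook URL are all made-up placeholders, and the requests package would have to be installed into the Gears Python environment (e.g. via the REQUIREMENTS argument of RG.PYEXECUTE).

# Hypothetical RedisGears 1.x sketch: fire on INCR/INCRBY events for
# keys matching counter:* and POST to a placeholder webhook once the
# value crosses a made-up threshold. GB() and execute() are globals
# provided by the RedisGears Python runtime.
import requests

THRESHOLD = 100  # the "n" from the question; placeholder value

def notify(record):
    val = execute('GET', record['key'])  # run a Redis command from the script
    if val is not None and int(val) >= THRESHOLD:
        requests.post('http://example.com/hook',
                      json={'key': record['key'], 'count': int(val)})

GB('KeysReader').foreach(notify).register(prefix='counter:*',
                                          eventTypes=['incr', 'incrby'])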
I have an application that depends on the reliability of being able to contact the network through Alchemy’s RPC mainnet API.
During the merge, it is crucial that I am able to interact with the PoS chain ASAP. Should I rely on Alchemy’s API for this or do I need a different method or node or something?
Thanks.
Unless I'm misremembering, no. I seem to remember Infura saying that they will have ~12 minutes of downtime around the merge just in case anything goes wrong. I imagine that Alchemy will do likewise.
I recommend you run your own geth node.
I am new to MuleSoft. I have an API in my application which can't handle more than 2000 parallel requests. I am thinking of using MuleSoft as a proxy API which takes the request and hits my API, so that even if my API reaches its capacity, MuleSoft will pause for some time and hit my API again without losing any data.
Does MuleSoft solve my issue? If so, can anyone please guide me through the process?
Thanks
You probably would want something as simple as the until-successful scope. You can read up more about that here. The premise of it is this:
You wrap a component in the until-successful scope, and you define the following:
How a failure or a success is defined
How many times you want to retry the component before declaring an overall failure
How much time should elapse between each call
There are examples in the documentation that I linked to, and those should help guide you!
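Outside of Mule, the premise is just retry-with-a-delay. Purely as an illustration of that logic (this is plain Python, not Mule configuration; until_successful, max_retries, and wait_seconds are made-up names):

# Illustrative retry-until-successful logic: try a call a fixed number
# of times, waiting between attempts, and fail only after the last try.
import time

def until_successful(call, max_retries=5, wait_seconds=2):
    for attempt in range(1, max_retries + 1):
        try:
            return call()  # success: stop retrying
        except Exception:
            if attempt == max_retries:
                raise  # overall failure after the final attempt
            time.sleep(wait_seconds)  # time elapsed between calls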
What is the peak load (number of requests) your application API is expected to handle?
A Mule API proxy can be used for response caching here. This means that for similar requests, your API won't get hit; the responses would be sent back from the proxy itself. But this alone probably won't solve the load issue.
You might have to load-balance your API depending on the peak load requirement.
Is there a way to get the value of the "Consumer utilisation" (as seen in the RabbitMQ overview of the queues) via the RabbitMQ client? Can I ask for and react to this value via the API?
Regards.
I found a solution in the meantime. And I think the question is important, as it is needed for bulk and monitoring purposes. But as the question was even downvoted, I will not post an answer here. It seems it's only important to me.
The following will return the metric for all queues:
rabbitmqctl list_queues name messages messages_ready state consumer_utilisation --formatter json
Or via REST api for a given queue:
curl http://guest:guest@127.0.0.1:15672/api/queues/%2F/my-queue
Which gives out consumer_utilisation as one of the returned properties. Note it returns null if the value is too small. For an accurate read, use rabbitmqctl.
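The same REST call can be made from code; for example, a small Python sketch with the requests library, using the same host, credentials, and queue name as the curl example above:

# Query consumer_utilisation for one queue via the management HTTP API.
import requests

resp = requests.get(
    'http://127.0.0.1:15672/api/queues/%2F/my-queue',  # %2F is the default vhost "/"
    auth=('guest', 'guest'),
)
resp.raise_for_status()
# May be None (JSON null) when the value is too small to report.
print(resp.json().get('consumer_utilisation'))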
For a new auction system, I am trying to decide which technology is best for me.
When there is a new bid, I want to notify the listening users on the auction page. This is something for a pub/sub technique, I presume.
First I did take a look at RabbitMQ, and I think this is a good way to build this. But it means I have an extra single point of failure.
So now I am leaning towards Redis Pub/Sub. I know it has disadvantages, because when a user is not listening it won't re-send messages. But that is not a problem: when a user signs in, they get all the current bids and then only want updates. I don't plan to create a chat with a history.
What can you advise? Are there any more disadvantages to using Redis for this? How about the stability? When a bid occurs and I want to send the newest price to all listening users, how certain am I that everyone gets the message?
Does anyone have experience with this situation?
Thanks
Pro: Redis is much simpler than RabbitMQ to set up.
Con: there is no guarantee of delivery with Redis; its pub/sub is fire-and-forget.
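To make the fire-and-forget point concrete, here is a minimal sketch with the redis-py client (the channel name and payload are made up). publish() only reaches subscribers that are connected at that moment; anyone else never sees the message.

# Minimal Redis pub/sub sketch; delivery is fire-and-forget.
import redis

r = redis.Redis()

# Publisher: returns the number of subscribers that received the message.
r.publish('auction:42', '{"bid": 105.00}')

# Subscriber: only messages published while subscribed are delivered.
p = r.pubsub()
p.subscribe('auction:42')
for message in p.listen():
    if message['type'] == 'message':
        print(message['data'])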
I assume that by "page" you mean a standard HTML page with PHP on the backend. If yes, then your main problem is not "should I use Redis or RabbitMQ", because you cannot make a direct connection between your user's browser and Redis or RabbitMQ.
First you have to answer for yourself how you will provide updates to the page:
by regular AJAX requests asking "is there anything new for me"
by using some implementation of WebSockets
and only after choosing will you see whether a pub/sub mechanism has any use at all in your situation.
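If you go the WebSocket route, the backend piece is essentially a relay between Redis pub/sub and the browser connections. A rough sketch using the third-party websockets and redis packages (the channel name and port are placeholders, and the handler signature assumes a recent websockets release):

# Hedged sketch: relay Redis pub/sub messages to browsers over WebSockets.
import asyncio
import redis.asyncio as aioredis
import websockets

async def relay(websocket):
    r = aioredis.Redis()
    pubsub = r.pubsub()
    await pubsub.subscribe('auction:42')  # placeholder channel
    async for message in pubsub.listen():
        if message['type'] == 'message':
            await websocket.send(message['data'].decode())

async def main():
    async with websockets.serve(relay, 'localhost', 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())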
With web services it is considered good practice to batch several service calls into one message to reduce the number of remote calls. Is there any way to do this with RESTful services?
If you really need to batch, HTTP/1.1 supports a concept called HTTP pipelining that allows you to send multiple requests before receiving a response. Check it out here.
I don't see how batching requests makes any sense in REST. Since the URL in a REST-based service represents the operation to perform and the data on which to perform it, making batch requests would seriously break the conceptual model.
An exception would be if you were performing the same operation on the same data multiple times. In this case you can either pass in multiple values for a request parameter or encode this repetition in the body (however this would only really work for PUT or POST). The Gliffy REST API supports adding multiple users to the same folder via
POST /folders/ROOT/the/folder/name/users?userId=56&userId=87&userId=45
which is essentially:
PUT /folders/ROOT/the/folder/name/users/56
PUT /folders/ROOT/the/folder/name/users/87
PUT /folders/ROOT/the/folder/name/users/45
As the other commenter pointed out, paging results from a GET can be done via request parameters:
GET /some/list/of/resources?startIndex=10&pageSize=50
if the REST service supports it.
I agree with Darrel Miller. HTTP already supports pipelining, which lets you stream new requests to the server without waiting for the responses, and HTTP keep-alive lets you reuse the same socket for multiple operations instead of opening a new connection each time.
So with HTTP pipelining and keep-alive you get the effect of batching while using the same underlying REST API, so there's usually no need for a separate batching API for your service.
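As an aside, most HTTP clients give you keep-alive for free even though few expose true pipelining. For example, in Python, a requests.Session reuses one TCP connection across calls (the URL below is a placeholder based on the Gliffy example above):

# Keep-alive sketch: a Session reuses the underlying TCP connection,
# so sequential calls avoid repeated connection setup. Note this is
# keep-alive, not pipelining (requests does not pipeline).
import requests

with requests.Session() as s:
    for user_id in (56, 87, 45):
        s.put(f'http://example.com/folders/ROOT/the/folder/name/users/{user_id}')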
The Astoria team made good use of multipart MIME to send a batch of calls. It differs from pipelining in that the multipart message can convey the intent of an atomic operation. Seems rather elegant.
Original blog post explaining the rationale
MSDN documentation
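For a flavor of what such a batch looks like on the wire, here is a rough sketch that POSTs an OData-style $batch request; the endpoint, boundary, and inner request are all made up:

# Hypothetical multipart/mixed batch request, Astoria/OData style:
# each part wraps a full inner HTTP request.
import requests

boundary = 'batch_abc123'
body = (
    f'--{boundary}\r\n'
    'Content-Type: application/http\r\n'
    'Content-Transfer-Encoding: binary\r\n'
    '\r\n'
    'GET /service/Customers(1) HTTP/1.1\r\n'
    'Host: example.com\r\n'
    '\r\n'
    f'--{boundary}--\r\n'
)
requests.post(
    'http://example.com/service/$batch',  # placeholder endpoint
    data=body,
    headers={'Content-Type': f'multipart/mixed; boundary={boundary}'},
)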
Of course there is a way, but it would require server-side support. There is no magical one-size-fits-all methodology that I know of.