GCP PUB/SUB io.grpc.StatusException CANCELLED - google-cloud-messaging

We are getting the error below for a GCP Pub/Sub subscription. What could be the cause here, and in which case does this method get triggered?
io.grpc.Status.asException(Status.java:541)
at com.google.cloud.pubsub.v1.StreamingSubscriberConnection.doStop(StreamingSubscriberConnection.java:139)
at com.google.api.core.AbstractApiService$InnerService.doStop(AbstractApiService.java:153)
at com.google.common.util.concurrent.AbstractService.stopAsync(AbstractService.java:281)
at com.google.api.core.AbstractApiService.stopAsync(AbstractApiService.java:129)
at com.google.cloud.pubsub.v1.Subscriber.stopConnections(Subscriber.java:394)
at com.google.cloud.pubsub.v1.Subscriber.stopAllStreamingConnections(Subscriber.java:367)
at com.google.cloud.pubsub.v1.Subscriber.access$1700(Subscriber.java:93)
at com.google.cloud.pubsub.v1.Subscriber$3.run(Subscriber.java:308)
at java.base/java.lang.Thread.run(Thread.java:832)
As per the open-source Pub/Sub client jar, this method is supposed to get triggered when the subscription is expiring and the client is trying to clean up.
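One way to see exactly which gRPC Status triggers this shutdown path is to attach a failure listener to the Subscriber. A minimal sketch, assuming the google-cloud-pubsub Java client; the project and subscription names are hypothetical:

    import com.google.api.core.ApiService;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.common.util.concurrent.MoreExecutors;
    import com.google.pubsub.v1.ProjectSubscriptionName;

    public class SubscriberDiagnostics {
      public static void main(String[] args) {
        ProjectSubscriptionName subscription =
            ProjectSubscriptionName.of("my-project", "my-subscription"); // hypothetical names

        Subscriber subscriber =
            Subscriber.newBuilder(subscription, (message, consumer) -> consumer.ack()).build();

        // The listener fires when the subscriber transitions to FAILED, e.g. with a
        // gRPC StatusRuntimeException such as CANCELLED; doStop() runs on this shutdown path.
        subscriber.addListener(
            new ApiService.Listener() {
              @Override
              public void failed(ApiService.State from, Throwable failure) {
                System.err.println("Subscriber failed (from " + from + "): " + failure);
              }
            },
            MoreExecutors.directExecutor());

        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
      }
    }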

Related

Forward redis pubsub message to specific client

I'm working on a library for interacting with the Discord API. My current setup is:
A gateway, each instance handling x shards, so that I can spin up as many of these as I like to scale well. These gateways publish received events to a Redis message queue.
A client, which subscribes to the message queue and responds to events received.
However, there are some scenarios (working with message components) where I want a specific client to handle events related to that message. This client then uses the Node.js event emitter to emit an event in itself, which is received by a 'collector' in my code.
Does anyone have recommendations on how I might stop other clients from picking up the event from the message queue, so that only this specific client picks it up? Is it possible for a subscriber to 'read' an event before it accepts it? Then all clients could read an event to see if it matches a list of events they are waiting for.

Creating multiple subs on same topic to implement load sharing (pub/sub)

I spent almost a day on the Google Pub/Sub documentation to create a small app. I am thinking of switching from RabbitMQ to Google Pub/Sub. Here is my question:
I have an app that pushes messages to a topic (T). I wanted to do load sharing via subscribers, so I created 3 subscribers to T. I kept the name of all 3 subs the same (S), so that I don't get the same message 3 times.
I have 2 issues:
Nowhere in the console do I see 3 subscribers to T; it shows 1.
If I try to start all 3 subscriber instances at the same time, I get "A service error has occurred." The error disappears if I start them sequentially.
Lastly, is Google serious about Pub/Sub? Looking at the documentation and public participation, I am not sure if I should switch to Google Pub/Sub.
Thanks,
In Pub/Sub, each subscription gets a copy of every message. So to load-balance message handling, you don't want 3 different subscriptions, but rather a single subscription that distributes messages to 3 workers.
If you are using pull delivery, simply create a single subscription (as a one-time action when you set up the system), and have each worker pull from the same subscription.
If you are using push delivery, have a single subscription pushing to a single endpoint that provides load balancing (e.g. push to an HTTP load balancer with multiple instances in a backend service).
Google is serious about Pub/Sub; it is deeply integrated into many products (GCS, BigQuery, Dataflow, Stackdriver, Cloud Functions, etc.) and Google uses it internally.
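To make the pull approach concrete, here is a minimal sketch in Java, assuming the google-cloud-pubsub client library. Every worker instance runs the same code against the single shared subscription S, and Pub/Sub distributes messages among them:

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;

    public class Worker {
      public static void main(String[] args) {
        // All workers attach to the SAME subscription; Pub/Sub spreads messages
        // across them, which gives the load-sharing ("competing consumers") behavior.
        ProjectSubscriptionName subscription =
            ProjectSubscriptionName.of("my-project", "S"); // hypothetical project ID

        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
              System.out.println("Handling: " + message.getData().toStringUtf8());
              consumer.ack(); // acknowledge so the message is not redelivered
            };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated(); // keep pulling until the process is stopped
      }
    }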
As per the documentation on GCP (https://cloud.google.com/pubsub/architecture), load-balanced subscribers are possible, but all of them have to use the same subscription. I don't have a code sample or POC ready, but I am working on the same.

Not receiving PAYMENT_UPDATED event for payment webhooks

I was able to use Square's webhook API based on the descriptions here: https://docs.connect.squareup.com/api/connect/v1#webhooks-overview
and the payment webhook was working fine.
Recently, I noticed that after completing a cash payment, my webhook event handler
is not receiving any PAYMENT_UPDATED notifications.
I'm able to receive the Test Webhook Notification trigger with my event handler service, and I did register the PAYMENT_UPDATED webhook for my location.
This service was working before; have there been any new changes to the square-connect API?
There is no guarantee that a webhook notification will successfully go through. If it fails for any reason, Square will not attempt to resend it. You should definitely use alternate methods (such as the ListTransactions endpoint) to fully verify the data.
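For example, you could periodically reconcile against the transactions for your location. A minimal sketch using plain Java HTTP, assuming the Connect v2 ListTransactions endpoint; the location ID and access token are hypothetical placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PaymentVerifier {
      public static void main(String[] args) throws Exception {
        String locationId = "LOCATION_ID";   // hypothetical
        String accessToken = "ACCESS_TOKEN"; // hypothetical

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://connect.squareup.com/v2/locations/"
                + locationId + "/transactions"))
            .header("Authorization", "Bearer " + accessToken)
            .header("Accept", "application/json")
            .GET()
            .build();

        // Run this on a schedule and reconcile the results with the webhook events
        // you did receive, so a dropped PAYMENT_UPDATED does not leave records stale.
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
      }
    }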

Google Cloud Storage Notification with Pub/Sub and docs

In the docs about GCP Storage and Pub/Sub notifications, I found this sentence that is not really clear:
Cloud Pub/Sub also offers at-least-once delivery to the recipient [that's pretty clear],
which means that you could receive multiple messages, with multiple
IDs, that represent the same Cloud Storage event [why?]
Can anyone give a better explanation of this behavior?
Thanks!
Google Cloud Storage uses at-least-once delivery to deliver your notifications to Cloud Pub/Sub. In other words, GCS will publish at least one message into Cloud Pub/Sub for each event that occurs.
Next, a Cloud Pub/Sub subscription will deliver the message to you, the end user, at least once.
So, say that in some rare case, GCS publishes two messages about the same event to Cloud Pub/Sub. Now that one GCS event has two Pub/Sub message IDs. Next, to make it even more unlikely, Pub/Sub delivers each of those messages twice. Now you have received 4 messages, with 2 message IDs, about the same single GCS event.
The important takeaway of the warning is that you should not attempt to dedupe GCS events by Pub/Sub message ID.
At-least-once delivery means that the service must receive confirmation from the recipient to ensure that the message was received. In this case, we need some sort of timeout period in order to re-send the message. It is possible, due to network latency or packet loss, etc., for the recipient to send a confirmation but for the sender not to receive it before the timeout period, in which case the sender will send the message again.
This is a common problem in network communications and distributed systems, and there are different types of messaging to address this issue.
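As a toy illustration of how a late (not lost) acknowledgment produces a duplicate, consider a sender that re-sends whenever the ack does not arrive within its timeout. Everything here is hypothetical and self-contained, not the Pub/Sub API:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class AtLeastOnceToy {
      public static void main(String[] args) throws Exception {
        BlockingQueue<String> ackChannel = new LinkedBlockingQueue<>();
        ExecutorService recipient = Executors.newSingleThreadExecutor();
        String message = "event-42";

        boolean acked = false;
        while (!acked) {
          // "Send" the message: the recipient processes it, then acks slowly.
          recipient.submit(() -> {
            System.out.println("Recipient processed: " + message); // may print twice
            try { Thread.sleep(150); } catch (InterruptedException ignored) { }
            ackChannel.offer("ack");
          });
          // The sender only waits 100 ms, shorter than the recipient's 150 ms ack
          // delay, so the first attempt gets re-sent even though it WAS processed.
          acked = ackChannel.poll(100, TimeUnit.MILLISECONDS) != null;
        }
        recipient.shutdown();
      }
    }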
To answer the question of 'why':
'At least once' delivery just means messages will be retried via some retry mechanism until successfully delivered (i.e. acknowledged). So if there's a failure or timeout, there's a retry.
By its nature (a retrying mechanism), this means you might occasionally get duplicates / more-than-once delivery. It's the same whether it's Pub/Sub or GCS notifications delivering the message.
In the scenario you quote, you have:
- The publisher (the GCS notification): may send duplicates of GCS events to the Pub/Sub topic.
- The Pub/Sub topic messages: may contain duplicates from the publisher;
  - no deduplication is done as messages come in;
  - all messages are assigned a unique Pub/Sub message_id, even if they are duplicates of the same GCS event notification.
- The Pub/Sub topic subscription(s): may also send duplicates of messages to subscribers.
With PubSub
Once a message is sent to a subscriber, the subscriber must either acknowledge or drop the message. A message is considered outstanding once it has been sent out for delivery and before a subscriber acknowledges it.
A subscriber has a configurable, limited amount of time, or ackDeadline, to acknowledge the message. Once the deadline has passed, an outstanding message becomes unacknowledged.
Cloud Pub/Sub will repeatedly attempt to deliver any message that has not been acknowledged or that is not outstanding.
Source: https://cloud.google.com/pubsub/docs/subscriber#at-least-once-delivery
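On the subscriber side, this contract boils down to acking within the deadline on success and nacking on failure so the message is redelivered. A minimal sketch with the google-cloud-pubsub Java client; handle() is a hypothetical business handler:

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.pubsub.v1.PubsubMessage;

    public class AckOrNackReceiver implements MessageReceiver {
      @Override
      public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
        try {
          handle(message);  // hypothetical business logic
          consumer.ack();   // acknowledged within the ackDeadline: no redelivery
        } catch (RuntimeException e) {
          consumer.nack();  // hand the message back immediately for redelivery
        }
      }

      private void handle(PubsubMessage message) {
        System.out.println("Got: " + message.getData().toStringUtf8());
      }
    }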
With Google Cloud Storage
GCS needs to do something similar internally to 'publish' the notification event from GCS to Pub/Sub, so the reason is essentially the same.
Why this matters
- You need to expect occasional duplicates originating from the GCS notifications as well as from the Pub/Sub subscriptions.
- The Pub/Sub message ID can be used to detect duplicates on the Pub/Sub topic -> subscriber leg.
- You have to figure out your own idempotent ID/token to handle duplicates from the 'publisher' (the GCS notification event); generation, metageneration, etc. from the resource representation might help.
If you need to de-duplicate or achieve exactly once processing, you can then build your own solution utilising the idempotent ids/tokens or see if Cloud Dataflow can accommodate your needs.
You can achieve exactly once processing of Cloud Pub/Sub message streams using Cloud Dataflow PubsubIO. PubsubIO de-duplicates messages on custom message identifiers or those assigned by Cloud Pub/Sub.
Source: https://cloud.google.com/pubsub/docs/faq#duplicates
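As a sketch of the idempotent-token approach for GCS events, assuming the documented GCS notification attributes (bucketId, objectId, objectGeneration, eventType) and an in-memory seen-set that a real system would replace with a bounded or persistent store:

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.pubsub.v1.PubsubMessage;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class DedupingReceiver implements MessageReceiver {
      // In-memory seen-event cache; replace with a TTL cache or database in practice.
      private final Set<String> seenEvents = ConcurrentHashMap.newKeySet();

      @Override
      public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
        // GCS notifications carry the event identity in message attributes,
        // independent of the Pub/Sub message ID, so duplicates from BOTH the
        // publisher and the subscription collapse to the same key.
        String eventKey = message.getAttributesOrDefault("bucketId", "")
            + "/" + message.getAttributesOrDefault("objectId", "")
            + "#" + message.getAttributesOrDefault("objectGeneration", "")
            + ":" + message.getAttributesOrDefault("eventType", "");

        if (seenEvents.add(eventKey)) {
          System.out.println("Processing " + eventKey);
          // ... actual handling ...
        } else {
          System.out.println("Skipping duplicate " + eventKey);
        }
        consumer.ack(); // ack either way so Pub/Sub stops redelivering
      }
    }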
If you are interested in a more fundamental exploration of the 'why', see:
There is No Now - Problems with simultaneity in distributed systems

How to write handler for Error queues in NServiceBus Saga?

I have a situation where MaxRetries in my MSMQ configuration is 5. After 5 tries, NServiceBus sends the message to the error queue that I have defined. Now I want to perform some further action when this happens (I have to update the status of some processes to Error).
Is it possible to write a handler in my Saga class to read these error queues?
Thanks in Advance
Haris
If you are using 2.x, you may want to consider writing a separate endpoint where the error queue is its input queue. The downside to this is that the messages will come off the queue. Assuming you still want to store them, you'll have to push them off to a database or some other type of storage.
You could also write a Saga that polls the error queue to check for messages and updates the appropriate status. After each time you check the queue, you would need to request another Timeout.
In 3.0, you have more control over the exceptions, and can implement your own way to handle the errors. If you implement the interface IManageMessageFailures, you can do your work there.
As an alternative to the solutions provided by Adam, you can subscribe to events raised by ServiceControl when a message is sent to the error queue. See the official documentation about this here: http://docs.particular.net/servicecontrol/contracts
Another approach would be the notification API as described here: http://docs.particular.net/nservicebus/errors/subscribing-to-error-notifications. It allows you to subscribe to certain events (not event messages) like "MessageSentToErrorQueue" directly on the endpoint, so you wouldn't need to consume the error queue.