Viewing enqueued messages with hawtio - ActiveMQ

I'm trying to use hawtio to view messages enqueued on some topics in ActiveMQ.
But when I click on "view messages", I get a blank list as output (even though I know the contents are not blank).
This is the error message I get when browsing hawtio at localhost:8080/hawtio/, so I'm guessing something related to this is causing the problem.
Failed to get a response!
{
  "error_type": "javax.management.InstanceNotFoundException",
  "error": "javax.management.InstanceNotFoundException : org.fusesource.insight:type=LogQuery",
  "status": 404,
  "request": {
    "operation": "logResultsSince",
    "mbean": "org.fusesource.insight:type=LogQuery",
    "arguments": [ 0 ],
    "type": "exec"
  },
  "stacktrace": "javax.management.InstanceNotFoundException: org.fusesource.insight:type=LogQuery
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
    at org.jolokia.handler.ExecHandler.extractMBeanParameterInfos(ExecHandler.java:167)
    at org.jolokia.handler.ExecHandler.extractOperationTypes(ExecHandler.java:133)
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:84)
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40)
    at org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
    at org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
    at org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:102)
    at org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:91)
    at org.jolokia.backend.BackendManager.callRequestDispatcher(BackendManager.java:388)
    at org.jolokia.backend.BackendManager.handleRequest(BackendManager.java:150)
    at org.jolokia.http.HttpRequestHandler.executeRequest(HttpRequestHandler.java:197)
    at org.jolokia.http.HttpRequestHandler.handlePostRequest(HttpRequestHandler.java:131)
    at org.jolokia.jvmagent.JolokiaHttpHandler.executePostRequest(JolokiaHttpHandler.java:195)
    at org.jolokia.jvmagent.JolokiaHttpHandler.handle(JolokiaHttpHandler.java:143)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
    at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:80)
    at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:677)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)
    at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:649)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)"
}

Incidentally, ActiveMQ doesn't support browsing of topics, only queues.

You'll need to upgrade to hawt.io 1.2M27, which fixes this issue. 1.2M26 assumed the log query was always installed; M27 removed it from the defaults.

Also, we don't yet support all ActiveMQ message types; there's an open issue for that: https://github.com/hawtio/hawtio/issues/655
So if your messages are not text messages, that could be why you're not seeing the message body.


How frequently are the Azure Storage Queue metrics updated?

I observed that it took about 6 hours from the time of setting up Diagnostics (the newer offering, still in preview) for the Queue Message Count metric to move from 0 to the actual total number of messages in the queue. The other capacity metrics, Queue Capacity and Queue Count, took about 1 hour to reflect actual values.
Can anyone shed light on how these metrics are updated? It would be good to know how to predict the accuracy of the graphs.
I am concerned because if the latency of these metrics is typically this large, then an alert based on queue metrics could take too long to fire.
Update:
Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a distinct set of metrics without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
And 'Queue Message Count' is a platform metric.
So it should update every minute.
But it didn't. And this is not a problem that occurs only in the portal; even if you use the REST API to get the QueueMessageCount, it is still not updated after 1 minute:
https://management.azure.com/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/microsoft.insights/metrics?interval=PT1H&metricnames=QueueMessageCount&aggregation=Average&top=100&orderby=Average&api-version=2018-01-01&metricnamespace=Microsoft.Storage/storageAccounts/queueServices
{
  "cost": 59,
  "timespan": "2021-05-17T08:57:56Z/2021-05-17T09:57:56Z",
  "interval": "PT1H",
  "value": [
    {
      "id": "/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/Microsoft.Insights/metrics/QueueMessageCount",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "QueueMessageCount",
        "localizedValue": "Queue Message Count"
      },
      "displayDescription": "The number of unexpired queue messages in the storage account.",
      "unit": "Count",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2021-05-17T08:57:00Z",
              "average": 1.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    }
  ],
  "namespace": "Microsoft.Storage/storageAccounts/queueServices",
  "resourceregion": "centralus"
}
This may be an issue that needs to be reported to the Azure team. The latency is so large that the metric loses its practicality; I think sending an alert based on it is a bad idea (it's too slow).
Maybe you can implement your own logic in code to check the queue message count.
Just a sample (C#):
1. Get the queues, and collect all of the queue names.
2. Get the properties of each queue, and read the number of messages in it.
3. Sum the obtained numbers.
4. Send a custom alert.
A sketch of these steps follows.
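Here is a minimal sketch of those four steps, assuming the Azure.Storage.Queues SDK; the environment variable name, threshold, and alert action are placeholders to fill in:

using System;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class QueueMessageCountChecker
{
    static void Main()
    {
        // Placeholder connection string and threshold; substitute your own values.
        string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
        const int alertThreshold = 100;

        var serviceClient = new QueueServiceClient(connectionString);

        // Steps 1 and 2: enumerate the queues and read each queue's approximate message count.
        int total = 0;
        foreach (QueueItem queue in serviceClient.GetQueues())
        {
            QueueProperties props = serviceClient.GetQueueClient(queue.Name).GetProperties();
            total += props.ApproximateMessagesCount;
        }

        // Steps 3 and 4: the sum is accumulated above; raise a custom alert if it crosses the threshold.
        if (total > alertThreshold)
        {
            Console.WriteLine($"ALERT: {total} messages across all queues.");
            // e.g. post to a webhook or send an email here.
        }
    }
}

Note that ApproximateMessagesCount is, as the name says, approximate, but it is read directly from the queue service, so it does not suffer from the metric's multi-hour latency.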
Original Answer:
At first, after I sent a message to one queue in queue storage, the 'Queue Message Count' also remained stubbornly at zero on my side, but a few hours later it showed the actual count.
I thought it was a bug, but it seems to work now.

Azure Function Apps - maintain max batch size with maxDequeueCount

I have the following host.json file:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:02",
      "visibilityTimeout": "00:00:30",
      "batchSize": 16,
      "maxDequeueCount": 3,
      "newBatchThreshold": 8
    }
  }
}
I would expect that with this setup there could never be more than batchSize + newBatchThreshold instances running. But I realized that when messages are dequeued, they are run instantly, not just added to the back of the queue. This means you can end up with a very high number of instances, causing a lot of 429s (Too Many Requests). Is there any way to configure the function app to just add the dequeued messages to the back of the queue?
It was not related to maxDequeueCount. The problem was that the app was on a Consumption plan, where you can't control the number of instances. After changing to a Standard plan, it worked as expected.
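If you need to stay on the Consumption plan, one possible mitigation (my suggestion, not part of the original answer) is to cap scale-out with the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting, which the platform honors on a best-effort basis:

WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1

With at most N instances, total concurrency stays roughly bounded by N * (batchSize + newBatchThreshold), which with the host.json above is 24 per instance.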

BigQuery Load Job [invalid] Too many errors encountered

I'm trying to insert data into BigQuery using the BigQuery API C# SDK.
I created a new job with newline-delimited JSON data.
When I use:
- 100 lines of input: OK
- 250 lines of input: OK
- 500 lines of input: KO
- 2500 lines of input: KO
The error encountered is:
"status": {
"state": "DONE",
"errorResult": {
"reason": "invalid",
"message": "Too many errors encountered. Limit is: 0."
},
"errors": [
{
"reason": "internalError",
"location": "File: 0",
"message": "Unexpected. Please try again."
},
{
"reason": "invalid",
"message": "Too many errors encountered. Limit is: 0."
}
]
}
The file loads fine when I use the bq tool with the command:
bq load --source_format=NEWLINE_DELIMITED_JSON dataset.datatable pathToJsonFile
Something seems to be wrong on the server side, or maybe in how I transmit the file, but I cannot get any more detail than "internal server error".
Does anyone have more information on this?
Thank you.
"Unexpected. Please try again." could either indicate that the contents of the files you provided had unexpected characters, or it could mean that an unexpected internal server condition occurred. There are several questions which might help shed some light on this:
- Does this consistently happen no matter how many times you retry?
- Does this directly depend on the lines in the file, or can you construct a simple upload file which doesn't trigger the error condition?
One option to potentially avoid these problems is to send the load job request with configuration.load.maxBadRecords higher than zero.
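For illustration, a minimal sketch of such a load job with the Google.Apis.Bigquery.v2 client; the authenticated BigqueryService, project id, and file path are assumed to be supplied by the caller, and the destination matches the bq command above:

using System.IO;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;

static class LoadJobExample
{
    public static void LoadWithBadRecordTolerance(BigqueryService service, string projectId, string jsonFilePath)
    {
        var job = new Job
        {
            Configuration = new JobConfiguration
            {
                Load = new JobConfigurationLoad
                {
                    SourceFormat = "NEWLINE_DELIMITED_JSON",
                    MaxBadRecords = 10, // tolerate up to 10 bad rows instead of failing on the first
                    DestinationTable = new TableReference
                    {
                        ProjectId = projectId,
                        DatasetId = "dataset",
                        TableId = "datatable"
                    }
                }
            }
        };

        using (var stream = File.OpenRead(jsonFilePath))
        {
            // Media-upload variant of jobs.insert: sends the job config plus the file contents.
            var upload = service.Jobs.Insert(job, projectId, stream, "application/octet-stream");
            upload.Upload();
        }
    }
}

If the job then succeeds, the errors collection in the job status will tell you which rows were skipped, which narrows down whether the file content or the server is at fault.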
Feel free to comment with more info and I can maybe update this answer.

Frequent 503 errors raised from BigQuery Streaming API

Streaming data into BigQuery keeps failing due to the following error, which has been occurring more frequently recently:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
{
"code" : 503,
"errors" : [ {
"domain" : "global",
"message" : "Connection error. Please try again.",
"reason" : "backendError"
} ],
"message" : "Connection error. Please try again."
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:312)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1049)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:410)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
Relevant question references:
Getting high rate of 503 errors with BigQuery Streaming API
BigQuery - BackEnd error when loading from JAVA API
We (the BigQuery team) are looking into your report of increased connection errors. From the internal monitoring, there hasn't been a global spike in connection errors in the last several days. However, that doesn't mean that your tables, specifically, weren't affected.
Connection errors can be tricky to chase down, because they can be caused by errors before they get to the BigQuery servers or after they leave. The more information you can provide, the easier it is for us to diagnose the issue.
The best practice for streaming input is to handle temporary errors like this by retrying the request. It can be a little tricky, since when you get a connection error you don't actually know whether the insert succeeded. If you include a unique insertId with your data (see the documentation here), you can safely resend the request (within the deduplication window, which I think is 15 minutes) without worrying that the same row will get added multiple times.
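The original question is Java, but to stay consistent with the C# used elsewhere on this page, here is a minimal retry-safe sketch using the Google.Apis.Bigquery.v2 client; the authenticated BigqueryService and target table ids are assumed:

using System.Collections.Generic;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;

static class StreamingInsertExample
{
    public static void InsertRowWithDedup(BigqueryService service, string projectId, string datasetId,
                                          string tableId, string rowKey, IDictionary<string, object> row)
    {
        var request = new TableDataInsertAllRequest
        {
            Rows = new List<TableDataInsertAllRequest.RowsData>
            {
                new TableDataInsertAllRequest.RowsData
                {
                    InsertId = rowKey, // stable per-row id: resending after a 503 won't duplicate the row
                    Json = row
                }
            }
        };

        // On a connection error, simply call this again with the same rowKey;
        // BigQuery deduplicates rows with the same InsertId within the dedup window.
        service.Tabledata.InsertAll(request, projectId, datasetId, tableId).Execute();
    }
}

The key design point is that rowKey must be derived deterministically from the row itself (or generated once and reused on every retry), not freshly generated per attempt.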

How can you read/decrypt TimeoutData with Raven from NServiceBus?

We are having an issue with sagas that defer messages for timely execution using a versioning variable. The sagas get a date for execution, defer the messages, and send a timeout, set for five days after the last deferred message, to cancel the saga. This allows people throughout the company to resolve any errors using the saga data within the five days before the timeout.
We also offer an option to requeue deferred messages in case business rules change. We have been using this method with much success over the last several months. Recently there was a business rule change which deferred all messages for a particular client. All of the saga data seems OK, and it appears the timeouts are being reset to expire at the later date; yet when the deferred messages execute, it is stated that the saga no longer exists, and when I look, I see this as well. Furthermore, I have noticed that the deferred messages do not carry a SagaID. I verified this is nothing new, as a number of the still-queued messages do not contain a SagaID either, yet they appear to execute successfully.
My question concerns the ability to read the timeout and deferred message data. I notice they appear encoded, and what I see is an NServiceBus-built message. I was curious whether there is a way to read the message that NServiceBus creates.
{
  "Destination": {
    "Queue": "clientdata",
    "Machine": "cnapp04"
  },
  "SagaId": "00000000-0000-0000-0000-000000000000",
  "State": "PD94bWwgdmVyc2lvbj0iMS4wIiA/Pg0KPE1lc3NhZ2VzIHhtbG5zOnhzaT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiIHhtbG5zOnhzZD0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEiIHhtbG5zPSJodHRwOi8vdGVtcHVyaS5uZXQvTHNyLk1pbGl0YXJ5U2VhcmNoLlNlYXJjaC5NZXNzYWdlcyI+CjxRdWV1ZWRTZWFyY2hDb21tYW5kPgo8U2FnYUlEPmIxNmM4NDk5LTc",
  "Time": "2013-09-09T09:00:00.0000000Z",
  "CorrelationId": null,
  "OwningTimeoutManager": "ClientData",
  "Headers": {
    "WinIdName": "COMPANY\\user_name",
    "NServiceBus.Timeout.Expire": "2013-09-09 09:00:00:000000 Z",
    "NServiceBus.OriginatingSagaId": "b16c8499-72f6-4cea-89e1-a18e0101eb82",
    "NServiceBus.OriginatingSagaType": "ClientData.Search.Handlers.SalesPolicy.SaleHandler, ClientData.Search, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
    "NServiceBus.EnclosedMessageTypes": "ClientData.Search.Messages.QueuedSearchCommand, ClientData.Search, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
    "NServiceBus.RelatedTo": "db644e60-5ba1-4d26-a4ef-876855581bd5\\42719333",
    "NServiceBus.TimeSent": "2013-04-01 17:34:42:712194 Z",
    "NServiceBus.Version": "3.2.7",
    "CorrId": null
  }
}
Further, how does one utilize the CorrelationID? I am not seeing how this is set.
The State is stored as a byte array, which Raven displays as Base64. I'm not a Raven expert, so I don't know if you can view it any other way natively. You may need a bit of code to decode this value into a readable format.
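For example, the State value above is Base64, and it decodes to an XML document (the prefix "PD94bWwg" decodes to "<?xml "). A couple of lines of C# make it readable:

using System;
using System.Text;

class TimeoutStateReader
{
    // Decodes the timeout document's "State" field, which is the
    // XML-serialized message stored as a Base64-encoded byte array.
    static string DecodeTimeoutState(string base64State)
    {
        byte[] bytes = Convert.FromBase64String(base64State);
        return Encoding.UTF8.GetString(bytes);
    }

    static void Main(string[] args)
    {
        // Pass the full "State" string from the timeout document; the output
        // starts with <?xml version="1.0" ?> followed by the serialized message.
        Console.WriteLine(DecodeTimeoutState(args[0]));
    }
}

Note that the truncated State shown above won't decode as-is (Base64 needs the complete string); pass the full value from your own timeout document.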