I observed that it took about 6 hours from the time of setting up Diagnostics (the newer offering, still in preview) for the Queue Message Count metric to move from 0 to the actual total number of messages in the queue. The other capacity metrics, Queue Capacity and Queue Count, took about 1 hour to reflect the actual values.
Can anyone shed light on how these metrics are updated? It would be good to know how to predict the accuracy of the graphs.
I am concerned because if the latency of these metrics is typically this large, an alert based on queue metrics could take too long to fire.
Update:
Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a distinct set of metrics without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
And 'Queue Message Count' is a platform metric.
So it should update the data every minute.
But it doesn't. And this is not a problem that occurs only in the portal. Even if you use the REST API to get the QueueMessageCount, it is still not updated after one minute:
https://management.azure.com/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/microsoft.insights/metrics?interval=PT1H&metricnames=QueueMessageCount&aggregation=Average&top=100&orderby=Average&api-version=2018-01-01&metricnamespace=Microsoft.Storage/storageAccounts/queueServices
{
  "cost": 59,
  "timespan": "2021-05-17T08:57:56Z/2021-05-17T09:57:56Z",
  "interval": "PT1H",
  "value": [
    {
      "id": "/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/Microsoft.Insights/metrics/QueueMessageCount",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "QueueMessageCount",
        "localizedValue": "Queue Message Count"
      },
      "displayDescription": "The number of unexpired queue messages in the storage account.",
      "unit": "Count",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2021-05-17T08:57:00Z",
              "average": 1.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    }
  ],
  "namespace": "Microsoft.Storage/storageAccounts/queueServices",
  "resourceregion": "centralus"
}
This may be an issue that needs to be reported to the Azure team. It is so slow that it loses its practicality; I think sending an alert based on it is a bad idea (it's too slow).
Maybe you can implement your own logic in code to check the QueueMessageCount.
Just a sample (C#), sketched below:
1. Get the queues, and from them all of the queue names.
2. Get each queue's properties, which include the number of messages in the queue.
3. Sum the obtained numbers.
4. Send a custom alert.
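A minimal sketch of those steps, assuming the Azure.Storage.Queues SDK; the connection string, the alert threshold, and the SendAlert helper are placeholders you would replace with your own:

using System;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Steps 1 + 2: enumerate all queues and read each one's approximate message count
var service = new QueueServiceClient("<your-storage-connection-string>"); // placeholder
long total = 0;
foreach (QueueItem queue in service.GetQueues())
{
    QueueProperties props = service.GetQueueClient(queue.Name).GetProperties();
    total += props.ApproximateMessagesCount; // step 3: sum the obtained numbers
}

// Step 4: send a custom alert once the backlog crosses your threshold
if (total > 100) // example threshold
{
    SendAlert($"Queue backlog is {total} messages");
}

static void SendAlert(string message) => Console.WriteLine(message); // stand-in for your alerting channel

The count is approximate by design, but it is read directly from the queue service rather than from the metrics pipeline, so it does not lag by hours.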
Original Answer:
At first, after I sent a message to one queue in Queue Storage, the 'Queue Message Count' remained stubbornly at zero on my side too, but a few hours later it picked up the actual 'Queue Message Count'.
I thought it was a bug, but it seems to work well now.
Related
I have two message chains; each chain consists of ten messages. I need a query that displays these two chains as one.
The new chain must consist of ten different messages: five messages from one system and five from the other (backup) system. Messages from a given system use the same SrcMsgId value; each system has a unique SrcMsgId within the same chain. The message chain from the backup system enters Splunk immediately after the messages from the main system. Messages from the standby system also have a Mainsys_srcMsgId value, which is identical to the main system's SrcMsgId value. How can I display a chain of all ten messages? Perhaps first the messages from the first (main) system, then from the second (backup), with the time of arrival at the server displayed.
Specifically, we want to see all ten messages one after the other, in the order in which they arrived at the server: five messages from the primary, for example ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd71"), and five from the backup ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd72"). The problem is that messages from other systems also come to the server, and all messages are mixed chaotically, which is why we want to group all messages from one system and its counterpart in the search. Messages from the backup system are associated with the main system only by the "Mainsys_srcMsgId" parameter: using this key, we understand that the messages come from the backup system (secondary to the main one).
Examples of messages from the primary and secondary system:
Main system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields":
  {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F32",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "Mainsys_srcMsgId": "",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    .....................................
Message from backup system:
{
  "event": "Sourcetype test please",
  "sourcetype": "testsystem-2",
  "host": "some-host-123",
  "fields":
  {
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F23",
    "srcMsgId": "rwfsdfsfqwe121432gsgsfgd72",
    "Mainsys_srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
    "baseSystemId": "abc1",
    "routeInstanceId": "abc2",
    "routepointID": "abc3",
    "eventTime": "1985-04-12T23:20:50Z",
    "messageType": "abc4",
    "GISGMPRequestID": "PS000BA780816-E404-444A-A2D9-FFD2D1712345",
    "GISGMPResponseID": "PS000BA780816-E404-444B-A2D9-FFD2D1712345",
    "resultcode": "abc7",
    "resultdesc": "abc8"
  }
}
When we want to combine only the five messages of one chain, related by "srcMsgId", we make the following request:
index="bl_logging" sourcetype="testsystem-2"
| транзакция maxpause=5m srcMsgId Mainsys_srcMsgId messageId
| таблица _time srcMsgId Mainsys_srcMsgId messageId продолжительность eventcount
| сортировать srcMsgId_time
| streamstats current=f window=1 значения (_time) as prevTime по теме
| eval timeDiff=_time-prevTime
| delta _time как timediff
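For illustration, one way to stitch both chains together might be to derive a single chain key, using Mainsys_srcMsgId when it is present and srcMsgId otherwise, then sort by that key and arrival time (an untested sketch):

index="bl_logging" sourcetype="testsystem-2"
| eval chainId=if(isnotnull(Mainsys_srcMsgId) AND Mainsys_srcMsgId!="", Mainsys_srcMsgId, srcMsgId)
| sort 0 chainId _time
| table _time chainId srcMsgId Mainsys_srcMsgId messageId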
I have a list of jobs in my application queue (RabbitMQ).
Some of these jobs are grouped together and must be executed in order (not consecutively, but in order of dispatch time).
For example, consider these 4 jobs in the queue:
[
{ "group": "x", "dispatched_timestamp": 10001, "field1": "some data", "field2": "some other data"},
{ "group": "g", "dispatched_timestamp": 10005,"field1": "some data", "field2": "some other data"},
{ "group": "x", "dispatched_timestamp": 10005,"field1": "some data", "field2": "some other data"},
{ "group": "t", "dispatched_timestamp": 10005,"field1": "some data", "field2": "some other data"}
]
I must make sure the first job in group "x" executes successfully before the third job (same group).
But I don't care if the fourth job executes sooner than the first (or whatever).
Because sometimes it may happen that all three jobs are delivered to 3 consumers, but the first job fails for some reason (while the second and third jobs complete successfully).
I know that under these conditions there will be situations in which all jobs in the queue belong to the same group, so multiple consumers can't work on them and they must be delivered one by one. That's OK.
There's nothing in the AMQP protocol that gives you this exact behavior, but there are some ways to solve this problem:
Define a queue for each message group
Set concurrency to 1
Let me quote the message-ordering guarantee from the docs:
Section 4.7 of the AMQP 0-9-1 core specification explains the
conditions under which ordering is guaranteed: messages published in
one channel, passing through one exchange and one queue and one
outgoing channel will be received in the same order that they were
sent. RabbitMQ offers stronger guarantees since release 2.7.0.
Ref: https://www.rabbitmq.com/semantics.html
The first and foremost thing is to preserve the message ordering; once we have ordered messages, we can use controlled concurrency to handle the messages in order.
Let's say your queue has 5 messages as shown
Queue: Queue1
+--------------+
Head-->|m1|m2|m3|m4|m5| <---- Tail
+--------------+
There's the concept of competing consumers. Competing consumers means there is more than one consumer/subscriber for the same queue. If there is more than one consumer, each of them runs autonomously, which means ordering on the consumer side won't be preserved. To preserve ordering on the consumer side, we should not use competing consumers.
Even when consumers are not competing, we can still lose message ordering if we have more than one executor. More than one executor simply means we can poll the queue and hand a polled message to any of the executors. Depending on the CPU scheduling policy etc., we would still lose the ordering, so we need to restrict the number of executors to 1.
As we have only one executor, each polled message is executed in order, so execution becomes serial.
For Queue1
The executor will consume the messages in the following order:
-> m1
-> m2
-> m3
-> m4
-> m5
Still, there's one missing piece: what happens if the execution of m1 fails?
You can retry N times before consuming the next message; to achieve this, don't acknowledge a polled message until you have successfully processed it.
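For illustration, a minimal single-consumer sketch using the official RabbitMQ .NET client (RabbitMQ.Client); the queue name and the Process method are placeholders:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// one queue per message group, consumed by exactly one consumer
channel.QueueDeclare("group-x", durable: true, exclusive: false, autoDelete: false);

// prefetchCount = 1: the broker delivers the next message only after the previous one is acked
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
{
    try
    {
        Process(Encoding.UTF8.GetString(ea.Body.ToArray()));
        channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only after success
    }
    catch
    {
        // requeue so the same message is retried before anything behind it;
        // in real code, cap the retries to avoid a poison-message loop
        channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
    }
};
channel.BasicConsume("group-x", autoAck: false, consumer);

Console.ReadLine(); // keep the consumer alive

static void Process(string body) { /* your job logic goes here */ }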
From a design point of view this does not look great, since you're processing messages serially instead of in parallel, but you don't have many alternatives.
I have the following host.json file:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:02",
      "visibilityTimeout": "00:00:30",
      "batchSize": 16,
      "maxDequeueCount": 3,
      "newBatchThreshold": 8
    }
  }
}
I would expect that with this setup there could never be more than batchSize + newBatchThreshold instances running. But I realized that when messages are dequeued they are run instantly, not just added to the back of the queue. This means you can end up with a very high number of instances, causing a lot of 429s (too many requests). Is there any way to configure the function app to just add the dequeued messages to the back of the queue?
It was not related to dequeueCount. The problem was that it was a Consumption plan, where you can't control the number of instances. After changing to a Standard plan it worked as expected.
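If you need to stay on the Consumption plan, one option may be the documented WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting, which caps how far the app scales out (I have not verified this for this scenario; <app-name> and <resource-group> are placeholders):

az functionapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=2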
We are using Metronome, and we want to create a dashboard for the jobs it schedules, built against its REST API.
Alas, the job endpoint
/v1/jobs
does not contain the last state, i.e. success or failure, but only its configuration.
Googling how to get the history of a job, I found out that I can query the job history through the embed=history GET parameter for each jobId.
I could thus fetch the ID list and then fetch each job's history through:
/v1/jobs/{job_id}?embed=history
Yet this includes all the runs and also requires us to fetch each job individually.
Is there a way to get the metronome job status without querying all the jobs one by one?
You can click on each GET or POST endpoint on the official docs to see if it supports additional query params.
The endpoint for jobs indeed supports historic data.
You can use embed=history or embed=historySummary; for your use case embed=historySummary is better suited, as it only contains the counts and timestamps of the last runs, in the following form, and is less expensive and time-consuming:
[
  {
    "id": "your_job_id",
    "historySummary": {
      "failureCount": 6,
      "lastFailureAt": "2018-01-26T12:18:46.406+0000",
      "lastSuccessAt": "2018-04-19T13:50:14.132+0000",
      "successCount": 226
    },
    ...
  },
  ...
]
You can compare those dates to figure out whether the last run was successful; a sketch of that comparison follows the next example. Keep in mind that lastFailureAt and lastSuccessAt might be null, as a job might never have been run in the first place:
{
  "id": "job-that-never-ran",
  "labels": {},
  "run": {
    ...
  },
  "historySummary": {
    "successCount": 0,
    "failureCount": 0,
    "lastSuccessAt": null,
    "lastFailureAt": null
  }
}
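A minimal sketch of that comparison in C#, assuming System.Text.Json; the Metronome host is a placeholder, and since both timestamps share the same format and offset, ordinal string comparison is enough:

using System;
using System.Net.Http;
using System.Text.Json;

// fetch every job's history summary in a single request
using var http = new HttpClient();
string json = await http.GetStringAsync("http://<metronome-host>:9000/v1/jobs?embed=historySummary"); // placeholder host

using JsonDocument doc = JsonDocument.Parse(json);
foreach (JsonElement job in doc.RootElement.EnumerateArray())
{
    JsonElement summary = job.GetProperty("historySummary");
    string lastSuccess = summary.GetProperty("lastSuccessAt").GetString(); // null if it never succeeded
    string lastFailure = summary.GetProperty("lastFailureAt").GetString(); // null if it never failed

    string status = (lastSuccess, lastFailure) switch
    {
        (null, null) => "never ran",
        (null, _) => "last run failed",
        (_, null) => "last run succeeded",
        _ => string.CompareOrdinal(lastSuccess, lastFailure) > 0 ? "last run succeeded" : "last run failed"
    };
    Console.WriteLine($"{job.GetProperty("id").GetString()}: {status}");
}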
I'm new to blockchain. I understand that the blockchain keeps a record of all transactions and that each transaction is signed with a private key. However, why can't anyone enter an arbitrary amount in a Bitcoin transaction? Say address a only has 1 bitcoin, but its owner creates a transaction for 100 bitcoins and still signs it. What is Bitcoin's mechanism for verifying the outgoing and incoming amounts of a transaction?
Bitcoin's blockchain contains a historical record of all transactions which have ever occurred on it. Clients can certainly choose to store less, and the blockchain can be pruned by not storing transactions which were already spent long ago.
Bitcoin addresses don't technically have a "balance" in the sense of a traditional bank ledger. Instead, an address has the ability to spend transactions which were sent to it.
To delve into technical details, let's look at the address 1PkCAVKjPz1YK7iJwT8xTLxBXR1av8dL98 (which I own).
I received a very small transaction of 0.004 BTC recently, in the transaction with the TxID 432794be2e056275cafb0eeb7ab59a24444dd4c9e00cd9702a49c2a655a3e705.
The (hex-encoded) raw data of this transaction is: 0100000001e9a24c1d1b8d10b13482cdcbbb90d894577292c4d0c0c1427411fb9d82ea710c010000006b483045022100d9a5433c1381b39b7e02b0b0f042990e7c16cfea252b05ccfef2e85c2dab2a6f022057c7def782fe3b0d7e5e0eae277d2a5890844da7d72309817a2dac22a6307c6001210390d78cb0c1d34d4417db7e0a9a9f125a689dc29dc2197a01a5f827a20f870f62ffffffff01801a0600000000001976a914f97df8f593e0056d337c274fd81a163f47a17d3788ac00000000
Which in its human-readable form is:
{
  "txid": "432794be2e056275cafb0eeb7ab59a24444dd4c9e00cd9702a49c2a655a3e705",
  "size": 192,
  "version": 1,
  "locktime": 0,
  "vin": [
    {
      "txid": "0c71ea829dfb117442c1c0d0c492725794d890bbcbcd8234b1108d1b1d4ca2e9",
      "vout": 1,
      "scriptSig": {
        "asm": "3045022100d9a5433c1381b39b7e02b0b0f042990e7c16cfea252b05ccfef2e85c2dab2a6f022057c7def782fe3b0d7e5e0eae277d2a5890844da7d72309817a2dac22a6307c60[ALL] 0390d78cb0c1d34d4417db7e0a9a9f125a689dc29dc2197a01a5f827a20f870f62",
        "hex": "483045022100d9a5433c1381b39b7e02b0b0f042990e7c16cfea252b05ccfef2e85c2dab2a6f022057c7def782fe3b0d7e5e0eae277d2a5890844da7d72309817a2dac22a6307c6001210390d78cb0c1d34d4417db7e0a9a9f125a689dc29dc2197a01a5f827a20f870f62"
      },
      "sequence": 4294967295
    }
  ],
  "vout": [
    {
      "value": 0.00400000,
      "n": 0,
      "scriptPubKey": {
        "asm": "OP_DUP OP_HASH160 f97df8f593e0056d337c274fd81a163f47a17d37 OP_EQUALVERIFY OP_CHECKSIG",
        "hex": "76a914f97df8f593e0056d337c274fd81a163f47a17d3788ac",
        "reqSigs": 1,
        "type": "pubkeyhash",
        "addresses": [
          "1PkCAVKjPz1YK7iJwT8xTLxBXR1av8dL98"
        ]
      }
    }
  ]
}
So the address 1PkCAVKjPz1YK7iJwT8xTLxBXR1av8dL98 is able to "spend" the transaction 432794be2e056275cafb0eeb7ab59a24444dd4c9e00cd9702a49c2a655a3e705.
The output value of that transaction is 0.004 BTC, so I can't make a Bitcoin transaction which attempts to spend more. However, let's try to do it anyway.
I'll create a raw transaction which attempts to output 0.01 BTC to 1MgLu9L7ftmGQM84xhKYKw8pTXiSANwggs from the transaction with an output balance of 0.004 BTC:
bitcoin-rpc createrawtransaction '[{"txid":"432794be2e056275cafb0eeb7ab59a24444dd4c9e00cd9702a49c2a655a3e705","vout":0}]' '{"1MgLu9L7ftmGQM84xhKYKw8pTXiSANwggs":0.01}'
Returns the raw transaction:
010000000105e7a355a6c2492a70d90ce0c9d44d44249ab57aeb0efbca7562052ebe9427430000000000ffffffff0140420f00000000001976a914e2d3595bd0a55c16f4b19f5cd996568dd7e811f688ac00000000
I can then sign the transaction:
bitcoin-rpc signrawtransaction 010000000105e7a355a6c2492a70d90ce0c9d44d44249ab57aeb0efbca7562052ebe9427430000000000ffffffff0140420f00000000001976a914e2d3595bd0a55c16f4b19f5cd996568dd7e811f688ac00000000
which returns:
{
  "hex": "010000000105e7a355a6c2492a70d90ce0c9d44d44249ab57aeb0efbca7562052ebe942743000000006b483045022100ce3fad8ccdee48f1fe9060ef81624d3bbe721293feb8ee06a96751e65b9c423e0220106a3e80d5fdf93df5dbf037d8cfd32af70a405586e12294c937308a3c57b10e012102f2acb810346866908108dd86462ee5400b15786739f5e908711d2d15d9dd2238ffffffff0140420f00000000001976a914e2d3595bd0a55c16f4b19f5cd996568dd7e811f688ac00000000",
  "complete": true
}
And I can take that returned hex, which is a validly-formatted transaction, and submit it to the network:
bitcoin-rpc sendrawtransaction 010000000105e7a355a6c2492a70d90ce0c9d44d44249ab57aeb0efbca7562052ebe942743000000006b483045022100ce3fad8ccdee48f1fe9060ef81624d3bbe721293feb8ee06a96751e65b9c423e0220106a3e80d5fdf93df5dbf037d8cfd32af70a405586e12294c937308a3c57b10e012102f2acb810346866908108dd86462ee5400b15786739f5e908711d2d15d9dd2238ffffffff0140420f00000000001976a914e2d3595bd0a55c16f4b19f5cd996568dd7e811f688ac00000000
Which gives me the error:
66: insufficient priority (code -26)
This is a client-side error, but if I were to successfully broadcast the raw transaction to the network, other peers would simply look up the referenced (or "spent") transaction 432794be2e056275cafb0eeb7ab59a24444dd4c9e00cd9702a49c2a655a3e705 and see that the output total of my new transaction is greater than the output total of the transaction I'm attempting to spend.
There is one exception to this rule: coinbase transactions generate Bitcoins for miners, and thus are allowed to output the correct block subsidy (originally 50 BTC, but currently 12.5 BTC after the halving about a month and a half ago) plus the transaction fees of all of the transactions contained within the block.
I know this post is already old, but there is a complete list of the rules for validating a bitcoin transaction:
https://en.bitcoin.it/wiki/Protocol_rules#.22tx.22_messages
Maybe this link on how Bitcoin transactions work will help you. Look at the section called "What if the input and output amounts don't match?"
Also, since a blockchain uses a distributed ledger, all nodes validate a transaction before it is accepted. Furthermore, there should be auditors on the chain who make sure fraudulent activities don't happen. Hope this helps.