Introduction to NServiceBus RetailDemo sample not able to run due to performance counters - nservicebus

Using the NServiceBus 6.4.3 sample, I am getting the following error while running:
2018-02-11 13:35:18.008 INFO DefaultFactory Logging to 'D:\MyProjects\NServiceBus\NservicebusQuickStart\RetailDemo\Sales\bin\Debug\' with level Info
2018-02-11 13:35:18.320 INFO NServiceBus.LicenseManager No valid license could be found, falling back to trial license with start date '31-01-2018'
2018-02-11 13:35:18.633 INFO NServiceBus.PerformanceMonitorUsersInstaller Skipped adding user 'dbm\admin' to group 'Performance Monitor Users' because the user is already in group.
2018-02-11 13:35:18.695 INFO NServiceBus.PerformanceCounterHelper NServiceBus performance counter for '# of msgs pulled from the input queue /sec' is not set up correctly. To rectify this problem, consult the NServiceBus performance counters documentation.
2018-02-11 13:35:18.711 INFO NServiceBus.PerformanceCounterHelper NServiceBus performance counter for '# of msgs successfully processed / sec' is not set up correctly. To rectify this problem, consult the NServiceBus performance counters documentation.
2018-02-11 13:35:18.711 INFO NServiceBus.PerformanceCounterHelper NServiceBus performance counter for '# of msgs failures / sec' is not set up correctly. To rectify this problem, consult the NServiceBus performance counters documentation.
Press Enter to exit.
Is there any workaround?

These are informational log messages and do not require a workaround.
NServiceBus cannot find a valid license file, so it falls back to a trial license.
For the performance counters, you would need to install them; see the NServiceBus performance counters documentation.

Related

AWS CloudWatch parsing for logging type

My CloudWatch log arrives in the following format:
2022-08-04T12:55:52.395Z 1d42aae9-740f-437d-bdf1-4e8c747e0f04 INFO 14 Field Service activities within Launch Advisory are a core set of activities and recommendations that are proven to support successful deployments and accelerate time-to-value. For customers implementing an AEC Product for the first time, the first year of Field Services available to the Customer will be comprised of Launch Advisory activities only. Google’s Launch Advisory services team will work with the Customer's solution implementation team to guide, assess, and make recommendations for the implementation of newly licensed APAC Products..
2022-08-04T12:55:52.395Z: the timestamp
1d42aae9-740f-437d-bdf1-4e8c747e0f04: the request ID
INFO: the logging type
The rest is the actual message.
I want to parse the above fields out of the message. Taking the AWS documentation as a reference, I started writing the following query, but it is not working:
fields #timestamp, #message, #logStream
| PARSE #message "* [*] [*] *" as loggingTime, requestId, loggingType, loggingMessage
| sort #timestamp desc
| display loggingTime, requestId, loggingType, loggingMessage
| limit 200
But the above parse expression is not working. Can someone suggest how this message can be parsed?
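Since the message has no literal delimiters such as brackets for the glob pattern to anchor on, a regex-based parse with named capture groups may be a better fit; note also that the built-in Logs Insights fields are written with an @ prefix (@timestamp, @message, @logStream). A rough, untested sketch along those lines (the capture group names match the fields described above):
fields @timestamp, @message, @logStream
| parse @message /(?<loggingTime>\S+)\s+(?<requestId>\S+)\s+(?<loggingType>\S+)\s+(?<loggingMessage>.+)/
| sort @timestamp desc
| display loggingTime, requestId, loggingType, loggingMessage
| limit 200
The first three groups capture the whitespace-delimited timestamp, request ID and logging type, and the last group captures the remainder of the message.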

Cisco AXL Error Codes - Access to documentation, database table, etc

I found some AXL Error Codes, 5000 - 5007 (5006 is missing), but there have to be more.
For example, I also received a -239.
Is there any documentation on the AXL Error Codes?
Perhaps there is a database table containing the AXL Error Codes.
Like the database tables from Cisco Unified Communications Manager 12.5(1) Database Dictionary:
https://developer.cisco.com/docs/axl/#!12-5-cucm-data-dictionary
There are tables like
typeadminerror
typedberrors
containing error codes and messages, but none for the AXL Error Codes (like those mentioned above: 5000 - 5007, -239).
Where are those AXL Error Codes defined, and the messages?
The AXL error codes are not documented and may change per release (though it is very unlikely you will see anything but additions), but they can be useful when working with Cisco Developer Support.
Examining the AXL service logs on CUCM may provide some clues for the errors as you encounter them.

Ignite TcpCommunicationSpi: Can slowClientQueueLimit be set to the same value as messageQueueLimit as per the docs?

I am not completely sure of the meaning of, or the interplay between, slowClientQueueLimit and messageQueueLimit.
As per the documentation, they should ideally both be set to the same value: https://ignite.apache.org/releases/2.4.0/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setSlowClientQueueLimit-int-
However, when I do set that, I see the following in the logs. Is it a minor bug in the check, or should I change my configuration?
[WARN ] 2018-06-27 22:32:18.429 [main] org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi - Slow client queue limit is set to a value greater than message queue limit (slow client queue limit will have no effect) [msgQueueLimit=1024, slowClientQueueLimit=1024]
Thanks
Judging from the code, the warning is correct but the Javadoc is not. slowClientQueueLimit has to be less than msgQueueLimit, because when a message is being prepared for sending, the back-pressure limits are checked first, and only then the slow client queue limit. If the two values are equal, the sender thread will be blocked by back pressure before it ever reaches the slow client check, which means the client would never be dropped.
Set slowClientQueueLimit to msgQueueLimit - 1 or less, and I'll suggest to the community that the docs be fixed.
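For illustration, a minimal Java sketch of such a configuration (values taken from the warning above; the class name is only illustrative):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommSpiConfig {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        // Back-pressure limit for outbound messages per connection.
        commSpi.setMessageQueueLimit(1024);
        // Strictly less than the message queue limit, so the slow-client
        // check is reached before back pressure blocks the sender thread.
        commSpi.setSlowClientQueueLimit(1023);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Node started with the adjusted communication SPI.
        }
    }
}
With slowClientQueueLimit at 1023 the warning goes away, because the slow client check can now trigger before the back-pressure limit is hit.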

Authentication failures in Cassandra when 1 of 16 nodes is down

I have a Cassandra cluster running:
Cassandra 2.0.11.83 | DSE 4.6.0 | CQL spec 3.1.1 | Thrift protocol 19.39.0
The cluster has 18 nodes, split among 3 datacenters, 6 in each. My system_auth keyspace has the following replication defined:
replication = {
'class': 'NetworkTopologyStrategy',
'DC1': '4',
'DC2': '4',
'DC3': '4'}
and my authenticator/authorizer are set to:
authenticator: org.apache.cassandra.auth.PasswordAuthenticator
authorizer: org.apache.cassandra.auth.CassandraAuthorizer
This morning I brought down one of the nodes in DC1 for maintenance. Within a few seconds to a minute, client applications started logging exceptions like this:
"User my_application_user has no MODIFY permission on or any of its parents"
Running 'LIST ALL PERMISSIONS of my_application_user' on one of the other nodes shows that user to have SELECT and MODIFY on the keyspace xxxxx, so I am rather confused. Do I have a setup issue? Is this a bug of some sort?
Re-posting this as the answer, as BrianC suggested above.
So this is resolved... Here's the sequence of events that seems to have fixed it:
Add 18 more nodes
Run cleanup on original nodes (this was part of the original plan)
Run a scrub on 1 table, since it was throwing exceptions on cleanup
Run a repair on the system_auth KS on the original troubled node
Wait for repair service to complete a full pass on all keyspaces
Decommission the original 18 nodes.
Honestly, I don't know what fixed it. The system_auth repair makes the most sense, but what doesn't make sense is that the repair service had already run many passes before, so why it worked now I don't know. I hope this at least helps someone.
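For reference, the system_auth repair step in the list above boils down to running something like this on the troubled node (a sketch; repair options vary by environment and version):
nodetool repair system_auth
and then re-running LIST ALL PERMISSIONS OF my_application_user; from cqlsh to confirm the grants are visible again.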

Spark execution occasionally gets stuck at mapPartitions at Exchange.scala:44

I am running a Spark job on a two node standalone cluster (v 1.0.1).
Spark execution often gets stuck at the task mapPartitions at Exchange.scala:44.
This happens at the final stage of my job in a call to saveAsTextFile (as I expect from Spark's lazy execution).
It is hard to diagnose the problem because I never experience it in local mode with local IO paths, and occasionally the job on the cluster does complete as expected with the correct output (same output as with local mode).
This seems possibly related to reading a ~170MB file from S3 immediately prior, as I see the following logging in the console:
DEBUG NativeS3FileSystem - getFileStatus returning 'file' for key '[PATH_REMOVED].avro'
INFO FileInputFormat - Total input paths to process : 1
DEBUG FileInputFormat - Total # of splits: 3
...
INFO DAGScheduler - Submitting 3 missing tasks from Stage 32 (MapPartitionsRDD[96] at mapPartitions at Exchange.scala:44)
DEBUG DAGScheduler - New pending tasks: Set(ShuffleMapTask(32, 0), ShuffleMapTask(32, 1), ShuffleMapTask(32, 2))
The last logging I see before the task apparently hangs/gets stuck is:
INFO NativeS3FileSystem: Opening key '[PATH_REMOVED].avro' for reading at position '67108864'
Has anyone else experienced non-deterministic problems related to reading from S3 in Spark?