Collectd cpu plugin: Invalid value for type disk_io_time

I am using the collectd cpu plugin and collecting the log messages with Logstash. In Logstash I see the following errors; does anyone know how to fix them?
{:timestamp=>"2016-07-15T21:03:53.481000+0000", :message=>"Invalid value for type=\"disk_io_time\", key=nil, index=1", :level=>:error}
{:timestamp=>"2016-07-15T21:03:53.482000+0000", :message=>"Invalid value for type=\"disk_io_time\", key=nil, index=0", :level=>:error}
{:timestamp=>"2016-07-15T21:03:53.482000+0000", :message=>"Invalid value for type=\"disk_io_time\", key=nil, index=1", :level=>:error}
{:timestamp=>"2016-07-15T21:03:53.483000+0000", :message=>"Invalid value for type=\"disk_io_time\", key=nil, index=0", :level=>:error}
{:timestamp=>"2016-07-15T21:03:53.484000+0000", :message=>"Invalid value for type=\"disk_io_time\", key=nil, index=1", :level=>:error}
My collectd version is 5.5.1.

See the Logstash collectd codec documentation:
If no types.db is provided the included types.db will be used (currently 5.4.0)
It seems there were some changes in collectd 5.5.1 (see here). Therefore, you need to set typesdb explicitly, for example:
input {
  udp {
    codec => collectd {
      typesdb => [ '/usr/share/collectd/types.db' ]
    }
  }
}
See https://collectd.org/documentation/manpages/types.db.5.shtml in order to determine the location of types.db in your installation.
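Note also that the Logstash udp input requires a port. A fuller sketch (the path is an example that varies by installation; 25826 is the collectd network plugin's default port, and 1452 matches collectd's default packet size):
input {
  udp {
    port => 25826          # collectd network plugin default
    buffer_size => 1452    # matches collectd's default packet size
    codec => collectd {
      typesdb => [ '/usr/share/collectd/types.db' ]
    }
  }
}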

How to decipher Sentry warning: "span of type Connection with operation id (null).."

Using Sentry.AspNetCore in a .NET Core app, I get these warnings:
warn: Sentry.ISentryClient[0]
=> SpanId:eed7125b44901343, TraceId:bde9ab9c677f6c45a343a489f37e55c1, ParentId:0000000000000000 =>
ConnectionId:0HMBS5J9RV0II => RequestPath:/ws
RequestId:0HMBS5J9RV0II:00000002
Trying to get a span of type Connection with operation id (null), but it was not found.
What does this message mean? Should I get rid of these warnings, and if so, how?
Update
It was fixed in 3.10.0.
It looks like a bug in the Sentry SDK. See the related issue, where the maintainers say the error regarding spans will be fixed soon:
https://github.com/getsentry/sentry-dotnet/issues/1210#issuecomment-933822947
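Given the update above, the straightforward fix is to upgrade the package to at least that version, e.g.:
dotnet add package Sentry.AspNetCore --version 3.10.0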

Kafka 1.0.0 - Serialized.with() uses default serde instead of the ones provided

We recently updated our Kafka version from 0.10 to 1.0, and I am updating the deprecated code
KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(Serdes.Long(), mySerde, sub_topic)
    .groupByKey(Serdes.Long(), mySerde)
    .reduce(myReducer, my_store);
to this:
KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
    .groupByKey(Serialized.with(Serdes.Long(), mySerde))
    .reduce(myReducer, Materialized.as(my_store));
My stream throws an error while serializing in groupByKey(): Serialized.with() does not use the key serde provided and falls back to the default ByteArray serde, which then encounters my Long key and throws a cast error.
Has anyone else encountered this error in Kafka 1.0.0? The first code with the outdated API works fine, but updating the code to use Serialized.with() does not seem to work. Any help is greatly appreciated.
Can you share the stack trace? I actually think the issue is with reduce() -- you need to specify the Serdes via Materialized again.
This is kind of a regression in the new API and was fixed recently in trunk (https://github.com/apache/kafka/pull/4919), so the upcoming 2.0 release will contain the fix.
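A sketch of what the suggested workaround looks like, reusing mySerde, myReducer, sub_topic, and my_store from the question; the explicit KeyValueStore<Bytes, byte[]> type witness is required by the 1.0 Materialized API:
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Serialized;
import org.apache.kafka.streams.state.KeyValueStore;

KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
    .groupByKey(Serialized.with(Serdes.Long(), mySerde))
    // Repeat the serdes on Materialized so the state store does not fall
    // back to the default (ByteArray) serdes while 1.0.x has the regression.
    .reduce(myReducer, Materialized.<Long, myClass, KeyValueStore<Bytes, byte[]>>as(my_store)
        .withKeySerde(Serdes.Long())
        .withValueSerde(mySerde));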

OrientDB serverside NullPointerException while serializing record using binary protocol

I've just started implementing the binary protocol API for OrientDB in C++. The OrientDB version in use is "orientdb-community-2.2.29" on Windows 10 x64 with Java 1.8. When I query "select * from XXXX" on the example DB, server-side exceptions are thrown and no record is serialized to the client. Here are the logs after a successful connection and query:
2017-12-03 14:14:12:561 INFO {db=Site} /0:0:0:0:0:0:0:1:2520 - Writing bytes (4+0=4 bytes): null [OChannelBinaryServer]$ANSI{green {db=Site}} Error on unmarshalling record #73:0 (java.lang.NullPointerException)
java.lang.NullPointerException
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.getRecordBytes(ONetworkProtocolBinary.java:2894)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.writeRecord(ONetworkProtocolBinary.java:2907)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.writeIdentifiable(ONetworkProtocolBinary.java:2697)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.serializeValue(ONetworkProtocolBinary.java:1639)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.command(ONetworkProtocolBinary.java:1584)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.executeRequest(ONetworkProtocolBinary.java:660)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.sessionRequest(ONetworkProtocolBinary.java:394)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.execute(ONetworkProtocolBinary.java:217)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:81)
2017-12-03 14:14:12:561 WARNI {db=Site} Cannot serialize record: XXXX#73:0{Name:[2],IDs:[1]} v3 [ONetworkProtocolBinary]
Before writing the "null" bytes, the record ID, position, and record version are serialized and received on the client side correctly; querying from Studio or the console also works like a charm. I've tried changing the class property to STRING or EMBEDDEDMAP, with the same problem.
Thanks in advance for help :-)
Fortunately I found the mistake on my own: the wrong serialization implementation was configured. The correct value is ORecordSerializerBinary, not ONetworkProtocolBinary.

Change JTA transaction timeout from default to custom

I am using Atomikos for JTA transactions.
I have the following setting for JTA:
UserTransactionImp userTransactionImp = new UserTransactionImp();
userTransactionImp.setTransactionTimeout(900);
but when my code performs a JTA transaction that takes more than 5 minutes (the default value), it throws an exception:
Caused by: com.atomikos.icatch.RollbackException: Prepare: NO vote
at com.atomikos.icatch.imp.ActiveStateHandler.prepare(ActiveStateHandler.java:231)
at com.atomikos.icatch.imp.CoordinatorImp.prepare(CoordinatorImp.java:681)
at com.atomikos.icatch.imp.CoordinatorImp.terminate(CoordinatorImp.java:970)
at com.atomikos.icatch.imp.CompositeTerminatorImp.commit(CompositeTerminatorImp.java:82)
at com.atomikos.icatch.imp.CompositeTransactionImp.commit(CompositeTransactionImp.java:336)
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:190)
... 25 common frames omitted
It looks like it is taking the default JTA transaction timeout, even though I am setting the timeout explicitly to 15 minutes (900 seconds).
I tried using the following properties in the application.properties file; however, it still takes the default timeout value (300 seconds).
spring.jta.atomikos.properties.max-timeout=600000
spring.jta.atomikos.properties.default-jta-timeout=10000
I have also tried the below property, but no luck:
spring.transaction.default-timeout=900
Can anyone suggest whether I need any other setting? I am using the WildFly plugin, Spring Boot, and the Atomikos API for JTA transactions.
From the Atomikos documentation:
com.atomikos.icatch.max_timeout
Specifies the maximum timeout (in milliseconds) that can be allowed for transactions. Defaults to 300000. This means that calls to UserTransaction.setTransactionTimeout() with a value higher than configured here will be max'ed to this value. For 4.x or higher, a value of 0 means no maximum (i.e., unlimited timeouts are allowed).
Indeed, if you take a look at the Atomikos library source code (for both versions 4.0.0M4 and 3.7.0), in the createCC method from class com.atomikos.icatch.imp.TransactionServiceImp you will see:
387: if ( timeout > maxTimeout_ ) {
388: timeout = maxTimeout_;
389: //FIXED 20188
390: LOGGER.logWarning ( "Attempt to create a transaction with a timeout that exceeds maximum - truncating to: " + maxTimeout_ );
391: }
So any attempt to specify a longer transaction timeout gets capped to maxTimeout_, which has a default value of 300000, set during initialization if none is specified.
You can set the com.atomikos.icatch.max_timeout as a JVM argument with:
-Dcom.atomikos.icatch.max_timeout=900000
or you could use the "Advanced Case" recipe specified in the Configuration for Spring section of the Atomikos documentation.
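A minimal sketch of the same approach done programmatically; this assumes the system property is set before any Atomikos class is initialized, otherwise the default cap is already in place:
import com.atomikos.icatch.jta.UserTransactionImp;

// Raise Atomikos' maximum allowed timeout (milliseconds) before initialization...
System.setProperty("com.atomikos.icatch.max_timeout", "900000");

// ...so the explicit 15-minute timeout (seconds) is no longer truncated.
UserTransactionImp userTransactionImp = new UserTransactionImp();
userTransactionImp.setTransactionTimeout(900);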
I've resolved a similar problem where the Spring Boot configuration in application.yml (or application.properties) did not get picked up.
There was even a log line about it that I later found mentioned in the official docs.
I added a transactions.properties file (next to the application.yml) where I set my desired properties:
# Atomikos properties
# Service must be defined!
com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
# Override default properties.
com.atomikos.icatch.log_base_dir = ./atomikos
Some properties can be set in the transactions.properties file, and others in the jta.properties file.
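For this question specifically, the line to add would presumably be the timeout cap (in milliseconds; 900000 matches the 15 minutes requested above):
# Raise the maximum allowed transaction timeout to 15 minutes.
com.atomikos.icatch.max_timeout = 900000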

Zero timeout on WL.Server.readSingleJMSMessage leads to class cast exception

According to the documentation, an infinite wait for a message read from a JMS queue is achieved by specifying "timeout: 0". With non-zero values the call to WL.Server.readSingleJMSMessage works OK; with a zero timeout the function returns immediately and I find this log entry:
com.worklight.integration.model.InvocationContext E FWLSE0099E: An error occurred while invoking procedure jms_topic/JMSConsumerFWLSE0100E: parameters:{ "arr": [ { "destination": "myqueue", "singleMessage": true, "timeout": 0.0 }]} java.lang.Double cannot be cast to java.lang.Integer FWLSE0101E: Caused by: null
For a positive value like "timeout: 1000" the logged parameters are correct, also showing "timeout: 1000". For "timeout: 0" the logged value is the floating-point "timeout: 0.0", which is unexpected on the Java side.
I see no way to force an integral zero literal; I tried "timeout: 0x0" and "timeout: parseInt(0)", but the problem seems to be in the JS-to-Java translation. It is a pity such a basic boundary condition was not tested before release.
This appears to be an unfortunate bug with no currently known workaround. A defect will be opened to fix the issue. We will continue to look for a possible workaround for current versions.
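For reference, a minimal adapter procedure that reproduces the failure; the destination, procedure name, and parameters are taken from the log entry above:
function JMSConsumer() {
    // Per the docs, timeout: 0 should mean "wait forever", but the JS number 0
    // is marshalled to a java.lang.Double (0.0) and fails the server-side cast
    // to java.lang.Integer, so the call errors out immediately.
    return WL.Server.readSingleJMSMessage({
        destination: "myqueue",
        singleMessage: true,
        timeout: 0
    });
}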