My Java application's problem is that log4j2 syslog output is written not to 'local1.log' but to
'messages'.
My /etc/rsyslog.conf contains the line 'local1.* /var/log/local1.log'.
The weird thing is that when I remove 'appender.syslog.layout.type' and 'appender.syslog.layout.pattern' from log4j2.properties, the syslog output starts being written to /var/log/local1.log correctly.
Is my configuration incorrect?
Are layout properties not applied in syslog?
[/etc/rsyslog.conf]
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local1.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
...
local1.* /var/log/local1.log
[Used log4j2 library]
log4j-api-2.17.2.jar
log4j-core-2.17.2.jar
[log4j2.properties]
status = warn
name = Test
# Console appender configuration
appender.console.type = Console
appender.console.name = consoleLogger
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss} %5p (%c{1} - %M:%L) - %m%n
appender.syslog.type = Syslog
appender.syslog.name = sysLogger
appender.syslog.host = localhost
appender.syslog.port = 514
appender.syslog.protocol = UDP
appender.syslog.facility = LOCAL1
appender.syslog.layout.type = PatternLayout
appender.syslog.layout.pattern = %c{1} (%M:%L) %m\n
# Root logger level
rootLogger.level = debug
rootLogger.appenderRefs = consoleLogger, sysLogger
rootLogger.appenderRef.stdout.ref = consoleLogger
rootLogger.appenderRef.syslog.ref = sysLogger
Log4j2's syslog layout is used to format the entire syslog message, so it must be either SyslogLayout (the traditional BSD syslog format) or Rfc5424Layout (the modern syslog format). Using any other layout, such as PatternLayout, results in invalid messages, and RSyslog has to guess the message's metadata. Most notably, the facility is set to USER, which is why your messages match the *.info rule and land in /var/log/messages instead of /var/log/local1.log.
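If the plain %m payload is all you need, the simplest fix is to drop the PatternLayout entirely (which is effectively what happened when you removed the layout properties) or to configure the BSD layout explicitly. A minimal sketch, keeping the rest of your appender configuration and assuming the documented SyslogLayout parameters:
appender.syslog.layout.type = SyslogLayout
appender.syslog.layout.facility = LOCAL1
appender.syslog.layout.newLine = true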
If you want to send additional data to syslog, beyond %m, you should use the RFC5424 format and send the additional information as structured data. For example you can use (in XML format):
<Syslog name="sysLogger" host="localhost" port="514" protocol="UDP">
  <Rfc5424Layout appName="MyApp" facility="LOCAL1">
    <LoggerFields enterpriseId="32473" sdId="location">
      <KeyValuePair key="logger" value="%c" />
      <KeyValuePair key="class" value="%C" />
      <KeyValuePair key="method" value="%M" />
      <KeyValuePair key="line" value="%L" />
    </LoggerFields>
  </Rfc5424Layout>
</Syslog>
which translates to the properties format as:
appender.syslog.type = Syslog
appender.syslog.name = sysLogger
appender.syslog.host = localhost
appender.syslog.port = 514
appender.syslog.protocol = UDP
appender.syslog.layout.type = Rfc5424Layout
appender.syslog.layout.facility = LOCAL1
appender.syslog.layout.appName = MyApp
appender.syslog.layout.fields.type = LoggerFields
appender.syslog.layout.fields.enterpriseId = 32473
appender.syslog.layout.fields.sdId = location
appender.syslog.layout.fields.0.type = KeyValuePair
appender.syslog.layout.fields.0.key = logger
appender.syslog.layout.fields.0.value = %c
...
Virtually all modern syslog servers can interpret structured data. For RSyslog you need to:
Enable structured data parsing:
module(load="mmpstrucdata")
action(type="mmpstrucdata")
Create a template to format your message:
template(name="MyAppFormat" type="list") {
  property(name="timereported" dateFormat="rfc3339")
  constant(value=" ")
  property(name="hostname")
  constant(value=" ")
  property(name="syslogtag")
  constant(value=" ")
  property(name="$!rfc5424-sd!location#32473!class")
  constant(value=" (")
  property(name="$!rfc5424-sd!location#32473!method")
  constant(value=":")
  property(name="$!rfc5424-sd!location#32473!line")
  constant(value=") ")
  property(name="msg" droplastlf="on")
  constant(value="\n")
}
Use the template for messages coming from "MyApp":
:app-name, isequal, "MyApp" {
  /var/log/myapp.log;MyAppFormat
  stop
}
After integrating play-redis (https://github.com/KarelCemus/play-redis) with Play Framework, I get an error when a request comes in:
[20211204 23:20:48.350][HttpErrorHandler.scala:272:onServerError][E] Error while handling error
java.lang.NullPointerException: null
at play.api.http.HttpErrorHandlerExceptions$.convertToPlayException(HttpErrorHandler.scala:377)
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:367)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:264)
at play.core.server.Server$$anonfun$handleErrors$1$1.applyOrElse(Server.scala:109)
at play.core.server.Server$$anonfun$handleErrors$1$1.applyOrElse(Server.scala:105)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:35)
at play.core.server.Server$.getHandlerFor(Server.scala:129)
at play.core.server.AkkaHttpServer.handleRequest(AkkaHttpServer.scala:317)
at play.core.server.AkkaHttpServer.$anonfun$createServerBinding$1(AkkaHttpServer.scala:224)
at akka.stream.impl.fusing.MapAsync$$anon$30.onPush(Ops.scala:1297)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:495)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:600)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:775)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:790)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:691)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579)
at akka.actor.ActorCell.invoke(ActorCell.scala:547)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
I am sure the cause must be play-redis, because the app runs smoothly without it. In particular, I use a custom implementation of the configuration provider, since I need to get the IP and port by calling the REST API of a name service.
@Singleton
class CustomRedisInstance @Inject() (
    config: Configuration,
    polarisExtensionService: PolarisExtensionService,
    @NamedCache("redisConnection") redisConnectionCache: AsyncCacheApi)(implicit
    asyncExecutionContext: AsyncExecutionContext)
  extends RedisStandalone
  with RedisDelegatingSettings {
  val pathPrefix = "play.cache.redis"
  def name = "play"
  private def defaultSettings =
    RedisSettings.load(
      // this should always be "play.cache.redis"
      // as it is the root of the configuration with all defaults
      config.underlying,
      "play.cache.redis")
  def settings: RedisSettings = {
    RedisSettings
      .withFallback(defaultSettings)
      .load(
        // this is the path to the actual configuration of the instance
        //
        // in case of named caches, this could be, e.g., "play.cache.redis.instances.my-cache"
        //
        // in that case, the name of the cache is "my-cache" and has to be considered in
        // the bindings in the CustomCacheModule (instead of "play", which is used now)
        config.underlying,
        "play.cache.redis")
  }
  def host: String = {
    val connectionInfoFuture = getConnectionInfoFromPolaris
    Try(Await.result(connectionInfoFuture, 10.seconds)) match {
      case Success(extractedVal) => extractedVal.host
      case Failure(_) => config.get[String](s"$pathPrefix.host")
      case _ => config.get[String](s"$pathPrefix.host")
    }
  }
  def port: Int = {
    val connectionInfoFuture = getConnectionInfoFromPolaris
    Try(Await.result(connectionInfoFuture, 10.seconds)) match {
      case Success(extractedVal) => extractedVal.port
      case Failure(_) => config.get[Int](s"$pathPrefix.port")
      case _ => config.get[Int](s"$pathPrefix.port")
    }
  }
  def database: Option[Int] = Some(config.get[Int](s"$pathPrefix.database"))
  def password: Option[String] = Some(config.get[String](s"$pathPrefix.password"))
}
But play-redis itself produces no error logs. After all this hard work of reading the manual and examples, does it turn out that I should switch to Jedis or Lettuce? Hopeless now.
The reason is that I want to use Redis together with Caffeine, which causes a collision. As the documentation says, the default-cache needs to be renamed to redis in application.conf:
play.modules.enabled += play.api.cache.redis.RedisCacheModule
# provide additional configuration in the custom module
play.modules.enabled += services.CustomCacheModule
play.cache.redis {
  # do not bind default unqualified APIs
  bind-default: false
  # name of the instance in simple configuration,
  # i.e., not located under `instances` key
  # but directly under 'play.cache.redis'
  default-cache: "redis"
  source = custom
  host = 127.0.0.1
  # redis server: port
  port = 6380
  # redis server: database number (optional)
  database = 0
  # authentication password (optional)
  password = "#########"
  refresh-minute = 10
}
So in the CustomCacheModule, the input parameter of NamedCacheImpl needs to be changed from play to redis.
class CustomCacheModule extends AbstractModule {
  override def configure(): Unit = {
    // NamedCacheImpl's input used to be "play"
    bind(classOf[RedisInstance]).annotatedWith(new NamedCacheImpl("redis")).to(classOf[CustomRedisInstance])
    ()
  }
}
I wrote a small Java class to test consuming an Avro-encoded Kafka topic.
Properties appProps = new Properties();
appProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://***kfk14bro1.lc:9092");
appProps.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://***kfk14str1.lc:8081");
appProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "consumer");
appProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
appProps.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,LogAndContinueExceptionHandler.class);
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream(
        "coordinates", Consumed.with(Serdes.String(), new GenericAvroSerde()))
    .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));
new KafkaStreams(streamsBuilder.build(), appProps).start();
When I run this class, the serde configs are logged fine, as can be seen in the log below:
[consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values:
schema.registry.url = [http://***kfk14str1.lc:8081]
basic.auth.user.info = [hidden]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroDeserializerConfig - KafkaAvroDeserializerConfig values:
schema.registry.url = [http://***kfk14str1.lc:8081]
basic.auth.user.info = [hidden]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
specific.avro.reader = false
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
but messages are not being consumed, and the following log is generated for every message:
[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] WARN org.apache.kafka.streams.errors.LogAndContinueExceptionHandler - Exception caught during Deserialization, taskId: 0_0, topic: coordinates, partition: 0, offset: 782205986
org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 83
Caused by: java.lang.NullPointerException
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:116)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:88)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:63)
at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:39)
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:58)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:60)
But I am able to read just fine with the Avro console consumer, so I know there is nothing wrong with the data written to the topic. The command below prints the messages without problems:
~/kafka/confluent-5.1.2/bin/kafka-avro-console-consumer --bootstrap-server http://***kfk14bro1.lc:9092 --topic coordinates --property schema.registry.url=http://***kfk14str1.lc:8081 --property auto.offset.reset=latest
When you instantiate an Avro Serde yourself, it is not automatically configured with the schema registry URL.
So either you configure it yourself, or you define default serdes by adding:
appProps.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
appProps.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class.getName());
And by removing
Consumed.with(Serdes.String(), new GenericAvroSerde())
To configure the Serde yourself, use the following code (adapt it to your situation):
GenericAvroSerde genericAvroSerde = new GenericAvroSerde();
boolean isKeySerde = false;
genericAvroSerde.configure(
    Collections.singletonMap(
        AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
        "http://confluent-schema-registry-server:8081/"),
    isKeySerde);
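The configured serde can then be passed explicitly when building the topology, for example (reusing the topic and stream builder from the question):
streamsBuilder.stream("coordinates", Consumed.with(Serdes.String(), genericAvroSerde))
    .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));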
A follow-up to this question:
one SCDF source, 2 processors but only 1 processes each item
The two processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to rig this so that del-2 never receives the same item as del-1, and vice versa. So obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=#project.artifactId#
info.app.description=#project.description#
info.app.version=#project.version#
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
All of this is running in Docker/K8s.
RabbitMQ is given by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?
I am setting up Trac on Windows, using a Bitnami installer. It is the newest stable version, 1.2.3. I have a lot of stuff set up, including notifications, but I want to see HTML notifications. The plain-text emails look weird.
I did add the TracHtmlNotificationPlugin. Before doing that I was not getting emails with the default_format.email set to text/html.
Now I get the emails but they are still in plain text.
This is my trac.ini notification section. Let me know if I am missing something.
[notification]
admit_domains = domain.com
ambiguous_char_width = single
batch_subject_template = ${prefix} Batch modify: ${tickets_descr}
default_format.email = text/html
email_sender = HtmlNotificationSmtpEmailSender
ignore_domains =
message_id_hash = md5
mime_encoding = base64
sendmail_path =
smtp_always_bcc =
smtp_always_cc =
smtp_default_domain =
smtp_enabled = enabled
smtp_from = trac@domain.com
smtp_from_author = disabled
smtp_from_name =
smtp_password =
smtp_port = 25
smtp_replyto =
smtp_server = smtp.domain.com
smtp_subject_prefix =
smtp_user =
ticket_subject_template = ${prefix} #${ticket.id}: ${summary}
use_public_cc = disabled
use_short_addr = disabled
use_tls = disabled
I have my domain replaced in the real file.
Like I said, I get emails now, just not HTML emails.
Edit:
I changed the setting back, and now I get:
Trac[web_ui] ERROR: Failure sending notification on change to ticket #7: KeyError: 'class'
Edit 2:
Fixed the error by putting the htmlnotification_ticket.html file (from the plugin) into the templates directory.
I'm trying to integrate a Datadog monitor check on the sshd process into my Terraform codebase, but I'm getting: datadog_monitor.host_is_up2: error updating monitor: API error 400 Bad Request: {"errors":["The value provided for parameter 'query' is invalid"]}
What I did was copy the query of the monitor I created in the Datadog panel and paste it into the .tf file:
resource "datadog_monitor" "host_is_up2" {
name = "host is up"
type = "metric alert"
message = "Monitor triggered"
escalation_message = "Escalation message"
query = "process.up.over('process:ssh').last(4).count_by_status()"
thresholds {
ok = 0
warning = 1
critical = 2
}
notify_no_data = false
renotify_interval = 60
notify_audit = false
timeout_h = 60
include_tags = true
silenced {
"*" = 0
}
}
Of course, the example query "avg(last_1h):avg:aws.ec2.cpu{environment:foo,host:foo} by {host} > 2" works.
What's the right way to check, via the Datadog API or Terraform, whether a specific service like sshd is up or not?
There are two errors in your code:
The type used is wrong: it should be service check instead of metric alert.
You need to enclose process.up in a pair of single quotes ('').
Once done, your code will run flawlessly.
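For example, the two corrected arguments would look like this (a sketch; the remaining arguments stay as in the question):
type  = "service check"
query = "'process.up'.over('process:ssh').last(4).count_by_status()"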