Log yum update checks even when there are no packages available for update? - yum

I need to ingest events for nightly yum update checks (using yum-cron) into a SIEM. Unfortunately, yum only logs events to yum.log when action is taken, for example updates or installations. There is no event logged when you check for updates and none are available. The auditors have also specified that ingesting events proving yum-cron ran is not enough, so I can't just import the events from the cron log.
I could run a script that runs yum check-update, pipe the output to a file, and have rsyslog ingest lines from that file (roughly the sketch below), but that is messy and not ideal. I also want the solution to be as easy to configure as possible, since it will have to be scripted so that new instances can be set up quickly.
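For illustration, the kind of wrapper I mean (paths are just placeholders; yum check-update exits 100 when updates are available and 0 when there are none):
#!/bin/sh
# crude wrapper: append check-update output and its exit status to a log that rsyslog could watch
/usr/bin/yum check-update >> /var/log/yum-check.log 2>&1
status=$?
echo "$(date) yum check-update finished with exit status ${status}" >> /var/log/yum-check.log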
It is also a special distribution from a vendor, and the logger command does not work with rsyslog on it.
Is there an easy way to log the fact that yum did run and that no packages were found to update, indicating that all packages are up to date?

Another forum got me started down the path to a solution, and this is what I ended up doing to resolve the issue:
yum-cron supports email notifications; unfortunately, the SIEM we are using does not ingest events via email. However, looking through the yum-cron scripts, I noticed that they redirect output to a temporary file, which they then use to send the email notifications. I ended up editing the /etc/cron.daily/0yum.cron script to redirect output to /var/log/yum.log instead, by changing:
} >> $YUMTMP 2>&1
to:
} >> /var/log/yum.log 2>&1
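Since this has to be scripted for new instances, the same edit can be applied non-interactively; a hedged one-liner, assuming the redirect line in 0yum.cron looks exactly as shown above and GNU sed is available:
# rewrite the redirect target in place (back up the script first)
sed -i 's|} >> \$YUMTMP 2>&1|} >> /var/log/yum.log 2>\&1|' /etc/cron.daily/0yum.cron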
I then used the imfile module of rsyslog to ingest yum.log and forward it to the SIEM.
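A minimal imfile configuration along these lines should work (newer rsyslog syntax assumed; the SIEM address, port, facility and tag are placeholders):
# drop-in for rsyslog: watch yum.log and forward its lines to the SIEM over TCP
cat > /etc/rsyslog.d/yum-siem.conf <<'EOF'
module(load="imfile")
input(type="imfile"
      File="/var/log/yum.log"
      Tag="yum:"
      Severity="info"
      Facility="local6")
local6.* @@siem.example.com:514
EOF
# then restart rsyslog (systemctl restart rsyslog, or the distribution's equivalent)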

Related

How to make /var/log symlink to a persistent storage in Yocto Rocko

I'm building a Yocto-based distribution with systemd and journald at its core.
Unfortunately, I cannot get Yocto to store all logs in /var/log -> /data/log. I need the journald logs, as well as some other logs written there after multi-user.target, to be persistent; /data is the persistent partition.
I have a very similar problem to this one, but unfortunately I couldn't adapt it to work properly in my setup.
From my understanding, there are two things I need to modify:
The volatiles file in base-files, which I hope is a config file for systemd-tmpfiles. It should tell it to create at runtime everything that journald needs. Here I modified one line:
L+ root root 0755 /var/log /data/log
fs-perms.txt
${localstatedir}/log link /data/log
I also tried to pull it off with VOLATILE_LOG_DIR set to "no" (with fs-perms-persistent-log.txt modified), but to no avail. I also tried adding some kind of var.conf to /etc/tmpfiles.d with a config similar to the one above. That hasn't worked either.
I run watch ls -l on the resulting rootfs/var and see that var/log gets symlinked to /data/log for a short while, but later it is overridden somewhere to point to volatile/log once again.
I would greatly appreciate any advice, because it seems like I'm overcomplicating this. It should be very easy; after all, it's just making Yocto create a symlink. But I guess this is too important a directory for it to simply let me ln -sf /data/log /var/log.
I would also like to hear about the implications of this approach.
Other than wearing out my eMMC, that is. We can live with that, because log activity is very low compared to some other operations performed on the device. I'm mostly interested in mount ordering and the like. If I remember correctly, journald uses a memory buffer until /var/log/journal is created for it, so I should be fine. But what should I do to ensure everything is in place before the logs are flushed? Do I need to modify systemd services to include RequiresMountsFor= or After=?
I want to be as defensive as possible, so I'm looking forward to what you have to say on the topic.
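For illustration, the kind of ordering tweak I have in mind would be a drop-in like the one below (untested; the unit name and paths are only my assumptions):
# make the journal flush wait for the persistent mount before writing to /var/log/journal
mkdir -p /etc/systemd/system/systemd-journal-flush.service.d
cat > /etc/systemd/system/systemd-journal-flush.service.d/persistent-log.conf <<'EOF'
[Unit]
RequiresMountsFor=/data/log
EOF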
EDIT:
Maybe I can just add a bind mount from /var/log to /data/log? If that is actually the solution, I'd also like to know whether there are any hidden pitfalls down the road.
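For example, an fstab entry along these lines (untested):
# bind-mount the persistent log directory over /var/log
/data/log  /var/log  none  bind  0  0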
You can mount your persistent partition by tweaking the base-files recipe (base-files_%.bbappend):
do_install_append () {
cat >> ${D}${sysconfdir}/fstab <<EOF
# Data partition
/dev/mmcblk0p4 /data auto defaults,sync,noauto 0 2
EOF
}
dirs755 += "/data"
Then you can tweak volatile-binds (volatile-binds.bbappend):
VOLATILE_BINDS = "\
/data/var/lib /var/lib\n\
/data/var/log /var/log\n\
/data/var/spool /var/spool\n\
/data/var/srv /srv\n\
"
This should hopefully help. I have not tested it fully here, but I hope it gives you a starting point.
A persistent log data option made it into Yocto 2.4: https://bugzilla.yoctoproject.org/show_bug.cgi?id=6132
Log data can be made persistent by defining the following in your distro config:
VOLATILE_LOG_DIR = "no"

Flink job started from another program on YARN fails with "JobClientActor seems to have died"

I'm a new Flink user and I have the following problem.
I use Flink on a YARN cluster to transfer related data extracted from an RDBMS to HBase.
I wrote a Flink batch application in Java with multiple ExecutionEnvironments (one per RDB table, to transfer the table rows in parallel) that processes the tables sequentially (because the call to env.execute() is blocking).
I start the YARN session like this:
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/yarn-session.sh -n 1 -s 4 -d -jm 2048 -tm 8096
Then I run my application on the YARN session via a shell script, transfer.sh. Its content is:
#!/bin/bash
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar
When I start this script from the command line manually, it works fine: jobs are submitted to the YARN session one by one without errors.
Now I need to be able to run this script from another Java program.
For this I use:
Runtime.exec("transfer.sh");
(Maybe there are better ways to do this? I have looked at the REST API, but there are some difficulties because the JobManager is proxied by YARN.)
At the beginning it works as usual: the first several jobs are submitted to the session and finish successfully. But the following jobs are not submitted to the YARN session.
In /opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log I see this error (and no other errors were found, even at DEBUG level):
The program execution failed: JobClientActor seems to have died before the JobExecutionResult could be retrieved.
I have tried to analyse this problem myself and found that the error occurs in the JobClient class while it sends a ping request with a timeout to the JobClientActor (i.e. the YARN cluster).
I tried increasing several heartbeat and timeout options, such as akka.*.timeout, akka.watch.heartbeat.* and yarn.heartbeat-delay, but it doesn't solve the problem: new jobs are still not submitted to the YARN session from CliFrontend.
The environment is the same in both cases (manual call and call from another program). When I run
$ ps axu | grep transfer
it gives me this output:
/usr/lib/jvm/java-8-oracle/bin/java -Dlog.file=/opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log -Dlog4j.configuration=file:/opt/flink-1.3.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/opt/flink-1.3.1/conf/logback.xml -classpath /opt/flink-1.3.1/lib/flink-metrics-graphite-1.3.1.jar:/opt/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/opt/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/opt/flink-1.3.1/lib/log4j-1.2.17.jar:/opt/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::/etc/hadoop/conf org.apache.flink.client.CliFrontend run -p 4 transfer.jar
I also tried updating Flink to the 1.4.0 release and changing the parallelism of the job (even to -p 1), but the error still occurred.
I have no idea what could be different. Is there any workaround?
Thank you for any help.
Finally I found out how to resolve this error.
Just replace Runtime.exec(...) with new ProcessBuilder(...).inheritIO().start().
I don't really know why the call to inheritIO helps in this case, because as I understand it, it just redirects the IO streams of the child process to the parent process.
But I have verified that if I comment out this change, the program begins to fail again.
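My guess (unverified) is that without inheritIO the child's stdout/stderr pipes are never drained by the parent, so once the pipe buffers fill up the Flink CLI blocks. If changing the Java side is not an option, an alternative sketch under the same assumption is to redirect all output inside transfer.sh itself, so the parent never has to read the pipes:
#!/bin/bash
# send everything this script (and the flink CLI it starts) prints to a file instead of the
# parent-owned pipes; the log path is just an example
exec >> /var/log/transfer-run.log 2>&1
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar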

Delete or reset Gitlab CI builds

Is it possible to delete old builds in Gitlab CI?
I tested a few things and now have about 20 builds that are useless (most of them failed anyway).
It also shows stages that I don't have anymore, which clutters the Pipelines page, and some of the uploaded artifacts are a bit big.
I wasn't able to find any documentation on this, only that disabling CI in the settings doesn't remove the builds.
Using Gitlab 8.10 Community (hosted by Gitlab.com)
There is currently no option in the GUI to completely get rid of a build, other than expunging the data related to it (the erase option on the build).
If you had a local installation you could modify the database directly, but I would advise caution. (I'll put the guide here for completeness' sake.)
Log in to the GitLab database. If you use the default PostgreSQL:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production
Check if there is a ci_builds table. In psql: \dt
Delete the builds with normal SQL. For example: DELETE FROM ci_builds WHERE id = 2;
(Optional) If you want to clean up the list of commits which triggered builds, you need to modify the ci_commits table.
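A hedged example of that optional cleanup, reusing the same psql invocation (the commit id is hypothetical, and the schema differs between GitLab versions, so inspect the table with \d ci_commits first):
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production -c "DELETE FROM ci_commits WHERE id = 2;"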

How to run scripts automatically after deployment in AWS using EB CLI?

I am trying to set up a Django server on AWS. My Django app depends on some mathematical Python libraries like numpy, scipy, sklearn, etc. However, there is an issue because of which I need to do the following after every deployment:
sudo nano /etc/httpd/conf.d/wsgi.conf
---------------------------------------
add this line in the file
WSGIApplicationGroup %{GLOBAL}
---------------------------------------
sudo /etc/init.d/httpd reload
Basically, I need WSGIApplicationGroup %{GLOBAL} in my wsgi.conf file, otherwise I get 504 errors. I am using a custom AMI built on top of Amazon Linux 2014, and I am using the EB CLI for deployment. However, whenever I deploy, wsgi.conf is reset and no longer contains the line I added previously, so I need to manually SSH into the EC2 instance and do this task myself. That adds overhead to every deployment, and it is also not feasible once we scale up (cloning or creating instances resets it as well). So is there a way to have this done automatically after every deployment?
The content of wsgi.conf is fixed, so I can easily write a script to create it; the issue is how to trigger that script automatically.
PS: I am new to AWS.
You need to use the AWS Elastic Beanstalk feature called .ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
In your case you can't use the files or commands sections, because:
The commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
You need to use the container_commands section:
They run after the application and web server have been set up and the
application version file has been extracted, but before the
application version is deployed.
Example .ebextensions/01wsgi.config (not tested :-))
container_commands:
  apache_reload:
    command: |
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload
Feel free to tweak my example as you like; for example, you can store your complete wsgi.conf somewhere in your source bundle and then replace the original with it in the container_commands section.
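If the appended directive ever risks being duplicated (for example if wsgi.conf is not regenerated on a particular deployment), a hedged variation is to guard the append; the shell part of the container command could look like this:
# append the directive only when it is missing, then reload Apache
grep -qF 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf || echo 'WSGIApplicationGroup %{GLOBAL}' >> /etc/httpd/conf.d/wsgi.conf
/etc/init.d/httpd reload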

Redis Server doesn't start or do anything - Redis-64 on Windows

I'm following the steps outlined at the link below; however, when I try to start the server nothing happens, nor can I connect to anything from the client. Does anyone know how to run this?
When I try from a command prompt instead of double-clicking redis-server.exe, I get this message:
[11868] 23 Jul 11:58:26.325 # QForkMasterInit: system error caught. error code=0x000005af, message=VirtualAllocEx failed.: unknown error
http://bartwullems.blogspot.ca/2013/07/unofficial-redis-for-windows.html
The easiest way to install Redis is through NuGet:
Open Visual Studio
Create an empty solution so that NuGet knows where to put the packages
Go to the Package Manager Console: Tools -> Library Package Manager -> Package Manager Console
Type Install-Package Redis-64
Go to the Packages folder and browse to the Tools folder. Here you’ll find the Redis-server.exe. Double click on it to start it.
Redis is ready to use and starts listening on a specific port (6379 in my case).
Let’s open up a client and try to put a value into Redis. Start Redis-cli.exe. It already connects to the same port by default.
Add a value by executing a SET command, then read the value back with GET to confirm it works.
Try to run with redis-server --maxheap 4000000
Miguel is correct, but it is not that simple. To start redis-server either as a service or from the command prompt, the amount of available RAM and disk space must be sufficient for Redis to run as configured.
Now, if no configuration file is specified when running Redis, it will use the default configuration values. All of this is documented in the redis.windows.conf file as well as in the document "Redis on Windows.docx" (both deployed with the redis installation).
In my experience, errors when starting Redis usually come from a lack of available resources (RAM or disk space) or an incorrect configuration of the maxheap or maxmemory parameters.
To troubleshoot this kind of behavior, check your system's available resources and try running redis-server from the command line, varying the maxmemory, maxheap, and/or heapdir parameters. Setting the loglevel parameter to verbose might also help diagnose the issue.
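For example, something along these lines (the values are only placeholders and should be sized to what the machine can actually back with RAM and page file; redis.windows.conf is the config file shipped with the installation):
redis-server redis.windows.conf --maxheap 1000000000 --maxmemory 512mb --loglevel verbose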
Regards