Local server takes time to update ClamAV DB

I'm serving CVD files from a local server. When I run freshclam on my server, it takes several runs until daily.cvd is updated, even though the remote version was updated a while ago.
The ClamAV version I'm using is 0.102.4-r1.
The freshclam.conf I'm using is:
DatabaseDirectory /data
LogSyslog yes
UpdateLogFile /logs/freshclam.log
LogTime yes
PidFile /run/clamav/freshclam.pid
DatabaseOwner root
LogVerbose yes
DatabaseMirror database.clamav.net
ScriptedUpdates no (for serving as a local server)
SafeBrowsing yes
Bytecode yes
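For completeness, the clients are pointed at this box by setting DatabaseMirror in their own freshclam.conf; a minimal client-side sketch (the hostname is a placeholder) would be:
DatabaseMirror clamav-mirror.internal.example
ScriptedUpdates no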
Some relevant logs from a freshclam run that did not update the local daily.cvd, even though it reports a newer version remotely:
daily database available for update (local version: 26009, remote version: 26010)
The daily.cvd database downloaded from https://database.clamav.net is one version older than advertised in the DNS TXT record.
Database test passed.
daily.cvd updated (version: 26009, sigs: 4351133, f-level: 63, builder: raynman)
Do I need to change my configuration, or is this a bug in ClamAV 0.102.4?
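To compare by hand, as far as I understand the TXT record at current.cvd.clamav.net advertises the current versions as a colon-separated list (the daily version is one of the fields), and sigtool can print the version of the file freshclam actually downloaded:
$ dig +short TXT current.cvd.clamav.net
$ sigtool --info=/data/daily.cvd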

How to create a RedHat repository with a reduced set of packages

Dear all,
I am managing a pool of servers running RHEL 7.6. I created a local repository of RHEL packages so that the other servers can be updated while limiting internet access to the server hosting the local repository.
I used the reposync command to populate my repository, but I am downloading a huge number of RPM packages!
I would like to reduce the set of packages to download to the ones already deployed on the servers; I can build that list with the rpm command (~750 packages).
I read that there is an includepkgs directive to be used with reposync.
How does it work, and what is the required format?
I know it is possible to use the yumdownloader command to update the local repository, but how can I populate the repository for the first time?
Any help or advice would be appreciated.
Regards
Fdv
It seems that the best option is to limit the download to the latest version of each package by using the option:
-n, --newest-only Download only newest packages per-repo
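As a rough sketch (the repo id and destination path are placeholders), syncing only the newest packages and building the repository metadata could look like:
$ reposync --repoid=rhel-7-server-rpms --newest-only -p /var/www/html/repo
$ createrepo /var/www/html/repo/rhel-7-server-rpms
As far as I know, includepkgs is not a reposync flag but a directive on the repo definition in /etc/yum.repos.d/*.repo (a space-separated list of package names or globs), which reposync should then honour, e.g.:
includepkgs=bash* kernel* openssl*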

Jmeter remote testing exits too early

I have 3 instances in AWS with Jmeter installed - one master and two slaves.
I want to test 1M requests against my application. I have a script which runs 100 concurrent threads, each looping 10,000 times.
When running the test on localhost or on a single instance only it runs fine.
My issue is that when I run the test using remote servers it exits immediately on both machines. The only logs I get from this are these:
Starting the test on host 10.229.48.10 @ Mon Dec 02 15:21:49 UTC 2019 (1575300109383)
Warning: Nashorn engine is planned to be removed from a future JDK release
Finished the test on host 10.229.48.10 @ Mon Dec 02 15:22:00 UTC 2019 (1575300120030)
I get nothing else even with verbose logging enabled.
This is the command I use to run the test:
JVM_ARGS="-Xms2048m -Xmx2048m" ./bin/jmeter -n -t test.jmx -R 10.229.48.10,10.229.48.23
Both machines are fully open to the master instance.
Why does the script run fine on a single instance but craps out when using remote hosts?
The general checklist for troubleshooting JMeter master-slave configuration is:
Check jmeter.log file on the master and jmeter-server.log on the slaves
Ensure that the Java version is the same on the master and the slaves; if it is not, get the relevant (preferably the latest) version of the 64-bit JDK or Server JRE
Ensure that the JMeter version is the same on the master and the slaves; if it is not, get the relevant (preferably the latest) version of JMeter
If your test is using any JMeter Plugins, ensure that the same set of plugins is installed on all the machines. The plugins can be installed using the JMeter Plugins Manager
If you're using any external data files, i.e. CSV files which are consumed by the CSV Data Set Config - the file(s) need to be copied over to all the slaves
If your test relies on some JMeter Properties, make sure to supply them via -J or -D command-line arguments on all the machines, via the -G command-line argument on the master, or put them into the user.properties file (see the example below)
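For example (the property names and the results file are placeholders), a remote run that passes the same properties to every engine could look like:
JVM_ARGS="-Xms2048m -Xmx2048m" ./bin/jmeter -n -t test.jmx -R 10.229.48.10,10.229.48.23 -Gusers=100 -Gloops=10000 -l results.jtl
where the test plan would read them via ${__P(users)} and ${__P(loops)}.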
Which version of JDK are you using?
Is it JDK 8 or something else?
Make sure of the following things:
a. Internal networking is enabled between all three instances.
b. JDK 8 is installed from official sources.
c. You are able to communicate with the instances individually.
d. JMeter is installed from the official distribution instead of "apt install jmeter".
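A quick way to sanity-check points (a) and (c) from the master, assuming the slaves run jmeter-server on JMeter's default RMI registry port 1099 and that nc is installed, is:
$ nc -zv 10.229.48.10 1099
$ nc -zv 10.229.48.23 1099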

Cannot connect Impala-Kudu to Apache Kudu (without Cloudera Manager): Get TTransportException Error

I have successfully installed Kudu on Ubuntu (Trusty) as per the official Kudu documentation (see http://kudu.apache.org/docs/installation.html ). The setup has one node running the master and a tablet server and another node running a tablet server only. I am having issues installing impala-kudu without Cloudera Manager on the node running the Kudu master. I have followed the CDH installation instructions on this page (see http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html ) until Step 3. I have avoided installing CDH with YARN and MRv1 as I don't need to run any MapReduce jobs and will not be using Hadoop. impala-kudu and impala-kudu-shell installed without errors. When I launch the impala-shell it returns:
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to kudu_test:21000
***********************************************************************************
Welcome to the Impala shell. Copyright (c) 2015 Cloudera, Inc. All rights reserved.
(Impala Shell v2.7.0-cdh5-IMPALA_KUDU-cdh5 (48f1ad3) built on Thu Aug 18 12:15:44 PDT 2016)
Want to know what version of Impala you're connected to? Run the VERSION command to find out!
***********************************************************************************
[Not connected] >
I have tried to use the CONNECT option to connect to the kudu-master node without success. Both impala-kudu and kudu are running on the same machine. Are there additional configuration settings which need to be changed, or are Hadoop and YARN strict requirements to make impala-kudu work?
After running ps -ef | grep -i impalad I can confirm the Impala daemon is not running. After navigating to the Impala logs at ~/var/log/impala I find a few error and warning files. Here is the output of impalad.ERROR:
Log file created at: 2016/09/13 13:26:24
Running on machine: kudu_test
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0913 13:26:24.084389 3021 logging.cc:118] stderr will be logged to this file.
E0913 13:26:25.406966 3021 impala-server.cc:249] Currently configured default filesystem: LocalFileSystem. fs.defaultFS (file:///) is not supported.
ERROR: block location tracking is not properly enabled because
- dfs.datanode.hdfs-blocks-metadata.enabled is not enabled.
- dfs.client.file-block-storage-locations.timeout.millis is too low. It should be at least 10 seconds.
E0913 13:26:25.406990 3021 impala-server.cc:252] Aborting Impala Server startup due to improper configuration. Impalad exiting.
Maybe I need to revisit HDFS and the Hive Metastore to ensure I have these services configured properly?
According to the log, impalad quits because the default filesystem is configured to be LocalFileSystem, which is not supported. You have to set a distributed filesystem, such as HDFS as the default.
Although Kudu is a separate storage system and does not rely on HDFS, Impala still seems to require a non-local default filesystem even when used with Kudu. The Impala_Kudu documentation explicitly lists the following requirement:
Before installing Impala_Kudu, you must have already installed and configured services for HDFS (though it is not used by Kudu), the Hive Metastore (where Impala stores its metadata), and Kudu.
I can even imagine that HDFS may not really be needed for any other reason than to make Impala happy, but this is just speculation from my side. Update: Found IMPALA-1850 which confirms my suspicion that HDFS should not be needed for Impala any more, but it's not just a single check that has to be removed.
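For what it's worth, a minimal sketch of the Hadoop client settings Impala is complaining about, based purely on the properties named in the error log (the NameNode host and port are placeholders), would be something like:
core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>10000</value>
</property>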

Upgrading ICU*.dll on a Firebird 2.5.2

I'm trying to upgrade the ICU*.dll files from version 3.0 to version 5.0/5.6.
I have performed the following steps:
Stop the Firebird server
Remove the ICU*30.dll files from the /bin/ folder
Copy in the ICU*50.dll/ICU*56.dll files
Edit the fbintl.conf in /intl/ folder
<intl_module builtin>
icu_versions 5.6 default
</intl_module>
Start the Firebird server.
The server does not start after these steps; I have tried different versions of fbintl.conf, but it does not work.
The server only starts when the ICU*30.dll files are in the /bin/ folder, but in that case the server does not load the ICU*50.dll/ICU*56.dll.

Update redis server from 1.2.6 to latest

I need to update a Redis server.
I found a way to save the DB on disk and restore it afterwards, but my question is: will the new Redis server have problems reading the old DB structure?
The version of the dump file is encoded in the first 9 characters. So the following command can be used to check it:
$ head -1 dump.rdb | cut -c1-9
REDIS0002
Redis 1.2.6 used version 1 of the dump file format (it can read and write only version 1).
Redis 2.4.6 uses version 2; however, it is able to read both version 1 and version 2 files. Version 2 happens to be backward compatible with version 1 anyway.
To upgrade, you can just read the version 1 dump file with a recent Redis release, and then dump the file again (it will be written with version 2 format). The new file may be smaller due to some optimizations available with recent Redis versions and the version 2 format.
Optionally, you can check the integrity of the dump file before starting the 2-4 Redis instance by using the redis-check-dump command:
$ ../redis-2.4.4/src/redis-check-dump dump.rdb
==== Processed 19033 valid opcodes (in 639641 bytes) ===========================
This is a purely read-only utility; it cannot harm the dump file.
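Putting it together, a minimal upgrade sketch might look like this (the paths are placeholders, and the dir setting in redis.conf must point at the directory holding the old dump):
$ cp /var/lib/redis/dump.rdb /path/to/new-data-dir/dump.rdb
$ ../redis-2.4.4/src/redis-server redis.conf
$ ../redis-2.4.4/src/redis-cli SAVE
The new server loads the version 1 file on startup, and SAVE rewrites it in the version 2 format.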