Syslog-ng: send the raw message only - no timestamp, no header added by syslog-ng

I am using the syslog-ng OSE configuration below. Our goal is to receive syslog messages from a device and relay them unchanged to an analytics tool. We want the raw log message, as shown below, to be sent to the analytics tool without any characters (e.g. ' or ") being removed from the original message. The configuration file, the original log, and the log after processing by syslog-ng are provided below. We also want to get rid of the additional header and timestamp added by syslog-ng.
Configuration file
Used version: 3.2.5
options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    long_hostnames (off);
    use_dns (no);
    use_fqdn (no);
    create_dirs (no);
    keep_hostname (yes);
    keep-timestamp (no);
};
source slocal {
    syslog(port(514) transport("udp") flags(no-parse));
};
template t_syslog {
    template("${MESSAGE}\n");
    template-escape(yes);
};
destination dfgtall { file("/var/netwitness/fgtall.log" template(t_syslog)); };
log { source(slocal); destination(dfgtall); };
Original log
date=2020-03-07 time=20:46:02 devname="ABCD" devid="FGT" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="VDOM-Int" eventtime=1583594162 srcip=1.1.1.1 srcport=55498 srcintf="LAN" srcintfrole="lan" dstip=10.10.10.1 dstport=21 dstintf="EXTERNAL" dstintfrole="wan" sessionid=583411984 proto=6 action="deny" policyid=0 policytype="policy" service="FTP" dstcountry="United States" srccountry="Reserved" trandisp="noop" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high"
Received log message
<5>Jul 20 14:41:42 root: date=2020-03-07 time=20:46:02 devname=ABCD devid=FGT logid=0000000013 type=traffic subtype=forward level=notice vd=VDOM-Int eventtime=1583594162 srcip=1.1.1.1 srcport=55498 srcintf=LAN srcintfrole=lan dstip=10.10.10.1 dstport=21 dstintf=EXTERNAL dstintfrole=wan sessionid=583411984 proto=6 action=deny policyid=0 policytype=policy service=FTP dstcountry=United States srccountry=Reserved trandisp=noop duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat=unscanned crscore=30 craction=131072 crlevel=high

syslog-ng v3.2.5 is really old. Please upgrade to a newer version.
The key here is using flags(no-parse) in the source and the proper template ("${MESSAGE}\n") in the destination.
The following snippet works as expected with syslog-ng v3.28:
source s_udp {
    syslog(
        port(514)
        transport("udp")
        flags(no-parse)
    );
};
destination dfgtall { file("/tmp/fgtall.log" template("${MESSAGE}\n")); };
log {
    source(s_udp);
    destination(dfgtall);
};
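If you want a quick way to verify the behaviour, you could send a hand-crafted line straight to the UDP source and check the output file. This is a hypothetical test from the syslog-ng host itself; nc sends the line as a bare datagram with no syslog header added:
echo 'devname="ABCD" action="deny" msg="test message, with quotes"' | nc -u -w1 127.0.0.1 514
tail -n 1 /tmp/fgtall.log    # the line should appear exactly as sent, quotes and commas intact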

Related

Syslog-NG: 2 logs from the same source written differently

I have two sets of logs, each going to its own syslog server, but the source of the logs is the same - a Palo Alto Prisma VPN.
For whatever reason, Syslog Server A (the older one) writes the logs like this:
Nov 22 15:08:03 34 456
But my newer syslog server, B, writes the logs like this:
Nov 22 15:08:03 34.0.0.1 456
This is a problem because, on each syslog-ng server, we have a Splunk Universal Forwarder that forwards the logs to an index, and we use the Palo Tech Add-On to parse the data.
It appears the TA expects to parse incoming data in this form:
Nov 22 15:08:03 34 456
Anything else breaks parsing. My syslog-ng config for the new server (B) is written as below:
@version: 3.31
@include "scl.conf"
options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    chain_hostnames (off);
    use_dns (no); # was yes
    use_fqdn (no); # was yes
    create_dirs (no);
    keep_hostname (no); # was yes
};
source vpn_encrypted_log_traffic {
    network(
        ip(0.0.0.0)
        port(6514)
        transport("tls")
        tls(
            cert-file("/etc/syslog-ng/certs/prv.cer")
            key-file("/etc/syslog-ng/certs/prv.key")
            peer_verify(optional-untrusted)
        )
    );
};
destination prisma { file("/directory/log.log" create-dirs(yes)); };
log { source(vpn_encrypted_log_traffic); destination(prisma); };
And the old syslog server (A) just has this:
@version: 3.5
@include "scl.conf"
options {
    time-reap (30);
    keep_hostname (no); # was yes
};
source vpn_encrypted_log_traffic {
    network(
        ip(0.0.0.0)
        port(6514)
        transport("tls")
        tls(
            cert-file("/etc/syslog-ng/certs/prv.cer")
            key-file("/etc/syslog-ng/certs/prv.key")
            peer_verify(optional-untrusted)
        )
    );
};
destination prisma { file("/directory/log.log" create-dirs(yes)); };
log { source(vpn_encrypted_log_traffic); destination(prisma); };
I can only think the problem exists in Prisma. But the configs look 1-to-1 to me.
34.0.0.1 is the hostname/IP address part of a BSD syslog message.
use_dns(no); keep_hostname(no); means syslog-ng on server B rewrites that part of the message: keep_hostname(no) discards the hostname found in the message, and with DNS lookups disabled it substitutes the IP address of the sending host (here 34.0.0.1).
keep_hostname(yes) can be used to leave the hostname intact.
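If server B should leave whatever hostname the Prisma VPN sends (so that Splunk sees "34" on both servers), a minimal sketch of the relevant options block on server B would be:
options {
    use_dns (no);
    use_fqdn (no);
    keep_hostname (yes);   # keep the hostname field of the incoming message as-is
};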

Strip nulls from message in syslog-ng

I need to strip NULs from the incoming message so I can forward it back out to another host. Syslog-ng does not properly forward messages that contain any NUL bytes. I've tried the following, but cannot figure out how to target the NUL in the strings. With the config below I still see the NULs in my local log, and the remote system never sees the messages containing them (not all messages have NULs, and the ones that don't forward properly).
source s_ise {
udp(port(522));
};
destination d_ise {
file("/var/log/ise.log");
udp("myhost.example" port(516) spoof_source(no));
};
rewrite r_ise {
# remove nulls, or it won't forward properly
subst("\x00", "", type("string"), value("MESSAGE"), flags(substring, global));
};
log {
source(s_ise);
filter(f_ise_aaa);
rewrite(r_ise);
destination(d_ise);
};
NUL bytes are treated as string terminators.
Fortunately, the UDP source does not rely on line endings (newline characters or NULLs), so you can remove all unnecessary 0 bytes before parsing, for example:
source s_ise {
udp(port(522) flags(no-parse));
};
rewrite r_remove_nulls {
subst('\x00', '', value("MESSAGE"), type(pcre), flags(global)); # single quotes!
};
parser p_syslog {
syslog-parser();
};
destination d_ise {
file("/var/log/ise.log");
udp("myhost.example" port(516) spoof_source(no));
};
log {
source(s_ise);
rewrite(r_remove_nulls);
parser(p_syslog);
filter(f_ise_aaa);
destination(d_ise);
};
Alternatively, you can keep NULL bytes, but in that case, you should not use syslog-ng config objects that treat the message as strings (for example, parsers, string-based rewrite rules, string filters, etc).
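To check the NUL handling, one could inject a test datagram containing a 0 byte. This is a hypothetical test against the host and port used above, and it assumes bash's printf, which understands \x escapes:
printf 'test\x00message with an embedded NUL\n' | nc -u -w1 127.0.0.1 522
tail -n 1 /var/log/ise.log    # the 0 byte should be gone, the rest of the text intact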

"TLS support is not available" when creating GTlsClientConnection with libnice

I have working code where two peers are connecting over a relay server (coturn) and everything seems to be fine over pseudo-tcp. I've tested message exchange successfully with nice_agent_attach_recv() and nice_agent_get_io_stream().
But when I try to create a GTlsClientConnection I get back: 0:TLS support is not available
Here is some partial code:
if(!nice_agent_set_relay_info(agent, stream_id,
NICE_COMPONENT_TYPE_RTP,
"my.coturn.server",
5349, //tls-listener-port (I also tried the non tls port: 3478)
username.c_str(),
password.c_str(),
NICE_RELAY_TYPE_TURN_TCP))
{
printf("error setting up relay info\n");
}
...
//after state has changed to NICE_COMPONENT_STATE_READY
...
io_stream = nice_agent_get_io_stream (agent, stream_id, component_id);
input = g_io_stream_get_input_stream (G_IO_STREAM (io_stream));
output = g_io_stream_get_output_stream (G_IO_STREAM (io_stream));
GIOStream* tlsConnection = g_tls_client_connection_new
(G_IO_STREAM (io_stream), NULL, &error);
/////////////////////////
/// error == 0 (TLS support is not available)
I am new to libnice and glib. So, I may be missing something basic.
You probably need the glib-networking package installed; it provides the TLS backend (GnuTLS-based on most distributions) that GLib loads at runtime.
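For reference, here is a minimal sketch that asks GIO at runtime whether a TLS backend is present. It is only a diagnostic, not part of the libnice flow above; if it reports that no backend was found, installing your distribution's glib-networking package is usually the fix:
#include <gio/gio.h>
#include <stdio.h>

int main(void)
{
    /* glib-networking provides the GTlsBackend that GIO loads at runtime */
    GTlsBackend *backend = g_tls_backend_get_default();

    if (backend != NULL && g_tls_backend_supports_tls(backend))
        printf("TLS backend available: %s\n", G_OBJECT_TYPE_NAME(backend));
    else
        printf("No TLS backend found - install glib-networking\n");

    return 0;
}
Compile with, for example: gcc check_tls.c $(pkg-config --cflags --libs gio-2.0)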

Rancid/ Looking Glass perl script hitting an odd error: $router unavailable

I am attempting to set up a small test environment (homelab) using CentOS 6.6, Rancid 3.1, Looking Glass, and some Cisco switches/routers, with httpd acting as the handler. I have picked up a little Perl in the course of this endeavor, but Python (more 2 than 3) is my background. Right now, everything on the Rancid side of things works without issue: bin/clogin successfully logs into all of the equipment in the router.db file, and logging of the configs works as expected. All switches/routers to be accessed are available and online, verified both by SSH connection to the devices and by using bin/clogin.
Right now, I have placed the lg.cgi and lgform.cgi files into /var/www/cgi-bin/, which allows the forms to be run as CGI scripts. I had to modify the files to split on ';' instead of ':' due to the change in the .db file format in Rancid 3.1: @record = split('\:', $_); was replaced with @record = split('\;', $_); and so on. Once that change was made, I was able to load lgform.cgi with the proper router.db parsing. At this point, it seemed like everything should be good to go. But when I attempt to ping from one of those devices out to 8.8.8.8, the form correctly redirects to lg.cgi, and the page loads, but with
main is unavailable. Try again later.
as the error, where 'main' is the router hostname. Using this output, I was able to find the function responsible for this output. Here it is before I added anything:
sub DoRsh
{
    my ($router, $mfg, $cmd, $arg) = @_;
    my($ctime) = time();
    my($val);
    my($lckobj) = LockFile::Simple->make(-delay => $lock_int,
        -max => $max_lock_wait, -hold => $max_lock_hold);
    if ($pingcmd =~ /\d$/) {
        `$pingcmd $router`;
    } else {
        `$pingcmd $router 56 1`;
    }
    if ($?) {
        print "$router is unreachable. Try again later.\n";
        return(-1);
    }
    if ($LG_SINGLE) {
        if (! $lckobj->lock("$cache_dir/$router")) {
            print "$router is busy. Try again later.\n";
            return(-1);
        }
    }
    $val = &DoCmd($router, $mfg, $cmd, $arg);
    if ($LG_SINGLE) {
        $lckobj->unlock("$cache_dir/$router");
    }
    return($val);
}
In order to dig in a little deeper, I peppered that function with several print statements. Here is the modified function, followed by the output from the loaded lg.cgi page:
sub DoRsh
{
    my ($router, $mfg, $cmd, $arg) = @_;
    my($ctime) = time();
    my($val);
    my($lckobj) = LockFile::Simple->make(-delay => $lock_int,
        -max => $max_lock_wait, -hold => $max_lock_hold);
    if ($pingcmd =~ /\d$/) {
        `$pingcmd $router`;
    } else {
        `$pingcmd $router 56 1`;
    }
    print "About to test the ($?) branch.\n";
    print "Also who is the remote_user?:' $remote_user'\n";
    print "What about the ENV{REMOTE_USER} '$ENV{REMOTE_USER}'\n";
    print "Here is the ENV{HOME}: '$ENV{HOME}'\n";
    if ($?) {
        print "$lckobj is the lock object.\n";
        print "@_ something else to look at.\n";
        print "$? whatever this is suppose to be....\n";
        print "Some variables:\n";
        print "$mfg is the mfg.\n";
        print "$cmd was the command passed in with $arg as the argument.\n";
        print "$pingcmd $router\n";
        print "$cloginrc - Is the cloginrc pointing correctly?\n";
        print "$LG_SINGLE the next value to be tested.\n";
        print "$router is unreachable. Try again later.\n";
        return(-1);
    }
    if ($LG_SINGLE) {
        if (! $lckobj->lock("$cache_dir/$router")) {
            print "$router is busy. Try again later.\n";
            return(-1);
        }
    }
    $val = &DoCmd($router, $mfg, $cmd, $arg);
    if ($LG_SINGLE) {
        $lckobj->unlock("$cache_dir/$router");
    }
    return($val);
}
OUTPUT:
About to test the (512) branch.
Also who is the remote_user?:' '
What about the ENV{REMOTE_USER} ''
Here is the ENV{HOME}: '.'
LockFile::Simple=HASH(0x1a13650) is the lock object.
main cisco ping 8.8.8.8 something else to look at.
512 whatever this is suppose to be....
Some variables:
cisco is the mfg.
ping was the command passed in with 8.8.8.8 as the argument.
/bin/ping -c 1 main
./.cloginrc - Is the cloginrc pointing correctly?
1 the next value to be tested.
main is unreachable. Try again later.
I can provide the code that calls DoRsh if necessary, but it looks mostly like this: &DoRsh($router, $mfg, $cmd, $arg);
From what I can tell, the '$?' special variable (or at least according to
this reference it is a special var) is returning the value 512, which is causing that branch to test true. The problem is that I don't know what that 512 means, nor where it is coming from. Using the reference site's description ("The status returned by the last pipe close, backtick (``) command, or system operator.") and the structure of the conditional tree above, I can see that it is an error of some kind, but I don't know how else to proceed with this inspection. I'm wondering if maybe it is in response to some permission issue, since the remote_user variable is null when I didn't expect it to be. Any guidance anyone may be able to provide would be helpful. Furthermore, if there is any information that I may have skipped over, didn't think to include, or that may prove helpful, please ask, and I will provide it to the best of my ability.
Maybe put in something like:
my $pingret = `$pingcmd $router`;
print 'Ping result was: ' . $pingret;
and check the returned string?
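For example, a small diagnostic sketch along those lines (variable names taken from DoRsh; the decoding of $? is standard Perl, see perlvar):
my $pingout = `$pingcmd $router 2>&1`;   # capture ping's output, including stderr
my $status  = $? >> 8;                    # the high byte of $? is the command's exit code
print "ping exit code: $status\n";        # 512 >> 8 == 2
print "ping output: $pingout\n";
An exit code of 2 from /bin/ping usually indicates an error (for example, a failed name lookup or bad arguments) rather than just "no reply received", so printing the captured output should show what ping itself complained about.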

401 Error "oauth_problem=nonce_used" Adding Products To Magento w/ Rest API

I'm getting a 401 status with an "oauth_problem=nonce_used" message returned when attempting to add products to Magento using the REST API. Oddly, the products still get imported, but it's really throwing me off because I'm not getting back the product IDs with which to update the stock info.
The Magento install is brand new (crucialwebhost installer, 1.7.0.2), and the code I'm using is pretty much copied and pasted from the Magento site...
$callbackUrl = '****';
$temporaryCredentialsRequestUrl = "*****/oauth/initiate?oauth_callback=".urlencode($callbackUrl);
$adminAuthorizationUrl = '*****/admin/oauth_authorize';
$accessTokenRequestUrl = '*****/oauth/token';
$apiUrl = '*****/api/rest';
$consumerKey = '*****';
$consumerSecret = '******';
try
{
$authType = ($_SESSION['state'] == 2) ? OAUTH_AUTH_TYPE_AUTHORIZATION : OAUTH_AUTH_TYPE_URI;
$oauthClient = new OAuth($consumerKey, $consumerSecret, OAUTH_SIG_METHOD_HMACSHA1, $authType);
$oauthClient->enableDebug();
if(!isset($_GET['oauth_token']) && !$_SESSION['state'])
{
$requestToken = $oauthClient->getRequestToken($temporaryCredentialsRequestUrl);
$_SESSION['secret'] = $requestToken['oauth_token_secret'];
$_SESSION['state'] = 1;
header('Location: '.$adminAuthorizationUrl.'?oauth_token='.$requestToken['oauth_token']);
exit;
} else if($_SESSION['state'] == 1)
{
$oauthClient->setToken($_GET['oauth_token'], $_SESSION['secret']);
$accessToken = $oauthClient->getAccessToken($accessTokenRequestUrl);
$_SESSION['state'] = 2;
$_SESSION['token'] = $accessToken['oauth_token'];
$_SESSION['secret'] = $accessToken['oauth_token_secret'];
header('Location: '.$callbackUrl);
exit;
} else
{
$oauthClient->setToken($_SESSION['token'], $_SESSION['secret']);
$resourceUrl = "$apiUrl/products";
$productData = json_encode(array(
'type_id' => 'simple',
'attribute_set_id' => 4,
'sku' => $local_product['sku'],
'weight' => 1,
'status' => 1,
'visibility' => 4,
'name' => $local_product['name'],
'description' => $local_product['description'],
'short_description' => $local_product['description'],
'price' => $local_product['price'],
'tax_class_id' => 0,
));
$headers = array('Content-Type' => 'application/json');
$oauthClient->fetch($resourceUrl, $productData, OAUTH_HTTP_METHOD_POST, $headers);
$respHeader = $oauthClient->getLastResponseHeaders();
}
} catch(OAuthException $e)
{
print_r($e);
}
}
session_destroy();
Exact error: {"messages":{"error":[{"code":401,"message":"oauth_problem=nonce_used"}]}}
In Mage_Api2_Model_Resource, about line 227, locate
$this->getResponse()->setHeader('Location', $newItemLocation);
and insert just after this:
$this->getResponse()->setHttpResponseCode(202);
Ref: Wikipedia, "HTTP Location":
The HTTP Location header field is returned in responses from an HTTP server under two circumstances:
1. To ask a web browser to load a different web page. In this circumstance, the Location header should be sent with an HTTP status code of 3xx.
2. To provide information about the location of a newly created resource. In this circumstance, the Location header should be sent with an HTTP status code of 201 or 202.
I had exactly the same problem and spent weeks tracking it down. It seems to be a strange combination of Apache with PHP and rewriting. In the end I created a clean installation and the problem was gone. I also tried to create a second installation where the problem could be reproduced, but failed - the error appeared only in my production system, not in any of my test installations...
I looked at this, and from what I see in the code, OAuth registers all your calls; if it finds that the exact same nonce was already used with the exact same timestamp as a previous call, it simply discards the request with this very specific oauth_problem=nonce_used error.
Code from app/code/core/Mage/Oauth/Model/Server.php
/**
 * Validate nonce request data
 *
 * @param string     $nonce     Nonce string
 * @param string|int $timestamp UNIX Timestamp
 */
protected function _validateNonce($nonce, $timestamp)
{
    $timestamp = (int) $timestamp;
    if ($timestamp <= 0 || $timestamp > (time() + self::TIME_DEVIATION)) {
        $this->_throwException('', self::ERR_TIMESTAMP_REFUSED);
    }
    /** @var $nonceObj Mage_Oauth_Model_Nonce */
    $nonceObj = Mage::getModel('oauth/nonce');
    $nonceObj->load($nonce, 'nonce');
    if ($nonceObj->getTimestamp() == $timestamp) {
        $this->_throwException('', self::ERR_NONCE_USED);
    }
    $nonceObj->setNonce($nonce)
        ->setTimestamp($timestamp)
        ->save();
}
So I would say that when you make calls to the Magento REST API, you should take extra care that each and every request has its own uniquely generated timestamp/nonce combination.
Also see
oauth_nonce. A random value, uniquely generated by the application.
oauth_timestamp. A positive integer, expressed in the number of seconds since January 1, 1970 00:00:00 GMT.
And
nonce_used: The nonce-timestamp combination has already been used.
Source: http://devdocs.magento.com/guides/v2.0/get-started/authentication/gs-authentication-oauth.html
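If you drive the PECL OAuth client shown in the question yourself, you can make that uniqueness explicit. A hypothetical sketch, assuming setNonce() and setTimestamp() of the OAuth class are available in your extension version:
// force a fresh nonce and a current timestamp before every request
$oauthClient->setNonce(md5(uniqid(mt_rand(), true)));
$oauthClient->setTimestamp((string) time());
$oauthClient->fetch($resourceUrl, $productData, OAUTH_HTTP_METHOD_POST, $headers);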
I had the exact same problem, and to solve it I looked at the mod_rewrite Apache module and turned on logging for that module, which is done by adding this to your Apache httpd.conf file (this is for Apache 2.4.x; 2.2.x needs to be done differently):
<IfModule mod_rewrite.c>
LogLevel mod_rewrite.c:trace8
</IfModule>
The errors are then logged to the standard Apache error_log.
When I looked at the rewrite log, I could see that my POST request was being rewritten twice: the first time it added the product to Magento, and the second time it failed to add the product again because the nonce had already been used, obviously.
I could see that the rewrite rule in the .htaccess causing this was this one:
## workaround for HTTP authorization
## in CGI environment
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
I checked my configuration and confirmed I was indeed running FastCGI PHP by looking at the Server API value in a phpinfo() script. Having spent so long trying to solve this, once I knew the root cause I simply changed PHP from CGI to an Apache module, and hey presto, my POST request is now only rewritten once and returns that all-elusive 200 response code.
Work Around:
Use SOAP API.
Reason for not using it before:
The SOAP API didn't provide the ability to add custom product attributes or product quantity increment fields.
Fix:
Add any field you want to the product using the SOAP API by first creating an array of objects for them like this (the last four lines of code below are repeated for each field added):
$additionalAttrs = array();
$per_item = new stdClass();
$per_item->key = 'price_per_item';
$per_item->value = $local_product['price'];
$additionalAttrs['single_data'][] = $per_item;
And then adding it to your product array with the key "additional_attributes" like:
'additional_attributes' => $additionalAttrs,
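For completeness, a rough sketch of attaching that array and creating the product over the SOAP v2 API; the WSDL URL, login credentials, and attribute set id 4 are placeholders, and the call follows the standard catalogProductCreate signature:
$client  = new SoapClient('http://your-magento-host/api/v2_soap/?wsdl');  // placeholder URL
$session = $client->login('apiuser', 'apikey');                           // placeholder credentials

$productData = array(
    'name'                  => $local_product['name'],
    'description'           => $local_product['description'],
    'short_description'     => $local_product['description'],
    'price'                 => $local_product['price'],
    'status'                => 1,
    'visibility'            => 4,
    'tax_class_id'          => 0,
    'additional_attributes' => $additionalAttrs,  // the array built above
);

// catalogProductCreate(sessionId, type, attributeSetId, sku, productData)
$newProductId = $client->catalogProductCreate($session, 'simple', 4, $local_product['sku'], $productData);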
I know this workaround only helps people who were avoiding the SOAP API for the same reason I was, but hopefully it helps some of you. The error we're seeing, where it tries to add a product twice, seems to be server-configuration specific and very hard to track down.