I use this stack:
On each front server:
rails
logstasher gem (formats the Rails logs as JSON)
logstash-forwarder (just forwards the logs to Logstash on the central server)
On the log server:
logstash (to centralize and index the logs)
kibana (to display them)
Kibana works well with the JSON format, but the "message" data is provided as a string, not as JSON (see the snippet below). Is there a way to fix this?
For example, accessing the status is a bit tricky.
Here's a message sample:
{
  _index: logstash-2014.09.18
  _type: rails
  _id: RHJgU2L_SoOKS79pBzU_mA
  _version: 1
  _score: null
  _source: {
    message: "{"@source":"unknown","@tags":["request"],"@fields":{"method":"GET","path":"/foo/bar","format":"html","controller":"items","action":"show","status":200,"duration":377.52,"view":355.67,"db":7.47,"ip":"123.456.789.123","route":"items#show","request_id":"021ad750600ab99758062de60102da8f"},"@timestamp":"2014-09-18T09:07:31.822782+00:00"}"
    @version: 1
    @timestamp: 2014-09-18T09:08:21.990Z
    type: rails
    file: /home/user/path/logstash_production.log
    host: webserver.example.com
    offset: 23200721
    format: json_event
  }
  sort: [
    rails
  ]
}
Thank you for your help ;).
EDIT 1: Added the Logstash configuration files:
/etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    codec => "json"
  }
}
/etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
/etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  # stdout { codec => rubydebug }
}
If useful, the logstash-forwarder configuration:
/etc/logstash-forwarder on the web servers
{
  "network": {
    "servers": [ "123.465.789.123:5000" ],
    "timeout": 45,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [
        "/home/xnxx/gportal/shared/log/logstash_production.log"
      ],
      "fields": { "type": "rails", "format": "json_event" }
    }
  ]
}
My config files are mainly inspired by this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04
I've never personally used the lumberjack input, but it looks like it should support codec => json, so I'm not sure why it isn't working. You could try adding this filter instead (in /etc/logstash/conf.d/01-lumberjack-input.conf):
filter {
  json {
    source => 'message'
    remove_field => [ 'message' ]
  }
}
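If the plain-text syslog lines should not be run through the JSON parser, the same filter can be scoped to the Rails events only. This is just a sketch mirroring the existing if [type] == "syslog" pattern (and assuming the "rails" type set by logstash-forwarder is preserved on the events), not part of the original answer:

filter {
  # Only parse events that logstash-forwarder shipped with type "rails"
  if [type] == "rails" {
    json {
      source => "message"
      remove_field => [ "message" ]
    }
  }
}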
The final fix was to stop and then start Logstash; with a plain restart the configuration did not seem to be reloaded.
So instead of:
sudo service logstash restart
I did:
sudo service logstash stop
waited for ~1 minute, then
sudo service logstash start
I don't really understand the reason (the init script does the same, but doesn't wait a minute), but it worked for me.
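If it helps, the configuration can also be validated before stopping and starting the service. This is only a sketch: the --configtest flag is the one documented for Logstash 1.x, and the binary path below assumes the standard package layout, so adjust it to your install.

# Validate everything in conf.d before restarting
# (the /opt/logstash path is an assumption; it may differ depending on how Logstash was installed)
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/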
I am trying to figure out why I get the following error when I use the Databricks Jobs API.
{
  "error_code": "INVALID_PARAMETER_VALUE",
  "message": "Cluster validation error: Missing required field: settings.cluster_spec.new_cluster.size"
}
What I did:
I created a job running on a single node cluster using the Databricks UI.
I copied and pasted the job config JSON from the UI.
I deleted my job and tried to recreate it by sending a POST to the Jobs API with the copied JSON, which looks like this:
{
  "new_cluster": {
    "spark_version": "7.5.x-scala2.12",
    "spark_conf": {
      "spark.master": "local[*]",
      "spark.databricks.cluster.profile": "singleNode"
    },
    "azure_attributes": {
      "availability": "ON_DEMAND_AZURE",
      "first_on_demand": 1,
      "spot_bid_max_price": -1
    },
    "node_type_id": "Standard_DS3_v2",
    "driver_node_type_id": "Standard_DS3_v2",
    "custom_tags": {
      "ResourceClass": "SingleNode"
    },
    "enable_elastic_disk": true
  },
  "libraries": [
    {
      "pypi": {
        "package": "koalas==1.5.0"
      }
    }
  ],
  "notebook_task": {
    "notebook_path": "/pathtoNotebook/TheNotebook",
    "base_parameters": {
      "param1": "test"
    }
  },
  "email_notifications": {},
  "name": " jobName",
  "max_concurrent_runs": 1
}
The API documentation does not help (I can't find anything about settings.cluster_spec.new_cluster.size). The JSON is copied from the UI, so I would expect it to be correct.
Thanks for your help.
Source: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/clusters#--create
To create a Single Node cluster, include the spark_conf and custom_tags entries shown in the example and set num_workers to 0.
{
  "cluster_name": "single-node-cluster",
  "spark_version": "7.6.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 0,
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "local[*]"
  },
  "custom_tags": {
    "ResourceClass": "SingleNode"
  }
}
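In other words, the payload in the question never specifies a cluster size, which is what settings.cluster_spec.new_cluster.size refers to; adding "num_workers": 0 inside "new_cluster" should satisfy it. A rough sketch of re-submitting the job, where <databricks-instance>, <token> and job.json are placeholders rather than values from the original post:

# Sketch only: job.json is the payload from the question with "num_workers": 0
# added to the "new_cluster" block
curl -X POST "https://<databricks-instance>/api/2.0/jobs/create" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d @job.json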
I am trying to install rabbitmq:8.6.1 from the Bitnami chart repository using terraform:0.12.18.
My Helm version is 3.4.2.
While installing I am getting the following error:
Error: validation: chart.metadata is required
My Terraform file is as below:
resource "kubernetes_secret" "rabbitmq_load_definition" {
metadata {
name = "rabbitmq-load-definition"
namespace = kubernetes_namespace.kylas_sales.metadata[0].name
}
type = "Opaque"
data = {
"load_definition.json" = jsonencode({
"users": [
{
name: "sales",
tags: "administrator",
password: var.rabbitmq_password
}
],
"vhosts": [
{
name: "/"
}
],
"permissions": [
{
user: "sales",
vhost: "/",
configure: ".*",
write: ".*",
read: ".*"
}
],
"exchanges": [
{
name: "ex.iam",
vhost: "/",
type: "topic",
durable: true,
auto_delete: false,
internal: false,
arguments: {}
}
]
})
}
}
resource "helm_release" "rabbitmq" {
chart = "rabbitmq"
name = "rabbitmq"
version = "8.6.1"
timeout = 600
repository = "https://charts.bitnami.com/bitnami"
namespace = "sales"
depends_on = [
kubernetes_secret.rabbitmq_load_definition
]
}
After looking at issue 509 on terraform-provider-helm: if your module/subdirectory name is the same as your chart name (in my case the directory name is rabbitmq and my helm_release name is also rabbitmq), you get this error. I am still not able to identify why.
Solution: I changed my directory name from rabbitmq to rabbitmq-resource and the error is gone.
I am using the TinyAuth plugin with CakePHP 3. I have a controller with the following namespace:
namespace App\Controller\Api\Datatables;
The controller is Listings and my action is filter.
I have the following route setup:
Router::scope('/datatables', ['prefix' => 'api/datatables'], function (RouteBuilder $routes) {
    $routes->extensions(['json', 'xml', 'ajax']);
    $routes->fallbacks(DashedRoute::class);
});
This allows me to call the following URL:
/datatables/listings/filter.json
I want to allow the filter action:
datatables/Listings = filter
When I call my URL I am redirected to login. If I log in, the URL works, so the auth setup itself works.
I have also tried the following:
api/datatables/Listings = filter
api/Datatables/Listings = filter
Api/Datatables/Listings = filter
api/datatables/Listings = filter
datatables/Listings = filter
Datatables/Listings = filter
api/Listings = filter
No matter what, the path is not allowed. If I move the controller to the default location, then with this in allow_auth:
Listings = filter
the filter action is accessible without authorisation. This suggests that there is a problem with the plugin when using a router scope.
Here is the plugin's composer.json
{
  "name": "ypnos-web/cakephp-datatables",
  "description": "jQuery DataTables for CakePHP 3",
  "homepage": "https://github.com/ypnos-web/cakephp-datatables",
  "type": "cakephp-plugin",
  "keywords": ["cakephp", "datatables"],
  "license": "MIT",
  "authors": [
    {
      "name": "Frank Heider",
      "homepage": "https://github.com/fheider",
      "role": "Author"
    },
    {
      "name": "Johannes Jordan",
      "homepage": "https://github.com/ypnos-web",
      "role": "Author"
    }
  ],
  "require": {
    "php": ">=7.0",
    "cakephp/cakephp": "^3.6"
  },
  "autoload": {
    "psr-4": {
      "DataTables\\": "src"
    }
  },
  "autoload-dev": {
    "psr-4": {
      "DataTables\\Test\\": "tests",
      "Cake\\Test\\": "./vendor/cakephp/cakephp/tests"
    }
  }
}
Am I correct in stating that the slashed routes do work for the acl.ini? They seem to, as far as I can see.
I am using the slashed routes to better organise my functions.
My request params are as follows when I call /datatables/listings/filter.json?
'controller' => 'Listings',
'action' => 'filter',
'pass' => [],
'prefix' => 'api/datatables',
'plugin' => null,
'_ext' => 'json',
'_matchedRoute' => '/datatables/:controller/:action/*',
'?' => [
'string' => 'seat'
]
If I call /api/datatables/listings/filter.json, I get:
Controller class Datatables could not be found.
I'm not overly familiar with the plugin, but api/datatables/Listings seems to be the correct format. However, looking at the plugin's source, it seems that nested prefixes aren't supported:
if (strpos($key, '/') !== false) {
    list($res['prefix'], $key) = explode('/', $key);
}
https://github.com/dereuromark/cakephp-tinyauth/blob/1.11.0/src/Utility/Utility.php#L23-L25
That code would parse api as the prefix, and datatables as the controller.
You may want to open an issue, or add support for it yourself if you can.
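If you do want to patch it locally, a possible sketch (hypothetical, not the plugin's actual code) would be to split on the last slash instead, so that "api/datatables/Listings" yields the prefix "api/datatables" shown in your request params, with "Listings" left as the controller key:

// Hypothetical adjustment to the snippet above: treat everything before the
// LAST slash as the prefix, keeping nested prefixes such as "api/datatables".
if (strpos($key, '/') !== false) {
    $pos = strrpos($key, '/');
    $res['prefix'] = substr($key, 0, $pos);
    $key = substr($key, $pos + 1);
}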
I'm using the good plugin for my app, and I have copied the config parameters straight from the sample on the page. The console and file reporters are working, but good-http is not!
Here is my configuration:
myHTTPReporter: [
  {
    module: 'good-squeeze',
    name: 'Squeeze',
    args: [{ error: '*', log: 'error', request: 'error' }]
  },
  {
    module: 'good-http',
    args: [
      'http://localhost/log-request/index.php?test=23424', // <- test file I created to see if data is being sent or not
      {
        wreck: {
          headers: { 'x-api-key': 12345 }
        }
      }
    ]
  }
]
The only argument that is actually working is ops: *; none of error: *, ... are working.
Am I missing something in my config parameters?
Maybe you should change the threshold parameter of the plugin. It is set to 20 by default, so it only sends data after 20 events. If you want immediate results you have to change it to 0.
myHTTPReporter: [
  {
    module: 'good-squeeze',
    name: 'Squeeze',
    args: [{ error: '*', log: 'error', request: 'error' }]
  },
  {
    module: 'good-http',
    args: [
      'http://localhost/log-request/index.php?test=23424', // <- test file I created to see if data is being sent or not
      {
        threshold: 0,
        wreck: {
          headers: { 'x-api-key': 12345 }
        }
      }
    ]
  }
]
I am trying to learn the ELK stack and have started by indexing Apache access logs. I have Logstash 1.4.2, Elasticsearch 1.5.1 and Kibana 4.0.2 for Windows. Following are my configuration files. For the mapping in Elasticsearch I have used:
curl -XPOST localhost:9200/apache_access?ignore_conflicts=true -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "apache" : {
      "properties" : {
        "timestamp" : { "type": "date", "format" : "DD/MMM/YYYY:HH:mm:ss" },
        "bytes": { "type": "long" },
        "response": { "type": "long" },
        "clientip": { "type": "ip" },
        "geoip" : { "type" : "geo_point" }
      }
    }
  }
}'
And my logstash-apache.conf is:
input {
  file {
    path => "D:\data\access_log1.log"
    start_position => beginning
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
    target => "geoip"
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => http
    index => "apache_access"
  }
  stdout { codec => rubydebug }
}
What I am facing is: for the fields for which I applied a mapping in Elasticsearch, i.e. bytes, response and clientip, I am getting a conflict. I understand what is happening, as it says these fields have both string and long as their field type, but I don't understand why it is happening since I have applied the mapping. I would also like to resolve this issue. Any help is appreciated.
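For reference, what Elasticsearch actually has mapped for the index can be checked like this, assuming it is running on the default port:

# Show the mappings Elasticsearch currently holds for the apache_access index
curl -XGET 'localhost:9200/apache_access/_mapping?pretty'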