What is the format of logstash config file - serialization

Does Logstash use its own file syntax in the config file? Is there any parser or validator for the config file syntax?
For anyone who does not use Logstash but has an idea about file formats, here is a sample of the syntax:
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }
  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}

The Logstash configuration file is a custom format developed by the Logstash folks using Treetop.
The grammar itself is described in the source file grammar.treetop and compiled using Treetop into the custom grammar.rb parser.
That parser is then used by the pipeline.rb file in order to set up the pipeline from the Logstash configuration.
If you're not that much into Ruby, there's another interesting project called node-logstash, which provides a Logstash implementation in Node.js. The configuration format is exactly the same as with the official Logstash, though the parser is obviously a different one, written for Node.js. In that project the Logstash configuration file grammar is described in jison, and the parser is likewise generated automatically; it can be used by any Node.js module simply by requiring the generated parser.

You can use the following command to verify your logstash config file is valid:
bin/logstash --configtest --config <your config file>

Apparently the command line arguments have been updated since the earlier answers were posted, and the --configtest and --config arguments are no longer valid. To ask Logstash (at least v5) to validate a config file:
bin/logstash -t -f config.conf
With expanded arguments it looks like this:
bin/logstash --config.test_and_exit --path.config config.conf

So far, there is no standalone parser or validator for the Logstash config format; you can only use Logstash itself to verify a config.
For more information about the config, see the Logstash configuration documentation, where the whole format is described.

The Logstash-Config Go package config provides a ready-to-use parser and abstract syntax tree (AST) for Logstash configuration files in Go.
The basis of the grammar for parsing the Logstash configuration format is the original Logstash Treetop grammar.
logstash-config uses pigeon to generate the parser from the PEG (parsing expression grammar).

Related

How to generate openapi client from uri in Gradle

I'm probably trying to do something strange, since this doesn't seem like a common question (or maybe I'm asking it all wrong). I was expecting this to be straightforward.
Basically, what I am looking for is a way to do the same as the following, except by using the gradle openapi-generator plugin:
openapi-generator generate -i www.example.com/openapi-doc -g spring
What I have tried is the following (and the associated errors):
inputSpec.set("www.example.com/openapi-doc") --> Cannot convert URL {} to a file
inputSpec.set(URL("www.example.com/openapi-doc").readText()) --> specified for property 'inputSpec' does not exist
The actual code looks something like this:
tasks.register<GenerateTask>("generateClient") {
    validateSpec
    generatorName.set("spring")
    library.set("spring-cloud")
    // inputSpec.set("$openapiSpecDir/client/openapi.json") <-- I am currently using a file, which I don't want to do
    inputSpec.set("https://www.example.com/openapi-doc")
    outputDir.set(generatedClientDir)
    apiPackage.set("org.example.api")
    modelPackage.set("org.example.model")
    skipOverwrite.set(false)
    templateDir.set("$rootDir/src/main/resources/openapi/templates/client")
    configOptions.put("java8", "false")
    configOptions.put("serializationLibrary", "jackson")
    configOptions.put("dateLibrary", "java8")
}
Assuming you're using the OpenAPI Generator Gradle Plugin, at the time of writing this answer, getting the inputSpec from a URL is not supported. However, for Maven this has been implemented (Issue #2241 closed with PR #3826), so chances are good to have it implemented with a feature request that gets the Gradle plugin on par with its Maven counterpart.
That being said, you may want to look into Gradle Download Task. Gradle Download Task is a plugin that lets you download files from a URL. The downloaded file can then be fed into the OpenAPI generator, as sketched below.
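A rough sketch of that wiring (Kotlin DSL, matching the question's build script) might look like the following. It assumes the de.undercouch.download plugin is applied next to org.openapi.generator, that GenerateTask is imported as in the question, and that the URL and file paths are only illustrative:
import de.undercouch.gradle.tasks.download.Download

// Local copy of the spec that the generator will read (path is illustrative).
val specFile = layout.buildDirectory.file("openapi/openapi-doc.json").get().asFile

// Download the remote spec before generation.
val downloadOpenApiSpec by tasks.registering(Download::class) {
    src("https://www.example.com/openapi-doc")
    dest(specFile)
    overwrite(true)
}

tasks.register<GenerateTask>("generateClient") {
    dependsOn(downloadOpenApiSpec)
    generatorName.set("spring")
    library.set("spring-cloud")
    // Point the generator at the downloaded file instead of the remote URL.
    inputSpec.set(specFile.absolutePath)
    outputDir.set(layout.buildDirectory.dir("generated/client").get().asFile.absolutePath)
}
The remaining options from the question (apiPackage, templateDir, configOptions, and so on) can stay exactly as they are; only the inputSpec source changes.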

Phinx path with subfolders

I want to have a better overview of the Phinx migration files. I want something like this:
/db/migration/1.8.5/ID-2065/my_file_name_1234567890
So I can use
'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations/'. $_ENV['APP_VERSION'],
In the docs there is only something like this:
migrations: %%PHINX_CONFIG_DIR%%/module/*/{data,scripts}/migrations
But how can I use a parameter from the command line there?
See you
If you're using the default YAML-based configuration you can try using Phinx ENV vars (PHINX_ prefix) and then use a %%PHINX_VARNAME%% replacement. Note: I haven't actually tried this before. Read more about them here: http://docs.phinx.org/en/latest/configuration.html#external-variables
Otherwise, if you're using a PHP-based configuration file you can definitely access the $_ENV superglobal as you have described. Just be sure to call your bootstrap/init scripts so your application version is injected.
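For example, a stripped-down phinx.php along these lines (the environment block is just a placeholder, and APP_VERSION is assumed to be populated by your bootstrap):
<?php
// Build the migrations path from the application version at config time.
return [
    'paths' => [
        // e.g. db/migrations/1.8.5 when APP_VERSION=1.8.5
        'migrations' => __DIR__ . '/db/migrations/' . ($_ENV['APP_VERSION'] ?? 'dev'),
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'development' => [
            'adapter' => 'mysql',
            'host'    => 'localhost',
            'name'    => 'my_app',
            'user'    => 'root',
            'pass'    => '',
        ],
    ],
];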
Rob

logstash testing a configuration pipeline failing (Translation missing)

New to Logstash and following the tutorial posted at https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
Trying to set up my first-pipeline.conf, wherein I need to specify the input, filter and output configurations.
When I specify these configurations and try
logstash -f first-pipeline.conf --configtest
I get a RuntimeError:
RuntimeError: translation missing: en.logstash.runner.configuration.file-not-found, class => RuntimeError, backtrace => ["C:/ELK/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/config/loader.rb:58 in `local_config'", ...] and a bunch of other stack trace
Here below is a snip of the stack trace.
Looks like I'm missing some files in my Logstash installation directory...
BTW, here is what my first-pipeline.conf file looks like.
Also, I commented out the filter portion of my first-pipeline.conf, as I was not sure whether grok was causing this issue, and the same error is still reproducible.
The error "io/console not supported; tty will not be manipulated" seems a jruby bug:
https://github.com/jruby/jruby/issues/3550
It seems to be fixed in version 1.7.24; in Logstash 2.3.2 the JRuby version is 1.7.23, which is the version the bug was opened for. So you can try downloading JRuby 1.7.25 and replacing the one under vendor/jruby with it.
For the other error: you are running Logstash from the bin folder. Is your configuration file (first-pipeline.conf) actually in that folder? If not, specify the path to where it actually is.
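For example (the path below is only a placeholder for wherever your file actually lives):
bin/logstash -f /full/path/to/first-pipeline.conf --configtest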

Logstash not writing data from external file to elasticsearch

I have a sample text file named testfile.txt containing a simple "Hi". I want this data to get indexed in Elasticsearch. I run the following Logstash command:
bin/logstash -f logstash-test.conf
The conf file content is below:
input {
  file {
    path => "/home/abhinav/ELK/logstash/testfile.txt"
    type => "test"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    host => localhost
    index => "test_index"
  }
  stdout { codec => rubydebug }
}
The Elasticsearch log shows the following error:
[2015-05-04 14:52:23,082][INFO ][cluster.service ] [Argo]
added
{[logstash-abhinav-HP-15-Notebook-PC-10919-4006][CPk1djqFRnO-j-DlUMJIzg][abhinav-HP-15-Notebook-PC][inet[/192.168.1.2:9301]]{client=true,
data=false},}, reason: zen-disco-receive(join from
node[[logstash-abhinav-HP-15-Notebook-PC-10919-4006][CPk1djqFRnO-j-DlUMJIzg][abhinav-HP-15-Notebook-PC][inet[/192.168.1.2:9301]]{client=true,
data=false}])
I tried the following things:
Tried with simple standard input (stdin) to ES and stdout; it worked.
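(For reference, such a stdin-to-stdout test is typically just a minimal config along these lines; this is an illustration, not the exact file used:)
input { stdin { } }
output { stdout { codec => rubydebug } }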
If you are using the same file repeatedly to test with, you are running into the "sincedb" problem -- see How to force Logstash to reparse a file?
You need to add sincedb_path => "/dev/null" to your file input. Generally this is not needed in a production scenario, but it is when testing.
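For example, a sketch based on the file input from the question:
input {
  file {
    path => "/home/abhinav/ELK/logstash/testfile.txt"
    type => "test"
    start_position => "beginning"
    # read the file from the beginning on every run instead of remembering the offset
    sincedb_path => "/dev/null"
  }
}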
I have found the solution to my problem, and also some things we can check if Logstash and ES are not working/communicating properly:
Make sure ES and Logstash are properly installed
Versions installed are compatible with each other
Even while testing, try making a different Logstash conf file for each test case, as suggested by Mr. Alcanzar in the comment above.
You can also refer to the links below for help with this issue:
Cannot load index to elasticsearch from external file, using logstash
https://logstash.jira.com/browse/LOGSTASH-1800

Hadoop configurations seem not to be read

Every time I try to start my MapReduce application (in standalone Hadoop), it tries to put stuff in the tmp directory, which it can't:
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-username\mapred\staging\username-1524148556\.staging to 0700
It tries to use an invalid path (the slashes should be the other way around for Cygwin).
I set hadoop.tmp.dir in core-site.xml (in the conf folder of Hadoop), but it seems that the config file is never read (if I put syntax errors in the file, it makes no difference). I added:
--config /home/username/hadoop-1.0.1/conf
To the command, but no difference. I also tried:
export HADOOP_CONF_DIR=/home/username/hadoop-1.0.1/conf
but that does not seem to have any effect either...
Any pointers on why the configs would not be read, or what else I am failing to see here?
Thanks!
It's not that the slashes are inverted; it's that /tmp is a Cygwin path which actually maps to /cygwin/tmp or c:\cygwin\tmp. Since Hadoop is Java and doesn't know about Cygwin mappings, it takes /tmp to mean c:\tmp.
There's an awful lot of stuff to patch if you want to get 1.0.1 running on Cygwin.
see: http://en.wikisource.org/wiki/User:Fkorning/Code/Hadoop-on-Cygwin
I found the following link useful; it seems that the problem persists with newer versions of Hadoop. I'm using version 1.0.4 and I'm still facing this problem.
http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/25837
UPDATED: in Mahout 0.7, and for those who use the "Mahout in Action" book example, you should change the example code as follows:
// Make sure the local "output" directory exists...
File outFile = new File("output");
if (!outFile.exists()) {
    outFile.mkdir();
}
// ...then clear any previous results and run k-means.
Path output = new Path("output");
HadoopUtil.delete(conf, output);
KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"),
    output, new EuclideanDistanceMeasure(), 0.001, 10,
    true, 0.1, true);