I am trying to get SQL queries to show up in the log in my Grails app, and I found out how to do so.
However, I want SQL logging for only specific files in my project. How can I do this?
Unfortunately, that is not possible. The Grails loggers are turned on and off by the class or package name of the code doing the logging. In this case, that code lives in core Hibernate and/or Grails classes, so they either log all activity or none.
What you can do is add your own logging statements in your code before and after the operations you are interested in. Then you can use grep to find your marker statements in the log file. The SQL logging you are interested in will be in between your markers and you can ignore the rest of the very large log file.
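For example, a minimal sketch of what that could look like in a Grails service (the service name, domain class, and marker text here are made up for illustration):

// Hypothetical Grails service showing marker log statements around a GORM call
class ReportService {

    def ordersForCustomer(Long customerId) {
        log.info "SQL-MARKER START ordersForCustomer customerId=${customerId}"
        def orders = Order.findAllByCustomerId(customerId)   // Hibernate's SQL logging lands between the markers
        log.info "SQL-MARKER END ordersForCustomer customerId=${customerId}"
        return orders
    }
}

You can then pull out just that slice of the log with something like sed -n '/SQL-MARKER START/,/SQL-MARKER END/p' logs/app.log.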
I don't have much experience with the ELK stack; I basically only know the basics:
Something (e.g. Filebeat) gets data and sends it to Logstash.
Logstash processes it and sends it to Elasticsearch.
Kibana uses Elasticsearch to visualise the data.
(I hope that's correct.)
I need to create an ELK setup where data from three different projects is passed in, stored, and visualised.
Project no. 1 uses MongoDB, and I need to get all the information from one table into Kibana.
Project no. 2 also uses MongoDB, and I need to get all the information from one table into Kibana.
Project no. 3 uses MySQL, and I need to get a few tables from that database into Kibana.
All three of these projects are on the same server.
The thing is, for projects 1 and 2 I need the data flow to be constant (e.g. if a user registers, I can see that in Kibana).
But for project no. 3 I only need the data when I generate a report (this project functions as a BI tool of sorts).
So my question is: how does one go about creating an ELK architecture that takes the input from these three sources and combines them into one ELK deployment?
My best guess is:
Project no. 1 -> Filebeat -> Logstash
Project no. 2 -> Filebeat -> Logstash
Project no. 3 -> Logstash
(Logstash here being a single instance that then feeds into Elasticsearch)
Would this be a realistic approach?
I also stumbled upon Redis, and from the looks of it, it can combine all the data sources into one and then feed the output to Logstash.
What would be the better approach?
Finally, I mentioned Filebeat, but from what I understand it basically reads data from a log file. Would that mean that I would have to re-write all my database entries into a log file in order to feed them into Logstash, or can Logstash tap into the DB without an intermediary?
I tried looking for all of this online, but for some reason the internet is a bit scarce on ELK stack beginner questions.
Thanks
Filebeat is used for shipping logs to Logstash; you can't use it for reading items from a DB. But you can read from a DB using Logstash's input plugins.
From what you're describing, you'll need a single Logstash instance with three pipelines (one per project).
For project 3 you can use the Logstash JDBC input plugin to connect to your MySQL DB and read new/updated rows based on some "last_updated" column.
The JDBC input plugin has a cron-style schedule option that allows you to set it up to run periodically and read updated rows with an SQL query that you define in the configuration.
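A minimal pipeline sketch for project 3 might look roughly like this (connection string, credentials, driver path, table and column names are all placeholders you would adapt):

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/project3"    # placeholder DB
    jdbc_user => "logstash"
    jdbc_password => "changeme"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_driver_library => "/opt/jdbc/mysql-connector-java.jar"         # path to the MySQL JDBC driver jar
    schedule => "*/15 * * * *"                                          # cron syntax: every 15 minutes
    statement => "SELECT * FROM reports WHERE last_updated > :sql_last_value"
    use_column_value => true
    tracking_column => "last_updated"
    tracking_column_type => "timestamp"
  }
}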
For projects 1 and 2 you can also use the JDBC input plugin with MongoDB.
There is also an open-source implementation of a MongoDB input plugin on GitHub; see the linked post for how to use it.
(See the full list of input plugins here.)
If that works for you and you manage to set it up, then the rest will be about the same for all three configurations: using filter plugins to modify the data and the Elasticsearch output plugin to push it into an Elasticsearch index.
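For instance, the tail end of each pipeline could be something along these lines (host and index name are placeholders):

filter {
  # example filter: drop a field you don't need
  mutate { remove_field => ["@version"] }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "project3-%{+YYYY.MM.dd}"   # daily index per project
  }
}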
I have this in my application.properties. The SQL file is created, but nothing goes into it, and everything still shows up in the console.
quarkus.log.handler.console."SqlConsoleHandler".enable=true
quarkus.log.handler.file."SqlFileHandler".enable=true
quarkus.log.handler.file."SqlFileHandler".path=hibernate.sql
quarkus.log.handler.file."SqlFileHandler".rotation.max-file-size=500M
quarkus.log.handler.file."SqlFileHandler".rotation.max-backup-index=200
quarkus.log.handler.file."SqlFileHandler".rotation.file-suffix=.yyyy-MM-dd-hh-mm
quarkus.log.handler.file."SqlFileHandler".rotation.rotate-on-boot=true
quarkus.log.category."hibernate".use-parent-handlers=false
quarkus.log.category."hibernate".level=DEBUG
quarkus.log.category."hibernate".handlers=SqlConsoleHandler,SqlFileHandler
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.log.bind-param=true
My bet would be that hibernate is not the correct category. You need to use the full category of the log.
Have you tried with org.hibernate? It will redirect all the Hibernate logs though.
Apparently, org.hibernate.SQL is what you are looking for if you only want to push the SQL statements to a specific file.
This article might be useful: https://thorben-janssen.com/hibernate-logging-guide/.
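Assuming org.hibernate.SQL is indeed the category that quarkus.hibernate-orm.log.sql=true writes to, the category lines from the question would become something like this (the handler definitions stay as they are; drop SqlConsoleHandler from the list if you want the statements in the file only):

quarkus.log.category."org.hibernate.SQL".use-parent-handlers=false
quarkus.log.category."org.hibernate.SQL".level=DEBUG
quarkus.log.category."org.hibernate.SQL".handlers=SqlConsoleHandler,SqlFileHandler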
So I have typical run-of-the-mill logs from Nginx and Tomcat servers, which are just plain text files with one entry per line in a typical log format. I have changed the Tomcat access logs to output pipe-delimited fields so I can easily process them with some Unix scripts. I'd like to get rid of my Unix scripts and move to using CloudWatch to process my logs in a similar manner; however, I found out that CloudWatch really doesn't understand anything beyond timestamp, message, and log stream by default.
It will add fields using JSON, but JSON is verbose when it comes to log files. I'd like to just let it process a CSV file, which seems like an obvious alternative to JSON. I'm willing to change my log format to meet a requirement like that, but I can't find any information about how I could do that.
Is my only option to translate my logs into JSON in order to add fields to CloudWatch? I am aware of the parse command, but I find it cumbersome to reconstitute my fields every time I want to build a query, especially since these will mostly be access logs, which have numerous fields. I have the AWS CloudWatch Logs agent set up on my systems and I'm currently sending these logs to CloudWatch.
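For example, every Logs Insights query ends up starting with something like this just to get the fields back (the field names are just whatever I happen to call them that day):

fields @timestamp, @message
| parse @message "*|*|*|*|*" as client_ip, method, uri, status, bytes
| filter status = "404"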
The closest thing there is to handling space-delimited log files is to use Metric Filters. Or at least that's how the authors of CloudWatch designed it.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html
The best examples of this are here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CountOccurrencesExample.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/ExtractBytesExample.html
Not sure if this is going to work for what I'm trying to do with logs, but it's a start. And it's the closest thing to a proper answer. If you want it done right, you gotta do it yo'self.
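For what it's worth, the space-delimited pattern syntax from those pages looks roughly like this when applied to an access-log-style line (the log group, metric names, and field labels are placeholders; this example counts 404 responses):

aws logs put-metric-filter \
  --log-group-name my-access-logs \
  --filter-name 404-count \
  --filter-pattern '[host, ident, user, timestamp, request, status_code = 404, bytes]' \
  --metric-transformations metricName=404Count,metricNamespace=AccessLogs,metricValue=1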
I want to do some bulk search/edit operations on the scripts embedded in our UrbanCode components and applications, and possibly on the flowcharts and blueprints. Unfortunately, a lot of this is stored in UrbanCode's own repository, where it can only be accessed through the browser GUI, and I can't do things like grep for common patterns across the whole set.
Is there any documented way to check out/check in, or at least download, a copy of an entire UCD environment as text files that I could analyze?
Thanks.
I think the closest documented way to get some of what you are looking for is to export the application and search through the JSON file. Component processes, with all their steps, are included in the application export.
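Once you have the export, you can search the whole thing offline; for example, with jq (the file name here is just whatever you saved the export as, and the pattern is whatever you are hunting for):

# print every string value in the export that matches a pattern
jq '.. | strings | select(test("SOME_PATTERN"))' myapp-export.json

A plain grep over the same file also works if you just need to know whether a pattern appears at all.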
I am currently evaluating Flyway as a deployment option for our company. We run our database deployments on an Oracle database and currently spool the output from a SQL*Plus session for logging purposes. We use this to verify feedback information, such as whether objects were created successfully, whether packages, functions, etc. compiled without errors, the number of records inserted, and so forth.
Is there similar logging functionality in Flyway? Currently the only logging we have found is in the server logs. We can tell from these logs that a script has completed successfully or has triggered an ORA error, but we are curious as to whether this is the extent of the database logging options or not.
Thank you,
We used the command line method for running Flyway and turned on debug output (-X). Along with a lot of other output, it also logs more information about the SQL migrations run (e.g. the content of repeatable migrations) and the number of records affected. This is not perfect; however, it helped us a lot in capturing more information about what was applied.
See https://flywaydb.org/documentation/commandline/; the flag is not documented under each individual command because it applies to Flyway itself.
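For example, we invoke it along these lines and just capture everything the command prints (the log file name is arbitrary):

# run the migrations with debug output and keep the full output as a deployment log
flyway -X migrate > migrate-$(date +%Y%m%d-%H%M).log 2>&1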