How to rotate all logs in GlassFish?

I can push the Rotate button from the "Server" menu item, but it rotates only the server.log file, while the other files in the logs folder are not touched. Is there a way to rotate all of them?

This is the expected behaviour. The current log file is rotated/archived and a new log is created. The old logs are kept so they can be looked up later. From the official Oracle documentation:
Logs are rotated automatically based on settings in the
logging.properties file. You can change these settings by using the
Administration Console.
You can rotate the server log file manually by using the rotate-log
subcommand in remote mode.
This example moves the server.log file to
yyyy-mm-dd_server.log and creates a new server.log file in the default
location.
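For example, something along these lines (a minimal sketch; the --target value is just a placeholder for one of your instances):

    asadmin rotate-log
    asadmin rotate-log --target my-instance

Note, however, that this rotates only the server log, which is exactly the limitation raised in the question.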
If you want to restrict the number of log files that are kept, you can try setting the system property com.sun.enterprise.server.logging.max_history_files, which specifies the maximum number of log files to keep.
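One way to set it, as a sketch (the value 10 is arbitrary; both subcommands are standard asadmin commands, but check the exact syntax on your version):

    asadmin create-jvm-options "-Dcom.sun.enterprise.server.logging.max_history_files=10"
    asadmin restart-domain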

Related

Apache NiFi - What happens when you run the GetFile processor without any downstream processor

I am a beginner with Apache NiFi and I want to move a file in my local filesystem from one location to another. When I used the GetFile processor to move files from the corresponding input directory and started it, the file disappeared. I haven't connected it to a PutFile processor. What exactly is happening here? Where does the file go if it disappears from the local directory I had placed it in? Also, how can I get it back?
GetFile has a property called Keep Source File. If you had set it to true, the file would not be deleted after being copied from the Input Directory to the Content Repository; the default is false, which is why your files were deleted. You must also have set the success relationship for auto-termination, because otherwise GetFile won't run without any downstream connection. Your files have been discarded. Not sure whether this will work, but try the Data Provenance option and replay the content.
Have a look at these: the GetFile Official Doc and Replaying a FlowFile
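For reference, the two GetFile properties mentioned above look roughly like this in the processor's Properties tab (the directory path is just an example):

    Input Directory  : /data/in
    Keep Source File : true    (default is false, which deletes the source after pickup)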

AWS CloudWatch Agent not uploading old files

During the initial migration to AWS CloudWatch logging, I also want legacy log files to be synced. However, it seems that only the currently active file (i.e., the one still being updated) is synced. The old files are ignored even if they match the file name format.
So is there an easy way to upload legacy files?
Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Short answer: you should be able to upload all the files by merging them, or by creating a new [logstream] section for each file.
Log files in /var/log are usually archived periodically, for instance by logrotate. If the current active file is named abcd.log, then after a few days files will be created automatically with names like abcd.log.1, abcd.log.2...
Depending on your exact system and configuration, they can also be compressed automatically (abcd.log.1.gz, abcd.log.2.gz, ...).
The CloudWatch Logs documentation defines the file configuration parameter as such:
file
Specifies log files that you want to push to CloudWatch Logs. File can point to a specific file or multiple files (using wildcards such as /var/log/system.log*). Only the latest file is pushed to CloudWatch Logs based on file modification time.
Note: using a glob path with a star (*) will therefore not be sufficient to upload historical files.
Assuming that you have already configured a glob path, you could use the touch command sequentially on each of the historical files to trigger their upload (see the sketch after this list). Problems:
you would need to guess when the CloudWatch agent has noticed each file before proceeding to the next
you would need to temporarily pause writes to the current active file
zipped files are not supported, but you can decompress them manually
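A rough shell sketch of the touch approach under those caveats (file names follow the abcd.log example above; the sleep duration is a guess at the agent's polling interval):

    for f in /var/log/abcd.log.3 /var/log/abcd.log.2 /var/log/abcd.log.1; do
      [ -f "$f.gz" ] && gunzip "$f.gz"    # the agent cannot read gzipped files
      touch "$f"                          # make it the most recently modified match
      sleep 60                            # guess: let the agent notice and upload it
    done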
Alternatively, you could decompress and then aggregate all historical files into a single merged file. In the context of the first example, you could run cat abcd.log.* > abcd.log.merged. This newly created file would be detected by the CloudWatch agent (it matches the glob pattern), which would consider it the active file. Problem: the previous active file could be updated simultaneously and take the lead before CloudWatch notices your merged file. If this is a concern, you could simply create a new [logstream] config section dedicated to the historical file.
Alternatively, just decompress the historical files and then create a new [logstream] config section for each.
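A sketch of such a section, following the parameter names from the agent reference linked above (the group and stream names are placeholders):

    [abcd-historical-1]
    file = /var/log/abcd.log.1
    log_group_name = my-log-group
    log_stream_name = abcd-historical-1
    initial_position = start_of_file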
Please correct any bad assumptions that I made about your system.

IBM Worklight v5.0.6 Application Center - apk file upload fails

When attempting to upload our apk file, the server responds with simply
"File HelloWorld.apk file not uploaded"
Nothing is logged in trace.log in relation to this upload, so I am not able to see any log message to diagnose this further. How do you enable logging for this?
Is there a timeout, or a file upload size limit? If so, how/where do you change that? The HelloWorld.apk file size is 5.6 MB.
There is indeed a file size limit, but it is imposed by MySQL by default (1MB). If you are using MySQL 5.1 or 5.5 (5.6 is not supported in Worklight 5.0.x), follow these steps (a sample my.ini fragment follows them):
Locate the file my.ini belonging to your MySQL installation
In it, find the section [mysqld]
Underneath the section name, paste this: max_allowed_packet=1000M
Re-start the MySQL service
Re-deploy the .apk file
You may need to re-start the application server running Application Center as well.
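For reference, the edited fragment of my.ini would look like this (1000M is simply a generous value, as in the steps above):

    [mysqld]
    max_allowed_packet=1000M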

Oracle udump trc file size issue

My project uses an Oracle DB hosted on a Unix machine. The issue is that the trace files generated at the udump location contain loggers from my custom code as well (the custom code loggers come from Java callouts that are loaded into the DB with loadjava).
Now every time I use that module, the udump folder is flooded with 3 new trc files which contain the default Oracle logs as well as my custom code logs.
I want to disable the logs that are generated from my code.
So far I have tried writing a custom log4j.properties, loading it with loadjava and using it for my code; in that properties file I pointed the file and console handlers to a custom location on the Unix machine other than the udump location. But the custom logs still only appear at the udump location, and there are no logs at the new location I configured in the properties file.
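For concreteness, the properties file I tried looked roughly like this (a log4j 1.x style sketch; the path and logger levels are placeholders):

    log4j.rootLogger=INFO, FILE
    log4j.appender.FILE=org.apache.log4j.FileAppender
    log4j.appender.FILE.File=/home/myapp/logs/callouts.log
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n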
I also tried setting logging.trace=false in the logging.properties file of the Oracle JVM.
I have checked a few SQL queries which can disable session trace; they identify around 70 sessions. I just want to disable the Java logs, so I would like to know whether it is possible to find the session that my Java logs use and disable the trace for it.
I am using Oracle 9i and Java 1.4.
I need to stop the custom logs from going to the udump location. The solution should also be generally applicable, as my application is deployed to multiple environments (test, stage, prod).
Any hint would be very helpful.

cPanel File Manager Extract button disabled

Today I found that extraction of zip files seems to be disabled in the cPanel File Manager. Yesterday it was working fine and I had extracted one file. I had uploaded a big file of size 472MB, but then the upload got cancelled, and since then I have been facing this problem.
What is the reason behind this strange behaviour of the cPanel File Manager?
And what is the solution to this problem?
I really want the Extract button enabled, because uploading to the server using FileZilla is very time consuming.
By default the icon is grayed out if no file is selected, so make sure that's not the case.
If a file is selected and the button is still grayed out, then you might consider checking with your hosting provider; if you're the administrator of the server, then go ahead and run
/scripts/upcp
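If the updater reports that cPanel is already current, it can be re-run with the --force flag (a standard upcp option, but verify it against your cPanel version):

    /scripts/upcp --force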
You just need to select the file which needs to be extracted. The Extract button will automatically appear if you select any zip file in your directory.