This is my first question. I don't know how to configure error.log so that it behaves in the following two ways:
The log generated on the current day is written to a single fixed-name log file, e.g. error.log. This file contains only the current day's entries.
The previous day's log is backed up to its own log file. For example, if yesterday was 11/22/2013, yesterday's error log is named 11_22_2013.error.log.
You can make use of the rotatelogs command to rotate the Apache logs. Try putting the following in a crontab.
crontab -e
Add the following there.
/usr/local/apache/bin/rotatelogs /path_to_apachelogs.%Y.%m.%d 86400
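Note that rotatelogs reads the log data on stdin, so it is more commonly wired up as a piped log directly in the Apache configuration rather than run on its own. A minimal sketch (the paths and the daily 86400-second interval are assumptions you should adapt to your setup):
ErrorLog "|/usr/local/apache/bin/rotatelogs /usr/local/apache/logs/error.%m_%d_%Y.log 86400"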
/usr/local/apache/bin/rotatelogs is the path on a cPanel server. You need to give the full path for it to work. You can use the following command to find the path.
which rotatelogs
If this does not show any output, try finding the path with the locate command.
You can find further details at the following link.
I am using LoadRunner 2020 Community Edition. I have created a script in TruClient, and when trying to replay it in Develop Script mode I get the error 'replay failed to start see vugen log for more details'.
'logfile.log' was empty and 'mdrv.log' contained the details of the last replay when I checked. Are those the files VuGen is referring to? What is the solution for this?
I faced the same issue, but was able to fix it as follows.
When I compiled the script, I did not see any errors in the output, so I checked the "Output -> Chromium(IE) - Interactive Replay" log, which had several errors related to parameter files. Once I fixed those, I was able to replay the script.
Output Log Path
I have been trying to suppress the logs printed to the console while querying in Hive, but they still show up.
If you are opening the hive console by typing
> hive
in your terminal and then writing queries, you can solve this by simply using
> hive -S
This basically means that you are starting Hive in silent mode.
Hope that helps.
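Silent mode also works for one-off queries with the -e flag; for example (the table name here is just a placeholder):
> hive -S -e "SELECT COUNT(*) FROM your_table;"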
You could increase the polling interval to minutes or hours:
SET hive.exec.counters.pull.interval=[millis];
The default is 1000 milliseconds, but you can increase it to anything you like. That should decrease the number of logs written to stdout.
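For example, to poll the counters only once a minute (the value is just an illustration; pick whatever interval suits you):
SET hive.exec.counters.pull.interval=60000;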
If you don't want any logs on the console while starting the shell, you can set the hive.root.logger property:
$HIVE_HOME/bin/hive --hiveconf hive.root.logger=INFO,DRFA
hive.root.logger specifies the logging level as well as the log destination. Specifying console as the target sends the logs to standard error (instead of the log file).
If you want to see only ERROR messages on the console, you can use this command:
$HIVE_HOME/bin/hive --hiveconf hive.root.logger=ERROR,console
Start Hive in silent mode using
$ hive -S
then set the logger level to ERROR, which prevents warnings and info messages from being printed.
hive> set logger.PerfLogger.level = ERROR;
If there is "SLF4J: Class path contains multiple SLF4J bindings." in your log, it means that there are multiple log4j jars (different versions, different behaviors) in the class path
I don't know log4j in detail, but following the pattern of the Hadoop configuration files, perform the following steps:
cd $HIVE_HOME/conf
cat > log4j.properties <<EOL
log4j.rootLogger=WARN, CA
log4j.appender.CA=org.apache.log4j.ConsoleAppender
log4j.appender.CA.layout=org.apache.log4j.PatternLayout
log4j.appender.CA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
EOL
After restarting Hive (Apache Hive 3.1.2 in my case), the log level is set to WARN. This may not necessarily work in every setup, but you can try it.
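If the log4j.properties file is ignored, that is likely because recent Hive releases (including 3.1.2) use Log4j 2 and read conf/hive-log4j2.properties instead. A rough equivalent in Log4j 2 properties syntax would be the following; this is only a sketch and not verified on every version:
cd $HIVE_HOME/conf
cat > hive-log4j2.properties <<EOL
status = WARN
appenders = console
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %-4r [%t] %-5p %c %x - %m%n
rootLogger.level = WARN
rootLogger.appenderRef.console.ref = console
EOL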
I'm getting the following exception while trying to start Hive on Ubuntu 14.04 LTS: Caused by: java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details. The Hadoop installation is correct and works fine. Can anyone tell me what the problem is?
It is because you're not in the same folder where you created your metastore. I was facing the same problem because I was in my main user folder. When I changed from the main user folder to the hduser folder, my Hive started working.
See the mistake
I tried to find the xml file but it was not there, so I searched and found where it was.
Similar to #dk14: in my case I was in a folder on which I had no write permission as my user; after moving to another directory it worked fine.
The reason for the above error is that the user you are logged in as doesn't have permission to write to that particular directory, i.e. the directory from which you are running the schematool command.
For example, my Apache Hive setup was in /opt/apache-hive-3.1.2-bin, so I ran the command:
sudo chown -R hadoopusr /opt/apache-hive-3.1.2-bin/
It is happening because you are in a different folder from the one where your Hive is installed.
So first change directory to the folder where your Hive is installed, and after that try to run hive once again.
Hive should then work properly.
Best of luck.
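For example (the install path is only an assumption; use wherever your Hive actually lives):
cd /opt/apache-hive-3.1.2-bin
bin/hive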
After spending some (a lot of) time on this, I found that the issue was with creating the metastore_db directory: it already existed inside the DERBY_HOME/bin path and I didn't have admin access to it. You can either:
delete that folder using admin rights, or
open hive-site.xml inside the HIVE_HOME/conf path, check the connection string there, and change the database name to something else; that worked for me.
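For reference, the Derby connection string lives in the javax.jdo.option.ConnectionURL property of hive-site.xml; a sketch of the part to change is below (the database path here is only an assumption, point it somewhere your user can write):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hduser/metastore_db;create=true</value>
</property>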
So I am trying to use Trac as a standalone bug tracker. I've generated a user and password using the script on this page. The digest.txt file is in the ~/.foo-trac/conf/ directory. Its contents look like this:
montreal:FOO:904fa5b01944434358e48467fbf5203c
Running this command:
tracd -p 8000 --auth="foof,.foo-trac/conf/digest.txt,FOO" ~/.foo-trac/
I get no errors but still can't log in. A strange detail is that tracd shows this line when I click log in:
127.0.0.1 - - [16/Oct/2014 03:47:53] "GET /.foo-trac/login HTTP/1.1" 500 -
What's going on?
UPD
Now I am trying another way: using the basic auth described on this page.
I've created a new environment with this command: trac-admin /home/montreal/.trac initenv. At the prompt I gave my new project the name Foo.
Then I created a new user by running this command: sudo htpasswd -c /home/montreal/.trac/.htpasswd username and entered a password. My .htpasswd file looks like this:
username:$apr1$bLbNsCx/$vbVXn5gn6HG.hJvvq/SaD1
Now I'm running tracd with this command and getting the same result:
tracd -p 8000 --basic-auth="Foo,/home/montreal/.trac/.htpasswd," /home/montreal/.trac
The link says that the first argument of --basic-auth should be projectdirname, but there is no Foo directory in /home/montreal/.trac.
It looks like I've got the /fullpath/environmentname/.htpasswd argument right.
But how can I get the realmname argument? Maybe that is what does the trick. Maybe some tracd logs would be helpful, but the log folder is empty and I don't know where else to look.
I need this bloody bug-tracker.
Don't use relative paths (~/.foo-trac/) but absolute ones.
The same applies to the auth file path, which is not even relative like the path to your Trac environment but is simply wrong: its absolute path is not /.foo-trac/conf/digest.txt, yet that is what tracd is picking up from the command line, as you can see in the "strange" log line.
Enable Trac DEBUG logging in .foo-trac|.trac/conf/trac.ini as advised in the wiki documentation on this topic.
The first argument of --basic-auth should be projectdirname, i.e. /home/montreal/.trac itself, commonly referred to as the Trac environment directory, nothing else.
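Putting that together, a corrected invocation would look roughly like this. It is only a sketch: it assumes projectdirname is the base name of the environment directory (.trac here) and that any realm string is acceptable for basic auth; adjust both if your tracd version expects something different:
tracd -p 8000 --basic-auth=".trac,/home/montreal/.trac/.htpasswd,Foo" /home/montreal/.trac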
I have my server perform a yum update every night. Last night, it updated suPHP to the newest version:
Oct 16 01:25:43 Updated: mod_suphp-0.7.1-1.el5.art.x86_64
This update has caused my website to throw a 500 internal server error. From what I've been able to find, I should only have to change the last two lines in my suphp.conf file to include quotes, which I did. But after restarting Apache, I still get a 500 error. None of my files or directories are set to 777, so that's not the issue either. Does anybody know what has changed in the newest suPHP release that would cause my config to no longer work? Thanks. Here is what my conf file looks like now:
[global]
;Path to logfile
logfile=/var/log/suphp.log
;Loglevel
loglevel=warn
;User Apache is running as
webserver_user=apache
;Path all scripts have to be in
docroot=/
;Path to chroot() to before executing script
;chroot=/mychroot
; Security options
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false
;Check whether script is within DOCUMENT_ROOT
check_vhost_docroot=true
;Send minor error messages to browser
errors_to_browser=true
;PATH environment variable
env_path=/bin:/usr/bin
;Umask to set, specify in octal notation
umask=0022
; Minimum UID
min_uid=500
; Minimum GID
min_gid=500
; Use correct permissions for mod_userdir sites
handle_userdir=true
[handlers]
;Handler for php-scripts
;x-httpd-php=php:/usr/bin/php-cgi
php5-script="php:/usr/bin/php-cgi"
;Handler for CGI-scripts
x-suphp-cgi="execute:!self"
I am using the Atomic Rocket Turtle repos.
I fixed it. The following line is deprecated, so I just had to delete it:
handle_userdir=true