Hive command line (CLI) history

I am not seeing any history file, nor am I able to retrieve any history from past CLI sessions at the command prompt.
Is there a setting to enable this?

By default, Hive saves the last 100,000 lines of command history in the file $HOME/.hivehistory.
Source: Programming Hive -> Chapter 2: Getting Started -> Command History
For me .hivehistory is available. Hope this helps.
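A quick way to check from the shell, assuming the default location described above:
ls -a ~ | grep hivehistory   # the file is hidden, so a plain ls will not show it
tail -n 20 ~/.hivehistory    # print the last 20 saved command lines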

Type "cat .hivehistory" in the terminal.
If you are not able to see it, first check your location with "pwd".
If you are in your home directory, check for the hidden file with "ls -a".
If you are not in your home directory, go back to it and then type "cat .hivehistory" in the terminal.

Log in to the terminal and type "cat .beeline".
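If you are using Beeline rather than the old Hive CLI, the history is typically kept under a hidden .beeline directory in your home folder; a minimal check (the exact file name is an assumption on my part):
cat ~/.beeline/history   # Beeline command history on typical setups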

Related

Clear Redis-Cli history on Windows machine

Every time I open the Redis-Cli tool, I can see my past entered commands, including passwords.
How can I clear the history of Redis-Cli?
If you don't want the history to be kept in the %HOMEDRIVE%%HOMEPATH%\.rediscli_history file at all, you can set the environment variable REDISCLI_HISTFILE=/dev/null and it will prevent the history from being saved.
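From a Windows command prompt that could look like this (a sketch; set affects only the current session, while setx makes the value persistent for new sessions):
rem disable history for the current session only
set REDISCLI_HISTFILE=/dev/null
rem or make the setting persistent for future sessions
setx REDISCLI_HISTFILE /dev/null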
To clear the Redis-Cli history, follow the instructions below:
1- Go to the folder "C:\Users\[username]"
2- Clear the content of the file ".rediscli_history"
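From a command prompt, the same thing can be done in one step (a sketch, assuming the default history location mentioned above):
rem truncate the history file without deleting it
type NUL > "%HOMEDRIVE%%HOMEPATH%\.rediscli_history"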

Can't start Linux "screen" with logging to a specific output file

I have the problem that I want to enable logging of a screen session at its start, which then saves the log to a specific file.
What I have so far is:
screen -AmdSL cod2war /home/cod2server/scripts/service_28969.sh
where service_28969.sh is a shell script that calls other scripts which produce output.
I started multiple of these screen sessions with different names, for example:
screen -AmdSL cod2sd /home/cod2server/scripts/service_28962.sh
-L enables logging, as screen's man page says, and will save the output in a file called 'screenlog.0'. Since I have multiple of those screens, only one of them has its output saved in that log file (I can't find other 'screenlog.*' files in that folder).
I thought I would use the -Logfile "file" option from the same man page, but it doesn't work for me and I can't find out what I'm doing wrong.
screen -Logfile cod2sd.log -AmdS cod2sd /home/u268450/cod2server/scripts/service_28962.sh
will produce the following error:
Use: screen [-opts] [cmd [args]]
or: screen -r [host.tty]
Options:
[...]
Error: Unknown option Logfile
and
screen -AmdS cod2sd /home/u268450/cod2server/scripts/service_28962.sh -Logfile cod2sd.log
will run without any error and start the screen, but without any logging at all.
You can specify a logfile from within the default startup ~/.screenrc file using a line like
logfile mylog.log
To do this from the command line you can create a file mystartup to hold the above line, then use option -c mystartup to tell screen to read this file for setup instead of the default. If you also need to have ~/.screenrc read, you can add the source command to your startup file. The final result would look something like:
echo 'logfile mylog.log
source ~/.screenrc' >mystartup
screen -AmdSL cod2war -c mystartup /home/cod2server/scripts/service_28969.sh
This works for me:
screen -L -Logfile /Logs/Screen/`date +%Y%m%d`_screen.log
The configs I checked:
screen version 4.08.00 (GNU) 05-Feb-20 on FreeBSD 12.2
and
version 4.06.02 (GNU) 23-Oct-17 on Debian GNU/Linux 10 (buster)
and
version 4.00.03 (FAU) 23-Oct-06 on Mac OS X 10.9.5.
I just ran into this error myself and found a solution that worked with my Python file. I wanted to share it for anyone else who might run into this issue:
screen -L -Logfile LOGFILENAME.LOG -dmS SCREENNAME python3 ./FILENAME.PY
I have no idea if this is the 'correct' way but it works.
-L enables logging
-Logfile LOGFILENAME.LOG declares what to call the log file and its format
-dmS SCREENNAME: -dm runs it in detached mode and -S allows you to name the session
python3 ./FILENAME.PY is my script in this case, but I assume any other script would work here
I tried different orderings of these options, and this was the only way I managed to get them all to run without issues. Hope this helps.
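Applying that ordering to the commands from the original question would look something like this (a sketch; it assumes a screen build that supports -Logfile, and the log directory is just an example):
# the log directory below is hypothetical; point -Logfile wherever each session's log should go
screen -L -Logfile /home/cod2server/logs/cod2sd.log -AmdS cod2sd /home/cod2server/scripts/service_28962.sh
screen -L -Logfile /home/cod2server/logs/cod2war.log -AmdS cod2war /home/cod2server/scripts/service_28969.sh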

Save database on external hard drive

I am creating some databases using PostgreSQL, but I want to save them on an external hard drive due to a lack of disk space on my computer.
How can I do this?
You can store the database on another disk by specifying it as the data_directory setting. You need to specify this at startup and it will apply to all databases.
You can put it in postgresql.conf:
data_directory = '/volume/path/'
Or, specify it on the command line when you start PostgreSQL:
postgres -c data_directory='/volume/path/'
Reference: 18.2. File Locations
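After restarting the server, you can double-check which directory it is actually using (a quick verification sketch, assuming you can connect as the postgres user):
# prints the data directory the running server was started with
sudo -u postgres psql -c "SHOW data_directory;"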
STEP 1: If PostgreSQL is running, stop it:
sudo systemctl stop postgresql
STEP 2: Get the path to access your hard drive.
(If on Linux) Find and mount your hard drive:
# Retrieve your device's name with:
sudo fdisk -l
# Then mount your device
sudo mount /dev/DEVICE_NAME YOUR_HD_DIR_PATH
STEP 3: Copy the existing database directory to the new location (on your hard drive) with rsync:
sudo rsync -av /var/lib/postgresql YOUR_HD_DIR_PATH
Then rename the previous Postgres main directory with a .bak extension to prevent conflicts:
sudo mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main.bak
Note: my Postgres version was 11. Replace it in the paths with your own version.
STEP 4: Edit the Postgres configuration file:
sudo nano /etc/postgresql/11/main/postgresql.conf
Change the data_directory line to:
data_directory = 'YOUR_HD_DIR_PATH/postgresql/11/main'
STEP 5: Restart Postgres and check that everything is working:
sudo systemctl start postgresql
pg_lsclusters
The output should show the status as 'online':
Ver Cluster Port Status Owner Data directory Log file
11 main 5432 online postgres YOUR_HD_DIR_PATH/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
Finally, you can access your PostgreSQL with:
sudo -u postgres psql
You can try following the walkthrough here. This worked well for me and is similar to @Antiez's answer.
Currently, I am trying to do the same, and the only conflict I'm having at the moment is that there seems to be an issue with PostgreSQL's incremental backup and point-in-time recovery processes. I think it has something to do with folder permissions. If I try uploading a ~30MB CSV to the Postgres DB, it crashes and the server will not start again because files cannot be written to the pg_wal directory. The only file in that directory is 000000010000000000000001, and it does not move on to 000000010000000000000002 etc. while writing to a new table.
My stackoverflow post looking for a solution to this issue can be found here.
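If the pg_wal problem really is caused by folder permissions (an assumption, not a confirmed fix), one thing worth checking is that the relocated data directory is still owned by the postgres user and keeps its restrictive mode:
# ownership and mode should match what initdb originally created; paths follow the steps above
sudo chown -R postgres:postgres YOUR_HD_DIR_PATH/postgresql
sudo chmod 700 YOUR_HD_DIR_PATH/postgresql/11/main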

Running a Pentaho transformation from Pan fails

I get a "not recognised as an internal or external command" error message when I run the following command:
C:\pdi-ce>Pan.bat /file=c:\pdi_labs\matches.ktr usa_201210.txt
Pentaho 4.4.0 Community Edition is installed in:
C:\pdi-ce
The transformation and files are saved in:
C:\pdi_labs\
Any hint on how to run the transformation from Pan? I am able to run it from Spoon.
Regards
Generally, Pentaho Kettle creates another wrapper folder, "data-integration", inside the installation directory. Check whether Pan.bat exists in that directory; if not, cd to the directory which has it.
Pan.bat -file="c:\pdi_labs\matches.ktr" > usa_201210.txt
Assuming usa_201210.txt is the log file.
I would recommend starting Pan.bat in a different shell and making it wait until it completes, as below.
start /wait cmd "c:\pdi-ce\Pan.bat -file=c:\pdi_labs\matches.ktr > usa_201210.txt"
cd /D C:\pdi-ce
Pan.bat /file:"c:\pdi_labs\matches.ktr" /usa_201210.txt
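Putting the suggestions above together, a minimal sketch for a Windows command prompt could be (the data-integration subfolder and the log redirect are assumptions taken from the earlier answers):
rem change to the folder that actually contains Pan.bat
cd /D C:\pdi-ce\data-integration
rem run the transformation and capture its log output in a text file
Pan.bat /file:"c:\pdi_labs\matches.ktr" > usa_201210.txt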

How to use iTunes Connect Transporter

Is there anyone who can explain, to someone who doesn't know how to use Terminal, the commands needed to use Transporter for iTunes Connect?
I tried to follow the guide but with no results.
These are my steps so far:
I put this command in the terminal:
export TRANSPORTER_HOME=`xcode-select --print-path`/../
Applications/Application\ Loader.app/Contents/MacOS/itms/bin
and my terminal changed to this:
~ myname$ Applications/Application\ Loader.app/Contents/MacOS/itms/bin
so I guess that now I am in the Transporter folder...
Now I want to retrieve my app's current metadata using lookup mode, and I tried this command:
$ iTMSTransporter -m lookupMetadata -u [myname#gmail.com] -p [mypassword] -vendor_id [id999999999] -
destination [Applications/Application\ Loader.app/Contents/MacOS/itms/bin]
but I get this:
$ iTMSTransporter -m lookupMetadata -u [myname#gmail.com] -p [mypassword] -vendor_id [id999999999] -
-bash: Applications/Application Loader.app/Contents/MacOS/itms/bin$: No such file or directory
I assume I'm writing the destination in the wrong way...
So how should I write that command?
And also, when I have to upload my edited file, what should I put?
Thanks a lot for any help with this issue.
Start by putting the export command into a single line.
export TRANSPORTER_HOME=`xcode-select --print-path`/../Applications/Application\ Loader.app/Contents/MacOS/itms/bin
Then you have to use the full path to the iTMSTransporter binary. You can use the variable you just defined for this.
"$TRANSPORTER_HOME/iTMSTransporter" -m lookupMetadata -u ... -vendor_id ... -destination ~/myapp
The destination is the directory where the app data will be put. ~ means your user directory, so if your username is blue, ~/myapp means /Users/blue/myapp.
Don't use Xcode's directory for this.
I would recommend NOT specifying your password with the -p parameter. You don't want your password to appear in bash_history. If you don't specify the password, you will be asked for it.
Again: make sure that this is on one line. You must not spread the command over more than one line. Unfortunately, if you copy and paste from the PDF document, you get a multi-line command that won't work.
I suggest opening a text editor, pasting the command from the PDF into it, and formatting the command so it is on a single line.
Then go to https://bugreport.apple.com and file a bug about the crappy documentation of iTMSTransporter
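For reference, here are the two commands from this answer combined, each on a single line (the e-mail address and vendor id are placeholders, ~/myapp is just an example destination, and with -p omitted you will be prompted for the password):
export TRANSPORTER_HOME=`xcode-select --print-path`/../Applications/Application\ Loader.app/Contents/MacOS/itms/bin
# myname@example.com and id999999999 are placeholders; replace them with your own values
"$TRANSPORTER_HOME/iTMSTransporter" -m lookupMetadata -u myname@example.com -vendor_id id999999999 -destination ~/myapp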