Oracle SQL Developer: sharing configuration via Dropbox

I would like to share my Oracle SQL Developer configuration across several computers using Dropbox.
How can I do this?

In case anyone comes here, like me, looking for the location of the user-configured options: they are hiding here:
%appdata%\SQL Developer\
This is useful to know when copying your preferences to a new computer. If you are looking for the connection settings, look for connections.xml in that directory. There are also some other configuration files here that you may need:
sqldeveloper.conf – <sqldeveloper dir>\sqldeveloper\bin\
ide.conf – <sqldeveloper dir>\ide\bin\
This is for Oracle SQL Developer 3.
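For example, a minimal sketch of seeding Dropbox with those settings on Windows, assuming Git Bash (where cygpath converts the Windows %APPDATA% path to a Unix-style one) and a Dropbox folder at ~/Dropbox; the Dropbox-side directory name is my own choice:
# run in Git Bash; copies the SQL Developer user settings into Dropbox
mkdir -p ~/Dropbox/SQLDeveloper
cp -rp "$(cygpath "$APPDATA")/SQL Developer/." ~/Dropbox/SQLDeveloper/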

Here's what I did.
#!/bin/bash
# share sqldeveloper config via dropbox
# this is for sqldeveloper 1.5.4, change your paths as necessary
# strace or dtruss sqldeveloper to see what config files are accessed
ITEMS="
o.ide.11.1.1.0.22.49.48/preferences.xml
o.ide.11.1.1.0.22.49.48/settings.xml
o.jdeveloper.cvs.11.1.1.0.22.49.48/preferences.xml
o.jdeveloper.subversion.11.1.1.0.22.49.48/preferences.xml
o.jdeveloper.vcs.11.1.1.0.22.49.48/preferences.xml
o.sqldeveloper.11.1.1.59.40/preferences.xml
o.sqldeveloper.11.1.1.59.40/product-preferences.xml
"
INST=~/Library/Application\ Support/SQL\ Developer/system1.5.4.59.40
DROP=~/Dropbox/Library/SQL\ Developer/system1.5.4.59.40
# note, you can zap your configuration if you are not careful.
# remove these exit lines when you're sure you understand what's
# going on.
exit
# copy from real folder to dropbox
for i in $ITEMS; do
  echo "uncomment the two lines below and run this once to bootstrap your dropbox"
  #mkdir -p "$(dirname "$DROP/$i")"
  #cp -p "$INST/$i" "$DROP/$i"
done
exit
# link from dropbox to real folder
for i in $ITEMS; do
rm "$INST/$i"
ln -s "$DROP/$i" "$INST/$i"
done
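Intended usage, to spell it out: run the script once with the first exit removed and the mkdir/cp lines uncommented to seed Dropbox, then re-comment that loop, remove the second exit, and run it again on each machine so the live config files become symlinks into Dropbox.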

For simple sharing of the SQLDeveloper config via Dropbox, the easiest way on Mac OS X is:
cd ~/Dropbox
mkdir -p Library/SQLDeveloper
cp -rp ~/.sqldeveloper/* Library/SQLDeveloper/
mv ~/.sqldeveloper ~/remove_when_sure_sqldeveloper
ln -sf $PWD/Library/SQLDeveloper ~/.sqldeveloper
Do this on your most important machine; on each other machine you want to share with, only do:
cd ~/Dropbox
mv ~/.sqldeveloper ~/remove_when_sure_sqldeveloper
ln -sf $PWD/Library/SQLDeveloper ~/.sqldeveloper
This works like a charm.
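A quick sanity check on each machine, since the whole scheme hinges on the symlink:
ls -l ~/.sqldeveloper
should show the directory pointing into ~/Dropbox/Library/SQLDeveloper.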

Related

How to copy file from server to local using ssh and sudo su?

Somewhat related to: Copying files from server to local computer using SSH
When debugging on DEV server I can see logs with
# Bash for Windows
ssh username@ip
# On server as username
sudo su
# On server as su
cat path/to/log.file
The problem is that while every line of the file is indeed printed out, the terminal seems to have a scrollback limit, and I can only see the last so-many lines after the printing is done.
If there is a better solution, please suggest it; otherwise, how do I copy log.file to my computer?
Note: I don't have a password for my username, because the user is created with echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/$USER.
After sudo su, copy the file to the /tmp folder on the server:
cp path/to/log.file /tmp/log.file
If your user cannot read the copy, make it readable first (chmod 644 /tmp/log.file). After that, the standard command should work, run from your local machine:
scp username@ip:/tmp/log.file log.file
log.file is now in the current directory (echo $PWD).
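Since the user has passwordless sudo, here is a sketch that avoids the /tmp copy entirely (run from your local machine; the path is the same placeholder as above):
ssh username@ip "sudo cat path/to/log.file" > log.file
And for the original scrolling problem, paging the file on the server sidesteps the scrollback limit:
sudo less path/to/log.file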

How to use specific inodes/directories that come under df -i to store files

I'm working on a research project in an ML lab, running jobs on their machines remotely over ssh. The machines are Linux, and my home laptop is a Mac. The machines' main storage is really small, so I'm supposed to use two specific directories that show up under df -i (the inode listing). But how do I direct files to be stored there?
Appending the directory when I use scp -r doesn't work. My command to send directories from my local computer to the server is scp -r /Local directory pwd/ username@server:/home/username, and sticking the inode directory onto the end of it (so the new command would be scp -r /Local directory pwd/ username@server:/home/username/inode directory) doesn't work.
Would appreciate help.
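One likely cause, though this is an assumption since no error output is shown: paths containing spaces must be quoted (or escaped), and the remote target directory must already exist. A sketch with hypothetical names:
ssh username@server 'mkdir -p "/home/username/inode directory"'
scp -r "/Users/me/Local directory" username@server:"/home/username/inode\ directory/"
Note that classic scp passes the remote path through the remote shell, so spaces on the remote side need their own escaping.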

iTerm2: quick download over SSH using CMD+click

iTerm2 allows you to click on a link (CMD+click) and open it quickly. However, when working over SSH, this doesn't work. Is it possible to enable this functionality, so that I can CMD+click a file, and it will automatically download into a folder on my local machine?
Thanks!
This is actually possible with Shell Integration installed. Note that Shell Integration will need to be installed on any server that you are ssh'ing into, not just on your local machine. From this link:
iTerm has recently introduced a feature called Shell Integration. Using this feature, we can upload and download files conveniently, directly from iTerm2. Dragging a file into the window while pressing the Option key uploads the file over the remote ssh connection. Right-clicking on a file in ls output brings up a context menu that includes downloading the file.
Click “iTerm2->Install Shell Integration” when sshing into the remote server.
Ensure the server has a correct FQDN as hostname and can be connected through this hostname. (You can use hostname -f to check it)
If you’re using private key authentication, then you should have id_rsa in your .ssh directory. However, you should also put id_rsa.pub in your .ssh directory to use this feature.
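For reference, the menu item essentially runs a curl one-liner in the remote shell; at the time of writing iTerm2 documents it as the following (verify the URL against the current iTerm2 docs before piping anything to bash):
curl -L https://iterm2.com/shell_integration/install_shell_integration.sh | bash
Run it once on each remote server, then start a fresh login shell.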
Sorry for the late answer, but I was just trying to do the same thing and came across your question. Thought I would post my findings once I found a solution.
I've not had much success with ⌘+Clicking to download via SCP in iTerm2 because I have a complex set of rules involving jump hosts in ~/.ssh/config.
But I have found an elegant work around: a shell function which writes to STDOUT to trigger iTerm2 into capturing the output and saving it as a file!
I keep the following snippet (Toolbelt → Snippets) which I execute to define a command download:
alias download="bash <(base64 -d <<<'IyEvYmluL2Jhc2gKaWYgWyAkIyAtbHQgMSBdOyB0aGVuCiAgZWNobyAiVXNhZ2U6ICQwIGZpbGUg
Li4uIgogIGV4aXQgMQpmaQpmb3IgZmlsZW5hbWUgaW4gIiRAIgpkbwogIGlmIFsgISAtciAiJGZp
bGVuYW1lIiBdIDsgdGhlbgogICAgZWNobyBGaWxlICRmaWxlbmFtZSBkb2VzIG5vdCBleGlzdCBv
ciBpcyBub3QgcmVhZGFibGUuCiAgICBjb250aW51ZQogIGZpCgogIGZpbGVuYW1lNjQ9JChlY2hv
IC1uICIkZmlsZW5hbWUiIHwgYmFzZTY0KQogIGZpbGVzaXplPSggJCh3YyAtYyAiJHtmaWxlbmFt
ZX0iKSApCiAgcHJpbnRmICJcMDMzXTEzMzc7RmlsZT1uYW1lPSR7ZmlsZW5hbWU2NH07c2l6ZT0k
e2ZpbGVzaXplWzBdfToiCiAgYmFzZTY0IDwgIiRmaWxlbmFtZSIKICBwcmludGYgJ1xhJwpkb25l
Cg==')"
The base64-encoded string decodes to:
#!/bin/bash
if [ $# -lt 1 ]; then
  echo "Usage: $0 file ..."
  exit 1
fi
for filename in "$@"
do
  if [ ! -r "$filename" ] ; then
    echo File $filename does not exist or is not readable.
    continue
  fi

  filename64=$(echo -n "$filename" | base64)
  filesize=( $(wc -c "${filename}") )
  printf "\033]1337;File=name=${filename64};size=${filesize[0]}:"
  base64 < "$filename"
  printf '\a'
done
This relies on iTerm2's inline file-download escape sequence (the OSC 1337 File= sequence emitted by the printf above).
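A minimal usage sketch (the filename here is hypothetical): with the alias defined in the remote shell, run
download /var/log/app.log
and iTerm2 captures the emitted base64 stream and offers to save the file on the local machine.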
Sample session showing the notifications from iTerm2:

scp command - transfer folder over ssh

I have an Arduino Yun and want to set up the server for the Yun.
What I want is to copy a folder containing a .py file and an index.html to my Yun.
I used the Mac terminal for this operation; the command looks like this:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
The terminal then asked for the password, and after I typed it, it shows:
scp: /mnt/sda1/LobsterHeartRate: Not a directory
I didn't type /mnt/sda1/LobsterHeartRate, so why does it show this error?
Your command
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
requires that the remote directory /mnt/sda1 exists. That appears not to be true in your case; check it using ssh root@192.168.240.1 ls /mnt/sda1.
scp is a simple tool: it does not let you rename directories on the fly, and the target directory must exist. You might try
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/
ssh root@192.168.240.1 mv /mnt/LobsterHeartRate /mnt/sda1
or so, if that suits your needs. But for copying more files, rsync is usually more suitable; check its manual page and give it a try next time.
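For example, a minimal rsync sketch for the same transfer, assuming rsync is installed on both ends (it may not be on a stock Yun):
rsync -av /Users/gudi/Desktop/LobsterHeartRate/ root@192.168.240.1:/mnt/sda1/LobsterHeartRate/
Unlike scp, rsync creates the final target directory for you and can resume an interrupted transfer.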
As @Jens Höpken notes, your post is a bit sparse. But reading between the lines, I suspect that LobsterHeartRate is a DIRECTORY on your local system but a FILE named LobsterHeartRate on your target system. This might be happening right at the top of the directory tree, or perhaps you have directories/files of the same name further down the tree. scp -rv might help resolve any confusion here.
Beware: scp -r resolves symbolic links. If you want to preserve symlinks you need to do something else. For historic reasons I use the following, though cpio with a find front-end opens up interesting possibilities for fine-grained file selections.
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -xf -'
For a safe "dry run" you could change the -xf to a -tf. The && chains are required to prevent bad things from happening if any prior command fails.
Disclaimer: any debugging is left as an exercise for the student.

Is it possible to make SCP ignore symbolic links during copy?

I need to reinstall one of our servers, and as a precaution, I want to move /home, /etc, /opt, and /Services to a backup server.
However, I have a problem: because of plenty of symbolic links a lot of files are copied multiple times.
Is it possible to make scp ignore the symbolic links (or actually to copy link as a link not as a directory or file)? If not, is there another way to do it?
I knew it was possible; I had just picked the wrong tool. I did it with rsync:
rsync --progress -avhe ssh /usr/local/ XXX.XXX.XXX.XXX:/BackUp/usr/local/
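Note that -a copies symlinks as symlinks (it implies -l), which is what the question asks for. If you instead want to skip symlinks entirely, rsync's --no-OPTION prefix turns that off; a sketch based on the same command:
rsync --progress -avhe ssh --no-links /usr/local/ XXX.XXX.XXX.XXX:/BackUp/usr/local/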
I found that the rsync method did not work for me; however, I found an alternative that did on this website (www.docstore.mik.ua/orelly), specifically section 7.5.3 of O'Reilly's SSH: The Secure Shell, The Definitive Guide:
7.5.3. Recursive Copy of Directories
...
Although scp can copy directories, it isn't necessarily the best method. If your directory contains hard links or soft links, they won't be duplicated. Links are copied as plain files (the link targets), and worse, circular directory links cause scp1 to loop indefinitely. (scp2 detects symbolic links and copies their targets instead.) Other types of special files, such as named pipes, also aren't copied correctly. A better solution is to use tar, which handles special files correctly, and send it to the remote machine to be untarred via SSH:
$ tar cf - /usr/local/bin | ssh server.example.com tar xf -
Using tar over ssh as both sender and receiver does the trick as well:
cd "$DEST_DIR"
ssh user@remote-host "cd $REMOTE_SRC_DIR; tar cf - ./" | tar xvf -
One solution is to use a shell pipe. I have a situation where I have some *.gz files, plus symbolic links generated by some software that point to those same *.gz files under slightly shorter names. If I simply use scp, the symbolic links are copied as regular files, resulting in duplicates. I know rsync can ignore symbolic links, but my gz files are not compressed with rsyncable options, and rsync is very slow copying them. So I simply use the following command to copy over the files:
find . -type f -exec scp {} target_host:/directory/name/data \;
The -type f test matches only regular files and ignores symbolic links. Run this command on the source host. Hope this helps someone in my situation; let me know if I missed anything.
A one-liner that can be executed on the client to copy a folder from the server using tar + ssh:
ssh user@<Server IP> 'mkdir -p <Remote source directory>; cd <Remote source directory>; tar cf - ./' | tar xf - -C <Local destination directory>
Note: the mkdir is a must; if the remote directory did not exist and the cd silently failed, the command would simply tar up the entire home of the remote user and extract it on the client.
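A concrete sketch with hypothetical names, pulling /data/project from the server into ./project_copy on the client:
mkdir -p project_copy
ssh alice@203.0.113.5 'cd /data/project && tar cf - ./' | tar xf - -C project_copy
Here the && guards against the failure mode described above: if the cd fails, tar never runs.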