I want to connect my SQL Developer to the Oracle Vision demo database. According to an article I read, I have to run ssh opc@158.101.151.204 -i .ssh/id_rsa -L 1521:localhost:1521, but it gave me an error: the id_rsa file is not in the directory.
So I checked the .ssh folder; there are only 2 files: authorized_keys and known_hosts.
Can anyone please advise what I should do?
On Windows it is usually stored in the %USERPROFILE%\ssh or %USERPROFILE%\.ssh folder.
However, I do not see either ssh folder when going to %USERPROFILE%.
Is it possible to create the .ssh folder and the known_hosts file myself?
Yes, this is expected.
In a CMD session, you can do:
cd "%USERPROFILE%"
mkdir .ssh
From there, assuming you have ssh-keygen in your PATH (which is included in Git for Windows, for example), you can type:
ssh-keygen -t rsa -P ""
That will generate a key pair in the default path ~/.ssh/id_rsa (and ~/.ssh/id_rsa.pub), with ~/.ssh being translated to %USERPROFILE%\.ssh.
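Note that known_hosts does not need to be created by hand: ssh creates it automatically the first time you accept a host's fingerprint. Once the key pair exists, the tunnel command from the first question would look something like this (a sketch; it assumes the server already lists your new public key in its authorized_keys):
rem run from CMD; the key path is where ssh-keygen put it by default
ssh -i %USERPROFILE%\.ssh\id_rsa -L 1521:localhost:1521 opc@158.101.151.204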
Hello, I am trying to SCP a log file to a server and I keep getting this error:
Warning: Identity file ids-east-1.pem not accessible: No such file or directory.
ec2-11.com: Permission denied (publickey).
lost connection
I have tried all the solutions presented earlier but can't seem to figure out what's wrong.
The command I am using is:
scp -r -i ids-east-1.pem ~/int/resources/tests/tasks/lib/testing.log ec2-user@11.com:/home/wn/shelf/wrDb/fractions
Just a reminder: I am able to get a log file from this server using:
scp -i ids-east-1.pem ec2-user@11.com:/home/wn/shelf/wrDb/fractions/chrono.log ~/Desktop/aws_chrono.log
If one command works, but the other gives you:
Warning: Identity file ids-east-1.pem not accessible: No such file or directory.
You are likely not running the two commands from the same directory. Try specifying the full path to the key, something like:
scp -i ~/.ssh/ids-east-1.pem ...
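For example, a quick sanity check (a sketch; ~/.ssh is an assumption, use wherever your .pem actually lives):
# confirm the key file exists at the path you pass to -i
ls -l ~/.ssh/ids-east-1.pem
# then use the absolute path so the command works from any directory
# (-r is only needed when copying directories, not for a single log file)
scp -i ~/.ssh/ids-east-1.pem ~/int/resources/tests/tasks/lib/testing.log ec2-user@11.com:/home/wn/shelf/wrDb/fractions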
I recently set up a LAMP stack on Ubuntu 14.04 for my web server. I'm working through DigitalOcean. These are the steps I went through...
On my local machine I logged in to my web server with:
sftp user@web_server_ip
Then
sftp> cd /var/www/html
How would I go about getting the files for the site from my local machine? And how would I transfer them?
I know that I have to use the get and put commands.
I'm just confused about what's considered local/remote if I'm logged into the remote server from my local machine. Am I overthinking it?
This is the tutorial I'm trying to follow: How To Use SFTP to Securely Transfer Files with a Remote Server
Edit:
So I tried moving a whole directory from my local machine, and this is what I ended up doing:
scp -r /path/directory_name name@ip_address:/var/www/html
scp: /var/www/html/portfolio.take7: Permission denied
Should I be changing permissions with sudo prior to running scp -r?
Edit2:
I have also tried:
Where_directory_is$ scp -r /path/directory_name name@ip_address:/var/www/html
/var/www/html: No such file or directory
It might be easier to start with SCP which allows you to copy files with one command. So for example, if you had a local file /path/filename.css and wanted to transfer it to your server, you could use the following command on your local machine:
scp /path/filename.css username@remote_hostname_or_IP:~
This command copies the local file to the home directory of username on the remote server over SSH. You can then SSH in (ssh username@remote_hostname_or_IP) and do whatever you need with the file sitting in your home directory, such as moving it to the proper Apache directory, as sketched below.
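For example (a sketch; filename.css and the web root are illustrative, and moving into /var/www/html typically requires sudo):
# log in to the server
ssh username@remote_hostname_or_IP
# move the uploaded file from your home directory into the Apache web root
sudo mv ~/filename.css /var/www/html/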
Once you start to get more comfortable, you can switch to sftp if you like.
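To clear up the local/remote confusion from the question: inside an sftp session, "remote" is the server you connected to and "local" is the machine you launched sftp from, so get downloads and put uploads. A minimal sketch (file names illustrative):
sftp user@web_server_ip
sftp> cd /var/www/html
sftp> get index.html
sftp> put /path/filename.css
sftp> exit
Here get copies the remote index.html into the directory you launched sftp from, and put uploads your local filename.css into the current remote directory, /var/www/html.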
Update
Here is how to set up your Apache permissions. Let's say you have an account named you on the Linux computer running Apache, and we'll say its IP is 192.168.1.100.
On your local machine, create this shell script, secure.sh, and remember shell scripts need to have execute privileges (chmod +x secure.sh). Fill it with the following contents:
#!/usr/bin/env bash
# Lock down the public web files
# Give ownership to "you", with Apache's group (www-data) as the group owner
find /var/www -exec chown you:www-data {} \;
# Directories: owner rwx, group rx, others nothing
find /var/www -type d -exec chmod -v 750 {} \;
# Files: owner rw, group r, others nothing
find /var/www -type f -exec chmod -v 640 {} \;
This shell script sets the permissions for everything in the /var/www/ directory to 750 for directories and 640 for files. This gives you full read/write permissions on the files and gives www-data (the account Apache runs under) read permission. Run this any time you have uploaded files to ensure the permissions are always set correctly.
Next, SSH into your remote computer and go to the /var/www/html directory. Ensure that the ownership is not set to root. If it is, scp the secure.sh file to your remote computer, become root, and run it. This only needs to be done once, so that afterwards you can set the permissions remotely.
Now you can copy directly to /var/www/html with the scp -r command, run on your local computer from the top of the directory tree you wish to copy:
scp -r ./ you@192.168.1.100:/var/www/html/
Then run this command to remotely run the secure.sh shell script and send the output to out.txt:
ssh you@192.168.1.100 -p 23815 ./secure.sh > out.txt
Then cat out.txt to see that the file permissions changed accordingly.
If this is a public-facing computer, then you must add an SSH key to your scp connection. Generating your own keys is quite easy, and to use one you only need to add -i private_key_file to your scp and ssh commands, as sketched below. Lastly, it would actually be safer to keep the /var/www files owned by root, SSH into the computer, su to become root, and then run secure.sh as root (with the owner changed to root in the shell script). It all depends on the level of security you need to worry about. If it is a development computer (which is what I am assuming), no worries then.
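For example, with a key (a sketch; ~/.ssh/id_rsa is an assumption, substitute your actual private key file):
# same commands as above, but authenticating with a key instead of a password
# (if sshd listens on a nonstandard port, scp needs -P <port>, capital P)
scp -i ~/.ssh/id_rsa -r ./ you@192.168.1.100:/var/www/html/
ssh -i ~/.ssh/id_rsa you@192.168.1.100 -p 23815 ./secure.sh > out.txt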
For folders, use:
scp -r root@yourIp:/home/path/ /pathOfDirectory/
For files:
scp root@yourIp:/home/path/file /pathOfDirectory/fileNameCopied
I have an Arduino Yun and want to set up the server for the Yun.
So what I want is to copy a folder that contains a .py file and an index.html to my Yun.
I used the Mac terminal to do this operation; the command looks like this:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
and then the terminal asked for the password. After I typed it, it shows:
scp: /mnt/sda1/LobsterHeartRate: Not a directory
I didn't type /mnt/sda1/LobsterHeartRate, so why does it show this error?
Your code
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
requires that the remote directory /mnt/sda1 exists, which does not appear to be true in your case. Check it using ssh root@192.168.240.1 ls /mnt/sda1.
scp is a simple tool: it does not allow you to rename directories on the fly, and the target directory must exist. You might try:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/
ssh root@192.168.240.1 mv /mnt/LobsterHeartRate /mnt/sda1
or something similar, if that suits your needs. But for copying more files, rsync is usually more suitable; a sketch follows below. Check its manual page and give it a try next time.
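A minimal rsync sketch with the same paths (assuming rsync is installed on both ends, which may not be the case on the Yun's OpenWrt side by default):
# -a recurses and preserves permissions/times; -v lists what is copied.
# No trailing slash on the source, so the LobsterHeartRate directory itself
# is created inside /mnt/sda1/.
rsync -av /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1/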
As @Jens Höpken notes, your post is a bit sparse. But trying to read between the lines, I suspect that LobsterHeartRate is a DIRECTORY on your local system but a FILE named LobsterHeartRate on your target system. This might be happening right at the top of the directory tree, or perhaps you have directories/files of the same name further down the tree. scp -rv might help resolve any confusion here.
Beware: scp -r resolves symbolic links. If you want to preserve symlinks you need to do something else. For historic reasons I use the following, though cpio with a find front-end opens up interesting possibilities for fine-grained file selections.
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -xf -'
For a safe "dry run" you could change the -xf to a -tf, as shown below. The && chains are required to prevent bad things from happening if any prior command fails.
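That dry run would look like this (same paths as above; tar -tf just lists what would be extracted without writing anything):
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -tf -'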
Disclaimer: any debugging is left as an exercise for the student.
I'm trying to launch a Hadoop cluster on Amazon EC2, using the instructions in "Hadoop in Action" (also here: http://wiki.apache.org/hadoop/AmazonEC2).
I've set up my private ssh key and configurations, but when I try to launch a cluster using the command-line tools:
hadoop-ec2 launch-cluster test-cluster 2
I repeatedly get this error:
Warning: Identity file ~/.ec2/id_rsa-gsg-keypair not accessible: No such file or directory.
Permission denied (publickey,gssapi-with-mic).
The ~/.ec2/id_rsa-gsg-keypair definitely exists, though, and I did chmod 600 it:
> chmod 600 ~/.ec2/id_rsa-gsg-keypair
> ls -l id_rsa-gsg-keypair
-rw------- 1 my-username
Any idea what's wrong?
You may have already realized this, but the problem is possibly related to the ~/ path usage. Try using the absolute path /home/username/.ec2 instead.
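For example (a sketch; the username and instance hostname are illustrative), confirm the key is readable at its absolute path and that ssh itself accepts it before involving the hadoop-ec2 wrapper:
# verify the key via its absolute path rather than ~/
ls -l /home/username/.ec2/id_rsa-gsg-keypair
# test a direct ssh connection with that key (substitute one of your instances)
ssh -i /home/username/.ec2/id_rsa-gsg-keypair root@your-ec2-instance-hostname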