bizarre scp behavior and not sure if my computer is hacked?

I tried to copy a directory called httrack_get from my local directory to my ec2 instance and I got this bizarre output in return. What does it mean?

You didn't copy httrack_get to ec2. You copied ec2-user's home directory to your local machine. You have the source and destination the wrong way around.
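For the record, copying the local directory up to the instance would look something like this (a sketch; the key file and hostname are placeholders for your own):
scp -i ~/.ssh/my-key.pem -r httrack_get ec2-user@my-ec2-host:~/
The source always comes first and the destination second, so when uploading, the remote host:path goes at the end.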

Related

cPanel - files are not showing in File Manager after copying via SSH / scp

I have just purchased a dedicated server from a UK hosting company that uses cPanel and I have root access
I am using scp to copy a huge (> 2 TB) website from another hosting company (1&1 IONOS using Plesk, not that it should make any difference)
The files are copying over; using SSH I can use the "ls" command to list all the files that I've copied over
However, when I use the File Manager option via cPanel interface, I can see the first folder name on the left hand side (i.e. public_html/my-copied-site) but on the right hand window it shows the directory as empty
If I use the "ls" command, I can see the files & folders
If I try and access any of the files directly via a web browser then I get a 403 Forbidden message
What have I done wrong?
The answer to this problem is the ownership of the folder
Using scp over SSH meant that I was logged in as "root" and therefore the owner of the folders was also "root"
Changing the owner of the folder (using "chown" command) to the account's name resolved the problem
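For example, if the cPanel account is called siteuser (a placeholder for the real account name), running something like this as root should do it:
chown -R siteuser:siteuser /home/siteuser/public_html/my-copied-site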
Hope this helps someone out

rsync copying from a server with different permissions and owner

I'm trying to copy an entire folder and its contents from a remote server to a new one; the remote server doesn't have any panel installed and the new one has cPanel.
When I run the command below it looks like it copies everything, but it has only copied the folder itself, and it is empty. The copied folder has permissions 700, the same as on the remote server, while on the new server folder permissions are 755; of course the user and group are different too.
I'm running this command with different options:
rsync -rlDv --no-perms --no-owner --no-group user@000.000.000.000:/home/user/public_html/folder /home/user/public_html/images/
I've also tried -av instead of -rlDv, with and without --no-perms --no-owner --no-group.
Nothing works.
Any ideas? Thank you.
Well, finally I've just fixed the problem.
The problem was that the files and folders were uploaded, but I was viewing them in cPanel as the account's own user, and that user couldn't see the files because the owner and group were different. I switched to root in the console, changed the owner and permissions to match the rest of the site, and everything began to work fine.
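For anyone in the same spot, the fix amounts to something like this, run as root (the account name and path are placeholders; adjust them to your own setup):
chown -R user:user /home/user/public_html/images/folder
find /home/user/public_html/images/folder -type d -exec chmod 755 {} \;
find /home/user/public_html/images/folder -type f -exec chmod 644 {} \;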

SSH and FTP showing different files

I am using a host to try and deploy my Django site, but I am confused by SSH vs. FTP.
Background info:
I got the IP address, name and password from my host for the VPS.
I logged in using the same information via Putty and via WinSCP.
Both show me as having accessed root@[VPS IP Address].
Running ls on Putty shows nothing (no files or folders). So I created a file hello.txt.
WinSCP shows a lot of folders at the root, unlike Putty. I then searched all the folders for the hello.txt that I created and it's nowhere to be found.
Why would accessing the same VPS via two different methods show completely different things?
If you are indeed sure that you are logged into the same host with the same user account, you should check that you are in the same folder.
Using ssh you can issue the command pwd (print working directory) to view the current directory you are in.
To change to another directory using the shell, use the cd command, for example:
cd .. # This moves up to the parent directory
cd /var/www/html
The WinSCP user interface should also show you which directory you are currently in.
Navigating to another directory using WinSCP should be fairly straightforward.
There's no reason to think these methods will put you in the same directory location at all.
When you SSH in using Putty, you will almost certainly be put in your home directory, and that will be where your hello.txt was created.
But the FTP service has presumably been configured to put you in the common area where your service's files are located, which is not under your home directory. Where it is will be specific to the configuration of that machine.
Using SSH you will probably be able to use cd to change directory to the FTP location, if you can find out what it is; however, the reverse is not true and you almost certainly won't be able to navigate to the home directory via FTP.
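One way to find out where the FTP service drops you is to search from the SSH session for a directory name you can see in the FTP client (a sketch; public_html is just an example of a name you might see there):
find / -type d -name public_html 2>/dev/null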
(Note, this is not a question about Django, and should probably have been asked on ServerFault.)

Just messed up a server misusing chown, how to execute it correctly?

I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running Plesk CP, but, as far as I can tell, there's no way via the Interface to do what I want to do.
On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up.
Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like /var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work.
The problem: When I tried to edit a file through FTP, I got Error - 160: Permission Denied. When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1.
I attempted to use chown at this point to change the owner - and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) luckily the server was new, since the command I ran - chown -R newuser / - appeared to mess a load of stuff up. The reason I used / on the end rather than /var/www/vhosts/mysite.com/httpdocs/ was that I'd already cd'ed into there, so I presumed the / was relative to where I was working. That may or may not be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and other things continued to work. I realised my mistake and, deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh.
So - what do I do to change the owner of these files correctly?
Thanks for helping out a confused beginner!
Jack
Your command does indeed specify an absolute path to the root of the filesystem. Any path that begins with a '/' is absolute. You need:
chown -R newuser .
or:
chown -R newuser /var/www/vhosts/mysite.com/httpdocs
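If the group needs to change as well, chown also accepts a user:group pair, and ls -ld lets you confirm the result afterwards (newgroup here is a placeholder for whatever group your panel assigned to the account):
chown -R newuser:newgroup /var/www/vhosts/mysite.com/httpdocs
ls -ld /var/www/vhosts/mysite.com/httpdocs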

Is there a way to check if a directory exists in Apache configuration files?

Is there a way to include configuration settings in Apache based on if a directory exists? Basically I have a portable hard drive that I transport between work and home that has some stuff I'm developing on it. I only want the Apache config to load a particular virtual host if the folder exists.
Since Apache 2.4.34 you can use <IfFile>...</IfFile>, which checks whether a file exists. There are more details on the <IfFile> documentation page.
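For example, something along these lines (the paths are placeholders; testing for a file that lives on the portable drive keeps the vhost out of the configuration whenever the drive isn't plugged in):
<IfFile "/Volumes/MyDrive/Projects/dev-vhost.conf">
    Include "/Volumes/MyDrive/Projects/dev-vhost.conf"
</IfFile>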
No, there seems to be no direct way to do this.
The only thing that might be a solution is the IfDefine directive. You can set defines using the -D parameter when the server is started.
The parameter-name argument is a define as given on the httpd command line via -Dparameter-name, at the time the server was started.
You might be able to check for the existence of a directory in a batch or bash file, and set the -D parameter accordingly.
Whether that is an option, will depend on how your server is started from the portable hard drive.
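A rough sketch of that approach (the define name and paths are assumptions, and it relies on apachectl passing the -D flag through to httpd):
In the script that starts the server:
if [ -d /Volumes/MyDrive/Projects ]; then
    apachectl -k start -D PORTABLE_VHOSTS
else
    apachectl -k start
fi
And in the Apache configuration:
<IfDefine PORTABLE_VHOSTS>
    Include /Volumes/MyDrive/Projects/vhosts/*.conf
</IfDefine>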
I've come up with a solution that seems to work for Linux and OS X, and it hinges on "mountpoints". It might be possible to emulate it within Windows, as well, but you would probably have to get creative with FUSE and/or Cygwin.
If you create an empty folder in your home directory, such as "/Users/username/ExtraVhosts", you can add an apache directive to "Include /Users/username/ExtraVhosts/*".
Then, when you insert your thumb drive, you can mount it somewhere and use mountpoint "binding" to cross-link the ExtraVhosts folder to a folder on the mobile device.
An OS X example:
I have a thumb drive called 'Cherrybomb'
When I insert it, it always gets mounted to /Volumes/Cherrybomb
I can then use bindfs (sudo port install bindfs) to mount a subfolder of it, like so:
sudo bindfs /Volumes/Cherrybomb/Projects/vhosts /Users/username/ExtraVhosts
Then I can restart apache to read in the updated configuration:
sudo /opt/local/apache2/bin/apachectl restart
At that point, it's just a matter of adding entries in /etc/hosts for server aliases to get picked up.
The linux equivalent would be using the "--bind" parameter of the mount command.
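For example (the paths are placeholders for wherever the drive gets mounted and where your ExtraVhosts folder lives):
sudo mount --bind /media/username/Cherrybomb/Projects/vhosts /home/username/ExtraVhosts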
One caveat: This makes it difficult to quickly unmount the USB drive, since it is always marked as "in use" by apache. Here's a removal procedure:
Close all open files and terminal sessions that are using the drive (the present-working-directory in terminal can cause unmount issues)
Stop apache: sudo /opt/local/apache2/bin/apachectl stop
umount /Users/username/ExtraVhosts
Then you can either unmount it graphically or manually (umount /Volumes/Cherrybomb).
If your work and home machines mount the drive to different locations, you could have multiple vhosts folders - home_vhost, work_vhost, etc - and use that in the binding step.
I hope this helps someone out :)
If you point Apache at the mountpoint only, there shouldn't be an issue. Just don't point Directory directives at directories within the drive.
E.g., if you mount /dev/somedisk /mnt/somevhost, the /mnt/somevhost directory will be there whether or not you have the drive mounted, and Apache will start. Apache doesn't care if the directory is empty, so a <Directory "/mnt/somevhost"> block won't cause the server to fail to start if the drive isn't mounted.
Work with UNIX not against it :-p This solution should be sufficient for development.
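As a sketch, a vhost that only references the mountpoint could look like this (the ServerName is a placeholder, and Require all granted assumes Apache 2.4):
<VirtualHost *:80>
    ServerName dev.example.test
    DocumentRoot /mnt/somevhost
    <Directory "/mnt/somevhost">
        Require all granted
    </Directory>
</VirtualHost>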