Monitoring and limiting time, bandwidth, and number of connections for SSH users - ssh

I use the SSH protocol on port 2222 as a VPN and share it with my friends, creating a separate username and password for each friend.
1- How can I see how much each user has downloaded or uploaded so far?
2- How can I limit each user to connect with only one connection?
3- How can I limit each user's download and upload volume? For example, allow each user to download or upload at most 50 GB in 30 days?
4- How can I specify an expiration date for each user so that the user becomes inactive on the specified date?
5- How can I monitor what each user sends and receives, and save it as a log?
I need help solving these problems.
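As a partial, hedged sketch for questions 1 and 4 on a Linux server: iptables' owner match can give rough per-user traffic counters, because each user's forwarded traffic is written by an sshd process running under that user's UID, and account expiry is a standard shadow-utils feature. The usernames and date below are placeholders:

# Rough per-user accounting of traffic the server sends on a user's behalf
iptables -A OUTPUT -m owner --uid-owner friend1
iptables -A OUTPUT -m owner --uid-owner friend2
# Read the per-rule packet/byte counters later:
iptables -L OUTPUT -v -n -x

# Question 4: make an account expire on a fixed date
chage -E 2025-12-31 friend1

Hard monthly quotas, per-user connection caps, and full traffic logging are not built into sshd itself; they are usually scripted around counters like these or handled by dedicated accounting tools.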

Related

How is GitLab/GitHub authentication separated from an ordinary SSH-session?

I read the question How does the GitHub authentication work? and https://unix.stackexchange.com/questions/315615/is-ssh-public-key-associated-with-a-user, which is exactly what I am wondering about. I am still missing a better answer.
When I test my SSH key pair, I connect as git@gitlab.com. My stored public key has a base64 fingerprint. When the SSH client (me) wants to connect to the server (my GitLab/GitHub account), it sends its ID (fingerprint); the server checks its .ssh/authorized_keys, loops through the fingerprints to find the matching public key, and uses it to encrypt the challenge.
On GitHub/GitLab there are many thousands of users, and they all use the same username ("git") to initiate a (SaaS) session. So how is this separated on the server? I don't get root access on GitLab/GitHub, of course; I only get access to my account through the generic user session git@gitlab.com. But how is this implemented?
When I use SSH in other situations I have a specific username, which I use as [my-username]@router.com.
E.g., if I were to set up my own GitLab on a local NAS/server, how could I create an account (User@local-gitlab.com) whose access rights are limited by the fingerprints of the different users' SSH key pairs?
User: ID:001
User: ID:002
User: ID:003
Somehow I need to limit the access for ID:001 when he/she initiates an SSH session with my server on the account "User".
I can't speak for GitLab, but for GitHub, there is a dedicated service that terminates these connections, contacts the authentication service with the key in question, and then receives the response about whether the user is allowed to access that repo, and if so, contacts the servers storing the data.
GitHub has more than 65 million users, many users have multiple SSH keys, and there are also deploy keys for servers, so using the command directive with a plain OpenSSH authorized_keys file would be extremely slow, since it would involve parsing and reading probably gigabytes of data each time a connection was made.
If you need this yourself for a small set of users, the command directive in authorized_keys is a viable approach. If you need something more scalable, you can create a custom server with something like libssh and perform authentication yourself, either in that process, or in a separate process.
I found this question and answer: https://security.stackexchange.com/questions/34216/how-to-secure-ssh-such-that-multiple-users-can-log-in-to-one-account, which highlights that you can put restrictions on authorized_keys entries. I don't know whether it answers my question precisely, but it looks like it does.
command="/usr/local/bin/restricted-app",from="192.0.2.0/24",no-agent-forwarding,no-port-forwarding,no-x11-forwarding ssh-rsa AAAA… git#gitlab.com
I guess there are several thousand of those lines on GitLab's/GitHub's servers in .ssh/authorized_keys, where every single line grants access to exactly one GitLab/GitHub account.
Please comment if you don't agree.
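To make the mechanism concrete, here is a minimal sketch of the forced-command pattern on a self-hosted server; gl-serve and the ID values are hypothetical, and a real implementation must validate the requested repository path before executing anything:

# /home/git/.ssh/authorized_keys — one line per registered key
command="/usr/local/bin/gl-serve ID:001",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA… user001
command="/usr/local/bin/gl-serve ID:002",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA… user002

#!/bin/sh
# /usr/local/bin/gl-serve — hypothetical wrapper. $1 is the ID baked into the
# key line; $SSH_ORIGINAL_COMMAND is what the git client asked to run.
USER_ID="$1"
case "$SSH_ORIGINAL_COMMAND" in
  git-upload-pack*|git-receive-pack*)
    # look up $USER_ID's rights for the requested repo here, then:
    exec $SSH_ORIGINAL_COMMAND ;;
  *) echo "Access denied." >&2; exit 1 ;;
esac

This is how the key, rather than the username, ends up identifying the account.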

How to stop Google Compute Engine from resetting SFTP folder permissions when using an SSH key

Currently running a Google Compute Engine instance and using SFTP on the server.
I followed the steps listed here to lock a user to the SFTP path: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user into a directory, the home directory of that user needs to be owned by root. Initially the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console, and that key is associated with the username. My guess is that Google Compute Engine is using this metadata and reconfiguring the folder permissions to match the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or rather, is there a better method of hosting SFTP and locking a single user to an SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
The tool that manages user accounts is the accounts daemon. You can turn it off temporarily, but that's not recommended: the tool syncs the instance metadata's SSH keys with the Linux accounts on the VM, so if you disable it, account changes won't be picked up and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately intend to block SSH access to the VM.
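For the second part of the question, a common alternative is to chroot SFTP-only users in sshd_config instead of fighting the metadata sync; note that OpenSSH itself requires the chroot directory to be root-owned and not group-writable, so the trick is to give the user a writable subdirectory inside it. A minimal sketch, assuming a dedicated sftponly group:

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match Group sftponly
    ChrootDirectory /sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

Then create /sftp/<user>/upload owned by the user, so the root-owned chroot directory itself never needs its permissions changed.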

Transfer from one cPanel to another cPanel without WHM access

I have cPanel access on two different servers and would like to transfer from one server to another. The original size of the account on the first server is close to 15GB.
Currently, the only two ways I can think of are:
Backup using cPanel, then restore on the second server. But this process times out; I get a "Failed - Network error" message.
Use an FTP app like FileZilla to log in and transfer the files. I haven't tested this, but I think it first downloads the files to my local machine (temp folder) and then uploads them to the second server.
My problem with option 2 is that it means I would end up using 30 GB of data transfer if it actually works that way.
What is the best way to transfer from server to server using cPanel?
I have limited knowledge of this, but I can offer some tips.
1. Backup using cPanel, then restore on the second server: the backup creation fails because of the large size of the account, so this route is not workable.
2. Use an FTP app like FileZilla to log in and transfer the files: you can download the whole content to your PC, or go folder by folder (first home, then mail, etc.), and upload it to the new cPanel account over FTP.
Better yet, please open a support ticket with your hosting provider; they can create the backup from the server backend using /scripts/pkgacct.
Thank you.
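For reference, this is roughly what the provider (or anyone with root access) would run; the account name and hostname are placeholders:

# On the source server: package the account
/scripts/pkgacct myaccount
# Copy the archive straight to the destination server (no local download)
scp /home/cpmove-myaccount.tar.gz root@new-server.example.com:/home/
# On the destination server: restore it
/scripts/restorepkg myaccount

Transferred server-to-server like this, the 15 GB archive moves once instead of twice.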

Find Client PC name behind Remote Desktop connection

This is a bit of a general question.
I have a Windows application written in VB (or whatever language) that has a login system. The app uses the computer's registry to save and retrieve the last login used, so the next time the user runs the application they only need to type their password. This works just fine.
Now one client wants to install my app on his server instead of on the client computers and have all the PCs access the system via Remote Desktop. No problem here.
My problem comes with the login system: if I use the registry to save the last login user, there will be only one (the server's registry), so I always get the last login user regardless of which PC accesses the system.
So my question is: how can I set a default user for each client PC? I could use my database instead of the registry to save and retrieve the last login user, but then I would need to know the PC name behind the Remote Desktop session, and I don't know if that's even possible... Any ideas?
Thanks!
Note regarding the duplicate flag:
My question is different from the one mentioned above. I don't need to get the user name from the terminal server. I just need a way to save and retrieve some data in the registry for each terminal service user or client PC.
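One detail that may help: inside a standard Remote Desktop session, Windows exposes the connecting machine's name in the CLIENTNAME environment variable (it also typically appears under HKCU\Volatile Environment for the session), which the app could read and use as a per-PC key in the database. A quick check from a command prompt inside the RDP session:

:: Prints the name of the PC that initiated this Remote Desktop session
echo %CLIENTNAME%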

Can a hacker hack a website's FTP, SSH and/or .htaccess file?

I know that a website can have vulnerabilities that hackers can exploit even when the admin tries their best to keep the website secure. I want to make my website secure, and the first step is to ask questions. These are my specific questions:
1. Can a hacker access my website's .htaccess file? And if so, can they edit it?
2. Can a hacker get my SSH root password even if the SSH password is 18+ characters long?
3. Can a hacker get my FTP username and password even if the FTP password is 18+ characters long?
4. Is SSH more secure than FTP?
NOTE: The text below is just general information and probably doesn't cover half of the subject; there are tons of things you need to take care of, but it should give you a rough idea.
Can a hacker access my website's .htaccess file?
Yes; some scenarios in which this could happen:
If you misconfigure your httpd.conf to allow people to visit .ht* pages, which is forbidden by default in httpd.conf
If your server is meant for hosting and you or your users don't apply proper permissions to files, so they are accessible from other accounts
If your web server doesn't enforce per-account user and group separation
If accounts are not chrooted to their own folders.
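For reference, stock Apache configurations ship with a block like the following, which is what makes .ht* files forbidden by default; removing or weakening it is the misconfiguration described above (Apache 2.4 syntax; older 2.2 setups use "Deny from all"):

# httpd.conf — deny web access to .htaccess and .htpasswd files
<Files ".ht*">
    Require all denied
</Files>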
And if so, can they edit it?
Yes and no. Merely accessing the file from a browser will not let them edit it, but in some cases editing may be possible, for instance:
If your code (PHP, Perl, etc.) has vulnerabilities, then yes, it may be possible
As mentioned earlier, if your web server does not enforce per-account user and group separation, other accounts will have access to files belonging to your account
If the permissions set on the .htaccess file are, for instance, 777, which allows ANYONE to manipulate the file, it will be readable and writable from other accounts.
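A minimal corrective sketch, with a hypothetical path and account name; the file stays readable so the web server can process it, but only the owner can write it:

chown someuser:someuser /var/www/site/.htaccess
chmod 644 /var/www/site/.htaccess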
Can a hacker get my SSH root password even if the SSH password is 18+ characters long?
Brute force is not the only way to grab someone's password: if your computer has been compromised, or your services are not patched against the newest exploits, and so on, it's also possible to get your password.
The most common protection is to make your SSH login password-less: deny direct access to root, block any password authentication, and only grant access to authorized keys generated from a key pair.
Such a key gives you access to a predefined account that has the key authorized.
From the account you've logged in to, you can either use sudo to run commands as root, or su - to switch the current account to root.
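A minimal sketch of those settings in /etc/ssh/sshd_config (OpenSSH; reload sshd after editing):

# Key-only authentication, no direct root logins
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes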
Change the SSH port to some other port.
Use your firewall to detect brute force attempts on certain ports and block them.
Use your firewall to allow only your IP to access the server if your IP is static.
Use your firewall to block access to unused ports or services that don't require external access. For example, if you don't offer remote MySQL access, you can block port 3306 and also configure your MySQL server to bind to localhost only.
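A sketch of those firewall points using iptables (port numbers are examples; adapt to nftables/ufw as needed):

# Keep loopback traffic working, then drop remote MySQL entirely
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
# Damp brute force: drop a source that opens 4+ new SSH connections in 60 s
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --set
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP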
Can a hacker get my FTP username and password even if the FTP password is 18+ characters long?
As with SSH, brute force is not the only way: a compromised computer or unpatched services can expose your FTP credentials just as easily.
Is SSH more secure than FTP?
They are different protocols serving different purposes, and they can be equally insecure or equally secure; it all depends on the system administrator keeping them up to date and locked down. (Note, though, that plain FTP transmits credentials unencrypted, while SFTP runs over SSH's encrypted channel.)