I want to know which script on my server has a hole; something is adding about 4,000 mails per hour to the queue.
This is my mail queue screenshot: http://www.diigo.com/item/image/1i66c/8mav
And this is a single email screenshot: http://www.diigo.com/item/image/1i66c/0pad
I use cPanel. Is there a way to solve my problem?
First, turn off Exim.
If you can shell in, flush the queue:
exim -bp | awk '/^ *[0-9]+[mhd]/{print "exim -Mrm " $3}' | bash
See if you are an open relay. You shouldn't be, but check with an open relay checker.
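If you'd rather check by hand, connect to port 25 from a machine outside your network and see whether the server accepts mail for a domain it doesn't host (all the addresses below are placeholders):
telnet your.server.example 25
HELO test.example.com
MAIL FROM:<test@example.com>
RCPT TO:<someone@domain-you-do-not-host.example>
If the RCPT TO gets a 250 response instead of a rejection, you are relaying.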
If you can shell in, grep your PHP files for the mail() function and look for suspicious scripts; comment out the call when you find it. My cPanel sends me notices of insecure scripts.
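For example, something like this will list every PHP file that calls mail() so you can review them (/home/*/public_html is the usual cPanel layout; adjust the path to yours):
grep -rn --include='*.php' "mail(" /home/*/public_html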
You might have been compromised as well. Ask your hosting service.
This is more of a Server Fault kind of question; you might post it there.
Remember to get yourself off all the blacklists once you figure it out.
exiqgrep -ir help@ | xargs exim -Mrm
I put a password_persist.txt file on the client site because the ESB runs as a Windows service, but we are concerned about the security of this password, because everyone can access that file.
They don't want to use a password_tmp file every time.
Is there any solution for hiding it or hashing it?
Thanks in advance
We run our ESB as a Windows service with Log On As set to a user we set up specifically to run our ESB and other WSO2 products (ex. domain\wso2user). We then use a data security product, in our case Vormetric, to lock that file down so that only the user we set up to run our Windows service has access to it. All other users are denied access to that file.
We worked with our security team to arrive at this solution so I would recommend speaking to yours and seeing if they have a solution in place for this type of scenario. My only experience with this is using Vormetric but I am sure there are countless other products out there that will provide similar functionality.
Joe
Another solution would be to take the base64-encoded key-store password, decode it, and generate the password-tmp file at server startup using the server startup script. On Linux you could include
echo d3NvMmNhcmJvbgo= | base64 -d | tee $CARBON_HOME/password-tmp
at the start of the elif [ "$CMD" = "start" ]; then block. You should be able to do something similar for Windows as well. It would be better to go with encryption, though.
So, say I run "telnet google.com 80\rGET / HTTP/1.0\r", is there any way I could save the ensuing HTTP data?
Is it possible to do in bash? If not, Perl?
Use tee.
telnet google.com 80 | tee output.txt
The script command seems to meet your requirements. Once you start it, all the terminal output from that session gets saved to a file. But for the specific task you mentioned,
I'd use wget or curl rather than messing around with script.
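For example (google.com here just mirrors the question; substitute whatever URL you're after):
# headers and body in one file with curl
curl -si http://google.com/ > output.txt
# or just the body with wget
wget -O output.txt http://google.com/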
I set up Jenkins CI to deploy my PHP app to our QA Apache server and I ran into an issue. I successfully set up public-key authentication from the local jenkins account to the remote apache account, but when I use rsync, I get the following error:
[jenkins@build ~]# rsync -avz -e ssh test.txt apache@site.example.com:/path/to/site
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(64) [sender=2.6.8]
[jenkins@build ~]#
One potential problem is that the remote apache account doesn't have a valid shell. Should I create a remote account with shell access that is part of the "apache" group? It is not an SSH key problem: ssh apache@site.example.com connects successfully, but it quickly kicks me out since apache doesn't have a shell.
That would probably be the easiest thing to do. You will probably want to only set it up with a limited shell like rssh or scponly to only allow file transfers. You may also want to set up a chroot jail so that it can't see your whole filesystem.
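A minimal sketch of the rssh route, assuming rssh is installed and the remote account is called apache (run as root on the remote box; paths can differ by distro):
# give the account a shell that only allows file transfers
usermod -s /usr/bin/rssh apache
# then in /etc/rssh.conf enable only what you need:
#   allowscp
#   allowrsync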
I agree that that would probably be the easiest thing to do. We do something similar, but use scp instead. Something like:
scp /path/to/test.txt apache@site.example.com:/path/to/site
I know this is a pretty old thread, but if somebody comes across this page in the future...
I had the same problem, and it went away once I fixed my .bashrc.
I removed the statement "echo setting DISPLAY=$DISPLAY" that was in my .bashrc. rsync trips over it because anything your login scripts print gets mixed into the rsync protocol stream.
So fixing errors in .bashrc/.cshrc/.profile is what helped me.
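If you still want that echo for interactive logins, you can guard it so that non-interactive sessions (like the ones rsync and scp open) stay silent, something like this in .bashrc:
# only print when the shell is interactive; rsync/scp sessions skip it
if [[ $- == *i* ]]; then
    echo "setting DISPLAY=$DISPLAY"
fi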
I'm not sure if this is possible or not.
What I'm looking for is a way to tell telnet to log into a certain IP address and then run commands, where the commands change based on a user's MAC address.
Basically it would be:
tell telnet to use x.x.x.x as the IP to log into and put in the correct username and password
tell telnet to run commands that change based on which user's stats you want to see, for example: show macaddress
export the output to notepad
close
expect can do this. If you don't have Tcl but Python, try Pexpect.
If you just want to run one command, use ssh (which allows you to log in, run a command and which will return with the error code of the command, so you can handle errors, too).
If you want to run more than a single command, write a script, use scp to copy that script to the other side and then execute the script with ssh. I've used this approach with great success to build a simple spider that could run a script to gather system information over a large number of hosts.
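A rough sketch of that, with the host, user, and script name as placeholders for your own setup:
# copy the command script over, run it remotely, and keep the output locally
scp showmac.sh admin@x.x.x.x:/tmp/showmac.sh
ssh admin@x.x.x.x 'sh /tmp/showmac.sh' > output.txt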
I think you're looking for expect (it automates these kinds of interactive applications). There is a free chapter available from the authority on expect, the book "Exploring Expect".
Also you should use SSH if this is over the internet. Telnet is insecure as it's a plain text protocol.
Not to blow my own horn, but you may be able to twist a personal app of mine (note: Sorry, I've removed this.) to this end.
There's currently no documentation other than what is on that page and no public source code (though I've been meaning to get onto that, and will work that out tomorrow if you're interested), but I'd be happy to answer any questions.
That said, any MUD client could be turned to the same use too.
I have a bash file that contains wget commands to download over 100,000 files, totaling around 20 GB of data.
The bash file looks something like:
wget http://something.com/path/to/file.data
wget http://something.com/path/to/file2.data
wget http://something.com/path/to/file3.data
wget http://something.com/path/to/file4.data
And there are exactly 114,770 rows of this. How reliable would it be to ssh into a server I have an account on and run this? Would my ssh session time out eventually? Would I have to stay ssh'ed in the entire time? What if my local computer crashed or got shut down?
Also, does anyone know how many resources this would take? Am I crazy to want to do this on a shared server?
I know this is a weird question, just wondering if anyone has any ideas. Thanks!
Use
nohup ./scriptname &> logname.log &
This will ensure that:
the process continues even if the ssh session is interrupted
you can monitor it while it runs by tailing logname.log
I would also recommend printing a progress message at regular intervals; it helps with log analysis, e.g. echo "1000 files copied".
As far as resource utilisation is concerned, it depends entirely on the system, and mostly on the network characteristics. Theoretically you can calculate the time from just the data size and bandwidth, but in real life delays, latencies, and data loss come into the picture.
So make some assumptions, do some arithmetic, and you'll get the answer :)
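For example, assuming a sustained 10 Mbit/s link between the two servers, 20 GB is roughly 160,000 Mbit, so the raw transfer alone would take on the order of 16,000 seconds (about four and a half hours), and the per-connection overhead of 114,770 separate HTTP requests will add noticeably to that.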
It also depends on the reliability of the communication medium, the hardware, and so on.
You can use screen to keep it running while you disconnect from the remote computer.
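For example:
screen -S downloads        # start a named session
./scriptname               # kick off the downloads inside it
# press Ctrl-a d to detach, then log out; later on:
screen -r downloads        # reattach and check progress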
You want to disconnect the script from your shell and have it run in the background (using nohup), so that it continues running when you log out.
You also want to have some kind of progress indicator, such as a log file that records every file that was downloaded as well as all the error messages. By default, nohup sends stdout and stderr to nohup.out, or you can redirect them to files of your own.
With such a file, you can pick up broken downloads and aborted runs later on.
Give it a test-run first with a small set of files to see if you got the command down and like the output.
I suggest you detach it from your shell with nohup.
$ nohup myLongRunningScript.sh > script.stdout 2>script.stderr &
$ exit
The script will run to completion - you don't need to be logged in throughout.
Do check for any options you can give wget to make it retry on failure.
If it is possible, generate MD5 checksums for all of the files and use them to check whether they were all transferred correctly.
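For instance, wget's retry options plus a checksum pass afterwards could look like this (checksums.md5 is a placeholder; the checksums have to come from wherever the files are published):
# retry each file up to 5 times and resume partial downloads
wget --tries=5 --waitretry=10 --continue http://something.com/path/to/file.data
# once everything is down, verify against the published checksums
md5sum -c checksums.md5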
Start it with
nohup ./scriptname &
and you should be fine.
Also, I would recommend logging the progress so that you can find out where it stopped if it does stop.
wget url >> logfile.log 2>&1
could be enough (the 2>&1 matters because wget writes its progress to stderr).
To monitor progress live you could:
tail -f logfile.log
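And, assuming wget's default English output, you could get a rough count of how many files have finished so far:
grep -c 'saved \[' logfile.log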
It may be worth it to look at an alternate technology, like rsync. I've used it on many projects and it works very, very well.
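That only applies if you have shell or rsync access to the box the files live on, but in that case a single resumable run like this (the user and paths are placeholders) replaces all 114,770 wget calls:
rsync -avP user@something.com:/path/to/ ./data/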