alternative to tail -f | grep server logs - testing

Currently, I'm making curl calls, checking the response, and sometimes running ssh HOSTNAME "tail -f LOGFILE" | grep PATTERN. Is there a tool out there that streamlines/generalizes this process of making a request and checking both the response and the server logs for certain patterns? (Oh, and getting statistics like response time would be a plus.)

I've only got an answer to part of your question. To get good stats out of cURL, try something like this:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s http://www.google.com/
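For the log-watching half, one way to glue the two steps together is a small wrapper script. This is only a rough sketch, where HOSTNAME, LOGFILE, the URL and the PATTERN are placeholders you would fill in yourself:
#!/bin/bash
# Sketch: stream the remote log while a request is made, then grep it for a pattern.
URL="http://www.example.com/"
PATTERN="ERROR"

# Start streaming the remote log into a temp file before the request goes out.
ssh HOSTNAME "tail -n 0 -f LOGFILE" > /tmp/request_log.$$ &
SSH_PID=$!

# Fire the request and print status plus timing.
curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" "$URL"

# Give the log a moment to flush, stop the remote tail, then check for the pattern.
sleep 2
kill "$SSH_PID"
grep "$PATTERN" /tmp/request_log.$$ || echo "pattern not found"
rm -f /tmp/request_log.$$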

Related

parallel ssh (pssh) with output stream

I have 3 servers and I want to run a command on all of them in parallel from a client and see the output as it streams.
I have tried using pssh, but it shows output only when the command exits. What I want is the output from all the servers on my client's stdout as it is produced, before the command exits.
For example, when I run "ping google.com" on all the servers, I get output only when I hit Ctrl+C.
My command looks like this:
pssh -h server_list -l userName -i pemFile.pem 'ping google.com'
How can I see the ping output from all 3 servers as it happens?
I was trying to achieve the same thing, and the best way for me was to specify an output directory and then follow the streams in the output files, like so:
We add -o /tmp/out -t 0 so that the output of each host goes to the specified directory and there is no timeout.
pssh -h server_list -l userName -i pemFile.pem -o /tmp/out -t 0 'ping google.com'
Leave that running, and then follow the streams. Assuming you have host1, host2, host3, and host4 in your server_list, you can do the following:
tail -f /tmp/out/host{1,2,3,4}
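If you don't want to enumerate the hosts by hand, globbing the output directory works too (assuming it only contains the pssh output files):
tail -f /tmp/out/*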

Gitlab API add SSH-key

I'm having problems adding an SSH key to my GitLab server through the API (it works fine through the web page).
GitLab information:
I came across this issue (which was fixed here), which was related to a "wrong" OpenSSH implementation. They fixed this in milestone 7.10. Only thing is... my server has OpenSSH 6.6 installed:
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3, OpenSSL 1.0.1f 6 Jan 2014
Now, I don't know whether that fix is backwards compatible or not, but it seemed worth mentioning.
Also, the logs show no warnings or errors whatsoever. The /tmp/gitlab_key* files are generated on the server:
The problem I'm facing is that GitLab can't create the fingerprint through the API. This is the response I get from the API:
{
    "message": {
        "fingerprint": ["cannot be generated"]
    }
}
So right now I have no idea what the problem could be. I've been struggling with this for almost a week now, so I really hope this problem can be fixed.
Just for the record, here's the script I'm using to add the SSH key through the API:
#!/bin/bash
jsonFile="jsonResponse"
# Log in to the API and store the JSON session response
curl -s http://gitserver/api/v3/session --data 'login=****&password=****' > "$jsonFile"
userToken=$(jq '.private_token' "$jsonFile")
finalUserToken=$(echo "$userToken" | tr -d '"')
echo "user token: $finalUserToken"
# Below key is for testing, will use output of cat ~/.ssh/id_rsa.pub later on
# sshKey="ssh-rsa AAAAB3N***** ****#***.com"
# curl --data "private_token=$finalUserToken&title=keyName&key=$sshKey" "http://gitserver/api/v3/user/keys"
rm "$jsonFile"
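(Side note: jq can emit the raw string itself with -r, which would make the tr step unnecessary:
userToken=$(jq -r '.private_token' "$jsonFile")
)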
id_rsa.pub is a base64-encoded file, and it contains + characters.
An HTTP POST with application/x-www-form-urlencoded needs its content URL-encoded to prevent the + from being converted to a space.
Try:
curl --data-urlencode "key=$key_pub" --data-urlencode "title=$hostname" \
http://gitlabserver/api/v3/user/keys?private_token=$Token
see: this
Improving on @Mathlight's answer, the following snippet uploads a public SSH key to gitlab.com:
curl -X POST -F "private_token=${GITLAB_TOKEN}" -F "title=$(hostname)" -F "key=$(cat ~/.ssh/id_rsa.pub)" "https://gitlab.com/api/v3/user/keys"
OP here
In the meantime I've updated the server to version 8.8 and changed the curl code a bit, and now it's working like a charm:
curl -X POST -F "private_token=${userToken}" -F "title=${sshName}" -F "key=${sshKey}" "${gitServer}/user/keys"
Just in case anybody needs this in the future...
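Note for anyone landing here later: the v3 API used throughout this thread has since been removed, so on current GitLab versions the same call goes to the v4 endpoint, with the token usually passed as a header. A sketch of the equivalent request (the token variable and key path are placeholders):
curl -X POST -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" -F "title=$(hostname)" -F "key=$(cat ~/.ssh/id_rsa.pub)" "https://gitlab.com/api/v4/user/keys"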

How to make Apache copy certain requests to another webserver?

I have a busy web server running Apache. Now I am interested in certain requests like:
http://myserver1/path1/somepage1.html?xxxxxx
http://myserver1/path2/somepage2.html?xxxxxx
What I want to do is duplicate requests like these and forward them to another web server, like:
http://myserver2/request_statistic/
But the original requests must still be served on myserver1 as they are now. myserver2 is only for research purposes, so I want the duplicated requests' headers and bodies to be exactly the same as the originals.
Can this be done? How?
Thank you.
Where would the response go?
You might try looking at mod_security, which has a number of features that would be of use... is your goal security/forensics, or performance analysis?
For performance analysis, I've found it more useful in the past to create a more comprehensive logging format that captures things like response-code, response Location header (for tracking redirects), selected request headers, timing information, etc.
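For example, a LogFormat along those lines might look like the sketch below (the field selection is illustrative; %D is the time taken to serve the request in microseconds, %>s is the final status code, and %{Location}o captures the response Location header):
LogFormat "%h %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\" \"%{Location}o\"" research
CustomLog /var/log/apache2/research.log research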
If https is not in use, then you might be better served by something driven by packet-capture. I know that Oracle Real User Information (?) (RUI) works using that principle. For more casual diagnostic sessions, I've often gotten away with the following tcpdump:
tcpdump -s0 -A -p -nn tcp and port 80
That's enough to get the full requests (and responses). It is a little messy, but the data is all there. You can clean it up a bit with a script such as the following (tcpdump-http-headers-only) -- it's not perfect (particularly on a busy server, where things get harder to track).
#!/bin/bash
#
# Pass in the output of 'tcpdump -s0 -A ...' to this and it will
# output only the HTTP request headers and response headers.
#
# Cameron Kerr <cameron.kerr.nz#gmail.com>
# 2013-02-14
#
grep --line-buffered -o \
-e $'GET .*\r' \
-e $'POST .*\r' \
-e $'^[A-Z][A-Za-z0-9_-]*: .*\r' \
-e $'HTTP/1.1 .*\r' \
-e $'^\r$' \
| sed --unbuffered -e 's,\r$,,'
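Typical usage is to pipe the live capture straight through the script, for example:
tcpdump -s0 -A -p -nn tcp and port 80 | ./tcpdump-http-headers-only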
Alternatively, you might like to capture the packets (perhaps in conjunction with the -W, -C or -G options) for later analysis. Depending on the cipher used, this can also work with https connections if the key is provided (useful for Wireshark).
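As a rough sketch of that capture-for-later approach (the file name, interval and file count are arbitrary): -G rotates the output file on a time interval, which requires a strftime-style name given to -w, and -W stops the capture after that many files have been written:
tcpdump -s0 -p -nn -G 300 -W 24 -w '/var/tmp/http-%Y%m%d-%H%M%S.pcap' tcp and port 80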

find a message from a curl request in apache logs

On Linux, I send a curl request like this:
curl -X POST -d "hello" http://server_address
The server is Windows. I would like to find "hello" in my Apache log, but I don't know how to do this.
Do you think the curl command is correct?
What do I have to put in my PHP file to see "hello" in my Apache log?
I found the answer.
The curl command has to be like this:
curl -X POST -d "message=hello" http://server_address
and in the PHP file:
$message = $_POST["message"];
write_debug_log("$message");
It returns "hello" in the Apache log.
Your curl command is okay. It's now up to you (and the server administrator) to check the data in the log.
As far as I know, POST data doesn't show up in the log by default. You would need to configure it in the log settings or something like that (if it is possible at all).
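If you control the Apache config and really need to see request bodies, mod_dumpio can write them to the error log. A minimal sketch, assuming the module is available on your build (the LoadModule path varies by distribution):
LoadModule dumpio_module modules/mod_dumpio.so
DumpIOInput On
LogLevel dumpio:trace7
Be aware this logs every request body, so it is only sensible for debugging.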

curl w3c-markup-validator locally is slow, how do I make it faster?

I am using curl like this:
curl -s -F "uploaded_file=@/path_to_file;type=text/html" -F output=soap12 http://localhost/w3c-markup-validator/check >text.xml && xsltproc script/guilbep_soap_w3c.xsl text.xml
xsltproc is fast, but curl is not.
Does the slowness come from the fact that w3c-markup-validator is local? From w3c-markup-validator itself? Or from curl, and is there something I can do about it?
I would like to test more than 6000 XHTML files, and if I have to wait 2 seconds between each one that's well over an hour. I can wait, but I don't like it.
Thanks!!
I believe it is quite probable that the 2-second delay is because you transfer the file over HTTP. Is it possible to run the validator locally, without sending the file to a web server?
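One way to narrow down where the 2 seconds go is the curl -w timing trick shown earlier on this page, applied to the validator call itself (the file path is the same placeholder as in the question):
curl -s -o /dev/null -w 'lookup %{time_namelookup}s connect %{time_connect}s starttransfer %{time_starttransfer}s total %{time_total}s\n' -F "uploaded_file=@/path_to_file;type=text/html" -F output=soap12 http://localhost/w3c-markup-validator/check
If almost all of the time is in starttransfer, the validator itself is the bottleneck rather than the upload.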