How to save the output of python-swiftclient to a file when downloading a directory? - file-io

Sometimes I get errors when I download files from a cloud with python-swiftclient, like this one:
Error downloading object 'uploads/1/image.png': Object GET failed: https://orbit.brightbox.com/v1/acc-12345/uploads/1/image.png 500 Internal Error b'An error occurred'
To search for all the errors and re-download the failed files, I want to save the output of the swift command to a file.
I tried the following:
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads 2>&1 | tee uploads.log
# and
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads > uploads.log
But this didn't work. man swift describes the -o option:
For a single object download, you may use the -o [--output]
option to redirect the output to a specific file or if "-" then just redirect to stdout or with --no-download actually not to write anything to disk.
but when I try to download a directory with the -o option, it fails with
-o option only allowed for single file downloads
How can I save log to a file when I download a directory with swift CLI?

Actually, redirecting output to a file works with the swift client:
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads > uploads.log
I was confused because after I started the command above, in another terminal window I did
tail -f uploads.log
But it didn't give me any output (like I was seeing when I was running the download command without redirection).
It seems the swift client writes to the file in batches, and I had to wait about a minute until tail -f dumped a hundred lines like this into the console:
uploads/documents/1/image.png [auth 0.000s, headers 0.390s, total 14.361s, 0.034 MB/s]
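With the log on disk, the failed objects can be pulled out and retried. Below is a minimal sketch (not from the original answer): it assumes the error lines look exactly like the one quoted in the question, and the sed that strips the uploads/ prefix assumes the client prints names as container/object, so adjust it to whatever your swift CLI actually expects.
# Sketch: collect the failed object names from uploads.log and download them again
grep "^Error downloading object" uploads.log \
  | sed "s/^Error downloading object '\([^']*\)'.*/\1/" \
  | sed 's|^uploads/||' \
  | while read -r object; do
      swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
        -U user -K secret download uploads "$object"
    done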

Related

Can you upload just app metadata using Transporter?

I've got an app already in the store and need to add additional IAPs using Transporter. I've used lookupMetadata to get the metadata.xml file. I'd like to edit this file and then re-upload it without having to upload the app again. Looking at the docs, upload mode states that you have to upload the app package:
In upload mode, you must specify these command-line options:
-m upload
-f or -assetFile <.ipa | .pkg> for macOS, Linux, and Windows uploads (for macOS notarization, use -assetFile <.dmg | .pkg | .zip>)
-assetDescription AppStoreInfo.plist (-assetDescription is required for Linux and Windows uploads)
-u username
-p password
-k kilobits_per_second *
Has anyone come across this before, and is there a solution for uploading just the app metadata?
It turns out that you can upload the itmsp file with just the metadata.xml in it and it works fine.
I validated with this:
iTMSTransporter -m verify -f <path to itmsp file> -u <email> -p <password> -v eXtreme
And uploaded with this:
iTMSTransporter -m upload -u <email> -p <password> -f <path to itmsp file> -k 100000 -v eXtreme
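In case it helps to visualise it, the package in that situation is simply the .itmsp folder produced by lookupMetadata with only metadata.xml left inside, roughly like this (the directory name is illustrative):
MyApp.itmsp/
    metadata.xml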

Find the httpd.conf file location after it's been changed by the -f flag

Httpd processes use a non-default configuration file if they are run with the -f flag.
For example
/home/myuser/apache/httpd-2.4.8/bin/httpd -f /confFiles/apache/2.4.8/apache.conf -k start
will use this configuration file: /confFiles/apache/2.4.8/apache.conf
I need to get this location and would rather not have to check for possible -f flags used to start httpd.
The answer here says to run /path/to/httpd -V and concatenate
-D SERVER_CONFIG_FILE="conf/httpd.conf"
with
-D HTTPD_ROOT="/etc/httpd"
to get the final path to the config file.
However, this path will not be the correct one if the -f flag is used to start the httpd process.
Is there a command that can get the config file that is actually being used by the process?
The answer you refer to mentions the paths httpd was compiled with, but as you say those can be manually changed with parameters.
The simple way to check is the command line: if the process is called "httpd" (the standard name), a simple ps will reveal the config file being used:
ps auxw | grep httpd
Or query the server, if it has mod_info loaded, on the command line or with your favourite browser:
curl "http://yourserver.example.com/server-info?server" | grep -i "config file"
Note: mod_info should not be publicly available for everyone to see.
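If you'd rather extract the path from the process table than read it by eye, a sketch along these lines should work. It assumes a Linux-style ps, a single httpd parent process, and it does not handle a relative -f path (which Apache resolves against the ServerRoot):
# Sketch: print the config file a running httpd is actually using
CMDLINE=$(ps -o args= -C httpd | head -n 1)
CONF=$(printf '%s\n' "$CMDLINE" | sed -n 's/.* -f \([^ ]*\).*/\1/p')
if [ -z "$CONF" ]; then
    # no -f flag on the command line: fall back to the compiled-in defaults
    ROOT=$(/path/to/httpd -V | sed -n 's/.*HTTPD_ROOT="\(.*\)"/\1/p')
    FILE=$(/path/to/httpd -V | sed -n 's/.*SERVER_CONFIG_FILE="\(.*\)"/\1/p')
    CONF="$ROOT/$FILE"
fi
echo "$CONF"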

parse output from running wget command

I'm using wget to synchronise my repository server (I know, wget is not the best tool, but company policy forces me...).
This is the wget command:
/usr/bin/wget --no-check-certificate -r -N -np -nH --cut-dirs=2 --include-directories=dir_1/dir_2/RPMS.all https://repo_url/dir_1/dir_2/RPMS.all
This does the job, but I would like to capture the output of wget, which looks like this (for example):
--2016-07-07 16:59:10-- https://repo_url/dir_1/dir_2/RPMS.all/repodata/d65d6fc4c2a0500803acde0525aa3e604a5ea03ac7b11c5694cc8b1de08ce7cc-filelists.xml.gz
Reusing existing connection to repo_url:443.
Proxy request sent, awaiting response... 200 OK
Length: 156605 (153K) [application/octet-stream]
Server file no newer than local file ‘RPMS.all/repodata/d65d6fc4c2a0500803acde0525aa3e604a5ea03ac7b11c5694cc8b1de08ce7cc-filelists.xml.gz’ -- not retrieving.
so I can process this output (using grep, awk or whatever) and show only the current file that I'm wget-ing.
Apart from that, I want to display that output on the same line, over and over, until finished (maybe even discarding the 'no newer' files, like the one above).
I tried several solutions I found (e.g. using IFS or shopt or stdbuf), but none seem to work. I also tried with the wget -O - option, but that doesn't work either.
Maybe to clarify a bit more:
I'd like to do this while wget is working. I don't want to do this when wget is finished, but process each connection while wget is running, whether the source file is newer or not.
Is this at all possible?
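One possible approach, offered only as a sketch: wget writes its log to stderr, so you can redirect that into a line-buffered filter and rewrite a single status line. This assumes the request lines keep the --date-- URL format shown above and that a GNU grep with --line-buffered is available.
# Sketch: reduce wget's log to a single updating line showing the URL currently being requested
/usr/bin/wget --no-check-certificate -r -N -np -nH --cut-dirs=2 \
    --include-directories=dir_1/dir_2/RPMS.all \
    https://repo_url/dir_1/dir_2/RPMS.all 2>&1 \
  | grep --line-buffered '^--' \
  | while read -r d t url; do printf '\r\033[K%s' "$url"; done; echo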

Run RapSearch-Program with Torque PBS and qsub

My problem is that I have a cluster server with Torque PBS and want to use it to run a sequence comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster-server.
I've tried with: echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2 but nothing happened.
Do you have any suggestions? Where am I wrong? Help, please.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub, along the lines of the sketch below.
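A rough example of such a script; the resource numbers are placeholders, so check what your cluster provides (and whether rapsearch can actually make use of two nodes depends on the program itself). Save it as, say, rapsearch.pbs and submit it with qsub rapsearch.pbs.
#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=2:ppn=16        # placeholder resource request; adjust to your cluster
#PBS -l walltime=24:00:00     # placeholder walltime
#PBS -j oe                    # merge stdout and stderr into one output file

# run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32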

Is there a curl/wget option that prevents saving files in case of http errors?

I want to download a lot of URLs in a script, but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know about another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved here.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
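Putting the two together, something like the following (using the same example URL) follows the redirect and exits with code 22 instead of saving the error page, although depending on your curl version an empty output file may still be left behind:
curl -fLO http://google.com/aoeu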
A one-liner I just set up for this very purpose (it works only with a single file, but might be useful for others):
A=$$; ( wget -q "http://foo.com/pipo.txt" -O "$A.d" && mv "$A.d" pipo.txt ) || ( rm -f "$A.d"; echo "Removing temp file" )
This will attempt to download the file from the remote host. If there is an error, the file is not kept. In all other cases, it's kept and renamed.
Ancient thread.. landed here looking for a solution... ended up writing some shell code to do it.
if [ "$(curl -s -w "%{http_code}" --compressed -o /tmp/something \
        http://example.com/my/url/)" = "200" ]; then
    echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This downloads the output to a tmp file and creates/overwrites the output file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to remain blank for that duration.
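A slightly more defensive variant of the same idea, offered only as a sketch: mktemp gives each run its own temp file, and mv replaces the destination in a single step (keep the temp file on the same filesystem as the destination if you want the rename to be atomic).
# Sketch: download to a unique temp file, install it only on HTTP 200
tmp=$(mktemp)
if [ "$(curl -s -w '%{http_code}' --compressed -o "$tmp" http://example.com/my/url/)" = "200" ]; then
    mv "$tmp" /path/to/destination/filename
else
    rm -f "$tmp"
fi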
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
Will save the target file to the local directory if and only if the HTTP status code is within the 200 range (Ok).
Additionally, if you want to do something like print an error whenever the request fails, you can check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? != 0 ]; then
echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    pushd . > /dev/null
    cd "$(dirname "$1")"
    wget -q "$BASE_URL/$1" 2> /dev/null
    if [ $? != 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1
    fi
    popd > /dev/null
}
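Used, for example, like this (BASE_URL and the path are made-up, and the local directory some/dir must already exist because the function cds into it):
BASE_URL="http://example.com/files"
dl some/dir/archive.tar.gz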
I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
    rm <filename>;
fi;
It works for zsh but you can adapt it for other shells.
But it only saves it in the first place if you provide the -O option.
As an alternative, you can download to a temporary file and rotate it:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The previous command will always download to the file "myfile.json.tmp", but the file is only rotated to "myfile.json" when the wget exit status is 0.
This prevents the final file from being overwritten when a network failure occurs.
The advantage of this method is that, if something goes wrong, you can inspect the temporary file and see what error message was returned.
The "-t" parameter tells wget to retry the download several times in case of error.
The "-q" option enables quiet mode, which is important with cron because cron will report any output from wget.
The "-O" option sets the output file path and name.
Remember that for cron schedules it is very important to always provide the full path for all files, and in this case for the wget program itself as well.
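For instance, a crontab entry following that advice might look like this (the schedule and paths are only illustrative):
# illustrative crontab line: full paths everywhere, quiet mode, temp file rotated only on success
*/15 * * * * /usr/bin/wget -q -t 3 -O /var/data/myfile.json.tmp http://example.net/myfile.json && /bin/mv /var/data/myfile.json.tmp /var/data/myfile.json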
You can download the file without saving it by using the "-O -" option, as in
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage