Cannot upload file to transfer.sh. Error: "Could not save metadata"

Transfer.sh is a service you can upload files to with curl. I'm getting this error when I try to upload a file:
$ curl --upload-file file -s -w "\n" https://transfer.sh/
Could not save metadata

The service seems to be unstable.
You can use https://file.io
$ curl -F "file=#file" -s -w "\n" https://file.io {"success":true,"key":"Y7PDKv","link":"https://file.io/Y7PDKv","expiry":"14 days"}

With this bash function you can obtain just the link:
snd() {
  curl --progress-bar -F "file=@$1" -s -w "\n" https://file.io | jq -r '.link'
}
How to use: snd filename.ext
You obtain: https://file.io/0uw5yYbkguc4
To parse the JSON upload response, you need to install "jq".
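If you still want to try transfer.sh first, here is a minimal sketch of a fallback wrapper (the function name and the error matching are illustrative, not from the original post):
snd_fallback() {
  # try transfer.sh first; on its "Could not save metadata" error, fall back to file.io
  local out
  out=$(curl --upload-file "$1" -s https://transfer.sh/)
  if printf '%s' "$out" | grep -qi 'could not save metadata'; then
    out=$(curl -F "file=@$1" -s https://file.io | jq -r '.link')
  fi
  printf '%s\n' "$out"
}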

Related

How to get all tags from github api

I usually get the releases/tags from the GitHub API with the command below:
$ repo="helm/helm"
$ curl -sL https://api.github.com/repos/${repo}/tags |jq -r ".[].name"
v3.2.0-rc.1
v3.2.0
v3.1.3
v3.1.2
v3.1.1
v3.1.0-rc.3
v3.1.0-rc.2
v3.1.0-rc.1
v3.1.0
v3.0.3
v3.0.2
v3.0.1
v3.0.0-rc.4
v3.0.0-rc.3
v3.0.0-rc.2
v3.0.0-rc.1
v3.0.0-beta.5
v3.0.0-beta.4
v3.0.0-beta.3
v3.0.0-beta.2
v3.0.0-beta.1
v3.0.0-alpha.2
v3.0.0-alpha.1
v3.0.0
v2.16.6
v2.16.5
v2.16.4
v2.16.3
v2.16.2
v2.16.1
But in fact it doesn't list all the releases. What should I do?
For example, I can't get releases before v2.16.1, which are listed at this link:
https://github.com/helm/helm/tags?after=v2.16.1
I tried adding ?after=v2.16.1 to the curl API call in the same way, but it didn't help:
curl -sL https://api.github.com/repos/${repo}/tags?after=v2.16.1 | jq -r ".[].name"
I got the same output.
Reference: https://developer.github.com/v3/git/tags/
This could be because of pagination
See this script as an example of detecting pages, and adding the required ?page=x to access to all the data from a GitHub API call.
Relevant extract:
# single page results (no pagination) have no Link: section, so the grep result is empty
last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`
# does this result use pagination?
if [ -z "$last_page" ]; then
    # no - this result has only one page
    rest_call "https://api.github.com${GITHUB_API_REST}"
else
    # yes - this result is on multiple pages
    for p in `seq 1 $last_page`; do
        rest_call "https://api.github.com${GITHUB_API_REST}?page=$p"
    done
fi
With help from @VonC, I got the result with the extra query string ?page=2 (and so on for older releases):
curl -sL https://api.github.com/repos/${repo}/tags?page=2 |jq -r ".[].name"
I can easily get the last page now.
$ GITHUB_API_REST="/repos/helm/helm/tags"
$ GITHUB_API_HEADER_ACCEPT="Accept: application/vnd.github.v3+json"
$ GITHUB_TOKEN=xxxxxxxx
$ last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`
$ echo $last_page
4
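Putting it together, a small sketch that walks every page and collects all tag names could look like this (per_page is a standard GitHub API query parameter; the repo value is just the example from above):
repo="helm/helm"
page=1
while :; do
  names=$(curl -sL "https://api.github.com/repos/${repo}/tags?per_page=100&page=${page}" | jq -r '.[].name')
  # stop when a page comes back empty
  [ -z "$names" ] && break
  echo "$names"
  page=$((page + 1))
done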

Any way to use presigned URL uploads and enforce tagging?

Is there any way to issue a presigned URL to a client to upload a file to S3, and ensure that the uploaded file has certain tags? Using the Python SDK here as an example, this generates a URL as desired:
s3.generate_presigned_url('put_object',
                          ExpiresIn=3600,
                          Params=dict(Bucket='foo',
                                      Key='bar',
                                      ContentType='text/plain',
                                      Tagging='foo=bar'))
This is satisfactory when uploading while explicitly providing tags:
$ curl 'https://foo.s3.amazonaws.com/bar?AWSAccessKeyId=...&Signature=...&content-type=text%2Fplain&x-amz-tagging=foo%3Dbar&Expires=1538404508' \
    -X PUT \
    -H 'Content-Type: text/plain' \
    -H 'x-amz-tagging: foo=bar' \
    --data-binary foobar
However, S3 also accepts the request when omitting -H 'x-amz-tagging: foo=bar', which uploads the object without tags. Since I don't have control over the client, that's… bad.
I've tried creating an empty object first and tagging it, then issuing the presigned URL to it, but PUTting the object replaces it entirely, including removing any tags.
I've tried issuing a presigned POST URL, but that doesn't seem to support the tagging parameter at all:
s3.generate_presigned_post('foo', 'bar', {'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>'})
$ curl https://foo.s3.amazonaws.com/ \
    -F key=bar \
    -F 'tagging=<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>' \
    -F AWSAccessKeyId=... \
    -F policy=... \
    -F signature=... \
    -F file=@/tmp/foo
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy:
Extra input fields: tagging</Message>...
I simply want to let a client upload a file directly to S3, and ensure that it's tagged a certain way in the process. Any way to do that?
Try the following code (note that generate_presigned_post takes the bucket name as the first positional argument and the object key as the second):
import copy

fields = {
    "x-amz-meta-u1": "value1",
    "x-amz-meta-u2": "value2"
}
conditions = [
    {"x-amz-meta-u1": "value1"},
    {"x-amz-meta-u2": "value2"}
]
presigned_url = s3_client.generate_presigned_post(
    bucket_name, "YOUR_OBJECT_KEY",
    Fields=copy.deepcopy(fields),
    Conditions=copy.deepcopy(conditions)
)
Python code:
fields = {
    'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>',
}
conditions = [
    {'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>'}
]
presigned_url = s3_client.generate_presigned_post(
    Bucket="foo",
    Key="file/key.json",
    Fields=copy.deepcopy(fields),
    Conditions=copy.deepcopy(conditions)
)
curl command:
$ curl -v --form-string "tagging=<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>" \
    -F key=file/key.json \
    -F x-amz-algorithm=... \
    -F x-amz-credential=... \
    -F x-amz-date=... \
    -F x-amz-security-token=... \
    -F policy=... \
    -F x-amz-signature=... \
    -F file=@key.json \
    https://foo.s3.amazonaws.com/
Explanation
It is imperative that --form-string is used for the tagging field in the curl command; otherwise curl will interpret the =< as an instruction to read the value from a file!
Also ensure that key.json is in your current working directory so curl can upload the file to S3 using the pre-signed URL.
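As a sanity check after the upload, you can confirm the tag actually landed on the object, for example with the AWS CLI (assuming it is configured for the same account; the bucket and key are the example values from above):
$ aws s3api get-object-tagging --bucket foo --key file/key.json
The response should contain a TagSet entry with Key "Foo" and Value "Bar".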

Open PDF found with volatility

My task is to analyze a memory dump. I've found the location of a PDF file and I want to analyze it with VirusTotal, but I can't figure out how to "download" it from the memory dump.
I've already tried it with this command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
But in my dumpfiles directory there is just a .vacb file, which is not a valid PDF.
I think you may have missed a command line argument from your command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
If you are not getting a .dat file in your output folder you can add -u:
-u, --unsafe Relax safety constraints for more data
I can't test this without access to the dump, but you should be able to rename the .dat file created to .pdf.
So it should look something like this:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/ -u
You can check out the documentation on the commands here
VACB is "virtual address control block". Your output type seems to be wrong.
Try something like:
$ python vol.py -f img.vmem dumpfiles --output=pdf --output-file=bla.pdf --profile=[your profile] -D dumpfiles/
or check out the cheat sheet: here
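Whichever approach you use, a quick sketch of how you might confirm the extracted file really is a PDF and hash it for a VirusTotal lookup (recovered.pdf is just a placeholder for whatever file Volatility wrote out):
$ file recovered.pdf        # should report something like "PDF document, version 1.x"
$ sha256sum recovered.pdf   # search this hash on VirusTotal before uploading the file itself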

Error submitting iOS .app using TestFlight API

I'm running the following script:
#!/bin/bash
archive=`./builds/myapp.ipa`
curl http://testflightapp.com/api/builds.json
-F file=$archive
-F api_token='xxxxxxxxxxxxxxxxxxxxxxxxxx'
-F team_token='xxxxxxxxxxxxxxxxxxxxxxxxxx'
-F notes='here comes the new app!'
-F notify=True
-F distribution_lists='MyFriends'
but I'm getting the error:
You must supply api_token, team_token, the file and notes (missing
file)
I'm actually copy/past-ing the script from the TestFlight website. What's wrong with that?
Please note that, as seen in the example given in the TestFlight API Documentation, you need to use the '@' character before the IPA file name.
You should try with (note the @ before $archive; also assign the path as a plain string rather than with backticks, which would try to execute the file):
#!/bin/bash
archive="./builds/myapp.ipa"
curl http://testflightapp.com/api/builds.json \
  -F file=@"$archive" \
  -F api_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
  -F team_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
  -F notes='here comes the new app!' \
  -F notify=True \
  -F distribution_lists='MyFriends'
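For completeness, here is a small sketch that also captures the JSON response so you can see what TestFlight returned (reading the tokens from API_TOKEN and TEAM_TOKEN environment variables is my own convention, not from the TestFlight docs):
#!/bin/bash
archive="./builds/myapp.ipa"
response=$(curl -s http://testflightapp.com/api/builds.json \
  -F file=@"$archive" \
  -F api_token="$API_TOKEN" \
  -F team_token="$TEAM_TOKEN" \
  -F notes='here comes the new app!' \
  -F notify=True \
  -F distribution_lists='MyFriends')
echo "$response"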

Using wget 1.12 centos 6 to batch download and rename output files

Using wget with a single file:
wget -c --load-cookies cookies.txt http://www.example.com/file
works fine, and so does renaming the output file:
wget -c --load-cookies cookies.txt http://www.example.com/file.mpg -O filename_to_save_as.mpg
When I use
wget -c --load-cookies cookies.txt -i /dir/inputfile.txt
to pass URLs from a text file, it works as expected. Is there any way to pass a URL from a text file and still rename the output file as in example 2 above? I have tried passing the -O option with an argument, but wget tells me "invalid URL http://site.com/file.mpg -O new_name.mpg: scheme missing".
I have also tried escaping after the URL, quotes, and formatting such as
url = "http://foo.bar/file.mpg" -O new_name.mpg
Is there any way to use an input file and still change the output file name using wget?
If not, would a shell script be more appropriate? If so, how should it be written?
I don't think that wget supports it, but it's possible to do with a small shell script.
First, create an input file like this (inputfile.txt):
http://www.example.com/file1.mpg filename_to_save_as1.mpg
http://www.example.com/file2.mpg filename_to_save_as2.mpg
http://www.example.com/file3.mpg filename_to_save_as3.mpg
The url and the filename are separated by a tab character.
Then use this bash script (wget2.sh):
#!/bin/bash
# read tab-separated "URL<TAB>filename" lines from stdin and download each one
while read -r line
do
    URL=$(echo "$line" | cut -f 1)
    FILENAME=$(echo "$line" | cut -f 2)
    wget -c --load-cookies cookies.txt "$URL" -O "$FILENAME"
done
Run it with this command:
./wget2.sh < inputfile.txt
A simpler solution is to write a shell script that contains a wget command for every file:
#!/bin/bash
wget -c --load-cookies cookies.txt http://www.example.com/file1.mpg -O filename_to_save_as1.mpg
wget -c --load-cookies cookies.txt http://www.example.com/file2.mpg -O filename_to_save_as2.mpg
wget -c --load-cookies cookies.txt http://www.example.com/file3.mpg -O filename_to_save_as3.mpg
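If you prefer the one-command-per-file approach but don't want to write the script by hand, a sketch that generates it from the tab-separated inputfile.txt shown earlier:
# build a download script from "URL<TAB>filename" lines, then run it
awk -F'\t' '{printf "wget -c --load-cookies cookies.txt \"%s\" -O \"%s\"\n", $1, $2}' inputfile.txt > wget_all.sh
bash wget_all.sh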