I'm issuing the following command:
ab -n 100 -c 20 -k -v 1 -H "Accept-Encoding: gzip, deflate" -T "application/json" -p ab-login.txt http://localhost:222/
With the contents of ab-login.txt being:
{username:'user',password:'pass!'}
And I get an error:
Could not stat POST data file (ab-login.txt): Partial results are valid but processing is incomplete
What am I doing wrong?
I added
-v 4 -T 'application/x-www-form-urlencoded'
Also, a line in my post file looked like:
name=myName
There were no quotes or anything like that.
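A minimal sketch of how the pieces fit together for a form-urlencoded POST; the field names and the &-joined body are placeholders I'm assuming, not the contents of the original file:

echo 'username=user&password=pass!' > ab-login.txt
ab -n 100 -c 20 -k -v 4 -T 'application/x-www-form-urlencoded' -p ab-login.txt http://localhost:222/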
I am trying to upload an .egg file to JFrog Artifactory, but I am getting the error below. Would you mind helping me figure out what is wrong? I am running the following command in my gitlab-ci file.
++ curl -sSf -H 'X-JFrog-Art-Api: <API_KEY>' -X PUT -T dist/myfile.egg 'https://artifactory.mycompany.com/artifactory/dev/apps/myapp/myfile.egg;released=true;build.number=0.1'
curl: (22) The requested URL returned error: 400
ERROR: Job failed: exit status 1
The pattern for uploading a file to Artifactory with properties (metadata) is:
$ curl -sSf -H "X-JFrog-Art-Api:<API_KEY>" \
-X PUT \
-T file.zip \
'http(s)://<ARTIFACTORY_URL>/<REPO>/<PATH>/file.zip;released=true;build.number=1.0'
If I change my command as below, it works, but it won't create a folder and put myfile.egg inside it.
++ curl -sSf -H 'X-JFrog-Art-Api: <API_KEY>' -X PUT -T dist/myfile.egg 'https://artifactory.mycompany.com/artifactory/dev/apps/myapp;released=true;build.number=0.1'
Additional attempt:
I tried the command below and got another error:
curl -H 'X-JFrog-Art-Api: <API_KEY>' -XPUT 'https://artifactory.mycompany.com/artifactory/dev/apps/myapp/myfile.egg' -T dist/myfile.egg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   120    0   120    0     0    882      0 --:--:-- --:--:-- --:--:--   882
{
"errors" : [ {
"status" : 400,
"message" : "Parent apps/myapp must be a folder"
} ]
}
Job succeeded
I tried removing my file and uploading the artifact using the commands below, and it worked.
curl -H "X-JFrog-Art-Api: <API_KEY>" -XDELETE "https://artifactory.mycompany.com/artifactory/dev/apps/myapp/myfile.egg"
curl -H "X-JFrog-Art-Api: <API_KEY>" -XPUT "https://artifactory.mycompany.com/artifactory/dev/apps/myapp/myfile.egg" -T "dist/test.zip"
I am trying to pull a project ID using the GitLab REST API v4, but when I issue the curl command, I get this error:
"jobs:test:script config should be a string or an array of strings"
The command is this one:
curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"
I tried to single quote it:
'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'
When I do that, the failure goes away, but the command is ignored.
So I tried to eval it like this:
eval - 'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'
When I do that, the failure is produced again:
"jobs:test:script config should be a string or an array of strings"
Any clue how I should issue the curl command? I think what is causing the failure is the colon within "PRIVATE-TOKEN: PRIVATE-TOKEN".
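For reference, this error usually means YAML has parsed the colon-plus-space inside the script line as a key/value separator, so the entry is no longer a plain string. One common fix is to single-quote the entire script entry; a minimal sketch, assuming the token lives in a masked CI variable I'm calling GITLAB_TOKEN:

script:
  - 'curl -k -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}"'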
This worked for me.
Declare job variables in the variables section, e.g.:
variables:
  PRIVATE_TOKEN: "TokenValue"
  PRIVATE_HEADER: "PRIVATE-TOKEN: ${PRIVATE_TOKEN}"
Then, under the script section of the CI file, use the curl command as follows:
script:
  - curl -k -H "${PRIVATE_HEADER}" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}"
Using the {} braces around the variable names makes sure the ":" issue doesn't show up.
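Since the goal was to pull the project ID, a short sketch of extracting it from the search response could look like the following; it assumes jq is available in the runner image and that the first search hit is the project you want:

script:
  - PROJECT_ID=$(curl -k -s -H "${PRIVATE_HEADER}" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}" | jq -r '.[0].id')
  - echo "Project ID is ${PROJECT_ID}"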
I usually get the releases/tags from the GitHub API with the command below:
$ repo="helm/helm"
$ curl -sL https://api.github.com/repos/${repo}/tags |jq -r ".[].name"
v3.2.0-rc.1
v3.2.0
v3.1.3
v3.1.2
v3.1.1
v3.1.0-rc.3
v3.1.0-rc.2
v3.1.0-rc.1
v3.1.0
v3.0.3
v3.0.2
v3.0.1
v3.0.0-rc.4
v3.0.0-rc.3
v3.0.0-rc.2
v3.0.0-rc.1
v3.0.0-beta.5
v3.0.0-beta.4
v3.0.0-beta.3
v3.0.0-beta.2
v3.0.0-beta.1
v3.0.0-alpha.2
v3.0.0-alpha.1
v3.0.0
v2.16.6
v2.16.5
v2.16.4
v2.16.3
v2.16.2
v2.16.1
But in fact, it doesn't list all the releases. What should I do?
For example, I can't get the releases before v2.16.1, as shown at the link below:
https://github.com/helm/helm/tags?after=v2.16.1
I tried adding ?after=v2.16.1 to the curl API command in the same way, but it didn't help:
curl -sL https://api.github.com/repos/${repo}/tags?after=v2.16.1 |jq -r ".[].name"
I got the same output.
Reference: https://developer.github.com/v3/git/tags/
This could be because of pagination.
See this script as an example of detecting pages and adding the required ?page=x to access all the data from a GitHub API call.
Relevant extract:
# Single-page results (no pagination) have no "Link:" header, so the grep result is empty.
last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`

# Does this result use pagination? (rest_call is a helper defined elsewhere in the linked
# script; it fetches a single URL from the GitHub API.)
if [ -z "$last_page" ]; then
    # No - this result has only one page.
    rest_call "https://api.github.com${GITHUB_API_REST}"
else
    # Yes - this result spans multiple pages.
    for p in `seq 1 $last_page`; do
        rest_call "https://api.github.com${GITHUB_API_REST}?page=$p"
    done
fi
With help from VonC, I got the result with the extra query string ?page=2 (and so on, if I want to query older releases).
curl -sL https://api.github.com/repos/${repo}/tags?page=2 |jq -r ".[].name"
I can easily get the last page now.
$ GITHUB_API_REST="/repos/helm/helm/tags"
$ GITHUB_API_HEADER_ACCEPT="Accept: application/vnd.github.v3+json"
$ GITHUB_TOKEN=xxxxxxxx
$ last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`
$ echo $last_page
4
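Putting the two parts together, a sketch that walks every page and prints all the tag names; it only reuses the variables already set above (GITHUB_API_REST, GITHUB_API_HEADER_ACCEPT, GITHUB_TOKEN, last_page) plus the jq filter from the original command:

for p in `seq 1 $last_page`; do
    curl -sL -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com${GITHUB_API_REST}?page=$p" | jq -r '.[].name'
done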
I have an Apache Bench POST test command like:
ab -p test.json -n 1000 -c 100 -T "application/json" "http://localhost:8080/test"
However, my test.json is very simple, e.g.:
{"foo": 1}
Is it possible to read that directly into the ab command, without a file reference? Something like:
ab -p '{"foo": 1}' -n 1000 -c 100 -T "application/json" "http://localhost:8080/test"
(I know that doesn't work; I'm just wondering if there is a good Linux file-mimicking trick or something.)
My only workaround currently is:
echo '{"foo": 1}' > test.json && ab -p test.json -n 1000 -c 100 -T "application/json" "http://localhost:8080/test" && rm test.json
But I find that a bit too clunky.
Try:
cat test.json | ab -p /dev/stdin -n 1000 -c 100 -T "application/json" "http://localhost:8080/test"
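The same idea without a temporary file, just swapping echo for cat in front of the pipe:

echo '{"foo": 1}' | ab -p /dev/stdin -n 1000 -c 100 -T "application/json" "http://localhost:8080/test"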
I would like to download some freely downloadable PDFs (copies of an old newspaper) from this website of the Austrian National Library with wget, using the bash script below:
for year in {14..57}; do
    for month in `seq -w 1 12`; do    # -w for leading zero
        for day in `seq -w 1 31`; do
            wget -A pdf -nc -E -nd --no-check-certificate --content-disposition http://anno.onb.ac.at/pdfs/ONB_lzg_18$year$month$day.pdf
        done
    done
done
Aside from some newspaper issues not being available, I cannot download any issues even though they exist. I get errors such as this one for the existing issue of June 30, 1814, for example:
http://anno.onb.ac.at/pdfs/ONB_lzg_18140630.pdf
Aufl"osen des Hostnamens anno.onb.ac.at (anno.onb.ac.at)... 193.170.112.230
Verbindungsaufbau zu anno.onb.ac.at (anno.onb.ac.at)|193.170.112.230|:80 ... verbunden.
HTTP-Anforderung gesendet, auf Antwort wird gewartet ... 404 Not Found
FEHLER 404: Not Found.
However, if you download the corresponding PDFs manually (here, see the upper-right corner), you have to press "OK" in a pop-up acknowledgement first. Once I have done this, I can even download that issue via wget without a problem.
How can I tell wget to confirm that acknowledgement (the question you get when you want to download a PDF) from the command line? Is there an option in wget for that?
There are two issues in your code:
1. The lzg newspaper is not available for all of the dates.
2. The PDFs are not always generated and cached at the URL you used. You first need to hit the other URL to make sure the PDF is generated.
Below is the updated code that should work:
#!/bin/bash
for year in {14..57}; do
    DATES=$(curl -sS "http://anno.onb.ac.at/cgi-content/anno?aid=lzg&datum=18$year&zoom=33" | gawk 'match($0, /datum=([^&]+)/, ary) {print ary[1]}' | xargs echo)
    for date in $DATES; do
        echo "Downloading for $date"
        # Hit the PDF-generation URL first so the PDF exists at the download location.
        curl "http://anno.onb.ac.at/cgi-content/anno_pdf.pl?aid=lzg&datum=$date" -H 'Connection: keep-alive' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'DNT: 1' -H "Referer: http://anno.onb.ac.at/cgi-content/anno?aid=lzg&datum=$date" -H 'Accept-Encoding: gzip, deflate' -H 'Accept-Language: en-US,en;q=0.9' --compressed
        wget -A pdf -nc -E -nd --no-check-certificate --content-disposition http://anno.onb.ac.at/pdfs/ONB_lzg_$date.pdf
    done
done