I usually get the releases/tags from the GitHub API with the command below:
$ repo="helm/helm"
$ curl -sL https://api.github.com/repos/${repo}/tags |jq -r ".[].name"
v3.2.0-rc.1
v3.2.0
v3.1.3
v3.1.2
v3.1.1
v3.1.0-rc.3
v3.1.0-rc.2
v3.1.0-rc.1
v3.1.0
v3.0.3
v3.0.2
v3.0.1
v3.0.0-rc.4
v3.0.0-rc.3
v3.0.0-rc.2
v3.0.0-rc.1
v3.0.0-beta.5
v3.0.0-beta.4
v3.0.0-beta.3
v3.0.0-beta.2
v3.0.0-beta.1
v3.0.0-alpha.2
v3.0.0-alpha.1
v3.0.0
v2.16.6
v2.16.5
v2.16.4
v2.16.3
v2.16.2
v2.16.1
But in fact it doesn't list all the releases. What should I do?
For example, I can't get releases before v2.16.1, which are listed at the link below:
https://github.com/helm/helm/tags?after=v2.16.1
I tried adding ?after=v2.16.1 to the curl API command in the same way, but it didn't help:
curl -sL https://api.github.com/repos/${repo}/tags?after=v2.16.1 |jq -r ".[].name"
I got the same output.
Reference: https://developer.github.com/v3/git/tags/
This could be because of pagination.
See this script as an example of detecting pages and adding the required ?page=x to access all the data from a GitHub API call.
Relevant extract:
# single-page results (no pagination) have no Link: section; the grep result is empty
last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`
# does this result use pagination?
if [ -z "$last_page" ]; then
    # no - this result has only one page
    rest_call "https://api.github.com${GITHUB_API_REST}"
else
    # yes - this result is spread over multiple pages
    for p in `seq 1 $last_page`; do
        rest_call "https://api.github.com${GITHUB_API_REST}?page=$p"
    done
fi
With help from @VonC, I got the result by adding the extra query string ?page=2 (and so on, if I'd like to query older releases):
curl -sL https://api.github.com/repos/${repo}/tags?page=2 |jq -r ".[].name"
I can easily get the last page now:
$ GITHUB_API_REST="/repos/helm/helm/tags"
$ GITHUB_API_HEADER_ACCEPT="Accept: application/vnd.github.v3+json"
$ GITHUB_TOKEN=xxxxxxxx
$ last_page=`curl -s -I "https://api.github.com${GITHUB_API_REST}" -H "${GITHUB_API_HEADER_ACCEPT}" -H "Authorization: token $GITHUB_TOKEN" | grep '^Link:' | sed -e 's/^Link:.*page=//g' -e 's/>.*$//g'`
$ echo $last_page
4
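Putting the pieces together, a minimal sketch (assuming jq is installed and $GITHUB_TOKEN holds a valid token) that walks every page and prints all the tags:
repo="helm/helm"
# find the number of the last page; default to 1 when there is no Link header
last_page=$(curl -sI "https://api.github.com/repos/${repo}/tags" -H "Authorization: token $GITHUB_TOKEN" | grep -i '^link:' | sed -e 's/.*page=//' -e 's/>.*$//')
# request each page in turn and print the tag names
for p in $(seq 1 "${last_page:-1}"); do
  curl -sL "https://api.github.com/repos/${repo}/tags?page=${p}" | jq -r '.[].name'
done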
I have to download all the projects hosted on some cgit instance. There are several hundred repositories, so it is tedious to do this manually.
How can it be done?
It seems possible to do it with curl by parsing the pages one by one. But is there a more convenient interface?
There does not seem to be any official or convenient API for CGit to export/clone all its repositories.
You can try these alternatives (the xml command below comes from xmlstarlet):
curl -s http://git.suckless.org/ |
xml sel -N x="http://www.w3.org/1999/xhtml" -t -m "//x:a" -v '#title' -n |
grep . |
while read repo
do git clone git://git.suckless.org/$repo
done
Or:
curl -s http://git.suckless.org/ | xml pyx | awk '$1 == "Atitle" { print $2 }'
Or:
curl -s http://git.suckless.org/ | xml pyx | awk '$1 == "Atitle" { printf("git clone %s\n",$2) }' | s
I suspect this works for one page of Git repositories as listed by CGit: you might still have to repeat it for all subsequent pages.
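If the instance paginates its index, a hedged sketch of such a loop (it assumes the index accepts an ofs offset parameter in steps of 50, which you should verify against the pager links on your instance):
base="http://git.suckless.org"
ofs=0
while :; do
  # extract the repository names from one index page (xml comes from xmlstarlet)
  repos=$(curl -s "$base/?ofs=$ofs" | xml pyx | awk '$1 == "Atitle" { print $2 }')
  [ -z "$repos" ] && break    # stop when a page lists nothing
  echo "$repos" | while read -r repo; do
    git clone "git://git.suckless.org/$repo"
  done
  ofs=$((ofs + 50))
done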
I would like to get an HTML report from zap-cli. I am able to run these commands, but is there a way to run both in a single command?
[sb@company.local@sb-test-vm ~]$ zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/
[INFO] Running a quick scan for http://demo.testfire.net/
[INFO] Issues found: 6
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Alert | Risk | CWE ID | URL |
+==================================+========+==========+==================================================================================================================+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/comment.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/notfound.aspx?aspxerrorpath=%3C%2Fb%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E%3Cb%3E |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/search.aspx?txtSearch=%3C%2Fspan%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E%3Cspan%3E |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| SQL Injection | High | 89 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| SQL Injection | High | 89 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
[sb@company.local@sb-test-vm ~]$ zap-cli report -o abc.html -f html
[INFO] Report saved to "abc.html"
[sb@company.local@sb-test-vm ~]$ ls -l abc.html
-rw-rw-r--. 1 sb@company.local sb@company.local 58659 Sep 25 16:39 abc.html
[sb@company.local@sb-test-vm ~]$ date
Tue Sep 25 16:39:16 EDT 2018
[sb@company.local@sb-test-vm ~]$
I tried the switches provided but was unable to execute the scan and get the report as a one-liner. I am even willing to use zap.sh; however, I didn't see an option to generate the report in HTML, only XML. Any insight on this is appreciated.
zap-cli --help
Usage: zap-cli [OPTIONS] COMMAND [ARGS]...
ZAP CLI v0.9.0 - A simple commandline tool for OWASP ZAP.
Options:
--boring Remove color from console output.
-v, --verbose Add more verbose debugging output.
--zap-path TEXT Path to the ZAP daemon. Defaults to /zap or the value of
the environment variable ZAP_PATH.
-p, --port INTEGER Port of the ZAP proxy. Defaults to 8090 or the value of
the environment variable ZAP_PORT.
--zap-url TEXT The URL of the ZAP proxy. Defaults to http://127.0.0.1
or the value of the environment variable ZAP_URL.
--api-key TEXT The API key for using the ZAP API if required. Defaults
to the value of the environment variable ZAP_API_KEY.
--help Show this message and exit.
Commands:
active-scan Run an Active Scan.
ajax-spider Run the AJAX Spider against a URL.
alerts Show alerts at the given alert level.
context Manage contexts for the current session.
exclude Exclude a pattern from all scanners.
open-url Open a URL using the ZAP proxy.
policies Enable or list a set of policies.
quick-scan Run a quick scan.
report Generate XML, MD or HTML report.
scanners Enable, disable, or list a set of scanners.
scripts Manage scripts.
session Manage sessions.
shutdown Shutdown the ZAP daemon.
spider Run the spider against a URL.
start Start the ZAP daemon.
status Check if ZAP is running.
EDIT:
I tried this command, but the abc.html file wasn't in the current dir. I did a find but couldn't locate abc.html anywhere:
zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/ && zap-cli report -o abc.html -f html
So next I tried putting these 2 commands in a zap-run.sh script, ran chmod +x on the script, and executed it; that DID create the abc.html file. So, thank you.
If the big goal is a "single liner", why not just chain the commands?
zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/ && zap-cli report -o abc.html -f html
If that doesn't suit you then put both commands in a batch file (or shell script) and call that instead.
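For reference, the zap-run.sh script mentioned in the edit above can be as small as this sketch (same scan options as before; the report is generated only if the scan exits successfully):
#!/bin/sh
# zap-run.sh - run the quick scan, then export the findings as HTML
zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/ \
  && zap-cli report -o abc.html -f html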
I use GitLab on their servers. I would like to download my latest built artifacts (build via GitLab CI) via the API like this:
curl --header "PRIVATE-TOKEN: 9koXpg98eAheJpvBs5tK" "https://gitlab.com/api/v3/projects/1/builds/8/artifacts"
Where do I find this project ID? Or is this way of using the API not intended for hosted GitLab projects?
I just found an even easier way to get the project ID: view the HTML source of the GitLab page hosting your project. There is a hidden input field named project_id, e.g.:
<input type="hidden" name="project_id" id="project_id" value="335" />
The latest version of GitLab (11.4 at the time of this writing) now puts the Project ID at the top of the front page of your repository.
On the Edit Project page there is a Project ID field in the top right corner.
(You can also see the ID on the CI/CD pipelines page, in the example code of the Triggers section.)
In older versions, you can see it on the Triggers page, in the URLs of the example code.
You can query for your owned projects:
curl -XGET --header "PRIVATE-TOKEN: XXXX" "https://gitlab.com/api/v4/projects?owned=true"
You will receive JSON with each owned project:
[
{
"id":48,
"description":"",
"default_branch":"master",
"tag_list":[
...
You can also get the project ID from the Triggers configuration in your project, which already contains some sample code with your ID.
From the Triggers page:
curl -X POST \
-F token=TOKEN \
-F ref=REF_NAME \
https://<GitLab Installation>/api/v3/projects/<ProjectID>/trigger/builds
As mentioned here, all the project scoped APIs expect either an ID or the project path (URL encoded).
So just use https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-foss directly when you want to interact with a project.
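If you'd rather not percent-encode the path by hand, here is a small sketch (jq's @uri filter does the encoding; the token is assumed to be in $GITLAB_TOKEN):
path="gitlab-org/gitlab-foss"
encoded=$(printf '%s' "$path" | jq -sRr '@uri')   # gitlab-org%2Fgitlab-foss
# fetch the project and print only its ID
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://gitlab.com/api/v4/projects/${encoded}" | jq .id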
Enter the project.
In the left-hand menu, click Settings -> General -> Expand General Settings.
There is a Project ID label next to the project name.
This is on GitLab 10.2.
Here is a solution that actually solves the problem: using the API to get the project ID for a specific GitLab project.
curl -XGET -H "Content-Type: application/json" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" http://<YOUR-GITLAB-SERVER>/api/v3/projects/<YOUR-NAMESPACE>%2F<YOUR-PROJECT-NAME> | python -mjson.tool
Or maybe you just want the project id:
curl -XGET -H "Content-Type: application/json" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" http://<YOUR-GITLAB-SERVER>/api/v3/projects/<YOUR-NAMESPACE>%2F<YOUR-PROJECT-NAME> | python -c 'import sys, json; print(json.load(sys.stdin)["id"])'
Note that the repo URL (namespace/repo name) is URL-encoded.
If you know your project name, you can get the project id by using the following API:
curl --header "Private-Token: <your_token>" -X GET https://gitlab.com/api/v4/projects?search=<exact_project_name>
This will return a JSON that includes the id:
[
{
"id":<project id>, ...
}
]
Just for the record, in case someone else needs to download artifacts from gitlab.com that were created via gitlab-ci:
1. Create a private token within your browser.
2. Get the project ID via curl -XGET --header "PRIVATE-TOKEN: YOUR_AD_HERE?" "https://gitlab.com/api/v3/projects/owned"
3. Download the last artifact from your master branch created via a gitlab-ci step called release: curl -XGET --header "PRIVATE-TOKEN: YOUR_AD_HERE?" -o myapp.jar "https://gitlab.com/api/v3/projects/4711/builds/artifacts/master/download?job=release"
I am very impressed by the beauty of GitLab.
You can view it under the repository name.
You can query projects with the search attribute, e.g.:
http://gitlab.com/api/v3/projects?private_token=xxx&search=myprojectname
As of Gitlab API v4, the following API returns all projects that you own:
curl --header 'PRIVATE-TOKEN: <your_token>' 'https://gitlab.com/api/v4/projects?owned=true'
The response contains the project ID. GitLab access tokens can be created from this page: https://gitlab.com/profile/personal_access_tokens
None of the answers suits generic needs; the most similar ones are intended only for gitlab.com, not for self-hosted instances. This can be used to find the ID of the project streamer on the GitLab server my-server.com, for example:
$ curl --silent --header 'Authorization: Bearer MY-TOKEN-XXXX' \
'https://my-server.com/api/v4/projects?per_page=100&simple=true'| \
jq -rc '.[]|select(.name|ascii_downcase|startswith("streamer"))'| \
jq .id
168
Remark that this gives only the first 100 projects; if you have more, you should request the pages that follow (&page=2, 3, ...) or use a different API (e.g. groups/:id/projects). Also note that jq is quite flexible: here we're just filtering for one project, but you can do much more with it.
There appears to be no way to retrieve only the project ID using the GitLab API. Instead, retrieve all the owner's projects and loop through them until you find the matching project, then return its ID. I wrote a script to do this:
#!/bin/bash
# usage: get-project-id.sh <projectName> [namespace]
projectName="$1"
namespace="$2"
default=$(sudo cat .namespace)
namespace="${namespace:-$default}"
json=$(curl --header "PRIVATE-TOKEN: $(sudo cat .token)" -X GET \
    'https://gitlab.com/api/v4/projects?owned=true' 2>/dev/null)
id=0
idMatch=0
pathWithNamespaceMatch=0
rowToMatch="\"$(echo "$namespace/$projectName" | tr '[:upper:]' '[:lower:]')\","
# walk the pretty-printed JSON token by token: remember each "id" value and
# print it as soon as the following "path_with_namespace" matches
for row in $(echo "${json}" | jq -r '.'); do
    [[ $idMatch -eq 1 ]] && { idMatch=0; id=${row::-1}; }
    [[ $pathWithNamespaceMatch -eq 1 ]] && { pathWithNamespaceMatch=0; [[ "$row" == "$rowToMatch" ]] && { echo "$id"; exit 0; } }
    [[ ${row} == "\"path_with_namespace\":" ]] && pathWithNamespaceMatch=1
    [[ ${row} == "\"id\":" ]] && idMatch=1
done
echo 'Error! Could not retrieve projectID.'
exit 1
It expects the default namespace to be stored in a file named .namespace and the private token in a file named .token. For increased security, it's best to run chmod 000 .token; chmod 000 .namespace; chown root .namespace; chown root .token
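A shorter alternative sketch, with jq doing the matching instead of the bash loop (same .token file; mynamespace/myproject is a placeholder, and only the first page of results is searched):
curl -s --header "PRIVATE-TOKEN: $(sudo cat .token)" 'https://gitlab.com/api/v4/projects?owned=true' | jq -r --arg path "mynamespace/myproject" '.[] | select(.path_with_namespace == $path) | .id'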
If your project name is unique, it is handy to follow the answer by shunya and search by name (refer to the API doc).
If you have a stronger access token and the GitLab instance contains a few same-name projects in different groups, then searching within a group is more convenient (API doc here), e.g.:
curl --header "PRIVATE-TOKEN: <token>" -X GET "https://gitlab.com/api/v4/groups/<group_id>/search?scope=projects&search=<project_name>"
The group ID can be found from the Settings page under the group domain.
And to fetch the project ID from the output, you can do:
curl --header "PRIVATE-TOKEN: <token>" -X GET "https://gitlab.com/api/v4/groups/<group_id>/search?scope=projects&search=<project_name>" | jq '.[0].id'
To get the IDs of all projects, use:
curl --header 'PRIVATE-TOKEN: XXXXXXXXXXXXXXXXXXXXXXX' 'https://gitlab.com/api/v4/projects?owned=true' > curloutput
grep -oPz 'name\":\".*?\"|{\"id\":[0-9]+' curloutput | sed 's/{\"/\n/g' | sed 's/name//g' |sed 's/id\"://g' |sed 's/\"//g' | sort -u -n
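The same listing can be produced more robustly with jq (a sketch printing one "id name" pair per line):
curl --header 'PRIVATE-TOKEN: XXXXXXXXXXXXXXXXXXXXXXX' 'https://gitlab.com/api/v4/projects?owned=true' | jq -r '.[] | "\(.id) \(.name)"'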
Not specific to the question, but I somehow reached here and it might help others. I used Chrome to get a project ID:
1. Go to the desired project, e.g. gitlab.com/username/project1
2. Inspect the Network tab in the developer tools
3. Look at the first GraphQL request in the Network tab
4. There you can search for the project path
curl -s 'https://gitlab.com/api/v4/projects?search=my/path/to/my/project&search_namespaces=true' --header "PRIVATE-TOKEN: $GITLAB_TOKEN" |python -mjson.tool |grep \"id\"
https://docs.gitlab.com/ee/api/projects.html
This will only match your project and will not return other, unnecessary projects.
My favorite method is to pull it from the CI/CD pipeline, so the project ID is assigned dynamically on each build. Simply read the predefined CI_PROJECT_ID variable in your pipeline.
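For example, inside any job's script section (a sketch; $GITLAB_TOKEN is assumed to be a CI/CD variable you configured yourself):
# CI_PROJECT_ID is predefined by GitLab CI in every job
echo "This project's ID is $CI_PROJECT_ID"
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID"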
Does anyone know of a free online tool that can crawl any given website and return just the Meta Keywords and Meta Description information?
Assuming you have access to Linux/Unix:
mkdir temp
cd temp
wget -r SITE_ADDRESS
Then, for keywords:
egrep -r -h 'meta[^>]+name="keywords' * | sed 's/^.*content="\([^"]*\)".*$/\1/g'
and for descriptions:
egrep -r -h 'meta[^>]+name="description' * | sed 's/^.*content="\([^"]*\)".*$/\1/g'
If you want all the unique keywords, try:
egrep -r -h 'meta[^>]+name="keywords' * | sed 's/^.*content="\([^"]*\)".*$/\1/g' | sed 's/\s*,\s*/\n/g' | sort | uniq
I'm sure there's a one-liner or program out there that does this exact thing, and there are definitely easier answers.
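In the same spirit, a sketch that prints each mirrored file next to its meta description (same assumptions as above: the site was mirrored into the current directory with wget -r, the meta tags put name="..." before content="...", and file names contain no whitespace):
grep -rl 'meta[^>]*name="description' . | while read -r f; do
  # pull the first description's content attribute out of the file
  desc=$(sed -n 's/^.*name="description"[^>]*content="\([^"]*\)".*$/\1/p' "$f" | head -1)
  printf '%s\t%s\n' "$f" "$desc"
done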
To retrieve all meta information, try this tool: Meta Tags Analyzer