I'm writing a script to automatically set up a repository, starting from a clean GraphDB instance running in a Docker container.
I have a config.ttl file containing the repository configuration, the namespace, and a dump in a file init.nq.
I have successfully created the repository using config.ttl and updated the namespace, but I cannot understand how to load the init.nq file.
This operation is extremely simple from the web interface: Import -> RDF -> Upload, but I'm not able to understand how to perform it using cURL. I suppose that the correct API should be
POST /repositories/{repositoryID}/statements
but the dump is too large (~44 MB) to pass as plain text.
This should work:
curl -X POST -H "Content-Type:application/n-quads" -T init.nq 'http://localhost:7200/repositories/test/statements'
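If you want to sanity-check the load afterwards, a quick statement count against the repository's SPARQL endpoint should do; a minimal sketch, assuming the same repository ID test as above:
# Count the statements that ended up in the repository (SPARQL CSV results)
curl -G -H 'Accept: text/csv' \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }' \
  'http://localhost:7200/repositories/test'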
I'm trying to kick off an import of RDF files into a GraphDB repository via the Workbench REST API.
It works fine when the file is in the {graphdb.workbench.importDirectory} directory and the request specifies "filenames": [ "file1.owl" ].
However, if the file is in a subdirectory (e.g. {graphdb.workbench.importDirectory}/top/) and the request uses "filenames": [ "top/file1.owl" ], no such luck - nor does "/top/file1.owl" work. The Workbench Import UI shows the entire collection of eligible files under the {graphdb.workbench.importDirectory} directory, and the file in question imports when the Workbench UI is used to initiate the import.
My question is: does the REST API support importing server files that are located in such child directories? And if so, what simple syntax am I missing? Do I have to specify any other property (e.g. "baseURI":"file:/home/steve/graphdb-import/top/file1.owl")?
Many thanks for any feedback.
If you have started GDB with -Dgraphdb.workbench.importDirectory=<path_to_the_import_directory>, then in the "Server files" tab you should see listed all files in this directory and its child directories, in the following manner:
I've started GDB with -Dgraphdb.workbench.importDirectory=/home/sava/Videos/data_for_import, and in this directory I have a subdirectory "movieDB" with two files, "movieDB.brf" and "movieDB.brf.gz"; both are shown in the tab as "movieDB/movieDB.brf" and "movieDB/movieDB.brf.gz".
If you want to import these files using cURL, use the server import URL with method POST:
curl -X POST 'http://localhost:7200/rest/data/import/server/w1' -H 'Accept: application/json, text/plain, */*' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"importSettings":{"name":"movieDB/movieDB.brf","status":"NONE","message":"","context":"","replaceGraphs":[],"baseURI":null,"forceSerial":false,"type":"file","format":null,"data":null,"timestamp":1608016179633,"parserSettings":{"preserveBNodeIds":false,"failOnUnknownDataTypes":false,"verifyDataTypeValues":false,"normalizeDataTypeValues":false,"failOnUnknownLanguageTags":false,"verifyLanguageTags":true,"normalizeLanguageTags":false,"stopOnError":true},"requestIdHeadersToForward":null},"fileNames":["movieDB/movieDB.brf"]}'
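If the file doesn't show up under the relative name you expect, it can help to list what the server actually sees. A minimal sketch, assuming the listing lives at the same /rest/data/import/server/{repositoryID} endpoint the Workbench uses:
# List the server files (and their import status) visible to repository w1
curl 'http://localhost:7200/rest/data/import/server/w1' -H 'Accept: application/json'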
When I run a baseline scan on a target I get the following result:
docker run -t owasp/zap2docker-stable zap-baseline.py -d -t https://mytarget.com
Result:
WARN-NEW: HTTP Parameter Override [10026] x 3
What does this result mean? What is this scan about?
Interesting timing, this was just being discussed on the issue tracker the other day: https://github.com/zaproxy/zaproxy/issues/4454
The thread that started it all: http://lists.owasp.org/pipermail/owasp-leaders/2012-July/007521.html
Basically it has to do with forms that don't have actions, or that propagate GET parameters into form actions (mainly impacting JSP/Servlet applications).
Edit: Of course you could also use -r report.html (or any of the reporting options) to get full details instead of just the summary; see the example after the list below.
-r report_html file to write the full ZAP HTML report
-w report_md file to write the full ZAP Wiki (Markdown) report
-x report_xml file to write the full ZAP XML report
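For example, a minimal invocation with an HTML report might look like this (a sketch: it assumes you mount the current directory as ZAP's working directory /zap/wrk so the report ends up on the host):
# Same baseline scan, plus a full HTML report written to the mounted host directory
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable \
  zap-baseline.py -d -t https://mytarget.com -r report.html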
I am trying to get the bq CLI to work with multiple service accounts for different projects without having to re-authenticate using gcloud auth login or bq init.
An example of what I want to do, and am able to do using gsutil:
I have used gsutil with a .boto configuration file containing:
[Credentials]
gs_service_key_file = /path/to/key_file.json
[Boto]
https_validate_certificates = True
[GSUtil]
content_language = en
default_api_version = 2
default_project_id = my-project-id
[OAuth2]
on a GCE instance to run an arbitrary gsutil command using a service account. The service account does not need to be unique or globally defined on the GCE instance: as long as a service account is set up in my-project-id and a private key has been created, the private key file referenced in the .boto config takes care of authentication. For example, if I run
BOTO_CONFIG=/path/to/my/.boto_project_1
export BOTO_CONFIG
gsutil -m cp gs://mybucket/myobject .
I can copy from any project for which I have a service account set up and the private key file defined in .boto_project_1. In this way, I can run a similar gsutil command for project_2 just by referencing the .boto_project_2 config file. No manual authentication needed.
The case with bq CLI
In the case of the BigQuery command-line tool, I want to reference a config file or pass a config option like a key file to run a bq load command, i.e. upload the same .csv file that is in GCS for various projects. I want to automate this without having to run bq init each time.
I have read here that you can configure a .bigqueryrc file and pass in your credential and key files as options; however, the answer is from 2012, references outdated bq credential files, and throws errors due to the openssl and pyopenssl installs it mentions.
My question
Please provide two example bq load commands, with any necessary options/.bigqueryrc files, to correctly load a .csv file from GCS into BigQuery for two distinct projects without needing to run bq init or authenticate manually between the two commands. Assume the .csv file is already in each project's GCS bucket.
Simply use gcloud auth activate-service-account and use the global --project flag.
https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
https://cloud.google.com/sdk/gcloud/reference/
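A sketch of what that looks like for two projects; the key file paths, project IDs, dataset/table names and bucket paths below are placeholders, and --autodetect is only needed if the target tables don't exist yet:
# Project 1: activate its service account, then load from its bucket
gcloud auth activate-service-account --key-file=/path/to/project1-key.json
bq --project_id=project-1 load --autodetect --source_format=CSV dataset_1.table_1 gs://project1-bucket/data.csv
# Project 2: same flow with the second service account, no bq init in between
gcloud auth activate-service-account --key-file=/path/to/project2-key.json
bq --project_id=project-2 load --autodetect --source_format=CSV dataset_2.table_2 gs://project2-bucket/data.csv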
I am running a local version of the NoFlo Development Environment and would like to know how to remove (unregister) a runtime. Actually, how can I remove a runtime from the FlowHub hosted environment, as well?
There is currently no UI to do this, but the API exists (see the related issue on GitHub).
Here is my bash script for doing just that.
#!/bin/bash -x
# Your UUID can be found through developer JS console: Resources -> Local Storage -> Look for grid-token
uuid="<your uuid>"
# The list of runtime IDs to delete, passed as a single space-separated argument.
list=$1
for i in ${list}
do
curl -X DELETE http://api.flowhub.io/runtimes/${i} -H "Authorization: Bearer ${uuid}"
done
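For instance, if the script is saved as delete_runtimes.sh (the name is just an assumption), pass the runtime IDs as a single space-separated argument:
# Delete two runtimes in one go; the IDs are placeholders
./delete_runtimes.sh "runtime-id-1 runtime-id-2"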
For an integration test, I need to download a CSV file using the Poltergeist driver with Capybara. In Selenium (for example, the Firefox/Chrome webdriver), I can specify a download directory and it works fine. But with Poltergeist, is there a way to specify the download directory or any special configuration? Basically, I need to know how downloading works with Poltergeist, Capybara, and PhantomJS.
I can read the server response headers as a Hash using Ruby, but cannot read the server response to get the file content. Any clue or help, please?
Finally I solved the download part by simply using cURL inside the Ruby code, without using any webdriver. The idea is simple: first I submitted the login form via cURL and saved the cookie, then I submitted (via cURL) the CSV export form using the saved cookie, like this:
post_data = "p1=d1&p2=d2&p3=d3"
`curl -c cookie.txt -d "userName=USERNAME&password=PASSWORD" LOGIN_SUBMIT_URL`
csv_data = `curl -X POST -b cookie.txt -d '#{post_data}' SUBMIT_URL_FOR_DOWNLOAD_CSV`
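The same flow also works entirely outside Ruby if you just need the file on disk; a rough cURL-only equivalent, using the same placeholder URLs and credentials as above:
# Log in and store the session cookie, then POST the export form and save the CSV
curl -c cookie.txt -d "userName=USERNAME&password=PASSWORD" LOGIN_SUBMIT_URL
curl -X POST -b cookie.txt -d "p1=d1&p2=d2&p3=d3" -o export.csv SUBMIT_URL_FOR_DOWNLOAD_CSV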