Before pushing an image into Harbor, I'm running a Trivy scan. Instead of having Trivy scan the image again within Harbor, I'm looking for a way to pass the JSON results from Trivy into Harbor.
trivy image hello_docker_compose_web --output results.json -f json
Is there a way to have Harbor consume this results.json along with the image being pushed to Harbor?
It is not possible to upload scan results from an external Trivy run into Harbor.
The only option is to run the scan again within Harbor.
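You can, however, trigger that second scan automatically right after the push via Harbor's REST API. This is only a sketch: the host, project, credentials, and tag below are placeholders, and the endpoint path assumes a Harbor 2.x API and may differ between versions.
curl -X POST -u admin:password \
  "https://harbor.example.com/api/v2.0/projects/library/repositories/hello_docker_compose_web/artifacts/latest/scan"
# placeholder host, project, tag, and credentials; verify the scan endpoint path for your Harbor version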
I'm trying to develop an Android app that can scan for BLE devices, connect to one of them, and then send an image every second or every two seconds to a Raspberry Pi 3. The image would be a screenshot, resulting in a kind of screen mirroring at one image per second. I have an app that can scan, detect, and connect to my RPi over BLE, but I can't figure out how to send files or images, as I don't understand how the write functions work. I don't really understand characteristics and services. I'd like to write the image to the /tmp directory of my RPi. Does anyone have an example, or could you explain how to do it?
I am able to successfully execute the .ktr files in the browser, as well as with the Postman tool, using the URL below:
http://localhost:8089/kettle/executeTrans/?trans=D:\Pentaho\ktr\MyJson_to_Database.ktr
But I want to automate the process, and the ktr needs to accept a JSON file as input (right now the JSON data is inside the ktr file itself). Since I am using NodeJS to automate the ktr execution, I am trying to use Wreck and its post method to execute it (I am new to Wreck), and I am having difficulty identifying whether the error comes from Wreck or from the Kettle transformation itself.
In the meantime, I am trying to execute it without passing the path as a query string in the URL; instead I want to pass it in the body. I have searched Google with no success so far.
EDIT 1
I am able to reach the ktr file from the NodeJS microservice, and now the challenge is to read the file path inside the Docker image.
Could it work to store the JSON data in a file, and modify/add the transformation to read that file and pass along the information in it?
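For example, once the transformation reads its input from a file, the file path can be handed over on the Carte URL. A minimal sketch, assuming the transformation defines a named parameter (hypothetically called JSON_FILE_PATH here) that its JSON Input step uses, that Carte forwards extra request parameters to the transformation's named parameters, and that the default Carte credentials are in place:
curl -u cluster:cluster \
  "http://localhost:8089/kettle/executeTrans/?trans=D:\Pentaho\ktr\MyJson_to_Database.ktr&JSON_FILE_PATH=D:\Pentaho\data\input.json"
# JSON_FILE_PATH and the input file location are assumptions for illustration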
I have three projects on GCP which play the role of three environments (dev, staging, prod). Each of them has a corresponding dataset on BigQuery, created as follows:
bq --location=${REGION} mk \
--dataset \
${DEVSHELL_PROJECT_ID}:mydataset
bq mk \
--table \
${DEVSHELL_PROJECT_ID}:mydataset.mytable \
schema.json
When executing that in the dev shell on GCP, I have my Dev project selected.
And, when I execute
bq ls
in the shell, I can see only this dataset there, which is expected.
After that, when I switch to another project and execute
bq ls
again, only one dataset is visible, and it is the one dedicated to the staging environment, for example. But when I open the Google BigQuery UI (using the staging project), I can see my Dev environment/project dataset.
I am wondering why that is, and whether it is normal and expected.
It is totally normal behavior. The Resources section contains a list of pinned projects. Expand a project to view datasets and tables that you have access to. You can manually pin/unpin your datasets in each project. A search box is available in the Resources section that allows you to search for resources by name or by label.
Please refer to the official documentation. I hope it helps.
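From the command line, bq lists datasets only for the project it is currently pointed at, so a quick way to check each environment is to pass the project explicitly. A small sketch with placeholder project IDs:
bq ls --project_id=my-dev-project       # datasets in the dev project
bq ls --project_id=my-staging-project   # datasets in the staging project
# or switch the active project for the whole shell session
gcloud config set project my-staging-project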
So I'm trying to set up this script that pipes data via an API into BigQuery.
It's all being done on the command line, and I've already successfully set up the framework behind it, specifically the schema.json file.
When I run the following, it successfully uploads:
bq load --source_format=NEWLINE_DELIMITED_JSON --max_bad_records=10 program_users gs://internal/program_user.json program_users_schema.json
As I said, this successfully pipes into BQ, which is great. But the problem is that this API only returns a maximum of 50 records at a time, when there are over 1000.
EDIT: The initial call to retrieve the records looks like this:
$ curl -s https://api.programs.io/users/57f263fikgi33d8ea7ff4 -u 'dG9rOmU5NGFjYTkwXzliNDFfNGIyJ24iYzA0XzU0NDg3MjE5ZWJkZD=': -H 'Accept:application/json'
Would anyone have a solution to this, particularly one that can be done on the command line?
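One common pattern, sketched here under the assumption that the API exposes paging parameters (the offset and limit names below are hypothetical, as is the token placeholder), is to loop over the pages, append the newline-delimited JSON to a single file, and run one bq load at the end:
rm -f program_user.json
for offset in $(seq 0 50 950); do
  # hypothetical paging parameters; adjust to whatever scheme the API actually uses
  curl -s "https://api.programs.io/users/57f263fikgi33d8ea7ff4?offset=${offset}&limit=50" \
       -u '<token>': -H 'Accept:application/json' >> program_user.json
  echo >> program_user.json
done
gsutil cp program_user.json gs://internal/program_user.json
bq load --source_format=NEWLINE_DELIMITED_JSON --max_bad_records=10 \
   program_users gs://internal/program_user.json program_users_schema.json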
Is there an easy way to directly download all the data contained in a certain dataset on Google BigQuery? I'm currently downloading "as CSV", making one query after another, but it doesn't allow me to get more than 15k rows, and the rows I need to download number over 5M.
Thank you
You can run BigQuery extraction jobs using the Web UI, the command line tool, or the BigQuery API. The data can be extracted to a Google Cloud Storage bucket.
For example, using the command line tool:
First, install and authenticate using these instructions:
https://developers.google.com/bigquery/bq-command-line-tool-quickstart
Then make sure you have an available Google Cloud Storage bucket (see Google Cloud Console for this purpose).
Then, run the following command:
bq extract my_dataset.my_table gs://mybucket/myfilename.csv
More on extracting data via API here:
https://developers.google.com/bigquery/exporting-data-from-bigquery
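For a large table, it is worth adding compression and a wildcard in the destination URI so the export can be split across multiple files. A minimal sketch with placeholder names:
bq extract --destination_format=CSV --compression=GZIP \
   my_dataset.my_table 'gs://mybucket/myfilename_*.csv.gz'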
Detailed step-by-step to download large query output
1. Enable billing. You have to give your credit card number to Google to export the output, and you might have to pay. But the free quota (1 TB of processed data) should suffice for many hobby projects.
2. Create a project.
3. Associate billing with the project.
4. Do your query.
5. Create a new dataset.
6. Click "Show options" and enable "Allow Large Results" if the output is very large.
7. Export the query result to a table in the dataset.
8. Create a bucket on Cloud Storage.
9. Export the table to the created bucket on Cloud Storage. Make sure to click GZIP compression and use a name like <bucket>/prefix.gz. If the output is very large, the file name must contain an asterisk * and the output will be split into multiple files.
10. Download the table from Cloud Storage to your computer.
It does not seem possible to download multiple files from the web interface if the large file got split up, but you could install gsutil and run:
gsutil -m cp -r 'gs://<bucket>/prefix_*' .
See also: Download files and folders from Google Storage bucket to a local folder
There is a gsutil package in Ubuntu 16.04, but it is unrelated.
You must install and set up gsutil as documented at: https://cloud.google.com/storage/docs/gsutil
Unzip locally:
for f in *.gz; do gunzip "$f"; done
Here is a sample project I needed this for, which motivated this answer.
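The steps above, expressed as a rough command-line equivalent. The dataset, table, bucket, and query names are placeholders, and the --allow_large_results flag applies to the bq tool's legacy SQL mode:
# 1. materialize the query output into a table (needed when the result is large)
bq query --allow_large_results --destination_table=mydataset.output_table \
   'SELECT * FROM [mydataset.big_table]'
# 2. export the table to Cloud Storage, gzipped and split across files
bq extract --compression=GZIP mydataset.output_table 'gs://<bucket>/prefix_*.gz'
# 3. download and unzip locally
gsutil -m cp 'gs://<bucket>/prefix_*.gz' .
for f in *.gz; do gunzip "$f"; done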
For Python, you can use the following code; it will download the data as a DataFrame.
from google.cloud import bigquery

def read_from_bqtable(bq_projectname, bq_query):
    client = bigquery.Client(bq_projectname)
    bq_data = client.query(bq_query).to_dataframe()
    return bq_data  # return dataframe

bigQueryTableData_df = read_from_bqtable('gcp-project-id', 'SELECT * FROM `gcp-project-id.dataset-name.table-name` ')
Yes, the steps suggested by Michael Manoochehri are correct and an easy way to export data from Google BigQuery.
I have written a bash script so that you do not need to do these steps every time; just use my bash script.
Below is the GitHub URL:
https://github.com/rajnish4dba/GoogleBigQuery_Scripts
Scope:
1. Export data based on your BigQuery SQL.
2. Export data based on your table name.
3. Transfer your export file to an SFTP server.
Try it and let me know your feedback.
For help, use ExportDataFromBigQuery.sh -h.
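To try it, the repository can be cloned and the script's own help consulted. Only the -h flag is documented above; that the script sits at the repository root is an assumption.
git clone https://github.com/rajnish4dba/GoogleBigQuery_Scripts.git
cd GoogleBigQuery_Scripts
./ExportDataFromBigQuery.sh -h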