How can I get the media files of a product in Shopware 6 using store-api? - shopware6-api

We have a problem with the store-api. When I request a specific product, the media is empty. I need to get all of the product's images. Could you help me?

Hm, it would be best to start any question with what you have attempted already, to see if you are missing some part of the puzzle.
Here is a short gist of the request you need:
curl --location --request POST 'http://website.com/store-api/product' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header 'sw-access-key: XXX' \
--data-raw '{
    "limit": 1,
    "includes": {
        "product": ["cover", "media"]
    },
    "associations": {
        "media": {}
    }
}'
Sometimes the data you are looking for requires an extra database JOIN. In your case that is the media table, which is why you need to use associations to make sure the query joins that table into your response.
P.S. you don't have to use includes; I just used it to shorten the response object.
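If you need the media of one specific product rather than just the first product of the listing, the same criteria can carry a filter on the product ID. A sketch only (the ID value is a placeholder):
curl --location --request POST 'http://website.com/store-api/product' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header 'sw-access-key: XXX' \
--data-raw '{
    "filter": [
        { "type": "equals", "field": "id", "value": "<your-product-id>" }
    ],
    "associations": {
        "media": {}
    }
}'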

Related

Get full list of Groups and Projects in Gitlab Cloud

I'm trying to get a full list of projects and groups in our GitLab cloud account.
I'm currently using their documentation as a reference (bear in mind though I'm no developer) and using the Linux command line to do so. Here's the documentation I'm trying to use:
https://docs.gitlab.com/ee/api/projects.html
https://docs.gitlab.com/ee/api/groups.html#list-a-groups-projects
I'm using the following command to get the data and parse it into a readable format that I will export to CSV or a spreadsheet afterwards:
curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/?owned=yes&per_page=1000&page=1" | python -m json.tool | grep -E "http_url_to_repo|visibility" | awk '!(NR%2){print$0p}{p=$0}' | awk '{print $4,$2}' | sed -E 's/\"|\,//g' > gitlab.txt
My problem is that the command only returns about 100 of the 280 repositories we have. It doesn't seem to fetch them recursively from all the groups and subgroups.
Any ideas on how I can improve this search to get everything?
Thank you
It seems the API returns at most 100 results per page, so you will have to run the command more than once: first with page=1, then with page=2, and so on. For every page after the first you will need >> to append to the existing file gitlab.txt:
curl --header "..." "https://...&per_page=100&page=1" | ... > gitlab.txt
curl --header "..." "https://...&per_page=100&page=2" | ... >> gitlab.txt
Or you could write a script that first fetches all the pages and only then sends them down the pipe. You could also use a loop in bash, as sketched below.
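For example, a rough sketch of such a loop, reusing the original pipeline and assuming the API returns an empty JSON array [] once there are no more pages (per_page is capped at 100):
#!/bin/bash
TOKEN="XXX"           # your private token
page=1
> gitlab.txt          # start with an empty output file
while : ; do
    out=$(curl --silent --header "PRIVATE-TOKEN: $TOKEN" \
        "https://gitlab.com/api/v4/projects/?owned=yes&per_page=100&page=${page}")
    # stop as soon as a page comes back empty
    [ "$out" = "[]" ] && break
    echo "$out" | python -m json.tool \
        | grep -E "http_url_to_repo|visibility" \
        | awk '!(NR%2){print$0p}{p=$0}' | awk '{print $4,$2}' \
        | sed -E 's/\"|\,//g' >> gitlab.txt
    page=$((page+1))
done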

Variable within a bash script

As Debian no longer produces a boot log, I'm attempting to produce one from 'journalctl' (which, I admit, may or may not be the correct course!).
The following script (logdate.sh) works:
#!/bin/sh
DATE=$(zenity --entry \
--title="Produce Log For Chosen Date" \
--text="Enter date (Mmm nn, e.g. Jul 08)" \
--entry-text="")
journalctl > /home/john/log && grep -i "${DATE}" log >> /home/john/Logs/log_${DATE}.txt
rm /home/john/log
if zenity --question \
--text="Do you wish to view to log?"
then pluma /home/john/Logs/log_"${DATE}".txt
else zenity --info \
--text="Log not displayed"
fi
The variable DATE is passed from the first routine to the second without problem.
However, when trying to modify the code to ensure that users cannot enter an incorrect date format (using zenity), it appears that the variable DATE is not found in the 2nd routine. Code:
#!/bin/sh
DATE="$(zenity --calendar --date-format="%b %d" \
--title="Select a Date" \
--text="Click on a date to select that date.")"
if [[ $DATE != "" ]]
then journalctl > /home/john/log && grep -i "${DATE}" log >> /home/john/Logs/log_'${DATE}'.txt
rm /home/john/log
else zenity --info --text="No date selected"
fi
if zenity --question \
--text="Do you wish to view to log?"
then pluma /home/john/Logs/log_"${DATE}".txt
else zenity --info --text="Log not displayed"
fi
The error message received - 'bash: /home/john/Logs/log_${DATE}.txt: ambiguous redirect' - is due to the fact that DATE is not recognised/picked up by the second routine.
I've tried putting the 2nd & 3rd routines into another script, called from the first script, with the variable DATE being 'exported' from the 1st, but the result is exactly the same.
OK, it's SOLVED!
Amended
#!/bin/sh
to:
#!/bin/bash
The first script is obviously not bothered by the omission of 'ba'!
Sorry to have wasted anyone's time. It's my age.
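For what it's worth, the [[ ... ]] test in the second script is a bashism, so it is one of the things that only starts working once the shebang is bash. If you ever want to keep #!/bin/sh, here is a minimal sketch of that branch using the POSIX [ ... ] test and a quoted redirect target (the quotes also guard against the space in dates like 'Jul 08'); untested against your setup:
#!/bin/sh
# POSIX-compatible emptiness check; works under dash/sh as well as bash
if [ -n "$DATE" ]
then
    journalctl > /home/john/log && grep -i "${DATE}" /home/john/log >> "/home/john/Logs/log_${DATE}.txt"
    rm /home/john/log
else
    zenity --info --text="No date selected"
fi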

How to export n-quads from blazegraph

I want to export all data from Blazegraph in the typical formats. I've found a curl command for the triple formats and a curl command for the xml/json/... SPARQL result formats:
curl -X POST http://localhost:9999/bigdata/namespace/YOUR_NAMESPACE/sparql --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 1' --data-urlencode 'format=json' > outputfile
However what I really need is to generate an export in n-quads format. The graph-information is crucial for my systems. How do I do this?

How to extract table data from PDF as CSV from the command line?

I want to extract all rows from here while ignoring the column headers as well as all page headers, i.e. Supported Devices.
pdftotext -layout DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| sed '$d' \
| sed -r 's/ +/,/g; s/ //g' \
> output.csv
The resulting file should be in CSV spreadsheet format (comma separated value fields).
In other words, I want to improve the above command so that the output doesn't break at all. Any ideas?
I'll offer you another solution as well.
While in this case the pdftotext method works with reasonable effort, there may be cases where not each page has the same column widths (as your rather benign PDF shows).
Here the not-so-well-known, but pretty cool, free and open source Tabula-Extractor is the best choice.
I myself am using the direct GitHub checkout:
$ cd $HOME ; mkdir svn-stuff ; cd svn-stuff
$ git clone https://github.com/tabulapdf/tabula-extractor.git git.tabula-extractor
I wrote myself a pretty simple wrapper script like this:
$ cat ~/bin/tabulaextr
#!/bin/bash
cd ${HOME}/svn-stuff/git.tabula-extractor/bin
./tabula "$@"    # pass all wrapper arguments through to tabula
Since ~/bin/ is in my $PATH, I just run
$ tabulaextr --pages all \
$(pwd)/DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
| tee my.csv
to extract all the tables from all pages and convert them to a single CSV file.
The first ten (out of a total of 8727) lines of the CSV look like this:
$ head DAC06E7D1302B790429AF6E84696FCFAB20B.csv
Retail Branding,Marketing Name,Device,Model
"","",AD681H,Smartfren Andromax AD681H
"","",FJL21,FJL21
"","",Luno,Luno
"","",T31,Panasonic T31
"","",hws7721g,MediaPad 7 Youth 2
3Q,OC1020A,OC1020A,OC1020A
7Eleven,IN265,IN265,IN265
A.O.I. ELECTRONICS FACTORY,A.O.I.,TR10CS1_11,TR10CS1
AG Mobile,Status,Status,Status
which in the original PDF look like this:
It even got these lines on the last page, 293, right:
nabi,"nabi Big Tab HD\xe2\x84\xa2 20""",DMTAB-NV20A,DMTAB-NV20A
nabi,"nabi Big Tab HD\xe2\x84\xa2 24""",DMTAB-NV24A,DMTAB-NV24A
which look on the PDF page like this:
TabulaPDF and Tabula-Extractor are really, really cool for jobs like this!
Update
Here is an asciinema screencast (which you can also download and re-play locally in your Linux/macOS/Unix terminal with the help of the asciinema command line tool), starring tabula-extractor:
As Martin R commented, tabula-java is the new version of tabula-extractor and is actively maintained. Version 1.0.0 was released on July 21st, 2017.
Download the jar file and, with a recent Java, run:
java -jar ./tabula-1.0.0-jar-with-dependencies.jar \
--pages=all \
./DAC06E7D1302B790429AF6E84696FCFAB20B.pdf \
> support_devices.csv
What you want is rather easy, but you're having a different problem also (I'm not sure you are aware of it...).
First, you should add -nopgbrk ("No page breaks, please!") to your command, so that the pesky ^L characters which would otherwise appear in the output need not be filtered out later.
Adding a grep -vE '(Supported Devices|^$)' will then filter out all the lines you do not want, including empty lines, or lines with only spaces:
pdftotext -layout -nopgbrk \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - \
| grep -vE '(Supported Devices|^$|Marketing Name)' \
| gsed '$d' \
| gsed -r 's# +#,#g' \
| gsed 's# ##g' \
> output2.csv
However, your other problem is this:
Some of the table fields are empty.
Empty fields appear with the -layout option as a series of space characters, sometimes even two in the same row.
However, the text columns are not spaced identically from page to page.
Therefore you will not know from line to line how many spaces you need to regard as an "empty CSV field" (where you'd need an extra , separator).
As a consequence, your current code will show only one, two or three (instead of four) fields for some lines, and these fields end up in the wrong columns!
There is a workaround for this:
Add the -x ... -y ... -W ... -H ... parameters to pdftotext to crop the PDF column-wise.
Then append the columns with a combination of utilities like paste and column.
The following command extracts the first column:
pdftotext -layout -x 38 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 1st-columns.txt
These are for the second, third and fourth columns:
pdftotext -layout -x 214 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 2nd-columns.txt
pdftotext -layout -x 390 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 3rd-columns.txt
pdftotext -layout -x 567 -y 77 -W 176 -H 500 \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - > 4th-columns.txt
BTW, I cheated a bit: in order to get a clue about what values to use for -x, -y, -W and -H, I first ran this command to find the exact coordinates of the column header words:
pdftotext -f 1 -l 1 -layout -bbox \
DAC06E7D1302B790429AF6E84696FCFAB20B.pdf - | head -n 10
It's always good if you know how to read and make use of pdftotext -h. :-)
Anyway, how to append the four text files as columns side by side, with the proper CSV separator in between, you should find out yourself. Or ask a new question :-)
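For what it's worth, one way that last step could look, assuming the four per-column files produced above line up row for row (use gsed instead of sed on macOS); just a sketch:
# glue the four single-column extracts together, comma-separated,
# then trim stray spaces around the commas and at the line ends
paste -d ',' 1st-columns.txt 2nd-columns.txt 3rd-columns.txt 4th-columns.txt \
    | sed -E 's/[[:space:]]*,[[:space:]]*/,/g; s/^[[:space:]]+//; s/[[:space:]]+$//' \
    > merged.csv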
This can be done easily with an IntelliGet (http://akribiatech.com/intelliget) script as below
userVariables = brand, name, device, model;
{ start = Not(Or(Or(IsSubstring("Supported Devices",Line(0)),
IsSubstring("Retail Branding",Line(0))),
IsEqual(Length(Trim(Line(0))),0)));
brand = Trim(Substring(Line(0),10,44));
name = Trim(Substring(Line(0),45,79));
device = Trim(Substring(Line(0),80,114));
model = Trim(Substring(Line(0),115,200));
output = Concat(brand, ",", name, ",", device, ",", model);
}
For the case where you want to extract tabular data from PDFs over which you have control at creation time (e.g. timesheets or contracts your employees have to sign), the following solution will be cleaner:
Create a PDF form with field IDs.
Let people fill and save the PDF forms.
Use Apache PDFBox, an open source tool that allows you to extract form data from a PDF. It includes a command-line example tool, PrintFields, that you would call as follows to print the desired field information:
org.apache.pdfbox.examples.interactive.form.PrintFields file.pdf
For other options, see this question.
As an alternative to the above workflow, maybe you could also use a digital signature web service that allows PDF form filling and exporting the data to tables, such as SignRequest, which lets you create templates and later export the data of signed documents. (Not affiliated, just found this myself.)

Bash while read: output issue

Updated:
Initial issue:
Having a while read loop print every line that is read.
Answer: put a done <<< "$var"
Subsequent issue:
I may need some explanations about some shell code:
I have this :
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
That gets results looking like this:
<ip1> <site1>
<ip2> <site2>
<ip3> <site3>
<ip4> <site4>
up to 5000 ip_address
I did a "while loop":
while [ `find $proc_dir -name snmpproc* | wc -l` -ge "$max_proc_snmpget" ];do
{
echo "sleeping, fping in progress";
sleep 1;
}
done
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
while read ip codesite;do
{
sendSNMPGET $ip $snmp_community $code_site &
}
done<<<"$temp_ip"
And the sendSNMPGET function is :
sendSNMPGET() {
    touch $procdir/snmpproc.$$
    hostname=`snmpget -v1 -c $2 $1 sysName.0`
    if [ "$hostname" != "" ]
    then
        echo "hi test"
    fi
    rm -f $procdir/snmpproc.$$
}
The $max_proc_snmpget is set to 30.
At execution the read is OK and nothing more is printed on screen, but the child processes seem to be disoriented:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
Why can't it handle this ?
If temp_ip contains the name of a file that you want to read, then use:
done<"$temp_ip"
In your case, it appears that temp_ip is not a file name but contains the actual data that you want. In that case, use:
done<<<"$temp_ip"
Take care that the variable is placed inside double-quotes. That protects the data against the shell's word splitting which would result in the replacement of new line characters with spaces.
More details
In bash, an expression like <"$temp_ip" is called a redirection. In this case it means that the while loop will get its standard input from the file named by $temp_ip.
The expression <<<"$temp_ip" is called a here string. In this case, it means that the while loop will get its standard input from the data in the variable $temp_ip.
More information on both redirection and here strings in man bash.
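A tiny self-contained illustration of the here-string form (the data here is made up):
temp_ip=$'10.0.0.1 siteA\n10.0.0.2 siteB'
while read -r ip codesite; do
    echo "ip=$ip site=$codesite"   # one line per record: the quoted here string keeps the newlines
done <<< "$temp_ip"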
Or you can parse the output of your initial command directly:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | \
while read ip codesite
do
...
done
If you want to improve the performance and run some of the 5,000 SNMPGETs in parallel, I would recommend using GNU Parallel (here) like this:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | parallel -k -j 20 -N 2 sendSNMPGET {1} $snmp_community {2}
The -k will keep the parallel output in order. The -j 20 will run up to 20 SNMPGETs in parallel at a time. The -N 2 means take 2 parameters from the mysql output per job (i.e. ip and codesite). {1} and {2} are your ip and codesite parameters.
http://www.gnu.org/software/parallel/
I propose to not store the result value but use it directly:
while read ip codesite
do
sendSNMPGET "$ip" "$snmp_community" "$code_site" &
done < <(
"$mysql" --skip-column-names -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" \
-e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
This way you start the mysql command in a subshell and use its output as input to the while loop (similar to piping which here also is an option).
But I see some problems with that code: If you really start each sendSNMPGET command in the background, you very quickly will put a massive load on your computer. For each line you read another active background process is started. This can slow down your machine to the point where it is rendered useless.
I propose to not run more than 20 background processes at a time.
As you don't seem to have liked my answer with GNU Parallel, I'll show you a very simplistic way of doing it in parallel without needing to install that...
#!/bin/bash
MAX=8
j=0
while read ip code
do
(sleep 5; echo $ip $code) & # Replace this with your SNMPGET
((j++))
if [ $j -eq $MAX ]; then
echo -n Pausing with $MAX processes...
j=0
wait
fi
done < file
wait
This starts up to 8 processes (you can change it) and then waits for them to complete before starting another 8. Other respondents have already shown you how to feed your mysql output into the loop; that replaces the done < file in the second-to-last line of the script.
The key to this is the wait which will wait for all started processes to complete.
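If your bash is 4.3 or newer, a variation on the same idea tops the pool back up as soon as any one job finishes instead of draining all 8 first; a sketch using the same placeholder command:
#!/bin/bash
MAX=8
while read -r ip code
do
    # if MAX jobs are already running, wait for any single one to finish (bash >= 4.3)
    while [ "$(jobs -rp | wc -l)" -ge "$MAX" ]
    do
        wait -n
    done
    (sleep 5; echo "$ip $code") & # Replace this with your SNMPGET
done < file
wait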