Pandoc bibliography: why is nocite not working?

I am trying to render a markdown document with Pandoc and leverage its bibliography capabilities.
My references are listed in a main.bib file, and citation rendering works flawlessly with --bibliography=main.bib --citeproc and the cited references appear in my #refs div.
However, I'd like all my references to be listed in the #refs div, rather than only those which are cited.
According to Pandoc's user manual, adding the following to the YAML block should do the trick:
nocite: |
  @*
However, it doesn't work for me. Neither does -M nocite='@*' on the command line.
Any clue about that?
Here is a minimal reproducible example (with pandoc 2.18):
main.md:
---
title: "Pandoc nocite not working reproduction"
author: "ombrelin"
date: "2022-07-30"
titlepage: false
toc-own-page: true
bibliography: main.bib
book: true
tof: true
lof: true
...
# Introduction
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
# References
::: {#refs}
:::
`main.bib`:
@www{extreme_programming,
author = {Don Wells},
title = {Extreme Programming},
date = {1999},
url = {http://www.extremeprogramming.org/when.html},
}
and the compile command:
pandoc \
main.md -f markdown \
--top-level-division=chapter \
--citeproc \
--bibliography=main.bib \
-V fontsize=12pt \
--pdf-engine=xelatex \
--toc \
-M nocite='@*' \
-o "main.pdf"

Based on your minimal reproducible example, I could reproduce your problem.
What did the trick for me was adding a metadata file metadata.yaml with the following content (based on the documentation):
---
nocite: |
  @*
...
...and adding it to the compile command:
pandoc main.md -f markdown --top-level-division=chapter --citeproc --bibliography=main.bib -V fontsize=12pt --pdf-engine=xelatex --toc --metadata-file=metadata.yaml -o "main.pdf"
This adds the entries to the references section.

Related

Would filtering by order ID from nvarchar(MAX) column benefit from fulltext search?

In my table I have an nvarchar(MAX) column which holds data about our orders in XML form. Another department often has to get information from the table, most commonly by looking for a certain order ID in that column, but we all need to be able to look for any part of the text in the column.
The problem is that the order IDs and other filter/search criteria are unique to each order, so they are not words I can define in stopwords or stoplists, and new ones are generated for new orders, of course.
Since the table is growing in size, queries with a LIKE clause take a really long time, yet I cannot think how my colleagues can get information about an order except by looking for things like its order ID, a part serial number, or a piece of text. We have no control over the XML order data. Furthermore, it doesn't have a standardised structure: every order type has its own XML structure, there are too many to separate into different tables, and we can't extract separate elements into different columns because we don't control the processing, only the storage, of the data.
Since an index would be futile in this case, I started reading about full-text search. Would that help, given the reasons I mentioned, and if not, is there a better alternative?
EDIT:
Adding example messages stored in the nvarchar(MAX) column (there are approx. 23 other types not shown here for the moment):
<?xml version='1.0' encoding='UTF-8'?>
<ServiceRequest>
<OrderID>GT123456789123465</OrderID>
<CreatedBy>some.person</CreatedBy>
<CreationTime>2021-08-18</CreationTime>
<CustomerReference>INC123456</CustomerReference>
<CustomerContract>123456</CustomerContract>
<ShortDescription>A little bit of text</ShortDescription>
<StockLocation>Text-Identifier</StockLocation>
<DueDate>2021-08-19</DueDate>
<ExpectedDeliveryDate>2021-08-19T10:30:00</ExpectedDeliveryDate>
<ServiceLevel>Same-Day</ServiceLevel>
<DeliveryLocation>SITE</DeliveryLocation>
<Site>Address of a building or something</Site>
<ContactName>Name of a person</ContactName>
<ContactPhone>0123456789</ContactPhone>
<DocTypeID>123</DocTypeID>
<DeliveryAddress>
<Address1>Address1</Address1>
<Address2>More details about address</Address2>
<City>Some city</City>
<Postcode>LOL KEK</Postcode>
<Country>Narnia</Country>
</DeliveryAddress>
<Parts>
<Part>
<UniqueID>168468468</UniqueID>
<PartNumber>#ABCDE-1234-ABCDE</PartNumber>
<Description>Example TV set with model name</Description>
<Quantity>1</Quantity>
<Returnable>Y</Returnable>
<ReturnInformation>TBC</ReturnInformation>
<LinkedDemandId>123456789</LinkedDemandId>
</Part>
</Parts>
</ServiceRequest>
.
<?xml version='1.0' encoding='UTF-8'?>
<Engineer>
<PersonId>some.guy</PersonId>
<SearchName>Some Guy</SearchName>
<PersonEmail>some.guy@company.com</PersonEmail>
<PersonPhone>00000000000</PersonPhone>
<Address>
<Zip>LOL KEK</Zip>
<CountryId>US</CountryId>
</Address>
</Engineer>
.
<WarehouseUpdate>
<WarehouseId>Warehouse-name-123456</WarehouseId>
<WarehouseDescription>Friendlier WH name</WarehouseDescription>
<CurrencyId>USD</CurrencyId>
<CostDomainId>MAIN</CostDomainId>
<NodeId>SSL</NodeId>
<SupplySource>W</SupplySource>
<ReturnWarehouse>Warehouse-ID</ReturnWarehouse>
<IsAutoReceive>Y</IsAutoReceive>
<IsRepairWhse>N</IsRepairWhse>
<IsReplenishable>Y</IsReplenishable>
<SupplyWarehouse>Warehouse-ID</SupplyWarehouse>
<WarehouseTypeId>ABC</WarehouseTypeId>
<Address>
<Zip>LOL KEK</Zip>
<CountryId>US</CountryId>
</Address>
</WarehouseUpdate>
.
<briefing>
<incident>
<serviceProvider>A long name of a company that provides a service</serviceProvider>
<receiverURL>https://www.google.com/some/url/that/is/used</receiverURL>
<incidentNumber>123456789</incidentNumber>
<taskNumber>123456789</taskNumber>
<taskAssignmentID>123456789</taskAssignmentID>
<taskCreationDate>2021-02-31</taskCreationDate>
<taskCreationTime>10:32:13</taskCreationTime>
<sendDate>2021-02-31</sendDate>
<sendTime>10:32:27</sendTime>
<request>ABC</request>
<urgency>ABC</urgency>
<severity>D</severity>
<customerName>Name of a company</customerName>
<helpdeskNumber>123456789</helpdeskNumber>
<originalCustomerReference>123456789</originalCustomerReference>
<projectNumber/>
<project/>
<callerFirstName>Some</callerFirstName>
<callerLastName>Person</callerLastName>
<callerPhone>123456789123456789</callerPhone>
<callerPhoneType>ABC</callerPhoneType>
<callerEmailaddress/>
<callerPreferredLanguage/>
<communicationPreference/>
<installedAtAddress1>Address short</installedAtAddress1>
<installedAtAddress2/>
<installedAtAddress3>More address details</installedAtAddress3>
<installedAtAddress4>Even more address details</installedAtAddress4>
<installedAtCity>Washington</installedAtCity>
<installedAtState/>
<installedAtProvince/>
<installedAtPostalCode>LOL KEK</installedAtPostalCode>
<installedAtCountry>US</installedAtCountry>
<installedAtPhone/>
<installedAtFax/>
<installedAtEmail/>
<productSerialNumber>ABCDF123456789</productSerialNumber>
<productTag>ABCDF123456789</productTag>
<productSystem>123456</productSystem>
<productItemNumber>123456789123456789</productItemNumber>
<productItemDescription>This is a thingy</productItemDescription>
<productComponentNumber/>
<productComponentDescription/>
<productServiceGroupNumber>12</productServiceGroupNumber>
<customerSerialNumber/>
<defectDescription>ABCD::ABCD::ABCD EF::ABCD EF</defectDescription>
<orderDescription>Lorem ipsum</orderDescription>
<taskType>In summet idit</taskType>
<customerErrorCode/>
<problemCode>corpsem mepsem dopsem</problemCode>
<resolutionSummary/>
<resolutionCode/>
<reporteddate>2021-05-12</reporteddate>
<reportedtime>10:30:44</reportedtime>
<customerTimezone>EET</customerTimezone>
<coverage>-ABC-EF-</coverage>
<contractServiceNumber>123456789123456789</contractServiceNumber>
<plannedStartDate>2021-05-12</plannedStartDate>
<plannedStartTime>10:32:13</plannedStartTime>
<plannedEndDate>2021-05-14</plannedEndDate>
<plannedEndTime>10:32:44</plannedEndTime>
<chargeableFlag/>
<vkOrg>12AB</vkOrg>
<attribute1/>
<attribute2/>
<attribute3>AB</attribute3>
<attribute4/>
<attribute5/>
<attribute6/>
<attribute7/>
<attribute8/>
<attribute9/>
</incident>
<incidentNotes>
<technicianNote>Went to lunch, bought a snack</technicianNote>
<technicianNote>AB 11.11 11/11/2021
A huge block of text incoming
Here a monumental, monolithic, megalithic block of text resides that has numerous details written in freeform however the person doing the thing decides
There is some structure to the text but it is completely specific to this one type of XML message.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla eu feugiat enim. Praesent malesuada, diam ut ornare tristique, ipsum dolor rutrum enim, et sodales lectus mi at ex. Nulla in varius nisl. Nunc non enim augue. Integer condimentum tempor lacus, non maximus tortor dictum a. Donec dapibus urna nulla, ac tempus justo sodales sed. Cras lacinia tempus lacinia. Sed fermentum libero vel lectus ornare, at egestas eros dignissim. Quisque ac vehicula erat. Morbi id ultrices sem, auctor dapibus ligula. Vivamus vestibulum consectetur ligula non viverra. Proin id mi non ipsum consectetur interdum. Aenean id posuere metus.
Aenean diam justo, ultrices sed cursus eget, posuere eget justo. Maecenas egestas mi et rutrum auctor. Fusce tincidunt ac purus ut gravida. Proin ac condimentum nibh, id venenatis nibh. Sed eu turpis non sem venenatis posuere eu sit amet leo. Vivamus velit lacus, tempor quis dolor vel, sagittis vulputate risus. Praesent dignissim sed turpis vel porta. Duis elit ante, pellentesque sit amet nisl eu, ullamcorper varius ex. Aenean tortor ligula, posuere sed tempor eget, consequat ut orci. Vestibulum eu aliquet ante. Ut eros ex, dignissim nec accumsan eu, posuere nec ligula. Cras tempor volutpat tempor. Duis vitae dui sit amet diam porttitor viverra. Aliquam ornare, turpis ut pulvinar bibendum, urna eros sodales turpis, sed malesuada felis massa in neque. Etiam venenatis volutpat diam eget placerat. Integer ultrices vulputate neque ut ullamcorper.
</technicianNote>
</incidentNotes>
<ibaseNotes>
<serialnumberNote/>
</ibaseNotes>
</briefing>
Also, I think I misled you by accident: most commonly order IDs are searched, but we need to be able to search/filter by any part of the XML message; for example, I need to be able to find all messages that contain "some.person". I'll go back and edit that.
Ideally you should store XML in an xml column. Then you can use XQuery to search. An XML index on the column may be wise.
But you can also cast the data to XML (at the cost of lower efficiency), like this:
CROSS APPLY (VALUES (
    CAST(CAST(yourData AS varchar(max)) AS xml)
)) v(xmlData)
The cast to varchar is necessary because the XML declaration specifies UTF-8, while nvarchar data is UTF-16.
Then you can use XQuery, like this for example
WHERE v.xmlData.exist('/Engineer/PersonId[contains(text()[1], "some.guy")]') = 1
   OR v.xmlData.exist('/Engineer/PersonEmail[contains(text()[1], "some.guy")]') = 1
Or pass through a SQL variable
WHERE v.xmlData.exist('/Engineer/PersonId[contains(text()[1], sql:variable("@toSearch"))]') = 1
   OR v.xmlData.exist('/Engineer/PersonEmail[contains(text()[1], sql:variable("@toSearch"))]') = 1
SQL Fiddle
To find text in any node, you can use //*, which means "descend any depth, matching any node name"
WHERE v.xmlData.exist('//*[contains(text()[1], sql:variable("@toSearch"))]') = 1
That can be inefficient though.
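For illustration only (Python rather than T-SQL; the element names are taken from a cut-down version of the Engineer sample above), here is a sketch of what that //* search does, i.e. matching any element whose text contains the search string:

```python
import xml.etree.ElementTree as ET

# cut-down version of the Engineer sample from the question
xml_doc = """<Engineer>
  <PersonId>some.guy</PersonId>
  <PersonEmail>some.guy@company.com</PersonEmail>
  <PersonPhone>00000000000</PersonPhone>
</Engineer>"""

def elements_containing(xml_text, needle):
    """Tags of all elements whose text contains `needle`, mimicking
    the //*[contains(text()[1], ...)] predicate."""
    root = ET.fromstring(xml_text)
    return [el.tag for el in root.iter() if el.text and needle in el.text]

print(elements_containing(xml_doc, "some.guy"))  # ['PersonId', 'PersonEmail']
```

Just like the XQuery version, this has to visit every node, which is why it does not scale the way an index would.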

groff: Incorrect line width after page break

I'm using groff version 1.22.4 to create a two-page letter. The first page has three columns, the second page has two columns.
The macros for printing columns 1-3 on the first page work as expected. The macro for starting the second page always gives a first line that is the width of a column on the previous page.
How can I get the first line on the second page to have the correct width?
Below is the groff:
.ll 2.25i \" Line length of a column 2.25 inches. Good for three columns.
.vs 15p \" 15 points between lines of text in the same paragraph
.ps 12 \" 12 point font size
.nr bottom-margin 0.75i \" Bottom margin
.de START-COLUMN-0
. mk \" Mark top of column
. wh -\\n[bottom-margin]u START-COLUMN-1 \" At bottom of 1st column, run next macro.
..
.de START-COLUMN-1
. po +2.55in \" Add offset for second column.
. rt \" Return to top of column.
. wh -\\n[bottom-margin]u START-COLUMN-2 \" At bottom of 2nd column, run next macro.
..
.de START-COLUMN-2
. po +2.55in \" Add offset for third column.
. rt \" Return to top of column.
. wh -\\n[bottom-margin]u START-PAGE-2 \" At bottom of 3rd column, run next macro.
..
.de START-PAGE-2 \"Page break.
'll 3.55i \" Line length of a column 3.55 inches. Good for two columns.
'bp \" Break page.
'po 0.5in \" left margin
'mk \" Mark top of column
'wh -\\n[bottom-margin]u END-PAGE-2 \" At bottom of 1st column, run next macro.
..
.de END-PAGE-2
. po +3.85in \" Add offset for second column.
. rt \" Return to top of column.
. wh -\\n[bottom-margin]u \" Terminate at second column on second page.
..
.START-COLUMN-0
Lots of text here.
Posted as an answer because I need a picture in it and it would be too large for a comment.
Are you sure? The result is completely different from what you describe. The second page is two columns, instead of the one column in your description.
When I run your code (GNU groff version 1.22.4), I get the following (PDF screenshot omitted), or as nroff:
laoreet arcu eros vi‐ faucibus, lacus lectus cus. Quisque mattis
tae lorem. Morbi con‐ ullamcorper massa, euismod tortor, sit
vallis massa lacus, quis fermentum leo me‐ amet hendrerit lacus
vel mollis velit tus sed ipsum. Maece‐ tristique a. Aenean
vulputate nec. nas sagittis pharetra fermentum sapien pu‐
rus,
vel interdum tellus tincidunt nec. Ut euismod massa risus.
Aenean rutrum, sem sed sodales mattis, magna felis ullamcorper dolor, ac
convallis nulla diam vel erat. Donec in turpis velit. Nunc elit arcu, cur‐
sus et condimentum in, efficitur et nisi. Vivamus suscipit porttitor nunc
consectetur malesuada. Vivamus sodales non lacus quis porttitor. Aenean
viverra nulla ut lacus dignissim bibendum. Nulla gravida sem quis ex cursus
I also get a single column on the second page, and not two columns as you have in the output.

Annotate Data in between Markup

I'm trying to write a rule to detect Data in between Markup tags.
The input data format is fixed, for example:
<1> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim</1>
<2> nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim</2>
What I basically need here is to detect the data between a start and end tag;
in my case the output should be 1 and 2.
I'm trying the rule below.
Document{->ADDRETAINTYPE(MARKUP)};
STRING sStart = "<";
STRING sEnd = ">";
DECLARE spanStart;
DECLARE spanEnd;
DECLARE ZONE;
sStart -> spanStart;
sEnd -> spanEnd;
spanStart NUM spanEnd{->MARK(ZONE,2)};
But the value is not getting detected, as 1 and 2 are not detected as NUM.
"1" and "2" are not detected as NUM because they are MARKUP. The seeding creates a disjunct non-overlapping partitioning of the document. If you want to create an annotation within a currently smallest part, e.g., in your use case MARKUP, you can do that with a simple regex rule as you did in your question with spanStart and spanEnd.
I would use something like:
MARKUP->{"\\d+"-> ZONE;};
or
MARKUP->{"</?(\\d+)>"-> 1 = ZONE;};
DISCLAIMER: I am a developer of UIMA Ruta
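For comparison only (Python re rather than Ruta), the regex in the second rule uses a capturing group so that only the digits between the angle brackets end up in the annotation:

```python
import re

# the two sample lines from the question, shortened
text = "<1> Lorem ipsum dolor sit amet</1>\n<2> nascetur ridiculus mus</2>"

# </?(\d+)> matches <1>, </1>, <2> and </2>; the capturing group holds
# just the number, which is what the Ruta rule binds via `1 = ZONE`
numbers = re.findall(r"</?(\d+)>", text)
print(numbers)  # ['1', '1', '2', '2']
```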

How to extract multiple groups from cell using pandas df.str.extract()

How can I get all the occurrences of the pattern as a list from a Pandas cell? Is it possible?
name_pattern = r'([A]u?[-_\s]?[0-9]{2})'
df["Result"] = df["Name"].str.extract(name_pattern, flags=re.IGNORECASE)
Example Text:
Qui voluptates doloremque A-12 veritatis dolor optio temporibus nobis fugit. Inventore excepturi quis nulla. Dolor ratione Z-99 optio doloribus voluptas veritatis voluptatem. Asperiores sed aperiam sint A-99 voluptatem A-66 exercitationem.
I would expect df["Result"] to be ["A-12","A-99","A-66"]
You should be able to use
df["Result"] = (df["Name"].str.extractall(name_pattern, flags=re.IGNORECASE)
.groupby(level=0)[0].apply(list))
which would result in the following df:
Name Result
0 Qui voluptates doloremque A-12 veritatis dolor... [A-12, A-99, A-66]
Unfortunately, there is a bug that prevents this from working in pandas 0.18.0 and 0.18.1; it is fixed in the development version, and 0.19.0 will not have this problem. In the meantime, you can also do
df["Result"] = df["Name"].apply(lambda x: re.findall(name_pattern, x, flags=re.IGNORECASE))
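A self-contained sketch of the extractall approach, using a shortened version of the example text from the question (the Name/Result column names are the question's own):

```python
import re
import pandas as pd

name_pattern = r'([A]u?[-_\s]?[0-9]{2})'
df = pd.DataFrame({"Name": [
    "Qui voluptates doloremque A-12 veritatis dolor optio. "
    "Dolor ratione Z-99 optio doloribus. "
    "Asperiores sed aperiam sint A-99 voluptatem A-66 exercitationem."
]})

# extractall returns one row per match, indexed by (row, match);
# grouping on level 0 folds the matches back into one list per row
df["Result"] = (df["Name"].str.extractall(name_pattern, flags=re.IGNORECASE)
                          .groupby(level=0)[0].apply(list))

print(df["Result"].iloc[0])  # ['A-12', 'A-99', 'A-66']
```

Note that Z-99 is skipped, since the pattern only allows the letter A.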

Get all information for products by categories - SQL?

Since this is the best page I've found for questions like this, I want to ask here. I'm a little bit confused. I'm a beginner with SQL statements, so I need your help, please. I have three tables:
product
product_category
product_to_category
Here is the exported SQL dump, so you can see the structure:
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
CREATE TABLE IF NOT EXISTS `product` (
`productID` int(6) NOT NULL AUTO_INCREMENT,
`productTitle` varchar(255) NOT NULL,
`productDescription` text NOT NULL,
`productPrice` double NOT NULL DEFAULT '0',
`productQuantity` int(5) NOT NULL,
PRIMARY KEY (`productID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;
INSERT INTO `product` (`productID`, `productTitle`, `productDescription`, `productPrice`, `productQuantity`) VALUES
(1, 'Chlor 5L', 'Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. \r\n\r\nDuis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. \r\n\r\nUt wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. \r\n\r\nNam liber tempor cum soluta nobis eleifend option congue nihil imperdiet doming id quod mazim placerat facer possim assum. 
Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper suscipit lobortis nisl ut aliquip ex ea commodo consequat. \r\n\r\nDuis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis. ', 14.95, 50),
(2, 'Chlor 15L', 'Mit diesem Kanister kommen Sie etwa 27.000 Liter aus.', 50, 13),
(3, 'Chlor 20L', 'Mit diesem Kanister kommen Sie etwa 37.000 Liter aus.', 60, 2),
(4, 'Chlor 25L', 'Mit diesem Kanister kommen Sie etwa 47.000 Liter aus.', 79, 11),
(5, 'Kieselgur 50kg', 'Eine menge Kieselgur zum säubern.', 69.99, 9);
CREATE TABLE IF NOT EXISTS `product_category` (
`categoryID` int(3) NOT NULL AUTO_INCREMENT,
`categoryName` varchar(255) NOT NULL,
`categoryDescription` text NOT NULL,
PRIMARY KEY (`categoryID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
INSERT INTO `product_category` (`categoryID`, `categoryName`, `categoryDescription`) VALUES
(1, 'Schwimmbecken', 'Hier finden Sie alle Produkte rund um das Thema Schwimmbecken.'),
(2, 'Whirlpool', 'Hier finden Sie alle Produkte rund um das Thema Whirlpools.'),
(3, 'Sauna', 'Hier finden Sie alle Produkte rund um das Thema Sauna.'),
(4, 'Infrarot', 'Hier finden Sie alle Produkte rund um das Thema Infrarotkabinen.');
CREATE TABLE IF NOT EXISTS `product_to_category` (
`categoryID` int(3) NOT NULL,
`productID` int(6) NOT NULL,
`productAddedTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`categoryID`,`productID`),
KEY `productID` (`productID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `product_to_category` (`categoryID`, `productID`, `productAddedTime`) VALUES
(1, 1, '2011-11-27 13:57:12'),
(1, 2, '2011-11-27 13:57:12'),
(1, 3, '2011-11-27 13:57:12'),
(1, 4, '2011-11-27 13:57:12'),
(1, 4, '2011-11-27 13:57:12'),
(2, 1, '2011-11-27 13:57:12');
ALTER TABLE `product_to_category`
ADD CONSTRAINT `product_to_category_ibfk_2` FOREIGN KEY (`categoryID`) REFERENCES `product_category` (`categoryID`),
ADD CONSTRAINT `product_to_category_ibfk_3` FOREIGN KEY (`productID`) REFERENCES `product` (`productID`);
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
So, what I want to do is: get all the information for ALL products, grouped on ONE page by their categories. In addition I want to show the number of products in each category. I do not know how to do this. With the statement below I get the right number of products per category, but only one product each. I want all product information for ALL products in ALL categories. Hopefully I stated my point; otherwise you may ask again.
SELECT COUNT(ptc.productID) AS productCount, pc.categoryName, p.productTitle, p.productPrice
FROM product_category pc
JOIN product_to_category ptc ON pc.categoryID = ptc.categoryID
JOIN product p ON ptc.productID = p.productID
GROUP BY pc.categoryName
I hope you have an answer for me...
What I want is an overview like this:
Schwimmbecken (5 products)
- Chlor 5L (14.95)
- Chlor 15L (50.00)
- Chlor....
Whirlpool (1 product)
- Chlor 5L (14.95)
Hope this was enough to show you...
I think you are close, but I would 'start' with either product or category.
I would start with product, link to product_to_category and then to product_category, e.g. something like:
SELECT COUNT(p.productID) AS productCount, pc.categoryName, p.productTitle, p.productPrice
FROM product p
JOIN product_to_category ptc ON ptc.productID = p.productID
JOIN product_category pc ON ptc.categoryID = pc.categoryID
ORDER BY pc.categoryName
Start with this, examine the results and then add grouping as appropriate.
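As an illustration of the join-then-group idea (a sketch in Python with the built-in sqlite3 module and a cut-down version of the sample data, since the dump above targets MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (productID INTEGER PRIMARY KEY, productTitle TEXT, productPrice REAL);
CREATE TABLE product_category (categoryID INTEGER PRIMARY KEY, categoryName TEXT);
CREATE TABLE product_to_category (categoryID INTEGER, productID INTEGER);
INSERT INTO product VALUES (1, 'Chlor 5L', 14.95), (2, 'Chlor 15L', 50), (3, 'Chlor 20L', 60);
INSERT INTO product_category VALUES (1, 'Schwimmbecken'), (2, 'Whirlpool');
INSERT INTO product_to_category VALUES (1, 1), (1, 2), (1, 3), (2, 1);
""")

# One row per (category, product); the per-category product count is
# fetched with a correlated subquery so every row carries it.
rows = con.execute("""
    SELECT pc.categoryName,
           (SELECT COUNT(*) FROM product_to_category x
             WHERE x.categoryID = pc.categoryID) AS productCount,
           p.productTitle, p.productPrice
    FROM product_category pc
    JOIN product_to_category ptc ON ptc.categoryID = pc.categoryID
    JOIN product p ON p.productID = ptc.productID
    ORDER BY pc.categoryName, p.productTitle
""").fetchall()

for row in rows:
    print(row)
```

Each row repeats the category name and count; collapsing those repeats into a single heading per category is presentation work best done in the application code.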
use (corrected version)
SELECT DISTINCT
c.categoryName || '(' || c.cnt || ' products)' title,
x.productTitle || '(' || x.productPrice || ')' productinfo
FROM
(
select
pc.categoryid,
pc.categoryName,
count(distinct p2c.productID) cnt
from product_category pc
INNER JOIN product_to_category p2c ON p2c.categoryid = pc.categoryid
group by pc.categoryid, pc.categoryName
) c
INNER JOIN
(
SELECT DISTINCT
ptc.categoryid,
p.productTitle,
p.productPrice
FROM product p
INNER JOIN product_to_category ptc ON p.productID = ptc.productID
) x ON x.categoryid = c.categoryid
ORDER BY 1, 2
This will give what you ask for except for one thing: the title will be repeated as often as there are products in the respective category... that part can't be handled via SQL itself; you will have to handle it in your code...
EDIT - as per comments:
The above select makes an inner join between two SELECTs... the first gets one row per category plus the count of products in that category... the second gets all products per category... these are joined via the categoryid...
Just tried it with your sample data and got the following result:
TITLE PRODUCTINFO
Schwimmbecken(4 products) Chlor 15L(50)
Schwimmbecken(4 products) Chlor 20L(60)
Schwimmbecken(4 products) Chlor 25L(79)
Schwimmbecken(4 products) Chlor 5L(14,95)
Whirlpool(1 products) Chlor 5L(14,95)
BTW: your sample data seems off... you have productID 4 twice in categoryID 1, while productID 5 is not used anywhere...