Indexing PDF documents in Solr with no UniqueKey - lucene

I want to index PDF (and other rich) documents. I am using the DataImportHandler.
Here is how my schema.xml looks:
.........
.........
<field name="title" type="text" indexed="true" stored="true" multiValued="false"/>
<field name="description" type="text" indexed="true" stored="true" multiValued="false"/>
<field name="date_published" type="string" indexed="false" stored="true" multiValued="false"/>
<field name="link" type="string" indexed="true" stored="true" multiValued="false" required="false"/>
<dynamicField name="attr_*" type="textgen" indexed="true" stored="true" multiValued="false"/>
........
........
<uniqueKey>link</uniqueKey>
As you can see, I have set link as the unique key so that documents are not duplicated when the indexing happens again. Now I have the file paths stored in a database, and I have set the DataImportHandler to get a list of all the file paths and index each document. To test it I used the tutorial.pdf file that comes with the example docs in Solr. The problem is, of course, that this PDF document won't have a field 'link'. I am trying to figure out how I can manually set the file path as the link when indexing these documents. I tried the data-config settings below:
<entity name="fileItems" rootEntity="false" dataSource="dbSource" query="select path from file_paths">
<entity name="tika-test" processor="TikaEntityProcessor" url="${fileItems.path}" dataSource="fileSource">
<field column="title" name="title" meta="true"/>
<field column="Creation-Date" name="date_published" meta="true"/>
<entity name="filePath" dataSource="dbSource" query="SELECT path FROM file_paths as link where path = '${fileItems.path}'">
<field column="link" name="link"/>
</entity>
</entity>
</entity>
where I create a sub-entity that queries for the path name and returns the result in a column titled 'link'. But I still see this error:
WARNING: Error creating document : SolrInputDocument[{date_published=date_published(1.0)={2011-06-23T12:47:45Z}, title=title(1.0)={Solr tutorial}}]
org.apache.solr.common.SolrException: Document is missing mandatory uniqueKey field: link
Is there any way for me to create a field called 'link' for the PDF documents?
This was already asked here before, but the solution provided uses the ExtractingRequestHandler, whereas I want to do it through the DataImportHandler.

Try this:
<entity name="fileItems" rootEntity="false" dataSource="dbSource" query="select path from file_paths">
<field column="path" name="link"/>
<entity name="tika-test" processor="TikaEntityProcessor" url="${fileItems.path}" dataSource="fileSource">
<field column="title" name="title" meta="true"/>
<field column="Creation-Date" name="date_published" meta="true"/>
</entity>
</entity>
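An equivalent alternative (a hedged sketch, not tested against this exact setup) is to drop the extra SQL sub-entity and let the Tika entity populate link itself with the DIH TemplateTransformer, which copies the already-resolved ${fileItems.path} variable into a column of its own:
<entity name="fileItems" rootEntity="false" dataSource="dbSource" query="select path from file_paths">
<entity name="tika-test" processor="TikaEntityProcessor" url="${fileItems.path}" dataSource="fileSource" transformer="TemplateTransformer">
<!-- copy the resolved file path into the uniqueKey field -->
<field column="link" template="${fileItems.path}"/>
<field column="title" name="title" meta="true"/>
<field column="Creation-Date" name="date_published" meta="true"/>
</entity>
</entity>
Either way, the file path ends up in the link field, so the uniqueKey requirement is satisfied without changing the extraction itself.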

Related

Indexing PDF documents using Cloudera Search

I've been trying to index PDF documents using Cloudera Search (aka Apache Solr). First I was able to index Twitter tweets. Later I tried to index PDF files. I've created the corresponding collection using solrctl with the default schema. The morphline file that I used is below (I've masked the IP address of zkHost here).
solrLocator : {
# Name of solr collection
#collection : collection1
collection : pdfs
# ZooKeeper ensemble
#zkHost : "127.0.0.1:2181/solr"
zkHost : "xxx.xxx.xxx.xxx:2181,xxx.xxx.xxx.xxx:2181/solr"
# The maximum number of documents to send to Solr per network batch (throughput knob)
# batchSize : 100
}
morphlines : [
{
id : morphlinepdfs
importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
commands : [
{ detectMimeType { includeDefaultMimeTypes : true } }
{
solrCell {
solrLocator : ${solrLocator}
captureAttr : true
lowernames : true
capture : [id, title, author, content, content_type, subject, description, keywords, category, resourcename, url, last_modified, links]
parsers : [ { parser : org.apache.tika.parser.pdf.PDFParser } ]
}
}
{ generateUUID { field : id } }
{ sanitizeUnknownSolrFields { solrLocator : ${solrLocator} } }
{ loadSolr: { solrLocator : ${solrLocator} } }
]
}
]
The PDF metadata fields are present in the schema.xml file, such as...
<field name="title" type="text_general" indexed="true" stored="true" multiValued="true"/>
<field name="subject" type="text_general" indexed="true" stored="true"/>
<field name="description" type="text_general" indexed="true" stored="true"/>
<field name="comments" type="text_general" indexed="true" stored="true"/>
<field name="author" type="text_general" indexed="true" stored="true"/>
<field name="keywords" type="text_general" indexed="true" stored="true"/>
<field name="category" type="text_general" indexed="true" stored="true"/>
<field name="resourcename" type="text_general" indexed="true" stored="true"/>
<field name="url" type="text_general" indexed="true" stored="true"/>
<field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="last_modified" type="date" indexed="true" stored="true"/>
<field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
But in the Solr /select query output, I'm getting only the content and content_type fields. How can I get all the metadata in the Solr frontend query? Do I need to modify schema.xml or the corresponding morphline file? Also, can I index the fields inside the PDF content?
The command I used to index the PDF files is:
hadoop --config /etc/hadoop/conf.cloudera.yarn jar /usr/lib/solr/contrib/mr/search-mr-1.0.0-cdh5.8.2-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx500m' --log4j /usr/share/doc/search-1.0.0+cdh5.8.2+0/examples/solr-nrt/log4j.properties --morphline-file /usr/share/doc/search-1.0.0+cdh5.8.2+0/examples/solr-nrt/test-morphlines/solrPDF.conf --output-dir hdfs://xxxxxx:8020/user/root/outdir --verbose --go-live --zk-host xxxxx:2181/solr --collection pdfs hdfs://xxxxxx:8020/user/root/indir
Thanks in advance.
I've found the problem. In fact, the PDF file that I was using doesn't have any metadata. I tried with other PDF files and got the results. Hope this helps others.
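A quick way to check whether a given PDF actually carries metadata (a hedged suggestion; the jar version and file name below are placeholders for whatever you have locally) is to dump it with the Apache Tika command-line app:
# print only the metadata Tika extracts from the file
java -jar tika-app-1.14.jar --metadata sample.pdf
If this prints little beyond the content type, solrCell has no metadata to capture, which matches the behaviour described above.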

solr 6.2.1 uniqueKey does not work

I installed Solr 6.2.1 and defined a uniqueKey in the schema:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Solr managed schema - automatically generated - DO NOT EDIT -->
<schema name="ps_product" version="1.5">
<fieldType name="int" class="solr.TrieIntField" positionIncrementGap="0" precisionStep="0"/>
<fieldType name="long" class="solr.TrieLongField" positionIncrementGap="0" precisionStep="0"/>
<fieldType name="string" class="solr.TextField" omitNorms="true" sortMissingLast="true"/>
<fieldType name="uuid" class="solr.UUIDField" indexed="true"/>
<field name="_version_" type="long" multiValued="false" indexed="true" stored="true"/>
<field name="id_product" type="uuid" default="NEW" indexed="true" stored="true"/>
<uniqueKey>id_product</uniqueKey>
<field name="name" type="string" indexed="true" stored="true"/>
<field name="title" type="string" multiValued="false" indexed="true" required="true" stored="true"/>
</schema>
and my data-config looks like below:
<dataConfig>
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/pressdb-local" user="sa" password="" />
<document>
<entity name="item" query="select * from ps_product as p inner join ps_product_lang as pl on pl.id_product=p.id_product where pl.id_lang=2"
deltaQuery="select id from ps_product where date_upd > '${dataimporter.last_index_time}'">
<field name="name" column="name"/>
<field name="id_product" column="id_product"/>
<entity name="comment"
query="select title from ps_product_comment where id_product='${item.id_product}'"
deltaQuery="select id_product_comment from ps_product_comment where date_add > '${dataimporter.last_index_time}'"
parentDeltaQuery="select id_product from ps_product where id_product=${comment.id_product}">
<field name="title" column="title" />
</entity>
</entity>
</document>
</dataConfig>
but when I want to define a core in Solr, it gives me this error:
Error CREATEing SolrCore 'product': Unable to create core [product] Caused by: QueryElevationComponent requires the schema to have a uniqueKeyField.
Please help me to solve this problem.
Since Solr 4, and to support SolrCloud, the uniqueKey field can no longer be populated using default=..., so you should remove it from the field definition in schema.xml:
<field name="id_product" type="uuid" indexed="true" stored="true"/>
Update: As pointed out by MatsLindh, it seems you are using Solr in schemaless mode. Schema updates in this mode must be done via the Schema API; you should not edit the managed schema (<!-- Solr managed schema - automatically generated - DO NOT EDIT -->). To define the id_product field and the uniqueKey, use the API or revert to the classic schema mode.
To generate a uniqueKey value for any document being added that does not already have a value in the specified field, you can use UUIDUpdateProcessorFactory (cf. Update Request Processors). You will need to define an update processor chain in solrconfig.xml:
<updateRequestProcessorChain name="uuid">
<processor class="solr.UUIDUpdateProcessorFactory">
<str name="fieldName">id_product</str>
</processor>
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
Then specify the use of the processor chain via the request param update.chain in your request handler definition.
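For example (a minimal sketch, assuming the standard /update handler is in use), the chain can be applied to every update request by declaring it as a default:
<requestHandler name="/update" class="solr.UpdateRequestHandler">
<lst name="defaults">
<str name="update.chain">uuid</str>
</lst>
</requestHandler>
Requests going through this handler will then have id_product filled in with a generated UUID whenever it is missing.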

index all files inside a folder in solr

I am having trouble indexing a folder in Solr.
example-data-config.xml:
<dataConfig>
<dataSource type="BinFileDataSource" />
<document>
<entity name="files"
dataSource="null"
rootEntity="false"
processor="FileListEntityProcessor"
baseDir="C:\Temp\" fileName=".*"
recursive="true"
onError="skip">
<field column="fileAbsolutePath" name="id" />
<field column="fileSize" name="size" />
<field column="fileLastModified" name="lastModified" />
<entity
name="documentImport"
processor="TikaEntityProcessor"
url="${files.fileAbsolutePath}"
format="text">
<field column="file" name="fileName"/>
<field column="Author" name="author" meta="true"/>
<field column="text" name="text"/>
</entity>
</entity>
</document>
</dataConfig>
Then I create the schema.xml:
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<field name="fileName" type="string" indexed="true" stored="true" />
<field name="author" type="string" indexed="true" stored="true" />
<field name="title" type="string" indexed="true" stored="true" />
<field name="size" type="plong" indexed="true" stored="true" />
<field name="lastModified" type="pdate" indexed="true" stored="true" />
<field name="text" type="text_general" indexed="true" stored="true" multiValued="true"/>
Finally, I modify the solrconfig.xml file, adding the request handler and the dataImportHandler and dataImportHandler-extras jars:
<requestHandler name="/dataimport" class="solr.DataImportHandler">
<lst name="defaults">
<str name="config">example-data-config.xml</str>
</lst>
</requestHandler>
I run it and the result is:
Inside that folder there are about 20,000 files in different formats (.py, .java, .wsdl, etc.).
Any suggestion will be appreciated. Thanks :)
Check your Solr logs. The answer to what the root cause is will definitely be there. I also faced the same situation once and found through the Solr logs that my DataImportHandler was throwing exceptions because of encrypted documents present in the folder. Your reasons may be different, but first analyze your Solr logs, execute your entity again in the DataImport section, and then check the immediate logs for errors by going to the Logging section of the admin page. If you are getting errors other than what I mentioned, post them here so they can be understood and deciphered.
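One quick way to surface those errors (a hedged example; adjust host, port, and core name to your own setup) is to re-run the import with DIH's debug and verbose parameters, which echo per-document problems directly in the response:
http://localhost:8983/solr/yourcore/dataimport?command=full-import&clean=false&debug=true&verbose=true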

Solr: how to query a particular entity when there are multiple

I am starting to learn Solr (using version 5.5.0). I am using the managed-schema and data-config.xml files to index two SQL Server tables: Company and Contact.
I am able to execute the data import from the UI, selecting one entity at a time.
This is the message I get for Company:
Indexing completed. Added/Updated: 8,293 documents. Deleted 0 documents. (Duration: 01s)
Requests: 1 (1/s), Fetched: 8,293 (8,293/s), Skipped: 0, Processed: 8,293 (8,293/s) Started: less than a minute ago
This is the message I get for Contact:
Indexing completed. Added/Updated: 81 documents. Deleted 0 documents.
Requests: 1, Fetched: 81, Skipped: 0, Processed: 81
Started: less than a minute ago
When I click the Query section, I want to perform a query to see all the Contact and/or Company records, not necessarily combined, but just to be able to query them.
I am not sure how to do this; is it possible to get some help understanding how to specify against which entity I want to execute the query?
Here are the 2 files I modified:
data-config.xml:
<dataConfig>
<dataSource type="JdbcDataSource"
driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://sql.server.com\test;databaseName=test"
user="testusr"
password="testpwd"/>
<document>
<entity name="Company" pk="CompanyID" query="SELECT * FROM tblCompany">
<field column="CompanyID" name="company_companyid"/>
<field column="Name" name="company_name"/>
<field column="Website" name="company_website"/>
<field column="Description" name="company_description"/>
<field column="NumberOfEmployees" name="company_numberofemployees"/>
<field column="AnnualRevenue" name="company_annualrevenue"/>
<field column="YearFounded" name="company_yearfounded"/>
</entity>
<entity name="Contact" pk="ContactID" query="SELECT * FROM tblContact">
<field column="ContactID" name="contact_contactid"/>
<field column="FirstName" name="contact_firstname"/>
<field column="MiddleInitial" name="contact_middleinitial"/>
<field column="LastName" name="contact_lastname"/>
<field column="Email" name="contact_email"/>
<field column="Description" name="contact_description"/>
</entity>
</document>
</dataConfig>
managed-schema:
<!-- Company Begin -->
<field name="company_companyid" type="string" indexed="true"/>
<field name="company_name" type="string" indexed="true"/>
<field name="company_website" type="string" indexed="true"/>
<field name="company_description" type="string" indexed="true"/>
<field name="company_numberofemployees" type="string" indexed="true"/>
<field name="company_annualrevenue" type="string" indexed="true"/>
<field name="company_yearfounded" type="string" indexed="true"/>
<!-- Company End -->
<!-- Contact Begin -->
<field name="contact_contactid" type="string" indexed="true" />
<field name="contact_firstname" type="string" indexed="true"/>
<field name="contact_middleinitial" type="string" indexed="true"/>
<field name="contact_lastname" type="string" indexed="true"/>
<field name="contact_email" type="string" indexed="true"/>
<!-- Contact End -->
UPDATE
I tried using the fl parameter to select company_companyid, but I did not get any results.
I am including a screenshot:
To get fields as needed from a document, use fl. For example, if you were using SolrJ, you would have something like query.set("fl", "fieldA, fieldB").
In a URL, it looks like this: http://host:port/solr/coreName/select?q=*%3A*&fl=fieldA,fieldB&wt=json&indent=true
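Since both entities end up as plain documents in the same index, one way to see only the documents that came from a given entity (a sketch based on the field names above, not something the import sets up for you) is to filter on a field that only that entity populates:
http://host:port/solr/coreName/select?q=*%3A*&fq=company_companyid:[*%20TO%20*]&wt=json&indent=true
http://host:port/solr/coreName/select?q=*%3A*&fq=contact_contactid:[*%20TO%20*]&wt=json&indent=true
Alternatively, adding an explicit type field to each entity in data-config.xml would make this kind of filtering more obvious.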

SolrException: Document [null] missing required field: id

I have changed the schema.xml file and added a few fields to it, like this:
<field name="url" type="string" indexed="true" stored="true" />
<field name="content_type" type="text" indexed="true" stored="true" />
<field name="title" type="text" indexed="true" stored="true" />
<field name="keywords" type="text" indexed="true" stored="true" multiValued="true" />
<field name="text" type="text" indexed="true" stored="true" />
<field name="timestamp" type="text" indexed="true" stored="true" />
<field name="public" type="text" indexed="true" stored="true" multiValued="true" />
<field name="groups" type="text" indexed="true" stored="true" multiValued="true" />
<field name="sitename" type="text" indexed="true" stored="true" />
<field name="context" type="text" indexed="true" stored="true" />
<field name="modified_date" type="text" indexed="true" stored="true" />
Corresponding to these fields, I have created one XML file and added some dummy data to it, like this:
<add><doc>
<field name="url">http://www.host.com/</field>
<field name="content_type">text/html</field>
<field name="title">Testing Data</field>
<field name="keywords">software</field>
<field name="keywords">software_cycle</field>
<field name="text">search</field>
<field name="timestamp">2006-02-13T15:26:37Z</field>
<field name="public">Optimized</field>
<field name="public">Optimized_data</field>
<field name="groups">Standards</field>
<field name="groups">Standards_data</field>
<field name="sitename">GoInfo</field>
<field name="context">Scalability</field>
<field name="modified_date">2010-11-13T15:26:37Z</field>
</doc></add>
And when I tried to reindex the data into Solr like this:
C:\apache-solr-3.2.0\example\exampledocs>java -Durl=http://localhost:7788/solr/update -jar post.jar *.xml
SimplePostTool: version 1.3
SimplePostTool: POSTing files to http://localhost:7788/solr/update..
SimplePostTool: POSTing file 30-example.xml
SimplePostTool: POSTing file hd.xml
SimplePostTool: POSTing file other.xml
SimplePostTool: FATAL: Solr returned an error #400 Bad Request
I always get an error after the text.xml file, and if I remove this text.xml file then I don't get any error. This is the error I am getting if I include the text.xml file. Any help will be appreciated.
SEVERE: org.apache.solr.common.SolrException: Document [null] missing required field: id
at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:336)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:60)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:147)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:67)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:864)
at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1665)
at java.lang.Thread.run(Thread.java:662)
You say you added a few fields (presumably to the sample schema), but you don't mention what happened to the fields that were already there. I'm guessing you left the pre-existing fields in place, which means that id is still a required field (see here in the sample schema), hence the error you see.
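A trimmed sketch of what each posted document needs (assuming the sample schema's required id field and <uniqueKey>id</uniqueKey> are still in place; using the url value as the id is just an illustration, any unique string works):
<add><doc>
<field name="id">http://www.host.com/</field>
<field name="url">http://www.host.com/</field>
<field name="content_type">text/html</field>
<field name="title">Testing Data</field>
</doc></add>
With an id present in every <doc>, post.jar should no longer return the 400 error for this file.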
Make a "Primary Key" id. It is really required.