Apache Nutch: LinkContent inlink and fromUrl

I am using Apache Nutch to crawl some websites up to 6 levels deep. I am dumping the link content to my current working directory. The link content contains data in the following format:
www.abc.com/help Inlink:
fromUrl: www.abc.com anchor: Help
fromUrl: www.xyz.com anchor: abc help
My question with respect to Nutch is: if Nutch is able to generate the above data, shouldn't the same link content file contain www.abc.com and its Inlink: information (and similarly for www.xyz.com)? Considering it has information about www.abc.com/help, it must have analyzed www.abc.com and www.xyz.com. However, in some cases I don't find the fromUrls having their own inlink information. Why would this be? Am I missing something here?

By default, Nutch adds outlinks to the linkdb only for different domains, to reduce the size of the link database. In order to populate all inlinks, both db.ignore.internal.links and linkdb.ignore.external.links have to be set to false, either in nutch-default.xml or overridden in nutch-site.xml.
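For example, the override in nutch-site.xml might look like this (a minimal sketch using the two property names mentioned above; check the nutch-default.xml of your Nutch version for the exact names it supports):

<property>
  <name>db.ignore.internal.links</name>
  <value>false</value>
</property>
<property>
  <name>linkdb.ignore.external.links</name>
  <value>false</value>
</property>

After changing these properties you may need to re-run the invertlinks step so the linkdb is rebuilt with the new settings.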

Downloading all full-text articles in PMC and PubMed databases

According to one of the questions answered by the NCBI Help Desk, we cannot "bulk-download" PubMed Central. However, can I use the NCBI E-utilities to download all full-text papers in the PMC database using Efetch, or at least find all corresponding PMCIDs using Esearch in the Entrez Programming Utilities? If yes, then how? If the E-utilities cannot be used, is there any other way to download all full-text articles?
First of all, before you go downloading files in bulk, I highly recommend you read the E-utilities usage guidelines.
If you want full-text articles, you're going to want to limit your search to open access files. I also suggest restricting your search to Medline articles if you want articles that are any good. Then you can do the search.
Using Biopython, this gives us:
from Bio import Entrez
Entrez.email = "your.name@example.org"  # NCBI asks for a contact email on E-utilities calls
search_query = 'medline[sb] AND "open access"[filter]'
# getting search results for the query
search_results = Entrez.read(Entrez.esearch(db="pmc", term=search_query, retmax=10, usehistory="y"))
You can use the search function on the PMC website and it will display the generated query that you can copy/paste into your code.
Now that you've done the search, you can actually download the files:
handle = Entrez.efetch(db="pmc", rettype="full", retmode="xml", retstart=0, retmax=int(search_results["Count"]), webenv=search_results["WebEnv"], query_key=search_results["QueryKey"])
You might want to download in batches by replacing retstart and retmax with variables in a loop, in order to avoid flooding the servers.
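For example, a minimal batching sketch (the batch size, output filenames, and sleep interval are my own assumptions, not part of the original answer):
import time

batch_size = 100
count = int(search_results["Count"])
for retstart in range(0, count, batch_size):
    handle = Entrez.efetch(db="pmc", rettype="full", retmode="xml",
                           retstart=retstart, retmax=batch_size,
                           webenv=search_results["WebEnv"],
                           query_key=search_results["QueryKey"])
    # each response is one XML document holding up to batch_size articles
    with open("pmc_batch_%d.xml" % retstart, "w", encoding="utf-8") as out:
        out.write(handle.read())
    handle.close()
    time.sleep(1)  # stay well under the E-utilities rate limits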
If handle contains only one file, handle.read() contains the whole XML file as a string. If it contains more, the articles are contained in <article></article> nodes.
The full text is only available as XML, and the default parser doesn't handle XML namespaces, so you're going to be on your own with ElementTree (or another parser) to parse your XML.
Here, the articles are found thanks to the internal history of the E-utilities, which is accessed with the webenv argument and enabled by the usehistory="y" argument in Entrez.esearch().
A few tips about XML parsing with ElementTree: you can't delete a grandchild node directly (Element.remove() only works on direct children), so you're probably going to want to delete nodes recursively. node.text returns the text in node, but only up to the first child, so you'll need something along the lines of "".join(node.itertext()) if you want all the text in a given node.
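For example (a small sketch, assuming handle holds an unread efetch response containing several articles):
import xml.etree.ElementTree as ET

root = ET.fromstring(handle.read())
for article in root.iter("article"):
    # itertext() walks the entire subtree, not just the text before the first child
    full_text = "".join(article.itertext())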
According to one of the answered questions by NCBI Help Desk, we cannot "bulk-download" PubMed Central.
Downloading the MEDLINE data (https://www.nlm.nih.gov/bsd/medline.html) plus the PMC Open Access Subset (https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/) will get you a good portion of it (I don't know the percentage). It will indeed miss the PMC full-text articles whose license doesn't allow redistribution, as explained at https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/.

Alfresco FTS - why does the first digit of a folder's name have to be escaped?

I have a question regarding Alfresco FTS/Lucene search. It is known that some special characters in the search query have to be escaped, like the space (by _x0020_).
But it turns out that if the first character of a folder's name is a digit, it also has to be escaped. This is easily seen in the Node Browser: create a folder named, say, 123456, and navigate to its parent folder (in my case the folder structure is */2017/123456/):
Primary Path: /app:company_home/st:sites/<some-folders>/cm:_x0032_017/cm:_x0031_23456
(here _x0032_ encodes the leading "2" of 2017, and _x0031_ the leading "1" of 123456)
If I don't escape the first character of the folder name, I get a 500 error.
Why is that? I tried to find something relevant in the Alfresco documentation, but didn't manage to.
Alfresco v.4.2.0
Alfresco's Lucene search uses ISO 9075 encoding (from the SQL standard) for path elements, as similar frameworks do: each path element is an XML qualified name, and since an XML name may not begin with a digit, a leading digit has to be encoded as _xHHHH_. It would be nice if the API hid this requirement, the way browser URLs do, but you can use ISO9075Encode to do the job.
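For illustration, here is a rough Python sketch of this kind of encoding (my own approximation of the scheme, not Alfresco's actual implementation; it only handles leading digits and characters invalid elsewhere in a name):
import re

def iso9075_encode(name):
    # encode one character as _xHHHH_ (lowercase hex, zero-padded to 4 digits)
    escape = lambda ch: "_x%04x_" % ord(ch)
    if not name:
        return name
    # an XML name may not begin with a digit, so a leading digit is escaped
    head = escape(name[0]) if name[0].isdigit() else name[0]
    tail = "".join(ch if re.match(r"[A-Za-z0-9_.\-]", ch) else escape(ch)
                   for ch in name[1:])
    return head + tail

print(iso9075_encode("2017"))       # _x0032_017
print(iso9075_encode("123456"))     # _x0031_23456
print(iso9075_encode("my folder"))  # my_x0020_folder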

Drupal 7 Apache Solr faceted search with OR condition on two fields instead of drill-down/AND

I have a Drupal 7 website that is running apachesolr search and is using faceting through the facetapi module.
When I use the facets to narrow my searches, everything works perfectly and I can see the filters being added to the search URL, so I can copy them as links (ready-made narrowed searches) elsewhere on the site.
Here is an example of how the apachesolr URL looks after I select several facets/filters:
search_url/search_keyword?f[0]=im_field_tag_term1%3A1&f[1]=im_field_tag_term2%3A100
Where the 'search_keyword' portion is the text I'm searching for and '%3A' is just the URL-encoded ':' (colon).
Knowing this format, I can create any number of ready-made searches by creating the correct format for the URL. Perfect!
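For instance, such a link can be built programmatically (a small sketch; the base URL and facet values are just the placeholders from the example above):
from urllib.parse import urlencode

params = [
    ("f[0]", "im_field_tag_term1:1"),
    ("f[1]", "im_field_tag_term2:100"),
]
# urlencode escapes ':' as %3A (and '[' ']' as %5B %5D, which Drupal also accepts)
print("search_url/search_keyword?" + urlencode(params))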
However, these filters are always ANDed, the same way they are when using the facet interface. Does anyone know if there is a syntax I can use, specifically in the search URL, to OR my filters/facets? Meaning, to make the result contain all entries that match EITHER of the two filters?
New edit:
I do know how to OR terms within one facet through the URL, im_field_tag_term1:(x OR y), but I need to know how to apply an OR condition between two facets.
Thanks in advance.

SharePoint crawl rule to exclude AllItems.aspx, but get an item/document in search results if queried in the search box

I followed this blog (Tips 1) and created a crawl rule http://.*forms/allitems.aspx and ran a full crawl. I no longer get the results with AllItems.aspx. However, if there is any document with a name like Something.doc in a Document Library, it no longer gets pulled into the search results.
I think what I desire is basic functionality: the user should not see AllItems.aspx in the search results, but should get the items/documents whose names they enter in the search box.
Please let me know if I am missing anything. I have already put in 24 hours... googled as much as I could.
It seems that an Index Reset is required. Here are the steps I took:
1. Add the following crawl rule to exclude: *://*allitems.aspx.
2. Index Reset.
3. Full Crawl.
I could not find a good way to do this using crawl rules. Instead, I opted to set up a restriction on the search results web part.
In the search results web part properties, select "Change Query"
Add a property filter to exclude anything with "AllItems" (and any other exclusions you want in place).
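The resulting query text might look something like this (a sketch; whether Title is the right managed property to filter on depends on your environment):
{searchTerms} -Title:AllItems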
I used Steve Mann's blog as a reference and for the images: http://stevemannspath.blogspot.com/2013/04/sharepoint-2013-search-removing-junk.html

Search only particular index

I am using Apache Solr for my search; with it I am indexing a variety of resources such as PDF and MS Word documents.
Let's say the user gives a query like "PDF: java"; then I want to search only the PDF files.
Any ideas?
Thanks
Dilip.
Well, like I commented: set up a filetype field (a string) in your schema and set it when you upload the file.
http://localhost:8983/solr/update/extract?literal.id=123&literal.filetype=pdf
and when you search
http://localhost:8983/solr/select?q=text:electrical design AND filetype:pdf
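As a rough end-to-end sketch in Python (assuming a local single-core Solr with the extract handler enabled and a filetype string field in the schema; I've expressed the AND filetype:pdf part as an fq filter query, which is the more idiomatic Solr form):
import requests

# index a PDF, attaching filetype=pdf as a literal field alongside the extracted text
with open("manual.pdf", "rb") as f:
    requests.post("http://localhost:8983/solr/update/extract",
                  params={"literal.id": "123", "literal.filetype": "pdf", "commit": "true"},
                  files={"file": f})

# search, restricted to PDFs via a filter query
resp = requests.get("http://localhost:8983/solr/select",
                    params={"q": "text:electrical design", "fq": "filetype:pdf", "wt": "json"})
print(resp.json()["response"]["numFound"])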
Quick hack: if your documents are identified by filename, you can tell Solr to limit results to ids ending in .pdf:
http://localhost:8983/solr/select?q=text:electrical design AND id:*.pdf