How do you implement search over static content within cshtml files - asp.net-core

I am using ASP.NET Core and Razor. As this is a help system, I would like to implement some kind of search facility that brings back a list of hyperlinked results based on the search terms.
I would like the search to essentially iterate over the content contained within the pages' HTML tags and then link this to the appropriate page/view.
What is the best way to do this?
I'm not even sure how you get a handle on the actual content of your own cshtml pages and then go from there.

This question is far too broad. However, I can provide you with some pointers.
First, you need to determine what you actually want to surface and where that data lives. Your question says "static content", but then you mention .cshtml. Traditionally, when it comes to creating your own search, you have access to some particular dataset (tables in a database, for example). It's much simpler to search across that structured data than across the end result of it being dumped in various and sundry places across a web page.
Search engines like Google only index in this way because they typically don't have access to the raw data (although some amount of "access" can be granted via things like JSON-LD and other forms of Schema.org markup). In other words, they actually read from the web page out of necessity, because that's what they have to work with. It's certainly not the approach you would take if you have access to the data directly.
If for some reason you need to actually spider and index your own site's HTML content, then you'll essentially have to do what the big boys do: create a bot, run it on a schedule, have it crawl your site link by link, download each document, and then parse and process it. The end result is a set of structured data you can actually query against, which is why all this is pretty much wasted effort if you already have that data.
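If it comes to that, the crawl step itself is not much code. Here is a minimal sketch of the download-and-parse part, assuming the HtmlAgilityPack NuGet package; the URL and the choice of keeping title/body text are purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using HtmlAgilityPack;

// One crawl step: fetch a page, strip it down to indexable text, and collect
// links for the crawl frontier. A real bot would also respect robots.txt,
// dedupe URLs, and run on a schedule. The URL below is a placeholder.
var http = new HttpClient();
var html = await http.GetStringAsync("https://example.com/help/intro");

var doc = new HtmlDocument();
doc.LoadHtml(html);

// Remove script/style nodes so only visible text remains.
foreach (var node in doc.DocumentNode.SelectNodes("//script|//style") ?? Enumerable.Empty<HtmlNode>())
    node.Remove();

var title = doc.DocumentNode.SelectSingleNode("//title")?.InnerText ?? "";
var text  = HtmlEntity.DeEntitize(doc.DocumentNode.SelectSingleNode("//body")?.InnerText ?? "");

// Links feed the next round of crawling.
var links = doc.DocumentNode.SelectNodes("//a[@href]")
                ?.Select(a => a.GetAttributeValue("href", ""))
                .ToList() ?? new List<string>();
```

The output of each step (URL, title, text) is exactly the kind of structured record you would then store and query.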
Once you have the data, however you got it, you simply query it. In the most basic of forms, you could store it in a table in a database and literally issue SQL queries against it. Your search keywords/parameters essentially become the WHERE clause of the SELECT statement, so you'd have to figure out a way to map the keywords/parameters you're receiving to an acceptable WHERE clause.
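For illustration only, here is one way that mapping could look, assuming you've dumped each page into a Pages(Url, Title, Body) table (all names made up), with Microsoft.Data.Sqlite standing in for whatever database you actually use:

```csharp
using System.Collections.Generic;
using Microsoft.Data.Sqlite;

// Turn search keywords into a WHERE clause: one parameterized LIKE per term,
// ANDed together. Pages/Url/Title/Body are assumed names, not a real schema.
static IEnumerable<string> Search(SqliteConnection conn, string[] terms)
{
    if (terms.Length == 0) yield break;

    var cmd = conn.CreateCommand();
    var clauses = new List<string>();
    for (int i = 0; i < terms.Length; i++)
    {
        clauses.Add($"(Title LIKE @t{i} OR Body LIKE @t{i})");
        cmd.Parameters.AddWithValue($"@t{i}", "%" + terms[i] + "%");
    }
    cmd.CommandText = "SELECT Url FROM Pages WHERE " + string.Join(" AND ", clauses);

    using var reader = cmd.ExecuteReader();
    while (reader.Read())
        yield return reader.GetString(0);
}
```

Parameterizing each term keeps the keyword-to-SQL mapping safe from injection; ranking and stemming are where this approach runs out of road, which is the cue for a real search engine.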
More traditionally, you'd use an actual search engine: essentially a document database that is designed and optimized for search, and that generally provides a more search-appropriate API to query against. There are lots of options in this space, from roll-your-own to hosted SaaS solutions and everything in between. Of course, the monetary cost goes down the more work you do yourself and goes up the more it works out of the box.
One popular open-source and largely free option is Elasticsearch. It uses Lucene indexes, which it stitches together in a clustered environment to provide failover and scale. Deployment is a beast, to say the least, though it's gotten considerably better with things like containerization and orchestration. You can stand up an Elasticsearch cluster in something like Kubernetes with relative ease, though you will still probably need to do a bit of config. Elasticsearch also has hosted options, but you know, cost.
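To give a feel for the query side, here is a sketch using NEST, a widely used .NET client for Elasticsearch; the index name and the Page document shape are assumptions:

```csharp
using System;
using Nest;

// Query an Elasticsearch index of pages via NEST. The "help-pages" index and
// the Page document shape are illustrative; indexing happens elsewhere.
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DefaultIndex("help-pages");
var client = new ElasticClient(settings);

var response = client.Search<Page>(s => s
    .Query(q => q
        .MultiMatch(m => m
            .Fields(f => f.Field(p => p.Title).Field(p => p.Content))
            .Query("user search terms"))));

foreach (var page in response.Documents)
    Console.WriteLine(page.Url);

public record Page(string Url, string Title, string Content);
```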

Related

Picking the right database technique for file storage and search

For a personal project I am searching for the "most suitable" database engine to address the following key issues:
need to store large numbers of individual document files (PDFs)
need to perform full-text search over the PDFs (for this I plan to use OCR and save the processed data/metadata to the database as well)
need to get pieces/chunks of the saved documents (for example, from a specific year) and show a preview of lots of them within a nice web UI
as much performance as possible
Up to now I have worked a lot with SQL (MySQL) and have some theoretical knowledge of other systems (Memcached, Redis, PostgreSQL, MongoDB). But I've never used them in combination and never figured out WHEN they should be used for WHAT exactly, or how they can be combined.
I think especially for a project like this it's very important to select the right engine from the beginning, so as not to hit performance issues later.
So, especially to all the experienced developers out there: what would be your favourite choice for this kind of project (I guess SQL may not be the only right solution)?
Or, in the end, will it be better to store the files in the filesystem and keep only metadata in the database?
BTW my planned API backend for this is Laravel 7+, frontend will be Vue 2+.
Thank you very much!

How does the semantic web actually work?

I know that on the web you have lots and lots of pages linked to one another, and you can go from page to page and so on.
How does the semantic web work? I understand that it uses the concept of Linked Data, where the data itself is identified and linked by URIs or IRIs, not the web pages themselves. But I don't understand how the data is linked across the web when all of it is stored in local triplestores and linked internally there. Are browsers capable of going from triplestore to triplestore behind the scenes and getting back all kinds of data? Or how is the data actually linked? Is there a mechanism to go from data to data all across the web and use the meaning of data in real life situations, or a tool that does something like this?
Also anybody can create ontologies and define and describe anything in all kinds of different ways. Won't this lead to a big mess of data?
So, main question:
How do the semantic web and linked data actually work?
It's a tricky and multifaceted question.
First I'll answer some of your questions.
But I don't understand how the data is linked across the web when all of it is stored in local triplestores and linked internally there
First of all, it is important to realize that triplestores are not a necessity. You could have SQL servers with a D2RQ/R2RML mapping on top to translate queries dynamically. Or plain RDF files. Or simple JSON documents in MongoDB, etc., which you extend by adding a JSON-LD @context.
What is important is that you serve the data in one of the RDF formats, such as Turtle or JSON-LD.
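In practice this is mostly just content negotiation. A sketch (here as an ASP.NET Core minimal API, with a made-up route and vocabulary) of serving JSON-LD with its @context:

```csharp
// Serving data in an RDF format is largely a matter of returning the right
// body with the right content type. The route, IRIs, and the hard-coded
// document are all illustrative.
var app = WebApplication.Create(args);

app.MapGet("/people/alice", () =>
{
    const string doc = """
    {
      "@context": {
        "name":  "http://schema.org/name",
        "knows": { "@id": "http://schema.org/knows", "@type": "@id" }
      },
      "@id": "https://example.org/people/alice",
      "name": "Alice",
      "knows": "https://example.org/people/bob"
    }
    """;
    return Results.Text(doc, "application/ld+json");
});

app.Run();
```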
Are browsers capable of going from triplestore to triplestore behind the scenes and getting back all kinds of data?
See, they don't have to, because, as you mention, URIs are used so that a browser (not necessarily a web browser) can download the data. And of course these URIs are URLs and are dereferenceable; otherwise they are just identifiers.
Or how is the data actually linked?
It is linked simply by reusing identifiers for objects and properties. That's why URIs (IRIs) are used: so that the identifiers are globally unique yet can be created privately within a domain. Of course there is a risk of someone being mischievous and creating URIs in someone else's domain, but that's a separate topic.
Is there a mechanism to go from data to data all across the web and use the meaning of data in real life situations, or a tool that does something like this?
One simple mechanism is to crawl RDF data and download it to a local store. The mere occurrence of matching identifiers combines the data into a larger dataset with less mapping effort required. That is the theory, of course: the data you gather can be corrupt, incorrect, or duplicated, so you need some curation, but technology exists to help with that, and it's nothing you wouldn't also encounter in traditional data warehousing. Search engines harvest semantic markup from HTML pages (RDFa/Microdata) in a similar manner.
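In miniature, the crawl-and-merge step looks like this (shown with the dotNetRDF library; the file names and example IRI are placeholders):

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Parsing;

// Load two independently published RDF documents and merge them. Triples
// about the same IRI line up with no mapping step, which is the payoff of
// globally unique identifiers. File names and the IRI are placeholders.
IGraph combined = new Graph();
FileLoader.Load(combined, "dataset-a.ttl");

IGraph other = new Graph();
FileLoader.Load(other, "dataset-b.ttl");

combined.Merge(other);

// Everything now known about one subject, from either source:
var alice = combined.CreateUriNode(UriFactory.Create("https://example.org/people/alice"));
foreach (Triple t in combined.GetTriplesWithSubject(alice))
    Console.WriteLine(t);
```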
Another option is to use federated queries. SPARQL has the ability to automatically download RDF data and perform queries over it in memory.
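For example (dotNetRDF again; the endpoints are real public ones, but the query is just an illustration), a query sent to DBpedia that pulls part of its answer from Wikidata via the SERVICE clause:

```csharp
using System;
using VDS.RDF.Query;

// A federated query: the receiving endpoint evaluates the SERVICE block
// against a second endpoint at query time.
var endpoint = new SparqlRemoteEndpoint(new Uri("https://dbpedia.org/sparql"));

const string query = @"
    PREFIX wd:   <http://www.wikidata.org/entity/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      SERVICE <https://query.wikidata.org/sparql> {
        wd:Q42 rdfs:label ?label .
        FILTER(LANG(?label) = 'en')
      }
    }";

SparqlResultSet results = endpoint.QueryWithResultSet(query);
foreach (SparqlResult result in results)
    Console.WriteLine(result["label"]);
```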
Last but not least, there are federated queries using Triple Pattern Fragments.
Now, about the Semantic Web.
As I wrote, the question is not that simple. You mostly ask about Linked Data, but there is more to the Semantic Web than that:
ontologies/taxonomies
inferencing
rules
semantic/faceted search
I hope I answered your question to some extent.

Developing a search and tag heavy website

I'm in the planning phase of developing a very tag heavy website. Everything will essentially be associated with tags and the entire site would be based on searching these tags.
Now, I've been thinking a lot about going the nosql route here, since from what I read and understand, it makes the most sense for something like this.
Would it be best to go with this database system? Would it make sense to go with a relational database system? Should I think about incorporating something like Solr?
What would the ideal setup be?
UPDATE:
Ideally they would be user generated, but we all know how that would turn out with giving users that much power. So, let’s change up the requirements and say that users WILL NOT have the power to create tags.
Searching on tags based on text matches is something that would probably be useful and needed. If the tag is “garage sale”, the search for “sale” should also pick this up, at a lower relevance for sure.
I can’t imagine the usage being so much that scaling would be an issue.
Thanks
I would spend a bit of time thinking about these tags. For example, are these tags going to be user generated or will you provide a few tags and let users select which ones they want?
Will you need to search on tags based on text matches? For example if a tag is "garage sale" do you want to search for "sale" to also pick this up? Maybe at a lower relevance?
Also, what kind of usage are you looking at? One good thing about Solr is that it's super easy to scale and synchronize data: it is easy to deploy multiple nodes, shard collections, and replicate data to other nodes, which is something traditional databases struggle with.
Another thing to keep in mind is that most of the time Solr is not the official "repository of record"; usually the data gets fed to it from a DB somewhere, but all reading activity goes through Solr.
See this answer for a SQL solution. Offhand I can't think of any advantage to using most NoSQL databases (i.e. key-value, columnar, or document) as the SQL solution will be more compact and ought to give good performance; a graph database may be appropriate if you're doing a lot of navigational type queries on your tags, but it doesn't sound like that's the case.
Use of Solr (or ElasticSearch or whatever) is orthogonal to your primary database; it may be appropriate to incorporate a search tool if users are typing inexact tags for search, but I recommend integrating a stemming library or something along those lines before turning to a full blown search tool.
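As a sketch of the token-matching behaviour from the question (hand-rolled on made-up data; a real stemming library would slot into Tokenize), an inverted index from tag words to items lets "sale" find items tagged "garage sale":

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Index each word of each tag, so a one-word query matches multi-word tags.
// A fuller version would stem tokens ("sales" -> "sale") and rank exact tag
// matches above partial ones.
var tagsByItem = new Dictionary<string, string[]>
{
    ["item-1"] = new[] { "garage sale", "furniture" },
    ["item-2"] = new[] { "bake sale" },
};

static IEnumerable<string> Tokenize(string tag) =>
    tag.ToLowerInvariant().Split(' ', StringSplitOptions.RemoveEmptyEntries);

var index = new Dictionary<string, HashSet<string>>();
foreach (var (item, tags) in tagsByItem)
    foreach (var word in tags.SelectMany(Tokenize))
    {
        if (!index.TryGetValue(word, out var items))
            index[word] = items = new HashSet<string>();
        items.Add(item);
    }

foreach (var item in index.GetValueOrDefault("sale", new HashSet<string>()))
    Console.WriteLine(item);   // item-1 and item-2
```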

JSON vs classic schema design [duplicate]

The Project
I've been asked to work on an interesting project -- what amounts to a basic Web CMS -- that uses HTML/CSS/jQuery with PHP. However, one requirement is that there won't be a database to house the data (they want flat files for the documents/pages -- preferably in JSON format).
In a very basic sense, it'll be used to generate HTML pages via a very "non-techie" interface. Each installation would only have around 20 pages, but a few may get up to 100. It has to be fairly easy to drop onto a PHP capable server and run, with very little setup needed.
What's Out There
There are tons of CMS options and quite a few flat file versions. But an OSS or other existing CMS is not an option. They need a simple proprietary system.
Initial Thoughts
So flat files it is... but I'd really like to get some feedback on the drawbacks, and if it is worth the effort to try and convince them to use something like MySQL (SQLite or CouchDB are out since none of the servers can be configured to run them at the present time).
Of course the document files are pretty straightforward, but we're also talking about login info for 1 or 2 admins per installation, a few lists, as well as configs/settings (which also can easily be stored in a file with protection).
The Dilemma
If there are benefits to using MySQL rather than JSON-formatted files and some arrays in a simple project like this -- beyond my own pre-conceived notions :) -- I'll be sure to argue them.
But honestly I can't see any that outweigh their need to not have a database system.
I'd appreciate your insight and opinions.
If you can't cite a specific need for relational table design, then you're good with flat files. Build as specified. The moment you can cite a specific need, let them know; upgrading isn't that hard if your perception is timely (that is, if you aren't in the position of having to normalize data that should have been integrated earlier).
It's a shame you can't use CouchDB, this seems like the perfect application for it. Keep in mind that using flat-files severely constrains your architecture and, especially, scalability.
What's the best case scenario for your CMS app? It's successful and people want to use it more? If you're using flat-files it'll be harder to service and improve your system (e.g. make it more robust, and add new features for future versions) and performance will not scale well. So "success" in this case is at best short-lived, as success translates into more and more work for less and less gains in feature-set and performance.
Will this be installed on any shared hosting sites? For this to work somewhat safely, a mechanism like suEXEC needs to be set up properly, as the web server will need write permissions to various directories.
What would be cool with a simple site that was fed via JSON and jQuery is that the site wouldn't need to reload on each click; just the relevant data would change. You could then use hashes in the location bar to keep track of where you were (e.g. http://localhost/#about).
The problem is that if they are editing the raw JSON files, they can mess things up pretty quickly. I think your admin tools would have to generate the JSON files based on the input, so that you can ensure nothing breaks. The admin tools would be more involved than the site itself (though isn't that always the case with dynamic sites?).
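The generation step itself can also guard against corrupt files. A sketch (in C# purely for illustration, since the question's stack is PHP; all names are made up): serialize a typed object and swap the file in atomically, so a crash mid-write never leaves a broken page live:

```csharp
using System.IO;
using System.Text.Json;

// Admin tools write the JSON; humans never do. Serializing a typed object
// guarantees well-formed output, and the temp-file-then-move swap means a
// half-written file can't end up live. PageDoc and the layout are made up.
public record PageDoc(string Slug, string Title, string Html);

public static class PageWriter
{
    public static void Save(PageDoc page, string contentDir)
    {
        var json = JsonSerializer.Serialize(page,
            new JsonSerializerOptions { WriteIndented = true });

        var target = Path.Combine(contentDir, page.Slug + ".json");
        var tmp = target + ".tmp";
        File.WriteAllText(tmp, json);
        File.Move(tmp, target, overwrite: true);   // atomic on the same volume
    }
}
```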
What are the predicted data sizes for the CMS?
A large reason for using an RDBMS is quick, specific access to large amounts of data. Individual records might not be large, but if there is a lot of data, an RDBMS might be better in the long run.
Then again, if the CMS is designed right, switching between flat files and an RDBMS should be as simple as using a different data access file.
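Concretely, "a different data access file" means the rest of the CMS only ever sees one interface. A sketch of that shape (C# for illustration, made-up names, with the PageDoc record redeclared so the sketch stands alone):

```csharp
using System;

// The CMS calls IPageRepository and never cares what's behind it; swapping
// flat files for a database is one line at composition time.
public record PageDoc(string Slug, string Title, string Html);

public interface IPageRepository
{
    PageDoc? Find(string slug);
    void Save(PageDoc page);
}

// One implementation reads/writes *.json files...
public class JsonFilePageRepository : IPageRepository
{
    public PageDoc? Find(string slug) => throw new NotImplementedException();
    public void Save(PageDoc page) => throw new NotImplementedException();
}

// ...the other talks to a database. Same contract, different storage.
public class SqlPageRepository : IPageRepository
{
    public PageDoc? Find(string slug) => throw new NotImplementedException();
    public void Save(PageDoc page) => throw new NotImplementedException();
}

// The swap: IPageRepository repo = new JsonFilePageRepository();
// later:    IPageRepository repo = new SqlPageRepository();
```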
While an RDBMS may be necessary for a very large CMS, a small one could run off flat files very well. A lot of CMS products out there fall down in that regard, I think, by throwing an RDBMS into the mix when there's no real need.
However, if you are using flat files, there are security issues which others have highlighted. Another issue I've come across is hosting providers using the disable_functions directive in php.ini to disable file I/O functions like fopen() and friends. If you're hosting your CMS on a box you control, you won't have this problem but if you're using a third-party provider, check first.
As the original poster, I wasn't signed in, so I'm following up to the answers so far in an answer (sorry if this is bad form).
There may be instances where this is on a shared host.
Though the JSON files can technically be edited, this won't be the case. The admin interface will be robust enough to do all of the creating/editing of pages.
The size for each install will be relatively small: 1-2 admins, 10-100 pages. A few lists of common items may run longer (snippets of copy, for example).
Security will be a big issue. Any other options or suggestions on this specifically?
Well, isn't there a problem with them being distrustful of any database system? Isn't the problem more in their thinking than in the technology? Maybe they are afraid of a database because it sounds complex to them. In that case, if you just present them with some very simple CMS (like CMS Made Simple, which I've heard really is simple and quick to learn), and they see that everything is easy, then maybe they just won't care what's behind it, database or not!
They might also listen to arguments like better maintenance, lower maintenance costs, and a much easier handover to another webmaster than with a proprietary solution (they are not dependent on you), etc.

Wiki Database, is there one?

I was searching the net for something like a wiki database: just like Wikipedia, but storing structured content, editable by users. What I was looking for was an online database accessible by everyone, where people can design the schema and the data, with proper versioning of both. I couldn't find any such site. I am not sure if it is my search skills or if there really is no wiki database as of now. Does anyone out there know anything like this?
I think there is a great potential for something like this. A possible example will be a website with a GUI for querying a MySQL DB where any website visitor can create DB objects and populate data.
UPDATE: I had registered the domain wikidatabase.org to get started on a tool, but I haven't found enough time yet. If anyone is interested in spending some time coding on this, please let me know at wikidatabase.org.
It's not quite what you're looking for, but Semantic Mediawiki adds database-like features to MediaWiki:
http://semantic-mediawiki.org/wiki/Semantic_MediaWiki
It's still fundamentally a Wiki, but you can add semantic tags to pages ([[foo::bar]] [[baz::1000]]) and then do database-type queries across them: SELECT baz FROM pages WHERE foo=bar would be {{#ask: [[foo::bar]] | ?baz}}. There is even an embryonic SPARQL implementation for pseudo-SQL queries.
OK this question is old, but Google led me here, so for anyone else out there looking for a wiki for structured data: Take a look at Foswiki.
This might be like what you're looking for: dbpedia.org. They're working on extracting data from Wikipedia, and encoding it in a structured format using RDF, so that it can be queried using SPARQL.
Linkeddata.org has a big list of RDF data sets.
Do you mean something like http://www.freebase.com?
You should check out https://www.wikidata.org/wiki/Wikidata:Main_Page which is a bit different but still may be of interest.
Something that might come close to your requirements is Google Docs.
What's offered is document editing roughly similar to MS Word, and spreadsheets roughly similar to Excel. I'm thinking of the latter, of course.
In Google Docs, you can create spreadsheets for free; being spreadsheets, they naturally have a row-and-column structure similar to a database, which you can define flexibly. You can also share these sheets with other people. This seems to be a by-invite-only process rather than open-to-all, but there may be other possibilities I'm not aware of, and that level of sharing might be enough for you in any case.
MindTouch should be able to do it. It's rather easy to get data in and out (for example, it's trivial to aggregate all the IPs for servers into one table).
I pretty much use it as a DB in the wiki itself (pages have tables, key/values, inheritance, templates, etc.), but you can also interface with the API, write DekiScript, grab the XML...
I like this idea. I have heard of some sites that are trying to pull together large datasets for various things for open consumption, but none that would allow a wiki feel.
You could start with something as simple as an installation of phpMyAdmin with a known password that would allow people to log in, create a database, edit data and query from any other site on the web.
It might suffer from more accuracy problems than wikipedia though.
OpenRecord, development of which seems to have halted in 2008, seems to approach this. It is a structured wiki in which pages are views on the data. Unlike RDBMSes it is loosely typed: the system tries to make a best guess about what data you entered, but defaults to text when it cannot guess. Schemas appear to be implicit.
http://openrecord.org
An example of the typing is that of a date: if you enter '2008' in a record, the system interprets this as a date. If you enter 'unknown', however, the system allows that as well.
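That best-guess behaviour fits in a few lines. A sketch of the idea (mimicking the description above, not OpenRecord's actual code):

```csharp
using System;

// Try progressively stronger interpretations of raw input; default to text.
static object Interpret(string raw)
{
    if (int.TryParse(raw, out var year) && year is >= 1000 and <= 9999)
        return new DateTime(year, 1, 1);      // "2008" -> a date
    if (DateTime.TryParse(raw, out var when))
        return when;
    if (double.TryParse(raw, out var number))
        return number;
    return raw;                               // "unknown" -> stays text
}

Console.WriteLine(Interpret("2008"));         // a DateTime for 2008
Console.WriteLine(Interpret("unknown"));      // the text "unknown"
```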
Perhaps you might be interested in CouchDB:
Apache CouchDB is a document-oriented database that can be queried and indexed in a MapReduce fashion using JavaScript. CouchDB also offers incremental replication with bi-directional conflict detection and resolution.
I'm working on an Open Source PHP / Symfony / PostgreSQL app that does this.
It allows multiple projects; each project can have multiple directories, and each directory has a defined field structure. Admins set all this up.
Then members of the public can suggest new records, edit or report existing ones. All this is moderated and versioned.
It's early days yet but it basically works and is already in real world use in several projects.
Future plans already in progress include tools to help keep the data up to date, better searching/querying and field types that allow translations of content between languages.
There is more at http://www.directoki.org/
I'm surprised that nobody has mentioned Wikibase yet, which is the software that powers Wikidata.