I am using Snap! to try to find the earliest item in a list. For instance, for the list [3, 1, 2], I would like to report "1". I would like the solution to work for words as well (for instance, given the list [Bob, George, Ari], report "Ari").
I tried to use recursion to solve the problem, and the solution works. However, I cannot find a way to do so recursively without the second "if else" statement. Is there a way to use recursion to solve this problem without the "if 0 = length of..." check?
Play with it here.
I don't see a way to do this without two if...else statements. You need two checks:
Is the list exhausted?
Is the first element less than all the following elements?
In some languages, you can use the conditional ternary operator ?:, but I don't think Snap! supports that. It's really just syntactic sugar for an if...else anyway.
You can do some clean-up on this function, though.
I recommend explicitly handling the case of a zero-length list.
"Earliest" is confusing. I recommend the term "least", since you're checking with the "less than" operator.
Don't call keep items such that [] from [] multiple times. This is inefficient and potentially a bug if someone modifies one line but forgets to modify the other. Instead, save the result in a script variable.
Don't compare the current first element to every element in the list. This gives the function an O(n^2) run time. Instead, compare it only to the least element so far. This reduces the run time to O(n).
Some of these changes are implemented here:
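Since Snap! blocks can't be shown inline here, a rough Java translation of the recommended structure might look like this (a sketch only; the names are invented, and Snap!'s untyped lists are approximated with Comparable):

import java.util.Arrays;
import java.util.List;

class Least {
    // Recursive "least item" with one explicit empty-list check (check 1)
    // and one comparison against the least of the rest (check 2).
    static <T extends Comparable<T>> T least(List<T> items) {
        if (items.isEmpty())                       // check 1: is the list exhausted?
            throw new IllegalArgumentException("least of an empty list is undefined");
        if (items.size() == 1)
            return items.get(0);                   // base case
        // Recurse once and save the result (the "script variable" advice),
        // then compare only against the least so far: O(n) overall.
        T leastOfRest = least(items.subList(1, items.size()));
        T first = items.get(0);
        return first.compareTo(leastOfRest) <= 0 ? first : leastOfRest;
    }
}

With this, least(Arrays.asList(3, 1, 2)) reports 1, and least(Arrays.asList("Bob", "George", "Ari")) reports "Ari".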
I use SQLite with persistent in Haskell.
I have a list of keys, i.e. [PostId].
Now I want to get all the corresponding entries, selected with options like
[Desc PostCrtDate, OffsetBy from, LimitTo (to - from + 1)].
Is there an alternative to selectList that takes a list of keys instead of (or in addition to) the "normal" conditions of a SQL query?
It seems horribly inefficient to use mapM get keyList and then do the sorting/offsetting/limiting myself, especially with a big database.
I am open to using esqueleto if necessary but I would rather not introduce another dependency.
Thanks!
I'm on a mobile right now and therefore may get the syntax wrong, but it's something like:
selectList [PostId <-. idList] []
The <-. operator is the "in" operator, checking whether a value is in a list.
Note that this will not give any errors if some of the keys are not found; you'd need to check for that manually.
I'm using selenium to automate interaction with a table on a webpage.
The table has a few columns, and I'm sorting the data in the table by clicking on the sortable headers (column names).
I've used a switch-case statement such as:
switch (columnName) {
    case "firstname":
        columnHeader = driver.findElement(By.xpath("..."));
        break;
    case "lastname":
        columnHeader = driver.findElement(By.xpath("..."));
        break;
    // ... and so on for each column
}
What could be a better alternative to using switch-case?
Also, I used this since I didn't want to write a separate method for each column.
My suggestion is to go ahead and write a method to sort the table for each column. This prevents a few problems:
1) Consumers don't have to look at the page to remember which columns are available to sort by.
2) Consumers don't have to look at the code of your function to figure out which string to pass to sort by the desired column.
3) You don't have to worry about handling bad strings.
4) Consumers don't have to worry about passing a bad string.
And so on. Many of these problems won't be caught at compile time, which means you will (or may) only find them while the script is running. You want to give preference to finding as many bugs as possible at compile time. It will save you a LOT of time and prevent a LOT of bugs down the road.
Other benefits: consumers can look at the API and see methods like .sortTableByFirstName(), .sortTableByLastName(), etc., so it's obvious what each method does, and it moves bugs to compile time, because you can't call a method that doesn't exist due to a typo.
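As a rough sketch (the locators and names here are placeholders, since the real page structure isn't shown):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class TablePage {
    private final WebDriver driver;

    TablePage(WebDriver driver) {
        this.driver = driver;
    }

    // One tiny private helper, then one obvious public method per column.
    private void clickHeader(By header) {
        driver.findElement(header).click();
    }

    public void sortTableByFirstName() {
        clickHeader(By.xpath("//th[text()='First Name']"));
    }

    public void sortTableByLastName() {
        clickHeader(By.xpath("//th[text()='Last Name']"));
    }
}

A typo like sortTableByFristName() now fails to compile, instead of silently falling through a switch at run time.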
I am using Lucene 3.6.1. I receive a query from a user. This query may contain + or - operators, and may also contain phrases. In certain circumstances, I would like to expand the query by adding some extra terms that I compute. These terms are optional. However, any required include/exclude constraints specified by the user must be respected.
My initial strategy was to create a BooleanQuery, add a clause to it that contains the parsed user query, and then add further clauses that contain my expansion terms. The expansion terms would all be added as Occur.SHOULD. My question is how to constrain the user's query. I can imagine three possibilities:
The user's query contains no operators, which means I can include it as an Occur.SHOULD clause.
The user's query contains a + operator, so I need to include it as an Occur.MUST clause.
The user's query contains a - operator, but also other terms: Do I still include it as an Occur.MUST clause?
The question implicit in these three choices is: how do I tell which condition applies? I suppose I can rewrite the query and test for BooleanQuery instances, but that seems brittle.
I suppose I can also try the tactic of creating a single string from the user's input and from my expansion terms, like this:
(fld1:userterm1 userterm2 -fld2:userterm3 +userterm4)^10 (fld1:expterm1)^8 (fld2:expterm2)^7 ...
Is this the best way to go? Or is there some elegant programmatic solution?
Okay, I'm not sure how useful this answer will be, but I can't seem to come up with a hard-and-fast answer here, so I'll list a couple of possibilities that come to mind:
First, a problem:
Modifying the query to look like:
(userquery) (other) (stuff)
It makes some sense to add the + with the rules you've shown, but a '-' prohibited term will be hard to respect correctly, since (query -prohibition) (other) will allow matches on other with prohibition present as well, and +(query -prohibition) (other) will require 'query' to be matched.
The only way I see to really do that part right is to propagate the prohibited term into your automatically added terms as well, or extract it out to a parent query layer, more like (query -prohibition) --> (query) (other) -(prohibition).
And with user entered queries of arbitrary complexity, that may not be a great strategy.
If you want to tackle it by modifying the query string, then you should probably just add any terms to the end of the query. Nothing more to it.
I don't believe
(fld1:userterm1 userterm2 -fld2:userterm3 +userterm4)^10 (fld1:expterm1)^8 (fld2:expterm2)^7 ...
is satisfactory, because userterm4 is only required within its subquery, but a match only on expterm1 is still acceptable. However, a query like:
fld1:userterm1 userterm2 -fld2:userterm3 +userterm4 (fld1:expterm1)^.8 (fld2:expterm2)^.7 ...
should, I think, satisfy your needs, and it prevents you from having to worry about the internals of your query parser. I think this is the best approach.
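A sketch of that in Java (assuming an already-configured QueryParser named parser; the boosts, field names, and terms are placeholders, and error handling is omitted):

// Append the optional expansion terms to the raw user input, then parse once.
String expanded = userInput + " (fld1:expterm1)^0.8 (fld2:expterm2)^0.7";
Query query = parser.parse(expanded);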
I can also see logic in a query structured like
+(parsed userquery) (other stuff)
Effectively, always requiring a match on the user query. Lucene implicitly does this, in a sense, as it won't return a result that matches no term, even if no required fields are present in the query. This would then be using your added terms to impact scoring, rather than to return a larger set of documents. This doesn't quite address what you're asking, but it might be worth considering.
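For illustration, a minimal sketch of that structure on the Lucene 3.6 API (the field names, terms, and boosts are made up here):

// Parse the user's input as one opaque unit, then require it.
QueryParser parser = new QueryParser(Version.LUCENE_36, "fld1",
        new StandardAnalyzer(Version.LUCENE_36));
Query userQuery = parser.parse(userInput);

BooleanQuery combined = new BooleanQuery();
combined.add(userQuery, BooleanClause.Occur.MUST);     // always require the user query

for (String expTerm : expansionTerms) {                // hypothetical list of expansion terms
    TermQuery tq = new TermQuery(new Term("fld1", expTerm));
    tq.setBoost(0.8f);                                 // expansion terms only affect scoring
    combined.add(tq, BooleanClause.Occur.SHOULD);
}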
If, despite the aforementioned problems of applying them, you still want to detect '+' and '-' operators, I think it can reasonably be assumed that a StandardQueryParser will return a BooleanQuery at the base level for any query that you need to check for these operators on. You might have to worry about, for instance, DisjunctionMaxQueries, as well as what will happen when you have a simple query with an operator, like:
+myterm
I don't know if QueryParser would simply return a TermQuery, losing the plus (since it would be redundant without another term present). Concerns like that make me hesitant to address it in this way.
Similarly, attempting to detect these values from the query string must make assumptions about how things are parsed, and could become complicated.
To summarize, I think the best options are either: add terms to the end of the raw query string before doing any parsing, or treat the user query as atomic and define the appropriate BooleanClause independent of its contents, wrapping it in a BooleanQuery together with whatever other queries you need to include.
I have a list of simple regular expressions:
ABC.+DE.+FHIJ.+
.+XY.+Z.+AB
.+KLM.+NO.+J.+
QRST.+UV
They all have alternating patterns of .+ and some text (which I will call "words") repeated some number of times. A pattern may or may not begin or end in .+. These regular expressions are all mutually exclusive. When another regex is added, I want to remove any other matching regular expressions and add one regular expression that combines the added one with all of its matches. For example, adding:
.+J.+
would match,
ABC.+DE.+FHIJ.+
.+KLM.+NO.+J.+
and thus these would be removed and replaced with the added regular expression, resulting in:
.+J.+
.+XY.+Z.+AB
QRST.+UV
I need to store these patterns either in some data structure or (preferably) in a database in an efficient manner. I first tried a tree of dictionaries, only to realize that, in the case that a regex starts with a .+, it has to search the entire tree for the next word, which is O(2^n). Unfortunately (unless I am mistaken), it appears that neither SQLite (which I am using) nor any other relational database that I have used supports "regular expression" as a data type. My question is: is there an efficient method for storing and retrieving such simple regular expressions? If there is no canned method, is there some data structure that would be relatively efficient (say, at worst amortized polynomial time)?
Could you please explain what you are using these regular expressions for, as that would make it easier to provide a better answer? In particular, seeing the way you are splitting your regular expressions, I'm wondering if a Trie or a directed acyclic word graph (DAWG) would be a better fit.
From there you may find your answer is as simple as providing better normalization, or finding an alternative NoSQL DB made specifically for your problem area.
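To illustrate the splitting, here is a rough Java sketch of a word-level trie (the names are invented; this shows only the storage side, and the leading/trailing .+ flags would still need to be recorded to answer subsumption queries):

import java.util.*;

class PatternTrie {
    private final Map<String, PatternTrie> children = new HashMap<>();
    private String pattern;                        // non-null if a pattern ends here

    // Split a pattern like ".+KLM.+NO.+J.+" on its ".+" separators and
    // store it under the resulting word sequence [KLM, NO, J].
    void insert(String regex) {
        PatternTrie node = this;
        for (String word : regex.split("\\.\\+")) {
            if (word.isEmpty()) continue;          // a leading ".+" yields an empty token
            node = node.children.computeIfAbsent(word, k -> new PatternTrie());
        }
        node.pattern = regex;
    }
}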
I have a large number of phrases (~several million), each less than six or seven words, with the large majority less than five, and I would like to see if they "phrase match" each other. This is a search engine marketing term: essentially, phrase A matches phrase B if A is contained in B. Right now they are stored in a DB (Postgres), and I am performing a join on regexes (see this question). It is running impossibly slowly, even after trying all the basic optimization tricks (indexing, etc.) and the suggestions provided.
Is there an easier way to do this? I am not averse to a non-DB solution. Is there any reason to think that regexes are overkill and are taking way longer than a different solution?
An ideal algorithm for doing substring matching is Aho-Corasick.
Although you will have to read the data out of the database to use it, it is tremendously fast compared to more naive methods.
See here for a related question on substring matching:
And here for an Aho-Corasick implementation in Java:
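For a rough idea of the shape of the algorithm, here is a compact sketch (not production code; prefer a maintained implementation for real use). Build it once over all phrases; then for each phrase B, phrasesContainedIn(B) lists every stored phrase contained in B (including B itself), which is exactly the "A is contained in B" test:

import java.util.*;

// Compact Aho-Corasick sketch: add every phrase, build failure links once,
// then scan each phrase to find which other phrases it contains.
class AhoCorasick {
    static class Node {
        Map<Character, Node> next = new HashMap<>();
        Node fail;
        List<String> outputs = new ArrayList<>();    // phrases ending here
    }

    final Node root = new Node();

    void add(String phrase) {
        Node cur = root;
        for (char c : phrase.toCharArray())
            cur = cur.next.computeIfAbsent(c, k -> new Node());
        cur.outputs.add(phrase);
    }

    void build() {                                   // BFS to set failure links
        Deque<Node> queue = new ArrayDeque<>();
        for (Node child : root.next.values()) {
            child.fail = root;
            queue.add(child);
        }
        while (!queue.isEmpty()) {
            Node n = queue.remove();
            for (Map.Entry<Character, Node> e : n.next.entrySet()) {
                Node child = e.getValue();
                Node f = n.fail;
                while (f != root && !f.next.containsKey(e.getKey()))
                    f = f.fail;
                Node target = f.next.get(e.getKey());
                child.fail = (target != null && target != child) ? target : root;
                child.outputs.addAll(child.fail.outputs);
                queue.add(child);
            }
        }
    }

    List<String> phrasesContainedIn(String text) {   // all added phrases occurring in text
        List<String> found = new ArrayList<>();
        Node cur = root;
        for (char c : text.toCharArray()) {
            while (cur != root && !cur.next.containsKey(c))
                cur = cur.fail;
            cur = cur.next.getOrDefault(c, root);
            found.addAll(cur.outputs);
        }
        return found;
    }
}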
It would be great to get a little more context as to why you need to see which phrases are subsets of others. For example, it seems strange that the DB would be built this way in the first place: you're having to do this work now because the DB is not in an appropriate format, so it may make more sense to 'fix' the DB, or the way in which it is built, instead.
It depends massively on what you are doing with the data and why, but I have found it useful in the past to break things down into single words and pairs of words, then link resources or phrases to those singles/pairs.
For example, to implement a search I have done:
Source text: Testing phrases to see
Entries:
testing
testing phrases
phrases
phrases to
to
to see
see
To see if another phrase was similar (granted, not contained within), you would break down the other phrase in the same way and count the number of entries common between them.
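In Java, that breakdown and overlap count might look something like this (a rough sketch; the tokenization is deliberately simplistic):

import java.util.*;

class PhraseOverlap {
    // Break a phrase into single words plus consecutive word pairs.
    static Set<String> entries(String phrase) {
        String[] words = phrase.toLowerCase().split("\\s+");
        Set<String> out = new HashSet<>(Arrays.asList(words));      // single words
        for (int i = 0; i + 1 < words.length; i++)
            out.add(words[i] + " " + words[i + 1]);                 // consecutive pairs
        return out;
    }

    // Similarity score: how many entries two phrases share.
    static int commonEntries(String a, String b) {
        Set<String> shared = entries(a);
        shared.retainAll(entries(b));
        return shared.size();
    }
}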
It has the nice side effect of still matching if you were to use (for example) "see phrases to testing": the individual words would match, but because the order is different the pairs wouldn't. Since it takes phrases (consecutive words) into account at the same time, the number of matches wouldn't be as high, which makes it good for use as a 'score' in matching.
As I say, that -kind- of thing has worked for me, but it would be great to hear some more background/context, so we can see if we can find a better solution.
When you have the 'cleaned column' from MaasSQL's previous answer, you could, depending on how exactly "phrase match" works (I don't know), sort this column based on the length of the containing string.
Then make sure you run the comparison query in a converging manner in a procedure, rather than as a flat query, by stepping through your table (with a cursor) and eliminating candidates for comparison through WHERE clauses and by deleting candidates that have already been tested (completely). You may need a temporary table to do this.
What do I mean by the 'WHERE' clause above? Well, if the comparison value is in a column sorted on length, you'll never have to test whether a longer string matches inside a shorter string.
And with deleting candidates: starting with the shortest strings, once you've tested all strings of a certain length, you can remove them from the comparison table, as no later test will ever match them.
Of course, this requires a bit more programming than just one SQL statement, and it depends on how exactly "phrase match" works.
DTS or SSIS may be your friend here as well.