I am using the Blue Prism Surface Automation training application, and I want to find an account using the last name as a search filter. I get multiple matches and want to select one result based on the first name.
How can I do that? Any suggestions?
In the meantime, this is what I tried:
I tried using accessibility mode, where I select the first name once and then just change the Y coordinate of the region and read the value, but I can't find any option to do that. There is one option called Movable, but I'm not sure how to use it.
Any other approach is welcome.
the "Blue Prism training" application is a playground for using advanced techniques, and that exact window is done to learn basics of Surface Automation. It's specifically done that way, that it's really hard to work without it, so I really don't think you can go far without using surface..
The exercise you want to do is pretty hard, so I'd advise doing a simpler one first: reading the first name in the table.
The by-the-book solution to that is as follows:
1) Identify the window using region mode
2) Specify a region in the window that contains the "name" field
3) Use the Blue Prism OCR to read what's written there and check whether it's the one you're searching for.
Once you have that exercise done, you can think about reading the next names.
The simplest way to do that (and it's still not simple!) is to have a dynamic region that takes "Start X", "End X", "Start Y", and "End Y" parameters. That way you can pass the coordinates into the object and move the region as you wish. Then, using a loop and some basic math, you can move that region to the next field in the table; see the sketch below.
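To make the loop-and-math part concrete, here is a minimal Python sketch of the idea. The geometry numbers are made up, and read_region is a hypothetical stand-in for the Blue Prism "read text from dynamic region" action (faked here so the sketch runs end to end):

# Hypothetical geometry for the results table (made-up numbers):
START_X, END_X = 120, 260      # left/right edges of the first-name column
FIRST_ROW_Y = 85               # top of the first row
ROW_HEIGHT = 18                # vertical distance between rows
NUM_ROWS = 10

def read_region(x1, y1, x2, y2):
    # Stand-in for the Blue Prism OCR read against a dynamic region;
    # faked with a hard-coded table so the sketch is runnable.
    fake_rows = ["Alice", "Bob", "Carol"]
    row = (y1 - FIRST_ROW_Y) // ROW_HEIGHT
    return fake_rows[row] if 0 <= row < len(fake_rows) else ""

def find_row_by_first_name(target):
    for row in range(NUM_ROWS):
        y1 = FIRST_ROW_Y + row * ROW_HEIGHT   # shift the region down one row
        y2 = y1 + ROW_HEIGHT
        if read_region(START_X, y1, END_X, y2).strip().lower() == target.lower():
            return row                        # the row to select/click
    return None

print(find_row_by_first_name("Bob"))   # 1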
Good luck!
I need to do a VLOOKUP from another workbook on about 400,000 cells with VBA. The cells are all in one column, and the results shall be written into one column. I already know how VLOOKUP works, but my runtime is much too high when using AutoFill. Do you have a suggestion for how I can improve it?
Don't use VLOOKUP; use INDEX/MATCH: http://www.randomwok.com/excel/how-to-use-index-match/
If you are able to adjust what the data looks like a slight amount, you may be interested in using a binary search. It's been a while since I last used one (writing code for a group exercise check-in program), but https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/implementing-binary-search-of-an-array was helpful in setting up the idea behind it.
If you are able to sort the data in some order, say by last name (I'm not sure what data you are working with), then add a numeric index to use for the binary search.
Edit:
The reasoning for a binary search is the computational time it takes: the number of iterations is log2(400,000) rather than 400,000. So instead of up to 400,000 iterations, it would take at most 19 with a binary search. As you can see, the more data you have, the bigger the speedup a binary search yields.
This is only a beneficial approach if you are able to manipulate the data in a way that allows you to use a binary search; a minimal sketch follows.
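As a concrete illustration, here is a minimal Python sketch of a binary search over a sorted column of keys, with a hypothetical two-column (key, value) layout standing in for the two workbook columns:

def binary_search(sorted_keys, target):
    # Classic binary search: O(log n) comparisons on sorted data.
    lo, hi = 0, len(sorted_keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_keys[mid] == target:
            return mid                     # index of the match
        elif sorted_keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                              # not found

# With 400,000 sorted keys this takes at most ~19 probes.
keys = ["Adams", "Baker", "Clark", "Davis"]    # sorted lookup column
values = ["101", "102", "103", "104"]          # column to return
i = binary_search(keys, "Clark")
print(values[i] if i >= 0 else None)           # 103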
So, if you can give us a bit more background on the data you are using and any restrictions you have on it, we would be able to give more constructive feedback.
I have a simple statement, pasted below, run against an Oracle database. The result set contains names of businesses, but it has 24,000 rows, and these are displayed in a drop-down list.
I am looking for ideas on ways to reduce the result set to speed up the data returned to the user interface, maybe something like Google's search, or a completely different idea. I am open to any thoughts, and any direction is welcome.
SELECT BusinessName FROM MyTable ORDER BY BusinessName;
Idea:
SELECT BusinessName FROM MyTable WHERE BusinessName LIKE 'A%';
I know all about how LIKE clauses can be unwise to use, but like I said, this is a LARGE result set. Maybe something along the lines of a BINARY search?
The last query can perform horribly. String comparisons inside the database can be very slow, and depending on the number of "hits" they can be a huge drag on performance. If that doesn't concern you, that's fine. This is especially true if the company data isn't normalized into its own DB table.
As long as the user knows the company they're looking up, I would find an existing component in some popular JavaScript library that provides a search text field with a dynamic drop-down of matching results; that would be an effective mechanism. But you might want to use '%A%' instead, in case users look for part of a name. For example, if I'm looking for IBM Rational, LLC, do I want it to show up in the results when I search for "Rational"?
Either way, watch your performance, and if it makes sense, cache the data in the company lookup service that sits on the server in front of the DB. Also, make sure you don't fire a request on every keystroke; use a timeout of 500 ms or so to let the user type several characters before going to the server and searching. I would also NOT recommend bringing all of the company names to the client. We're always looking to reduce the size and frequency of round trips from the browser to the server, and waiting for 24k company names to come down to the client when the form loads (or even behind the scenes), when shorter, very specific queries will perform sufficiently well, seems less efficient to me. Again, test it and identify the performance characteristics that fit your use case best.
These are techniques I've used on projects with large data sets, like searching for a user from a base of 100,000+ users. Our code was a custom Dojo widget (dijit); I'm not seeing how to do it directly with the dijit code, but jQuery UI provides an autocomplete widget.
Also, cap the number of rows this query returns (combined with the text-field filter above) so that the drop-down only provides a subset of all the matches, forcing the user to further refine the query. Note that Oracle has no LIMIT keyword; the classic approach is to wrap the ordered query and filter on ROWNUM:
SELECT BusinessName FROM (
    SELECT BusinessName FROM MyTable ORDER BY BusinessName
) WHERE ROWNUM <= 10;
I have recently become interested in Oracle jargon, and the more I look into it, the more it attracts me.
I recently came across the MODEL clause but, to be honest, I am not understanding its behaviour. Can anyone explain it to me with some examples?
Thanks in advance
Some examples of MODEL are given here.
Personally, I've looked at MODEL several times and not yet succeeded in finding a use case for it. While it at first appears useful, there are a lot of places where only literals work (rather than binds or variables), which restricts its flexibility. For example, in inter-row calculations you can't readily refer to the 'previous' or 'next' row; you have to be able to identify it absolutely by its attributes. So you can't say 'take the value of the row with the same date in the previous month'; you can only code a specific date.
It might be used (internally) by some analytical tools. But as an end-user tool, I never 'got' it. I have previously recommended that, if you ever find a problem you think can be solved by applying the MODEL clause, you go and have a lie down until the feeling wears off.
I think the MODEL clause is quite simple to understand when you read the official whitepaper slowly. In my opinion, the whitepaper explains the MODEL clause nicely, step by step, adding one feature at a time to the examples and leaving the most advanced features to the official documentation.
The whitepaper also makes it easy to understand when to actually use the MODEL clause. In some cases it is a lot simpler to do "Excel-spreadsheet-like" operations using MODEL than with, for instance, window functions, CONNECT BY, or subquery factoring. Think about Excel: whenever you want to define a complex rule set over columns, use the MODEL clause. Example Excel spreadsheet rules:
A10 = A9 + A8
B10 = A10 * 5
C10 = MAX(A1:A9)
D10 = C10 / A10
In other words, MODEL is a very powerful SQL spreadsheet!
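To see the mental model behind those rules (this is just the idea, not actual MODEL syntax), here is a small Python sketch that treats a query result as an array of cells and applies the four rules above, using made-up data:

# Pretend A1..A9 came back from a query (made-up data: A1=1.0 ... A9=9.0).
A = {i: float(i) for i in range(1, 10)}
B, C, D = {}, {}, {}

# The same rules as the Excel example above:
A[10] = A[9] + A[8]                        # A10 = A9 + A8
B[10] = A[10] * 5                          # B10 = A10 * 5
C[10] = max(A[i] for i in range(1, 10))    # C10 = MAX(A1:A9)
D[10] = C[10] / A[10]                      # D10 = C10 / A10

print(A[10], B[10], C[10], D[10])          # 17.0 85.0 9.0 0.529...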
The best explanation is in the official white paper. It uses the SH demo schema and you really need it installed.
http://www.oracle.com/technetwork/middleware/bi-foundation/10gr1-twp-bi-dw-sqlmodel-131067.pdf
I don't think they do a very good job of explaining it. It basically lets you load data into an array and then loop through the array using straight SQL, instead of having to write procedural logic. A lot of the terms are based on spreadsheet terms (they are used in the Excel help), so if you haven't used them in Excel, this will be confusing.
They should have drawn a picture for each of the queries, shown the array created, and then shown how you loop through the array. The syntax looks to be based on Excel syntax; I'm not sure whether that is common to all spreadsheet tools or not.
It has its uses. Bin fitting is the most common; see the second example in the article below. That one is basically a complex GROUP BY where you group by a range, but the range can change, which normally requires procedural logic. The article gives three ways to do it, one of which is the MODEL clause.
http://www.oracle.com/technetwork/issue-archive/2012/12-mar/o22asktom-1518271.html
I think people (often managers) who do complex spreadsheet calculations may have an easier time seeing uses for this and picking up the lingo.
I'm not too good with SQL and I know there's probably a much more efficient way to accomplish what I'm doing here, so any help would be much appreciated. Thanks in advance for your input!
I'm writing a short program for the local high school. At this school, juniors and seniors who have driver's licenses and cars can opt to drive to school rather than ride the bus. Each driver is assigned exactly one space, and their DLN is used as the primary key of the drivers table. Makes, models, and colors of cars are stored in a separate cars table, related to the drivers table by the license plate number field.
My idea is to have a single search box on the main GUI where the school secretary can type in who or what she's looking for and pull up a list of results. The thing is, she could be typing a license plate number; a car color, make, or model; some driver's name; some student driver's DLN; or a space number. As the programmer, I don't know exactly what she's looking for, so a couple of options come to mind to be certain I check everywhere for a match:
1) Perform a couple of
SELECT * FROM [tablename]
SQL statements, one per table, and cram the results into arrays in my program, then search across the arrays one element at a time with a regex, looking for a pattern similar to the search term; if I find a match, add the entire matching record to a results array to display on screen at the end of the search.
2) Take whatever she's looking for into the program as a scalar and prepare multiple SELECT statements around it, such as
SELECT * FROM DRIVERS WHERE DLN = $Search_Variable
SELECT * FROM DRIVERS WHERE First_Name = $Search_Variable
SELECT * FROM CARS WHERE LICENSE = $Search_Variable
and so on for each attribute of each table, sticking the results into a results array to show on screen when the search is done (see the sketch below).
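Here is a rough Python/sqlite3 sketch of what I mean by option 2, just to illustrate the idea; some of the column names are guesses, and the real program differs:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DRIVERS (DLN TEXT, First_Name TEXT)")
conn.execute("CREATE TABLE CARS (LICENSE TEXT, COLOR TEXT)")
conn.execute("INSERT INTO DRIVERS VALUES ('D123', 'Alice')")
conn.execute("INSERT INTO CARS VALUES ('XYZ 789', 'Red')")

SEARCHABLE = [  # fixed whitelist of (table, column) pairs to check
    ("DRIVERS", "DLN"), ("DRIVERS", "First_Name"),
    ("CARS", "LICENSE"), ("CARS", "COLOR"),
]

def search_everywhere(term):
    results = []
    for table, column in SEARCHABLE:
        # The search term is bound; table/column names come only
        # from the whitelist above, never from user input.
        rows = conn.execute(
            f"SELECT * FROM {table} WHERE {column} = ?", (term,)).fetchall()
        results.extend((table, row) for row in rows)
    return results

print(search_everywhere("Alice"))   # [('DRIVERS', ('D123', 'Alice'))]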
Is there a cleaner way to go about this lookup without having to make her specify exactly what she's looking for? Possibly some kind of SQL statement I've never seen before?
This seems like the right application for the Sphinx full-text search engine. There's the Sphinx::Search module on CPAN, which can be used as a Perl client for Sphinx.
First of all, you should not use SELECT * and you should definitely use bind values.
Second, the easiest way to figure out what the user is searching for is to ask the user. Have a set of checkboxes like so:
Search among: [ ] Names
[ ] License Plate Numbers
[ ] Driver's License Numbers
Alternatively, you can note that names do not contain any digits, and I have not seen any driver's license number that does not contain digits. There are other heuristics you can apply to partially deduce what the user was trying to search for.
If you do an OK job of presenting the results, this might work out.
Finally, try to figure out what search capabilities the database you are using offers, and leverage them so that most of the searching happens before the user interface touches the data.
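To make the digit heuristic above concrete, here is a minimal Python sketch that narrows down which (table, column) pairs are worth querying; the column names beyond DLN and LICENSE are guesses based on the question:

def candidate_columns(term):
    # Heuristic from above: names contain no digits, while DLNs,
    # plate numbers, and space numbers do.
    if any(ch.isdigit() for ch in term):
        return [("DRIVERS", "DLN"), ("CARS", "LICENSE")]
    return [("DRIVERS", "First_Name"), ("CARS", "COLOR"),
            ("CARS", "MAKE"), ("CARS", "MODEL")]

print(candidate_columns("D123"))    # digits present: DLN or plate
print(candidate_columns("Alice"))   # no digits: name, color, make, model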
I'm trying to implement something like Google Suggest on a website I am building and am curious how to go about doing it on a very large dataset. Sure, if you've got 1,000 items, you cache them and just loop through them. But how do you go about it when you have a million items? Further, suppose the items are not one word. Specifically, I have been really impressed by Pandora.com. For example, if you search for "wet" it brings back "Wet Sand", but it also brings back Toad the Wet Sprocket. And their autocomplete is FAST. My first idea was to group the items by the first two letters, so you would have something like:
Dictionary<string,List<string>>
where the key is the first two letters. That's OK, but what if I want to do something similar to Pandora and allow the user to see results that match the middle of the string? With my idea, "wet" would never match Toad the Wet Sprocket, because it would be in the "TO" bucket instead of the "WE" bucket. So perhaps you could split the string up, so that "Toad the Wet Sprocket" goes into the "TO", "WE", and "SP" buckets (stripping out the word "the"), but when you're talking about a million entries of possibly a few words each, it seems like you'd quickly start using up a lot of memory. OK, that was a long question. Thoughts?
As I pointed out in How to implement incremental search on a list, you should use structures like a trie or Patricia trie for searching for patterns in large texts.
And for discovering patterns in the middle of some text there is one simple solution. I am not sure if it is the most efficient solution, but I usually do it as follows.
When I insert some new text into the trie, I just insert it, then drop the first character and insert again, then drop the first two characters and insert again, and so on until the whole text is consumed. You can then discover every substring of every inserted text with just one search from the root. The resulting structure is called a suffix tree, and there are a lot of optimizations available.
And it is really incredibly fast. To find all texts that contain a given sequence of n characters, you have to inspect at most n nodes and perform a search on the list of children for each node. Depending on the implementation of the child node collection (array, list, binary tree, skip list), you might be able to identify the required child node in as few as 5 search steps, assuming case-insensitive Latin letters only. Interpolation search might be helpful for large alphabets and for nodes with many children, like those usually found near the root.
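To make the suffix-insertion trick concrete, here is a minimal Python sketch on a plain (uncompressed) trie; a real suffix tree would compress the chains, and as the question notes, this costs memory at scale:

class TrieNode:
    def __init__(self):
        self.children = {}
        self.sources = set()   # original strings with a substring ending here

def insert_with_suffixes(root, text):
    # Insert text, text[1:], text[2:], ... so every substring is findable.
    for start in range(len(text)):
        node = root
        for ch in text[start:].lower():
            node = node.children.setdefault(ch, TrieNode())
            node.sources.add(text)

def find_substring_matches(root, query):
    node = root
    for ch in query.lower():
        if ch not in node.children:
            return set()
        node = node.children[ch]
    return node.sources

root = TrieNode()
for title in ["Wet Sand", "Toad the Wet Sprocket"]:
    insert_with_suffixes(root, title)
print(find_substring_matches(root, "wet"))   # both titles match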
Don't try to implement this yourself (unless you're just curious). Use something like Lucene or Endeca - it will save you time and hair.
Not algorithmically related to what you are asking, but make sure you have a delay (lag) of 200 ms or more after the keypress(es), so that the user has stopped typing before you issue the asynchronous request. That way you reduce redundant HTTP requests to the server.
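In the browser you'd do this with a JavaScript timer; purely to illustrate the mechanism, here is a hedged Python sketch of the same debounce idea using threading.Timer:

import threading

class Debouncer:
    def __init__(self, delay_seconds, action):
        self.delay = delay_seconds
        self.action = action          # called once typing pauses
        self.timer = None

    def keypress(self, text):
        # Cancel any pending request and restart the countdown.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.delay, self.action, args=(text,))
        self.timer.start()

# Only the last call within the 0.2 s window triggers the search.
d = Debouncer(0.2, lambda q: print("search for:", q))
for prefix in ["w", "we", "wet"]:
    d.keypress(prefix)                # prints only "search for: wet"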
I would use something along the lines of a trie, and have the value of each leaf node be a list of the possibilities that contain the word represented by the leaf node. You could sort them in order of likelihood, or dynamically sort/filter them based on other words the user has entered into the search box, etc. It will execute very quickly and in a reasonable amount of RAM.
You keep the items on the server side (perhaps in a DB, if the dataset is really large and complex) and you send AJAX calls from the client's browser that return the results using json/xml. You can do this in response to the user typing, or with a timer.
If you don't want a trie and you want matches from the middle of the string, you generally want to run some sort of edit distance function (Levenshtein distance), which gives you a number indicating how well two strings match up. It's not a particularly efficient algorithm, but that doesn't matter too much for things like words, as they're relatively short. If you're running comparisons on, say, 8,000-character strings, it'll probably take a few seconds. Most languages have an implementation, or you can find code/pseudocode for it pretty easily on the internet.
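For reference, here is a minimal Python sketch of the classic dynamic-programming Levenshtein distance (a smaller number means a closer match):

def levenshtein(a, b):
    # prev[j] holds the distance between a[:i-1] and b[:j]; we sweep row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("wet", "wit"))   # 1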
I've built AutoCompleteAPI for exactly this scenario.
Sign up to get a private index, then upload your documents.
Example upload using curl for the document "New York":
curl -X PUT -H "Content-Type: application/json" -H "Authorization: [YourSecretKey]" -d '{
"key": "New York",
"input": "New York"
}' "http://suggest.autocompleteapi.com/[YourAccountKey]/[FieldName]"
After indexing all documents, use the following to get autocomplete suggestions:
http://suggest.autocompleteapi.com/[YourAccountKey]/[FieldName]?prefix=new
You can use any client autocomplete library to show these results to the user.