Completely reject a GET event in ABAP

I would like to know how many times the GET event would happen without actually triggering it (or triggering it only once).
So far, I know how to get the total number of iterations with lines( (ldb_include)node_table[] ), but this only works once GET has been called; and once GET has been called, it iterates through node_table, so if it has 8798237 entries they will all be PUT. Since I already have the number of iterations (which is all I need), I don't want all those values to be put.
I can leave the GET by using REJECT, but that only skips to the next iteration... currently I don't know how to quit the GET completely.
I've tried using STOP, but it raises the END-OF-SELECTION event immediately, which is not the idea...
at selection-screen output.
  "process the selection screen
start-of-selection.
get <node_tab>.
  "lv_total = lines( (ldb_include)node_table[] )
  "some sort of REJECT for all GET events
  "continue processing the rest of the code, using lv_total
end-of-selection.
  "display the output
I can achieve it using a flag, like:
if first_execution = abap_true.
  "process it
else.
  reject <node_tab>.
endif.
But that would still iterate through all the GET events, which is against the idea. I would like to understand whether there is a smarter (possibly more elegant) way to process only the first GET and skip all the others.

It's like saying there's a database view with joined tables, but one table is not needed - how do you make the program read the view but tell the database not to read that one table?
Impossible!
The only solution is to copy and adapt the Logical Database. As simple as that.
And, of course, logical databases have been obsolete for a long time, so prefer a database join, or anything else better.
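If the only thing needed is the number of rows, reading the count directly with Open SQL avoids the logical database altogether. A rough sketch, where the table znode_tab, the field keyfield and the select-option s_field are placeholders invented for the example, not names from the original program:

data lv_total type i.

select count( * )
  from znode_tab
  into lv_total
  where keyfield in s_field.

"lv_total now holds the number of rows the GET event would have delivered.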


How should an API treat a single item in an array failing to insert into a DB?

I have a POST request that saves an array of items into a DB. If one of them conflicts with a unique constraint, is there a best practice for what to do?
My first instinct in this case would be to save the unique ones into the DB and just return the duplicate error for the duplicated one. (In fact, this would be better for my use case.)
If that's the way to go, what would be the correct response status?
Sorry for the conceptual question, but I don't even know how to google this.
The "standard" way of doing things would be to wrap the whole thing in a database transaction and roll back the whole thing. Probably return a 500 error for the return code and a message about the duplicated record.
If you want to support a partial, the return codes are more complicated, you could pick a complicated code like 207 Multi-Status, but support will be limited. Not many developers are going to put in an explicit handler for a 207, so most likely it would be missed on the callback processing. I'd just do a straight 200 and have a response payload with warnings embedded.
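For illustration only, such a 200 response body could embed the warnings like this (all field names are invented, not part of any standard):

{
  "inserted": ["item-1", "item-3"],
  "warnings": [
    {
      "item": "item-2",
      "code": "DUPLICATE",
      "message": "violates unique constraint"
    }
  ]
}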

How to get a single item from my Amazon DynamoDB table

My .NET code below always returns a search.Matches.Count of 0, even though the movie is in the table. I've literally searched the whole internet but have not been able to get an answer, even on Amazon's AWS developer website.
Please let me know what I am doing wrong. I appreciate your help; I'm totally new to this.
client = New AmazonDynamoDBClient(config)
table = Table.LoadTable(client, "MovieTable")
scanFilter = New ScanFilter
With scanFilter
    .AddCondition("KeyCode", ScanOperator.NotEqual, MovieName)
    .AddCondition("Status", ScanOperator.Equal, "In")
End With
search = table.Scan(scanFilter)
If search.Matches.Count = 1 Then getMovieName
As the documentation explains, Scan, a function which is supposed to go through the entire database, cannot do that in one fell swoop. Instead, it goes through it 1 MB at a time: after 1 MB of data it returns to the caller, and you are expected to ask for the next page (again, see the documentation for how).
In your case, you have a very specific filter which matches only one item, but still - Scan will return after having read 1 MB of data, even if none of the items in that 1 MB match your request. It does not wait until 1 MB of results has been collected! So in your use case it is not surprising that you get an empty result set, with LastEvaluatedKey set to signal that there are more pages to read.
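In code, that means looping until the Scan reports it is done. A rough sketch continuing the question's snippet (table, scanFilter, search, getMovieName); GetNextSet and IsDone belong to the DocumentModel Search class:

' Requires Imports Amazon.DynamoDBv2.DocumentModel and System.Collections.Generic
Dim matches As New List(Of Document)

Do
    ' GetNextSet reads the next page (up to 1 MB of scanned data) from the table
    matches.AddRange(search.GetNextSet())
Loop Until search.IsDone

If matches.Count = 1 Then getMovieName()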
By the way, in your use case, where you are looking for just one item, doing a Scan of the entire database is obviously not a great choice (unless you're only doing this for debugging). A GetItem or Query operation will make more sense, if you can use one, and a secondary index may be useful if you're searching by attributes that are not in the key.

How to discard / ignore one of a stored procedure's many return values

I have a stored procedure that returns 2 values.
In another procedure, I call this (edit: NOT selectable) procedure but only need one of the two returned values.
Is there a way to discard the other value? I'm wondering what is a good practice, and hoping for a small performance gain.
Here is how I call the procedure without error:
CREATE or ALTER procedure my_proc1
as
  declare variable v_out1 integer default null;
  declare variable v_out2 varchar(10) default null;
begin
  execute procedure my_proc2('my_param')
    returning_values :v_out1, :v_out2;
end;
That is the only way I found to call this procedure without getting a -607 error, 'unsuccessful metadata update / request depth exceeded. (Recursive definition?)', whenever I use only the one variable v_out1.
So my actual question is: can I avoid creating a v_out2 variable for nothing, as I will never use it (that value is only used in other procedures which also call my_proc2)?
Edit: the stored procedure my_proc2 is actually not selectable. But I made it selectable after all.
Because your stored procedure is selectable, you should call it with a SELECT statement, i.e.
select out1, out2 from my_proc2('my_param')
and in that case you can indeed omit some of the return values. However, I wouldn't expect a noticeable performance gain, as the logic inside the SP which calculates the omitted field is still executed.
If your procedure is not selectable, then creating a wrapper SP is the only way, but again, it wouldn't give any performance gain, as the code which does the hard work inside the original SP is still executed.
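For completeness, a rough sketch of such a wrapper for the non-selectable case; the wrapper name, the parameter name and its type are invented for the example, only my_proc2 and its two outputs come from the question. The unused output is received into a throwaway variable and never returned:

create or alter procedure my_proc2_first_out (in_param varchar(20))
returns (out1 integer)
as
  declare variable dummy varchar(10);
begin
  execute procedure my_proc2(:in_param)
    returning_values :out1, :dummy;
end

It saves no work inside my_proc2; it merely hides the second output from callers.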
This is posted as an answer in order to use text formatting; it demonstrates the "race conditions" of multithreaded programming (which SQL is) when [ab]using out-of-transaction objects (SQL sequences, a.k.a. Firebird generators).
So, the "use case".
Initial condition: table is empty, generator=0.
You start two concurrent transactions, A and B. For ease of imagining, you may think of those transactions as started from concurrent connections made by two people working with your program on two networked computers. Actually it does not matter much: if you opened the transactions from one and the same connection, the scenario would not change a bit. It is just for ease of imagining.
Tx.A issues an UPDATE OR INSERT which inserts a new row into the table. Doing so it up-ticks the generator. The transaction is not committed yet. Database condition: the table has one invisible (non-committed) row with auto_id=1, the generator = 1.
Tx.B issues an UPDATE OR INSERT too, which inserts yet another row into the table. Doing so it also up-ticks the generator. The transaction may commit now or later; it is irrelevant. Database condition: the table has two rows (one or both invisible, i.e. non-committed) with auto_id=1 and auto_id=2, the generator = 2.
Tx.A meets some error, throws an exception, DOWNTICKS the generator and rolls back. Database condition: the table has one row with auto_id=2, the generator = 1.
If Tx.B was not committed before, it is committed now. (This "if" is just to demonstrate that it does not matter when the other transaction commits, earlier or later; it only matters that Tx.A downticks the generator after another transaction has upticked it.)
So, the final database condition: the table has one committed (visible) row with auto_id=2, and the generator = 1.
Any further attempt to add one more row will tick the generator up to 1+1=2, then fail to insert the new row with a PK violation, then tick the generator back down to 1, recreating the faulty condition outlined above.
Your database is stuck: without direct intervention by a DB administrator, no more data can be added.
The very idea of rolling back the generator defeats all the intentions generators were created for, and all the expectations about generator behavior that the database, the connection libraries, and other programmers have.
You have just placed a trap on the highway. It is only a matter of time until someone is caught in it.
Even if you keep guarding this hack with other hacks for now - wasting a lot of time and attention to do it scrupulously and pervasively - still, one unlucky day in the future, another programmer (or you yourself, having forgotten these gory details) will start using the generator in the standard, intended way - and will run into the trap.
Generators were not made to be backtracked during normal work.
existence of primary key is checked in the procedure before doing anything
Yep, that is the first reaction when a multithreading programmer meets his first race condition: let's just add more prior checks.
The first few checks can indeed decrease the probability of a clash, but they can never eliminate it completely. And the more use your program sees - the more transactions opened by more and more concurrent, active users - the sooner this somewhat lowered probability turns out to be still too much.
Think about it: SQL is about transactions, yet they had to invent and introduce an explicitly out-of-transaction device, which is what a generator/sequence is. If there were a reliable solution without them, it would simply have been used instead of creating such a non-SQLish, transaction-boundary-breaking tool.
When you say your SP "checks for PK violation", it is exactly the same as if you dropped the generator altogether and instead just issued the "good old"
:new_id = ( select max(auto_id)+1 from MyTable );
By your description you actually do something like that, but in a somewhat indirect way. Something like:
while exists( select * from MyTable where auto_id = gen_id(MyGen, +1))
do ;
:new_id = gen_id(MyGen, 0);
You may feel that, because you mentioned generators, you somehow overcame the cross-transaction invisibility problem. But you did not, because the very check "was the PK already taken" is done against the in-transaction view of the table.
That changes nothing: your two transactions Tx.A and Tx.B do not see each other's records, because neither has committed yet. Now it only takes some unlucky Tx.C that fails and downticks the generator for them to collide on the same ID.
Or not - you do not even need Tx.C and downticking at all!
Here we bump into the multithreading idea about "atomic operations".
Let's look at it again.
while exists( select * from MyTable where auto_id = gen_id(MyGen, +1))
do ;
:new_id = gen_id(MyGen, 0);
In a single-threaded application that code is okay: you just keep running the generator up until you find a free slot, then you just query the value without changing it. "What could possibly go wrong?" But in a multithreaded environment it is a rake waiting to be stepped on. Example:
Initial condition: the table has 100 rows (auto_id goes from 1 to 100), the generator = 100.
Tx.A starts adding a row, upticks the generator in the while loop and exits the loop. It has not yet reached the second line where the local variable gets assigned. Not yet. The generator = 101, no rows added yet.
Tx.B starts adding a row, upticks the generator in the while loop and exits the loop. The generator = 102, no rows added yet.
Tx.A gets to the second line and reads gen_id(MyGen, 0) into the variable for its new row. While it was 101 when it left the loop, it is 102 now!
Tx.B gets to the second line and reads gen_id(MyGen, 0) and gets 102 too.
Tx.A and Tx.B both try to insert a new row with auto_id=102.
RACE CONDITION - both Tx.A and Tx.B try to commit their work. One of them succeeds, the other fails. Which one? It is not predictable. The lucky one commits, the unlucky one fails.
The failed transaction downticks the generator.
Final condition: the table has 101 rows, auto_id runs consistently from 1 to 100 and then skips to 102. The generator = 101, which is less than MAX(auto_id).
Now you might want to add more hacks, I mean more prior checks, before actually inserting rows and committing. That will make mistakes even less probable, right? Wrong. The more checks you do, the slower the code gets. The slower the code gets, the greater the probability that while one thread runs through all those checks, another thread interferes and alters the situation that was checked a moment ago.
The fundamental issue with multithreading is that every check is a SEPARATE action, and between those actions the situation MAY change. Your procedure may check whatever it wants BEFORE actually inserting the row; it does not guarantee much. Because when you finally get to the row-inserting statement, all the checks you did are in the PAST, and the situation has potentially already been altered. The guarantees your checks gave in the PAST belong only to that past, not to the moment at hand.
And even if you no longer aim for a guaranteed sure thing, with every new check you add you cannot even be sure whether you just decreased or increased the probability of failure. Because multithreading is a bitch: it flows chaotically, out of your control.
So, remember the KISS principle. Until proven otherwise, you most probably do not need SP2 at all; you only need one single UPDATE OR INSERT statement.
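A rough sketch of that forward-only approach, with the table, column, generator and trigger names invented for the example: assign the ID in a BEFORE INSERT trigger, never roll the generator back, and accept that gaps in auto_id are harmless.

create trigger MyTable_bi for MyTable
active before insert position 0
as
begin
  -- the generator only ever ticks forward; a rolled-back insert just leaves a gap
  if (new.auto_id is null) then
    new.auto_id = gen_id(MyGen, 1);
end

The client then issues the one statement and nothing else:

update or insert into MyTable (name) values (:new_name) matching (name);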
PS. There was a pretty fun game in my school days called Pascal Robots. There were also C Robots, I heard, and probably implementations for many other languages. Pascal Robots, though, shipped with a number of already coded robots demonstrating different strategies and approaches. Some of them were really thought out in intricate detail. And there was one robot whose program was PRIMITIVE. It had only two loops: if you do not see an enemy, keep turning your radar around; if you do see an enemy, keep running toward it and shooting at it. That was all. What could this idiot do against sophisticated robots with creative attack and defense strategies, flanking maneuvers, optimal distances maintained by back-and-forth movements, escape tricks and more? Those sophisticated robots employed very extensive checks and very thought-through hacks triggered by those checks. So... that primitive idiot was the second or maybe third best robot in the shipped set. There were only one or two smarties that could outwit it. All the other robots this lean-and-fast idiot finished off before they could run through all their checks and hacks thrice. That is what multithreading does to programming. It was astonishing to watch those battles, which went so much against our single-threaded intuition.

Cocoa Scripting: Deletion of elements in a loop getting out of sync

While adding scriptability to my Mac program, I am struggling with the common programming problem of deleting items from an indexed array, where the item indexes shift due to the removal of items.
Let's say my app maintains a data store in which objects of type "Person" are stored. In the sdef, I've defined the Cocoa key allPersons to access these elements. My app declares an NSArray *allPersons.
So far, this works well. E.g., this script works fine:
repeat with p in every person
get name of p
end repeat
The problem starts when I want to support deletion of items, like this:
repeat with p in (get every person)
delete p
end repeat
(I realize that I could just write "delete every person", which works fine, but I want to show how "repeat" makes things more complicated.)
This does not work, because AppleScript keeps using the original item numbers to reference the items even after some of them are deleted, which naturally shifts the remaining items and their numbering.
So, considering we have 3 Persons, "Adam", "Bonny" and "Clyde", this will happen:
get every person
--> {person 1, person 2, person 3}
delete person 1
delete person 2
delete person 3
--> error number -1719 from person 3
After deleting item 1 (Adam), the other items get renumbered to item 1 and 2. The second iteration deletes item 2 (which is now Clyde), and the third iteration attempts to delete item 3, which doesn't exist any more at that point.
How do I solve this?
Can I force the scripting engine to not address the items by their index number but instead by their unique ID so that this won't happen?
It's not your ObjC code, it's your misunderstanding of how repeat with VAR in EXPR loops work. (Not really your fault either: they're 1. counterintuitive, and 2. poorly explained.) When it first encounters your repeat statement, AppleScript sends your app a count event to get the number of items specified by EXPR, which in this case is an object specifier (query) that identifies all of the person elements in whatever. It then uses that information to generate its own sequence of by-index object specifiers, counting from 1 up to the result of the aforementioned count:
person 1 of whatever
person 2 of whatever
...
person N of whatever
What you need to realize is that an object specifier is a first-class query, not an object pointer (not that Apple tell you this either): it describes a request, not an object. Ignore the purloined jargon: Apple event IPC's nearest living relatives are RDBMSes, not Cocoa or SOAP or any of the OO messaging crud that modern developers so fixate on as The One True Way To Do... well, EVERYTHING.
It's only when that query is sent to your application in an Apple event that it's evaluated against the relational graph that your Apple event IPC View-Controller - aka "Apple Event Object Model" - presents as an idealized, user-friendly representation of your Model's user data, and only then does it actually resolve to a specific Model object, or objects, with which the event handler should perform the requested operation.
Thus, when the delete command in your repeat loop tells your app to delete person 1 of whatever, all your remaining elements move down by one. But on the next iteration the repeat loop still generates the object specifier person 2 of whatever, which your script then sends off to your app, which resolves it to the second item in the collection – which was originally the third item, of course, until you shifted them all about.
Or, to borrow a phrase:
Nothing in AppleScript makes sense except in light of relational queries.
..
In fact, Apple events' query-based approach actually makes a lot of sense considering it was originally designed to be efficient over very high-latency connections (i.e. System 7's abysmally inefficient process switcher), allowing a single Apple event carrying one or more complex queries to manipulate many objects at once. It's even quite elegant [when it works right], but it is quite undone by idiots at Cupertino who think the best way to make programmers not hate the technology is to lie even harder about how it actually works.
So here, I suggest you go read this, which is not the best explanation either but still a damn sight better than anything you'll get from those muppets. And this, by its original designer that explains a lot of the rationale for creating a high-level coarse-grained query-based IPC system instead of the usual low-level fine-grained OO message passing crap.
Oh, and once you've done that, you might want to consider try running this instead:
delete every person whose name is "bob"
which is pretty much the whole point of creating a thick declarative-y abstraction that does all the work so the user doesn't have to.
And when nothing but an imperative client-side loop will do, you either want to get a list of by-ID object specifiers (which are the closest things to safe, persistent pointers that AEOMs can do) from the app first and then iterate over that, or at least use your own iterator loop that counts over elements in reverse:
repeat with i from (count every person) to 1 by -1
    tell person i
        ...
    end tell
end repeat
so that, assuming it iterates over an ordered array on the server side, it deletes from last to first, and so avoids the embarrassing off-by-N errors of your original script.
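For the by-ID variant mentioned above, a rough sketch, assuming the app's AEOM exposes an id property on person and supports the by-ID reference form:

repeat with personId in (get id of every person)
    delete (person id (contents of personId))
end repeat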
HTH
re: "If you want your scripable elements to be deletable, make sure you use NSUniqueIDSpecifiers to identify them."
Yes, Apple recommends using formUniqueId or formName for object specifiers, but you can't always do that. For instance, in the Text Suite, you really only have indexing to work with; e.g. character 1, word 3, paragraph 7, etc. You don't have unique IDs for text elements. In addition to deletion, ordering can be affected by other Standard Suite commands: open, close, duplicate, make, and move.
The app implementer is a programmer, but so is the scripter. So it is reasonable to expect the scripter to solve some problems themselves. For instance, if the app has 5 persons, and the scripter wants to delete persons 2 and 4, they can easily do so even with indexed deletion:
delete person 4
delete person 2
Deleting from the end of an ordered list forward solves the problem. AS also supports negative indexes, which can be used for the same purpose:
delete person -2
delete person -4
The key to solving this lies in implementing the objectSpecifier method correctly so that it does return an NSUniqueIDSpecifier.
My code so far only returned an index specifier, and that was wrong for this purpose. I guess that had I posted my code (which is, unfortunately, too complex for that), someone might have noticed my mistake.
So, I guess the rule is: if you want your scriptable elements to be deletable, make sure you use NSUniqueIDSpecifiers to identify them. For read-only element arrays, using an NSIndexSpecifier is (probably) safe, though, provided your element array has persistent ordering behavior.
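For reference, a minimal sketch of such an objectSpecifier implementation on the Person class. The allPersons key comes from the question's sdef; the uniqueID property and the application object as container are assumptions about this particular app:

- (NSScriptObjectSpecifier *)objectSpecifier {
    // A nil container specifier means the application object is the container.
    NSScriptClassDescription *appDescription =
        (NSScriptClassDescription *)[NSApp classDescription];
    // self.uniqueID is a hypothetical stable identifier stored on each Person.
    return [[NSUniqueIDSpecifier alloc]
        initWithContainerClassDescription:appDescription
                       containerSpecifier:nil
                                      key:@"allPersons"
                                 uniqueID:self.uniqueID];
}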
Update
As #foo points out, it's also important that the repeat command fetches the references to the items by using … in (get every person) and not just … in every person, because only the former leads to addressing the items by their id whereas the latter keeps indexing them as item N.

How can I tell when a LINQ query is enumerated?

This might seem to be a silly question at first, but please read on.
I know that LINQ queries are deferred and only executed when the query is enumerated, but I'm having trouble figuring out exactly when that happens. Certainly in a For Each loop, the query would be enumerated. What's the rule of thumb to follow? I don't want to accidentally enumerate over my query twice if it's a huge result.
For example, does System.Linq.Enumerable.First enumerate over the whole query? I ask for performance reasons. I want to pass a LINQ result set to an ASP.NET MVC view, and I also want to pass the First element separately. Enumerating over the results twice would be painful.
It would be great to turn on some kind of flag that alerts me each time a LINQ query is enumerated. That way I could catch scenarios when I accidentally enumerate twice.
You can add your own logging quite easily to see what's going on. Other than that, the lazy/eager bit is reasonably clear. Basically it's lazy when it can be - any time the return type is IEnumerable<T> or IOrderedEnumerable<T>. It's possible for those to be lazy because you can't get at any of the data without calling GetEnumerator(). Compare that to First() for example - it has to return a value to you. It can't defer anything.
As a general point, if you want to make sure that a query won't be evaluated more than once, call ToList or ToArray on it, then use the results of that several times. Again, those methods have to return a list or an array immediately, neither of which allows for lazy population. The query is evaluated, but then it's effectively disconnected from the resulting populated collection - the query won't be executed again, however much you examine the list.
In addition to the lazy/eager question, there's streaming/non-streaming: will the method read everything from the source enumerable, or just "sip" at it, reading when it needs to. Again, in general LINQ will only read when it has to - so while Reverse is non-streaming (but still lazy), Where and Select are streaming.
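A small self-contained sketch of both ideas, the do-it-yourself logging and the ToList materialization; everything here is invented for the example:

using System;
using System.Collections.Generic;
using System.Linq;

class EnumerationDemo
{
    static void Main()
    {
        // A deferred query whose projection logs every evaluation,
        // so any accidental re-enumeration becomes visible on the console.
        IEnumerable<int> query = Enumerable.Range(1, 3)
            .Select(n =>
            {
                Console.WriteLine($"evaluating {n}");
                return n * n;
            });

        List<int> results = query.ToList(); // the query runs exactly once, here
        int first = results.First();        // reads the in-memory list; nothing re-runs

        Console.WriteLine($"first = {first}, count = {results.Count}");
    }
}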
There is no hard and fast rule as to when a LINQ query will be enumerated and when it won't, partly because some methods will or won't depending on the underlying type of the query source.
Here is a quick breakdown. It is not complete by any means; it's mainly what I could come up with in 5 minutes.
Aggregate Functions
They enumerate the list entirely and immediately. They are usually recognizable as the extension methods which return a scalar value, for example Sum, Min, Max, Count, Last, etc.
Note: Count and Last do not necessarily enumerate the entire list. If the underlying type is convertible to ICollection<T>, they will use a more efficient method instead.
Front of the list Selectors
They only look at the first element of the list, and potentially the second. They are First, FirstOrDefault, Single, SingleOrDefault.
The above refers to the overloads which do not take a predicate. If they take a predicate, they are better classified as Inquiries (see below).
Inquiries
They will only enumerate the minimal amount of the list necessary to do the operation. This can be as little as 1 element or as much as the entire list.
Examples: Any, Contains
Create a new sequence and do no enumeration immediately
This is the vast majority of the operators in LINQ. Their cost is incurred when the new sequence is enumerated. Examples: Select, Where, GroupBy, Join, SkipWhile, Skip.