FPM - WD - LPD: POWL as list popup to choose a record and then return it - abap

The situation is like this:
We have a POWL where I added a toolbar button.
A click on that button opens an FPM POWL, which I created with the help of this link:
https://sapcodes.com/2016/03/25/powl-in-fpm/
(INFO: this POWL takes its data from a standard FBI view.)
The new FPM POWL is properly maintained (I hope so, because it gets shown) inside LPD_CUST so that the popup starts inside the FPM framework; the flag that it should wait is also set on both caller and callee.
OK, let's proceed.
The calling class (feeder class "A", implementing IF_POWL_FEEDER) calls the popup like this:
DATA(lt_lpd_content) = lr_lpd_handle->mt_content.
READ TABLE lt_lpd_content
REFERENCE INTO DATA(lr_lpd_content)
WITH KEY application_alias = 'ZSRM_GP/BP_POPUP'.
lr_lpd_handle->launch_application( iv_application_id = lr_lpd_content->application_id ).
The user shall simply be able to pick one business partner.
After the user picks a record and hits a special toolbar button of the "popup", its feeder class "B" (also implementing IF_POWL_FEEDER) does what it needs to do inside HANDLE_ACTION, and up to now I have been trying to figure out HOW to pass the record back to the calling class "A".
Because, other than expected (both LPD_CUST entries have the "Synch/await" flag set), class "A" immediately continues processing any code I place after
lr_lpd_handle->launch_application( iv_application_id = lr_lpd_content->application_id ).
So I suppose this "works as designed" and is asynchronous.
My first experiments assumed a synchronous call: I exported the picked business partner's number to a memory ID and closed the "popup".
But I never returned to the caller, where I wanted to start coding the rest of the requirement.
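For reference, that memory-ID attempt boiled down to something like this (the memory ID name is made up for illustration):

" in feeder class "B" (HANDLE_ACTION), right before closing the popup;
" lv_partner TYPE bu_partner holds the business partner the user picked
EXPORT picked_bp = lv_partner TO MEMORY ID 'ZBP_PICKED'.

" in feeder class "A", directly after launch_application( ) - which, as described, runs too early
DATA lv_partner TYPE bu_partner.
IMPORT picked_bp = lv_partner FROM MEMORY ID 'ZBP_PICKED'.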
So my second try is that I created an event in feeder class "B" (the popup) and a handler for it inside feeder class "A" (the caller).
Inside the constructor of class "A" I register the handler via SET HANDLER ... FOR ALL INSTANCES, and the tests are just about to start.
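In code, this second approach would look roughly like the sketch below (class, event and method names are made up; feeder "B" would RAISE EVENT partner_picked from its HANDLE_ACTION before closing the popup):

CLASS zcl_popup_feeder DEFINITION.              " feeder class "B"
  PUBLIC SECTION.
    EVENTS partner_picked
      EXPORTING VALUE(iv_partner) TYPE bu_partner.
ENDCLASS.

CLASS zcl_caller_feeder DEFINITION.             " feeder class "A"
  PUBLIC SECTION.
    METHODS constructor.
    METHODS on_partner_picked
      FOR EVENT partner_picked OF zcl_popup_feeder
      IMPORTING iv_partner.
ENDCLASS.

CLASS zcl_caller_feeder IMPLEMENTATION.
  METHOD constructor.
    SET HANDLER me->on_partner_picked FOR ALL INSTANCES.
  ENDMETHOD.
  METHOD on_partner_picked.
    " iv_partner holds the business partner picked in the popup
  ENDMETHOD.
ENDCLASS.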
But I really hate this approach... is there a best practice for this?
I cannot imagine that I am the only one with this kind of requirement, which in simple terms means:
a "list popup"... and we all know how simple that is in SAP GUI, but inside the FPM-POWL-LPD environment I just cannot get it to work.
EDIT: Maybe I should do something in here to FORCE a blocking popup call?
Or can I somehow populate the exporting parameters of HANDLE_ACTION of the callee so that they are returned to the caller's HANDLE_ACTION? À la POWL_FORWARD_anything?

Related

SubmitForm then Patch results in "The data returned by the service was invalid"

I'm building a PowerApps app on Azure SQL
The requirement
I have a form with "Save" and "Commit" buttons.
Both buttons should save the form data; the Commit button should also set the database column Confirmed to 1.
I've read at length about how I can programmatically override the update value of a hidden control for this, but I'm not satisfied with the level of complexity (and maintenance) required to get that working, i.e. (see the sketch after this list):
Populate a variable with the current db value
In the button code set the variable value
In the form field, set the update property to the variable
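For reference, that override pattern usually boils down to something like this (the variable name is made up):

// Screen OnVisible: start the variable at the current db value
Set(varConfirmed, cRecord.Confirmed);

// Commit button OnSelect: flag the record, then save
Set(varConfirmed, 1); SubmitForm(frmEdit);

// Update property of the (hidden) Confirmed card inside frmEdit
varConfirmed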
What I'm Trying
So I'm trying a different approach: SubmitForm then Patch. Even though this requires an extra database call, I'd like to understand whether it will work. This is the code for OnSelect on the Commit button:
// Save the record
SubmitForm(frmEdit);
// Update confirmed to 1
Patch('[dbo].[Comments]',cRecord,{Confirmed:1});
Some Complexities
Note that my record is a variable, cRecord. In short I want this app to be able to upsert based on URL parameters.
This is my App.OnStart, which captures URL values and inserts a record if required. Either way, the result of this event is that cRecord is set to the record to be edited.
// Cache employees and store lookups (as they are in a different db)
Concurrent(
    Collect(cEmployees, Filter('[dbo].[SalesPerson]', Status = "A")),
    Collect(cStores, '[dbo].[Store]')
);
// Check for parameters in the URL. If found, set to Edit/Add mode
Set(bURLRecord, If(!IsBlank(Param("PersonId")) && !IsBlank(Param("Date")), true, false));
// If URL parameters were passed, create the record if it doesn't exist
If(bURLRecord,
    Set(pPersonId, Value(Param("PersonId")));
    Set(pDate, DateValue(Param("Date")));
    // Try and find the record
    Set(cRecord, LookUp('[dbo].[Comments]', SalesPersonId = pPersonId && TransactionDate = pDate));
    If(IsBlank(cRecord),
        // If the record doesn't exist, create it with Patch and capture the resulting record
        Set(cRecord, Patch('[dbo].[Comments]', Defaults('[dbo].[Comments]'), {SalesPersonId: pPersonId, TransactionDate: pDate}))
    );
    // Navigate to the data entry screen. This screen uses cRecord as its item
    Navigate(scrEdit);
)
frmEdit.Item is set to cRecord. As an aside, I also have a gallery that sets this variable value when clicked, so we can also navigate here from a gallery.
Navigating using new and existing URL parameters works. Navigating from the gallery works.
The problem
When I press the Commit button against a record which has Confirmed=0, I get this popup error:
The data returned by the service is invalid
When I run this code against a record which already has Confirmed=1, I don't get an error.
If I run the PowerApps monitor it doesn't show any errors, but it does show some counts being run after the update. I can paste it here if required.
I also tried wrapping the Patch in a Set in case its result was confusing the button event, but it didn't make a difference.
What I want
So can anyone offer me any of the following info:
How can I get more info about "The data returned by the service is invalid"?
How can I get this to run without erroring?
Is there a simpler way to do the initial upsert? I was hoping a function called Patch could upsert in one call, but it seems it can't.
With regards to the set-the-field-beforehand approach, I'm happy to try this again, but I had some issues implementing it, namely understanding which control to edit and where.
Any assistance appreciated.
Edit
As per the recommendations in the answer, I moved the Patch code into OnSuccess:
If(Confirmed=1, Patch('[dbo].[CoachingComments]', cRecord, {Confirmed:1}));
But now I get the same error there. Worse, I cleared out OnSuccess and just put SubmitForm(frmEdit); into the OnSelect event, and it is saving the data but still saying
The data returned by the service was invalid
First things first: refactoring has multiple steps, and I can't type it all out at once...
The SubmitForm and Patch issue:
Never use SubmitForm with extra complexity on top. SubmitForm is only the trigger to send the form out; the form handler will work with your data. If you haven't filled out the form correctly, you don't want to trigger your Patch action.
So:
Your form has an OnSuccess property. Place your Patch code there.
And change cRecord in your Patch statement to:
YourForm.LastSubmit
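Concretely, with the table and column names from the question, the OnSuccess formula would be something like:

// frmEdit.OnSuccess - runs only after the form has saved successfully
Patch('[dbo].[Comments]', frmEdit.LastSubmit, {Confirmed: 1})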

RSA Archer user cannot specify a date in the future / past

Does anyone have any good patterns for RSA Archer validation that prevent a user from saving a record when a given date is in the future (or past)?
Currently I am catching this with calculated fields after the data has been saved, in a data exceptions report. But ideally I would like to catch it early, before the user saves the record.
I would suggest that you use a custom object in this case.
So remove the basic onclick attribute of the SAVE and APPLY buttons.
In your custom object, check whether the entered date matches the system date (or the time zone you need) and set a flag. Based on the flag value, you can call the actual function of the SAVE or APPLY button.
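As a rough illustration of that idea (the field id is a placeholder; the real, Archer-generated id and the call to the original save handler have to be taken from the rendered page source):

function validateNoFutureDate() {
    // new Date(...) parsing may need adjusting to the field's date format
    var entered = new Date($('#dateFieldId').val());   // placeholder element id
    var today = new Date();
    today.setHours(0, 0, 0, 0);
    if (entered > today) {
        WarningAlert('The date may not be in the future.', 'Validation');
        return false;   // flag: block the save
    }
    return true;        // flag: go ahead and call the original SAVE/APPLY handler
}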
Hope that helps!
Alex, Tanveer is correct. You have to use a custom object with embedded JavaScript code to implement the described functionality.
You will need to create a function that validates the value entered by the end user and either accepts it or makes the user correct it.
Now, you have two options for how to do it:
1. You can attach your validation function to the Save and Apply buttons as Tanveer described. I have shared similar code in the following question before; you can review it here: LINK
2. You can attach your validation function directly to the element you plan to validate, so that when the user is done with the input element and it loses focus, your function is called. Here is a sample with jQuery:
$('#elementid').blur(function() {
// validate entered value here
// if required show a pop-up message
WarningAlert(msg, title);
});
Good luck!

What's wrong with my code? (GML)

OK, so I'm sorry to be asking, but I'm trying to make it so that when I press Z, a portal appears at my Spr_player's coordinates; however, if one already exists, I want it to be erased first, and I'm simply wondering what I've done wrong. Again, sorry for bothering you. (Please note that I am a bad programmer, and I apologize if I broke any rules.)
if object_exists(portal)
{
instance_destroy()
action_create_object(portal,Spr_player.x,Spr_player.y)
}
else
{
action_create_object(portal,Spr_player.x,Spr_player.y)
}
The instance_destroy() statement destroys the current (self) instance, i.e. the one executing the code. You must use the with (<objectID>) { instance_destroy() } syntax to destroy another instance.
As long as there is only one instance of portal in the room, this code should work:
if object_exists(portal)
{
    with (portal) instance_destroy(); // you also need a semicolon here to separate this
                                      // statement from the next; it is good practice to
                                      // do this after all statements, as I have done
    action_create_object(portal,Spr_player.x,Spr_player.y);
}
else
{
    action_create_object(portal,Spr_player.x,Spr_player.y);
}
If there are multiple instances of portal, with (portal) actually runs the block for every one of them, so all of them would be destroyed. Not that it matters here: since each time a new portal is created the existing one is destroyed, you will only ever have one at a time.
Another way of doing this is just to move the existing portal to the new position. The only difference here is that the create event of the portal will not be executed and any alarms will not be reset.
portal.x=Spr_player.x
portal.y=Spr_player.y
Again this will only move the first portal if there are more than one.

SharePoint Workflow Error: "Unable to transform the input lookup data into the requested type" BUT only on New Item Creation

FYI to start: I am aware of how to properly set up an update to a lookup, and am 99% positive I've done this correctly.
I know this because when I set the workflow to start automatically when an item is changed, it works perfectly. But when I simply change this setting so that it starts automatically on new item creation, it cancels the workflow and I get a "Coercion Failed: Unable to transform the input lookup data into the requested type." If both options are checked it fails on creation, but simply clicking Edit on the item properties and then Save makes it work.
The workflow is on a document library and works as follows:
The user selects the Work Task lookup from a dropdown in the edit properties form after uploading, and then saves the item (adding it to the document library). The workflow is supposed to look at the selected Work Task lookup, pull the Account and Effective Date-Type lookup IDs that the Work Task item has, and set the document's identical fields to the same values.
Here is the code for the workflow, if it helps:
If Current Item: Parent Task is not empty
If Current Item: Sub Task is not empty
Log Both are empty to workflow history list
Then Set Account to Work Tasks:Account
Then Log Set Account to workflow history list
Then Set Effective Date and Type to Work Tasks: Effective Date and Type
Then Log Set EffDateType to the workflow history list
This is all done in one step. I also added additional steps to test whether the Account and Effective Date-Type fields have been set properly and, if not, to set them again. But every time I run the workflow on change (and it works), it correctly sets these fields based on the first step (posted above), and the additional checks log to the history that they are not needed.
As an example, the lookup for the integer for Work Tasks:Account is set up as follows:
Data Source: Work Tasks (a list)
Field from Source: Account (a lookup)
Return Field as: Lookup ID (as Integer)
Find the List Item
Field: Title (from the Work Tasks list)
Value: Current Item: Parent Task (which is a lookup of the "Title" field from the Work Tasks list, set to return the value as a Lookup Value (as Text))
The Effective Date and Type setting is pretty much identical.
So, does anyone have any insight? I've tried running it as an impersonated step, adding a workflow pause (for 1 minute), and changing the lookup types in case I messed them up to start with, but ultimately the workflow above DOES work, only when I set it to "Automatically start when an item is changed", NOT "Automatically start on new item creation" like I need it to.
Oh yes, FYI, I am using SPServices CascadingDropDown on the Work Task and Sub Task fields of the document library form, but I honestly do not believe this has anything to do with my issue.
UPDATE:
I've talked with another developer, and he believes the issue is that the workflow runs too quickly, before the item has created an ID for itself, which it needs in order to do the lookups. He had me add another "Pause Workflow" at the very top of my workflow code (above the If conditions) and set it to 1 minute.
It then worked properly.
The downside is that we want this labeling to happen as close to item creation as possible, because a view of the library relies on grouping by Account and Effective Date and Type. To add to this downer, Microsoft's Pause Workflow only allows 1 minute or more, and the timer used for it is often off, resulting in a pause longer than that. So far, every test is showing a 2-minute minimum on the pause.
A possible alternative for populating the fields instantaneously is to use JavaScript and SPServices to do the lookup against the Task list, pull the Account and Effective Date-Type fields, and then populate them; but my JavaScript is not very strong and I would need help doing this. If anyone has any suggestions, I would appreciate them.
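For what it's worth, the SPServices call for that alternative would look roughly like this (the internal field names and the parentTaskTitle variable are guesses and would have to be checked against the list settings):

$().SPServices({
  operation: "GetListItems",
  async: false,
  listName: "Work Tasks",
  CAMLViewFields: "<ViewFields><FieldRef Name='Account' /><FieldRef Name='Effective_x0020_Date_x0020_and_x0020_Type' /></ViewFields>",
  CAMLQuery: "<Query><Where><Eq><FieldRef Name='Title' /><Value Type='Text'>" + parentTaskTitle + "</Value></Eq></Where></Query>",
  completefunc: function (xData, Status) {
    $(xData.responseXML).SPFilterNode("z:row").each(function () {
      var account = $(this).attr("ows_Account");   // lookup values come back as "ID;#Value"
      // copy the values into the document's form fields here
    });
  }
});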
(Answered in a question edit. Converted to a community wiki answer. See Question with no answers, but issue solved in the comments (or extended in chat).)
The OP wrote:
After further testing, I don't know if it is the ID for the item. I changed the start of the workflow to wait until a field in the item changes. I set it to wait until the ID field is not 0 (since you cannot set it to null), and it still does not work.
6/14/2012 4:13 PM Comment System Account Waiting on ID
6/14/2012 4:13 PM Comment System Account Waiting complete on ID
6/14/2012 4:13 PM Error System Account Coercion Failed: Unable to transform the input lookup data into the requested type.
I have tried other fields as well, like waiting until the Document ID value is not empty, and it will wait, log that the wait completed, and then fail.
UPDATE: This issue has something to do with the Parent Task field. I have solved it, without having to wait for a period of time, by changing the wait above to wait until the Parent Task field is not empty. The workflow then completes fine.
Does anyone know why there is a delay, though? I've solved it, but still don't fully understand what takes so long.
The main fault has been solved (hence the answer), and the remaining point about the reasons for the delay would probably be a discussion point or not specific enough for SO. Any further clarification can be edited in here.

Logging Application Block-Microsoft Enterprise Library 4.1

In the Logging Application Block, Logger.Write takes an event ID as one of its parameters, which is an integer. So how do I decide what should be passed as the event ID?
BTW, do you really need to use the event ID? I think you can just pass the string you want to log:
Logger.Write("SomeMessage");
EDIT: I meant there should be another overload which takes just the string you want to write.
EDIT: From here:
EventId - a value you can use to further categorise Log Entries (defaults to 0 for a LogEntry and to 1 for a LogEntry implicitly created by Logger.Write);
What we do is gather the different "stories" that you want to report on and then assign a sequence of event IDs to each of those stories. So in short, come up with a system that works for you and document it for future reference.
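For example, if a block of IDs such as 1000-1099 were reserved for an "ordering" story in such a scheme (the numbers, message and category here are made up), a call could look like:

using Microsoft.Practices.EnterpriseLibrary.Logging;

// message, category, priority, eventId - 1001 comes from the documented range for this "story"
Logger.Write("Order submitted", "General", 2, 1001);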