Send just the changes in a transaction from one stream to a workspace under a different parent via change palette - AccuRev

I am trying to send only the changes in a particular transaction from one stream to a workspace under a different parent via the change palette. Is there a way to do this?
Right now, when I do a merge in the change palette, it pulls in all changes up to that transaction and draws a merge arrow, which I don't want.
Example:
Let's say at the source stream, file x content is as follows:
abc (line added at txn:5)
def (line added at txn:7)
xyz
At the destination workspace, file x contents:
xyz
I just want to port txn 7 so that the destination file x content becomes:
def
xyz
But that doesn't happen; the Change Palette pulls in all changes up to txn 7.
Let's say I decide to review my merge, select only the txn 7 related changes, and promote them in my destination. If I later decide that I need txn 5 ported as well, it no longer shows up in the change palette, because I already ignored those changes during my previous merge. I don't want that to happen.

Right click on stream -> Show history -> Select the transaction you want -> Right click -> Send to -> Change Palette -> Choose destination stream

Do the children in a Shopify bulk operation JSONL with nested connections come right after their parent?

When parsing a bulk operation JSONL file with nested items from top to bottom, line by line, and I reach a new top-level parent object, does that mean I've gone through all the children of the previous parent?
Context
When processing a bulk operation JSONL file, I do some processing that requires having a parent and all of its children. I'd like to keep my memory requirements as small as possible, so I need to know when I'm done processing an object and all of its children.
Example for clarification
Using the documentation page's JSONL example:
{"id":"gid://shopify/Product/1921569226808"}
{"id":"gid://shopify/ProductVariant/19435458986123","title":"52","__parentId":"gid://shopify/Product/1921569226808"}
{"id":"gid://shopify/ProductVariant/19435458986040","title":"70","__parentId":"gid://shopify/Product/1921569226808"}
{"id":"gid://shopify/Product/1921569259576"}
{"id":"gid://shopify/ProductVariant/19435459018808","title":"34","__parentId":"gid://shopify/Product/1921569259576"}
{"id":"gid://shopify/Product/1921569292344"}
{"id":"gid://shopify/ProductVariant/19435459051576","title":"Default Title","__parentId":"gid://shopify/Product/1921569292344"}
{"id":"gid://shopify/Product/1921569325112"}
{"id":"gid://shopify/ProductVariant/19435459084344","title":"36","__parentId":"gid://shopify/Product/1921569325112"}
{"id":"gid://shopify/Product/1921569357880"}
{"id":"gid://shopify/ProductVariant/19435459117112","title":"47","__parentId":"gid://shopify/Product/1921569357880"}
If I'm reading the file line by line from top to bottom and I hit the Product with id gid://shopify/Product/1921569259576 on line 4, does this mean that I've already seen all of the product variants the JSONL file contains for the previous product (gid://shopify/Product/1921569226808)?
What a lot of people do is shell out and use tac to reverse the file. When you then parse the reversed file you process the children first, so by the time you hit the parent you already have all of its children and can move on.
This is nicer than reading the parent first and then the children, and then wondering whether you have seen all of the children or whether there are more.
Try it! It works!
My pseudo-code (Ruby, but you can convert it to whatever scripting language you want) looks like this:
require 'tempfile'
require 'open-uri'
require 'json'

# result (the bulk-operation query response) and shop_domain come from the surrounding app
inventory_file = Tempfile.new
inventory_file.binmode
uri = URI(result.data.node.url)
IO.copy_stream(uri.open, inventory_file) # store the large JSON Lines download in a tempfile
inventory_file.rewind # move from EOF back to the beginning of the file

reversed_path = "#{Rails.root}/tmp/#{shop_domain}.reversed.jsonl"
`tac #{inventory_file.path} > #{reversed_path}`
puts "Completed reversal of file using tac, so now we can quickly iterate the inventory"

variants = {}
File.foreach(reversed_path) do |line|
  data = JSON.parse(line)
  # play with my data
end
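As a rough sketch of how that pays off (reversed_path comes from the snippet above; process_product is a placeholder for your own handling), you can buffer child lines until you reach their parent:
require 'json'

pending_children = []
File.foreach(reversed_path) do |line|
  data = JSON.parse(line)
  if data["__parentId"]
    # in the reversed file the variants appear before their product, so buffer them
    pending_children << data
  else
    # we reached the parent, so every buffered child belongs to it
    process_product(data, pending_children) # placeholder for your own processing
    pending_children = []
  end
end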

SubmitForm then Patch results in "The data returned by the service was invalid"

I'm building a PowerApps app on Azure SQL.
The requirement
I have a form which has "Save" and "Commit" buttons.
Both buttons should save the form data. The Commit button should also set the database column "Confirmed" to 1.
I've read at length about how I can programmatically override the update value of a hidden control for this. But I'm not satisfied with the level of complexity (maintenance) required to get this working, i.e.:
Populate a variable with the current db value
In the button code set the variable value
In the form field, set the update property to the variable
What I'm Trying
So I'm trying a different approach: SubmitForm then Patch. Even though this requires an extra database call, I'd like to understand if this will work. This is the code for OnSelect in the commit button:
// Save the record
SubmitForm(frmEdit);
// Update confirmed to 1
Patch('[dbo].[Comments]',cRecord,{Confirmed:1});
Some Complexities
Note that my record is a variable, cRecord. In short, I want this app to be able to upsert based on URL parameters.
This is my App.OnStart, which captures URL values and inserts a record if required. Either way, the result of this event is that cRecord is set to the record to be edited.
// Cache employees and store lookups (as they are in a different db)
Concurrent(Collect(cEmployees, Filter('[dbo].[SalesPerson]', Status = "A")),Collect(cStores, '[dbo].[Store]'));
// Check for parameters in the URL. If found, set to Edit/Add mode
Set(bURLRecord,If((!IsBlank(Param("PersonId")) && !IsBlank(Param("Date"))),true,false));
// If URL Parameters were passed, create the record if it doesn't exist
If(bURLRecord,
Set(pPersonId,Value(Param("PersonId")));
Set(pDate,DateValue(Param("Date")));
// Try and find the record
Set(cRecord,LookUp('[dbo].[Comments]',SalesPersonId=pPersonId && TransactionDate = pDate));
If(IsBlank(cRecord),
// If the record doesn't exist, create it with Patch and capture the resulting record
Set(cRecord,Patch('[dbo].[Comments]',Defaults('[dbo].[Comments]'),{SalesPersonId:pPersonId,TransactionDate:pDate}))
);
// Navigate to the data entry screen. This screen uses cRecord as its item
Navigate(scrEdit);
)
frmEdit.Item is set to cRecord. As an aside, I also have a gallery that sets this variable when clicked, so we can also navigate here from a gallery.
Navigating using new and existing URL parameters works. Navigating from the gallery works.
The problem
When I press the Commit button against a record which has Confirmed=0 I get this popup error:
The data returned by the service is invalid
When I run this code against a record which already has Confirmed=1, I don't get an error.
If I run the PowerApps monitor it doesn't show any errors but it does show some counts being run after the update. I can paste it here if required.
I also tried wrapping the Patch in a Set in case its result was confusing the button event, but it didn't make a difference.
What I want
So can anyone offer me any of the following info:
How can I get more info about "The data returned by the service is invalid"?
How can I get this to run without erroring?
Is there a simpler way to do the initial upsert? I was hoping a function called Patch could upsert in one call, but it seems it can't.
With regard to the approach of setting the field beforehand, I'm happy to try this again, but I had some issues implementing it - understanding which control to edit and where.
Any assistance appreciated.
Edit
As per recommendations in the answer, I moved the patch code into OnSuccess
If(Confirmed=1,Patch('[dbo].[CoachingComments]',cRecord,{Confirmed:1}));
But now I get the same error there. Worse, I cleared out OnSuccess and just put SubmitForm(frmEdit); into the OnSelect event, and it is saving data but still saying
The data returned by the service was invalid
First things first: refactoring has multiple steps, and I can't type it all out at once...
The SubmitForm and Patch issue:
Never use SubmitForm with extra complexity bolted on. SubmitForm is only the trigger that sends the form out; the form handler will work with your data. If you haven't filled out the form correctly, you don't want to trigger your Patch action.
So: your form has an OnSuccess property. Place your Patch code there, and change the cRecord in your Patch statement to:
YourForm.LastSubmit
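Concretely, a minimal sketch using the names from the question (bConfirm is a hypothetical flag variable, needed because OnSuccess fires for both buttons) could look like this:
// Commit button OnSelect: set a flag, then just submit the form
Set(bConfirm, true);
SubmitForm(frmEdit);

// Save button OnSelect: submit without the flag
Set(bConfirm, false);
SubmitForm(frmEdit);

// frmEdit.OnSuccess: runs only after a successful save
If(bConfirm, Patch('[dbo].[Comments]', frmEdit.LastSubmit, {Confirmed: 1}));
Using frmEdit.LastSubmit instead of cRecord means the Patch targets the record the form actually wrote, including any values generated during the save.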

Custom fields not showing on table output

I'm having trouble figuring out why two custom fields are not showing up in transaction VA05 (its output to be specific).
Before reading below: I didn't make any changes to the program myself, because I wasn't assigned to this task; one of my colleagues made the change.
I followed this tutorial, which explains how to expand the VA05 output table with custom fields.
To be clear, I didn't actually carry out the guide; I went through it only to understand what my colleague might have done to achieve what we need and what might be needed to fix the issue.
So, as the tutorial suggests, this is what should be done in short:
Go to SE11 and search for VBEP Database table and click display.
Click on Append Structure
Click on Create Append
Insert the name of the append; in my case it's ZZVBEP_MECC
Insert two fields: ZZDELIVERYDATE and ZZREQDELIVERYDATE
Save and activate
If you go to the VBEP table now, you'll see at the bottom the field .APPEND with the Data Element column set to ZZVBEP_MECC.
Now, following the tutorial, the include program V05TZZMO needs to be changed. Here's ours:
***INCLUDE V05TZZMO .
* This form is called in the include LV05TFMO.
FORM MOVE_USERFIELDS USING ZP.
CASE ZP.
WHEN 'VBAK'.
WHEN 'VBAP'.
CHECK LVBAP-PSTYV NE 'ZRAC'.
CHECK LVBAP-PSTYV NE 'ZCAC'. "Escl.acconti
MOVE LVBAP-KDMAT TO LVBMTV-ZZKDMAT.
PERFORM OFFENE_AUFTRAGSMENGE.
SELECT SINGLE * FROM VBKD WHERE VBELN = LVBAP-VBELN.
IF SUBRC = 0.
MOVE VBKD-KDGRP TO LVBMTV-ZZKDGRP.
ENDIF.
WHEN 'VBEP'.
MOVE LVBEP-ZZDELIVERYDATE TO LVBMTV-ZZDELIVERYDATE.
MOVE LVBEP-ZZREQDELIVERYDATE TO LVBMTV-ZZREQDELIVERYDATE.
ENDCASE.
ENDFORM.
When I run VA05, however, these two custom fields are not there, nor are they in the Change Layout screen.
Is it possible that the code in V05TZZMO is not in the right place? Looking at the tutorial's code I saw that they put those statements in WHEN 'VBAK' instead of WHEN 'VBEP'.
Also, the ENHANCEMENT 1 ZZ_SD_VBAK_VA05 is not present in my code.
There might be something I missed. As I said above, I didn't make these changes so I cannot tell exactly what my colleague did.
Custom fields must also be included in the structure VBMTVZ.
Finally solved.
Here's the step-by-step procedure we followed in order to fix the issue:
Create an Append Structure
Go to TCode SE11 and open database table VBMTVZ
Click on Append Structure... (it's in the program's toolbar), or Goto > Append Structure
Create a new Append Structure (there's a button for it)
Enter the name of the structure (in our case it was ZZVBMTVZ_MECC)
A new structure is then created. Enter the needed components/fields
Save and activate the structure.
Note: To check that the structure is actually appended, go back and open table VBMTVZ again. At the end of the list of fields there must be one field called .APPEND with the type set to the name of the structure you created. After this field there are also all the fields you created in that structure.
Change V05T program
Now you'll need to make some changes to the program linked to the TCode VA05.
To do this:
Go to TCode SE80.
Select Function Group and insert V05T as the name of the group.
Open the sub-folder called Includes and look for V05TZZMO
Open this include file in change mode.
The file should look like this:
***INCLUDE V05TZZMO .
* This form is called in the include LV05TFMO.
FORM MOVE_USERFIELDS USING ZP.
CASE ZP.
WHEN 'VBAK'.
* header
* MOVE LVBAK-XXXXX TO LVBMTV-ZZXXXXX.
WHEN 'VBAP'.
* item
* MOVE LVBAP-XXXXX TO LVBMTV-ZZXXXXX.
WHEN 'VBEP'.
* schedule line
* MOVE LVBEP-XXXXX TO LVBMTV-ZZXXXXX.
ENDCASE.
ENDFORM.
Add the following line of code to the WHEN 'VBEP'. code block.
MOVE LVBEP-name_of_your_field TO LVBMTV-name_of_your_field.
You need to add that line for each field you appended.
In my case, the final result was the following:
***INCLUDE V05TZZMO .
* This form is called in the include LV05TFMO.
FORM MOVE_USERFIELDS USING ZP.
CASE ZP.
WHEN 'VBAK'.
* header
* MOVE LVBAK-XXXXX TO LVBMTV-ZZXXXXX.
WHEN 'VBAP'.
* item
* MOVE LVBAP-XXXXX TO LVBMTV-ZZXXXXX.
WHEN 'VBEP'.
* schedule line
* MOVE LVBEP-XXXXX TO LVBMTV-ZZXXXXX.
move LVBEP-zzdeliverydate to lvbmtv-zzdeliverydate.
move LVBEP-zzreqdeliverydate to lvbmtv-zzreqdeliverydate.
ENDCASE.
ENDFORM.
Save the file and activate the program.
At this point you should be able to see your fields in the output of TCode VA05.
Notes
In order to be able to change the include program V05TZZMO you need an Access Key; without it you cannot change the program.
If the fields you appended aren't showing even after activating, try:
logging out and logging back in, or
resetting the buffer with the TCode /$SYNC

How does the Global Image ID work? - Linking Dual-EELS datasets

Is there a way to script a pair of dual-EELS SIs so that DM recognises them as siblings (say, for use in the "SI->Align SI by Peak" menu option)?
I have a script which performs a transformation on a pair of dual-EELS SIs, where the results are given in new images with all the tags copied from the originals. The new SIs do not seem to be acknowledged by the SI options as a pair, however. An MWE of how this occurs is below:
image a, b
GetTwoLabeledImagesWithPrompt("Get SI", \
"Get DualEELS SIs", \
"Low-Loss", a, \
"High-Loss", b)
image LL, HL
LL := a.ImageClone()
HL := b.ImageClone()
LL.ShowImage()
HL.ShowImage()
Assume that the two inputs are real dual-EELS SIs. Attempting afterward to run a method like "SI->Align SI by Peak" on the outputs does not recognise the second SI as a sibling.
I suspect that my issue is in properly assigning the four EELS:Dual acquire sibling:UID tags highlighted in the attached image; however, I have no idea how (or if) these are accessible from the scripting language.
Thanks in advance for any help you are able to render.
Yes, "siblings" in DigitalMicrograph are recognized by tag-checks and tagging of the Unique Image Id (UID).
Depending on the exact application/plugin, there might be additional tag-checks before a sibling can be accepted (i.e. "is it EELS data?", "is it spatially compatible?", etc.) but he prime-mechanism is using the UID.
A bit of background info
The UID is a set of four long numbers generated at random whenever new image data is created, and then stored with the data. It is "unique" under the assumption that a set of four randomly generated 8-byte longs is "unique".
If you create an image, save it to disc, and open it, the UID will be the same. (It is stored with the data.)
If you ImageClone() an image, it gets a new UID.
If you copy an image file on the hard drive and rename it, it will retain the UID.
Creating and using UIDs
The commands to get an image's UID are described in the F1 help documentation, and the example section there even has a script showing how one uses UIDs to "link" data together.
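As a rough sketch of that idea (note: the UID command name below is quoted from memory and should be verified against that help page, and the "EELS:Dual acquire sibling:UID" tag path is the one from the question's screenshot), linking the two clones from the MWE as siblings amounts to writing each image's UID into the other image's sibling tag:
// Sketch only: register LL and HL as Dual-EELS siblings of each other.
// Verify ImageGetUniqueID and the sibling tag path against the F1 help
// and against the tags of a genuinely acquired Dual-EELS pair.
void LinkAsDualEELSSiblings( image img1, image img2 )
{
    TagGroup uid1 = NewTagList()
    TagGroup uid2 = NewTagList()
    ImageGetUniqueID( img1, uid1 )   // fills the tag list with the four UID numbers
    ImageGetUniqueID( img2, uid2 )
    img1.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "EELS:Dual acquire sibling:UID", uid2 )
    img2.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "EELS:Dual acquire sibling:UID", uid1 )
}
LinkAsDualEELSSiblings( LL, HL )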

Can the user be forced to enter a non-initial value by the Table Maintenance screen in SM30?

I'd like to force the user to choose between Yes and No, and not let them add an entry where the value is initial.
This is regardless of whether I check the Initial checkbox in the table definition.
Can this be done?
Domain data type: CHAR, 1 character, no conversion routine.
Value range: single values:
'1' description = 'Yes'
'2' description = 'No'
By far the easiest way is to use a data element in the table that only allows non-initial values.
If you can't change the data element, you can try using table maintenance events in the Table Maintenance Generator.
You may be able to use event 01 (Before save) or event 05 to build a manual check, but event 05 does not kick off on change.
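For the event 01 route, a rough sketch of the check routine could look like the following. It assumes a table named ZMY_TABLE with the flag field ZZ_YESNO (both placeholder names) and that the routine has been registered for event 01 in the Table Maintenance Generator (Environment -> Modification -> Events); the exact handling of the TOTAL table depends on your generated maintenance dialog.
* Event 01: Before saving the data in the database
FORM check_before_save.
  DATA ls_row TYPE zmy_table.            "placeholder table name

  LOOP AT total.
    CHECK <action> = neuer_eintrag OR <action> = aendern.
    MOVE total TO ls_row.
    IF ls_row-zz_yesno IS INITIAL.       "placeholder field name
      MESSAGE 'Please choose Yes or No' TYPE 'S' DISPLAY LIKE 'E'.
      vim_abort_saving = 'X'.            "stop the save
      EXIT.
    ENDIF.
  ENDLOOP.
ENDFORM.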
If that doesn't work, you can still manually add a check in the PAI of the screen; however, you run the risk that if someone regenerates the maintenance screen, they will forget (or not know) to put the check back in.
You can also set the compare flag, but from what I've seen the flag doesn't actually force you to redo any of the changes, and it is still pretty easy to miss.
You can edit the screen and set the field to mandatory. Be aware that you will lose the change if the screen is re-generated.
You can do that with these steps:
In SE11, choose the menu Utilities -> Table Maintenance Generator.
In the Table Maintenance Generator, go to the menu Environment -> Modification -> Maintenance Screens and select the screen (usually 0001). In the Element List tab you will find the Special attr. section; in the Input field, choose Required for the field you want to be mandatory.