I'm new to transport management in SAP. How does it work when customizing requests are imported into PROD? Are the customizing tasks imported in order of creation?
Say a released customizing task changes a specific table record. Afterwards, I want to correct that change, so I create a 2nd customizing task that changes the exact same record. Is there any risk that the 1st (wrong) task gets imported after the 2nd (right) one?
Thanks.
It depends on the transport strategy.
From your description it sounds like you are talking about the Import Queue strategy with mass transports, about which the SAP Help clearly states:
All requests are imported into the target systems in the order they were exported.
The single transports approach implies you are doing imports manually, in whatever order you need.
So yes, the order matters.
I have created a custom Business Object using transaction BOBX. I would like to implement Change Documents for this BO to keep a record of all the transactional data changes made to it. These Change Documents should contain all the relevant information: the changed object dataset, old and new values, date and time of the change, and the name of the person who made the change.
Is it possible?
Thanks in advance.
If you really want to use SAP Change Documents, SAP provides a reusable BO /BOFU/CHANGE_DOCUMENT that allows you to record all changes to your BOs. On Business Suite systems the BO is part of the Business Suite Foundation layer; on S/4 systems it is part of the S/4 Foundation layer. For more details on how to integrate it, please check the official help (Using the Change Document Adaptor).
I have some processes that ran with old process definitions. But due to a requirement change, the user task data has been updated with new attributes and this new process definition has been deployed. I'm aware that "SetProcessDefinitionVersionCmd" can be used to point the processes to the new definition/version.
I would like to know how to migrate the old process data so that the newly added attributes of the user task are present in those instances as well.
There is no easy way to migrate process instance data; however, when you set an instance to the new process definition version, its existing data is carried over with the migrated instance.
What you have to be careful of is to include null checks for any data that may not be present in the migrated process instances.
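For example (a minimal sketch; the variable name newAttribute, the fallback value and the delegate class are made up), any delegate that reads a newly added attribute should tolerate its absence in migrated instances:

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class NewAttributeAwareDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // "newAttribute" only exists on instances started with the new definition.
        Object raw = execution.getVariable("newAttribute");

        // Migrated instances will not carry it, so fall back to a sensible default.
        String newAttribute = (raw != null) ? raw.toString() : "defaultValue";

        execution.setVariable("newAttribute", newAttribute);
    }
}
```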
Hope this helps,
Greg
Indeed, there is no easy way to migrate. However, depending on the differences between the two definitions, and on the extent to which you would rather avoid SetProcessDefinitionVersionCmd, you may find DynamicBpmnService useful when combined with detecting the definition version inside your logic.
And yes, another way would be to use SetProcessDefinitionVersionCmd, but be extra cautious with tasks that were actually active prior to the migration. As Activiti's database model has some redundant data (some of it for performance reasons), you are better off studying the DB tables for these tasks first and then inspecting the before and after migration state. For example, keeping up with a simple changed attribute is much easier than with a boundary event added to an active User Task, which affects the "execution tree".
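As a rough sketch of how the command is typically executed (the process instance ID and target version are placeholders):

```java
import org.activiti.engine.ManagementService;
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.impl.cmd.SetProcessDefinitionVersionCmd;

public class MigrateInstanceExample {

    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        ManagementService managementService = engine.getManagementService();

        // Re-point an existing instance at version 2 of its process definition.
        // The instance must be in a state (activity) that also exists in version 2.
        managementService.executeCommand(
                new SetProcessDefinitionVersionCmd("someProcessInstanceId", 2));
    }
}
```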
I would also advise comparing the SetProcessDefinitionVersionCmd implementations of Activiti and Camunda; it is a pity that such enhancement efforts are separated, but that is another story.
I have a question on how we can import/synchronise products from our back-office to CQ5 front end.
The target architecture is pretty simple: a custom back-office managing all the products (basically it will be the source of truth), and a CQ5-driven website to show search results (driven by Adobe SearchAndPromote) and product details. Purchase transactions will be handled outside of CQ5.
I went through http://dev.day.com/docs/en/cq/current/ecommerce/eCommerce-framework.html and I think I have some idea of the direction we should move in, but I would like someone to confirm that my understanding is correct.
1) I need to create a scheduled job running on the Author node that would call the back-office and import products as a JSON feed. I use the annotation-based @Service(Runnable.class) - Is there a way to set it up so it runs on the Author node only?
2) Create a custom service (the service mentioned above) that will actually create all the nodes in CRX. If I have desktop and mobile versions of the site, do I need to create all those nodes twice? Are there any tips on an easier way to create them?
3) Let CQ5 replicate those products to publish nodes.
Is there an easier way? I mean, if I were using a more standard web app, I would have one controller to show product details, two templates (one for mobile, one for desktop), and a service that would call the back-office and return details for the requested product. But the Sling world is very different, and I want to check if I understand it correctly.
Cheers.
Here are some answers:
1) Here is a good article about different configs for different run modes: http://helpx.adobe.com/cq/kb/RunModeSetUp.html. You can create configs for the publish and author run modes with a certain flag your code will look for, which will tell it whether to execute the import or not.
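As an illustrative sketch (the cron expression, class name and the check on the "author" run mode are assumptions about your setup), a scheduled Runnable can also simply ask the SlingSettingsService which run modes are active and bail out on publish instances:

```java
import java.util.Set;

import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Property;
import org.apache.felix.scr.annotations.Reference;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.settings.SlingSettingsService;

@Component(immediate = true, metatype = true)
@Service(Runnable.class)
@Property(name = "scheduler.expression", value = "0 0 2 * * ?") // run nightly at 02:00
public class ProductImportJob implements Runnable {

    @Reference
    private SlingSettingsService settings;

    @Override
    public void run() {
        // Only execute the back-office import on the author instance.
        Set<String> runModes = settings.getRunModes();
        if (!runModes.contains("author")) {
            return;
        }
        // ... call the back-office, parse the JSON feed and create/update the product nodes ...
    }
}
```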
2) It depends. CQ tends to keep separate copies of content for a mobile site, so it may make sense to copy nodes for the mobile site, but only if those nodes are pages (cq:Page and cq:PageContent) that you create based on the imported data. Otherwise you just need to save the imported data somewhere and read it when needed (via JCR queries or methods like .getNode()); in that case it of course makes sense not to duplicate your data.
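A minimal sketch of saving the imported data as plain JCR nodes (the storage path, node type and property names are only examples) could look like this:

```java
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class ProductNodeWriter {

    // Writes one imported product below a shared data root, keyed by SKU.
    public void writeProduct(Session session, String sku, String title, double price)
            throws RepositoryException {
        Node root = session.getNode("/etc/commerce/products/acme");

        Node product = root.hasNode(sku)
                ? root.getNode(sku)
                : root.addNode(sku, "nt:unstructured");
        product.setProperty("jcr:title", title);
        product.setProperty("price", price);

        session.save();
    }
}
```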
3) It depends here as well. I would consider the following forces: should the imported data be editable? How frequent are the updates? How massive are they? How critical is consistency across publish instances? If updates are not massive, not frequent, and consistency matters, importing to the author instance followed by replication can work; the same applies if you need to be able to edit the imported data. If updates are massive and/or frequent and consistency across publish instances does not matter much (you can afford that some people may see different results from different publish instances during an import), I'd suggest running the import on all publish instances at the same time, since massive replication of imported data may affect regular page/image replication.
Thanks,
Max.
I wonder what the best way is to implement a global data version for a database. I want any modification that is done to the database to increase the version in a "global version table" by one. I need this so that when I talk to application users I know which version of the data we are talking about.
Should I store this information in a table?
Should I use triggers for this?
This version number can be stored in a configuration table or in a dedicated table (with one field).
This parameter should not be automatically updated because you are the owner of the schema and you are responsible for knowing when you need to update it. Basically, you need to update this number every time you deploy a new application package (regardless of the reason for the package: code or database change).
Each and every deployment package should take care of updating the schema version number and the database schema (if necessary).
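As a rough illustration (the schema_version table and its column are hypothetical), the last step of such a deployment package could simply bump the counter:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SchemaVersionBumper {

    // Increments the single-row version counter as the final step of a deployment package.
    public static void bumpVersion(String jdbcUrl, String user, String password) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement stmt = conn.prepareStatement(
                     "UPDATE schema_version SET version = version + 1")) {
            stmt.executeUpdate();
        }
    }
}
```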
I tend to have a globals or settings table with various pseudo-static values stored.
- Just one row
- Many fields
This can include version numbers.
In terms of maintaining the version number you refer to, would this change when the data content changes? If so, a trigger would be useful. If you mean for the version number to relate to table structures, etc., I'd be more inclined to manage this by hand. (Some changes may be irrelevant as far as the applications are concerned, or there may be several changes wrapped up into a single version upgrade.)
The best way to implement a "global data version for database" is via your source control system and build process. When all the changes have been submitted and have passed testing, your build process increments the version number according to your versioning scheme.
The version number could be implemented in a stored procedure. The result of the call to the stored proc could be added to a screen in your app so you can avoid users directly accessing a table.
To complete the previous answers, I came across the concept of "Migrations" (from the Ruby on Rails world apparently) today, and there was already a question on SO that covered existing frameworks in .Net.
The concept is still to store DB versioning information as data in a table somewhere, but for that versioning information to be managed automatically by a framework, rather than manually by your custom deployment processes:
previous SO question with overview of options: https://stackoverflow.com/questions/313/net-migrations-engine
I would like to know whether there are any RFC or BAPI functions to display change documents (transaction RSSCD001) based on an input query in SAP. The customer requirement is to implement a Java monitoring system on top of SAP without adding any ABAP functions on the SAP server.
I tried to make use of the 'RFC_READ_TABLE' function, which is deprecated according to the official documentation, to read the CDPOS and CDHDR tables and join them. But as vwegert said, traversing the CDPOS table is really time-consuming, as it contains billions of entries.
My intention with this query is to find changes to all bank details of vendors.
Any other thoughts?
Many thanks in advance!
The least resource-consuming way to do this would be to use the workflow runtime system to actively notify the Java application whenever a change document is written. You don't have to write any ABAP functions to do this, just set up the workflow engine (using the automatic customizing) and customize the event generation (documentation). Then, you write a Java service that connects to the SAP system using JCo and registers as an RFC server, using a destination of type TCP/IP and a registered program ID. This Java server program has to provide a function module handler that can be called using tRFC from the SAP system. Finally, add a linkage entry that will tell the workflow runtime system to call your Java program each time a change document is written.
Of course, this will only record the changes that happen after installation, not the historical changes.
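A rough sketch of the Java side (the server name, the handler function name and the CHANGENR parameter are placeholders; the actual connection data lives in your JCo server/destination configuration):

```java
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.server.DefaultServerHandlerFactory;
import com.sap.conn.jco.server.JCoServer;
import com.sap.conn.jco.server.JCoServerContext;
import com.sap.conn.jco.server.JCoServerFactory;
import com.sap.conn.jco.server.JCoServerFunctionHandler;

public class ChangeDocumentListener implements JCoServerFunctionHandler {

    @Override
    public void handleRequest(JCoServerContext serverCtx, JCoFunction function) {
        // Read whatever the event linkage passes along, e.g. the change document number.
        String changeNr = function.getImportParameterList().getString("CHANGENR");
        System.out.println("Change document written: " + changeNr);
    }

    public static void main(String[] args) throws Exception {
        // "MY_SERVER" must match a JCo server configuration (program ID, gateway host, ...).
        JCoServer server = JCoServerFactory.getServer("MY_SERVER");

        DefaultServerHandlerFactory.FunctionHandlerFactory factory =
                new DefaultServerHandlerFactory.FunctionHandlerFactory();
        factory.registerHandler("Z_CHANGEDOC_EVENT", new ChangeDocumentListener());
        server.setCallHandlerFactory(factory);

        server.start();
    }
}
```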
Warning: I'm not very familiar with this field.
The RFC function BAPI_VENDOR_FIND (BAPI Vendor) seems to be usable to find vendors based on values in a table. You could use it to check against the modification date. This is not perfect, as there is no relational operator, only equals, so you'll have to check against several dates...
Hope this helps,
Guillaume