What is the difference between class ALV and function ALV? - ABAP

We are using both class-based ALV and function module-based ALV. What is the difference between those options?

Some of the differences (a short sketch of both calling styles follows this list):
The function module generates the ALV screens for you, whereas with the class-based approach you typically have to provide and handle the screen yourself.
Classes are considered more secure than function modules.
The class-based ALV generally gives better performance.
The class-based ALV is object-oriented and therefore more flexible; for example, you can have multiple ALV grids on one screen.
Being object-oriented, classes allow better reusability than function modules.
Classes are instantiable while function groups are not.
Objects are instances of a class, but function modules are not instances of a function group.
Function modules can be executed asynchronously and can also be called remotely by other systems.
A program can work with the instances of several function groups at the same time, but it cannot work with several instances of a single function group.
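As a minimal sketch of the two calling styles (assuming the standard demo table SFLIGHT is available in the system; error handling reduced to the bare minimum):

REPORT zalv_comparison_demo.

DATA lt_flights TYPE STANDARD TABLE OF sflight.

START-OF-SELECTION.
  SELECT * FROM sflight INTO TABLE lt_flights UP TO 100 ROWS.

  " Class-based ALV: fully object-oriented, instantiable, can be embedded in containers
  DATA lo_alv TYPE REF TO cl_salv_table.
  TRY.
      cl_salv_table=>factory(
        IMPORTING r_salv_table = lo_alv
        CHANGING  t_table      = lt_flights ).
      lo_alv->display( ).
    CATCH cx_salv_msg.
      " error handling omitted in this sketch
  ENDTRY.

  " Function module based ALV: classic procedural call, the screen is handled by the FM
  CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
    EXPORTING
      i_structure_name = 'SFLIGHT'
    TABLES
      t_outtab         = lt_flights.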
Hope it was helpful
Talha


How do I model shared code pieces differing only slightly at a specific point effectively?

I am writing a data export application that is used for many scenarios (Vendors, Customers, Cost Centers, REFX Contracts, etc).
In the end there are two main ways of exporting: save to file or call webservice.
So my idea was to create an interface if_export and implement a class for each scenario.
The problem is that the web service call code differs slightly at the point of the actual call: the called method has a different name each time.
My ideas for dealing with this so far are:
1. An abstract cl_webservice_export with subclasses for each scenario that override the method containing the actual call.
2. cl_webservice_export with a member of type if_webservice_call, plus a class for each scenario implementing the if_webservice_call method call_webservice( ).
3. A dynamic CALL METHOD webservice_instance->(method_name) inside a concrete cl_webservice_export method containing the actual call, passing (method_name) to cl_webservice_export.
My code:
export_via_webservice is the public entry point, provided by cl_webservice_export or via if_export:
METHODS export_via_webservice
  IMPORTING
    VALUE(it_xml_strings) TYPE tt_xml_string_table
    io_service_consumer   TYPE REF TO ztnco_service_vmsoap
  RETURNING
    VALUE(rt_export_results) TYPE tt_xml_string_table.
METHOD export_via_webservice.
  LOOP AT it_xml_strings INTO DATA(lv_xml_string).
    call_webservice(
      EXPORTING
        io_service    = io_service_consumer
        iv_xml_string = lv_xml_string-xmlstring
      RECEIVING
        rv_result     = DATA(lv_result) ).
    rt_export_results = VALUE #( BASE rt_export_results
      ( lifnr     = lv_xml_string-xmlstring
        xmlstring = lv_result ) ).
  ENDLOOP.
ENDMETHOD.
The actual web service call, overridden in a subclass or provided via if_webservice_call:
METHODS call_webservice
  IMPORTING
    io_service    TYPE REF TO ztnco_service_vmsoap
    iv_xml_string TYPE string
  RETURNING
    VALUE(rv_result) TYPE string.
METHOD call_webservice.
  TRY.
      io_service->import_creditor(
        EXPORTING
          input  = VALUE #( xml_creditor_data = iv_xml_string )
        IMPORTING
          output = DATA(lv_output) ).
    CATCH cx_ai_system_fault INTO DATA(lx_exception).
  ENDTRY.
  rv_result = lv_output-import_creditor_result.
ENDMETHOD.
How would you solve this problem? Maybe there are other, better ways of doing it.
I know three common patterns for solving this kind of problem. In ascending order of quality, they are:
Individual implementations
Create one interface if_export, and one class that implements it for each web service export variant that you need, i.e. cl_webservice_export_variant_a, cl_webservice_export_variant_b, etc.
Major advantages are the intuitive, simple class design and the complete independence of the implementations, which avoids accidental spillover from one variant to the other.
The major disadvantage is the probably massive amount of code duplication between the different variants if their code varies in only a few minor places.
You already sketched this as your option 2, and also already highlighted that it is the least optimal solution for your scenario. Code duplication is never welcome. All the more so since your web service calls vary only slightly, in a single method name.
In summary, this pattern is rather poor, and you shouldn't actively choose it. It usually comes into existence on its own, when people start with variant a, and months later add a variant b by copy-pasting the existing class, and then forget to refactor the code to get rid of the duplicated parts.
Strategy pattern
This design is commonly known as the strategy design pattern. Create one interface if_export, and one abstract class cl_abstract_webservice_export that implements the interface and includes most of the web service-calling code.
Except for this detail: The name of the method that should be called is not hard-coded but retrieved via a call to a protected sub-method get_service_name. The abstract class does not implement this method. Instead, you create sub-classes of the abstract class, i.e. cl_concrete_webservice_export_variant_a, cl_concrete_webservice_export_variant_b, etc. These classes implement only the inherited protected method get_service_name, each providing its concrete method name.
Major advantages are that this pattern completely avoids code duplication, is open for further extensions, and has been employed successfully in lots of framework implementations.
Major disadvantage is that the pattern starts to erode when the first variant arrives that does not completely fit, e.g. because it does not only vary the method name, but also some parameters. Evolving then requires an in-depth redesign of all involved classes, which can amount to considerable cost. Another disadvantage is that the inheritance setup can make it cumbersome to write unit tests: for example, unit-testing the abstract class requires making up a test double that sub-classes it and overrides the protected method with sensing and mocking code - all possible, but not as neat as with interfaces between the classes.
You already sketched this as your option 1. In summary, I would recommend choosing this pattern if you have control over all involved classes and are willing to spend some extra effort to keep the pattern clean in case it doesn't fit completely.
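A minimal local-class sketch of this design, with hypothetical names (lif_export stands in for your if_export); the shared dynamic call is only indicated by a comment, since its parameters depend on the concrete service:

INTERFACE lif_export.
  METHODS export
    IMPORTING iv_xml_string    TYPE string
    RETURNING VALUE(rv_result) TYPE string.
ENDINTERFACE.

" Abstract class: owns the shared web-service-calling flow
CLASS lcl_abstract_ws_export DEFINITION ABSTRACT.
  PUBLIC SECTION.
    INTERFACES lif_export.
  PROTECTED SECTION.
    " each concrete variant supplies only the method name
    METHODS get_service_name ABSTRACT
      RETURNING VALUE(rv_name) TYPE string.
ENDCLASS.

CLASS lcl_abstract_ws_export IMPLEMENTATION.
  METHOD lif_export~export.
    DATA(lv_method) = get_service_name( ).
    " the shared code would perform the dynamic call here, e.g.
    " CALL METHOD lo_service->(lv_method) EXPORTING ... IMPORTING ...
    rv_result = |{ lv_method } called with { iv_xml_string }|.
  ENDMETHOD.
ENDCLASS.

" Concrete variant: contributes only the detail that differs
CLASS lcl_export_variant_a DEFINITION
  INHERITING FROM lcl_abstract_ws_export.
  PROTECTED SECTION.
    METHODS get_service_name REDEFINITION.
ENDCLASS.

CLASS lcl_export_variant_a IMPLEMENTATION.
  METHOD get_service_name.
    rv_name = 'IMPORT_CREDITOR'.
  ENDMETHOD.
ENDCLASS.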
Composition
Composition means avoiding inheritance in favor of loose interaction between independent classes via interfaces. Create the interface if_export and individual concrete implementations of it as cl_webservice_export_variant_a, cl_webservice_export_variant_b, etc.
Move the shared code out to a class cl_export_webservice_caller that receives whatever data and variant information (e.g. the method name) it needs. Let the variant classes call this shared code. To complete the class design, introduce another interface if_export_webservice_caller that decouples the variant classes from the caller class.
The major advantages are that all classes are independent from each other and can be recombined in several different ways. For example, if in the future you need to introduce a variant X that would call its web service in a completely different way, you can simply add it, without having to redesign any of the other involved classes. In contrast to the strategy pattern, writing unit tests for all involved classes is trivial.
There are no real disadvantages to this pattern. (The seeming disadvantage that it needs one more interface is not really one - object orientation aims to clearly separate concerns, not to minimize the overall number of classes/interfaces, and we shouldn't be afraid to add more of those if it adds clarity to the overall design.)
This option sounds similar to the option 3 you sketched, but I am not 100% sure. Anyway, this would be the pattern I would vote for.
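A rough standalone sketch of this composition approach, again with local classes and hypothetical names, and with the actual dynamic call only hinted at:

INTERFACE lif_ws_caller.
  METHODS call_webservice
    IMPORTING iv_method_name   TYPE string
              iv_xml_string    TYPE string
    RETURNING VALUE(rv_result) TYPE string.
ENDINTERFACE.

" Shared calling logic lives in exactly one place
CLASS lcl_ws_caller DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_ws_caller.
ENDCLASS.

CLASS lcl_ws_caller IMPLEMENTATION.
  METHOD lif_ws_caller~call_webservice.
    " the dynamic CALL METHOD ...->(iv_method_name) would go here
    rv_result = |{ iv_method_name }: { iv_xml_string }|.
  ENDMETHOD.
ENDCLASS.

INTERFACE lif_export.
  METHODS export
    IMPORTING iv_xml_string    TYPE string
    RETURNING VALUE(rv_result) TYPE string.
ENDINTERFACE.

" One independent variant class; it only knows its own method name
CLASS lcl_export_variant_b DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_export.
    METHODS constructor
      IMPORTING io_caller TYPE REF TO lif_ws_caller.
  PRIVATE SECTION.
    DATA mo_caller TYPE REF TO lif_ws_caller.
ENDCLASS.

CLASS lcl_export_variant_b IMPLEMENTATION.
  METHOD constructor.
    mo_caller = io_caller.
  ENDMETHOD.

  METHOD lif_export~export.
    rv_result = mo_caller->call_webservice(
      iv_method_name = 'IMPORT_CREDITOR'
      iv_xml_string  = iv_xml_string ).
  ENDMETHOD.
ENDCLASS.

Because the caller is injected via lif_ws_caller, unit tests can replace it with a simple test double without touching the variant classes.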

Class with a list of materials: best practice

I've created a custom class ZMaterial that can be instantiated by passing an ID to the constructor, which sets the properties for a single material using SELECTs and BAPIs. This class is basically used to READ and UPDATE a single material.
Now I need to create a service to return a list of materials. I already have the procedural code for it in a static method (for now actually a function module), but I would like to keep using a full OOP approach and instantiate a list of my custom material objects. The first approach I found is to enhance the static method to instantiate a list of my single-material objects after the SELECTs are executed and I have the data in internal tables, but it does not seem like the most OOP approach.
The second option in my mind is to create a new class ZMaterialList with one property being a list of ZMaterial objects, and a constructor with the necessary input parameters for the database SELECT. The problem I see with this option is that I would create a full class just for the constructor.
What do you think is the best way to proceed?
Create a separate class to produce the list of materials. The single responsibility principle says each class should do exactly one thing. In all but the most simple cases, using a thing is a different responsibility than producing it.
Don’t make a ZMaterialList class. A list’s focus would be managing the list items, i.e. adding, removing, iterating, sorting etc. But you should be fine with a regular STANDARD TABLE OF REF TO ZMaterial.
Make a ZMaterialReader, -Repository, -Query or -Factory class or the like, depending on the precise way you want to produce the ZMaterials. Readers read by keys, repositories read and write, queries use varying sets of selection criteria, factories instantiate with possibly different sets of inputs.
You can well let that class use the original FUNCTION underneath. It's good style to exploit what's already there. Just make sure you trust that code, put it in a test harness, and keep it away from the rest of your OO code.
Extract all public interaction of ZMaterial to an interface and use only that interface. That allows you to offer alternative implementations of ZMaterial, ones that differ in the way they are produced or how they store their data.
Split single production from mass production. Reading MARA to retrieve a single material is okay. But you don’t want thousands of ZMaterials reading MARA individually - that wrecks performance.
Now you’ve got the interface, you could offer a second implementation of ZMaterial whose constructor receives all relevant data and relies on it already having been validated to avoid additional SELECTs.
You could also offer an implementation that doesn’t store its data at all but only stores pointers to rows in internal tables somewhere else. See the flyweight pattern for ideas.
If you expect mass updates on the materials, such as “reclassify all of these as B”, consider extracting these list-oriented operations to separate classes as well.
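As a rough sketch of the mass-production side, here is a local-class version with hypothetical names (in a real system the reader, the interface, and the second implementation would be global development objects):

" Interface extracted from ZMaterial
INTERFACE lif_material.
  METHODS get_matnr RETURNING VALUE(rv_matnr) TYPE matnr.
ENDINTERFACE.

" Second implementation: constructor receives already-read data, no SELECTs
CLASS lcl_material_from_data DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_material.
    METHODS constructor IMPORTING is_mara TYPE mara.
  PRIVATE SECTION.
    DATA ms_mara TYPE mara.
ENDCLASS.

CLASS lcl_material_from_data IMPLEMENTATION.
  METHOD constructor.
    ms_mara = is_mara.
  ENDMETHOD.
  METHOD lif_material~get_matnr.
    rv_matnr = ms_mara-matnr.
  ENDMETHOD.
ENDCLASS.

" Reader: one mass SELECT instead of thousands of single reads
CLASS lcl_material_reader DEFINITION.
  PUBLIC SECTION.
    TYPES tt_materials TYPE STANDARD TABLE OF REF TO lif_material
          WITH EMPTY KEY.
    METHODS read_by_type
      IMPORTING iv_mtart            TYPE mara-mtart
      RETURNING VALUE(rt_materials) TYPE tt_materials.
ENDCLASS.

CLASS lcl_material_reader IMPLEMENTATION.
  METHOD read_by_type.
    SELECT * FROM mara
      WHERE mtart = @iv_mtart
      INTO TABLE @DATA(lt_mara).
    LOOP AT lt_mara INTO DATA(ls_mara).
      APPEND NEW lcl_material_from_data( is_mara = ls_mara ) TO rt_materials.
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.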

How to share local classes?

I'm currently working on a rather complex ABAP application that is going to be split into several modules each performing a specific part of the job:
one for gathering some data from multiple sources;
one for displaying that data in UI (SALV grid, if that matters);
one for doing some business things based on that data.
According to my plan each module will be a global class. However, there is some logic that may need to be shared between these classes: helper subroutines, DB access logic and so on. All of this is a set of local classes at the moment.
I know I could make these classes global as well, but this would mean exposing them (as well as a number of internal data structures) to the public, which I would not like to do. Another approach would be to share the includes containing them between my global classes, but that is said to be bad design.
So, my question is: how do real ABAPers solve problems like this?
Here is an example of how one can access a local class defined in a report.
The report with the class.
REPORT ZZZ_PJ1.

CLASS lcl_test DEFINITION FINAL.
  PUBLIC SECTION.
    METHODS:
      test.
ENDCLASS.

CLASS lcl_test IMPLEMENTATION.
  METHOD test.
    WRITE 'test'.
  ENDMETHOD.
ENDCLASS.
The report which uses the class.
REPORT ZZZ_PJ2.

CLASS lcl_main DEFINITION FINAL CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS:
      main.
ENDCLASS.

CLASS lcl_main IMPLEMENTATION.
  METHOD main.
    DATA:
      lr_object TYPE REF TO object.
    CREATE OBJECT lr_object
      TYPE ('\PROGRAM=ZZZ_PJ1\CLASS=LCL_TEST').
    CALL METHOD lr_object->('TEST').
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  lcl_main=>main( ).
Of course this is not a clever solution as each method call would have to be a dynamic call.
CALL METHOD lr_object->('TEST').
This could be solved, however, by using global interfaces that define the methods of your classes (provided they are not static, which I assume they are not). Then you control each of the instances through the interface. Your goal would then be fulfilled: only the interface would be exposed globally, while the implementations would remain in local classes.
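A sketch of that idea, using a hypothetical global interface ZIF_TEST (the two reports are separate programs, shown together here for brevity):

" Global interface, created e.g. in ADT or SE24 (transportable):
" INTERFACE zif_test PUBLIC.
"   METHODS test.
" ENDINTERFACE.

" Report ZZZ_PJ1 keeps the implementation local:
REPORT ZZZ_PJ1.
CLASS lcl_test DEFINITION FINAL.
  PUBLIC SECTION.
    INTERFACES zif_test.
ENDCLASS.

CLASS lcl_test IMPLEMENTATION.
  METHOD zif_test~test.
    WRITE 'test'.
  ENDMETHOD.
ENDCLASS.

" Report ZZZ_PJ2: only the instantiation is dynamic,
" all further calls go statically through the interface:
REPORT ZZZ_PJ2.
START-OF-SELECTION.
  DATA lo_test TYPE REF TO zif_test.
  CREATE OBJECT lo_test TYPE ('\PROGRAM=ZZZ_PJ1\CLASS=LCL_TEST').
  lo_test->test( ).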
You may want to do some reading on the Model-View-Controller design pattern. Displaying data in a UI would be a "view". Both the gathering and updating of data would be incorporated into a "model". Business logic would likely be implemented as an interaction between the view and the model in a "controller".
That said, one approach to this would be to utilize the friendship feature offered by ABAP OO.
As an example: create the model and view classes globally but only allow them to be instantiated privately, then grant private component access to the controller. The class definitions would be as follows:
CLASS zcl_example_view DEFINITION
  PUBLIC
  FINAL
  CREATE PRIVATE
  GLOBAL FRIENDS zcl_example_controller.
ENDCLASS.

CLASS zcl_example_model DEFINITION
  PUBLIC
  FINAL
  CREATE PRIVATE
  GLOBAL FRIENDS zcl_example_controller.
ENDCLASS.

CLASS zcl_example_controller DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.
ENDCLASS.
Additionally, it may be a good idea to make the controller a singleton and store a reference to it in both the view and the model. By enforcing that the controller IS BOUND when the view and model are instantiated, we can effectively ensure that these three classes exist only in the way you desire.
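For illustration, the singleton aspect could look roughly like this (a sketch only, with hypothetical member names; note that a strict singleton would switch the controller to CREATE PRIVATE, unlike the CREATE PUBLIC definition above):

CLASS zcl_example_controller DEFINITION
  PUBLIC
  FINAL
  CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_example_controller.
  PRIVATE SECTION.
    CLASS-DATA go_instance TYPE REF TO zcl_example_controller.
ENDCLASS.

CLASS zcl_example_controller IMPLEMENTATION.
  METHOD get_instance.
    " lazily create the single controller instance
    IF go_instance IS NOT BOUND.
      go_instance = NEW #( ).
    ENDIF.
    ro_instance = go_instance.
  ENDMETHOD.
ENDCLASS.

" View and model would each hold: DATA mo_controller TYPE REF TO zcl_example_controller.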
Stepping back to your initial problem: it sounds to me like you're already using something like an MVC pattern in your development, so your only problem is that some routines need to be used by models, views and controllers alike.
In this case I strongly recommend putting these routines into globally available classes, or implementing getter methods in your already existing classes to access that functionality.
Hacks like \PROGRAM=ZZZ_PJ1\CLASS=LCL_TEST are sometimes essential, but not here, IMHO.
If your application is as large as you make it sound, you should organize it using multiple packages. You will certainly have to deal with non-OO stuff like function modules, data dictionary objects and other things that cannot be part of a class, so using classes as the basic means to organize your application won't work outside of very small and specialized applications.
Furthermore, it sounds like you have some really severe flaws embedded in your plan if you think that "DB access logic" is something that should be "shared between classes". It is hard to guess without further information, but I would strongly suggest that you enlist someone who has experience in designing and implementing applications of that scale - at least to get the basic concept right.

When do I put logic in a class as opposed to passing the class into a utility class?

When I have a series of processes which are similar in nature but work on slightly different types of objects, do I unify the work in a single utility class, or do I put the functionality directly on each object that will need to utilize it?
I'm not concerned about a specific case per se; I'm most curious about what factors go into this decision.
I think it depends on the class ancestry of your objects, the real difference in logic between the objects, and the possible future need to do this on some other kind of class.
It sounds to me like the utility class is a good way to go if the functionality you're applying to multiple classes is largely the same, and could be applied to future classes down the road.
If, on the other hand, the functionality is different enough that you'd end up with a big switch/case statement in your utility class to accommodate the different object types, you might want to implement it in the objects themselves.
You have two approaches to your problem: one is to use generic programming (horizontal polymorphism), the other is to attack it using a more traditional vertical, hierarchy-based implementation.
Your decision has to be based on the kind of similarities shared among the various data types. If we can define a complete and orthogonal contract that can operate on any of the types, then we can easily decide to use generics.
This, for example, is the case with List, Dictionary and all the classes under the System.Collections.Generic namespace, which eventually replaced their corresponding non-generic counterparts from the early versions of .NET.
On the other hand, again from the .NET world, an example of a vertical hierarchy is the UserControl class, which derives from ContainerControl and serves as the base for other controls that specialize its behavior using its virtual methods.
In most cases, though, the design of your class hierarchy involves a lot of judgment calls that cannot always be made deterministically, as they rely more on your experience and talent as a developer than on a concrete model that can be applied across the board in every possible situation.

Class Hierarchy - Data design in an RPG game where classes overlap (VB.Net)

This is a follow-up to the question I asked here:
Class Hierarchy - Data design in an RPG Game (VB.Net)
I understand the answer in the post above, which is absolutely amazing, by the way. It's about implementing interfaces with a class. However, what if a class needs to share features with another class?
Yes, that class can implement an interface. However, let's use this sample definition:
An ITEM can be USED or EQUIPPED
An EQUIPPED ITEM can be either ARMOR or a WEAPON
A USED ITEM either heals the team, casts MAGIC, or damages the opposing team.
Certain EQUIPPED ITEMS can function as a USED ITEM.
Certain EQUIPPED ITEMS can cast magic.
Or, in other words:
An equippable item can perform actions outside of its typical usage as a shield or weapon. But not all items can act as a shield or weapon.
I mean, I could create a class that implements IWeapon, IShield, IMagic, IUseableItem, etc. But there should be a better way than returning NULL when those interfaces are called.
But there should be a better way than returning NULL when those interfaces are called.
It's called "not implementing them on objects that don't support them". Or so I would have thought.
One option is to have IEquippable and IUsable; any item that can be both equipped and used implements both, while other items implement only the applicable interface.
To be honest, I would choose to use a single Item class here. You're going to have lots of permutations of functionality and are going to end up having to query manually for interface existence or the object type anyway. So trying to fix the permutations at design-time seems like the wrong thing to do.