In ABAP I want to know which OO-Properties a function group has.
A function group has encapsulation, because I can put global variables into a function group.
Polymorphism and Inheritance are not possible within a function group. Is this correct?
What about different instances of function groups? Is this an OO-Property at all and is it possible to accomplish this with function groups?
As described in Clean ABAP:
No instantiation. You cannot create multiple instances of the same function group.
No inheritance. You cannot inherit from or let inherit function groups.
No interfaces. You cannot provide two implementations for the same function group.
No substitution. You cannot exchange a call to one function with a call to another one with different name but identical signature.
No overloading. You cannot provide two functions with identical names but different parameters. (This is not possible in ABAP OO either, by the way.)
Variable encapsulation. Function groups can hide internal state in private variables.
Method encapsulation. Function groups can hide internal methods ("form routines").
Like Jagger and Sandra Rossi suggest, think of a function group as a global abstract final class with static public/private members.
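A minimal sketch of that analogy (the class and member names are invented for illustration, not part of any standard API):

CLASS zcl_fugr_analogy DEFINITION PUBLIC ABSTRACT FINAL.
  PUBLIC SECTION.
    " corresponds to a function module of the function group
    CLASS-METHODS add_to_total IMPORTING iv_amount TYPE i.
  PRIVATE SECTION.
    " corresponds to a global variable of the function group
    CLASS-DATA gv_total TYPE i.
ENDCLASS.

CLASS zcl_fugr_analogy IMPLEMENTATION.
  METHOD add_to_total.
    gv_total = gv_total + iv_amount.
  ENDMETHOD.
ENDCLASS.

Like a function group, this gives you exactly one shared state and a fixed set of callable routines, but no instances, no inheritance and no interfaces.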
I write many Maximo Where Clauses (which use Oracle SQL), and save them as public queries.
I encourage others to edit/customize the work I share with them.
But when my query uses the same string many times throughout, it's tedious to make sure all instances are replaced. It's not like there's a built-in find-and-replace tool or anything.
Is there a way to define a custom string variable at the beginning, and reuse that variable many times throughout the where clause?
No, you can't do that. But what you could do is create a relationship from your parent object to a system property / MAXPROPVALUE child object where propname = 'company.app.varname'. Then you can use the System Properties application to change the value of the variable for all references, and you can reference it in your query using the standard :relationshiptomaxpropvalue.propvalue syntax.
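For example, assuming a relationship named COMPANYVAR from your parent object to MAXPROPVALUE with propname = 'company.app.varname' (all of these names are illustrative), the where clause could reuse the value like this:

vendor = :companyvar.propvalue
or supervisor = :companyvar.propvalue

Changing the property value in the System Properties application then updates every place the query references it.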
HTH.
Using the SQL type provider from FSharp.Data.TypeProviders with Microsoft SQL Server, I declare the type:
type dbSchema = FSharp.Data.TypeProviders.SqlDataConnection<@"Data Source=DESKTOP-5\SQLEXPRESS;Initial Catalog=Data;Integrated Security=True;MultipleActiveResultSets=True;">
outside of any module, along with most of my other types (following a suggestion to declare types outside of modules to avoid nested classes). I really wouldn't know, but I assume that so far, so good.
Where I'm wondering how to arrange things is in using the type, e.g.:
use db = dbSchema.GetDataContext()
db.DataContext.ExecuteCommand(sqlCreateTableStmt a b c)
My upload process goes through lists of lists and functions calling functions, and I don't know what the pros and cons are of where to declare use db. It could be redone locally in each function, declared "globally" outside any module, or declared in the first, top-level function and passed along from function to function as a parameter. Or some combination; one of those options is sketched below.
Hopefully that's enough of a question to be worthwhile. Right now I have a use declaration in each function and never pass it along. In some places one function declares use db and then calls another function that declares use db again. I don't know whether there's overhead in making or managing these connections, or what else I should worry about.
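For illustration only, the "declare it once at the top level and share it" option could look roughly like this (sqlCreateTableStmtA and sqlCreateTableStmtB are made-up names for values holding SQL strings):

let uploadAll statements =
    // one data context for the whole upload run; disposed when uploadAll returns
    use db = dbSchema.GetDataContext()
    let run (stmt: string) = db.DataContext.ExecuteCommand(stmt) |> ignore
    statements |> List.iter run

uploadAll [ sqlCreateTableStmtA; sqlCreateTableStmtB ]

The sketch just shows one way of creating the context once per batch and sharing it via a closure instead of re-declaring it in every helper function; passing db along as an explicit parameter is the same idea.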
Thanks in advance.
Fortran 2003 supports data polymorphism through the class keyword, for example:
subroutine execute(A)
  class(*) :: A
  select type (A)
    class is (...)
      ...
    type is (...)
      ...
  end select
end subroutine
My question is: if I need to call this subroutine a huge number of times, will the SELECT TYPE construct slow the code down?
SELECT TYPE is typically implemented by the descriptor for the polymorphic object (CLASS(*) :: A here) having a token, pointer, index or similar that designates the dynamic type of the object. Execution of the SELECT TYPE is then like a SELECT CASE on this token, with the additional complication that non-polymorphic type guards (TYPE IS) are matched in preference to polymorphic guards (CLASS IS).
There is some overhead associated with this construct. That overhead is more than if the code did nothing at all. Then again, code that does nothing at all is rarely useful. So the real question is whether SELECT TYPE is better or worse, execution-speed-wise, than some alternative approach that provides the level of functionality your code actually needs (which might be less than the full functionality SELECT TYPE offers). To answer that, you would need to define and implement that alternative, and then measure the difference in speed in a context relevant to your use case.
As indicated in the comments, an unlimited polymorphic entity is essentially a type safe way of storing something of any type. In this case, SELECT TYPE is required at some stage to be able to access the value of the stored thing. However, this is only a rather specific subset of F2003's support for polymorphism. In more typical examples SELECT TYPE would not be used at all - the behaviour associated with the dynamic type of an object would be accessed by calling overridden bindings of the declared type.
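For contrast, here is a minimal sketch of that more typical style, with invented types and names; the dispatch happens through an overridden type-bound procedure rather than SELECT TYPE:

module shapes_mod
  implicit none

  type :: shape
  contains
    procedure :: area => shape_area
  end type shape

  type, extends(shape) :: circle
    real :: radius = 1.0
  contains
    procedure :: area => circle_area   ! overrides the parent binding
  end type circle

contains

  function shape_area(this) result(a)
    class(shape), intent(in) :: this
    real :: a
    a = 0.0
  end function shape_area

  function circle_area(this) result(a)
    class(circle), intent(in) :: this
    real :: a
    a = 3.14159 * this%radius**2
  end function circle_area

end module shapes_mod

Given class(shape), allocatable :: s and allocate(circle :: s), the call s%area() then dispatches to circle_area based on the dynamic type of s, with no SELECT TYPE needed.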
I see it used a lot in the context of data. From ScottGu's post:
One of the really powerful capabilities provided by LINQ and query syntax is the ability for you to define new classes that are separate from the data being queried, and to then use them to control the shape and structure of the data being returned by the query.
What does he mean when he refers to shape of the data?
I think these are informal terms, and the definitions are subjective. I would use "shape" to refer to how the object fits in with other objects in the system. (Compare "surface area", which is a rough (no pun intended :-) measure of the complexity of the object's interface.) I'd use "structure" to refer to how the object is designed and implemented internally.
Hence you can have classes that have a good "shape", but a structure like crepe paper. That's probably easier to refactor than the other way around: a poor shape, but good implementation. (I'm sure some folks would question whether the latter is even possible.)
Consider the shape to be the object's "API", while the structure is its internal implementation. In a well-designed system the shape will remain static while the structure may change significantly.
The shape is any spatial attributes (especially as defined by outline) of the object, whereas the structure is the manner of construction of the object and the arrangement of its parts. Of course, that can apply to any type of object. :)
Generally, I would consider the shape of a class to be the public methods and properties that the class offers. The structure would be the internal constructs and representation used. In the context of the quoted material, I would take it to mean that by allowing one to define the return type of a query using anonymous or alternative named classes, you can redefine the data returned by the query, constraining and transforming its shape relative to the original data source.
For example, say you have a Users table that is related to a Contacts table. Using LINQ with an anonymous class as the selection, you can return a user together with the address of their primary contact, without having to define a particular database view:
var userWithContact = from u in db.Users
                      select new
                      {
                          Name = u.Name,
                          Address = u.Contacts
                                     .Where( c => c.Type == "self" ).First().Address
                      };
I essentially have a database layer that is totally isolated from any business logic. This means that whenever I get ready to commit some business data to a database, I have to pass all of the business properties into the data method's parameters. For example:
Public Function Commit(foo As Object) As Boolean
This works fine, but when I get into commits and updates that take dozens of parameters, it can be a lot of typing. Not to mention that two of my methods--update and create--take the same parameters, since they essentially do the same thing.
What I'm wondering is: what would be an optimal solution for passing these parameters, so that I don't have to change the parameters in both methods every time something changes, and so that I can reduce my typing? :) I've thought of a few possible solutions. One would be to move all the SQL parameters to the class level of the data class and then store them in some sort of array that I set in the business layer. Any help would be useful!
So essentially you want to pass in a List of Parameters?
Why not redo your Commit function and have it accept a List of Parameter objects?
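A rough sketch of what that could look like in VB.NET (the class, stored procedure name and connection string are placeholders, not your actual code):

Imports System.Collections.Generic
Imports System.Data
Imports System.Data.SqlClient

Public Class DataLayer
    Private ReadOnly connectionString As String = "..."   ' placeholder

    Public Function Commit(parameters As List(Of SqlParameter)) As Boolean
        Using conn As New SqlConnection(connectionString),
              cmd As New SqlCommand("dbo.SaveFoo", conn)   ' hypothetical stored procedure
            cmd.CommandType = CommandType.StoredProcedure
            cmd.Parameters.AddRange(parameters.ToArray())
            conn.Open()
            Return cmd.ExecuteNonQuery() > 0
        End Using
    End Function
End Class

The business layer then builds a single List(Of SqlParameter) and hands the same list to Commit and to Update, so a new column only has to be added in one place.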
If you're on SQL Server 2008 you can use MERGE to replace the insert / update juggling. Sometimes called an upsert.
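A rough sketch of such a MERGE statement, with made-up table, column and parameter names:

MERGE INTO dbo.Widget AS target
USING (SELECT @Id AS Id, @Name AS Name) AS source
    ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET Name = source.Name
WHEN NOT MATCHED THEN
    INSERT (Id, Name) VALUES (source.Id, source.Name);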
You could create a struct to hold the parameter values.
Thanks for the responses, but I think I've figured out a better way for what I'm doing. It's similar to using an upsert, but what I do is have one method called Commit that looks for the given primary key. If the record is found in the database, I execute an update command; if not, I do an insert command. Since the parameters are the same, you don't have to worry about changing them in two places.
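A sketch of that flow (RecordExists, ExecuteUpdate and ExecuteInsert stand in for the actual data-access helpers):

Public Function Commit(id As Integer, parameters As List(Of SqlParameter)) As Boolean
    If RecordExists(id) Then
        ' the same parameter list feeds the UPDATE...
        Return ExecuteUpdate(id, parameters)
    Else
        ' ...or the INSERT, so there is only one signature to maintain
        Return ExecuteInsert(parameters)
    End If
End Function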
For your problem, I think the Iterator design pattern is the best solution. Pass in an implementation of an interface, say ICommitableValues, that exposes an enumeration of key/value pairs: the keys are the column names and the values are the values to commit. A dedicated property can even return the table name (or stored procedure) to use for the insert.
To save typing, you can use declarative syntax (attributes) to mark the commitable properties, and a helper class in the middleware can use reflection to extract the values of those properties and build the ICommitableValues implementation from them, as sketched below.
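A minimal sketch of that idea in VB.NET; CommitableAttribute, the Customer class and ExtractValues are invented for illustration and are not an existing API:

Imports System
Imports System.Collections.Generic

' Marks a property whose value should be written to the database.
<AttributeUsage(AttributeTargets.Property)>
Public Class CommitableAttribute
    Inherits Attribute
End Class

Public Class Customer
    <Commitable()> Public Property Name As String
    <Commitable()> Public Property City As String
    Public Property DisplayText As String   ' not persisted
End Class

Public Module CommitHelper
    ' Collects column-name/value pairs from every property marked <Commitable()>.
    Public Function ExtractValues(entity As Object) As Dictionary(Of String, Object)
        Dim values As New Dictionary(Of String, Object)
        For Each prop In entity.GetType().GetProperties()
            If Attribute.IsDefined(prop, GetType(CommitableAttribute)) Then
                values.Add(prop.Name, prop.GetValue(entity, Nothing))
            End If
        Next
        Return values
    End Function
End Module

A call like ExtractValues(someCustomer) then gives the data layer the column/value pairs it needs for its insert or update, without the business layer spelling out each parameter by hand.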