What is an opaque type in Elm and why is it valuable? - elm

I've used types before but don't know what an opaque type is. I've seen it mentioned a few times. Is it better to expose an opaque type than a type alias?

Let’s answer this question by first looking at type aliases:
A type alias is fully transparent. This means that any other module importing it will have full access to its inner workings. Let’s say we’ve got a User module exposing a User type:
module User exposing (User)

type alias User =
    { userName : String
    , age : Int
    }
Anyone importing User can manipulate the data, e.g. newUser = { oldUser | age = 25 }, or construct one directly with someUser = User "Bill" 27. These manipulations are fine when you control the context they exist in.
However, if User is part of a library then every change to the User type is a breaking change to people that use the library. For example, if an email field is added to User, then the constructor example (someUser = User "Bill" 27) will give a compiler error.
Even inside of a project codebase, a type alias can provide too much information to other modules which leads to code that is difficult to maintain and evolve. Perhaps a User changes drastically at some point and has a completely new set of properties. This would require changes wherever the code manipulates Users.
Opaque types are valuable because they avoid these issues. Here’s an opaque version of User:
module User exposing (User)

type User
    = User
        { userName : String
        , age : Int
        }
With this version, only the type is exposed, not its constructor, so other modules cannot access or manipulate the data directly. Often, this means you will write and expose some getter and setter functions:
initUser : String -> Int -> User
initUser name years = User { userName = name, age = years }
userName : User -> String
userName (User record) = record.userName
age : User -> Int
age (User record) = record.age
setAge : Int -> User -> User
setAge newAge (User record) = User { record | age = newAge }
This is more work, but it has advantages:
Other modules only care about the User functions and don’t need to know what data is in the type
The type can be updated without breaking code outside the containing module
Much of this explanation comes from @wintvelt on the Elm Slack (elmlang.slack.com).


How can I discover all the roles a Perl 6 type does?

With .does I can check if a type has the role I already know. I'd like to get the list of roles. Inheritance has .^mro but I didn't see anything like that for roles in the meta model stuff.
Along with that, given a "type", how can I tell if it was defined as a class or a role?
.^roles
say Rat.^roles; # ((Rational[Int,Int]) (Real) (Numeric))
By default it includes every role, including roles brought in by other roles. To only get the first level use :!transitive
say Rat.^roles(:!transitive); # ((Rational[Int,Int]))
There's already a good answer to the first question. As for the second one: each meta-object has an archetypes method, which returns an object carrying a range of properties of the types represented by that meta-type. This exists because Perl 6 is open to new meta-types (which might be easier to think about as "types of type"); probably the most widely used example of this today is OO::Monitors. The archetypes are focused on what one can do with the type. For example:
> role R { }; say "C: {.composable} I: {.inheritable}" given R.HOW.archetypes;
C: 1 I: 0
> class C { }; say "C: {.composable} I: {.inheritable}" given C.HOW.archetypes;
C: 0 I: 1
The set of available properties can be introspected:
> Int.HOW.archetypes.^methods(:local)
(nominal nominalizable inheritable inheritalizable composable
composalizable generic parametric coercive definite augmentable)
For example, "nominal" means "can this serve as a nominal type", and "augmentable" means "is it allowed to augment this kind of type". The things like "inheritalizable" mean "can I inheritalize such a type" - that is, turn it into a type that I can inherit from even if I can't inherit from this type. A role is not inheritable, but it is inheritalizable, and the inheritalize operation on it will produce the pun of the role. This is what is happening under the hood when writing something like class C is SomeRole { }, and means that not only is Perl 6 open to new types of type, but those new types of type can describe how they want to work, if at all, with inheritance and composition.
Being composable with does is probably the main defining property of a role, and thus the composable property is likely the best one to use when asking "is this a role". It is also possible to look at the type of the meta-object, as suggested in another answer, but there are multiple meta-objects involved in representing roles (the short name role group, a currying of that group with parameters, and an individual role, plus an internal concretization form that supports the composition process).
> say (role RRR[::T] { }).HOW.^name
Perl6::Metamodel::ParametricRoleHOW
> say RRR.HOW.^name
Perl6::Metamodel::ParametricRoleGroupHOW
> say RRR[Int].HOW.^name
Perl6::Metamodel::CurriedRoleHOW
Thus it's rather more robust to simply check if the thing is composable.
> say (role RRR[::T] { }).HOW.archetypes.composable
1
> say RRR.HOW.archetypes.composable
1
> say RRR[Int].HOW.archetypes.composable
1
Along with that, given a "type", how can I tell if it was defined as a class or a role?
A class is a type whose meta class is of type Metamodel::ClassHOW:
sub type-classify(Mu \t) {
    given t.HOW {
        return 'class' when Metamodel::ClassHOW;
        return 'role' when Metamodel::ParametricRoleGroupHOW;
    }
    return 'other';
}
say type-classify(Int); # class
say type-classify(Rational); # role
say type-classify(Bool); # other
Regarding your second question,
given a "type", how can I tell if it was defined as a class or a role?
I haven't found a direct way of doing that. Both classes and roles have Mu in their hierarchy, so that will not distinguish them. However, only classes get to be recognized by (the curiously named) Metamodel::ClassHOW. So we can hack something like this:
role Ur { }
role F does Ur { }
class G does F { }

for Ur, F, G -> $class-or-role {
    CATCH {
        default {
            say "not classy";
        }
    }
    $class-or-role.say;
    $class-or-role.^mro.say;
}
Which will print:
(Ur)
not classy
(F)
not classy
(G)
((G) (Any) (Mu))
This is because calling ^mro on a role will raise an exception. That can be turned into a function for printing out which one is a role, and which is not.

how to recognize object's responsibility?

I'm new to OOP and I just started learning it. I find it quite complicated to determine the responsibilities of classes. Let's take an example:
We have an address book and a user wants to add a new contact to it.
In this scenario we have 2 classes:
User: represents the user that is logged in.
Contact: a contact object that consists of a name, address, phone number, etc.
And the questions:
Who has to save a new contact? The User class or the Contact class?
If we check the user's permissions before doing anything, where is the best place for that check?
Is it OK for these classes to have access to the database? (Or is it better to create a third class for the query logic?)
Thanks for any good idea ;)
A usable distribution of "responsibility" is an OOP design and architecture decision with no single, simple, correct answer. For discussion, refer to the Stack Overflow question "What is the single most influential book every programmer should read?"
You'll learn the pros and cons by coding, either using someone else's design or creating your own design that does not work well.
However, there are some useful and frequently recurring distributions of responsibility, known as software design patterns: http://en.wikipedia.org/wiki/Software_design_pattern
In my opinion, the only fixed rule is that each class/function/structure should have its responsibility clearly defined and documented, from the very first lines of code, and should "do one thing and do it well".
Contacts are user specific. Thus, every user object (class instance) should contain its own contacts object, which is a container of contact (other user) objects, each in turn consisting of a name, address, phone, etc.
class User {
    String name;
    String phone;
    String address;
    Contacts contacts;
    // ...
}

class Contacts {
    List<User> items;
}
The Contacts class should have the implementation of saving a new contact, which needs to be called from a User method, something like the following.
User u;
Contacts c = u.getContacts();
c.addContact(name, address, phone);
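A minimal sketch of what that addContact method could look like inside Contacts is shown below; the method body is an assumption, and contacts are modelled as User objects to match the classes above. Whether it persists the data itself or delegates to a separate data-access class is a design choice (see the last answer).
import java.util.ArrayList;
import java.util.List;

class Contacts {
    List<User> items = new ArrayList<User>();

    // Builds a contact entry and keeps it in the list; actual persistence
    // could live here or be delegated to a dedicated data-access class.
    void addContact(String name, String address, String phone) {
        User contact = new User();
        contact.name = name;
        contact.address = address;
        contact.phone = phone;
        items.add(contact);
    }
}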
User's permissions should be checked in the User class.
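For example, a rough sketch of that idea; the permissions field and the "ADD_CONTACT" permission name are made up for illustration and are not part of the original design:
import java.util.HashSet;
import java.util.Set;

class User {
    String name;
    String phone;
    String address;
    Contacts contacts;
    Set<String> permissions = new HashSet<String>(); // hypothetical permission store

    // The User object guards its own address book: the permission check lives here,
    // not in Contacts and not in the database code.
    void addContact(String name, String address, String phone) {
        if (!permissions.contains("ADD_CONTACT")) {
            throw new SecurityException("User is not allowed to add contacts");
        }
        contacts.addContact(name, address, phone);
    }
}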
The methods of these classes should interface with the database. For this, each class method can open a new connection to the database and execute SQL queries. Example method of the User class:
User getContact(String name) {
    Connection conn = getConnection();
    // ...
    PreparedStatement ps = conn.prepareStatement("select * from Contacts where name = ?");
    // ...
    return userRcd;
}
1) Saving a new contact should be done by a separate class that works directly with the database.
2) The best place to check user permissions is the User class, of course.
3) See item 1 :)
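To illustrate point 1, here is a rough sketch of such a class, assuming plain JDBC. The names ContactRepository and getConnection(), and the Contact getters, are invented for illustration only.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class ContactRepository {

    // The only place in the code base that talks to the Contacts table.
    void save(Contact contact) throws SQLException {
        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "insert into Contacts (name, address, phone) values (?, ?, ?)")) {
            ps.setString(1, contact.getName());
            ps.setString(2, contact.getAddress());
            ps.setString(3, contact.getPhone());
            ps.executeUpdate();
        }
    }

    private Connection getConnection() {
        // Obtain a connection from a DataSource or DriverManager here.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}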
I recommend you get a strong grasp of the SOLID principles; they are the basics of good design.

Authentication in liferay pages

We have a portlet on a Liferay page. We want to put a permission check on every action method that is performed. For example, on page A we have placed an XYZ portlet. Now, whenever any action is performed from this portlet, we want to check whether the user has a role that allows this action.
It won't be a good approach to put the code in the action method of the portlet, because we have approximately 20 such pages and portlets.
Can we have some sort of filter, so that each action request is checked to see whether the user has access to the content?
Thank you...
My idea.
Use a filter to intercept all requests
You can add a filter to the Liferay Servlet to check every request.
For that you can use a hook-plugin.
Look at this :
http://www.liferay.com/fr/documentation/liferay-portal/6.1/development/-/ai/other-hooks
http://connect-sam.com/2012/06/creating-servlet-filter-hook-in-liferay-6-1-to-restrict-access-based-on-ip-location/
The issue with a filter is that you can't access the ThemeDisplay or use PortalUtil.getUser(request).
So you must use a workaround like this:
private User _getUser(HttpServletRequest request) throws Exception {
    HttpSession session = request.getSession();
    User user = PortalUtil.getUser(request);
    if (user != null) {
        return user;
    }
    String userIdString = (String) session.getAttribute("j_username");
    String password = (String) session.getAttribute("j_password");
    if ((userIdString != null) && (password != null)) {
        long userId = GetterUtil.getLong(userIdString);
        user = UserLocalServiceUtil.getUser(userId);
    }
    return user;
}
Filtering the request
To filter the request you must get :
page id (Layout id in Liferay)
portlet id
portlet lifecycle
One more time, using a filter is a pain because you cannot get the ThemeDisplay; these params would be easy to get (as real object instances) from it.
So you must read them as parameters from the request:
final String portletId = ParamUtil.get((HttpServletRequest) servletRequest, "p_p_id", "");
final String layoutId = ParamUtil.get((HttpServletRequest) servletRequest, "plid", "");
final String portletLifecycle = ParamUtil.get((HttpServletRequest) servletRequest, "p_p_lifecycle", "");
Lifecycle details:
portletLifecycle is an int and its values mean:
0 : RENDER
1 : ACTION (the one that interests you)
2 : RESOURCE
I think that with this data you should be able to determine whether or not the user can perform the action.
You can get user roles from the user.
You can get the current page and portlet linked to the request.
And you can know if the request is an action request.
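Putting the pieces together, here is a rough sketch of such a servlet filter. It only combines the snippets above: _getUser is the workaround method shown earlier, hasActionPermission is a placeholder for your own role/portlet check, and the import paths assume Liferay 6.x.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.liferay.portal.kernel.util.ParamUtil;
import com.liferay.portal.model.User;

public class ActionPermissionFilter implements Filter {

    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
            FilterChain chain) throws IOException, ServletException {

        HttpServletRequest request = (HttpServletRequest) servletRequest;

        // p_p_lifecycle == 1 means this is an action request.
        String lifecycle = ParamUtil.get(request, "p_p_lifecycle", "");
        if ("1".equals(lifecycle)) {
            String portletId = ParamUtil.get(request, "p_p_id", "");
            try {
                User user = _getUser(request); // workaround method shown above
                if (user == null || !hasActionPermission(user, portletId)) {
                    ((HttpServletResponse) servletResponse).sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
        chain.doFilter(servletRequest, servletResponse);
    }

    private boolean hasActionPermission(User user, String portletId) {
        // Placeholder: decide based on the user's roles and the portlet id.
        return false;
    }

    public void init(FilterConfig filterConfig) { }

    public void destroy() { }
}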
Good luck with Liferay.
You can add freely configurable permissions to Liferay, see the Developer Guide for detailed information. My first guess on this would be that these affect "model resources", e.g. the data that your portlet is dealing with, rather than portlet-resources, e.g. permissions on the individual portlet itself. Think of portlet-permissions as permissions that are defined by Liferay, model-resources as permissions where you can come up with your own vocabulary on the actions, e.g. "UPDATE_ADDRESS" etc.
These permissions will typically be tied to roles, which are granted to users/usergroups/etc.
Based on this variability, it depends on the nature of your permissions if you can write a filter to generically check permissions, or if it depends on more than the individual action call.
If you determine that there is a generic solution, look up PortletFilters; they behave just like ServletFilters. These can easily provide a home for permission checks.
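If a generic check is possible, a minimal PortletFilter along these lines could do it. This is only a sketch: the role name is a placeholder, and the filter still has to be declared in portlet.xml with a filter-mapping for the portlets it should protect.
import java.io.IOException;
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.PortletException;
import javax.portlet.filter.ActionFilter;
import javax.portlet.filter.FilterChain;
import javax.portlet.filter.FilterConfig;

public class RoleCheckActionFilter implements ActionFilter {

    // Runs before processAction of every portlet this filter is mapped to.
    public void doFilter(ActionRequest request, ActionResponse response, FilterChain chain)
            throws IOException, PortletException {
        if (!request.isUserInRole("xyz-action-role")) { // placeholder role name
            throw new PortletException("User is not allowed to perform this action");
        }
        chain.doFilter(request, response);
    }

    public void init(FilterConfig filterConfig) throws PortletException { }

    public void destroy() { }
}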
It's quite hard to cover this topic in such a short answer, I hope to have given enough resources for you to continue your quest.
You can abuse some existing portlet permission, like "Add to Page", and assign it to the roles that should be allowed to call the action.
Then, in the render and action phases, validate whether the user has the necessary permission.
Or you can create a new permission and configure it via the portlet configuration. This way is cleaner, but more difficult.

Is it possible to make `@SQLDelete` take the `hibernate.default_schema` parameter into account?

In a webapp, I use Hibernate's @SQLDelete annotation in order to "soft-delete" entities (i.e. set a status column to a value that denotes their "deleted" status instead of actually deleting them from the table).
The entity code looks like this :
@Entity
@SQLDelete(sql = "update pizza set status = 2 where id = ?")
public class Pizza { ... }
Now, my problem is that the web application doesn't connect to the DB as the owner of the schema the tables belong to. E.g. the schema (in Oracle) is called pizza, and the DB user the webapp uses to connect is pizza_webapp. This is for security reasons. The pizza_webapp user only has select/update/delete rights; it can't modify the structure of the DB itself. I don't have any choice here; it is a policy that I can't change.
I specify the name of the schema where the tables actually live with the hibernate.default_schema parameter in the Hibernate config:
<property name="hibernate.default_schema">pizza</property>
This works fine for everything that goes through mapped entities; Hibernate knows how to add the schema name in front of the table name in the SQL it generates. But not for raw SQL, and @SQLDelete contains raw SQL. This is executed 'as is' and results in a "table or view not found" error.
So far we worked around the issue by adding synonyms to the pizza_webapp schema, pointing to the pizza schema. It works, but it is not fun to maintain across multiple DBs when entities are added.
So, is it possible to make @SQLDelete take the hibernate.default_schema parameter into account?
(NB: Obviously I don't want to hard-code the schema name in the SQL either...)
Yes, it is possible:
@SQLDelete(sql = "update {h-schema}pizza set status = 2 where id = ?")
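Hibernate substitutes the {h-schema} placeholder with the configured hibernate.default_schema value when it prepares the statement, so applied to the entity from the question this becomes:
@Entity
@SQLDelete(sql = "update {h-schema}pizza set status = 2 where id = ?")
public class Pizza { ... }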
I could not find any Hibernate solution to this problem. However, I found a workaround based on an Oracle feature. I do this to my session before using it:
// set the default schema at DB session level for raw SQL queries (see @SQLDelete)
HibernateUtil.currentSession().doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        connection.createStatement().execute("ALTER SESSION SET CURRENT_SCHEMA=" + HibernateUtil.getDefaultSchema());
    }
});
It works fine, but unfortunately only on Oracle (which is fine for us, for now at least). Maybe there are ways to achieve the same thing on other RDBMSs as well?
Edit: the getDefaultSchema() method in my HibernateUtil class does this to get the default schema from Hibernate's config:
defaultSchema = config.getProperty("hibernate.default_schema");
where config is my org.hibernate.cfg.Configuration object.
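For completeness, a minimal sketch of that helper, assuming the Configuration object is kept around after the SessionFactory is built (how it is populated is up to your bootstrap code):
import org.hibernate.cfg.Configuration;

public class HibernateUtil {

    private static Configuration config; // set while building the SessionFactory

    // Returns the schema configured via hibernate.default_schema, or null if unset.
    public static String getDefaultSchema() {
        return config.getProperty("hibernate.default_schema");
    }
}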

Make openam/opensso return role name instead of role universal id

I'm using OpenAM 9.5.2 for authenticating users on an application. The authentication works well, but I'm having issues getting the user's memberships in the final application.
I've defined the group "somegroup" in OpenAM and added my user to this group. Now, in my application, I want to test whether the authenticated user is a member of this group. If I test it with:
request.isUserInRole("somegroup");
I get a false result. Actually, I have to test
request.isUserInRole("id=somegroup,ou=group,dc=opensso,dc=java,dc=net");
in order to get a true response.
I know that it's possible to define a privileged attribute mapping list in the SSO agent configuration to map id=somegroup,ou=group,dc=opensso,dc=java,dc=net to somegroup, but it's not suitable in my situation since roles and groups are stored in an external database. It's not convenient to define the roles in the database and the mapping in the SSO agent configuration.
So my question : is there a way to make openam use the "short" (i.e. somegroup) group name instead of its long universal id ?
This is not an answer, just a remark.
I've done some research in the OpenAM sources, and it seems to confirm that the role name stored in the repository is replaced by the universal id when OpenAM builds the result. This is done in the com.sun.identity.idm.server.IdRepoJAXRPCObjectImpl class:
public Set getMemberships_idrepo(String token, String type, String name,
        String membershipType, String amOrgName, String amsdkDN)
        throws RemoteException, IdRepoException, SSOException {
    SSOToken ssoToken = getSSOToken(token);
    Set results = new HashSet();
    IdType idtype = IdUtils.getType(type);
    IdType mtype = IdUtils.getType(membershipType);
    Set idSet = idServices.getMemberships(ssoToken, idtype, name, mtype, amOrgName, amsdkDN);
    if (idSet != null) {
        Iterator it = idSet.iterator();
        while (it.hasNext()) {
            AMIdentity id = (AMIdentity) it.next();
            results.add(IdUtils.getUniversalId(id));
        }
    }
    return results;
}
To my knowledge this is not currently possible with out-of-the-box code. If you have a limited number of groups, then privileged attribute mapping could be a way to go, but if not, the issue gets more complicated.
You could try to change the AmRealm implementation (authenticateInternal method) to match your requirements and hook the new class into the container-specific ServiceResolver class (like http://sources.forgerock.org/browse/openam/trunk/opensso/products/j2eeagents/tomcat/v6/source/com/sun/identity/agents/tomcat/v6/AmTomcatAgentServiceResolver.java?r=700&r=700&r=700 )
You can also create a JIRA issue about providing a config property to put membership information into roles in non-UUID format.
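If neither change is an option, a small application-side wrapper is a possible stop-gap. This is only a sketch: the DN suffix below is simply the universal-id format shown in the question and would have to match your OpenAM configuration, and the class and method names are invented for illustration.
import javax.servlet.http.HttpServletRequest;

public class OpenAmRoles {

    // Suffix taken from the universal id format shown in the question.
    private static final String GROUP_SUFFIX = ",ou=group,dc=opensso,dc=java,dc=net";

    // Lets application code keep using short group names such as "somegroup".
    public static boolean isUserInGroup(HttpServletRequest request, String shortName) {
        return request.isUserInRole("id=" + shortName + GROUP_SUFFIX);
    }
}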