I have a singleton class AppSetting in an ASP.NET app where I need to check a value and optionally update it. I know I need to use a locking mechanism to prevent multi-threading issues, but can someone verify that the following is a valid approach?
private static void ValidateEncryptionKey()
{
    if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
    {
        lock (AppSetting.Instance)
        {
            if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
            {
                AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
                AppSetting.Instance.Save();
            }
        }
    }
}
I have also seen examples where you lock on a private field in the current class, but I think the above approach is more intuitive.
Thanks!
Intuitive, maybe, but the reason those examples lock on a private field is to ensure that no other piece of code in the application can take the same lock and deadlock the application; defending against that is always good practice.
If it's a small application and you're the only programmer working on it, you can probably get away with locking on a public field/property (which I presume AppSetting.Instance is?). But in any other circumstances, I'd strongly recommend that you go the private-field route. It will save you a whole lot of debugging time in the future, when someone else (or future you, having forgotten the implementation details of this bit) takes a lock on AppSetting.Instance somewhere distant in the code and everything starts crashing.
I'd also suggest you lose the outermost if. Taking a lock isn't free, sure, but it's a lot faster than doing a string comparison, especially since you need to do it a second time inside the lock anyway.
So, something like:
private readonly object _instanceLock = new object();

private static void ValidateEncryptionKey()
{
    lock (AppSetting.Instance._instanceLock)
    {
        if (AppSetting.Instance.EncryptionKey.Equals(Constants.ENCRYPTION_KEY, StringComparison.Ordinal))
        {
            AppSetting.Instance.EncryptionKey = GenerateNewEncryptionKey();
            AppSetting.Instance.Save();
        }
    }
}
An additional refinement, depending on what your requirements are to keep the EncryptionKey consistent with the rest of the state in AppSetting.Instance, would be to use a separate private lock object for the EncryptionKey and any related fields, rather than locking the entire instance every time.
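The same private-lock idea translated to Java terms, as a hypothetical sketch (`synchronized` on a dedicated object plays the role of C#'s `lock`; `"DEFAULT_KEY"` and the class/method names stand in for `Constants.ENCRYPTION_KEY` and the real `AppSetting` members):

```java
import java.util.UUID;

public final class AppSetting {
    private static final AppSetting INSTANCE = new AppSetting();

    // Dedicated lock object: private, so no code outside this class
    // can ever synchronize on it and deadlock us.
    private final Object keyLock = new Object();

    private String encryptionKey = "DEFAULT_KEY";

    private AppSetting() {}

    public static AppSetting getInstance() { return INSTANCE; }

    public void validateEncryptionKey() {
        synchronized (keyLock) {
            // Check-and-update happens entirely inside the lock.
            if ("DEFAULT_KEY".equals(encryptionKey)) {
                encryptionKey = UUID.randomUUID().toString();
                save();
            }
        }
    }

    public String getEncryptionKey() {
        synchronized (keyLock) { return encryptionKey; }
    }

    private void save() {
        // Persisting the settings is omitted in this sketch.
    }
}
```

Because the lock object is private and only guards the key, unrelated operations on the singleton never contend with key validation.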
Although I'm not sure if I've chosen the right name for it, anyone who's worked on a large project with lots of features has probably seen it: some boolean return function gets bloated with the interaction of every little feature. Eventually what was once a simple one or two variable check becomes:
public boolean showFavoritesTool(UserData userData){
    if(currentPage.isPremiumPage())
    {
        return true;
    }
    if(!userData.isLoggedIn())
    {
        return false;
    }
    if(userData.isMember())
    {
        return userData.getPreferences().isFavoritesTurnedOn();
    }
    if(getUrlParams()["showFavorites"])
    {
        return getUrlParams()["showFavorites"];
    }
    return false;
}
Edit: Let me clarify, this is just an early example of functions like this. At some point, it would grow as new features are developed to at least twice this size. The function I was looking at that prompted this question had at least 15 variables, some of which were nested. This code may look simple, but it won't remain so as new variables are added.
Every time a new feature is added, another entry is thrown into the flag function. The entries don't usually overlap, but when they do, you can be sure that no one has thought about what should happen. It doesn't take long before these functions become hard to interpret.
Is there a cleaner solution to this? Also, if that cleaner solution involves more architecture, when would one implement it? As soon as a second variable is added? Or is there some breaking point?
Your question is rather general and it is difficult to understand its scope, nevertheless I will try and provide some insight which will hopefully answer it.
Every time a new feature is added, another entry is thrown into the flag function. They don't usually overlap, but when they do you can be sure that no one has thought about what should happen.
This is a nasty side effect of poor planning and design. Code should be closed for modification but open to extension.
Design with the future in mind. As mentioned previously by Yuval Itzchakov, an interface which contains an abstract showFavoritesTool() method, overridden by each user class depending on that class's requirements, will provide greater flexibility and adhere to the Open/Closed principle. Unfortunately, with the limited information given, it would be difficult to create an example which fits your problem.
Incidentally, there will be occasions where multiple boolean expressions need checking. Why not simplify the method by using one concise statement?
For example:
public boolean showFavoritesTool(UserData userData){
    return currentPage.isPremiumPage()
        || userData.isLoggedIn()
           && (userData.isMember()
                   ? userData.getPreferences().isFavoritesTurnedOn()
                   : getUrlParams()["showFavorites"]);
}
I would just rewrite it in a more concise and understandable form. This requires some care, because early returns, (double) negations and the like often obscure the behavior, and it is easy to introduce bugs. For example, letting a non-member fall through to the URL-parameter check when the member's preference is off would subtly change the original behavior, so the member case has to stay exclusive:
public boolean showFavoritesTool(UserData userData)
{
    return currentPage.isPremiumPage()
        || userData.isLoggedIn()
           && (userData.isMember()
                   ? userData.getPreferences().isFavoritesTurnedOn()
                   : getUrlParams()["showFavorites"]);
}
If the logic becomes really complex, it helps to introduce some intermediate variables.
public boolean showFavoritesTool(UserData userData)
{
    var isPremiumPage = currentPage.isPremiumPage();
    var isLoggedIn = userData.isLoggedIn();
    var isMember = userData.isMember();
    var memberHasFavoritesTurnedOn = isMember && userData.getPreferences().isFavoritesTurnedOn();
    var urlIndicatesShowFavorites = getUrlParams()["showFavorites"];
    return isPremiumPage
        || isLoggedIn && (isMember ? memberHasFavoritesTurnedOn : urlIndicatesShowFavorites);
}
In this example it is a bit too much, but you get the idea. It is usually a good idea to align the meaning of intermediate variables with business or technical requirements.
If all the validations stay in the same place and are clear (for example, take the ! out of the second if and make it a return true; put the returned value in the third if and make it a return true as well; same with the last if), and it is commented when the "favorites tool" should be shown, I don't see a problem with it.
I think your question is a bit too general, since I don't fully understand the scope of the problem; there are multiple uses of fields which we have no knowledge about.
But in general, if you are trying to categorize which set of features you want to expose to a group of users, you could make them all inherit a base type which has an abstract method called ShowFavoriteTool, like so:
public abstract class BaseData
{
    public abstract bool ShowFavoriteTool();
}

public class UserData : BaseData
{
    public override bool ShowFavoriteTool()
    {
        if ....
    }
}
or if you're more into interfaces, you could depend on an IFavoriteTool:
public interface IFavoriteTool
{
    bool ShowFavoriteTool();
}

public class UserData : IFavoriteTool
{
    public bool ShowFavoriteTool()
    {
        if..
    }
}
and then you could change your method to:
public bool ShowFavoriteTool(UserData userData)
{
    var favoriteTool = (IFavoriteTool) userData;
    return favoriteTool.ShowFavoriteTool();
}
This is just a lead, since I don't really understand the domain problem you're dealing with.
Hope this helps.
This is what's known as "code rot." It's a process of source code degrading in performance and/or maintainability. Eventually it may degrade to a point where the performance gets so bad, or the code becomes so unmaintainable, that you have to start over (version 2.0).
Code rot occurs as a result of incremental enhancements or bug fixes that cause your design to veer further and further away from the original design.
To combat code rot, you need to have (and enforce) good standards. Have code reviews, code audits, document source, write unit tests, etc.
I will occasionally create my own enum to declare the state of the object. In this case, something similar to this:
public enum FavoriteToolState {
    IsVisible,
    IsHidden
}
This way, you could set the state of the favorite tool inside the inner methods which actually determine whether the page is premium, or whether the user has logged in, or any of those other conditions, and instead of checking each of those methods, you could check whether the current FavoriteToolState is FavoriteToolState.IsVisible.
Whether or not this is cleaner could be argued either way, I suppose. I prefer this way in some cases.
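A minimal Java sketch of that approach (the class and event-handler names are hypothetical; the point is that each feature flips the state at the moment its condition becomes known, instead of contributing another branch to one big flag function):

```java
public class FavoritesFeature {
    public enum FavoriteToolState { IS_VISIBLE, IS_HIDDEN }

    private FavoriteToolState state = FavoriteToolState.IS_HIDDEN;

    // Each feature updates the state where its condition is decided.
    public void onPremiumPageLoaded() {
        state = FavoriteToolState.IS_VISIBLE;
    }

    public void onUserLoggedOut() {
        state = FavoriteToolState.IS_HIDDEN;
    }

    public void onPreferencesLoaded(boolean favoritesOn) {
        state = favoritesOn ? FavoriteToolState.IS_VISIBLE
                            : FavoriteToolState.IS_HIDDEN;
    }

    // The check itself stays a one-liner no matter how many features exist.
    public boolean showFavoritesTool() {
        return state == FavoriteToolState.IS_VISIBLE;
    }
}
```

The trade-off is that the decision logic is now spread across the event handlers rather than visible in one place, which is exactly the "argued either way" point above.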
I have a habit of creating classes that tend to pass objects around to perform operations on them rather than assigning them to a member variable and having operations refer to the member variable. It feels much more procedural to me than OO.
Is this a terrible practice? If so, what are the adverse effects (performance, memory consumption, more error-prone)? Is it simply easier and more closely aligned to OO principles like encapsulation to favour member variables?
A contrived example of what I mean is below. I tend to do the following;
public class MyObj
{
    public MyObj() {}

    public void DoVariousThings(OtherObj oo)
    {
        if (Validate(oo))
        {
            Save(oo);
        }
    }

    private bool Validate(OtherObj oo)
    {
        // Do stuff related to validation
    }

    private bool Save(OtherObj oo)
    {
        // Do stuff related to saving
    }
}
whereas I suspect I should be doing the following;
public class MyObj
{
    private OtherObj _oo;

    public MyObj(OtherObj oo)
    {
        _oo = oo;
    }

    public void DoVariousThings()
    {
        if (Validate())
        {
            Save();
        }
    }

    private bool Validate()
    {
        // Do stuff related to validation with _oo
    }

    private bool Save()
    {
        // Do stuff related to saving with _oo
    }
}
If you write your programs in an object oriented language, people will expect object oriented code. As such, in your first example, they would probably expect that the reason for making oo a parameter is that you will use different objects for it all the time. In your second example, they would know that you always use the same instance (as initialized in the constructor).
Now, if you use the same object all the time, but still write your code like in your first example, you will have them thoroughly confused. When an interface is well designed, it should be obvious how to use it. This is not the case in your first example.
I think you already answered your question yourself, you seem to be aware of the fact that the 2nd approach is more favorable in general and should be used (unless there are serious reasons for the first approach).
Advantages that come to my mind immediately:
Simplified readability and maintainability, both for you and for others
Only one entry point, so only one place where you need to check for != null, etc.
In case you want to put that class under test, it's way easier, i.e., getting something like this (extracting interface IOtherObj from OtherObj and working with that):
public MyObj(IOtherObj oo)
{
    if (oo == null) throw...
    _oo = oo;
}
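To make the testability point concrete, here is a Java sketch of the same shape (`IOtherObj` is a hypothetical interface extracted from `OtherObj`, so a test can pass in a stub instead of the real dependency):

```java
interface IOtherObj {
    boolean isValid();
    void save();
}

class MyObj {
    private final IOtherObj oo;

    public MyObj(IOtherObj oo) {
        // Single entry point: the null check happens exactly once.
        if (oo == null) throw new IllegalArgumentException("oo must not be null");
        this.oo = oo;
    }

    // Returns true when validation passed and the object was saved.
    public boolean doVariousThings() {
        if (oo.isValid()) {
            oo.save();
            return true;
        }
        return false;
    }
}
```

In a test you would hand the constructor a stub implementation of `IOtherObj` that records whether `save()` was called, with no real persistence involved.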
Talking of the adverse effects of your way: there are none, as long as you are keeping the program and the code to yourself. But what you are doing is not standard practice, so if after some time you start writing libraries and code that may be used by others, it becomes a big problem. They may pass in any foo object and hope that it will work.
You have to validate the object before it is passed, and if the validation fails, handle it accordingly; if you use the standard OOP way, there is no need for that validation, nor for handling the cases where an object of an inappropriate type is passed.
In a nutshell, your way is bad for:
1. Code reusability.
2. Exception handling: you end up having to handle more of them.
3. General practice: it is okay if you are keeping things to yourself; otherwise, it is not good practice.
Hope that clears up some doubt.
I'm looking to implement a command pattern to support undo/redo in my application. The data is very closely tied together, so there are some downstream consequences of modifying some of my objects that I also want to be able to undo. My main concern is where I should put the code that executes the downstream commands. For example:
class MoveObjectCommand
{
    private hierarchicalObject:internalObject;
    private oldLocation;

    public MoveObjectCommand(hierarchicalObject:newObject)
    {
        internalObject = newObject;
    }

    public Execute()
    {
        oldLocation = internalObject.Location;
        internalObject.Location = someNewLocation;
        foreach(hierarchicalObject:child in internalObject.Children)
        {
            if(someNewLocation == specialPlace)
            {
                var newCommand:MoveObjectCommand = new MoveObjectCommand(child);
                CommandManager.add(newCommand);
            }
        }
    }

    public Undo()
    {
        internalObject.Location = oldLocation;
    }
}
As far as I can tell, something like this would work fine, but I can't wrap my head around where the majority of the execution code should actually go. Should the hierarchicalObject have a .changeLocation() method that adds all the subsequent commands, or should they be in the command itself like it is above? The only difference I can think of is that in the example above, the MoveObjectCommand would have to be called for subsequent changes to process, whereas the other way it could be called without needing a command and still process the same way (which could have negative consequences for tracking undo/redo steps). Am I overthinking this? Where would you put it and why? (Obviously this example doesn't hit all angles, but are there any general best practices with the command pattern?)
Sounds like you should have the changeLocation() method in the model (hierarchicalObject, I presume). Just store the new location and the object in the command.
For undo/redo you will need a list or two for commands.
It sounds like your hierarchicalObject may be a Composite (http://en.wikipedia.org/wiki/Composite_pattern), so have a look at the macro command in the Gang of Four book; also review http://en.wikipedia.org/wiki/Command_pattern.
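A Java sketch of that shape (all names here are hypothetical; the command captures both old and new values itself, and the manager keeps the "list or two" as an undo stack and a redo stack):

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

// The command remembers the old value at construction time,
// so undo needs no help from the model.
class MoveCommand implements Command {
    private final int[] location; // stand-in for the model object
    private final int oldX, newX;

    MoveCommand(int[] location, int newX) {
        this.location = location;
        this.oldX = location[0];
        this.newX = newX;
    }

    public void execute() { location[0] = newX; }
    public void undo()    { location[0] = oldX; }
}

class CommandManager {
    private final Deque<Command> done = new ArrayDeque<>();
    private final Deque<Command> undone = new ArrayDeque<>();

    void run(Command c) {
        c.execute();
        done.push(c);
        undone.clear(); // a fresh action invalidates the redo history
    }

    void undo() {
        if (done.isEmpty()) return;
        Command c = done.pop();
        c.undo();
        undone.push(c);
    }

    void redo() {
        if (undone.isEmpty()) return;
        Command c = undone.pop();
        c.execute();
        done.push(c);
    }
}
```

Downstream consequences can then be modeled as additional commands pushed through `run()`, so they land on the same undo stack in order.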
Christopher Alexander says: "Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice".
Today I read a book and the author wrote that in a well-designed class the only way to access attributes is through one of that class methods. Is it a widely accepted thought? Why is it so important to encapsulate the attributes? What could be the consequences of not doing it? I read somewhere earlier that this improves security or something like that. Any example in PHP or Java would be very helpful.
Is it a widely accepted thought?
In the object-oriented world, yes.
Why is it so important to encapsulate the attributes? What could be the consequences of not doing it?
Objects are intended to be cohesive entities containing data and behavior that other objects can access in a controlled way through a public interface. If a class does not encapsulate its data and behavior, it no longer has control over the data being accessed and cannot fulfill the contracts with other objects implied by its public interface.
One of the big problems with this is that if a class has to change internally, the public interface shouldn't have to change. That way it doesn't break any code and other classes can continue using it as before.
Any example in PHP or Java would be very helpful.
Here's a Java example:
public class MyClass {
    // Should not be < 0
    public int importantValue;
    ...
    public void setImportantValue(int newValue) {
        if (newValue < 0) {
            throw new IllegalArgumentException("value cannot be < 0");
        }
        importantValue = newValue;
    }
    ...
}
The problem here is that because I haven't encapsulated importantValue by making it private rather than public, anyone can come along and circumvent the check I put in the setter to prevent the object from having an invalid state. importantValue should never be less than 0, but the lack of encapsulation makes it impossible to prevent it from being so.
What could be the consequences of not doing it?
The whole idea behind encapsulation is that all knowledge of anything related to the class (other than its interface) is within the class itself. For example, allowing direct access to attributes puts the onus of making sure any assignments are valid on the code doing the assigning. If the definition of what's valid changes, you have to go through and audit everything using the class to make sure they conform. Encapsulating the rule in a "setter" method means you only have to change it in one place, and any caller trying anything funny can get an exception thrown at it in return. There are lots of other things you might want to do when an attribute changes, and a setter is the place to do it.
Whether or not allowing direct access for attributes that don't have any rules to bind them (e.g., anything that fits in an integer is okay) is good practice is debatable. I suppose that using getters and setters is a good idea for the sake of consistency, i.e., you always know that you can call setFoo() to alter the foo attribute without having to look up whether or not you can do it directly. They also allow you to future-proof your class so that if you have additional code to execute, the place to put it is already there.
Personally, I think having to use getters and setters is clumsy-looking. I'd much rather write x.foo = 34 than x.setFoo(34), and I look forward to the day when some language comes up with the equivalent of database triggers for members, allowing you to define code that fires before, after or instead of assignments.
Opinions on how "good OOD" is achieved are a dime a dozen, and even very experienced programmers and designers tend to disagree about design choices and philosophies. This could be a flame-war starter if you ask people across a wide variety of language backgrounds and paradigms.
And yes, in theory, theory and practice are the same, so language choice shouldn't influence high-level design very much. But in practice it does, and good and bad things happen because of that.
Let me add this:
It depends. Encapsulation (in a supporting language) gives you some control over how your classes are used, so you can tell people: this is the API, and you have to use it. In other languages (e.g. Python) the difference between the official API and informal (subject-to-change) interfaces is by naming convention only (after all, we're all consenting adults here).
Encapsulation is not a security feature.
Another thought to ponder
Encapsulation with accessors also provides much better maintainability in the future. In Feanor's answer above, it works great to enforce security checks (assuming your instance variable is private), but it can have much further-reaching benefits.
Consider the following scenario:
1) you complete your application, and distribute it to some set of users (internal, external, whatever).
2) BigCustomerA approaches your team and wants an audit trail added to the product.
If everyone is using the accessor methods in their code, this becomes almost trivial to implement. Something like so:
MyAPI Version 1.0
public class MyClass {
    private int importantValue;
    ...
    public void setImportantValue(int newValue) {
        if (newValue < 0) {
            throw new IllegalArgumentException("value cannot be < 0");
        }
        importantValue = newValue;
    }
    ...
}
MyAPI V1.1 (now with audit trails)
public class MyClass {
    private int importantValue;
    ...
    public void setImportantValue(int newValue) {
        if (newValue < 0) {
            throw new IllegalArgumentException("value cannot be < 0");
        }
        this.addAuditTrail("importantValue", importantValue, newValue);
        importantValue = newValue;
    }
    ...
}
Existing users of the API make no changes to their code and the new feature (audit trail) is now available.
Without encapsulation using accessors, you're faced with a huge migration effort.
When coding for the first time, it will seem like a lot of work. It's much faster to type class.varName = something than class.setVarName(something); but if everyone took the easy way out, getting paid for BigCustomerA's feature request would be a huge effort.
In Object-Oriented Programming there is a principle known as the Open/Closed Principle (http://en.wikipedia.org/wiki/Open/closed_principle): a well-designed class should be open for extensibility (inheritance) but closed for modification of internal members (encapsulation). It means that you should not be able to modify the state of an object without the object being in control of the change.
So, newer languages only modify internal variables (fields) through properties (getter and setter methods in C++ or Java). In C#, properties compile to methods in MSIL.
C#:
int _myproperty = 0;

public int MyProperty
{
    get { return _myproperty; }
    set { if (_someVariable == someConstantValue) { _myproperty = value; } else { _myproperty = _someOtherValue; } }
}
C++/Java:
int _myproperty = 0;

public void setMyProperty(int value)
{
    if (value == someConstantValue) { _myproperty = value; } else { _myproperty = _someOtherValue; }
}

public int getMyProperty()
{
    return _myproperty;
}
Take these ideas (from Head First C#):
Think about ways the fields can be misused. What can go wrong if they're not set properly?
Is everything in your class public? Spend some time thinking about encapsulation.
What fields require processing or calculation? They are prime candidates.
Only make fields and methods public if you need to. If you don't have a reason to declare something public, don't.
I've searched StackOverflow and there are many ConcurrentModificationException questions. After reading them, I'm still confused. I'm getting a lot of these exceptions. I'm using a "Registry" setup to keep track of Objects:
public class Registry {
    public static ArrayList<Messages> messages = new ArrayList<Messages>();
    public static ArrayList<Effect> effects = new ArrayList<Effect>();
    public static ArrayList<Projectile> proj = new ArrayList<Projectile>();

    /** Clears all arrays */
    public static void recycle(){
        messages.clear();
        effects.clear();
        proj.clear();
    }
}
I'm adding and removing objects to these lists by accessing the ArrayLists like this: Registry.effects.add(obj) and Registry.effects.remove(obj)
I managed to get around some errors by using a retry loop:
//somewhere in my game..
boolean retry = true;
while (retry){
    try {
        removeEffectsWithSource("CHARGE");
        retry = false;
    }
    catch (ConcurrentModificationException c){}
}
private void removeEffectsWithSource(String src) throws ConcurrentModificationException {
    ListIterator<Effect> it = Registry.effects.listIterator();
    while ( it.hasNext() ){
        Effect f = it.next();
        if ( f.Source.equals(src) ) {
            f.unapplyEffects();
            Registry.effects.remove(f);
        }
    }
}
But in other cases this is not practical. I keep getting ConcurrentModificationExceptions in my drawProjectiles() method, even though it doesn't modify anything. I suspect the culprit is touching the screen, which creates a new Projectile object and adds it to Registry.proj while the draw method is still iterating.
I can't very well do a retry loop with the draw method, or it will re-draw some of the objects. So now I'm forced to find a new solution.. Is there a more stable way of accomplishing what I'm doing?
Oh and part 2 of my question: Many people suggest using ListIterators (as I have been using), but I don't understand.. if I call ListIterator.remove() does it remove that object from the ArrayList it's iterating through, or just remove it from the Iterator itself?
Top line, three recommendations:
Don't do the "wrap an exception in a loop" thing. Exceptions are for exceptional conditions, not control flow. (Effective Java #57 or Exceptions and Control Flow or Example of "using exceptions for control flow")
If you're going to use a Registry object, expose thread-safe behavioral methods, not accessors, on that object, and contain the concurrency reasoning within that single class. Your life will get better. No exposing collections in public fields. (Ew, and why are those fields static?)
To solve the actual concurrency issues, do one of the following:
Use synchronized collections (potential performance hit)
Use concurrent collections (sometimes complicated logic, but probably efficient)
Use snapshots (probably with synchronized or a ReadWriteLock under the covers)
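For instance, option 2 with a `CopyOnWriteArrayList` (a sketch with `String` standing in for `Effect` and hypothetical method names; its iterators work on a snapshot and never throw `ConcurrentModificationException`, which suits a draw loop that reads far more often than the game mutates):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class EffectRegistry {
    // Every mutation copies the backing array; iteration is lock-free.
    private final List<String> effects = new CopyOnWriteArrayList<>();

    void add(String effect) {
        effects.add(effect);
    }

    // Safe even if another thread adds effects while this runs.
    void removeWithSource(String src) {
        effects.removeIf(e -> e.startsWith(src));
    }

    int size() {
        return effects.size();
    }
}
```

The copy-on-write cost makes this a poor fit if effects are added or removed every frame, but it is the simplest drop-in when writes are rare.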
Part 1 of your question
You should use a concurrent data structure for the multi-threaded scenario, or use a synchronizer and make a defensive copy. Probably directly exposing the collections as public fields is wrong: your registry should expose thread-safe behavioral accessors to those collections. For instance, maybe you want a Registry.safeRemoveEffectBySource(String src) method. Keep the threading specifics internal to the registry, which seems to be the "owner" of this aggregate information in your design.
Since you probably don't really need List semantics, I suggest replacing these with ConcurrentHashMaps wrapped into Set using Collections.newSetFromMap().
Your draw() method could either a) use a Registry.getEffectsSnapshot() method that returns a snapshot of the set; or b) use an Iterable<Effect> Registry.getEffects() method that returns a safe iterable version (maybe just backed by the ConcurrentHashMap, which won't throw CME under any circumstances). I think (b) is preferable here, as long as the draw loop doesn't need to modify the collection. This provides a very weak synchronization guarantee between the mutator thread(s) and the draw() thread, but assuming the draw() thread runs often enough, missing an update or something probably isn't a big deal.
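A sketch of the suggested construction (hypothetical element type and class name; the resulting set's iterators are weakly consistent, so mutating during iteration cannot throw `ConcurrentModificationException`):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ConcurrentEffects {
    // Concurrent Set built from a ConcurrentHashMap, as suggested above.
    static final Set<String> effects =
            Collections.newSetFromMap(new ConcurrentHashMap<>());

    // Iterating while the set is mutated is safe: the weakly consistent
    // iterator may or may not see the new element, but it won't throw.
    static int countWhileMutating() {
        int seen = 0;
        for (String e : effects) {
            effects.add("added-during-iteration");
            seen++;
        }
        return seen;
    }
}
```

This is the property the draw() thread relies on in option (b): it may miss an effect added mid-frame, but it never blows up.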
Part 2 of your question
As another answer notes, in the single-thread case, you should just make sure you use the Iterator.remove() to remove the item, but again, you should wrap this logic inside the Registry class if at all possible. In some cases, you'll need to lock a collection, iterate over it collecting some aggregate information, and make structural modifications after the iteration completes. You ask if the remove() method just removes it from the Iterator or from the backing collection... see the API contract for Iterator.remove() which tells you it removes the object from the underlying collection. Also see this SO question.
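The "collect during iteration, modify after" pattern mentioned above looks like this (a sketch with `String` standing in for `Effect`):

```java
import java.util.ArrayList;
import java.util.List;

class DeferredRemoval {
    // Iterate first, only collecting matches; mutate after the loop ends,
    // so the iteration itself never observes a structural modification.
    static void removeWithSource(List<String> effects, String src) {
        List<String> toRemove = new ArrayList<>();
        for (String e : effects) {
            if (e.startsWith(src)) {
                toRemove.add(e);
            }
        }
        effects.removeAll(toRemove);
    }
}
```

In the multi-threaded case this whole method would still need to run under the lock guarding the list; the deferral only fixes the single-threaded iterate-and-remove problem.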
You cannot directly remove an item from a collection while you are still iterating over it, otherwise you will get a ConcurrentModificationException.
The solution is, as you hint, to call the remove method on the Iterator instead. This will remove it from the underlying collection as well, but it will do it in such a way that the Iterator knows what's going on and so doesn't throw an exception when it finds the collection has been modified.
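A small sketch of the difference (removing via the iterator succeeds and updates the underlying list; calling `remove` on the list itself mid-iteration is what throws):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class IteratorRemoveDemo {
    static void removeMatching(List<String> items, String prefix) {
        Iterator<String> it = items.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith(prefix)) {
                // Removes the element from the underlying list too,
                // without a ConcurrentModificationException.
                it.remove();
            }
        }
    }
}
```

Replacing `it.remove()` with `items.remove(...)` inside that loop would reproduce the exception from the question.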