NHibernate virtual methods & ReSharper

I am curious how other ReSharper users deal with R#'s complaint about virtual methods it thinks are unused because it can't tell that NHib will use them at runtime. I currently leave it as a hint, reluctantly, although I am tempted to shut it off completely.
Cheers,
Berryl
Example property or method where R# sees that a virtual member is never overridden:
public virtual string Hello { get { return "Hello"; } }

Have you tried adding the UsedImplicitlyAttribute?
EDIT: This works for me at the method level to suppress "Method 'Fink' is never used":
[UsedImplicitly]
private void Fink()
{
    Console.WriteLine("Fink!");
}
Note that you can also go to ReSharper/Options/Code Inspection/Settings and add to the Generated code regions. We do that for our CodeSmith templates.
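Applied to the virtual property from the question, that looks roughly like this (a sketch only; the class name is made up, it assumes the JetBrains.Annotations attributes are available in the project, and whether it silences the hint depends on which inspection is actually firing):
using JetBrains.Annotations;

public class Greeter // hypothetical entity class
{
    // Marks the member as used implicitly (here, by NHibernate at runtime),
    // which is what the unused-member inspections key off.
    [UsedImplicitly]
    public virtual string Hello
    {
        get { return "Hello"; }
    }
}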

You can safely keep them as hints.
It would be nice if R# allowed different settings per project, so you could disable it for your Domain classes only.
It's important to remember R# is just a tool; don't let it do the thinking for you. If an inspection is unhelpful most of the time, just disable it (or leave it as a hint, like you did).

FxCop (/VS2010 Code Analysis), possible to flag method result as "callers responsibility now" for IDisposable?

If I write the following code:
public void Execute()
{
    var stream = new MemoryStream();
    // ...
}
then code analysis will flag this as:
Warning 1 CA2000 : Microsoft.Reliability : In method 'ServiceUser.Execute()', call System.IDisposable.Dispose on object 'stream' before all references to it are out of scope. C:\Dev\VS.NET\DisposeTest\DisposeTest\ServiceUser.cs 14 DisposeTest
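For reference, the usual way to satisfy that warning in this simple case is to dispose the stream before the reference leaves scope, e.g. with a using block; a minimal sketch:
using System.IO;

public class ServiceUser
{
    public void Execute()
    {
        // Disposing the stream before the reference goes out of scope
        // is exactly what CA2000 is asking for.
        using (var stream = new MemoryStream())
        {
            // ... work with the stream
        }
    }
}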
However, if I create a factory pattern, I still might be required to dispose of the object, but now FxCop/Code Analysis doesn't complain. Rather, it complains about the factory method, not the code that calls it. (I think I had an example that did complain about the factory method, but the one I post here doesn't, so I struck that out)
Is there a way, for instance using attributes, to move the responsibility of the IDisposable object out of the factory method and onto the caller instead?
Take this code:
public class ServiceUser
{
    public void Execute()
    {
        var stream = StreamFactory.GetStream();
        Debug.WriteLine(stream.Length);
    }
}

public static class StreamFactory
{
    public static Stream GetStream()
    {
        return new MemoryStream();
    }
}
In this case, there are no warnings. I'd like FxCop/CA to still complain about my original method. It is still my responsibility to handle that object.
Is there any way I can tell FxCop/CA about this? For instance, I recently ventured into the annotation attributes that ReSharper provides, in order to tell its analysis engine information it would otherwise not be able to understand.
So I envision something like this:
public static class StreamFactory
{
    [return: CallerResponsibility]
    public static Stream GetStream()
    {
        return new MemoryStream();
    }
}
Or is this design way off?
There is a difference between FxCop 10 (which ships with the Windows 7 and .NET 4.0 SDK) and Code Analysis 2010 (which ships with Visual Studio Premium and higher). Code Analysis 2010 has a set of additional rules, which includes a highly improved version of the IDisposable rules.
With Code Analysis 2010 under Visual Studio Premium, the Factory isn't being flagged (as the rule now sees the IDisposable variable is returned to the calling method). The Receiving method, however, isn't flagged either, due to one of the corner case exceptions to the rule. There is a list of method names that will cause the rule to trigger. If you rename your GetStream method to CreateStream, suddenly the rule will trigger:
Warning 4 CA2000 : Microsoft.Reliability : In method 'ServiceUser.Execute()',
call System.IDisposable.Dispose on object 'stream' before all references to it are out
of scope. BadProject\Class1.cs 14 BadProject
I was unable to locate the list of method prefixes that will work. I've tried a few: Create~ and Open~ trigger the rule; many others that you might expect to work, including Build~, Make~, and Get~, don't.
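To make that concrete, here is the question's code with the factory renamed so the Create~ prefix heuristic applies; CA2000 then flags the calling method again:
using System.Diagnostics;
using System.IO;

public static class StreamFactory
{
    // Renamed from GetStream: Create~ is one of the prefixes the rule keys off.
    public static Stream CreateStream()
    {
        return new MemoryStream();
    }
}

public class ServiceUser
{
    public void Execute()
    {
        // With the factory named CreateStream, CA2000 flags this method again.
        var stream = StreamFactory.CreateStream();
        Debug.WriteLine(stream.Length);
    }
}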
Additionally there is a long list of bugs surrounding this rule. The rule was altered in Visual Studio 2010 to trigger fewer false positives, but now it sometimes misses items it should have flagged (and would have flagged in the previous version). There wasn't enough time to fix the rules in the Visual Studio 2010 time frame (check the bug report comments).
With the upcoming Roslyn compilers, Code Analysis will probably see a major upgrade; until then, only minor updates are to be expected. The current build of Visual Studio Dev11 does not trigger the warning where you want it to.
So, concluding: no, your attribute wouldn't help much, as the rule already detects that you're passing the IDisposable as a return value. Thus Code Analysis knows it's not good to dispose of it before returning. If you follow the undocumented naming rules, the rule will trigger. Maybe an attribute could extend the naming rules, but I'd rather have Microsoft actually fix the rule.
I created a Connect bug requesting that the naming guideline be documented in the rules documentation.
Comment from Microsoft:
Posted by Microsoft on 1/19/2012 at 10:41 AM
Hello,
Thank you for taking the time to investigate this and file the request for the documentation update. However after some discussion with our documentation team, we have decided to not document the naming convention as you requested.
As you indicated on the stackoverflow thread, there have historically been a lot of reliability issues with this rule, and keying off of the names was an internal implementation detail added to try to reduce the number of false positives. However this is not considered prescriptive guidance for how developers should name their methods, it was added after a survey of common coding practices. We believe the long-term fix is to improve the reliability of the rule, not add naming guidance to our public documentation based on internal implementation details that will continue to change as the rule is improved.
Best Regards,
Visual Studio Code Analysis Team

JPA <pre-persist> <pre-update> are being ignored but the @PrePersist and @PreUpdate work fine

I ran into a strange problem. I have the whole domain model defined in the orm.xml file. All the entities in my project are just simple POJOs (no JPA annotations at all). I want to save the last-update and insert timestamps of my entities, and I've decided to use the "pre persist" and "pre update" callbacks like most of us. So I've defined a base entity class and let all my entities extend it.
The strange thing is that the "pre persist" (and all the other) events are called only when I define them using annotations. When I define them in the orm.xml file instead, nothing happens; they are just ignored.
This works for me:
public abstract class BaseEntity {

    private Timestamp insertTimestamp;
    private Timestamp lastUpdateTimestamp;

    @PrePersist
    public void onPersist() {
        // ...
    }

    @PreUpdate
    public void onUpdate() {
        // ...
    }
}
But after removing annotations and switching to the xml nothing works anymore:
<mapped-superclass class="com.my.model.BaseEntity">
    <pre-persist method-name="onPersist"/>
    <pre-update method-name="onUpdate"/>
    <post-load method-name="postLoad"/>
</mapped-superclass>
According to the JPA specification the above declarations in xml seem to be correct.
I have no idea where to dig for the problem.
I'm using EclipseLink 2.2.0 with H2 in the SE environment.
UPDATE:
Thanks for your answer. There are no errors to see in the log/console; the events just seem to be ignored.
As you suspected, it might be a bug, because moving the methods and XML declarations from the superclass to the subclass solves the problem. That is not the solution I want, as I need a global solution for all entities, but it moved me a bit forward.
I've sent the bug report to the EclipseLink guys :)
As you suggested, I've tried an entity listener and it works for me, so I will stick to this solution. It even looks better than the solution with the base entity class ;)
Thanks!
Your XML looks correct. Do any errors occur in the logs?
It could be a bug with MappedSuperClass and entity events.
Can you try setting the event on a subclass and see if it works?
If it does, then it is probably a bug, please log the bug in Eclipse Bugzilla.
Another workaround would be to use an entity listener.

Disable "not used" warning for public methods of a class

The new IntelliJ upgrade (10.5) now shows a warning that some of the methods defined for a class are not being used. These methods are public, and I do not plan on using all of them, as I created them to support the expected API. I would like to disable this warning ("not used") for public methods of a class. Is there a way to do it?
You can disable it for a single method like this
@SuppressWarnings("unused")
public void myMethod() { ... }
IDEA 2016.3
In the upcoming version IDEA 2016.3 (a preview version is already available) it is now possible to adjust the inspection scope.
< IDEA 14.0
If you want to highlight unused public methods, please enable the "Settings|Inspections|Declaration redundancy|Unused declaration" global inspection.
If you want to highlight unused private methods, please enable the "Settings|Inspections|Declaration redundancy|Unused symbol" local inspection.
So, if you want to highlight unused private members, but do not highlight unused public members, turn off "Unused declaration" and turn on "Unused symbol".
Source
I've just tested it using IDEA 13.1.4, and it worked exactly as described.
IDEA 14.x
In IntelliJ IDEA 14.0.x the settings are under:
Settings | Editor | Inspections | Declaration redundancy | Unused symbol/declaration
In IntelliJ IDEA 14.1 the option appears to be gone.
Disable the Settings | Inspections | Declaration redundancy | Unused Declaration code inspection. As an option, you can create a custom scope for your API classes and disable this inspection only for the API scope, so that it still works in the rest of your project.
2018-2019
Here is the 2019 update for:
IntelliJ IDEA 2018.3.2 (Community Edition)
Build #IC-183.4886.37, built on December 17, 2018
Settings | Editor | Inspections | Declaration redundancy | Unused declaration
I think the best way to avoid the highlighting of those unused public methods is to write a couple of tests for them in your API.
In the latest version, this option is under Settings > Inspections > Java > Declaration redundancy > Unused declaration > Methods; uncheck the options which are not required.
This is an old thread, but I ended up here faster than I could find a solution so I'm going to go ahead and share my findings.
First, I am not sure if we are working with the same language (JS here), but after fiddling with the GUI-based tools, here is what I ended up with.
The following code was giving me the infamous "not used" warning:
/**
 * @class sample class
 */
var MyClass = function () {
    return this;
};

/**
 * Some public method
 * @api public
 */
MyClass.prototype.myMethod = function () {
    return null;
};
There goes the "Unused definition myMethod" warning.
The inspector ended up suggesting that I ignore this specific issue by adding
//noinspection JSUnusedGlobalSymbols
right on top of this specific method so that the following code no longer results in this warning:
//noinspection JSUnusedGlobalSymbols
/**
 * Some public method
 * @api public
 */
MyClass.prototype.myMethod = function () {
    return null;
};
Other warnings (typos etc.) still seem to show up, including unused local variables and parameters, so it seems to isolate this particular issue.
The downside is that it tends to pollute your code if you have lots of it...
I just clicked "Suppress for statement" and WebStorm prepended this:
//noinspection JSUnusedGlobalSymbols
When extending a library recently, I was also alerted by that "not used" inspection warning.
Think about why IntelliJ signals
Usually when refactoring, all unused methods/parameters should be safe to delete (via IntelliJ's safe delete action).
This way the intent of IntelliJ (like Checkstyle and others) is to support our clean design. Since the unused methods are neither used internally (in src/main/java) nor exercised by tests (in src/test/java), they seem obsolete. So why not follow the saying "When in doubt, throw it out"?
When refactoring, that's mostly good advice.
But if we are developing a library/API that is intended to be used by other codebases (modules/dependencies from the outside), then we would rather answer "When not used, get confused".
We are astonished by IntelliJ's warning. The methods should not be deleted, because they are actually intended to be used elsewhere. They are entry points.
Then choose a suitable solution
All of the solutions below have one thing in common:
Communicate through your code, so every IDE and developer can understand it (e.g. add a test so the method becomes used)
Tell your intent (e.g. to IntelliJ, via reconfiguring Code Inspection)
Configure Inspection or Disable
As described in various earlier answers, with screenshots and navigation hints for IntelliJ's Code Inspection settings.
Add a test
If you add a test for the unused method, class, etc., you will benefit in three ways:
correctness: the (previously) unused subject under test (SUT) is now tested
communication: you clearly communicate to every reader that, and how, your unused method, class, etc. should be used
clearance: now the unused member is finally used, so IntelliJ's inspection will no longer find it and warn.
Add or mark as Entry Point
I saw the suggestion multiple times:
as optional dialog tab inside IntelliJ's Inspection Settings
as a comment below the top-ranked answer:
IMO better approach is to mark class as "entry point". – Tanya Jivvca Aug 18 at 8:46
in IntelliJ's forum: Code inspection: unused element - what is an entry point? (emphasis in below quote by me):
Add the element as an entry point. By default, all code in the global scope as well as tests is treated as reachable. If you know that a method or function is executed, you may add it as an entry point. The code inside the entry point is now executed and reachable, as well.
When you add an entry point, your source code files stay unaffected, and the element’s record is stored with the project under .idea\misc.xml.
Maybe the entry points function can work, where you can specify the code patterns that should not trigger the warning:
Settings | Inspections | Declaration redundancy | Unused Declaration | entry point

Does an isolate / sandbox access modifier exist in any language?

Is there a language which has a feature that can prevent a class from accessing any other class, unless an instance or reference to it is contained within?
isolated class Example {
    public Integer i;

    public void doSomething()
    {
        i = 5; // This is ok because i belongs to this class

        /*
         * This is forbidden because this class can only
         * access anything contained within, nothing outside
         */
        System.out.println("This does not work.");
    }
}
[edit]An example use case might be a plugin system. I could define a plugin object with references to certain objects that the class can manipulate, but nothing else would be permissible. It could potentially make security concerns much easier to handle.[/edit]
I'm not aware of any class-based access modifiers with such intent, but I believe access modifiers to be misguided anyway.
Capability-based security or, more specifically, the object-capability model seems to be what you want.
http://en.wikipedia.org/wiki/Object-capability_model
The basic idea is that in order to do anything with an object, you need to hold a reference to it. Withhold the reference and no access is possible.
Global things (such as System.out.println) and a few other things are problematic features of a language, because anyone can access them without a reference.
Languages such as E, or tools like Google Caja (for JavaScript), allow proper object-capability models. Here is an example in JS:
function Example(someObj) {
    this.someObj = someObj;
    this.doStuff = function () {
        this.someObj.foo(); // allowed, we have been given a reference to it
        alert("foobar");    // caja may deny/proxy access to global "alert"
    };
}
Any language where you must include headers would probably count: Just don't include any headers.
However, I would wager that there's no language that explicitly forbids external access. What's the point? You can't do anything if you can't access the outside world. And, why would the reference to Integer be okay, but System.out.println not be?
If you clarify the potential use-case, we can probably help you better...
Edit for your Edit:
I thought you might be going there.
If this is for security, it's flawed from the start. Let's examine:
class EvilCode {
    void DoNiceThings() {
        HardDrive.Format();
    }
}
What incentive do I have to voluntarily place a keyword on my class? I'm certainly not going to because I'm nice, since I'm not!
One thing to consider is that any time you're loading native code that's not your own (native, in this case, means not scripted), you're potentially allowing a bad guy to run his code. No language features are going to protect you from that.
The proper answer depends on your target language. Java has Security descriptors, .NET lets you create AppDomains with restricted permissions, etc. Unfortunately, I'm not an expert in these fields.
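As a rough illustration of the .NET option mentioned there, here is a sketch using the .NET Framework 4 AppDomain sandboxing model (a sketch only; the permission set and setup shown are illustrative, and the details vary by framework version):
using System;
using System.Security;
using System.Security.Permissions;

class SandboxHost
{
    static void Main()
    {
        // Grant only permission to execute code: no file, UI, or network access.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
        };

        // Code loaded into this domain runs with the restricted grant set.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, permissions);

        // ... load plugin assemblies via sandbox.CreateInstanceAndUnwrap(...) here

        AppDomain.Unload(sandbox);
    }
}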

C#4 dynamic keyword - why not?

After reading many of the replies to this thread, I see that many of those who dislike it cite the potential for abuse of the new keyword. My question is, what sort of abuse? How could this be abused so badly as to make people vehemently dislike it? Is it just about purism? Or is there a real pitfall that I'm just not seeing?
I think that a lot of the revulsion that people are expressing to this feature boils down to "this is a bad language feature because it will allow bad developers to write bad code." If you think about it, by that logic all language features are bad.
When I run into a block of VB code that some genius has prefixed with On Error Resume Next, it's not VB that I curse. Maybe I should, I suppose. But in my experience a person who is determined to put a penny in the fuse box will find a way. Even if you empty his pockets, he'll fashion his own pennies.
Me, I'm looking forward to a more useful way of interoperating between C# and Python. I'm writing more and more code that does this. The dynamic keyword can't come soon enough for that particular use case, because the current way of doing it makes me feel like I'm a Soviet academic in the 1950s who's traveling to the West for a conference: there's an immense amount of rules and paperwork before I get to leave, I am pretty sure someone's going to be watching me the whole time I'm there, and most of what I pick up while I'm there will be taken away from me at the border when I return.
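To make that use case concrete, here is a rough sketch of what the C#/Python interop looks like with dynamic, assuming the IronPython and DLR hosting assemblies are referenced (the script and names are purely illustrative):
using System;
using IronPython.Hosting;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;

class PythonInterop
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Define a tiny Python function in the scope.
        engine.CreateScriptSourceFromString(
            "def greet(name):\n    return 'Hello, ' + name\n",
            SourceCodeKind.Statements).Execute(scope);

        // dynamic lets us call the Python function directly,
        // without the reflection plumbing the old way required.
        dynamic greet = scope.GetVariable("greet");
        Console.WriteLine(greet("C#"));
    }
}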
Some see it as a tool that will be abused, like "Option Strict Off" and "On Error Resume Next" in VB, which "pure" languages like C# and Java have never had.
Many said the same about the "var" keyword, yet I don't see it being abused, once it became understood that it isn't the same as VB's "Variant".
It could be abused in places that lazy developers don't want type checking on classes and just try catch dynamic calls instead of writing "if blah is Blah ...".
I personally feel it could be used properly in situations like this recent question that I answered.
I think the ones who really understand its power are those heavily into the dynamic .NET languages.
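To illustrate the kind of abuse that answer has in mind, compare an explicit type check with swallowing failed dynamic calls (the types and names here are hypothetical):
using System;
using Microsoft.CSharp.RuntimeBinder;

static class Example
{
    // The explicit type check keeps the compiler and the reader informed.
    public static void PrintLength(object value)
    {
        if (value is string)
        {
            Console.WriteLine(((string)value).Length);
        }
    }

    // The lazy version: binding is deferred to runtime, and a wrong type
    // (or a simple typo in the member name) is silently swallowed.
    public static void PrintLengthLazily(dynamic value)
    {
        try
        {
            Console.WriteLine(value.Length);
        }
        catch (RuntimeBinderException)
        {
            // hides real bugs
        }
    }
}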
dynamic is bad because code like this will pop up all over the place:
public dynamic Foo(dynamic other) {
    dynamic clone = other.Clone();
    clone.AssignData(this.Data);
    return clone;
}
instead of:
public T Foo<T>(T other) where T : ICloneable, IAssignData {
    T clone = (T)other.Clone();
    clone.AssignData(this.Data);
    return clone;
}
The first one has no static type info, no compile-time checking, and it's not self-documenting; there's no type inference, so people will be forced to use a dynamic reference at the call site to store the result, leading to more type loss, and all this spirals down.
I'm already starting to fear dynamic.
The real pitfall? Severe lack of documentation. The entire application's architecture exists in the mind of the person (or persons) who wrote it. At least with strong typing, you can go see what the object does via its class definition. With dynamic typing, you must infer the meaning from its use, at best. At worst, you have NO IDEA what the object is. It's like programming everything in JavaScript. ACK!
When people realize that they don't get good IntelliSense with dynamic, they'll switch back from being dynamic-happy to dynamic-when-necessary-and-var-at-all-other-times.
The purposes of dynamic include: interoperability with dynamic languages and platforms such as COM/C++ and DLR/IronPython/IronRuby; as well as turning C# itself into IronSmalltalkWithBraces with everything implementing IDynamicObject.
Good times will be had by all. (Unless you need to maintain code someone else wrote.)
This is sort of like discussing public cameras: sure, they can and will be misused, but there are benefits to having them as well.
There is no reason why you couldn't outlaw the "dynamic" keyword in your own coding guidelines if you don't need it. So what's the problem? I mean, if you want to do crazy things with the "dynamic" keyword and pretend C# is some mutant cousin of JavaScript, be my guest. Just keep these experiments out of my codebase. ;)
I don't see a reason why the current way of invoking methods dynamically is flawed:
It takes three lines to do it, or you can add an extension method on System.Object to do it for you:
class Program
{
    static void Main(string[] args)
    {
        var foo = new Foo();
        Console.WriteLine(foo.Invoke("Hello", "Jonathan"));
    }
}

static class DynamicDispatchHelper
{
    public static object Invoke(this object ot, string methodName, params object[] args)
    {
        var t = ot.GetType();
        var m = t.GetMethod(methodName);
        return m.Invoke(ot, args);
    }
}

class Foo
{
    public string Hello(string name)
    {
        return ("Hello World, " + name);
    }
}