After reading many of the replies to this thread, I see that many of those who dislike it cite the potential for abuse of the new keyword. My question is, what sort of abuse? How could this be abused so badly as to make people vehemently dislike it? Is it just about purism? Or is there a real pitfall that I'm just not seeing?
I think that a lot of the revulsion that people are expressing to this feature boils down to "this is a bad language feature because it will allow bad developers to write bad code." If you think about it, by that logic all language features are bad.
When I run into a block of VB code that some genius has prefixed with On Error Resume Next, it's not VB that I curse. Maybe I should, I suppose. But in my experience a person who is determined to put a penny in the fuse box will find a way. Even if you empty his pockets, he'll fashion his own pennies.
Me, I'm looking forward to a more useful way of interoperating between C# and Python. I'm writing more and more code that does this. The dynamic keyword can't come soon enough for that particular use case, because the current way of doing it makes me feel like I'm a Soviet academic in the 1950s who's traveling to the West for a conference: there's an immense amount of rules and paperwork before I get to leave, I am pretty sure someone's going to be watching me the whole time I'm there, and most of what I pick up while I'm there will be taken away from me at the border when I return.
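Just to give a flavor of what I mean, here's roughly what the dynamic version of that interop looks like (a minimal sketch assuming IronPython's DLR hosting API; greeter.py and its greet function are made up for illustration):

using IronPython.Hosting;   // IronPython's hosting entry point

class PythonInterop
{
    static void Main()
    {
        var py = Python.CreateRuntime();
        // greeter.py is a hypothetical script defining: def greet(name): ...
        dynamic script = py.UseFile("greeter.py");
        dynamic result = script.greet("world");   // no reflection paperwork
        System.Console.WriteLine(result);
    }
}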
Some see it as a tool that will be abused, like "Option Strict Off" and "On Error Resume Next" in VB, which "pure" languages like C# and Java have never had.
Many said the same about the "var" keyword, yet I don't see it being abused once it became understood that it wasn't the same as VB's "Variant".
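To illustrate: var is compile-time type inference, not late binding, so the compiler still checks everything (a quick sketch):

class VarDemo
{
    static void Main()
    {
        var s = "hello";                        // s is statically typed as string
        System.Console.WriteLine(s.Length);     // checked at compile time
        // s = 42;                              // error CS0029: int is not string
    }
}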
It could be abused by lazy developers who don't want type checking on classes and just try/catch dynamic calls instead of writing "if blah is Blah ...".
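Something like this sketch, for example (GetSomething, Frobnicate, and Blah are made-up names):

using System;
using Microsoft.CSharp.RuntimeBinder;

class Blah { public void Frobnicate() { Console.WriteLine("ok"); } }

class AbuseDemo
{
    static object GetSomething() { return "not a Blah"; }

    static void Main()
    {
        dynamic blah = GetSomething();
        try
        {
            blah.Frobnicate();            // binds (and fails) only at run time
        }
        catch (RuntimeBinderException)
        {
            // swallowed: the dynamic equivalent of On Error Resume Next
        }

        // The checked alternative:
        if (blah is Blah) ((Blah)blah).Frobnicate();
    }
}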
I personally feel it could be used properly in situations like this recent question that I answered.
I think the ones who really understand its power are those heavily into the dynamic .NET languages.
dynamic is bad because code like this will pop up all over the place:
public dynamic Foo(dynamic other) {
    dynamic clone = other.Clone();
    clone.AssignData(this.Data);
    return clone;
}
instead of:
public T Foo<T>(T other) where T : ICloneable, IAssignData {
    T clone = (T)other.Clone();
    clone.AssignData(this.Data);
    return clone;
}
The first one has no static type info and no compile-time checking, it's not self-documenting, and there's no type inference, so people will be forced to use a dynamic reference at the call site to store the result, leading to more type loss, and all this spirals down.
I'm already starting to fear dynamic.
The real pitfall? Severe lack of documentation. The entire application's architecture exists in the mind of the person (or persons) who wrote it. At least with strong typing, you can go see what the object does via its class definition. With dynamic typing, you must infer the meaning from its use, at best. At worst, you have NO IDEA what the object is. It's like programming everything in JavaScript. ACK!
When people realize that they don't get good IntelliSense with dynamic, they'll switch back from being dynamic-happy to dynamic-when-necessary-and-var-at-all-other-times.
The purposes of dynamic include: interoperability with dynamic languages and platforms such as COM/C++ and DLR/IronPython/IronRuby; as well as turning C# itself into IronSmalltalkWithBraces with everything implementing IDynamicObject.
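For a taste of that last direction, here's a minimal sketch using the DLR's DynamicObject base class (the Bag type is made up; in the shipped API the interface became IDynamicMetaObjectProvider, with DynamicObject as the convenient base class):

using System;
using System.Collections.Generic;
using System.Dynamic;

// A property bag that answers any member name at run time.
class Bag : DynamicObject
{
    private readonly Dictionary<string, object> data = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return data.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        data[binder.Name] = value;
        return true;
    }
}

class BagDemo
{
    static void Main()
    {
        dynamic bag = new Bag();
        bag.Greeting = "Hello";            // members invented on the fly
        Console.WriteLine(bag.Greeting);
    }
}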
Good times will be had by all. (Unless you need to maintain code someone else wrote.)
This is sort of like discussing public cameras: sure, they can and will be misused, but there are benefits to having them as well.
There is no reason why you couldn't outlaw the "dynamic" keyword in your own coding guidelines if you don't need it. So what's the problem? I mean, if you want to do crazy things with the "dynamic" keyword and pretend C# is some mutant cousin of JavaScript, be my guest. Just keep these experiments out of my codebase. ;)
I don't see a reason why the current way of invoking methods dynamically is flawed:
It takes three lines to do it, or you can add an extension method on System.Object to do it for you:
using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        var foo = new Foo();
        Console.WriteLine(foo.Invoke("Hello", "Jonathan"));
    }
}

static class DynamicDispatchHelper
{
    // Late-bound call: looks the method up by name at run time via reflection.
    public static object Invoke(this object ot, string methodName, params object[] args)
    {
        var t = ot.GetType();
        var m = t.GetMethod(methodName);
        return m.Invoke(ot, args);
    }
}

class Foo
{
    public string Hello(string name)
    {
        return ("Hello World, " + name);
    }
}
I've been trying to learn PHP and I'm progressing pretty well at making my own blog engine. When it came time to integrate OAuth, I came across this solution to encrypt keys.
The usage says to do something along these lines:
<?php
// a new proCrypt instance
$crypt = new proCrypt;
// encrypt the string
$encoded = $crypt->encrypt( 'my message');
echo $encoded."\n";
// decrypt the string
echo $crypt->decrypt( $encoded ) . "\n";
?>
My question is... why is this a class? It seems like two functions would be just fine. I don't really get why I'd instantiate an object and then call some methods. Is this an example of OOP thinking run amok, or is there something I'm missing here?
If there is some compelling reason for it to be a class, why aren't the methods static so that I could just call proCrypt::encrypt( 'my message' );?
This is relevant as a lot of the code I've written has been using static functions or standalone functional programming instead of OOP. If I'm doing something horribly wrong, I'd like to know about it.
The class has some variables that can be set to affect the outcome of the encryption. If you were to make this class static, you would set these variables once, and everyone who used the function would be affected. Instead, if you make it an object, it is easy to create multiple versions with different values.
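To make that concrete, here's a toy sketch (in C#, but the reasoning is the same in PHP; the Crypt class and its fake "encryption" are made up purely for illustration):

class Crypt
{
    private readonly string key;               // per-instance state that affects the outcome

    public Crypt(string key) { this.key = key; }

    public string Encrypt(string message)      // placeholder, not real encryption
    {
        return key + ":" + message;
    }
}

class CryptDemo
{
    static void Main()
    {
        var a = new Crypt("keyA");             // two independent configurations
        var b = new Crypt("keyB");
        System.Console.WriteLine(a.Encrypt("hi"));   // keyA:hi
        System.Console.WriteLine(b.Encrypt("hi"));   // keyB:hi
        // With a static Encrypt and a shared key, changing the key for one
        // caller would silently change it for every caller.
    }
}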
Maybe because some encryption algorithms need some additional state as input (like a public/private key), and that's encapsulated by the object.
One possibility: "memoization".
A class might be useful here because it might retain intermediate results or cache previous results.
That's not "OOP thinking run amok". It's just prudent design because -- perhaps -- there's something stateful going on behind the scenes.
Well, I'm not sure why the solution you found isn't static.
I have started to use a solution which I found on Stack Overflow and which is called in a static way.
Possible Duplicate:
Functional programming vs Object Oriented programming
Can someone explain to me why I would need functional programming instead of OOP?
E.g. why would I need to use Haskell instead of C++ (or a similar language)?
What are the advantages of functional programming over OOP?
One of the big things I prefer in functional programming is the lack of "spooky action at a distance". What you see is what you get – and no more. This makes code far easier to reason about.
Let's use a simple example. Let's say I come across the code snippet X = 10 in either Java (OOP) or Erlang (functional). In Erlang I can know these things very quickly:
1. The variable X is in the immediate context I'm in. Period. It's either a parameter passed in to the function I'm reading or it's being assigned the first (and only—c.f. below) time.
2. The variable X has a value of 10 from this point onward. It will not change again within the block of code I'm reading. It cannot.
In Java it's more complicated:
1. The variable X might be defined as a parameter.
2. It might be defined somewhere else in the method.
3. It might be defined as part of the class the method is in.
4. Whatever the case is, since I'm not declaring it here, I'm changing its value. This means I don't know what the value of X will be without constantly scanning backward through the code to find the last place it was assigned or modified explicitly or implicitly (like in a for loop).
5. When I call another method, if X happens to be a class variable it may change out from underneath me with no way for me to know this without inspecting the code of that method.
6. In the context of a threading program it's even worse. X can be changed by something I can't even see in my immediate environment. Another thread may be calling the method in #5 that modifies X.
And Java is a relatively simple OOP language. The number of ways that X can be screwed around with in C++ is even higher and potentially more obscure.
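To make the class-variable case concrete, here is a contrived sketch (in C#, but the same applies in Java; the Counter class is made up):

class Counter
{
    private int x = 10;                    // shared, mutable state

    public void DoWork()
    {
        Helper();                          // did this change x? you must go read Helper
        System.Console.WriteLine(x);       // prints 42, not 10
    }

    private void Helper()
    {
        x = 42;                            // "spooky action at a distance"
    }
}

class CounterDemo
{
    static void Main() { new Counter().DoWork(); }
}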
And the thing is? This is just a simple example of how a common operation can be far more complicated in an OOP (or other imperative) language than in a functional one. It also doesn't address the benefits of functional programming that don't involve mutable state, like higher-order functions.
There are three things about Haskell that I think are really cool:
1) It's a statically-typed language that is extremely expressive and lets you build highly maintainable and refactorable code quickly. There's been a big debate between statically typed languages like Java and C# and dynamic languages like Python and Ruby. Python and Ruby let you quickly build programs, using only a fraction of the number of lines required in a language like Java or C#. So, if your goal is to get to market quickly, Python and Ruby are good choices. But, because they're dynamic, refactoring and maintaining your code is tough. In Java, if you want to add a parameter to a method, it's easy to use the IDE to find all instances of the method and fix them. And if you miss one, the compiler catches it. With Python and Ruby, refactoring mistakes will only be caught as run-time errors. So, with traditional languages, you get to choose between quick development and lousy maintainability on the one hand and slow development and good maintainability on the other hand. Neither choice is very good.
But with Haskell, you don't have to make this type of choice. Haskell is statically typed, just like Java and C#. So, you get all the refactorability, potential for IDE support, and compile-time checking. But at the same time, types can be inferred by the compiler. So, they don't get in your way like they do with traditional static languages. Plus, the language offers many other features that allow you to accomplish a lot with only a few lines of code. So, you get the speed of development of Python and Ruby along with the safety of static languages.
2) Parallelism. Because functions don't have side effects, it's much easier for the compiler to run things in parallel without much work from you as a developer. Consider the following pseudo-code:
a = f x
b = g y
c = h a b
In a pure functional language, we know that functions f and g have no side effects. So, there's no reason that f has to be run before g. The order could be swapped, or they could be run at the same time. In fact, we really don't have to run f and g at all until their values are needed in function h. This is not true in a traditional language since the calls to f and g could have side effects that could require us to run them in a particular order.
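Here's the same idea in C# terms (a sketch with made-up pure functions F, G, and H):

using System;
using System.Threading.Tasks;

class ParallelPure
{
    // Pure functions: results depend only on the inputs, no side effects.
    static int F(int x) { return x * 2; }
    static int G(int y) { return y + 1; }
    static int H(int a, int b) { return a * b; }

    static void Main()
    {
        // Because F and G are side-effect-free, their order doesn't matter
        // and they can safely run at the same time:
        var a = Task.Run(() => F(10));
        var b = Task.Run(() => G(20));
        Console.WriteLine(H(a.Result, b.Result));   // 420
    }
}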
As computers get more and more cores on them, functional programming becomes more important because it allows the programmer to easily take advantage of the available parallelism.
3) The final really cool thing about Haskell is also possibly the most subtle: lazy evaluation. To understand this, consider the problem of writing a program that reads a text file and prints out the number of occurrences of the word "the" on each line of the file. Suppose you're writing in a traditional imperative language.
Attempt 1: You write a function that opens the file and reads it one line at a time. For each line, you calculate the number of "the's", and you print it out. That's great, except your main logic (counting the words) is tightly coupled with your input and output. Suppose you want to use that same logic in some other context? Suppose you want to read text data off a socket and count the words? Or you want to read the text from a UI? You'll have to rewrite your logic all over again!
Worst of all, what if you want to write an automated test for your new code? You'll have to build input files, run your code, capture the output, and then compare the output against your expected results. That's do-able, but it's painful. Generally, when you tightly couple IO with logic, it becomes really difficult to test the logic.
Attempt 2: So, let's decouple IO and logic. First, read the entire file into a big string in memory. Then, pass the string to a function that breaks the string into lines, counts the "the's" on each line, and returns a list of counts. Finally, the program can loop through the counts and output them. It's now easy to test the core logic since it involves no IO. It's now easy to use the core logic with data from a file or from a socket or from a UI. So, this is a great solution, right?
Wrong. What if someone passes in a 100GB file? You'll blow out your memory since the entire file must be loaded into a string.
Attempt 3: Build an abstraction around reading the file and producing results. You can think of these abstractions as two interfaces. The first has methods nextLine() and done(). The second has outputCount(). Your main program implements nextLine() and done() to read from the file, while outputCount() just directly prints out the count. This allows your main program to run in constant memory. Your test program can use an alternate implementation of this abstraction that has nextLine() and done() pull test data from memory, while outputCount() checks the results rather than outputting them.
This third attempt works well at separating the logic and the IO, and it allows your program to run in constant memory. But, it's significantly more complicated than the first two attempts.
In short, traditional imperative languages (whether static or dynamic) frequently leave developers making a choice between
a) Tight coupling of IO and logic (hard to test and reuse)
b) Load everything into memory (not very efficient)
c) Building abstractions (complicated, and it slows down implementation)
These choices come up when reading files, querying databases, reading sockets, etc. More often than not, programmers seem to favor option A, and unit tests suffer as a consequence.
So, how does Haskell help with this? In Haskell, you would solve this problem exactly like in Attempt 2. The main program loads the whole file into a string. Then it calls a function that examines the string and returns a list of counts. Then the main program prints the counts. It's super easy to test and reuse the core logic since it's isolated from the IO.
But what about memory usage? Haskell's lazy evaluation takes care of that for you. So, even though your code looks like it loaded the whole file contents into a string variable, the whole contents really aren't loaded. Instead, the file is only read as the string is consumed. This allows it to be read one buffer at a time, and your program will in fact run in constant memory. That is, you can run this program on a 100GB file, and it will consume very little memory.
Similarly, you can query a database, build a resulting list containing a huge set of rows, and pass it to a function to process. The processing function has no idea that the rows came from a database. So, it's decoupled from its IO. And under the covers, the list of rows will be fetched lazily and efficiently. So, even though your code looks like it builds the full list, the rows are never all in memory at the same time.
End result, you can test your function that processes the database rows without even having to connect to a database at all.
Lazy evaluation is really subtle, and it takes a while to get your head around its power. But, it allows you to write nice simple code that is easy to test and reuse.
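For what it's worth, C# iterators give a taste of the same idea: the code below looks like it builds every count up front, but File.ReadLines streams the file lazily, so it runs in constant memory (a sketch; LazyCount and CountThe are made-up names):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class LazyCount
{
    // Deferred: lines are read and counted only as the caller consumes them.
    static IEnumerable<int> CountThe(IEnumerable<string> lines)
    {
        foreach (var line in lines)
            yield return line.ToLower().Split(' ').Count(w => w == "the");
    }

    static void Main(string[] args)
    {
        foreach (var n in CountThe(File.ReadLines(args[0])))
            Console.WriteLine(n);
    }
}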
Here's the final Haskell solution and the Attempt 3 Java solution. Both use constant memory and separate IO from processing so that testing and reuse are easy.
Haskell:
module Main where

import System.Environment (getArgs)
import Data.Char (toLower)

main = do
  (fileName : _) <- getArgs
  fileContents <- readFile fileName
  mapM_ (putStrLn . show) $ getWordCounts fileContents

getWordCounts = (map countThe) . lines . map toLower
  where countThe = length . filter (== "the") . words
Java:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

class CountWords {
    public interface OutputHandler {
        void handle(int count) throws Exception;
    }

    static public void main(String[] args) throws Exception {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(new File(args[0])));
            OutputHandler handler = new OutputHandler() {
                public void handle(int count) throws Exception {
                    System.out.println(count);
                }
            };
            countThe(reader, handler);
        } finally {
            if (reader != null) reader.close();
        }
    }

    static public void countThe(BufferedReader reader, OutputHandler handler) throws Exception {
        String line;
        while ((line = reader.readLine()) != null) {
            int num = 0;
            for (String word : line.toLowerCase().split("([.,!?:;'\"-]|\\s)+")) {
                if (word.equals("the")) {
                    num += 1;
                }
            }
            handler.handle(num);
        }
    }
}
If we compare Haskell and C++, functional programming makes debugging extremely easy, because there's no mutable state or variables like the ones found in C, Python, etc. that you constantly have to keep track of, and it's guaranteed that, given the same arguments, a function will always return the same result no matter how many times you evaluate it.
OOP is orthogonal to functional programming, and there are languages which combine FP with OOP, OCaml being the most popular, several Haskell variants, etc.
Is there a language which has a feature that can prevent a class from accessing any other class, unless it holds an instance of or reference to it?
isolated class Example {
    public Integer i;

    public void doSomething()
    {
        i = 5; // This is ok because i belongs to this class

        /*
         * This is forbidden because this class can only
         * access anything contained within, nothing outside
         */
        System.out.println("This does not work.");
    }
}
[edit]An example use case might be a plugin system. I could define a plugin object with references to certain objects that the class can manipulate, but nothing else would be permissible. It could potentially make security concerns much easier to deal with.[/edit]
I'm not aware of any class-based access modifiers with such intent, but I believe access modifiers to be misguided anyway.
Capability-based security or, more specifically, the object-capability model seems to be what you want.
http://en.wikipedia.org/wiki/Object-capability_model
The basic idea is that in order to do anything with an object, you need to hold a reference to it. Withhold the reference and no access is possible.
Global things (such as System.out.println) and a few other things are problematic features of a language, because anyone can access them without a reference.
Languages such as E, or tools like Google Caja (for JavaScript), allow proper object-capability models. Here's an example in JS:
function Example(someObj) {
    this.someObj = someObj;
    this.doStuff = function() {
        this.someObj.foo(); // allowed, we have been given a reference to it
        alert("foobar");    // caja may deny/proxy access to global "alert"
    };
}
Any language where you must include headers would probably count: Just don't include any headers.
However, I would wager that there's no language that explicitly forbids external access. What's the point? You can't do anything if you can't access the outside world. And, why would the reference to Integer be okay, but System.out.println not be?
If you clarify the potential use-case, we can probably help you better...
Edit for your Edit:
I thought you might be going there.
If this is for security, it's flawed from the start. Let's examine:
class EvilCode {
    void DoNiceThings() {
        HardDrive.Format();
    }
}
What incentive do I have to voluntarily place a keyword on my class? I'm certainly not going to do it because I'm nice, since I'm not!
One thing to consider is that any time you're loading native code that's not your own (native, in this case, means not scripted), you're potentially allowing a bad guy to run his code. No language features are going to protect you from that.
The proper answer depends on your target language. Java has security managers, .NET lets you create AppDomains with restricted permissions, etc. Unfortunately, I'm not an expert in these fields.
Every time I write trivial getters (get functions that just return the value of the member), I wonder why OOP languages don't simply have a 'read only' access modifier that would allow reading the value of the members of the object but not setting them, just like const things in C++.
The private, protected, and public access modifiers give you either full (read/write) access or no access.
Writing a getter and calling it every time is slow, because a function call is slower than just accessing a member. A good optimizer can optimize these getter calls away, but that is 'magic', and I don't think it is a good idea to learn how the optimizer of a certain compiler works and write code to exploit it.
So why do we need to write accessors, read only interfaces everywhere in practice when just a new access modifier would do the trick?
ps1: please don't say things like 'it would break the encapsulation'. A public foo.getX() and a public but read-only foo.x would do the same thing.
EDIT: I didn't compose my post clearly. Sorry. I mean you can read the member's value from outside, but you can't set it. You can only set its value inside the class scope.
You're incorrectly generalizing from one or some OOP language(s) you know to OOP languages in general. Some examples of languages that implement read-only attributes:
C# (thanks, Darin and tonio)
Delphi (= Object Pascal)
Ruby
Scala
Objective-C (thanks, Rano)
... more?
Personally, I'm annoyed that Java doesn't have this (yet?). Having seen the feature in other languages makes boilerplate writing in Java seem tiresome.
Well, some OOP languages do have such a modifier.
In C#, you can define an automatic property with different access qualifiers on the set and get:
public int Foo { get; private set; }
This way, the class implementation can tinker with the property to its heart's content, while client code can only read it.
C# has readonly, Java and some others have final. You can use these to make your member variables read-only.
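For example, in C# (a made-up Point class; Java's final on fields works the same way):

class Point
{
    public readonly int X;                 // readable everywhere, assignable
                                           // only in a constructor or initializer
    public Point(int x) { X = x; }
}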
In C#, you can just specify a getter for your property so it can only be read, not changed.
private int _foo;
public int Foo
{
    get { return _foo; }
}
Actually, no, they aren't the same. A public foo.getX() would still allow the internal class code to write to the variable. A read-only foo.x would be read-only for the internal class code as well.
And there are some languages that do have such a modifier.
C# properties allow you to define read-only properties easily. See this article.
Not to mention Objective-C 2.0's read-only property accessors:
http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/ObjectiveC/Articles/ocProperties.html
In Delphi:
strict private
  FAnswer: integer;
public
  property Answer: integer read FAnswer;

Declares a read-only property Answer that accesses the private field FAnswer.
The question largely boils down to: why does not every language have a const property like C++?
This is why it's not in C#:
Anders Hejlsberg: Yes. With respect to const, it's interesting, because we hear that complaint all the time too: "Why don't you have const?" Implicit in the question is, "Why don't you have const that is enforced by the runtime?" That's really what people are asking, although they don't come out and say it that way.

The reason that const works in C++ is because you can cast it away. If you couldn't cast it away, then your world would suck. If you declare a method that takes a const Bla, you could pass it a non-const Bla. But if it's the other way around you can't. If you declare a method that takes a non-const Bla, you can't pass it a const Bla. So now you're stuck. So you gradually need a const version of everything that isn't const, and you end up with a shadow world. In C++ you get away with it, because as with anything in C++ it is purely optional whether you want this check or not. You can just whack the constness away if you don't like it.
See: http://www.artima.com/intv/choicesP.html
So, the reason why const works in C++ is that you can work around it, which is sensible for C++, given its roots in C.
For managed languages like Java and C#, users would expect that const would be just as secure as, say, the garbage collector. That also implies you can't work around it, and it won't be useful if you can't work around it.
I am trying to use a class from a C# assembly in VB.NET. The class has members that are ambiguous in VB.NET, because VB.NET is case-insensitive. The class is something like this:
public class Foo {
    public enum FORMAT { ONE, TWO, THREE };
    public FORMAT Format {
        get {...}
        set {...}
    }
}
I try to access the enum: Foo.FORMAT.ONE
This is not possible because there is also a property named 'Format'.
I cannot change the C# assembly. How can I get around this and reference the enum from VB.NET?
I don't think you can get around this. Get in touch with the author of the C# component you are trying to use and convince them to fix their code.
Incidentally, this is the primary reason behind the CLSCompliant(true) attribute; if you are writing APIs or other code that has a high probability of being used by other languages, you should always set it. It would have flagged this issue for the original author to be aware of and fix correctly.
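For example, compiling something like this sketch with the attribute set should produce a CS3005 warning ("identifier differing only in case is not CLS-compliant"):

using System;

[assembly: CLSCompliant(true)]

public class Foo
{
    public enum FORMAT { ONE, TWO, THREE }
    public FORMAT Format { get; set; }     // flagged: differs from FORMAT only by case
}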
There are a couple of ways you can work around it, but neither one is really a good option.
One is to create a C# project and completely wrap the class, changing the ambiguous members into unambiguous ones. Depending on how big the class is, it could be a lot of work, though you only have to wrap the members you need, obviously.
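A sketch of what that wrapper might look like (FooWrapper and its member names are made up):

// C# wrapper that re-exposes the ambiguous members under distinct names,
// so VB.NET callers never have to spell out Foo.FORMAT vs. Foo.Format.
public class FooWrapper
{
    private readonly Foo inner = new Foo();

    public Foo.FORMAT CurrentFormat        // no longer clashes with the enum name
    {
        get { return inner.Format; }
        set { inner.Format = value; }
    }

    public static Foo.FORMAT FormatOne { get { return Foo.FORMAT.ONE; } }
}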
The other is to use reflection, which isn't as much work as wrapping, but is still pointless work compared to the author just writing the code correctly in the first place.