What should I use the "My" namespace for in VB.NET?

I'm considering building a framework for VB.NET, and using the My namespace to plug it into VB seems like a reasonable idea. What is "My" used for?

The purpose of My, as I understand it, is to be an easy shortcut to certain API tasks that are common but hard-to-find or hard-to-use. You probably shouldn't completely subsume your framework under My. (For one thing, C# people using your framework may get grouchy.)
Instead, you should design it as a normal framework. When you're finished, make a list of some common tasks that people might want to use your framework for. See whether any of those could be useful to have under My, especially where there are classes or methods that can be used in a number of ways, but they have one or two really common usages that can be abbreviated with My.
This article shows how to extend My, and it has a section at the end that describes a few design guidelines to follow: Simplify Common Tasks by Customizing the My Namespace
As to your main question, when coding in VB .NET, I use My as often as I can. It reduces a number of operations to one line of code.
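To give a flavour of it, here is a small sketch (the file path is only a placeholder) of two typical My shortcuts next to their plain-framework spellings. My.Application.Info assumes a VB application project with the application framework available.

' Read a whole text file in one call:
Dim contents As String = My.Computer.FileSystem.ReadAllText("C:\temp\log.txt")

' Roughly the same thing spelled out through System.IO:
Dim contentsToo As String
Using reader As New System.IO.StreamReader("C:\temp\log.txt")
    contentsToo = reader.ReadToEnd()
End Using

' The application's version in one line, versus the reflection route:
Dim appVersion As Version = My.Application.Info.Version
Dim appVersionToo As Version = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version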

I really like the "My" Namespace in VB.NET and I always use it in my WindowsForms applications, because it is very intuitive.
I primarily use these categories (a short sketch follows the list):
My.Computer: primarily for file system and network purposes
My.Application: Version number, current directory
My.Resources: Access to resources used by the application residing in resource files in a strongly typed manner.
My.Settings: very handy
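A quick sketch of what those look like in practice (the resource and setting names are invented and would have to exist in your project):

' My.Computer: file system and network
If My.Computer.Network.IsAvailable Then
    My.Computer.FileSystem.CopyFile("C:\data\in.csv", "C:\backup\in.csv")
End If

' My.Application: version number and the directory the app runs from
Dim versionText As String = My.Application.Info.Version.ToString()
Dim appDir As String = My.Application.Info.DirectoryPath

' My.Resources: strongly typed access to a resource named "WelcomeMessage"
Dim welcome As String = My.Resources.WelcomeMessage

' My.Settings: strongly typed access to a user setting named "LastOpenedFile"
My.Settings.LastOpenedFile = "C:\data\in.csv"
My.Settings.Save()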
I think if your framework's extensions to My fit in well, many VB.NET programmers would appreciate them.

I've used My in my VB.NET projects, and I don't feel guilty about it. I am primarily a C# guy, but until I transitioned my company to C#, we were a VB shop. In my mind, the My namespace is a nice piece of syntactic sugar. Just as I'm not embarrassed to use C#'s coalesce operator and other sugar, I'm not embarrassed to use VB's sugar, either. (To an extent; I won't use the classic VB functions that .NET still exposes.)
That said, never put anything in that namespace. It's Microsoft's namespace, and just as you wouldn't put anything under System or Microsoft, don't put anything under My. It will cause confusion later on -- if not for you, then for others who maintain your code. Create your own namespace for your own code.

We do use it in some code, but hesitantly so. It's true that My often helps make code more readable. For example, the Environment.SpecialFolder enumeration curiously lacks a Temp member, whereas My.Computer.FileSystem.SpecialDirectories has one (Path.GetTempPath() will do as well, but is hardly intuitive compared to other special folders).
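For instance, a small sketch of that point:

' Discoverable right next to the other special folders:
Dim tempDir As String = My.Computer.FileSystem.SpecialDirectories.Temp

' The framework route works too, but lives somewhere else entirely:
Dim tempDirToo As String = System.IO.Path.GetTempPath()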
But My is only beneficial in such cases because the existing APIs are badly-designed, not because My is inherently better. Like JAGregory, I strongly suggest one avoids extending My — or any other kind of global namespace, variable, etc. — whenever possible. The idea just doesn't fit a clean OOP architecture.

I never use the My namespace (I'm a C# developer), and my VB co-worker doesn't either. I've found the My members unnecessary because, in many cases, they're counter-intuitive to me; e.g., in my opinion opening a file has something to do with IO (hence System.IO.File) and not with my computer (My.Computer.FileSystem). They always seem so scattered and bunched together.
It's just some re-roll of functionality that is already available otherwise, from all languages. And I don't like depending on Microsoft.VisualBasic.dll when I'm developing for .NET - I always prefer System.*.
And then, it's always kind of limited. I see VB developers struggle with their app when they can't find something in the My namespace, because they can't imagine that you can use something in the System namespace. That of course is not a problem of the My namespace itself.

I mainly use C# and Boo, but when I do use VB.NET I use the My namespace quite often. I don't see any reason not to simplify coding. It still retains its readability.

I've only used it from a user perspective; I've never plugged anything into it. I consider the My namespace to be a set of highly reliable, platform-provided, global helper mechanisms. Officially sanctioned shortcuts, really. I might be surprised to see external user or third-party code in there.
As such, I'd encourage a vb framework to define its own appropriately-named namespace instead of latching on to the existing My namespace. Such a framework shouldn't have that "global" feel to it.

Never used it so far, although I've never actually looked into it either.
I wouldn't advise putting anything into the My namespace yourself; it's much clearer just to lay it out like you would if it were a non-VB framework.

Love the My! Anything that helps me get the job done faster and gives me code I don't have to write myself is a win!

I use My.Settings and My.Computer often while programming in VB.NET. I particularly enjoy My.Settings as an alternative to using ConfigurationManager.AppSettings when it is appropriate.
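For example (a small sketch; the setting name is invented, and the second route assumes a reference to System.Configuration):

' My.Settings: strongly typed, IntelliSense-friendly, and user-scoped settings can be saved back:
Dim retries As Integer = My.Settings.MaxRetries
My.Settings.MaxRetries = retries + 1
My.Settings.Save()

' AppSettings hands back strings that you convert yourself:
Dim retriesText As String = System.Configuration.ConfigurationManager.AppSettings("MaxRetries")
Dim retriesToo As Integer = Integer.Parse(retriesText)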
I agree with John Rudy about the use of My. It is syntactic sugar that makes code a little more readable.

I don't use it a lot.

I'm considering building a framework for VB.NET, and using the My namespace to plug it into VB seems like a reasonable idea. Is it?
If it fits, by all means, use it. Since you didn't offer any further information about your framework it's hard to say. I wouldn't put general-purpose stuff into the My namespace (such as the My.Computer stuff) because there isn't really any advantage to putting it there. However, application-centered helpers fit in well.
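If you do extend it, the usual pattern (sketched below with an invented helper; see the customization article linked in an earlier answer for the full guidelines) is a module placed in the My namespace, so its members show up alongside the built-in ones:

Namespace My
    ' HideModuleName keeps the module itself out of IntelliSense, so callers just see My.LogAudit(...).
    <HideModuleName()>
    Friend Module MyAppHelpers
        ' An invented, application-centred helper: one call to append to an audit log.
        Friend Sub LogAudit(message As String)
            Dim logPath As String = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "audit.log")
            System.IO.File.AppendAllText(logPath, Date.Now.ToString("s") & " " & message & Environment.NewLine)
        End Sub
    End Module
End Namespace

' Callers then write simply:
' My.LogAudit("Invoice posted")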

Related

Left and Right vs. Substring() in VB.NET

Which one would you prefer for extracting a substring from a given string, and why?
I am thinking that since Left and Right are VB functions and not .NET functions, they may cause problems in the future in terms of compatibility.
Please clarify my thoughts on that.
Use whichever makes the most sense, i.e. whichever makes that piece of code easier to read.
I don't know why you'd think they'd cause problems in the future. They're functions provided as part of the VB.NET language set; there is no earthly reason why they would be removed, and even if they were, they would be trivial to re-implement.
Use 'em, cause you ain't gonna lose 'em
When given the choice between a feature from Microsoft.VisualBasic and a comparable feature provided in the core framework assemblies, I tend to stick with the latter in most cases.
I do this for various reasons:
It tends to be understood by more developers (e.g. a C# guy looking at my VB.NET code).
You're more likely to find online help (message boards, stackoverflow, etc) for the core framework version than you are for the VB-ized version.
Using them gives your code a "legacy" feel. It's like making use of the Call statement.
Makes it easier for another person to "copy and paste" VB.Net code into their C# (or other .NET language) project and have it be one less language translation point/hangup. (Unlikely this is a real concern/reason, but I know I've many-a-time "copy and pasted" example C# code into my VB.Net project and anything that doesn't cause road blocks in the translation process (e.g. usage of yield) makes my life easier.)
While it's inconceivable that they'll go away (most of these keywords/statements are BASIC language constructs), they do feel more likely to be marked obsolete than their core framework counterparts, especially as VB6 becomes more and more of a distant memory and the VB.NET language takes on a life of its own while the core .NET Framework advances.
One notable exception to this: I tend to make use of the My namespace proxies offered; My.Computer.FileSystem.ReadAllText(...) is just sexy. :P
Do you work alone?
If no, the decision is simple.
If your team members have C# background, use Substring.
If your devs have some VB6 background, use Left and Right.
If you ain't sure, ask them.
.NET's Substring will throw if start + length is greater than the string length, whereas VB's Left() will return the entire string if a length greater than the string length is provided. That's a substantial behavioral difference.
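A small sketch of that difference:

Dim s As String = "Hello"                                            ' length 5

Dim vbWay As String = Microsoft.VisualBasic.Strings.Left(s, 10)      ' "Hello" - Left clamps to the string
' Dim netWay As String = s.Substring(0, 10)                          ' throws ArgumentOutOfRangeException

' A framework-only equivalent has to clamp the length itself:
Dim safeWay As String = s.Substring(0, Math.Min(10, s.Length))       ' "Hello"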
Also, given .NET Standard, avoid Microsoft.VisualBasic.
Yes, I know this question is more than 10 years old, but I ended up here while trying to port some really old code, so I thought I'd share the difference.

How do you write good highly useful general purpose libraries?

I asked this question about Microsoft .NET Libraries and the complexity of its source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client, who could literally be anyone (supposing I release the library for use by the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's an objective answer: the Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good, and the often dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be some of the best advice I've received:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
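One way that advice shows up in practice (a made-up example, not from the book) is preferring constructors that force a valid state over "create it, then remember to configure it" APIs:

' Hard to use incorrectly: a DateRange cannot exist in an invalid state.
Public Class DateRange
    Public ReadOnly Property StartDate As Date
    Public ReadOnly Property EndDate As Date

    Public Sub New(startDate As Date, endDate As Date)
        If endDate < startDate Then
            Throw New ArgumentException("endDate must not be earlier than startDate")
        End If
        Me.StartDate = startDate
        Me.EndDate = endDate
    End Sub
End Class

' Easy to use incorrectly: nothing stops a caller from forgetting EndDate or swapping the two.
' Public Class SloppyDateRange
'     Public Property StartDate As Date
'     Public Property EndDate As Date
' End Class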
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But it turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to a January 2007 one.) Well, I'll leave this answer here; you're probably best off watching both and seeing how his message and way of presenting it have evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007: "How To Design A Good API and Why it Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½ minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort in it (economies of scale) — in a response to a question wondering, basically, "how hard can it be". (Part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code measuring utility which may help you decouple things better. You can use the .NET libraries as an example, but, personally, I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
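A minimal sketch of that extension point (the scheme and class names are invented; error handling, async support, and the other WebRequest members are omitted):

Imports System.IO
Imports System.Net
Imports System.Text

' The creator that WebRequest.Create consults for the invented "myschema" prefix.
Public Class MySchemaRequestCreator
    Implements IWebRequestCreate

    Public Function Create(address As Uri) As WebRequest Implements IWebRequestCreate.Create
        Return New MySchemaRequest(address)
    End Function
End Class

Public Class MySchemaRequest
    Inherits WebRequest

    Private ReadOnly _address As Uri

    Public Sub New(address As Uri)
        _address = address
    End Sub

    Public Overrides ReadOnly Property RequestUri As Uri
        Get
            Return _address
        End Get
    End Property

    Public Overrides Function GetResponse() As WebResponse
        Return New MySchemaResponse(_address)
    End Function
End Class

Public Class MySchemaResponse
    Inherits WebResponse

    Private ReadOnly _address As Uri

    Public Sub New(address As Uri)
        _address = address
    End Sub

    ' Callers only ever see a Stream; they never need to know these concrete types exist.
    Public Overrides Function GetResponseStream() As Stream
        Return New MemoryStream(Encoding.UTF8.GetBytes("response for " & _address.ToString()))
    End Function
End Class

' Registered once at start-up, the framework's own entry point then hands back your types:
' WebRequest.RegisterPrefix("myschema", New MySchemaRequestCreator())
' Dim stream = WebRequest.Create("myschema://host/more").GetResponse().GetResponseStream()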
A separate point is that for most developers, creating a reusable library is not, and should not be, the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)

OO or procedural

I have an Access db I use for my checkbook (with a good amount of fairly simple VBA behind it) and I'd like to rewrite it as a stand-alone program with a SQL backend. I'm thinking of using either C++, Java, or Python. I had assumed, before I started, that I would write it OO because I thought that I would think "in OO terms" (due to an OO Logic class and a C++ class I took), but I'm finding that I can only visualize it as procedural (maybe because I'm mentally stuck thinking of how the db works in Access). How do I decide? Am I making sense, or does it seem like I'm not understanding the concepts?
Thanks for your help.
I'd suggest OO - it's not harder than procedural programming, and it's actually easier to maintain with the right tool. Delphi would be my choice - great DB programming support, visual designer, strongly typed, plenty of components available. There are many great applications written in Delphi. It's often underestimated, but there are many reasons it's got a loyal following.
Now I'll duck as the Delphi-haters load up with tomatoes.
Well, OO may well be overkill, but it is excellent practice. Any code monkey can write procedural code; it's the path of least resistance in every case, which is why most people use it for one-off apps that don't do much. However, if you're writing to get experience working with OO, then it is best to think of it that way. You could start by designing an object that manages financial transactions; then you will also need a way to interact with the DB. Perhaps you could write a DB layer where you abstract away the database calls from the transaction object using the Entity Framework, where you could learn LINQ (or whatever the Java equivalent is). This is all assuming that you are doing this for fun and practice.
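A rough sketch of that separation, in .NET terms since the answer mentions Entity Framework (all names invented; the persistence details stay behind the store interface, whether that ends up being Entity Framework, raw SQL, or something else):

' The domain object knows nothing about the database.
Public Class CheckTransaction
    Public Property Id As Integer
    Public Property PostedOn As Date
    Public Property Payee As String
    Public Property Amount As Decimal
End Class

' The DB layer hides behind an interface that the rest of the program depends on.
Public Interface ITransactionStore
    Sub Save(transaction As CheckTransaction)
    Function GetAll() As List(Of CheckTransaction)
End Interface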
OO seems to be overkill for a simple checkbook app. Try something on a larger scale, like something to manage all your financial accounts. That way, designing an Account class would make sense.
Well, it depends on your motivation. If you want a checkbook application as quickly as possible, just churn out the procedural code; no one other than you will know the difference. If you want to use this application to better yourself as a programmer, take the time to learn how to write it in OO.
I'd go with Python: no compiling and uses dynamic typing (you can use strict typing too if you want). Plus, it has a huge following in the open source community which means great support, tools, and documentation for free.
As for OO vs. procedural -- all the languages you've mentioned could be written in a procedural style -- that is, one big class/method that does everything -- but you'll soon find that you'll want to follow the DRY principle (Don't Repeat Yourself) and start with some private methods that each do one particular thing well. From there, you'll want to group similar things into separate classes, and then from there you'll want to abstract those classes... see where I'm going here?
In my opinion you should concentrate less on the OO versus procedural thing. If you have the possibility to go procedural in the beginning, then go procedural. It's the easiest thing you can do to get you started. The OO thing, on the other hand, may just as well qualify as YAGNI (You Ain't Gonna Need It).
What you should do, though, is write tests: unit tests and then integration tests. And you should strive to write tests first. This way, even if you begin with a procedural application, you may later refactor it into a full-fledged OO application - but only if you need objects. These tests will be your safety net when moving code around in your application.
Trying to think your application into objects from the beginning may lead you to a point where you're stuck with your class hierarchies and architecture.
I'm not a genius, so I may be wrong, but in my experience, starting with simple functions and then thinking about grouping them into objects or modules is better than starting by saying: OK, I'll have this object that interacts with this object, which is implementing pattern X, so this way I'll decouple interface Y from implementation Z. Later on, you may observe that your domain model is weak. Take an evolutionary design path and start with small building blocks.
If you are looking for a quick app that you can extend, check out Dynamic Data.

How could you improve this code design?

Lately, I've been making a lot of use of the Strategy Pattern along with the Factory Pattern. And I really mean a lot. I have "algorithms" for everything and factories that retrieve those algorithms based on parameters.
Even though the code seems very extensible, and it is, having N factories seems a bit of an abuse.
I know this is pretty subjective, and we're talking without seeing code, but is this acceptable in real-world code? Would you change something?
OK, ask yourself a question: does/will this algorithm implementation ever change? If not, remove the strategy.
I am maintenance.
I was once forced (by my pattern-lovin' boss) to write a set of 16 "buffer interpreter tuxedo services" using an AbstractFactory and a double DAO pattern in C++ (no reflection, no code gen). All up it was something like 20,000 lines of the nastiest code I've ever seen (not least because I didn't really know C++ when I started), and it took about three months.
Since my old boss has moved on, I've rewritten them using good ol' "straight up and down" procedural-style C++, with a couple of funky macros... each service is like 60 lines of code, times 16... all up, less than 1000 lines of really SIMPLE code; so simple that even I can follow it.
Cheers. Keith.
Whenever I'm implementing code in this fashion, some questions I ask are:
what components do I need to substitute for testing?
what components will I expect users/admins to disable or substitute (e.g. via Spring configs or similar) ?
what components do I expect or suspect will not be required in the future due to (possibly) changing requirements ?
This all drives how I construct object or components (via factories) and how I implement algorithms. The above is vague, but (of course) the requirements can be similarly difficult to pin down. Without seeing your implementation and your requirements, I can't comment further, but the above should act as some guideline to help you determine whether what you've done is overkill.
If you're using the same design pattern all over the place, perhaps you should either switch to a language that has better support for what you're trying to do or rethink your code to be more idiomatic in your language of choice. After all, that's why we have more than one programming language.
It would depend on the kind of software I'm working on.
Maintenance asks for simple code, and factories are NOT simple code.
But extensibility sometimes asks for factories...
So you have to take both into consideration.
Just keep in mind that most of the time, you will have to maintain a source file MANY times a year, and you will NOT have to extend it.
IMO, patterns should only be used when absolutely needed. If you think they could be handy in two years, you are better off using them... in two years.
How complex is the work the factory is handling? Does object creation really need to be abstracted to a different class? A variation of the factory method is having a simple, in-class factory. This really works best if any dependencies have already been injected.
For instance,
public class Customer
{
    // A simple in-class factory method: creation logic stays with the type it creates.
    public static Customer CreateNewCustomer()
    {
        // handle minimally complex create logic here
        return new Customer();
    }
}
As far as Strategy overuse goes... again, as RichardOD explained, will the algorithm ever really change?
Always keep in mind the YAGNI principle. You Aren't Gonna Need It.
Can't you make an AbstractFactory instead of different standalone factories?
AlgorithmFactory creates the algorithms you need based on the concrete factory.
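A compact sketch of that suggestion (all names invented): one abstract AlgorithmFactory whose concrete factories each hand back a related family of algorithms, instead of a separate standalone factory per algorithm.

Public Interface ICompressionAlgorithm
    Function Compress(data As Byte()) As Byte()
End Interface

Public Interface IChecksumAlgorithm
    Function Checksum(data As Byte()) As Integer
End Interface

' The abstract factory: one place to ask for a whole family of algorithms.
Public MustInherit Class AlgorithmFactory
    Public MustOverride Function CreateCompression() As ICompressionAlgorithm
    Public MustOverride Function CreateChecksum() As IChecksumAlgorithm
End Class

' One concrete family; another factory could return a faster or safer family, and callers
' only ever deal with AlgorithmFactory and the algorithm interfaces.
Public Class DefaultAlgorithmFactory
    Inherits AlgorithmFactory

    Public Overrides Function CreateCompression() As ICompressionAlgorithm
        Return New PassThroughCompression()
    End Function

    Public Overrides Function CreateChecksum() As IChecksumAlgorithm
        Return New SumChecksum()
    End Function

    Private Class PassThroughCompression
        Implements ICompressionAlgorithm
        Public Function Compress(data As Byte()) As Byte() Implements ICompressionAlgorithm.Compress
            Return data   ' deliberately trivial; a real family member would do actual work
        End Function
    End Class

    Private Class SumChecksum
        Implements IChecksumAlgorithm
        Public Function Checksum(data As Byte()) As Integer Implements IChecksumAlgorithm.Checksum
            Dim total As Integer = 0
            For Each b As Byte In data
                total += b
            Next
            Return total
        End Function
    End Class
End Class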

How to write static code analyzer for .net

I am interested in writing a static code analyzer for VB.NET to check whether code conforms to my company's coding guidelines. Please advise where I should start.
Rather than write your own static code analyzer, I recommend you use FxCop and write custom FxCop rules for your needs instead. It'll save you a lot of time.
http://www.binarycoder.net/fxcop/
I would suggest you use Mono's Gendarme. It's a very nice tool, with plenty of built in rules. It also generates nice HTML reports.
If you need more architectural insight, use NDepend. This tool never ceases to amaze me. It can do so much more than FxCop. It's commercial, though it has a free trial version.
FxCop is a good start for coding problems/mistakes, and StyleCop is good for coding style (obviously), but if neither of those two works, then you can either write a parser yourself or use the VBCodeProvider class in the .NET Framework.
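If you do decide to write something yourself, a trivially naive line-based checker (a sketch below, with invented rules and paths) gives a feel for it; anything past surface-level checks quickly demands a real parser, which is why the FxCop/StyleCop suggestions are sound.

Imports System.IO
Imports System.Text.RegularExpressions

Module NaiveGuidelineChecker
    Sub Main()
        ' Invented example rules: maximum line length, and no Hungarian-style "int"/"str" prefixes on locals.
        For Each fileName As String In Directory.GetFiles("C:\source\MyProject", "*.vb", SearchOption.AllDirectories)
            Dim lineNumber As Integer = 0
            For Each line As String In File.ReadAllLines(fileName)
                lineNumber += 1
                If line.Length > 120 Then
                    Console.WriteLine("{0}({1}): line longer than 120 characters", fileName, lineNumber)
                End If
                If Regex.IsMatch(line, "\bDim\s+(int|str)[A-Z]") Then
                    Console.WriteLine("{0}({1}): Hungarian-style local variable name", fileName, lineNumber)
                End If
            Next
        Next
    End Sub
End Module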
Start with FxCop. If you can't do what you're trying there, try something like NStatic or NDepend.
The best options are to use FxCop or StyleCop and write custom rules if necessary.
Use FxCop; this isn't a project you want to undertake personally. The parsing/lexical rules involved and the possible catches would be insane. The only way I could imagine doing it while retaining a modicum of sanity would be to use Lisp, thanks to its extreme expressiveness, but again, it's best to use FxCop.
If you must write a custom in-house tool for some (dogmatic?) reason, I'd recommend writing a Lisp program that does only basic rules checking. Don't try to make it comprehensive; we're talking about the kind of frontier AI researchers are dealing with in terms of the parsing capabilities of a piece of software.
Just use Lisp to find the obvious offenders, or to catch whatever it ends up being good at catching in terms of non-compliant code, then subject the rest to a brief human eye scan. I highly recommend abusing macros if you do use Lisp to write the parser.
I agree with one of the posters that it would be quite a difficult task, but rather than Lisp I'd start with F#, just like Microsoft did for their third-party Windows drivers analysis tool:
http://arstechnica.com/journals/microsoft.ars/2005/11/10/1796
F# shares Lisp's expressiveness (OK, almost) and runs on the CLR just like VB.NET, which would make the whole thing easier.