Is there a need to set Objects to Nothing - vba

I always read that it is recommended to set objects to nothing, once I am done with them. But I normally use them only in functions inside forms.
Isn't the reference lost and memory released when the function scope is left, regardless of setting objects to Nothing?
i.e. is it really necessary to do:
Set db = Nothing
Set record_set = Nothing

VB uses a so-called "reference counting" garbage collector.
Basically, the moment a variable goes out of scope, the reference counter on the referenced object is decremented. When you assign the object reference to another variable, the reference counter is incremented.
When the counter reaches zero, the object is ready for garbage collection. The object resources will be released as soon as this happens. A function local variable will most likely reference an object whose reference count never goes higher than 1, so object resources will be released when the function ends.
Setting a variable to Nothing is the way to decrement the reference counter explicitly.
For example, you read in a file and set the file object variable to Nothing right after the ReadAll() call. The file handle is released immediately, and you can take your time processing the contents.
If you don't set to Nothing, the file handle might be open longer than absolutely necessary.
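For instance (a minimal sketch of the file case using the Scripting runtime; the path is just a placeholder):
Sub ReadFileAndReleaseEarly()
    Dim fso As Object, ts As Object, contents As String

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\temp\data.txt", 1)   ' 1 = ForReading

    contents = ts.ReadAll

    ' Dropping the last references closes the underlying file handle right
    ' away, instead of keeping it open while the contents are processed.
    Set ts = Nothing
    Set fso = Nothing

    ' ...take your time processing "contents" from here on...
End Sub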
If you are not in a "must unblock valuable resource" kind of situation, simply letting the variables go out of scope is okay.

Garbage collection is rarely perfect. Even in .NET there are times where you are strongly encouraged to prompt the system to do garbage collection early.
For this reason, I explicitly both close and set to Nothing recordsets when I'm done with them.
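In practice that habit looks something like this (a sketch against DAO in Access; "Orders" is an assumed table name):
Sub CloseRecordsetExplicitly()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set db = CurrentDb
    Set rs = db.OpenRecordset("Orders", dbOpenSnapshot)

    ' ...work with rs here...

    rs.Close            ' release the cursor and any locks right away
    Set rs = Nothing    ' then drop the reference as well
    Set db = Nothing
End Sub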

The very last line of the help topic for "Recordset.Close" in the Microsoft DAO help and the Access Developer Reference is this:
"An alternative to the Close method is
to set the value of an object variable
to Nothing (Set dbsTemp = Nothing)."
http://msdn.microsoft.com/en-us/library/bb243098.aspx
With that in mind, the Microsoft Knowledge Base article entitled "How to prevent database bloat after you use Data Access Objects (DAO)" tells you that you should close explicitly if you don't want your databases to bloat. You'll notice that the article is a little vague about the details; the "Cause" section is unclear, almost to the point of being gibberish.
http://support.microsoft.com/kb/289562
SYMPTOMS: A Microsoft Access database has begun to bloat (or grow rapidly in size) after you implement Data Access Objects (DAO) to open a recordset.
CAUSE: If you do not release a recordset's memory each time that you loop through the recordset code, DAO may recompile, using more memory and increasing the size of the database.
MORE INFORMATION: When you create a Recordset (or a QueryDef) object in code, explicitly close the object when you are finished. Microsoft Access automatically closes Recordset and QueryDef objects under most circumstances. However, if you explicitly close the object in your code, you can avoid occasional instances when the object remains open.
Finally, let me add that I have been working with Access databases for 15 years, and I almost always let my locally declared recordset variables go out of scope without explicitly using the Close method. I have not done any testing on it, but it does not seem to matter.

When you are using ASP classic (server-side scripting), it is important to set all objects to nothing when you are through with them, because they do not go out of scope until the [virtual] server is shut down.
For this reason, the MS VB scripting examples always showed objects being closed and set to Nothing, so that the script excerpts could be used in environments like ASP classic where the objects did not go out of scope.
There are, rarely, other situations where you wish to code long-running processes where the objects do not go out of scope, and you find yourself running out of physical memory if you do not explicitly release objects.
If you find yourself coding ASP classic, or running processes in global scope for some other reason, then yes, you should explicitly release objects.

References are supposed to be cleaned up when the variable goes out of scope. Presumably this has improved with later versions of the software, but it was at one time not reliable. I believe that it remains a good practice to explicitly set variables to "Nothing."

I almost always put this at the end of my procedures, or call a "CloseRecordSet" sub containing it if I'm using module-level ones:
Private Sub Rawr()
    On Error GoTo ErrorHandler

    'Procedural code here.

ExitPoint:
    'Closes and destroys the Recordset / Connection objects.
    If Not Recset Is Nothing Then
        If Recset.State = 1 Then    '1 = adStateOpen
            Recset.Close
            Conn.Close
        End If
        Set Recset = Nothing
        Set Conn = Nothing
    End If
    Exit Sub

ErrorHandler:
    'Error handling / reporting here.
    Resume ExitPoint
End Sub
That way, however the procedure ends (be it normally or due to an error), the objects are cleaned up and their resources are freed.
Doing it that way is quite safe, in that you can just drop it in and it will only do what is necessary to close or destroy the recordset / connection object, in case either has already been closed (due to a runtime error, or because you closed it early as you should; this just makes sure).
It's really not much hassle, and it's always best to clean up your objects when you're finished with them, to free resources immediately regardless of what happens in the program.

Try this:
If Not IsEmpty(vMyVariant) Then
    Erase vMyVariant
    vMyVariant = Empty
End If

Related

Is there a reason for checking is something is Nothing before setting it to nothing? [duplicate]

I've noticed that some members of the Stack Overflow community will use Set Object = Nothing in closing procedures. I was able to find why this is useful for instances of Access, but no answer has been satisfying when it comes to doing this for Excel, so my question is:
What are the benefits of setting objects to Nothing in VBA?
In the sample code below, is setting my objects ws and Test equal to Nothing a waste of space? Else, if doing so is in fact good practice, why?
Dim ws As Worksheet
Dim Test As Range
Set ws = Sheets("Sheet1")
Set Test = ws.Range("A1")
'Utilize Test variable (copy, paste, etc.)
Set Test = Nothing
Set ws = Nothing
Exit Sub
If this were managed .NET code (which is garbage-collected), you'd have to Release every single COM object you ever accessed; otherwise the host process (EXCEL.EXE) would likely remain running in the background, consuming memory and unable to completely tear down.
But this is VBA code (which is reference-counted), moreover VBA code that uses objects that the host application controls - these objects will die when the host application shuts down, and when that happens the VBA execution context is long gone.
In other words, all these Set ... = Nothing instructions are completely redundant.
In some specific cases, when you're dealing with a 3rd-party API / type library, it's possible that objects don't entirely clean up. For example you might be creating an Access.Application instance, and find that a "ghost" ACCESS.EXE process remains open in Task Manager well after Excel exited: that's a sign that you're leaking an object reference somehow, somewhere, and Set ... = Nothing can help prevent that.
However I wouldn't recommend systematically nulling all object references like that. Only when not doing it causes a problem. And even then, it's going to be one or two objects dragging everything down, not all of them. If ACCESS.EXE shuts down properly, there's no reason to clutter up your code with such instructions.
Avoiding storing object references in global state helps, too. If everything is local, in theory all objects involved are destroyed as soon as the local scope exits.
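For the Access.Application case mentioned above, the cleanup would look something like this (a sketch; the .accdb path is a placeholder):
Sub AutomateAccessAndTearDown()
    Dim accApp As Object

    Set accApp = CreateObject("Access.Application")
    accApp.OpenCurrentDatabase "C:\temp\Sample.accdb"

    ' ...do the cross-application work here...

    accApp.CloseCurrentDatabase
    accApp.Quit                 ' without this, a ghost ACCESS.EXE can linger
    Set accApp = Nothing        ' drop the last reference we hold to it
End Sub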

Is it expensive to connect to COM objects?

I'm currently involved in maintaining a legacy system written in PowerBuilder.
My actual objective is to handle the errors that may occur when connecting to COM objects.
The COM objects have been written in VB.NET and are mainly Web services.
I have defined a base factory which handles the instantiation of the COM objects through the PB function ConnectToNewObject().
So, when I instantiate a COM object, I get an instance of an oleobject.
In the code to be replaced, I can see that in some places the oleobject is destroyed immediately after use; in other places it isn't. To add to the confusion, sometimes it is disconnected first.
So I can encounter either:
oleobject lole_new_com_object
lole_new_com_object = create oleobject
lole_new_com_object.ConnectToNewObject("MyAssembly.MyNamespace.MyClass")
...
lole_new_com_object.DisconnectObject() // Disconnect before destroy...
destroy lole_new_com_object
or:
oleobject lole_new_com_object
lole_new_com_object = create oleobject
lole_new_com_object.ConnectToNewObject("MyAssembly.MyNamespace.MyClass")
...
// No disconnect...
destroy lole_new_com_object
So I wonder whether it is expensive to instantiate an object this way.
Would it be best to disconnect before any destroy?
Would it be best to destroy any unreturned oleobject?
Would it be best to define them as singletons?
I am quite comfortable with PowerBuilder, though I am no expert, and I don't know how PB handles things for concerns like these.
From the official documentation here: Shutting down and disconnecting from the server
You can rely on garbage collection to destroy the OLEObject variable. Destroying the variable automatically disconnects from the server. It is preferable to use garbage collection to destroy objects, but if you want to release the memory used by the variable immediately and you know that it is not being used by another part of the application, you can explicitly disconnect and destroy the OLEObject variable.
So you can keep the destroy without the disconnect. As for the destroy itself, it depends on the type of object: you don't have to do it, especially for in-process objects. It is usually done for out-of-process COM servers such as Microsoft Word or Excel, when you want to make sure the EXE is really gone at the point you expect it to be.
You should disconnect before the destroy and destroy any which have been created.
I don't know if defining them as singletons would make any difference or not.
I agree with Matt's comment. Theoretically a local variable is automatically destroyed when the script ends. That being said, I think most developers would code a destroy immediately after the DisconnectObject like in your first example.

Difference Between Dispose and Nothing

Public Sub ExecuteQuery(ByVal pQueryString As String, Optional ByVal pConn As Odbc.OdbcConnection = Nothing)
    Dim Mycmd As New Odbc.OdbcCommand(pQueryString, MyConn)
    Mycmd.ExecuteNonQuery()
    Mycmd.Dispose()
End Sub
Here I clear the object using Dispose (Mycmd.Dispose()). Could I use Nothing here instead (Mycmd = Nothing)? Which is best?
Dim Mycmd As New Odbc.OdbcCommand(pQueryString, MyConn)
This statement stores a reference to the object created by New Odbc.OdbcCommand(pQueryString, MyConn) in Mycmd, i.e. Mycmd basically holds the address of the newly created object.
Now when you do
Mycmd.Dispose()
then it indicates that the use of that newly created object is over and the space allocated to that object can be freed during garbage collection.
but when you do
Mycmd = Nothing
then it just removes the reference to the newly created object from Mycmd; it does not mark the object for garbage collection.
Oftentimes, .net objects will ask other entities (which may or may not even be on the same computer) to "do something"(*) on their behalf, and such entities will continue doing so unless or until they are told to stop. Such objects should implement IDisposable, and their IDisposable.Dispose routine should notify any and all entities which had been doing something on their behalf, to stop doing so. If all references to an IDisposable object were to disappear without Dispose being called first, some other entities might continue forever in uselessly doing something on behalf of an object which has long since ceased to exist.
(*) That "doing something" could be anything, including blocking other requests to do something. For example, an object might ask the operating system for exclusive access to a file, and the operating system might relay that request to another computer. If the object doesn't notify the OS when it no longer needs access to the file, the server could leave everyone else in the universe locked out of the file indefinitely.
In an effort to minimize problems from entities' perpetually acting on behalf of abandoned objects, .net provides a means by which objects can ask to be notified when they're abandoned. If an object which overrides Object.Finalize() is abandoned, then .net will usually call that object's override of the Finalize() method. This kinda sorta works, mostly, but it should almost never be relied upon. It's very hard to design a class so that Finalize() will always do the right thing and never do the wrong thing. Among other things, if one isn't careful, it's possible for .net to call Finalize() on an object which it determines is going to be abandoned, even while that object is interacting with an outside entity. This will never happen in code which properly calls Dispose on the object, but may happen in code which relies upon Finalize().
If you mean assigning the value Nothing to the object as below:
Mycmd = Nothing
This doesn't actually do anything in terms of signaling an object as ready for garbage collection, or of freeing the object's resources.
In VB6, setting an object equal to Nothing was the correct way to free the object's resources, but now calling the Dispose method is correct. Where objects do not implement IDisposable, you can simply leave them.
Garbage collection will then happen in its own time (and would have even without the call to .Dispose()).
There is no .Nothing() method for releasing an object's resources. You can use both Dispose and Nothing to release them, but Dispose is only available where the IDisposable interface is implemented; for your own custom classes you have to implement it yourself, while the built-in .NET objects already have it, and calling Dispose only works on an object that has actually been assigned, otherwise it throws an error. You can use
command = Nothing
for any object, whether it comes from the .NET Framework or is a custom class. Both release the object.

Forcing Garbage Collection

Is there a way to force garbage collection in VBA/Excel 2000?
This question refers to the Macro language in Excel.
I'm not using VB.NET to manipulate Excel, so GC.Collect() won't work.
You cannot take advantage of the garbage collection provided by the .NET Framework when using straight VBA. Perhaps this article by Eric Lippert will be helpful.
You can't force GC in VBA, but it is good practice to set global variables to Nothing.
The article mentioned by kd7 says it's useless to set local variables to Nothing before they go out of scope, but it doesn't talk about global variables.
In VBA, the global variables defined in a module remain alive through the whole Excel session, i.e. until the document containing the VBA module that defines them is closed.
So don't put useless Set O = Nothing when O is local, but do it when it's global.
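A sketch of that distinction (the names are illustrative; gConn is assumed to hold something like an open ADODB.Connection kept for the whole session):
' Standard module: gConn is module-level and lives until the workbook
' closes, unless it is released explicitly.
Public gConn As Object

Public Sub TearDownGlobals()
    ' Worth doing for globals: they never go out of scope on their own.
    If Not gConn Is Nothing Then
        gConn.Close
        Set gConn = Nothing
    End If
End Sub

Public Sub UseSomethingLocal()
    Dim rng As Range
    Set rng = ActiveSheet.Range("A1")
    ' ...use rng...
    ' No "Set rng = Nothing" needed: the local reference dies at End Sub.
End Sub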
Like old VB, VBA/Excel does not have a garbage collector; it uses reference counting instead. An object's memory is freed when its last reference is released, either by setting the variable to Nothing or by letting it go out of scope. As in old VB, this means that circular references are never freed.
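To illustrate that last point (a sketch; it assumes a class module named Node whose entire code is Public Partner As Node):
Sub LeakByCircularReference()
    Dim a As Node, b As Node
    Set a = New Node
    Set b = New Node

    Set a.Partner = b    ' a holds a reference to b
    Set b.Partner = a    ' b holds a reference to a

    ' When a and b go out of scope at End Sub, each object is still
    ' referenced by the other, so neither count reaches zero and neither
    ' object is destroyed until the host shuts down. Breaking the cycle
    ' first avoids the leak:
    Set a.Partner = Nothing
End Sub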

Is object clearing/array deallocation really necessary in VB6/VBA (Pros/Cons?)

A lot of what I have learned about VB I learned from using static code analysis (particularly Aivosto's Project Analyzer). One of the things it checks for is whether or not you cleared all objects and arrays. I used to just do this blindly because PA said so. But now that I know a little bit more about the way VB releases resources, it seems to me that these things should be happening automatically. Is this a legacy feature from pre-VB6, or is there a reason why you should explicitly set objects back to Nothing and use Erase on arrays?
Matt Curland, author of Advanced Visual Basic 6, who knows more about Visual Basic than most of us ever will, thinks it is wasted effort. Consider this quote (p110) about DAO, the COM data access library that primarily targets the Access Database Engine:
...another example of poor teardown code. DAO has Close methods that must be called in the correct order, and the objects must be released in the correct order as well (Recordset before Database, for example). This single poor object model behavior has led to the misconception that VB leaks memory unless you explicitly set all the local variables to nothing at the end of a function. This is a completely false notion in a well-designed object model. VB can clear the variables faster at the End Sub line than you can from code, and it checks the variables even if you explicitly release your references. Any effort you make is duplicated.
The problem, as I understand it, has to do with the fact that VB6 (and its predecessors) has its roots in COM, and its reference-counting garbage collection system.
Imagine, for instance, that you declare a reference to an object from a 3rd party library. That object has a COM reference count that is used both to keep it alive and to determine when it should be destroyed. It isn't destroyed when you set it to Nothing, but when the object's reference count reaches zero.
Now, not all COM components were written in Visual Basic. Some were written in C or C++. Structured exception handling didn't exist across all languages. So if an error occurred, the reference count on the object was not guaranteed to be properly reduced, and COM objects were known to hang around longer than they were intended to. This wasn't a problem with Visual Basic, per se. It was a COM problem. (And that, you might note, is why .NET doesn't use reference counting.)
That's why Visual Basic developers became obsessive about releasing object references prior to exiting routines. You simply don't know what a component you're allocating is creating under the hood. But when you release your reference to it, you're at least decrementing its reference count. It became almost a religious mantra. Declare, use, release. It was the COM way of doing things.
Sure, Visual Basic might be better or faster at dereferencing variables I declared on the stack. But dammit, I want it to be OBVIOUS that those objects were released. A little assurance goes a long way when you're trying to track down a memory leak.
Have you read this Aivosto web page (from the creators of Project Analyzer)?
If you are using static variables, it's important to reclaim the memory they occupied when you don't need the variables any more. With dynamic variables memory isn't so much of a problem, because they are destroyed when the procedure ends.
In other words, you don't need to worry about clearing ordinary, non-static, local variables.
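A short sketch of that distinction in Excel VBA (the names are illustrative):
Sub BuildReport()
    Static cache() As String    ' Static: keeps its memory between calls
    Dim ws As Worksheet         ' local: reference released at End Sub

    ReDim cache(1 To 10000)
    Set ws = ActiveSheet
    ' ...use cache and ws...

    Erase cache    ' worth reclaiming: a Static array would otherwise
                   ' hold its memory for the rest of the session
    ' No "Set ws = Nothing" needed for the ordinary local variable.
End Sub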
I always do it as good practice; you never know what an exception might do if you hit one and your objects are not deallocated. You should release them in your finally-style cleanup code and make sure they are not holding on to any memory, otherwise you may run into a memory leak.
I had an issue inside a simple time-off tracker system where the server kept crashing randomly; it took weeks to determine that it was a memory leak from an object that was supposed to destroy itself. My code was hitting an exception and never cleaning up after itself, causing the server (the actual web site, not the entire server) to go down.
Yes, set all objects to Nothing and clean up as much as you can. VB6 is notorious for having memory leaks when not cleaning up your stuff. Garbage collection was sub-par in VB6/VBA.