I've noticed that whenever I have package-level constants (or any variables, for that matter) and I recompile the package, persistent connections to the database get the error "existing state of package body has been invalidated".
Is there a way to avoid this? Perhaps through synonyms? What are best-practices in this case?
In general, you should avoid replacing code in a live production instance.
If you really must be live 24/7 and you can't schedule any downtime, even a tiny window, you will have to avoid package-level variables, since recompiling such a package triggers the aforementioned error.
You could also catch the error in your client application and decide what to do. Maybe you have sufficient information to restart whatever the client was doing.
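For example, here is a minimal sketch of that retry approach in a Python client using the cx_Oracle driver (the procedure name is made up for illustration); ORA-04068/ORA-04061 are the codes Oracle raises when package state has been discarded or invalidated:

import cx_Oracle

def call_with_retry(connection, proc_name, args):
    # After a recompilation, Oracle raises the package-state error once for the
    # session; on the next call the state is re-initialized, so a single retry
    # is usually enough.
    for attempt in (1, 2):
        cursor = connection.cursor()
        try:
            return cursor.callproc(proc_name, args)
        except cx_Oracle.DatabaseError as exc:
            error, = exc.args
            if error.code in (4061, 4068) and attempt == 1:
                continue  # state discarded/invalidated: retry the call once
            raise
        finally:
            cursor.close()

# call_with_retry(conn, "customer_pkg.update_status", [42, "ACTIVE"])  # hypothetical procedure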
See also
This thread on AskTom covers the same topic.
The problem is that if a session starts at 10:00 am with the constant set to 'A', and at 11:00 am you change it to 'B', the session 'blows up' in confusion.
The SERIALLY_REUSABLE pragma may work for you. Basically, it won't preserve state between calls, so at 11:00 am the session will just start using 'B'. If you can be 100% sure that won't break your code, it can work, but re-initializing the constants whenever they are needed may add overhead.
Also look at calling DBMS_SESSION.MODIFY_PACKAGE_STATE or DBMS_SESSION.RESET_PACKAGE at appropriate intervals. That may reduce the number of errors you get.
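Continuing the same Python/cx_Oracle sketch (the "appropriate interval" is up to you, for example between units of work), a reset looks roughly like this:

def reset_package_state(connection):
    # DBMS_SESSION.RESET_PACKAGE de-instantiates all package state held by the
    # session, so the next call re-reads constants from the newly compiled
    # package instead of tripping over stale state.
    cursor = connection.cursor()
    try:
        cursor.execute("BEGIN DBMS_SESSION.RESET_PACKAGE; END;")
    finally:
        cursor.close()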
You should also look at Edition-based redefinition in the new 11gR2. That's a more comprehensive solution, but I guess you'd need to upgrade for that.
In the past I've gotten around this by moving all state-related stuff into separate packages.
For example, if I had a package "CUSTOMER_PKG", I'd move all the global variables into a spec-only package called "CUSTOMER_GLOBALS_PKG".
Unfortunately this means exposing all private globals that were defined in the package body. We had to enforce a development standard so that CUSTOMER_GLOBALS_PKG was only allowed to be referred to by CUSTOMER_PKG.
My firm's Access database has been having some serious problems recently. The errors we're getting seem like they indicate corruption -- here are the most common:
Error accessing file. Network connection may have been lost.
There was an error compiling this function.
No error, Access just crashes completely.
I've noticed that these errors only happen with a compiled database. If I decompile it, it works fine. If I take an uncompiled database and compile it, it works fine -- until the next time I try to open it. It appears that compiling the database into a .ACCDE file solves the problem, which is what I've been doing, but one person has reported that the issue returned for her, which has me very nervous.
I've tried exporting all of the objects in the database to text, starting with a brand new database, and importing them all again, but that doesn't solve the problem. Once I import all of the objects into the clean database, the problem comes back.
One last point that seems to be related, though I don't understand how: the problem started right around the time that I added some class modules to the database. These class modules use the VBA Implements keyword, in an effort to clean up my code by introducing some polymorphism. I don't know why this would cause the problem, but the timing seems to indicate a relationship.
I've been searching for an explanation, but haven't found one yet. Does anyone have any suggestions?
EDIT: The database includes a few references in addition to the standard ones:
Microsoft ActiveX Data Objects 2.8
Microsoft Office 12.0
Microsoft Scripting Runtime
Microsoft VBScript Regular Expressions 5.5
Some of the things I do and use when debugging Access:
Test my app in a number of VMs. You can use Hyper-V on Win8, VMware or VirtualBox to set up various controlled test environments, such as WinXP, Win7, Win8, 32-bit or 64-bit, anything that matches the range of OS versions and bitness of your users.
I use vbWatchDog, a clever utility that only adds a few classes to your application (no external dependency) and allows you to trap errors at a high level, showing you exactly where they happen. This is invaluable for catching and recording strange errors, especially in the field.
If the issue appears isolated to one or a few users only, I would try to find out what is special about their config. If nothing seems out of place, I would completely uninstall all Office components and re-install them after scrubbing the registry for dangling keys and removing all traces of folders from the old install.
If your users do not need a complete version of Access, just use the free Access Runtime on their machine.
Make sure that you are using consistent versions of Access throughout: if you are using Access 2007, make sure your dev machine is also using that version and that all other users are also only using that version and that no components from Access 2010/2013 are present.
Try to ascertain whether the crash always happens around the same user actions. I use a simple log file that I write to when a debugging flag is set. The log file is a plain text file that I open/write to/close every time I log something (I don't keep it open, to make sure the data is flushed to the file; otherwise, when Access crashes, you may only have old data in the log file because the newest entries may still be in the buffer). Things I log are, for instance, sensitive function entry/exit, SQL queries that I execute from code, form open/close, etc.
As a general rule, make sure your app compiles without issue (I mean when doing Debug > Compile from the IDE). Any issue at this stage must be solved.
Make absolutely sure you close all open recordsets, preferably followed by setting their variables to Nothing. VBA is not as sensitive as it used to be about dangling references, but I find it good practice, especially when you cannot be absolutely sure that these references will be freed (for instance when doing stuff at module or class level, where the scope may be longer-lived than expected).
Similarly, make sure you properly destroy any COM object you create in your classes (and subs/functions). The Class_Terminate destructor must explicitly clean them all up. This is also valid when closing forms if you created COM objects (you mentioned using ADO, scripting objects and regex). In general, keeping track of created objects is paramount: make sure you explicitly free all your objects by resetting them (for instance using RemoveAll on a dictionary) and then assigning their references to Nothing.
Do not over-use On Error Resume Next or On Error Goto. I almost never use these except when absolutely necessary to recover from otherwise undetectable errors. These error-trapping constructs can hide a lot of errors that would otherwise show you that something is wrong with your code. I prefer to program defensively rather than having to handle exceptions.
For testing, disable your error trapping to see if it isn't hiding the cause of your crashes.
Make sure that the front-end is local to the user's machine. You mention they get their individual front-end from the network, but I'm not sure if they run it from there or if it is copied to their local machine. At any rate, it should be local, not in a remote folder.
You mention using SQL Server as a backend. Try to trace all the queries being executed. It's possible that the issue comes from communication with SQL Server, a corrupt driver, a security issue that prevents some queries from being run, a query returning unexpected data, etc. Watch the log files and event log on the server closely for strange errors, especially if they involve security.
Speaking of event log, look for the trace of the crash in the event log of your users. There may be information there, however cryptic.
If you use custom ribbon actions, make sure they are not causing issues. I have had strange problems over time with the ribbon. Log all function calls made by the ribbon.
I'm currently trying to write a program in VB.NET which fluidly changes the DWM window colorization colors in Windows 7.
I first tried to edit the Registry values directly, but then I had to restart the UxSms (Desktop Window Manager Session Manager) service for the change to take effect. This solution was unsatisfying because the taskbar visibly toggles while the service restarts.
I'm now searching for a function in a DLL such as user32.dll or themecpl.dll which can reproduce the behaviour of the Control Panel when setting the window color.
I'm now digging through IDA, searching for the adequate function (CColorCplPage::SetDwmColorizationColor seems promising!). If anyone knows one, please share it!
(If anyone need screens or code, please ask. Sorry for my poor English.)
Your first attempt failed because manually editing the Registry is never the correct way to change system settings. As you found out, lots of Windows components (and other applications!) read those configuration values once and cache them, preventing your changes from being propagated. Another problem (and you'd be surprised how often I see this) is that applications that attempt to muck around in the Registry generally end up corrupting things.
Instead, you should call the documented API to change the settings. There's almost always a documented way of doing this, and if there isn't, well then you shouldn't be doing it.
This appears to be one of those cases. There's a documented DwmGetColorizationColor function, but not the corresponding DwmSetColorizationColor function one might expect.
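(Calling the documented getter is straightforward; here is a quick Python/ctypes sketch just to show the call shape, even though the question itself is about VB.NET.)

import ctypes
from ctypes import wintypes

dwmapi = ctypes.WinDLL("dwmapi")

def get_colorization_color():
    # HRESULT DwmGetColorizationColor(DWORD *pcrColorization, BOOL *pfOpaqueBlend)
    color = wintypes.DWORD()
    opaque_blend = wintypes.BOOL()
    hr = dwmapi.DwmGetColorizationColor(ctypes.byref(color), ctypes.byref(opaque_blend))
    if hr != 0:  # S_OK is 0; anything else is a failure HRESULT
        raise OSError("DwmGetColorizationColor failed: 0x%08X" % (hr & 0xFFFFFFFF))
    return color.value, bool(opaque_blend.value)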
The reason is that the user is supposed to be the only one who can change their colorization settings, not other applications. You might promise not to abuse this, and to only make such changes at the user's explicit request, but not all applications can be trusted to do this. Lots of people would use it maliciously, so these functions have not been documented and exposed.
But as usual, if you press on, you can usually find an undocumented way of doing things. The problem with using undocumented functions is that there's no guarantee they'll work or continue to work. They've been intentionally left undocumented because they're liable to change on new versions of Windows. You should only use them at your own risk.
In this case, if you use a program like DumpBin to obtain a list of all the exported functions from the DWM DLL (dwmapi.dll), you'll see a number of undocumented exported functions.
The ones you're interested in are DwmGetColorizationParameters and DwmSetColorizationParameters. Both of these functions take a COLORIZATIONPARAMS structure as an argument that contains the values they need.
So, you need to reverse engineer these functions and obtain the appropriate definitions. Then, you can call the DwmGetColorizationParameters function, passing in a COLORIZATIONPARAMS structure to obtain the current configuration settings; modify the member of the structure that contains the current colorization color; and then pass that modified version of the structure to the DwmSetColorizationParameters function.
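As a very rough, at-your-own-risk sketch (again in Python/ctypes rather than VB.NET, purely for illustration): the structure layout, the field names, and the export ordinals below come from third-party reverse-engineering notes for Windows 7 and are assumptions; verify them against your own DumpBin output before relying on them.

import ctypes
from ctypes import wintypes

class COLORIZATIONPARAMS(ctypes.Structure):
    # Undocumented layout reconstructed by reverse engineering; it may differ
    # on other Windows versions.
    _fields_ = [
        ("ColorizationColor", wintypes.DWORD),
        ("ColorizationAfterglow", wintypes.DWORD),
        ("ColorizationColorBalance", wintypes.DWORD),
        ("ColorizationAfterglowBalance", wintypes.DWORD),
        ("ColorizationBlurBalance", wintypes.DWORD),
        ("ColorizationGlassReflectionIntensity", wintypes.DWORD),
        ("ColorizationOpaqueBlend", wintypes.DWORD),
    ]

dwmapi = ctypes.WinDLL("dwmapi")
# Assumed ordinals on Windows 7: 127 = DwmGetColorizationParameters,
# 131 = DwmSetColorizationParameters. Check them with DumpBin first.
DwmGetColorizationParameters = dwmapi[127]
DwmSetColorizationParameters = dwmapi[131]

def set_colorization_color(argb):
    params = COLORIZATIONPARAMS()
    DwmGetColorizationParameters(ctypes.byref(params))     # read current settings
    params.ColorizationColor = argb                        # change only the color member
    DwmSetColorizationParameters(ctypes.byref(params), 0)  # second argument's meaning is unknown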
Did I mention that I don't recommend doing this?
I'm very keen to update a number of our servers to PHP 5.3. This would be in readiness for Zend Framework 2 and also for the apparent performance improvements. Unfortunately, I have large amounts of legacy code on these servers which will be fixed in time, but cannot all be fixed before the migration. I'm considering updating but disabling the deprecated-function error on all but a few development sites where I can begin to work through updating the old code.
error_reporting(E_ALL ^ E_DEPRECATED);
Is there any fundamental reason why this would be a bad idea?
Well, you could forget that you set the flag and wonder why your application breaks in a future PHP update. It can be very frustrating to debug an application without proper error reporting. That's one reason I can think of.
However, if you do it, document it somewhere. It can save you a couple of hours before you remember setting the flag at all.
If you haven't already you should read the migration guide with particular focus on Backward Incompatible Changes and Removed Extensions.
You have bigger issues than deprecation. Ignoring E_DEPRECATED will not suffice. Because of the incompatible changes there will also be other types of errors or, maybe even worse, unexpected behaviors.
Here's a simple example:
<?php
function goto($line) {
    echo $line;
}
goto(7);
?>
This code will work fine and output 7 in PHP 5.2.x, but will give you a parse error in PHP 5.3.x because goto became a reserved word in 5.3.
What you need to do is take each item in that guide and check your code and update where needed. To make this faster you could ignore the deprecated functionality in a first phase and just disable error reporting for E_DEPRECATED, but you can't assume that you will only get some harmless warnings when porting to another major PHP branch.
Also, don't forget about your hack: fix the deprecated issues as soon as possible.
Regards,
Alin
Note: I tried to answer the question from a practical point of view, so please don't tell me that ignoring warnings is bad. I know that, but I also know that time is not an infinite resource.
I presume you have some kind of test server? If not, you really should set one up and test your code in PHP 5.3. If your code is thoroughly Unit Tested, testing it will take seconds, and fixing it will be fairly quick too, as the unit tests will tell you exactly where to look. If not, then consider making Unit Testing it all a priority before the next release, and in the meantime go through it all, first with E_DEPRECATED warnings disabled and fix anything which comes up, then with it re-enabled once you have time. You could also run a global find-and-replace for easier to fix errors.
I know there are other applications as well, but consider yum/apt-get/aptitude/pacman as the core package managers for Linux distributions.
Today I saw on my fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than programmers.SE for such a question since it is more technical in nature. If there is a more appropriate place for this question, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern unix systems will generally tolerate overwriting a running executable without a hiccup, so in theory you could just do it.
You could do it in a chroot jail and then move the result into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package-manager needs to hold the package access database in memory as well to insure against a race condition there. Again, the chroot jail and copy option is available as a lower risk alternative.
And I started to wonder how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference-counting inodes, so "you" can delete a file you are still using and it's fine. However, this implies a few things you have to do; for instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version of it by then).
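A tiny Python sketch of that inode behaviour, runnable on any Unix-like system:

import os

# Keep a file open, then delete its name. The open descriptor keeps the inode
# alive, so the running process still sees the old contents - which is what
# lets a package manager replace files that are currently in use.
f = open("demo.txt", "w+")
f.write("old version")
f.flush()
os.unlink("demo.txt")  # the directory entry is gone...
f.seek(0)
print(f.read())        # ...but this still prints "old version"
f.close()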
There are also some things that you need to do to make sure that anything you are updating works, like: Put new files down before removing old files. And don't truncate old files, just unlink. But those also help you :).
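In practice, "put new files down before removing old files" usually means writing the replacement next to the old file and renaming it over the top. A minimal Python sketch (the paths are made up):

import os
import tempfile

def install_new_version(target_path, new_contents):
    # Write the new file beside the old one, then rename() it into place.
    # rename() is atomic on POSIX filesystems: readers see either the old file
    # or the new one, never a truncated mixture, and any process still running
    # the old binary keeps its open inode until it exits.
    directory = os.path.dirname(target_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(new_contents)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.chmod(tmp_path, 0o755)
        os.rename(tmp_path, target_path)
    except Exception:
        os.unlink(tmp_path)
        raise

# install_new_version("/usr/local/bin/mytool", open("mytool.new", "rb").read())  # hypothetical paths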
Using external programs, which you communicate with, can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can fairly easily be made to happen before any updates.
There are also things which aren't a concern in command-line clients like yum/apt; for instance, if you have a program which is going to run 2+ "updates" then you can have problems if the first update was to the package manager. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas ... you tend to want to follow this anyway, for other reasons.
I found the Wikipedia entry on the soft coding anti-pattern terse and confusing. So what is soft coding? In what settings is it a bad practice (anti-pattern)? Also, when could it be considered beneficial, and if so, how should it be implemented?
Short answer: going to extremes to avoid hard-coding and ending up with some monstrous, convoluted abstraction layer to maintain that is worse than if the hard-coded values had been there from the start, i.e. over-engineering.
Like:
SpecialFileClass file = new SpecialFileClass( 200 ); // hard coded
SpecialFileClass file = new SpecialFileClass( DBConfig.Start().GetConnection().LookupValue("MaxBufferSizeOfSpecialFile").GetValue());
The main point of The Daily WTF article on soft coding is that, because of premature optimization and fear, a system that is very well defined and contains no duplicated knowledge gets altered and becomes more complex without any need.
The main thing to keep in mind is whether your changes actually improve your system; avoid lightly labeling something an anti-pattern and then avoiding it at all costs. Configuring your system and avoiding hard-coding is a simple cure for duplicated knowledge in your system (see point 11, "DRY: Don't Repeat Yourself", in The Pragmatic Programmer Quick Reference Guide). That is the driving need behind the suggestion to avoid hard-coding: ideally there should be only one place in your system (whether code or configuration) that has to be altered if you change something as simple as an error message.
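For instance, a minimal Python sketch of that "only one place to change" idea (the module and names are made up for illustration):

# messages.py -- the single place where user-facing text lives
ERR_INVALID_ACCOUNT = "The account number you entered is not valid."

# any other module would do: from messages import ERR_INVALID_ACCOUNT
def validate_account(account_number):
    # Every caller reuses the same constant; changing the wording later means
    # editing exactly one line instead of hunting through the code base.
    if not account_number.isdigit():
        raise ValueError(ERR_INVALID_ACCOUNT)
    return account_number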
Ola, a good example of a real project that has the concept of soft-coding built into it is the Django project. Its settings.py file abstracts certain data settings so that you can make the changes there instead of embedding them within your code. You can also add values to that file if necessary and use them where needed.
http://docs.djangoproject.com/en/dev/topics/settings/
Example:
This could be a snippet from the settings.py file (note that Django only picks up settings whose names are all uppercase):
NUM_ROWS = 20
Then within one of your files you could access that value:
from django.conf import settings
...
for x in range(settings.NUM_ROWS):
...
Soft-coding: the process of supplying values to a program from an external source, for example through the keyboard, a command-line interface, or a configuration file. Soft-coding is generally considered good practice because developers can easily modify the program's behavior without changing its source code.
Hard-coding: assigning values directly in the source code, so they are baked into the program's executable. Changing such values then requires modifying the source and rebuilding the program. For example, in blockchain technology the genesis block is hard-coded and cannot be changed or modified.
The ultimate in softcoding:
const float pi = 3.1415; // Don't want to hardcode this everywhere in case we ever need to ship to Indiana.