How to activate dependent objects on a test system? - sap

I have a big model with views and procedures implemented in HANA. These objects are interdependent. On the development system everything works fine. After we transport the objects to the test system using CTS, all of the objects on the test system are marked red. If I run this statement on the test system:
select * from "SYS"."OBJECTS" where OBJECT_NAME like '%<my_package>%';
I got nothing!
Is this a settings problem on the test system? Can I transport many interdependent objects at once?
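A sketch of a couple of diagnostic queries that may narrow this down, assuming the standard _SYS_REPO repository tables (<my_package> stays a placeholder):

-- Did the transported design-time objects arrive, and are they active or still inactive?
select PACKAGE_ID, OBJECT_NAME, OBJECT_SUFFIX
from "_SYS_REPO"."ACTIVE_OBJECT"
where PACKAGE_ID like '%<my_package>%';

select PACKAGE_ID, OBJECT_NAME, OBJECT_SUFFIX
from "_SYS_REPO"."INACTIVE_OBJECT"
where PACKAGE_ID like '%<my_package>%';

If the objects appear only in INACTIVE_OBJECT, the transport delivered them but activation failed, which would match the objects being marked red and missing from SYS.OBJECTS.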

Related

Out of memory exception in executable but not debug mode (or in release mode run through VS)

I am running a program in VB.NET, with VS 2013, in 64-bit architecture, and have enabled gcAllowVeryLargeObjects.
I have a list of objects of a class. The class has various properties that are data, kind of like
Class cMyClass
    Property desc1 As String
    Property desc2 As String
    Property value As Double
End Class
I am populating this list via a read from SQL Server. I can successfully put, in debug or release mode, 100 million objects of this class in the list, and operate on them just fine. At one point, though, I am populating the list with 150 million objects. When I run the program through Visual Studio in debug mode (or even in release mode, but through VS), I have no problems populating the list with 150 million objects. But when I use an executable (compiled from release mode), it throws an error at this point (the error box tells me it is in a particular subroutine where the only thing happening is the filling of this list): "System.OutOfMemoryException: Array dimensions exceeded supported range."
I get that it's bad practice to load so much stuff into memory, but I am already very far down this road and need to solve it just once. I can solve it, clearly, by running the program through VS, but would like to understand why this works for me in VS (in debug mode or release mode) but not when running the executable.
I'll add that I don't think it's a hardware problem. The program is using over 20 GB of memory when running, but it's running on a box with 128 GB of RAM.
Thank you.
Enable gcAllowVeryLargeObjects in your exe.config file (https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/gcallowverylargeobjects-element)
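For reference, the minimal config from the linked Microsoft docs looks like this:

<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>

If the element is present in the configuration used when running under VS but missing from the .exe.config deployed next to the release executable, that would explain the difference described in the question.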
Even when this is active, you still have limits on the number of elements:
4,294,967,295 total elements in a multi-dimensional array
2,146,435,071 maximum index in any single dimension (for non-byte arrays)
2,147,483,591 maximum index for byte arrays and arrays of single-byte structures
Note that, as stated in the comment from Tycobb, gcAllowVeryLargeObjects works at the object level, not at the process level - so your process might use 20 GB of RAM made up of the sum of many objects < 2 GB.

Can't eliminate Access corruption

My firm's Access database has been having some serious problems recently. The errors we're getting seem like they indicate corruption -- here are the most common:
Error accessing file. Network connection may have been lost.
There was an error compiling this function.
No error, Access just crashes completely.
I've noticed that these errors only happen with a compiled database. If I decompile it, it works fine. If I take an uncompiled database and compile it, it works fine -- until the next time I try to open it. It appears that compiling the database into a .ACCDE file solves the problem, which is what I've been doing, but one person has reported that the issue returned for her, which has me very nervous.
I've tried exporting all of the objects in the database to text, starting with a brand new database, and importing them all again, but that doesn't solve the problem. Once I import all of the objects into the clean database, the problem comes back.
One last point that seems to be related, but I don't understand how. The problem started right around the time that I added some class modules to the database. These class modules use the VBA Implements keyword, in an effort to clean up my code by introducing some polymorphism. I don't know why this would cause the problem, but the timing seems to indicate a relationship.
I've been searching for an explanation, but haven't found one yet. Does anyone have any suggestions?
EDIT: The database includes a few references in addition to the standard ones:
Microsoft ActiveX Data Objects 2.8
Microsoft Office 12.0
Microsoft Scripting Runtime
Microsoft VBScript Regular Expressions 5.5
Some of the things I do and use when debugging Access:
Test my app in a number of VMs. You can use Hyper-V on Win8, VMware or VirtualBox to set up various controlled test environments, e.g. testing on WinXP, Win7, Win8, 32-bit or 64-bit - anything that matches the range of OS and bitness of your users.
I use vbWatchDog, a clever utility that only adds a few classes to your application (no external dependency) and allows you to trap errors at a high level, showing you exactly where they happen. This is invaluable for catching and recording strange errors, especially in the field.
If the issue appears isolated to only one or a few users, I would try to find out what is special about their config. If nothing seems out of place, I would completely uninstall all Office components and re-install them after scrubbing the registry for dangling keys and removing all traces of folders from the old install.
If your users do not need a complete version of Access, just use the free Access Runtime on their machine.
Make sure that you are using consistent versions of Access throughout: if you are using Access 2007, make sure your dev machine is also using that version, that all other users are only using that version, and that no components from Access 2010/2013 are present.
Try to ascertain whether the crash always happens around the same user actions. I use a simple log file that I write to when a debugging flag is set. The log file is a plain text file that I open/write to/close every time I log something (I don't keep it open, to make sure the data is flushed to the file; otherwise, when Access crashes, you may only have old data in the log file while the newest entries are still in the buffer). Things I log are, for instance, sensitive function entry/exit, SQL queries that I execute from code, form open/close, etc. (there is a small sketch of such a logger after this list).
As a generality, make sure your app compiles without issue (I mean when doing Debug > Compile from the IDE). Any issue at this stage must be solved.
Make absolutely sure you close all open recordsets, preferably followed by setting their variables to Nothing. VBA is not as sensitive as it used to be about dangling references, but I find it good practice, especially when you cannot be absolutely sure that these references will be freed (particularly when doing things at module level or class level, where the scope may be longer-lived than expected).
Similarly, make sure you properly destroy any COM object you create in your classes (and subs/functions). The Class_Terminate destructor must explicitly clean them all up. This is also valid when closing forms if you created COM objects (you mentioned using ADO, scripting objects and regex). In general, keeping track of created objects is paramount: make sure you explicitly free all your objects by resetting them (for instance using RemoveAll on a Dictionary) and then assigning their references to Nothing.
Do not over-use On Error Resume Next or On Error GoTo. I almost never use these except when absolutely necessary to recover from otherwise undetectable errors. These error-trapping constructs can hide a lot of errors that would otherwise show you that something is wrong with your code. I prefer to program defensively rather than having to handle exceptions.
For testing, disable your error trapping to see if it isn't hiding the cause of your crashes.
Make sure that the front-end is local to the user's machine. You mention they get their individual front-end from the network, but I'm not sure if they run it from there or if it is copied to their local machine. At any rate, it should be local, not in a remote folder.
You mention using SQL Server as a backend. Try to trace all the queries being executed. It's possible that the issue comes from communication with SQL Server, a corrupt driver, a security issue that prevents some queries from being run, a query returning unexpected data, etc. Watch the log files and event log on the server closely for strange errors, especially if they involve security.
Speaking of the event log, look for a trace of the crash in the event log on your users' machines. There may be information there, however cryptic.
If you use custom ribbon actions, make sure they are not causing issues. I have had strange problems over time with the ribbon. Log all function calls made by the ribbon.
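As an illustration of the logging point above, a minimal sketch of an open/write/close logger in VBA (the flag name and path are just examples):

Public Const LOGGING_ON As Boolean = True    ' the debugging flag

Public Sub LogMsg(ByVal msg As String)
    If Not LOGGING_ON Then Exit Sub
    Dim f As Integer
    f = FreeFile
    ' Open/write/close on every call so each line is flushed to disk
    ' even if Access crashes immediately afterwards.
    Open "C:\logs\frontend.log" For Append As #f
    Print #f, Format$(Now, "yyyy-mm-dd hh:nn:ss") & " " & msg
    Close #f
End Sub

Call it at function entry/exit, before each SQL statement you run from code, on form open/close, and so on.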

Is there an easy way to determine which parts of a vb.net project are still used?

I maintain an old vb.net project that I didn't create, and I was wondering if there's an easy way to determine which parts of the software are still used today by the staff where I work.
I would like to log all function calls without having to edit each one of them if possible.
The project has 27 forms and 6 modules.
Any ideas?
Thanks!
There is no way to determine with 100% certainty everything that is used by the system. Vb.Net supports dynamic invocation of methods/properties, hence you can't even do tricks like deleting some code and seeing if it still compiles. Even if it compiles, the code could be invoked dynamically.
One way to get a sense of what code is used is to profile the application: start up the profiler, run the app, and go through all of the ways in which the app is used. The resulting profile should give you a good sense of what parts are used. It's very possible, though, that this approach will miss code.
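To illustrate the dynamic-invocation caveat, here is a small sketch (the class and method names are hypothetical): the method name below exists only as a string, so neither static analysis nor a delete-and-recompile test would flag it as used.

Module Demo
    Class LegacyHelper
        Sub DoWork()
            Console.WriteLine("still in use")
        End Sub
    End Class

    Sub Main()
        Dim h As New LegacyHelper()
        ' The compiler sees only a string here, not a call to DoWork:
        CallByName(h, "DoWork", CallType.Method)
    End Sub
End Module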

Simulating multiple instances of an embedded processor

I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past, with projects that only entailed a single embedded processor, was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like unsigned char LATA @ 0xF89; and my header file for simulation includes #define LATA _IOBIT(f89,1), which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like LATA |= 4; will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
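As a hedged sketch of that register-simulation idea (SimRegister and the layout here are illustrative, not the actual framework): a C++ object with overloaded operators can stand in for the memory-mapped latch, so target-code statements compile unchanged.

#include <cstdio>

struct SimRegister {
    unsigned char value = 0;
    // Reads go through the simulator, so controlled hiccups can be injected here.
    operator unsigned char() const { return value; }
    // Writes likewise: LATA |= 4 becomes a read, an OR, and a write on this object.
    SimRegister& operator|=(unsigned char v) { value |= v; return *this; }
};

static SimRegister sim_lata;    // stands in for the latch at 0xF89
#define LATA sim_lata           // target code keeps using plain LATA

int main() {
    LATA |= 4;
    std::printf("LATA = %u\n", static_cast<unsigned>(sim_lata.value));
}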
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with one instance of Visual Studio, and use a second instance of Visual Studio for the simulation UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I'd have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLLs in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here. (A sketch of one possibility follows below.)
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
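On the multiple-DLL-instances idea above: Windows maps a DLL's global data once per loaded path, so one workable (if inelegant) trick is to make N renamed copies of the same DLL and load each copy separately, giving each its own set of globals. A sketch under that assumption, with hypothetical file and export names:

#include <windows.h>
#include <stdio.h>

typedef void (*step_fn)(void);

int main(void) {
    HMODULE sim[16];
    step_fn step[16];
    char name[MAX_PATH];
    int i, cycle;
    for (i = 0; i < 16; i++) {
        sprintf(name, "target_%02d.dll", i);
        CopyFileA("target.dll", name, FALSE);   /* one copy per simulated device     */
        sim[i] = LoadLibraryA(name);            /* distinct path => distinct globals */
        step[i] = sim[i] ? (step_fn)GetProcAddress(sim[i], "sim_step") : NULL;
    }
    /* Cooperative multitasking: step each simulated device in turn. */
    for (cycle = 0; cycle < 1000; cycle++)
        for (i = 0; i < 16; i++)
            if (step[i]) step[i]();
    return 0;
}

Because everything runs in one process, a single debugger instance can see all the instances, though symbol handling for the renamed copies can get awkward.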
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way: there is usually a structure that contains a complete set of registers, and a pointer to (or an array of) these structures is used to multiply them into multiple instances of the processor.
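A minimal sketch of that structure-per-instance idea (the register names are illustrative):

#include <stdint.h>

#define NUM_DEVICES 16

typedef struct {
    uint8_t lata;     /* simulated LATA latch              */
    uint8_t porta;    /* simulated PORTA input             */
    /* ...the rest of the registers and former globals...  */
} device_state_t;

static device_state_t devices[NUM_DEVICES];

/* Each TCP connection (or cooperative task) owns one index, so a
   "read port 0xF89" command is just a lookup on that instance. */
static uint8_t read_lata(int dev)             { return devices[dev].lata; }
static void    write_lata(int dev, uint8_t v) { devices[dev].lata = v; }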

How to get rid of unmanaged code in VW 3.1d and ENVY

I have an old VW3/ENVY image with a parcel loaded as unmanaged code (exactly the situation Mastering ENVY/DEVELOPER warns against). Unfortunately, this problem happened a long time ago and it's too late to just "go back" to an image without the parcel loaded.
Apparently, there is a way to solve this problem (we have one development image where this has been solved, and there are normal configuration maps that contain the exact same code as the unmanaged parcel, but they can't be loaded), but the exact way has long since been forgotten (and there are some problems with taking that particular dev image as the base for a new runtime image, so I need to find out how to do it again).
In theory, it should be possible to remove the parcel and reload the code from a configuration map. In practice, all normal ways (using the ParcelBrowser or directly calling UnmanagedCode>>remove) fail. I even tried manually removing the offending selectors from the method dictionary, but past a certain point (involving a call to #primBecome:) the whole image hangs completely (I can't even drop into the debugger). I started hacking the instances of the classes and methods, hoping I'd trick ENVY into thinking that these particular methods are normal versioned code, but without any success yet.
Are there any smalltalk/envy gurus around that still remember enough of VW 3 to provide me with any pointers?
Status update
After a week of trying to solve the problem I finally made it, at least partially, so in case anyone's interested...
First, I had to fix file pointers for the unmanaged code (otherwise, everything that tried to touch the methods would throw an exception). It looks like ENVY extends Parcel so that, in theory, all integer file pointers are changed to ENVY's void file pointer when loaded, but in my case I had to do it manually (a Parcel provides enumeration for all selectors it defines). Another way would be to tweak the filePointer code, but that can't easily be done automatically on every image where it's needed.
Then, the parcel can be discarded, which drops the parcel information but keeps the code. The official "Discard" mechanism needs a valid changes file (which ENVY doesn't use, so it has to be set manually and reset afterwards) and the parcel source (which we fortunately had).
To be able to make any changes to the methods (either manually, or via loading an application or class from ENVY), they need to lose their unmanaged status. This can be done by manually tweaking TheClass>>applicationAssocs (I also got rid of all references to the classes in UnmanagedCode, such as timestamps, and removed the reference to the discarded parcel). I actually had some info on how to get to this point from my boss, but I wasn't able to understand the instructions until I had almost figured it out by myself.
This finally allowed me to load and reload all the Applications that contained the classes. In theory. In practice, the image still hung completely whenever I tried to load a newer version of the Application (that contained the code formerly in the parcel).
It turned out that the crashes had absolutely nothing to do with the code being unmanaged, but with the fact that the parcel in question modified InputState>>process:, where it caused an exception due to a missing and/or uninitialized class variable (the InputState>>initialize method wasn't called until after the new process: method was in place). I had to modify the Notifier class to dump all exceptions to a file to find out what was going on. Adding the class variable to the source of the class (instead of adding it via reflection), suspending the input processing thread via toBeLoadedCode and starting it again in the loaded method and creating a new version of the application solved even this problem.
Now everything works, in theory. In practice it's still unusable, because reloading the WindowSystem or VisualworksBase applications causes their initialization blocks to run, and a whole lot of settings are reset to their defaults - fonts and font sizes, window colors, UI settings... And there doesn't seem to be any way to just save the settings to a file and load them later on, or just to see what all the settings are (either the official Settings menu doesn't show everything, or we have a heavily tweaked image... so much for reconstructing it from scratch). But that's a completely different question.
Well, normally the recommendation would be that you should be able to rebuild your development image from scratch by loading your code from the repository. But if you had that, then the answer would be simple: just discard that image and reload. I think it's been long enough that I've lost whatever knowledge I had about how to mess with the internal structures to get it back, and it sounds like you've tried a lot of things. So, although it might be painful, figuring out the recipe to rebuild your development image by loading things from the repository sounds like it may be your best bet. It probably isn't all that horrible; there just might be a few dependencies on the image state, or special doits that need to be executed.
You also probably need to validate what's in the repository against what's in the image you're working from. If there was unmanaged code loaded and then someone modified it and saved it, it's not clear to me that it would have been saved to ENVY. So you probably want to audit everything that was unmanaged code and if it's been changed, save that to a repository edition.
Sorry I don't have any better answers.