What is an environment variable in an operating system vs. an IDE?

I'm confused about how to think of environment variables. Is an environment variable like a global variable for the operating system? And is a global for an IDE just a reference to something the IDE needs to be able to allocate? Is this the correct idea to have?

At execution time, each process has an image: a set of variables in memory that can be accessed, read, and modified at any time. It defines the state of the process at a certain point in time.
Some of these variables are created by the process itself for its internal logic, and others are inherited from its parent process (usually the shell used to launch it). The inherited ones usually describe a certain state of the system; they are called environment variables. You can use them to define things like:
- location of certain executables on your computer (like java or python);
- location of some shared libraries (or DLLs if you're on Windows);
- location of the database you're querying;
- permissions and user access (although that's not the safest place to define them).
The point is, you can define anything in your environment, and every process launched with this environment will inherit these variables (see the sketch below).
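As a small illustration in plain C: PATH is inherited from the parent process, and MYAPP_DB is a made-up variable a parent might have exported beforehand (e.g. `export MYAPP_DB=/srv/app.db` in the shell):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* PATH is inherited from the parent process (usually your shell). */
    const char *path = getenv("PATH");
    printf("PATH = %s\n", path ? path : "(not set)");

    /* MYAPP_DB is a hypothetical variable the parent may have exported. */
    const char *db = getenv("MYAPP_DB");
    printf("MYAPP_DB = %s\n", db ? db : "(not set)");
    return 0;
}
```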
I don't know what you mean by the IDE, and quite frankly I'm not sure I completely understood your question. But since it's been a while and no one has answered, I'm hoping this can give you the beginning of an answer.

Related

Is there a way to get/NSLog an array of all local variables in a function in Objective-C?

When attached to the debugger via Xcode, LLDB provides a useful view of local variables (the bottom left of the screenshot).
I found the LLDB command `frame variable` (and gdb's `info locals`) that provides a list of the local variables (as seen on the right side of the screenshot above).
My hope is that this functionality can be performed on the device at runtime. For example, I can access the stack trace using backtrace_symbols(), the current selector via _cmd, and a few others.
Has anyone had experience in this area? Thanks in advance.
Xcode/LLDB can show you this information because they have access to debug information in the binary, called a symbol table, which helps them understand which memory locations correspond to which names in your source code. This is all outside the Objective-C runtime, and there's no interface in the runtime to get at it.
There's another reason why this won't work, though. When you're building code to run in the debugger, compiler optimizations are turned off, so all the variables you reference in your code are there.
When you build for release, though, the compiler optimizations generally get in there and rearrange all your carefully named local variables to make things run faster. They might never even get stored in memory, just in CPU registers. Or they might not exist at all, if the optimizer can prove to itself that it doesn't need them.
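What is reliably available on-device, as the question already hints, is the call stack rather than the locals. A minimal sketch using the standard execinfo API (an illustration, not the asker's code):

```c
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* Print the current call stack; works at runtime, no debugger attached. */
static void dump_stack(void) {
    void *frames[64];
    int count = backtrace(frames, 64);
    char **symbols = backtrace_symbols(frames, count);
    if (symbols) {
        for (int i = 0; i < count; i++)
            printf("%s\n", symbols[i]);
        free(symbols);  /* backtrace_symbols allocates one block */
    }
}
```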
My advice is to think again about the larger problem you're trying to solve...

Can't eliminate Access corruption

My firm's Access database has been having some serious problems recently. The errors we're getting seem like they indicate corruption -- here are the most common:
Error accessing file. Network connection may have been lost.
There was an error compiling this function.
No error, Access just crashes completely.
I've noticed that these errors only happen with a compiled database. If I decompile it, it works fine. If I take an uncompiled database and compile it, it works fine -- until the next time I try to open it. It appears that compiling the database into a .ACCDE file solves the problem, which is what I've been doing, but one person has reported that the issue returned for her, which has me very nervous.
I've tried exporting all of the objects in the database to text, starting with a brand new database, and importing them all again, but that doesn't solve the problem. Once I import all of the objects into the clean database, the problem comes back.
One last point seems to be related, although I don't understand how. The problem started right around the time that I added some class modules to the database. These class modules use the VBA Implements keyword, in an effort to clean up my code by introducing some polymorphism. I don't know why this would cause the problem, but the timing seems to indicate a relationship.
I've been searching for an explanation, but haven't found one yet. Does anyone have any suggestions?
EDIT: The database includes a few references in addition to the standard ones:
Microsoft ActiveX Data Objects 2.8
Microsoft Office 12.0
Microsoft Scripting Runtime
Microsoft VBScript Regular Expressions 5.5
Some of the things I do and use when debugging Access:
Test my app in a number of VMs. You can use Hyper-V on Win8, VMware, or VirtualBox to set up various controlled test environments, like testing on WinXP, Win7, Win8, 32-bit or 64-bit: anything that matches the range of OS versions and bitness of your users.
I use vbWatchDog, a clever utility that only adds a few classes to your application (no external dependency) and allows you to trap errors at a high level and see exactly where they happen. This is invaluable for catching and recording strange errors, especially in the field.
If the issue appears isolated to one or a few users only, I would try to find out what is special about their configuration. If nothing seems out of place, I would completely uninstall all Office components and re-install after scrubbing the registry for dangling keys and removing all traces of folders from the old install.
If your users do not need a complete version of Access, just use the free Access Runtime on their machine.
Make sure that you are using consistent versions of Access throughout: if you are using Access 2007, make sure your dev machine is also using that version and that all other users are also only using that version and that no components from Access 2010/2013 are present.
Try to ascertain whether the crash always happens around the same user actions. I use a simple log file that I write to when a debugging flag is set. The log file is a plain text file that I open, write to, and close every time I log something (I don't keep it open, to make sure the data is flushed to the file; otherwise, when Access crashes, the log file may contain only old data because the newest entries may still be in the buffer). Things I log are, for instance, sensitive function entry/exit, SQL queries that I execute from code, form open/close, etc.
As a generality, make sure your app compiles without issue (I mean when doing Debug > Compile from the IDE). Any issue at this stage must be solved.
Make absolutely sure you close all open recordsets, preferably followed by setting their variables to Nothing. VBA is not as sensitive as it used to be about dangling references, but I find it good practice, especially when you cannot be absolutely sure that these references will be freed (for instance when doing stuff at module level or class level, where the scope may be longer-lived than expected).
Similarly, make sure you properly destroy any COM object you create in your classes (and subs/functions). The Class_Terminate destructor must explicitly clean them all up. This also applies when closing forms if you created COM objects (you mentioned using ADO, scripting objects, and regex). In general, keeping track of created objects is paramount: make sure you explicitly free all your objects by resetting them (for instance using RemoveAll on a dictionary), then assigning their references to Nothing.
Do not over-use On Error Resume Next or On Error GoTo. I almost never use these except when absolutely necessary to recover from otherwise undetectable errors. These error-trapping constructs can hide a lot of errors that would otherwise show you that something is wrong with your code. I prefer to program defensively rather than handle exceptions.
For testing, disable your error trapping to see if it isn't hiding the cause of your crashes.
Make sure that the front-end is local to the user's machine. You mention they get their individual front-end from the network, but I'm not sure whether they run it from there or whether it is copied to their local machine. At any rate, it should be local, not in a remote folder.
You mention using SQL Server as a backend. Try to trace all the queries being executed. It's possible that the issue comes from communication with SQL Server, a corrupt driver, a security issue that prevents some queries from being run, a query returning unexpected data, etc. Watch the log files and event log on the server closely for strange errors, especially if they involve security.
Speaking of event logs, look for a trace of the crash in your users' event logs. There may be information there, however cryptic.
If you use custom ribbon actions, make sure they are not causing issues. I have had strange problems over time with the ribbon. Log all function calls made by the ribbon.

How does an OS X launch agent persist settings/state?

I'm writing an OS X launch agent (which watches, as it happens, for FSEvents); it therefore has no UI and is not started from a bundle – it's just a program.
The relevant documentation and sample code illustrate persisting FS event IDs between invocations, and do so using NSUserDefaults. This is clearly the Right Thing To Do(TM).
The documentation for NSUserDefaults in the Preferences and Settings Programming Guide would appear to be the appropriate thing to read. This shows only the Application and Global domains as being persistent, but (fairly obviously) only the Application domain is writable for an application. However, the preferences in the application domain are keyed on the ApplicationBundleIdentifier, which a launch agent won't have. So I'm at a loss as to how such an agent should persist state.
All I can think of is that the Label in the launchd job could act as the ApplicationBundleIdentifier – it at least has the correct form. But I can't see any hint in the documentation that that's correct.
The obvious (Unix-normal) thing to do would be to write to a dot-file in $HOME, but that's presumably not the Cocoa Way. Google searches on 'osx daemon preferences' and the like don't turn up anything useful, or else my google-fu is sadly lacking today. Googling for 'set application bundle identifier' doesn't turn up anything likely, either.
NSUserDefaults:persistentDomainForName looks like it should be relevant, but I can't work out its intentions from its method documentation. I've found one question here which seems relevant, but while it's tantalisingly close, it doesn't say where the daemon gets its identifier from.
I have limited experience with Objective-C and Cocoa, which means that by now I rather suspect I'm barking up the wrong tree, but I don't really know where to look next.
You can (and IMO should) have an Info.plist even in a single-file executable (see http://www.red-sweater.com/blog/2083/the-power-of-plist).
However, NSUserDefaults is a little more questionable. Conceptually, it's intended for user settings rather than internal state. That said, there's no real reason it wouldn't be suited to this, so I'd probably go ahead and use it.
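If you'd rather stay at the C level, one hedged sketch is the CFPreferences API, which takes an arbitrary application ID string and so doesn't require a bundle; the identifier below is hypothetical, reusing the launchd job Label as the preferences domain:

```c
#include <CoreFoundation/CoreFoundation.h>

/* Hypothetical domain: the agent's launchd job Label. */
static const CFStringRef kAgentID = CFSTR("org.example.fsevents-agent");

static void save_last_event_id(long long eventId) {
    CFNumberRef value = CFNumberCreate(NULL, kCFNumberLongLongType, &eventId);
    CFPreferencesSetAppValue(CFSTR("LastEventId"), value, kAgentID);
    CFPreferencesAppSynchronize(kAgentID);  /* flush to disk now */
    CFRelease(value);
}
```

This writes a plist under ~/Library/Preferences keyed by whatever identifier you choose, which is the same storage NSUserDefaults uses for its persistent domains.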

Creating a process without inheriting the parent process's environment variables

I have a program that has to start another process, but the child process should not inherit the environment from the parent process; i.e. it should be launched as if I had launched the program from Explorer. While searching, I found the exec*() functions, which let you pass in an array of strings as environment variables. But this process is cumbersome, as I need to strip my program-specific environment variables out of the list and send the rest to the child. Is there any other way of achieving this?
The only way to achieve this is the one you feel is cumbersome. It's really not that difficult, though; it's just some mindless string manipulation, as the sketch below shows.
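As a rough POSIX sketch (the target program and the trimmed-down environment below are made up for illustration), you build the array you want the child to see and hand it to execve:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical minimal environment: anything not listed here,
       including our program-specific variables, is not inherited. */
    char *child_env[] = {
        "PATH=/usr/bin:/bin",
        "HOME=/home/user",
        NULL
    };
    char *child_argv[] = { "/usr/bin/env", NULL };

    /* /usr/bin/env with no arguments prints its environment,
       which makes it a convenient test target. */
    execve("/usr/bin/env", child_argv, child_env);
    perror("execve");  /* reached only if execve failed */
    return 1;
}
```

On Windows, the equivalent is passing an explicit environment block via the lpEnvironment parameter of CreateProcess.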

Simulating multiple instances of an embedded processor

I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that entailed only a single embedded processor was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build and which runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like `unsigned char LATA # 0xF89;` and my header file for simulation includes `#define LATA _IOBIT(f89,1)`, which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like `LATA |= 4;` will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with instance of Visual Studio, and use a second instance of Visual Studio for the simulation-UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I'd have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that, e.g., if I want to run up to 16 instances, I would include 16 DLLs in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way: there is usually a structure that contains a complete set of registers, and a pointer or an array of these structures is used to multiply them into multiple instances of the processor, as in the sketch below.
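A minimal sketch of that layout (the register names echo those used earlier in the question; the fields and the instance count are made up). Note that this simple macro aliasing loses the read/write interception hooks the question describes, but it shows how each instance gets its own set of "globals":

```c
/* One block of per-device state; which block is live is chosen per
   TCP connection or per cooperative-multitasking slot. */
typedef struct {
    unsigned char LATA;   /* simulated latch (0xF89 in the example above) */
    unsigned char TRISA;  /* hypothetical direction register */
    /* ...the rest of the registers and former globals... */
} DeviceState;

static DeviceState devices[16];              /* 16 simulated targets */
static DeviceState *current = &devices[0];   /* instance executing now */

/* Unmodified target source that says `LATA |= 4;` now touches only
   the currently selected device. */
#define LATA (current->LATA)
```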