I've seen this question pop up a few times in forums, but I have yet to see an answer. I'm trying to create a variable of a hashtable in one task of an agent phase and pass it to another with powershell. Specifically I'm trying to build a hashtable variable in one task and pass it to an ARM Template in the next as a secureobject parameter.
The only "answer" I've see is to use the write-output method like this:
Write-Output "##vso[task.setvariable variable=MyVariable]$VariableValue"
That seems to work just fine if I'm trying to create a STRING variable, but I cannot seem to get it to work for an object, specifically a hashtable. I have, however, been very successful in creating a variable $(MyHashtable) with a value of "System.Collections.Hashtable"; sadly, that is not my goal.
Any help would be very much appreciated! Thanks!
Variables are strings, not objects, end of story. You can't pass an object between phases. Phases may run on different agents, and the agents may be on totally different operating systems with different software configurations. Strings are guaranteed to be portable between them.
This means you need to serialize and deserialize non-string values in a portable format, such as JSON. In PowerShell, ConvertTo-Json -Compress is what you're after.
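For example, here's a minimal sketch (the variable names and values are just illustrative):

    # Task 1: build the hashtable, serialize it to one compact JSON string, publish it
    $config = @{ adminUser = "svc-deploy"; retryCount = 3 }
    $json = $config | ConvertTo-Json -Compress
    Write-Output "##vso[task.setvariable variable=MyHashtable]$json"

    # Task 2: the pipeline expands $(MyHashtable) to that JSON string; deserialize it
    $config = '$(MyHashtable)' | ConvertFrom-Json          # PSCustomObject on PowerShell 5
    # On PowerShell 6+ you can rebuild an actual hashtable:
    # $config = '$(MyHashtable)' | ConvertFrom-Json -AsHashtable

How you then feed the result to the ARM deployment (e.g. as an override parameter) depends on the task you're using, but the key point is that only the JSON string crosses the task/phase boundary.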
Recently, I've been trying to write to a .PAK file while it is being used by another process in ring 0. This has been a problem for quite a while and I haven't had much success. I am able to use any programming language necessary to accomplish this, but C#/VB.net is preferred. I originally wanted to use a find-and-replace system when editing, but I will just choose an offset to write to instead.
No, I can't just terminate the process then edit; the process must be running. Yes, I obviously know the process with the file handle attached.
No, I can't just run as admin because the process is established in ring 0/the kernel.
I've tried multiple methods, including temporarily setting the process speed to 0 to edit and then reverting, and changing the FileShare and other parameters, none with any success.
One approach which I have been told about a lot, and which I have no experience in, is creating a "Kernel Driver". I'm not sure how to go about this and I can't find much info online, so if you think that is the best method, please tell me how to get started. Any help is appreciated!
Always work on a temporary file (a copy of your original file). If you need to process a file in your code, create a temp copy and process that instead; then the other process that has the original file open is no longer a problem.
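Since C# was mentioned as the preferred language, a rough sketch of that idea might look like this (the path is hypothetical, and whether the read succeeds at all depends on how the other process opened the file):

    using System.IO;

    // Copy the in-use file to a temp location and work on the copy.
    // Reading the original only works if the owning process opened it
    // with sharing flags that still permit read access.
    string source = @"C:\game\content.pak";                       // hypothetical path
    string temp = Path.Combine(Path.GetTempPath(), "content_copy.pak");

    using (var input = new FileStream(source, FileMode.Open, FileAccess.Read,
                                      FileShare.ReadWrite | FileShare.Delete))
    using (var output = new FileStream(temp, FileMode.Create, FileAccess.Write))
    {
        input.CopyTo(output);
    }
    // ... edit the temp copy at whatever offset you need ...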
I'm trying to pass data that is continuously changing from inside one While loop to inside another While loop in a sub-VI. The main program on the left is constantly reading new data, and the program on the right is adding 1 to the new value. My issue is that I cannot feed new values into a While loop that is already running, and thus my sub-VI is never updated. I've tried a global variable ("write" from the main program control and then "read" into the sub-VI), but that doesn't work either (same result as if the main were just passing data into the sub).
I apparently don't have enough reputation to post a picture of my program but I'm basically trying to run parallel loops (almost inside each other). Can anyone lend me an experienced hand?
The most common problems with while loops come from a lack of understanding of how exactly a while loop works in LabVIEW.
First of all, data is passed out of the loop only when the conditional terminal (in the bottom-right corner of the loop) is flagged as true.
If you want to pass the data earlier (while the loop is running), the easiest options are:
Use a queue (this is the most common approach and works well). I can elaborate on how it works in practice if you want, or you can just try running an example from the LabVIEW help.
Use local/shared variables: you can define variables in your own library and pass the data via READ/WRITE operations.
Please try to upload some documentation to an external server (as you are blocked here), and post a link, and then I could help you with a specific example.
Help » Find Examples. Search for "queue". Pick out an example with parallel loops.
You might want to look into Queues or Notifiers as means of passing data between running loops.
I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that only entailed a single embedded processor was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like unsigned char LATA @ 0xF89; and my header file for simulation includes #define LATA _IOBIT(f89,1) which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like LATA |= 4; will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
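To give a rough idea of what that looks like (the exact _IOBIT implementation differs, but the shape is roughly this):

    // Simulation-side header (C++ only): register accesses are routed through
    // simulated hardware state instead of a real memory-mapped address.
    #include <map>

    static std::map<unsigned, unsigned char> g_registers;   // simulated register file
    static unsigned char sim_read_port(unsigned addr)            { return g_registers[addr]; }
    static void sim_write_port(unsigned addr, unsigned char val) { g_registers[addr] = val; }

    struct IoPort {
        unsigned addr;
        operator unsigned char() const      { return sim_read_port(addr); }
        IoPort& operator=(unsigned char v)  { sim_write_port(addr, v); return *this; }
        IoPort& operator|=(unsigned char v) { sim_write_port(addr, sim_read_port(addr) | v); return *this; }
    };

    static IoPort& sim_port(unsigned addr) {                 // one IoPort object per address
        static std::map<unsigned, IoPort> ports;
        IoPort& p = ports[addr];
        p.addr = addr;
        return p;
    }

    #define _IOBIT(addr, bits) (sim_port(0x##addr))
    #define LATA _IOBIT(f89, 1)
    // Target code such as LATA |= 4; now reads the simulated latch, "or"s in 4, and writes it back.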
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with one instance of Visual Studio, and use a second instance of Visual Studio for the simulation UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I'd have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members (a rough sketch of what I mean appears after this list of approaches). I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLLs in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
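For reference, the kind of thing I have in mind with the first approach is roughly this (names are made up):

    // Every target-side global becomes a member; each simulated device is one object.
    class SimulatedDevice {
    public:
        unsigned char LATA = 0;      // was a global register latch on the target
        int sampleCount = 0;         // was a global counter on the target

        void mainLoopIteration() {   // the target's main loop body, now a member function
            LATA |= 4;
            ++sampleCount;
        }
    };

    SimulatedDevice devices[16];     // sixteen simulated targets in one process

    // Cooperative multitasking: just run each instance in turn.
    void runOneRound() {
        for (auto& d : devices) d.mainLoopIteration();
    }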
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of an instruction set/processor are designed that way. There is usually a structure that contains a complete set of registers, and a pointer to (or an array of) these structures is used to multiply them into multiple instances of the processor.
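As a minimal sketch of that layout (the register names and the 0xF89 address are just placeholders borrowed from the question):

    /* One complete set of simulated registers / per-device state. */
    typedef struct {
        unsigned char lata;          /* simulated port latch         */
        unsigned char trisa;         /* simulated direction register */
        unsigned long cycle_count;   /* any other per-device state   */
    } device_state_t;

    #define NUM_DEVICES 16
    static device_state_t devices[NUM_DEVICES];

    /* Each TCP connection (or thread) is handed a pointer to "its" device,
       so a "read port 0xF89" command is answered from that instance's state. */
    static unsigned char read_port(device_state_t *dev, unsigned addr)
    {
        switch (addr) {
            case 0xF89: return dev->lata;
            default:    return 0;
        }
    }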
I'm getting confused about how to think of an environment variable. Is it like a global variable for the operating system? And is a global for an IDE just a reference to something the IDE needs to be able to allocate? Is that the correct idea to have?
At execution time, each process has an image. It consists of a set of variables in memory that can be accessed, read, and modified at any time, and it defines the state of the process at a given point in time.
Some of these variables are created by the process itself for its internal logic, and others are inherited from its parent process (usually the shell used to launch it). The latter usually describe a certain state of the system. They are called environment variables. You can use them to define things like:
- the location of certain executables on your computer (like java or python).
- the location of some shared libraries (or DLLs if you're on Windows).
- the location of the database you're querying.
- permissions and user access (although that's not the safest place to define them).
The point is, you can define anything in your environment, and every process launched with this environment will inherit these variables.
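For example, a small C program can read a variable that it inherited from the shell that launched it (PATH here is just an example):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* getenv looks the name up in the environment inherited from the parent process */
        const char *path = getenv("PATH");
        if (path != NULL)
            printf("PATH = %s\n", path);
        else
            printf("PATH is not set\n");
        return 0;
    }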
I don't know what you mean about the IDE, and quite frankly I'm not sure I completely understood your question. But since it's been a while and no one has answered, I hope this gives you at least the beginning of an answer.