How to communicate between LabVIEW and DM software - dm-script

Hello, I need to use DM software to analyse a txt file and extract numbers. Each number is sent to LabVIEW software, which controls the moving stage. LabVIEW then signals that it is done, and DM takes a picture and saves the files. How can this be done? I found a few samples with DM script. Please point me in a direction. Thanks

If speed is no issue, you could make LabVIEW save an empty .txt file. Your DM software could check whether the file exists and take a picture as soon as it does. Of course better, faster/safer methods exist, but I don't know how versatile your DM software is. A virtual COM port, for example, or ActiveX; there are many options to make two programs communicate with each other.

There are not a lot of 'outward' or 'inward' communication possibilities in current DigitalMicrograph, and some options are only available in later GMS versions.
I also don't know the options LabVIEW has, so you will need to find out what works and what doesn't. Suggestions are:
If you are using GMS 2.3 or later, you can use the command LaunchExternalProcess() to start any routine from within DigitalMicrograph, the same way you would from the command prompt.
If LabVIEW allows some functionality to be triggered by calling it with parameters from the command prompt, this might be the easiest option. The DM-script will continue either when the launched process has finished or after a specified time-out.
If you are using GMS 3.1 or later, you can do the opposite and have an outside program call DigitalMicrograph.exe with a command-line parameter to trigger the start of a DM-script.
Essentially, this is the reverse of the first suggestion: LabVIEW would need to "call" DigitalMicrograph whenever it wants the next action performed. I do not know LabVIEW well enough to judge whether this is a possibility.
There are script commands for serial communication via the COM port (RS232), provided your installation has the SerialControl.dll in the plugins folder.
If LabVIEW supports this, you may be able to establish the inter-program communication this way. The serial-communication script calls are not officially supported, but the commands are rather self-explanatory:
Number SPOpen( Number port, Number baud, Number stop, Number parity, Number data )
Number SPOpen( String prefix )
void SPClose( Number serialPortL )
Number SPSendString( Number serialPortL, String string )
Number SPSendHex( Number serialPortL, String string )
void SPFlushInput( Number serialPortL )
Number SPGetPendingBytes( Number serialPortL )
Number SPGetTime( )
String SPReceiveString( Number serialPortL, Number maxLength, NumberVariable actual )
String SPReceiveHexString( Number serialPortL, Number maxLength, NumberVariable actual )
void SPSetRTS( Number serialPortL, Boolean on )
void SPSetDTR( Number serialPortL, Boolean on )
You can also establish 'communication' with a workaround, as suggested by Gelliant in his answer. A DM-script can 'monitor' a specific folder on the hard drive and trigger some action whenever a (specific) file in this folder gets created or modified.
If LabVIEW is capable of something similar, this "write-to-disk" and "watch-for-change" method can be used to keep the two programs synchronized with each other.
If LabVIEW does not support this directly, you may be able to achieve a similar "hacked" synchronization by using a third-party 'scripting' language for the system as a whole. I've personally used a program called AutoIt in the past to synchronize otherwise incompatible software to control hardware.
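For reference, the polling side of such a watch-for-change scheme is tiny in any language. A C++17 sketch (the file name is just an example; the DM-script side would do the equivalent with its own file commands):

```cpp
#include <chrono>
#include <filesystem>
#include <thread>

// block until the other program drops the trigger file, then consume it
void wait_for_trigger(const std::filesystem::path& flag)
{
    while (!std::filesystem::exists(flag))
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::filesystem::remove(flag);   // reset the "flag" for the next round
}

int main()
{
    wait_for_trigger("stage_done.txt");
    // ... take the picture / trigger the next action here ...
}
```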
If you know C++ programming, you can get the Software Development Kit (SDK) for DigitalMicrograph and create your own LabVIEW-communication plugin for DigitalMicrograph.
This option is of course the most versatile, as you're only limited by what you can achieve with your own C++ code. The disadvantage is that you might need to recompile the plugin DLL for different versions of DigitalMicrograph.

Reverse a decryption algorithm with a given .exe GUI

I am using a Keygen application (.exe). There are two input fields in its GUI:
p1 - at least 1 digit, 10 digits max - ^[0-9]{1,10}$
p2 - 12 chars max - uppercase letters/digits/underscores - ^[A-Z0-9_]{0,12}$
Pressing the generate button produces a key x.
x - 20 digits exactly - ^[0-9]{20}$
For each pair (p1, p2), there is only one x (in other words, f(p1, p2) = x is a function).
I am interested in its encryption algorithm.
Is there any way of reverse engineering the algorithm?
I thought of two ways:
Decompiling. I used Snowman, but the output is too polluted. The decompiled code probably contains non-relevant parts, such as the GUI.
Analyzing input and output. I wonder if there is any option to determine the encryption algorithm used by analyzing a set of f(p1,p2) = x results.
As you mentioned, using Snowman or some other decompiling tool is probably the way to go.
I doubt you would be able to determine the algorithm just by looking at the input/output combinations, since it is possible to write any kind of arbitrary algorithm that can behave in any way.
Perhaps you could just ask the author what algorithm they're using?
Unless it's something really simple, I'd rule out your option 2 of trying to figure it out by looking at input and output pairs.
For decompiling / reverse engineering a static binary, you should first determine whether it's a .NET application or something else. If it's written in .NET you can try this for decompilation:
https://www.jetbrains.com/decompiler/
It's really easy to use, unless the binary has been obfuscated.
If the application is not a .NET application, you can try Ghidra and/or Cutter, which both have pretty impressive decompilers built in:
https://ghidra-sre.org/
https://cutter.re/
If static code analysis is not enough, you can add a debugger to it. Ghidra and x64dbg work really well together, and can be synced via a plugin installed in both.
If you're new to this, I recommend that you look into basic assembly for the x86 platform so you have a general idea of how the CPU works. Another way to get started is "crackme"-style challenges from CTF competitions. Often there are great write-ups with the solution, so you have both the question and the answer available.
Good luck!
Type in p1 and p2. Scan the process memory for that byte string, then put a hardware breakpoint for memory access on it. Generate the key; it will hit that hardware breakpoint, and you will have the address of the code that accesses it, which is where you start reversing in Ghidra (don't forget to use BASE + OFFSET, since Ghidra's output won't have the same base as the running application). The relevant code HAS to access the inputs, so that tells you where the algorithm is: either it accesses the inputs directly, or they are accessed somewhere in that call chain fairly quickly. Nobody can know without actually seeing the executable.

Synchronizing USRP source blocks - multiple B2xx devices

I am trying to create a synchronized USRP source block in GNU Radio consisting of multiple B210 USRP devices. Language: C++.
From what I have found I need to:
Instantiate multiple multi_usrp_sptr objects, as each B210 requires its own and multiple B210 devices cannot be addressed through a single sptr
Use external reference frequency and PPS sources - an option that can be selected from the block or set programmatically
Synchronize re/tuning to achieve a repeatable phase offset between nodes - this can be achieved using the timed commands API: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD
Synchronize the sample streams using the time_spec property of an issue_stream command (see the sketch below)
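For reference on the fourth point, the plain-UHD version of a timed stream start looks roughly like this (a sketch, not taken from the gr-uhd sources):

```cpp
#include <uhd/usrp/multi_usrp.hpp>

// issue a timed "start streaming" command to one device; calling this with
// the same start_time on every B210 makes their sample streams line up
void start_rx_at(uhd::usrp::multi_usrp::sptr usrp,
                 const uhd::time_spec_t& start_time)
{
    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;        // honor time_spec instead of starting now
    cmd.time_spec  = start_time;   // an absolute device time in the near future
    usrp->issue_stream_cmd(cmd);
}
```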
The problem is: how should I insert these timed commands and set the time_spec of the stream in a GNU Radio block or the gr-uhd libs?
I looked into the gr-uhd folder where the sink/source code resides and found functions that could be altered.
Unfortunately, I don't know how to copy or export this library to make these modifications and then compile it so my custom blocks end up in GNU Radio, because gr-uhd seems to be built in and compiled at GR installation.
I attempted copying and then making the lib, but that's not the way - it didn't succeed. Should I add my own source block via gr_modtool and insert only the commands I need?
Keeping compatibility with uhd and its functions, rather than writing the source from scratch just to add a few lines, would be advantageous.
Please advise
Edit
Experimental flowchart, based on Marcus Müller's suggestion:
Experimental usrp synchronization flow
The problem is: how should I insert these timed commands and set the time_spec of the stream in a GNU Radio block or the gr-uhd libs?
For a USRP sink: add tags containing dictionaries with the correct command times to the streams. The GNU Radio API docs have information on what these dictionaries need to look like. The time field is what you need to set to an appropriate value.
For a USRP source: use set_start_time on the uhd_usrp_source block; use the same dictionaries described above to issue commands like tuning and gain setting at a coordinated time.
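A sketch of building such a command dictionary in C++ PMT (the "freq"/"time" key names and the (full seconds, fractional seconds) pair are from my reading of the gr-uhd command-interface docs; verify them against your GNU Radio version):

```cpp
#include <cstdint>
#include <pmt/pmt.h>

// build a gr-uhd command dictionary carrying a timed retune; post it to a
// USRP block's "command" message port or attach it as a "tx_command" tag
pmt::pmt_t make_timed_tune(double freq_hz, uint64_t full_secs, double frac_secs)
{
    pmt::pmt_t cmd = pmt::make_dict();
    cmd = pmt::dict_add(cmd, pmt::mp("freq"), pmt::mp(freq_hz));
    // the "time" entry is a (full seconds, fractional seconds) pair
    cmd = pmt::dict_add(cmd, pmt::mp("time"),
                        pmt::cons(pmt::from_uint64(full_secs),
                                  pmt::from_double(frac_secs)));
    return cmd;
}
```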
I was trying to find a proper way of synchronizing the USRPs via tags.
There are a few issues I came across in this approach:
Timed commands require knowledge of the current device time, which is obtained via usrp.get_time_now() (source: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD). Even if I requested the USRP's time through tags, I would have to somehow extract it from the output (some kind of loop with proper triggering). Alternatively, everything could be planned in absolute rather than relative terms. I have seen an approach that regularly resets the device's sense of time on each PPS (setting it to 0.0); command times within the range 0.0-1.0 would then be acceptable, and the loop for reading the time and inserting it into commands would be redundant (see the UHD sketch below).
I didn't find a way to create dicts in GR via blocks that would make the solution scalable, without writing a few lines of code in a textbox or writing an OOT block.
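For reference, the PPS time-reset idea from the first point looks roughly like this in plain UHD C++ (device serials are placeholders; a sketch, not tested on a B210 pair):

```cpp
#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <thread>

int main()
{
    // one multi_usrp handle per B210; they cannot share a single sptr
    auto usrp_a = uhd::usrp::multi_usrp::make(uhd::device_addr_t("serial=AAAAAAA"));
    auto usrp_b = uhd::usrp::multi_usrp::make(uhd::device_addr_t("serial=BBBBBBB"));

    for (const auto& u : {usrp_a, usrp_b}) {
        u->set_clock_source("external");              // shared 10 MHz reference
        u->set_time_source("external");               // shared PPS
        u->set_time_next_pps(uhd::time_spec_t(0.0));  // reset "time zero" together
    }
    // wait for the PPS edge to pass so both devices have latched 0.0
    std::this_thread::sleep_for(std::chrono::milliseconds(1100));

    // from here on, timed commands can use small absolute times (0.0-1.0 s)
    // instead of reading get_time_now() in a loop
    return 0;
}
```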
In the end there is so little information about which kind of solution is most appropriate (PDUs, events, are tags still relevant in GR?), and the docs are so scarce, that after some mailing I decided to add a simple class that inherits from the main top_block.py; after the top_block is instantiated, it calls a few functions to synchronize the devices. This kind of solution is not the most flexible one, and the parent class top_block.py has to be called through the inheriting one, but it provides an easy programming interface.
Soon I will add an example of the code used in the inheriting class, just in case.
If there is a neater, more dynamic, or more scalable solution, please let me know or point me to sources.

LabVIEW diagram creation API

I need to drive a testbench with LabVIEW.
The test scenarios are written in a language that can be automatically translated into LabVIEW diagrams.
Is there an API that allows creating "LabVIEW diagrams" from another program, or from within LabVIEW itself?
I agree that LabVIEW scripting is one approach, but let me throw out another option.
If you are planning a one-time migration from your test code to LabVIEW, then scripting is great; but if you plan to update your test code regularly (because it's easier to use the "test" language than LabVIEW), then it could become quite painful to perform the migration every time your test code changes.
I've had great success with simply putting my state machine inside of a for loop and then reading in "commands" from a text file that was generated using my "test" language (see pic).
For example, to do an IV sweep my text file might say something like:
SourceV, 5
ReadI
Wait, 1
SourceV, 6
ReadI
This image is greatly simplified - I'm not using a state machine and I don't show how to use "parameters," but I can provide a more comprehensive example if needed. Again, I've had great success doing this with around 30 "commands" controlling multiple instruments and then I generated the text input using VBA or Python.
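For a sense of scale, generating that text input programmatically is only a few lines. A sketch in C++ of producing the IV-sweep file above (the answerer used VBA or Python; the command names just mirror the example):

```cpp
#include <fstream>

int main()
{
    std::ofstream cmds("iv_sweep.txt");   // file name is illustrative
    for (int v = 5; v <= 6; ++v) {        // sweep the source voltage
        cmds << "SourceV, " << v << "\n"  // one "command" per line,
             << "ReadI\n"                 // exactly as the state machine
             << "Wait, 1\n";              // expects to read them
    }
}
```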
It's called LabVIEW scripting. You will need to enable an option in the VI Server page in the options dialog to see the relevant features.
A few things to note:
Scripting isn't complicated, but you do need to be aware of how LV code is built.
While scripting is public, it was initially created as an internal tool. There are still corners of it which are incomplete.
Scripting code can be tedious. If you can get away with it, try creating templates of code.
NI has something called CodeGen, which I believe are a series of functions which make some scripting easier, although I never really looked into it.

Simulating multiple instances of an embedded processor

I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that only entailed a single embedded processor was to develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like unsigned char LATA @ 0xF89; and my header file for simulation includes #define LATA _IOBIT(f89,1), which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like LATA |= 4; will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
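To make the register-simulation idea concrete, here is a stripped-down C++ sketch (the names and the "property" mechanism are simplified placeholders, not the poster's actual _IOBIT machinery):

```cpp
#include <cstdint>

// a simulated special-function register: every read and write goes through
// the class, so the simulation can log accesses or inject hiccups
class IORegister {
    uint8_t value_ = 0;
public:
    operator uint8_t() const { return value_; }               // read the latch
    IORegister& operator=(uint8_t v)  { value_ = v;  return *this; }
    IORegister& operator|=(uint8_t v) { value_ |= v; return *this; }
};

IORegister sim_LATA;
#define LATA sim_LATA    // target code keeps writing plain `LATA |= 4;`

int main() { LATA |= 4; }  // reads the simulated latch, ORs in 4, writes back
```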
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with one instance of Visual Studio, and use a second instance of Visual Studio for the simulation UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. I'd know how to do that in an embedded context, but I have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target-system code. Is there any nice way to have code access class-instance members as though they were globals, without requiring all functions using such instances to be members of the same module? (One possible trick is sketched at the end of this question.)
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLL's in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
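On the first of those approaches, one hedged trick for letting target code keep its global-style names while each simulated instance owns its own state is a macro layer over a "current instance" pointer. A sketch (illustrative names, not the poster's code):

```cpp
// what would be file-scope globals on the target, gathered into one struct
struct DeviceState {
    unsigned char lata;
    int tick_count;
};

DeviceState* g_current;   // the instance the cooperative scheduler switched in

// target sources keep using the bare names; only this simulation header differs
#define lata       (g_current->lata)
#define tick_count (g_current->tick_count)

void run_one_tick() { ++tick_count; }  // compiles unchanged for 1 or 16 devices

int main() {
    DeviceState devices[16] = {};
    for (auto& d : devices) {          // cooperative round-robin "task switch"
        g_current = &d;
        run_one_tick();
    }
}
```

This avoids swapping memory contents on each task switch: switching instances is just a pointer assignment.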
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
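A sketch of that per-connection bookkeeping (socket handling elided; the register names and the 0xF89 address mirror the example above):

```cpp
#include <map>

struct DeviceState {            // the registers/state of one simulated device
    unsigned char porta = 0;
    unsigned char lata  = 0;
};

std::map<int, DeviceState> g_devices;   // keyed by the TCP socket descriptor

// called by the simulator when a "write port" command arrives on a socket
void on_write(int sock, unsigned addr, unsigned char value) {
    DeviceState& d = g_devices[sock];   // each connection sees only its own state
    if (addr == 0xF89) d.lata = value;
}

// called for a "read port" command; returns the value to send back
unsigned char on_read(int sock, unsigned addr) {
    DeviceState& d = g_devices[sock];
    return (addr == 0xF89) ? d.lata : d.porta;
}
```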
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way: there is usually a structure that contains a complete set of registers, and a pointer to, or an array of, these structures is used to multiply them into multiple instances of the processor.

Restricting Valgrind to a specific function

I have a big program to run. Under Valgrind it takes hours and hours to run. I heard there is a way to run Valgrind for only a specific function in the program, while the rest of the program executes normally (outside the Valgrind environment).
Can anybody help me with this? I tried searching the internet; maybe I am missing the right term to search for.
It all depends on what tool you want to use. For callgrind (the profiler in Valgrind) there is an option --toggle-collect=function that lets you collect information inside a particular function and all its children.
However, if the tool you're interested in is memcheck (for catching leaks / memory errors), then there is no such command line option available.
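For the callgrind case, a minimal invocation might look like this (the function and program names are placeholders):

```
valgrind --tool=callgrind --collect-atstart=no --toggle-collect=my_function ./my_program
callgrind_annotate callgrind.out.<pid>
```

--collect-atstart=no isn't strictly required, since --toggle-collect already switches the start state to "off" (see the documentation quoted below), but it makes the intent explicit.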
Googling "valgrind profile specific function only" and go "I feel lucky"
In addition to enabling instrumentation, you must also enable event collection for the parts of your program you are interested in. By default, event collection is enabled everywhere. You can limit collection to a specific function by using --toggle-collect=function. This will toggle the collection state on entering and leaving the specified functions. When this option is in effect, the default collection state at program start is "off". Only events happening while running inside of the given function will be collected. Recursive calls of the given function do not trigger any action.
More here