I have a LabVIEW application where the user can specify input and output channels for a connected DAQ device. I want to synchronise both channels using a trigger on the input channel, with the analog output start as the trigger source (the image on this site shows part of what I am trying to do).
My problem is that the user specifies only the I/O channels, so how can I turn a DAQmx Physical Channel (e.g. cDAQ1Mod4/ao0) into a source for the DAQmx Start Trigger block (probably /cDAQ1Mod4/ao0/StartTrigger in this case, but I am not sure)?
I've found an answer, but I am not really happy with this way of solving the problem: I have to scan the whole device tree and compare the channel or module names available for each device with the name of my selected channel. A simple channel property would be easier, but I haven't found any property that gives me what I need.
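For what it's worth, the same idea can be sketched with the DAQmx C API (the string handling translates directly to LabVIEW string functions): take the device part of the user's physical channel and build the trigger terminal from it. Everything below is an assumption to verify against your hardware: the "/<device>/ao/StartTrigger" terminal format, the example channel names, and the possibility that on a cDAQ system the terminal is exported by the chassis (e.g. /cDAQ1/ao/StartTrigger) rather than by the module.

```c
/* Sketch (DAQmx C API): build an AO start-trigger terminal from a
 * user-supplied physical channel string and use it to trigger the AI task.
 * Terminal format and channel names are assumptions, not verified. */
#include <NIDAQmx.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *aoChan = "cDAQ1Mod4/ao0";   /* example user selection */
    const char *aiChan = "cDAQ1Mod1/ai0";   /* example user selection */

    /* Extract the device part ("cDAQ1Mod4") from "cDAQ1Mod4/ao0". */
    char dev[64] = {0};
    const char *slash = strchr(aoChan, '/');
    size_t len = slash ? (size_t)(slash - aoChan) : strlen(aoChan);
    if (len >= sizeof dev) len = sizeof dev - 1;
    memcpy(dev, aoChan, len);

    char trigTerm[128];
    snprintf(trigTerm, sizeof trigTerm, "/%s/ao/StartTrigger", dev);

    TaskHandle aoTask = 0, aiTask = 0;
    DAQmxCreateTask("", &aoTask);
    DAQmxCreateTask("", &aiTask);
    DAQmxCreateAOVoltageChan(aoTask, aoChan, "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCreateAIVoltageChan(aiTask, aiChan, "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    /* Placeholder timing, then link the AI start trigger to the AO start. */
    DAQmxCfgSampClkTiming(aoTask, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxCfgSampClkTiming(aiTask, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxCfgDigEdgeStartTrig(aiTask, trigTerm, DAQmx_Val_Rising);

    /* Writing AO data and starting the tasks (AI first, then AO) is
     * omitted; the point is only how trigTerm is built.                 */
    printf("AI would be triggered from %s\n", trigTerm);

    DAQmxClearTask(aiTask);
    DAQmxClearTask(aoTask);
    return 0;
}
```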
I'm trying to implement the UPLINK of a Ground Station controlling a small satellite. The idea is that the link should remain active between transmitted telecommands. For this, I need to insert some DUMMY or IDLE sequence bytes such as 0xAA or similar.
I have found that some people already faced a similar issue and posted their questions here:
https://www.ruby-forum.com/t/constant-carrier-digital-transmission/163379
https://lists.gnu.org/archive/html/discuss-gnuradio/2016-08/msg00148.html
So far, the best I could achieve was to modify the EventStream Source block from https://github.com/osh/gr-eventstream in order to preload the vectors with my dummy sequence (i.e. 0xAA) instead of preloading them with zeroes. This is a general overview of the GNURadio graph I'm using:
GNURadio Flowgraph Picture
This solution, however, introduces a large latency: the transmitted message does not appear at the output until several seconds have passed.
Is there a way of programming the USRP using GNU Radio so that it constantly sends a fixed sequence, interrupted only when an incoming message is passed? I assume that the USRP has the ability to read tagged streams in order to schedule transmissions. However, I'm not sure how to fit this into my specific application.
Thanks beforehand!
Joa
I believe this could be done using a TCP or UDP source block.
Your control information could be sent to the socket using TCP/UDP. GNU Radio would then collect and transmit the packets. Your master control program would then have to handle the IDLE stuffing, but solving the problem outside of GNU Radio is easier.
Your master control program would basically do the following:
1. transmit control data as needed
2. if no control data is ready before the next packet must be sent, send an IDLE packet (see the sketch below)
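To make that concrete, here is a minimal sketch of such a master control program in C, assuming the flowgraph uses a UDP Source listening on 127.0.0.1:52001 and that telecommands arrive on this program's stdin; the port, packet size and 10 ms packet period are all hypothetical values.

```c
/* Minimal sketch of a "master control" UDP sender with IDLE stuffing.
 * Assumptions (not from the original post): a GNU Radio UDP Source
 * listens on 127.0.0.1:52001, telecommands arrive as raw bytes on
 * stdin, and POSIX sockets are available. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define PKT_LEN   64            /* fixed uplink packet size (assumption) */
#define IDLE_BYTE 0xAA          /* filler pattern from the question      */

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(52001);                 /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    unsigned char pkt[PKT_LEN];
    for (;;) {
        /* Wait up to one packet period (here 10 ms) for real data. */
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        struct timeval tv = { .tv_sec = 0, .tv_usec = 10000 };

        memset(pkt, IDLE_BYTE, sizeof pkt);      /* default: IDLE frame */
        if (select(STDIN_FILENO + 1, &rfds, NULL, NULL, &tv) > 0) {
            ssize_t n = read(STDIN_FILENO, pkt, sizeof pkt);
            if (n <= 0)
                break;                           /* stdin closed */
            /* Anything shorter than PKT_LEN stays padded with 0xAA. */
        }
        sendto(sock, pkt, sizeof pkt, 0,
               (struct sockaddr *)&dst, sizeof dst);
    }
    close(sock);
    return 0;
}
```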
I have built a simple PCI driver for reading and writing data to a PCI device. I have also added interrupt support, so when there is a PCI interrupt an ISR is called. This all seems to work.
I would like to inform an external application of the interrupt. So far I haven't found a suitable mechanism. The interrupt could come at any time and depends on sensors connected to the PCI device.
I have found the following:
1. Event objects, which can be passed to the KMDF driver via read, write, or I/O control commands (overlapped object).
2. Plug and Play notifications, which can be used by the driver to inform the app of PnP events (as in the Toaster example code).
A notification method would be ideal; however, it doesn't look like one exists for my particular use case.
There are at least two ways to achieve what you are looking for:
Inverted call model - send IOCTL(s) to the driver, which the driver will keep pending and will complete as and when it needs to notify user mode about the occurrence of the event it is interested in. You can read more about this approach here.
Use shared event handles. A user mode application communicates the event handle(s) to kernel mode using an IOCTL. The kernel mode increments the reference count to ensure that the handle remains valid when it needs to use it and then signals the event when necessary. You can read more about this approach here.
The first approach is generally preferred for various reasons that you will find while reading the linked articles. If your use case requires the kernel mode to not only indicate the occurrence of an event but also send some data back to user mode, then the second approach is not suitable and you should focus on the first approach alone.
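To illustrate the user-mode side of the inverted call model, here is a minimal sketch in C. The device path \\.\MyPciDevice, the IOCTL code, and the ULONG payload are hypothetical placeholders; they must match whatever your KMDF driver actually defines, and the driver is assumed to keep the request pending in a manual queue and complete it from its ISR/DPC path.

```c
/* User-mode side of the inverted call model (sketch only). */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Hypothetical IOCTL; must match the driver's definition. */
#define IOCTL_WAIT_FOR_SENSOR_EVENT \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    HANDLE dev = CreateFileW(L"\\\\.\\MyPciDevice",      /* hypothetical path */
                             GENERIC_READ | GENERIC_WRITE, 0, NULL,
                             OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (dev == INVALID_HANDLE_VALUE) return 1;

    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    for (;;) {
        DWORD bytes = 0;
        ULONG eventData = 0;            /* payload filled in by the driver */

        /* The driver keeps this request pending until the interrupt fires. */
        if (!DeviceIoControl(dev, IOCTL_WAIT_FOR_SENSOR_EVENT,
                             NULL, 0, &eventData, sizeof eventData,
                             &bytes, &ov)
            && GetLastError() != ERROR_IO_PENDING)
            break;

        /* Block here until the driver completes the request. */
        if (!GetOverlappedResult(dev, &ov, &bytes, TRUE))
            break;
        printf("interrupt notification, data = %lu\n",
               (unsigned long)eventData);
        ResetEvent(ov.hEvent);          /* reuse the OVERLAPPED next loop */
    }
    CloseHandle(ov.hEvent);
    CloseHandle(dev);
    return 0;
}
```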
I successfully use WasapiLoopbackCapture() for recording audio played on the system, but I'm looking for a way to record what the user would actually hear through the speakers.
I'll explain: if a certain application plays music, WASAPI loopback will capture the music samples even if the Windows main volume control is set to 0, i.e. even if no sound is actually heard through the audio card's output jack (speakers/headphones/etc.).
I'd like to intercept the audio actually "reaching" the output-jack (after ALL mixers on the audio-path have "done their job").
Is this possible using NAudio (or other infrastructure)?
A code sample or a link to one would come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
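As a rough illustration of that scaling approach, the function below multiplies a block of 32-bit float loopback samples by the master volume scalar. The scalar would come from the endpoint volume API (IAudioEndpointVolume::GetMasterVolumeLevelScalar, surfaced in NAudio as AudioEndpointVolume.MasterVolumeLevelScalar); here it is simply passed in as a parameter, and per-session or per-channel volumes are ignored.

```c
/* Sketch: apply a master-volume scalar to a block of 32-bit float PCM
 * samples captured from the loopback stream. Obtaining the scalar from
 * the endpoint volume API is assumed to happen elsewhere. */
#include <stddef.h>

void scale_loopback_block(float *samples, size_t count, float masterScalar)
{
    /* WASAPI loopback delivers the pre-volume mix; multiplying by the
     * current master scalar approximates what reaches the output jack. */
    for (size_t i = 0; i < count; ++i)
        samples[i] *= masterScalar;
}
```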
I am working on a project which uses a PIC24FJ64GA002 MCU.
I am working on a bit-banged serial communication function that will use one wire to send data and then switch to receive mode to receive data on the same pin. A separate pin will be used for clocking, which will always be controlled by a different board (always an input). I am wondering: is there a way to configure the pin for open-collector operation so that it can be used as an input and an output, or do I have to change the pin configuration every time I go from reading to writing?
You need to change the direction of the pin each time by using the TRIS register. If the pin is set up as an output, reading the PORT register will most likely only tell you what level you are driving the pin to (assuming there is a high impedance on the pin). If the pin is set for input, you won't be able to drive your desired output value.
Also, make sure that you read incoming data using the PORT register, but output the data using the LAT register. This ensures that you don't suffer any issues if your code (I assume you are programming in C here) gets converted into BSET/BCLR/BTG instructions, which are read-modify-write. If you are writing in assembler, the same rule applies, but you know when you are using these R-M-W type instructions. If you want more reasoning on this, please ask.
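A minimal sketch of that direction-switching pattern, assuming the XC16 compiler and a hypothetical data pin on RB5 (use whichever pin your board actually wires up):

```c
/* Half-duplex bit-banged data pin on a PIC24: drive via LAT, read via
 * PORT, and flip direction with TRIS each time you switch roles. */
#include <xc.h>

#define DATA_TRIS  TRISBbits.TRISB5   /* 1 = input, 0 = output   */
#define DATA_LAT   LATBbits.LATB5     /* write through the latch */
#define DATA_PORT  PORTBbits.RB5      /* read the actual pin     */

void data_pin_drive(int level)
{
    DATA_LAT  = level ? 1 : 0;  /* set the latch first to avoid glitches */
    DATA_TRIS = 0;              /* then enable the driver (output)       */
}

void data_pin_release(void)
{
    DATA_TRIS = 1;              /* tri-state: pin becomes an input       */
}

int data_pin_read(void)
{
    return DATA_PORT;           /* always read PORT, never LAT           */
}
```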
It seems that as soon as data is ready for the host (such as when I use WriteFile to send a command telling the HID to give back some data, such as the port value) and the IN packet ready bit is set, the host reads it (as confirmed by another USB interrupt) before ReadFile is ever called. ReadFile is later used to put this data into a buffer on the host. Is this the way it should happen? I would have expected the ReadFile call to cause the IN interrupt.
So here is my problem: I have a GUI and HID that work well together. The HID can do I2C to another IC, and the GUI can tell the HID to do I2C just fine. Upon startup, the GUI reads data from the HID and gets a correct value (say, 0x49). Opening a second GUI to the same HID does the same initial data read from the HID and gets the correct value (say, 0x49; it should be the same as the first GUI's read). Now, if I go to the first GUI, and do an I2C read, the readback value is 0x49, which was the value that the 2nd GUI had requested from the HID. It seems that the HID puts this value on the in endpoint for all devices attached to it. Thus the 1st GUI incorrectly thinks that this is the correct value.
Per Jan Axelson's HID FAQ, "every open handle to the HID has its own report queue. Every report a device sends goes into all of the queues so multiple applications can read the same report." I believe this is my problem. How do I purge this and clear the endpoint before the 1st GUI does its request, so that the correct value (which the HID does send, per the debugger) gets through? I tried HidD_FlushQueue, but it keeps returning FALSE with "handle is invalid" errors, although the handle is valid (WriteFile/ReadFile succeed with the same handle). Any ideas?
Thanks!
You might not like this suggestion, but one option would be to only allow one GUI at a time to have an open handle. Use your favorite resource allocation lock mechanism and make the GUIs ask for the HID resource before opening the handle and using it.
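One way to enforce that single-owner policy is a named mutex that each GUI instance must acquire before opening its HID handle; a minimal sketch, where the mutex name is hypothetical:

```c
/* Sketch: gate access to the HID behind a named mutex so only one GUI
 * instance opens a handle at a time. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE lock = CreateMutexW(NULL, FALSE, L"Global\\MyHidSingleAccess");
    if (lock == NULL) return 1;

    /* Wait (here up to 5 s) for exclusive ownership before touching the HID. */
    DWORD rc = WaitForSingleObject(lock, 5000);
    if (rc != WAIT_OBJECT_0 && rc != WAIT_ABANDONED) {
        printf("HID is in use by another GUI instance\n");
        CloseHandle(lock);
        return 1;
    }

    /* ... CreateFile on the HID path, WriteFile/ReadFile exchanges ... */

    ReleaseMutex(lock);   /* let the other GUI instance in again */
    CloseHandle(lock);
    return 0;
}
```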