I am working as an automation test engineer in healthcare. We have a requirement to automate testing of an ECG waveform that is displayed in a web application.
So is there any tool to automate testing of the waveform?
[Waveform sample image]
This is a tough item to test. In theory, if the waveform isn't 'flattened' (i.e. just a picture), you should be able to pull out individual data points. Image recognition is another option, since the rendered waveform should match a reference image. It depends a lot on what data you have to work with; if it's just a picture of a waveform, you are pretty much limited to image-based testing.
Use Sikuli.
You have to be creative with it. As long as your application runs on a platform that supports a JVM (so Sikuli can run there) and there is a feature to hold the waveform over a specific interval, you can do an image-based assertion using Sikuli.
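For illustration, here is a minimal SikuliX (Jython) sketch of that kind of image-based assertion; the region coordinates, the similarity threshold, and the reference screenshot "expected_waveform.png" are placeholders you would replace with values from your own application.

```python
# Minimal SikuliX (Jython) sketch of an image-based waveform assertion.
# The region coordinates and "expected_waveform.png" are hypothetical placeholders.
from sikuli import *  # available when the script runs under the SikuliX runtime

# Screen area where the web application renders the held/frozen waveform
waveform_region = Region(100, 200, 800, 300)

# Compare what is on screen against a previously captured reference image,
# waiting up to 5 seconds and requiring 90% similarity.
if waveform_region.exists(Pattern("expected_waveform.png").similar(0.90), 5):
    print("Waveform matches the reference image")
else:
    raise AssertionError("Waveform did not match expected_waveform.png")
```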
I have a VI in LabVIEW that streams video from a webcam (Logitech C300) and processes the color layers of each image as arrays. I am trying to get raw Bayer data from the webcam using Logitech's program (http://web.archive.org/web/20100830135714/http://www.quickcamteam.net/documentation/how-to/how-to-enable-raw-streaming-on-logitech-webcams) and the Vision Acquisition tool, but I only get as much data as with regular capture, instead of four times more.
Basically, I get 1280x1024 24-bit pixels where I want 1280x1024 32-bit or 2560x2048 8-bit pixels.
Has anyone had any experience with this? Is there a way for LabVIEW to process the camera's raw output, or to actually record a raw file from the camera?
Thank you!
The driver flag you've enabled simply packs the raw pixel value (8/10 bpp) into the least significant bits of the 24-bit values. Assuming that the 8 bpp mode is used, the raw values can be extracted from the blue color plane, as in the following example, and then debayered to obtain RGB values.
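In case it helps outside LabVIEW, here is a rough Python/OpenCV sketch of the same idea; the file name and the BG Bayer pattern are assumptions.

```python
# Sketch: treat the blue plane of the delivered 24-bit frame as the raw 8 bpp
# Bayer data, then debayer it. "frame.png" and the BG pattern are assumptions.
import cv2

frame = cv2.imread("frame.png")                    # 1280x1024, 24-bit BGR as captured
bayer = frame[:, :, 0]                             # blue plane holds the raw 8 bpp values
rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)   # debayer; the actual pattern may differ
cv2.imwrite("debayered.png", rgb)
```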
Unless you can improve on the debayer algorithms in the firmware, or have very specific needs, this is not very useful. Normally one can at least reduce the amount of data transferred by enabling raw mode, which is not the case here.
The above assumes that the raw video mode isn't being overwritten by the LabVIEW IMAQdx driver. If it is, you might be able to enable raw mode from LabVIEW through property nodes. This requires configuring the acquisition manually, as the configurability of express VIs is limited. Use the EnumStrings property to get all possible attributes, and then see if there is something like the one specified outside of the diagram disable structure (this is from a different camera).
I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start. Let's say I want to create a simple application (that I could reuse) that does the following:
Load an image via an Open option in the File menu
Display it within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image via the Save option in the File menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux, but I haven't looked at using it within an Objective-C/Cocoa/Xcode environment yet, and I'm not even sure whether that would be a good idea.
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework - it provides most of the pre-baked building blocks ready to be consumed in your application.
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac, and it's entirely geared around making accelerated image processing easy to work with, but unfortunately the Mac port isn't ready to use just yet.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is doing rapid prototyping of image processing, either using Core Image or OpenGL shaders. I've even heard of people using OpenCV with this using custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.
I am currently working on an application where I need to find the user's heart rate. I found plenty of applications that do this, but I was not able to find a single private or public API that supports it.
Is there any framework available that can help with this? I was also wondering whether the UIAccelerometer class could help, and what level of accuracy it could provide.
How could this feature be implemented: by putting a finger on the iPhone camera, by putting the microphone against the jaw or wrist, or in some other way?
Is there any way to detect changes in blood circulation and find the heart beat using those, or using UIAccelerometer? Any API or some code? Thank you.
There is no API for detecting heart rate; these apps do it in a variety of ways.
Some use the accelerometer to measure how the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger from the changes in the light level that can be seen.
Various DSP techniques can be used to discern very low-level periodic signals from a long enough set of samples taken at an appropriate sample rate (accelerometer readings or reflected light color).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these DSP techniques. A full explanation would require several chapters of a digital signal processing textbook, so such a textbook might be a good place to start.
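For a rough illustration of that kind of processing (using Python/NumPy here rather than the Accelerate framework), one approach is to take per-frame average brightness samples captured with a finger over the lens and the flash on, remove the DC offset, and pick the dominant frequency in a plausible heart-rate band; the sample rate and input file below are assumptions.

```python
# Sketch: estimate heart rate from per-frame average brightness samples.
# fs and "brightness.txt" are assumptions; on iOS the equivalent building
# blocks would come from the Accelerate framework.
import numpy as np

fs = 30.0                                    # camera frame rate in Hz (assumed)
brightness = np.loadtxt("brightness.txt")    # one mean-brightness sample per frame

signal = brightness - brightness.mean()      # remove the DC offset
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# Only consider frequencies corresponding to 40-200 beats per minute
band = (freqs >= 40.0 / 60.0) & (freqs <= 200.0 / 60.0)
peak = freqs[band][np.argmax(spectrum[band])]
print("Estimated heart rate: %.0f BPM" % (peak * 60.0))
```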
I am working on the simulation of Care-O-bot technologies (I am a beginner with ROS). I read the ROS documentation and found two similar-looking things, RViz and Gazebo. Would you please tell me the difference between them?
RViz is a visualization tool for data published by ROS nodes (for example, by the navigation stack).
Gazebo is a 3D robot simulator.
Gazebo is a robot simulator.
RViz, on the other hand, is a limited - though very useful - visualization tool. When I say limited, I mean that it is usually one-way: the ROS application publishes messages and these are visualized in RViz.
There can be some interactivity in RViz (I have implemented menus in it, for example), but it is not fully interactive.
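As a small illustration of that one-way flow, here is a minimal rospy sketch that publishes a visualization_msgs/Marker for RViz to subscribe to and draw; the topic name, frame ID, and marker parameters are typical defaults and should be treated as assumptions for your setup.

```python
# Sketch: a ROS node publishing a Marker message that RViz can display.
# RViz only visualizes what is published; it does not feed anything back.
import rospy
from visualization_msgs.msg import Marker

rospy.init_node("marker_demo")
pub = rospy.Publisher("visualization_marker", Marker, queue_size=1)

marker = Marker()
marker.header.frame_id = "base_link"   # assumed frame; must exist in your TF tree
marker.type = Marker.SPHERE
marker.action = Marker.ADD
marker.pose.position.x = 1.0
marker.pose.orientation.w = 1.0
marker.scale.x = marker.scale.y = marker.scale.z = 0.2
marker.color.r = 1.0
marker.color.a = 1.0                   # alpha must be non-zero for the marker to show

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    marker.header.stamp = rospy.Time.now()
    pub.publish(marker)
    rate.sleep()
```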
Do you know of any application that will display all the headers/parameters of a single H.264 frame? I don't need to decode it; I just want to see how it is built up.
Three ways come to my mind (if you are looking for something free, otherwise google "h264 analysis" for paid options):
Download the h.264 parser from this thread on the Doom9 forums
Download the h.264 reference software
libh264bitstream provides h.264 bitstream reading/writing
This should get you started. By the way, the H.264 byte stream format is described in Annex B of the ITU-T specs.
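And if you just want a quick look at how the stream is built up without installing anything, a few lines of Python can scan an Annex B elementary stream for start codes and print each NAL unit's type; the file name below is a placeholder.

```python
# Sketch: list NAL units in an Annex B H.264 elementary stream without decoding.
# "stream.h264" is a placeholder file name.
NAL_TYPES = {
    1: "Coded slice (non-IDR)", 5: "Coded slice (IDR)", 6: "SEI",
    7: "SPS", 8: "PPS", 9: "Access unit delimiter",
}

with open("stream.h264", "rb") as f:
    data = f.read()

pos = 0
while True:
    # A 4-byte start code (00 00 00 01) ends with the same 3 bytes we search for.
    start = data.find(b"\x00\x00\x01", pos)
    if start == -1 or start + 3 >= len(data):
        break
    header = data[start + 3]              # first byte after the start code
    nal_ref_idc = (header >> 5) & 0x03
    nal_unit_type = header & 0x1F
    print("offset %8d  ref_idc=%d  type=%2d  %s"
          % (start, nal_ref_idc, nal_unit_type,
             NAL_TYPES.get(nal_unit_type, "other")))
    pos = start + 3
```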
I've created a web version: https://mradionov.github.io/h264-bitstream-viewer/
It is based on h264bitstream and inspired by H264Naked, built by compiling h264bitstream into WebAssembly and putting a simple UI on top of it. The output information for NAL units is taken from H264Naked at the moment. It also supports files of any size; loading a large file takes some time initially, but navigation throughout the stream should be seamless.
I had the same question. I tried H264 Analysis, but it only supports Windows, so I made a similar tool with Qt to support different platforms. Download H264Naked. This tool is essentially a wrapper around libh264bitstream.