Is the VisualVM graph jumping expected? - visualvm

I really like VisualVM and think it's a great tool. However, after running for about an hour, the graphs (CPU, memory, and others) start jumping around and require a restart. I suspect this has something to do with the app trying to sample the dataset because there is too much data to show on the graph.
I've searched the issues list and don't see this listed. Given that it's behaved like this for as long as I've used it, and that others are complaining about it, I'm wondering if it's intended behavior. I guess that's the first question.
The second would be: why is it keeping all the data points? Why not throw the oldest point away when a new point arrives and the buffer is full?

For the first question see this: http://java.net/projects/visualvm/lists/dev/archive/2011-02/message/7.
For the second question: VisualVM does not keep all the data points; it throws the oldest point away when a new point arrives and the buffer is full. How did you come to the conclusion that it keeps all the data?
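In case the fixed-size behavior is unclear, here is a minimal sketch of that kind of ring buffer in Python (the class name and capacity are illustrative only, not VisualVM's actual implementation):

    from collections import deque

    class MetricBuffer:
        def __init__(self, capacity=1000):  # capacity is illustrative
            # A deque with maxlen drops the oldest entry automatically
            # when a new one is appended to a full buffer.
            self.points = deque(maxlen=capacity)

        def add(self, timestamp, value):
            self.points.append((timestamp, value))

    buf = MetricBuffer(capacity=3)
    for t, v in [(1, 10), (2, 20), (3, 30), (4, 40)]:
        buf.add(t, v)
    print(list(buf.points))  # [(2, 20), (3, 30), (4, 40)]

Memory use stays constant no matter how long the application runs, which is why the charts do not need to keep every point.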

Related

QPSK works in simulation but not with SDR

I'm going to start off by saying that I'm very new to SDR and GNU Radio. This may be a dumb question, but I have been googling and testing things for about two months now trying to get this to work without success. Any help or pointers would be appreciated!
I'm attempting to use GNU Radio 3.8 to transfer a file using differential QPSK. I've tried to follow the tutorials on the wiki as well as several similar academic papers I found on the internet (which also seem to be based on the wiki tutorial). None of them worked on their own, but by combining what actually works from each one, I managed to create a flowgraph sans hardware that does indeed send and receive the data from a file. Here's the flowgraph and here is a screenshot of the results. The results show the four constellation points, and the data from the file source matches up perfectly with the data having gone through the entire transmit+receive chain. In the simulation I have a throttle block and a channel model block where the LimeSDR Source and LimeSDR Sink blocks would be. So far so good (at least as far as I can tell).
When I actually start transmitting this signal with the SDR, the received data no longer matches up with what is transmitted. Here's the flowgraph I've been using for the transmission. I added a protocol formatter and some FEC blocks that I could have removed for this illustration, but the point is that simply looking at what bits are going into the modulator vs. what's being recovered, the two do not match up. The constellation looks good (as far as I can tell) but the bits are all wrong. Here's a screenshot showing the bits being transmitted. You'll notice in the screenshot of the transmitted signal that the signal has a repeating series of three flat-top "1"s surrounded on both sides by a period of "0"s (at times 1.5 ms and 3.5 ms). This is a screenshot of the received bits. At times 1 ms and 3 ms you can see that it has significantly more transitions between 1 and 0 than it should.
So at this point I'm stumped. The simulation worked, but the real-world test does not. I've messed around with the RRC filter properties a significant amount. I have no clue whether the values I have chosen are correct, as I have not found a tutorial or explanation of how to choose them. I just looked at some of the example flowgraphs, made some guesses as to how those values were derived, and applied those guesses to my use case. It worked well in the simulation, so I thought it would be fine in the real-world test. I've tried a variety of samples-per-symbol values, but my goal is a 4800 bit per second transfer speed, and using different samples per symbol didn't help anyway. What should I change in order to get this to work?
Bonus question: The constellation object has QPSK and DQPSK, and the constellation modulator has a differential checkbox. What is the best practice combination of selections to get a differential QPSK modulation?
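For reference, my understanding is that the GRC Constellation Modulator block maps to digital.generic_mod in the Python API, with the differential checkbox corresponding to its differential argument. A minimal sketch (the samples-per-symbol and excess-bandwidth values are just my guesses, as described above):

    from gnuradio import digital

    # Plain QPSK constellation object; if differential encoding is done
    # by the modulator below, it is not also done here.
    qpsk = digital.constellation_qpsk().base()

    # Equivalent of the Constellation Modulator block with the
    # differential box checked; sps and excess_bw are example values.
    mod = digital.generic_mod(
        constellation=qpsk,
        differential=True,
        samples_per_symbol=4,
        excess_bw=0.35,
    )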

OxyPlot: IsValidPoint on real-time LineSeries

I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I'm decimating it to a constant number of points, let's say 80 points for a 20-second measure, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
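Conceptually, the decimation step is something like this (Python-style pseudocode of the idea only; my actual code is C#, and the names here are made up):

    def decimate(samples, target_points=80):
        # Keep a constant number of points regardless of input length.
        if len(samples) <= target_points:
            return list(samples)
        step = len(samples) / target_points
        return [samples[int(i * step)] for i in range(target_points)]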
With the aid of ReSharper, I've noticed that the application (I have 6 different plots) is calling the IsValidPoint method a huge number of times (something like 400,000,000 calls), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks the validity of every point, instead of only the newly added values.
Also, it spends a lot of time in the MeasureText/DrawText method.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Also, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!

LabVIEW - buffer data then save to Excel file

My question is with respect to a LabVIEW VI (2013) that I am trying to modify. (I am only just learning to use this language. I have searched the NI site and Stack Overflow for help without success; I suspect I am using the incorrect keywords.)
My VI consists of a flat sequence, one pane of which contains a while loop where integer data is collected from a device and displayed on a graph.
I would like to be able to buffer this data and then send it to disk when a preset number of samples has been collected. My attempts so far result in only the last record being saved.
Specifically, I need to know how to save the data in a buffer (array) and then, when the correct number of samples has been captured, save it all to disk (saving as it is captured slows the process down too much).
Hope the question is clear and thanks very much in advance for any suggestions.
Tom
Below is a simple circular buffer that holds the most recent 100 readings. Each time the buffer is refilled, its contents are written to a text file. Drag the image onto a VI's block diagram to try it out.
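If the image doesn't load for you, the logic it implements is roughly the following (sketched in Python just to describe the dataflow, not as LabVIEW code; the file name is illustrative):

    BUFFER_SIZE = 100  # the snippet keeps the most recent 100 readings

    buffer = [0] * BUFFER_SIZE
    index = 0

    def on_new_reading(value):
        """Store one reading; flush the whole buffer to disk when it refills."""
        global index
        buffer[index] = value
        index += 1
        if index == BUFFER_SIZE:
            with open("readings.txt", "a") as f:
                f.write("\n".join(str(v) for v in buffer) + "\n")
            index = 0  # wrap around and start overwriting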
As you learn more about LabVIEW and as your performance and multi-threaded needs increase, consider reading about some of the LabVIEW design patterns mentioned in the other answers:
State machine: http://www.ni.com/tutorial/7595/en/
Producer-consumer: http://www.ni.com/white-paper/3023/en/
I'd suggest splitting the data acquisition and the data saving into two different loops using a producer/consumer design pattern.
Moreover, if you need very high throughput, consider using the TDMS file format.
Have a look here for an overview: http://www.ni.com/white-paper/3727/en/
A screenshot will definitely help. However, some things are clear:
Unless you are dealing with a very high volume of data, very slow hard drives, or other unusual requirements, open the file before your while loop, write to it every time you acquire a sample (leaving buffering to the OS), and close it afterwards.
If you decide you need to manage buffering on your own, you can use queues. See this example for reference: https://decibel.ni.com/content/docs/DOC-14804 (they stream data from disk, buffering it in the queue, but it is the same idea).
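Stripped to its essentials, the producer/consumer arrangement looks like this (a Python sketch with a stand-in acquisition function; in LabVIEW the queue primitives and two loops play the same roles):

    import queue
    import threading

    q = queue.Queue()  # bound the size if memory use is a concern

    def acquire_samples(n=1000):
        # Stand-in for the real device read; replace with your DAQ call.
        for i in range(n):
            yield i

    def producer():
        # Acquisition loop: its only job is to read samples and enqueue them.
        for sample in acquire_samples():
            q.put(sample)
        q.put(None)  # sentinel: tells the consumer there is no more data

    def consumer():
        # Logging loop: drains the queue and writes at its own pace.
        with open("data.txt", "w") as f:
            while True:
                sample = q.get()
                if sample is None:
                    break
                f.write(f"{sample}\n")

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()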
My VI consists of a flat sequence one pane of which
Replace the flat sequence with a finite state machine (e.g. http://forums.ni.com/t5/LabVIEW/Ending-a-Flat-Sequence-Inside-a-case-structure/td-p/3170025).

Regrounding Zero Based ColumnSeries in Apache/Adobe Flex

I have tweeted an image illustrating the problem with Flex ColumnSeries on a PlotChart when trying to overlay one on top of another.
Essentially, it can display one series all right, and two or more OK on initialization, but after a bit of manipulation (in the user session) the columns lose their sense of where zero is and begin to float (these series have no minField, so zero is their starting point). FWIW: the axis for these columns is on the right, but that can change given the type of data displayed.
The app this is for allows users to turn multiple series of multiple plotting styles on and off, change visual parameters, and even the order in which the series stack on top of each other -- just to give you an idea of what's going on.
Due to how dynamic this all is, I am doing most of the code in ActionScript.
So the questions are:
Is this fixable? Googling around has provided no insights, regardless of inquiry.
Is there a refresh function or equivalent within PlotChart/CartesianCharts that may help?
Could this be a problem not with the chart canvas, but with the axis the series points to? Or with the series itself?
If it has not been made clear already: I am lost on this. The issue, which I have known about for about a year now, was first discovered in a beta version of the app I am working on, but it took a while for it to surface in an average user session. As the complexity of the app has grown (by client demand), the issue takes a lot less time to surface.
The issue also occurs on all versions of Flex I have used: 4.5, 4.6, 4.9... etc.
Please help, or offer pointers. Thanks!

Accessing the most recent data from a shift register

I am fairly new to LabVIEW, so please bear with me. I am working on a piece of code where I am reading data (in the form of an array) from a USB device, splitting this array to meet a required size, storing part of it in a circular buffer, and passing the rest of the data into a shift register. The problem I am encountering is that the shift register saves the data from all the other iterations, whereas I simply want the data from the most recent iteration, but I am not sure how to do this in LabVIEW. Perhaps the shift register is not the answer here, but I was wondering if anyone might have some suggestions.
Please let me know if this is clear enough.
I should probably mention that I am using LabVIEW 2011.
In the picture above, I am reading data coming from my hardware. This data is read as an array, and I split the array to meet a specific size. I then store part of this array in a 2D array, which serves as a circular buffer, and the other part of the array is sent to a shift register, where on the next iteration this data will be combined with the next set of data read back from my hardware.
The problem I am seeing right now, is that the size of my shift register is constantly growing.
I took Adrian Keister's advice and found my problem. CharlesB was correct: the shift register does only show the data from the previous iteration. The reason the contents of my shift register were constantly growing was that I did not account for the next set of data that would be read during each iteration. Well, back to the drawing board.
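For anyone who finds this later, the pattern I was aiming for is essentially the following (a Python analogy of the loop; read_hardware, store_in_buffer, and the chunk size are stand-ins, not my actual code):

    import random

    CHUNK = 512  # the "required size"; a stand-in value

    def read_hardware():
        # Stand-in for the USB read: returns a variable-length array.
        return [0] * random.randint(400, 700)

    def store_in_buffer(chunk):
        pass  # stand-in for writing into the 2D circular buffer

    carry = []  # plays the role of the shift register
    for _ in range(10):  # the while loop
        data = carry + read_hardware()  # leftover + newest read
        while len(data) >= CHUNK:
            store_in_buffer(data[:CHUNK])  # consume a fixed-size chunk
            data = data[CHUNK:]
        carry = data  # only the remainder (< CHUNK) is carried forward,
                      # so the loop-carried state stays bounded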
I don't know if I understand your problem correctly, but you should probably try using conditional appending of the array. In LabVIEW 2012 this operation is even simpler because of conditional indexing in for loops.
I provided an example here and hope that it helps. A similar condition can be created for your index modulo operation.
http://i.stack.imgur.com/AALLo.jpg
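If the image doesn't load, the two ideas in text form are roughly the following (a Python sketch; the data source and the condition are placeholders):

    N = 100
    ring = [0] * N  # fixed-size circular buffer
    kept = []       # conditionally appended array

    def stream_of_values():
        # Stand-in for the acquired data stream.
        return range(250)

    for i, value in enumerate(stream_of_values()):
        ring[i % N] = value  # the "index modulo" write wraps around
        if value % 2 == 0:   # example condition for conditional appending
            kept.append(value)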