I am developing a program to remotely access a multimeter over GPIB. I use the VISA Read function to obtain measurement values and convert the string of values into numeric values. With a sufficiently long VISA timeout, the program executes without any problem. However, if the timeout is too short, the program fails to capture the values and cannot scan them from the string in the read buffer. The same thing happens when I try to abort the program while it is running. The only thing that helps is restarting LabVIEW.
My company recently started installing IO modules on production machines to capture data into our database. To do so, we outsourced the work to a third-party programmer, who created IO pulling software that draws data from the IO modules into a database on a collector PC.
The IO pulling software is running on the collector PC and will continue to draw data from machines into its local database for as long as the software is left open. No data will come in if the collector PC is shut down or the software is turned off.
The program works well, but there was a bit of an oversight. If a machine gets unplugged or disconnected from the network (the connection is made via Ethernet cable), an unhandled exception error appears in the pulling software for that particular machine.
Clearly it's due to there being no connection, and clicking Continue makes the error go away, but I'm afraid it will affect the quality of the data captured. Also, our contract with the programmer who made this has expired, so I'm the one who has to fix it.
The program was written in VB, and I was thinking of adding a procedure to the code to automatically handle this error, or at least tell the program to select Continue and move on without human intervention. As I'm new to programming, I'm not sure whether my approach is even possible. Any ideas?
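The kind of change I have in mind looks roughly like this. The real program is VB, so this C# sketch is only to show the Try/Catch structure (which maps directly to VB's Try ... Catch ... End Try), and the method and variable names here are made up:

    using System;

    public class MachinePoller
    {
        // Hypothetical wrapper around the existing per-machine read routine.
        public void PollMachine(string machineAddress)
        {
            try
            {
                // The existing code that reads from the IO module would go here.
                ReadIoModule(machineAddress);
            }
            catch (Exception ex)
            {
                // Instead of the unhandled-exception dialog, log the failure and
                // move on; the next polling cycle simply retries the machine.
                Console.Error.WriteLine(
                    $"{DateTime.Now}: could not reach {machineAddress}: {ex.Message}");
            }
        }

        private void ReadIoModule(string machineAddress)
        {
            // Placeholder for the real read over the network.
            throw new NotImplementedException();
        }
    }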
Basically, I have a set of performance analyses that (given a naive interpretation) claim 70% of the time in our web application under heavy load is spent in synchronization, mostly in SNIReadSyncOverAsync, which the data reader calls internally. (SNIReadSyncOverAsync actually ends up sitting on kernelbase.dll!WaitForSingleObjectEx.) It would be informative to see whether these waits are caller-initiated or callee-initiated.
Is there a way to see (interpret) this in a Visual Studio Contention or Concurrency Report? Or some other way?
More importantly for my understanding, is there a way to see the incoming buffer that holds the data before it gets consumed by the data reader?
It seems my question was ill-informed.
The datareader reads a record at a time, but it reads it from the
underlying database driver. The database driver reads data from the
database in blocks, typically using a buffer that is 8 kilobytes.
If your result records are small and you don't get very many, they
will all fit in the buffer, and the database driver will be able to
feed them all to the data reader without having to ask the database
for more data.
If you fetch a result that is larger than the buffer, you will only
be able to read the first part of it; once no data remains in the
network buffer, the data reader will ask SQL Server to send the next
block of data.
How much data can be stored in network buffer when datareader is used
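To make the consuming side of that description concrete, here is a minimal sketch of the usual reader loop (the connection string, table, and column names are hypothetical). Each call to Read() hands back one record from whatever the driver has already buffered, and the driver fetches more packets from the server only when that buffer is exhausted:

    using System;
    using System.Data.SqlClient;

    class ReaderDemo
    {
        static void Main()
        {
            // Hypothetical connection string and query, purely for illustration.
            const string connectionString =
                "Server=.;Database=Demo;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
            {
                connection.Open();

                using (var reader = command.ExecuteReader())
                {
                    // One record per Read(); the driver refills its network
                    // buffer from SQL Server only as the buffered data is consumed.
                    while (reader.Read())
                    {
                        int id = reader.GetInt32(0);
                        string name = reader.GetString(1);
                        Console.WriteLine($"{id}: {name}");
                    }
                }
            }
        }
    }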
Not sure if this is possible, but here goes. I have a simple server set up where multiple clients can execute a program. Each time a client executes the script to start the program, a new instance of the program starts. When the client stops the program through another script, that instance of the program is killed. The problem is that if another client is running the program on the server at the same time, that instance gets killed as well. Is there any way to connect a particular instance to a particular client?
Here is more detail.
The server is used to stream media from the internet. I have streaming devices attached to TVs. When a particular channel is selected, the device sends a signal to the server, which in turn runs several scripts, one of which starts a video conversion process through a program called ffmpeg. ffmpeg converts the stream and saves it to a folder on the server, making it available to the streaming device/TV. Each time a user starts a channel, a new instance of ffmpeg starts, because it is converting a different stream. Once the user stops viewing, the device sends a signal back to the server through a PHP script, which in turn runs a script called cleanup. The cleanup script is a bat file that kills ffmpeg and deletes the files that are no longer needed. All works great except when one user elects to stop viewing while another continues: I don't know how to tell the difference between the instances of ffmpeg, and I don't want to kill all instances, just the one connected to the particular stream that needs to end. I do have the capability of obtaining each device's IP address when the user first selects the channel. Is there any way to link the IP to the particular instance?
I actually took the advice from above: I renamed each instance after the client's IP, stored the IP in a temp file, and was then able to match it to the IP making the cancel call.
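A different way to make the same link, sketched in C# purely as an illustration (the real setup uses bat and PHP scripts, and the directory, stream URL, and output path below are hypothetical), is to record the PID of the ffmpeg process started for each client IP and later kill only that PID:

    using System;
    using System.Diagnostics;
    using System.IO;

    static class StreamSessions
    {
        // One PID file per client IP, e.g. C:\sessions\192.168.1.20.pid (hypothetical path).
        const string SessionDir = @"C:\sessions";

        public static void Start(string clientIp, string streamUrl, string outputFile)
        {
            var ffmpeg = Process.Start(new ProcessStartInfo
            {
                FileName = "ffmpeg",
                Arguments = $"-i \"{streamUrl}\" -c copy \"{outputFile}\"",
                UseShellExecute = false
            });

            // Remember which ffmpeg process belongs to this client.
            File.WriteAllText(Path.Combine(SessionDir, clientIp + ".pid"),
                              ffmpeg.Id.ToString());
        }

        public static void Stop(string clientIp)
        {
            string pidFile = Path.Combine(SessionDir, clientIp + ".pid");
            if (!File.Exists(pidFile)) return;

            int pid = int.Parse(File.ReadAllText(pidFile));
            try
            {
                // Kill only the instance started for this client.
                Process.GetProcessById(pid).Kill();
            }
            catch (ArgumentException)
            {
                // The process has already exited; nothing to do.
            }
            File.Delete(pidFile);
        }
    }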
I want to write a program that is notified by the OS whenever any running process on that OS dies.
I don't want to poll and compare every time to check whether a previously existing process has died; I want my program to be alerted by the OS whenever a process terminates.
How do I go about it? Some sample code would be very helpful.
PS: Looking for approaches in Java/C++.
Sounds like you want PsSetCreateProcessNotifyRoutine(). See this article to get started:
http://www.codeproject.com/KB/threads/procmon.aspx
Under Unix, you could use the SIGCHLD signal to get notified of the death of a process. This requires, however, that the process being monitored be a child process of the monitoring process.
Under Windows, you might need to have a valid handle to the process. If you spawn the process yourself using CreateProcess, you get the handle for free; otherwise you must acquire it by other means (for example with OpenProcess). It might then be possible to wait for the process to terminate by calling WaitForSingleObject on the handle.
Sorry, I don't have any example code for this. I am not even sure that waiting on the process handle under Windows really awaits termination of the process (as opposed to some other "significant" condition that causes the process handle to enter a "signalled" state or something).
I don't have a code sample ready but one idea – on Linux – might be to find out the ID of the process you'd like to watch when first starting your watcher program (e.g. using $ pgrep) and then using inotify to watch /proc/<PID>/ – which gets deleted when the process dies. In contrast to polling, this doesn't cost any significant CPU resources.
Now, procfs is not completely supported by inotify, so I can't guarantee this approach would actually work but it is certainly worth looking into.
I've had this problem before, and it turned out that I had a connection I wasn't closing quickly enough (leaving connections open and waiting for garbage collection isn't really a best practice).
Now I'm getting it again, but I can't seem to find where I'm leaving my connections open. By the time I see the error, the database has cleared out the old connections, so I can't see each locked-up connection's last command (which was very helpful the last time I had this issue).
Any idea how I could instrument my code or database to track what's going on so I can find my offending piece of code?
The error you are providing doesn't really point to a connection that is left open; it is more likely that a query is taking longer than the application expects.
You can increase the time it waits for a response, and you can use SQL to find which queries are the most taxing.
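As a sketch of that second point (assuming you can query the server's DMVs; the connection string here is hypothetical), you could list the currently executing requests ordered by elapsed time:

    using System;
    using System.Data.SqlClient;

    class LongRunningQueries
    {
        static void Main()
        {
            // Hypothetical connection string.
            const string connectionString =
                "Server=.;Database=master;Integrated Security=true";

            // Currently executing requests, longest elapsed time first.
            const string sql = @"
                SELECT r.session_id,
                       r.total_elapsed_time,
                       t.text
                FROM sys.dm_exec_requests AS r
                CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
                ORDER BY r.total_elapsed_time DESC;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(
                            $"session {reader["session_id"]}: {reader["total_elapsed_time"]} ms");
                        Console.WriteLine(reader["text"]);
                    }
                }
            }
        }
    }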
Hopefully you have a single data access layer class, instead of a whole bunch of classes each creating its own connection, right? What language are you using? If you're using C#, the biggest cause of this problem is DataReaders being returned to the upper layers. Most likely some client class is not closing the DataReader it received from your DAL class, leaving the connection open/locked for who knows how long. Track down the DataReaders you're returning and make sure your client classes are closing/disposing of them properly.
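If the DAL really must hand readers out to callers, one common mitigation (a sketch; the class, table, and column names are hypothetical) is to open the reader with CommandBehavior.CloseConnection so that disposing the reader also releases the connection:

    using System.Data;
    using System.Data.SqlClient;

    public class CustomerDal
    {
        private readonly string _connectionString;

        public CustomerDal(string connectionString)
        {
            _connectionString = connectionString;
        }

        // The caller owns the returned reader; CloseConnection ensures the
        // underlying connection is released as soon as the reader is disposed.
        public SqlDataReader GetCustomers()
        {
            var connection = new SqlConnection(_connectionString);
            try
            {
                var command = new SqlCommand("SELECT Id, Name FROM dbo.Customers", connection);
                connection.Open();
                return command.ExecuteReader(CommandBehavior.CloseConnection);
            }
            catch
            {
                // If anything fails before the reader takes ownership,
                // dispose the connection ourselves.
                connection.Dispose();
                throw;
            }
        }
    }

    // Caller side: disposing the reader closes the connection as well.
    //
    //     using (var reader = dal.GetCustomers())
    //     {
    //         while (reader.Read()) { /* consume the row */ }
    //     }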
I'd also start thinking about redesigning your data access layer to implement the Disposable pattern and possibly return POCOs instead of Data... objects (DataTables, DataSets, DataReaders).
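And a sketch of the POCO-returning shape (again with hypothetical names), where every disposable ADO.NET object lives and dies inside the method, so callers can never leak a connection:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerRepository
    {
        private readonly string _connectionString;

        public CustomerRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        // The connection, command, and reader are all disposed here, and the
        // caller only ever sees plain objects.
        public List<Customer> GetCustomers()
        {
            var result = new List<Customer>();

            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Customers", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        result.Add(new Customer
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1)
                        });
                    }
                }
            }

            return result;
        }
    }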