I am writing a program to view an MJPEG stream with VLC. When running VLC directly from the command line I get the error message [mjpeg # 0x10203ea00] No JPEG data found in image over and over again (with different PIDs). I would like to get rid of this, as I think all of that text output is bogging down my program, and it makes my own text output impossible to see (it is gone about half a second after it is written to the console).
I am connecting to Blue Iris, and I am implementing my program with vlcj.
The stream URL is http://10.10.80.39:8080/mjpg/cam1/video.mjpeg. I have tried all of the quiet options I can find and set the verbosity to 0; I am at a loss as to how to suppress this error.
I am running VLC 2.1. The error happens on multiple computers and multiple operating systems.
You simply can't disable everything that VLC, or the libraries that VLC depends on, may emit. Not all of the log/error messages you see can be controlled by setting VLC's log level.
For me the problem is mainly libdvdnav spewing irrelevant messages to stderr.
You say you're using vlcj; well, I too wanted a way to easily ignore those error messages from inside my Java applications. With the latest vlcj-git (at the time of writing), there is an experimental NativeStreams [1] class that might help you.
This class uses JNA to wrap the "C" standard library and programmatically redirect either or both of the native process's stdout and stderr streams.
You cannot simply redirect System.out and System.err as some might expect, because these messages come from native code outside the JVM, which of course does not use System.out or System.err.
You can redirect to log files (which could just keep growing), or to "/dev/null".
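At the C level such a redirect essentially boils down to reopening the process-wide stream on a different target; here is a minimal sketch in plain C (an illustration of the mechanism only, not the actual vlcj/JNA code):

#include <stdio.h>

int main(void) {
    // Reopen the native stderr stream on /dev/null (POSIX). From this point on,
    // anything the process writes to stderr, including output from native
    // libraries, is discarded.
    if (freopen("/dev/null", "w", stderr) == NULL) {
        perror("freopen");
        return 1;
    }
    fprintf(stderr, "this line is silently discarded\n");
    printf("stdout is untouched\n");
    return 0;
}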
The downside is that if you redirect the native streams, you also inevitably redirect the corresponding Java stream, so you would lose your own application output. In my own applications that's not a problem, because I log to stdout (which I don't redirect), whereas the VLC messages I don't want fortuitously go to stderr (which I do redirect).
You could also just redirect your Java process's output streams in the usual way when you launch the JVM, but I wanted to be able to do this programmatically rather than having to write a shell script.
So it's not an ideal solution, but it works for me (only tested on Linux).
[1] https://github.com/caprica/vlcj/blob/a95682d5cd0fd8ac1d4d9b7a768f4b5600c87f62/src/main/java/uk/co/caprica/vlcj/runtime/streams/NativeStreams.java
So we are doing distributed testing of our web app using JMeter. For that you need to have the jmeter-server.bat file running in the background, as it acts as a sort of listener. The problem arises when one of the 4 slave machines restarts due to the load: the test is effectively stuck right there, as the master machine expects some output from the 4th machine. Currently the automation is done via Ansible playbooks which are called from Jenkins. There are more or less 15 tests that are downstream of one another, so even if one test is stuck, time is wasted until someone checks on the machines.
Things I've tried so far:
I've tried using the Windows Task Scheduler and set jmeter-server.bat to run without any user logged in, but it starts the bat file in the background, which in turn spawns all the child processes in the background as well, i.e. it starts Selenium Chrome in headless mode.
I've tried adding jmeter-server.bat to startup and configuring the system to auto-logon without a password, so that a session is triggered which runs the startup file. Unfortunately the idea was scrapped by IT for being insecure.
I've tried running it from the Ansible playbook with the win_command module, but it again gets stuck, as the batch file never returns anything.
I've also created a service for the bat file, but again the child processes started in the background.
The problem arises when one of the 4 slave machines restarts due to the load
Instead of trying to work around the issue I would rather recommend finding the root cause and fixing it.
Make sure to follow JMeter Best Practices
Configure Java to take a heap dump on failure
Inspect Windows PerfMon and operating system/application logs
Check for the presence of .hprof files in the "bin" folder of your JMeter installation and see what they say
In general, using Selenium to generate the load is not recommended; I would rather suggest using JMeter's HTTP Request samplers for that. Given that you properly configure JMeter to behave like a real browser, from the perspective of the system under test there won't be any difference whether the load comes from HTTP Request samplers or from a real browser.
The documentation on the WebDriver Sampler states the same:
Note: It is NOT the intention of this project to replace the HTTP Samplers included in JMeter. Rather it is meant to complement them by measuring the end user load time.
I am building a command line tool, using only command line tools (mainly clang), in ObjC++ with the AudioUnit v2 (C) API. Output to the speakers works fine, but the input-from-microphone callback is never invoked. The iTerm or Terminal hosts have access according to Settings. The executable also has an embedded Info.plist, although I do not think this is relevant.
The precise security model is not clear to me; it looks like it would be a major security hole if it worked (anything run from the terminal would have access). My guess is that the process launched by an "App" has permissions which then propagate to any child process. However, this view is confused by another case, where an executable I generate does network access (as it happens, only to localhost, because it is a regression test), and in that case it is the executable asking for network access, not the terminal.
The source code is actually written in Felix, which is translated to C++ and then compiled and linked by clang with the -ObjC option, so embedded Objective-C is supported. The translator is mature enough to have reasonable confidence in its correctness in this kind of simple application. The AudioUnit configuration for the microphone input is:
// configure
var outputElement = 0u32;
var inputElement = 1u32;
// establish callback
status = AudioUnitSetProperty(
  outputAudioUnit,
  kAudioOutputUnitProperty_SetInputCallback,
  kAudioUnitScope_Global,
  inputElement,
  (&inputCallback).address,
  C_hack::sizeof[AURenderCallbackStruct].uint32
);
assert noErr == status;
The inputElement is enabled and the outputElement disabled. A second audio unit, constructed later with similar technology, pumps a sine wave to the speakers, and that works fine. The actual callback just prints a diagnostic and exits, but the diagnostic is never seen. Originally the terminal had no permissions, and we guessed the code was correct but failed due to lack of permission to access the microphone. The executable still has no permission, but the terminal does now (if I try to run the executable from the file manager, a terminal pops up).
No errors are reported at any stage. The callback simply isn't invoked.
To get a callback, you need to
1. enable IO
2. set the audio unit input device
Number 2 trips people up because it's not necessary for output [which sensibly defaults to the default output device], nor is it necessary on iOS, probably because there is no concept of an Audio Device there, at least not in the AudioUnit API.
Surprisingly, both of these requirements are actually documented! Technote 2091 covers the steps needed to record audio using AudioUnits, and code listings 3 and 4 have sample code that enables IO and sets the input device. Listing 4 sets the audio unit's input device to whatever the default input device is, but any input device will do.
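As a rough sketch of those two steps in plain C (following the general approach of the Technote listings rather than copying them; inputUnit stands in for your AUHAL audio unit, and error handling is omitted):

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

// Enable input on an AUHAL unit and point it at the default input device.
static OSStatus configure_input(AudioUnit inputUnit) {
    // Step 1: enable IO on the input element (1), disable it on the output element (0).
    UInt32 enableIO = 1;
    UInt32 disableIO = 0;
    AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enableIO, sizeof(enableIO));
    AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &disableIO, sizeof(disableIO));

    // Step 2: look up the default input device and make it the unit's current device.
    AudioDeviceID inputDevice = kAudioObjectUnknown;
    UInt32 size = sizeof(inputDevice);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &inputDevice);

    return AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global, 0, &inputDevice, sizeof(inputDevice));
}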
Since macOS Mojave (10.14), you need an NSMicrophoneUsageDescription string in your Info.plist. Without it, your app is aborted with an exception. With it, the user is shown a prompt requesting permission to access input devices. You can control when this happens using the code found here.
For a command line tool, you can embed an Info.plist file during the link stage (for example with the -sectcreate __TEXT __info_plist linker flag).
On Catalina you also seem to need to opt into audio-input-enabled sandboxing or the hardened runtime (or both!). Without one of these your callback is called, but with silence! Both of these runtime environments are enabled using "entitlements", which are metadata embedded in your app via code signing, so you will need some form of code signing. I don't think this necessarily means you will need a certificate from Apple; there is "local"/ad-hoc code signing, which seems to embed entitlements without a certificate, although I'm not sure how distributable the resulting binaries will be.
A bit unsure where to look for this one...
Context:
an HTML5 web page that uses the HTML5 EventSource / server-sent events mechanism to get refresh notifications
an OpenWrt Barrier Breaker server, running uHTTPd as the web server
a two-level CGI script that provides the server-sent events:
the CGI is a shell script (ash, not bash) that parses QUERY_STRING, and calls...
a C application that does the actual data extraction (from an SQLite database) and pushes the data to the web page
Everything works, except for one little detail: when the web page is closed, the C application keeps running. Since it doesn't expect any user input, its current structure is a simple while(1), so after some time the OpenWrt box has dozens of copies of the app running.
So the question: how can the application be changed to detect that the client isn't there anymore, and that it should quit?
Thanks
[Edit]
Since posting this a few hours ago, I have investigated whether the information is somehow available in the script's input stream. It appears it isn't.
I also found http://html5doctor.com/server-sent-events/, which describes a strategy to do exactly this in a Node.js environment, but I have no idea how to translate it to a script-based one.
[/Edit]
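(For what it's worth, one common pattern, assuming the web server closes the CGI's stdout pipe once the browser disconnects, is to treat a failed write as the signal to quit; whether uHTTPd actually behaves this way would need to be verified. A minimal C sketch of that idea:)

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    // Assumes the server closes our stdout when the client disconnects.
    // Ignore SIGPIPE so a write to the closed pipe returns an error (EPIPE)
    // instead of killing the process silently.
    signal(SIGPIPE, SIG_IGN);

    for (;;) {
        // ... pull the real data out of the SQLite database here ...
        int n = printf("data: payload\n\n");
        if (n < 0 || fflush(stdout) == EOF) {
            // stdout is gone: the client has disconnected, so stop.
            exit(0);
        }
        sleep(1);
    }
}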
My iOS application, among its features, downloads files from a specific server. This downloading occurs entirely in the background while the user is working in the app. When a download is complete, the resource associated with the file appears on the app screen.
My users report some misbehavior involving missing resources that I could not reproduce. Some side information leads me to suspect that the problem is caused by the download of the resource's file being aborted mid-way, leaving the app with a partially downloaded file that never gets completed.
To confirm the hypothesis, to make sure any fix works, and to test for such random network vanishing under my feet, I would like to simulate the loss of the network in my test environment: the test server is web sharing on my development Mac, and the test device is the iOS Simulator running on the same Mac.
Is there a more convenient way to do that, than manually turning web sharing off on a breakpoint?
Depending on how you're downloading the file, one possible option would be to set the callback delegate to null halfway through the download. It would still download the data, but your application would simply stop receiving callbacks. That said, I don't know if that matches how the application would behave if the connection were truly dropped.
Another option would be to temporarily point the download request at some random file on an external web server, then halfway through just disconnect your computer from the internet. I've done that to test network connectivity issues and it usually works. The interesting problem in your case is that you're downloading from your own computer, so disconnecting won't help. This would just be so you can determine the order of callbacks within the application when this happens (does it make any callbacks at all? In what order?), so that you can simulate that behavior when actually pointing at your test server.
Combine both options together, I guess, to get the best solution.
I have an application which works with sockets and reads/writes data. It uses the Foundation framework combined with CFNetwork and stdio.
Here is the issue: when it is launched from the console (a bash shell) it works 100% fine and there is nothing wrong with it. However, when it is invoked by another application via NSTask, madness begins. The whole application goes insane: it only reads the socket once and then hangs (it is meant to exit after it is done).
This application does not rely on environment variables or anything else like that. It is not a user issue either. When it is launched, it sends a simple request to the server, printf's the response, and reads again. This happens until a termination packet is received.
I am really confused, and it feels like there is something inside the framework which makes the app insane just to piss the programmer off.
By the way, I'm on Mac OS X Snow Leopard and the application is for the same platform.
EDIT 1: Redirecting stdout to an NSPipe causes it. But why?
libc treats a pipe/file and a console connected to a (pseudo) terminal differently. In particular, the default buffering policy is different. See the extensive discussion in this Stack Overflow Q&A.
So it's perfectly conceivable that a program which works when connected to a (pseudo) terminal won't work with a pipe. If you need more specific advice, you need to post (at least a skeleton of) your code.
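A minimal sketch of the usual C-level workaround, assuming the symptom comes from stdout switching to full buffering when attached to a pipe (setvbuf and fflush are plain stdio, nothing Foundation- or NSTask-specific):

#include <stdio.h>

int main(void) {
    // When stdout is a pipe rather than a terminal, libc typically switches
    // from line buffering to full buffering, so printf output can sit in the
    // buffer indefinitely. Force line buffering instead (or _IONBF for none)...
    setvbuf(stdout, NULL, _IOLBF, 0);

    printf("response received\n");

    // ...or flush explicitly after each write.
    fflush(stdout);
    return 0;
}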