Issue when playing DashCast live stream - Apache

I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, I only get a black screen, and no error message appears while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.

[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here, given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This matters only for live content, because for on-demand content all segments are assumed to be available at all times.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may differ from the system time on the server, for instance if they use different NTP servers, so system time alone is not reliable. MP4Client therefore tries to get the time from the server. It first looks for a specific "Server-UTC" HTTP header that the server may set. See for example this code. If this header is not set, it falls back to the HTTP "Date" header, even though that is not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time.
You can tell MP4Client to stop using the server information and to rely on its own system time. Since you are running client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try playing the MPD locally, without going through the web server:
MP4Client file.mpd
If that does not work, open an issue on GPAC's GitHub, providing as much information as possible, in particular the output of MP4Box -version.
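As a quick sanity check on the clock mismatch itself, you can compare the Date header your web server returns with the local UTC time (using the URL from the question; any HTTP client works):
curl -sI http://localhost/vitor/dashcast/dashcast.mpd | grep -i '^Date:'
date -u
If the two differ by roughly the drift MP4Client reports (about 3564 seconds here, so about an hour), the server's time configuration is indeed the culprit.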

Related

Splunk 7.2.9.1 Universal forwarder on SUSE Linux12.4 not communicating and forwarding logs to Indexer after certain period of time

I have noticed that the Splunk 7.2.9.1 Universal Forwarder on SUSE Linux 12.4 stops communicating with the deployment server and forwarding logs to the indexer after a certain period of time. The "splunkd" process appears to be running while this issue persists.
I have to restart the Universal Forwarder for it to resume communicating with the deployment server and forwarding logs, but it stops again after a certain period of time.
I cannot see anything specific in splunkd.log while this issue occurs.
However, I noticed the message below in watchdog.log:
06-16-2020 11:51:09.055 +0200 ERROR Watchdog - No response received from IMonitoredThread=0x7f24365fdcd0 within 8000 ms. Looks like thread name='Shutdown' is busy !? Starting to trace with 8000 ms interval.
Can somebody help me understand what is causing this issue?
This appears to be a Known Issue. From the 7.2.9.1 release notes:
Universal Forwarders stop sending data repeatedly throughout the day
Workaround: In limits.conf, try changing file_tracking_db_threshold_mb
in the [inputproc] stanza to a lower value.
I did not find a version where this is not listed as a known problem.
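For reference, the workaround amounts to an edit of limits.conf on the forwarder along the following lines, followed by a restart of the forwarder ($SPLUNK_HOME/bin/splunk restart). The value shown is only an illustrative example, since the release notes just say 'a lower value':
[inputproc]
file_tracking_db_threshold_mb = 250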

ColdFusion 2018 - Requests Multiply Executed

With a new project we encountered some strange behaviour in our ColdFusion application. Whenever a single request is initiated from the browser, the code of the CFML templates is executed multiple times. Looking at the corresponding log files, we found that the same request does indeed trigger the evaluation in our application several times, so one request generates several entries. This is especially the case for long-running requests, such as database imports.
The ColdFusion application implements a REST service, but even when manually requesting a resource of the same application, such as a specific CFML page, the code gets executed an unknown number of times (variable initializations, database write operations, etc. take place), and if the request runs too long (capped at around 4-6 seconds) there is no response to the browser.
About the infrastructure:
The application is ColdFusion 2018 with Tomcat (Standard Edition).
The web server is Apache (2.4.6).
Everything runs on a Linux machine with CentOS 7.7.
The corresponding Java version is 11.0.4.
Our best guess is that there might be some miscommunication between the ColdFusion connector and the Apache web server. We searched for configuration parameters that could cause the problem, without success. On a Windows installation we did not encounter this error.
Anyone got any idea?
We just found our answer in the following post:
Link to Solution

Apache Request/Response Too Slow

I have an Apache server with 16 GB of RAM. The script cap.php returns a very small chunk of data (500 B). It opens a MySQL connection and makes a simple query.
However, the response time from the server is, in my opinion, too long.
I attach a screenshot of the Developer Tools panel in Chrome.
Besides SSL and the TTFB, there is a strange delay of 300 ms (Stalled).
If I try a curl from the WebServer:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -k -H 'miyazaki' https://127.0.0.1/ui/cap.php
Lookup time: 0.000
Connect time: 0.000
PreXfer time: 0.182
StartXfer time: 0.266
Total time: 0.266
Does anyone know what that is?
Eventually, I found that when you use SSL it really does matter to switch on the KeepAlive directive in Apache, and things get much better. See the picture below.
According to the Chrome documentation:
Stalled/Blocking
Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation.
Additionally, this time will include when the browser is waiting for
an already established connection to become available for re-use,
obeying Chrome's maximum six TCP connection per origin rule.
So this appears to be a client-side issue with Chrome talking to the network rather than a server config issue. As you are only making one request, I think we can rule out the TCP limit per origin (unless you have lots of other tabs using up these connections), so I would guess either limitations on your PC (network card, RAM, CPU) or infrastructure issues (e.g. you connect via a proxy and it takes time to set up that connection).
Your curl request doesn't seem to show this delay, as it has just a 0.182 s wait before sending the request (which is easily explained by the HTTPS negotiation) and then a 0.266 s total time to download (including the 0.182 s). This compares with 0.700 seconds when using Chrome, so I don't understand why you say "total time is similar" when to me it clearly is not.
Finally, I do not understand your follow-up answer. It looks to me like you have made a request shortly after another recent request, as it has skipped the whole network connection stage (including any grey stalling, blue DNS lookup, orange initial connection and purple HTTPS connection). So of course this is quicker. But it's not comparing like for like with the first screenshot in your question, and it is not addressing your question.
But yes, you absolutely should be using keep-alives (they are on by default in most web servers, so it usually takes extra effort to turn them off) and HTTPS session resumption (not on by default unless you explicitly add it to your HTTPS config) to benefit any additional requests sent shortly after the first. But these will not help the first connection of the session.
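For completeness, the keep-alive side of this is controlled by the Apache directives below (the values shown are the usual defaults, not recommendations tuned for your server), and TLS session resumption by mod_ssl's session cache (the cache path is a placeholder, adjust it to your layout):
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
SSLSessionCache shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300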

Weblogic 12.2.1 managed server access.log not updating

I have developed some JAX-RS web services and deployed the WAR file to a managed server on WebLogic 12.2.1. When I call a web service, either through a client program or via a web browser, I notice that nothing gets written to E:\MLM\MyDomain\servers\MyAppSrv01\logs\access.log. This file stays empty all the time. When the next day comes (at 12:00 am), the file rolls over to access.logNNNNN (e.g. access.log00004) and then I can see some of the GET and POST calls of the previous day appearing in access.logNNNNN. The strange thing is that only some of the web service calls appear in access.logNNNNN, even though I make many calls throughout testing. What could be the problem?
Thanks in advance.
You are not seeing access log entries at run time because of the log buffer size that is defined. To reduce I/O, WebLogic writes entries to a buffer first and only writes them to the access.log file once that limit is reached.
Log Buffer Size
The maximum size (in kilobytes) of the buffer that stores HTTP requests. When the buffer reaches this size, the server writes the data to the HTTP log file. Use the LogFileFlushSecs property to determine the frequency with which the server checks the size of the buffer.
You can set this value to 0 for run-time logging.
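If you prefer scripting this instead of using the console, a rough WLST sketch is below. Treat the MBean path and the BufferSizeKB attribute as assumptions to verify against your WebLogic version's WebServerLog documentation; the credentials, URL and server name are placeholders:
java weblogic.WLST
connect('weblogic','password','t3://adminhost:7001')
edit()
startEdit()
# HTTP access log settings live on the server's WebServerLog bean (verify the path on your version)
cd('/Servers/MyAppSrv01/WebServer/MyAppSrv01/WebServerLog/MyAppSrv01')
cmo.setBufferSizeKB(0)
save()
activate()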

How to prevent the stdout.out in WebLogic from growing heavily in size (Windows)

I have deployed a system integrated with WebLogic, but I am facing a problem: WebLogic keeps growing the stdout.out file heavily (by gigabytes per week), which causes the system to load more and more slowly.
Is there any way to prevent it from growing so heavily, or to redirect it into a .log file?
Thanks a lot.
As David Herget says above, using the WebLogic Scripting Tool (WLST) to redirect StdOut and StdErr did not actually work for me either; I also had to do so through the web console (even though they appeared to already be set there) and restart the relevant JVMs.
I can't reply to David's comment above due to being a newbie. [Edited since for clarity]
Not totally sure I fully understand your question.
Are you talking about the {server_name}.out file located in {Domain_Path}/servers/{server_name}/logs?
If so, I've never found any way to rotate those logs automatically, so I run a script each day to rotate them (basically copying the file to another name, zipping it and echoing NULL into the original file, then erasing the older archives), roughly as sketched below.
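A minimal sketch of that kind of daily script (paths and retention are placeholders, and it assumes a Unix-like box; the point is to truncate the live file in place rather than delete it, since the server keeps its file handle open):
#!/bin/sh
OUT=/path/to/domain/servers/myserver/logs/myserver.out    # placeholder path
STAMP=$(date +%Y%m%d)
cp "$OUT" "$OUT.$STAMP" && gzip "$OUT.$STAMP"              # keep a compressed copy
: > "$OUT"                                                 # empty the original file in place
find "$(dirname "$OUT")" -name "$(basename "$OUT").*.gz" -mtime +7 -delete   # drop archives older than a week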
If you are talking about redirecting StdOut to the logs though, that can be done in the console for each server, in the Logging tab, by checking "Redirect stdout logging enabled". Rotation of those logs can also be configured in that tab.
On that note, StdErr can also be redirected, but not from the console (in WL9). You have to set "RedirectStderrToServerLogEnabled" to true in the MBean tree via WLST (it is located at /Servers/{server_name}/Log/{server_name}).
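A sketch of that WLST change, using the attribute and MBean path mentioned above (credentials, URL and server name are placeholders; the stdout counterpart attribute is an assumption to verify on your version):
java weblogic.WLST
connect('weblogic','password','t3://adminhost:7001')
edit()
startEdit()
cd('/Servers/myserver/Log/myserver')
cmo.setRedirectStderrToServerLogEnabled(true)
cmo.setRedirectStdoutToServerLogEnabled(true)   # stdout equivalent, also settable from the console
save()
activate()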
I know the question was asked a long time ago, but hoping it helps nonetheless.
WebLogic provides log file rotation based on file size or time interval.
You can try rotating the log files based on size. You would need to configure the log rotation policy from the admin console; please refer to the link below for further details.
http://docs.oracle.com/cd/E12840_01/wls/docs103/ConsoleHelp/taskhelp/logging/RotateLogFiles.html
If you want to rotate the log files on demand, you can use the WLST script below.
C:\>java weblogic.WLST
#connect WLST to an Administration Server
wls:/offline> connect('username','password')
#navigate to the ServerRuntime MBean hierarchy
wls:/mydomain/serverConfig> serverRuntime()
wls:/mydomain/serverRuntime>ls()
#navigate to the server LogRuntimeMBean
wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
-r-- Name myserver
-r-- Type LogRuntime
-r-x forceLogRotation java.lang.Void :
#force the immediate rotation of the server log file
wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
wls:/mydomain/serverRuntime/LogRuntime/myserver>
http://docs.oracle.com/cd/E12840_01/wls/docs103/logging/config_logs.html#wp1001654