Suddenly there is a PermissionDeniedError and getUserMedia error with RTCMultiConnection, while everything was working almost perfectly before.
And not only in Chrome.
Considering that the API is experimental, that restrictions and browser compatibility keep changing, and that this question has been asked before without any usable reply for this case, I am taking the risk of asking.
I don't think that the errors have to do with
getUserMedia() no longer works on insecure origins.
The above problem appeared in
Opera 34.0 and Chrome 47, while Firefox 40 is working fine.
It is not an application bug or camera incompatibility, because I also tested at https://jsfiddle.net/zar6fg60/, with both a desktop camera and a laptop one, and got the same errors below.
Console log errors
name PermissionDeniedError
    connection.onMediaError    # RTCMultiConnection.js:5592
    mediaConfig.onerror        # RTCMultiConnection.js:594
    (anonymous function)       # RTCMultiConnection.js:3931
    getUserMedia               # RTCMultiConnection.js:3930
    _captureUserMedia          # RTCMultiConnection.js:678
    captureUserMedia           # RTCMultiConnection.js:503
    (anonymous function)       # RTCMultiConnection.js:118
    initRTCMultiSession        # RTCMultiConnection.js:228
    connection.open            # RTCMultiConnection.js:108
    _.onclick                  # inter_stream.js:240
RTCMultiConnection.js:5593 constraintName {
"audio": {
"mandatory": {},
"optional": [
{
"chromeRenderToAssociatedSink": true
}
]
},
"video": true
}
connection.onMediaError    # RTCMultiConnection.js:5593 (followed by the same stack trace as above)
RTCMultiConnection.js:5594 message Either:
Media resolutions are not permitted.
Another application is using same media device.
Media device is not attached or drivers not installed.
You denied access once and it is still denied.
Only secure origins are allowed (see: https://goo.gl/Y0ZkNV).
connection.onMediaError    # RTCMultiConnection.js:5594 (followed by the same stack trace as above)
RTCMultiConnection.js:5595 original session Object {audio: true, video: true}
Solution
Updated to a secure origin (HTTPS) and everything is working well right now, thanks to Muaz Khan. Chrome shows a notice about secure origins, and there is a new W3C secure-context requirement for media access on non-secure URLs.
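Note that Chrome treats http://localhost as a secure origin, so plain HTTP is fine for purely local testing; for anything else, a quick way to get a secure origin is to serve the page over HTTPS yourself. A minimal sketch in Python, assuming you have generated self-signed cert.pem/key.pem files (the file names and port are placeholders, not part of the original fix):

# serve_https.py -- development-only HTTPS static server so getUserMedia runs on a secure origin
import http.server
import ssl

httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")  # self-signed is fine for local testing
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
print("Serving the current directory at https://localhost:8443/")
httpd.serve_forever()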
Please make sure that you're using RTCMultiConnection v2.2.2.
Make sure that your domain is allowed for webcam (Video): chrome://settings/contentExceptions#media-stream-camera
You seem to be using HTTPS, which makes sense.
You seem to be using {audio:true, video:true}, so no "screen:true" exceptions apply here!
Please try the AppRTC demo, which is built using RTCMultiConnection v2.2.2.
Can you please try this demo to see the number of audio/video devices available on your system: https://www.webrtc-experiment.com/demos/MediaStreamTrack.getSources.html
If the webcam is denied in Chrome, you'll see isWebcamAlreadyCaptured == false here: https://stackoverflow.com/a/30047627/552182
Additionally:
Please share your browser version: https://www.webrtc-experiment.com/DetectRTC/
Please make sure that another application (Firefox, etc.) is NOT using the same camera.
Related
I have a Flask-powered app and I'm trying to enable OIDC SSO for it. I opted for WSO2 as the identity server. I created a callback URL and configured the necessary settings in both the Identity Server and the Flask app, as shown below. The app gets through the credential login page, and after that I get an SSL certificate verification error.
My try:
I tried using self-signed certificates and app.run(ssl_context='adhoc'); neither worked.
Code Snippet:
from flask import Flask, g
from flask_oidc import OpenIDConnect
import logging  # needed for logging.basicConfig below
# import ssl
logging.basicConfig(level=logging.DEBUG)
app = Flask(__name__)
app.config.update({
'SECRET_KEY': 'SomethingNotEntirelySecret',
'TESTING': True,
'DEBUG': True,
'OIDC_CLIENT_SECRETS': 'client_secrets.json',
'OIDC_ID_TOKEN_COOKIE_SECURE': False,
'OIDC_REQUIRE_VERIFIED_EMAIL': False,
})
oidc = OpenIDConnect(app)
@app.route('/private')
@oidc.require_login
def hello_me():
# import pdb;pdb.set_trace()
info = oidc.user_getinfo(['email', 'openid_id'])
return ('Hello, %s (%s)! Return' %
(info.get('email'), info.get('openid_id')))
if __name__ == '__main__':
# app.run(host='sanudev', debug=True)
# app.run(debug=True)
# app.run(ssl_context='adhoc')
app.run(ssl_context=('cert.pem', 'key.pem'))
# app.run(ssl_context=('cert.pem', 'key.pem'))
Client Info:
{
"web": {
"auth_uri": "https://localhost:9443/oauth2/authorize",
"client_id": "hXCcX_N75aIygBIY7IwnWRtRpGwa",
"client_secret": "8uMLQ92Pm8_dPEjmGSoGF7Y6fn8a",
"redirect_uris": [
"https://sanudev:5000/oidc_callback"
],
"userinfo_uri": "https://localhost:9443/oauth2/userinfo",
"token_uri": "https://localhost:9443/oauth2/token",
"token_introspection_uri": "https://localhost:9443/oauth2/introspect"
}
}
App Info:
python 3.8
Flask 1.1.2
Answering my own question to reach the community effectively; here is where I got stuck and the story behind the fix.
TL;DR:
The SSL issue appeared because, in the OIDC flow, the WSO2 server has to transfer the secure auth token only through an SSL tunnel. This is mandated by the standard for security purposes.
The Carbon server has an SSL certificate (a self-signed one); to make the secure token transfer work through that SSL tunnel, the client also has to be configured with at least a self-signed certificate.
Since I was using the flask-oidc library, there is a provision to allow that; please refer to the configuration here.
{
"web": {
"auth_uri": "https://localhost:9443/oauth2/authorize",
"client_id": "someid",
"client_secret": "somesecret",
"redirect_uris": [
"https://localhost:5000/oidc_callback"
],
"userinfo_uri": "http://localhost:9763/oauth2/userinfo",
"token_uri": "http://localhost:9763/oauth2/token",
"token_introspection_uri": "http://localhost:9763/oauth2/introspect",
"issuer": "https://localhost:9443/oauth2/token" # This can solve your issue
}
}
For quick development purposes you can enable a secure HTTPS connection by adding the ad-hoc config to the Flask app run settings.
if __name__ == '__main__':
# app.run(ssl_context=('cert.pem', 'key.pem')) # for self signed cert
app.run(debug=True, ssl_context='adhoc') # Adhoc way of making https
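Note that Werkzeug's 'adhoc' certificate generation relies on an extra package being installed (pyOpenSSL on older versions, cryptography on newer ones), so if app.run(ssl_context='adhoc') complains about a missing dependency, install it first, for example:

pip install pyopenssl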
Let me preface this answer with one caveat:
DO NOT DO THIS IN PRODUCTION ENVIRONMENTS
No, seriously, do not do this in production; this should only be done for development purposes.
Anyway, open the oauth2client\transport.py file.
You're going to see this file's location in the error that gets spit out; for me it was in my Anaconda env:
AppData\Local\Continuum\anaconda3\envs\conda_env\lib\site-packages\oauth2client\transport.py
Find this function (line 73 for me):
def get_http_object(*args, **kwargs):
"""Return a new HTTP object.
Args:
*args: tuple, The positional arguments to be passed when
contructing a new HTTP object.
**kwargs: dict, The keyword arguments to be passed when
contructing a new HTTP object.
Returns:
httplib2.Http, an HTTP object.
"""
return httplib2.Http(*args, **kwargs)
change the return to
return httplib2.Http(*args, **kwargs, disable_ssl_certificate_validation=True)
You may need to do the same thing to line 445 of flask_oidc/__init__.py
credentials.refresh(httplib2.Http(disable_ssl_certificate_validation=True))
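A slightly less invasive variant (still development only, and only a sketch, assuming you can export the WSO2 server's certificate to a file) is to monkey-patch oauth2client's transport at startup instead of editing files inside site-packages. The certificate path below is a placeholder, and the direct httplib2.Http() call inside flask_oidc mentioned above is not covered by this patch:

# dev_ssl_patch.py -- development-only workaround; import this before creating OpenIDConnect(app)
import httplib2
from oauth2client import transport

_original_get_http_object = transport.get_http_object

def _patched_get_http_object(*args, **kwargs):
    # Prefer trusting the IdP's self-signed certificate explicitly...
    kwargs.setdefault("ca_certs", "wso2carbon_public.pem")  # placeholder path to the exported cert
    # ...or, as a last resort, uncomment to disable verification entirely (never in production):
    # kwargs.setdefault("disable_ssl_certificate_validation", True)
    return _original_get_http_object(*args, **kwargs)

transport.get_http_object = _patched_get_http_object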
Upgrading the certifi package should solve the problem.
pip install --upgrade certifi
It worked for me when I faced exactly the same issue.
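If you want to confirm which CA bundle is actually in use after the upgrade, a quick check (just a sketch):

import certifi
print(certifi.where())  # absolute path to the CA bundle that certifi provides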
We use Couchbase as session storage for mod_perl scripts. To avoid delays on clients caused by waiting for a new connection, we preconnect to Couchbase in the child_init Apache stage. So during an Apache restart / new child creation, each child connects to Couchbase automatically and later uses that connection during its lifetime.
Generally everything works fine, but sometimes we get the following error during that preconnection:
Couldn't connect: 0x13 (Operation not supported) at /perl/lib64/perl5/Couchbase/Bucket.pm line 38.
Usually it appears during an Apache restart and on several (or dozens of) children, and almost never on one child only. Usually restarting Apache again solves the problem.
What can cause such a problem? Is it a problem with the code, the server configuration, or the Couchbase server itself?
Maybe it is somehow caused by a lot of reconnections at the same time? Some ulimit settings or SELinux restrictions?
UPD: versions
OS:
Centos 6, 2.6.32-358.2.1.el6.x86_64
libcouchbase:
libcouchbase-devel.x86_64 2.4.7-1.el6
libcouchbase2-core.x86_64 2.4.7-1.el6
libcouchbase2-libevent.x86_64 2.4.7-1.el6
couchbase server:
2.2.0 community edition (build-837)
SDK:
perl (Couchbase::Core v2.0.2)
connection code (isolated & simplified):
# in mod_perl environment
use Couchbase;
use Couchbase::Bucket;
use Couchbase::Document;
use Apache2::ServerUtil ();
use Apache2::Const -compile => qw(OK);  # needed for Apache2::Const::OK below
my $cb = undef;
# connection handler, initialized once, used during apache child lifetime
sub connect_couchbase_on_child_init {
my ($child_pool, $s) = @_;
my $dsn = 'couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1';
eval { $cb = Couchbase::Bucket->new($dsn); };
# here we get the occasional warnings during apache restarts
if ($@) { warn "COUCHBASE CONNECTION ERROR! $@"; $cb = undef; }
return Apache2::Const::OK;
}
Apache2::ServerUtil->server->push_handlers(PerlChildInitHandler => \&connect_couchbase_on_child_init);
# in request handlers it used with the following calls (only if connected):
# $doc = Couchbase::Document->new($key);
# $cb->get($doc);
# ...
# $cb->replace($doc);
# ...
# $cb->insert($doc);
# ...
# $cb->remove($doc);
Because you are using server 2.2.0 and because this seems to happen when you are connecting many clients at once, my theory is that you are receiving the last error from the server. The current client bootstrap process attempts using bootstrap over memcached (which is only supported from version >= 2.5.0 of the server), that fails and it attempts to use 'terse' bootstrapping (again, only supported on >= 2.5.0 of the server) and finally 'classic' HTTP (which is available on all versions).
Add the following options to your DSN/connection string to cut out some of the steps for your server. Note that should you ever upgrade to >= 2.5 these options should be removed:
bootstrap_on=http Does not try memcached bootstrap
http_urlmode=2 Uses the pre-2.5 style of bootstrapping by default
These two options will not necessarily fix your issue, but they will at least cut out some of the initial connection time, and perhaps show a clearer reason for the error (you can also set LCB_LOGLEVEL=5 in the environment to get actual logging).
In your case, the connection string would be:
couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1&bootstrap_on=http&http_urlmode=2
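For the LCB_LOGLEVEL suggestion above, the variable has to end up in the Apache children's environment; on CentOS 6 one place for that might be the init script's sysconfig file (an assumption about how your httpd is started):

# /etc/sysconfig/httpd -- sourced by the httpd init script on CentOS 6
export LCB_LOGLEVEL=5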
I'm trying to set up automated testing with PhantomJS, Behat and Sahi on my vagrant machine.
I'm getting the following output when trying to run a test with Behat:
[Behat\SahiClient\Exception\ConnectionException]
Exception has been thrown in "afterStep" hook, defined in FeatureContext::afterStep()
Connection time limit reached
Here is my userdata.properties:
# dirs. Relative paths are relative to userdata dir. Separate directories with semi-colon
scripts.dir=scripts;
# default log directory.
logs.dir=logs
# Directory where auto generated ssl cerificates are stored
certs.dir=certs
# Use external proxy server for http
ext.http.proxy.enable=false
ext.http.proxy.host=
ext.http.proxy.port=
ext.http.proxy.auth.enable=false
ext.http.proxy.auth.name=kamlesh
ext.http.proxy.auth.password=password
# Use external proxy server for https
ext.https.proxy.enable=false
ext.https.proxy.host=
ext.https.proxy.port=
ext.https.proxy.auth.enable=false
ext.https.proxy.auth.name=kamlesh
ext.https.proxy.auth.password=password
# There is only one bypass list for both secure and insecure.
ext.http.both.proxy.bypass_hosts=localhost|127.0.0.1|*.internaldomain.com
# Mark this property true to disable the proxy alert
proxy_alert.disabled=false
And my browser_types.xml:
<browserTypes>
<browserType>
<name>phantomjs</name>
<displayName>PhantomJS</displayName>
<icon>safari.png</icon>
<path>/usr/bin/phantomjs</path>
<options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
<processName>phantomjs</processName>
<capacity>100</capacity>
<force>true</force>
</browserType>
</browserTypes>
behat.yml:
default:
extensions:
Behat\MinkExtension\Extension:
javascript_session: sahi
browser_name: phantomjs
goutte: ~
sahi:
host: localhost
port: 9999
Sahi run output:
--------
SAHI_HOME: ..
SAHI_USERDATA_DIR: ../userdata
SAHI_EXT_CLASS_PATH:
--------
Sahi properties file = /usr/local/sahi/config/sahi.properties
Sahi user properties file = /usr/local/sahi/userdata/config/userdata.properties
Added shutdown hook.
>>>> Sahi OS v5.0 started. Listening on port: 9999
>>>> Configure your browser to use this server and port as its proxy
>>>> Browse any page and CTRL-ALT-DblClick on the page to bring up the Sahi Controller
-----
Reading browser types from: /usr/local/sahi/userdata/config/browser_types.xml
-----
I've tried reinstalling a bunch of stuff and playing around with the ports, processes, and proxy settings; nothing helped.
Your Vagrant machine comes with an empty database or none at all, so when you try to use your app, e.g. log in with some known user, it will crash because it won't find that user!
All the best ;)
Since version 4.3.2 the browserType settings have changed: there is no <force> tag any more, so please check your browser_types.xml (a corrected entry is sketched below).
https://sahipro.com/docs/using-sahi/sahi-headless-execution-with-phantomjs.html#Documentation (since Sahi Pro V4.3.2)
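For example, the entry from the question with the <force> tag dropped (a sketch; the other settings are unchanged):

<browserType>
    <name>phantomjs</name>
    <displayName>PhantomJS</displayName>
    <icon>safari.png</icon>
    <path>/usr/bin/phantomjs</path>
    <options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
    <processName>phantomjs</processName>
    <capacity>100</capacity>
</browserType>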
I am trying to upload data to an Amazon S3 bucket.
I am using the aws-s3 gem for this purpose.
I am giving the right access key and secret key but am still not able to execute S3Object.store/Bucket calls, though the connection is established. They return with the error "AWS::S3::SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method."
Interestingly, I am running another Rails app with the Paperclip plugin to upload images to S3, and that is working like a charm with the same access key and secret key.
I have tried referencing some links mentioning the same problem, but with no luck.
[ https://forums.aws.amazon.com/thread.jspa?threadID=16020&tstart=0 ]
Any pointers/help/suggestions would be great. :)
I just got this problem because I did not supply the correct region in the request.
I am using fog and CarrierWave as per the Railscast here, and I had to configure the region in the config/initializers file for CarrierWave:
CarrierWave.configure do |config|
config.fog_credentials = {
provider: 'AWS', # required
aws_access_key_id: '[redacted]', # required unless using use_iam_profile
aws_secret_access_key: '[redacted]', # required unless using use_iam_profile
# use_iam_profile: false, # optional, defaults to false
region: 'eu-central-1', # optional, defaults to 'us-east-1'
# host: 's3.example.com', # optional, defaults to nil
# endpoint: 'https://s3.example.com:8080' # optional, defaults to nil
}
config.fog_directory = 'xxx' # required
# config.fog_public = false # optional, defaults to true
# config.fog_attributes = { cache_control: "public, max-age=#{365.days.to_i}" } # optional, defaults to {}
end
Interestingly, fog was redirected to the correct endpoint with the correct region by Amazon; however, the redirected request failed authentication, maybe a problem with fog in such a situation. Fog did give a nice warning in the log:
[fog][WARNING] fog: followed redirect to calm4-files.s3.amazonaws.com, connecting to the matching region will be more performant
But to be more accurate it should say not only that the matching region will be more performant, but that it will actually work as well.
We are using Nagios to monitor our network with great success. However, we have a syslog for critical application errors, and while I set up check_log, it doesn't seem to work as well as monitoring a device.
The issues are:
It only shows the last entry
There doesn't seem to be a way to acknowledge the critical error and
return the monitor to a good state
Is Nagios the wrong tool, or are we just not setting up the service monitoring right?
Here are my entries
# log file
define command{
command_name check_log
command_line $USER1$/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.log -q ?
}
# Define the log monitoring service
define service{
name logfile-check ;
use generic-service ;
check_period 24x7 ;
max_check_attempts 1 ;
normal_check_interval 5 ;
retry_check_interval 1 ;
contact_groups admins ;
notification_options w,u,c,r ;
notification_period 24x7 ;
register 0 ;
}
define service{
use logfile-check
host_name localhost
service_description CritLogFile
check_command check_log
}
For monitoring logs with Nagios, typically the log checker will return a warning only for newly discovered error messages each time it is invoked (so it must retain some state in order to know to ignore them on subsequent runs). Therefore I usually set:
max_check_attempts 1
is_volatile 1
This causes Nagios to send out the alert immediately, but only once, and then go back to normal.
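Applied to the service template from the question, that would look something like this (a sketch; only is_volatile is new, everything else comes from the original definition):

define service{
        name                            logfile-check ;
        use                             generic-service ;
        check_period                    24x7 ;
        max_check_attempts              1 ;
        is_volatile                     1 ;
        normal_check_interval           5 ;
        retry_check_interval            1 ;
        contact_groups                  admins ;
        notification_options            w,u,c,r ;
        notification_period             24x7 ;
        register                        0 ;
        }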
My favorite log checker is logwarn, but I'm biased because I wrote it myself after not finding any existing ones that I liked. The logwarn package includes a Nagios plugin.
Nothing in your config jumps out at me as being misconfigured.
By design, check_log will only show either an OK message, or the last log entry that triggered an alert. If you need to see multiple entries, you'll need to modify the plugin.
However, I find the fact that you're not getting recoveries somewhat odd. The way check_log works (by comparing the current log to the previous version), you should get a recovery on the very next service check. Except of course, when there have been additional matching entries added to the log since the last check.
Does forcing another service check (or several) cause it to recover?
Also, I don't intend this in a mean way, but make sure it's really malfunctioning.
Is your log getting additional matching entries in between checks, causing it not to recover? Your check is matching "?" which will match anything new in the log. Is something else (a non-error) being added to the log and inadvertently causing a match?
If none of the above are the issue, I would suggest narrowing it down by taking Nagios out of the equation. Try running check_log manually (from the command line, but as the same user as nagios), and with a different oldlog. It should go something like this -
run check with a new "oldlog" - get initialization message
run check - check OK
make change to log
run check - check fails
run check - check OK
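For example, from a shell (paths are assumptions; point -F at your real log and -O at a scratch copy of the old log so you don't disturb the one Nagios uses):

# run as the nagios user so permissions match the scheduled checks
sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.test.log -q '?'
# append a matching line (may require root), then re-run the check twice
echo "test error entry" >> /var/log/applications/appcrit.log
sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.test.log -q '?'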
If this doesn't work, then you know to focus on the log, the oldlog, and how the check_log is doing the check.
If it works, then it points more towards a problem with your nagios configuration.
There is a Nagios plugin that you can use to check the log files: it's called check_logfiles and it's used to scan the lines of a file for regular expressions.
The following link shows how to install and configure check_logfiles for Nagios and Opsview:
https://www.opsview.com/resources/nagios-alternative/blog/syslog-monitoring-nagios-opsview
As there are many ways to achieve a goal, there is also a nice plugin from Consol available:
https://labs.consol.de/lang/en/nagios/check_logfiles/
supports regex
supports log rotation
To use it, you need a cfg file; this is an example for Oracle databases:
#searches = ({
tag => 'oraalerts',
options => 'sticky=28800',
logfile => '/u01/app/oracle/diag/rdbms/davmdkp/DAVMDKP1/trace/alert_DAVMDKP1.log',
criticalpatterns => [
'ORA\-0*204[^\d]', # error in reading control file
'ORA\-0*206[^\d]', # error in writing control file
'ORA\-0*210[^\d]', # cannot open control file
'ORA\-0*257[^\d]', # archiver is stuck
'ORA\-0*333[^\d]', # redo log read error
'ORA\-0*345[^\d]', # redo log write error
'ORA\-0*4[4-7][0-9][^\d]',# ORA-0440 - ORA-0485 background process failure
'ORA\-0*48[0-5][^\d]',
'ORA\-0*6[0-3][0-9][^\d]',# ORA-6000 - ORA-0639 internal errors
'ORA\-0*1114[^\d]', # datafile I/O write error
'ORA\-0*1115[^\d]', # datafile I/O read error
'ORA\-0*1116[^\d]', # cannot open datafile
'ORA\-0*1118[^\d]', # cannot add a data file
'ORA\-0*1122[^\d]', # database file 16 failed verification check
'ORA\-0*1171[^\d]', # datafile 16 going offline due to error advancing checkpoint
'ORA\-0*1201[^\d]', # file 16 header failed to write correctly
'ORA\-0*1208[^\d]', # data file is an old version - not accessing current version
'ORA\-0*1578[^\d]', # data block corruption
'ORA\-0*1135[^\d]', # file accessed for query is offline
'ORA\-0*1547[^\d]', # tablespace is full
'ORA\-0*1555[^\d]', # snapshot too old
'ORA\-0*1562[^\d]', # failed to extend rollback segment
'ORA\-0*162[89][^\d]', # ORA-1628 - ORA-1632 maximum extents exceeded
'ORA\-0*163[0-2][^\d]',
'ORA\-0*165[0-6][^\d]', # ORA-1650 - ORA-1656 tablespace is full
'ORA\-16014[^\d]', # log cannot be archived, no available destinations
'ORA\-16038[^\d]', # log cannot be archived
'ORA\-19502[^\d]', # write error on datafile
'ORA\-27063[^\d]', # number of bytes read/written is incorrect
'ORA\-0*4031[^\d]', # out of shared memory.
'No space left on device',
'Archival Error',
],
warningpatterns => [
'ORA\-0*3113[^\d]', # end of file on communication channel
'ORA\-0*6501[^\d]', # PL/SQL internal error
'ORA\-0*1140[^\d]', # follows WARNING: datafile #20 was not in online backup mode
'Archival stopped, error occurred. Will continue retrying',
]
});
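The plugin is then wired into Nagios like any other check, pointing it at that cfg file; a sketch (the plugin and cfg paths are assumptions for your installation):

define command{
        command_name    check_logfiles
        command_line    $USER1$/check_logfiles -f /etc/nagios/check_oracle_alerts.cfg
        }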
I believe there's now a real Nagios plugin that monitors logs effectively.
http://support.nagios.com/forum/viewtopic.php?f=6&t=8851&p=42088&hilit=unixautomation#p42088
The home page of the Nagios plugin on that page is Nagios Log Monitor
Your [ commands.cfg file ] will contain:
define command {
command_name NagiosLogMonitor
command_line $USER1$/NagiosLogMonitor $HOSTNAME$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
}
OR
define command {
command_name NagiosLogMonitor
command_line $USER1$/NagiosLogMonitor $HOSTADDRESS$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
}
Your [ services.cfg file ] will look similar to:
define service {
check_command NagiosLogMonitor!logrobot!autofig!/var/log/proteus.log!15!500.html!500 Internal Server Error!1!2!-foundn
max_check_attempts 1
service_description 500_ERRORS_LOGCHECK
host_name sky.blat-01.net,sky.blat-02.net,sky.blat-03.net
use fifteen-minute-interval
}
Nagios now has a solution that integrates tightly with Nagios Core, XI, etc.:
Nagios Log Server, which can alert on any query on any log file on any system in your infrastructure.