I am trying to set up the basic hello world example.
I am using a t2.micro instance with Ubuntu 14.04 LTS, and I have the GitHub code for the example on my local machine.
The URL I am using is:
https://localhost:8443/index.html?ws_uri=wss://ec2INSTANCE:8888&ice_servers=[{"urls":"stun:stun.l.google.com"}]#
I do not have STUN or TURN configured on the server, but it should be OK since I am passing the STUN server to use in the URL.
Any advice on this?
I just checked my console and I see this, even though port 8888 is open in the AWS security group that this instance is in:
VM8812:35 WebSocket connection to 'wss://ec2Instance:8888/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
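For what it's worth, a quick way to test the WebSocket endpoint directly from the browser console looks something like the sketch below (the hostname is a placeholder, and I'm assuming the Kurento defaults: /kurento for the endpoint path, 8888 for plain ws:// and 8433 for the secure WebSocket):

// Quick reachability check for the Kurento WebSocket, run from the browser console.
// 'ec2Instance' is a placeholder; the default endpoint path is /kurento.
var testWs = new WebSocket('wss://ec2Instance:8888/kurento');
testWs.onopen = function () {
  console.log('WSS connection opened');
  testWs.close();
};
testWs.onerror = function (event) {
  // ERR_CONNECTION_CLOSED here usually means nothing is answering TLS on that port.
  console.log('WSS connection failed', event);
};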
I was able to get past the issues that I was seeing. Here is what I did:
Follow the basic installation instructions on a fresh EC2 instance (Ubuntu 14.04 LTS), using http://doc-kurento.readthedocs.io/en/stable/installation_guide.html
Add a STUN server in the configuration
I used stun:173.194.66.127:19302
Tested with http://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
Switch the WebSocket from WS to WSS, which is required due to the HTTPS requirement introduced in Chrome 47 (an HTTPS page cannot open an insecure WebSocket), using http://doc-kurento.readthedocs.io/en/stable/mastering/securing-kurento-applications.html#configure-javascript-applications-to-use-https
Uncomment the secure section of /etc/kurento/kurento.conf.json
Create a self-signed certificate and place it in /etc/kurento
Go to https://ec2InstanceUrl:8433/kurento and accept the insecure (self-signed certificate) connection
Go to the example at https://ec2InstanceUrl:8443
Verify that your STUN server is working: on the trickle-ice test page you should see candidates with srflx under Component Type.
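If you would rather check from code than eyeball the trickle-ice page, a rough browser-JavaScript sketch like the one below (using the same STUN address as above) will log any srflx candidates the STUN server returns:

// Rough STUN check in the browser: an srflx (server-reflexive) candidate means
// the STUN server answered and reported our public address.
var pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:173.194.66.127:19302' }] });
pc.createDataChannel('probe'); // ICE gathering only starts once there is something to negotiate
pc.onicecandidate = function (event) {
  if (event.candidate && event.candidate.candidate.indexOf('srflx') !== -1) {
    console.log('Got srflx candidate:', event.candidate.candidate);
  }
};
pc.createOffer().then(function (offer) {
  return pc.setLocalDescription(offer);
});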
By "not working" I mean the page loads with an http:// prefix. If I manually type https:// it times out. I'm hoping that someone who has done this before can glance at the tutorials and see what might be missing.
The tutorials I've tried all tend to be the same:
https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-debian-8
https://wiki.debian.org/Self-Signed_Certificate
If I test the SSL connection with an online utility such as:
https://www.sslshopper.com/ssl-checker.html
I get this error:
No SSL certificates were found on mywebsite.com. Make sure that the name resolves to the correct server and that the SSL port (default is 443) is open on your server's firewall.
Relevant info:
$ sudo uname -a
Linux ip-172-26-14-207 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux
Running on an AWS Lightsail instance with Debian (OS only) and a LAMP stack installed.
Solved it! After using nmap, wget, telnet, etc. to verify that port 443 was open locally but not externally, I remembered that my AWS Lightsail instance was a virtual private server and I might need to configure the VPS. Sure enough, in the Lightsail web interface there is a firewall setting.
Lightsail landing page > Manage instance > Networking > Firewall
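For anyone hitting the same thing, the local-versus-external check can also be scripted; here is a rough Node sketch (the hostname is a placeholder) that reports whether anything completes a TLS handshake on 443:

// Rough check: does anything answer a TLS handshake on port 443?
// Run it once on the instance itself and once from an outside machine;
// success locally but a timeout externally points at a firewall (e.g. Lightsail's).
var tls = require('tls');
var socket = tls.connect({ host: 'mywebsite.com', port: 443, rejectUnauthorized: false }, function () {
  // rejectUnauthorized: false so a self-signed certificate still counts as an answer.
  console.log('TLS handshake OK, certificate subject:', socket.getPeerCertificate().subject);
  socket.end();
});
socket.setTimeout(5000, function () {
  console.error('TLS connection timed out');
  socket.destroy();
});
socket.on('error', function (err) {
  console.error('TLS connection failed:', err.message);
});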
I am new to Node.js and am trying to get the hang of actually using it. I am very familiar with JavaScript, so the language itself is self-explanatory, but using Node.js is quite different from the browser implementation.
I have my own remote virtual server and have installed Node and its package manager, and everything works as expected. I am not exactly a server extraordinaire and have limited experience with the terminal and Apache configuration.
I can run my server using:
nodejs index.js
Which gives me: listening on *:3300 as expected.
I can then access my localhost from the terminal using: curl http://localhost:3300/ which gives me the response I expect.
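For reference, I'm not doing anything fancy; index.js is more or less the minimal Socket.io example (a rough sketch, the real file has a bit more in it):

// Minimal Socket.io server, roughly the standard getting-started example.
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

io.on('connection', function (socket) {
  console.log('a user connected');
});

http.listen(3300, function () {
  console.log('listening on *:3300');
});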
Given that the website that links to my server is https://example.com, how do I make that site able to reach http://localhost:3300/ so that I can actually use my Node server in production? For example, http://localhost:3300/ runs a socket server that I would like to connect to via Socket.io on https://example.com/chat.html with the JavaScript:
var socket = io.connect('http://localhost:3300/', {transports: ['websocket'], upgrade: false});
OK, this question has nothing to do with Node.js.
localhost is a hostname that means "this computer". It's equivalent to 127.0.0.1 or any other IP address that refers to your own machine.
After the colon (:) you enter the port number.
So if you want to make an HTTP call to a web server running on your machine, you have to know your server's IP address or domain name, and then you call it with the port number the server is listening on.
For instance, you would call https://example.com:3300/chat.html to make an HTTP call to a server running on example.com on port 3300.
Keep in mind that you have to make sure, in your firewall configuration, that the specific port is open for incoming requests.
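As a rough sketch (assuming the Node server is reachable on port 3300 of example.com and that port is open in the firewall), the client code from the question would become something like:

// Connect to the Socket.io server by its public hostname, not localhost.
// Note: chat.html is served over HTTPS, so the browser will only allow a secure
// (https/wss) connection here; the Node server needs TLS on 3300, or a reverse
// proxy terminating TLS in front of it.
var socket = io.connect('https://example.com:3300/', { transports: ['websocket'], upgrade: false });
socket.on('connect', function () {
  console.log('connected to the chat server');
});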
I've enabled the SSL listen port from the Admin Console of WebLogic 11g (version 10.3.6.0).
I've created a self-signed cert following: https://oracle-base.com/articles/11g/weblogic-configure-ssl-for-a-managed-server
But when I try HTTPS in a browser on a remote machine I get a timeout.
If I try from the local machine using curl -Ik I get the proper response; it seems that only remote access is blocked.
Accessing via HTTP works fine from my remote machine's browser. I also tried telnet, but it only works with 7001, not with 7002 (my secure port). I've already tried changing the secure port number, but the result is the same.
My WebLogic server is on a CentOS machine running on VMware ESXi.
What could be blocking the remote SSL connection?
A timeout indicates firewalling of some sort. As you say yourself, if you try locally with curl it works. When the local connection succeeds but the remote one times out, there is nothing else to check but the firewall(s) in between.
I'm using Vagrant with apache2 and specifically the command
vagrant share --https 443
It all starts fine and provides a URL. When I access that URL I'm presented with a 400 error:
Bad Request
Your browser sent a request that this server could not understand.
Apache/2.4.12 (Ubuntu) Server at *.vagrantshare.com Port 443
I have been accessing the Vagrant machine over HTTPS just fine, but it doesn't seem to work with vagrant share.
This is a known Vagrant Share bug: https://github.com/webdevops/vagrant-docker-vm/issues/51
The only workarounds I've seen discussed are to use a custom domain or to use another product entirely (e.g. ngrok) to create the share. See the bug discussion here: https://github.com/mitchellh/vagrant/issues/5493#issuecomment-159792794
Vagrant Share docs for custom domains are here: https://atlas.hashicorp.com/help/vagrant/shares/custom-domains
We are using NSS as the SSL engine in an Apache server. We recently applied the latest SUSE Linux Enterprise Server patches on the Apache server, which hosts two IP-based virtual hosts. After the upgrade the first virtual host works fine, but the second one does not.
The error log shows "Hostname vhost1.xxyyzz.com provided via SNI and hostname vhost2.xxyyzz.com provided via HTTP are different" when accessing vhost2.xxyyzz.com.
If we switch back to mod_ssl the issue goes away, so it is clearly related to the following patches. Any help would be appreciated.
mozilla-nss 3.16.4-0.8.1
mozilla-nss-tools 3.16.4-0.8.1
apache2-mod_nss 1.0.8-0.4.9.1
Check your /etc/hosts file to see if you might be assigning the domain name to a local internal IP address or interface.
This caused the same error message for me, along with many 400 errors.
After changing /etc/hosts don't forget to restart the name service cache daemon ( service nscd restart ).
SNI isn't technically fully supported in that version of mod_nss, but support has since been added: https://www.suse.com/support/update/announcement/2015/suse-ru-20150591-1.html
I saw the same error, and it went away after applying the referenced patch.