Slackware NFS issues

I'm having trouble accessing my NFS share. It's actually a Slackware boot disk "NFS" issue. When trying to access the share, I get the following message:
mount: RPC: Port mapper failure - RPC: Timed out
Here's some pertinent information:
catch22bbs:~ # rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049 nfs_acl
100227 3 tcp 2049 nfs_acl
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049 nfs_acl
100227 3 udp 2049 nfs_acl
100021 1 udp 53359 nlockmgr
100021 3 udp 53359 nlockmgr
100021 4 udp 53359 nlockmgr
100021 1 tcp 44247 nlockmgr
100021 3 tcp 44247 nlockmgr
100021 4 tcp 44247 nlockmgr
100011 1 udp 692 rquotad
100011 2 udp 692 rquotad
100011 1 tcp 693 rquotad
100011 2 tcp 693 rquotad
100024 1 udp 56306 status
100024 1 tcp 59686 status
catch22bbs:~ # showmount -e 192.168.1.26
Export list for 192.168.1.26:
/var/nfs 192.168.1.26/255.255.255.0
catch22bbs:~ # cat /etc/exports
/var/nfs 192.168.1.26/255.255.255.0(rw,sync,no_root_squash,no_subtree_check)
catch22bbs:~ # cat /etc/hosts.allow
portmap: 192.168.1.33
lockd: 192.168.1.33
rquotd: 192.168.1.33
mountd: 192.168.1.33
statd: 192.168.1.33
rpcbind: 127.0.0.1
catch22bbs:~ # cat /etc/hosts.deny
http-rman : ALL EXCEPT LOCAL
portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
catch22bbs:~ # cat /etc/fstab
/dev/disk/by-id/ata-ST31000524AS_6VPH1GC5-part2 swap swap defaults 0 0
/dev/disk/by-id/ata-ST31000524AS_6VPH1GC5-part1 / ext4 defaults 1 1
192.168.1.26:/var/nfs /mnt/nfs nfs rw,sync,hard,intr 0 0
I apologize if I'm stepping on anyone's toes, but I've exhausted almost every other avenue.
TIA,
JL

This should really be a comment, but: can you also provide your iptables settings (iptables -L)?
Your configuration looks OK. In my case, firewall rules were the root of this problem.
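For example, if the server's firewall turns out to be the culprit, here is a minimal sketch of rules that would admit the services listed by rpcinfo above (the 192.168.1.0/24 subnet is assumed from the addresses in your configs; adjust it to your LAN):
# Allow the client LAN to reach the fixed-port RPC services shown by rpcinfo
# (111 = portmapper, 2049 = nfs, 20048 = mountd).
iptables -A INPUT -s 192.168.1.0/24 -p tcp -m multiport --dports 111,2049,20048 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p udp -m multiport --dports 111,2049,20048 -j ACCEPT
# nlockmgr and status sit on dynamic ports in your rpcinfo output; pin them in
# your distro's NFS configuration so they, too, can be opened by a fixed rule.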

Related

celery worker thread hangs indefinitely

I am running my spiders inside a Celery worker. A spider scrapes a website and then a bunch of follow-up links; after some time the spider stops processing anything further.
lsof output shows that, for the thread, connections are stuck in the CLOSE_WAIT state:
# lsof -i -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
celery 10 root 32u IPv4 105621511 0t0 TCP 127.0.0.1:6023 (LISTEN)
celery 10 root 33u IPv4 105603949 0t0 TCP 10.1.195.250:38162->104.17.38.150:http (ESTABLISHED)
celery 10 root 34u IPv4 105610494 0t0 TCP 10.1.195.250:41864->185.230.61.195:https (CLOSE_WAIT)
celery 10 root 35u IPv4 105614120 0t0 TCP 10.1.195.250:39742->185.230.61.195:http (CLOSE_WAIT)
celery 10 root 36u IPv4 105603950 0t0 TCP 10.1.195.250:52672->185.230.61.96:http (CLOSE_WAIT)
celery 10 root 37u IPv4 105620542 0t0 TCP 10.1.195.250:38200->209.236.228.178:http (CLOSE_WAIT)
celery 10 root 38u IPv4 105603948 0t0 TCP 10.1.195.250:51848->35.208.181.87:http (CLOSE_WAIT)
celery 10 root 39u IPv4 105614124 0t0 TCP 10.1.195.250:56290->185.230.61.96:https (CLOSE_WAIT)
celery 10 root 40u IPv4 105604983 0t0 TCP 10.1.195.250:43118->216.185.90.112:http (CLOSE_WAIT)
celery 10 root 41u IPv4 105618465 0t0 TCP 10.1.195.250:55006->209.59.212.167:http (CLOSE_WAIT)
celery 10 root 45u IPv4 105600888 0t0 TCP 10.1.195.250:34572->23.227.38.74:http (ESTABLISHED)
celery 10 root 46u IPv4 105620539 0t0 TCP 10.1.195.250:35846->205.178.189.129:http (CLOSE_WAIT)
celery 10 root 48u IPv4 105620541 0t0 TCP 10.1.195.250:39674->185.230.61.195:http (CLOSE_WAIT)
celery 10 root 49u IPv4 105610495 0t0 TCP 10.1.195.250:49450->178.128.150.108:http (CLOSE_WAIT)
celery 10 root 51u IPv4 105614122 0t0 TCP 10.1.195.250:53770->23.227.38.74:https (ESTABLISHED)
celery 10 root 52u IPv4 105614123 0t0 TCP 10.1.195.250:52930->54.86.91.237:https (CLOSE_WAIT)
celery 10 root 53u IPv4 105614125 0t0 TCP 10.1.195.250:37998->209.236.228.178:https (CLOSE_WAIT)
celery 10 root 54u IPv4 105614126 0t0 TCP 10.1.195.250:59992->35.208.181.87:https (CLOSE_WAIT)
celery 10 root 55u IPv4 105605002 0t0 TCP 10.1.195.250:39692->192.124.249.18:http (CLOSE_WAIT)
celery 10 root 56u IPv4 105612653 0t0 TCP 10.1.195.250:41912->185.230.61.195:https (CLOSE_WAIT)
celery 10 root 57u IPv4 105612657 0t0 TCP 10.1.195.250:47560->104.197.82.118:http (CLOSE_WAIT)
celery 10 root 58u IPv4 105612656 0t0 TCP 10.1.195.250:33926->209.59.212.167:https (CLOSE_WAIT)
celery 10 root 59u IPv4 105614129 0t0 TCP 10.1.195.250:41614->178.128.150.108:https (CLOSE_WAIT)
celery 10 root 62u IPv4 105614131 0t0 TCP 10.1.195.250:37534->34.66.87.174:http (CLOSE_WAIT)
celery 10 root 63u IPv4 105600910 0t0 TCP 10.1.195.250:47682->166.62.115.136:https (CLOSE_WAIT)
celery 10 root 64u IPv4 105614141 0t0 TCP 10.1.195.250:43222->216.185.90.112:http (CLOSE_WAIT)
celery 10 root 65u IPv4 105600912 0t0 TCP 10.1.195.250:41060->50.63.7.227:http (CLOSE_WAIT)
celery 10 root 66u IPv4 105600913 0t0 TCP 10.1.195.250:41254->104.197.82.118:https (CLOSE_WAIT)
celery 10 root 69u IPv4 105614695 0t0 TCP 10.1.195.250:42766->104.112.162.8:https (ESTABLISHED)
ps -aux shows that the thread is sleeping, waiting for an event:
# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.0 80024 62700 ? Ss 17:23 0:05 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 8 0.0 0.0 118892 76360 ? S 17:23 0:00 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 10 0.0 0.0 902592 100916 ? Sl 17:23 0:01 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 485 0.0 0.0 121900 79376 ? S 18:07 0:00 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 486 10.0 0.1 950312 144056 ? Sl 18:07 1:19 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 501 0.4 0.0 455868 62432 ? Sl 18:11 0:02 /usr/local/bin/python /usr/local/bin/celery flower -A
root 508 0.3 0.0 121916 79388 ? S 18:17 0:00 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 509 22.4 0.1 958724 154876 ? Sl 18:17 0:42 /usr/local/bin/python /usr/local/bin/celery -A data_ex
root 520 0.5 0.0 2388 700 pts/0 Ss 18:20 0:00 /bin/sh
root 526 0.0 0.0 9392 3048 pts/0 R+ 18:20 0:00 ps -aux
strace shows that the thread is waiting on fd 69:
# strace -p 10
strace: Process 10 attached
read(69,
It seems the spiders are not closing connections properly. How do I solve this?
I thought of adding timeouts to the Celery task, but all threads would eventually hit the HARD_LIMIT.
How can I make sure that Scrapy closes each connection properly?
This most likely has to do with the code that you are using for spidering. You may have to set a timeout on the library that you are using to make your HTTP/HTTPS requests.
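For Scrapy in particular, one quick way to experiment is to override the stock timeout settings when launching the spider; DOWNLOAD_TIMEOUT and CLOSESPIDER_TIMEOUT are standard Scrapy settings, and the spider name below is a placeholder:
# Cap each request at 30 s and force the spider to close after an hour,
# so no single CLOSE_WAIT socket can stall the worker indefinitely.
scrapy crawl myspider -s DOWNLOAD_TIMEOUT=30 -s CLOSESPIDER_TIMEOUT=3600
The same settings can of course live in the project's settings.py instead.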

Why doesn't my WebRTC connection work on some networks?

I've customized the AppRTC project (Android version).
Assume we have four internet connections (on different networks):
NetA -- NetB
NetC -- NetD
I can connect from NetA to NetB, but I can't connect from NetC to NetD!
I have set up a TURN server and a STUN server, but I don't know what is wrong.
--
When connecting from NetA to NetB (success):
Offer SDP:
"v=0\r\no=- 632333030865012591 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS ARDAMS___\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 9 102 0 8 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 .............."
"candidate:2580031558 1 udp 2122260223 100.95.184.57 37422 typ host generation 0 ufrag NL4P network-id 3 network-cost 900"
"candidate:411053810 1 udp 1686052607 5.116.182.156 1026 typ srflx raddr 100.95.184.57 rport 37422 generation 0 ufrag NL4P network-id 3 network-cost 900"
"candidate:3902036248 1 udp 41885695 34.197.185.148 52061 typ relay raddr 5.116.182.156 rport 1026 generation 0 ufrag NL4P network-id 3 network-cost 900"
"candidate":"candidate:2786567656 1 udp 25108223 34.197.185.148 52062 typ relay raddr 5.116.182.156 rport 1032 generation 0 ufrag NL4P network-id 3 network-cost 900"
Answer SDP:
"v=0\r\no=- 3736097442176838392 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS ARDAMS___\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 9 102 0 8 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:7upj\r\na............"
"candidate:1892013251 1 udp 2122260223 192.168.7.7 37718 typ host generation 0 ufrag 7upj network-id 3 network-cost 10"
"candidate:3650771734 1 udp 1686052607 151.242.87.74 37718 typ srflx raddr 192.168.7.7 rport 37718 generation 0 ufrag 7upj network-id 3 network-cost 10"
"candidate:3902036248 1 udp 41885695 34.197.185.148 52063 typ relay raddr 151.242.87.74 rport 37718 generation 0 ufrag 7upj network-id 3 network-cost 10"
"candidate":"candidate:2786567656 1 udp 25108223 34.197.185.148 52064 typ relay raddr 151.242.87.74 rport 45889 generation 0 ufrag 7upj network-id 3 network-cost 10"
--
When connecting from NetC to NetD (failed):
Offer SDP:
"v=0\r\no=- 280763199112942253 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS ARDAMS___\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 9 102 0 8 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:UPpi\r\na=ice-pwd:Ow2J0AHuS86I0o0yZ5MOv6a/\r\na=ice-options:renomination\r\na=fingerprint:sha-256................."
"candidate":"candidate:2580031558 1 udp 2122260223 100.95.184.57 34267 typ host generation 0 ufrag UPpi network-id 3 network-cost 900"
"candidate":"candidate:411053810 1 udp 1686052607 5.116.182.156 1287 typ srflx raddr 100.95.184.57 rport 34267 generation 0 ufrag UPpi network-id 3 network-cost 900"
"candidate":"candidate:3902036248 1 udp 41885695 34.197.185.148 58779 typ relay raddr 5.116.182.156 rport 1287 generation 0 ufrag UPpi network-id 3 network-cost 900"
"candidate":"candidate:2786567656 1 udp 25108223 34.197.185.148 58780 typ relay raddr 5.116.182.156 rport 1201 generation 0 ufrag UPpi network-id 3 network-cost 900"
Answer SDP:
"v=0\r\no=- 6478139475592243492 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS ARDAMS___\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111 103 9 102 0 8 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=ice-ufrag:Js6x\r\na=ice-pwd:5tyUT023mAERirumK7aal+9F\r\na=ice-options:renomination\r\na=fingerprint:sha-256 45:97:7F:BC:37:90:4D:B6:35:E5:23:C8:12:09:5A:43:D7:4B:03:EC:A0:7B:70:EB:E4:DB:12:B8:7B:1C:6E:5D\r\na=setup:active.............."
"candidate","label":0,"id":"audio","candidate":"candidate:1106113138 1 udp 2122260223 192.168.1.169 44238 typ host generation 0 ufrag Js6x network-id 3 network-cost 10"
"candidate":"candidate:3232101574 1 udp 1686052607 151.247.139.59 44238 typ srflx raddr 192.168.1.169 rport 44238 generation 0 ufrag Js6x network-id 3 network-cost 10"
"candidate":"candidate:3902036248 1 udp 41885695 34.197.185.148 58781 typ relay raddr 151.247.139.59 rport 44238 generation 0 ufrag Js6x network-id 3 network-cost 10"
"candidate":"candidate:2786567656 1 udp 25108223 34.197.185.148 58782 typ relay raddr 151.247.139.59 rport 36519 generation 0 ufrag Js6x network-id 3 network-cost 10"
"candidate":"candidate:238873586 1 udp 2122194687 100.116.182.76 48966 typ host generation 0 ufrag Js6x network-id 4 network-cost 900"
"candidate":"candidate:3266434145 1 udp 1685987071 91.251.147.158 16369 typ srflx raddr 100.116.182.76 rport 48966 generation 0 ufrag Js6x network-id 4 network-cost 900"
"candidate":"candidate:3902036248 1 udp 41820159 34.197.185.148 58783 typ relay raddr 91.251.147.158 rport 16369 generation 0 ufrag Js6x network-id 4 network-cost 900"
"candidate":"candidate:2786567656 1 udp 25042687 34.197.185.148 58784 typ relay raddr 91.251.147.158 rport 16222 generation 0 ufrag Js6x network-id 4 network-cost 900"
EDIT: Thanks to all. I found that my problem comes from my TURN server. I can connect in every situation when using the appr.tc ICE servers (Google's TURN servers), but I can't connect with my own TURN server. Please see the follow-up question about my TURN server:
Why my turn server doesn't work?
Take a look at the article Introduction to WebRTC protocols.
You need to set up STUN and TURN servers to get past the firewall: if a peer is behind a firewall or a router, connecting to its public IP address alone will hit the router, not the requested peer device.
I would begin by testing whether the TURN server is really working.
Disable direct links on your firewall (between NetC and NetD, or just block the peer IP) to see if the call works via TURN. If not, then fix your TURN server or its configuration.
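If the TURN server is coturn, it ships a test client that you can run from a host outside the server's network to verify that relay allocations actually work; a sketch, with placeholder credentials and the relay address taken from the candidates above:
# coturn's bundled test client: allocates a relay and echoes packets through it.
turnutils_uclient -u myuser -w mypassword -y 34.197.185.148
# A clean run prints round-trip statistics for the relayed packets; errors or a
# hang point at the relay itself (credentials, ports, or NAT configuration).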

Kurento RTSP to webRTC

I'm developing a Samsung Smart TV app with Samsung's TOAST and Caph-angular. I am on Windows 10.
When I call our server, I get an RTSP URL encoded with H264 but, on smart TVs, RTSP is not supported. So I have to "transform" the RTSP URL into some WebRTC one (I know very little about all this, so sorry if the terms are incorrect).
I searched here and there and found Kurento which seems to be able to answer my needs.
Before trying Kurento within my app, I wanted to test it on a few demos.
I was able to test the one2many example without any problem, but I am having trouble running the rtsp2webrtc demo.
I git-cloned kms-windows and https://github.com/lulop-k/kurento-rtsp2webrtc and tried to run the demo, but the player is not displaying anything.
1) I double-click kms-windows\bin\kurento-media-server
2) I launch http-server in kurento-rtsp2webrtc
3) I reach http://localhost:8080/, type my sample RTSP URL (rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov) and click "start"
No error seems to occur; see the console logs:
Local icecandidate {"candidate":"candidate:4033732497 1 udp 2113937151 192.168.0.104 62879 typ host generation 0 ufrag tZFB network-cost 50","sdpMid":"audio","sdpMLineIndex":0}
Local icecandidate {"candidate":"candidate:4033732497 2 udp 2113937150 192.168.0.104 62881 typ host generation 0 ufrag tZFB network-cost 50","sdpMid":"audio","sdpMLineIndex":0}
Local icecandidate {"candidate":"candidate:4033732497 1 udp 2113937151 192.168.0.104 62883 typ host generation 0 ufrag tZFB network-cost 50","sdpMid":"video","sdpMLineIndex":1}
Local icecandidate {"candidate":"candidate:4033732497 2 udp 2113937150 192.168.0.104 62885 typ host generation 0 ufrag tZFB network-cost 50","sdpMid":"video","sdpMLineIndex":1}
PlayerEndpoint-->WebRtcEndpoint connection established
oniceconnectionstatechange -> checking
icegatheringstate -> gathering
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:1 1 UDP 2013266431 192.168.0.104 61810 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:1 1 UDP 2013266431 192.168.0.104 61810 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:2 1 TCP 1019216127 192.168.0.104 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:2 1 TCP 1019216127 192.168.0.104 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:3 1 TCP 1015021823 192.168.0.104 52180 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:3 1 TCP 1015021823 192.168.0.104 52180 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:4 1 UDP 2013266431 fe80::6403:eba1:c2a3:9605 61812 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:4 1 UDP 2013266431 fe80::6403:eba1:c2a3:9605 61812 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:5 1 TCP 1019217663 fe80::6403:eba1:c2a3:9605 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:5 1 TCP 1019217663 fe80::6403:eba1:c2a3:9605 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:6 1 TCP 1015023359 fe80::6403:eba1:c2a3:9605 52182 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:6 1 TCP 1015023359 fe80::6403:eba1:c2a3:9605 52182 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:7 1 UDP 2013266431 fe80::f99b:72cd:cb28:1424 61814 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:7 1 UDP 2013266431 fe80::f99b:72cd:cb28:1424 61814 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:8 1 TCP 1019217663 fe80::f99b:72cd:cb28:1424 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:8 1 TCP 1019217663 fe80::f99b:72cd:cb28:1424 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:9 1 TCP 1015023359 fe80::f99b:72cd:cb28:1424 52184 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:9 1 TCP 1015023359 fe80::f99b:72cd:cb28:1424 52184 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:10 1 UDP 2013266431 192.168.56.1 61816 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:10 1 UDP 2013266431 192.168.56.1 61816 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:11 1 TCP 1019216895 192.168.56.1 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:11 1 TCP 1019216895 192.168.56.1 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:12 1 TCP 1015022591 192.168.56.1 52186 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:12 1 TCP 1015022591 192.168.56.1 52186 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:1 2 UDP 2013266430 192.168.0.104 61811 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:1 2 UDP 2013266430 192.168.0.104 61811 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:2 2 TCP 1019216126 192.168.0.104 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:2 2 TCP 1019216126 192.168.0.104 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:3 2 TCP 1015021822 192.168.0.104 52181 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:3 2 TCP 1015021822 192.168.0.104 52181 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:4 2 UDP 2013266430 fe80::6403:eba1:c2a3:9605 61813 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:4 2 UDP 2013266430 fe80::6403:eba1:c2a3:9605 61813 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:5 2 TCP 1019217662 fe80::6403:eba1:c2a3:9605 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:5 2 TCP 1019217662 fe80::6403:eba1:c2a3:9605 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:6 2 TCP 1015023358 fe80::6403:eba1:c2a3:9605 52183 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:6 2 TCP 1015023358 fe80::6403:eba1:c2a3:9605 52183 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:7 2 UDP 2013266430 fe80::f99b:72cd:cb28:1424 61815 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:7 2 UDP 2013266430 fe80::f99b:72cd:cb28:1424 61815 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:8 2 TCP 1019217662 fe80::f99b:72cd:cb28:1424 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:8 2 TCP 1019217662 fe80::f99b:72cd:cb28:1424 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:9 2 TCP 1015023358 fe80::f99b:72cd:cb28:1424 52185 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:9 2 TCP 1015023358 fe80::f99b:72cd:cb28:1424 52185 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:10 2 UDP 2013266430 192.168.56.1 61817 typ host","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:10 2 UDP 2013266430 192.168.56.1 61817 typ host","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:11 2 TCP 1019216894 192.168.56.1 9 typ host tcptype active","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:11 2 TCP 1019216894 192.168.56.1 9 typ host tcptype active","sdpMLineIndex":1,"sdpMid":"video"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:12 2 TCP 1015022590 192.168.56.1 52187 typ host tcptype passive","sdpMLineIndex":0,"sdpMid":"audio"}
Remote icecandidate {"__module__":"kurento","__type__":"IceCandidate","candidate":"candidate:12 2 TCP 1015022590 192.168.56.1 52187 typ host tcptype passive","sdpMLineIndex":1,"sdpMid":"video"}
Player playing ...
oniceconnectionstatechange -> connected
icegatheringstate -> complete
oniceconnectionstatechange -> completed
icegatheringstate -> complete
But the player does not display anything; I still see the spinner.
I tried adding a STUN server (not sure what it is or whether I even need one; I just saw this in the docs and in other Stack Overflow questions) and it did not solve anything.
Could you please help me?
Did I do anything wrong or forget something?
And, in the future, when I want to implement this in my TV web app, will I only need to include the kurento-client.js and kurento-utils.js files, or will there be other things to take care of?
Thanks in advance
A STUN server cannot traverse symmetric NATs, so if your server is behind such a NAT you should try using a TURN server.
If your Kurento server is behind NAT, you need to use a TURN server for it.
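If KMS itself sits behind NAT, you can point it at STUN/TURN servers in its WebRtcEndpoint configuration. A sketch, using the Linux default path (look for the equivalent file in the kms-windows tree) and placeholder addresses and credentials:
; /etc/kurento/modules/kurento/WebRtcEndpoint.conf.ini
stunServerAddress=198.51.100.1   ; placeholder; Kurento expects an IP address here
stunServerPort=3478
; or route media through a TURN relay instead (user:password@host:port):
; turnURL=myuser:mypassword@198.51.100.1:3478
Restart kurento-media-server after editing so the endpoint picks up the change.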

Unable to telnet into scrapy

I have a crawler that has been running for a few days. I want to pause it in order to do something else on the system. The Scrapy documentation says this can be done using the telnet console, but I am unable to log in to it. Here are the processes running on the system:
[root@xxx tmp]# telnet localhost 6073
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@xxx tmp]# ps aux | grep scrapy
root 5504 0.0 0.0 110400 860 pts/1 S+ 04:31 0:00 grep scrapy
root 31457 4.0 1.9 774880 299436 pts/1 Sl Sep21 141:27 /usr/local/pyenv/bin/python2.7 /usr/local/pyenv/bin/scrapy crawl myCrawler
Any help is appreciated. Thanks.
Hah, here I am answering my own question. As stated in the documentation, the Scrapy telnet console listens on a port in the range [6023, 6073]. So to find the port being used, I ran this command:
netstat -l
Output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 xxx.localdomain:6025 *:* LISTEN
tcp 0 0 *:27017 *:* LISTEN
tcp 0 0 *:mysql *:* LISTEN
tcp 0 0 *:6379 *:* LISTEN
tcp 0 0 *:webcache *:* LISTEN
"6025" port is what I was looking for.

Port 80 blocked

So I've been trying for the past several hours to get port 80 open so that I can access my Apache server. I'm running RHEL 6.5, and below is my iptables configuration.
# Generated by iptables-save v1.4.7 on Wed Jul 2 12:59:50 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [9:1332]
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
## Open 443 port i.e. HTTPS
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Wed Jul 2 12:59:50 2014
I've saved the rules and restarted, to no avail. I am using a port checker (http://www.checkmyports.net/) to check whether the port is open, but it isn't. Before you mark this as a duplicate: I have tried everything I could find online. I've reconfigured my iptables multiple times, removed additional firewalls, disabled and re-enabled them, and tried multiple other solutions, all to no avail. Any ideas on where I'm going wrong? Thanks.
Output of ps aux | grep 'httpd':
root 20353 0.0 0.7 175704 3668 ? Ss 12:59 0:00 /usr/sbin/httpd
apache 20355 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20356 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20357 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20358 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20359 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20360 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20361 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
apache 20362 0.0 0.4 175704 2408 ? S 12:59 0:00 /usr/sbin/httpd
root 21624 0.0 0.1 103244 856 pts/0 S+ 13:55 0:00 grep httpd
Output of netstat -tulpn:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 960/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 28361/sshd
tcp 0 0 0.0.0.0:36088 0.0.0.0:* LISTEN 978/rpc.statd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1108/sendmail
tcp 0 0 :::111 :::* LISTEN 960/rpcbind
tcp 0 0 :::80 :::* LISTEN 20353/httpd
tcp 0 0 :::51733 :::* LISTEN 978/rpc.statd
tcp 0 0 :::22 :::* LISTEN 28361/sshd
udp 0 0 0.0.0.0:111 0.0.0.0:* 960/rpcbind
udp 0 0 0.0.0.0:39182 0.0.0.0:* 978/rpc.statd
udp 0 0 0.0.0.0:68 0.0.0.0:* 20708/dhclient
udp 0 0 0.0.0.0:711 0.0.0.0:* 960/rpcbind
udp 0 0 0.0.0.0:730 0.0.0.0:* 978/rpc.statd
udp 0 0 :::111 :::* 960/rpcbind
udp 0 0 :::711 :::* 960/rpcbind
udp 0 0 :::35278 :::* 978/rpc.statd
Ensure there is something running on that port. If you have port 80 open on your firewall but nothing is listening on it (Apache/httpd), then the port will show as closed.
What's the output of
ps aux | grep 'httpd'
and
netstat -tulpn
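Judging by the outputs you added, httpd is listening on :::80, so something is running. It is also worth checking whether Apache answers locally before blaming the firewall: if a local request succeeds but the external checker still fails, the block is upstream of this machine (a router, or an ISP that filters inbound port 80, which is common on residential lines). A quick sketch:
# On the server itself:
curl -I http://localhost/
# From another machine on the same LAN (substitute your server's address):
curl -I http://192.168.1.10/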
You could try clearing out iptables entirely, getting web access working, and then turning the firewall back on.
I have an iptables-clear.sh script that I run to do this.
Note that this doesn't use the /etc/init.d iptables service, which you might have to stop while you're fixing this. Just remember to turn it back on once you're done.
# Flush all tables
iptables -F
iptables -t nat -F
# Default policy to ACCEPT
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
iptables -t nat -P OUTPUT ACCEPT
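If flushing makes port 80 reachable, re-add your rules one at a time to find the offender, then persist the working set (RHEL 6 init-script tooling):
service iptables save      # writes the live rules to /etc/sysconfig/iptables
service iptables restart   # reload to confirm the saved rules behave the same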