How to send an RTMP URL's query part to the RTMP server

I am trying to hand-write an RTMP publish/play client. The streaming server I am using is nginx with the RTMP module.
My code follows the RTMP client workflow, something like Handshake -> Connect -> CreateStream -> Publish or Play.
It works well with a simple RTMP URL such as rtmp://127.0.0.1:1935/mylv/afv
But when authentication is needed, the RTMP URL becomes rtmp://127.0.0.1:1935/mylv/afv?username=abc&password=123
My nginx configuration is:
rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application mylv {
            live on;
            hls on;
            hls_path /mnt/datadisk0/uploadfile/live;
            allow play all;
            on_publish http://127.0.0.1:10078/user/auth;
        }
    }
}
# auth.py
from flask import Flask, request, Response

app = Flask(__name__)

@app.route('/user/auth', methods=['POST'])
def auth():
    user = request.form['username']
    password = request.form['password']
    print(user, '\t', password)
    auth_user = 'abc'
    auth_password = '123'
    if auth_user == user and auth_password == password:
        return Response(response='success', status=200)
    else:
        return Response(status=500)
        # Flask.abort(404)
    return password

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=10078, debug=True)
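(For reference, nginx-rtmp's on_publish hook POSTs form-encoded fields to this callback and appends any query arguments from the publish URL; the sketch below, with approximate field names, exercises the callback by hand.)

import requests

# Simulates (approximately) the on_publish callback that nginx-rtmp sends;
# the exact field set depends on the module version.
resp = requests.post(
    'http://127.0.0.1:10078/user/auth',
    data={
        'call': 'publish',      # assumed nginx-rtmp field
        'app': 'mylv',
        'name': 'afv',
        'username': 'abc',      # the query part we want forwarded
        'password': '123',
    },
    timeout=5,
)
print(resp.status_code, resp.text)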
What should I do to send the query part username=abc&password=123 to the server?

I figured it out.
If I use 'afv?username=abc&password=123' as the streamName when sending the commands, everything works. Player clients can then access the stream with 'rtmp://127.0.0.1:1935/mylv/afv', as long as on_play verification is not required.
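For illustration, a minimal sketch (a hypothetical helper, not part of the original client) of how the publish URL can be split so the query string stays attached to the stream name:

from urllib.parse import urlparse

def split_rtmp_url(url):
    # Split an RTMP URL into the tcUrl used by the connect command and the
    # stream name used by publish/play; the query string stays on the stream name.
    parsed = urlparse(url)
    app, _, stream = parsed.path.lstrip('/').partition('/')
    tc_url = 'rtmp://{}/{}'.format(parsed.netloc, app)
    stream_name = stream + ('?' + parsed.query if parsed.query else '')
    return tc_url, stream_name

print(split_rtmp_url('rtmp://127.0.0.1:1935/mylv/afv?username=abc&password=123'))
# ('rtmp://127.0.0.1:1935/mylv', 'afv?username=abc&password=123')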

Related

Is it safe to send a file to the server?

I'm creating a React Native app that transforms audio to text.
First, a user records the audio. Then the recording is sent with RNFS.uploadFiles to a Flask API, which I created to convert the audio file into text and send the text back to the user.
Honestly, I'm not sure how it really works. For example, I don't see the audio files that were sent from React Native to the Flask server; they don't seem to be saved on my server (or are they?).
Should I encrypt the recordings before they are sent?
I send the audio with this function:
RNFS.uploadFiles({
    toUrl: uploadUrl,
    files: fileToSend,
    method: 'POST',
    headers: {
        'Accept': 'application/json',
    }
}).promise.then((response) => {
    setResults(response.body)
})
.catch((err) => {
    console.log(err)
});
To see the audio files sent from your application to the Flask server, you can use an HTTP debugging tool such as Charles Proxy or Fiddler; it will show you the HTTP requests being exchanged.
If your Flask server has SSL enabled (HTTPS) and your React Native app connects over HTTPS, then the communication is already encrypted in transit.
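If you want the recordings to be visible on the server, the Flask side has to save them explicitly; otherwise they only exist for the lifetime of the request. A minimal sketch, where the route and form field name are assumptions rather than your actual API:

from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])   # hypothetical route
def upload():
    # RNFS.uploadFiles sends multipart/form-data; the field name comes from the
    # 'name' entry in the files list on the client (assumed here to be 'file').
    audio = request.files.get('file')
    if audio is None:
        return {'error': 'no file received'}, 400
    audio.save('/tmp/' + audio.filename)   # now the recording is visible on the server
    return {'status': 'received', 'filename': audio.filename}, 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)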

Client not receiving events from Flask-SocketIO server with Redis message queue

I want to add multiprocessing to my Flask-SocketIO server so I am trying to add a Redis message queue as per the Flask-SocketIO docs. Even without adding multiprocessing, the client is not receiving any events. Everything else is working fine (e.g. the web page is being served, HTTP requests are being made, database calls are being made). There are no error messages on the front or back end. Before I added the Redis queue it was working. I verified that the 'addname' SocketIO route is being hit and that the request.sid looks right. What am I doing wrong?
Very simplified server code:
from flask import Flask, request
from flask_socketio import SocketIO

external_sio = SocketIO(message_queue='redis://')

def requester(user, sid):
    global external_sio
    external_sio.emit('addname', {'data': 'hello'}, room=sid)
    # do some stuff with requests and databases
    external_sio.emit('addname', {'data': 'goodbye'}, room=sid)

def main():
    app = Flask(__name__,
                static_url_path='',
                static_folder='dist',
                template_folder='dist')
    socketio = SocketIO(app)

    @socketio.on('addname')
    def add_name(user):
        global external_sio
        external_sio.emit('addname', {'data': 'test'}, room=request.sid)
        requester(user.data, request.sid)

    socketio.run(app, host='0.0.0.0', port=8000)

if __name__ == '__main__':
    main()
Simplified client code (React Javascript):
const socket = SocketIOClient('ipaddress:8000')
socket.emit('addname', {data: 'somename'})
socket.on('addname', ({data}) => console.log(data))
The main server also needs to be connected to the message queue. In your main server do this:
socketio = SocketIO(app, message_queue='redis://')
In your external process do this:
external_sio = SocketIO(message_queue='redis://') # <--- no app on this one
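Put together, a minimal sketch of the corrected server (same names as in the question) could look like this:

from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, message_queue='redis://')    # main server joins the queue
external_sio = SocketIO(message_queue='redis://')     # write-only instance, no app

@socketio.on('addname')
def add_name(user):
    # Emits from either instance are routed through Redis and reach the
    # client identified by request.sid.
    external_sio.emit('addname', {'data': 'hello'}, room=request.sid)

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=8000)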

Scrapy: how to scrape an HTTPS site through an SSL proxy

I have an SSL proxy server and I want to scrape an HTTPS site. I mean the connection between Scrapy and the proxy is encrypted, and the proxy then opens a connection to the website.
After some debugging I found the following:
Currently Scrapy handles the situation as follows:
if the site is HTTP, it uses ScrapyProxyAgent, which sends the client hello and then sends a CONNECT request for the website to the proxy;
but if the site is HTTPS,
it uses a TunnelingAgent, which does not send a client hello to the proxy, and hence the connection is terminated.
What I need is to tell Scrapy to first establish a connection via ScrapyProxyAgent and then use a TunnelingAgent, but I'm not sure how to do that.
I tried to create an https entry in DOWNLOAD_HANDLERS, but I'm not that expert.
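(For context, such a handler is registered through Scrapy's DOWNLOAD_HANDLERS setting; the dotted path in this sketch is a placeholder:)

# settings.py -- route https requests through the custom handler
DOWNLOAD_HANDLERS = {
    'https': 'myproject.handlers.MyHTTPDownloader',
}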
# Imports follow the layout of Scrapy 1.x's HTTP/1.1 handler module, where these
# helpers are defined (paths may differ between Scrapy versions).
from time import time
from urllib.parse import urldefrag

from twisted.internet import reactor
from twisted.web.http_headers import Headers as TxHeaders

from scrapy.core.downloader.handlers.http11 import (
    HTTP11DownloadHandler, ScrapyProxyAgent, TunnelingAgent, _RequestBodyProducer)
from scrapy.core.downloader.webclient import _parse
from scrapy.utils.python import to_bytes, to_unicode


class MyHTTPDownloader(HTTP11DownloadHandler):

    def download_request(self, request, spider):
        """Return a deferred for the HTTP download"""
        timeout = request.meta.get('download_timeout') or self._connectTimeout
        bindaddress = request.meta.get('bindaddress')
        proxy = request.meta.get('proxy')
        agent = ScrapyProxyAgent(reactor, proxyURI=to_bytes(proxy, encoding='ascii'),
                                 connectTimeout=timeout, bindAddress=bindaddress,
                                 pool=self._pool)
        _, _, proxyHost, proxyPort, proxyParams = _parse(proxy)
        proxyHost = to_unicode(proxyHost)
        url = urldefrag(request.url)[0]
        method = to_bytes(request.method)
        headers = TxHeaders(request.headers)
        omitConnectTunnel = b'noconnect' in proxyParams
        proxyConf = (proxyHost, proxyPort,
                     request.headers.get(b'Proxy-Authorization', None))
        if request.body:
            bodyproducer = _RequestBodyProducer(request.body)
        elif method == b'POST':
            bodyproducer = _RequestBodyProducer(b'')
        else:
            bodyproducer = None
        start_time = time()
        tunnelingAgent = TunnelingAgent(reactor, proxyConf,
                                        contextFactory=self._contextFactory,
                                        connectTimeout=timeout,
                                        bindAddress=bindaddress, pool=self._pool)
        return agent.request(method, to_bytes(url, encoding='ascii'),
                             headers, bodyproducer)
I need to establish a tunnel after the proxy agent is connected.
Is that even possible?
Thanks in advance.

Strophe connecting to Openfire error

I enter a JID and password in an HTML form and use Strophe to connect to Openfire, but when I press the login button, the XMPP server responds with error 302.
I enabled the option in Openfire and restarted it.
var BOSH_SERVICE = 'http://ip:7070/http-bind';
$('#btn-login').click(function() {
    if (!connected) {
        connection = new Strophe.Connection(BOSH_SERVICE);
        connection.connect($("#input-jid").val(), $("#input-pwd").val(), onConnect);
        jid = $("#input-jid").val();
    }
});
It seems a little harder than using Smack in Java, because of the network problem?
The problem is in the URI specified in BOSH_SERVICE.
The correct URI is:
http://ip:7070/http-bind/
Pay attention to the / at the end of the string.

Why do Icinga2 Telegram notifications fail for specific services?

I have created a custom Telegram notification, very similar to the email notifications. The problem is that it works for hosts and for most of the services, but not for all of them.
I do not post the *.sh files in the scripts folder, as they work!
In constants.conf I have added the bot token:
const TelegramBotToken = "MyTelegramToken"
I wanted to manage Telegram channels or chat IDs in the users file, so I have users/user-my-username.conf as below:
object User "my-username" {
  import "generic-user"

  display_name = "My Username"
  groups = ["faxadmins"]
  email = "my-username@domain.com"
  vars.telegram_chat_id = "@my_channel"
}
In templates/templates.conf I have added the below code:
template Host "generic-host-domain" {
  import "generic-host"

  vars.notification.mail.groups = ["domainadmins"]
  vars.notification["telegram"] = {
    users = [ "my-username" ]
  }
}

template Service "generic-service-fax" {
  import "generic-service"

  vars.notification["telegram"] = {
    users = [ "my-username" ]
  }
}
And in notifications I have:
template Notification "telegram-host-notification" {
command = "telegram-host-notification"
period = "24x7"
}
template Notification "telegram-service-notification" {
command = "telegram-service-notification"
period = "24x7"
}
apply Notification "telegram-notification" to Host {
import "telegram-host-notification"
user_groups = host.vars.notification.telegram.groups
users = host.vars.notification.telegram.users
assign where host.vars.notification.telegram
}
apply Notification "telegram-notification" to Service {
import "telegram-service-notification"
user_groups = host.vars.notification.telegram.groups
users = host.vars.notification.telegram.users
assign where host.vars.notification.telegram
}
This is all I have. As I said before, it works for some services and does not work for other services. I do not have any Telegram notification configuration in the service or host files.
To test, I use Icinga Web 2: go to a specific service on a host and send a custom notification. When I send a custom notification, I check the log file to see if there is any error, and it says completed:
[2017-01-01 11:48:38 +0000] information/Notification: Sending reminder 'Problem' notification 'host-***!serviceName!telegram-notification' for user 'my-username'
[2017-01-01 11:48:38 +0000] information/Notification: Completed sending 'Problem' notification 'host-***!serviceName!telegram-notification' for checkable 'host-***!serviceName' and user 'my-username'.
I should note that email is sent as expected. There is just a problem with the Telegram notifications for 2 services out of 12.
Any idea what the culprit could be? What is the problem here? Does the return value of the scripts (commands) affect this behaviour?
There is no Telegram config in any service whatsoever.
Some Telegram commands may fail because of the Markdown parser.
I've encountered this problem:
If the service name contains an underscore ('_'), the parser complains about an unclosed Markdown tag and the message is not sent.
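A hedged sketch (not the actual notification script) of how the send step can sidestep this: either escape underscores before using parse_mode=Markdown, or drop parse_mode entirely so Telegram treats the text as plain text.

import requests

TELEGRAM_BOT_TOKEN = 'MyTelegramToken'   # placeholder, as in constants.conf
CHAT_ID = '@my_channel'                  # channel/chat id from the User object

def send_notification(text, use_markdown=False):
    payload = {'chat_id': CHAT_ID, 'text': text}
    if use_markdown:
        # Escape underscores so '_' in a service name does not open a Markdown tag.
        payload['text'] = text.replace('_', '\\_')
        payload['parse_mode'] = 'Markdown'
    resp = requests.post(
        'https://api.telegram.org/bot{}/sendMessage'.format(TELEGRAM_BOT_TOKEN),
        data=payload,
        timeout=10,
    )
    resp.raise_for_status()

send_notification('PROBLEM: my_service on host-01 is CRITICAL')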