I am receiving HDCP-encrypted H.264 content over TCP/IP on a PC from an encoder. I need to decrypt it and pass the buffer to the GPU for decoding and HDMI (HDCP-encrypted) output. I plan to do the complete processing inside a TEE. Where can I store the HDCP encryption key?
I want to parse a pcap file and count the encrypted records, both TLS and DTLS separately. Scapy doesn't support the DTLS layer, but I understand there is a support package (https://github.com/tintinweb/scapy-ssl_tls/) that can handle this.
scapy-ssl_tls
The problem is that I run into import errors with packages such as TLS and DTLS records:
from scapy_ssl_tls.ssl_tls import TLS
Can you please suggest any other method to count the encrypted records, e.g. tshark?
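If scapy-ssl_tls won't import, one stdlib-only fallback is to classify raw record headers yourself. Below is a minimal sketch (a heuristic over the first bytes of each transport payload, not a full parser; the function names are illustrative):

```python
# Heuristic classifier for raw TLS vs DTLS record headers.
# It only inspects the leading bytes of a TCP/UDP payload.

def classify_record(payload: bytes) -> str:
    """Return 'tls', 'dtls', or 'unknown' based on the record header."""
    if len(payload) < 3:
        return "unknown"
    content_type = payload[0]
    version = (payload[1], payload[2])
    # Record content types: 20=ChangeCipherSpec, 21=Alert,
    # 22=Handshake, 23=ApplicationData
    if content_type not in (20, 21, 22, 23):
        return "unknown"
    if version[0] == 0x03 and 0x01 <= version[1] <= 0x04:
        return "tls"    # TLS 1.0 - 1.3 record-layer versions
    if version in ((0xFE, 0xFF), (0xFE, 0xFD)):
        return "dtls"   # DTLS 1.0 / 1.2 use inverted version numbers
    return "unknown"

def count_records(payloads):
    """Tally classifications over an iterable of raw payloads."""
    counts = {"tls": 0, "dtls": 0, "unknown": 0}
    for p in payloads:
        counts[classify_record(p)] += 1
    return counts
```

Alternatively, tshark's display filters can count matching packets directly, e.g. `tshark -r capture.pcap -Y dtls | wc -l`.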
Scenario: PeerA wants to stream a video to PeerB and PeerC. PeerA won't receive anything from PeerB and PeerC, and there is no communication between PeerB and PeerC. Hole punching happens between the peers and the SFU, as the SFU is itself a WebRTC endpoint. Resources and bandwidth on PeerA can be saved using the SFU solution.
But the SFU communicates with peers through random ports acquired from the hole-punching process, whereas a TURN server allocates a single endpoint (ip:port) per peer.
Now, unfortunately, PeerB and PeerC happen to be behind a symmetric NAT.
My observation is that the SFU approach will fail here, as it cannot successfully complete the hole-punching process with PeerB and PeerC.
So PeerA sends the feed to PeerB and PeerC via the TURN server. Basically, this means PeerA is sending the feed twice, which is inefficient.
Question:
Can an SFU with a public IP replace the need for a TURN server and connect to peers via a defined port? In this case the SFU could save PeerA's bandwidth by acquiring only one feed from PeerA, whereas the TURN server approach would have required two feeds from PeerA, one each for PeerB and PeerC.
I'm currently using Redis in an IoT application to receive a stream of data from an acquisition board; all other communication between the PC and the board is based on the Modbus/TCP protocol.
A colleague of mine has recently advanced the proposal to completely remove Modbus, and use Redis for all communications instead.
Supposedly this would require a mixture of variable exchange and PUB/SUB signals.
While the idea is attractive, I was just wondering if someone has already done some research in this direction.
Modbus is a widely used protocol to communicate between industrial devices on one side and computers / gateways on the other side. The device is the server, the computer is the client. Sensor data is polled, changes are pushed.
Redis provides a protocol, RESP (https://redis.io/topics/protocol), between Redis clients and the Redis server. The devices would then be clients, and the computer the server.
Replacing Modbus with RESP would thus invert the client/server relationship.
While there are advantages (better-typed data transfer), it's uncommon to select RESP in that area. MQTT or similar would be more common.
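To make the comparison concrete, here is a minimal sketch of RESP framing: every Redis command travels as an array of bulk strings (the channel name in the usage example is illustrative):

```python
# Minimal sketch of RESP framing: encode a command as a RESP array
# of bulk strings, the wire format Redis clients send to the server.

def encode_resp(*parts: str) -> bytes:
    """Encode a command (e.g. PUBLISH <channel> <value>) as RESP."""
    out = [f"*{len(parts)}\r\n".encode()]       # array header
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string
    return b"".join(out)
```

For example, `encode_resp("PUBLISH", "sensor", "21.5")` produces the bytes a client would write on the socket to push a reading to subscribers.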
I'm going to implement a Java VoIP server to work with WebRTC. Implementing a browser p2p connection is really straightforward. A server-to-client connection is slightly more tricky.
After a quick look at the RFCs, I wrote down what should be done to make the Java server act as a browser. Kindly help me complete the list below.
Implement a STUN server. The server should be able to respond to binding requests and keep-alive pings.
Implement the DTLS protocol along with the DTLS handshake. After the DTLS handshake, the shared secret will be used as keying material within SRTP and SRTCP.
Support multiplexing of the SRTP and SRTCP streams. SRTP and SRTCP use the same port to address NAT issues.
Not sure whether I should implement SRTCP. I believe the connection will not be broken if the server does not send SRTCP reports to the client.
Decode SRTP stream to RTP.
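For the multiplexing step above, RFC 5761 demultiplexes RTP and RTCP arriving on a shared port by inspecting the second octet of each packet; a minimal sketch of that check (a heuristic that assumes conforming payload types, shown in Python for brevity rather than the question's Java):

```python
# RFC 5761 demultiplexing: on a shared port, the second octet of an
# RTCP packet holds the full packet type (SR=200, RR=201, SDES=202,
# BYE=203, APP=204), which falls in [192, 223]. RTP payload types are
# restricted so that marker bit + PT never lands in that range.

def is_rtcp(packet: bytes) -> bool:
    """Return True if the packet should be routed to the RTCP handler."""
    if len(packet) < 2:
        return False
    return 192 <= packet[1] <= 223
```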
Questions:
Is there anything else which should be done on the server side?
How does WebRTC handle SRTCP reports? Does it adjust sample rate/bitrate depending on the SRTCP report?
WebRTC claims that the following issues are addressed:
packet loss concealment
echo cancellation
bandwidth adaptivity
dynamic jitter buffering
automatic gain control
noise reduction and suppression
Are these WebRTC internals or codec (Opus) internals? Do I need to do anything on the server side to handle these issues, for example variable bitrate, etc.?
The first step would be to implement Interactive Connectivity Establishment (RFC 5245). Whether you make use of a STUN/TURN server or not is irrelevant; your code needs to issue connectivity checks (which use STUN messages) to the browser and respond to the browser's connectivity checks. ICE is a fairly complex state machine, but it's doable.
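For reference, each connectivity check starts from a STUN Binding Request (RFC 5389). A minimal sketch of building just the 20-byte message header follows (ICE attributes such as USERNAME, PRIORITY, and MESSAGE-INTEGRITY are omitted; shown in Python for brevity rather than the question's Java):

```python
import os
import struct

# STUN message header (RFC 5389): 16-bit type, 16-bit attribute
# length, 32-bit magic cookie, 96-bit transaction ID.
MAGIC_COOKIE = 0x2112A442
BINDING_REQUEST = 0x0001

def stun_binding_request(txn_id: bytes = None) -> bytes:
    """Build a Binding Request header with no attributes."""
    txn_id = txn_id or os.urandom(12)   # random 96-bit transaction ID
    assert len(txn_id) == 12
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + txn_id
```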
You don't have to reinvent the wheel. STUN/TURN servers are external components. Use them as they are. The WebRTC source code is available, which you can use in your application code, calling the related methods.
Please refer to this similar post: Server as WebRTC data channel peer
I have a sensor node which broadcasts sensor data as UDP packets to a specific port. I have to secure this broadcast. I tried to find out how I can achieve that and found that DTLS is the answer.
What do I need to do to implement DTLS? Initially, I thought I did not need certificates; however, I learnt that DTLS also uses a handshake to exchange keys. Do I need to create certificates for that?
DTLS is a version of TLS (which is end-to-end security) used over UDP or other unreliable packet delivery mechanisms. DTLS cannot be used with broadcasting, which is unidirectional.
Now, what is "secure" in your case? Do you need to encrypt the data? Encryption is a concept that runs contrary to broadcasting (as the number of recipients grows, security drops exponentially). Signing the data is possible. It is of course possible to encrypt the data for one or multiple recipients (using either symmetric or public-key encryption), but again this is hardly a broadcast and has nothing to do with UDP itself (or any other transport).
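To illustrate the signing option mentioned above, here is a minimal sketch that authenticates each broadcast datagram with an HMAC tag under a pre-shared key. This gives every key holder integrity and authenticity, but no confidentiality; the tag length and key handling are illustrative choices, not a vetted design:

```python
import hmac
import hashlib

TAG_LEN = 16  # truncated SHA-256 HMAC tag (illustrative length)

def seal(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC tag to a datagram payload before sending."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def open_sealed(key: bytes, datagram: bytes):
    """Verify and strip the tag; return payload, or None if invalid."""
    if len(datagram) < TAG_LEN:
        return None
    payload, tag = datagram[:-TAG_LEN], datagram[-TAG_LEN:]
    expect = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload if hmac.compare_digest(tag, expect) else None
```

A receiver without the key (or a tampered datagram) fails verification, while the payload itself stays readable to anyone on the network, which matches the broadcast model better than per-recipient encryption.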