Remove Bind from live Broadcast - youtube-livestreaming-api

How can I remove the binding of a stream from a live broadcast?
I can't find the code to remove a broadcast binding in Python.
https://developers.google.com/youtube/v3/live/docs/liveBroadcasts/bind
After that I want to bind that same broadcast to another stream.
This is the code for a normal bind of a stream to a broadcast:
def bind_broadcast(youtube, broadcast_id, stream_id):
  bind_broadcast_response = youtube.liveBroadcasts().bind(
    part="id,contentDetails",
    id=broadcast_id,
    streamId=stream_id
  ).execute()

  print "Broadcast '%s' was bound to stream '%s'." % (
    bind_broadcast_response["id"],
    bind_broadcast_response["contentDetails"]["boundStreamId"])

In the YouTube Live Streaming API v3 documentation for the bind method you find:
The streamId parameter specifies the unique ID of the video stream that is being bound to a broadcast. If this parameter is omitted, the API will remove any existing binding between the broadcast and a video stream.
See: https://developers.google.com/youtube/v3/live/docs/liveBroadcasts/bind
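Applied to the helper above, a minimal sketch of the unbind call could look like this (the function name unbind_broadcast is just an illustrative choice; the only change is that streamId is omitted):

def unbind_broadcast(youtube, broadcast_id):
  # Omitting streamId removes any existing binding between the
  # broadcast and a video stream.
  unbind_response = youtube.liveBroadcasts().bind(
    part="id,contentDetails",
    id=broadcast_id
  ).execute()

  print "Broadcast '%s' is no longer bound to a stream." % (
    unbind_response["id"])

After that you can call bind_broadcast() again with the new stream's ID to bind the same broadcast to another stream.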

Ingest data into warp10 - Performance tip

We're looking for the best way to ingest data into Warp 10. We are on a microservices architecture that mainly uses Kafka.
Two solutions:
Use the Ingress endpoint as described here: https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/01_Ingress (this is the solution we use for now)
Use the Warp 10 Kafka plugin as described here: https://blog.senx.io/introducing-the-warp-10-kafka-plugin/
As described above, we currently use the Ingress solution: we aggregate data for x seconds and then call the Ingress API to send the data per packet, instead of calling the API each time we need to insert something.
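A minimal sketch of this kind of buffered Ingress push in Python (the host, token and GTS lines are placeholders; it assumes the standard /api/v0/update endpoint and the X-Warp10-Token header):

import requests

WARP10_UPDATE_URL = "https://our-warp10-host/api/v0/update"  # placeholder host
WRITE_TOKEN = "our-write-token"                              # placeholder token

def flush(lines):
    # Send one packet of GTS input-format lines in a single HTTP call,
    # instead of calling the Ingress API for every single datapoint.
    body = "\n".join(lines)
    response = requests.post(
        WARP10_UPDATE_URL,
        headers={"X-Warp10-Token": WRITE_TOKEN},
        data=body.encode("utf-8"),
    )
    response.raise_for_status()

# Lines aggregated for x seconds, then flushed as one packet.
flush([
    "1440000000000000// some.sensor.value{source=kafka} 42",
    "1440000001000000// some.sensor.value{source=kafka} 43",
])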
For a few days we have been experimenting with the Kafka plugin. We successfully set up the plugin and created an .mc2 file responsible for consuming data from a given topic and then inserting it into Warp 10 using UPDATE.
Questions:
Using the Kafka plugin, would it be better to apply the same buffering mechanism as the one we use with the Ingress endpoint? Or is there anything specific in the Warp 10 Kafka plugin that allows consuming the topic message by message and calling UPDATE for each one?
Today, as both solutions work, we're trying to find the differences in order to get the best ingestion performance, ideally without any buffering mechanism, because we are trying to stay as close to real time as possible.
MC2 file:
{
  'topics' [ 'our_topic_name' ] // List of Kafka topics to subscribe to
  'parallelism' 1 // Number of threads to start for processing the incoming messages. Each thread will handle a certain number of partitions.
  'config' { // Map of Kafka consumer parameters
    'bootstrap.servers' 'kafka-headless:9092'
    'group.id' 'senx-consumer'
    'enable.auto.commit' 'true'
  }
  'macro' <%
    // macro executed each time a kafka record is consumed
    /*
    // received record format :
    {
      'timestamp' 123 // The record timestamp
      'timestampType' 'type' // The type of timestamp, can be one of 'NoTimestampType', 'CreateTime', 'LogAppendTime'
      'topic' 'topic_name' // Name of the topic which received the message
      'offset' 123 // Offset of the message in 'topic'
      'partition' 123 // Id of the partition which received the message
      'key' ... // Byte array of the message key
      'value' ... // Byte array of the message value
      'headers' { } // Map of message headers
    }
    */
    "recordArray" STORE
    "preprod.write" "token" STORE

    // macro can be called on timeout with an empty entry map
    $recordArray SIZE 0 !=
    <%
      $recordArray 'value' GET // kafka record value is retrieved in bytes
      'UTF-8' BYTES-> // convert bytes to string (WARP10 INGRESS format)
      JSON->
      "value" STORE

      "Records received through Kafka" LOGMSG
      $value LOGMSG

      $value
      <%
        DROP
        PARSE
        // PARSE outputs a gtsList, including only one gts
        0 GET
        // GTS rename is required to use UPDATE function
        "gts" STORE
        $gts $gts NAME RENAME
      %>
      LMAP

      // Store GTS in Warp10
      $token
      UPDATE
    %>
    IFT
  %> // end macro
  'timeout' 10000 // Polling timeout (in ms), if no message is received within this delay, the macro will be called with an empty map as input
}
If you want to cache something in Warp 10 to avoid lots of UPDATE per second, you can use SHM (SHared Memory). This is a built-in extension you need to activate.
Once activated, use it with SHMSTORE and SHMLOAD to keep objects in RAM between two WarpScript executions.
In your example, you can push all the incoming GTS into a list, or a list of lists of GTS, using +! to append elements to an existing list.
The MERGE of all the GTS in the cache (by name + labels) and the UPDATE into the database can then be done in a runner (don't forget to use a MUTEX).
Don't forget the total operation cost:
The ingress format can be optimized for ingestion speed if you do not repeat the classname and labels, and if you gather lines per GTS (see the example after this list). See here.
PARSE deserializes data from the Warp 10 ingress format.
UPDATE serializes data to the Warp 10 optimized ingress format (and pushes it to the update endpoint).
The update endpoint deserializes it again.
It makes sense to do these deserialize/serialize/deserialize operations if your input data is far from the optimal ingress format. It also makes sense if you want to RANGECOMPACT your data to save disk space, or to do any preprocessing.
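For illustration, the optimization of the ingress format looks roughly like this (classname, labels and values are made up; the '=' prefix is the documented way to reuse the classname and labels of the previous line):

Repeating the classname and labels on every line:
1440000000000000// sensor.temperature{room=kitchen} 22.5
1440000001000000// sensor.temperature{room=kitchen} 22.6
1440000002000000// sensor.temperature{room=kitchen} 22.7

Gathering lines per GTS and omitting the repeated classname and labels:
1440000000000000// sensor.temperature{room=kitchen} 22.5
=1440000001000000// 22.6
=1440000002000000// 22.7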

mediasoup - mismatch between payload types

I'm trying to use mediasoup to forward RTP streams with room.createRtpStreamer.
My problem is that the payload type (for OPUS) I get from producer.rtpParameters.codecs[i].payloadType is 111, while the one I see on the actual RTP packets is 100 (seen in Wireshark).
I tried to set preferredPayloadType in my server's config, but it seems to make no difference.
Note: if I hardcode 100 as the payload type for the OPUS stream, I can view/hear the stream using ffplay.
I'm using Chrome 55 (latest) and mediasoup 2.0.5 (latest).
Any help will be appreciated.
The Producer has the RTP parameters decided by the client (browser), so the PT of OPUS is 111 (the default value generated by Chrome).
But, once in the mediasoup server, the Consumers associated with that Producer use the RTP parameters given during room creation. So, if the codecs given to room = new server.Room(codecs) [1] have a preferredPayloadType field, that will be used within the Consumers (otherwise it will be randomly chosen by the server).
When you call room.createRtpStreamer() you provide a Producer, and the generated RtpStreamer [2] has an associated Consumer and PlainRtpTransport, so you should read rtpStreamer.consumer.rtpParameters rather than the producer's parameters.
[1] https://mediasoup.org/documentation/mediasoup/api/#server-Room
[2] https://mediasoup.org/documentation/mediasoup/api/#RtpStreamer
You should have a look at the SDP of the call setup message and check whether you get 111 or 100 for the OPUS payload.
From there you can decide which part has the bug (Chrome or mediasoup).
In the call setup message (initiating the call), check the payload type of the OPUS codec.
The called party should respond with the same payload number if it accepts OPUS and then both parties should use the same payload number in the RTP packets.
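For example, in the SDP of the offer/answer you would look for the OPUS rtpmap line (the surrounding values below are illustrative):

m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104
a=rtpmap:111 opus/48000/2

If the SDP negotiates 111 but the forwarded RTP packets carry 100, the remapping is happening on the mediasoup side rather than in Chrome.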
So I found that the payload type I get from producer.rtpParameters.codecs[i].payloadType was the original one, and that room.createRtpStreamer changes the payload type.
I ended up doing the following to resolve the issue:
// get the payload (type) from the room.rtpCapabilities.codecs.preferredPayloadType for the specific codec
let payload = this.room.rtpCapabilities.codecs.find((c) => {
  return c.name === producer.rtpParameters.codecs[i].name;
}).preferredPayloadType;

PDU Response Parameters

I'm creating an SMPP server using Node.js.
It's all okay, but now I have to send the client a custom parameter inside pdu.response(), like 'message_id'.
If I do:
session.send(pdu.response({command_status: 999}));
It works, but if I do
session.send(pdu.response({message_id: 999}));
I always receive
PDU {
command_length: 16,
command_id: 2432432,
command_status: 0,
sequence_number: 1,
command: 'bind_transceiver_resp' }
So, I have a question: can I do this, or is it impossible using SMPP?
No, you cannot actually add custom parameters to the PDU response, but check whether the optional parameters fit your needs.

How to write to different HBase tables in Apache Flume

I have configured Apache Flume to receive messages (JSON type) via an HTTP source. My sinks are MongoDB and HBase.
How can I write a message to different collections and tables according to a specified field?
For example: let's assume we have T_1 and T_2, and an incoming message should be saved in T_1. How can I handle such messages and decide where each one should be saved?
Try using the Multiplexing Channel Selector. The default one (the Replicating Channel Selector) copies the Flume event produced by the source to all of its configured channels. The multiplexing one, however, is able to put the event into a specific channel depending on the value of a header within the Flume event.
In order to create such a header according to your application logic, you will need to create a custom handler for the HTTPSource. This can easily be done by implementing the HTTPSourceHandler interface of the API.
You can use a regex interceptor for tagging the message type, plus multiplexing for sending it to the right destination.
Example, based on a message matching "TEST1".
Regex for a string / field:
agent.sources.s1.interceptors.i1.type=regex_extractor
agent.sources.s1.interceptors.i1.regex=(TEST1)
Assign the interceptor match to serializer SE1 (this puts the matched value into the "Test" header):
agent.sources.s1.interceptors.i1.serializers=SE1
agent.sources.s1.interceptors.i1.serializers.SE1.name=Test
Send to the required channel; channels (c1, c2) can be mapped to different sinks:
agent.sources.s1.selector.type=multiplexing
agent.sources.s1.selector.header=Test
agent.sources.s1.selector.mapping.TEST1=c1
All events matching the TEST1 regex will go to channel c1; others will be defaulted to c2:
agent.sources.s1.selector.default=c2

VEMap and a GeoRSS feed (hosted separately)

The scenario is as follows:
A WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain as a number of different applications have access to it.
A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object).
Now, VEMap can accept an input feed in this format via the following:
var layer = new VEShapeLayer();
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
map.ImportShapeLayerData(veLayerSpec, onComplete, true);
onComplete is a callback function I'm using to replace the default pin graphic with something custom.
The question is in regard to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format.
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);
When I do this, I get the VEMap error ("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (i.e., "feed.xml") there is no error.
The order of operations is currently: remote feed -> local handler -> VEMap import
If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
The format I have above is actually very close to what I needed. A similar solution was found by Mike McDougall. Although I was passing the RSS feed directly through the handler (writing the read stream directly), I just needed to specify the following from within the handler:
context.Response.ContentType = "text/xml";
context.Response.ContentEncoding = System.Text.Encoding.UTF8;
With the above fix, I'm able to have a remote GeoRSS feed successfully load a separately hosted Virtual Earth map instance.