What is the best way to use HLS.js to make a custom license request to play encrypted HLS content? What I'm looking for is to override the AES key URI that's in the manifest with one of my own, i.e. overriding the LAURL.
const vid = document.getElementsByTagName('video')[0];
const config = {
  debug: true,
  startPosition: 100,
  maxBufferHole: 0.5,
  maxSeekHole: 2,
};
const hls = new Hls(config);
hls.attachMedia(vid);
hls.on(Hls.Events.MEDIA_ATTACHED, function () {
  console.log('video element and hls.js are now bound together');
  hls.loadSource('https://example.com/movie.m3u8');
  hls.on(Hls.Events.MANIFEST_PARSED, function (event, data) {
    console.log('manifest parsed, found ' + data.levels.length + ' quality levels');
  });
});
Should I be using HLS.js's custom loader?
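For reference, a custom loader is one direction this could take. Below is a rough, untested sketch: it assumes the key request can be recognised by its URL (here a .key extension) and uses a placeholder license URL, so both would need to be adapted to the actual manifest and license server.
class CustomKeyLoader extends Hls.DefaultConfig.loader {
  load(context, config, callbacks) {
    // If this looks like an AES key request, swap in our own license URL.
    // 'https://license.example.com/key' is a placeholder, not a real endpoint.
    if (/\.key(\?|$)/.test(context.url)) {
      context.url = 'https://license.example.com/key';
    }
    super.load(context, config, callbacks);
  }
}
const hls = new Hls({ ...config, loader: CustomKeyLoader });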
I've built a simple HERE map using Vue 2 and the JS API, version 3.1.30.17. The map works fine in all browsers except Firefox v102.
This is the error message in Firefox:
Tangram [error]: Error for style group 'non-collision' for tile 20/7/68/41/7 CanvasRenderingContext2D.drawImage: Passed-in canvas is empty: loadTexture#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:417267
e/sn.addWorker/<#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:63015
EventListener.handleEvent*e/sn.addWorker#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:62515
e/value/<#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:515089
value#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:515502
value#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:514847
e/value/this.initializing<#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:511497
promise callback*value#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:431:511472
Ul#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:335:441
p.eh#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:378:446
p.Ge#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:329:436
p.Ge#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:376:356
S#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:369:214
T.prototype.u#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:410:166
T#https://js.api.here.com/v3/3.1.30.17/mapsjs-core.js:401:290
... The full error is even longer.
This is the method I use to initialize HERE Maps inside Vue's mounted hook:
async initializeHereMap() {
  const mapContainer = this.$refs.hereMap;
  const H = window.H;
  // Initialize the platform object:
  this.platform = new H.service.Platform({
    apikey: this.apiKey,
  });
  await this.geocode(this.platform, this.originAddress)
    .then(data => this.routingParameters.origin = data[0]);
  await this.geocode(this.platform, this.destinationAddress)
    .then(data => this.routingParameters.destination = data[0]);
  // Obtain the default map types from the platform object
  const defaultLayers = this.platform.createDefaultLayers({
    lg: window.navigator.language,
  });
  // Instantiate (and display) a map object:
  const map = new H.Map(mapContainer, defaultLayers.vector.normal.map, {
    zoom: this.zoom,
    center: this.center,
    padding: {
      top: this.padding,
      bottom: this.padding,
      left: this.padding,
      right: this.padding,
    },
  });
  // create map pins
  const mapPinOrigin = this.addMapPin('A', 40);
  const mapPinDestination = this.addMapPin('B', 40);
  let linestring = new H.geo.LineString();
  linestring.pushPoint(this.routingParameters.origin);
  linestring.pushPoint(this.routingParameters.destination);
  // Create a polyline to display the route:
  let routeLine = new H.map.Polyline(linestring, {
    linestring,
    style: { strokeColor: '#3F80C4', lineWidth: 5 },
  });
  // Create a marker for the start point:
  let startMarker = new H.map.Marker(this.routingParameters.origin, { icon: mapPinOrigin });
  // Create a marker for the end point:
  let endMarker = new H.map.Marker(this.routingParameters.destination, { icon: mapPinDestination });
  // Add the route polyline and the two markers to the map:
  map.addObjects([routeLine, startMarker, endMarker]);
  // Set the map's viewport to make the whole route visible:
  map.getViewModel().setLookAtData({ bounds: routeLine.getBoundingBox() });
  // add behavior control
  if (this.behaviors) new H.mapevents.Behavior(new H.mapevents.MapEvents(map));
  // add UI
  if (this.controls) H.ui.UI.createDefault(map, defaultLayers);
},
Is anyone facing the same issue and has managed to solve it?
Did you follow this example: https://developer.here.com/tutorials/how-to-implement-a-web-map-using-vuejs/ ?
The Tangram error is related to rendering, e.g. when rendering map objects (icons, markers, polylines, etc.) and map vector tiles.
I don't think this issue is related to the HERE JS Map API, because all of the examples at https://developer.here.com/documentation/examples/maps-js work fine in my Firefox 102.0.1.
Could it be that some map objects (icons, markers, etc.) are created at some point but not yet finished, and are then pushed onto the map? Are you creating an icon in some asynchronous function?
Or, as in your code:
await this.geocode(this.platform, this.originAddress)
.then(data => this.routingParameters.origin = data[0]);
You don't know when the geocode call will finish (e.g. due to slow internet connectivity).
It could be that the call above has not completed yet, while your code has already started running this part:
linestring.pushPoint(this.routingParameters.origin); // <-- 'this.routingParameters.origin' could be null
linestring.pushPoint(this.routingParameters.destination);
// Create a polyline to display the route:
let routeLine = new H.map.Polyline(linestring, { // <-- could cause the Tangram error because 'linestring' is malformed while 'origin' is still undefined!
  linestring,
  style: { strokeColor: '#3F80C4', lineWidth: 5 },
});
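One way to be sure both geocode results exist before the linestring is built could look like this (just a sketch; it assumes geocode resolves to an array of positions, as in the question's code):
// Wait for both geocode calls and bail out if either result is missing,
// instead of pushing a possibly-undefined point into the linestring.
const [origin] = await this.geocode(this.platform, this.originAddress);
const [destination] = await this.geocode(this.platform, this.destinationAddress);
if (!origin || !destination) {
  console.warn('Geocoding did not return a result; skipping route drawing');
  return;
}
this.routingParameters.origin = origin;
this.routingParameters.destination = destination;
linestring.pushPoint(origin);
linestring.pushPoint(destination);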
The polyline options are not correct in:
new H.map.Polyline(linestring, {
  linestring,
  style: { strokeColor: '#3F80C4', lineWidth: 5 },
})
Why is 'linestring' passed a second time in the options above?
Please follow correct syntax on:
https://developer.here.com/documentation/maps/3.1.31.0/api_reference/H.map.Polyline.html#.Options
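For illustration, the constructor call without the stray option might look like this (a sketch following the Options reference linked above):
// The geometry goes in as the first argument only; the options object
// should just carry things like 'style' and 'data'.
let routeLine = new H.map.Polyline(linestring, {
  style: { strokeColor: '#3F80C4', lineWidth: 5 },
});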
I'm trying to make a group audio recorder with Agora.io, so I first need to create an empty .aac audio file so that I can record the audio into this file.
I use the react-native-fetch-blob library to handle the file system.
Here is my code for recording:
const handleAudio = async () => {
  const fs = RNFetchBlob.fs;
  const dirs = fs.dirs;
  if (!startAudio) {
    fs.createFile(dirs.DocumentDir + '/record.aac', 'foo', 'utf8').then(() => {
      _engine?.startAudioRecording(
        dirs.DocumentDir + '/record.aac',
        AudioSampleRateType.Type44100,
        AudioRecordingQuality.Medium,
      );
      setStartAudio(true);
    });
  } else {
    _engine?.stopAudioRecording();
  }
};
The problem is that the file 'record.aac' always stays the same; the Agora.io recorder never updates it, and it still just contains 'foo'...
The startAudioRecording function expects a directory instead of a file.
Example: /sdcard/emulated/0/audio/aac.
It also returns a promise that you can check for the result.
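A minimal sketch of that suggestion, reusing the question's names (the exact directory path is an assumption for your device):
// Pass a directory-style path and check the promise returned by startAudioRecording.
const recordingDir = RNFetchBlob.fs.dirs.DocumentDir + '/audio/aac';
_engine?.startAudioRecording(
  recordingDir,
  AudioSampleRateType.Type44100,
  AudioRecordingQuality.Medium,
)
  .then(() => setStartAudio(true))
  .catch(err => console.log('startAudioRecording failed', err));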
In my React Native 0.63.2 app, after the user uploads images of artwork, the app will do 2 things:
1. save artwork record and image records on backend server
2. save the images into cloud storage
These 2 things are related and both have to be completed successfully together. Here is the code:
const clickSave = async () => {
  console.log("save art work");
  try {
    // save artwork to backend server
    let art_obj = {
      _device_id,
      name,
      description,
      tag: (tagSelected.map((it) => it.name)),
      note: '',
    };
    let img_array = [], oneImg;
    imgs.forEach(ele => {
      oneImg = {
        fileName: "f" + helper.genRandomstring(8) + "_" + ele.fileName,
        path: ele.path,
        width: ele.width,
        height: ele.height,
        size_kb: Math.ceil(ele.size / 1024),
        image_data: ele.image_data,
      };
      img_array.push(oneImg);
    });
    art_obj.img_array = [...img_array];
    art_obj = JSON.stringify(art_obj);
    // assemble images
    let url = `${GLOBAL.BASE_URL}/api/artworks/new`;
    await helper.getAPI(url, _result, "POST", art_obj); // <<== #1. send artwork and image record to backend server
    // save image to cloud storage
    var storageAccessInfo = await helper.getStorageAccessInfo(stateVal.storageAccessInfo);
    if (storageAccessInfo && storageAccessInfo !== "upToDate")
      // update the context value
      stateVal.updateStorageAccessInfo(storageAccessInfo);
    //
    let bucket_name = "oss-hz-1"; // <<<
    const configuration = {
      maxRetryCount: 3,
      timeoutIntervalForRequest: 30,
      timeoutIntervalForResource: 24 * 60 * 60
    };
    const STSConfig = {
      AccessKeyId: accessInfo.accessKeyId,
      SecretKeyId: accessInfo.accessKeySecret,
      SecurityToken: accessInfo.securityToken
    }
    const endPoint = 'oss-cn-hangzhou.aliyuncs.com'; // <<<
    const last_5_cell_number = _myself.cell.substring(myself.cell.length - 5);
    let filePath, objkey;
    img_array.forEach(item => {
      console.log("init sts");
      AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration)
      // console.log("before upload", AliyunOSS);
      objkey = `${last_5_cell_number}/${item.fileName}`; // virtual subdir and file name
      filePath = item.path;
      AliyunOSS.asyncUpload(bucket_name, objkey, filePath).then((res) => { // <<== #2 send images to cloud storage with callback. But no action required after success.
        console.log("Success : ", res) // <<== not really necessary to have console output
      }).catch((error) => {
        console.log(error)
      })
    })
  } catch (err) {
    console.log(err);
    return false;
  };
};
The concern with the code above is that those 2 async calls may take a long time to finish, and the user may end up waiting too long. After clicking the save button, the user may just want to move on to the next page of the UI and leave everything else running behind the scenes. Is there a way to do that? Would removing the await (#1) and the callback (#2) achieve it?
If you want to do both tasks in the background, then you can't use await. I see that you are using await on sending the images to the backend, so remove that and use .then().catch(); you don't need to remove the callback on #2.
If you need to make sure #1 finishes before doing #2, then you will need to move the code for #2 into #1's promise-resolving code (inside the .then()).
Now, for catching errors: you will need some sort of error handling that alerts the user that an error occurred and that they should trigger another upload. One thing you can do is a red banner; I'm sure there are packages out there that can do that for you.
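A minimal sketch of that idea, reusing the question's names (showErrorBanner is a hypothetical helper for whatever alert/banner component you pick):
// Fire-and-forget: don't await; chain #2 inside #1's .then() and surface errors to the user.
helper.getAPI(url, _result, "POST", art_obj)                          // #1: backend save
  .then(() => {
    img_array.forEach(item => {
      const objkey = `${last_5_cell_number}/${item.fileName}`;
      AliyunOSS.asyncUpload(bucket_name, objkey, item.path)           // #2: cloud upload
        .catch(error => showErrorBanner('Image upload failed', error));      // hypothetical banner helper
    });
  })
  .catch(error => showErrorBanner('Saving the artwork failed', error));       // hypothetical banner helper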
This one I've been banging my head against for a few weeks now.
Scenario:
1. Let the user generate an image out of layers they select
2. Convert the image to a canvas
3. Share the image from the canvas on the Facebook wall using share_open_graph (along with the image, a short text and title will be shared)
I've already had a solution in place using publish_actions but that was recently removed from the API and is no longer available.
I am using JS and HTML for all of the code handling.
The issue is that I can generate a PNG image from the canvas, but it is saved as base64, and share_open_graph doesn't allow this type of image; it needs a straightforward URL such as './example.png'. I have tried several approaches, e.g. converting and saving the image with canvas2image and file-system, but all of these fail.
Does anyone have similar scenario and possible solution from April/May 2018 using share_open_graph ?
My current code looks like this - it fails at the image conversion and saving to a file (Uncaught (in promise) TypeError: r.existsSync is not a function at n (file-system.js:30)). But I am open to different solutions, as this is clearly not working.
html2canvas(original, { width: 1200, height: 628 }).then(function(canvas) {
  fb_image(canvas);
});
var fb_image = function(canvas) {
  canvas.setAttribute('id', 'canvas-to-share');
  document.getElementById('img-to-share').append(canvas);
  fbGenerate.style.display = 'none';
  fbPost.style.display = 'block';
  var canvas = document.getElementById('canvas-to-share');
  var data = canvas.toDataURL('image/png');
  var encodedPng = data.substring(data.indexOf(',') + 1, data.length);
  var decodedPng = base64.decode(encodedPng);
  const buffer = new Buffer(data.split(/,\s*/)[1], 'base64');
  pngToJpeg({ quality: 90 })(buffer).then(output =>
    fs.writeFile('./image-to-fb.jpeg', output));
  var infoText_content = document.createTextNode('Your image is being posted to facebook...');
  infoText.appendChild(infoText_content);
  // Posting png from imageToShare to facebook
  fbPost.addEventListener('click', function(eve) {
    FB.ui(
      {
        method: 'share_open_graph',
        action_type: 'og.shares',
        href: 'https://example.com',
        action_properties: JSON.stringify({
          object: {
            'og:url': 'https://example.com',
            'og:title': 'My shared image',
            'og:description': 'Hey I am sharing on fb!',
            'og:image': './image-to-fb.jpeg',
          },
        }),
      },
      function(response) {
        console.log(response);
      }
    );
  });
};
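Since og:image has to be an absolute, publicly reachable URL rather than a data URI or a local file, one direction is to upload the canvas data to your own server first and share the URL it returns. A rough sketch, where '/upload' is a hypothetical endpoint responding with { url: '...' }:
// Turn the canvas into a JPEG blob, upload it, then share the hosted URL.
canvas.toBlob(function(blob) {
  var form = new FormData();
  form.append('image', blob, 'image-to-fb.jpeg');
  fetch('/upload', { method: 'POST', body: form })   // hypothetical upload endpoint
    .then(function(res) { return res.json(); })
    .then(function(body) {
      FB.ui({
        method: 'share_open_graph',
        action_type: 'og.shares',
        action_properties: JSON.stringify({
          object: {
            'og:url': 'https://example.com',
            'og:title': 'My shared image',
            'og:description': 'Hey I am sharing on fb!',
            'og:image': body.url,   // absolute URL returned by the server
          },
        }),
      }, function(response) {
        console.log(response);
      });
    });
}, 'image/jpeg', 0.9);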
In OpenTok, with OT.initPublisher, you can only pass a deviceId to the audioSource. Does anyone know of a way to stream an audio file?
For example, I have done this:
navigator.getUserMedia({ audio: true, video: false },
  function(stream) {
    var context = new AudioContext();
    var microphone = context.createMediaStreamSource(stream);
    var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
    var mixedOutput = context.createMediaStreamDestination();
    microphone.connect(mixedOutput);
    backgroundMusic.connect(mixedOutput);
  },
  handleError);
This way I can get a stream with the voice and my music, but how do I feed this stream to a publisher? Is that possible, or is there another way to do this?
Update: There is now an official way to do this, using the videoSource and audioSource properties provided to OT.initPublisher, please see the documentation: https://tokbox.com/developer/sdks/js/reference/OT.html#initPublisher
This is an example of how to stream a canvas element as a video track: https://github.com/opentok/opentok-web-samples/tree/master/Publish-Canvas
You can apply the same technique to stream an audio track.
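A rough sketch of that official route (assuming an SDK version where audioSource accepts a MediaStreamTrack, per the linked reference; 'publisher-element' is a placeholder element id):
// Mix the audio in Web Audio, then hand the resulting track to OT.initPublisher.
var context = new AudioContext();
var backgroundMusic = context.createMediaElementSource(document.getElementById('song'));
var destination = context.createMediaStreamDestination();
backgroundMusic.connect(destination);
var publisher = OT.initPublisher('publisher-element', {
  audioSource: destination.stream.getAudioTracks()[0], // custom/mixed audio track
});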
Old Answer:
It's not currently possible with the officially supported API but there is a way to do it.
Please see the TokBox blog post about Camera Filters: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
In order to modify the stream before it reaches the OpenTok JS SDK we use the mockGetUserMedia function to intercept the stream:
https://github.com/aullman/opentok-camera-filters/blob/master/src/mock-get-user-media.js
You could invoke mockGetUserMedia with a function which does your audio mixing. Something like this:
mockGetUserMedia(function(originalStream) {
var context = new AudioContext();
var microphone = context.createMediaStreamSource(originalStream);
var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
var mixedOutput = context.createMediaStreamDestination();
microphone.connect(mixedOutput);
backgroundMusic.connect(mixedOutput);
var stream = mixedOutput.stream;
originalStream.getVideoTracks().map(function(track) {
stream.addTrack(track);
});
return stream;
});
Note: I have not tested this function but it should lead you in the right direction. Remember that this technique is error prone and not officially supported by TokBox.
We are currently working on a new feature which will enable this use case but I cannot give a time estimate of when it will be available.
Thank you for the help, but we haven't been able to get it working since this morning.
So we made a separate file with this code, which is loaded before the OpenTok library in our HTML:
function mockGetUserMedia(mockOnStreamAvailable) {
  var oldGetUserMedia = void 0;
  if (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia) {
    oldGetUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    navigator.webkitGetUserMedia = navigator.getUserMedia = navigator.mozGetUserMedia = function getUserMedia(constraints, onStreamAvailable, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied) {
      return oldGetUserMedia.call(navigator, constraints, function (stream) {
        onStreamAvailable(mockOnStreamAvailable(stream));
      }, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied);
    };
  } else {
    console.warn('Could not find getUserMedia function to mock out');
  }
}
mockGetUserMedia(function(stream) {
  var context = new AudioContext();
  var bgMusic = context.createMediaElementSource(document.getElementById("song"));
  var microphone = context.createMediaStreamSource(stream);
  var destination = context.createMediaStreamDestination();
  bgMusic.connect(destination);
  microphone.connect(destination);
  var mixedStream = destination.stream;
  stream.getVideoTracks().map(function(track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});
In our Angular code, we init the session, create a publisher and publish it, but we get this error:
Uncaught DOMException: Failed to execute 'createMediaElementSource' on 'BaseAudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode.
This error, I think, is thrown because the function is executed twice: once when the JS loads, and once when we publish.
I am not sure how to use this mockGetUserMedia function; do you know what is wrong with our code?
EDIT
We made it work with an if condition. Thank you so much man, very much appreciated.
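For reference, the guard they describe could look something like this (just a sketch; it simply makes sure the audio element is only wired into the AudioContext once):
var mixedStream = null;
mockGetUserMedia(function(stream) {
  if (mixedStream) {
    // Already mixed once: reuse it instead of calling createMediaElementSource
    // on the same audio element a second time.
    return mixedStream;
  }
  var context = new AudioContext();
  var bgMusic = context.createMediaElementSource(document.getElementById("song"));
  var microphone = context.createMediaStreamSource(stream);
  var destination = context.createMediaStreamDestination();
  bgMusic.connect(destination);
  microphone.connect(destination);
  mixedStream = destination.stream;
  stream.getVideoTracks().map(function(track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});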