Hi, I am trying to stream the following video on my website from Cloudinary. It plays in all browsers except Safari.
I have attempted f_auto, q_50, and many other combinations.
I have even tried converting it to MP4.
I also noticed that most files above 50 MB behave the same way.
Please help!
I should mention that I am on the Cloudinary free plan.
video link:
https://res.cloudinary.com/impromek-com/video/upload/v1652337443/ucfj0kmirnsmgffrhzyd.mkv
Safari doesn't support MKV natively, so you'd have to convert your video before playback by using a transformation.
Doing that with the video you've provided results in an error. In general, to help debug Cloudinary URLs, the response includes an x-cld-error header, and looking at the URL you've shared, I see the header comes back with:
Video is too large to process synchronously, please use an eager transformation with eager_async=true to resolve
There is an online (synchronous) video transformation limit (40 MB for free plans and 100 MB for paid ones), which means that for videos larger than the limit you'll need to perform the video transformations eagerly.
Eager transformations can be set upon upload or by updating your current resources using the explicit API.
Please take a look and see if that resolves the issue for you, and if you have any additional questions or need any guidance, just let me know.
Here is sample code that I used:
const { v2 } = require('cloudinary'); // assumes Cloudinary credentials are configured elsewhere

const upload = new Promise((resolve, reject) => {
  v2.uploader.explicit(
    public_id, // public ID of the already-uploaded video
    {
      type: 'upload',
      resource_type: 'video',
      eager_async: true, // process the transformation asynchronously
      eager: [
        {
          fetch_format: 'mp4', // generate an MP4 derivative that Safari can play
        },
      ],
      eager_notification_url: your_notification_url, // webhook called when processing finishes
    },
    (error, result) => {
      if (error) return reject(error);
      resolve(result);
    },
  );
});
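Once the eager derivative has finished processing, the same transformation can be requested in the delivery URL and the pre-generated MP4 will be served. As a rough sketch (assuming the eager transformation above has completed, and that the f_mp4 URL below mirrors the eager fetch_format setting), you could point a player at the derivative like this:
// Hedged example: serve the pre-generated f_mp4 derivative to Safari.
const videoEl = document.querySelector('video');
videoEl.src = 'https://res.cloudinary.com/impromek-com/video/upload/f_mp4/v1652337443/ucfj0kmirnsmgffrhzyd.mp4';
videoEl.play();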
I need to build a recording feature on top of a web conferencing app that makes use of WebRTC. To do this I am using the RecordRTC js library.
The recording is NOT uploaded at the end of the call, but for practical reasons every 3 seconds one portion of the stream is uploaded from client to server. This is to avoid waiting at the end for a large upload.
Here's the JavaScript:
RTC_recorder = RecordRTC(stream, {
  type: 'video',
  mimeType: 'video/webm;codecs=vp8',
  timeSlice: 3000, // emit a blob every 3 seconds
  ondataavailable: function (blob) {
    upload_to_server(blob);
  }
});
I have been able to save separate blobs on the server:
-blob1.webm (readable video)
-blob2.webm (not readable)
-blob3.webm (not readable)
But unfortunately, I don't understand how to merge the blobs into 1 video (SERVER SIDE), and haven't found any working example in the documentation, nor any clear answer to this question.
Can anyone help?
Thanks.
Concatenating the files without any further modification should result in a valid file.
A simple search turned up this question, which covers how concatenating files works in PHP.
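The linked answer is PHP-specific, but the same idea works in any server-side stack: append each incoming blob, in arrival order, to a single file. Here is a rough Node.js/Express sketch (the route, field name, and output filename are hypothetical, not taken from your code):
const express = require('express');
const multer = require('multer');
const fs = require('fs');

const app = express();
const upload = multer(); // no storage options: the uploaded blob stays in memory

// Hypothetical endpoint: every 3-second blob from RecordRTC is appended,
// in arrival order, to one growing WebM file.
app.post('/upload-chunk', upload.single('blob'), (req, res) => {
  fs.appendFileSync('recording.webm', req.file.buffer); // creates the file on the first chunk
  res.sendStatus(200);
});

app.listen(3000);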
Good morning everyone,
I'm having a hard time trying to add a simple Authorization token to play back Widevine-protected content from Azure Media Services using react-native-video.
Here is my code:
ref={(ref: Video) => { this.video = ref }}
source={{
  uri: "https://swannmediaservice-euwe.streaming.media.azure.net/95aae6ef-55a4-411d-9706-73890f5d2ba5/L'Homme qui courait après le Te.ism/manifest(format=mpd-time-cmaf,encryption=cenc)",
  type: 'mpd',
  drm: {
    type: 'widevine',
    headers: {
      'Authorization': 'Bearer=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL3d3dy5zYXRvcmlwb3AuY29tLyIsImF1ZCI6InVybjpzYXRvcmlwb3AiLCJleHAiOjE3MTA4MDczODksIm5iZiI6MTYwMTMwMDA3OH0.O_41HbAcE8kFDivOM9Q4AL2z-4TMUTLchuUoyxCdDKY'
    }
  }
}}
This, of course, is just for testing purposes.
This always gives me the following error in the log output when I test it on Android:
Caused by: java.lang.IllegalStateException: Media requires a DrmSessionManager
I'm assuming that's because the video isn't able to play.
I tested this in the Azure Media Player and everything works correctly.
Here is a link for that:
https://aka.ms/azuremediaplayer?url=https%3A%2F%2Fswannmediaservice-euwe.streaming.media.azure.net%2F057936a3-2899-4d63-b287-3c50976c1bc4%2FFrench%20audiobook%20The%20Caliph%20A2x_.ism%2Fmanifest(format%3Dmpd-time-csf%2Cencryption%3Dcenc)&widevine=true&widevinetoken=Bearer%3DeyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL3d3dy5zYXRvcmlwb3AuY29tLyIsImF1ZCI6InVybjpzYXRvcmlwb3AiLCJleHAiOjE3MTA4MDczODksIm5iZiI6MTYwMTMwMDA3OH0.O_41HbAcE8kFDivOM9Q4AL2z-4TMUTLchuUoyxCdDKY
and it works fine.
Is there anything I'm missing here?
Platform
iOS
Android ExoPlayer
Video sample
URI : https://swannmediaservice-euwe.streaming.media.azure.net/95aae6ef-55a4-411d-9706-73890f5d2ba5/L'Homme qui courait après le Te.ism/manifest(format=mpd-time-cmaf,encryption=cenc)
Header: Bearer=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL3d3dy5zYXRvcmlwb3AuY29tLyIsImF1ZCI6InVybjpzYXRvcmlwb3AiLCJleHAiOjE3MTA4MDczODksIm5iZiI6MTYwMTMwMDA3OH0.O_41HbAcE8kFDivOM9Q4AL2z-4TMUTLchuUoyxCdDKY
Well, since I figured out my own problem, here is the solution:
Azure Media Services uses its own licensing service to play media (rather than a third-party licensing service), and ExoPlayer doesn't pick that up from the manifest. Whether or not a license server is declared in the manifest settings, ExoPlayer expects a license server URL passed in its DRM settings so it can use it to fetch the license and send the Authorization header with that request.
So basically, when I didn't set a licenseServer and thought everything worked automatically, I was wrong.
What my code was doing was requesting a license from nowhere, and that won't work.
So you have to parse the data that comes back from the link, extract the license server URL manually, and add it to the request like this:
ref={(ref: Video) => { this.video = ref }}
source={{
  uri: "your mpd url",
  type: 'mpd',
  drm: {
    type: 'widevine',
    licenseServer: 'the parsed license server from the mpd file',
    headers: {
      'Authorization': 'Bearer=yourtoken'
    }
  }
}}
I implemented that, and now everything works great.
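For reference, here is a rough sketch of how the license server URL can be pulled out of the manifest before building the source object. It assumes the Widevine ContentProtection element (UUID edef8ba9-79d6-4ace-a3c8-27dcd51d21ed) in the Azure MPD contains the license URL; the helper name and regexes are my own, not an official API:
// Sketch: fetch the MPD and extract the Widevine license server URL from the
// ContentProtection block. The manifest layout is assumed, not guaranteed.
async function getWidevineLicenseServer(mpdUrl) {
  const response = await fetch(mpdUrl);
  const manifest = await response.text();

  const widevineBlock = manifest.match(
    /<ContentProtection[^>]*edef8ba9-79d6-4ace-a3c8-27dcd51d21ed[\s\S]*?<\/ContentProtection>/i
  );
  const licenseUrl = widevineBlock && widevineBlock[0].match(/https?:\/\/[^"<\s]+/);
  return licenseUrl ? licenseUrl[0] : null;
}
The returned URL is what goes into licenseServer in the drm object above.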
Maybe it comes from the manifest name, which may be problematic. I recommend not using accented characters or special characters like ' because the name is exposed in the streaming URL. Also remove spaces when possible.
To do so, rename the source file, re-encode the content, and give it another try.
I'm building an app that takes input from the user and generates a PDF file. I'm trying to upload the generated PDF file to an API, but I don't know how to do it; I'm totally new to Swift. Can anyone please give me an example of how to upload a file to an API?
Thanks in advance
First, I'd suggest you consider using Alamofire if you don't want to get lost in the idiosyncrasies of composing network requests. If you do it yourself, it can get pretty hairy (see https://stackoverflow.com/a/26163136/1271826 for an example of how to construct a multipart/form-data request manually).
Second, in terms of how that request should be formed, it depends entirely upon how you designed the API. But the easiest approach for file uploads is to support multipart/form-data requests (e.g. in PHP, using the $_FILES mechanism). See http://php.net/manual/en/features.file-upload.php. Or see the example here, which not only uploads an image file (you can easily modify it to accept PDFs) but constructs a JSON response, too: https://stackoverflow.com/a/19743872/1271826.
Anyway, if your server is designed to handle multipart/form-data requests, you can create the request and parse the response using Alamofire, as shown in the Uploading MultipartFormData section of the README:
Alamofire.upload(
    .POST,
    "https://httpbin.org/post",
    multipartFormData: { multipartFormData in
        multipartFormData.appendBodyPart(fileURL: unicornImageURL, name: "unicorn")
        multipartFormData.appendBodyPart(fileURL: rainbowImageURL, name: "rainbow")
    },
    encodingCompletion: { encodingResult in
        switch encodingResult {
        case .Success(let upload, _, _):
            upload.responseJSON { response in
                debugPrint(response)
            }
        case .Failure(let encodingError):
            print(encodingError)
        }
    }
)
Basically, the title says it all. I am using Kurento-Utils for JS. This topic has been discussed for lower-level approaches, but at this point in the project it is too late to switch approaches. :(
When I stream the webcam with audio, it is recorded nicely into a .webm file. But how do I stream audio only, or video only? An attempt results in a file of 0 size with no error messages.
Is there any sample code for Kurento-utils/js that demonstrates that use case?
You need to provide the appropriate media profile when instantiating the recorder, and the appropriate media type when connecting the elements.
pipeline.create('RecorderEndpoint', { uri: filepath, mediaProfile: 'WEBM_AUDIO_ONLY' },
  function (error, recorder) {
    webrtcEp.connect(recorder, 'AUDIO', function (err) {
      recorder.record();
      console.log("recording started ...");
    });
  });
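For the video-only case, the same pattern should work with the video profile and media type (a sketch adapted from the snippet above, not tested here):
// Video-only variant: use the WEBM_VIDEO_ONLY profile and connect only the VIDEO stream.
pipeline.create('RecorderEndpoint', { uri: filepath, mediaProfile: 'WEBM_VIDEO_ONLY' },
  function (error, recorder) {
    if (error) return console.error(error);
    webrtcEp.connect(recorder, 'VIDEO', function (err) {
      if (err) return console.error(err);
      recorder.record();
      console.log("video-only recording started ...");
    });
  });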
I've run into the following problem when porting an app from the REST API to GDAA (the Google Drive Android API).
The app needs to download some of (thousands of) JPEG images based on user selection. The way this is solved in the app is by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig) {
    InputStream is = null;
    if (rsid != null) try {
        File gFl = bBig ?
            mGOOSvc.files().get(rsid).setFields("downloadUrl").execute() :
            mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
        if (gFl != null) {
            GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
            is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
        }
    } catch (UserRecoverableAuthIOException uraEx) {
        authorize(uraEx.getIntent());
    } catch (GoogleAuthIOException gauEx) {
    } catch (Exception e) {
    }
    return is;
}
It allows getting either a 'thumbnail' or a 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list, and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to ask for a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?
Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe to when it will be available. Until that time, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you would go ahead and file a feature request against our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.
Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I can't get the fields portion of the request to work as it is shown in that quick start. In my experience, you have to request the fields a little differently.
Since you're doing a custom field request, you have to be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
        .list()
        .setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
        .setQ("Your file param and/or mime query");
FileList files = request.execute();
files.getFiles(); // Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!