I am looking for a standardized way to create and serve MP4 files built from a series of dynamically created canvas elements. I can currently do this with h264-mp4-encoder in a serverless Vue app, but I want a more recent, better-supported approach. I have researched it, and although I really want to use ffmpeg.wasm, I am not sure how to integrate it in the context of a serverless Vue SPA, and browser-native APIs such as MediaRecorder do not seem well suited to my application. At minimum I need to control the frames per second (each added frame being an HTML canvas element) and the compression quality, and ideally also add metadata and allow variable-sized canvases (two features h264-mp4-encoder lacks).
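For reference, this is roughly the shape I imagine the ffmpeg.wasm route taking: a minimal sketch assuming the 0.11.x createFFmpeg/fetchFile API, with the frame array, fps, CRF and metadata all supplied by the caller. The scale+pad filter is one common way to normalize variable-sized frames to a fixed, even-dimensioned output that libx264 accepts. Note that ffmpeg.wasm's core may require cross-origin isolation (COOP/COEP) headers, which is worth checking against a static/serverless host before committing:

```javascript
// Sketch: encode an array of HTMLCanvasElement frames to MP4 in the browser.
// Assumes @ffmpeg/ffmpeg 0.11.x (createFFmpeg/fetchFile); `frames`, fps, crf
// and the metadata value are all caller-supplied.
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

export async function encodeCanvases(frames, fps = 30, crf = 23) {
  if (!ffmpeg.isLoaded()) await ffmpeg.load();

  // Write each frame as a PNG into ffmpeg's in-memory filesystem.
  for (let i = 0; i < frames.length; i++) {
    const blob = await new Promise((res) => frames[i].toBlob(res, 'image/png'));
    const name = `frame${String(i).padStart(5, '0')}.png`;
    ffmpeg.FS('writeFile', name, await fetchFile(blob));
  }

  // -framerate sets fps, -crf sets quality (lower = better), -metadata adds
  // container metadata. scale+pad normalizes variable-sized frames to one
  // fixed, even-dimensioned output, which libx264 requires.
  await ffmpeg.run(
    '-framerate', String(fps),
    '-i', 'frame%05d.png',
    '-vf', 'scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2',
    '-c:v', 'libx264',
    '-crf', String(crf),
    '-pix_fmt', 'yuv420p',
    '-metadata', 'title=Canvas render',
    'out.mp4'
  );

  const data = ffmpeg.FS('readFile', 'out.mp4');
  return new Blob([data.buffer], { type: 'video/mp4' });
}
```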
I am currently working on a project where I want to display a map on which the user can click certain regions. The user should also be able to pan and zoom the map to reach smaller regions.
I have formatted a shapefile with roughly 500 polygons into JSON data I can use. The exact data format is not important; I can adapt it to whatever library I end up using.
What I have tried:
First I went with react-native-maps, because it looked promising: it supports custom polygons and clicking on them. But it turned out that I need a Google Maps API key just to use the library, even though I am not actually using Google Maps at all. I have no interest in setting up an API key and a Google account for something I don't use, so I started looking for an alternative.
It turns out there aren't many alternatives that just render polygons without requiring an API key (Mapbox has the same issue).
I also tried the react-native-svg-pan-zoom library, but it had performance problems: at a resolution high enough that pixels aren't visible after zooming in a bit (canvasWidth above 2000) the app would simply crash, and even at a canvasWidth of 1500 panning was not smooth.
I'm a bit stuck now because I can't find any alternatives. It's frustrating, because react-native-maps would probably do the trick, but it requires an API token for no good reason.
I created this question because maybe there is still a way to avoid the API key, or there is some other library I just couldn't find but you might know of.
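In case it helps frame answers, this is roughly the direction I imagine an SVG-based solution taking: a plain react-native-svg layer with tappable polygons and no map SDK at all. The `regions` shape (an id plus an SVG points string) is just a placeholder for my converted shapefile data, and pan/zoom would still have to be layered on top, e.g. with react-native-gesture-handler:

```javascript
// Sketch: tappable polygons with react-native-svg, no map SDK or API key.
// `regions` (id + "x1,y1 x2,y2 ..." points string) is a placeholder for the
// JSON converted from the shapefile; pan/zoom is not handled here.
import React from 'react';
import Svg, { Polygon } from 'react-native-svg';

export default function RegionMap({ regions, onRegionPress }) {
  return (
    <Svg width="100%" height="100%" viewBox="0 0 1000 1000">
      {regions.map((r) => (
        <Polygon
          key={r.id}
          points={r.points}
          fill="#dddddd"
          stroke="#333333"
          strokeWidth={1}
          onPress={() => onRegionPress(r.id)}
        />
      ))}
    </Svg>
  );
}
```

Zooming by shrinking the viewBox (rather than scaling a rasterized canvas) should keep the polygons sharp, which is exactly where react-native-svg-pan-zoom fell down for me.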
We're developing a React Native mobile application where we need to generate several different types of graphs (plots, charts, lines, etc.) and then convert those graphs to PDF, so we can download them or send them as PDF files outside the in-app visualization. Currently we are using react-native-chart-kit and haven't found an easy solution yet. We are open to a different library if one exists with similar graphing capabilities that can easily generate PDFs from the graphing data. At the moment, all we can think of is sending the graphing data to a web server where we have access to other graphing libraries (JS, React web, Python, etc.), generating the PDF there, and sending it back to the mobile application. This isn't ideal, as we'd prefer to do everything directly in the mobile app.
Does anyone have experience generating graphs within a React Native application and exporting them as PDF files? Any library we might have overlooked, or an easier solution we haven't thought of? Any tips are greatly appreciated!
Not a lot of hits here, but I also cross-posted this on Reddit and got some traction. I'm posting a link here as the answer in case anyone else comes across this post:
https://www.reddit.com/r/reactnative/comments/v1wm0o/generating_a_graph_and_converting_to_a_pdf_within/
The two best answers we debated between (both valid):
You could potentially use https://github.com/gre/react-native-view-shot. It will “screenshot” anything within a particular View and you can then do what you want with the image data. Perhaps even find a way to write it to a PDF file?
The other solution is to do the PDF rendering and graph generation server-side, though that would make this an online-only feature.
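For anyone who wants to try the first option, here is an untested sketch of how the pieces could fit together, pairing react-native-view-shot's captureRef with react-native-html-to-pdf (we haven't verified this ourselves; the file name and styling are placeholders):

```javascript
// Sketch: snapshot the chart's wrapping <View> with react-native-view-shot,
// then embed the image in a PDF via react-native-html-to-pdf. `chartRef` is a
// ref to the view around the react-native-chart-kit chart; 'charts' is a
// placeholder file name.
import { captureRef } from 'react-native-view-shot';
import RNHTMLtoPDF from 'react-native-html-to-pdf';

export async function exportChartAsPdf(chartRef) {
  // Render the view to a base64 data URI.
  const dataUri = await captureRef(chartRef, {
    format: 'png',
    quality: 1,
    result: 'data-uri',
  });

  // Wrap the snapshot in minimal HTML and convert it to a PDF file.
  const { filePath } = await RNHTMLtoPDF.convert({
    html: `<img src="${dataUri}" style="width:100%" />`,
    fileName: 'charts',
  });

  return filePath; // hand this path to your share/download flow
}
```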
I'm trying to see if there is a way to use a dynamic map service layer as a source for the Search widget provided in the ArcGIS JavaScript API. As far as I can see from the documentation, only feature layers are supported.
Is there some sort of workaround? Potentially using locators?
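One workaround I'm considering, sketched below: since each sublayer of a dynamic map service is exposed at its own REST endpoint (.../MapServer/0), it can usually be wrapped in a FeatureLayer and registered as a layer source on the widget. This is untested; the service URL and field names are placeholders, and the source property names vary between 4.x releases (featureLayer in early releases, layer later):

```javascript
// Sketch (ArcGIS JS API 4.x): wrap one sublayer of the dynamic map service in
// a FeatureLayer and register it as a Search source. URL and field names are
// placeholders; `view` is your existing MapView.
require(['esri/widgets/Search', 'esri/layers/FeatureLayer'], function (
  Search,
  FeatureLayer
) {
  var sublayer = new FeatureLayer({
    // A single sublayer of the dynamic service, addressed directly.
    url: 'https://example.com/arcgis/rest/services/MyService/MapServer/0',
  });

  var search = new Search({
    view: view,
    sources: [
      {
        layer: sublayer,
        searchFields: ['NAME'],
        displayField: 'NAME',
        outFields: ['*'],
        name: 'Dynamic service sublayer',
      },
    ],
  });

  view.ui.add(search, 'top-right');
});
```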
Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously and vary their volume at defined time indexes throughout? This is ultimately for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at Gstreamer, WebChimera and ffmpeg but am unsure which route to go down.
This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The first difference between these two methods is resource consumption.
The second method, using only JavaScript, requires adding one <object> tag for the video and one more for each audio stream you need to play. So for every media source you add to the page, you will spawn a new instance of the plugin.
The first method, done in QML (knowing JavaScript is mostly needed here too, as it handles the logic behind QML), loads all your media sources in one plugin instance, with multiple VlcVideoSurface components, each with its own Plugin QML API.
The biggest problem I can foresee for what you want to do is the buffering state, as all media sources need to be paused as soon as one video/audio stream starts buffering. Synchronizing them by time should not be too difficult though.
WebChimera Wiki is a great place to start, it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)
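To make the timing part concrete, here is a player-agnostic sketch of that logic in JavaScript: a cue list of volume levels applied against the master video clock. The `video` and `players` handles, and their currentTime()/setVolume()/pause() methods, are hypothetical stand-ins for whatever the WebChimera JS API actually exposes:

```javascript
// Sketch: vary stream volumes at defined time indexes, driven by the master
// video clock. `video` and `players` (and their methods) are hypothetical
// stand-ins for the real plugin handles.
var cues = [
  { time: 0, volumes: [100, 0] },  // start: stream 1 audible, stream 2 muted
  { time: 30, volumes: [30, 80] }, // 30s in: shift toward stream 2
];

function applyCues(video, players) {
  var t = video.currentTime();
  // Apply the most recent cue at or before the current playback time.
  var active = cues.filter(function (c) { return c.time <= t; }).pop();
  if (active) {
    active.volumes.forEach(function (v, i) { players[i].setVolume(v); });
  }
}

// Pause everything the moment any one source starts buffering, as described
// above, then resume all of them together once it recovers.
function pauseAll(video, players) {
  video.pause();
  players.forEach(function (p) { p.pause(); });
}

// Usage: poll the master clock a few times per second, e.g.
// setInterval(function () { applyCues(video, players); }, 250);
```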
I'd like a library that offers a JavaScript API to control a player and manage its events, nothing more.
The GUI would (optionally) not be part of the library. I've tried to set up a player without controls, but even in that case the GUI is created in the DOM, just not shown.
I can see two benefits: I can reuse my previous GUI more easily, and the video.js script is smaller.
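To illustrate what I mean, this is as close as I can get today (a sketch using standard video.js options; 'my-video' is a placeholder element id): controls are off and everything is driven from the JS API, yet video.js still builds its player DOM around the element:

```javascript
// Sketch: driving video.js purely from the JS API with the GUI disabled.
// 'my-video' is a placeholder <video> element id.
var player = videojs('my-video', { controls: false });

player.ready(function () {
  player.play(); // playback controlled only from script
});

player.on('ended', function () {
  console.log('playback finished'); // events handled by my own code/GUI
});
```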
But this also questions the nature of the polyfill. Adding a track to manage subtitles without an interface to render them would not make a true polyfill. There would be two kinds of polyfill: the first would just let the browser play the video, and the second would create a consistent graphical interface to manage all the player's features.
The answerable question is: does video.js offer a way to provide only a JS API (and modify the DOM only when Flash is required)?
If there is no such feature, is it an option for the future (and if not, why not)?
Thank you all!