I have a folder with 18 .png files, each 800x600 px. I've loaded each one from file into a List(Of Bitmap).
I'd like to know how to write a video file (for example, .AVI) in VB.NET using the collection of images. Specifically, I'm not looking to simply stitch the images together once; I'd like the option to loop through the collection multiple times, adding each Bitmap as a new frame at the end of the video. This would enable me to create a 60-minute-long video of the same 18 frames repeating if I wanted to.
I'd need to be able to specify the framerate, and I won't be including audio.
To put this in context, I'm effectively creating an animated image for my digital photo frame. It can't animate the GIFs it displays, but it is capable of playing videos. The 18 frames are very similar to each other.
Please help!
http://www.aforgenet.com/
The AForge Library has quite an extensive toolset for all sorts of video and image processing tasks.
Or you could use FFmpeg as a standalone tool; it can combine a sequence of images into a video:
ffmpeg Video from Images
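For example, a single FFmpeg invocation can handle the looping from the question too: -framerate sets how long each PNG is shown, -stream_loop -1 repeats the input sequence indefinitely, and -t caps the output length. A minimal sketch, assuming the files are named frame00.png through frame17.png (the filenames, frame rate, and codec are placeholders to adjust):

    ffmpeg -stream_loop -1 -framerate 6 -i frame%02d.png -t 3600 -c:v libx264 -pix_fmt yuv420p output.mp4

Here -framerate 6 shows each frame for a sixth of a second, -t 3600 stops after 60 minutes, and -pix_fmt yuv420p keeps the output playable on limited hardware; if the photo frame insists on .AVI, swapping in -c:v mpeg4 output.avi is a common fallback.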
Related
I know how to take a picture with my camera using the "expo-camera" module, but I don't know how to set up a system where it takes a photo about 10 times a second and analyses the colors in the image, to be used for tracking. expo-camera can return images as base64, so I'm guessing I would have to use that, but I don't know how to efficiently take a picture constantly and analyse it.
react-native-vision-camera might be more appropriate for what you're trying to achieve. It allows you to write frame processors to analyse frame contents.
But if you wanted to use expo-camera then you could do it the way you described, but you'd then need to find a module that takes the Base64-encoded image and turns it into an array or stream of pixel values. This is likely to be very slow and memory-hungry on the JS thread, because each image from a standard 12MP camera means looping over an array of 12 million RGB values.
I've read many questions about this, but they don't satisfy what I'm trying to do. I'm trying to use a TTF file for the text's font in my application. I thought of using DirectWrite, but the tutorial on Microsoft's website only explains how to use it with Direct2D. How am I supposed to load data from this file and render text in my Direct3D application using this file's font? I've also read about the AddFontResourceExA() function, but I couldn't find any examples of how to use it. I'm really lost here, so any help is appreciated.
There are basically two approaches for rendering text on a Direct3D 11 Render Target / Texture.
Rendering using a 'sprite sheet'. Here you capture the font at a particular resolution and generate a texture from it. Then you use the texture to render the glyphs as textured triangles. This is very fast and inexpensive to render, but does not scale to arbitrary resolutions (you can capture the 'sprite sheet' at multiple point sizes to get some scaling) and does not work well with CJK languages due to the large size of the fonts. For an example of this, see SpriteFont in the DirectX Tool Kit. This is what legacy D3DX9/D3DX10 did as well.
Rendering using vector fonts directly. Here you have some kind of library that generates triangles 'on-the-fly' from the TrueType vector font data. This is what Direct2D+DirectWrite is designed to do. You can use interop with Direct3D 11 surfaces, but essentially you are using DirectWrite -> Direct2D -> shared texture. Then you draw the shared texture with Direct3D as a 'sprite'. This is more complicated to set up, but gives you arbitrary resolution scaling, support for large character-set fonts, and handling of complex writing systems.
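If the sprite-sheet route is enough, the DirectX Tool Kit makes it short. A minimal sketch, assuming myfont.spritefont was generated from your TTF with the toolkit's MakeSpriteFont utility, and that device/context are your existing ID3D11Device and ID3D11DeviceContext (the file name and draw position are placeholders):

    #include <memory>
    #include <DirectXColors.h>
    #include <SpriteBatch.h>
    #include <SpriteFont.h>

    using namespace DirectX;

    // One-time setup: load the pre-captured glyph atlas.
    auto spriteBatch = std::make_unique<SpriteBatch>(context);
    auto spriteFont  = std::make_unique<SpriteFont>(device, L"myfont.spritefont");

    // Each frame, after rendering the 3D scene:
    spriteBatch->Begin();
    spriteFont->DrawString(spriteBatch.get(), L"Hello, Direct3D 11",
                           XMFLOAT2(32.f, 32.f), Colors::White);
    spriteBatch->End();

As for AddFontResourceExA: it only makes a .ttf file available to GDI for the current session; it doesn't render anything in Direct3D by itself, so you would still need one of the two approaches above.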
I have some code that uses CreateJS/EaselJS to create a MovieClip that contains a Tween that contains an mp4 video. MovieClip has a method called 'gotoAndPlay' that you can use to move the playhead to a certain frame number on the timeline. When I use this method to change the play position, the tweens work, but not the Tween that contains the mp4 movie: it doesn't load and results in a blank video tag on the page, except for the first play-through of the clip. Once the mp4 video has been played, it won't play again if the position is set to it through gotoAndPlay. Any ideas on how to fix this, or what might be going wrong?
In ActionScript animations, FLV movies can be locked to the timeline. But in HTML Canvas animations, MP4 movies are not really fully-fledged "Animate" objects. They look the same for the most part but the integration is not as tight as in Flash.
Since the videos exist outside of the Canvas, you'll need to use jQuery or JavaScript to address them. This can be done by using the Code Snippets in the HTML5 Canvas - Components - Video folder.
As an advance warning, "seeking" to different locations in an MP4 video the way you described is not as reliable as it was in Flash. Browsers like Internet Explorer don't handle seeking well and will likely crash. If frame-by-frame accuracy is important, you may find the best visual results by avoiding the video component and converting your movie to an actual MovieClip in Animate CC, which will increase your file size significantly.
I wrote code that loops through a video feed of a computer screen and recognizes certain PNG images by looping through pixels. I get 60fps with 250% CPU usage (1280x800 video feed). The code is a blend of Objective-C and C++.
I'm trying to find a faster alternative. Can Core Image detect instances of an image within another image and give me the pixel location? If not, is OpenCV fast enough to do that kind of processing at 60fps?
If Core Image and OpenCV aren't the correct tools, is there another tool that would be better suited?
(I haven't found any documentation showing Core Image can do what I need; I am trying to get an OpenCV demo working to benchmark.)
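What you're describing is template matching, which OpenCV supports directly via cv::matchTemplate, so it's a reasonable thing to benchmark. A minimal sketch (the file names and the 0.9 threshold are placeholders; in your case the screen Mat would come from your capture pipeline rather than imread):

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    // screen: one 1280x800 frame; tmpl: the PNG you're looking for.
    cv::Mat screen = cv::imread("frame.png");
    cv::Mat tmpl   = cv::imread("target.png");

    // Slide the template over the frame and score every position.
    cv::Mat result;
    cv::matchTemplate(screen, tmpl, result, cv::TM_CCOEFF_NORMED);

    // The location of the highest score is the best match.
    double maxVal = 0.0;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);

    if (maxVal > 0.9) {
        // maxLoc is the top-left corner of the detected PNG.
    }

Whether this reaches 60fps depends on the template size and how many templates you search per frame; restricting the search to a region of interest helps a lot, and OpenCV also has a CUDA version (cv::cuda::createTemplateMatching) if the CPU path falls short.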
I'm trying to create an Objective-C class for my iPad application that can convert a PowerPoint file to a JPEG file.
Accordingly, I'll have to read into the .pptx format to see how the file is structured, and then create an image from scratch in which I can place each element: this one goes there, this one here, this text there.
But actually I have no idea how to do this. Is the best way to use an already-existing iOS framework, or an additional library?
Thanks to everyone ;)
Bye
The fastest way to visualize elements is, to me, OpenGL ES. You can use the mobile GPU for rendering, and then there is CIImage for managing images.
Take a look at Quartz 2D, the drawing engine used as the main workhorse for 2D graphics on iOS. It gives you all the primitives for drawing shapes, fills, text and other objects you need to render the presentation.
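A minimal sketch of that approach, assuming you have already parsed the .pptx and know where an element should go (the canvas size, colors, and coordinates are placeholders; Quartz 2D is a C API, so this drops straight into an Objective-C class):

    #include <CoreGraphics/CoreGraphics.h>

    // Offscreen 1024x768 RGBA bitmap context for one slide.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 1024, 768, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);

    // White slide background.
    CGContextSetRGBFillColor(ctx, 1, 1, 1, 1);
    CGContextFillRect(ctx, CGRectMake(0, 0, 1024, 768));

    // A shape placed where the .pptx layout says it belongs.
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1);
    CGContextFillRect(ctx, CGRectMake(100, 100, 300, 200));
    // Text elements would be drawn here too, typically via Core Text.

    // Snapshot the context; wrap the CGImage in a UIImage and use
    // UIImageJPEGRepresentation to get the JPEG bytes.
    CGImageRef slide = CGBitmapContextCreateImage(ctx);

    CGImageRelease(slide);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);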