How do you build a capsule in EDK2 and how do you put UEFI images inside that capsule? - firmware

I am experimenting with EDK2 by TianoCore (https://github.com/tianocore/edk2). I can build BIOS images as well as UEFI applications and drivers, but when it comes to building a UEFI capsule, I am not sure how to go about it.
https://uefi.org/sites/default/files/resources/UEFI%20Fall%202018%20Intel%20UEFI%20Capsules.pdf points to some ideas, but I am not sure of the exact path to take here.
I see two possibilities:
1. https://github.com/tianocore/edk2/tree/master/FmpDevicePkg - this is the package mentioned in the PDF linked above. The PDF describes an integrated build pipeline for producing a capsule, and it also mentions a standalone Python script, which is option two.
2. https://github.com/tianocore/edk2/tree/c640186ec8aae6164123ee38de6409aed69eab12/BaseTools/Source/Python/GenFds - there are standalone scripts at this location for making images and artifacts such as capsules and headers, but I am unsure whether they are intended to be used as-is or only as part of a larger build pipeline.
My end goal here is to produce a UEFI capsule and place UEFI drivers inside it as the payload, so any tips or help would be appreciated.
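For reference, the standalone-script route might look roughly like the sketch below, which drives BaseTools' GenerateCapsule.py (it lives near the GenFds scripts, under BaseTools/Source/Python/Capsule). The edk2 path, GUID, version numbers and payload name are all placeholder assumptions, and a production capsule additionally needs the signing options described in the script's --help output.

```python
#!/usr/bin/env python3
"""Rough sketch: wrap edk2's standalone GenerateCapsule.py to build an FMP
capsule around a payload (e.g. a driver image). All concrete values below
are placeholders, not taken from the question."""
import os
import subprocess

EDK2 = os.path.expanduser("~/edk2")  # assumed checkout location
GENERATE_CAPSULE = os.path.join(
    EDK2, "BaseTools", "Source", "Python", "Capsule", "GenerateCapsule.py")

subprocess.run(
    [
        "python3", GENERATE_CAPSULE,
        "-e",                          # encode (build) a capsule
        "-o", "MyDriver.cap",          # output capsule file
        "--guid", "00000000-0000-0000-0000-000000000000",  # placeholder: must match the target's FMP/ESRT GUID
        "--fw-version", "0x00000002",  # version the capsule advertises
        "--lsv", "0x00000001",         # lowest supported version
        "--capflag", "PersistAcrossReset",
        "--capflag", "InitiateReset",
        "MyDriverPayload.bin",         # payload to wrap (assumed name)
    ],
    check=True,
)
```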


Questions about the way D435 avoids obstacles

Required Info
Camera Model: D435
Firmware Version: 05.12.13.50
Operating System & Version: Linux (Ubuntu 18.04.5)
Kernel Version (Linux Only): 4.9.201
Platform: NVIDIA Jetson Nano B01
SDK Version: 2.41.0
Language: ROS packages
Segment: Robot
Hello, I need to use the obstacle avoidance function while using the D435. At present, these are the approaches I have looked into:
1. Use depthimage_to_laserscan to convert the depth image into a laser-scan signal. The problem is that there is also a lidar on my robot, and both currently publish on the scan topic, so they conflict. I don't know how to solve this.
2. I want to know whether the two laser-scan signals can be fused, and what configuration is needed to fuse them. Is there any relevant information or code?
3. Use the PointCloud2 point cloud information. I don't understand how to do this at present; although the point cloud can be seen on the map now, it does not have the effect of avoiding obstacles. Also, does this point cloud information need to be passed to AMCL? If so, how does it need to be delivered? I hope someone can help me.
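As a rough sketch of points 1 and 2 (none of this is from the question): a common workaround is to let depthimage_to_laserscan publish on its own topic (e.g. remap its scan output to /camera_scan) and then merge the two scans, either with an existing package such as ira_laser_tools or with a small relay node. A naive rospy merge, assuming both scans share the same frame, angle range and number of beams, could look like this:

```python
#!/usr/bin/env python
"""Naive two-scan merger (sketch only).

Assumptions not taken from the question: the lidar publishes on /scan,
depthimage_to_laserscan has been remapped to publish on /camera_scan, and
both scans have the same frame_id, angle limits and number of beams. Real
setups usually differ, in which case a dedicated merger such as
ira_laser_tools' laserscan_multi_merger is a better fit."""
import rospy
from sensor_msgs.msg import LaserScan

latest_camera_scan = None

def camera_cb(msg):
    # Remember the most recent camera-derived scan.
    global latest_camera_scan
    latest_camera_scan = msg

def lidar_cb(msg):
    # Per-beam minimum of the two scans, so the closer obstacle wins.
    if latest_camera_scan is not None and \
            len(latest_camera_scan.ranges) == len(msg.ranges):
        msg.ranges = [min(a, b) for a, b in
                      zip(msg.ranges, latest_camera_scan.ranges)]
    pub.publish(msg)

rospy.init_node("naive_scan_merger")
pub = rospy.Publisher("/scan_merged", LaserScan, queue_size=10)
rospy.Subscriber("/camera_scan", LaserScan, camera_cb)
rospy.Subscriber("/scan", LaserScan, lidar_cb)
rospy.spin()
```

Whatever consumes laser scans for obstacle avoidance (e.g. the local costmap) would then be pointed at /scan_merged instead of /scan.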

ImageResizer - Outputting resized images with embedded ICC profiles

I've been playing around with ImageResizer for a week or so now and am wondering if it's possible to output resized images with the same ICC profile embedded (and in the same colour space) as the source image, for example the Adobe RGB (1998) colour space?
I intend to use ImageResizer as part of my company's workflow for thumbnailing our source images before our imaging dept looks at them for consistency, so keeping the same colour profile is essential. Generation speed isn't too important as the thumbnails will be cached once generated.
Can someone please tell me if ImageResizer is capable of this and which parts of the pipeline I would need to overload to achieve this?
Thanks.
ImageResizer targets the web and always produces images in the sRGB color space; browsers don't support ICC profiles, so we don't use them.
That said, you can certainly modify a few lines and get the result you want.
ImageResizer has 3 pipelines - GDI+ (the default), FreeImage, and WIC.
GDI+ does not support output profiles - at least not through System.Drawing.
FreeImage is supposed to have very good ICC profile support.
WIC should have ICC profile support as well, but is likely to share bugs with GDI+ as both use the same codecs.
To make changes, go to the ImageResizer repository and click the 'fork' button.
When you clone your fork, be sure that you switch to the develop branch before making changes - otherwise your changes will not be compatible with the next major version.
You'll probably want to modify Plugins/FreeImage/FreeImageEncoder.cs. FreeImage documentation is in pdf form (yay!), so you'll probably want to scan that quickly to understand how it handles profiles.
When testing, make sure you enable the three FreeImage plugins by installing them and activating them via their command strings: &builder=freeimage will activate the full FreeImage pipeline, while &encoder=freeimage will just activate the encoder portion (useful if you want to edit the image with GDI+ but save via FreeImage).
You will likely also want to use the ignoreIcc=true command, so that you don't see sRGB values interpreted as Adobe RGB.
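(As an aside, and nothing to do with ImageResizer's own API: the end result you are after - a resized image that carries the source ICC profile through - looks like this in any library that exposes the profile bytes. Here is a minimal Pillow sketch with assumed file names, just to make the goal concrete.)

```python
"""Sketch only (Pillow, not ImageResizer): carry the source ICC profile
through a resize. File names are assumptions for illustration."""
from PIL import Image

src = Image.open("source_adobe_rgb.jpg")
icc = src.info.get("icc_profile")   # raw ICC bytes, if the source had any

thumb = src.copy()
thumb.thumbnail((400, 400))         # resize in place, keeping aspect ratio

# Re-embed the original profile so the thumbnail stays in the source
# colour space instead of being read as sRGB.
thumb.save("thumb.jpg", icc_profile=icc, quality=90)
```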

Template rendering engine on Raspberry Pi

I have a project in which I am using a Raspberry Pi to print tickets on a thermal printer.
It is pretty much the same principle as in this video.
Tickets are generated from templates that may include text and images. Both text and images are dynamic; for example, I may want to print the current time. I receive the template as a .psd from a designer, and the thermal printer takes bitmap data. The Raspberry Pi communicates with the printer through a Python library. Everything must be done locally as cloud access is not guaranteed. Performance is important.
I investigated several options:
LaTeX + ImageMagick
WebKit + PhantomJS
Pillow (Python Imaging Library), especially the ImageDraw module
The first option is not quite satisfactory because LaTeX generates a PDF file and ImageMagick is then very slow at converting it to a .png.
The second option is appealing, but if I am not mistaken, I would need to run a server locally.
The third option would be great because it would be pure Python, but it requires building a basic typesetting system on top of PIL.
Has anyone been confronted with a similar problem?
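For what it's worth, a bare-bones sketch of the third option using Pillow's ImageDraw follows. The 384-pixel width (typical for 58 mm thermal heads), the DejaVu font paths and the hard-coded layout are all assumptions; a real template engine would read positions, fonts and placeholders from the .psd-derived template.

```python
"""Sketch of the Pillow-only option: render a ticket bitmap with dynamic
text (current time) and an image. All concrete values are assumptions."""
from datetime import datetime
from PIL import Image, ImageDraw, ImageFont

WIDTH, HEIGHT = 384, 240
img = Image.new("1", (WIDTH, HEIGHT), 1)   # 1-bit canvas, white background
draw = ImageDraw.Draw(img)

title_font = ImageFont.truetype(
    "/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 28)
body_font = ImageFont.truetype(
    "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)

draw.text((10, 10), "MY TICKET", font=title_font, fill=0)
draw.text((10, 60), datetime.now().strftime("%Y-%m-%d %H:%M"),
          font=body_font, fill=0)

# Paste a dynamic image (logo.png is an assumed file), converted to 1-bit.
logo = Image.open("logo.png").convert("1")
img.paste(logo, (10, 100))

img.save("ticket.png")   # hand this bitmap to the printer library
```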

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start - let's say I want to create a simple application (that I could reuse) like the following:
Load an image from an open-file option within a file menu
Display it within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image via the save option within the file menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV within Linux - I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet, though - I'm not even sure whether this would be a good idea?
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework - it will provide you with most of the pre-baked cookies ready to be consumed in your application.
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
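(Not Cocoa, but to make the load / per-pixel process / save loop from the question concrete: here is a tiny sketch using OpenCV's Python bindings, with assumed file names. The equivalent C++ calls - cv::imread, cv::Mat access, cv::imwrite - are what you would wrap inside a Cocoa app.)

```python
"""Sketch only: load an image, apply a per-pixel operation, save the
result, using OpenCV's Python bindings. File names are assumptions."""
import cv2

img = cv2.imread("input.png")        # BGR uint8 NumPy array
if img is None:
    raise SystemExit("could not read input.png")

# Example per-pixel operation: a simple negative. NumPy applies it in
# bulk, which avoids a slow explicit Python pixel-by-pixel loop.
out = 255 - img

cv2.imwrite("output.png", out)
```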
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac, and it's entirely geared around making accelerated image processing easy to work with, but unfortunately that port isn't ready right now.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is doing rapid prototyping of image processing, either using Core Image or OpenGL shaders. I've even heard of people using OpenCV with this using custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.

A PDF reader - please guide me with step-by-step guidance or a reference to guidance

I have to make a hardware project using a microcontroller, memory, screens, etc.
Is it possible to make an independent PDF / document reader that is capable of running on battery power?
Please note I don't want to use any technology that needs licensing. It must all be freeware (readers, etc.), and the programming language can be assembly, C, Flash or anything.
I have submitted a proposal for a PDF reader project (independent hardware). Many say it's impossible. What should I do?
Reading and displaying a PDF document is quite a "high level operation".
You should start with a microcontroller starter kit with an ARM9 processor or something similar. Then install a Linux operating system on it, include a standard display driver, and run an X server. Then you should be able to find a Linux-based PDF reader with X drivers.
To second another comment here, I would say that you're not going to do this with a microcontroller; you're going to need a more powerful ARM CPU like an ARM9, Cortex-A8 or similar, with a decent amount of RAM.
You'll probably need something that's capable of running Linux if you want to start from existing pieces of software rather than writing quite a large volume of software from scratch.
Note that the commercial devices that are out there, including the Kindle, run Linux and aren't based on a microcontroller.
You might be best off getting something like a BeagleBoard, attaching a display to it, and starting from there with an X-based PDF viewer.
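If you do end up on an embedded Linux board, the hard part - actually rendering PDF pages - is already solved by existing open-source renderers. As a rough sketch (using PyMuPDF, with an assumed input file and zoom factor), rendering a page to a bitmap for a small display is only a few lines:

```python
"""Sketch: render one PDF page to a bitmap, the core operation a reader
needs, using PyMuPDF. The file name and zoom are assumptions."""
import fitz  # PyMuPDF

doc = fitz.open("manual.pdf")
page = doc[0]                                    # first page
pix = page.get_pixmap(matrix=fitz.Matrix(2, 2))  # render at 2x zoom
pix.save("page0.png")                            # push this to the display
```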