RenderTargetBitmap a cropped area of a UIElement in UWP 10 - XAML

How does one crop a RenderTargetBitmap? The equivalent of:
RenderTargetBitmap bmp = new RenderTargetBitmap();
await bmp.RenderAsync(element , cropRect );
This question seems simple enough, yet there seems to be no real way of doing it. The above sums up my use case: I want to render part of a XAML tree. It's a perfectly legitimate use case.
Saving to a file, which seems to be the most common way of cropping, is really not a good answer. Sure, maybe one day I will save a cropped image into my media library, but not today.

There are BitmapTransform and BitmapDecoder classes, which among other things let you crop images. But I failed to make them work with RenderTargetBitmap, each time hitting an HRESULT: 0x88982F50 exception when trying to pass pixel data from one source to another.
As for a different approach, I can think of bringing in the big guns and implementing it with Win2D. It might not be the most convenient solution, but it does work:
var renderTargetBitmap = new RenderTargetBitmap();
await renderTargetBitmap.RenderAsync(element, width, height);
var pixels = await renderTargetBitmap.GetPixelsAsync();

// Draw the rendered pixels into a Win2D image source, cropping on the way in.
var currentDpi = DisplayInformation.GetForCurrentView().LogicalDpi;
var device = CanvasDevice.GetSharedDevice();
var imageSource = new CanvasImageSource(device, width, height, currentDpi);

using (var drawingSession = imageSource.CreateDrawingSession(Colors.Transparent))
using (var bitmap = CanvasBitmap.CreateFromBytes(
    drawingSession, pixels.ToArray(), width, height,
    DirectXPixelFormat.B8G8R8A8UIntNormalized, drawingSession.Dpi))
{
    // CropEffect limits drawing to the requested region of the source bitmap.
    var cropEffect = new CropEffect
    {
        Source = bitmap,
        SourceRectangle = cropRect,
    };
    drawingSession.DrawImage(cropEffect);
}
ResultImage.Source = imageSource;
ResultImage.Source = imageSource;
Note that I'm not a Win2D expert, and someone more knowledgeable might want to make corrections to this code.

Related

Camera position causes objects to disappear

I'm developing an app with Blazor (client-side) and I need to render a 3D scene.
I have an issue, and I guess it is material-related.
I have a composition of parallelepipeds where one of them is fully opaque and the others are transparent.
Depending on the camera angle, the transparent ones completely disappear. (Screenshots showed three cases: everything visible, two transparent boxes missing, and all transparent boxes missing.)
Code for the transparent parallelepipeds:
var geometry = new THREE.CubeGeometry(item.xDimension * _scene.normalizer, item.yDimension * _scene.normalizer, item.zDimension * _scene.normalizer);
var material = new THREE.MeshBasicMaterial();
var box = new THREE.Mesh(geometry, material);
box.material.color = new THREE.Color("gray");
box.material.opacity = 0.8;
box.material.transparent = true;
Code for the camera:
var camera = new THREE.PerspectiveCamera(60, width / height, 0.1, 100);
camera.position.set(1.3, 1.3, 1.3);
camera.lookAt(0, 0, 0);
I'm using OrbitControls, and every object size is between 0 and 1 (_scene.normalizer is there for that purpose).
Do you know why this is happening?
Edit:
I found it to be a material depth-function issue. Do you know which one I should use?
Thanks.
Transparency is tricky in WebGL because a transparent object still writes to the depth buffer, and the renderer then assumes that objects behind it are occluded and skips drawing them. You can avoid this by playing with the material's .depthTest and .depthWrite attributes (see the docs):
box.material.color = new THREE.Color("gray");
box.material.opacity = 0.8;
box.material.transparent = true;
box.material.depthWrite = false;
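For reference, the same settings can also be passed as constructor options when creating the material; a minimal sketch, not tied to the rest of your setup:
var material = new THREE.MeshBasicMaterial({
    color: 'gray',
    opacity: 0.8,
    transparent: true,
    depthWrite: false // don't occlude the objects drawn behind this one
});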

CreateJS: clone a Shape with dynamically drawn graphics

I have an image on a stage, and I'm drawing over it and erasing it using the following method:
http://jsfiddle.net/lannymcnie/ZNYPD/
Now I want to take a clone of the user's drawing, but it's not working. I tried
var drawingAreaClone = drawingArea.clone(true);
but it doesn't work.
Is there a way to clone it? Kindly help.
The demo you posted doesn't clear the stage, but instead clears the graphics each frame. If you clone the shape, it will have no instructions.
@Catalin's answer is right if you just need a visual, but another option is to use the Graphics store() method instead of clearing the graphics: http://createjs.com/docs/easeljs/classes/Graphics.html#method_store
Essentially, this method just sets an internal pointer to where the graphics draw from, so by calling store() after each draw, only subsequent draw calls are rendered. This behaves the same as the demo you posted, except that you can call unstore() later to reset the Graphics to draw from the beginning. If you clone it this way, it should work.
var erase = document.getElementById("toggle").checked;
wrapper.updateCache(erase?"destination-out":"source-over");
//drawing.graphics.clear();
drawing.graphics.store(); // Don't draw anything before this point
// Later
var newGraphics = drawing.graphics.clone();
newGraphics.unstore(); // Might be redundant
var shape = new Shape(newGraphics);
Note that cloning Graphics doesn't recreate the entire graphics tree; it simply clones the array that stores the instructions. Modifying the individual instructions after the fact would impact any clones of that Graphics object.
Hope that helps.
If the drawn line shape is a child of the drawingAreaClone then the clone should work properly.
However, if for some reason you can't make it work with that, you can take a snapshot of the canvas and save it as an Image-type variable like this:
var snapshot = new Image();
snapshot.src = canvas.toDataURL();
Also, if you don't want to snapshot the whole canvas, then after you have saved the initial image you can limit the snapshot to a rectangle with these extra instructions:
var ctx = canvas.getContext('2d');
// Size the canvas to the crop rectangle so the data URL contains only that area.
canvas.width = rectangle.width;
canvas.height = rectangle.height;
ctx.drawImage(snapshot, rectangle.x, rectangle.y, rectangle.width, rectangle.height, 0, 0, rectangle.width, rectangle.height);
snapshot.src = canvas.toDataURL();

Cannot stamp certain PDFs

PDF stamping works for nearly every document I have tried. However, a client scanned some pages and his computer generated a PDF document that is resistant to stamping. The embedded image files are in JBIG2 format, but I am not sure if that is important. I have debugged the PDF with Apache's pdfbox, and I can see the text is embedded. It just doesn't show up.
Here is the PDF that won't stamp: http://demo.clearvillageinc.com/plans.pdf
And my code:
static void Main(string[] args) {
    string stamp = "<div style=\"color:#F00;\">Reviewed for Code Compliance</div>";
    string fileName = @"C:\temp\source.pdf";
    string outputFileName = @"C:\temp\source-output.pdf";
    // Open a destination stream.
    using (var destStream = new System.IO.MemoryStream()) {
        using (var sourceReader = new PdfReader(fileName)) {
            // Convert the HTML into a stamp.
            using (var stampData = FromHtml(stamp)) {
                using (var stampReader = new PdfReader(stampData)) {
                    using (var stamper = new PdfStamper(sourceReader, destStream)) {
                        stamper.Writer.CloseStream = false;
                        // Add the stamp stream to the source document.
                        var stampPage = stamper.GetImportedPage(stampReader, 1);
                        // Process all of the pages in the source document.
                        for (int i = 1; i <= sourceReader.NumberOfPages; i++) {
                            var canvas = stamper.GetOverContent(i);
                            canvas.AddTemplate(stampPage, 0, -50);
                        }
                    }
                }
            }
        }
        // Finished. Save the file.
        using (var fs = new System.IO.FileStream(outputFileName, FileMode.Create)) {
            destStream.Position = 0;
            destStream.CopyTo(fs);
        }
    }
}
public static System.IO.Stream FromHtml(string html) {
    var ms = new System.IO.MemoryStream();
    // Convert html to pdf.
    using (var document = new iTextSharp.text.Document()) {
        var writer = iTextSharp.text.pdf.PdfWriter.GetInstance(document, ms);
        writer.CloseStream = false;
        document.Open();
        using (var sr = new System.IO.StringReader(html)) {
            XMLWorkerHelper.GetInstance().ParseXHtml(writer, document, sr);
        }
    }
    ms.Position = 0; // Reset for reading.
    return ms;
}
One part of a page definition is the "MediaBox", which controls the page's size. This property takes two locations that specify the coordinates of two opposite corners of a rectangle. Although not required, most PDFs specify the lower left corner first, followed by the upper right corner. Also, most PDFs use 0,0 for the lower left and then the page's width and height for the top corner. So an 8.5x11 inch PDF would be 0,0 and 612,792 (8.5 * 72 = 612 and 11 * 72 = 792), and this would be written as 0,0,612,792.
Your scanned PDF, however, has for whatever reason decided to treat 0,7072 as the lower left corner and 614,7864 as the top right corner. That still gives us (almost) an 8.5x11 page size but if you try to draw something at 0,0 it will be 7,072 pixels below the actual page. You can see this in Acrobat Pro by zooming out very far (1% for me), picking Tools, Edit Object and then doing a Select All. You should see something way far down selected, too.
To get around this, you need to respect the page's boundaries.
for (int i = 1; i <= sourceReader.NumberOfPages; i++) {
    // Get the page to be stamped
    var pageToBeStamped = sourceReader.GetPageSize(i);
    var canvas = stamper.GetOverContent(i);
    // Offset our new page by 50 pixels off of the destination page's bottom
    canvas.AddTemplate(stampPage, pageToBeStamped.Left, pageToBeStamped.Bottom - 50);
}
The code above gets the rectangle for the imported page and uses its bottom edge offset by 50 pixels (the offset from your original code). Also, although not a problem in your case, we use the imported page's actual left edge instead of just zero.
This code can still break, however. The math in the first paragraph uses 72, which is the default for PDFs, but this can be changed. Most people don't change it, but most people also don't change 0,0. Currently your -50 assumes 72, which gives the visual impression of moving the stamp about seven-tenths of an inch from the top edge. If you run into this scenario, you'll want to look into retrieving the user unit.
Also, as I said in the first paragraph, most applications use lower left then upper right, but this isn't a hard rule. Someone could specify upper right and bottom left, or even top left and bottom right. This is a hard case to take into account, but it is something you should at least be aware of.

Three.js: overlaying CSS renderer doesn't work when using a post-processing shader (EffectComposer)

I have a scene with 2 renderers sized exactly the same and put on top of each other sharing the same Perspective Camera like this:
var renderer = new THREE.WebGLRenderer();
renderer.antialias = true;
renderer.setClearColor(0xFFFFFF, 1);
renderer.setSize(Params.win.w, Params.win.h);
renderer.domElement.style.position = 'absolute';
renderer.domElement.style.top = '0px';
renderer.domElement.style.left = '0px';
renderer.domElement.style.zIndex = '-9999';
document.body.appendChild(renderer.domElement);
var scene = new THREE.Scene();
var cssRenderer = new THREE.CSS3DRenderer();
cssRenderer.setSize(Params.win.w, Params.win.h);
cssRenderer.domElement.style.position = 'absolute';
cssRenderer.domElement.style.top = '0px';
cssRenderer.domElement.style.left = '0px';
cssRenderer.domElement.style.zIndex = '-9998';
document.body.appendChild(cssRenderer.domElement);
var cssScene = new THREE.Scene();
//animate loop
renderer.render(scene, camera);
cssRenderer.render(cssScene, camera);
CSS objects can be placed directly over objects in the WebGL scene simply by reading the WebGL objects' positions and applying them to the CSS objects, roughly as sketched below.
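For context, a minimal sketch of that kind of pairing; mesh and element are just example names and not part of the setup above:
// Mirror a WebGL object's position with a CSS3D object (illustrative only).
var element = document.createElement('div');
var cssObject = new THREE.CSS3DObject(element);
cssObject.position.copy(mesh.position); // mesh lives in the WebGL scene
cssScene.add(cssObject);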
However when I add this effect:
var effectFXAA = new THREE.ShaderPass(THREE.FXAAShader);
effectFXAA.uniforms['resolution'].value.set(1 / (Params.win.w), 1 / (Params.win.h));
effectFXAA.renderToScreen = true;
var composer = new THREE.EffectComposer(renderer);
composer.setSize(Params.win.w, Params.win.h);
composer.addPass(new THREE.RenderPass(scene, camera));
composer.addPass(effectFXAA);
And call the composer to render instead like this composer.render();
My CSS Objects are no longer placed correctly.
The perspective is off since zooming will slide the CSS Objects around while the WebGL Objects retain their positions.
Any idea why the extra shader passes might be changing the perspective in the WebGL rendering, resulting in the CSS objects not staying aligned properly and sliding around?
So I figured out that even though only the composer is calling render, the original renderer also needs to have its size reset in whatever resize method you have.
The composer is based on the original renderer, and I had commented out the resizing of that object and only resized the composer.
This is something I must have overlooked; hope this helps others!
camera.aspect = Params.win.w / Params.win.h;
camera.updateProjectionMatrix();
//MUST ALSO UPDATE RENDERER SIZE
renderer.setSize(Params.win.w, Params.win.h);
cssRenderer.setSize(Params.win.w, Params.win.h);
//composer
effectFXAA.uniforms['resolution'].value.set(1 / Params.win.w, 1 / Params.win.h);
composer.setSize(Params.win.w, Params.win.h);
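For completeness, a minimal sketch of where that code might live, assuming Params.win is simply refreshed from the window size (the handler name is just an example):
function onWindowResize() {
    Params.win.w = window.innerWidth;
    Params.win.h = window.innerHeight;

    camera.aspect = Params.win.w / Params.win.h;
    camera.updateProjectionMatrix();

    // The composer renders through this renderer, so it must be resized too.
    renderer.setSize(Params.win.w, Params.win.h);
    cssRenderer.setSize(Params.win.w, Params.win.h);

    effectFXAA.uniforms['resolution'].value.set(1 / Params.win.w, 1 / Params.win.h);
    composer.setSize(Params.win.w, Params.win.h);
}
window.addEventListener('resize', onWindowResize, false);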

SoundCloud custom widget: change container background color w/o waveform.js?

Is there any way to change the sc-waveform container background from #efefef without having to load the waveform.js library? I have enough libraries loading already, and the container background conflicts with our site's background color.
I am experiencing the same issue and have had this problem before (Overlay visible areas of transparent black silhouette PNG with pattern using CSS3 or JS).
Here is an example with your waveform: http://jsfiddle.net/eLmmA/3/
$(function() {
    var canvas = document.createElement('canvas');
    canvas.width = 1800; // must match the waveform image width
    canvas.height = 280; // must match the waveform image height
    var canvas_context = canvas.getContext("2d");
    var img = new Image();
    img.onload = function(){
        var msk = new Image();
        msk.onload = function(){
            canvas_context.drawImage(img, 0, 0);
            // Keep the overlay only where the waveform mask is opaque.
            canvas_context.globalCompositeOperation = "destination-in";
            canvas_context.drawImage(msk, 0, 0);
            canvas_context.globalCompositeOperation = "source-over";
        };
        msk.src = 'WAVEFORM_IMAGE.EXT';
    };
    img.src = 'OVERLAY_IMAGE.EXT';
    document.body.appendChild(canvas);
});
I think I understand what you mean: you want to change the color of the waveform background, the gray chrome around the waveform.
The problem here is that the waveforms you get from the SoundCloud API are partly transparent PNG images, where the waveform itself is transparent and the chrome around it is gray (#efefef).
So, unless the library you want to use for waveform customisation is using HTML5 canvas, you won't be able to change the color of that chrome (so no, not possible with either the HTML5 Widget API or the Custom Player API). You have to use waveform.js or the like, or modify the waveform image on canvas yourself (see the sketch below).
You could try to experiment with the newest CSS filters (WebKit only for now) and SVG filters, and possibly some MS IE filters for older IE versions, but I am not sure you'd manage to change just the color.
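Following up on "modify the waveform image on canvas yourself", here is a minimal sketch of that route: draw the waveform PNG, then repaint its opaque chrome pixels with the 'source-in' composite operation so only the background color changes. The image URL and the fill color are placeholders, and the waveform image has to be CORS-enabled for the canvas to stay exportable:
var wf = new Image();
wf.crossOrigin = 'anonymous'; // needed if you want to export the canvas afterwards
wf.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = wf.width;
    canvas.height = wf.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(wf, 0, 0);
    // 'source-in' keeps the new fill only where the existing pixels are opaque,
    // i.e. the gray chrome; the transparent waveform cut-out stays transparent.
    ctx.globalCompositeOperation = 'source-in';
    ctx.fillStyle = '#222222'; // your site background color
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    document.body.appendChild(canvas);
};
wf.src = 'WAVEFORM_IMAGE.EXT'; // placeholder, as in the snippet above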