iText merge, scale and rotate pages in existing pdf

I want to concatenate existing PDFs and also scale and rotate them to A4 portrait pages. Today I have done this with PdfWriter:
for (int pageNum = 1; pageNum <= numberOfPages; pageNum++) {
    PdfImportedPage importedPage = writer.getImportedPage(reader, pageNum);
    AffineTransform transform = getAffineTransform(reader, writer, pageNum);
    pdfContentByte.addTemplate(importedPage, transform);
    document.newPage();
}

private AffineTransform getAffineTransform(PdfReader reader, PdfWriter writer, int pageNum) {
    Rectangle readerPageSize = reader.getPageSize(pageNum);
    Rectangle writerPageSize = writer.getPageSize();
    float rPageHeight = readerPageSize.getHeight();
    float rPageWidth = readerPageSize.getWidth();
    float wPageHeight = writerPageSize.getHeight();
    float wPageWidth = writerPageSize.getWidth();
    int pageRotation = reader.getPageRotation(pageNum);
    boolean rotate = (rPageWidth > rPageHeight) && (pageRotation == 0 || pageRotation == 180);
    if (!rotate)
        rotate = (rPageHeight > rPageWidth) && (pageRotation == 90 || pageRotation == 270);
    // if changing rotation gives us better space rotate an extra 90 degrees.
    if (rotate) pageRotation += 90;
    double randrotate = (double) pageRotation * Math.PI / (double) 180;
    AffineTransform transform = new AffineTransform();
    float margin = 0;
    float scale = 1.0f;
    if (pageRotation == 90 || pageRotation == 270) {
        scale = Math.min((wPageHeight - 2 * margin) / rPageWidth, (wPageWidth - 2 * margin) / rPageHeight);
    } else {
        scale = Math.min(wPageHeight / rPageHeight, wPageWidth / rPageWidth);
    }
    transform.translate((wPageWidth / 2) + margin, wPageHeight / 2 + margin);
    transform.rotate(-randrotate);
    transform.scale(scale, scale);
    transform.translate(-rPageWidth / 2, -rPageHeight / 2);
    return transform;
}
This works fine, but it removes layers (OCG) and annotations.
Is it possible to get this to work with layers (OCG) and annotations? The result is used for print, so I only need the PDF to display the same; I don't actually need the layers or annotations themselves.

Related

How to get parallel GPU pixel rendering? For voxel ray tracing

I made a voxel raycaster in Unity using a compute shader and a texture. But at 1080p, it is limited to a view distance of only 100 at 30 fps. With no light bounces yet or anything, I am quite disappointed with this performance.
I tried learning Vulkan, but the best tutorials are based on rasterization, and I guess all I really want to do is compute pixels in parallel on the GPU. I am familiar with CUDA and I've read that it is sometimes used for rendering. Or is there a simple way of just computing pixels in parallel in Vulkan? I've already got a template Vulkan project that opens a blank window. I don't need to get any data back from the GPU, just render straight to the screen after giving it data.
And with the code below, would it be significantly faster in Vulkan as opposed to a Unity compute shader? It has A LOT of if/else statements in it, which I have read are bad for GPUs, but I can't think of any other way of writing it.
EDIT: I optimized it as much as I could but it's still pretty slow, like 30 fps at 1080p.
Here is the compute shader:
#pragma kernel CSMain
RWTexture2D<float4> Result; // the actual array of pixels the player sees
const float width; // in pixels
const float height;
const StructuredBuffer<int> voxelMaterials; // for now just getting a flat voxel array
const int voxelBufferRowSize;
const int voxelBufferPlaneSize;
const int voxelBufferSize;
const StructuredBuffer<float3> rayDirections; // I'm now actually using it as points instead of directions
const float maxRayDistance;
const float3 playerCameraPosition; // relative to the voxelData, ie the first voxel's bottom, back, left corner position, no negative coordinates
const float3 playerWorldForward;
const float3 playerWorldRight;
const float3 playerWorldUp;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = float4(0, 0, 0, 0); // setting the pixel to black by default
    float3 pointHolder = playerCameraPosition; // initializing the first point to the player's position
    const float3 p = rayDirections[id.x + (id.y * width)]; // vector transformation getting the world space directions of the rays relative to the player
    const float3 u1 = p.x * playerWorldRight;
    const float3 u2 = p.y * playerWorldUp;
    const float3 u3 = p.z * playerWorldForward;
    const float3 direction = u1 + u2 + u3; // the direction to that point
    float distanceTraveled = 0;
    int3 directionAxes; // 1 for positive, 0 for zero, -1 for negative
    int3 directionIfReplacements = { 0, 0, 0 }; // 1 for positive, 0 for zero, -1 for negative
    float3 axesUnit = { 1 / abs(direction.x), 1 / abs(direction.y), 1 / abs(direction.z) };
    float3 distancesXYZ = { 1000, 1000, 1000 };
    int face = 0; // 1 = x, 2 = y, 3 = z // the current face the while loop point is on
    // comparing the floats once in the beginning so the rest of the ray traversal can compare ints
    if (direction.x > 0) {
        directionAxes.x = 1;
        directionIfReplacements.x = 1;
    }
    else if (direction.x < 0) {
        directionAxes.x = -1;
    }
    else {
        distanceTraveled = maxRayDistance; // just ending the ray for now if one of its direction axes is exactly 0. You'll see a line of black pixels if the player's rotation is zero but this never happens naturally
        directionAxes.x = 0;
    }
    if (direction.y > 0) {
        directionAxes.y = 1;
        directionIfReplacements.y = 1;
    }
    else if (direction.y < 0) {
        directionAxes.y = -1;
    }
    else {
        distanceTraveled = maxRayDistance;
        directionAxes.y = 0;
    }
    if (direction.z > 0) {
        directionAxes.z = 1;
        directionIfReplacements.z = 1;
    }
    else if (direction.z < 0) {
        directionAxes.z = -1;
    }
    else {
        distanceTraveled = maxRayDistance;
        directionAxes.z = 0;
    }
    // calculating the first point
    if (playerCameraPosition.x < voxelBufferRowSize &&
        playerCameraPosition.x >= 0 &&
        playerCameraPosition.y < voxelBufferRowSize &&
        playerCameraPosition.y >= 0 &&
        playerCameraPosition.z < voxelBufferRowSize &&
        playerCameraPosition.z >= 0)
    {
        int voxelIndex = floor(playerCameraPosition.x) + (floor(playerCameraPosition.z) * voxelBufferRowSize) + (floor(playerCameraPosition.y) * voxelBufferPlaneSize); // the voxel index in the flat array
        switch (voxelMaterials[voxelIndex]) {
            case 1:
                Result[id.xy] = float4(1, 0, 0, 0);
                distanceTraveled = maxRayDistance; // to end the while loop
                break;
            case 2:
                Result[id.xy] = float4(0, 1, 0, 0);
                distanceTraveled = maxRayDistance;
                break;
            case 3:
                Result[id.xy] = float4(0, 0, 1, 0);
                distanceTraveled = maxRayDistance;
                break;
            default:
                break;
        }
    }
    // traversing the ray beyond the first point
    while (distanceTraveled < maxRayDistance)
    {
        switch (face) {
            case 1:
                distancesXYZ.x = axesUnit.x;
                distancesXYZ.y = (floor(pointHolder.y + directionIfReplacements.y) - pointHolder.y) / direction.y;
                distancesXYZ.z = (floor(pointHolder.z + directionIfReplacements.z) - pointHolder.z) / direction.z;
                break;
            case 2:
                distancesXYZ.y = axesUnit.y;
                distancesXYZ.x = (floor(pointHolder.x + directionIfReplacements.x) - pointHolder.x) / direction.x;
                distancesXYZ.z = (floor(pointHolder.z + directionIfReplacements.z) - pointHolder.z) / direction.z;
                break;
            case 3:
                distancesXYZ.z = axesUnit.z;
                distancesXYZ.x = (floor(pointHolder.x + directionIfReplacements.x) - pointHolder.x) / direction.x;
                distancesXYZ.y = (floor(pointHolder.y + directionIfReplacements.y) - pointHolder.y) / direction.y;
                break;
            default:
                distancesXYZ.x = (floor(pointHolder.x + directionIfReplacements.x) - pointHolder.x) / direction.x;
                distancesXYZ.y = (floor(pointHolder.y + directionIfReplacements.y) - pointHolder.y) / direction.y;
                distancesXYZ.z = (floor(pointHolder.z + directionIfReplacements.z) - pointHolder.z) / direction.z;
                break;
        }
        face = 0; // 1 = x, 2 = y, 3 = z
        float smallestDistance = 1000;
        if (distancesXYZ.x < smallestDistance) {
            smallestDistance = distancesXYZ.x;
            face = 1;
        }
        if (distancesXYZ.y < smallestDistance) {
            smallestDistance = distancesXYZ.y;
            face = 2;
        }
        if (distancesXYZ.z < smallestDistance) {
            smallestDistance = distancesXYZ.z;
            face = 3;
        }
        if (smallestDistance == 0) {
            break;
        }
        int3 facesIfReplacement = { 1, 1, 1 };
        switch (face) { // directionIfReplacements is positive if positive but I want to subtract so invert it to subtract 1 when negative subtract nothing when positive
            case 1:
                facesIfReplacement.x = 1 - directionIfReplacements.x;
                break;
            case 2:
                facesIfReplacement.y = 1 - directionIfReplacements.y;
                break;
            case 3:
                facesIfReplacement.z = 1 - directionIfReplacements.z;
                break;
        }
        pointHolder += direction * smallestDistance; // the actual ray marching
        distanceTraveled += smallestDistance;
        int3 voxelIndexXYZ = { -1, -1, -1 }; // the integer coordinates within the buffer
        voxelIndexXYZ.x = ceil(pointHolder.x - facesIfReplacement.x);
        voxelIndexXYZ.y = ceil(pointHolder.y - facesIfReplacement.y);
        voxelIndexXYZ.z = ceil(pointHolder.z - facesIfReplacement.z);
        // check if voxelIndexXYZ is within bounds of the voxel buffer before indexing the array
        if (voxelIndexXYZ.x < voxelBufferRowSize &&
            voxelIndexXYZ.x >= 0 &&
            voxelIndexXYZ.y < voxelBufferRowSize &&
            voxelIndexXYZ.y >= 0 &&
            voxelIndexXYZ.z < voxelBufferRowSize &&
            voxelIndexXYZ.z >= 0)
        {
            int voxelIndex = voxelIndexXYZ.x + (voxelIndexXYZ.z * voxelBufferRowSize) + (voxelIndexXYZ.y * voxelBufferPlaneSize); // the voxel index in the flat array
            switch (voxelMaterials[voxelIndex]) {
                case 1:
                    Result[id.xy] = float4(1, 0, 0, 0) * (1 - (distanceTraveled / maxRayDistance));
                    distanceTraveled = maxRayDistance; // to end the while loop
                    break;
                case 2:
                    Result[id.xy] = float4(0, 1, 0, 0) * (1 - (distanceTraveled / maxRayDistance));
                    distanceTraveled = maxRayDistance;
                    break;
                case 3:
                    Result[id.xy] = float4(0, 0, 1, 0) * (1 - (distanceTraveled / maxRayDistance));
                    distanceTraveled = maxRayDistance;
                    break;
            }
        }
        else {
            break; // should be uncommented in actual game implementation where the player will always be inside the voxel buffer
        }
    }
}
Depending on the voxel data you give it, it produces this (screenshot omitted):
And here is the shader after "optimizing" it and taking out all branching or diverging conditional statements (I think):
#pragma kernel CSMain
RWTexture2D<float4> Result; // the actual array of pixels the player sees
float4 resultHolder;
const float width; // in pixels
const float height;
const Buffer<int> voxelMaterials; // for now just getting a flat voxel array
const Buffer<float4> voxelColors;
const int voxelBufferRowSize;
const int voxelBufferPlaneSize;
const int voxelBufferSize;
const Buffer<float3> rayDirections; // I'm now actually using it as points instead of directions
const float maxRayDistance;
const float3 playerCameraPosition; // relative to the voxelData, ie the first voxel's bottom, back, left corner position, no negative coordinates
const float3 playerWorldForward;
const float3 playerWorldRight;
const float3 playerWorldUp;
[numthreads(16, 16, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    resultHolder = float4(0, 0, 0, 0); // setting the pixel to black by default
    float3 pointHolder = playerCameraPosition; // initializing the first point to the player's position
    const float3 p = rayDirections[id.x + (id.y * width)]; // vector transformation getting the world space directions of the rays relative to the player
    const float3 u1 = p.x * playerWorldRight;
    const float3 u2 = p.y * playerWorldUp;
    const float3 u3 = p.z * playerWorldForward;
    const float3 direction = u1 + u2 + u3; // the transformed ray direction in world space
    const bool anyDir0 = direction.x == 0 || direction.y == 0 || direction.z == 0; // preventing a division by zero
    float distanceTraveled = maxRayDistance * anyDir0;
    const float3 nonZeroDirection = { // to prevent a division by zero
        direction.x + (1 * anyDir0),
        direction.y + (1 * anyDir0),
        direction.z + (1 * anyDir0)
    };
    const float3 axesUnits = { // the distances if the axis is an integer
        1.0f / abs(nonZeroDirection.x),
        1.0f / abs(nonZeroDirection.y),
        1.0f / abs(nonZeroDirection.z)
    };
    const bool3 isDirectionPositiveOr0 = {
        direction.x >= 0,
        direction.y >= 0,
        direction.z >= 0
    };
    while (distanceTraveled < maxRayDistance)
    {
        const bool3 pointIsAnInteger = {
            (int)pointHolder.x == pointHolder.x,
            (int)pointHolder.y == pointHolder.y,
            (int)pointHolder.z == pointHolder.z
        };
        const float3 distancesXYZ = {
            ((floor(pointHolder.x + isDirectionPositiveOr0.x) - pointHolder.x) / direction.x * !pointIsAnInteger.x) + (axesUnits.x * pointIsAnInteger.x),
            ((floor(pointHolder.y + isDirectionPositiveOr0.y) - pointHolder.y) / direction.y * !pointIsAnInteger.y) + (axesUnits.y * pointIsAnInteger.y),
            ((floor(pointHolder.z + isDirectionPositiveOr0.z) - pointHolder.z) / direction.z * !pointIsAnInteger.z) + (axesUnits.z * pointIsAnInteger.z)
        };
        float smallestDistance = min(distancesXYZ.x, distancesXYZ.y);
        smallestDistance = min(smallestDistance, distancesXYZ.z);
        pointHolder += direction * smallestDistance;
        distanceTraveled += smallestDistance;
        const int3 voxelIndexXYZ = {
            floor(pointHolder.x) - (!isDirectionPositiveOr0.x && (int)pointHolder.x == pointHolder.x),
            floor(pointHolder.y) - (!isDirectionPositiveOr0.y && (int)pointHolder.y == pointHolder.y),
            floor(pointHolder.z) - (!isDirectionPositiveOr0.z && (int)pointHolder.z == pointHolder.z)
        };
        const bool inBounds = (voxelIndexXYZ.x < voxelBufferRowSize && voxelIndexXYZ.x >= 0) && (voxelIndexXYZ.y < voxelBufferRowSize && voxelIndexXYZ.y >= 0) && (voxelIndexXYZ.z < voxelBufferRowSize && voxelIndexXYZ.z >= 0);
        const int voxelIndexFlat = (voxelIndexXYZ.x + (voxelIndexXYZ.z * voxelBufferRowSize) + (voxelIndexXYZ.y * voxelBufferPlaneSize)) * inBounds; // meaning the voxel on 0,0,0 will always be empty and act as our index out of range prevention
        if (voxelMaterials[voxelIndexFlat] > 0) {
            resultHolder = voxelColors[voxelMaterials[voxelIndexFlat]] * (1 - (distanceTraveled / maxRayDistance));
            break;
        }
        if (!inBounds) break;
    }
    Result[id.xy] = resultHolder;
}
A compute shader is what it is: a program that runs on the GPU, whether in Vulkan or in Unity, so you are doing the work in parallel either way. The point of Vulkan, however, is that it gives you more control over the commands being executed on the GPU - synchronization, memory, and so on. So it is not necessarily going to be faster in Vulkan than in Unity; what you should do first is optimise your shaders.
Also, the main problem with if/else is divergence within groups of invocations which operate in lock-step. If you can avoid it, the performance impact will be far lessened. Material on reducing branch divergence may help you with that; one common trick, which your optimised shader already uses, is to turn per-material branches into an indexed lookup.
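To show that lookup idea in a self-contained way, here is a small sketch in plain C++ rather than shader code; the palette values, material IDs and distances are invented for the example, and in the shader the table lives in a buffer like your voxelColors:
#include <array>
#include <cstdio>

struct Color { float r, g, b, a; };

// Palette indexed by material ID; index 0 is "empty".
static const std::array<Color, 4> kVoxelColors = {{
    { 0, 0, 0, 0 },   // 0: empty
    { 1, 0, 0, 1 },   // 1: red
    { 0, 1, 0, 1 },   // 2: green
    { 0, 0, 1, 1 },   // 3: blue
}};

// Shades one ray hit without a switch on the material: a lookup plus arithmetic,
// so neighbouring invocations that hit different materials would follow
// identical control flow instead of diverging.
Color shadeHit(int material, float distanceTraveled, float maxRayDistance) {
    const float fade = 1.0f - distanceTraveled / maxRayDistance;
    Color c = kVoxelColors[material];
    c.r *= fade;
    c.g *= fade;
    c.b *= fade;
    return c;
}

int main() {
    const Color c = shadeHit(2, 25.0f, 100.0f); // material 2 hit at distance 25 of 100
    std::printf("%.2f %.2f %.2f %.2f\n", c.r, c.g, c.b, c.a);
    return 0;
}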
If you still want to do all that in Vulkan...
Since you are not going to do any triangle rasterisation, you probably won't need the render passes or graphics pipelines that the tutorials generally show. Instead you are going to need a compute pipeline. Those are far simpler than graphics pipelines, requiring only one shader and the pipeline layout (the inputs and outputs are bound via descriptor sets).
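A minimal sketch of creating such a pipeline, with error handling omitted; device and shaderModule (the compiled SPIR-V compute shader) are assumed to already exist, and the single binding matches a binding = 0 storage image in the shader:
VkDescriptorSetLayoutBinding binding{};
binding.binding         = 0;                                 // matches the storage image binding in the shader
binding.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;  // the image the shader writes pixels into
binding.descriptorCount = 1;
binding.stageFlags      = VK_SHADER_STAGE_COMPUTE_BIT;

VkDescriptorSetLayoutCreateInfo setLayoutInfo{};
setLayoutInfo.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
setLayoutInfo.bindingCount = 1;
setLayoutInfo.pBindings    = &binding;
VkDescriptorSetLayout setLayout;
vkCreateDescriptorSetLayout(device, &setLayoutInfo, nullptr, &setLayout);

VkPipelineLayoutCreateInfo layoutInfo{};
layoutInfo.sType          = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
layoutInfo.setLayoutCount = 1;
layoutInfo.pSetLayouts    = &setLayout;
VkPipelineLayout pipelineLayout;
vkCreatePipelineLayout(device, &layoutInfo, nullptr, &pipelineLayout);

VkComputePipelineCreateInfo pipelineInfo{};
pipelineInfo.sType        = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
pipelineInfo.stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
pipelineInfo.stage.stage  = VK_SHADER_STAGE_COMPUTE_BIT;
pipelineInfo.stage.module = shaderModule;                    // compiled SPIR-V compute shader
pipelineInfo.stage.pName  = "main";                          // entry point name in the shader
pipelineInfo.layout       = pipelineLayout;
VkPipeline computePipeline;
vkCreateComputePipelines(device, VK_NULL_HANDLE, 1, &pipelineInfo, nullptr, &computePipeline);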
You just need to pass the swapchain image to the compute shader as a storage image in a descriptor (and of course any other data your shader may need; everything is passed via descriptors). For that you need to specify VK_IMAGE_USAGE_STORAGE_BIT in your swapchain creation structure.
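Sketch of those two pieces, again with invented names (surface, swapchain, swapchainImageView, descriptorSet) and most of the swapchain fields left out:
VkSwapchainCreateInfoKHR swapchainInfo{};
swapchainInfo.sType      = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
swapchainInfo.surface    = surface;
swapchainInfo.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
                           VK_IMAGE_USAGE_STORAGE_BIT;       // needed so the compute shader can imageStore into it
// ... format, extent, image count, present mode, etc. as usual
// (check VkSurfaceCapabilitiesKHR::supportedUsageFlags to confirm storage usage is supported) ...
vkCreateSwapchainKHR(device, &swapchainInfo, nullptr, &swapchain);

// Each frame, point the storage-image descriptor at the swapchain image view
// the compute shader will write to.
VkDescriptorImageInfo imageInfo{};
imageInfo.imageView   = swapchainImageView;
imageInfo.imageLayout = VK_IMAGE_LAYOUT_GENERAL;             // the layout storage images are written in

VkWriteDescriptorSet write{};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;
write.dstBinding      = 0;                                   // the output image binding
write.descriptorCount = 1;
write.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
write.pImageInfo      = &imageInfo;
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);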
Then, in your command buffer, you bind the descriptor sets with the image and other data, bind the compute pipeline, and dispatch it much as you probably do in Unity. Presenting the swapchain and submitting the command buffers shouldn't be different from how graphics works in the tutorials.
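Recording the dispatch could then look roughly like this, assuming the 8x8 local size of the first shader above and a window of width x height pixels:
// cmd is the VkCommandBuffer being recorded for this frame.
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelineLayout,
                        0, 1, &descriptorSet, 0, nullptr);

// One invocation per pixel; round up so the whole image is covered when the
// resolution is not a multiple of the 8x8 local size.
vkCmdDispatch(cmd, (width + 7) / 8, (height + 7) / 8, 1);

// Before presenting, transition the swapchain image from VK_IMAGE_LAYOUT_GENERAL
// to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR with an image memory barrier; queue
// submission and vkQueuePresentKHR work the same way as in the graphics tutorials.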

glm rotation is not applied according to model's new orientation

I want to achieve camera rotation around an object. However, when I rotate the camera in different directions and then apply more rotation, the model rotates with respect to its initial orientation, not the new orientation. I don't know what I am missing; how can I resolve this issue?
my MVP initialization
ubo.model = attr.transformation[0];
ubo.view = glm::lookAt(glm::vec3(0.0f, 10.0f, 20.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(50.0f), extend.width / (float)extend.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
my update code
while (accumulatedTime >= timeFPS)
{
    SDL_PollEvent(&event);
    if (event.type == SDL_QUIT)
    {
        exit = true;
    }
    if (event.type == SDL_MOUSEMOTION && leftMouseButtonPressed == false)
    {
        prev.x = event.button.x;
        prev.y = event.button.y;
    }
    if (event.type == SDL_MOUSEBUTTONDOWN && event.button.button == SDL_BUTTON_LEFT)
    {
        leftMouseButtonPressed = true;
    }
    if (event.type == SDL_MOUSEBUTTONUP && event.button.button == SDL_BUTTON_LEFT)
    {
        leftMouseButtonPressed = false;
    }
    if (event.type == SDL_MOUSEMOTION && leftMouseButtonPressed == true)
    {
        New.x = event.button.x;
        New.y = event.button.y;
        delta = New - prev;
        if (delta.x != 0)
            ubo.view = glm::rotate(ubo.view, timeDelta * delta.x / 20, glm::vec3(0.0f, 1.0f, 0.0f));
        if (delta.y != 0)
            ubo.view = glm::rotate(ubo.view, timeDelta * delta.y / 20, glm::vec3(1.0f, 0.0f, 0.0f));
        prev = New;
    }
    accumulatedTime -= timeFPS;
    v.updateUniformBuffer(ubo);
}
v.drawFrame();
}
my vertex shader
#version 450
#extension GL_ARB_separate_shader_objects : enable
....
void main()
{
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(inPosition, 1.0);
    fragTexCoord = inTexCoord;
    Normal = ubo.proj * ubo.view * ubo.model * vec4(inNormals, 1.0);
}
As it turns out I was missing some essential parts of vector rotation; my conclusions are as follows:
We need four vectors: cameraPos, cameraDirection, cameraUp and cameraRight (the latter two are the up and right vectors relative to the camera's orientation).
Each time we rotate the camera position we also need to rotate the cameraUp and cameraRight vectors (this was the part that I was missing).
The final camera orientation is calculated from the newly transformed cameraPos, cameraUp and cameraRight vectors.
You can find a good tutorial about this online.
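A minimal sketch of that approach using GLM; the OrbitCamera type and its member names are invented for the example, and the resulting matrix would go into ubo.view before updating the uniform buffer:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct OrbitCamera {
    glm::vec3 pos   { 0.0f, 10.0f, 20.0f };
    glm::vec3 up    { 0.0f, 1.0f, 0.0f };
    glm::vec3 right { 1.0f, 0.0f, 0.0f };
    glm::vec3 target{ 0.0f, 0.0f, 0.0f };

    // Yaw about the camera's current up axis, then pitch about its current right
    // axis, rotating the position *and* the camera's own axes so the next drag
    // rotates about the new orientation rather than the initial one.
    void rotate(float yawRadians, float pitchRadians) {
        glm::mat4 yaw = glm::rotate(glm::mat4(1.0f), yawRadians, up);
        pos   = glm::vec3(yaw * glm::vec4(pos - target, 1.0f)) + target;
        right = glm::normalize(glm::vec3(yaw * glm::vec4(right, 0.0f)));

        glm::mat4 pitch = glm::rotate(glm::mat4(1.0f), pitchRadians, right);
        pos = glm::vec3(pitch * glm::vec4(pos - target, 1.0f)) + target;
        up  = glm::normalize(glm::vec3(pitch * glm::vec4(up, 0.0f)));
    }

    glm::mat4 view() const {
        return glm::lookAt(pos, target, up);
    }
};

int main() {
    OrbitCamera camera;
    camera.rotate(0.1f, 0.05f);      // e.g. mouse delta scaled by sensitivity
    glm::mat4 view = camera.view();  // in the app: ubo.view = camera.view(); then update the uniform buffer
    (void)view;
    return 0;
}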

libgdx, animations don't scale on Android

I was testing my app on Android.
My app has one background image with two animations using TextureAtlas.
It works fine on desktop - the sprite and the animations all scale - but when I tested it on Android, the sprite resized correctly while the animations didn't resize at all.
constantes.VIRTUAL_WIDTH = 1920;
constantes.VIRTUAL_HEIGHT = 1080;
.....
public static void show() {
    camera = new OrthographicCamera(constantes.VIRTUAL_WIDTH, constantes.VIRTUAL_HEIGHT); // Aspect Ratio Maintenance
    batch = new SpriteBatch();
    texturafundo = new Texture("cenarios/penhasco.jpg");
    spriteFundo = new Sprite(texturafundo);
    spriteFundo.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    // animations
    textureAtlas = new TextureAtlas(Gdx.files.internal("anima/rio.txt"));
    animacao = new Animation(1/10f, textureAtlas.getRegions());
    textureAtlas2 = new TextureAtlas(Gdx.files.internal("anima/portal.txt"));
    animacao2 = new Animation(1/10f, textureAtlas2.getRegions());
}

public void render(float delta) {
    // update camera
    camera.update();
    // set viewport
    Gdx.gl.glViewport((int) viewport.x, (int) viewport.y,
            (int) viewport.width, (int) viewport.height);
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    elapsedTime += delta;
    batch.begin();
    spriteFundo.draw(batch);
    // draw animation 1
    batch.draw(animacao.getKeyFrame(elapsedTime, true), 0, 0);
    batch.draw(animacao2.getKeyFrame(elapsedTime, true), 788, 249);
    batch.end();
}

public void resize(int width, int height) {
    float aspectRatio = (float) width / (float) height;
    float scale = 1f;
    Vector2 crop = new Vector2(0f, 0f);
    if (aspectRatio > constantes.ASPECT_RATIO) {
        scale = (float) height / (float) constantes.VIRTUAL_HEIGHT;
        crop.x = (width - constantes.VIRTUAL_WIDTH * scale) / 2f;
    } else if (aspectRatio < constantes.ASPECT_RATIO) {
        scale = (float) width / (float) constantes.VIRTUAL_WIDTH;
        crop.y = (height - constantes.VIRTUAL_HEIGHT * scale) / 2f;
    } else {
        scale = (float) width / (float) constantes.VIRTUAL_WIDTH;
    }
    float w = (float) constantes.VIRTUAL_WIDTH * scale;
    float h = (float) constantes.VIRTUAL_HEIGHT * scale;
    viewport = new Rectangle(crop.x, crop.y, w, h);
}

Centre Buttons In Swift - Objective C Conversion Issue

In Objective C, I am using a function to centre some buttons, which works fine.
The buttons centre perfectly on the screen.
However, after converting the code to Swift it isn't working as expected. There is extra space on the left-hand side:
func centerImageViews() {
    // Measure the total width taken by the buttons
    var width: CGFloat = 0.0
    for (index, item) in enumerate(self.soundBoxes) {
        if (item.tag >= numberOfBoxes) {
            item.alpha = 0
        } else {
            item.alpha = 1
            width += item.bounds.size.width + 10
        }
        if (width > 10) {
            width -= 10;
        }
        // If the buttons width is shorter than the visible bounds, adjust origin
        var origin: CGFloat = 0;
        if (width < self.view.frame.size.width) {
            origin = (self.view.frame.size.width - width) / 2.0;
        }
        // Place buttons
        var x: CGFloat = origin;
        var y: CGFloat = 200;
        for (index, item) in enumerate(self.soundBoxes) {
            item.center = CGPointMake(x + item.bounds.size.width/2.0, y + item.bounds.size.height/2.0);
            x += item.bounds.size.width + 10;
        }
    }
}
In Objective C:
- (void)centerImageViews:(NSArray*)imageViews withCount:(int)number
{
    int kButtonSeperator = 10;
    // Measure The Total Width Of Our Buttons
    CGFloat width = 0;
    for (UIImageView* hit in soundBoxes)
        if (hit.tag >= numberOfBoxes) {
            hit.alpha = 0;
        } else {
            width += hit.bounds.size.width + kButtonSeperator;
            hit.alpha = 1;
        }
    if (width > kButtonSeperator)
        width -= kButtonSeperator;
    CGFloat origin = 0;
    if (width < self.view.frame.size.width)
        origin = (self.view.frame.size.width - width) / 2.f;
    CGFloat x = origin;
    CGFloat y = 85;
    UIImageView* hit;
    for (hit in imageViews) {
        hit.center = CGPointMake(x + hit.bounds.size.width/2.f, y + hit.bounds.size.height/2.f);
        //NSLog(@"X Is %f", hit.center.x);
        x += hit.bounds.size.width + kButtonSeperator;
    }
}

Image manipulation with Kinect

I'm trying to create an application that zooms in and out of an image and rotates it via Kinect. So far it works, but only for one of the two cases at a time. What I would like is that if I have rotated the image, that new value is kept when I zoom, so I zoom on an image that has been rotated X degrees. The way I have it now, if I first rotate and then try to zoom, the image goes back to its initial state.
private void TrackDistances(Skeleton skeleton)
{
    if (skeleton.TrackingState == SkeletonTrackingState.Tracked)
    {
        ...
        if (wristLeft.Y > shoulderLeft.Y && wristRight.Y > shoulderRight.Y)
        {
            float distance = Math.Abs(wristLeft.X - wristRight.X);
            image_Zoom(distance);
        }
        if (wristLeft.Y < shoulderLeft.Y && wristRight.Y < shoulderRight.Y)
        {
            angleDeg = GetJointAngle(zeroPoint, anglePoint);
            image_Rotate(angleDeg);
        }
    }
}

private void image_Zoom(float distance)
{
    //TransformGroup transformGroup = (TransformGroup)image.RenderTransform;
    //ScaleTransform scale = (ScaleTransform)transformGroup.Children[0];
    //double zoom = distance * 1.5;
    //scale.ScaleX = zoom;
    //scale.ScaleY = zoom;
    double zoom = distance * 1.5;
    double ScaleX = zoom;
    double ScaleY = zoom;
    ScaleTransform scale = new ScaleTransform(ScaleX, ScaleY);
    image.RenderTransform = scale;
}

private void image_Rotate(double angleDeg)
{
    var angle = angleDeg - 180;
    RotateTransform rotate = new RotateTransform(angle);
    image.RenderTransform = rotate;
}
Any suggestions?
Thanks!
I think it's because you change RenderTransform to be either a ScaleTransform or a RotateTransform, so each call replaces the other transform.
You can set both the ScaleTransform and the RotateTransform of the image in the XAML (for example inside a TransformGroup, as in your commented-out code), and just change the angle or zoom parameter in the code-behind.
Also see here:
How can I do both zoom and rotate on an inkcanvas?