set camera Preview Aspect ratio manually with current display metrics - kotlin

I'm trying to build a camera preview UI with selectable aspect ratios, but I can't get the correct resolution for the selected ratio. I have 9:16 (full screen), 16:9, 2.35:1 and 1:1 ratio options in portrait mode. My XML looks like this:
<FrameLayout ..>
    <RelativeLayout
        android:id="@+id/cameraContainer"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_marginTop="?actionBarSize">
        <LinearLayout
            android:id="@+id/cameraPreview"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_centerVertical="true"/>
    </RelativeLayout>
</FrameLayout>
I tried changing the cameraContainer width and height, expecting the camera preview to change with it, but that did not happen. My Kotlin code is below:
fun setRatio(selectedRatioId: Int) {
    when (selectedRatioId) {
        1 -> {
            // 9:16
            cameraContainerParams.height = gettingDisplayMetrics.height
        }
        2 -> {
            // 16:9
            cameraContainerParams.height = (gettingDisplayMetrics.width * 9) / 16
        }
        3 -> {
            // 1:1
            cameraContainerParams.height = gettingDisplayMetrics.width
        }
        4 -> { // note: this branch was a duplicate `3 ->` in the original and was never reached
            // 2.35:1
            cameraContainerParams.height = (gettingDisplayMetrics.width / 2.35).toInt()
        }
    }
    cameraContainerParams.width = gettingDisplayMetrics.width
    cameraContainer.layoutParams = cameraContainerParams
}
The result (on a P40 Lite):
9:16 — container is 1080 x 2090, but the camera output is 992 x 1920
16:9 — output 1080 x 607 (expected 1080 x 608)
1:1 — output 480 x 480 (I expect 1080 x 1080)
2.35:1 — output 1080 x 459 (expected 1080 x 460)
How can I set the camera to the correct aspect ratio for the selection?
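For reference, the width-to-height mapping in the `when` block can be sketched standalone in Python (screen metrics are hypothetical, and the duplicate `3 ->` branch is treated as `4 ->`). Note that the one-pixel differences in the results (607 vs. 608, 459 vs. 460) come from integer truncation; rounding instead yields the expected values:

```python
def preview_size(ratio_id: int, screen_w: int, screen_h: int) -> tuple:
    """Mirror the Kotlin when-block: return (width, height) for a ratio option."""
    if ratio_id == 1:        # 9:16 (full screen)
        height = screen_h
    elif ratio_id == 2:      # 16:9
        height = (screen_w * 9) // 16      # truncates: 9720 / 16 = 607.5 -> 607
    elif ratio_id == 3:      # 1:1
        height = screen_w
    elif ratio_id == 4:      # 2.35:1
        height = int(screen_w / 2.35)      # truncates: 459.57 -> 459
    else:
        raise ValueError("unknown ratio id")
    return (screen_w, height)

def preview_size_rounded(ratio_id: int, screen_w: int, screen_h: int) -> tuple:
    """Same mapping, but rounding to the nearest pixel instead of truncating."""
    w, h = preview_size(ratio_id, screen_w, screen_h)
    if ratio_id == 2:
        h = round(screen_w * 9 / 16)       # 607.5 -> 608
    elif ratio_id == 4:
        h = round(screen_w / 2.35)         # 459.57 -> 460
    return (w, h)
```

This only explains the container size; the camera itself will still pick one of its supported output sizes, which is why the 9:16 container (1080 x 2090) and the camera output (992 x 1920) can differ.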


How to rotate Annotations when converting HDF5 files to dm3?

I am working on converting a Velox file (HDF5) to a .dm3 file using Tore Niermann's plugin (gms_plugin_hdf5) to read strings. Annotations in the HDF5 file also need to be transferred to the .dm3 file. The HDF5 image may be rotated by any angle, but the annotation position coordinates read from the HDF5 file correspond to the unrotated image.
I found that the annotations don't move when the image is rotated, so I have to re-calculate the position coordinates for every annotation. That is inconvenient for annotations such as boxes or ovals. I also need to keep the maximum area when rotating, so the image size changes with the rotation angle. Is there a better solution for rotating the annotations? Thanks.
Here is a sample function from my script; I didn't attach all of it because it's quite long.
image GetAnnotations(Taggroup names, string filename, string name, Taggroup Annotations, Image VeloxImg, number Angle)
{
    number i, j, imagex, imagey, xscale, yscale
    String Displaypath, AnnotationStr, DisplayStr, units
    taggroup attr = NewTagList()
    getsize(VeloxImg, imagex, imagey)
    number centerx = imagex/2
    number centery = imagey/2
    getscale(veloximg, xscale, yscale)
    units = getunitstring(veloximg)
    component imgdisp = imagegetimagedisplay(VeloxImg, 0)
    For (j=0; j<TagGroupCountTags(Annotations); ++j)
    {
        TagGroupGetIndexedTagAsString(Annotations, j, AnnotationStr)
        string AnnotPath = h5_read_string_dataset(filename, AnnotationStr)
        string AnnotDataPath = GetValueFromLongStr(AnnotPath, "dataPath\": \"", "\"")
        AnnotDataPath = ReplaceStr(AnnotDataPath, "\\/", "\/")
        string AnnotLabel = GetValueFromLongStr(AnnotPath, "label\": \"", "\"")
        string AnnotDrawPath = h5_read_string_dataset(filename, AnnotDataPath)
        image img := RealImage( "", 4, 1, 1 )
        TagGroup AnnoTag = alloc(MetaStr2TagGroup).ParseText2ImageTag(AnnotDrawPath, img)
        deleteimage(img)
        string AnnotDrawType = TagGroupGetTagLabel(AnnoTag, 0)
        //AnnoTag.TagGroupOpenBrowserWindow( "AnnotationsTag", 0 )
        if (AnnotDrawType == "arrow")
        {
            number p1_x, p1_y, p2_x, p2_y
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:x", p1_x)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:y", p1_y)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:x", p2_x)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:y", p2_y)
            //VeloxImg.CreateArrowAnnotation( p1y, p1x, p2y, p2x )
            number p1_x_new = (p1_x-0.5)*cos(Angle) + (p1_y-0.5)*sin(Angle) + 0.5
            number p1_y_new = -(p1_x-0.5)*sin(Angle) + (p1_y-0.5)*cos(Angle) + 0.5
            number p2_x_new = (p2_x-0.5)*cos(Angle) + (p2_y-0.5)*sin(Angle) + 0.5
            number p2_y_new = -(p2_x-0.5)*sin(Angle) + (p2_y-0.5)*cos(Angle) + 0.5
            result(p1_x+" "+p1_y+" new "+p1_x_new+" "+p2_y_new+"\n")
            component arrowAnno = newarrowannotation(p1_y_new*imagey, p1_x_new*imagex, p2_y_new*imagey, p2_x_new*imagex)
            arrowAnno.ComponentSetForegroundColor( 1, 0, 0 )
            arrowAnno.ComponentSetDrawingMode( 2 )
            imgdisp.ComponentAddChildAtEnd( arrowAnno )
        }
        // ... (other annotation types handled similarly; rest of function omitted)
    }
}
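The per-endpoint transformation used in the arrow branch can be sketched as a small standalone function (assuming, as in the script, coordinates normalized to [0, 1] with the rotation center at (0.5, 0.5) and the angle in radians):

```python
import math

def rotate_about_center(x: float, y: float, angle: float) -> tuple:
    """Rotate a normalized (x, y) point by `angle` around the image center (0.5, 0.5).

    The sign convention matches the DM-script above: y is negated because
    screen/image coordinates grow downward.
    """
    dx, dy = x - 0.5, y - 0.5
    x_new = dx * math.cos(angle) + dy * math.sin(angle) + 0.5
    y_new = -dx * math.sin(angle) + dy * math.cos(angle) + 0.5
    return x_new, y_new
```

Multiplying the result by the rotated image's width and height (as `newarrowannotation` does above) gives pixel coordinates; note this only accounts for the rotation, not for the size change when the canvas is enlarged to keep the maximum area.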
Not directly answering your question, but maybe of interest nevertheless:
While GMS does not support rotation of annotations (Rect, Oval, Text, ImageDisplay, ...), it does support a rotation property for ROIs. So maybe you can use rect-ROIs and oval-ROIs instead of annotations in your application.
Example (never mind that I did the shift computation wrongly):
image test := realImage("Test",4,512,512)
test= abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150))
test.showimage()
imageDisplay disp = test.ImageGetImageDisplay(0)
ROI box = NewROI()
box.RoiSetRectangle(46,88,338,343)
box.RoiSetVolatile(0)
ROI oval = NewROI()
oval.ROISetOval(221,226,287,254)
oval.RoiSetVolatile(0)
disp.ImageDisplayAddROI(box)
disp.ImageDisplayAddROI(oval)
number rot_deg = 8
image rot := test.rotate( pi()/180*rot_deg)
rot.ShowImage()
imageDisplay disp_rot = rot.ImageGetImageDisplay(0)
ROI box_rot = box.ROIClone()
ROI oval_rot = oval.ROIClone()
disp_rot.ImageDisplayAddROI(box_rot)
disp_rot.ImageDisplayAddROI(oval_rot)
number shift_x = (rot.ImageGetDimensionSize(0)-test.ImageGetDimensionSize(0)) / 2
number shift_y = (rot.ImageGetDimensionSize(1)-test.ImageGetDimensionSize(1)) / 2
number t,l,b,r
box_rot.ROIGetRectangle(t,l,b,r)
box_rot.ROISetRectangle(t+shift_y,l+shift_x,b+shift_y,r+shift_x)
oval_rot.ROIGetOval(t,l,b,r)
oval_rot.ROISetOval(t+shift_y,l+shift_x,b+shift_y,r+shift_x)
box_rot.ROISetRotationAngle( rot_deg )
oval_rot.ROISetRotationAngle( rot_deg )
If I understood you correctly, your source data (HDF5) stores the image (a 2D array?) plus a rotation angle, but the annotations are in the coordinate system of the (unrotated) image? How is the source data displayed in the original software, then? (Does it show a rotated rectangular image?)
GMS does not support rotating imageDisplays (as objects), and consequently does not support rotating annotations either. The coordinate systems are always screen-axis-aligned and orthogonal, hence the need for interpolation when "rotating" images: the data values are re-computed for the new grid.
If you don't need the annotations to be adjustable afterwards, one option is to create an "as displayed" image after import but prior to rotation, and then rotate that image with the annotations "burnt in".
This is obviously only good for creating "final display images", though.
image before := realImage("Test",4,512,512)
before = abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150))
before.showimage()
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,60,430))
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,206,240))
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewOvalAnnotation(206,220,300,256))
// Create as-displayed image
Number t,l,b,r,ofx,ofy,scx,scy
before.ImageGetOrCreateImageDocument().ImageDocumentGetViewExtent(t,l,b,r)
before.ImageGetOrCreateImageDocument().ImageDocumentGetViewToWindowTransform(ofx,ofy,scx,scy)
image asShown := before.ImageGetOrCreateImageDocument().ImageDocumentCreateRGBImageFromDocument(round((r-l)*scx),round((b-t)*scy),0,0)
asShown.ShowImage()
number angle_deg = 8
image rotated := asShown.Rotate( angle_deg/180*PI() )
rotated.ShowImage()

Multisampled input and output (vulkan)

I want to blend a multisampled image onto another multisampled image. The images have the same number of samples and the same size. I tried two approaches:
1. A multisampled texture. Fragment shader:
#version 450

layout (set = 1, binding = 0) uniform sampler2DMS textureSampler;
layout (location = 0) out vec4 outFragColor;

void main()
{
    outFragColor = texelFetch(textureSampler, ivec2(gl_FragCoord.xy), gl_SampleID);
}
2. An input attachment. Fragment shader:
#version 450

layout (input_attachment_index = 0, set = 1, binding = 0) uniform subpassInputMS offscreenBuffer;
layout (location = 0) out vec4 color;

void main(void)
{
    color = subpassLoad(offscreenBuffer, gl_SampleID);
}
I enabled the sampleRateShading feature, set minSampleShading to 1.0f, and set sampleShadingEnable to VK_TRUE. The results of the two approaches are somewhat different, but both are still rectangular garbage. In the input-attachment case, the outline of the blended image matches the original, but its pixels appear to be replaced with blocks of garbage.
I have tested my render passes and pipelines on non-multisampled images (with different shaders for the non-multisampled case) and everything works fine.
What am I doing wrong?

WearableRecyclerView scrolls not curved

In my WearableRecyclerView, the items move along a triangular path instead of a real curve like the home screen of the watch. I can't find any mistake in my code.
ShareActivity:
private void initListView() {
    WearableRecyclerView recyclerView = findViewById(R.id.lVDevices);
    recyclerView.setEdgeItemsCenteringEnabled(true);
    final ScrollingLayoutCallback scrollingLayoutCallback =
            new ScrollingLayoutCallback();
    adapter = new ShareAdapter(new ShareAdapter.Callback() {
        @Override
        public void onDeviceClicked(int position, String deviceName) {
            onListItemClick(position, deviceName);
        }
    });
    recyclerView.setLayoutManager(
            new WearableLinearLayoutManager(this, scrollingLayoutCallback));
    recyclerView.setAdapter(adapter);
}
ScrollingLayoutCallback:
public class ScrollingLayoutCallback extends WearableLinearLayoutManager.LayoutCallback {
    private static final float MAX_ICON_PROGRESS = 0.65f;

    @Override
    public void onLayoutFinished(View child, RecyclerView parent) {
        // Figure out % progress from top to bottom
        float centerOffset = ((float) child.getHeight() / 2.0f) / (float) parent.getHeight();
        float yRelativeToCenterOffset = (child.getY() / parent.getHeight()) + centerOffset;
        // Normalize for center
        float mProgressToCenter = Math.abs(0.5f - yRelativeToCenterOffset);
        // Adjust to the maximum scale
        mProgressToCenter = Math.min(mProgressToCenter, MAX_ICON_PROGRESS);
        child.setScaleX(1 - mProgressToCenter);
        child.setScaleY(1 - mProgressToCenter);
    }
}
ListView XML:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <android.support.v4.widget.SwipeRefreshLayout
        android:id="@+id/refreshlayout"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@android:color/transparent">
        <android.support.wear.widget.WearableRecyclerView
            android:id="@+id/lVDevices"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:background="@android:color/transparent"
            android:scrollbars="vertical"/>
    </android.support.v4.widget.SwipeRefreshLayout>
</RelativeLayout>
XML of the row:
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/lVDevices_row"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@android:color/transparent"
    android:drawableStart="@drawable/point_img"
    android:layout_centerHorizontal="true"
    android:textSize="18sp">
</TextView>
Is the problem in the callback method? I mean, is there a mistake in the calculation, even though it is from the developer website?
Edit 1: New math in the ScrollingCallback class
float centerOffset = ((float) child.getHeight() / 2.0f) / (float) parent.getHeight();
float yRelativeToCenterOffset = (child.getY() / parent.getHeight()) + centerOffset;
float progresstoCenter = (float) Math.sin(yRelativeToCenterOffset * Math.PI);
child.setScaleX(progresstoCenter);
child.setScaleY(progresstoCenter);
I've created a solution.
Like the OP, I never liked the solution given in the official Android documentation, which makes the list elements move along a triangular path (a straight line from the top center to the middle left, then a straight line from there to the bottom center).
The fix is simple: use trig to change the path to a semi-circular one. Replace the CustomScrollingLayoutCallback with this:
class CustomScrollingLayoutCallback extends WearableLinearLayoutManager.LayoutCallback {
    @Override
    public void onLayoutFinished(View child, RecyclerView parent) {
        final float MAX_ICON_PROGRESS = 0.65f;
        try {
            float centerOffset = ((float) child.getHeight() / 2.0f) / (float) parent.getHeight();
            float yRelativeToCenterOffset = (child.getY() / parent.getHeight()) + centerOffset;
            // Normalize for center, adjusting to the maximum scale
            float progressToCenter = Math.min(Math.abs(0.5f - yRelativeToCenterOffset), MAX_ICON_PROGRESS);
            // Follow a curved path, rather than triangular!
            progressToCenter = (float) (Math.cos(progressToCenter * Math.PI * 0.70f));
            child.setScaleX(progressToCenter);
            child.setScaleY(progressToCenter);
        } catch (Exception ignored) {}
    }
}
The `* 0.70f` factor in the Math.cos call was determined by trial and error; it alters the curve to best match the official Android Wear curve.
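To see why the cosine version looks rounder, here is a sketch of both scale curves as a function of the normalized offset-from-center p used in the Java code (0 at the center, 0.5 at the edge; the 0.70 factor is the trial-and-error value from above):

```python
import math

MAX_ICON_PROGRESS = 0.65

def triangular_scale(p: float) -> float:
    """Official-docs curve: linear falloff, so item edges trace straight lines."""
    return 1 - min(abs(p), MAX_ICON_PROGRESS)

def curved_scale(p: float) -> float:
    """Cosine curve from the answer: flat near the center, so the path looks rounded."""
    progress = min(abs(p), MAX_ICON_PROGRESS)
    return math.cos(progress * math.pi * 0.70)
```

Both curves give full scale at the center; the cosine one stays closer to 1 for small offsets and drops faster near the edges, which is what produces the rounded path.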

PGraphics camera positioning only happens in the next frame

From this Github issue:
Computer specs: Mac OS Sierra 10.12.3, Processing 3.2.3
When using dynamic values in a Processing PGraphics camera, they only get applied in the next frame. I have not been able to save the current frame to a file without this offset being a problem. Is this the expected behavior?
Consider the code below:
It shows a rotating cube, a red rotating square, and the current frame count.
There is an x_up global variable that controls that value in the camera (default 0.0).
If frameCount % 90 == 0, the sketch:
- changes x_up (from 0.0 to 1.0),
- changes the fill to transparent blue,
- saves a file "output/#####_" + x_up + "_.png" (e.g. 00090_1.0_.png).
If frameCount % 90 == 1, it saves another file with the same naming convention, with no fill and no x_up change (e.g. 00091_0.0_.png).
PGraphics pg;
PMatrix mat_scene;
float x_up;

void setup() {
  size(600, 600, P3D);
  pg = createGraphics(width, height, P3D);
  mat_scene = getMatrix();
}

void draw() {
  pg.beginDraw();
  pg.hint(DISABLE_DEPTH_TEST);
  pg.background(200);
  pg.noFill();
  // change stuff if frame % 90
  if (frameCount % 90 == 0) {
    x_up = 1.0;
    pg.fill(0, 0, 255, 10);
  } else {
    x_up = 0.0;
  }
  // the red rect
  pg.pushMatrix();
  pg.setMatrix(mat_scene);
  pg.stroke(255, 0, 0);
  pg.rectMode(CENTER);
  pg.translate(width * .5, height * .5, -600);
  pg.rotateZ(radians(float(frameCount)));
  pg.rect(0, 0, 600, 600);
  pg.popMatrix();
  // the cube
  pg.pushMatrix();
  pg.stroke(128);
  pg.translate(10, 100, -200);
  pg.rotateZ(radians(float(frameCount)));
  pg.box(300);
  pg.popMatrix();
  // the camera
  pg.beginCamera();
  pg.camera(width, height, -height, 0, 0, 0, x_up, 0.0, 1.0);
  pg.endCamera();
  // the frame counter
  pg.pushMatrix();
  pg.fill(255);
  pg.setMatrix(mat_scene);
  pg.textSize(20);
  pg.text(frameCount, 20, 30);
  pg.popMatrix();
  pg.endDraw();
  image(pg, 0, 0);
  if (frameCount > 10 && frameCount % 90 == 0) {
    saveFrame("output/#####_" + x_up + "_.png");
  }
  if (frameCount > 10 && frameCount % 90 == 1) {
    saveFrame("output/#####_" + x_up + "_.png");
  }
}
You can see the “blip” happen every 90 frames. If you look at the output folder, you will see something like this in frame 90:
and something like this in frame 91:
Notice that you can tell it is only the camera, because both attributes (the blue fill and the camera x_up) are changed in frame 90, but only frame 91 shows the camera change; frame 90 correctly shows the blue fill in both boxes. This happens even if I set the frame rate to 1, and also if I use pg.save instead of saveFrame.
Is this a bug? I might be missing something obvious, but I'm not an expert in 3D transformations or cameras.
You're calling the camera() function after you've done all your drawing. So each frame you do this:
1. Move the objects in your scene and take a picture.
2. Now move the camera.
So on frame 90 you draw your scene and then move the camera, which means that on frame 91 the camera is still using the position set during the previous frame.
To fix this, just move your call to camera() to before you draw everything (but after you set the x_up variable).
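The one-frame lag can be illustrated with a toy loop (a purely hypothetical model: a "camera" value affects only draws issued after it is set, and persists into the next frame):

```python
def render(frames: int, set_camera_after_drawing: bool) -> list:
    """Simulate which camera each frame is drawn with.

    Frame 1 wants a "tilted" camera; all other frames want "default".
    """
    camera = "default"
    snapshots = []
    for n in range(frames):
        wanted = "tilted" if n == 1 else "default"
        if not set_camera_after_drawing:
            camera = wanted          # camera set BEFORE drawing: applied this frame
        snapshots.append(camera)     # "draw": record the camera used for this frame
        if set_camera_after_drawing:
            camera = wanted          # camera set AFTER drawing: only applies next frame
    return snapshots
```

With `set_camera_after_drawing=True` the tilted camera shows up one frame late, exactly the "blip" described in the question; with `False` it appears in the intended frame.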

Windows UWP Extended splash screen image rendering incorrectly on mobile

I built an extended splash screen for my Windows UWP application, following the example code (including the XAML for the extended splash screen) from this page:
Display a splash screen for more time
It renders correctly in the desktop window: it is centered perfectly and aligned exactly with the initial splash screen image. However, on a mobile emulator (I tried one with a 5-inch screen at 720p), the extended splash screen image is too large (almost two or three times larger) and appears cut off toward the bottom right of the page. I assume the progress ring is below the image, beyond the page boundary, so it is not visible.
Here is what it looks like on mobile; the left image is the initial splash screen and the right one is the extended splash page.
My XAML for the extended splash page looks like this:
<Page
    x:Class="MyApp_Win10.ExtendedSplash"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:MyApp_Win10"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <Grid Background="#FF000012">
        <Canvas>
            <Image x:Name="extendedSplashImage" Source="Assets/SplashScreen/SplashScreenImage3.png"/>
            <ProgressRing Name="splashProgressRing" IsActive="True" Width="60" Height="60" HorizontalAlignment="Center"/>
        </Canvas>
    </Grid>
</Page>
And my package.appxmanifest looks like this. (There is one image in the Assets folder, created as SplashScreenImage3.scale-200.png, with dimensions 1240 w x 600 h.)
EDIT: I added the remaining three image scales (150, 125 and 100) to the package.appxmanifest, but it made no difference. Since the extended splash page is not the same as the initial splash page, I think it is loading the exact image file I reference in the XAML, i.e. the full-sized one at 1240 w x 600 h.
Also, in the code-behind for the extended splash, these are the coordinates of the splash screen.
EDIT: My PositionImage() and PositionRing() functions:
void PositionImage()
{
    extendedSplashImage.SetValue(Canvas.LeftProperty, splashImageRect.X);
    extendedSplashImage.SetValue(Canvas.TopProperty, splashImageRect.Y);
    extendedSplashImage.Height = splashImageRect.Height;
    extendedSplashImage.Width = splashImageRect.Width;
}

void PositionRing()
{
    splashProgressRing.SetValue(Canvas.LeftProperty, splashImageRect.X + (splashImageRect.Width * 0.5) - (splashProgressRing.Width * 0.5));
    splashProgressRing.SetValue(Canvas.TopProperty, splashImageRect.Y + splashImageRect.Height + splashImageRect.Height * 0.1);
}
Make sure your PositionImage() and PositionRing() functions handle the case where the device is a phone, as follows:
void PositionImage()
{
    extendedSplashImage.SetValue(Canvas.LeftProperty, splashImageRect.X);
    extendedSplashImage.SetValue(Canvas.TopProperty, splashImageRect.Y);
    if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
    {
        extendedSplashImage.Height = splashImageRect.Height / ScaleFactor;
        extendedSplashImage.Width = splashImageRect.Width / ScaleFactor;
    }
    else
    {
        extendedSplashImage.Height = splashImageRect.Height;
        extendedSplashImage.Width = splashImageRect.Width;
    }
}

void PositionRing()
{
    splashProgressRing.SetValue(Canvas.LeftProperty, splashImageRect.X + (splashImageRect.Width * 0.5) - (splashProgressRing.Width * 0.5));
    splashProgressRing.SetValue(Canvas.TopProperty, splashImageRect.Y + splashImageRect.Height + splashImageRect.Height * 0.1);
    if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
    {
        splashProgressRing.Height = splashProgressRing.Height / ScaleFactor;
        splashProgressRing.Width = splashProgressRing.Width / ScaleFactor;
    }
}
and
// Variable to hold the device scale factor (used to determine the phone screen resolution)
private double ScaleFactor = DisplayInformation.GetForCurrentView().RawPixelsPerViewPixel;
Why haven't you added the other scale dimensions? Just add them and try again. It looks like your phone doesn't use scale 200.
I have the same problem.
In fact, the height and width of the splash screen returned on mobile are wrong: it returns the size of the screen instead (see the picture above, height=1280 and width=720, even though the actual splash screen is wider than it is tall).
I tried to recalculate the splash screen size by taking the screen width, guessing which splash screen asset was used, and dividing by the scale factor, but there is a small difference in size due to a margin or something.
It would be great if someone knew a better way to calculate the correct size instead of this kind of guessing.
if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    ScaleFactor = DisplayInformation.GetForCurrentView().RawPixelsPerViewPixel;
    double screenwidth = _splashImageRect.Width;
    if (screenwidth <= 1240)
    {
        // small screen will use the splash screen scale 100 - 620x300
        ExtendedSplashImage.Height = 300 / ScaleFactor;
        ExtendedSplashImage.Width = 620 / ScaleFactor;
    }
    else if (screenwidth <= 2480)
    {
        // medium screen will use the splash screen scale 200 - 1240x600
        ExtendedSplashImage.Height = 600 / ScaleFactor;
        ExtendedSplashImage.Width = 1240 / ScaleFactor;
    }
    else // screenwidth > 2480
    {
        // big screen will use the splash screen scale 400 - 2480x1200
        ExtendedSplashImage.Height = 1200 / ScaleFactor;
        ExtendedSplashImage.Width = 2480 / ScaleFactor;
    }
}
else
{
    ExtendedSplashImage.Height = _splashImageRect.Height;
    ExtendedSplashImage.Width = _splashImageRect.Width;
}
I have found a solution that works on mobile, and also when your app is launched via share from another app:
private void PositionImage()
{
    if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
    {
        double screenWidth = _splashImageRect.Width;
        ExtendedSplashImage.Width = screenWidth / ScaleFactor;
        // use the ratio of your splash screen to calculate the height
        ExtendedSplashImage.Height = ((screenWidth / ScaleFactor) * 600) / 1240;
    }
    else
    {
        // if the app is shared with another app, _splashImageRect is not returned properly either
        if (_splashImageRect.Width > windowContext.VisibleBounds.Width || _splashImageRect.Width < 1)
        {
            ExtendedSplashImage.Width = windowContext.VisibleBounds.Width;
            // use the ratio of your splash screen to calculate the height
            ExtendedSplashImage.Height = (windowContext.VisibleBounds.Width * 600) / 1240;
        }
        else
        {
            ExtendedSplashImage.Height = _splashImageRect.Height;
            ExtendedSplashImage.Width = _splashImageRect.Width;
        }
    }

    Double left = windowContext.VisibleBounds.Width / 2 - ExtendedSplashImage.ActualWidth / 2;
    Double top = windowContext.VisibleBounds.Height / 2 - ExtendedSplashImage.ActualHeight / 2;
    ExtendedSplashImage.SetValue(Canvas.LeftProperty, left);
    ExtendedSplashImage.SetValue(Canvas.TopProperty, top);
    ProgressRing.SetValue(Canvas.LeftProperty, left + ExtendedSplashImage.ActualWidth / 2 - ProgressRing.Width / 2);
    ProgressRing.SetValue(Canvas.TopProperty, top + ExtendedSplashImage.ActualHeight);
}
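The sizing math in the phone branch above can be sketched independently (assuming the 1240 x 600 scale-200 splash asset from the question, so height = width * 600 / 1240, and a hypothetical scale factor):

```python
# Logical size of the scale-200 splash asset from the question.
SPLASH_W, SPLASH_H = 1240, 600

def extended_splash_size(screen_width_px: float, scale_factor: float) -> tuple:
    """Size the extended-splash Image in view pixels, preserving the asset's aspect ratio.

    screen_width_px: raw screen width in physical pixels (what the phone
    wrongly reports as the splash rect width).
    scale_factor: RawPixelsPerViewPixel of the display.
    """
    width = screen_width_px / scale_factor          # convert raw pixels to view pixels
    height = width * SPLASH_H / SPLASH_W            # keep the 1240:600 ratio
    return width, height
```

For the 720p emulator from the question with a scale factor of 2.0 (an assumed value), this gives a 360-view-pixel-wide image, rather than the oversized one the raw rect produces.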