Raphael -- How to fit the paper size to the browser's window size? - resize

I want to change the size of the Raphael paper (the base for the objects) so that it fits the browser window when the window is resized. [Using Firefox 13.0, Raphael 2.1.0, Windows XP]
If possible, I would also like it to fit full-screen mode.
==================================================
(steps)
I created the paper: paper = Raphael(0, 50, 800, 600); // the initial width and height are 800 and 600.
I placed objects on the paper.
The browser window size is read with windowW = window.innerWidth and windowH = window.innerHeight (on Firefox).
The scaling value is calculated as sv = windowW/800;
The paper is then scaled with paper.scale(sv, sv);
==================================================
(the script)
window.onload = function () {
    paper = Raphael(0, 50, 800, 600);
    var background = paper.rect(0, 0, 800, 600).attr({fill: '#669999'});
    // placing the objects
    var circle = ...;
    var rect = ...;
    var ellipse = ...;
    windowW = window.innerWidth;
    windowH = window.innerHeight;
    sv = windowW / 800;
    paper.scale(sv, sv);
}
==================================================
(result)
Although circle.scale(sv), rect.scale(sv, sv) and ellipse.scale(sv, sv) work, paper.scale(sv, sv) and background.scale(sv, sv) do not.
Why does this happen? I can get the window size in real time with window.onresize = function() {...}. If there is a better method, please tell me.
Thanks,

I succeeded by following two points:
1) The "paper" itself is not a manipulable object; think of it as a billboard rather than something to scale.
2) Use st = paper.set(), put the objects (circle, rect, ...) into it, and scale the set with st.scale(sv, sv, 0, 0);
* The third and fourth parameters (0, 0) are very important: they keep the scaling origin at the top-left corner.
(caution)
Repeated resizing with scale() is problematic, because the scaling coefficients accumulate multiplicatively. So after applying a 1.1x resize five times, the total scale is 1.1^5. A sketch of this approach is shown below.
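A minimal sketch of this set-based approach, assuming the 800x600 paper and objects from the question (the currentScale bookkeeping is my addition, so that repeated resize events do not compound):
window.onload = function () {
    var paper = Raphael(0, 50, 800, 600);
    var circle = paper.circle(100, 100, 40);
    var rect = paper.rect(200, 150, 120, 80);
    var st = paper.set();
    st.push(circle, rect);
    var currentScale = 1;
    window.onresize = function () {
        var sv = window.innerWidth / 800;                      // target scale relative to the original 800px width
        st.scale(sv / currentScale, sv / currentScale, 0, 0);  // undo the previous scale, then apply the new one
        currentScale = sv;
    };
};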

Use setViewBox()
It should do the job:
http://raphaeljs.com/reference.html#Paper.setViewBox
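A rough sketch of the setViewBox() approach (untested; the 800x600 logical size and the 50px top offset are taken from the question): keep drawing in the original 800x600 coordinate system and let Raphael map it onto whatever size the window currently has.
window.onload = function () {
    var paper = Raphael(0, 50, window.innerWidth, window.innerHeight - 50);
    paper.setViewBox(0, 0, 800, 600, true);   // logical coordinates stay 800x600
    // ... place the objects using 800x600 coordinates, as in the question ...
    window.onresize = function () {
        paper.setSize(window.innerWidth, window.innerHeight - 50);
    };
};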


BabylonJS - Remove smooth animation on the camera

I'm using BabylonJS to make a little game.
I'm using this code to build a camera :
this.cam = new BABYLON.FreeCamera("playerCamera", new BABYLON.Vector3(x, y, z), s);
this.cam.checkCollisions = true;
this.cam.applyGravity = false;
this.cam.keysUp = [90]; // Z
this.cam.keysDown = [83]; // S
this.cam.keysLeft = [81]; // Q
this.cam.keysRight = [68]; // D
this.cam.speed = v;
this.cam.ellipsoid = new BABYLON.Vector3(1, h, 1);
this.cam.angularSensibility = a;
And it works: I have a camera and I can move around, etc.
But here is my problem: by default there is a smooth animation when I move and when I change the orientation of the camera.
Let me explain: when I move with my arrow keys (approximately 20 pixels to the left), the camera actually travels 25 pixels (20 pixels plus 5 "smoothing" pixels).
I don't want that :/ Do you know how to disable it (for both moving and changing the orientation of the camera)?
This is due to the inertia defined in the free camera.
To remove these "smooth" movements, simply disable inertia:
this.cam.inertia = 0;

How can I expand a MKMapRect by a fixed percentage?

I want to get a result MKMapRect that's 10-20% larger in all directions than the current visibleMapRect. If this were a CGRect I'd use CGRectInset with negative x and y values, providing me with an inverse inset (i.e. a larger rect). Unfortunately, MKMapInset doesn't support negative inset values so it's not quite that easy.
This might be easier if the values for the map rect were in recognizable units, but the origin x and y values are on the order of 4.29445e+07, and the width/height is 2500-3000.
I'm about 10 seconds from writing a category to do this manually but I wanted to make sure I wasn't missing something first. Is there an easier way to expand MKMapRect?
In iOS 7, rectForMapRect: and mapRectForRect: have been deprecated and are now part of the MKOverlayRenderer class. I would rather recommend using the map view's mapRectThatFits:edgePadding: method. Here is some sample code:
MKMapRect visibleRect = self.mapView.visibleMapRect;
UIEdgeInsets insets = UIEdgeInsetsMake(50, 50, 50, 50);
MKMapRect biggerRect = [self.mapView mapRectThatFits:visibleRect edgePadding:insets];
Latest Swift, as of 2017...
func updateMap() {
    mkMap.removeAnnotations(mkMap.annotations)
    mkMap.addAnnotations(yourAnnotationsArray)
    var union = MKMapRectNull
    for p in yourAnnotationsArray {
        // make a small, say, 50 meter square for each
        let pReg = MKCoordinateRegionMakeWithDistance(p.coordinate, 50, 50)
        // convert it to an MKMapRect (see the helper in the answer below)
        let r = mkMapRect(forMKCoordinateRegion: pReg)
        // union all of those
        union = MKMapRectUnion(union, r)
        // probably want to turn on the "sign" for each
        mkMap.selectAnnotation(p, animated: false)
    }
    // expand the union, using the new edgePadding: call. T,L,B,R
    let f = mkMap.mapRectThatFits(union, edgePadding: UIEdgeInsetsMake(70, 0, 10, 35))
    // NOTE you want the TOP padding much bigger than the BOTTOM padding
    // because the pins/signs are actually very tall
    mkMap.setVisibleMapRect(f, animated: false)
}
What about converting the visibleMapRect to a CGRect with rectForMapRect:, getting a new CGRect with CGRectInset and then converting it back to a MKMapRect with mapRectForRect:?
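For comparison, a minimal Swift sketch of the "manual" expansion mentioned in the question; an MKMapRect is just doubles, so no CGRect round-trip is strictly needed (expand(_:by:) and mapView are illustrative names, not an existing API):
import MapKit

// Grow a map rect by a fraction of its size in every direction (0.2 = 20% larger).
func expand(_ rect: MKMapRect, by fraction: Double) -> MKMapRect {
    let dx = rect.size.width * fraction
    let dy = rect.size.height * fraction
    return MKMapRect(x: rect.origin.x - dx / 2,
                     y: rect.origin.y - dy / 2,
                     width: rect.size.width + dx,
                     height: rect.size.height + dy)
}

let biggerRect = expand(mapView.visibleMapRect, by: 0.2)   // 20% larger in each dimension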
Simple and clean solution for Xcode 10+, Swift 4.2
Just set the edge insets for the map's margins like this:
self.mapView.layoutMargins = UIEdgeInsets(top: 8, left: 8, bottom: 8, right: 8)
self.mapView.showAnnotations(map.annotations, animated: true)
Please let us know if it works for you.
Refined and corrected user2285781's answer for Swift 4:
// reference: https://stackoverflow.com/a/15683034/347339
func MKMapRectForCoordinateRegion(region: MKCoordinateRegion) -> MKMapRect {
    let topLeft = CLLocationCoordinate2D(latitude: region.center.latitude + (region.span.latitudeDelta / 2),
                                         longitude: region.center.longitude - (region.span.longitudeDelta / 2))
    let bottomRight = CLLocationCoordinate2D(latitude: region.center.latitude - (region.span.latitudeDelta / 2),
                                             longitude: region.center.longitude + (region.span.longitudeDelta / 2))
    let a = MKMapPointForCoordinate(topLeft)
    let b = MKMapPointForCoordinate(bottomRight)
    return MKMapRect(origin: MKMapPoint(x: min(a.x, b.x), y: min(a.y, b.y)),
                     size: MKMapSize(width: abs(a.x - b.x), height: abs(a.y - b.y)))
}
// reference: https://stackoverflow.com/a/19307286/347339
// assuming coordinates that create a polyline as well as a destination annotation
func updateMap(coordinates: [CLLocationCoordinate2D], annotation: MKAnnotation) {
    var union = MKMapRectNull
    var coordinateArray = coordinates
    coordinateArray.append(annotation.coordinate)
    for coordinate in coordinateArray {
        // make a small, say, 50 meter square for each
        let pReg = MKCoordinateRegionMakeWithDistance(coordinate, 50, 50)
        // convert it to an MKMapRect
        let r = MKMapRectForCoordinateRegion(region: pReg)
        // union all of those
        union = MKMapRectUnion(union, r)
    }
    // expand the union, using the new edgePadding: call. T,L,B,R
    let f = mapView.mapRectThatFits(union, edgePadding: UIEdgeInsetsMake(70, 35, 10, 35))
    // NOTE you want the TOP padding much bigger than the BOTTOM padding
    // because the pins/signs are actually very tall
    mapView.setVisibleMapRect(f, animated: false)
}

How to resize bitmap image to be <200 KB and meet Tile restrictions (WinRT)

I am developing a routine to scale some bitmap images to be part of tile notifications for my Windows 8 app.
The tile images must be less than 200 KB and no larger than 1024x1024 px. I am able to use a scaling routine to resize the source image as necessary to fit the 1024x1024 px dimension limitation.
How can I alter the source image to guarantee the size restriction will be met?
My first attempt was to continue to scale down the image until it clears the size threshold, and use isTooBig = destFileStream.Size > MaxBytes to determine the size. But, the code below results in an infinite loop. How can I reliably measure the size of the destination file?
bool isTooBig = true;
int count = 0;
while (isTooBig)
{
    // create a stream from the file and decode the image
    using (var sourceFileStream = await sourceFile.OpenAsync(Windows.Storage.FileAccessMode.Read))
    using (var destFileStream = await destFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(sourceFileStream);
        BitmapEncoder enc = await BitmapEncoder.CreateForTranscodingAsync(destFileStream, decoder);
        double h = decoder.OrientedPixelHeight;
        double w = decoder.OrientedPixelWidth;
        if (h > baselinesize || w > baselinesize)
        {
            uint scaledHeight, scaledWidth;
            if (h >= w)
            {
                scaledHeight = (uint)baselinesize;
                scaledWidth = (uint)((double)baselinesize * (w / h));
            }
            else
            {
                scaledWidth = (uint)baselinesize;
                scaledHeight = (uint)((double)baselinesize * (h / w));
            }
            // Scale the bitmap to fit
            enc.BitmapTransform.ScaledHeight = scaledHeight;
            enc.BitmapTransform.ScaledWidth = scaledWidth;
        }
        // write out to the stream
        await enc.FlushAsync();
        await destFileStream.FlushAsync();
        isTooBig = destFileStream.Size > MaxBytes;
        baselinesize *= .90d * ((double)MaxBytes / (double)destFileStream.Size);
    }
}
Can you not calculate it using width x height x colourDepth (where colourDepth is in bytes, so 32-bit = 4 bytes)? Presumably you're maintaining the aspect ratio, so you just need to scale down the width/height until the result is less than 200 KB. For example, at 32-bit colour the image would need to be roughly 225x225 px or smaller to fit in 200 KB.
This assumes the output is a bitmap and therefore uncompressed.
Considering that the tile size is either 150x150 for square tiles or 310x150 for wide tiles, you should be able to shrink the image down to the appropriate size, and with JPEG compression you are then pretty much guaranteed to be under 200 KB. Set the compression quality to around 80; it gives a good compression ratio while keeping decent image quality.
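For illustration, a rough WinRT sketch of re-encoding as JPEG at roughly 80% quality (the decoder and destFileStream variables are assumed to come from code like the question's; "ImageQuality" is the standard Windows.Graphics.Imaging encoding option):
// get the decoded pixels, then write them out through a JPEG encoder with an explicit quality
var pixelProvider = await decoder.GetPixelDataAsync();
var options = new BitmapPropertySet();
options.Add("ImageQuality", new BitmapTypedValue(0.8, Windows.Foundation.PropertyType.Single));
var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, destFileStream, options);
encoder.SetPixelData(decoder.BitmapPixelFormat, BitmapAlphaMode.Ignore,
                     decoder.OrientedPixelWidth, decoder.OrientedPixelHeight,
                     decoder.DpiX, decoder.DpiY, pixelProvider.DetachPixelData());
encoder.BitmapTransform.ScaledWidth = 310;   // wide tile; use 150x150 for a square tile
encoder.BitmapTransform.ScaledHeight = 150;
await encoder.FlushAsync();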

Rotating camera around the X-axis (three.js)

I am trying to rotate the camera around the X-axis of the scene.
At this point my code is like this:
rotation += 0.05;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
This makes the camera move around, but during the rotation something weird happens: either the camera flips, or it skips part of the imaginary circle it is following.
You have only provided a snippet of code, so I have to make some assumptions about what you are doing.
This code:
rotation += 0.05;
camera.position.x = 0;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
camera.lookAt( scene.position ); // the origin
will cause the "flipping" you refer to because the camera is trying to remain "right side up", and it will quickly change orientation as it passes over the "north pole."
If you offset the camera's x-coordinate like so,
camera.position.x = 200;
the camera behavior will appear more natural to you.
Three.js tries to keep the camera facing up. When you pass 0 along the z-axis, it'll "fix" the camera's rotation. You can just check and reset the camera's angle manually.
camera.lookAt( scene.position ); // the origin
if (camera.position.z < 0) {
    camera.rotation.z = 0;
}
I'm sure this is not the best solution, but if anyone else runs across this question while playing with three.js (like I just did), it may help take things one step further.
This works for me, I hope it helps.
Rotating around X-Axis:
var x_axis = new THREE.Vector3( 1, 0, 0 );
var quaternion = new THREE.Quaternion();
camera.position.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
camera.up.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
Rotating around Y-Axis:
var y_axis = new THREE.Vector3( 0, 1, 0 );
camera.position.applyQuaternion(quaternion.setFromAxisAngle(y_axis, angle));
Rotating around Z-Axis:
var z_axis = new THREE.Vector3( 0, 0, 1 );
camera.up.applyQuaternion(quaternion.setFromAxisAngle(z_axis, angle));
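For context, a minimal render-loop usage of the X-axis variant (a sketch only; it assumes a standard scene, camera, and renderer that are not part of the answer):
var x_axis = new THREE.Vector3(1, 0, 0);
var quaternion = new THREE.Quaternion();
var rotation_speed = 0.01;

function animate() {
    requestAnimationFrame(animate);
    // rotate both the position and the up vector, so the camera orbits the X-axis without flipping
    camera.position.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
    camera.up.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
    camera.lookAt(scene.position);
    renderer.render(scene, camera);
}
animate();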
I wanted to move my camera to a new location while having the camera look at a particular object, and this is what I came up with [make sure to load tween.js]:
/**
 * Helper to move the camera
 * @param loc Vec3 - where to move the camera; has x, y, z attrs
 * @param lookAt Vec3 - where the camera should look; has x, y, z attrs
 * @param duration int - duration of the transition in ms
 **/
function flyTo(loc, lookAt, duration) {
    // Use the initial camera quaternion as the slerp starting point
    var startQuaternion = camera.quaternion.clone();
    // Use a dummy camera focused on the target as the slerp ending point
    var dummyCamera = camera.clone();
    dummyCamera.position.set(loc.x, loc.y, loc.z);
    // set the dummy camera quaternion
    var rotObjectMatrix = new THREE.Matrix4();
    rotObjectMatrix.makeRotationFromQuaternion(startQuaternion);
    dummyCamera.quaternion.setFromRotationMatrix(rotObjectMatrix);
    dummyCamera.up.copy(camera.up);
    console.log(camera.quaternion, dummyCamera.quaternion);
    // create dummy controls to avoid mutating the main controls
    var dummyControls = new THREE.TrackballControls(dummyCamera);
    dummyControls.target.set(loc.x, loc.y, loc.z);
    dummyControls.update();
    // Animate between the start and end quaternions
    new TWEEN.Tween(camera.position)
        .to(loc, duration)
        .onUpdate(function(timestamp) {
            // Slerp the camera quaternion for a smooth transition.
            // `timestamp` is the eased time value from the tween.
            THREE.Quaternion.slerp(startQuaternion, dummyCamera.quaternion, camera.quaternion, timestamp);
            camera.lookAt(lookAt);
        })
        .onComplete(function() {
            controls.target.copy(lookAt);
            camera.lookAt(lookAt);
        }).start();
}
Example usage:
var pos = {
    x: -4.3,
    y: 1.7,
    z: 7.3,
};
var lookAt = scene.children[1].position;
flyTo(pos, lookAt, 60000);
Then in your update()/render() function, call TWEEN.update();
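For example (a sketch; renderer, scene, and camera are assumed to already exist):
function animate(time) {
    requestAnimationFrame(animate);
    TWEEN.update(time);             // advances the flyTo() tween above
    renderer.render(scene, camera);
}
animate();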

Line joining in emgucv

I have images of the following kind.
I want the small lines, which I have circled in yellow, to be combined to form a single line, i.e. if the distance between two lines is less than some threshold, they should be joined.
I tried using the Dilate command of Emgu CV, but the unwanted lines also get thickened.
Thanks in advance :-)
What you're after is the Hough line function; I have provided a method below. By changing its settings you can join lines up and display only those with the strongest characteristics. However, this method may struggle because you have a very noisy image, so you may want to look into a better edge detection method before attempting to find the lines.
For every line:
private Image<Bgr, Byte> apply_Hough(Image<Bgr, Byte> Input_Image)
{
    LineSegment2D[] lines = Input_Image.HoughLinesBinary(
        1,              // Distance resolution in pixel-related units
        Math.PI / 45.0, // Angle resolution measured in radians
        50,             // threshold
        100,            // min line width
        1               // gap between lines
        )[0];           // Get the lines from the first channel
    Image<Bgr, Byte> lineImage = Input_Image.Copy();
    foreach (LineSegment2D line in lines)
        lineImage.Draw(line, new Bgr(Color.Red), 2);
    return lineImage;
}
Hough lines uses a voting method in which the strongest lines receive the most votes, and they should be listed accordingly. So you can also try using a for loop in place of the foreach loop to display only the 6 strongest lines, like this:
For the 6 strongest lines:
private Image<Bgr, Byte> apply_Hough(Image<Bgr, Byte> Input_Image)
{
    LineSegment2D[] lines = Input_Image.HoughLinesBinary(
        1,              // Distance resolution in pixel-related units
        Math.PI / 90.0, // Angle resolution measured in radians
        50,             // threshold
        100,            // min line width
        1               // gap between lines
        )[0];           // Get the lines from the first channel
    Image<Bgr, Byte> lineImage = Input_Image.Copy();
    for (int i = 0; i < 6 && i < lines.Length; i++)
    {
        lineImage.Draw(lines[i], new Bgr(Color.Red), 2);
    }
    return lineImage;
}
Hope this helps,
Cheers,
Chris