I wanted to implement zooming in my 2D Bevy game. After some code browsing I found out that Camera2dBundle uses OrthographicProjection by default and cannot zoom in the way I need.
I tried using Camera3dBundle, which does define projection: PerspectiveProjection by default, but my sprite seems to disappear from the scene.
Could you give me some pointers on what I'm doing wrong? I have included some test code below.
Thanks
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_startup_system(setup)
.add_system(zoom_in)
.run();
}
fn setup(
mut commands: Commands
) {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(0., 0., 1000.).looking_at(Vec3::ZERO, Vec3::Z),
..Default::default()
});
commands.spawn_bundle(SpriteBundle {
sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
..Default::default()
});
}
pub fn zoom_in(mut query: Query<&mut Transform, With<Camera>>, time: Res<Time>) {
for mut transform in query.iter_mut() {
transform.translation.z -= 100. * time.delta_seconds();
warn!("{}", transform.translation.z);
}
}
You do not see the sprite because you are apparently looking at it from the wrong side. If you have a 2D scene, I would advise you to stick to Camera2dBundle.
Contrary to what you stated in your question, you can zoom by setting the scale of the OrthographicProjection, like so:
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_startup_system(setup)
.add_system(zoom_in)
.run();
}
fn setup(
mut commands: Commands
) {
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(SpriteBundle {
sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
..Default::default()
});
}
pub fn zoom_in(mut query: Query<&mut OrthographicProjection, With<Camera>>, time: Res<Time>) {
for mut projection in query.iter_mut() {
projection.scale -= 0.1 * time.delta_seconds();
println!("Current zoom scale: {}", projection.scale);
}
}
Note that you might want to implement logarithmic zoom, so that the zoom feels linear and does not appear to speed up towards infinity as the scale approaches zero.
Here is a sample using logarithmic zoom:
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_startup_system(setup)
.add_system(zoom_in)
.run();
}
fn setup(
mut commands: Commands
) {
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(SpriteBundle {
sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
..Default::default()
});
}
pub fn zoom_in(mut query: Query<&mut OrthographicProjection, With<Camera>>, time: Res<Time>) {
for mut projection in query.iter_mut() {
let mut log_scale = projection.scale.ln();
log_scale -= 0.1 * time.delta_seconds();
projection.scale = log_scale.exp();
println!("Current zoom scale: {}", projection.scale);
}
}
I'm trying to draw points on the window, using the PolyPoint XCB request.
Note that I'm using the crate "xcb" in Rust.
Here is my function:
fn set_pixels(&mut self, pixels: Vec<(usize, usize, u32)>) {
self.connection.send_request(
&x::PolyPoint {
coordinate_mode: x::CoordMode::Origin,
drawable: x::Drawable::Window(self.handle.unwrap()),
gc: self.gc.unwrap(),
points: pixels.into_iter().map(|(x, y, colour)| {
x::Point {
x: x as i16,
y: y as i16,
}
})
.collect::<Vec<x::Point>>().as_slice(),
}
);
}
First of all, I'm not sure whether this part is the easiest way to get a slice of x::Point from the vector:
pixels.into_iter().map(|(x, y, colour)| {
x::Point {
x: x as i16,
y: y as i16,
}
})
.collect::<Vec<x::Point>>().as_slice(),
As you can see, there is a colour for each pixel, and I would like to use x::PolyPoint with a colour for each point I want to draw.
I know I can use ChangeGc to set a drawing colour:
self.connection.send_request(
&x::ChangeGc {
gc: self.gc.unwrap(),
value_list: &[
x::Gc::Foreground(/* hex colour */),
],
}
);
But this would set the same colour for all the pixels.
How can I use PolyPoint to set pixels of different colours, without resorting to a loop that calls ChangeGc and then PolyPoint for every single pixel (that solution is too slow)?
Earlier, I was calling the following function in a loop to set pixels one by one, but this is too slow:
fn set_pixel(&mut self, x: usize, y: usize, hex_colour: u32) {
self.connection.send_request(
&x::ChangeGc {
gc: self.gc.unwrap(),
value_list: &[
x::Gc::Foreground(hex_colour),
],
}
);
self.connection.send_request(
&x::PolyPoint {
coordinate_mode: x::CoordMode::Origin,
drawable: x::Drawable::Window(self.handle.unwrap()),
gc: self.gc.unwrap(),
points: &[
x::Point {
x: x as i16,
y: y as i16,
}
]
}
)
}
You cannot set different colors within a single drawing request in X11. I think this is not even possible with the RENDER extension. So, the options you have are the ones you or others already mentioned.
Well, one more idea: if you usually have few distinct colors, you could group things by color. Your input seems to be Vec<(usize, usize, u32)>. You could transform this into a HashMap<u32, Vec<(usize, usize)>> and then use that to draw all pixels of a single color at once. Of course, this does not pay off if you expect only a few pixels per color.
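For illustration, here is a minimal, self-contained sketch of that grouping step (the group_by_colour name is mine; sending the actual ChangeGc/PolyPoint pair per colour would follow the same pattern as the code above):
use std::collections::HashMap;
// Turn a flat list of (x, y, colour) pixels into one point list per colour,
// so that each colour needs only a single ChangeGc + PolyPoint pair.
fn group_by_colour(pixels: Vec<(usize, usize, u32)>) -> HashMap<u32, Vec<(i16, i16)>> {
    let mut by_colour: HashMap<u32, Vec<(i16, i16)>> = HashMap::new();
    for (x, y, colour) in pixels {
        by_colour.entry(colour).or_default().push((x as i16, y as i16));
    }
    by_colour
}
fn main() {
    let pixels = vec![(1, 1, 0xff0000), (2, 1, 0x0000ff), (3, 1, 0xff0000)];
    for (colour, points) in group_by_colour(pixels) {
        // For each colour: one ChangeGc with this foreground,
        // then one PolyPoint with all of its points mapped to x::Point.
        println!("colour {:#08x}: {:?}", colour, points);
    }
}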
I'm now using the double-buffering method.
A simple way is to draw on an x::Pixmap object, and then create an update() function for the window struct:
/// Copies the `self.pixmap` area to the window.
fn update(&mut self) {
self.connection.send_and_check_request(
&x::CopyArea {
src_drawable: x::Drawable::Pixmap(self.pixmap.unwrap()),
dst_drawable: x::Drawable::Window(self.window.unwrap()),
gc: self.gc.unwrap(),
src_x: 0,
src_y: 0,
dst_x: 0,
dst_y: 0,
width: self.width as u16,
height: self.height as u16,
}
)
.expect("double buffering: unable to copy the buffer to the window");
}
Here is my set_pixels method (renamed to draw_points):
fn draw_points(&mut self, coordinates: &Vec<(isize, isize)>, colour: u32) {
self.change_draw_colour(colour);
// Creates an `x::Point` vector.
let points = coordinates.into_iter().map(|coordinate: &(isize, isize)| {
x::Point {
x: coordinate.0 as i16,
y: coordinate.1 as i16,
}
})
.collect::<Vec<x::Point>>();
self.connection.send_and_check_request(
&x::PolyPoint {
coordinate_mode: x::CoordMode::Origin,
drawable: x::Drawable::Pixmap(self.pixmap.unwrap()),
gc: self.gc.unwrap(),
points: points.as_slice(),
}
)
.expect("unable to draw points on the pixmap");
}
For external reasons that I won't go into, I'm not directly using a vector of x::Point; that is why I transform my coordinates value into a Vec<x::Point>.
As an optimisation, I also save the previous colour to avoid re-setting the graphics context to the colour it already has:
fn change_draw_colour(&mut self, colour: u32) {
if self.previous_colour == Some(colour) {
return;
}
self.connection.send_and_check_request(
&x::ChangeGc {
gc: self.gc.unwrap(),
value_list: &[
x::Gc::Foreground(colour),
],
}
)
.expect("unable to change the graphics context colour");
self.previous_colour = Some(colour);
}
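A typical frame with this setup could then look something like the following (hypothetical usage; window stands for the struct holding the connection, gc and pixmap, and the hex values assume a 24-bit TrueColor visual):
// Draw everything to the off-screen pixmap first, grouped by colour...
window.draw_points(&vec![(10, 10), (11, 10), (12, 10)], 0x00ff0000); // red
window.draw_points(&vec![(20, 20), (21, 21)], 0x000000ff);           // blue
// ...then copy the pixmap to the window with a single CopyArea request.
window.update();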
My tests fail when using floating point numbers f64 due to precision errors.
use std::ops::Sub;
#[derive(Debug, PartialEq, Clone, Copy)]
struct Audio {
amp: f64,
}
impl Sub for Audio {
type Output = Self;
fn sub(self, other: Self) -> Self::Output {
Self {
amp: self.amp - other.amp,
}
}
}
#[test]
fn subtract_audio() {
let audio1 = Audio { amp: 0.9 };
let audio2 = Audio { amp: 0.3 };
assert_eq!(audio1 - audio2, Audio { amp: 0.6 });
assert_ne!(audio1 - audio2, Audio { amp: 1.2 });
assert_ne!(audio1 - audio2, Audio { amp: 0.3 });
}
I get the following error:
---- subtract_audio stdout ----
thread 'subtract_audio' panicked at 'assertion failed: `(left == right)`
left: `Audio { amp: 0.6000000000000001 }`,
right: `Audio { amp: 0.6 }`', src/lib.rs:23:5
How do I test structs containing floating point numbers like f64?
If the comparison were between plain numbers rather than structs, you could write:
let a: f64 = 0.9 - 0.3; // computed result
let b: f64 = 0.6;       // expected value
assert!((a - b).abs() < f64::EPSILON);
But with structs we need to take extra measures.
First, derive PartialOrd so that Audio values can be compared with <:
#[derive(Debug, PartialEq, PartialOrd)]
struct Audio {...}
Next, create a struct holding the tolerance:
let audio_epsilon = Audio { amp: f64::EPSILON };
Now you can compare regularly (with assert!, not assert_eq!):
assert!(c - d < audio_epsilon)
Keep in mind that this only checks one side of the tolerance; strictly you want the absolute value of the difference, which is what the manual PartialEq below uses.
Another solution is to implement PartialEq manually:
impl PartialEq for Audio {
fn eq(&self, other: &Self) -> bool {
(self.amp - other.amp).abs() < f64::EPSILON
}
}
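Putting that together (note that PartialEq must be removed from the derive list once it is implemented by hand), here is a minimal sketch of the original test using the approximate equality:
use std::ops::Sub;
#[derive(Debug, Clone, Copy)]
struct Audio {
    amp: f64,
}
// Approximate equality: two Audio values are equal if their amplitudes
// differ by less than f64::EPSILON.
impl PartialEq for Audio {
    fn eq(&self, other: &Self) -> bool {
        (self.amp - other.amp).abs() < f64::EPSILON
    }
}
impl Sub for Audio {
    type Output = Self;
    fn sub(self, other: Self) -> Self::Output {
        Self { amp: self.amp - other.amp }
    }
}
#[test]
fn subtract_audio() {
    let audio1 = Audio { amp: 0.9 };
    let audio2 = Audio { amp: 0.3 };
    // 0.9 - 0.3 evaluates to 0.6000000000000001 in f64, but now compares equal.
    assert_eq!(audio1 - audio2, Audio { amp: 0.6 });
    assert_ne!(audio1 - audio2, Audio { amp: 1.2 });
    assert_ne!(audio1 - audio2, Audio { amp: 0.3 });
}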
I'm trying to implement Serialize for an enum that includes struct variants. The serde.rs documentation indicates the following:
enum E {
// Use three-step process:
// 1. serialize_struct_variant
// 2. serialize_field
// 3. end
Color { r: u8, g: u8, b: u8 },
// Use three-step process:
// 1. serialize_tuple_variant
// 2. serialize_field
// 3. end
Point2D(f64, f64),
// Use serialize_newtype_variant.
Inches(u64),
// Use serialize_unit_variant.
Instance,
}
With that in mind, I proceeded to the implementation:
use serde::ser::{Serialize, SerializeStructVariant, Serializer};
use serde_derive::Deserialize;
#[derive(Deserialize)]
enum Variants {
VariantA,
VariantB { k: u32, p: f64 },
}
impl Serialize for Variants {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match *self {
Variants::VariantA => serializer.serialize_unit_variant("Variants", 0, "VariantA"),
Variants::VariantB { ref k, ref p } => {
let mut state =
serializer.serialize_struct_variant("Variants", 1, "VariantB", 2)?;
state.serialize_field("k", k)?;
state.serialize_field("p", p)?;
state.end()
}
}
}
}
fn main() {
let x = Variants::VariantB { k: 5, p: 5.0 };
let toml_str = toml::to_string(&x).unwrap();
println!("{}", toml_str);
}
The code compiles, but when I run it, it fails:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: UnsupportedType', src/libcore/result.rs:999:5
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
I figured the issue must be in my use of the API, so I consulted the API documentation for StructVariant and it looks practically the same as my code. I'm sure I'm missing something, but I don't see it based on the docs and output.
Internal tagging of the enum allows Serde to serialize/deserialize it to TOML:
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum Variants {
VariantA,
VariantB { k: u32, p: f64 },
}
toml::to_string(&Variants::VariantB { k: 42, p: 13.37 })
serializes to
type = "VariantB"
k = 42
p = 13.37
This works well in Vecs and HashMaps, too.
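For example, here is a hedged sketch of a Vec of these nested under a wrapper struct (the Wrapper type and the items field are only illustrative); it is expected to come out as an array of [[items]] tables:
use serde::{Deserialize, Serialize}; // serde 1.0 with the derive feature
use toml; // 0.5
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum Variants {
    VariantA,
    VariantB { k: u32, p: f64 },
}
// TOML needs a table at the top level, so the Vec is wrapped in a struct.
#[derive(Serialize)]
struct Wrapper {
    items: Vec<Variants>,
}
fn main() {
    let w = Wrapper {
        items: vec![
            Variants::VariantA,
            Variants::VariantB { k: 42, p: 13.37 },
        ],
    };
    println!("{}", toml::to_string(&w).unwrap());
}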
The TOML format does not support enums with values:
use serde::Serialize; // 1.0.99
use toml; // 0.5.3
#[derive(Serialize)]
enum A {
B(i32),
}
fn main() {
match toml::to_string(&A::B(42)) {
Ok(s) => println!("{}", s),
Err(e) => eprintln!("Error: {}", e),
}
}
Error: unsupported Rust type
It's unclear what you'd like your data structure to map to as TOML. Using JSON works just fine:
use serde::Serialize; // 1.0.99
use serde_json; // 1.0.40
#[derive(Serialize)]
enum Variants {
VariantA,
VariantB { k: u32, p: f64 },
}
fn main() {
match serde_json::to_string(&Variants::VariantB { k: 42, p: 42.42 }) {
Ok(s) => println!("{}", s),
Err(e) => eprintln!("Error: {}", e),
}
}
{"VariantB":{"k":42,"p":42.42}}
In this minimalist program, I'd like the file_size function to include the path /not/there in the Err so it can be displayed in the main function:
use std::fs::metadata;
use std::io;
use std::path::Path;
use std::path::PathBuf;
fn file_size(path: &Path) -> io::Result<u64> {
Ok(metadata(path)?.len())
}
fn main() {
if let Err(err) = file_size(&PathBuf::from("/not/there")) {
eprintln!("{}", err);
}
}
You must define your own error type in order to wrap this additional data.
Personally, I like to use the custom_error crate for that, as it's especially convenient for dealing with several types. In your case it might look like this:
use custom_error::custom_error;
use std::fs::metadata;
use std::io;
use std::path::{Path, PathBuf};
use std::result::Result;
custom_error! {ProgramError
Io {
source: io::Error,
path: PathBuf
} = #{format!("{path}: {source}", source=source, path=path.display())},
}
fn file_size(path: &Path) -> Result<u64, ProgramError> {
metadata(path)
.map(|md| md.len())
.map_err(|e| ProgramError::Io {
source: e,
path: path.to_path_buf(),
})
}
fn main() {
if let Err(err) = file_size(&PathBuf::from("/not/there")) {
eprintln!("{}", err);
}
}
Output:
/not/there: No such file or directory (os error 2)
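If you prefer to avoid a helper crate, the same idea can be hand-rolled with a manual Display implementation; here is a minimal sketch (the ProgramError enum simply mirrors the one above):
use std::fmt;
use std::fs::metadata;
use std::io;
use std::path::{Path, PathBuf};
#[derive(Debug)]
enum ProgramError {
    Io { source: io::Error, path: PathBuf },
}
impl fmt::Display for ProgramError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Prefix the underlying I/O error with the offending path.
            ProgramError::Io { source, path } => write!(f, "{}: {}", path.display(), source),
        }
    }
}
fn file_size(path: &Path) -> Result<u64, ProgramError> {
    metadata(path)
        .map(|md| md.len())
        .map_err(|source| ProgramError::Io {
            source,
            path: path.to_path_buf(),
        })
}
fn main() {
    if let Err(err) = file_size(&PathBuf::from("/not/there")) {
        eprintln!("{}", err);
    }
}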
While Denys Séguret's answer is correct, I like using my crate SNAFU because it provides the concept of a context. This makes the act of attaching the path (or anything else!) very easy to do:
use snafu::{ResultExt, Snafu}; // 0.2.3
use std::{
fs, io,
path::{Path, PathBuf},
};
#[derive(Debug, Snafu)]
enum ProgramError {
#[snafu(display("Could not get metadata for {}: {}", path.display(), source))]
Metadata { source: io::Error, path: PathBuf },
}
fn file_size(path: impl AsRef<Path>) -> Result<u64, ProgramError> {
let path = path.as_ref();
let md = fs::metadata(&path).context(Metadata { path })?;
Ok(md.len())
}
fn main() {
if let Err(err) = file_size("/not/there") {
eprintln!("{}", err);
}
}
It can be useful to iterate over several elements at once, either overlapping (slice::windows) or not (slice::chunks).
This only works for slices; is it possible to do this for iterators, using tuples for convenience?
Something like the following could be written:
for (prev, next) in some_iter.windows(2) {
...
}
If not, could it be implemented as a trait on existing iterators?
It's possible to take chunks of an iterator using Itertools::tuples, up to a 4-tuple:
use itertools::Itertools; // 0.9.0
fn main() {
let some_iter = vec![1, 2, 3, 4, 5, 6].into_iter();
for (prev, next) in some_iter.tuples() {
println!("{}--{}", prev, next);
}
}
1--2
3--4
5--6
If you don't know that your iterator exactly fits into the chunks, you can use Tuples::into_buffer to access any leftovers:
use itertools::Itertools; // 0.9.0
fn main() {
let some_iter = vec![1, 2, 3, 4, 5].into_iter();
let mut t = some_iter.tuples();
for (prev, next) in t.by_ref() {
println!("{}--{}", prev, next);
}
for leftover in t.into_buffer() {
println!("{}", leftover);
}
}
1--2
3--4
5
It's also possible to take up to 4-tuple windows with Itertools::tuple_windows:
use itertools::Itertools; // 0.9.0
fn main() {
let some_iter = vec![1, 2, 3, 4, 5, 6].into_iter();
for (prev, next) in some_iter.tuple_windows() {
println!("{}--{}", prev, next);
}
}
1--2
2--3
3--4
4--5
5--6
If you need to get partial chunks / windows, you can get
TL;DR: The best way to have chunks and windows on an arbitrary iterator/collection is to first collect it into a Vec and iterate over that.
The exact syntax requested is impossible in Rust.
The issue is that in Rust a function's signature depends on types, not values, and while dependent typing exists, few languages implement it (it's hard).
This is why chunks and windows return sub-slices, by the way: the number of elements in a &[T] is not part of the type and can therefore be decided at run-time.
Let's pretend, then, that you had asked for for slice in some_iter.windows(2) instead.
Where would the storage backing this slice live?
It cannot live:
- in the original collection, because a LinkedList does not have contiguous storage
- in the iterator, because of the definition of Iterator::Item: there is no lifetime available to borrow from
So, unfortunately, slices can only be used when the backing storage is a slice.
If dynamic allocations are accepted, then it is possible to use Vec<Iterator::Item> as the Item of the chunking iterator.
struct Chunks<I: Iterator> {
elements: Vec<<I as Iterator>::Item>,
underlying: I,
}
impl<I: Iterator> Chunks<I> {
fn new(iterator: I, size: usize) -> Chunks<I> {
assert!(size > 0);
let mut result = Chunks {
underlying: iterator, elements: Vec::with_capacity(size)
};
result.refill(size);
result
}
fn refill(&mut self, size: usize) {
assert!(self.elements.is_empty());
for _ in 0..size {
match self.underlying.next() {
Some(item) => self.elements.push(item),
None => break,
}
}
}
}
impl<I: Iterator> Iterator for Chunks<I> {
type Item = Vec<<I as Iterator>::Item>;
fn next(&mut self) -> Option<Self::Item> {
if self.elements.is_empty() {
return None;
}
let new_elements = Vec::with_capacity(self.elements.len());
let result = std::mem::replace(&mut self.elements, new_elements);
self.refill(result.len());
Some(result)
}
}
fn main() {
let v = vec!(1, 2, 3, 4, 5);
for slice in Chunks::new(v.iter(), 2) {
println!("{:?}", slice);
}
}
Will return:
[1, 2]
[3, 4]
[5]
The canny reader will realize that I surreptitiously switched from windows to chunks.
windows is more difficult, because it returns the same element multiple times, which requires that the element be Clone. Also, since it needs to return a full Vec each time, it has to keep a buffer of the last N elements internally and clone it for every window.
This is left as an exercise to the reader.
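For the curious, here is one possible (unoptimised) sketch along those lines, keeping a buffer of the last N elements and cloning it for each window; the Windows name is purely illustrative:
struct Windows<I: Iterator> {
    buffer: Vec<I::Item>,
    size: usize,
    underlying: I,
}
impl<I: Iterator> Windows<I>
where
    I::Item: Clone,
{
    fn new(iterator: I, size: usize) -> Windows<I> {
        assert!(size > 0);
        Windows { buffer: Vec::with_capacity(size), size, underlying: iterator }
    }
}
impl<I: Iterator> Iterator for Windows<I>
where
    I::Item: Clone,
{
    type Item = Vec<I::Item>;
    fn next(&mut self) -> Option<Self::Item> {
        // Fill the buffer until it holds `size` elements; stop when the
        // underlying iterator runs out.
        while self.buffer.len() < self.size {
            self.buffer.push(self.underlying.next()?);
        }
        // Yield a cloned copy of the current window, then slide by one element.
        let window = self.buffer.clone();
        self.buffer.remove(0);
        Some(window)
    }
}
fn main() {
    let v = vec!(1, 2, 3, 4, 5);
    for window in Windows::new(v.iter(), 2) {
        println!("{:?}", window);
    }
}
This prints [1, 2], [2, 3], [3, 4] and [4, 5].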
Finally, a note on performance: all those allocations are gonna hurt (especially in the windows case).
The best allocation strategy is generally to allocate a single chunk of memory and then live off that (unless the amount is really massive, in which case streaming is required).
It's called collect::<Vec<_>>() in Rust.
And since Vec has chunks and windows methods (by virtue of implementing Deref<Target=[T]>), you can then use those instead:
for slice in v.iter().collect::<Vec<_>>().chunks(2) {
println!("{:?}", slice);
}
for slice in v.iter().collect::<Vec<_>>().windows(2) {
println!("{:?}", slice);
}
Sometimes the best solutions are the simplest.
On nightly
The chunks version is now available on nightly under the name array_chunks
#![feature(iter_array_chunks)]
for [a, b, c] in some_iter.array_chunks() {
...
}
And it handles remainders nicely:
#![feature(iter_array_chunks)]
let mut chunks = some_iter.array_chunks();
for [a, b, c] in chunks.by_ref() {
...
}
let rem = chunks.into_remainder();
On stable
Since Rust 1.51 this is possible with const generics, where the iterator yields constant-size arrays [T; N] for any N.
I built two standalone crates which implement this:
iterchunks provides array_chunks()
iterwindows provides array_windows()
use iterchunks::IterChunks; // 0.2
for [a, b, c] in some_iter.array_chunks() {
...
}
use iterwindows::IterWindows; // 0.2
for [prev, next] in some_iter.array_windows() {
...
}
Using the example given in the Itertools answer:
use iterchunks::IterChunks; // 0.2
fn main() {
let some_iter = vec![1, 2, 3, 4, 5, 6].into_iter();
for [prev, next] in some_iter.array_chunks() {
println!("{}--{}", prev, next);
}
}
This outputs
1--2
3--4
5--6
Most of the time the array size can be inferred, but you can also specify it explicitly. Additionally, any reasonable size N can be used; there is no limit like in the Itertools case.
use iterwindows::IterWindows; // 0.2
fn main() {
let mut iter = vec![1, 2, 3, 4, 5, 6].into_iter().array_windows::<5>();
println!("{:?}", iter.next());
println!("{:?}", iter.next());
println!("{:?}", iter.next());
}
This outputs
Some([1, 2, 3, 4, 5])
Some([2, 3, 4, 5, 6])
None
Note: array_windows() uses clone to yield elements multiple times, so it is best used with references and cheap-to-copy types.