How to draw points of different colours in XCB - optimization

I'm trying to draw points on a window using the PolyPoint XCB request. Note that I'm using the "xcb" crate in Rust.
Here is my function:
fn set_pixels(&mut self, pixels: Vec<(usize, usize, u32)>) {
    self.connection.send_request(
        &x::PolyPoint {
            coordinate_mode: x::CoordMode::Origin,
            drawable: x::Drawable::Window(self.handle.unwrap()),
            gc: self.gc.unwrap(),
            points: pixels.into_iter().map(|(x, y, colour)| {
                x::Point {
                    x: x as i16,
                    y: y as i16,
                }
            })
            .collect::<Vec<x::Point>>().as_slice(),
        }
    );
}
First, I'm not sure whether this part is the simplest way to get a slice of x::Point from the vector:
pixels.into_iter().map(|(x, y, colour)| {
    x::Point {
        x: x as i16,
        y: y as i16,
    }
})
.collect::<Vec<x::Point>>().as_slice(),
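For reference, one alternative I considered is to build the Vec first and then pass a reference to it (an untested sketch; the unused colour is made explicit with _colour):
// Build the Vec once, then borrow it for the request (&Vec coerces to &[x::Point]).
let points: Vec<x::Point> = pixels
    .iter()
    .map(|&(x, y, _colour)| x::Point { x: x as i16, y: y as i16 })
    .collect();

self.connection.send_request(&x::PolyPoint {
    coordinate_mode: x::CoordMode::Origin,
    drawable: x::Drawable::Window(self.handle.unwrap()),
    gc: self.gc.unwrap(),
    points: &points,
});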
Well, as you can see, there is a colour for each pixel, and I would like to use x::PolyPoint with a colour for each point I want to draw.
I know I can use ChangeGc to set a drawing colour:
self.connection.send_request(
    &x::ChangeGc {
        gc: self.gc.unwrap(),
        value_list: &[
            x::Gc::Foreground(/* hex colour */),
        ],
    }
);
But this would set the same colour for all the pixels.
How can I use PolyPoint to draw pixels of different colours, without resorting to a loop that calls ChangeGc and then PolyPoint for a single pixel each time (that approach is too slow)?
Earlier, I was calling the following function in a loop to set pixels one by one, but it is too slow:
fn set_pixel(&mut self, x: usize, y: usize, hex_colour: u32) {
    self.connection.send_request(
        &x::ChangeGc {
            gc: self.gc.unwrap(),
            value_list: &[
                x::Gc::Foreground(hex_colour),
            ],
        }
    );
    self.connection.send_request(
        &x::PolyPoint {
            coordinate_mode: x::CoordMode::Origin,
            drawable: x::Drawable::Window(self.handle.unwrap()),
            gc: self.gc.unwrap(),
            points: &[
                x::Point {
                    x: x as i16,
                    y: y as i16,
                }
            ]
        }
    );
}

You cannot set different colors within a single drawing request in X11. I don't think this is possible even with the RENDER extension. So the options you have are the ones you or others have already mentioned.
Well, one more idea: if you usually have only a few distinct colors, you could group things by color. Your input seems to be a Vec<(usize, usize, u32)>. You could transform this into a HashMap<u32, Vec<(usize, usize)>> and then use it to draw all pixels of a single color at once, as in the sketch below. Of course, this does not help if you expect only a few pixels of each color.
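A minimal sketch of that grouping step (plain Rust, nothing xcb-specific; the function name is just illustrative):
use std::collections::HashMap;

// Group the (x, y, colour) triples so that each colour can later be drawn
// with a single ChangeGc + PolyPoint pair instead of one pair per pixel.
fn group_by_colour(pixels: Vec<(usize, usize, u32)>) -> HashMap<u32, Vec<(usize, usize)>> {
    let mut groups: HashMap<u32, Vec<(usize, usize)>> = HashMap::new();
    for (x, y, colour) in pixels {
        groups.entry(colour).or_default().push((x, y));
    }
    groups
}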

I'm now using the double-buffering method.
A simple way is to draw on an x::Pixmap object and then add an update() function to the window's struct:
/// Copies the `self.pixmap` area to the window.
fn update(&mut self) {
    self.connection.send_and_check_request(
        &x::CopyArea {
            src_drawable: x::Drawable::Pixmap(self.pixmap.unwrap()),
            dst_drawable: x::Drawable::Window(self.window.unwrap()),
            gc: self.gc.unwrap(),
            src_x: 0,
            src_y: 0,
            dst_x: 0,
            dst_y: 0,
            width: self.width as u16,
            height: self.height as u16,
        }
    )
    .expect("double buffering: unable to copy the buffer to the window");
}
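For completeness, the pixmap itself only needs to be created once (and recreated on resize). Here is a rough sketch of that setup; it assumes the same xcb crate API and hypothetical self.depth, self.width and self.height fields:
// Create an off-screen pixmap with the same depth and size as the window.
let pixmap: x::Pixmap = self.connection.generate_id();
self.connection.send_and_check_request(
    &x::CreatePixmap {
        depth: self.depth, // typically the root window's depth
        pid: pixmap,
        drawable: x::Drawable::Window(self.window.unwrap()),
        width: self.width as u16,
        height: self.height as u16,
    }
)
.expect("double buffering: unable to create the pixmap");
self.pixmap = Some(pixmap);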
Here is my set_pixels method (renamed to draw_points):
fn draw_points(&mut self, coordinates: &Vec<(isize, isize)>, colour: u32) {
    self.change_draw_colour(colour);
    // Creates an `x::Point` vector.
    let points = coordinates.into_iter().map(|coordinate: &(isize, isize)| {
        x::Point {
            x: coordinate.0 as i16,
            y: coordinate.1 as i16,
        }
    })
    .collect::<Vec<x::Point>>();
    self.connection.send_and_check_request(
        &x::PolyPoint {
            coordinate_mode: x::CoordMode::Origin,
            drawable: x::Drawable::Pixmap(self.pixmap.unwrap()),
            gc: self.gc.unwrap(),
            points: points.as_slice(),
        }
    )
    .expect("unable to draw points on the pixmap");
}
For external reasons that I won't go into, I'm not working directly with a vector of x::Point, which is why I transform my coordinates into a Vec<x::Point>.
As an optimisation, I also save the previous colour to avoid changing the GC to the colour it already has:
fn change_draw_colour(&mut self, colour: u32) {
    if self.previous_colour == Some(colour) {
        return;
    }
    self.connection.send_and_check_request(
        &x::ChangeGc {
            gc: self.gc.unwrap(),
            value_list: &[
                x::Gc::Foreground(colour),
            ],
        }
    )
    .expect("unable to change the graphics context colour");
    self.previous_colour = Some(colour);
}
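Putting it together, a caller can group the (x, y, colour) triples by colour (as suggested in the answer above), call draw_points once per colour, then call update() once. A sketch only; the Window type name and the pre-built grouping are placeholders for my actual code:
use std::collections::HashMap;

// groups: HashMap<u32, Vec<(isize, isize)>> built from the (x, y, colour) triples.
fn draw_grouped(window: &mut Window, groups: &HashMap<u32, Vec<(isize, isize)>>) {
    for (colour, coordinates) in groups {
        // At most one ChangeGc and exactly one PolyPoint per colour.
        window.draw_points(coordinates, *colour);
    }
    // One CopyArea to blit the pixmap to the window.
    window.update();
}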


how to zoom in to a sprite in bevy

I wanted to implement zooming in my 2D Bevy game. After some code browsing I found out that Camera2dBundle uses OrthographicProjection by default and cannot zoom the way I need.
I tried using Camera3dBundle, which defines projection: PerspectiveProjection by default, but my sprite seems to disappear from the scene.
Could you give me some pointers as to what I'm doing wrong? I have included some test code below.
Thanks
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .add_system(zoom_in)
        .run();
}

fn setup(
    mut commands: Commands
) {
    commands.spawn_bundle(Camera3dBundle {
        transform: Transform::from_xyz(0., 0., 1000.).looking_at(Vec3::ZERO, Vec3::Z),
        ..Default::default()
    });
    commands.spawn_bundle(SpriteBundle {
        sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
        ..Default::default()
    });
}

pub fn zoom_in(mut query: Query<&mut Transform, With<Camera>>, time: Res<Time>) {
    for mut transform in query.iter_mut() {
        transform.translation.z -= 100. * time.delta_seconds();
        warn!("{}", transform.translation.z);
    }
}
You do not see the sprite because you are apparently looking at it from the wrong side. If you have a 2D scene, I would advise you to stick to Camera2dBundle.
Contrary to what you stated in your question, you can zoom by setting the scale of the OrthographicProjection, like so:
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .add_system(zoom_in)
        .run();
}

fn setup(
    mut commands: Commands
) {
    commands.spawn_bundle(Camera2dBundle::default());
    commands.spawn_bundle(SpriteBundle {
        sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
        ..Default::default()
    });
}

pub fn zoom_in(mut query: Query<&mut OrthographicProjection, With<Camera>>, time: Res<Time>) {
    for mut projection in query.iter_mut() {
        projection.scale -= 0.1 * time.delta_seconds();
        println!("Current zoom scale: {}", projection.scale);
    }
}
Note that you might want to implement logarithmic zoom, so that the zoom "feels" linear and does not speed up toward infinity as the scale approaches zero.
Here is a sample using logarithmic zoom:
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .add_system(zoom_in)
        .run();
}

fn setup(
    mut commands: Commands
) {
    commands.spawn_bundle(Camera2dBundle::default());
    commands.spawn_bundle(SpriteBundle {
        sprite: Sprite { custom_size: Some(Vec2 { x: 50., y: 50. }), ..Default::default()},
        ..Default::default()
    });
}

pub fn zoom_in(mut query: Query<&mut OrthographicProjection, With<Camera>>, time: Res<Time>) {
    for mut projection in query.iter_mut() {
        let mut log_scale = projection.scale.ln();
        log_scale -= 0.1 * time.delta_seconds();
        projection.scale = log_scale.exp();
        println!("Current zoom scale: {}", projection.scale);
    }
}

Why do I get an UnsupportedType error when serializing to TOML with a manually implemented Serialize for an enum with struct variants?

I'm trying to implement Serialize for an enum that includes struct variants. The serde.rs documentation indicates the following:
enum E {
    // Use three-step process:
    // 1. serialize_struct_variant
    // 2. serialize_field
    // 3. end
    Color { r: u8, g: u8, b: u8 },

    // Use three-step process:
    // 1. serialize_tuple_variant
    // 2. serialize_field
    // 3. end
    Point2D(f64, f64),

    // Use serialize_newtype_variant.
    Inches(u64),

    // Use serialize_unit_variant.
    Instance,
}
With that in mind, I proceeded to the implementation:
use serde::ser::{Serialize, SerializeStructVariant, Serializer};
use serde_derive::Deserialize;

#[derive(Deserialize)]
enum Variants {
    VariantA,
    VariantB { k: u32, p: f64 },
}

impl Serialize for Variants {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        match *self {
            Variants::VariantA => serializer.serialize_unit_variant("Variants", 0, "VariantA"),
            Variants::VariantB { ref k, ref p } => {
                let mut state =
                    serializer.serialize_struct_variant("Variants", 1, "VariantB", 2)?;
                state.serialize_field("k", k)?;
                state.serialize_field("p", p)?;
                state.end()
            }
        }
    }
}

fn main() {
    let x = Variants::VariantB { k: 5, p: 5.0 };
    let toml_str = toml::to_string(&x).unwrap();
    println!("{}", toml_str);
}
The code compiles, but when I run it, it fails:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: UnsupportedType', src/libcore/result.rs:999:5
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
I figured the issue must be in my use of the API, so I consulted the API documentation for StructVariant and it looks practically the same as my code. I'm sure I'm missing something, but I don't see it based on the docs and output.
Enabling internal tagging for the enum lets Serde serialize/deserialize it to TOML:
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum Variants {
    VariantA,
    VariantB { k: u32, p: f64 },
}
toml::to_string(&Variants::VariantB { k: 42, p: 13.37 })
serializes to
type = "VariantB"
k = 42
p = 13.37
This works well in Vecs and HashMaps, too.
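For example, here is a sketch of serializing a Vec of these variants (the Config wrapper struct and its items field are made-up names; TOML needs a table at the root, so a bare Vec can't be the top-level value):
use serde::{Deserialize, Serialize}; // serde 1.x with the "derive" feature assumed

#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum Variants {
    VariantA,
    VariantB { k: u32, p: f64 },
}

// TOML requires a table at the top level, so the Vec is wrapped in a struct.
#[derive(Serialize, Deserialize)]
struct Config {
    items: Vec<Variants>,
}

fn main() {
    let config = Config {
        items: vec![
            Variants::VariantA,
            Variants::VariantB { k: 42, p: 13.37 },
        ],
    };
    // Each element should come out as an [[items]] table carrying a "type" key.
    println!("{}", toml::to_string(&config).unwrap());
}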
The TOML format does not support enums with values:
use serde::Serialize; // 1.0.99
use toml; // 0.5.3

#[derive(Serialize)]
enum A {
    B(i32),
}

fn main() {
    match toml::to_string(&A::B(42)) {
        Ok(s) => println!("{}", s),
        Err(e) => eprintln!("Error: {}", e),
    }
}
Error: unsupported Rust type
It's unclear what you'd like your data structure to map to as TOML. Using JSON works just fine:
use serde::Serialize; // 1.0.99
use serde_json; // 1.0.40

#[derive(Serialize)]
enum Variants {
    VariantA,
    VariantB { k: u32, p: f64 },
}

fn main() {
    match serde_json::to_string(&Variants::VariantB { k: 42, p: 42.42 }) {
        Ok(s) => println!("{}", s),
        Err(e) => eprintln!("Error: {}", e),
    }
}
{"VariantB":{"k":42,"p":42.42}}

Equal widths of subviews with SwiftUI

I'm trying to build a simple watchOS UI with SwiftUI: two pieces of information side by side above a button.
I'd like each side (represented as a VStack within an HStack) to take up half of the available width (an even 50/50 split of the yellow parent view), divided where the | character is centered on the button in the example below.
I want the Short and Longer!!! text to each be centered within its side's 50%.
I started with this code, to get the elements in place and show the bounds of some of the different stacks:
var body: some View {
    VStack {
        HStack {
            VStack {
                Text("Short").font(.body)
            }
            .background(Color.green)
            VStack {
                Text("Longer!!!").font(.body)
            }
            .background(Color.blue)
        }
        .frame(minWidth: 0, maxWidth: .infinity)
        .background(Color.yellow)
        Button (action: doSomething) {
            Text("|")
        }
    }
}
Which gave me this result:
Then, when it comes to making each side-by-side VStack 50% of the available width, I'm stuck. I thought adding .relativeWidth(0.5) to each VStack would work; as I understand it, that should make each VStack half the width of its parent view (the HStack with the yellow background):
var body: some View {
    VStack {
        HStack {
            VStack {
                Text("Short").font(.body)
            }
            .relativeWidth(0.5)
            .background(Color.green)
            VStack {
                Text("Longer!!!").font(.body)
            }
            .relativeWidth(0.5)
            .background(Color.blue)
        }
        .frame(minWidth: 0, maxWidth: .infinity)
        .background(Color.yellow)
        Button (action: doSomething) {
            Text("|")
        }
    }
}
But this is the result I get:
How can I get the behavior I want with SwiftUI?
Update: After reviewing the SwiftUI documentation more, I see the example here that sets a frame and then defines a relative width in comparison to that frame, so maybe I'm not supposed to use relativeWidth in this way?
I'm a step closer to what I want with the following code:
var body: some View {
    VStack {
        HStack {
            VStack {
                Text("Short").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.green)
            VStack {
                Text("Longer!!!").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.blue)
        }
        .frame(minWidth: 0, maxWidth: .infinity)
        .background(Color.yellow)
        Button (action: doSomething) {
            Text("|")
        }
    }
}
which produces this result:
Now, I am trying to figure out what's creating that extra space in the middle between the two VStacks. So far, experimenting with getting rid of padding and ignoring safe areas does not seem to affect it.
I'm still confused about when and how relativeWidth is supposed to be used, but I was able to achieve what I wanted without it. (EDIT 18 July 2019: According to the iOS 13 Beta 4 release notes, relativeWidth is now deprecated.)
In the last update to my question I had some extra spacing between the two sides; I realized that was the default spacing of the HStack, and I was able to remove it by setting the HStack's spacing to 0. Here's the final code and result:
var body: some View {
    VStack {
        HStack(spacing: 0) {
            VStack {
                Text("Short").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.green)
            VStack {
                Text("Longer!!!").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.blue)
        }
        .frame(minWidth: 0, maxWidth: .infinity)
        .background(Color.yellow)
        Button (action: doSomething) {
            Text("|")
        }
    }
}
And here is the result:
Here's how to create an EqualWidthHStack for watchOS 9, iOS 16, tvOS 16 & macOS 13
Here's the usage:
struct ContentView: View {
    private let strings = ["Hello,", "very very very big", "world!"]

    var body: some View {
        EqualWidthHStack {
            ForEach(strings, id: \.self) { string in
                ZStack {
                    RoundedRectangle(cornerRadius: 10, style: .continuous)
                        .opacity(0.2)
                    Text(string)
                        .padding(10)
                }
            }
        }
    }
}
First create a struct that conforms to Layout.
struct EqualWidthHStack: Layout {
    ...
}
The Layout protocol has two required methods; here's how you can implement them.
Size that Fits:
func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
    let maxSize = maxSize(subviews: subviews)
    let spacing = spacing(subviews: subviews)
    let totalSpacing = spacing.reduce(0.0, +)

    return CGSize(width: maxSize.width * CGFloat(subviews.count) + totalSpacing,
                  height: maxSize.height)
}
Place Subviews:
func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
    let maxSize = maxSize(subviews: subviews)
    let spacing = spacing(subviews: subviews)
    let sizeProposal = ProposedViewSize(width: maxSize.width,
                                        height: maxSize.height)
    var x = bounds.minX + maxSize.width / 2

    for index in subviews.indices {
        subviews[index].place(at: CGPoint(x: x, y: bounds.midY),
                              anchor: .center,
                              proposal: sizeProposal)
        x += maxSize.width + spacing[index]
    }
}
You will need the following two helper methods.
Max Size:
private func maxSize(subviews: Subviews) -> CGSize {
    let subviewSizes = subviews.map { $0.sizeThatFits(.unspecified) }
    let maxSize: CGSize = subviewSizes.reduce(.zero, { result, size in
        CGSize(width: max(result.width, size.width),
               height: max(result.height, size.height))
    })
    return maxSize
}
Spacing:
private func spacing(subviews: Subviews) -> [CGFloat] {
    subviews.indices.map { index in
        guard index < subviews.count - 1 else { return 0.0 }
        return subviews[index].spacing.distance(to: subviews[index + 1].spacing,
                                                along: .horizontal)
    }
}
Here's Apple's WWDC22 video on how to make it:
Compose custom layouts with SwiftUI
You have set the background of the HStack to yellow, and an HStack has some default spacing between its child views. Adding spacing: 0 to the HStack solves the problem; see the updated code below.
var body: some View {
    VStack {
        HStack(spacing: 0) { // Set spacing here
            VStack {
                Text("Short").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.green)
            VStack {
                Text("Longer!!!").font(.body)
            }
            .frame(minWidth: 0, maxWidth: .infinity)
            .background(Color.blue)
        }
        .frame(minWidth: 0, maxWidth: .infinity)
        .background(Color.yellow)
        Button (action: doSomething) {
            Text("|")
        }
    }
}

QML: SplineSeries without ChartView

Is it possible to draw a SplineSeries like in this example, but with just the spline and none of the other chart view components (axes, background, etc.)? Or do I have to use a Canvas or something similar? In that case, what would be the best approach? I have my x, y values, so ideally I would just place the SplineSeries itself in my layout like this, but that's probably not possible:
ColumnLayout {
    SplineSeries {
        name: "spline"
        XYPoint { x: 0; y: 0.0 }
        XYPoint { x: 1.1; y: 3.2 }
        XYPoint { x: 1.9; y: 2.4 }
        XYPoint { x: 2.1; y: 2.1 }
    }
}

Specify the color for a Pie in Dojo Charting

I am using Dojo 1.9 with a Memory store; the store has 4 data elements in addition to the key. For each of the 4 data elements I need to plot a pie chart. This is working fine; the only issue is that I do not know how to specify the colors.
The identifier can be one of the following: Low, Moderate, High, and Extreme.
I want to use the same colors for each identifier across all the charts. Is it possible to specify a color based on the value of the identifier?
The code snippet is as shown below:
var store = new Observable(new Memory({
    data: {
        identifier: "accumulation",
        items: theData
    }
}));

theChart.setTheme(PrimaryColors)
    .addPlot("default", {
        type: Pie,
        font: "normal normal 11pt Tahoma",
        fontColor: "black",
        labelOffset: -30,
        radius: 80
    }).addSeries("accumulation", new StoreSeries(store, { query: { } }, dataElement));
I'm possibly misunderstanding your question here (is the plot interacting directly with the store? StoreSeries?), but is the fill property what you're looking for?
// Assuming data is an array of rows retrieved from the store
for(var i etc...) {
    // make chart
    // ...
    chart.addSeries("things", [
        { y: data[i]["low"],  fill: "#55FF55", text: "Low" },
        { y: data[i]["mod"],  fill: "#FFFF00", text: "Moderate" },
        { y: data[i]["high"], fill: "#FFAA00", text: "High" },
        { y: data[i]["extr"], fill: "#FF2200", text: "Extreme" }
    ]);
}
Update: When using a StoreSeries, the third argument (dataElement in your code) can also be a function. You can use the function to return an object (containing the properties above, such as fill) instead of just a value.
chart.addSeries("thingsFromStore", new StoreSeries(store, {}, function(i) {
return {
y : i[dataElement],
text: "Label for " + i.accumulation,
fill: getColorForAccumulation(i)
};
}));