Custom stippled pen - wxWidgets

I'm trying to draw what amounts to a screen-tone effect over an existing image, but I'd like to leave an area blank. Think of how spotlights in old games are sometimes done, where most of the image is darkened but part of it keeps its "normal" color.
To do this, I need my overlay to be transparent (since the original image has to show through). I'm also drawing this with wxDC::DrawCircle(...) (with a transparent brush), since it lets me leave a circular area untouched.
The problem is that the stipple (screen-tone effect) isn't transparent; it's solid. I've tried just about everything I can think of, but nothing seems to work.
My current code is roughly like this:
const char* ScreenToneColor[] =
{
/* columns rows colors chars-per-pixel */
"3 3 2 1",
"X c Black",
"O c None",
/* pixels */
"OOO",
"OXO",
"OOO"
};
CustomPanel::CustomPanel(wxWindow* parent)
    : wxPanel(parent, wxID_ANY, wxDefaultPosition, wxSize(151, 151))
{
    SetBackgroundStyle(wxBG_STYLE_PAINT);
    // Member variables
    m_Stipple = wxBitmap(wxImage(ScreenToneColor));
    m_ScreenTone = wxPen(*wxBLACK, 2 * VeryLargeRadius, wxPENSTYLE_STIPPLE);
    m_ScreenTone.SetStipple(m_Stipple);
}
// Supplied with a wxAutoBufferedDC
void CustomPanel::Render(wxDC& dc)
{
    dc.SetBrush(*wxGREEN_BRUSH);
    dc.DrawRectangle(m_PanelRectange);
    // "Fade out" trimmed areas by drawing a ring.
    dc.SetBrush(*wxTRANSPARENT_BRUSH);
    dc.SetPen(m_ScreenTone);
    dc.DrawCircle(m_AnimatedCenter, VeryLargeRadius + m_VisibleRadius);
}
I've tried supplying the mask, using the different stipple masks (avoiding wxPENSTYLE_STIPPLE_MASK_OPAQUE), etc.
I'm on Windows 10 and compiling against wxWidgets 3.1, although the project is also built and run on other OSs, possibly with an older library version.

wxDC doesn't support transparency, with the single exception of drawing bitmaps that have an alpha channel. If you want to do anything else involving alpha, you need to use wxGraphicsContext and the related classes. I'm not sure whether the GDI+ or Direct2D implementations currently handle this correctly, but after checking the code it seems that at least the former should.
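For what it's worth, here is a minimal sketch of that approach, kept close to the code in the question (m_AnimatedCenter, m_VisibleRadius and VeryLargeRadius are the question's members; the OnPaint handler, the plain wxPaintDC, GetClientRect() and the 50% alpha value are my own assumptions). It replaces the stipple pen with a uniformly semi-transparent ring, which at least shows the alpha drawing working:
// Minimal sketch: draw the darkened ring with per-pixel alpha instead of a stipple pen.
// Requires #include <wx/graphics.h>.
void CustomPanel::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxPaintDC dc(this);
    dc.SetBrush(*wxGREEN_BRUSH);
    dc.DrawRectangle(GetClientRect());

    wxGraphicsContext* gc = wxGraphicsContext::Create(dc);
    if (gc)
    {
        gc->SetPen(*wxTRANSPARENT_PEN);
        gc->SetBrush(wxBrush(wxColour(0, 0, 0, 128))); // black at roughly 50% alpha

        // Outer circle minus the visible "spotlight": filling with the odd/even
        // rule darkens the ring while leaving the inner circle untouched.
        wxGraphicsPath path = gc->CreatePath();
        path.AddCircle(m_AnimatedCenter.x, m_AnimatedCenter.y, VeryLargeRadius);
        path.AddCircle(m_AnimatedCenter.x, m_AnimatedCenter.y, m_VisibleRadius);
        gc->FillPath(path, wxODDEVEN_RULE);

        delete gc;
    }
}
Filling the two-circle path with the odd/even rule is what leaves the inner "spotlight" untouched, so no transparent pen or brush tricks are needed.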


How does the viewport work in libgdx and how to set it up correctly?

I am learning the use of libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square-shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height to the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display, you always see the full height of the world but a different expanse on the x-axis. Somebody with a display that is wider than it is tall could see further along the x-axis than somebody with a square-shaped display, but proportions are maintained and there is no distortion. So far, I thought I had mastered how the viewport logic works.
I am working with the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are kept in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on the desktop, when you change the window's size, the game maintains the correct proportions. I don't understand why. I would expect the game to try to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the viewport's width or height to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after wherever you change the position or viewport dimensions of your camera (or other properties).
I found the reason. If you take a look at the WorldRenderer class, there is a resize() method in which the viewport is adapted to the aspect ratio. I am just wondering because, until now, I thought the resize method was only called when resizing the window. Apparently it's also called at start-up. Can anybody clarify?

How to make large 2d tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static in that the only moving things in the game are a main character sprite and the camera following it. The map itself has no moving objects and is very simple, so there must be a way to render only the needed sections of it, or perhaps to render the map in one go. All I have discovered from researching the topic is that perhaps a good way to do it is by using the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work, I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each individual tile prefab to a certain color on the RGB scale, and then import a PNG file that has the corresponding colors of the prefabs where I want them, like this:
I then wrote a script which will place each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    // Update is called once per frame
    void Update () {
    }

    void loadLevel(){
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                // if (tileColors [x + y * levelWidth] == block13Color) {
                //     Instantiate(block13, new Vector3(x, y), Quaternion.identity);
                // }
            }
        }
    }
}
This results in a map that looks like this when used with all the code (I took out all the code for the other prefabs to save space)
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first make sure that what's consuming your resources is in fact the large number of tiles, not something else.
One way is to create an empty parent GameObject for every tile (right-click in the Hierarchy > "Create Empty"), then attach a script to this parent. This script has a reference to the camera (tell me if you need help with that); it calculates the distance between the tile and the camera and instantiates the tile if the distance is less than some value, otherwise it destroys the instance (if it's there).
It has to do this in the Update function to check the distances every frame, or you can use coroutines to do fewer checks (more efficient).
Another way is to attach a script to the camera that has an array with instances of all tiles and checks their distances from the camera the same way. You can do this if you have exactly one large tilemap, because it would be hard to re-use this script if you have more than one.
Also, you can calculate the distance between the tile and the character sprite instead of the camera. Pick whichever is more convenient.
If you still get frame drops after doing the above, you can zoom in the camera to include fewer tiles in its range, but you'd then have to recalculate the distances.

Famo.us/Angular Sticky Background Position ScrollView Sync

I'm trying to create functionality very similar to what many websites have these days.
The concept is three sections the size of the browser window; the background images are supposed to be fixed-position and revealed by the div scrolling up and down.
We need this to function as beautifully on mobile as it does on desktop, and it looks like Famous/Angular is the solution.
Here is a pen.
http://codepen.io/LAzzam2/pen/XJrwbo
I'm using Famous' Scroll.sync, firing JavaScript that positions the background image on every start/update/end.
scrollObject.sync.on("update", function (event) {
    console.log('update');
    test(event);
});
Here is the function that positions the backgrounds:
function test(data){
    var scroller = document.getElementsByClassName('famous-group');
    var styles = window.getComputedStyle(scroller[0], null);
    var tr = styles.getPropertyValue("-webkit-transform").replace('matrix(1, 0, 0, 1, 0,','').replace(')','');
    var distanceTop = -(parseInt(tr));

    var sections = document.getElementsByClassName('section');
    sections[3].style.backgroundPosition = "50% " + distanceTop + "px";
    sections[4].style.backgroundPosition = "50% " + (-(window.innerHeight) + distanceTop) + "px";
    sections[5].style.backgroundPosition = "50% " + (-(window.innerHeight * 2) + distanceTop) + "px";
};
Any input / suggestions / advice would be wonderful, really just looking for a proof of concept with these 3 background images scrolling nicely.
That jitteriness is unfortunate; I can't tell what would be causing the issue, except maybe the order in which events are fired?
**There are known issues; it only works in -webkit browsers as of now.
I think your idea to use Famous is good, but I would probably take a different approach to the problem.
You are solving this by touching the DOM, which is exactly what both Angular and Famous are meant to avoid.
If I had to face the same goal, I would probably use a Famous surface for the background instead of changing the property of the main one and synchronize its position with the scrolling view.
So, in your code, it would be something like this:
function test(data){
    var scrollViewPosition = scrollObject.getAbsolutePosition();
    var newBackgroundPosition = /* calculate the new background position */;
    var newForegroundPosition = /* calculate the new foreground position */;
    backgroundSurface.position.set(newBackgroundPosition);
    foregroundSurface.position.set(newForegroundPosition);
};

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep looking at the center of mass of the object.
Does anyone have a clue how to achieve this?
Thanks for the help!
If you are happy to use (or prefer) basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // compare with >= rather than ==, since the float increments may never hit the value exactly
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep looking at the object's centre while orbiting
}
Of course, your buttons would modify what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.

(C++/CLI) Testing Mouse Position in a Rectangle Relative to Parent

I've been messing around with the Graphics class to draw some things on a panel. So far, to draw, I've just been using the Rectangle structure. Clicking a button makes a rectangle in a random place on the panel and adds it to an array of other rectangles (they're actually a class called UIElement, which contains a Rectangle member). When the panel is clicked, it runs a test on all the elements to see if the mouse is inside any of them, like this:
void GUIDisplay::checkCollision()
{
    Point cursorLoc = Cursor::Position;

    for(int a = 0; a < MAX_CONTROLS; a++)
    {
        if(elementList[a] != nullptr)
        {
            if(elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                //MessageBox::Show("Click!", "Event");
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}
The problem is, when I click the rectangle, nothing happens.
The UIElement class draws its rectangles in the following bit of code. However, I've modified it a bit: in this example it uses the DrawReversibleFrame method to do the actual drawing, whereas I had been using the Graphics::FillRectangle method. When I changed it, I noticed DrawReversibleFrame drew in a different place than FillRectangle. I believe this is because DrawReversibleFrame draws with its positions relative to the window, while FillRectangle draws relative to whatever control's Paint event it's in (mine is in a panel's Paint method). So let me just show the code:
void UIElement::render(Graphics^ g)
{
    if(selected)
    {
        Pen^ line = gcnew Pen(Color::Black, 3);
        //g->FillRectangle(gcnew SolidBrush(Color::Red), bounds);
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::Highlight, FrameStyle::Thick);
        g->FillRectangle(gcnew SolidBrush(Color::Black), bounds);
        //g->DrawLine(line, bounds.X, bounds.Y, bounds.Size.Width, bounds.Size.Height);
    }
    else
    {
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::ControlDarkDark, FrameStyle::Thick);
        //g->FillRectangle(gcnew SolidBrush(SystemColors::ControlDarkDark), bounds);
    }
}
I added in both DrawReversibleFrame and FillRectangle so that I could see the difference. This is what it looked like when I clicked the frame drawn by DrawReversibleFrame:
The orange frame is where I clicked; the black is where it's rendering. This shows me that the Rectangle's Contains() method is looking for the rectangle relative to the window, not the panel. That's what I need fixed :)
I'm wondering if this is happening because the collision is tested outside of the panel's Paint method. But I don't see how I could implement this collision testing inside the Paint method.
UPDATE:
OK, so I just discovered that what DrawReversibleFrame and FillRectangle draw is always a certain distance apart. I don't quite understand this, but someone else might.
Both Cursor::Position and DrawReversibleFrame operate in screen coordinates, that is, coordinates for the entire screen, everything on your monitor, not just your window. FillRectangle, on the other hand, operates in window coordinates, that is, the position within your window.
If you take your example where you were drawing with both and the two boxes were a certain distance apart, then move your window on the screen and click again, you will see that the difference between the two boxes changes. It will be the difference between the top-left corner of your window and the top-left corner of the screen.
This is also why your check to see which rectangle you clicked isn't hitting anything. You are testing the cursor position in screen coordinates against the rectangle coordinates in window space. It is possible that it would hit one of the rectangles, but it probably won't be the one you actually clicked on.
You always have to know what coordinate systems your variables are in. This is related to the original intention of Hungarian notation, which Joel Spolsky talks about in his entry Making Wrong Code Look Wrong.
Update:
PointToScreen and PointToClient should be used to convert between screen and window coordinates.
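For example, a minimal sketch of checkCollision() with that conversion applied (assuming m_pDisplay is the panel the rectangles are painted on) would map the cursor position into the panel's coordinate space before hit-testing:
void GUIDisplay::checkCollision()
{
    // Cursor::Position is in screen coordinates; convert it into the panel's
    // client (window) coordinates so it matches the rectangles drawn on the panel.
    Point cursorLoc = m_pDisplay->PointToClient(Cursor::Position);

    for(int a = 0; a < MAX_CONTROLS; a++)
    {
        if(elementList[a] != nullptr)
        {
            if(elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}
The rest of the loop is unchanged; only the cursor point is converted before Contains() is called.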