Resizing desktop application with libgdx issues - resize

I'm trying to make an app using libgdx. When I first launch it on my desktop, my map renders fine. But if I resize the window, the map no longer fits completely in the window.
However, if I change the window size at the beginning instead, the map renders fine. So there is probably an issue with my resizing function.
In my GameScreen class I have:
public void resize(int width, int height)
{
    renderer.setSize(width, height);
}
And my renderer has this:
public void setSize (int w, int h)
{
    ppuX = (float)w / (float)world.getWidth();
    ppuY = (float)h / (float)world.getHeight();
}
For example, if I shrink the window, the map becomes too small for it; if I enlarge the window, the map no longer fits inside it.

If you have a fixed viewport, which is the case most of the time, you don't need to do anything in your resize method. The camera will scale what it draws to fill the window even if you resize it at runtime.
I myself never put anything in resize.
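For example, here is a minimal sketch of such a fixed-viewport setup (this assumes you draw through an OrthographicCamera and a SpriteBatch; the class and field names are only illustrative, not from your project):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class WorldRenderer {
    private final OrthographicCamera camera;
    private final SpriteBatch batch = new SpriteBatch();

    public WorldRenderer(float worldWidth, float worldHeight) {
        // Viewport expressed in world units (e.g. tiles), not pixels,
        // so the projection is independent of the window size.
        camera = new OrthographicCamera(worldWidth, worldHeight);
        camera.position.set(worldWidth / 2f, worldHeight / 2f, 0);
        camera.update();
    }

    public void render() {
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        // draw the map sprites using world coordinates here
        batch.end();
    }
}

With that in place, resize() can stay empty: the camera keeps showing the same region of the world at any window size (it simply stretches to the new aspect ratio).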

Related

How does the viewport work in libgdx and how to set it up correctly?

I am learning to use libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height to the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display, you always see the full height of the world but a different extent along the x-axis. Somebody with a wider-than-high display can see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. Up to this point I thought I had mastered how the viewport logic works.
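For concreteness, this is roughly the setup I have in mind (a fragment; the variable names are just for illustration):

float worldHeight = 5f; // always show the full world height
float aspectRatio = (float) Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();

// The viewport width follows the aspect ratio, so nothing gets distorted.
OrthographicCamera camera = new OrthographicCamera(worldHeight * aspectRatio, worldHeight);
camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0);
camera.update();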
I am working through the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are kept in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on the desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after you change the position or viewport dimensions of your camera (or any other properties).
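For example (the position values here are hypothetical):

cameraGUI.position.set(newX, newY, 0);
cameraGUI.update(); // recomputes the projection and view matrices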
I found the reason. If you take a look at the WorldRenderer class, there is a resize() method. In this method the viewport is adapted to the aspect ratio. I am just wondering, because until now I thought the resize method was only called when resizing the window. Apparently it is also called at startup. Can anybody clarify?
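For reference, the method does roughly this (paraphrased, not the exact book source): it keeps the viewport height fixed and rescales the viewport width to the new window's aspect ratio. LibGDX also invokes resize() once right after create(), which is why the adaptation already happens at startup.

public void resize(int width, int height) {
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / height) * width;
    camera.update();
}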

How to make a large 2D tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static: the only moving things in the game are the main character sprite and the camera following it. Since the map itself has no moving objects and is very simple, there must be a way to render only the needed sections of it, or perhaps render the map just once. All I have discovered from researching the topic is that a good way might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each tile prefab a certain color on the RGB scale, and then import a PNG file that has the corresponding colors of the prefabs where I want them, like this:
I then wrote a script which will place each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    // Update is called once per frame
    void Update () {
    }

    void loadLevel () {
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                // Spawn the prefab whose color matches this pixel.
                if (tileColors [x + y * levelWidth] == block13Color) {
                    Instantiate (block13, new Vector3 (x, y), Quaternion.identity);
                }
            }
        }
    }
}
This results in a map that looks like this when used with all the code (I took out all the code for the other prefabs to save space)
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first make sure that what's consuming your resources is in fact the large number of tiles, not something else.
One way is to create an empty parent GameObject for every tile (right click in the Hierarchy > "Create Empty"),
then attach a script to this parent. This script has a reference to the camera (tell me if you need help with that), calculates the distance between the tile and the camera, and instantiates the tile if the distance is less than some value, otherwise destroys the instance (if it is there).
It has to do this in the Update function to check the distances every frame, or you can use coroutines to do fewer checks (more efficient).
Another way is to attach a script to the camera that has an array with instances of all tiles and checks their distances from the camera the same way. You can do this if you have exactly one large tilemap, because it would be hard to reuse this script if you have more than one large tilemap.
Also you can calculate the distance between the tile and the character sprite instead of the camera. Pick whichever is more convenient.
If you still get frame drops after doing the above, you can zoom the camera in so fewer tiles fall within its range, but you would then have to recalculate the distances.
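Here is a rough sketch of the parent-script idea (all names are illustrative, not from your project; it toggles renderers instead of instantiating/destroying, but the distance check is the same):

using UnityEngine;

public class TileCuller : MonoBehaviour {
    public Camera targetCamera;          // drag the main camera here
    public float visibleDistance = 15f;  // world units

    private Renderer[] tileRenderers;

    void Start () {
        // Cache the renderers of all child tiles once.
        tileRenderers = GetComponentsInChildren<Renderer>();
    }

    void Update () {
        Vector3 camPos = targetCamera.transform.position;
        foreach (Renderer tile in tileRenderers) {
            // 2D distance: ignore the camera's z offset.
            float distance = Vector2.Distance(camPos, tile.transform.position);
            tile.enabled = distance < visibleDistance;
        }
    }
}

Enabling and disabling renderers avoids the allocation cost of repeatedly calling Instantiate and Destroy; switch to those calls instead if keeping every tile object alive uses too much memory.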

Creating a flexible UI container for an image and a label

I thought this would be a pretty simple task, but I have now tried for hours and can't figure out how to get around it.
I have a list of friends which should be displayed in a scrollable list. Each friend has a profile image and a name, so each item in the list should display the image and the name.
The problem is that I can't figure out how to make a flexible container that contains both the image and the name label. I want to be able to change the width and height dynamically so that the image and the text scale and move accordingly.
I am using Unity 5 and Unity UI.
I want to achieve the following for the container:
The width and height of the container should be flexible
The image is a child of the container and should be left aligned; its height should fill the container height while keeping its aspect ratio.
The name label is a child of the container and should be left aligned to the image with 15 px left padding. The width of the text should fill the rest of the space in the container.
Hope this is illustrated well in the following attached image:
I asked the same question here on Unity Answers, but no answers so far. Is it really possible that such a simple task is not doable in Unity UI without using code?
Thanks a lot for your time!
It looks like this can be achieved with layout components.
The image is a child of the container and should be left aligned, the height should fill the container height and should keep its aspect ratio.
For this, try adding an Aspect Ratio Fitter component with Aspect Mode set to "Width Controls Height".
The name label is a child of the container and should be left aligned to the image with 15 px left padding. The width of the text should fill the rest of the space in the container.
For this, you can simply anchor and stretch your label to the container size and use the Best Fit option on the Text component.
We never found a way to do this without code. I am very unsatisfied that such a simple task cannot be done in the current UI system.
We did create the following layout script that does the trick (thanks to Angry Ant for helping us out). The script is attached to the text label:
using UnityEngine;
using UnityEngine.EventSystems;

[RequireComponent (typeof (RectTransform))]
public class IndentByHeightFitter : UIBehaviour, UnityEngine.UI.ILayoutSelfController
{
    public enum Edge
    {
        Left,
        Right
    }

    [SerializeField] Edge m_Edge = Edge.Left;
    [SerializeField] float border;

    public virtual void SetLayoutHorizontal ()
    {
        UpdateRect ();
    }

    public virtual void SetLayoutVertical () {}

#if UNITY_EDITOR
    protected override void OnValidate ()
    {
        UpdateRect ();
    }
#endif

    protected override void OnRectTransformDimensionsChange ()
    {
        UpdateRect ();
    }

    Vector2 GetParentSize ()
    {
        RectTransform parent = transform.parent as RectTransform;
        return parent == null ? Vector2.zero : parent.rect.size;
    }

    RectTransform.Edge IndentEdgeToRectEdge (Edge edge)
    {
        return edge == Edge.Left ? RectTransform.Edge.Left : RectTransform.Edge.Right;
    }

    void UpdateRect ()
    {
        RectTransform rect = (RectTransform)transform;
        Vector2 parentSize = GetParentSize ();
        rect.SetInsetAndSizeFromParentEdge (IndentEdgeToRectEdge (m_Edge), parentSize.y + border, parentSize.x - parentSize.y);
    }
}

Windows Phone 8.1 Action Center-like page design

I am trying to move an object in a page from top to bottom following the user's finger movement, using a translate transform. We should see the page contents as the bar goes down the page and sits at the bottom.
Just like the Action Center in Windows Phone 8.1.
Please let me know any ideas on how to design this. Thanks.
Good question!
My first thought was to do something like this:
You could get the touch input location and then move a rectangle from the top of the screen, translating it down according to the Y coordinate of the touch input.
EDIT:
Okay, so here is what you can do.
Create a Canvas and position it somewhere at the top (I gave it a height of 14 in the collapsed state).
Then create a private void cn_ManipulationDelta(object sender, System.Windows.Input.ManipulationDeltaEventArgs e) event handler and make it set the height of the Canvas. I also included a float i so I can later make it snap back or cover the entire screen if the user lets go during the pull gesture.
private void cn_ManipulationDelta(object sender, System.Windows.Input.ManipulationDeltaEventArgs e)
{
    cn.Height += e.DeltaManipulation.Translation.Y;
    i = (float)e.CumulativeManipulation.Translation.Y;
}
And that's it. You can also add this event handler to make it snap back or cover the full screen when the user lets go.
private void cn_ManipulationCompleted(object sender, System.Windows.Input.ManipulationCompletedEventArgs e)
{
    if (i < 100)
    {
        cn.Height = 14;
    }
    else
    {
        cn.Height = Application.Current.Host.Content.ActualHeight;
    }
}
Of course you can add smoother animations so it slowly goes back into the collapsed view or fills the entire page.
I hope this helps!

Android View.onDraw() always has a clean Canvas

I am trying to draw an animation. To do so I have extended View and overridden the onDraw() method. What I would expect is that each time onDraw() is called, the canvas would be in the state I left it in, and I could choose to clear it or just draw over parts of it (this is how it worked when I used a SurfaceView), but each time the canvas comes back already cleared. Is there a way to not have it cleared? Or could I maybe save the previous state into a Bitmap so I can just draw that Bitmap and then draw over top of it?
I'm not sure if there is a way or not. But for my custom views I either redraw everything each time onDraw() is called, or draw to a bitmap and then draw the bitmap to the canvas (like you suggested in your question).
Here is how I do it:
class A extends View {
    private Canvas canvas;
    private Bitmap bitmap;
    private Paint paint = new Paint();

    public A(Context context) {
        super(context);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        if (bitmap != null) {
            bitmap.recycle();
        }
        // Back the offscreen canvas with a bitmap matching the view size.
        canvas = new Canvas();
        bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        canvas.setBitmap(bitmap);
    }

    public void destroy() {
        if (bitmap != null) {
            bitmap.recycle();
        }
    }

    @Override
    public void onDraw(Canvas c) {
        // draw onto the offscreen canvas if needed (maybe only the parts of the animation that changed)
        canvas.drawRect(0, 0, 10, 10, paint);
        // draw the bitmap to the real canvas c
        c.drawBitmap(bitmap,
                new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight()),
                new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight()), null);
    }
}
You should have a look here to see the difference between a basic View and a SurfaceView. A SurfaceView has a dedicated layer for drawing, which I suppose keeps track of what you drew before. If you really want to do it on a basic View, you could try to put each item you draw in an array, like the ItemizedOverlay example for the MapView.
It should work pretty much the same way.
Your expectations do not jibe with reality :) The canvas will not be the way you left it; it will be blank instead. You could create an ArrayList of objects to be drawn (canvas.drawCircle(), canvas.drawBitmap(), etc.), then iterate through the ArrayList in onDraw(). I am new to graphics programming but I have used this on a small scale. Maybe there is a much better way.
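Something along those lines (the class and helper names are made up for illustration):

import java.util.ArrayList;
import java.util.List;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

public class ReplayView extends View {
    // Each entry knows how to draw itself onto a canvas.
    public interface DrawOp { void draw(Canvas canvas); }

    private final List<DrawOp> ops = new ArrayList<>();
    private final Paint paint = new Paint();

    public ReplayView(Context context) {
        super(context);
    }

    public void addCircle(final float x, final float y, final float r) {
        ops.add(new DrawOp() {
            @Override public void draw(Canvas canvas) {
                canvas.drawCircle(x, y, r, paint);
            }
        });
        invalidate(); // schedule a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // The canvas arrives cleared, so replay every stored operation.
        for (DrawOp op : ops) {
            op.draw(canvas);
        }
    }
}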