I have a large level set up; I'm overriding the default Game Mode with a custom one, and within that Game Mode I'm overriding the Default Pawn Class with a custom one I've made. I also have the same GameMode & Pawn configuration in Project Settings. There are no other pawns or cameras on the map to be possessed.
Despite this a default static Camera Actor gets spawned and possessed at world origin while my custom Pawn never spawns when I start playing.
It turns out my custom pawn does spawn as expected when it's within ~1,000,000 units of the world origin; anything beyond that, and the static Camera Actor gets spawned instead. I had to go to the World Settings for the level and check "Enable Large Worlds". Now everything works as expected.
Can I somehow detect that the player has touched a specific tile in the TileMap resource (tilemap.res)?
I need to determine which tile the player is standing on (for example, whether it's on land or in water).
It depends on what kind of behavior you're expecting. There are two major ways to do this:
1. In each individual script that you want to be affected by the tiles, get the tile it's on every time it moves, then run the logic directly from there.
2. As above, get the tile the unit is on each time it moves, but instead of running the logic every time, emit a signal whenever the tile changes. Keep the current tile cached so you can tell when the unit is on a new tile, have the TileMap itself connect to the signal, and when it receives the signal, act on the Node2D until it leaves. This is what I'd do.
The exact method to find the tile is TileMap.get_cellv(TileMap.world_to_map(Node2D.position)) -- world_to_map converts the world position to map coordinates, and get_cellv reads the tile at those coordinates. You just need to have the Node2D and the TileMap, and you can get both in the same function using the above two methods.
Technically, you could also procedurally add Area2Ds to the tilemap based on the tiles at various positions using TileMap.get_used_cells_by_id (making sure the ID is the one with the special behavior), then connect the TileMap to those areas' body/area_entered and body/area_exited signals and use that. But spawning a whole lot of Area2Ds isn't necessary when you can check the tile a Node2D is on directly.
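A minimal sketch of option 2 in GDScript (Godot 3.x API); the node path, signal name, and the KinematicBody2D base class are assumptions, not something from your project:

```gdscript
extends KinematicBody2D

# Emitted only when the unit moves onto a different tile.
signal tile_changed(new_cell_id)

onready var tilemap: TileMap = get_parent().get_node("TileMap")  # assumed path
var _current_cell_id := TileMap.INVALID_CELL

func _physics_process(_delta: float) -> void:
    # Convert world position to map coordinates, then read the cell id there.
    var cell := tilemap.world_to_map(global_position)
    var cell_id := tilemap.get_cellv(cell)
    if cell_id != _current_cell_id:
        _current_cell_id = cell_id
        emit_signal("tile_changed", cell_id)
```

The TileMap (or any other node) can then connect to tile_changed and decide what "land" vs. "water" means for this unit.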
I have two issues in my Metal App.
My call to currentRenderPassDescriptor is stalling. I have too many drawables in flight, apparently.
I'm unsure how to configure the multiple MTKViews I'm using for best performance.
Issue (1)
I have a problem with currentRenderPassDescriptor in my app. It occasionally blocks (for 1.00 s), which, according to the docs, is because there is no currentDrawable available.
Background: I have 4 HD 1920x1080 videos playing concurrently, tiled out onto a 3840x2160 second external display as a debugging configuration. The pixel buffers of these AVPlayer instances are captured by 4 independent CVDisplayLink callbacks and, from within each callback, there is the draw call to its assigned MTKView. A total of 4 MTKViews are subviews tiled on a single NSWindow, and are configured for manual drawing.
I'm using CVDisplayLink callbacks manually. If I don't, then I get stutter when mousing up on the app’s menus, for example.
Within each draw call, I do a bit of kernel shader work, then attempt to obtain the currentRenderPassDescriptor. If successful, I do one pass of a fragment/vertex shader and then present the drawable. My code flow follows Apple's sample code as well as published examples.
According to the Metal System Trace, most draw calls take under 5 ms. The GPU is about 20-25% utilized and about 25% of the GPU memory is free. I can also make the main thread usleep() for 1 second without any hiccups.
Without any user interaction, there's about a 5% chance of the videos stalling out in the first minute. If there's some UI work going on, I see it as WindowServer work in Instruments. I also note that AVFoundation seems to cache about 15 frames of video onto the GPU for each AVPlayer.
If the cadence of the draw calls is upset, there's about a 10% chance that things stall, in whole or in part -- some videos will completely stall, some will stall to 1 Hz updates, and some won't stall at all. There's also less chance of stalling when running Metal System Trace. The videos that have stalled seem to have done so on obtaining a currentRenderPassDescriptor.
It's really poor design to have currentRenderPassDescriptor block for ≈1 s during a render loop -- so much so that I'm thinking of eschewing MTKView altogether and just drawing to a CAMetalLayer myself. But the docs on CAMetalLayer seem to indicate the same blocking behaviour will occur.
I also grab these 4 pixel buffers on the fly and render sub-size regions-of-interest to 4 smaller MTKViews on the main monitor; but the stutters still occur if this code is removed.
Is the drawable buffer limit per MTKView or per the backing CALayer? The docs for maximumDrawableCount on CAMetalLayer say the number needs to be 2 or 3. This question ties into the configuration of the views.
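For reference, the usual defensive pattern is to treat a missing render pass descriptor or drawable as a dropped frame rather than a stall. A sketch in the MTKViewDelegate form (a manual CVDisplayLink-driven draw would guard the same way); the commandQueue setup is boilerplate and the actual encoding work is elided:

```swift
import MetalKit

final class Renderer: NSObject, MTKViewDelegate {
    let commandQueue: MTLCommandQueue

    init?(device: MTLDevice) {
        guard let queue = device.makeCommandQueue() else { return nil }
        commandQueue = queue
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        // currentRenderPassDescriptor blocks (up to ~1 s) until a drawable
        // frees up, so fetch it once and bail out if nothing is available --
        // dropping the frame instead of stalling the display link thread.
        guard let descriptor = view.currentRenderPassDescriptor,
              let drawable = view.currentDrawable,
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
        else { return }

        // ... encode the fragment/vertex pass here ...
        encoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```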
Issue (2)
My current setup is a 3840x2160 NSWindow with a single content view. This subclass of NSView does some hiding/revealing of the mouse cursor by introducing an NSTrackingRectTag. The MTKViews are tiled subviews on this content view.
Is this the best configuration? Namely, one NSWindow with tiled MTKViews… or should I do one MTKView per window?
I'm also not sure how to best configure these windows/layers, i.e. by setting (or clearing) wantsLayer, wantsUpdateLayer, and/or canDrawSubviewsIntoLayer. I'm currently just setting wantsLayer to YES on the single content view. Any hints on this would be great.
Does adjusting these properties collapse all the available drawables to the backing layer only; are there still 2 or 3 per MTKView?
NB: I've attached a sample run of my Metal app. The longest 'work' on the top graph is just under 5ms. The clumps of green/blue are rendering on the 4 MTKViews. The 'work' alternates a bit because one of the videos is a 60fps source; the others are all 30fps.
I was wondering how, in games, when the character goes forward the screen moves with them and the level continues on past the extent of a storyboard (like the background is longer than the storyboard). Does anyone know how? (Not using a game engine.)
I was wondering how in Games when the character goes forward the screen moves with them and the level continues on, past the extent of a storyboard
The objects on the game board generally aren't laid out in a storyboard file. The view that is the game board could certainly be part of the storyboard, but the content that that view draws is typically generated by the game's code. For example, it's common to use a sprite framework like SpriteKit or Cocos2D to draw objects using sprites. The data that the game manages is the collection of objects that appear in the game, and a sprite might be used to represent each of those objects.
Consider a platform jumping game like Doodle Jump, where the player hops from one platform to the next. The platforms keep coming and coming and coming as long as the player doesn't fall. Those platforms aren't laid out visually in the storyboard editor. Instead, a list of platform positions is somehow generated -- they could be read from a file, or calculated using some function, or created by some algorithm. A sprite is created for each platform as needed to fill the screen, and probably destroyed after it scrolls off the screen.
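A SpriteKit sketch of that idea, with a camera following the player and platforms generated and destroyed on the fly; the node sizes, colors, and spacing are all illustrative assumptions:

```swift
import SpriteKit

final class GameScene: SKScene {
    let player = SKSpriteNode(color: .red, size: CGSize(width: 32, height: 32))
    let cam = SKCameraNode()
    var nextPlatformY: CGFloat = 0  // where the next platform will be spawned

    override func didMove(to view: SKView) {
        camera = cam
        addChild(cam)
        addChild(player)
    }

    override func update(_ currentTime: TimeInterval) {
        // The world "scrolls" simply because the camera tracks the player.
        cam.position.y = player.position.y

        // Spawn platforms just above the visible area as the player climbs.
        while nextPlatformY < cam.position.y + size.height {
            let platform = SKSpriteNode(color: .green,
                                        size: CGSize(width: 80, height: 12))
            platform.position = CGPoint(x: CGFloat.random(in: 40...(size.width - 40)),
                                        y: nextPlatformY)
            addChild(platform)
            nextPlatformY += 120  // assumed vertical spacing
        }

        // Destroy platforms that have scrolled off the bottom of the screen.
        for node in children where node !== player && node !== cam {
            if node.position.y < cam.position.y - size.height {
                node.removeFromParent()
            }
        }
    }
}
```

The level is effectively unbounded because the game only ever keeps the on-screen slice of it as live nodes; the storyboard never enters into it.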
I am working on an app for small kids. The app is basically an eye-candy game where the kid can touch the screen and make flowers or balloons appear. I am using touchesBegan: and touchesEnded: to figure out when the kid is pressing down (start the animation) and when he lifts his finger up (stop the animation).
My problem is that some of the kids I beta tested with held the iPhone with their thumb on the screen. This extra touch messes with the logic that controls the position of the animation. I believe I can take care of this with one of two methods:
1. Setting exclusive touch so that once the first finger is down, all other touches are ignored, thus forcing the child to lift their thumb if they want to make the game do anything.
2. Capturing the position of the touch begin and making sure that in my touch-end logic, I am responding to the correct finger.
I was just curious if anyone else has run into this problem and if they had come up with a better approach.
I went with #1: setting exclusive touch so that once the first finger is down, all other touches are ignored, forcing the child to lift their thumb if they want the game to do anything. I tested it with a few kids, and very quickly they figured out what the "rules" were and adapted.
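A minimal sketch of option #1 in UIKit; the view class and the animation hooks are hypothetical stand-ins for the app's own code:

```swift
import UIKit

final class FlowerView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // While this view is tracking a touch, no other view receives
        // touches (and vice versa).
        isExclusiveTouch = true
        // Deliver only the first touch to this view; a resting thumb
        // placed afterwards never reaches touchesBegan.
        isMultipleTouchEnabled = false
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) not supported") }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        startAnimation(at: touch.location(in: self))
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        stopAnimation()
    }

    // Hypothetical hooks standing in for the flower/balloon animation.
    private func startAnimation(at point: CGPoint) { /* ... */ }
    private func stopAnimation() { /* ... */ }
}
```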
What age group? From observing 1-, 3-, and 5-year-olds with touch-screen phones, it seems to me that by the age where the child can be trusted to hold the phone and not drop it (late twos or early threes), they can learn how to hold the phone in a palm. The ones who need to grip can easily learn (after being shown only once, in most cases) to grip with the thumb on the top and bottom areas where there is no screen.
I agree that your app should handle the errant input intelligently, but don't discount a minimum of instruction in the general use of the phone (holding it) first. This isn't a case of "fix the user" but rather a skill that the user (the child) will need in order to use any app on the phone.
I'm creating a zombie-preparedness app for iOS, and I thought it would be cool to have an "Apocalypse mode" which is similar to Airplane Mode in that it replaces the status-bar carrier icon with a little airplane, except possibly with a little mushroom cloud or something instead.
Apocalypse mode would just be a boolean flag in my app that disables all features requiring a data connection (only within the app, not using any private APIs or anything...). If possible, I would still like to have the clock, battery life, Bluetooth icons, and whatever else pops up onto the status bar during normal operation.
I'm looking at the MTStatusBarOverlay library to implement this feature (related Stack Overflow post here). I know there is a possibility my app could get rejected for style because of this, but my thought is that I won't stray too far from the norm and will cross my fingers that Apple doesn't jump on me for it.
My question is
How can I copy over the clock and battery life icons? Do I need to hook into an event, or is there a UI element I can add?
Am I going about this the right way? Would it be better to just make a transparent overlay on top of the normal status bar with a mushroom cloud that overlays the carrier icon instead of replacing the status bar entirely? I'm worried about variable length carrier icons...
Of course option 3 is I just forget that idea entirely and make some sort of different background or something for this mode, but that seems lame :P
I had a go with something similar a while ago. I created a status bar overlay that accepted touch events, but didn't block the status bar from receiving touches, which is crucial for app store acceptance.
You can check out my question and my answer; however, keep in mind it might not be current anymore. It worked great on iOS 4, but I never tested it on 5. Worth a try though.
As for the overlay itself, I suggest covering everything up to the clock and leaving the rest transparent; that should do the job.