Using MKOverlay `canReplaceMapContent` on only part of the MKMapView content - MapKit

I have created an overlay that I want to cover the whole world except a specific region where the app is focused. I subclassed MKPolygon as follows:
class ReplacingPolygion: MKPolygon {
    override func canReplaceMapContent() -> Bool {
        return true
    }

    override var boundingMapRect: MKMapRect { MKMapRect.world }
}
As I understand it, this means that my overlay will replace the existing map content and be drawn across the whole world. I initialize the class above with coordinates that cover the whole world, plus an interior polygon that cuts my needed region out of the larger overlay. This creates an overlay covering the whole world except the region I want.
ReplacingPolygion(coordinates: worldPolygon!, count: worldPolygon!.count, interiorPolygons: [userPolygon])
The issue is that, because I return true from canReplaceMapContent(), the base map isn't drawn anywhere, including inside the interior polygon, which is transparent.
Is there a way to force MKMapView to render the tiles in the area described by the interiorPolygon above?
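For context, here is a minimal sketch of how such an overlay might be added and rendered. The view controller name, the delegate wiring, and the fill styling are assumptions, since the original post doesn't show that part:

import MapKit

// Hypothetical view controller; mapView, worldPolygon and userPolygon are the
// question's own names and are assumed to be set up elsewhere.
extension MyViewController: MKMapViewDelegate {

    func addMaskOverlay() {
        let overlay = ReplacingPolygion(coordinates: worldPolygon!,
                                        count: worldPolygon!.count,
                                        interiorPolygons: [userPolygon])
        mapView.addOverlay(overlay)
    }

    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        guard overlay is ReplacingPolygion else { return MKOverlayRenderer(overlay: overlay) }
        // The interior polygon is punched out of the fill automatically.
        let renderer = MKPolygonRenderer(overlay: overlay)
        renderer.fillColor = UIColor.black.withAlphaComponent(0.8) // assumed styling
        return renderer
    }
}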

Related

Is there any way to add a regular ARAnchor on a detected image?

I'm trying to put a 3D object on top of a detected image, and the placement works at first. But when I move the camera around the image, the object doesn't stay at the center of the image. Is there any way to add a regular anchor at the center of the image to help me fix the 3D object at the right position? The following code is what I've tried, but it didn't work.
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    if ([anchor isKindOfClass:[ARImageAnchor class]]) {
        ARAnchor *newAnchor = [[ARAnchor alloc] initWithTransform:anchor.transform];
        [self.sceneView.session addAnchor:newAnchor];
    }
}
When I detect an image and put a plane on it, it initially looks correctly centered.
But when I move the camera to another position, the plane no longer sits at the center of the image.
There's no need to create a new anchor because there's already one provided by ARKit. You should add your 3D content to the node provided by this method.
According to the renderer:didAddNode:forAnchor: documentation:
You can provide visual content for the anchor by attaching geometry (or other SceneKit features) to this node or by adding child nodes.
So, in this method:
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    [node addChildNode:your3DObjectNode];
}
Then it should stay at the center of your image.
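For anyone working in Swift, here is a rough equivalent sketch of the same approach; your3DObjectNode is a placeholder for whatever node holds your content, and the class implementing this method is assumed to be the sceneView's delegate:

import ARKit
import SceneKit

// Swift sketch of the same idea: attach the content to the node that ARKit
// manages for the image anchor, so it keeps tracking the image.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor else { return }
    node.addChildNode(your3DObjectNode)
}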

How to make large 2d tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static: the only moving things in the game are the main character sprite and the camera following it. Since the map itself has no moving objects and is very simple, there must be a way to render only the needed sections of it, or perhaps render the map just once. All I have discovered from researching the topic is that a good approach might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would improve the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work, I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each individual tile prefab to a certain color on the RGB scale, and then import a PNG file that has the corresponding colors placed where I want the prefabs.
I then wrote a script which will place each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;

    private Color[] tileColors;
    public Color block13Color;

    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    // Update is called once per frame
    void Update () {
    }

    void loadLevel(){
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                // if (tileColors [x + y * levelWidth] == block13Color) {
                //     Instantiate(block13, new Vector3(x, y), Quaternion.identity);
                // }
            }
        }
    }
}
When used with all the code, this results in the full map (I took out the code for the other prefabs to save space).
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first make sure that what's consuming your resources is in fact the large number of tiles, not something else.
One way is to create an empty parent GameObject for the tiles (right click in the Hierarchy > "Create Empty"), then attach a script to this parent. The script holds a reference to the camera (tell me if you need help with that), calculates the distance between each tile and the camera, and instantiates the tile if the distance is below some threshold, otherwise destroys the instance (if it exists).
It has to do this in the Update function so the distances are checked every frame, or you can use coroutines to check less often (more efficient).
Another way is to attach a script to the camera that holds an array with the instances of all tiles and checks their distances from the camera in the same way. This works if you have exactly one large tilemap, because the script would be hard to reuse if you have more than one.
You can also calculate the distance between each tile and the character sprite instead of the camera. Pick whichever is more convenient.
If you still get frame drops after doing the above, you can zoom the camera in so that fewer tiles are in its range, but you would then have to recalculate the distances.

Animating UIVisualEffectView Blur Radius?

As the title says, is there a way to animate a UIVisualEffectView's blur radius? I have a dynamic background behind the view, so the ImageEffects addition can't be used... The only thing I know of that can do this is animating the opacity, but iOS complains that doing so breaks the effect view, so it definitely seems like a bad idea... Any help would be gladly appreciated.
The answer is yes. Here's an example for animating from no blur -> blur:
// When creating your view...
let blurView = UIVisualEffectView()

// Later, when you want to animate...
UIView.animateWithDuration(1.0) { () -> Void in
    blurView.effect = UIBlurEffect(style: .Dark)
}
This will animate the blur radius from zero (totally transparent, or rather - no blur effect at all) to the default radius (fully blurred) over the duration of one second. And to do the reverse animation:
UIView.animateWithDuration(1.0) { () -> Void in
    blurView.effect = nil
}
The resulting animations transform the blur radius smoothly, even though you're actually adding/removing the blur effect entirely - UIKit just knows what to do behind the scenes.
Note that this wasn't always possible: Until recently (not sure when), a UIVisualEffectView had to be initialized with a UIVisualEffect, and the effect property was read-only. Now, effect is both optional and read/write (though the documentation isn't updated...), and UIVisualEffectView includes an empty initializer, enabling us to perform these animations.
The only restriction is that you cannot manually assign a custom blur radius to a UIVisualEffectView - you can only animate between 'no blur' and 'fully blurred'.
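Note that the snippets above use older Swift syntax; in current Swift the same animation would look roughly like this (a sketch, assuming blurView is already in your view hierarchy):

import UIKit

let blurView = UIVisualEffectView()

// Animate from no blur to fully blurred...
UIView.animate(withDuration: 1.0) {
    blurView.effect = UIBlurEffect(style: .dark)
}

// ...and back to no blur.
UIView.animate(withDuration: 1.0) {
    blurView.effect = nil
}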
EDIT: In case anybody is interested, I've created a subclass of UIVisualEffectView that gives you full control over blur radius. The caveat is that it uses a private UIKit API, so you probably shouldn't submit apps for review using it. However, it's still interesting and useful for prototypes or internal applications:
https://github.com/collinhundley/APCustomBlurView

Can I get an Unclickable geoxml3 Shadow Layer that sits entirely behind a Clickable Marker Layer?

In order to have markers that are clickable and marker shadows that are not, I'm setting up two geoxml3 parsers, one for the markers and one for the shadows. That works, but I'm hoping that having two layers will also let me keep the shadow of one marker from falling on another marker. It's a subtle thing, but having a visually horizontal shadow overlaid on a visually vertical marker undercuts the 3-D effect. And in a cluster of markers, things get pretty murky down among the marker stems.
Now, I get that icons are rendered from north to south, so that an icon will peek over the top of an overlapping icon to the south of it. What I was expecting was that each parser would create its own layer, in the sense that a marker layer would appear entirely in front of a preceding shadow layer, with no shadow falling on any marker. It sure looks, though, like the parsers are working north to south down both "layers" at the same time. It seems like for each point they render the shadow image and then the corresponding marker image before moving down to the next point. If the next marker is pretty close to the southwest of the previous marker, its shadow image falls onto that previous marker.
To make sure I wasn't seeing some sort of illusion, as an exercise I put together a map with a couple of big, overlapping shadowed markers. What I'd hope for would be to have the images layered, bottom to top:
East Greenland Shadow
Greenland Shadow
East Greenland Marker
Greenland Marker
Instead, they appear to be layered:
East Greenland Shadow
East Greenland Marker
Greenland Shadow
Greenland Marker
with the Greenland Shadow falling on the East Greenland Marker.
So, can I get all of the markers to appear, collectively, in front of all the shadows? I can't track it down at the moment, but I believe I saw a list of standard Google Maps layers somewhere, which included something like a non-clickable "Shadow Layer". When I create a google.maps.KmlLayer with standard icons, the API automatically pulls up the corresponding shadow images and places those on what I guess is the Shadow Layer, which sits entirely behind the KmlLayer I asked for.
In my current project, I need a geoxml3 marker layer so I can programmatically access the placemarks. Since I can actually work with 32x32 icons, in this case I can just fall back to using a KmlLayer for the shadows, but for future reference it would be great to have the option of a non-clickable geoxml3 layer that sits entirely behind a clickable layer. Is there a way to do that? Would that be a matter of somehow rendering onto that Google Maps Shadow Layer?
Here's the script:
function initialize() {
    var mapOptions = {
        center: new google.maps.LatLng(71, -45),
        zoom: 4,
        preserveViewport: true
    };
    var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions);

    // Shadow Layer
    var shadow = new geoXML3.parser({
        map: map,
        zoom: false,
        markerOptions: {clickable: false}
    });
    shadow.parse('greenland_shadow_5.kml');

    // Marker Layer
    var blues = new geoXML3.parser({
        map: map,
        singleInfoWindow: true,
        zoom: false,
        suppressDirections: true,
        markerOptions: {
            shape: {
                type: 'circle',
                coords: [38,38,38]
            }
        }
    });
    blues.parse('greenland_5.kml');
}
google.maps.event.addDomListener(window, 'load', initialize);
The two KML files are identical except for the IconStyles:
<IconStyle>
    <Icon>
        <href>bluemarker_76x128.png</href>
        <scale>1.0</scale>
    </Icon>
    <hotSpot x="38" y="0" xunits="pixels" yunits="pixels" />
</IconStyle>
versus:
<IconStyle>
    <Icon>
        <href>markershadow_188x128.png</href>
        <scale>1.0</scale>
    </Icon>
    <hotSpot x="96" y="0" xunits="pixels" yunits="pixels" />
</IconStyle>
You could take the "MarkerShadow" class and use it to make a layer of just shadows, and make another layer with just the markers: proof of concept.
- disadvantage: it processes the same KML twice.
I can think of 4 options for you:
1. Put your shadows in a separate KML file and display them using the native google.maps.KmlLayer; that should put them underneath all of the google.maps.Marker objects, which is what geoxml3 uses to render the icons. The issue with KmlLayer is that it does not support scaling: all icons are scaled to 64x64, and if they can't be, they are replaced by the default blue icon. KmlLayer is rendered in the overlayLayer pane.
2. Create custom "markers" using Custom Overlays that combine a marker image with a shadow image. This used to be supported natively by the Google Maps JavaScript API v3, but that functionality was removed with the "visual refresh". It looks like the "shadowPane" still exists (at least for now); you could put all the shadows there.
overlayShadow contains the marker shadows. It may not receive DOM events. (Pane 2).
mapPanes reference
3. Use the zIndex option of the google.maps.Marker object to put the shadows below the markers. Put all the shadows at zIndex = 0 (so they are on the bottom), then use an algorithm to keep the markers in their default stacking order:
zIndex: Math.round(latlng.lat()*-100000)<<5
"manually" add a shadow to the markers in a custom "createMarker" function (append the shadow image to the shadowPane)
proof of concept marker with shadow

(C++/CLI) Testing Mouse Position in a Rectangle Relative to Parent

I've been messing around with the Graphics class to draw some things on a panel. So far, I've just been using the Rectangle structure to draw. Clicking a button creates a rectangle in a random place on the panel and adds it to an array of other rectangles (they're actually a class called UIElement, which contains a Rectangle member). When the panel is clicked, it runs a test over all the elements to see if the mouse is inside any of them, like this:
void GUIDisplay::checkCollision()
{
    Point cursorLoc = Cursor::Position;

    for(int a = 0; a < MAX_CONTROLS; a++)
    {
        if(elementList[a] != nullptr)
        {
            if(elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                //MessageBox::Show("Click!", "Event");
                continue;
            }

            elementList[a]->Deselect();
        }
    }

    m_pDisplay->Refresh();
}
The problem is, when I click the rectangle, nothing happens.
The UIElement class draws its rectangles in the following bit of code. I've modified it a bit: in this example it uses the DrawReversibleFrame method to do the actual drawing, whereas I had been using the Graphics.FillRectangle method. When I changed it, I noticed DrawReversibleFrame drew in a different place than FillRectangle. I believe this is because DrawReversibleFrame positions itself relative to the window, while FillRectangle draws relative to whatever control's Paint event it's in (mine is in a panel's Paint method). So let me just show the code:
void UIElement::render(Graphics^ g)
{
    if(selected)
    {
        Pen^ line = gcnew Pen(Color::Black, 3);
        //g->FillRectangle(gcnew SolidBrush(Color::Red), bounds);
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::Highlight, FrameStyle::Thick);
        g->FillRectangle(gcnew SolidBrush(Color::Black), bounds);
        //g->DrawLine(line, bounds.X, bounds.Y, bounds.Size.Width, bounds.Size.Height);
    }
    else
    {
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::ControlDarkDark, FrameStyle::Thick);
        //g->FillRectangle(gcnew SolidBrush(SystemColors::ControlDarkDark), bounds);
    }
}
I added in both DrawReversibleFrame and FillRectangle so that I could see the difference. This is what it looked like when I clicked the frame drawn by DrawReversibleFrame:
The orange frame is where I clicked; the black is where it's rendering. This shows me that the Rectangle's Contains() method is looking for the rectangle relative to the window, not the panel. That's what I need fixed :)
I'm wondering if this is happening because the collision is tested outside of the panel's Paint method. But I don't see how I could implement this collision testing inside the Paint method.
UPDATE:
OK, so I just discovered that what DrawReversibleFrame and FillRectangle draw are always a certain distance apart. I don't quite understand why, but someone else might.
Both Cursor::Position and DrawReversibleFrame operate in screen coordinates, that is, coordinates for the entire screen (everything on your monitor), not just your window. FillRectangle, on the other hand, operates in window coordinates, that is, positions within your window.
If you take your example where you draw with both and the two boxes are always the same distance apart, then move your window on the screen and click again, you will see that the offset between the two boxes changes. It will be the difference between the top-left corner of your window and the top-left corner of the screen.
This is also why your check for which rectangle you clicked isn't hitting anything: you are testing a cursor position in screen coordinates against rectangle coordinates in window space. It might hit one of the rectangles, but it probably won't be the one you actually clicked on.
You always have to know what coordinate system your variables are in. This is related to the original intention of Hungarian notation, which Joel Spolsky talks about in his entry Making Wrong Code Look Wrong.
Update:
PointToScreen and PointToClient should be used to convert coordinates between screen and window coordinates.