Collision detection between an ImageView and a Button - Kotlin

val rc_img1 = Rect()
imageView1.getDrawingRect(rc_img1)
val rc_img2 = Rect()
buttonDX.getDrawingRect(rc_img2)
if (Rect.intersects(rc_img1, rc_img2)) {
    // ...move <-
} else {
    // ...move ->
}
This gives me a collision detection always!
I am trying to detect a collision between imageView1 and buttonDX, but Rect.intersects always returns true!
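For what it's worth, getDrawingRect() fills the Rect in the view's own coordinate space, so both rects start at (0, 0) and Rect.intersects() returns true whenever both views have a non-zero size. A minimal sketch of a fix, assuming both views share the same parent layout (getHitRect() reports a view's bounds in its parent's coordinates; the helper name viewsCollide is illustrative):
import android.graphics.Rect
import android.view.View

// True when the two views' bounds overlap. getHitRect() fills the Rect
// with the view's bounds in its parent's coordinate space, so the
// intersection test is meaningful as long as both views share a parent.
fun viewsCollide(a: View, b: View): Boolean {
    val rcA = Rect()
    a.getHitRect(rcA)
    val rcB = Rect()
    b.getHitRect(rcB)
    return Rect.intersects(rcA, rcB)
}
For views in different parents, getGlobalVisibleRect() gives comparable screen coordinates instead.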

Related

Can I use drawPath to both fill a shape and draw its border in a single call in Jetpack Compose?

I use Code A to draw a shape with a path. I want to fill the shape with one color and draw its border with a different color and width, and I get Image A as I expected.
But Code A invokes the drawPath operation twice, which may not be a good way. Can I use drawPath to both fill the shape and draw the border in a single call?
Code A
fun setProcess(drawScope: DrawScope, dataList: List<Double>) {
    drawScope.drawIntoCanvas {
        val step = xAxisLength / maxPointCount
        val shadowPath = Path()
        shadowPath.moveTo(0f.toX, 0f.toY)
        for (i in dataList.indices) {
            ...
        }
        shadowPath.close()
        it.drawPath(shadowPath, paintTablePath)
        it.drawPath(shadowPath, paintTableBorder)
    }
}
val paintTablePath = Paint().also {
    it.isAntiAlias = true
    it.style = PaintingStyle.Fill
    it.strokeWidth = 1f
    it.color = Color(0xffdfecfe)
}

val paintTableBorder = Paint().also {
    it.isAntiAlias = true
    it.style = PaintingStyle.Stroke
    it.strokeWidth = 3f
    it.color = Color.Red
}
Image A
What you're asking for is available neither in Compose nor in Android Canvas, as you can check here (that answer is about the Android Canvas, but the same applies to the Jetpack Compose Canvas).
You don't need to use the Paint-based functions unless you need some properties of Paint, or functions like drawText.
DrawScope already has functions such as
fun drawPath(
    path: Path,
    color: Color,
    /*@FloatRange(from = 0.0, to = 1.0)*/
    alpha: Float = 1.0f,
    style: DrawStyle = Fill,
    colorFilter: ColorFilter? = null,
    blendMode: BlendMode = DefaultBlendMode
)
And instead of passing DrawScope as a parameter, it helps to create extension functions on DrawScope, as the default Compose source code often does:
inline fun DrawScope.rotate(
    degrees: Float,
    pivot: Offset = center,
    block: DrawScope.() -> Unit
) = withTransform({ rotate(degrees, pivot) }, block)

inline fun DrawScope.clipRect(
    left: Float = 0.0f,
    top: Float = 0.0f,
    right: Float = size.width,
    bottom: Float = size.height,
    clipOp: ClipOp = ClipOp.Intersect,
    block: DrawScope.() -> Unit
) = withTransform({ clipRect(left, top, right, bottom, clipOp) }, block)
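Putting both suggestions together, a minimal sketch (the extension name drawShadowShape is illustrative) that replaces the two Paint objects and the drawIntoCanvas block from Code A:
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.Path
import androidx.compose.ui.graphics.drawscope.DrawScope
import androidx.compose.ui.graphics.drawscope.Fill
import androidx.compose.ui.graphics.drawscope.Stroke

// The path is still drawn twice (fill first, then border), but directly
// through DrawScope.drawPath, with no Paint objects involved.
fun DrawScope.drawShadowShape(shadowPath: Path) {
    drawPath(path = shadowPath, color = Color(0xFFDFECFE), style = Fill)
    drawPath(path = shadowPath, color = Color.Red, style = Stroke(width = 3f))
}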

Problem with simple rectangle collision in OpenGL

I am trying to implement collision detection and collision effects. This is my code for collision detection:
fun collisionDetection(object1: Renderable?, object2: Renderable?): Boolean {
    val collisionX = object1?.getPosition()?.x?.plus(quadSizeX)!! >= object2?.getPosition()?.x!! &&
            object2?.getPosition()?.x?.plus(quadSizeX)!! >= object1?.getPosition()?.x!!
    val collisionY = object1?.getPosition()?.y?.plus(quadSizeY)!! >= object2?.getPosition()?.y!! &&
            object2?.getPosition()?.y?.plus(quadSizeY)!! >= object1?.getPosition()?.y!!
    val collisionZ = object1?.getPosition()?.z?.plus(quadSizeZ)!! >= object2?.getPosition()?.z!! &&
            object2?.getPosition()?.z?.plus(quadSizeZ)!! >= object1?.getPosition()?.z!!
    return collisionX && collisionY && collisionZ
}
This seems to work when the objects are placed at overlapping positions by transformations. This way, with
wall.translateLocal(Vector3f(0.0f, 1.0f, -8.0f))
and
wall2.translateLocal(Vector3f(0.0f, 2.0f, -8.0f))
collisionDetection returns true.
When wall2 moves along the z-axis with a different starting point and speed, while its x and y positions match wall's, the effect of the collision should be that wall2 stops moving once the z positions of both objects are equal. I tried it this way in the main loop (among many other ways), but it is still not working:
fun render(dt: Float, t: Float) {
    glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
    staticShader.use()
    camera.bind(staticShader)
    wall.render(staticShader)
    wall2.render(staticShader)
    if (collisionDetection(wall, wall2)) {
        wall2.getPosition()
    } else {
        wall2.translateLocal(Vector3f(0.0f, 0.0f, -t * 0.001f))
    }
}
I have no idea what is wrong or how to fix it. It seems as if the collisionDetection function doesn't receive the changing z-axis position of wall2. Or maybe I need a different approach.
Everything works fine now. I had forgotten to calculate the quad size for the second object, which made the two objects appear to share the same z-coordinate. Nothing was wrong with the collision detection function itself.
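For reference, a self-contained sketch of that fix, where each object carries its own extents instead of both sharing one global quadSizeX/Y/Z (Vec3 and Box are illustrative stand-ins for the question's types):
data class Vec3(val x: Float, val y: Float, val z: Float)

// Stand-in for the question's Renderable: a position plus per-object extents.
data class Box(val position: Vec3, val size: Vec3)

// Axis-aligned box test: two boxes collide only if they overlap on all three axes.
fun collisionDetection(a: Box, b: Box): Boolean {
    val collisionX = a.position.x + a.size.x >= b.position.x && b.position.x + b.size.x >= a.position.x
    val collisionY = a.position.y + a.size.y >= b.position.y && b.position.y + b.size.y >= a.position.y
    val collisionZ = a.position.z + a.size.z >= b.position.z && b.position.z + b.size.z >= a.position.z
    return collisionX && collisionY && collisionZ
}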

AGSMAPView callout not showing on touch point

I am new to ArcGIS. Any help will be appreciated.
I am showing a callout in the didTap delegate method, like this:
func geoView(_ geoView: AGSGeoView, didTapAtScreenPoint screenPoint: CGPoint, mapPoint: AGSPoint) {
    isFromSearch = false
    MBProgressHUD.showAdded(to: self.view, animated: true)
    self.mapView.identifyLayers(atScreenPoint: screenPoint, tolerance: 12, returnPopupsOnly: false, maximumResultsPerLayer: 10) { (identifyLayerResults: [AGSIdentifyLayerResult]?, error: Error?) in
        // Check for errors and ensure identifyLayerResults is not nil
        MBProgressHUD.hide(for: self.view, animated: true)
        if let error = error {
            print(error)
            return
        }
        guard let identifyLayerResults = identifyLayerResults else { return }
        // Iterate the identify layer results
        guard identifyLayerResults.count > 0 else { return }
        guard identifyLayerResults[0].sublayerResults.count > 0 else { return }
        guard identifyLayerResults[0].sublayerResults[0].geoElements.count > 0 else { return }
        let result = identifyLayerResults[0].sublayerResults[0].geoElements[0].attributes
        self.identifyLayerResult = identifyLayerResults[0]
        var title: String? = nil
        var subtitle: String? = nil
        if ((result["SiteCode"] as? String) != nil) && ((result["SiteName"] as? String) != nil) {
            title = result["SiteCode"] as? String
            subtitle = result["SiteName"] as? String
        } else {
            title = result["company"] as? String
            subtitle = result["identifier"] as? String
        }
        self.mapView.callout.title = title
        self.mapView.callout.detail = subtitle
        self.mapView.callout.show(at: mapPoint, screenOffset: .zero, rotateOffsetWithMap: false, animated: true)
    }
}
Everything works fine the first time. But the user can also search for places using a REST API, and then the map view moves to that point and shows a callout:
https://******/arcgis/rest/services/Google/MobileiOS3/MapServer/find?
It returns a site, and I create a viewpoint from the latitude and longitude and show the callout with a zoom-out and zoom-in animation. The code is given below:
let pointView = AGSViewpoint(latitude: center.latitude, longitude: center.longitude, scale: 12E7)
self.mapView.setViewpoint(pointView, duration: 2) { (value) in
    let pointView1 = AGSViewpoint(latitude: center.latitude, longitude: center.longitude, scale: 12E4)
    self.mapView.setViewpoint(pointView1, duration: 2) { _ in
        let wgs84 = AGSSpatialReference(wkid: 4236)
        let point = AGSPoint(x: center.latitude, y: center.longitude, spatialReference: wgs84)
        let marker = AGSPictureMarkerSymbol(image: UIImage(named: "BluePushpin.png")!)
        marker.leaderOffsetX = 9
        marker.leaderOffsetY = -16
        let graphics = AGSGraphic(geometry: point, symbol: marker, attributes: nil)
        self.mGraphicOverlay.graphics.add(graphics)
        let cgPoint = CGPoint(x: self.mapView.center.x, y: self.mapView.center.y - (self.mapView.callout.frame.height + 33))
        print(cgPoint)
        self.mapView.callout.show(at: graphics.geometry as! AGSPoint, screenOffset: cgPoint, rotateOffsetWithMap: false, animated: true)
    }
}
After that, when I tap on any point on the map, the callout always shows at the top-left corner, while the first time the didTap delegate was working fine.
When I debug the code and print the callout frame, it always shows zero for both x and y.
There are a few things going on here:
Firstly, I think you've got the wrong spatial reference. You're using 4236 but WGS84 is 4326.
Note, you can avoid this type of typo by just referencing AGSSpatialReference.wgs84(), so you could say this:
let point = AGSPoint(x: center.latitude, y: center.longitude, spatialReference: .wgs84())
But look closely at that: you're also using latitude as X and longitude as Y. It's unfortunately confusing, but when you create an x,y point you need to specify longitude,latitude, not latitude,longitude:
let point = AGSPoint(x: center.longitude, y: center.latitude, spatialReference: .wgs84())
It's a common mistake. We have a helper function to simplify working with lat/lon source data:
AGSPointMakeWGS84(center.latitude, center.longitude)
You're also doing a bit more work than you need to (especially in forcing spatial reference conversions) and are introducing a few potential areas where errors could creep in.
So, assuming you actually need to set the map scale twice (I guess you are zooming to the location, and then zooming in a bit to focus attention), you could try something like this, which seems much less prone to errors:
let centerPoint = AGSPointMakeWGS84(center.latitude, center.longitude)
self.mapView.setViewpoint(AGSViewpoint(center: centerPoint, scale: 12E7), duration: 2) { _ in
    self.mapView.setViewpoint(AGSViewpoint(center: centerPoint, scale: 12E4), duration: 2) { _ in
        let marker = AGSPictureMarkerSymbol(image: UIImage(named: "BluePushpin.png")!)
        marker.leaderOffsetX = 9
        marker.leaderOffsetY = -16
        let graphics = AGSGraphic(geometry: centerPoint, symbol: marker, attributes: nil)
        self.mGraphicOverlay.graphics.add(graphics)
        let cgPoint = CGPoint(x: self.mapView.center.x, y: self.mapView.center.y - (self.mapView.callout.frame.height + 33))
        print(cgPoint)
        self.mapView.callout.show(at: centerPoint, screenOffset: cgPoint, rotateOffsetWithMap: false, animated: true)
    }
}
I confess I'm not entirely sure what your callout screenOffset calculation is doing, but I've left that as is.
I might also suggest adding the graphic before you animate the view, or after the first animation and before the second one (i.e. you're looking at the right location but have yet to zoom in a bit), but that's up to you.
Also, could I suggest that you post questions like this over at the ArcGIS Runtime SDK for iOS forum? We do monitor this space if you use the right tags, but you'll generally get more eyes on your questions over there.

ARKit – Spatial Audio barely changes the volume over distance

I created an SCNNode and added audio to it.
It is mono audio, and everything is set up correctly.
It works as spatial audio; that's not the problem.
The problem is that as I get closer or farther away, the volume barely changes. I know it changes if I get very, very far away, but it's nothing like what Apple demonstrated here:
https://youtu.be/d9kb1LfNNU4?t=23
In some other games the audio volume changes noticeably from a single step of distance.
With mine, after one step you can't even tell the volume changed. You need at least four steps.
Does anyone have a clue why?
Code below:
SCNNode *audioNode = [[SCNNode alloc] init];
SCNAudioSource *audioSource = [[SCNAudioSource alloc] initWithFileNamed:audioFileName];
audioSource.loops = YES;
[audioSource load];
audioSource.volume = 0.05; // <-- i used different values. won't change much either
audioSource.positional = YES;
//audioSource.shouldStream = NO; // <-- makes no difference
[audioNode addAudioPlayer:[SCNAudioPlayer audioPlayerWithSource:audioSource]];
[audioNode runAction:[SCNAction playAudioSource:audioSource waitForCompletion:NO] completionHandler:nil];
[massNode addChildNode:audioNode];
Maybe it's the scale of the nodes?
The whole scene is around 4 feet in size.
When I add an object I usually scale it to 0.005 (otherwise it gets way too big).
But I also tried one that was already the right size from the .scn file.
It shouldn't affect anything though, since the result is a coffee-table-sized scene and I can see the objects all right.
Updated.
Here's working code for controlling a sound's decay (works on iOS and macOS):
import AVFoundation
import ARKit

class ViewController: UIViewController, AVAudioMixing {

    @IBOutlet var sceneView: SCNView!
    // @IBOutlet var sceneView: ARSCNView!

    func destination(forMixer mixer: AVAudioNode,
                     bus: AVAudioNodeBus) -> AVAudioMixingDestination? {
        return nil
    }

    var volume: Float = 0.0
    var pan: Float = 0.0
    var sourceMode: AVAudio3DMixingSourceMode = .bypass
    var pointSourceInHeadMode: AVAudio3DMixingPointSourceInHeadMode = .bypass
    var renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.sphericalHead
    var rate: Float = 1.2
    var reverbBlend: Float = 40.0
    var obstruction: Float = -100.0
    var occlusion: Float = -100.0
    var position = AVAudio3DPoint(x: 0, y: 0, z: 10)

    let audioNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        let myScene = SCNScene()
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 0)
        myScene.rootNode.addChildNode(cameraNode)

        // let sceneView = view as! SCNView
        sceneView.scene = myScene
        sceneView.backgroundColor = UIColor.orange

        let myPath = Bundle.main.path(forResource: "Mono_Audio", ofType: "mp3")
        let myURL = URL(fileURLWithPath: myPath!)
        let mySource = SCNAudioSource(url: myURL)!
        mySource.loops = true
        mySource.isPositional = true    // Positional Audio
        mySource.shouldStream = false   // FALSE for Positional Audio
        mySource.volume = volume
        mySource.reverbBlend = reverbBlend
        mySource.rate = rate
        mySource.load()

        let player = SCNAudioPlayer(source: mySource)
        let sphere: SCNGeometry = SCNSphere(radius: 0.1)
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.addChildNode(audioNode)
        myScene.rootNode.addChildNode(sphereNode)
        audioNode.addAudioPlayer(player)

        sceneView.audioEnvironmentNode.distanceAttenuationParameters.maximumDistance = 2
        sceneView.audioEnvironmentNode.distanceAttenuationParameters.referenceDistance = 0.1
        sceneView.audioEnvironmentNode.renderingAlgorithm = .auto

        // sceneView.audioEnvironmentNode.reverbParameters.enable = true
        // sceneView.audioEnvironmentNode.reverbParameters.loadFactoryReverbPreset(.plate)

        let hither = SCNAction.moveBy(x: 0, y: 0, z: 1, duration: 2)
        let thither = SCNAction.moveBy(x: 0, y: 0, z: -1, duration: 2)
        let sequence = SCNAction.sequence([hither, thither])
        let loop = SCNAction.repeatForever(sequence)
        sphereNode.runAction(loop)
    }
}
And, yes, you're absolutely right – there are some obligatory settings.
But there are 7 of them:
use the AVAudioMixing protocol with its stubs (properties and methods).
use a MONO audio file.
use source.isPositional = true.
use source.shouldStream = false.
assign a maximumDistance value to the distanceAttenuationParameters property.
assign a referenceDistance value to the distanceAttenuationParameters property.
and the location of mySource.load() in your code is very important.
P.S. If the aforementioned tips didn't help you, use these additional instance properties to make your sound even quieter, via an attenuation curve, obstacles, and the orientation of the implicit listener:
var rolloffFactor: Float { get set } // attenuation's graph, default = 1
var obstruction: Float { get set } // default = 0.0
var occlusion: Float { get set } // default = 0.0
var listenerAngularOrientation: AVAudio3DAngularOrientation { get set } //(0,0,0)
It definitely works if you write it in Objective-C.
In this example, the audioNode is 1 meter away from the listener.
If none of the above answers seem to work, try the following code:
sceneView.audioEnvironmentNode.reverbParameters.enable = true
And if even that seems to barely work, or if you want optimal performance, there is a property called level where you can set how strong the effect is:
sceneView.audioEnvironmentNode.reverbParameters.level = 40
(the reverbParameters level ranges from -40 to 40)

SceneKit avoid lighting on specific node

In SceneKit I'm building a node made of lines to draw the XYZ axes at the center of the scene, like in Cinema 4D.
I would like these 3 nodes not to participate in the global lighting and to remain visible even if the light is dark / nonexistent / too strong. In the picture below you can see that the Z axis is lit too heavily and can't be seen.
Is there a way to stop a node from participating in the scene's lighting, like with category masks for physics?
In that case, how would the node be lit so that it still appears?
SCNLight has a categoryBitMask property. This lets you choose which nodes are affected by the light (although this is ignored for ambient lights). You could have two light-source categories: one for your main scene, and another that only affects your lines.
Here is a simple example with 2 nodes, each lit with a different colour light:
struct LightType {
    static let light1: Int = 0x1 << 1
    static let light2: Int = 0x1 << 2
}

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene(named: "art.scnassets/scene.scn")!

        let lightNode1 = SCNNode()
        lightNode1.light = SCNLight()
        lightNode1.light!.type = .omni
        lightNode1.light!.color = UIColor.yellow
        lightNode1.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode1.light!.categoryBitMask = LightType.light1
        scene.rootNode.addChildNode(lightNode1)

        let lightNode2 = SCNNode()
        lightNode2.light = SCNLight()
        lightNode2.light!.type = .omni
        lightNode2.light!.color = UIColor.red
        lightNode2.position = SCNVector3(x: 0, y: 10, z: 10)
        lightNode2.light!.categoryBitMask = LightType.light2
        scene.rootNode.addChildNode(lightNode2)

        let sphere1 = scene.rootNode.childNode(withName: "sphere1", recursively: true)!
        sphere1.categoryBitMask = LightType.light1
        let sphere2 = scene.rootNode.childNode(withName: "sphere2", recursively: true)!
        sphere2.categoryBitMask = LightType.light2

        let scnView = self.view as! SCNView
        scnView.scene = scene
    }
}
I think it would be much easier to set the material's lighting model to constant.
yourNode.geometry?.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant