Is there any way to add a regular ARAnchor on a detected image? - objective-c

I'm trying to put a 3D object on top of a detected image, and it worked. But when I moved the camera around the image, the object didn't stay at the center of the image. Is there any way to add a regular anchor at the center of the image to help me fix the 3D object at the right position? The following code is what I've tried, but it didn't work.
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    if ([anchor isKindOfClass:[ARImageAnchor class]]) {
        ARAnchor *newAnchor = [[ARAnchor alloc] initWithTransform:anchor.transform];
        [self.sceneView.session addAnchor:newAnchor];
    }
}
When I detect an image and put a plane on it, the plane looks correctly centered at first. But when I move the camera to another position, the plane no longer sits at the center of the image.

There's no need to create a new anchor because there's already one provided by ARKit. You should add your 3D content to the node provided by this method.
According to the renderer:didAddNode:forAnchor: documentation:
You can provide visual content for the anchor by attaching geometry (or other SceneKit features) to this node or by adding child nodes.
So, in this method:
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    [node addChildNode:your3DObjectNode];
}
Then it should stay at the center of your image.
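For example, here is a minimal sketch of that approach (the plane geometry and its half-transparent material are my own illustration; the delegate method and the physicalSize property come from ARKit itself):
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    if (![anchor isKindOfClass:[ARImageAnchor class]]) { return; }
    ARImageAnchor *imageAnchor = (ARImageAnchor *)anchor;

    // Size the plane to the physical size of the detected reference image.
    CGSize size = imageAnchor.referenceImage.physicalSize;
    SCNPlane *plane = [SCNPlane planeWithWidth:size.width height:size.height];
    plane.firstMaterial.diffuse.contents = [UIColor colorWithWhite:1.0 alpha:0.5];

    SCNNode *planeNode = [SCNNode nodeWithGeometry:plane];
    // SCNPlane is vertical by default; rotate it to lie flat on the image.
    planeNode.eulerAngles = SCNVector3Make(-M_PI_2, 0, 0);

    // Because planeNode is a child of the anchor's node, ARKit keeps it
    // centered on the image as tracking updates.
    [node addChildNode:planeNode];
}
Since the node belongs to the image anchor, ARKit updates its transform on every frame, so the content tracks the image instead of drifting.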

Related

Utilizing MKOverlay `canReplaceMapContent` on only part of the MKMap content

I have created an overlay that I want to cover the whole world except a specific region where the app is focused. I subclassed MKPolygon as follows:
class ReplacingPolygion: MKPolygon {
    override func canReplaceMapContent() -> Bool {
        return true
    }
    override var boundingMapRect: MKMapRect { MKMapRect.world }
}
As I understand it, this means that my overlay will replace existing map content and be drawn over the whole world. I initialized the class with coordinates that cover the whole world map, plus an interior polygon that cuts my region of interest out of the larger overlay. Thus, this creates an overlay covering the whole world except the region I want.
ReplacingPolygion(coordinates: worldPolygon!, count: worldPolygon!.count, interiorPolygons: [userPolygon])
The issue is that, because I set canReplaceMapContent() to true, the map isn't drawn anywhere, including inside the interior polygon, which is transparent.
Is there a way to force MKMapView to render the tiles in the area described by the interiorPolygon above?
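For context, here is a minimal sketch of how such an overlay is typically added and rendered (the renderer setup and fill color are my assumptions, not part of the question; worldPolygon and userPolygon are taken from the question, and mapView is assumed to be an MKMapView with its delegate set):
let overlay = ReplacingPolygion(coordinates: worldPolygon!,
                                count: worldPolygon!.count,
                                interiorPolygons: [userPolygon])
mapView.addOverlay(overlay)

// MKMapViewDelegate
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    let renderer = MKPolygonRenderer(overlay: overlay)
    renderer.fillColor = .black // opaque everywhere except the interior cut-out
    return renderer
}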

How to save a scaled image inside QGraphicsView using QFileDialog

I have a user interface with:
1 push button used to upload images,
2 QGraphicsViews (left and right), and
1 push button that takes a screenshot of the image currently loaded in the left QGraphicsView.
Using the mouse it is possible to:
1) zoom in and out on the image, and
2) draw rectangles on the image.
I want the screenshot to capture the image at the zoom level I am currently using. However, the saved file shows the entire image (wrong, because I wanted only the zoomed portion) with the rectangles drawn on it (this part is correct).
According to this post, QFileDialog was used in a way similar to what I am trying to do. I successfully used QFileDialog::getSaveFileName() to save the image, but it does not entirely solve the problem.
Below is the push-button slot that takes the screenshot of the image in the left QGraphicsView:
void MainWindow::on_addNewRecordBtn_clicked()
{
    leftScene->clearSelection(); // Selections would also render to the file
    leftScene->setSceneRect(leftScene->itemsBoundingRect()); // Re-shrink the scene to its bounding contents
    QImage image(leftScene->sceneRect().size().toSize(), QImage::Format_ARGB32); // Create the image with the exact size of the shrunk scene
    image.fill(Qt::transparent); // Start all pixels transparent
    QPainter painter(&image);
    leftScene->render(&painter);
    image.save(QFileDialog::getSaveFileName(this, tr("New Image Name"), QDir::rootPath(),
                                            "Name (*.jpg *.jpeg *.png *.tiff *.tif)"));
}
The expected result would be to save only the zoomed view (zoom.jpg, for example). However, when I save the image, the result I obtain is always the entire image with the drawn features.
In case anyone else needs it: it is possible to take a screenshot of the image at whatever zoom level is active, zoomed in or out. The following statement does the job, grabbing the view exactly as it is currently presented:
QImage image = ui->leftView->grab().toImage();
The only glitch is that the horizontal and vertical scroll bars (depending on the zoom) are also captured in the image. You can avoid that by turning them off right before taking the screenshot and turning them back on right after.
Basically my previous function can be better written as follows:
void MainWindow::on_addNewRecordBtn_clicked()
{
    leftScene->setSceneRect(leftScene->itemsBoundingRect());

    // Turn the scroll bars off
    ui->leftView->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    ui->leftView->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);

    QImage image = ui->leftView->grab().toImage();
    image.save(QFileDialog::getSaveFileName(this, tr("New Image Name"), QDir::rootPath(),
                                            "Name (*.jpg *.jpeg *.png *.tiff *.tif)"));

    // Turn the scroll bars back on
    ui->leftView->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOn);
    ui->leftView->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOn);
}
I hope this is helpful in case you encounter the same problem.
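One small refinement worth considering (my own suggestion, not part of the original answer): QFileDialog::getSaveFileName() returns an empty string when the user cancels the dialog, so it may be safer to check the file name before saving:
const QString fileName = QFileDialog::getSaveFileName(this, tr("New Image Name"), QDir::rootPath(),
                                                      "Name (*.jpg *.jpeg *.png *.tiff *.tif)");
if (!fileName.isEmpty())
    image.save(fileName); // skip saving entirely when the dialog was cancelled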

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) on an invisible sphere of constant radius that surrounds the object.
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep pointing at the object's center of mass.
Does anyone have a clue how to achieve this?
Thanks for the help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // >= rather than ==, since float increments may never hit the angle exactly
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // re-aim at the object after moving
}
Of course, your buttons would set what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating in the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.
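As a sketch of that tweening idea (my own illustration, not part of the original answer; renderer, scene, camera, and myObject are assumed to exist as above), you can ease the angle toward the target each frame instead of stepping at a constant speed:
function animate() {
    requestAnimationFrame(animate);
    // Move 5% of the remaining arc each frame: fast at first, slowing near the target.
    cameraAngle += (desiredAngle - cameraAngle) * 0.05;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep the object centred while orbiting
    renderer.render(scene, camera);
}
animate();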

Thumbnail of previous image instead of cancel button while camera is open

I want to show a thumbnail of the previous image taken by the camera, in place of the cancel button, while the camera is running. Is that possible?
Yep. Just capture the last image, keep it in memory (or save it to disk), then use it as one of the controls. We can do this using the overlay property of the Titanium.Media.showCamera function. Here is a quick example:
First we need an overlay view to show the image.
var overlayView = Ti.UI.createView();
var imageView = Ti.UI.createImageView({
    width: 44,
    height: 44,
    left: 5
});
overlayView.add(imageView);
Now this is the call we use to open the camera with the overlay view. Note that we don't have controls, so you need to add those yourself (for closing etc.). All we do right now is set the overlay's image.
Titanium.Media.showCamera({
    success: function(event) {
        // Called when media is returned from the camera
        imageView.image = event.media;
    },
    cancel: function() {},
    error: function(error) {},
    saveToPhotoGallery: true,
    allowEditing: true,
    mediaTypes: [Ti.Media.MEDIA_TYPE_PHOTO],
    overlay: overlayView,
    showControls: false // This is important!
});
To really make this work, you may need to save event.media in a global variable, or use a similar technique to make sure overlayView is not nulled out or garbage collected.
Also, this is a bare-bones solution, not very robust, but it is the basic method I would use!
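Since showControls is false, you also need your own way to dismiss the camera. A minimal sketch (the button itself is my own illustration; Titanium.Media.hideCamera() is the standard call for closing the camera UI):
var closeButton = Ti.UI.createButton({
    title: 'Close',
    right: 5,
    bottom: 5
});
closeButton.addEventListener('click', function() {
    Titanium.Media.hideCamera(); // dismiss the camera UI
});
overlayView.add(closeButton);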

(C++/CLI) Testing Mouse Position in a Rectangle Relevant to Parent

I've been messing around with the Graphics class to draw some things on a panel. So far, I've just been using the Rectangle structure to draw. Clicking a button creates a rectangle in a random place on the panel and adds it to an array of other rectangles (they're actually a class called UIElement, which contains a Rectangle member). When the panel is clicked, it runs a test over all the elements to see if the mouse is inside any of them, like this:
void GUIDisplay::checkCollision()
{
    Point cursorLoc = Cursor::Position;
    for (int a = 0; a < MAX_CONTROLS; a++)
    {
        if (elementList[a] != nullptr)
        {
            if (elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                //MessageBox::Show("Click!", "Event");
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}
The problem is, when I click the rectangle, nothing happens.
The UIElement class draws its rectangles in the following bit of code. I've modified it a bit for this example: it uses the ControlPaint::DrawReversibleFrame method to do the actual drawing, whereas I had been using Graphics::FillRectangle. When I made the change, I noticed that DrawReversibleFrame drew in a different place than FillRectangle. I believe this is because DrawReversibleFrame draws at positions relative to the window, while FillRectangle draws relative to whatever control's Paint event it runs in (mine runs in a panel's Paint handler). Let me just show the code:
void UIElement::render(Graphics^ g)
{
    if (selected)
    {
        Pen^ line = gcnew Pen(Color::Black, 3);
        //g->FillRectangle(gcnew SolidBrush(Color::Red), bounds);
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::Highlight, FrameStyle::Thick);
        g->FillRectangle(gcnew SolidBrush(Color::Black), bounds);
        //g->DrawLine(line, bounds.X, bounds.Y, bounds.Size.Width, bounds.Size.Height);
    }
    else
    {
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::ControlDarkDark, FrameStyle::Thick);
        //g->FillRectangle(gcnew SolidBrush(SystemColors::ControlDarkDark), bounds);
    }
}
I drew with both DrawReversibleFrame and FillRectangle so that I could see the difference. When I clicked the frame drawn by DrawReversibleFrame, the orange frame was where I clicked, but the black one rendered somewhere else. This shows me that the Rectangle's Contains() method is looking for the rectangle relative to the window, not the panel. That's what I need fixed :)
I'm wondering if this is happening because the collision is tested outside of the panel's Paint method. But I don't see how I could implement this collision testing inside the Paint method.
UPDATE:
OK, so I just discovered that what DrawReversibleFrame and FillRectangle draw are always a certain distance apart. I don't quite understand this, but someone else might.
Both Cursor::Position and DrawReversibleFrame operate in screen coordinates, that is, coordinates for the entire screen (everything on your monitor), not just your window. FillRectangle, on the other hand, operates in window coordinates, that is, positions within your window.
If you take your example where you were drawing with both and the two boxes are always the same distance apart, and move your window on the screen then click again, you will see that the difference between the two boxes changes. It will be the difference between the top left corner of your window and the top left corner of the screen.
This is also why your check for which rectangle was clicked isn't hitting anything. You are testing the cursor position in screen coordinates against rectangle coordinates in window space. It is possible that it would hit one of the rectangles, but it probably won't be the one you actually clicked on.
You always have to know what coordinate system your variables are in. This is related to the original intention of Hungarian notation, which Joel Spolsky talks about in his entry Making Wrong Code Look Wrong.
Update:
PointToScreen and PointToClient should be used to convert coordinates between screen and window coordinates.
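Applied to the checkCollision method above, a minimal sketch of the fix (assuming m_pDisplay is the panel the rectangles are drawn on; only the PointToClient call changes):
void GUIDisplay::checkCollision()
{
    // Convert the cursor from screen coordinates into the panel's client coordinates
    // so it matches the coordinate space the rectangles were drawn in.
    Point cursorLoc = m_pDisplay->PointToClient(Cursor::Position);
    for (int a = 0; a < MAX_CONTROLS; a++)
    {
        if (elementList[a] != nullptr)
        {
            if (elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}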