I have a Java app that needs to get the camera image from ARCore for custom rendering. Two new APIs were recently added to ARCore (v1.1.0) that should support this. Everything else works for me, but both of these calls fail with the same error (SIGINIT), and I get this message in Logcat:
04-18 14:52:32.007 32623-32623/ A/Ion: [java/com/google/vr/dynamite/client/native/dynamite_client.cc:74] CHECK failed: expression='"env"'
04-18 14:52:32.007 32623-32623/ A/Ion: Dumping stack:
The relevant code is:
try
{
    // Obtain the current frame from the session.
    mSession.setCameraTextureName( mTextureId );
    Frame frame = mSession.update();
    TrackingState trackingState = frame.getCamera().getTrackingState();
    if ( trackingState == TrackingState.TRACKING )
    {
        // Compute lighting; this is also crashing:
        //final float[] colorCorrectionRgba = new float[4];
        //frame.getLightEstimate().getColorCorrection( colorCorrectionRgba, 0 );
        // getPixelIntensity() works fine:
        float lightIntensity = frame.getLightEstimate().getPixelIntensity();
        Log.d( TAG, " light intensity is " + lightIntensity );
        // This is also crashing:
        android.media.Image image = frame.acquireCameraImage();
        // do something useful with the image
        image.close();
    }
}
catch ( Exception e )
{
    Log.e( TAG, "Exception while updating the frame", e );
}
The app-level build.gradle has the following entry under dependencies:
implementation 'com.google.ar:core:1.1.0'
I have tried this with ProGuard both enabled and disabled.
Can anyone spot what I am doing wrong?
PS: In case it's not clear, the failing calls are frame.acquireCameraImage() and frame.getLightEstimate().getColorCorrection().
I'm a newbie to the Intel RealSense SDK, coding in Visual Studio 2017 (C or C++) for the Intel RealSense D435 camera.
In my example I have the following:
static rs2::frameset current_frameset;
auto color = current_frameset.get_color_frame();
frame = cvQueryFrame(color);
I get an error on line 3: "cannot convert 'rs2::video_frame' to 'CvCapture'".
I haven't been able to find a solution to this issue; my attempts have only produced more errors.
Does anyone know how I can overcome this problem?
Thanks for the help!
cvQueryFrame accepts a CvCapture instance and is used to retrieve a frame from a camera. In librealsense, the frame you get back from the pipeline can already be used directly; you don't have to query it again. Attached is the snippet from the OpenCV example in librealsense (you can refer to the complete code here); a sketch for the color frame follows it.
// Colorizer used below to turn depth data into a displayable image
rs2::colorizer color_map;
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();

using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);

while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
{
    rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
    rs2::frame depth = color_map(data.get_depth_frame());

    // Query frame size (width and height)
    const int w = depth.as<rs2::video_frame>().get_width();
    const int h = depth.as<rs2::video_frame>().get_height();

    // Create OpenCV matrix of size (w,h) from the colorized depth data
    Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

    // Update the window with new data
    imshow(window_name, image);
}
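Since your goal is the color frame rather than depth, here is a minimal sketch of the same idea for the color stream. This assumes the stream is delivering BGR8 data; if your configuration delivers RGB8, convert with cv::cvtColor or request RS2_FORMAT_BGR8 via rs2::config.
// Hypothetical follow-up: grab the color frame instead of depth
rs2::frameset data = pipe.wait_for_frames();
rs2::video_frame color = data.get_color_frame();

const int w = color.get_width();
const int h = color.get_height();

// Wrap the frame's buffer in an OpenCV matrix; no pixel data is copied
Mat color_image(Size(w, h), CV_8UC3, (void*)color.get_data(), Mat::AUTO_STEP);
imshow("Color Image", color_image);
Because the Mat only references the frame's buffer, keep the rs2::video_frame alive for as long as you use the matrix, or clone() it first.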
I am attempting to visualize audio coming out of an <audio> element on a webpage. The source for that element is a WebRTC stream connected to an Asterisk call via sip.js. The audio itself works as intended.
However, when I try to get the frequency data using the Web Audio API, it returns an array of all 0's, even though the audio is playing. This seems to be a problem with createMediaElementSource: if I call getUserMedia and use createMediaStreamSource to connect my microphone instead, I do get frequency data back.
I tried this in both Chrome 40.0 and Firefox 31.4. In my search I found similar errors with Android Chrome, but my versions of desktop Chrome and Firefox seem like they should work. So far my feeling is that the error may be due to the audio player getting its audio from another AudioContext in sip.js, or something to do with CORS. All of the demos I have tried work correctly, but they only use createMediaStreamSource to get mic audio, or use createMediaElementSource to play a file (rather than streaming to an element).
My Code:
var context = new (window.AudioContext || window.webkitAudioContext)();

var analyser = context.createAnalyser();
analyser.fftSize = 64;
analyser.minDecibels = -90;
analyser.maxDecibels = -10;
analyser.smoothingTimeConstant = 0.85;

var frequencyData = new Uint8Array(analyser.frequencyBinCount);

var visualisation = $("#visualisation");
var barSpacingPercent = 100 / analyser.frequencyBinCount;
for (var i = 0; i < analyser.frequencyBinCount; i++) {
    $("<div/>").css("left", i * barSpacingPercent + "%").appendTo(visualisation);
}
var bars = $("#visualisation > div");

function update() {
    window.requestAnimationFrame(update);
    analyser.getByteFrequencyData(frequencyData);
    bars.each(function (index, bar) {
        bar.style.height = frequencyData[index] + 'px';
        console.debug(frequencyData[index]);
    });
}

$("audio").bind('canplay', function() {
    var source = context.createMediaElementSource(this);
    source.connect(analyser);
    update();
});
Any help is greatly appreciated.
Chrome doesn't support WebAudio processing of RTCPeerConnection output streams (remote streams); see this question. Their bug is here.
Edit: this is now supported as of Chrome 50.
See the test code for Firefox that is about to land as part of Bug 1081819. That bug adds WebAudio input to RTCPeerConnections in Firefox; working WebAudio processing of output MediaStreams has been there for some time. The test code exercises both sides; note that it depends a lot on the test framework, so just use it as a guide for hooking into WebAudio.
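As a rough sketch of the stream-based route, reusing the context and analyser from your code: remoteStream is a hypothetical variable standing for the remote MediaStream you get from sip.js or the RTCPeerConnection (the exact accessor depends on your sip.js version).
// Feed the remote WebRTC stream straight into the analyser
var source = context.createMediaStreamSource(remoteStream);
source.connect(analyser);
update();
// Don't also connect source to context.destination if the <audio> element
// is already playing the stream, or you will hear the audio twice.
Per the answer above, on Chrome builds before 50 this path will still return zeros for remote streams.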
I'm a newbie with Ogre3D and I need help on a certain point.
I'm trying a library that mixes the Ogre3D engine and Qt QML:
http://advancingusability.wordpress.com/2013/08/14/qmlogre-is-now-a-library/
This library works fine when you want to draw some objects and then rotate or translate objects that were already initialised in a first step.
void initialize(){
    // we only want to initialize once
    disconnect(this, &ExampleApp::beforeRendering, this, &ExampleApp::initializeOgre);

    // start up Ogre
    m_ogreEngine = new OgreEngine(this);
    m_root = m_ogreEngine->startEngine();
    m_ogreEngine->setupResources();

    m_ogreEngine->activateOgreContext();
    // draw a small cube
    new DebugDrawer(m_sceneManager, 0.5f);
    DrawCube(100,100,100);
    DebugDrawer::getSingleton().build();
    m_ogreEngine->doneOgreContext();

    emit(ogreInitialized());
}
But if you want to draw or change the scene after this initialisation step, it becomes problematic.
In plain Ogre3D (without the QmlOgre library), you would use a FrameListener, which hooks into the rendering thread and allows your scene to be repainted. Here, however, there are two OpenGL contexts: one for Qt and one for Ogre.
So if you try to add the usual rendering code:
createScene();
createFrameListener();

// The render loop
m_root->startRendering();
//createScene();
while(true)
{
    Ogre::WindowEventUtilities::messagePump();
    if(pRenderWindow->isClosed())
        std::cout<<"pRenderWindow close"<<std::endl;
    if(!m_root->renderOneFrame())
        std::cout<<"root renderOneFrame"<<std::endl;
}
the app freezes. I know that startRendering() is itself a render loop, so the loop below it never gets executed, but I don't know where to put those lines or how to correct this part.
I've also tried to add a background context and swap buffers:
void OgreEngine::updateOgreContext()
{
    glPopAttrib();
    glPopClientAttrib();

    m_qtContext->functions()->glUseProgram(0);
    m_qtContext->doneCurrent();
    delete m_qtContext;

    m_BackgroundContext = QOpenGLContext::currentContext();

    // create a new shared OpenGL context to be used exclusively by Ogre
    m_BackgroundContext = new QOpenGLContext();
    m_BackgroundContext->setFormat(m_quickWindow->requestedFormat());
    m_BackgroundContext->setShareContext(m_qtContext);
    m_BackgroundContext->create();
    m_BackgroundContext->swapBuffers(m_quickWindow);
    //m_ogreContext->makeCurrent(m_quickWindow);
}
but I get the same error:
OGRE EXCEPTION(7:InternalErrorException): Cannot create GL vertex buffer in GLHardwareVertexBuffer::GLHardwareVertexBuffer at Bureau/bibliotheques/ogre_src_v1-8-1/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 46)
I'm very stuck and don't know what to do. Thanks!
I am trying to save an image from the photo gallery to local storage so that I can load it across application sessions. Once the user has finished selecting the image, the following logic is executed. On the simulator I see the error message written to the log; even so, the image still seems to be saved, because when I restart the application I can load it. When I run this on a device, though, I still get the error message you see in the code below, and the default background is loaded, which indicates the write was not successful.
Can anyone see what I am doing wrong and why the image won't save successfully?
var image = i.media.imageAsResized(width, height);
backgroundImage.image = image;

function SaveBackgroundImage(image)
{
    var file = Ti.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, W.CUSTOM_BACKGROUND);
    if(file.write(image, false))
    {
        W.analytics.remoteLog('Success Saving Background Image');
    }
    else
    {
        W.analytics.remoteLog('Error Saving Background Image');
    }
    file = null;
}
Try this code:
var parent = Titanium.Filesystem.getApplicationDataDirectory();
var f = Titanium.Filesystem.getFile(parent, 'image_name.png');
f.write(image);
Ti.API.info(f.nativePath); // it will return the native path of image
In your code I think you are not giving the image file a type/extension (png/jpeg); that's why you're getting the error.
I'm following a tutorial on making a top-down/isometric camera and have run into a bit of a snag: the following comes up when I compile.
BGCGamePawn.uc(15) : Error, Type mismatch in '='
Now, I've managed to get this far, so I understand that the problem lies in the following bit of code. Line 15 is marked below.
//override to make player mesh visible by default
simulated event BecomeViewTarget( PlayerController PC )
{
    local UTPlayerController UTPC;

    Super.BecomeViewTarget(PC);

    if (LocalPlayer(PC.Player) != None)
    {
        UTPC = BGCGamePlayerController(PC);  // <-- line 15
        if (UTPC != None)
        {
            //set player ctrl to behind view and make mesh visible
            UTPC.SetBehindView(true);
            SetMeshVisibility(True);
            UTPC.bNoCrosshair = true;
        }
    }
}
Does BGCGamePlayerController extend UTPlayerController? If not, that is the problem: you're casting your PlayerController parameter to a BGCGamePlayerController but then storing it in a local UTPlayerController variable. You'll need to either change the type of your local variable or change the class hierarchy of BGCGamePlayerController.
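A rough sketch of both options, using the names from your snippet (purely illustrative, not tested against your project):
// Option 1: make your controller extend UTPlayerController in its class
// declaration, so assigning the cast result to a UTPlayerController local works
class BGCGamePlayerController extends UTPlayerController;

// Option 2: leave the hierarchy alone and type the local variable to match the cast
local BGCGamePlayerController BGCPC;
BGCPC = BGCGamePlayerController(PC);
Note that with option 2 the later calls (SetBehindView, bNoCrosshair) still have to exist on BGCGamePlayerController, which in practice points back to extending UTPlayerController.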