I am trying to write a custom camera application in Android.
For that I need to open the camera.
I am trying the following code:
Camera camera = Camera.open();
but it shows an error like:
The method open() is undefined for the type Camera
I did as suggested here: http://developer.android.com/reference/android/hardware/Camera.html#open(int)
Any suggestions?
Thanks,
Ravindra Gupta
You most likely imported the wrong camera class at the top of your source file, which is android.graphics.Camera.
You need android.hardware.Camera instead.
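For illustration, a minimal sketch of the corrected import and call (the helper class and method names here are just placeholders):
// Correct import: the hardware camera class, not the graphics one
import android.hardware.Camera;

public class CameraHelper {
    public static Camera openDefaultCamera() {
        // open() is a static method on android.hardware.Camera; with no
        // arguments it opens the first back-facing camera and throws a
        // RuntimeException if the camera is already in use.
        return Camera.open();
    }
}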
Thanks
I think you have not added the camera permission. You need to add this to your manifest:
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Check your imports. I had a similar problem and the Camera object Eclipse chose for me was: import android.graphics.Camera; instead it should be: import android.hardware.Camera;
If none of the above work:
check whether you are requesting the camera permission at runtime. On newer Android versions (API 23 and above), dangerous permissions are granted at runtime, not at install time.
See: https://developer.android.com/training/permissions/requesting.html
Please declare the variable with the fully qualified class name:
android.hardware.Camera camera;
and then call the static open() method:
camera = android.hardware.Camera.open();
// this works in my Android Studio project
I faced the same problem until I realized that older versions of Android work fine, but from Android Marshmallow onwards a runtime permission is required before you can proceed and show the camera.
You can read about it at this link: https://developer.android.com/training/permissions/requesting.html
For my part, I used a third-party library from this link to handle all of this for me, and everything was resolved:
https://android-arsenal.com/details/1/2804
Hope it helps
I have faced a lot of issues while integrating the native Camera/Camera2 APIs; the code was bulky. To avoid complexity and compatibility issues, Google provides the new CameraX API in the Android Jetpack library. See the documentation at https://developer.android.com/training/camerax. There is also a Kotlin-based demo I found on GitHub: https://github.com/robertlevonyan/CameraXDemo. You get more clarity with less code.
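As a rough illustration of how compact a CameraX preview is (a sketch only, assuming the androidx.camera dependencies are added; context, previewView and lifecycleOwner are placeholders, and this is not taken from the linked demo):
ListenableFuture<ProcessCameraProvider> providerFuture =
        ProcessCameraProvider.getInstance(context);

providerFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = providerFuture.get();
        // A preview use case rendered into a PreviewView from the layout
        Preview preview = new Preview.Builder().build();
        preview.setSurfaceProvider(previewView.getSurfaceProvider());
        // CameraX opens and releases the camera with the given lifecycle
        cameraProvider.bindToLifecycle(lifecycleOwner,
                CameraSelector.DEFAULT_BACK_CAMERA, preview);
    } catch (ExecutionException | InterruptedException e) {
        // The provider future failed; log or handle as appropriate
    }
}, ContextCompat.getMainExecutor(context));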
Waged is right!
A permission request is required, and I found the right code; it works.
I advise you to look at this article for initially connecting the camera:
https://habr.com/ru/post/112272/
if (ContextCompat.checkSelfPermission(MainActivity.this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(MainActivity.this,
            new String[] { Manifest.permission.CAMERA }, 15);
}
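The result of that request comes back in onRequestPermissionsResult; a minimal sketch of handling it (the request code 15 matches the call above):
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == 15
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        // Camera permission granted: it is now safe to open the camera
    }
}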
I followed the docs on crashlytics/quick-start with autolinking, using react-native 0.60.5.
Using the code below (exact copy from docs), I got _crashlytics.default.log is not a function.
import crashlytics from '@react-native-firebase/crashlytics';
function forceCrash() {
crashlytics.log('Testing crash');
crashlytics.crash();
}
My understanding is that I don't need to change any files in the /android/ folder.
The project source code is on github repo.
This was a mistake in the documentation which has been fixed and will be live soon. Correct usage should be:
crashlytics().log('...');
crashlytics().crash();
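Applied to the snippet from the question, the corrected function would look roughly like this (a sketch based on the fix above):
import crashlytics from '@react-native-firebase/crashlytics';

function forceCrash() {
  // crashlytics is a factory function: call it first to get the module instance
  crashlytics().log('Testing crash');
  crashlytics().crash();
}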
I am working with Expo and React Native, and I want to be able to detect text from images. Is there a package I can work with to achieve this?
I am using the Expo camera module to snap the picture and supply the URI to the text detector.
I have tried using react-native-text-detector, but I am getting the error that the function detectFromUri is not defined. I have also tried tesseract.js, but it fails on import with "unable to resolve variable location".
await this.camera.takePictureAsync(options).then(async photo => {
photo.exif.Orientation = 1;
//console.log(photo.uri);
const visionResp = await RNTextDetector.detectFromUri(photo.uri);
if (!(visionResp && visionResp.length > 0)) {
throw "UNMATCHED";
}
console.log(visionResp);
});
I am expecting the visionResp to log the results returned from the detection but instead i get undefined is not an object (evaluating '_reactNativeTextDetector.default.detectFromUri')
Is your project created with expo-cli?
If yes, Expo does not currently support OCR. There is a feature request on canny.io, but you can't know for sure when it will become available.
Your only choice is to use an OCR service like this one. Internet connectivity will be required.
If not (and the project is created with react-native-cli), you should be able to use react-native-text-detector successfully. Just make sure you link the package correctly. Docs here
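If the package is linked correctly, the call from the question would look roughly like this (a sketch; RNTextDetector, detectFromUri and options come from the question's code):
import RNTextDetector from 'react-native-text-detector';

async function recognizeText(camera, options) {
  // Await the capture directly instead of mixing .then() with await
  const photo = await camera.takePictureAsync(options);
  const visionResp = await RNTextDetector.detectFromUri(photo.uri);
  if (!visionResp || visionResp.length === 0) {
    throw new Error('UNMATCHED');
  }
  console.log(visionResp);
  return visionResp;
}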
I'm using RNFirebase to integrate Firebase Analytics into a React Native iOS application, but I'm not sure what the best way is to distinguish between prod/dev environments.
I've found one possible solution in Swift:
let filePath = Bundle.main.path(forResource: "MyGoogleService", ofType: "plist")
guard let fileopts = FirebaseOptions(contentsOfFile: filePath!)
else { assert(false, "Couldn't load config file") }
FirebaseApp.configure(options: fileopts)
But I'm looking for a some sort of RNFirebase api to support this. Any thoughts?
thanks in advance
One way you can do this is by having multiple google-services.plist files outside your ios directory and using package scripts to overwrite the one in the ios directory as and when needed.
Someone touched on this here and in this issue. @akshetpandey's comment on the issue seems the most elegant way of doing it.
It's on our todo list to document this, we'll get to it as soon as we can.
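For illustration, a small Node script along those lines could be wired into the package scripts (a sketch only; the file names and paths are placeholders):
// scripts/select-firebase-env.js
const fs = require('fs');

// Usage: node scripts/select-firebase-env.js dev|prod
const env = process.argv[2] || 'dev';
const source = `firebase/GoogleService-Info.${env}.plist`;
const target = 'ios/GoogleService-Info.plist';

// Overwrite the plist that the iOS build actually picks up
fs.copyFileSync(source, target);
console.log(`Copied ${source} -> ${target}`);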
I'm trying to open the device camera and immediately activate the LED light of the device (Android/iOS).
I've tried the Appcelerator Ti.Media events, but they didn't work, nor did this module: Ti.Light.
I found this at this link: activate-iphone-4-led-light
Hey guys!
For the flash stuff you have to check the property:
Ti.Media.cameraFlashMode (case sensitive)
To change it you can use Ti.Media.setCameraFlashMode(PARAM) .
PARAM could be: Ti.Media.CAMERA_FLASH_OFF , Ti.Media.CAMERA_FLASH_ON,
Ti.Media.CAMERA_FLASH_AUTO
Unfortunately you can’t start the led and use it as a torch, you can
only control the camera flash handling (on, off, auto) while taking a
photo.
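(For reference, the quoted flash-mode property would be used roughly like this; a sketch around Ti.Media.showCamera, not from the original post:)
// Set the flash mode before showing the camera
Ti.Media.setCameraFlashMode(Ti.Media.CAMERA_FLASH_ON);

Ti.Media.showCamera({
    success: function (event) {
        // event.media holds the captured photo blob
        Ti.API.info('Current flash mode: ' + Ti.Media.cameraFlashMode);
    },
    cancel: function () {},
    error: function (error) {
        Ti.API.error('Camera error: ' + error.code);
    }
});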
Is there any module that allows using the LED light all the time? I just need this while the camera is open.
UPDATE #1:
I'm trying to use your ts.camera widget, which has embedded camera and flash methods:
github - ts.camera
gittio - ts.camera
But there is no switchFlashlight() method in the “pw.custom.androidcamera” module. Does this widget work?
github - Ti-Android-CameraView
gittio - pw.custom.androidcamera
UPDATE #2:
In order to find a workaround, I've added this flashlight module, and I'm trying to call it before or after showing the camera, but I think it's not possible to have two camera activities at the same time.
This is my index.js file:
if (OS_ANDROID) {
    // Turn the LED on with the flashlight module before opening the camera view
    flash = require('dk.napp.flashlight');
    if (!flash.isFlashLightOn()) flash.turnFlashLightOn();

    // Then create the custom camera view
    camera = require('pw.custom.androidcamera');
    view = camera.createCameraView();
}
I'm getting this error:
[DEBUG] : CameraViewProxy: Camera not available
[ERROR] : CameraViewProxy: Camera is null. Make sure
[ERROR] : CameraViewProxy: <uses-permission android:name="android.permission.CAMERA" />
[ERROR] : CameraViewProxy: is in you tiapp.xml file.
This is my tiapp.xml file:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.FLASHLIGHT"/>
<uses-feature android:name="android.hardware"/>
<uses-feature android:name="android.hardware.camera"/>
<uses-feature android:name="android.hardware.camera2"/>
<uses-feature android:name="android.hardware.camera2.params"/>
<uses-feature android:name="android.hardware.camera.flash"/>
<uses-feature android:name="android.hardware.camera.autofocus"/>
These two modules might do the job:
http://gitt.io/search?q=flash
Although it could well be that activating the device camera will override the control over the flash light.
I have a simple plugin that just does something like this:
chrome.extension.onMessage.addListener(function(msg, _, sendResponse) {
log("Got message from background page: " + msg);
});
unfortunately when my panel is loaded the following error is shown:
TypeError: Cannot call method 'addListener' of undefined
and according to my tests chrome.extension.onMessage is undefined
According to this page http://code.google.com/chrome/extensions/messaging.html I should be able to access this chrome API from my page so it has to be something small that I am missing here...
Please note methods chrome.extension.onRequest and chrome.extension.sendRequest, as originally suggested in this answer, are deprecated as of Chrome 33.
You should use
chrome.extension.onRequest
instead of
chrome.extension.onMessage
And in background page or any other extension scripts:
chrome.tabs.sendRequest
instead of
chrome.tabs.sendMessage
(The documentation is outdated... an alert to the Google team ;))
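For reference, on current Chrome versions (where the onRequest/sendRequest pair noted above is deprecated), the equivalent messaging calls look roughly like this (a sketch; tabId is a placeholder):
// In the content script / panel page
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
  console.log('Got message from background page: ' + msg);
  sendResponse({ ok: true });
});

// In the background page or another extension script
chrome.tabs.sendMessage(tabId, 'hello from background', function (response) {
  console.log(response);
});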
Just a side note: the Yandex browser (mostly oriented toward Russian users), which is also based on Chromium, still (as of 11/10/2012, ver. 1.0) has the .*Request methods instead of the .*Message ones. Many thanks to Ciprian Amariei for the tip, it saved me a lot of time!
PS: This should actually be a comment on Ciprian Amariei's answer, but unfortunately I can't leave comments yet and I thought this information could be very helpful to those who develop extensions for the Yandex browser.
Make sure you're using the latest Google Chrome version. Older versions don't have the chrome.extension.onMessage API.