Is it possible using the Cordova camera API to take a picture and then store it locally in the camera roll on iOS and Android? I know it's possible, but does it involve native code somehow, or can it be done in pure HTML? The documentation doesn't say anything about this.
Simply add saveToPhotoAlbum: true to the cameraOptions param.
For example:
navigator.camera.getPicture(onPhotoDataSuccess, onFail, {
    quality: 50,
    destinationType: Camera.DestinationType.FILE_URI,
    saveToPhotoAlbum: true
});
saveToPhotoAlbum is set to false by default.
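For completeness, the callbacks referenced above might look roughly like this (a sketch; the element ID is a placeholder):
// With FILE_URI, the success callback receives a URI/path to the captured image
function onPhotoDataSuccess(imageURI) {
    var img = document.getElementById('capturedPhoto'); // placeholder element ID
    img.src = imageURI;
}

function onFail(message) {
    console.log('Camera failed: ' + message);
}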
Reference
Do you mean the device's gallery?
If so, just use FILE_URI for the Camera.DestinationType option.
Reference: http://docs.phonegap.com/en/2.6.0/cordova_camera_camera.md.html#cameraOptions
Related
I want to integrate a screen share feature in my react-native application, in which I am using Twilio for video communication. On the web we are able to achieve this by following these steps.
1: We get the display media stream using
navigator.mediaDevices.getDisplayMedia({
video: true,
});
2: Then we get the first video track of the stream using
const newScreenTrack = first(stream.getVideoTracks());
3: After that we wrap this newScreenTrack in a Twilio LocalVideoTrack and keep it in a useState
const localScreenTrack = new TwilioVideo.LocalVideoTrack(
newScreenTrack
);
4: After that we first unpublish the previous tracks and publish the new track using
videoRoom.localParticipant.publishTrack(newScreenTrack, {
name: "screen_share",
});
5: And finally we pass these tracks to our ScreenShare component and render them to view the screen share from the remote participant (the combined flow is sketched below).
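Put together, the web flow is roughly this (a sketch; videoRoom is our connected Room and first comes from lodash):
import { first } from "lodash";
import * as TwilioVideo from "twilio-video";

// Rough sketch of steps 1-4 above
async function startScreenShare(videoRoom) {
  // 1. Ask the browser for a screen-capture stream
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });

  // 2. Take the first video track from that stream
  const newScreenTrack = first(stream.getVideoTracks());

  // 3. Wrap it in a Twilio LocalVideoTrack (kept in state in our app)
  const localScreenTrack = new TwilioVideo.LocalVideoTrack(newScreenTrack);

  // 4. Publish the track so remote participants can subscribe to it
  await videoRoom.localParticipant.publishTrack(newScreenTrack, {
    name: "screen_share",
  });

  return localScreenTrack;
}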
I need to do the same thing in my react-native application as well, where the localParticipant asks another participant for screen share permission; that participant accepts and is then able to publish the local screen share tracks.
If anyone knows how to do this, please help me with it. It would be really helpful. Thank you.
I think this is an issue with the react-native-twilio-video-webrtc package. It seems, as you discovered in this issue, that screen sharing was previously a feature of the library and was removed as part of a refactor.
Sadly, the library does more work than the underlying Twilio libraries to manage the video and audio tracks. The Twilio library is built to publish more than one track at a time; however, this React Native library only allows you to publish a single audio track and a single camera video track at a time.
In order to add screen sharing, you can either support pull requests like this one or refactor the library to separate getting access to the camera from publishing a video track, so that you can publish multiple video tracks at a time, including screen tracks.
I am building an app for video streaming using HLS from S3.
I want to support the functionality to select the video quality.
I'm still unable to find how to select the desired quality.
Can anyone please help me with this issue?
If someone knows a react-native API or some other solution, please help.
Thanks
It is possible to select a video quality manually. According to the react-native-video documentation, use the selectedVideoTrack prop.
See https://github.com/react-native-community/react-native-video#selectedvideotrack
Example:
selectedVideoTrack={{
type: "resolution",
value: 1080
}}
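For context, a minimal sketch of how this prop could sit on a react-native-video player (the HLS URL is a placeholder):
import React from 'react';
import Video from 'react-native-video';

// Minimal sketch: play an HLS stream and pin playback to the 1080p rendition
const Player = () => (
  <Video
    source={{ uri: 'https://example.com/stream/master.m3u8', type: 'm3u8' }}
    style={{ width: '100%', aspectRatio: 16 / 9 }}
    selectedVideoTrack={{
      type: 'resolution',
      value: 1080,
    }}
  />
);

export default Player;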
Add these lines to react-native.config.js:
module.exports = {
dependencies: {
'react-native-video': {
platforms: {
android: {
sourceDir: '../node_modules/react-native-video/android-exoplayer',
},
},
},
},
assets:['./src/assets/fonts/'],
};
Remove assets: ['./src/assets/fonts/'] if you don't have a fonts directory.
Then, in the Video component, select the video track like this:
selectedVideoTrack={{
type: "resolution",
value: 1080
}}
This solution was tested only with Android devices.
Original answer: https://stackoverflow.com/a/71148981/4291272
If you have multiple quality renditions in your HLS video, you could use hls.js for the playback component. Normally this will just switch playback for you, but you can control this manually with the Quality Switch API.
I.e., you would get the available qualities using hls.levels and iterate through them, then set hls.currentLevel to the desired quality.
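For illustration, a minimal hls.js sketch along those lines (browser playback; the stream URL is a placeholder):
import Hls from 'hls.js';

const video = document.querySelector('video');
const hls = new Hls();
hls.loadSource('https://example.com/stream/master.m3u8');
hls.attachMedia(video);

hls.on(Hls.Events.MANIFEST_PARSED, () => {
  // hls.levels holds one entry per rendition (resolution/bitrate)
  hls.levels.forEach((level, index) => {
    console.log(index, level.height + 'p', level.bitrate);
  });
  // Pin playback to the 720p rendition; findIndex returns -1 if it doesn't
  // exist, which leaves automatic quality switching enabled
  hls.currentLevel = hls.levels.findIndex((level) => level.height === 720);
});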
I'm writing a native iOS component for React Native to fetch PHAsset(s) from the camera roll. I'm struggling to display and upload a PHAsset, as it does not give a proper URI to use in React Native, and I'm writing an upload component too. How can I achieve this?
I have solved this using the expo-media-library
https://docs.expo.io/versions/latest/sdk/media-library
import * as MediaLibrary from 'expo-media-library';

async function myFunc() {
  let uri = "ph://ED7AC36B-A150-4C38-BB8C-B6D696F4F2ED/L0/001";
  let myAssetId = uri.slice(5); // strip the "ph://" prefix to get the asset ID
  let returnedAssetInfo = await MediaLibrary.getAssetInfoAsync(myAssetId);
  console.log(returnedAssetInfo.localUri); // your local URI for accessing the file
}
React Native currently has very mixed support for PHAsset (aka PHImageLibrary) URIs, e.g. photos://A6A2CEBD-766E-4BD7-980C-71ED7828674E/L0/001. It has much better support for the deprecated ALAssetsLibrary URIs, e.g. assets-library://asset/asset.MOV?id=A6A2CEBD-766E-4BD7-980C-71ED7828674E&ext=MOV (note that is a video, not a photo, but the idea is the same).
You'll notice the ID in there is the same; it's just the prefix/suffix that changes. Try string-manipulating that yourself.
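For example, a hypothetical helper along those lines (the extension is an assumption; take it from the asset metadata if you have it):
// Hypothetical helper: rebuild the deprecated assets-library URI from a
// Photos-framework URI, since only the prefix/suffix differ
function toAssetsLibraryUri(uri, ext = 'JPG') {
  // e.g. "photos://A6A2CEBD-766E-4BD7-980C-71ED7828674E/L0/001"
  const id = uri.replace(/^(photos|ph):\/\//, '').split('/')[0];
  return 'assets-library://asset/asset.' + ext + '?id=' + id + '&ext=' + ext;
}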
Also, basically nothing supports a local URI that isn't prepended by photos:// or assets-library://. PHAssets don't have that, because ~~~apple things~~~. Try prepending photos://.
Side note, these URIs will work for things in iCloud that aren't actually local.
I'd like to use two geolocation watchers in my app: one with useSignificantChanges and one with high accuracy.
The "lowres" watcher would provide my Redux store with approximate locations all the time, whereas the "highres" watcher would be enabled when the user is working in a live view.
Here are the options for the low-res watcher:
const options = {
enableHighAccuracy: false,
useSignificantChanges: true,
distanceFilter: 500,
};
And the high-res watcher:
const options = {
enableHighAccuracy: true,
timeout: 60e3,
maximumAge: 10e3,
};
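For context, the watchers are registered roughly like this (a sketch; the handler names are placeholders):
// Each call returns its own watchId, so the watchers can be cleared independently
const lowResWatchId = navigator.geolocation.watchPosition(
  onLowResPosition,
  onError,
  { enableHighAccuracy: false, useSignificantChanges: true, distanceFilter: 500 }
);

const highResWatchId = navigator.geolocation.watchPosition(
  onHighResPosition,
  onError,
  { enableHighAccuracy: true, timeout: 60e3, maximumAge: 10e3 }
);

// e.g. navigator.geolocation.clearWatch(highResWatchId) when leaving the live view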
I have played around with the settings, but I can't see any difference in the output. Both watchers emit the exact same positions at the same time. I'm using the iOS simulator for the moment.
Questions:
I should be able to have several watchers, shouldn't I? What would be the point of the returned watchId otherwise?
Is this a problem only in the simulator?
Have I misunderstood or goofed?
Edit: the actual question is:
Why do I get frequent, accurate GPS positions even in "significant changes" mode? This mode is supposed to save battery, if I have understood it correctly.
Thanks!
The useSignificantChanges option is fairly new and only recently implemented, so you need to make sure that:
You are using a version of React Native that supports it. Based on merge dates, it looks to be v0.47+.
You are testing on an iOS version that can use it. The GitHub issue states that this is a new feature that affects iOS 11, so I presume you would need at least that version to see any differences.
I am trying to control my DSC-RX1RM2 camera with the Camera Remote SDK.
According to the PDF guide [Sony_CameraRemoteAPIbeta_API-Reference_v2.20.pdf],
I think I can use the [Continuous shooting mode] API for my camera,
but the result always returns ["error": [12, "No Such Method"]].
Where is the problem: my camera, the SDK, or my source code?
Unfortunately, the DSC-RX1RM2 is not supported by the API. Stay tuned to the Sony Camera Remote API page for any updates on supported cameras - https://developer.sony.com/develop/cameras/.
The latest API does support the DSC-RX1RM2; I just confirmed it.
Also check that your URLs are like:
http://ip:port/sony/camera
or
http://ip:port/sony/avContent
I didn't append camera or avContent at first and got similar No Such Method errors.
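For illustration, a sketch of a request against that endpoint (the IP/port are placeholders; getAvailableApiList is a useful first call to see which methods your model actually exposes):
// JSON-RPC-style call to the camera service endpoint; note the /sony/camera path
async function listAvailableApis() {
  const response = await fetch('http://192.168.122.1:8080/sony/camera', {
    method: 'POST',
    body: JSON.stringify({
      method: 'getAvailableApiList', // lists the methods the connected model exposes
      params: [],
      id: 1,
      version: '1.0',
    }),
  });
  console.log(await response.json()); // a "No Such Method" here usually points at the URL path
}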