Does the Google Mobile Vision (https://developers.google.com/vision/) API work offline, or does it need Internet connectivity? The sample app does not require any Internet permission, which suggests the API works entirely offline. I am looking for positive confirmation of this.
PS. I am also looking for more information on this API. For example, does it use neural networks? If so, which algorithms? I cannot find any detailed discussion anywhere.
The API does an initial library download the first time that it is used, and then works offline from that point on.
I tried the official sample app here.
I installed the sample app on a device in airplane mode and kept running it in airplane mode; it works without problems.
(Note that I only tested the OCR part)
But as far as I know, all Google services libraries make use of an already installed application named Google Play Services (present on most devices, except probably models sold in China). As long as this application is installed on the device (with a reasonably up-to-date version), the Vision API should work even with no Internet connectivity.
I would like to implement a video/audio call feature in the browser. The goal is to allow two users to communicate remotely without having to install anything third-party (by "third-party" I mean a piece of software or a browser extension).
I know of WebRTC, which is very popular today and free. However, it is very difficult to implement, and the documentation is hard to understand (not very beginner-friendly).
Here is the official WebRTC documentation, and honestly, where does one even start? https://webrtc.org/start/
If you have experience with WebRTC, could you share its positive and negative points? That would be very useful for the community.
Moreover, if you have experience with another library, it would be interesting to hear about it.
Today there is no way to develop a call service in a website other than WebRTC.
The alternatives are:
Use WebRTC
Use Flash (which is... dead)
Use a plugin (which is... dying as a mechanism in browsers)
Use an app you download (not exactly a service in a website)
Node.js is the way to go, but you will need to learn some new technology, especially when it comes to the backend.
The servers you will need are:
1. The traditional web application server
2. A signaling server (the one you plan on using Node.js for; you can use it for the web application server as well; a minimal sketch follows this list)
3. A STUN/TURN server (for NAT traversal; on the browser side this is just configuration, as shown in the second sketch below)
4. Maybe a media server, depending on your use case
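To make item 2 concrete, here is a minimal sketch of a signaling relay, assuming Node.js with the `ws` package; the room concept and the message shape are conventions invented for this example, not part of any WebRTC standard:

```typescript
// Minimal WebRTC signaling relay: forwards offers, answers, and ICE
// candidates between the peers that joined the same room.
// Assumes: npm install ws
import { WebSocketServer, WebSocket } from 'ws';

// room name -> sockets of the peers currently in that room
const rooms = new Map<string, Set<WebSocket>>();

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  let joinedRoom: string | undefined;

  socket.on('message', (data) => {
    // Expected message shape (our own convention):
    // { room: string, payload: offer | answer | ICE candidate }
    const { room, payload } = JSON.parse(data.toString());

    // The first message from a socket implicitly joins the room.
    if (!joinedRoom) {
      joinedRoom = room;
      if (!rooms.has(room)) rooms.set(room, new Set());
      rooms.get(room)!.add(socket);
    }

    // Relay the payload to every other peer in the room.
    for (const peer of rooms.get(room) ?? []) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify({ payload }));
      }
    }
  });

  socket.on('close', () => {
    if (joinedRoom) rooms.get(joinedRoom)?.delete(socket);
  });
});
```

Note that the signaling server never touches audio or video; once the peers have exchanged their session descriptions and candidates, the media flows directly between the browsers (or through your TURN server).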
For some alternative open source and commercial products, you can check this WebRTC Developer Tools Landscape.
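On the browser side, the STUN/TURN server from item 3 shows up only as configuration on `RTCPeerConnection`. Here is a matching caller-side sketch that talks to the relay above; the `stun:`/`turn:` URLs, the credentials, and the room name are placeholders, and the top-level `await` assumes a module context:

```typescript
// Connect to the signaling relay sketched above.
const signalingSocket = new WebSocket('ws://localhost:8080');
await new Promise((resolve) => { signalingSocket.onopen = resolve; });

// Placeholder STUN/TURN servers; point these at your own deployment
// (coturn is a common open source choice for the TURN part).
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' },
  ],
});

// Capture microphone and camera and feed the tracks into the connection.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
for (const track of stream.getTracks()) pc.addTrack(track, stream);

// Send every local ICE candidate to the other peer via the relay.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signalingSocket.send(JSON.stringify({ room: 'demo', payload: event.candidate }));
  }
};

// Apply whatever the other peer sends back (an answer or ICE candidates).
signalingSocket.onmessage = async (event) => {
  const { payload } = JSON.parse(event.data);
  if (payload.sdp) {
    await pc.setRemoteDescription(payload);
  } else {
    await pc.addIceCandidate(payload);
  }
};

// Caller side: create an offer and ship it through the relay. The callee
// does the symmetric setRemoteDescription / createAnswer dance.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
signalingSocket.send(JSON.stringify({ room: 'demo', payload: offer }));
```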
I currently have a video chat app working on the web (Flash) and on Android via Adobe AIR. It uses Adobe Media Server (RTMP) as the backend for video streaming and shared objects. My question is whether there is another server or solution that provides many-to-many live video broadcast, perhaps using the H.264 codec from Android and iOS, with some sort of user list and room list stored in a database or similar. I want to move away from Adobe, as it has many limitations on mobile devices.
Live video is crucial in one-to-many broadcasts that will have hundreds of viewers at the same time.
Thanks for reading!
Ulex.fr created an RTMP connector for Asterisk (the free PBX platform).
Used with the Asterisk Vonference application, it allows you to create conference rooms in a one-to-many configuration, with audio and video. The only limitation is the power of your server. You can plan a scalable architecture in order to broadcast one video to many viewers ("many" could be unlimited). We developed a specific protocol to connect and manage the connection based on telephony events. I think we have already done a direct RTMP connection that skips this protocol too.
Everything in the project done by ulex.fr is free, open source, and GPL-licensed.
Get the full project here: https://github.com/voximal/asterisk-rtmp
(a live demo is available)
We have already developed an RTMP stack for Android with video (using the camera); this allows you to create your own application without using AIR.
You can check out Adobe Cirrus; it's still in beta (actually, IMHO, Adobe has forgotten about it), but it works on web, desktop, and mobile too. Check the Video Phone example; it can handle chat applications without a problem.
http://labs.adobe.com/technologies/cirrus/samples/
You could take a look at Red5 Media Server, which is an open source solution. There are other options like Wowza's solutions on AWS, but they come at a higher cost...
OK, as of today we have decided that we can manage the users, rooms, and messages via the Google Firebase Realtime Database, and the live video stream using Ant Media Server.
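For the bookkeeping half of that setup, here is a minimal sketch of rooms and users in the Firebase Realtime Database using the modular web SDK. The config values, the `rooms/...` data layout, and the helper names are placeholders of our own; the media itself still flows through Ant Media Server, so only a stream id is stored here:

```typescript
// Room/user bookkeeping in the Firebase Realtime Database.
// Assumes: npm install firebase (config values below are placeholders).
import { initializeApp } from 'firebase/app';
import { getDatabase, ref, set, onValue } from 'firebase/database';

const app = initializeApp({
  apiKey: 'YOUR_API_KEY',
  databaseURL: 'https://your-project-default-rtdb.firebaseio.com',
});
const db = getDatabase(app);

// Register a user in a room; the actual video is served by Ant Media
// Server, so we only record which stream id belongs to the user.
function joinRoom(roomId: string, userId: string, streamId: string) {
  return set(ref(db, `rooms/${roomId}/users/${userId}`), {
    streamId,
    joinedAt: Date.now(),
  });
}

// Watch the user list of a room; the callback fires on every change.
// Returns the unsubscribe function that onValue provides.
function watchRoom(roomId: string, onUsers: (users: unknown) => void) {
  return onValue(ref(db, `rooms/${roomId}/users`), (snapshot) => {
    onUsers(snapshot.val());
  });
}

// Example usage: join a room and log its membership as it changes.
joinRoom('lobby', 'alice', 'stream-123');
watchRoom('lobby', (users) => console.log('users in lobby:', users));
```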
I am currently working with IBM MQA (Mobile Quality Assurance), and I want to get some information from the MQA service, such as device information, sessions, bugs, and so on. Does IBM have a REST API to support this? Can anyone give me a suggestion about this problem?
Thank you.
We are working to improve the MQA filtering features; however, at this time we don't have a way to filter crashes by device. We recently announced that MQA in Bluemix now supports integration with the following bug tracking systems: Jira, TFS, HP Quality Center, GitHub, FogBugz.
If you use one of these bug tracking systems, it's possible to use their sorting or filtering features, if any. Unfortunately, at this time there are no MQA APIs available.
I have developed an app in Android Studio and would like to make it cross-platform, but I don't know how to test my app on Windows, Apple, and other devices on the market. So I have still not been able to choose a cross-platform cloud or a cross-platform IDE. Can someone share their experience if they have done this, or suggest an alternative process for developing, testing, and deploying without a device?
I am not an expert, but I would suggest searching for a mobile virtual machine - either one that can be used on a mobile device or on a computer.
Here is an interesting article about VMware. Maybe it can give you some ideas.
Here is the best answer I could find.
Check out this link; it has loads of information that is indeed useful.
Can anyone tell me how Apple handles submissions for Bluetooth LE enabled apps? I have created a Bluetooth iPhone app which connects to a third-party device, and I am not sure how Apple tests apps which connect to third-party devices.
Object Lab recently launched their first iOS app using iBeacon. It took them 3 attempts to get it approved. I would recommend you create a video which demonstrates how your app works and send it to Apple.
Generally they ask for the hardware as well, but mostly it's not feasible for us to provide them with it. Object Lab had to send them instructions for setting up the hardware at their end and testing the app. THEY WILL NOT APPROVE UNLESS THEY KNOW IT'S WORKING. So I would recommend sending a video plus step-by-step instructions for setting up the hardware at their end to test it out.
My experience has been that a video demonstrating the use of the app while connected to your third-party device is enough. Sending the device to them is not necessary (at least not in all cases). I know there is another thread on stackoverflow.com concerning this very thing, but it's been months since I found it and I can't find it now. Anyway, it has worked for me as well as for acquaintances of mine who have an app on the store.