The Glass Mirror Timeline API documentation suggests that video can be streamed to a Glass headset (https://developers.google.com/glass/timeline). I'm trying to determine whether this could, at least in theory, work over a WebRTC connection. Documentation on the browser/rendering capabilities of the timeline is limited, so has anyone tried something similar?
Ran across a discussion about WebRTC + Glass in a reported issue: https://code.google.com/p/webrtc/issues/detail?id=2083
From the sound of things, someone got it to work in Chrome Beta at 176×144. There were some audio/frame-rate issues that appear to have been resolved. Note, though, that they talk about streaming video/audio from the Glass to another machine, not streaming video into the Glass display. I believe that at this point it will only work in Chrome Beta, and I doubt you could integrate this into the timeline smoothly, though given how hard Google is pushing WebRTC, I would not be surprised to see increased support. I'll be testing this out with my own WebRTC demos soon and will know more.
WebRTC on Google Glass has been reported: http://www.ericsson.com/research-blog/context-aware-communication/field-service-support-google-glass-webrtc/. There were some limitations, e.g. the device overheated after 10 minutes.
Another case - http://tech.pristine.io/pristine-presents-at-the-austin-gdg (thanks to Arin Sime)
I'm building an application with peer-to-peer video calling. So far, I only know WebRTC. Is it sufficient for P2P video calling across the globe if I just have the simplest TURN server(s)? By sufficient I mean as smooth as normal video-calling services like Google Meet or Zoom. If not, what else should I do to ensure smooth video calling?
For P2P calls with a few participants, WebRTC should absolutely be sufficient. WebRTC has evolved so much in the past decade that it's not unreasonable to estimate that most video applications that are not Zoom are built on it.
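If it helps to see where the TURN server actually plugs in: below is a minimal sketch using the Python aiortc library (a browser's RTCPeerConnection takes the same shape of configuration). The turn.example.com host and the credentials are placeholders for whatever you deploy (coturn is a common choice), and signaling is left out, since WebRTC always leaves that to the application.

```python
# Minimal sketch: where a TURN server plugs into a peer connection,
# using the Python aiortc library. turn.example.com and the credentials
# are placeholders for your own deployment (e.g. coturn).
import asyncio

from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

config = RTCConfiguration(iceServers=[
    # STUN only discovers your public address; it is enough when a
    # direct peer-to-peer path exists.
    RTCIceServer(urls="stun:stun.l.google.com:19302"),
    # TURN relays the media when NATs or firewalls block the direct path.
    RTCIceServer(
        urls="turn:turn.example.com:3478",
        username="demo",
        credential="secret",
    ),
])

async def main() -> None:
    pc = RTCPeerConnection(configuration=config)
    pc.createDataChannel("chat")  # need at least one m-line to make an offer
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # This SDP would be sent to the remote peer over your own signaling
    # channel (WebSocket, HTTP, whatever you like).
    print(pc.localDescription.sdp)
    await pc.close()

asyncio.run(main())
```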
There are lots of tutorials about building WebRTC apps from scratch (here's one on DEV, and I appreciate everything Karl Stolley writes).
The only question is whether you need to build the WebRTC logic from scratch. Jitsi is a good open-source option. There are also solutions with free tiers like Twilio, Agora, or Daily (full disclosure: where I work).
Good luck!
I'm trying to get the video feed from a USB camera attached to my Raspberry Pi. Since it's not the dedicated camera module, I can't just use raspivid, or the raspicam driver that comes with uv4l, to make config changes that actually take effect, as opposed to v4l2-ctl.
When I connect to the WebRTC server through the browser client, it actually works at a decent framerate. I don't yet understand how the technology works, but before jumping into it I was wondering if someone could tell me whether it's possible to get that video feed some other way (with a client written in Python, or some other OpenCV magic).
Thanks in advance
I'm still interested in whether what I've talked about is possible, so if anyone with knowledge stumbles upon this thread, please let me know.
I've kind of solved my issue by using the experimental mjpg-streamer instead; it can be found here:
https://github.com/jacksonliam/mjpg-streamer
Now I'm getting over 8 fps, and it seems much more constant; it really seems like I don't need more, compared to uv4l, which gave me 3.5 fps with stutters.
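In case anyone else lands here still wanting the feed in Python: mjpg-streamer serves plain MJPEG over HTTP, and OpenCV can open that URL directly. A minimal sketch, assuming the default output_http plugin on port 8080 (swap in your Pi's hostname or IP):

```python
# Minimal sketch: reading the mjpg-streamer feed in Python with OpenCV.
# Assumes mjpg-streamer's default output_http plugin on port 8080;
# replace raspberrypi.local with your Pi's hostname or IP address.
import cv2

STREAM_URL = "http://raspberrypi.local:8080/?action=stream"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError(f"Could not open stream at {STREAM_URL}")

while True:
    ok, frame = cap.read()  # one decoded BGR frame per iteration
    if not ok:
        break  # stream ended or a network hiccup
    cv2.imshow("USB camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Each frame comes back as an ordinary numpy array, so any OpenCV processing can be applied from there.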
I have implemented VLC to play live streaming video, and now I want to play it on Apple TV via the AirPlay option from my streaming screen.
Can you please help me out?
Thanks
From your question it's not clear what you mean by implementing VLC. Apple provides a framework for streaming live video (AVPlayer), which supports Picture in Picture and AirPlay.
To make things clearer, please elaborate on what exactly you are trying to do.
Support for AirPlay will be available in libvlc and VLCKit 4.0 next year. It will allow you to play anything from any source on Apple TV and, if needed, will convert the media on the fly. Thus, AirPlay support will match what's possible with Chromecast right now.
With VLCKit 3, there is no good way to do it. You could do display mirroring, but this would have a bad impact on performance and video quality. Audio-only output via AirPlay / RAOP will work just fine, and the quality is good. It even supports multichannel now.
I just bought a Sony A7 and I am blown away by the incredible pictures it takes, but now I would like to interact with and automate the use of this camera using the Sony Remote Camera API. I consider myself a maker and would like to do some fun stuff: add a laser trigger with Arduino, do some computer-controlled light painting, and some long-term (on the order of weeks) time-lapse photography. One reason I purchased this Sony camera over other models from famous brands such as Canon, Nikon, or Samsung is because of the ingenious Sony Remote Camera API. However, after reading through the API reference it seems that many of the features cannot be accessed. Is this true? Does anyone know a workaround?
Specifically, I am interested in changing a lot of the manual settings that you can change through the menu system on the camera such as ISO, shutter speed, and aperture. I am also interested in taking HDR images in a time-lapse manner and it would be nice to change this setting through the API as well. If anyone knows, why wasn't the API opened up to the whole menu system in the first place?
Finally, if any employee of Sony is reading this I would like to make this plea: PLEASE PLEASE PLEASE keep supporting the Remote Camera API and improve upon an already amazing idea! I think the more control you offer to makers and developers, the more popular your cameras will become. I think you could create a cult following if you can manage to capture the imagination of makers across the world and get just one cool project to go viral on the internet. Using HTTP POST commands is super awesome, because it is OS-agnostic and makes communication a breeze. Did I mention that is awesome?! Sony's cameras will nicely integrate themselves into the Internet of Things.
I think the Remote Camera API strategy is better than the strategies of Sony's competitors. Nikon and Canon have nothing comparable. The closest thing is Samsung gluing Android onto the Galaxy NX, but that is a completely unnecessary cost since most people already own a smart phone; all that needs to exist is a link that allows the camera to talk to the phone, like the Sony API. Sony gets it. Please don't abandon this direction you are taking or the Remote Camera API, because I love where it is heading.
Thanks!
New API features for the Lens Style Cameras DSC-QX100 and DSC-QX10 will be expanded during the spring of 2014. The shutter speed functionality, white balance, ISO settings and more will be included! Check out the official announcement here: https://developer.sony.com/2014/02/24/new-cameras-now-support-camera-remote-api-beta-new-api-features-coming-this-spring-to-selected-cameras/
Thanks a lot for your valuable feedback. Great to hear that the APIs are being used; we are looking forward to seeing nice implementations!
Peter
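For anyone picking this up later: the Camera Remote API is JSON-RPC over HTTP POST, so it's scriptable from pretty much anything. Below is a rough Python sketch. The endpoint URL is an assumption (it should really be discovered via SSDP and differs between models), and which methods a given body supports varies by model and firmware, so ask getAvailableApiList first.

```python
# Rough sketch: calling the Sony Camera Remote API (JSON-RPC over HTTP POST).
# The endpoint below is an assumed common default; the proper way to find
# it is SSDP discovery of the camera's device description. Method
# availability varies by model/firmware, so query getAvailableApiList first.
import requests

ENDPOINT = "http://192.168.122.1:8080/sony/camera"  # assumed; discover via SSDP

def call(method, params=None):
    """Send one JSON-RPC request to the camera and return the parsed reply."""
    payload = {"method": method, "params": params or [], "id": 1, "version": "1.0"}
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Which calls does this body/firmware actually support?
print(call("getAvailableApiList"))

# Trigger the shutter; the reply contains a URL for the captured image.
print(call("actTakePicture"))
```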
I am developing a website exclusively for mobile browsers.
What guidelines should I follow to optimize the site for mobile development?
My main concerns:
Most mobile devices have proprietary browsers. How can the app be tested on those different browsers (testing on an actual device is not possible due to security restrictions)?
How to optimize the site for different screen sizes?
How to make the app touch friendly?
How to detect orientation of devices (in devices that come with an accelerometer)?
How to check that the device is not a desktop/laptop?
Here are some things that I have used when designing mobile websites.
Find out the range of devices that you are planning to support. Some questions you can ask:
Are you going to support only smartphones?
Which platforms are you planning to support (iPhone, Android, Symbian)?
A lot of your questions can be answered by the kind of analytics you are able to gather. If you have very few statistics, you can follow this strategy to start with.
Separate the target range of devices into:
Simple (basic phones with minimal browsing capabilities): design a very simple, plain-vanilla site for them.
Medium (older-generation smartphones whose browsers have poor JavaScript support): design a site with slightly better features.
High-end smartphones (iPhone, Android, webOS): provide the jazzy features that these phones support.
Use a library like WURFL or .Mobi for device detection, and WALL for dynamic rendering of content.
You can use .Mobi to detect an HTML5 compliant mobile browser. That way, you can take advantage of HTML5 capabilities in the devices that support it.
For testing, you can follow this approach:
Test in desktop browsers: Firefox, Safari, and Opera have plugins to alter the User-Agent string and can simulate mobile browsing.
Test on simulators: all the major device platforms provide free-to-download emulators.
If needed, try device-emulation products like DeviceAnywhere or Perfecto.
I hope I was able to clarify at least some of your questions. :)
The definitive guide has to be the W3C Mobile Web Best Practices: http://www.w3.org/TR/mobile-bp/ Don't let the length of it put you off - I find it much easier to read than other W3C specs. The key section is the Best Practice Statements, divided into bite-size chunks, often with an example. There's also a recent and extensive mobile web optimization guide here: http://dev.opera.com/articles/view/the-mobile-web-optimization-guide/ (disclaimer: I work for Opera)
Q1: Most mobile devices have proprietary browsers. How can the app be tested on those different browsers (testing on an actual device is not possible due to security restrictions)?
The answer depends on how many devices you want to test and support.
iPhone: a device and a simulator are available.
Android: devices and an emulator are available.
Other mobile phones? Check http://www.deviceanywhere.com. Of course, you need to pay a service fee, but I think it's reasonable.
Q2: How to optimize the site for different screen sizes?
Common target resolutions include:
iPhone 4 (640×960)
WVGA854 (480×854)
WVGA800 (480×800)
VGA (640×480)
HVGA (320×480)
QVGA (240×320)
QCIF+ (176×220)
Making content for all the different sizes is difficult, so you have to make a choice about which screen sizes and models to support.
Q3: How to make the app touch friendly?
That is mostly a design issue: keep touch targets large, leave space between tappable elements, and don't rely on hover states.
Q4: How to detect orientation of devices (in devices that come with an accelerometer)?
Android and iOS each fire an event when the device orientation changes (in the browser, the orientationchange event); you have to listen for it. Of course, you also need both landscape and portrait layouts.
Q5: How to check that the device is not a desktop/laptop?
You can use the User-Agent header or the IP address, but the IP address is not a good method.
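To make the User-Agent approach concrete, here is a rough server-side sketch in Python. Keyword sniffing like this is only a heuristic and the keyword list needs maintaining; a device-detection library such as WURFL (mentioned above) does a more thorough job.

```python
# Rough sketch: server-side mobile detection from the User-Agent header.
# Keyword sniffing is only a heuristic -- keep the list current, or use
# a device-detection library such as WURFL for anything serious.
import re

MOBILE_PATTERN = re.compile(
    r"Mobile|Android|iPhone|iPod|BlackBerry|Symbian|Opera Mini|IEMobile",
    re.IGNORECASE,
)

def is_mobile(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a mobile browser."""
    return bool(MOBILE_PATTERN.search(user_agent))

# Example: an iPhone User-Agent string is classified as mobile.
ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) "
      "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1 Mobile/15E148")
print(is_mobile(ua))  # True
```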