I am a beginner in Unity and I am making a tank game with this tutorial series. How can I add multiplayer to my game (damage, score, network view)? Are there any detailed tutorials or assets for this?
The documentation for multiplayer networking is here, and a Google search for "unity multiplayer tutorial" turns up some good tutorials, including this one.
Hope this helps!
I have been playing with the Cloud Vision API. I did some label and facial detection. At this year's Google I/O there was a session about Mobile Vision. I understand both APIs are related to machine learning at Google.
Can anyone explain the use cases, i.e. when to use one over the other?
What sort of applications can we build using each, or both?
Application requirements and use cases vary widely, and either API may fit a given project better, so the decision is best made case by case.
It is worth noting that Mobile Vision offers real-time face detection and tracking, which the Cloud Vision API does not. Mobile Vision is geared towards the use cases most likely to be encountered on a mobile device, and encompasses the Face, Barcode Scanner, and Text Recognition APIs.
A use case for combining the Mobile Vision set with the Cloud Vision API would be one that needs on-device face tracking as well as a feature specific to the Cloud Vision API, such as detecting inappropriate content in an image.
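To make the Cloud Vision side concrete, here is a minimal Python sketch of the JSON body for a REST `images:annotate` call that requests labels plus a SafeSearch (inappropriate content) verdict. The endpoint and feature type names come from the public v1 REST API; the API key and image bytes are placeholders you would supply yourself.

```python
import base64
import json

def build_annotate_request(image_bytes: bytes) -> dict:
    """Build the JSON body for a Cloud Vision v1 `images:annotate` call
    asking for labels and a SafeSearch (inappropriate content) verdict."""
    return {
        "requests": [
            {
                # Images are sent inline as base64 in the "content" field.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [
                    {"type": "LABEL_DETECTION", "maxResults": 5},
                    {"type": "SAFE_SEARCH_DETECTION"},
                ],
            }
        ]
    }

# The body would be POSTed (with your key) to:
#   https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY
body = build_annotate_request(b"\x89PNG...fake image bytes...")
payload = json.dumps(body)
```

The same request body works whether you POST it yourself or use one of the official client libraries, which wrap exactly this structure.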
As I understand it, Google is planning to "wind down" Mobile Vision:
The Mobile Vision API is now a part of ML Kit. We strongly encourage you to try it out, as it comes with new capabilities like on-device image labeling! Also, note that we ultimately plan to wind down the Mobile Vision API, with all new on-device ML capabilities released via ML Kit. Feel free to reach out to Firebase support for help.
https://developers.google.com/vision/introduction?hl=en
I just bought a Sony A7 and I am blown away with the incredible pictures it takes, but now I would like to interact and automate the use of this camera using the Sony Remote Camera API. I consider myself a maker and would like to do some fun stuff: add a laser trigger with Arduino, do some computer controlled light painting, and some long-term (on the order of weeks) time-lapse photography. One reason I purchased this Sony camera over other models from famous brands such as Canon, Nikon, or Samsung is because of the ingenious Sony Remote Camera API. However, after reading through the API reference it seems that many of the features cannot be accessed. Is this true? Does anyone know a work around?
Specifically, I am interested in changing a lot of the manual settings that you can change through the menu system on the camera such as ISO, shutter speed, and aperture. I am also interested in taking HDR images in a time-lapse manner and it would be nice to change this setting through the API as well. If anyone knows, why wasn't the API opened up to the whole menu system in the first place?
Finally, if any employee of Sony is reading this I would like to make this plea: PLEASE PLEASE PLEASE keep supporting the Remote Camera API and improve upon an already amazing idea! I think the more control you offer to makers and developers, the more popular your cameras will become. I think you could create a cult following if you can manage to capture the imagination of makers across the world and get just one cool project to go viral on the internet. Using HTTP and POST commands is super awesome, because it is OS agnostic and makes communication a breeze. Did I mention that it is awesome?! Sony's cameras will integrate nicely into the Internet of Things.
I think the Remote Camera API strategy is better than the strategies of Sony's competitors. Nikon and Canon have nothing comparable. The closest thing is Samsung gluing Android onto the Galaxy NX, but that is a completely unnecessary cost since most people already own a smart phone; all that needs to exist is a link that allows the camera to talk to the phone, like the Sony API. Sony gets it. Please don't abandon this direction you are taking or the Remote Camera API, because I love where it is heading.
Thanks!
New API features for the Lens Style Cameras DSC-QX100 and DSC-QX10 will be expanded during the spring of 2014. The shutter speed functionality, white balance, ISO settings and more will be included! Check out the official announcement here: https://developer.sony.com/2014/02/24/new-cameras-now-support-camera-remote-api-beta-new-api-features-coming-this-spring-to-selected-cameras/
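For anyone wondering what those calls look like on the wire: the Camera Remote API is a JSON-RPC-style HTTP POST, which is what makes it so easy to drive from an Arduino or any scripting language. Below is a minimal Python sketch that builds a `setIsoSpeedRate` request. The endpoint address shown is the one typically used when joined to the camera's own Wi-Fi, but your model may differ, and which settings a camera accepts depends on model and firmware.

```python
import json
import urllib.request

# Typical address when connected to the camera's own Wi-Fi access point;
# check your model's documentation (this is an assumption, not universal).
CAMERA_ENDPOINT = "http://192.168.122.1:8080/sony/camera"

def rpc_payload(method: str, params: list, call_id: int = 1) -> bytes:
    """Encode one Camera Remote API call as a JSON-RPC-style POST body."""
    return json.dumps({
        "method": method,
        "params": params,
        "id": call_id,
        "version": "1.0",
    }).encode("utf-8")

def set_iso(value: str) -> bytes:
    # e.g. set_iso("800"); support varies by camera model and firmware.
    return rpc_payload("setIsoSpeedRate", [value])

# To actually send it (requires being on the camera's Wi-Fi):
# req = urllib.request.Request(CAMERA_ENDPOINT, data=set_iso("800"))
# print(urllib.request.urlopen(req).read())
```

Other methods such as `setShutterSpeed` or `actTakePicture` follow the same pattern, so one small helper covers the whole API surface.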
Thanks a lot for your valuable feedback. It is great to hear that the APIs are being used, and we are looking forward to some nice implementations!
Peter
I have a question about the Xbox 360 Kinect: can it track hand movement and fingers? I have searched the web and haven't found anything interesting about this. Another camera I am considering is the Asus motion sensor, but I don't know whether it is better than the Kinect (more options; I know both use OpenNI) or whether they are equivalent.
Thanks for your time!
I would look at these links:
Finger tracking in Kinect
http://www.kinecthacks.com/kinect-hand-tracking-gesture-experiment/
http://makematics.com/code/FingerTracker/
http://social.msdn.microsoft.com/Forums/en-US/c128197f-6925-49c6-bedc-d7692d03c0a9/fingers-tracking-using-kinect
http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device
These should get you started and give you several options. You can use either the official SDK or OpenNI. My personal preference is the SDK, but OpenNI or OpenKinect may be better in this case, especially because of the FingerTracker API (3), although the SDK route also has sample source code for finger tracking with an Xbox Kinect (5).
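Most of the finger trackers linked above boil down to the same idea: find the spikes (convexity defects) on the hand's contour. As a rough, library-free illustration, here is a Python/NumPy sketch that counts fingertip-like spikes on a synthetic contour. A real tracker would run this kind of analysis on a depth-segmented hand contour from the Kinect, and the spike test here is a crude stand-in for proper convexity-defect analysis.

```python
import numpy as np

def count_fingertips(contour: np.ndarray, spike_ratio: float = 1.3) -> int:
    """Count fingertip-like spikes on a hand contour.

    A fingertip is taken to be a contour point that is a local maximum of
    distance from the contour centroid and sticks out `spike_ratio` times
    further than the median distance.
    contour: (N, 2) array of x, y points ordered along the outline.
    """
    center = contour.mean(axis=0)
    dist = np.linalg.norm(contour - center, axis=1)
    median = np.median(dist)
    prev_d = np.roll(dist, 1)   # distance of previous point on the outline
    next_d = np.roll(dist, -1)  # distance of next point on the outline
    is_peak = (dist > prev_d) & (dist > next_d) & (dist > spike_ratio * median)
    return int(is_peak.sum())

# Synthetic "hand": a circle with 5 spikes standing in for fingers.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
radius = np.ones_like(theta)
radius[::20] = 1.8  # every 20th point pushed outward = 5 fingertips
hand = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
print(count_fingertips(hand))  # → 5
```

The libraries above do the hard part this sketch skips: segmenting the hand from the depth image and tracking it between frames.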
I've had excellent results with the Google speech recognition API on natural dialogue, but for audio from YouTube videos or movies, recognition is poor or nonexistent.
Recordings of my own voice made on an iPhone 4, in both Spanish and English, are recognized, but with the same phone at a movie it is almost impossible, even in a scene with a character talking over little background noise. I have only had success once.
I tried to clean up the sound with SoX (Sound eXchange) using the noisered and compand effects, without any success.
Any ideas? Or are these simply sounds that the Google API cannot identify no matter how much you process them? Would I have better success with other speech recognition software?
The Google voice recognizer (and most other recognizers) does not cope well with reverberation. In most movie scenes the distance between the speaker and the microphone is more than 1-3 meters. Try putting your phone on a table and recognizing something spoken from 3 meters away: recognition will fail even though the sound quality seems very good.
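For what it's worth, SoX's noisered effect works by learning a noise spectrum from a noise-only clip and suppressing frequency bins near that level. Here is a crude Python/NumPy sketch of that idea (spectral gating). It helps with steady hiss, but, as noted above, it does nothing about reverberation, which is why this kind of cleanup will not rescue movie audio.

```python
import numpy as np

def noise_gate(signal, noise, frame=256, factor=2.0):
    """Crude noisered-style cleanup: learn a per-bin noise magnitude
    profile from a noise-only clip, then zero the FFT bins of each frame
    that fall below `factor` times that profile."""
    # Noise profile: mean magnitude spectrum of the noise-only frames.
    n_frames = len(noise) // frame
    prof = np.mean([np.abs(np.fft.rfft(noise[i*frame:(i+1)*frame]))
                    for i in range(n_frames)], axis=0)
    out = np.zeros(len(signal) // frame * frame)
    for i in range(len(signal) // frame):
        spec = np.fft.rfft(signal[i*frame:(i+1)*frame])
        spec[np.abs(spec) < factor * prof] = 0.0  # gate the quiet bins
        out[i*frame:(i+1)*frame] = np.fft.irfft(spec, n=frame)
    return out

# Demo: a sine "voice" buried in hiss.
rng = np.random.default_rng(0)
t = np.arange(8192) / 8000.0
voice = np.sin(2 * np.pi * 440 * t)
noise = 0.2 * rng.standard_normal(8192)
cleaned = noise_gate(voice + noise, noise)
```

Reverberation smears the speech itself across time rather than adding a steady background, so no per-bin gate can undo it; that generally takes dereverberation techniques or a closer microphone.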
I'm trying to find out if there are any libraries or frameworks that will help with detecting facial features i.e. the eyes while video recording.
I tried using the face.com API and CIDetector on iOS, but they only work on images, not video.
P.S. I'm developing for the iPhone!
Why not simply extract frames from the video as it plays and run those through CIDetector? This site has some good info on how to get frames from video files on iOS:
http://www.7twenty7.com/blog/2010/11/video-processing-with-av-foundation
I've never used it on iOS/Mac OS X, but you should check out the OpenCV library.
Check this question for iOS support: iPhone and OpenCV
The library has built-in functions to detect faces, but I don't know if they are available on the iOS port.
You're looking for object detection, and I would recommend OpenCV.
If you want an out-of-the-box example, just check out this link :) Fully functional sample code is attached to the tutorial. You can use OpenCV for a lot more than just face tracking – just dig into the documentation and some tutorials.
You can find several cascade classifiers here for partial face detection.