Is there a way to prevent BackgroundTransfer from trying indefinitely to upload a file? Let's say one of my users is trying to upload a movie from the phone to Facebook. The Facebook Graph API doesn't accept byte ranges, resuming, etc. Now let's say the network is slow, less than 50 kbps; under 50 kbps, BackgroundTransferService will restart the upload.
That being said, when testing my app, I noticed that the upload restarted 4-5 times behind my very slow 3G Wi-Fi router (yeah... I'm a mix of the two cases).
Will this behavior happen on a GSM/3G/4G network?
What I think is that this behavior is totally welcome on Wi-Fi, but not on a phone network, as data costs more there.
[Edit]
I forgot one important piece of info: I don't have internet on my WP, which is why I don't know how BTS behaves on a phone network.
Yes, the agent will reattempt the transfer if the connection is dropped. This is one of the benefits of using the agent: you let it worry about reattempting and network conditions so you don't have to. The API does allow you a level of control over the usage of cellular data via the TransferPreferences property; you could set this if you're concerned. Alternatively, let users set their own preferences about data usage via the built-in settings on the phone.
There is more information at http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202955(v=vs.105).aspx#BKMK_TimelinessofCompletion
I'd like to implement a simple video chat system for students to tutor each other. I'm a one-man show and would like a system I can run cost-effectively, starting with 10 users and hopefully scaling up as needed.
WebRTC seems like a great, low-latency, and cheap option for building this feature. However, if clients are communicating, then they must know each other's public IP. Is this a significant privacy or security issue?
What is the worst case scenario of somebody getting my IP address? Wouldn't any malicious actor have to get through my ISP to get my specific location?
Thanks!
If you host it yourself, WebRTC can be extremely cost-effective. I've been running the SFU at galene.org (disclaimer: I'm the main developer), which is used for multiple lectures with up to a hundred students. Even though this is a full-fledged SFU (and not a mere TURN server), hosting amounts to just over €6/month.
If your tutoring sessions involve just two or three people, then peer-to-peer WebRTC might be enough, but even then a TURN server will be required, especially if some of your users are on university networks. For larger groups, you will need to push your traffic through an SFU.
If you do peer-to-peer WebRTC, then any user can learn the IP of any user they are communicating with. This is most probably not an issue, since the IP addresses are most probably already being disclosed elsewhere (e.g. in mail headers). If you go through an SFU, then the IP addresses are not deliberately disclosed, but they might still leak; for example, the SFU implementation mentioned above (Galene) discloses IP addresses when a user initiates a file transfer, since file transfers happen directly between clients, in a peer-to-peer fashion. (It may be possible to avoid this disclosure by setting the iceTransportPolicy field to "relay" in the PeerConnection constructor, but I haven't tested how effective that is.)
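For reference, here's what that looks like (a minimal TypeScript sketch; the TURN server URL and credentials are placeholders, not a real deployment):

    // Minimal sketch: force relayed candidates so peers never learn each
    // other's IP addresses. The TURN URL and credentials are placeholders.
    const pc = new RTCPeerConnection({
      iceServers: [
        {
          urls: "turn:turn.example.com:3478", // hypothetical TURN server
          username: "user",
          credential: "secret",
        },
      ],
      // "relay" tells the ICE agent to use only TURN-relayed candidates, so
      // no host or server-reflexive (direct IP) candidates are exchanged.
      iceTransportPolicy: "relay",
    });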
WebRTC doesn't have to be P2P. You could run an SFU: each user uploads their video to your server, and the server distributes it via WebRTC. Then the users never learn each other's IPs.
I don't have any exact numbers, but it isn't expensive either. Your biggest expense will probably be bandwidth. Lots of open-source SFUs exist; this is a good list to get started.
I have developed a site that hosts user videos. I store the video files in AWS S3, deliver them through AWS CloudFront, and use video.js as the site's player, with HTML5 as the default and Flash as a fallback.
Generally the video streaming seems to work fine, but in some cases I receive complaints from users about slow or choppy video playback. I want to create some tests that measure streaming performance so I can distinguish problems on the user side (e.g. a slow connection) from problems with my service.
Are there any best practices or tools for collecting video delivery metrics? I'm interested in open-source solutions or something I can implement myself, because it's just a personal project, but I don't want to reinvent the wheel.
Testing progressive download means checking the transmission bandwidth and its continuity. For example, with a high transmission rate the initial client buffer fills faster and playback starts sooner, but losing that transmission capacity later on can cause re-buffering. The bottom line: the total transmission time of the file must be less than the video's duration.
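To make that last condition concrete, here's a rough back-of-the-envelope check, sketched in TypeScript (all the names are mine, not from any particular tool):

    // Rough check: can the whole file arrive before playback catches up?
    function canStreamWithoutStalling(
      fileSizeBytes: number,
      bandwidthBitsPerSec: number,
      videoDurationSec: number,
    ): boolean {
      const transmissionTimeSec = (fileSizeBytes * 8) / bandwidthBitsPerSec;
      return transmissionTimeSec < videoDurationSec;
    }

    // Example: a 100 MB video, 10 minutes long, over a 2 Mbps connection.
    // Transmission takes ~419 s < 600 s, so playback should not stall,
    // assuming the bandwidth stays continuous (which is the hard part).
    console.log(canStreamWithoutStalling(100 * 1024 * 1024, 2_000_000, 600));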
To identify potential issues you can start with the S3 bucket logs and the CloudFront cache statistics and access logs.
There's a load-testing tool written in Java called Apache JMeter. It cannot execute JavaScript, so it must be configured to request the video files directly.
The disadvantage of load testing from a single location is pretty evident: different geographical areas and carriers have different characteristics, so test results will differ.
There are online, non-open-source tools that can load test from multiple locations, but they are generally paid, though some offer free trials.
Here's another way to look at this.
"...but in some cases I receive complaints from users for slow or choppy video playback."
If you're using an adaptive HLS stream, and you're using CloudFront, and the video is still choppy for some users, that's probably because of their own internet connection speeds.
In that case, you can encode your video in multiple resolutions (using just one AWS MediaConvert job, btw), like 1080p, 720p, 360p, 240p, 144p, etc.
And then video.js has a stream-switcher plugin that will 1) automatically start playing the highest resolution, and no higher, that's right for the viewer's connection, and 2) give the user the option, via a "Settings" (gear) icon in the control bar, to switch resolutions manually.
That way, even those with really poor internet connections should be able to watch your video.
Of course, the other alternative is to use progressive-download videos: the viewer clicks play, immediately clicks pause, waits for the video to buffer, and then plays it after it's fully downloaded.
Check out the Videojs Resolution Switcher demo here.
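Here's a minimal sketch of such a setup, assuming the videojs-contrib-quality-levels and videojs-hls-quality-selector plugins (verify the plugin names and APIs against their docs; the playlist URL is a placeholder):

    import videojs from "video.js";
    // Assumed plugins; verify names and APIs against their documentation.
    import "videojs-contrib-quality-levels";
    import "videojs-hls-quality-selector";

    const player = videojs("my-player", { controls: true, fluid: true });

    // Multi-rendition HLS master playlist produced by MediaConvert;
    // the URL is a placeholder.
    player.src({
      src: "https://dxxxx.cloudfront.net/videos/lesson1/master.m3u8",
      type: "application/x-mpegURL",
    });

    // Adds a gear icon to the control bar for manual resolution switching;
    // automatic switching is handled by the HLS playback engine itself.
    (player as any).hlsQualitySelector({ displayCurrentQuality: true });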
-- Ravi Jayagopal
I have an auto-suggestion mechanism that works fairly well for the desktop version, where we have a wireless or wired internet connection. The worst response time is 320 ms.
(I'm not using Solr as of now; I use a storage system on the server that returns the results.)
Some of my users are on slow internet connections, also known as 2G, where the download speed can be ~10-50 Kbps.
I have seen that Google manages to provide auto-suggestions at these speeds as well, but my system cannot.
I have tried these:
Keep a txt/JSON file on the server; on the user's first keydown, fire an AJAX request that pulls the entire 2.2 MB of data into a JS variable on the client side and show suggestions from it.
Expose a service that is called when the user has typed 2 characters; the service searches the txt/JSON file for those characters occurring anywhere in the words and returns the matching data into a JS variable.
Repeat the above step but store each result in localStorage; for a fresh 3-character prefix the same process runs again and the result is stored too. The benefit is that the user gets instant suggestions in the future, and in my view browser storage is used very sensibly. (A sketch of this caching approach follows below.)
Does anyone have suggestions on how www.google.com and www.flipkart.com handle auto-suggestions over slow internet connections on mobile (smartphones)?
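For reference, step 3 looks roughly like this (a simplified TypeScript sketch; the /suggest endpoint and cache-key scheme are illustrative):

    // Sketch of approach 3: cache suggestion results per prefix in
    // localStorage so repeat lookups cost no network round-trip.
    // The "/suggest" endpoint is hypothetical.
    async function getSuggestions(prefix: string): Promise<string[]> {
      const cacheKey = `suggest:${prefix.toLowerCase()}`;

      const cached = localStorage.getItem(cacheKey);
      if (cached !== null) {
        return JSON.parse(cached); // served instantly, even on 2G
      }

      const response = await fetch(`/suggest?q=${encodeURIComponent(prefix)}`);
      const suggestions: string[] = await response.json();

      try {
        localStorage.setItem(cacheKey, JSON.stringify(suggestions));
      } catch {
        // Quota exceeded: skip caching rather than breaking the UI.
      }
      return suggestions;
    }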
I'm writing an application for the Mac App Store in Obj-C/Cocoa. The app processes .html files and does not require an internet connection.
I was wondering, what would be the best way to collect statistics? All I'm interested in is the number of files processed.
That way, on the app's home page, I can display XXX,XXX files processed.
I was thinking that I would just post to a web server whenever a file was converted, but that would considerably slow down the app and wouldn't work if the user was not connected to the internet.
You could accumulate the stats internally to be uploaded only every so often (each day, perhaps). You'd save the accumulated number across restarts using NSUserDefaults.
You should ask the user for permission to upload data, even something so seemingly innocuous as a count of processed files.
You'd use a simple HTTP request to upload the data. (You know it will be vulnerable to spoofing, right?) You should use the network reachability API to check whether the system is network connected before trying, so you don't force a dial-up, for example. The reachability API can't tell you that your connection will for sure succeed, so you should handle failure to connect gracefully.
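The accumulate-and-flush pattern itself is language-agnostic; here is a rough sketch of it in TypeScript (in your Cocoa app you'd persist the counter with NSUserDefaults and gate the upload on the reachability check, as described above; the endpoint is a placeholder):

    // Language-agnostic sketch of "accumulate locally, upload occasionally".
    // localStorage stands in for NSUserDefaults; the endpoint is hypothetical.
    const ENDPOINT = "https://stats.example.com/processed"; // placeholder

    function loadSavedCount(): number {
      return Number(localStorage.getItem("pendingCount") ?? "0");
    }

    function saveCount(n: number): void {
      localStorage.setItem("pendingCount", String(n));
    }

    let pendingCount = loadSavedCount(); // survives app restarts

    function fileProcessed(): void {
      pendingCount += 1;
      saveCount(pendingCount); // cheap local write; no network involved
    }

    async function flushStats(): Promise<void> {
      if (pendingCount === 0) return;
      try {
        await fetch(ENDPOINT, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ count: pendingCount }),
        });
        pendingCount = 0;
        saveCount(0);
      } catch {
        // Offline or unreachable: keep the count and retry next time.
      }
    }

    setInterval(flushStats, 24 * 60 * 60 * 1000); // flush roughly once a day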
Well, I tried to ask this as a comment on another question, but I thought that maybe no one would notice it, so I decided to ask it as a separate one.
The question is about how to build a real-time GPS tracking system, given the following scenario:
Rather than connecting a GPS receiver to a PC, the user will have a mobile device with an integrated GPS receiver.
Location data will be sent over mobile network using GPRS data connection to a server side.
The data will be processed, and a KML path file will be created and updated at regular intervals and used to track the user in Google Earth.
The question is: what is the best way to implement the server side of this scenario; a web service, a web application, a Windows service, a Windows application, or what exactly? Take into account that the system will serve a number of users simultaneously, and that more users may use the system in the future (scalability).
Thank you in advance and I highly appreciate any help :)
What kind of device are you using, exactly; something like this, or something more sophisticated/configurable? If we assume that the device sends its data over TCP, I would consider the following approach, with separate input and output processes:
Input: a process listening on a specific TCP port and storing incoming coordinates in a database, keyed by device id. Preferably, your listening loop should be able to handle simultaneous connections without them blocking each other (see the sketch after this list).
Output: a web application that reads coordinates from the database for a given device id and displays them through the Google Earth API.
Use whatever programming language(s) you are familiar with.
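To make the input side concrete, here is a minimal Node.js/TypeScript sketch (the line-based wire format and in-memory storage are assumptions for illustration; a real device will have its own protocol, and you'd write to a real database):

    import * as net from "node:net";

    // Assumed line-based wire format: "<deviceId>,<lat>,<lon>\n".
    // Replace the Map with real database writes in production.
    const lastPosition = new Map<string, { lat: number; lon: number }>();

    const server = net.createServer((socket) => {
      let buffer = "";
      socket.on("data", (chunk) => {
        buffer += chunk.toString("utf8");
        let newline: number;
        while ((newline = buffer.indexOf("\n")) >= 0) {
          const line = buffer.slice(0, newline);
          buffer = buffer.slice(newline + 1);
          const [deviceId, lat, lon] = line.split(",");
          if (deviceId && lat && lon) {
            lastPosition.set(deviceId, { lat: Number(lat), lon: Number(lon) });
          }
        }
      });
    });

    // Each connection is handled by the event loop, so clients don't
    // block each other.
    server.listen(5000, () => console.log("listening on :5000"));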
For me there is a technical limitation/risk here: the mobile device and its connectivity.
1) What are your requirements? Do you need to support various mobile devices, or will you focus on only one platform?
2) More importantly, you have to understand that GPRS data connections differ from a PC connected to the Internet. There are various connection restrictions imposed by different mobile operators.
If I were to design such a system, in order to minimise those risks I would go with a web server running on port 80, to which the mobile devices would upload their long/lat through POST (or even GET, to simplify things).
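A bare-bones sketch of that approach in Node.js/TypeScript (the /location path and field names are my own assumptions):

    import * as http from "node:http";

    // Hypothetical endpoint: devices POST "deviceId=...&lat=...&lon=..."
    // as a form body to http://host/location.
    const server = http.createServer((req, res) => {
      if (req.method === "POST" && req.url === "/location") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          const params = new URLSearchParams(body);
          // Replace the log with a database write in a real system.
          console.log(
            `device=${params.get("deviceId")} ` +
              `lat=${params.get("lat")} lon=${params.get("lon")}`,
          );
          res.writeHead(204);
          res.end();
        });
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    // Port 80, since operators rarely block plain HTTP traffic.
    server.listen(80);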
EDIT: Regarding scalability, it would be very easy to scale things up in the future using tried-and-tested load-balancing techniques.
EDIT 2: Whichever technology you decide to use, I would HIGHLY recommend that the first thing you do is mock up a prototype. Those connection restrictions could be show-stoppers, so ideally you should explore them before making any serious investment.