I am trying to transfer 2 megabytes of data using the new Multipeer Connectivity framework in iOS 7, and I am finding that sending and receiving that data takes at least 5 minutes. This seems very odd. The transfer is between an iPhone 5S and an iPad 3 in the same room. The problem happens regardless of whether Wi-Fi or Bluetooth is enabled or disabled, and regardless of whether the reliable mode is on or off.
// self.session is an open MCSession and packet is the 2 MB NSData to send;
// both MCSessionSendDataReliable and MCSessionSendDataUnreliable are equally slow.
NSError *error = nil;
[self.session sendData:packet toPeers:peers withMode:MCSessionSendDataReliable error:&error];
I'm pretty sure this is because the iPad 3 is the bottleneck. Transferring a 10 MB file from the iPhone 5S to the iPad Simulator on my MacBook Air took about 1 second. My theory is that only AirDrop-enabled devices get fast transfer speeds - http://en.wikipedia.org/wiki/AirDrop.
EDIT My assumption was wrong; transfer between two iPhone 5S devices is just as slow :(
EDIT Switched to the stream-based API and it's much better (see the sketch after these edits)
EDIT Tweaking the Wi-Fi channel settings on my router has helped performance, but it still seems slower than it should be. A 10 MB transfer now takes 30-60 seconds instead of 5 minutes.
EDIT I solved the problem by converting the images to JPEG 2000, which is vastly smaller than PNG. Even though the transfer rate is only about 100 KB per second, it now finishes in a reasonable 5-10 seconds. See this Stack Overflow answer: How do I convert UIImage to J2K (JPEG2000) in iOS?
EDIT Disabling encryption has also helped with transfer speed
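For reference, the stream-based approach mentioned in the edit above looks roughly like this (a minimal sketch, assuming peerID is a connected MCPeerID, the stream name is arbitrary, and the NSStreamDelegate wiring lives elsewhere):

// Sender: open an output stream to one peer and write chunks as space becomes available.
NSError *streamError = nil;
NSOutputStream *output = [self.session startStreamWithName:@"imageStream"
                                                    toPeer:peerID
                                                     error:&streamError];
output.delegate = self;
[output scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
[output open];
// In the NSStreamDelegate callback, write a chunk each time NSStreamEventHasSpaceAvailable fires:
// [output write:bytes + offset maxLength:MIN(chunkSize, totalLength - offset)];

// Receiver: implement the MCSessionDelegate method and read from the NSInputStream it hands you.
// - (void)session:(MCSession *)session didReceiveStream:(NSInputStream *)stream
//           withName:(NSString *)streamName fromPeer:(MCPeerID *)peerID;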
The following shows a screenshot of the Xcode CPU Report indicating that my application (while number crunching) is maxing out one of the CPUs:
The above shows a maximum of 400%. However, the iPhone has a dual-core CPU, so I am wondering why the gauge doesn't top out at 200% instead.
Furthermore, by using concurrency and splitting my number crunching across multiple threads I can max out at 400%, yet my algorithm only runs twice as fast - again indicating that the work is divided across 2 CPU cores.
Does anyone know why Xcode shows 400% and how this relates to the physical hardware?
If you are testing in the Simulator, then the report is based on your Mac's processor; that's why it is showing 400% (i.e. four logical cores).
The iPhone has only 2 cores (although some iPads have more). The Mac running the Simulator apparently has four logical cores: either four physical cores, or two cores plus Hyper-Threading.
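If you want to confirm how many logical cores your code actually sees at runtime, a quick check (just a sketch using NSProcessInfo) is:

// Logs the number of logical cores visible to the process.
// On a dual-core iPhone this prints 2; in the Simulator it reflects the Mac's cores.
NSUInteger cores = [[NSProcessInfo processInfo] activeProcessorCount];
NSLog(@"Active processor count: %lu", (unsigned long)cores);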
So I am developing an app for a company and am under quite a bit of time pressure now. It currently supports only the 3.5-inch and 4-inch screen sizes. Since Apple has just released the new screen sizes, will they possibly reject the binary I upload for review if it is not made for the new sizes? I remember that was the case when the 4-inch size was new.
Has anybody released an app since the iPhone 6 and 6 Plus came out?
Thanks in advance
The app must work on iPhone 6 and 6+.
You should test it (e.g. in the Simulator). If the app doesn't crash, then even though things appear somewhat out of proportion (too big) compared to other iDevices, there is a high probability there will be no problem, since the app is simply scaled up from the 4-inch layout.
You need to upload screenshots for the iPhone 6 and 6 Plus, too.
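If you want to verify at runtime whether the app is running in the scaled-up compatibility mode on an iPhone 6 or 6 Plus, a small check like the following can help (a sketch for logging purposes only):

// An app that ships without iPhone 6/6+ launch images runs in scaled (compatibility) mode;
// in that mode the main screen still reports the 4-inch size of 320 x 568 points.
CGRect bounds = [UIScreen mainScreen].bounds;
CGFloat scale = [UIScreen mainScreen].scale;
NSLog(@"screen bounds: %@, scale: %.1f", NSStringFromCGRect(bounds), scale);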
Currently I am using the DTWGestureRecognizer open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc.
My main issue is that the application uses quite a bit of my CPU power which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which causes the application to be unresponsive for a few seconds. I am running it on a 64 bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with any experience using this tool or Kinect in general has made it more efficient and less performance hogging.
Right now I have removed the sections which display the RGB video and the depth video, but even doing that did not make a big impact. Any help is appreciated, thanks!
Some of the factors I can think of are:
Reduce the resolution.
Reduce the frames being recorded/processed by the application by using the polling model, i.e. the OpenNextFrame(int millisecondsWait) method of DepthStream, ColorStream and SkeletonStream, instead of the event model.
Keep the tracking mode at Default instead of Seated (sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Default), as Seated consumes more resources.
Use sensor.MapDepthFrameToColorFrame instead of calling sensor.MapDepthToColorImagePoint in a loop.
Last and most important: the algorithm used in the open source tool itself.
The Info
I recently launched an app on the App Store. After testing on the simulator thousands of times and on actual devices hundreds of times, we finally released our app.
The Problem
Reviews started popping up about app crashes when the user launches the app. We figured out that the app crashes on launch on iOS devices with 256 MB of RAM or less. The following devices that our app supports have 256 MB or less:
iPod Touch 4G
iPhone 3GS
iPad 1
The app doesn't always crash. Sometimes it launches fine and runs smoothly; other times it crashes. The time from launch (when the user taps the icon) to crash is usually two seconds, which suggests it isn't the system's watchdog shutting it down for taking too long to launch.
Findings
When using Instruments to test on certain devices, I find the following:
There are no memory leaks (I'm using ARC), but there are memory warnings
Objects are being allocated like crazy. There are so many live allocations that, even though I'm using ARC, it's as if ARC isn't doing what it's supposed to be doing.
Because of what I see as "over-allocation", the result is:
This app takes (on average) 60 MB of real memory and 166 MB of virtual memory. When the app launches, memory usage quickly increases until it reaches about 60 MB, at which point the view has been loaded.
Here is a snapshot of the Activity Monitor in Instruments:
I know that those figures are way too high (although the CPU % never really gets up there). I am worried that ARC is not working properly, or, the more likely case, that I'm not allocating objects correctly. What could possibly be happening?
The Code and Warnings
In Xcode, there are only a few warnings, none of which pertain to the app launch or any files associated with the launching of the app. I placed breakpoints in both the App Delegate and my viewDidLoad method to check and see if the crash occurred there - it didn't.
More Background Info
Also, Xcode never generates any errors or messages in the debugger. There are also no crash reports in iTunes Connect, it just says, "Too few reports have been submitted for a report to be shown." I've added crash reporting to my app, but I haven't released that version.
A Few Questions
I started using Obj-C just as ARC arrived, so I'm new to dealing with memory, allocation, etc. (that is probably obvious) but I'd like to know a few things:
How can I use @autoreleasepool to reduce my memory impact? What do I do with memory warnings, and what should I write in didReceiveMemoryWarning since I'm using ARC?
Would removing NSLog statements help speed things up?
And the most important question:
Why does my app take up so much memory and how can I reduce my whopping 60 MB footprint?
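To make the @autoreleasepool part of the question concrete, this is roughly the pattern I am asking about (just a sketch; the image loop, the processImage: helper, and the cachedImages property are hypothetical):

// Drain temporary objects created inside a tight loop instead of letting them
// pile up until the run loop's own autorelease pool drains.
for (NSString *path in imagePaths) {
    @autoreleasepool {
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        [self processImage:image]; // hypothetical helper
    }
}

// And in a view controller, drop anything that can be recreated when memory is tight:
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    self.cachedImages = nil; // hypothetical cache property
}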
I'd really appreciate any help! Thanks in advance!
EDIT: After testing on the iPhone 4 (A4), we noticed that the app doesn't crash when run on it, whereas on devices with 256 MB of RAM or less it does.
I finally solved the issue. I spent a few hours pondering why my application could possibly take up more RAM than Angry Birds or Doodle Jump. That just didn't make sense, because my app does no CALayer drawing, complex OpenGL rendering, or heavy network work.
I found this slideshow while searching for answers and slide 17 listed the ways to reduce memory footprint. One thing that stuck out was PNGCrush (Graphics Compression).
My app contains a lot of custom graphics (PNG files), but I hadn't thought of them affecting my app in any way. Apparently, images (when not optimized properly) severely increase an application's memory footprint.
After installing PNGCrush and using it on a particularly large image (3.2 MB), and then deleting a few unused images, I ended up reducing my app's memory footprint from 60+ MB with severe lag to 35 MB with no lag. That took a whopping five minutes.
I haven't finished "crushing" all my images, but when I do I'll update everyone on the final memory footprint.
For all those interested, here is a link to a blog that explains how to install PNGCrush (it's rather complicated).
UPDATE: Instead of using the PNGCrush process (which is very helpful, although time consuming with lots of images), I now use a program called ImageOptim that provides a GUI for multiple optimization tools like PNGCrush. Here's a short description:
ImageOptim seamlessly integrates various optimisation tools: PNGOUT, AdvPNG, PNGCrush, extended OptiPNG, JpegOptim, jpegrescan, jpegtran, and Gifsicle.
Here's a link to the website with a free download for OS X 10.6 - 10.8. Note, I am not a developer, publisher or advertiser of this software.
I have a BlackBerry app that I am about to port to the iPhone. The app contains MP3 files, which makes the BlackBerry version about 10 MB in size (even after I reduced the bitrate of the files to 92 kbps). 10 MB won't do for the iPhone. Does anyone know of any best practices when it comes to including audio files in your iPhone app? I'm interested in knowing suggested format(s), quality, channels (mono or stereo), etc. I will also need to play more than one file at a time (very important).
Thanks.
You could consider downloading (some of) the MP3 files after your app is installed. For low bitrate you're better off recompressing with AAC though (perhaps at 48-64 kbps); it provides better quality than MP3 at the same size. Also consider mono instead of stereo if it makes no difference.
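For playing more than one file at a time, multiple AVAudioPlayer instances can run concurrently; here's a rough sketch (the file names and player properties are placeholders, and AVFoundation must be linked):

#import <AVFoundation/AVFoundation.h>

// Create one player per file and start them both; their output is mixed automatically.
NSURL *musicURL = [[NSBundle mainBundle] URLForResource:@"background" withExtension:@"m4a"];
NSURL *effectURL = [[NSBundle mainBundle] URLForResource:@"effect" withExtension:@"m4a"];

NSError *audioError = nil;
self.musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:musicURL error:&audioError];
self.effectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:effectURL error:&audioError];

[self.musicPlayer prepareToPlay];
[self.effectPlayer prepareToPlay];
[self.musicPlayer play];
[self.effectPlayer play]; // both players play at the same time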
Why won't 10 MB work for the iPhone?
Applications on the iPhone can be as large as 2 GB; apps larger than 10 MB just have to be downloaded over Wi-Fi or through iTunes.