On the Mac, you can use the native function keys (F-keys) to do things like change the volume and brightness, and even pause or play music or video.
I was wondering whether there is some way to access the 'now playing' media (music or video) through an API?
I know it's possible to get the currently playing tracks from iTunes using things like ITLibMediaItem, or even from other apps by using AppleScript to ask the application what it is currently playing. But how would you figure out which application is the 'currently playing' one, and then get the play/pause functionality?
Does Apple have an API for this?
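For reference, here is a minimal sketch of the AppleScript approach mentioned above, assuming iTunes is the target app and bridging from Swift with NSAppleScript (note it still doesn't answer which app is the "now playing" one):

```swift
import Foundation

// Minimal sketch, assuming iTunes is the app we want to ask.
// NSAppleScript runs the same script you'd run in Script Editor.
let source = """
tell application "iTunes"
    if player state is playing then
        return name of current track & " by " & artist of current track
    end if
end tell
"""

var error: NSDictionary?
if let script = NSAppleScript(source: source) {
    let result = script.executeAndReturnError(&error)
    print(result.stringValue ?? "nothing playing (or iTunes not running)")
}
```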
I am currently trying to find the most efficient way to keep a method running while my app is in the background.
I will probably be adding location/GPS to my app soon, so I was considering using that flag to keep the app active in the background. However, I do not want to add that flag yet, because I want to post an app update before I add the location functionality.
I know the exceptions:
- Apps that play audible content to the user while in the background, such as a music player app
- Apps that keep users informed of their location at all times, such as a navigation app
- Apps that support Voice over Internet Protocol (VoIP)
- Newsstand apps that need to download and process new content
- Apps that receive regular updates from external accessories
Besides asking for a more general idea than the ones above, can someone please explain the "external accessory" flag? I am recording video from an outside device, but I do not know what constitutes an "external accessory".
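For what it's worth, the "external accessory" background mode refers to hardware that talks to the app through the ExternalAccessory framework (i.e. MFi accessories). A minimal sketch of how to probe for one:

```swift
import ExternalAccessory

// Sketch: list accessories currently connected through the
// ExternalAccessory framework. A device only shows up here if it is an
// MFi accessory whose protocol is declared in the app's Info.plist
// (UISupportedExternalAccessoryProtocols).
for accessory in EAAccessoryManager.shared().connectedAccessories {
    print(accessory.name, accessory.protocolStrings)
}
```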
I also see that iOS 7 has introduced new multitasking functionality, but I haven't seen any examples that I understand. Can someone explain that as well? Maybe that is a viable solution?
Thanks in advance!
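If the goal is only to let one method finish after the user leaves the app, rather than staying active indefinitely, a hedged sketch using the standard beginBackgroundTask API (which needs no UIBackgroundModes flag) might look like this:

```swift
import UIKit

// Sketch: ask the system for a short grace period (a few minutes at most,
// not indefinite execution) to finish work after entering the background.
// No UIBackgroundModes entry is required for this.
func finishWorkInBackground() {
    var taskID = UIBackgroundTaskIdentifier.invalid
    taskID = UIApplication.shared.beginBackgroundTask {
        // Expiration handler: time is up, clean up immediately.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
    DispatchQueue.global().async {
        // ... finish the method's remaining work here ...
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}
```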
I have managed to get seamless looping of WAV files using the SharpDX library.
But this does not seem to work while the app is minimised (in the background).
Using the Metro media players I do not get a seamless loop, which is why I use XAudio2 via the SharpDX library.
Hope someone can help with this.
When your app is in the background, it no longer has access to the CPU, so your audio will stop playing.
The only way around this is with a background agent running the audio component. The issue here is that the certification process will be hard on you if you are just playing looping audio: playing audio in the background is intended for audio-player apps (like the built-in "Music" app).
If I were a user of your app, I would likely be unhappy that it clogs up the audio system when it isn't in the foreground (if, for example, I went to answer a Lync call). If the only way to stop your app playing audio is to turn it off manually or exit the app, then in my opinion the user experience isn't great.
Of course, you may have a different opinion, or your app might be doing something I haven't considered.
I've seen the Google blog article explaining how to embed YouTube videos in an iOS app, and I've successfully embedded videos in my iOS app. But I've seen the WhoSampled app, where they are able to play Vevo/YouTube videos. Somehow they are able to play those videos and my app is not able to play them. (In my app, there is a blue play circle with a line through it, showing that the video is not playable.)
How do I allow those videos to be playable?
Do I have to set up the YouTube link in a specific way, or is there a way I need to set up the developer key?
There is a way to get the actual video stream for a given YouTube video instead of the HTML-embedded one. See this project:
https://github.com/hellozimi/HCYoutubeParser
The problem with this approach is that it will likely violate the YouTube API Terms of Service, especially if you are releasing it in a paid application:
In addition, please remember that attempting to play a YouTube video outside of either the YouTube embedded, custom or chromeless player is strictly prohibited by the API Terms of Service.
(source: https://developers.google.com/youtube/creating_monetizable_applications)
You just have to use the iframe embed as opposed to the Flash one.
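As a rough illustration (VIDEO_ID is a placeholder), loading the iframe embed into a web view looks something like this:

```swift
import WebKit

// Sketch: the iframe embed (as opposed to the old Flash <object> embed),
// wrapped in a minimal HTML page and loaded into a web view.
let embedHTML = """
<html><body style="margin:0">
<iframe width="100%" height="100%"
        src="https://www.youtube.com/embed/VIDEO_ID"
        frameborder="0" allowfullscreen></iframe>
</body></html>
"""

let webView = WKWebView(frame: .zero)
webView.loadHTMLString(embedHTML, baseURL: URL(string: "https://www.youtube.com"))
```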
I'm interested in seeing working code for composing an SMS/MMS programmatically on the latest iOS so that it includes a sound file. If the file is too big (I'm unsure of the maximum size at this time; any info is appreciated), an error should be displayed to the user.
I know this can be done, because the built-in recorder on the Apple iPhone allows sending audio files via a text message if they're not too big. I'd like to understand how it achieves this programmatically, what sound formats are available to me, and what the limitations are, if any.
You are not allowed to send MMS through the MessageUI framework, which is the framework iOS provides for developers to interact with the Messages interface. Apple uses private APIs in its own apps, and any use of private APIs means automatic rejection from the App Store.
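For contrast, here is a minimal sketch of what MessageUI does allow: presenting the system compose sheet for a plain text message.

```swift
import MessageUI
import UIKit

// Sketch: plain-text SMS through MessageUI. In a real app, set
// messageComposeDelegate so you can dismiss the sheet when it finishes.
func presentSMSComposer(from viewController: UIViewController) {
    guard MFMessageComposeViewController.canSendText() else {
        // Device can't send texts (e.g. an iPod touch or the simulator).
        return
    }
    let composer = MFMessageComposeViewController()
    composer.recipients = ["555-0100"]
    composer.body = "Check out this recording!"
    viewController.present(composer, animated: true)
}
```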
Raphael is right: there is currently no way in the current iOS version (iOS 5) to send an MMS using the MessageUI framework.
One potential workaround we've found is to create a "send MMS" screen where the user can attach their selected audio or picture; when the user hits the send button, the app calls a third-party MMS gateway to deliver the audio/image.
If I wanted to create a mobile app that allows the user to take pictures with their phone, record audio notes, and record video, how would I do that?
I was browsing through the Sencha Touch 2 API, and while I see documentation on video and audio files, it seems like it just provides a way for me to access files stored on the phone, not actual triggers to record or to take pictures.
Am I missing something?
How would I do what I want?
In order for Sencha Touch to have access to your phone's capabilities, you need to use a product like PhoneGap.
Unless there is an HTML5 API for doing those sorts of things, I don't think you can do that. I know PhoneGap has native extensions added to that platform for access to things like the microphone, camera, etc. I don't know whether Sencha Touch has added any such extensions that would let you do this.
Just thinking outside the box here, but you might be able to put Sencha JavaScript into a WebView inside an Android Java process. Then the Java code could expose an object in its process as an extension point to the JavaScript engine, giving access to the camera, microphone, and so on.
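The answer above describes the Android/Java side. As a hypothetical sketch, the same bridge idea on iOS would use WKWebView's script message handler (the "camera" handler name is illustrative only):

```swift
import WebKit

// Sketch: expose a native handler to JavaScript running in a web view.
// The page calls window.webkit.messageHandlers.camera.postMessage(...),
// and the native side reacts (e.g. by starting camera or mic capture).
class CameraBridge: NSObject, WKScriptMessageHandler {
    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        // Kick off native capture (UIImagePickerController, AVFoundation, ...) here.
        print("JS asked native code to: \(message.body)")
    }
}

func makeWebView() -> WKWebView {
    let config = WKWebViewConfiguration()
    config.userContentController.add(CameraBridge(), name: "camera")
    return WKWebView(frame: .zero, configuration: config)
}
```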