Shipment Tracking in iOS - Objective-C

iOS 4 automatically detects tracking numbers found in emails, notes, and messages and turns them into clickable links.
It redirects to a URL like this:
http://trackingshipment.apple.com/?Company=UPS&Locale=&TrackingNumber=1Z1234567890123456
How can we use this API or library in our own iOS apps so they will automatically detect, or can be forced to detect, shipping numbers?

Unfortunately, the publicly released data detector types don't include common-carrier tracking numbers. I wrote a small project showing how to detect UPS, USPS, and FedEx package numbers and got pretty good results.
You'll have to do the work of assembling the tracking URLs yourself, but this sample code may help you get started. Download here.
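In the meantime, here's a minimal sketch of the same idea using NSRegularExpression. The UPS pattern and the message text are illustrative assumptions (real carriers use several formats), so treat the regex as a starting point rather than the project's actual code:

```objc
#import <Foundation/Foundation.h>

// Illustrative only: UPS numbers commonly start with "1Z" followed by
// 16 alphanumeric characters; USPS and FedEx use different formats.
static NSString * const kUPSPattern = @"\\b1Z[0-9A-Z]{16}\\b";

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSString *message = @"Your package 1Z1234567890123456 has shipped.";

        NSError *error = nil;
        NSRegularExpression *regex =
            [NSRegularExpression regularExpressionWithPattern:kUPSPattern
                                                      options:0
                                                        error:&error];

        [regex enumerateMatchesInString:message
                                options:0
                                  range:NSMakeRange(0, message.length)
                             usingBlock:^(NSTextCheckingResult *result,
                                          NSMatchingFlags flags, BOOL *stop) {
            NSString *number = [message substringWithRange:result.range];
            // Assemble the same URL format the built-in detector produces.
            NSString *urlString = [NSString stringWithFormat:
                @"http://trackingshipment.apple.com/?Company=UPS&Locale=&TrackingNumber=%@",
                number];
            NSLog(@"Found %@ -> %@", number, urlString);
        }];
    }
    return 0;
}
```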

The class being used to do this is called NSDataDetector.
It is a subclass of NSRegularExpression that lets you specify built-in patterns to look for.
The list of built-in type values in the NSTextCheckingType enum can be seen here.
I don't see one specifically for tracking information, but the closest thing appears to be NSTextCheckingTypeTransitInformation. That is most likely the one you're going to be using.
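For completeness, here's a hedged sketch of NSDataDetector with that type. Be aware that in practice the transit checking type matches flight-style information rather than package numbers, so this is a starting point, not a drop-in solution; the sample text is made up.

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSError *error = nil;
        NSDataDetector *detector =
            [NSDataDetector dataDetectorWithTypes:NSTextCheckingTypeTransitInformation
                                            error:&error];
        // Made-up sample text: transit matches are flight-style info.
        NSString *text = @"We're booked on United flight UA 123 tomorrow.";

        [detector enumerateMatchesInString:text
                                   options:0
                                     range:NSMakeRange(0, text.length)
                                usingBlock:^(NSTextCheckingResult *result,
                                             NSMatchingFlags flags, BOOL *stop) {
            // Details arrive in the components dictionary,
            // e.g. NSTextCheckingAirlineKey / NSTextCheckingFlightKey.
            NSLog(@"Transit info: %@", result.components);
        }];
    }
    return 0;
}
```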
Good luck!


Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS and I'm hoping to add a new feature to the app - keeping a Mac awake while in closed-display (clamshell) mode while not having a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake while in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with 2 possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store-compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
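For reference, applying a power assertion with the public IOPMLib API looks like the sketch below. The private types the question lists (DenySystemSleep, UserIsActive, etc.) appear, per IOPMLibPrivate.h, to be plain string constants, so presumably they'd be passed to this same call; that part is an untested assumption on my end.

```objc
#import <Foundation/Foundation.h>
#import <IOKit/pwr_mgt/IOPMLib.h>

int main(void) {
    // Public, documented assertion type; swapping in a private type string
    // from IOPMLibPrivate.h would be the untested (and risky) part.
    IOPMAssertionID assertionID;
    IOReturn result = IOPMAssertionCreateWithName(
        kIOPMAssertionTypePreventUserIdleSystemSleep,
        kIOPMAssertionLevelOn,
        CFSTR("Keep-awake experiment"),   // human-readable reason
        &assertionID);

    if (result == kIOReturnSuccess) {
        // ... do the work that needs the Mac awake ...
        IOPMAssertionRelease(assertionID);
    }
    return 0;
}
```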
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
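I haven't tested whether synthetic events satisfy the clamshell keyboard/mouse check, but as a sketch of user-space event injection with documented APIs (a simpler alternative to IOHIDPostEvent()), Quartz Event Services can post HID-level events without any driver:

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

int main(void) {
    @autoreleasepool {
        // Post a synthetic mouse-moved event at the HID level. Whether the
        // system treats this as "a mouse is connected" for closed-display
        // mode is exactly the open question above.
        CGEventRef move = CGEventCreateMouseEvent(NULL,
                                                  kCGEventMouseMoved,
                                                  CGPointMake(500.0, 500.0),
                                                  kCGMouseButtonLeft); // ignored for move events
        CGEventPost(kCGHIDEventTap, move);
        CFRelease(move);
    }
    return 0;
}
```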

Record sound of one application

I want to develop an application for Mac OS X to record audio from one application.
I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded by every application that uses Core Audio, and I can then communicate with the plug-in to grab the audio.
My question is: what does such a plug-in look like? Are there examples on the internet? I have not found anything about this topic.
Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.
What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU, and capture the data from AUHAL. This is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can just hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
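As a rough sketch of the first option, this is how you'd open AUHAL in place of the default output unit so you can attach your own capture logic. It's the standard AudioComponent dance; the hooking-into-the-target-app part is left out:

```objc
#import <AudioUnit/AudioUnit.h>

// Locate and open AUHAL instead of the default output unit.
static AudioUnit OpenAUHALOutput(void) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_HALOutput, // vs. kAudioUnitSubType_DefaultOutput
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit outputUnit = NULL;
    if (AudioComponentInstanceNew(comp, &outputUnit) != noErr) return NULL;
    if (AudioUnitInitialize(outputUnit) != noErr) return NULL;

    // From here, add a render-notify callback to observe (and save) the
    // audio flowing through the unit, e.g.:
    // AudioUnitAddRenderNotify(outputUnit, MyRenderCallback, NULL);
    return outputUnit;
}
```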
So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to find your replacement AU and set it as the current output, in place of the default. On the other hand, if the app writes directly to an AU that it stores in a variable, you have to hook it so it stores your AU in that variable instead. And if it's building a graph of AUs, you either replace the default output, or stick yours in front of it in the graph.
See TN2091 for some sample code fragments for most of the hard parts for most of the possibilities. It doesn't show you how to put them together, and it's got a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.
If you haven't yet built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you really need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half of the reason.)
You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page: it's a placeholder for a complete tutorial that nobody's ever written, but it has links to a lot of useful stuff.
Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.

Allowed vCard elements

I create vCards (.vcf files) in some apps of mine. Now a customer needs to add both a private mobile and a business mobile number to a vCard.
I have tried adding the same attribute (TEL;CELL;VOICE) to the vCard twice, but this doesn't seem to be supported (at least Outlook only takes the first instance).
Is there an up-to-date description of all the fields I can add to vCards, and of what is allowed and what the don'ts are?
When I search the web I find a lot of information, but most of it is very old, and it seems that different clients only support a subset of the attributes.
Check the version of vCard that Outlook supports vs. the version to which you are coding.
I think that Outlook supports only vCard ver 2.x and perhaps that version does not support what you are trying to do.
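For what it's worth, in vCard 3.0 (RFC 2426) the way to carry two mobile numbers is two TEL properties distinguished by TYPE values, as in the illustrative snippet below. Whether Outlook imports both depends on its vCard version support, so treat this as a format illustration rather than a guaranteed Outlook fix:

```
BEGIN:VCARD
VERSION:3.0
FN:Jane Example
TEL;TYPE=CELL,HOME:+1-555-0100
TEL;TYPE=CELL,WORK:+1-555-0101
END:VCARD
```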
Maybe the cardme project will make you happy.
Many other projects exist, but this one was the best from my point of view.

What is a good way to implement a message board or other common UI plugins

I am thinking there must be libraries out there that people have developed which can be used as "plugins", or whatever people call them, to handle simple and common UI tasks.
I am using the message board idea as just an example, but I am looking for a general solution. For example, is there a place where I can browse "gems" for RoR that just take care of some UI component?
How do people usually integrate such pieces as a message board present at the bottom of every page, or some other UI tool, without writing their own or using a CMS?
Thanks,
Alex
Two good places to browse gems are http://ruby-toolbox.com/ and of course http://rubygems.org/

How can I change the audio output device in Objective-C?

I'm building a simple Cocoa app and I want to direct the audio output to a specific device, instead of the system selected one. I know some apps, like Skype, let you select where to send the output to. How do they do this?
I tried the MTCoreAudio framework but I can't even compile my app (or their AudioMonitor demo) with it included and the errors aren't helpful (_objc_fatal). Are there any complete examples that I can learn from? So far my searches haven't turned anything up.
Thanks!
The CAPlayThrough example on the Mac Dev Center Sample Code library shows how to list all of the available input and output devices, and select a default device from a menu.
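If you just want the device plumbing without digging through CAPlayThrough, here's a rough sketch using the CoreAudio C API: it enumerates the devices and then changes the system default output. Note that Skype-style per-app routing is done differently (by setting kAudioOutputUnitProperty_CurrentDevice on your own output unit) rather than by touching the system default; device index 0 is used purely for illustration.

```objc
#import <Foundation/Foundation.h>
#import <CoreAudio/CoreAudio.h>
#include <stdlib.h>

int main(void) {
    @autoreleasepool {
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDevices,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };

        // Find out how many devices there are, then fetch their IDs.
        UInt32 size = 0;
        AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);
        UInt32 count = size / sizeof(AudioDeviceID);
        AudioDeviceID *devices = malloc(size);
        AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

        for (UInt32 i = 0; i < count; i++) {
            CFStringRef name = NULL;
            UInt32 nameSize = sizeof(name);
            AudioObjectPropertyAddress nameAddr = {
                kAudioObjectPropertyName,
                kAudioObjectPropertyScopeGlobal,
                kAudioObjectPropertyElementMaster
            };
            AudioObjectGetPropertyData(devices[i], &nameAddr, 0, NULL, &nameSize, &name);
            NSLog(@"Device %u: %@", (unsigned)devices[i], name);
            if (name) CFRelease(name);
        }

        // Make one of them the system default output (index 0 for illustration).
        AudioObjectPropertyAddress defaultAddr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        AudioObjectSetPropertyData(kAudioObjectSystemObject, &defaultAddr, 0, NULL,
                                   sizeof(AudioDeviceID), &devices[0]);
        free(devices);
    }
    return 0;
}
```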
Have you looked through the sample code on http://developer.apple.com ?
Look at these projects http://developer.apple.com/mac/library/navigation/index.html?section=Resource+Types&topic=Sample+Code
Namely the DefaultAudioUnit project.
I should say that working with Core Audio is more challenging than Cocoa. Most of the APIs are C-based (I find that harder). You should read the Core Audio Programming Guide as well to get a sense of how the audio system is put together.