Integrating PURGE into Mac OS X Application - objective-c

I want to run the command-line tool "purge" programmatically. I know that I can do a shell exec and call "purge", but my problem is that purge is not included in Mac OS X 10.6 and below; it only gets installed along with the Developer Tools.
I'm wondering how I can ship purge with my application, and/or install it if it is not there.
More info:
Platform: Mac OS X
IDE: Xcode 4.6
Lang: Obj-C

Don't.
The purpose of purge is described in the manual thus:
Purge can be used to approximate initial boot conditions with a cold disk
buffer cache for performance analysis.
The only effect it has on an end user's system is to make the system perform worse while the cache is repopulated. It should NOT be used as part of "memory cleaner" tools, since its effects are purely negative. (Indeed, such tools should not be used at all.) If the system actually needs memory, disk caches will be purged as necessary.
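If you are doing performance analysis yourself (the one legitimate use the manual describes), a minimal sketch of a cold-versus-warm-cache measurement looks like this; the file path is a placeholder, and on newer OS X releases purge may require sudo:
sudo purge                                # drop the disk buffer cache
time cat /path/to/big/file > /dev/null    # cold-cache read
time cat /path/to/big/file > /dev/null    # warm-cache read, served from memory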

Related

How to update a macOS application to support Catalina?

I have an old macOS application developed on Mojave with a deployment target of 10.12. Now, how do I update the application to support Catalina? Or does the application automatically support all future macOS versions?
When developing for macOS (or any other Apple platform, for that matter), there are two key concepts to take into account when thinking about compatibility:
The SDK version: this is the SDK you're compiling against and it is usually determined by the Xcode version you're using to build your project.
The Deployment Target: this is the lowest OS version you want to support.
Normally, if you have followed best practices in your code and all of your dependencies have done the same, updating an app for a new macOS version only requires downloading the latest Xcode on the latest macOS, building the app, and running your smoke tests (manually or through automated tests).
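For example, rebuilding from the command line against the new Xcode's SDK while keeping your existing deployment target looks roughly like this (the project and scheme names are placeholders):
xcodebuild -project MyApp.xcodeproj -scheme MyApp MACOSX_DEPLOYMENT_TARGET=10.12 clean build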
There may be things that have been deprecated in the meantime and Xcode will report them as warnings while building. You may read more about deprecated APIs in the macOS 10.15 release notes.
Keep in mind that you don't actually have to rebuild your app every time a new macOS version comes out. Even though it is better to test it at least once and dedicate time to explore and adopt new APIs, apps built on the previous version of macOS will, most of the time, run flawlessly on the next version (and maybe even further). This obviously depends on the app's complexity, so your mileage may vary.

Automatic packing of server-side product as Docker and OVA image

We develop a server-side solution, and to ease its deployment we would like to provide our customers with two options:
1. Docker image
2. VM image in OVA format
The images should be automatically created by our build machine.
As of today, we use Packer for this purpose. First we create a Docker image and then install that image into a preconfigured virtual machine image (using the 'virtualbox-ovf' builder). This works pretty well, but there are some problems with this solution.
First, our VM includes the Docker framework and two OSes (the host's and Docker's), so our VM image is roughly twice as big as the Docker image. Second, to base our solution on another Linux distro, we have to manually configure a new VM image.
We are looking for a 'Dockerfile'-style solution to create and configure the VM automatically and then export it in OVA format. The 'virtualbox-iso' builder is the obvious way to do this, but the build process would be much longer.
If you are willing to use Debian as your base OS then you could look at TurnKey Linux's TKLDev. It's probably a bit of a learning curve initially but it's a pretty cool thing IMO (although I'm very biased - see below disclaimer). TKLDev will build you a TurnKey (Debian based) ISO with your software installed on top. Then using Buildtasks you can convert the ISO to OVA, VMDK, LXC, Docker, OpenStack, etc...
Unfortunately Buildtasks is not very well documented, but significant chunks of it are in bash, so if you are handy with a Linux command line you could work it out. Otherwise, ask on the TurnKey forums.
The initial development (migrating from Packer to TKLDev) may take a little while, but once the heavy lifting is done, the creation of an ISO (in a guest VM on a modern multicore-CPU PC) takes about 10-15 mins, the OVA probably another ~5, and Docker another ~5.
If you wanted builds to happen automatically, you could use a hook to trigger a fresh TKLDev build (including the Buildtasks image creation) every time a commit is made to a repo. I know that git supports this, and I assume that other version control systems allow something similar.
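With git, that can be as simple as a post-receive hook on the build machine's repository; a minimal sketch, where the build script path is hypothetical:
#!/bin/sh
# .git/hooks/post-receive -- kick off a fresh TKLDev build on every push
/home/builder/run-tkldev-build.sh >> /var/log/tkldev-build.log 2>&1 &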
Also if the appliance that you are making is open source then perhaps it could be added to the TurnKey Linux library?
Disclaimer: I work with TurnKey Linux. :)
FWIW this is essentially the process we use to create our library of appliances in most virtualisation formats known to humankind!

Miscellaneous confusion about Xcode build settings (64/32 bits, SDK version, etc)

When I create a new OS X application project, I noticed several target options that confuse me quite a lot:
(1) The top-left setting of the Xcode window:
(2) The "Base SDK":
(3) "Deployment Target":
(4) Architectures:
Here comes my questions:
For (2) and (3), I think they are clearer to understand. This is how I comprehend them:
(2) This identifies what I develop with.
(3) This identifies what OS version my application will be used on.
Please tell me whether I am right...
But I could not understand (1). I just know that if I select 32-bit here, I cannot use ARC.
Nor do I understand (4). What are those? Do they represent the bit width of the CPU? And what is the difference between (1) and (4)?
I'll explain your items out of order.
The Base SDK
This defines the largest set of APIs you can use. You can use anything that existed as of the version number identified here. For example, if you use the 10.8 SDK, you can use -[NSColor CGColor] (introduced in 10.8), but not -[NSData base64EncodedDataWithOptions:] (first public in 10.9).
(Of course, you can also use anything older than that version.)
Accordingly, the SDK version is also known as the “max[imum] allowed” version in the Availability macros.
The SDK version also sometimes becomes important when Apple changes the behavior of an API. When they do that, they sometimes keep the old behavior around for applications linked with older SDKs. This is called an “on-or-after check”, as in “checks whether you're on 10.8 [SDK] or later”. (The concept and term pre-date Xcode having SDKs for each OS version. It used to just go by whatever OS you were running Xcode and building on.)
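If you're unsure which SDKs your installed Xcode provides, you can list them from the command line:
xcodebuild -showsdks    # lists every SDK this Xcode can build against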
The Deployment Target
This is the minimum OS version you require. If something was removed in a prior version (rare, but it happens), you can't use it.
This tends to affect link-time and run-time things more than compile-time things. For example, ARC won't work if your deployment target is 10.5 or earlier.
Accordingly, the Deployment Target is also known as the “min[imum] required” version in the Availability macros.
The Info.plist can also specify a minimum OS version. Nowadays, this is set by default, filled in by macro expansion from the Deployment Target.
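You can check the value a build actually produced with PlistBuddy (the app path is a placeholder):
/usr/libexec/PlistBuddy -c "Print :LSMinimumSystemVersion" MyApp.app/Contents/Info.plist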
The Architectures build setting
Different CPUs have different architectures. Essentially, they fit into broad categories, such as:
PowerPC 32-bit (ppc)
PowerPC 64-bit (ppc64)
Intel 32-bit (i386)
Intel 64-bit (x86_64)
ARM 32-bit
ARM 64-bit
(PowerPC architectures aren't supported anymore. You can add them to the Architectures list, as ppc and ppc64, but Xcode will just ignore them.)
Macs nowadays have Intel processors. Almost all Intel Macs have 64-bit processors. You only need to worry about 32-bit Intel if you want to support Macs all the way back to 2006. That's probably more hassle than it's worth.
iOS devices run ARM processors, and most are still 32-bit. The A7 (iPhone 5S, iPad Air, iPad Mini with Retina Display) is 64-bit. But, if you run on an iOS Simulator, it's running on your Mac (it's a Simulator, not an emulator), so it'll target an Intel architecture (formerly always i386, but probably can now be x86_64 if needed).
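You can check which of these architectures a given binary actually contains with lipo (the app path is a placeholder):
lipo -info MyApp.app/Contents/MacOS/MyApp
# prints e.g. "Architectures in the fat file: ... are: i386 x86_64"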
The “top-left setting of Xcode window”
This is the build scheme and run destination. (Yes, it's two separate things in one pop-up menu. Actually, it's two separate pop-up menus in one control. Try it.)
“My Mac 64-bit” is the run destination. You'll be running the 64-bit version of your app on your Mac, not in an iOS Simulator or on an iOS device. Your choice for a Mac app is merely which architecture you want to run, and they should behave the same (this is, obviously, something you sometimes need to test).
iOS apps have more choices here. Some apps are iPhone-only, some are iPad-only, some are universal, and some may be set to build for both 32-bit and 64-bit architectures. You'll have a Simulator offered for each combination of form factor and architecture (e.g., iPhone Simulator 64-bit) you can run on. You'll also have the option to run your app on any iOS device that's connected and enabled for development (you're prompted to enable the device when you plug it in while Xcode is running).
TL;DR
Deployment Target is the lowest OS version your app will run on.
Base SDK is the highest OS version you can use stuff from. If it didn't exist yet, it doesn't exist at all for you.
Architectures are the set of hardware your app will run on.
Run Destination is the hardware you're going to run it on from within Xcode.
As with most OSes these days, you can develop either a 32-bit or a 64-bit application. The "bitness" refers mostly to how memory addresses are structured (32-bit addresses can reach at most 4 GB; the 64-bit computation is left as an exercise to the reader). However, the chosen architecture usually has further implications, such as the missing ARC support for 32-bit apps, but also how wide the CPU registers are, how much memory a structure occupies in RAM, etc.
OS X also supports so-called fat binaries, that is, a bundle containing both 32-bit and 64-bit variants of your application. This is only needed if you normally prefer to run 64-bit code but also want your app to run on OS versions that support only 32-bit.
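You can force a particular slice of a fat binary to run with the arch tool, which is handy for testing both variants (the app path is a placeholder):
arch -i386 ./MyApp.app/Contents/MacOS/MyApp      # run the 32-bit slice
arch -x86_64 ./MyApp.app/Contents/MacOS/MyApp    # run the 64-bit slice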
In Xcode you can define which architectures to build your project for: 32-bit only, 64-bit only, or a fat bundle. In the project settings you set what is allowed, and in the top bar in Xcode you can quickly switch between the allowed architectures (your questions 1 and 4).
The Base SDK determines what you compile your application against. If you select, for instance, 10.7, you cannot use new APIs that were introduced in 10.8 or 10.9 (which may be perfectly OK if you want your application to run on earlier OS versions only). However, if you want to use new features dynamically when they are available, select the latest OS as the Base SDK and check in code which OS you are running on, using the new features only if they are present. It is totally OK to compile an application with access to new features and run it on older systems, as long as you don't use the new APIs there (they are late-bound and hence only crash when you access them for the first time and they are not available).
The deployment target determines the minimum OS version your application needs in order to run properly. This is a runtime check done when the application is started: the OS will refuse to start an application that is made for a later version.

OpenCl: Minimal configuration to work with AMD GPU

Suppose we have an AMD GPU (for example a Radeon HD 7970) and a minimal Linux system without X, etc.
What should be installed, what should be launched, and how should it be launched to have a proper OpenCL environment? Ideally it should be a headless environment.
Requirements for the environment:
The GPU is visible to OpenCL programs (clinfo, for example)
It is possible to monitor temperature and set fan speed (for example using aticonfig).
P.S. Simply installing an X server and Catalyst and running X :0 won't work properly. See X server with fglrx driver won't respond after exactly 49 accesses to X server
UPD: When you use an AMD GPU on Linux, OpenCL applications don't see the GPU if an X server isn't running.
I had a similar problem, asked a question, and succeeded in solving it myself.
For R9 290 cards and newer, I assume you have:
A kernel 4.14 or later built with amdgpu driver support. There is an option in the Linux kernel config under Graphics Support.
All necessary firmware .bin blobs incorporated. To do this easily under buildroot, you may edit the buildroot/package/linux-firmware/* contents and manually add a BR2_PACKAGE_LINUX_FIRMWARE_AMDGPU option yourself, alongside BR2_PACKAGE_LINUX_FIRMWARE_RADEON (use it as a template). Actually, we should post that update to their git.
When booting you should see the appropriate dmesg messages about amdgpu initializing, for each adapter, and the screen mode should be switched. If you still see large console text and no video mode switch occurred during init, then you have a problem in the kernel/firmware, and you should fix that first.
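A quick way to verify both points from a shell (assuming your distro ships the kernel config in /boot):
grep CONFIG_DRM_AMDGPU /boot/config-$(uname -r)   # should print =y or =m
dmesg | grep -i amdgpu                            # per-adapter initialization messages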
To answer the second question: controlling fan speeds/temperatures is achieved via the powerplay sysfs interface under /sys/class/drm/..., like this:
cd /sys/class/drm/card0/device/hwmon/hwmon0
echo 1 > pwm1_enable    # 1 = manual fan control (2 = automatic)
cat pwm1_max > pwm1     # set the fan PWM to its maximum value
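Temperature readings live in the same hwmon directory; amdgpu reports them in millidegrees Celsius:
cat /sys/class/drm/card0/device/hwmon/hwmon0/temp1_input   # e.g. 65000 = 65 °C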
You may dig a bit deeper and find the PowerTune parameters nearby, in the device folder.
But instead of using /sys/class/drm/card0/device/pp_dpm_sclk, I highly recommend flashing those values directly into the card's BIOS, set to the required frequencies/voltages, as that is more reliable, stable, and API-independent: you either init it or you don't :)
P.S. Also, put the 7970 away and buy something a bit newer. I don't know if it is still supported in the latest drivers; we don't have such an old card at hand right now. I tested the 290, 390, 480, and 580 card series (for the R9 270, the miner fails to build the CL code). For older cards it is better to use some older software (<=16.40) and maybe a slightly older kernel (<=4.13).

Does VirtualBox have any advantages over VMWare Player?

I've been using VMWare Player for ages now for both Windows development on my Linux box and (more importantly) automated testing of Windows applications.
Basically what I do is to:
have my development VM running and I build my code and automatically transfer the install package to Linux.
when this shows up at Linux, automatically copy a "known-state", snapshot VM to my test work area (I say snapshot but it's really just a backup copy of the whole directory, not a real VMWare snapshot).
also automatically start the VM in the work area once it's copied.
the VM has a single never-changing startup script which pulls a real startup script from Linux and runs it.
that startup script is responsible for getting down the install package and doing a silent install.
it then runs a test suite and uploads results back to Linux where I have automated scripts which check them.
So, it's basically a one-button test process.
Now I notice more and more people seem to be using VirtualBox.
First off, I'd like to confirm that it can also do a similar thing, primarily being able to back up and restore whole VMs and to have shared folders between VirtualBox and Linux.
Secondly, and this is the crux: I'd like to know if that has any concrete advantages over VMWare Player, especially for the automated testing jobs.
I switched to VirtualBox because of one concrete advantage: I wasn't able to set up the network the way I wanted in Player. I don't remember whether it was bridging or port forwarding or whatever that didn't work, but something about the network setup didn't behave the way I wanted (because I needed the paid version for that), and thus I switched. Personally, I've found that both have good and bad sides, but I still use VirtualBox because of that network thing.
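As for the first part of the question: yes, the whole workflow scripts the same way in VirtualBox via VBoxManage, including snapshot restore, shared folders, and headless startup. A minimal sketch, with the VM, snapshot, and folder names as placeholders:
VBoxManage snapshot "win-test" restore "known-state"    # roll back to a known-state snapshot
VBoxManage sharedfolder add "win-test" --name results --hostpath /srv/results --automount
VBoxManage startvm "win-test" --type headless           # boot with no GUI window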