Posts in "howto"

iOS Machine Learning with Core ML and Vision


Sample different ML models on iOS using Core ML and Vision. Take a photo, or pick images from your photo library, and use pre-trained Core ML models to classify them. You’re only as good as your model! The source for this example can be found on GitHub. I’ve assumed a working knowledge of iOS using Swift and Storyboards.

Core ML and Vision

With Core ML and Vision, we use Vision to run image analysis requests against a trained Core ML model to attempt to classify scenes in images and videos. (This sample app covers image classification only – hoping to do a video classification app later too!) Each classification result pairs an object identifier with a confidence for the match, e.g. a 22% certainty that the image you just gave me is a box of cereal.

Sample app

The app will

  • allow image input via the camera, or the photo library
  • add a trained model from one of the image classification models (Resnet50, InceptionV3, or VGG16); these models are compiled, and become available via generated Swift classes
  • display its best classification on screen
  • provision for device (via automatic signing)

New Project

Create a Single View universal app in Swift. As an optional step, modify your Main.storyboard to use safe area layout guides: select Main.storyboard, select the first tab on the Utilities pane, and check ‘Use Safe Area Layout Guides’ on. The safe area is a new layout guide in iOS 11, deprecating top and bottom layout guide usage in AutoLayout constraints and making AutoLayout a bit easier in iOS 11.


Your hierarchy will change from this:

to this:

Views now contain safe area guides – bind your AutoLayout constraints to this guide.

Privacy usage

Add permission descriptions for camera and photo library usage in Info.plist, or the app will crash when it asks for access!
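The two keys to add are NSCameraUsageDescription and NSPhotoLibraryUsageDescription. The description strings below are just examples – use your own wording:

```xml
<key>NSCameraUsageDescription</key>
<string>Take a photo to classify it</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Pick a photo to classify it</string>
```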


I used a single view controller. Add these UI components:

  • A centred UIImageView, with a 50% height proportion to its superview. I set its aspect ratio to 1:1 (square), and its content mode to Aspect Fit. These changes allow for adjustment to orientation changes on the device.
  • A UILabel result text label, anchored vertically below the UIImageView, aligned to the leading and trailing edges of the UIImageView. Set its number of lines to 2, and align text to centre.
  • Two toolbar buttons for camera input and photo library image selection. Align leading, trailing and bottom to the safe area guide.
  • I added two sample images for quick testing in the simulator without picking any images: one of a cat, one of a monkey.

Save and run your app. Rotate the device – the image should centre as expected, with an offset text label below. It’ll work on both iPhone and iPad.

Add the Core ML model

Download a trained model, and drag it into your project folder. Make sure to include it in your target or it won’t compile to a Swift model. I used Resnet50. Since ML models are compiled, you’ll need to make a code change too. To find the model name, select your .mlmodel file, and click through to the source.

You’ll find the model name pretty easily – it contains an MLModel instance variable.

@objc class VGG16: NSObject {
    var model: MLModel


We’ll add code to

  • respond to camera input and image selection from the photo library
  • configure the ML model, set up Vision, and make a classification request
  • display the classification on-screen

Responding to camera/photo library input

In ViewController, we implement UIImagePickerControllerDelegate to handle both photo library picks and camera capture. Remember to wire them up to the toolbar button actions added in the storyboard.

    //MARK: - User actions
    @IBAction func pickImageTapped(_ sender: UIBarButtonItem) {
        let pickImageController = UIImagePickerController()
        pickImageController.delegate = self
        pickImageController.sourceType = .savedPhotosAlbum
        present(pickImageController, animated: true)
    }

    @IBAction func cameraButtonTapped(_ sender: UIBarButtonItem) {
        let pickImageController = UIImagePickerController()
        pickImageController.delegate = self
        pickImageController.sourceType = .camera
        pickImageController.cameraCaptureMode = .photo
        present(pickImageController, animated: true)
    }

Now, add referencing outlets for the picked image view and result label, and wire them up from the storyboard too. Then add protocol extensions for UIImagePickerControllerDelegate – it’ll respond to picked images from the library and camera.

//In ViewController
@IBOutlet weak var pickedImageView: UIImageView!
@IBOutlet weak var resultLabel: UILabel!

// Class Extensions
// MARK: - UIImagePickerControllerDelegate
extension ViewController: UIImagePickerControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        dismiss(animated: true)
        guard let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage else {
            print("pickedImage is nil")
            return
        }
        pickedImageView.image = pickedImage
        classifyImage(image: pickedImage)
    }
}

// MARK: - UINavigationControllerDelegate
extension ViewController: UINavigationControllerDelegate {
}

Image classification

Once you have an image, it’s time to classify it! In ViewController, declare your model as an instance variable, or use it inline:

let coreMLModel = Resnet50()

Now, add a method to classify an input image. This method will

  • convert the input image to a CIImage
  • instantiate a Vision VNCoreMLModel
  • create a Vision request (VNCoreMLRequest)
    • when the request is performed, we’ll receive an (optional) array of observations, ordered by confidence
    • each observation has an identifier with a level of confidence
    • update the UI with the identifier with the highest confidence – it’s the model’s best guess!
  • invoke a VNImageRequestHandler to perform the classification request
    /// Classify this image using a pre-trained Core ML model
    /// - Parameter image: picked image
    func classifyImage(image: UIImage) {
        guard let ciImage = CIImage(image: image) else {
            print("could not continue - no CIImage constructed")
            return
        }
        resultLabel.text = "classifying..."
        guard let trainedModel = try? VNCoreMLModel(for: coreMLModel.model) else {
            print("can't load ML model")
            return
        }
        let classificationRequest = VNCoreMLRequest(model: trainedModel) { [weak self] classificationRequest, error in
            guard let results = classificationRequest.results as? [VNClassificationObservation],
                let firstResult = results.first else {
                    print("unexpected result type from VNCoreMLRequest")
                    return
            }
            //for debug purposes - print all the classification results as a confidence percentage
            print("classifications: \(results.count)")
            let classifications = results
                //.filter({ $0.confidence > 0.001 })
                .map({ "\($0.identifier) \(String(format: "%.10f%%", Float($0.confidence) * 100))" })
            print(classifications.joined(separator: "\n"))
            //display first result only as a percentage (highest classification)
            DispatchQueue.main.async {
                self?.resultLabel.text = "\(Int(firstResult.confidence * 100))% \(firstResult.identifier)"
            }
        }
        //perform the image request on a background queue
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage)
        DispatchQueue.global(qos: .userInteractive).async {
            do {
                try imageRequestHandler.perform([classificationRequest])
            } catch {
                print(error)
            }
        }
    }

Provision for device

Since you’ll be running this on your phone/iPad, don’t forget to set up your device provisioning (Target..General..Signing). I used automatic provisioning.

Xcode 9. Run on device

With Xcode 9, you can run wirelessly on device. Make sure your computer and iPhone/iPad are on the same wireless network, select ‘Window…Devices and Simulators’, and check ‘Connect via network’. I found it’s a bit slower to install than a tethered device, but super convenient!


Here are some of my results. More testing is definitely needed!

Trained Core ML results


Try other Core ML models from Apple. When you add them to your project, make sure they’re included in your project target. You’ll need to change the model name in code too.

The sample source code for this app is available on GitHub.

Xcode 9. GitHub improvements

As part of Xcode 9, there’s better integration with GitHub. I added this source code to my GitHub account by

  • Source Control…Create Git Repositories
  • Add your credentials under Preferences..’Source Control Accounts’.
  • Switch to the Source Control Browser Pane, and select the Settings icon

Source Control Pane in Xcode 9

  • Create ‘CoreMLDetection’ Remote on GitHub

Sample Xcode9 GitHub Push

Seemed simple enough, but I’ll stick with a terminal window. It might be useful for tracking code changes, but I’m not sure yet.

Building iOS apps using xcodebuild with multiple configurations

(UPDATE: 29 Jan 2018. Tested on 9.2)

Xcode handles app signing automatically. This is excellent for single app distribution via TestFlight! But you might need more granular or manual control of your app builds: different apps for different environments, or apps provisioned for specific test users. Then you’re probably better suited to manual builds. Enter xcodebuild and xcconfig files!

If you’re looking for a quick way to get up and running with multi-config command-line builds, keeping you close to the Xcode toolset, keep on reading! xcconfig files are great for this purpose and can be used to include provisioning information too. It’s also an easy way to create your own CI shell script implementations.

For something more extensive, check out Fastlane and Buildkite. Fastlane is excellent for serious CI with a vast set of iOS tools. Buildkite is a very flexible build agent – I was up and running in minutes, and I’m impressed with its reliability. Their support was impressive too!

you’re going to need…

  • Your Apple Developer Team ID
  • Your App ID setup on the Apple Developer website.
  • Signing certificates and provisioning profiles for each environment, synced to your Xcode environment

start with the demo app

SigningTest is a demo app set up for multi-config usage. Source code is on GitHub. The project has been set up with debug, staging and release configurations.

  • Debug – configuration for building to developer devices
  • Staging – configuration for building to a staging environment; staging builds are aimed at a limited set of users (by device)
  • Release – release builds destined for the App Store

You’ll need to make some minor modifications to make it work with your Team, App ID and provisioning profile configs. The setup steps are below, followed by some explanations.


  • check out the SigningTestApp project from GitHub
  • Open the project and search for ‘DEVELOPMENT_TEAM’. It’ll be set to ‘YOURTEAMHERE’. Replace it with your Apple Team ID
    • from your project root in Terminal/Finder
      • modify exportOptions/adhoc.plist
        • "teamID": "YOURTEAMHERE". Replace with your Apple Team ID
        • "provisioningProfiles". Map your bundle ID to your provisioning profile name. Replace BUNDLE_ID, PROVISIONING_PROFILE_NAME with your adhoc build information
      • modify exportOptions/store.plist
        • "teamID": "YOURTEAMHERE". Replace with your Apple Team ID
        • "provisioningProfiles". Map your bundle ID to your provisioning profile name. Replace BUNDLE_ID, PROVISIONING_PROFILE_NAME with your App Store build information
  • I’ve configured three Xcode configurations in the Config folder. Each uses its own .xcconfig file. Modify or delete them to suit your needs.
    • Development – for developers on the project, with a profile for specific devices (developer devices)
    • Staging – an adhoc profile, with a profile for specific devices (testing devices)
    • Release – the app store distribution profile for the app (app store provisioning)
  • Edit the three .xcconfig files, and replace the values with your specific configs. Here are mine – yours should be different.

// Debug configuration
PRODUCT_NAME = SigningTestDebug
PRODUCT_BUNDLE_IDENTIFIER = org.sagorin.signingtest.debug.SigningTest

// Release configuration
PRODUCT_NAME = SigningTest
PRODUCT_BUNDLE_IDENTIFIER = org.sagorin.signingtest.SigningTest
PROVISIONING_PROFILE_SPECIFIER = SigningTest App Store Distribution

// Staging configuration
PRODUCT_NAME = SigningTestStaging
PRODUCT_BUNDLE_IDENTIFIER = org.sagorin.signingtest.staging.SigningTest
PROVISIONING_PROFILE_SPECIFIER = SigningTest Staging Distribution
  • Close your project, and re-open it. This step was necessary to pick up the values for the provisioning profiles per configuration. Strangely, I only needed to do this once.
  • Check the changes have been applied:
    • Project..Target..General..Signing (Debug, Staging, Release – they should all pick up the values from your xcconfig files)
  • Now run the app in the Simulator – make sure it launches.
  • From a Terminal window, at your project root, run the build script with one of the following configurations. It’ll build an IPA file – one per configuration:
 ./ Release
 ./ Staging
 ./ Debug

That’s it. You’ll have a shiny new .ipa in the ./build folder
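For reference, an adhoc export options plist along the lines the steps above describe might look like this sketch – the bundle ID and profile name here reuse the staging values from my configs, so substitute your own:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>method</key>
    <string>ad-hoc</string>
    <key>teamID</key>
    <string>YOURTEAMHERE</string>
    <key>provisioningProfiles</key>
    <dict>
        <key>org.sagorin.signingtest.staging.SigningTest</key>
        <string>SigningTest Staging Distribution</string>
    </dict>
</dict>
</plist>
```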

explain please!


Values in xcconfig files are automatically used per build configuration. In the sample app, check out Project..Info to see how they’re configured. For the sample app, I used a single App ID with a wildcard (org.sagorin.signingtest.*), and made three provisioning profiles. For a real app, you’ll probably use a unique Bundle ID. xcconfigs give you the flexibility to use different App IDs per configuration, allowing broader distribution of apps from one project.

build script

The build script is a very basic shell script. Using xcodebuild, it’ll create an archive, and then the .ipa. When creating the .ipa, we specify the IPA options – bitcode, dSYM upload, team information – via ‘exportOptions’. Run ‘xcodebuild -help’ for all the options available for exportOptions.

In the demo app, the build script can build for Debug, Staging and Release. Release builds use the store.plist – all the others use adhoc.plist.
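The script described above might be sketched like this. It’s a sketch, not the repo’s actual script: the scheme name, project file, output paths and dry-run fallback are my assumptions.

```shell
#!/bin/sh
# Per-configuration build sketch: ./build.sh Debug|Staging|Release
set -e

CONFIG="${1:-Release}"
SCHEME="SigningTest"                          # assumed scheme name
ARCHIVE="build/SigningTest-$CONFIG.xcarchive"

# Release builds use the store export options; everything else uses adhoc.
if [ "$CONFIG" = "Release" ]; then
  OPTIONS="exportOptions/store.plist"
else
  OPTIONS="exportOptions/adhoc.plist"
fi

# Fall back to printing the commands (dry run) if xcodebuild isn't available.
XCODEBUILD="$(command -v xcodebuild || echo echo)"

# 1. archive the app for the chosen configuration
"$XCODEBUILD" -project SigningTest.xcodeproj \
  -scheme "$SCHEME" -configuration "$CONFIG" \
  -archivePath "$ARCHIVE" archive

# 2. export the archive to an .ipa using the matching exportOptions plist
"$XCODEBUILD" -exportArchive \
  -archivePath "$ARCHIVE" \
  -exportOptionsPlist "$OPTIONS" \
  -exportPath build
```

The Release/adhoc switch mirrors the behaviour described above; everything else is plain xcodebuild.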

a bonus: cocoapods!

The demo project has a branch for CocoaPods, set up with a workspace with the same configs. I used a demo pod entry for Google Analytics.

  • Open the workspace file, not the project file, and follow the same setup instructions above.

Any questions or problems with the above example? Please email me!

Alert WWDC 2013 announcement using Pushover and Ruby

Last year I used WWDCAlerts for WWDC 2012. My alert arrived two hours after the tickets were released; luckily a friend let me know and I booked a ticket. This year I thought it’d be better to do it myself using Ruby and Pushover – a mobile alerting service for Android/iOS. Here’s how to set up a WWDC 2013 announcement alert using Pushover and Ruby. The idea came from this gist.

What the script does

Search the HTML of the Apple WWDC page for the 2012 WWDC graphic (‘wwdc2012-june-11-15.jpg’). If the graphic doesn’t exist, send a Pushover alert. The assumption is the page has been updated (hopefully) to reflect the WWDC 2013 announcement.


1. Sign up for Pushover (https://pushover.net)

– Create an account

– Create an app – you’ll need this to send Pushover alerts

– Download the iPhone/Android app, and test you can send and receive notifications using your app. Check out the Pushover FAQ for test clients.

– Make a note of your user key and app key – you’ll need it for the Ruby script.

2. Create the Ruby script


require 'rubygems'
require 'open-uri'
require 'net/https'

class WWDC2013
  def self.announced?
    begin
      indicator_line = open('https://developer.apple.com/wwdc/') do |f|
        # look for the current 2012 image.
        f.detect { |line| line =~ /wwdc2012-june-11-15.jpg/ }
      end
    rescue Exception => e
      print "Error: #{e.message}\n"
    end
    indicator_line == nil
  end

  def self.pushOneOff
    url = URI.parse("https://api.pushover.net/1/messages.json")
    req = Net::HTTP::Post.new(url.path)
    req.set_form_data({
      :token => "YOUR_APP_KEY",   # your Pushover app key
      :user => "YOUR_USER_KEY",   # your Pushover user key
      :message => "WWDC 2013 announced!",
    })
    res = Net::HTTP.new(url.host, url.port)
    res.use_ssl = true
    res.verify_mode = OpenSSL::SSL::VERIFY_PEER
    res.start { |http| http.request(req) }
  end
end

print Time.now.strftime("At %I:%M:%S%p WWDC 2013 has ")
if WWDC2013.announced?
  print "been announced!\n"
  WWDC2013.pushOneOff
else
  print "not been announced\n"
end

3. TEST your Ruby script

Run it on the command line – comment in ‘WWDC2013.pushOneOff’ to force a Pushover alert. (Make sure to comment it out when you’re done testing! It should only be invoked if WWDC 2013 is announced.)

4. Use cron to schedule your script to run

I scheduled mine to run every 12 minutes.
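An every-12-minutes crontab entry might look like this – the script path and ruby location are my assumptions:

```
# m    h  dom mon dow  command
*/12   *  *   *   *    /usr/bin/ruby /Users/you/scripts/wwdc2013_alert.rb
```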

See you at WWDC 2013!


Determine the NSIndexPath of a UITableViewCell when a sub-view is tapped

I do a fair amount of iOS consulting work for clients who have outsourced their iOS code development. My work usually involves code review and final steps to help them successfully submit their app to the App Store. While it will always work out cheaper for my clients to outsource (usually offshore), the quality of the code received by most is, um, questionable. My clients might get a functional UI which follows a specific design, but under the hood, the house is a mess!

An example of this: figuring out the NSIndexPath of a UITableViewCell when a sub-view (e.g. UIButton, UIImageView) is tapped within the cell. Two of my ‘favorite’ solutions:
– use the tag property on the button or the image view to store the index path. ugh.
– create a variable within the cell instance (assuming there was an abstraction of table view cell code!) to track the index path of the enclosing cell.

A much better approach – take advantage of the UIEvent associated with the user touch event. In this example, I have a reset button contained within my UITableViewCell with the following target:

[resetButton addTarget:self action:@selector(resetButtonTapped:withEvent:) forControlEvents:UIControlEventTouchUpInside];

For legibility, I’ve refactored the UITouch variable but it could be easily inlined too.

-(void) resetButtonTapped:(UIButton*)button withEvent:(UIEvent*)event {
    UITouch *touch = [[event allTouches] anyObject];
    NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:[touch locationInView:self.tableView]];
    NSLog(@"index path %@", indexPath);
}

Schedule iTunes to download Podcasts

Apple recently released all video sessions for WWDC 2011 – that’s about 24 GB for the SD sessions. If you’re like me, you might be limited by some archaic ISP data usage cap. I have on-peak (8am-2am) and off-peak (2am-8am) usage times, with my monthly usage cap split between the two. Hopefully you’re not like me.

Make full use of your data caps – download during off-peak times too. Using AppleScript and iCal, I set iTunes to download all WWDC video sessions at 2:10am – just after my off-peak usage starts.

1. Add podcast sessions for download

– Click through to each of the session groups and click ‘get tracks’ to add them all to the iTunes Store download queue. Do this for all – about 160 videos, 24 GB of video for the SD versions.
– Select all podcasts – some might be downloading – and pause all. (Every time you start iTunes, it’ll start downloading them; pause them again if necessary.)

2. Write an AppleScript to launch iTunes to update podcasts

– Open Script Editor, write and compile the following script:

tell application "iTunes"
	updateAllPodcasts -- update all subscribed podcasts
end tell

– save the script to your hard drive

3. Add a calendar entry in iCal to launch the script

– Open iCal and create a new calendar entry within your off-peak window – for me that’s anywhere between 2am and 8am. Change the alarm to ‘Run Script’ and point it at the script you wrote earlier.

iCal calendar entry to launch AppleScript


4. Energy Saver preferences

– Your Mac must be awake to launch iTunes. Open System Preferences, and select Energy Saver. Select ‘Schedule…’, and set it to wake up 1 or 2 minutes before the script is scheduled to run.

Energy Saver Schedule


– Check your computer’s sleep setting is set long enough to allow for the download. You can also set it to never sleep, or schedule the script to run over successive nights – iCal gives me the ability to run the script as often as needed.

WWDC SD video sessions: 160 × ~150 MB/video ≈ 24 GB

1 MB/s   → 60 MB/min  → ~6.7 hrs
1.5 MB/s → 90 MB/min  → ~4.4 hrs
2 MB/s   → 120 MB/min → ~3.3 hrs

Splitting your WordPress blog posts with ‘Read More’

Sometimes you just want to split your WordPress blog post up into a couple of paragraphs, followed by a ‘Read More’ link to the full posting. I do most of my digging in other frameworks and languages, so my WordPress customization skills are limited. I’ll install it, throw up a theme and do some basic customizations for you, but ask for more ‘advanced’ things and I’ll go digging on the Interwebs for the solutions. I landed up digging around in theme files looking for this option today, but it turns out it’s a lot simpler to add ‘Read More’ breaks than I thought.

When writing a blog post, insert a markup tag in the post to indicate ‘Read More’. In the WordPress blog editor, when editing in ‘Visual’ mode, type Alt-Shift-T, or click the 4th icon from the top right – ‘Insert More tag’. If you’re editing in HTML mode, add <!--more--> to the markup where you want the break.

Remember though – ‘Read More’ breaks are ignored in single-post templates, and are most useful when viewing lists of blog posts (e.g. WordPress categories). For a more detailed explanation of ‘Read More’ usage in content, check out the WordPress Codex for the the_content template tag. I RTFMed and it actually helped!

Migrating a WordPress site between two domains. Egg before the chicken?

You’ve set up a WordPress site on a staging domain with some initial content, and you’re migrating it to your production domain. When you attempt to log in to WordPress admin, you’re redirected to your staging site.

Why is this happening?
Oops. Remember to change the values in ‘General Settings’ for ‘WordPress Address’ and ‘Site Address’ before you migrate the site. If you forget, these values can be changed after migration.

Log in to your MySQL (or other) database and run the following SQL, substituting your production site’s URL:
update wp_options set option_value = 'http://www.your-production-domain.com' where option_name in ('home','siteurl');