Posts in "iphone"

iOS Machine Learning with Core ML and Vision

tl;dr

Sample different ML models on iOS using Core ML and Vision. Take a photo, or pick images from your photo library, and use pre-trained Core ML models to classify them. You’re only as good as your model! The source for this example can be found on GitHub. I’ve assumed a working knowledge of iOS development with Swift and Storyboards.

Core ML and Vision

With Core ML and Vision, we use Vision to run image analysis requests against a trained Core ML model to classify the content of images and videos. (This sample app covers image classification only – I’m hoping to do a video classification app later too!) Each classification result pairs an object identifier with a confidence score – e.g. a 22% certainty that the image you just gave me is a box of cereal.

Sample app

The app will

  • allow image input via the camera, or the photo library
  • add a trained model from one of the image classification models (Resnet50, InceptionV3, or VGG16) – these models are compiled and exposed as generated Swift classes
  • display its best classification on screen
  • provision for device (via automatic signing)

 New Project

Create a Single View Universal app in Swift. As an optional step, modify Main.storyboard to use safe area layout guides: select Main.storyboard, open the File inspector (the first tab in the Utilities pane), and check ‘Use Safe Area Layout Guides’. The safe area is a new layout guide in iOS 11 that deprecates the top and bottom layout guides in Auto Layout constraints, making Auto Layout a bit easier in iOS 11.

 

Your view hierarchy will change: views now contain a safe area layout guide – bind your Auto Layout constraints to this guide.
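The sample app builds its constraints in the storyboard. If you ever build them in code instead, the same idea looks roughly like this (a minimal sketch – the view names are illustrative):

import UIKit

class ViewController: UIViewController {
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(imageView)
        imageView.translatesAutoresizingMaskIntoConstraints = false

        // Pin to the safe area layout guide (iOS 11+) rather than the old top/bottom layout guides
        let guide = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            imageView.centerXAnchor.constraint(equalTo: guide.centerXAnchor),
            imageView.topAnchor.constraint(equalTo: guide.topAnchor, constant: 16),
            imageView.heightAnchor.constraint(equalTo: guide.heightAnchor, multiplier: 0.5),
            imageView.widthAnchor.constraint(equalTo: imageView.heightAnchor) // 1:1 aspect ratio
        ])
    }
}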

Privacy usage

Add permission descriptions for camera and photo library usage in Info.plist (NSCameraUsageDescription and NSPhotoLibraryUsageDescription), or the app will crash as soon as it asks for access!

Storyboard

I used a single view controller. Add these UI components:

  • A centred UIImageView, with its height set to 50% of its superview. I set its aspect ratio to 1:1 (square). Lastly, set its content mode to Aspect Fit. These settings let the image adjust to orientation changes on the device.
  • A UILabel result text label. Anchored vertically below the UIImageView, aligned to the leading and trailing edges of the UIImageView. Set its number of lines to 2, and align text to centre.
  • Two tool bar buttons for camera input and photo library image selection. Align leading, trailing, bottom to the safe area guide.
  • I added two sample images for quick testing in the simulator without picking any images. One of a cat, one of a monkey.

Save and run your app. Rotate the device – the image should center as expected, with the result text below it. It’ll work on both iPhone and iPad.

Add the Core ML model

Download a trained model, and drag it into your project folder. Make sure to include it in your target, or it won’t compile into a Swift model class. I used Resnet50. Since ML models are compiled into generated classes, you’ll need to match the class name in code too. To find the model name, select your .mlmodel file, and click through to the generated source.

You’ll find the model name pretty easily – it contains an MLModel instance variable.

@objc class VGG16: NSObject {
    var model: MLModel
    // … rest of the generated class
}

Code

We’ll add code to

  • respond to camera input and image selection from the photo library
  • configure the ML model, set up Vision and make a classification request
  • display the classification on-screen

Responding to camera/photo library input

In ViewController, we implement UIImagePickerControllerDelegate to handle both picking from the library and taking a photo with the camera. Remember to wire the toolbar buttons up to the actions added in the storyboard.

Now, add referencing outlets for the picked image view and the result label, and wire them up from the storyboard too. Add a protocol extension for UIImagePickerControllerDelegate so the controller can respond to images picked from the library or captured with the camera – a minimal sketch follows.
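Here’s roughly what that wiring looks like (Swift 4 / iOS 11-era APIs; the outlet and action names are illustrative and may differ from the sample project):

import UIKit

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Toolbar button actions – wire these up in the storyboard
    @IBAction func takePhoto(_ sender: Any) { presentPicker(.camera) }
    @IBAction func chooseFromLibrary(_ sender: Any) { presentPicker(.photoLibrary) }

    private func presentPicker(_ source: UIImagePickerControllerSourceType) {
        let picker = UIImagePickerController()
        picker.delegate = self          // needs UINavigationControllerDelegate conformance too
        picker.sourceType = source
        present(picker, animated: true)
    }

    // Called for images picked from the library or captured with the camera
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
        imageView.image = image      // picked image outlet
        classify(image: image)       // classification method added below
    }
}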

Image classification

Once you have an image, it’s time to classify it! In ViewController, declare your model as a static property, or use it inline.
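For example (assuming the Resnet50 model added earlier):

// In ViewController – load the generated model once and reuse it for every classification
static let model = Resnet50().model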

Now, add a method to classify an input image (a sketch follows this list). This method will

  • convert the input image to a CIImage
  • wrap the Core ML model in a Vision VNCoreMLModel
  • create a Vision request (VNCoreMLRequest)
    • when the request is performed, we’ll receive an (optional) array of observations, ordered by confidence
    • each observation pairs an identifier with a level of confidence
    • update the UI with the highest-confidence identifier – it’s the model’s best guess!
  • invoke a VNImageRequestHandler to perform the classification request
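Here’s a sketch of that method in ViewController (it assumes the static Resnet50 model above and a resultLabel outlet – your names may differ, and error handling is trimmed):

import UIKit
import CoreML
import Vision

func classify(image: UIImage) {
    // 1. Convert the UIImage into a CIImage for Vision
    guard let ciImage = CIImage(image: image) else { return }

    // 2. Wrap the generated Core ML model in a Vision model
    guard let visionModel = try? VNCoreMLModel(for: ViewController.model) else { return }

    // 3. Build the classification request – observations arrive sorted by confidence
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        DispatchQueue.main.async {
            // 4. Show the model's best guess on screen
            self.resultLabel.text = "\(best.identifier) – \(Int(best.confidence * 100))%"
        }
    }

    // 5. Perform the request against the image, off the main thread
    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}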

Provision for device

Since you’ll be running this on your phone/iPad, don’t forget to set up your device provisioning (Target..General..Signing). I used automatic provisioning.

Xcode 9: run on device

With Xcode 9, you can run wirelessly on a device. Make sure your computer and iPhone/iPad are on the same wireless network, select ‘Window…Devices and Simulators’, and check ‘Connect via network’. I found it’s a bit slower to install than on a tethered device, but super convenient!

Results

Here were some of my results. More testing is definitely needed!

Trained Core ML results

Models

Try other Core ML models from Apple. When you add one to your project, make sure it’s included in your app target. You’ll need to change the model class name in code too.

The sample source code for this app is available on GitHub.

Xcode 9: GitHub improvements

Xcode 9 ships with better GitHub integration. I added this source code to my GitHub account by:

  • Source Control…Create Git Repositories
  • Add your credentials under Preferences..’Source Control Accounts’.
  • Switch to the Source Control Browser Pane, and select the Settings icon

Source Control Pane in Xcode 9

  • Create ‘CoreMLDetection’ Remote on GitHub…

Sample Xcode 9 GitHub push

It seemed simple enough, but I’ll still take a terminal window. The integration might be useful for tracking code changes, but I’m not sure yet.

Building iOS apps using xcodebuild with multiple configurations

Xcode 8 handles app signing automatically. This is excellent for single-app distribution via TestFlight! But you might need more granular or manual control of your app builds: different apps for different environments, or apps provisioned for specific test users. In that case, manual builds are probably a better fit.

If you’re looking for a quick way to get up and running with multi-config command-line builds, while staying close to the Xcode toolset, keep on reading! .xcconfig files are great for this purpose and can carry provisioning information too. It’s also an easy way to create your own CI shell scripts.

For something more extensive, check out Fastlane and Buildkite. Fastlane is excellent for serious CI, with a vast set of iOS tools. Buildkite is a very flexible build agent – I was up and running in minutes, and its reliability and support have impressed me.

you’re going to need…

  • Your Apple Developer Team ID
  • Your App ID set up on the Apple Developer website.
  • Signing certificates and provisioning profiles for each environment, synced to your Xcode 8 environment

start with the demo app

SigningTest is a demo app set up for multi-config usage. The source code is on GitHub. The project has been set up with debug, staging and release configurations.

  • Debug – configuration for building to developer devices
  • Staging – configuration for building to a staging environment; builds aimed at a limited set of users (by device)
  • Release – release builds destined for the App Store

You’ll need to make some minor modifications to make it work with your team, App ID and provisioning profile configs. The setup steps are below, followed by some explanations.

Setup

  • check out the SigningTestApp project from GitHub
  • Open the project and search for ‘DEVELOPMENT_TEAM’. It’ll be set to ‘YOURTEAMHERE’. Replace it with your Apple Team ID
    • from your project root in Terminal/Finder
      • modify exportOptions/adhoc.plist “teamID”:”YOURTEAMHERE”. Replace it with your Apple Team ID
      • modify exportOptions/store.plist “teamID”:”YOURTEAMHERE”. Replace it with your Apple Team ID
  • I’ve configured three Xcode configurations in the Config folder. Each uses its own .xcconfig file. Modify or delete them to suit your needs.
    • Development – for developers on the project, with a profile for specific devices (developer devices)
    • Staging – an ad hoc profile, with a profile for specific devices (testing devices)
    • Release – the App Store distribution profile for the app (app store provisioning)
  • Edit the three .xcconfig files and replace the values with your specific configs. Here are mine – yours will be different.
Debug.xcconfig

Release.xcconfig

Staging.xcconfig

  • Close your project, and re-open it. This step was necessary to pick up the provisioning profile values per configuration. Strangely, I only needed to do this once.
  • Check the changes have been applied:
    • Project..Target..General..Signing – Debug, Release and Staging should all show your provisioning profiles
  • Now run the app in the Simulator – make sure it launches.
  • From a Terminal window, at your project root, run the build script for one of the configurations – it’ll build an IPA file per configuration (see the build script notes below).

That’s it. You’ll have a shiny new .ipa in the ./build folder.

explain please!

xc-configs

Values in .xcconfig files are automatically applied per build configuration. In the sample app, check out Project..Info to see how they’re configured. For the sample app, I used a single App ID with a wildcard (org.sagorin.signingtest.*) and made three provisioning profiles. For a real app, you’ll probably use a unique bundle ID. .xcconfig files give you the flexibility to use different App IDs per configuration, which allows for broader distribution of apps from one project.
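For reference, here’s roughly what one of the .xcconfig files might contain (placeholder values only – substitute your own bundle ID, profile name and team):

// Debug.xcconfig – placeholder values
PRODUCT_BUNDLE_IDENTIFIER = org.sagorin.signingtest.debug
PROVISIONING_PROFILE_SPECIFIER = SigningTest Development
CODE_SIGN_IDENTITY = iPhone Developer
DEVELOPMENT_TEAM = YOURTEAMHERE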

build script

The build script is a very basic shell script. Using xcodebuild, it creates an archive, and then the .ipa. When creating the .ipa, we specify the export options – bitcode, dSYM upload, team information and so on – in an ‘exportOptions’ plist. Run ‘xcodebuild -help’ to see all the options available for exportOptions.

In the demo app, the build script can build for Debug, Staging and Release. Release builds use the store.plist – all the others use adhoc.plist.
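Under the hood, the two xcodebuild steps look roughly like this (the scheme name is an assumption – the demo’s script takes the configuration and plist as parameters):

# 1. Archive the app for the chosen configuration
xcodebuild archive \
  -project SigningTest.xcodeproj \
  -scheme SigningTest \
  -configuration Staging \
  -archivePath build/SigningTest.xcarchive

# 2. Export the .ipa using the matching exportOptions plist
xcodebuild -exportArchive \
  -archivePath build/SigningTest.xcarchive \
  -exportPath build \
  -exportOptionsPlist exportOptions/adhoc.plist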

a bonus: cocoapods!

The demo project has a branch for CocoaPods, set up with a workspace using the same configurations. I used a demo pod entry for Google Analytics.

  • Open the workspace file, not the project file, and follow the same setup instructions above.

Any questions or problems with the above example? Please email me!

Determine the NSIndexPath of a UITableViewCell when a sub-view is tapped

I do a fair amount of iOS consulting work for clients who have outsourced their iOS development. My work usually involves code review and the final steps to help them successfully submit their app to the App Store. While it will always work out cheaper for my clients to outsource (usually offshore), the quality of the code most of them receive is, um, questionable. My clients might get a functional UI which follows a specific design, but under the hood, the house is a mess!

An example of this – figuring out the NSIndexPath of a UITableViewCell when a sub-view (e.g. UIButton, UIImageView) is tapped within the cell. Two of my ‘favorite’ solutions:
– use the tag property on the button or the image view to store the index path. ugh.
– create a variable within the cell instance (assuming there was an abstraction of table view cell code!) to track the index path of the enclosing cell.

A much better approach – take advantage of the UIEvent associated with the user touch event. In this example, I have a reset button contained within my UITableViewCell with the following target:

[resetButton addTarget:self action:@selector(resetButtonTapped:withEvent:) forControlEvents:UIControlEventTouchUpInside];

For legibility, I’ve refactored the UITouch variable but it could be easily inlined too.

- (void)resetButtonTapped:(UIButton *)button withEvent:(UIEvent *)event {
    // Grab any touch from the event and find its location within the table view
    UITouch *touch = [[event allTouches] anyObject];
    NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:[touch locationInView:self.tableView]];
    NSLog(@"index path %@", indexPath);
}

Baseball Coin 2013

Baseball Coin is a baseball salary app for viewing team and payroll info for professional baseball players. There have been two versions of the app – one for each of the 2012 and 2013 seasons. The latest 2013 update included data for the 2013 season, support for iPhone 5, and design changes for viewing team information.

Screenshots: team detail, teams, player detail, players, search

 


App Store

Fonzi

Fonzi was a musical jukebox app. It was the first music app I worked on, and it was really fun to build. It allowed users to play and queue songs for playback at parties through one host. Fonzi was well suited to parties, but could be used anywhere for playing and listening to music with friends or people in the office. All you needed was speakers, Wi-Fi or 3G, and some music. You brought your device, and Fonzi brought the jukebox. It supported iPhone and iPod touch on iOS 4.3+.

I wrote both the app and its back-end API (Ruby Sinatra with Redis caching). The founders were based in New York, NY.

 

Fonzi screenshots: Play Music, Party Activity, Party Info

Back button missing on iPhone detail views

Make sure to give parent view controllers a page title. If you don’t, any child view controllers pushed onto the navigation stack will be missing their back buttons:

Back button missing

The navigation will still work – tap the left-most corner, and you’ll be taken back to the root view. In this example, I set the view title before display in RootViewController.m:

set the page title
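The fix is a one-liner (the title text below is just an example):

// RootViewController.m – give the parent view controller a title before it's displayed
- (void)viewDidLoad {
    [super viewDidLoad];
    self.title = @"Root";
}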

Checking the navigation again:

Back button visible

Not intuitive.