Posts in "ipad"

iOS Machine Learning with Core ML and Vision

tl;dr

Sample different ML models on iOS using Core ML and Vision. Take a photo, or pick images from your photo library, and use pre-trained Core ML models to classify them. You’re only as good as your model! The source for this example can be found on GitHub. I’ve assumed a working knowledge of iOS using Swift and Storyboards.

Core ML and Vision

With Core ML and Vision, we use Vision to run image analysis requests against a trained Core ML model to attempt to classify scenes in images and videos. (This sample app covers image classification only – hoping to do a video classification app later too!) Each classification result pairs an object identifier with a confidence for the match, e.g. a 22% certainty that the image you just gave me is a box of cereal.

Sample app

The app will

  • allow image input via the camera, or the photo library
  • add a trained model from one of Apple’s image classification models (Resnet50, InceptionV3, or VGG16). These models are compiled, and become available via generated Swift classes
  • display its best classification on screen
  • provision for device (via automatic signing)

New Project

Create a Single View universal app in Swift. As an optional step, modify your Main.storyboard to use safe area layout guides. Select Main.storyboard, open the first tab of the Utilities pane, and check ‘Use Safe Area Layout Guides’ on. The safe area is a new layout guide in iOS 11 that deprecates the top and bottom layout guides in AutoLayout constraints, making AutoLayout a bit easier in iOS 11.

(Screenshots: the document outline before and after enabling safe area layout guides.)

Views now contain a Safe Area layout guide – bind your AutoLayout constraints to this guide.
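If you build constraints in code rather than in the storyboard, you bind to the same guide programmatically. A minimal sketch, assuming a hypothetical imageView property (not from the sample project):

// Pin a view to the safe area layout guide in code (iOS 11+).
// `imageView` is a placeholder name for illustration only.
imageView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    imageView.centerXAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerXAnchor),
    imageView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 16)
])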

Privacy usage

Add permission descriptions for camera and photo library usage in Info.plist – the NSCameraUsageDescription and NSPhotoLibraryUsageDescription keys – or she won’t run!

Storyboard

I used a single view controller. Add these UI components:

  • A centred UIImageView, 50% height proportion to its superview. I set its aspect ratio to 1:1 (square). Lastly, set its content mode to Aspect Fit. These settings let the image adjust to orientation changes on the device.
  • A UILabel result text label. Anchored vertically below the UIImageView, aligned to the leading and trailing edges of the UIImageView. Set its number of lines to 2, and align text to centre.
  • Two toolbar buttons for camera input and photo library image selection. Align leading, trailing, and bottom to the safe area guide.
  • I added two sample images for quick testing in the simulator without picking any images. One of a cat, one of a monkey.

Save and run your app. Rotate the device – the image should centre as expected, with the result text offset below it. It’ll work on both iPhone and iPad.

Add the Core ML model

Download a trained model, and drag it into your project folder. Make sure to include it in your target or it won’t compile to a Swift model. I used Resnet50. Since ML models are compiled into Swift classes, the class name you reference in code depends on the model you add. To find the model name, select your .mlmodel file, and click through to the generated source.

You’ll find the model name pretty easily – it contains an MLModel instance variable.

@objc class VGG16: NSObject {
    var model: MLModel

Code

We’ll add code to

  • respond to camera input and image selection from the photo library
  • configure the ML model, set up Vision, and make a classification request
  • display the classification on-screen

Responding to camera/photo library input

In ViewController, we implement UIImagePickerControllerDelegate for both picking from the photo library and using the camera. Remember to wire the toolbar buttons added in the storyboard up to these actions.
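Here’s a minimal sketch of those actions, added to ViewController – the method names (cameraTapped, photoLibraryTapped, presentPicker) are my own, not from the sample project, and I’m using current UIKit naming:

@IBAction func cameraTapped(_ sender: UIBarButtonItem) {
    presentPicker(source: .camera)
}

@IBAction func photoLibraryTapped(_ sender: UIBarButtonItem) {
    presentPicker(source: .photoLibrary)
}

// Present an image picker for the requested source, if the device supports it.
private func presentPicker(source: UIImagePickerController.SourceType) {
    guard UIImagePickerController.isSourceTypeAvailable(source) else { return }
    let picker = UIImagePickerController()
    picker.sourceType = source
    picker.delegate = self
    present(picker, animated: true)
}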

Now, add referencing outlets for the picked image view and the result label, and wire them up from the storyboard too. Then add a protocol extension for UIImagePickerControllerDelegate so the controller can respond to images picked from the library or the camera.
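A sketch of that extension, assuming outlets named imageView and resultLabel and the classify(image:) method from the next section (again, the names are mine):

// UINavigationControllerDelegate is required alongside UIImagePickerControllerDelegate.
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        imageView.image = image
        classify(image: image)   // defined in the next section
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        picker.dismiss(animated: true)
    }
}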

Image classification

Once you have an image, it’s time to classify it! In ViewController, declare your model as a static ivar, or use it inline.
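For example – Resnet50 here is the class Xcode generated from Resnet50.mlmodel, and the static property name is my own:

// Inside ViewController (remember to import Vision and CoreML at the top of the file).
// Resnet50 is the class Xcode generated from Resnet50.mlmodel.
static let visionModel = try? VNCoreMLModel(for: Resnet50().model)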

Now, add a method to classify an input image (a full sketch follows the list below). This method will

  • convert the input image to a CIImage
  • instantiate a Vision Core ML model (VNCoreMLModel)
  • create a Vision request (VNCoreMLRequest)
    • when the request is performed, we’ll receive an (optional) array of observations, ordered by confidence
    • each observation has an identifier with a level of confidence
    • update the UI with the identifier with the highest confidence – it’s the model’s best guess!
  • invoke a VNImageRequestHandler to perform the classification request
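Putting those steps together, a sketch of the classification method – it assumes the static visionModel property from the snippet above and a resultLabel outlet, both hypothetical names:

func classify(image: UIImage) {
    // 1. Convert the UIImage to a CIImage for Vision.
    guard let ciImage = CIImage(image: image),
          let model = ViewController.visionModel else { return }

    // 2. Build the Vision request against the Core ML model.
    let request = VNCoreMLRequest(model: model) { request, _ in
        // 3. Observations come back ordered by confidence – take the best one.
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        DispatchQueue.main.async {
            self.resultLabel.text = "\(best.identifier) (\(Int(best.confidence * 100))%)"
        }
    }

    // 4. Perform the request off the main thread via an image request handler.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        try? handler.perform([request])
    }
}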

Provision for device

Since you’ll be running this on your phone/iPad, don’t forget to set up your device provisioning (Target…General…Signing). I used automatic provisioning.

Xcode 9. Run on device

With Xcode 9, you can run wirelessly on device. Make sure your computer and iPhone/iPad are on the same wireless network, select ‘Window…Devices and Simulators’, and check ‘Connect via network’. I found it’s a bit slower to install than over a tethered connection, but super convenient!

Results

Here are some of my results. More testing is definitely needed!

(Screenshot: trained Core ML classification results.)

Models

Try other Core ML models from Apple. When you add one to your project, make sure it’s included in your project target. You’ll need to change the model name in code too, as below.
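Swapping models is then a one-line change – for example, with Apple’s Inception v3 model added instead (the generated class name matches the .mlmodel file you dragged in, so check the generated source if in doubt):

// Only the generated class changes; the Vision code stays the same.
static let visionModel = try? VNCoreMLModel(for: Inceptionv3().model)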

The sample source code for this app is available on GitHub.

Xcode 9. GitHub improvements

As part of Xcode 9, there’s better integration with GitHub. I added this source code to my GitHub account by:

  • Source Control…Create Git Repositories
  • Add your credentials under Preferences…‘Source Control Accounts’
  • Switch to the Source Control browser pane, and select the Settings icon

(Screenshot: the Source Control pane in Xcode 9.)

  • Create ‘CoreMLDetection’ Remote on GitHub…

(Screenshot: a sample GitHub push from Xcode 9.)

Seemed simple enough, but I’ll still take a terminal window. It might be useful for tracking code changes, but I’m not sure yet.

sohoPOS POS iPad App

sohoPOS is a Point of Sale iPad app, developed for the Australian retail market.

Features

Categorised product lists, setup for retail and restaurant, offline mode, Epson thermal printer integration.

App notes

The app synchronizes data to a web dashboard for sales reporting. Pricing options for the app follow the freemium model – free basic features, or a paid subscription for full feature access. sohoPOS launched in mid-to-late July 2013.

(Screenshots: eat-in tables, menu, order history, payment, shifts.)

Determine the NSIndexPath of a UITableViewCell when a sub-view is tapped

I do a fair amount of iOS consulting work for clients who have outsourced their iOS code development. My work usually involves code review and final steps to help them successfully submit their app to the App Store. While it will always work out cheaper for my clients to outsource (usually offshore), the quality of the code received by most is um, questionable. My clients might get a functional UI which follows a specific design but under the hood, the house is a mess!

An example of this – figuring out the NSIndexPath of a UITableViewCell when a sub-view (e.g. UIButton, UIImageView) is tapped within the cell. Two of my ‘favorite’ solutions:
– use the tag property on the button or the image view to store the index path. ugh.
– create a variable within the cell instance (assuming there was an abstraction of the table view cell code!) to track the index path of the enclosing cell.

A much better approach – take advantage of the UIEvent associated with the user touch event. In this example, I have a reset button contained within my UITableViewCell with the following target:

[resetButton addTarget:self action:@selector(resetButtonTapped:withEvent:) forControlEvents:UIControlEventTouchUpInside];

For legibility, I’ve refactored the UITouch variable but it could be easily inlined too.

-(void)resetButtonTapped:(UIButton *)button withEvent:(UIEvent *)event {
    // Any touch from the event gives us the location within the table view.
    UITouch *touch = [[event allTouches] anyObject];
    NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:[touch locationInView:self.tableView]];
    NSLog(@"index path %@", indexPath);
}

Optimizing HTML web sites for iPad

We did some work recently to determine what it would take to optimize the current site for iPads. Currently, iPads are redirected to the mobile version of the site. All that real estate going to waste.
Links/articles I found useful which might kick-start your development of iPad-optimized web sites:

Device detection via CSS (iPad-specific styles through CSS): the most useful bit was recognizing the iPad via the CSS media attribute:

<link rel="stylesheet" media="only screen and (min-device-width: 768px)
and (max-device-width: 1024px)" href="ipad.css" type="text/css" />
Designing and Optimising Websites for iPad.

Safari Reference Library: Straight from the horse’s mouth: Style your web app for iPhone OS. For audio and video, use HTML5.
Safari Reference Library.

On usability: Jakob Nielsen is not a fan of the iPad – he likens it to stepping back 15 years or so in web history. An interesting read around the challenges of iPad usability and navigation confusion:
(Don’t miss the 93-page PDF document at the end of the article – well worth the read!)
http://www.useit.com/alertbox/ipad.html.

Other things: The notion of portrait vs. landscape styling – nothing new here (iPhone), but a lot more work to do for two orientations.

After my first couple of hours with optimization for iPad, traditional web sites just don’t seem to suit the iPad. Although the site mostly worked (there was a list of CSS optimizations needed), the design seemed misplaced on the iPad. A lot of the issues are well described in Jakob Nielsen’s article above. It’s a different paradigm for navigation and interaction. For one thing, context changes are less obvious. Remove the ability to click, to hover, and interaction patterns change. We’ve seen and are probably quite used to these behavioural changes on the iPhone.

My expectations for web browsing on the iPhone are quite low. But my expectations for the iPad are somewhat different. Its resolution is acceptable – 1024 x 768. I can interact with (most) web pages with a lot less snot-like pinching of my fingers. I expect clarity in choice, in navigation, and hopefully in productivity, but it’s still a bit puzzling. I don’t think this is a failure in the device, but more the idea of trying to use web sites (and iPad apps) based on traditional web design. I’m not a designer – I develop software. My designs are functional – like a plain cheese sandwich, you’ll want some mustard.

So, even after my brief time with iPad optimizations, the biggest challenge isn’t technical. Web designers – think differently! The key is not to design iPad apps/iPad-optimized sites in the same paradigm. Use an iPad for a while. Revisit the website you’re looking to optimize for iPad, and hopefully the challenges (and some solutions) will become clear.