What's new in iTunes U 3.0

iTunes U 3.0 has just shipped, so what's new? The answer, as is usual with iTunes U, is a small number of big features. Before I list them, it's worth taking a moment to survey the landscape as it existed prior to this release.

The venerable Showbie (sponsor of my podcast) has long been the go-to assignment submission tool for iPad classrooms. It offered class groups, assignments within those classes, the ability to annotate documents and provide private teacher-to-pupil comments as well as shared class folders.

The newcomer, which has already been impressive, is Google Classroom. Google Classroom is focused on assignment setting, submission and grading. It currently does not have quite the same kind of "course content" features as iTunes U (course outlines, posts, etc.). It does have strong integration with Google Docs, Drive and YouTube. Classroom is a new product and, as such, has a few rough edges to work off. That said, Classroom is a massive boon for Chromebook schools and, until iTunes U 3.0, was also useful for iOS schools. The Classroom iOS app is nowhere near powerful enough yet but it does let you do the basics.

Prior to version 3.0, iTunes U was a very competent teacher-to-student content delivery platform. It allowed teachers to specify the outline of their course, provide posts with content in them, upload materials in various formats and create assignments that students could be notified of.

The one thing that iTunes U didn't do was deal with the inbound half of the assignment workflow. Google Classroom came along and proved that this was important enough to be a major plank of the first-party solution, and now we have iTunes U 3.0.

Key Features

iTunes U 3.0 brings three key features to the platform:

  • Assignment submission, grading and feedback
  • Per-assignment private communication between instructors and students
  • A unified course grade book for all assignments given in the course

I'll look at each of these in more detail.

Assignment Creation

iTunes U now supports assignment grading, submission and feedback. The new features support many different types of assessment. The teacher will create the assignment as normal, but there are now three additional fields in the assignment creation window.

The "Enable Grading" switch opens another field where the teacher can set the maximum mark for the assignment. The second switch enables file submission to this assignment.

Various combinations of these switches cover a wide range of common assessment situations in the classroom.

For a task which is not graded, such as an optional or extension task, disable both switches. No grade will be recorded and no files can be submitted, but the student will still see it in their assignment lists.

Assignment Creation Options


For those tasks which are graded but for which there is no concrete digital artefact created, such as a performance or presentation: enable grading but disable file submission. This will allow the teacher to enter a grade and engage in private dialogue with the student without allowing the student to upload files. The teacher could video or photograph the student or complete a PDF rubric and return that material privately to the student as an attachment in the private message thread.

For a task that requires submission but isn't graded: enable the submission switch but not the grading switch. This could be useful for situations where a teacher has to gather evidence of something being done but does not need to award a specific mark for it beyond "it exists". This could be powerful for check-pointing or draft submission tasks, where the teacher sets specific deadlines for the student to show progress but the grading isn't done until the final submission. In this way, iTunes U could be used as a kind of learning log.

For the likely most common type of task - one that requires a submission to be graded - enable both switches.

I think, overall, these four types of assignment cover many of the situations that teachers find themselves in when assessing student work.

Assignment Submission

Students can submit work to an assignment in one of three ways:

  • They can use Open In to send a file from any app to iTunes U. They are then presented with a picker to choose the appropriate assignment. This is very familiar to anyone who has used Showbie or Google Classroom.
  • iTunes U supports Document Provider extensions. This means that any cloud service app that has a document provider extension can present its files right inside iTunes U and the student can pick from there. In practice I was able to pick a file from Google Drive and submit it for an assignment without leaving iTunes U.
  • If the assignment has a PDF attached, say a document to be filled in, the student can mark up the PDF right inside iTunes U and return it to the teacher.

It is possible that a student may occasionally not perform to the best of their ability. In such circumstances a teacher may wish to ask for a re-submission. As long as the assignment is still unlocked, the student can re-submit a new document. The old submission remains in the private message thread but a new one is added afterwards.

PDF Markup

iTunes U now contains a basic PDF markup tool. It allows pen drawing with a selection of line thicknesses and colours, but no transparency: a definite oversight for highlighting on top of documents. The markup tool also allows text entry with a choice of five fonts, colour, a size slider, alignment and borders on the text box.

PDF Markup Tools - Teacher View


I would have liked to see text box presets in here - commonly used combinations of size, font, border and colour that would carry specific meaning in a marking scheme. I was initially confused that there is apparently no eraser tool for the pen, although there is an undo. It was later pointed out to me that you can tap on any annotation - whether text or pen - and delete or duplicate it from the black edit bar.

I was initially under the impression that all PDFs are flattened on submission. This is not actually true. It is possible for a student to save an editable copy of their markup, but this must be done manually before submission. There are two buttons in the PDF markup UI: "Hand In" and "Save". If the student opens the teacher-provided PDF, edits it and taps Hand In, the document is flattened, submitted and closed. There is no opportunity to save the document in an editable state. On the other hand, if the student opens the teacher-provided PDF, edits it, taps Save, and then taps Hand In, the document will be submitted flattened and also saved in an editable state. Honestly, this is so annoying and non-obvious that I'm assuming it's a bug in iTunes U (I've filed it as #21569632).


Grade Book

I'm really pleased that iTunes U has gone a step beyond Google Classroom by providing a course-wide grade book. In Classroom, you have to go into each assignment individually to see the scores, but iTunes U contains a really well done grading dashboard.

Grading Dashboard


This view does so many things. Let's get into them.

  • Students are represented by rows and assignments by columns.
  • Tapping on a row header allows you to focus the view down to that particular student. This is a very good data protection feature if you're using this screen at a parent conference.
  • Tapping on a column header brings an assignment status popover. This popover shows the progress of the assignment in terms of how many have been handed in, how many graded and how many returned. If you have graded some or all of the submissions, this popover also allows you to return all draft grades at once.
  • Tapping on a cell opens the private message thread between the teacher and the student. Assignments can be accessed here.

The appearance of a specific cell changes as students work through the assignment:

  • A dash in the cell indicates the student has not yet looked at the assignment.
  • The cell will show "viewed" when the student has read the assignment.
  • The cell will show a document icon when something has been submitted to the assignment.
  • A number in the cell indicates the teacher's grade for the assignment. It is shown in light italics if it's a draft grade and solid regular text if the grade has been returned.
  • If a student has made a comment in the private thread, a blue dot will appear in the cell and in the assignment header.

Assignments can be locked, and locked assignments can be hidden or shown. Locking can be done manually from the header popover. As far as I can tell right now, assignments do not automatically lock when their deadline passes.

It is also possible to export the entire course's data from the marking dashboard as a CSV, which is a great way to archive the data or start working on a mark report.
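
As a quick sketch of the kind of mark report you could build from that export - note that the column layout here is an assumption for illustration, not the actual format iTunes U emits:

```python
import csv
import io

# A stand-in for an exported grade book. The real export's column names
# may differ; "Student" plus one column per assignment is assumed here.
exported = """Student,Essay 1,Lab Report
Alice,8,9
Bob,6,7
Carol,10,8
"""

def assignment_averages(csv_text):
    """Compute the class average for each assignment column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    assignments = [c for c in rows[0] if c != "Student"]
    return {
        name: sum(float(r[name]) for r in rows) / len(rows)
        for name in assignments
    }

print(assignment_averages(exported))
# {'Essay 1': 8.0, 'Lab Report': 8.0}
```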

Private Student Communication

Every assignment in iTunes U carries a private communication channel with every student in the course. Students and teachers can chat in this channel as well as post pictures, videos and documents.

Teachers and students can attach documents to the private communication channel


This channel is how the document submission is implemented. Submissions are just special file attachments on the private message channel. Teachers can also send documents back to students this way. This might be particularly useful in situations where one or more of the digital artefacts that result from the work is actually generated by the teacher. For example, the teacher might video a student's performance or complete a marking rubric based on the student's work and return that document to them privately through this channel.

The private message channel supports document provider extensions so you can pull straight from Google Drive, Dropbox, Box or iCloud Drive.



Limitations

There are a few limitations in this current release, but not many.

While the teacher gets a very usable marking dashboard for the course, the student gets no such overview of their grades (bug: #21569689).

There is no mechanism for a teacher to see an overview of a student across courses. This would be ideal for guidance staff and, in principle, the student's Apple ID could be the primary key (bug: #21570607).

There is no mechanism for limiting the number or type of files a student can submit for an assignment. In some scenarios, that's totally OK. Maybe the submission is composed of multiple items. On the other hand, I would very much like to be able to force the students to submit either a PDF or another specific kind of file (bug: #21570640).

There's no mechanism for automatically locking an assignment when the deadline has passed (bug: #21570691).

The marking dashboard does not calculate any max, min or average stats for the class (bug: #21569771).

Attachments in the private message channel are seemingly never cached on the device. I posted a 25MB movie to a student and the student had to wait for it to download entirely every time they wanted to play it. It couldn't be streamed, apparently. This might be a rather hard limit on the scalability of this feature. Hard to imagine a 250MB feedback video being watched much if the latency is that high (bug: #21570782).


Ungraded assignments that accept submissions can only be submitted to via Open In. Students can add photos to the discussion thread, but that doesn't count as a submission. Effectively, you can't submit from Photos to an ungraded assignment, because Photos does not support Open In. A task which is graded and has a hand-in, by contrast, can be submitted to from the paper clip in the message thread. (bug: #21569879)

If a student opens a PDF, marks it up and then immediately submits, the annotations are flattened and the editable versions are lost. It is possible to save the PDF with editable annotations, but the student has to do this manually before handing in. (bug: #21569632)


The iTunes U release cycle is long - too long, I would argue - but it does tend to bring good results when releases do arrive. iTunes U 2.0 brought us Course Manager on iPad. iTunes U 3.0 brings us a whole new document submission and grading workflow that is easily as good as anything that currently exists.

In the post-iPhone 6 Plus era, I remain more than a wee bit disappointed that the Course Manager component is not available on the iPhone despite iTunes U being a universal app. Students can submit files and participate in private messaging with teachers from an iPhone, but the PDF markup tools are not available to either teachers or students on a phone.

When you look at it as a whole, though, iTunes U is clearly the most complete native mobile learning platform there is right now. Showbie has done stellar work for years on the document submission aspect of the problem. Google Classroom, too, has attacked the hill from that side.

iTunes U started with the courses, the materials and the learning content. Now it adds the assignment submission and grading components too. When you take that all together, nothing else comes close as a complete solution for delivering a course on iOS.

MDM Structure Design for the Long Term

As we come up to the end of the school year, it's a good time to reflect on the administrative tasks we do in order to get ready for the next school year. One area of deployment that's been on my mind recently is structuring our Mobile Device Management (MDM) server to be easy to maintain in the long run.

This is one area in which, thus far, I have not done a great job.

We started with our MDM in August 2013. This was before the Volume Purchase Program Managed Distribution approach was available to us. We converted to VPP-MD in August 2014 and that approach has been highly successful in reducing to near-zero the amount of time iPads are removed from service in the classroom to be updated and have new apps installed.

Having said that, the internal structure of our MDM is not in great shape. In this article I'll explain the mistakes I made and come to some conclusions about how we're going to do things differently in the future.

I'll be writing with reference to the Casper Suite by JAMF, since that's what we use at Cedars. Full disclosure, JAMF also sponsor my podcast.

The Aspects of a Modern MDM

In the VPP-MD era, a Mobile Device Management server essentially has two major entities: mobile devices and users. Mobile devices can have configuration profiles applied and users can have apps assigned.

When we started with MDM, we only had mobile devices. There were no user objects in the Casper Suite. To install apps for the primary school, we brought the iPads back to base and used Apple Configurator. This process typically took a couple of hours a week. For the secondary school, we used Casper to make VPP Coupon Codes available to the students in Casper's Self Service app - effectively, but not technically, a "private App Store".

In some ways this old model was easier: you enrolled devices and assigned both configuration profiles and apps to those devices. In the VPP-MD era, you assign devices to users, assign configuration profiles to devices and assign apps to users. This is far more flexible but, in a one-device-per-person model, it appears to be complexity for the sake of it. It makes tons more sense if you understand that one user might have many devices.

The Mess

Basically, I have two problems with our MDM:

  • I made groups for specific classes - as they were in 2013. That means that, this year, I'm still managing groups whose names are a year out of date.
  • I have way too many ad-hoc groups for various quick hacks around the above structure.

Casper allows you to have four groupings of devices and two of users:

  • Static Mobile Device Groups
  • Smart Mobile Device Groups
  • Static User Groups
  • Smart User Groups
  • Buildings (for devices)
  • Departments (for devices)

The smart groups are dynamic: their membership is composed of the users or devices that meet specified criteria.

Further, two distinct objects can be "scoped" to these six collection types:

  • Sets of apps, called VPP Assignments, can be scoped to individual users or to user groups, whether smart or static.
  • Configuration Profiles can be scoped to individual mobile devices, smart or static mobile device groups, buildings or departments.

Finally, Casper allows you to create "extended attributes" for both mobile devices and users. These are custom key/value pairs that you can add to either record type. All my User objects have an EA named "Class" that describes the class they are in.

At the moment, I have apps scoped to smart user groups. These user groups are generated by users' Class EA matching a specific value.

Secondly, I have configuration profiles scoped to a mixture of different things. I started in 2013 by defining each class through the "department" attribute on the device, so I hit some classes by scoping configuration profiles to their 2013-14 department. I also later created some static device groups, such as "2014-15 Primary 7", to distinguish them from the "2013-14 Primary 7" encoded in the device's department attribute.

This is, as you might imagine, a bit of a mess:

  • There are too many steps to put a device into the "right" group for all the settings they need to have.
  • A device needs to have its department set to its user's class - as it would have been in session 2013-14.
  • The device might also need to be manually added to a static group representing the correct class for 2014-15.
  • The User needs to have their Class EA set correctly.
  • It's hard to determine the impact of assigning a profile to a given group or class.

In all of this, the biggest problem is that all these groups change their composition each year. If classes are departments, all the users change department once a year. That's too much churn.

The Future Model for Configuration Profiles

I've taken this opportunity to re-think what we really need in terms of MDM control of app assignment and configuration profile distribution.

One of the first things that I've come to realise is that our deployment of configuration profiles is fairly stable. We have the following profiles that essentially everyone gets:

  • Deploy a web clip linking to CEOP
  • A subscription to the school's calendar feed
  • Restrict iMessage and Facetime
  • Disable shared photo streams
  • Require passcode
  • Restrict in-app purchase
  • Prevent installing profiles
  • Prevent account changes

Almost everyone gets these profiles and they very rarely change. We also apply a couple of security profiles through Apple Configurator that limit apps to 12+ and disable downloading movies and TV shows.

In the past, it was necessary to have class-specific device groups as that was also how you scoped the distribution of VPP coupon codes.

In the future, I think class-specific device groups will be less necessary. I will probably just have one main device group named "All Managed iPads" and scope these configuration profiles to that group. If anyone needs to be excluded from these groups, Casper has a 'limitations' feature that allows me to specify "everyone in A excluding B", which computes the relative complement of the two sets of users A and B.
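
The set arithmetic behind that feature is simple enough to sketch in a few lines of Python (the device names are invented for illustration):

```python
# Scoping with a limitation is the relative complement A \ B:
# everyone in the target group minus everyone in the excluded group.
all_managed_ipads = {"ipad-001", "ipad-002", "ipad-003", "ipad-004"}
excluded = {"ipad-003"}  # e.g. a device with a temporary exception

# Devices that actually receive the configuration profile
effective_scope = all_managed_ipads - excluded

print(sorted(effective_scope))
# ['ipad-001', 'ipad-002', 'ipad-004']
```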

There are also a few configuration profiles that I keep up my sleeve in case I need them. Mainly, these are "Disable Camera" and "Disable App Store". These are rarely deployed except as a disciplinary measure. For these profiles, Casper allows me to target them to individual devices. They're never targeted at entire groups.

The Future Model for VPP Assignments

The model of grouping users for VPP assignments is harder. It's harder for several reasons:

  • Students move classes each year
  • Apps are usually a requirement of classes, rather than of students.
  • Students can, from time to time, change class mid-year.
  • The set of apps assigned to a class changes over the course of the year, usually by addition of new apps.
  • Classes are sometimes composite classes of two year groups together and a teacher might only want an app for the upper or lower half of their class.

My plan, right now, looks like this:

  • An "everybody" group, to which our core apps are assigned.
  • An Extension Attribute on each user that is not their "class" but their year of graduation, which is more stable.
  • Another EA on each user that designates them as staff or students.
  • Classes are represented by a VPP Assignment that scopes a specific set of apps to one or more graduation cohorts.
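
To illustrate how this structure resolves apps for an individual user, here's a sketch in Python - all the field names, cohort values and group shapes are hypothetical stand-ins, not Casper's actual schema:

```python
# Each user record carries two Extension Attributes:
# graduation cohort and staff/student status.
users = [
    {"name": "Anna",  "grad_year": 2019, "role": "student"},
    {"name": "Ben",   "grad_year": 2020, "role": "student"},
    {"name": "Clare", "grad_year": None, "role": "staff"},
]

# Core apps assigned to the "everybody" group.
core_apps = ["iTunes U", "Explain Everything"]

# Each "class" is a VPP Assignment scoping apps to one or more cohorts.
vpp_assignments = [
    {"apps": ["Pages", "Keynote"], "cohorts": {2019, 2020}},  # composite class
    {"apps": ["GarageBand"],       "cohorts": {2019}},
]

def apps_for(user):
    """Resolve the full app set for one user from cohort-scoped assignments."""
    apps = list(core_apps)
    for assignment in vpp_assignments:
        if user["grad_year"] in assignment["cohorts"]:
            apps.extend(assignment["apps"])
    return apps

print(apps_for(users[0]))  # Anna, graduating 2019
# ['iTunes U', 'Explain Everything', 'Pages', 'Keynote', 'GarageBand']
```

At the end of a year, only the cohort sets inside the assignments need to change - no per-user edits at all.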

With that structure, all of the following situations are handled:

  • At the end of a year, we simply rename the current VPP Assignments for next year.
  • If the composition of classes changes between sessions, we can change the class smart groups to select on different graduation cohorts.
  • If a student moves grades, we change their graduation year EA which moves them into the right smart groups. This scenario is, honestly, quite rare.
  • Apps are scoped either to "everyone" - for the core apps - or to specific class-based assignment groups.

So that's how I intend to start moving forward in managing our Casper implementation. It allows apps to be assigned to compositions of year groups, if need be. It also minimises the number of structures or fields required to put things into the right place.

As an example, here's what would be required to enroll a new device for a new student:

  • Create a User record for the student with their graduation cohort and staff/student status set correctly.
  • Enroll the device in Casper, set the device to be a "managed" iPad. There are a number of attributes in Casper you could use to identify a device as such.
  • Assign the device to its user.

With these steps, the user will be assigned the apps appropriate for their class and the device will acquire the correct configuration profiles.

Presenting with Apple Watch

I've been using Keynote on the iPhone to control my presentations for a few years now. My standard presentation setup is to run Keynote on my iPad connected to the projection equipment and use my iPhone as the controller. This is effective, reliable and convenient as it's one less thing to remember.

With the Apple Watch come two ways to control presentations from this new device. Both Keynote and PowerPoint for iOS have companion Apple Watch apps that are able to control their respective parent apps.

Here is a brief comparison of each without, at this point, any comment on reliability as I have not used either in anger so far.


Keynote

The Keynote Watch app can operate in two modes. Firstly, it can control a Keynote presentation that's running on your paired iPhone. In this mode, you would connect the phone to the projector and control it with the watch.

In the second mode, Keynote on the Watch can control Keynote on your iPhone, which is itself operating in the existing remote-control mode to control Keynote on another device. It's a bit like Keynote Inception and, at first blush, carries at least 100% more chance of wireless failure than I'm entirely happy with. Still, this is the only way to use your Apple Watch to control Keynote on your Mac or iPad.

I was initially confused about this as it's not obvious what puts it into each mode but here's the rule: if you have a Keynote presentation open in Keynote on your iPhone when you launch the Watch app, it will control that presentation. If you're looking at the Keynote file picker when you launch the Watch app, it will go into Keynote Remote Remote mode and start trying to connect to Keynote on your Mac or iPad. That's a lot of Keynote.


Once the presentation is running, the entire surface of the screen is a big "forward" button. This is obviously ideal for no-look advancing of your slides: just mash your big thumb anywhere on the screen of the watch. You do get a slide progress counter at the bottom and there's the current time in the top-right corner - but no indication of elapsed time since the start of the presentation.

A force touch on the screen reveals two additional options: Back and Exit Slideshow. I quite like this approach as it's rare enough that you want to go backwards through slides.

What is lacking on the watch is any of the other niceties of the Keynote presenter display. Presenter Notes are obviously not really going to work here but it would be extremely helpful if the watch screen included the red/green "Ready to Advance" indicator and the number of builds remaining in the slide.


PowerPoint

The PowerPoint watch app is basically the same idea as the Keynote app: forward and back through your slides. PowerPoint, however, can only control a presentation running on the Watch's paired iPhone. There's no ability to control PowerPoint on a third device.

As with Keynote, the PowerPoint app has to be running on the iPhone. First, you're presented with the Start Slideshow button.


When you're presenting, PowerPoint offers a couple of additional options on the main screen. You get the ability to navigate both back and forward as well as an elapsed time counter and a slide progress counter. To me, this is a win-one-lose-one scenario: the elapsed time counter is a great addition. However, putting both back and forward buttons on such a small screen would - I imagine - increase the chances of a navigation error.

PowerPoint provides two options with a force touch: Restart and Exit Slideshow. I can't think of too many PowerPoint presentations after which I would wish the presenter to have such easy power to restart the show, but I suppose there is a use case in there somewhere.

Setting up the Watch for presenting

To use the Watch effectively as a presentation remote, you're going to want to make a few adjustments.

Firstly, I think that you're going to want to take the watch off your wrist. Unless you're a very stationary and gesture-free speaker, it's going to be really obvious when you go to advance a slide with your Watch. Also, for the next few months at least, you're going to be That Speaker Who Controlled Their Presentation With Their Watch And Was A Bit of a Douche rather than the Speaker Who Was Awesome.

You want to have the face of the watch nestling in your cupped fingers. The same place you'd interact with a TV remote or a more traditional presenter's remote. I found that taking the watch off, re-closing the sport band and placing three fingers through the band, in the way that you might pick up a watch to look at it, was an effective way to hold it. The most important thing here is that you don't distract your audience by fiddling to switch slides and you don't make a mistake when navigating.

Secondly, you want to make sure that the watch doesn't turn itself off or otherwise jump to some other function while you're using it. To minimise the chance of this, you should:

  • Disable Wrist Detection, the feature that locks the watch if it comes off your wrist. You do this in the Apple Watch app on your iPhone, in General > Wrist Detection.
  • Set "Activate on Wrist Raise" to "Resume Previous Activity". You can do this right on the Watch in Settings > General > Activate on Wrist Raise. This ensures that, if the watch does sleep during your presentation, tapping the screen will bring you back to the remote app, rather than the watch face.

I've been presenting exclusively with iPad and iPhone for several years now and it has been unfailingly reliable for me. With the advent of the iPhone 6 Plus and the Apple Watch, it seems entirely possible to me that my presenting kit just got a whole lot smaller.

As always, if you're interested in hiring me to present to your school, university or business, I'm available.

Notes on Migrating from Aperture to Photos for OS X

I have enjoyed photography for many years, particularly since the transition to digital. I've been shooting exclusively digitally since I got my first digital camera in 2002 and the result is rather a large body of digital pictures.

This is the story of migrating from a system that involved Aperture and a bunch of jury-rigged hacks to Apple's new Photos for OS X.


Since Aperture first shipped, I used it to manage all my digital photographs - until the iPhone came along and wrecked my workflow. I can't properly explain what went wrong but I think we all recognise now that there is a sense in which photos "live" on an iPhone in a way that they don't live on a digital camera. At least they didn't for us "serious" photographers but, if you recall the days before smartphones, many regular users did store photos on their digital camera as a way to take the photos with them and show friends.

Since the iPhone, I rather relied on occasionally dumping photos out into Aperture and then, later, on the never-very-good iCloud Photo Stream to get photos into Aperture. The implementation of Photo Stream in both Aperture and iPhoto was a mess, as evidenced by the dozens of projects I have named "PhotoStream (month year)", each roughly correlating to the dates I opened Aperture.

So, at the end of this era, I had:

  • About 31,000 photographs in Aperture, totalling over 300GB.
  • The Aperture library residing on my Mac's internal SSD storage taking up 38GB
  • All master images referenced on a 3TB external hard drive
  • About 6,000 photographs in iCloud Photo Library, totalling around 6-7GB of iCloud storage space.
  • A folder containing an unknown number of other photographs, around 40GB in size, from the time between the early iOS 7 betas and iOS 8 shipping.

So, how to get all of this into iCloud Photo Library and back down to my iPhone and iPad?

Migrating the Library

The first step after installing the 10.10.3 beta was to migrate the Aperture library. Photos did this more or less automatically and quite well. The software is obviously still in flux, so exact details of UI are not worth discussing right now and I want to focus on the data migration.

Photos correctly maintained the connection of photos in the library to their referenced masters on the external drive. Everything worked well as long as the drive was connected. At this point I discovered that images with referenced masters cannot be uploaded to iCloud Photo Library. Only images with managed masters can.

The result at this point was that I could browse around 36,000 photos on my Mac but my iOS devices only showed about 6,000 photos. These were the 'native' iOS photos taken on my various devices in the iCloud Photo Library era, but none of my 'legacy' DSLR photos were coming across.

It is also possible to consolidate the masters into the Photos library (using File > Consolidate). Now, recall that my Aperture library was over 300GB in size. My MacBook Air only has 512GB of internal storage and a fair amount of that is being used for other things. This seemed like a bit of a stalemate since I couldn't consolidate the library for lack of disk space.

At this point I left things for a while as other parts of life interfered....and a few new 10.10.3 betas arrived.

Migrating the Data to the Cloud

The story resumes this week as I sat down to try and solve the problem once and for all. The impetus was somewhat increased by an unexpected burst of techno-lust for the new MacBook Apple announced this week.

My first try was to consolidate the entire library. This quickly failed due to lack of disk space, as expected. An indicated 200GB was required over and above what was available.

The next step was to consolidate some images. I started off consolidating a year at a time, which allowed me to make some progress. The default setup for Photos is to keep the full-resolution masters on the Mac's storage but there is an option to optimise instead. I turned this on.

At this point, it's worth noting that I'm still struggling a bit with a mental model of where all this data is going and what's happening to it. As a precaution, I have made backups of my entire Aperture library and masters on the external hard drive.

The result was that, in batches, I was able to consolidate around 25,000 of the legacy images into the Photos library. The library on disk grew to around 214GB. At the same time, I was monitoring outbound network traffic from my Mac in Activity Monitor. This indicated that uploads were ongoing, which I assumed to be Photos sending these images to the cloud now that they were no longer referenced.

I opened Photos on my iPhone and iPad and could see photos streaming down to these devices. It was helpful to turn off Summarize Photos (Settings > Photos & Camera > Summarize Photos) on iOS to see the full extent of progress when viewing the Years view on the devices.

The Impact of 30,000 Photos

As the number of photos in my iOS device libraries grew, I started to notice some impacts on performance and correctness in some apps on my devices. Twitter correspondents were telling me that they had seen poor performance on iPhone 5s devices.

Once I crossed 20,000 images, I started to notice the following on my iPhone 6 Plus running iOS 8.2:

  • The first effect was occasional instability in the Photos app on iOS. It was by no means unusable but the crash rate went from zero to not-zero (note: this is much better - but still not perfect - on iOS 8.3).
  • The next effect was that iOS apps that implement their own photo-picking UI started to struggle to remain performant, or even to show a correct view of the photos on the device. Particular offenders included Instagram and Explain Everything.
  • Apps that use the system photo picker UI continued to work well.
  • The Camera app started to be slightly slower to launch.

Time Taken

Having started the migration on Friday evening, by the time I woke up on Tuesday morning my devices were showing a more-or-less consistent view of my photo library. My iPhone had downloaded tiny thumbnails for all images although my iPad was still catching up.

The image counts were a little inconsistent at this point:

  • Mac: 30,931 Photos, 211 Videos
  • iPhone and iPad: 30,190 Photos, 211 Videos
  • iCloud Settings Panel: 31,030 Photos, 213 Videos (Settings > iCloud > Storage > Manage Storage > iCloud Photo Library)
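
If you want a count straight from the app rather than the settings panels, Photos for OS X ships with an AppleScript dictionary. This one-liner is a sketch assuming the `media items` element that dictionary exposes:

```shell
# Ask Photos directly how many items it believes it has
# (assumes the 'media items' element in the Photos scripting dictionary)
osascript -e 'tell application "Photos" to count media items'
```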

The only reliable way I found to determine whether my Mac was completely finished migrating all the data to the cloud was to observe the Networking tab in Activity Monitor. When Photos was migrating, there was a very obvious pattern to the upstream bandwidth usage.
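
The same signal is visible from Terminal via `nettop(1)`, which samples per-process network counters. The process name below (`cloudphotosd`) is my assumption about which daemon handles iCloud Photo Library uploads on Yosemite; it may differ between OS X versions:

```shell
# Take one sample of per-process network counters and filter for the
# daemon believed to handle iCloud Photo Library uploads (name is an assumption)
nettop -P -l 1 | grep -i cloudphotosd
```

When the bytes-out column keeps climbing between samples, the migration is still running.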

During this time, the system did not instantly sync anything. If I deleted an image on one device, it persisted for days on the others. This is hardly surprising given the background workload that was going on and I wasn’t particularly concerned about it.

Mike Bradshaw let me know via Twitter that he had observed image counts being incorrect for around 48 hours after migration as the various devices reached a quiescent state between them.

Data Usage and Optimisations

As I said earlier, I started with over 300GB of Aperture master images. In the final tally, here is how much data the system used after all optimisations:

  • Mac Library: 95GB (as reported by du(1))
  • iCloud Storage: 269.3GB (as reported by Settings > iCloud > Storage)
  • iPhone on-board storage: 10GB (as reported by Settings > General > Usage > Manage Storage)
  • iPad on-board storage: 8.6GB

I have turned on “optimise storage” on every device, including the Mac.

I still don’t have a perfect mental model of the data placement in Photos but my current understanding of what has happened is this:

  • I consolidated (copied) all my master files from my external drive to my Mac’s internal storage.
  • Photos proceeded to upload that entire collection to iCloud.
  • Once the photos were in iCloud, the consolidated masters were replaced by Mac-optimised versions.
  • On the iPhone and iPad, Photos was aware of the existence of these images but no data was downloaded to any device until required.
  • When I accessed the Years view and tiny thumbnails were required, they were loaded from iCloud.
  • When I accessed any individual image, a high-resolution version was downloaded from iCloud; a circular pie-chart progress meter in the corner indicates when this is happening.

Steady State Operation

Seven days into the migration, I woke up to find my iPad and iPhone finally in total agreement about the number of photos I had. The storage figures at this point were:

  • Mac Library: 109GB (as reported by du(1))
  • iCloud Storage: 269.3GB (as reported by Settings > iCloud > Storage)
  • iPhone on-board storage: 10.3GB (as reported by Settings > General > Usage > Manage Storage)
  • iPad on-board storage: 8.6GB

At this point, my Mac’s photo count was still ahead by 21 photos. I’m now assuming that there are 21 photos that are either corrupt or missing their master files in some way somewhere in my photo library. I doubt I’ll ever find them.

Sync performance is good at this scale:

  • Photos deleted from one device disappear from the other in under 5 seconds.
  • Photo edits on one device appear on the other in about 15 seconds.
  • New photos from the phone appear on the Mac in under a minute (I have image optimisation turned on at both ends so this is likely to take some extra time).

It’s rare that I’ll want or need faster syncing than this. What most people really want, I’d guess, is “my photos are on the other device when I get back to it”, and Photos certainly seems to offer that. There is one caveat, however: Photos on iOS will not upload newly-taken photos unless the device is on WiFi, and there seems to be no way around this. I have Settings > Cellular > Use Cellular Data For: turned ON for Photos, but it won’t upload unless the phone is on WiFi. I understand why it works this way, but I have an eat-all-you-can data plan for my phone and it’d be nice to have the option.

Overall, I can say that I'm really very pleased with Photos for OS X and with iCloud Photo Library.