On Switching from iPad to Chromebook in School

This summer, my school is making a substantial change in our 1-to-1 programme. After nearly ten years, we are switching from iPad to Chromebook. I thought I would write a bit about why we are doing this.

We have refreshed our iPad deployment twice now. We started in 2010 with the original iPad, then moved to the 4th Generation iPad in 2013, then the 9.7” iPad Pro in 2016. Now, here we are in 2019, ready to refresh again.

The first thing I’d like to note is that this isn’t a spur of the moment decision, made in a fit of pique. I have internal papers at school that date back to 2014 exploring whether we should stick with iPad or move to Chromebook. I’ve owned Chromebooks since the Samsung Series 3 came out in 2012. I have been tracking ChromeOS for a long time.

iTunes U: A Burning Platform

The problem that first made me really wonder what the future held for Apple’s iOS education offerings came when I realised that iTunes U was clearly being left to die a slow death. At the time of writing, iTunes U still does not support basic iOS multitasking features that were introduced in iOS 9 - four releases ago.

I found myself looking enviously at features in Google Classroom - features for which I had filed radars years previously and which still lie open. Features like scheduled posts, homework summary emails to parents, and posts to individual pupils or groups within a class.

Whatever learning platform a school uses is a vital part of the work of the school and, if it’s not evolving, it’s dying. Make no mistake: iTunes U is a dying service and it would be more honest and respectable of Apple just to announce the date on which it will be put out of its misery.

iOS Management

I’ve been doing iOS sysadmin since before it was a thing that you could reasonably do. It’s way easier now than it has ever been, but it’s still not easy enough. Too often, something just behaves strangely: a device that doesn’t receive timely push notifications, or one that won’t install a particular app over MDM for reasons that come with an error message but no clear explanation, or that one iPad that thinks it’s not enrolled in DEP when it definitely is.

The worst issue by far in iOS sysadmin is backup and restore of supervised devices. This process has never been properly documented and it seems to change freely with iOS versions. Every time I have to do it, it takes at least three hours of experimentation to get something that mostly works.

Still, there are many things that are excellent about iOS management and it’s a very controllable platform for many purposes. It’s particularly good for sitting formal exams with.

iPad Hardware

We’ve been using 9.7” iPad Pro hardware in this cycle and, while the hardware remains fast and capable, I have not been very pleased with durability. We have seen a lot of fatigue-related screen damage - that is, damage not caused by a catastrophic accident but rather just repeated put-downs in a schoolbag.

We have also seen several other kinds of hardware failure in our iPads this year. I’ve seen video cards go, batteries simply stop working, and devices refuse to start up or restore correctly. This is to say nothing of the very poor quality of the chargers and cables that Apple ships with iPads. Charger and cable damage is a constant problem in 1:1 programmes.

iPad longevity in a 1:1 programme is something that you need to consider too. I would not feel at all confident in going into a fourth year with our current set of hardware. I don’t know if the more education-focused 6th Generation iPad is better, but I’ve been disappointed in this hardware cycle.

Official iPad repairs are now very expensive. When we started, we were paying about £127 for an iPad screen repair at the Apple Store. We are now paying £365. Buying AppleCare doesn’t help, because AppleCare is tied to specific serial numbers and I don’t know in advance which iPads are going to break. You’d have to buy AppleCare for every iPad, which is not cost effective, even at current damage rates.

When I realised I could buy 1.8 brand new Chromebooks for the cost of one Apple iPad repair, I started to think seriously about what we had to do.
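For concreteness, here is the rough arithmetic behind that figure, sketched in Python. The £365 repair price is stated above; the roughly £200 Chromebook price is my assumption for a typical education-spec device, not a number from the school’s actual quotes:

```python
# Rough arithmetic behind the Chromebooks-per-repair comparison.
# The £365 screen repair price comes from the text above; the ~£200
# Chromebook price is an assumed typical education-spec figure.
ipad_screen_repair_gbp = 365
chromebook_price_gbp = 200  # assumption, not from the article

chromebooks_per_repair = ipad_screen_repair_gbp / chromebook_price_gbp
print(chromebooks_per_repair)  # 1.825 - nearly two new devices per repair
```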

Learning and Teaching

When we started with iPad in 2010, I suppose I thought that we were heading into a new era in education with creativity at the forefront. Particularly, I thought that Scotland’s Curriculum for Excellence was going to usher that in. We were led to believe that all different kinds of assessment materials would be considered appropriate for submission to our exam board. None of that happened, and we seem to be moving away from that idea at a steady clip.

At the same time, it’s worth noting that the world of 2019 is quite different from the one that obtained in 2010.

In August 2010, Google hadn’t yet shipped the Cr-48 Chromebook. Dropbox was only 3 years old; iCloud was a year away; Google Docs was a year old and Drive, Sheets and Slides were all a couple of years in the future. Mobile networking was not well developed. WiFi wasn’t all that fast. We were living on a 5 Mbit internet connection shared across our whole school.

Today, many things are different, but a very significant factor is the sheer speed and availability of high-speed networking. We are looking at a world where TV, movies, virtualised PC desktops and, soon, triple-A video games can be delivered over the network with no loss in quality.

It’s also worth noting the significant impact that the rise of tablets has had on the design and capability of laptops. In 2010, laptops weighed four-plus pounds - not including a weighty charger - and got 3-4 hours of battery life. Today, they’ve halved in weight and more than doubled in battery life while getting faster, more robust and more flexible. In the final analysis, I think that the long-term effect of tablets will be that they forced laptops to get better. You can increasingly see with devices like the iPad Pro that any significantly advanced tablet usage these days is barely distinguishable from using a laptop.

When I look at the world now, I see deep and real collaboration happening across the network. We are starting to see the end of people emailing documents back and forth. Synchronous and asynchronous collaboration with people across the internet is a serious technical and social skill that seems very important to me these days.

I feel that Apple has not grasped this issue correctly. There are only two ‘productivity clouds’ in the game: GSuite and Office 365. In 2010, we chose our computers and ran the software that came on our computers. In 2019, I think that we choose our productivity cloud and get the computer that best works with that cloud. Apple simply has not competed and is not competing in this space, and is therefore at the mercy of forces it does not control.

It seems to me that, for a school, the choice is whether you’re a GSuite school or an Office 365 school and everything flows from that decision. It’s quite difficult to transition from one productivity cloud to another and nobody will do that without a compelling reason. Google and Microsoft are matching each other blow-for-blow in cloud features, partly for each to make sure that the other never develops such a compelling advantage.

That leaves Apple, happily making what might be the ne plus ultra of local-state computing. The best fat clients ever made. As Benedict Evans puts it, the best is the last. However, I think this model of computing is becoming increasingly irrelevant and I honestly don’t know if I can envisage a long-term future for software outside of the cloud.

The kinds of software that don’t run on the cloud these days are more constrained by the difficulty of getting the data on which they operate into the cloud, rather than the difficulty of running the software itself in the cloud. For example, editing 4K video in the cloud is difficult because moving raw 4K footage to the cloud is difficult, not because building a cloud-based video editor is beyond our reach.

So what do I hope to get out of our transition to Chromebooks? I hope that we will be able to better prepare our young people for a future where work is done collaboratively in the cloud rather than on local computers. I hope to use Google Classroom to improve the workflow between teachers and pupils. I hope to see a reduction in workload for teachers through collaboration on documents with pupils and the use of tools like self-marking Google forms, and mark recording in Google Classroom.

I would like a reduction in my own sysadmin workload when it comes to swapping out damaged devices and administering new ones. We will save 56% off our current iPad budget and I hope to be able to use that to provide new educational experiences for our pupils.

It was gratifying to see Apple put serious effort into getting the desktop version of Google Docs working in iPadOS 13. However, it’s too little too late for us at this stage in our development. We might come back to iPad in years to come but, for the next four years at least, we’re going to see what GSuite and Chromebooks can do for us.

What’s Wrong With iOS 11/12 Multitasking

[Note: This is an old article I wrote but never published, in the hope that these issues would have been fixed relatively quickly. I thought it would be an interesting historical note on the design issues that ultimately drove me away from using the iPad as my main computer. I originally wrote this in September 2017, so a few details may be different on iOS 12 but the basic design remains the same. FS.]

iOS 11 represents a major step forward for iOS in many ways, most notably on the iPad. However, I am concerned about two aspects of the design of iOS 11: the Home Screen and the multitasking interface. I worry that many people are not going to understand this and, even if they do, will find it either too difficult or too confusing to use in daily operation.

In my own use of iOS 11, I have found the home screen and multitasking to be imprecise, frustrating and overall slow to use on a daily basis. I had hoped that something would click for me; that there was some aspect of the design that I wasn’t quite getting that would illuminate it all but I can’t find that insight.

Home Screen

My difficulties begin with the home screen. In iOS 10, there were two things you could do with an icon: tap it to open or press and hold to rearrange.

In iOS 11, there are now four things you can do with an icon:

  • Tap to open
  • Tap longer to get the recents popover
  • Tap a bit longer to drag
  • Tap yet longer to rearrange

I have a number of problems with this.
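To make the distinctions concrete, the four behaviours can be sketched as a toy duration classifier. The threshold values here are my guesses for illustration only - Apple’s actual timings are not documented in this article:

```python
# Toy model of iOS 11's duration-differentiated icon gestures.
# The threshold values are illustrative guesses, not Apple's real timings.
def classify_hold(duration_s: float) -> str:
    if duration_s < 0.3:
        return "open"
    elif duration_s < 0.6:
        return "recents popover"
    elif duration_s < 1.0:
        return "drag"
    else:
        return "rearrange"

# Two holds a tenth of a second apart land in entirely different modes.
print(classify_hold(0.55))  # recents popover
print(classify_hold(0.65))  # drag
```

The sketch shows the underlying problem: fractions of a second separate unrelated modes, which is exactly the kind of distinction that is hard to teach and hard to perform reliably.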

Tap Timing is Hard

Now that we have four operations that are essentially the exact same physical gesture but differentiated by the length of time the gesture is performed for, I am concerned that many users will become quite confused.

Teaching someone that you get a different effect by holding ever so slightly longer or shorter is a recipe for frustration. Many users will discover this for themselves but will likely put the differing behaviour down to bugs or some kind of magical thinking: “I don’t know what I did to get this”.

Further, younger and older users with variations in their fine motor skills will have some difficulty executing these different gestures in the correct amount of time.

In my experience of using iOS 11, I have found that I frequently make mode errors in the Home Screen. Sometimes, I don’t hold long enough to initiate a drag and the operation fails. Other times I hold too long and I get into rearrange mode instead of initiating the drag.

Same Gestures, Different Results

Another issue with the current design of the home screen is that the same behaviours produce different results depending on the mode you are in.

If you’re dragging a single app, tapping another app will launch it. If you’re dragging a single app in rearrange mode, tapping another app will add it to the drag pile.

This, combined with the ease of making a mode error between simple dragging and rearrange mode can lead to some maddening behaviours on the home screen.

Another aspect of this problem is that if you simply drag when you intended to rearrange, tapping another icon causes the entire operation to fail because you are transported out of the home screen and into the second app you tapped.

Conversely, if you wanted to drag and then tap a second app to multitask, you can sometimes find yourself in rearrange mode, adding to a drag pile. At this point there is nothing to do but abandon the drag and start over.

There are a number of what you might call “functional cliffs” in iOS 11 where making a simple error can cause an operation or a set-up interface to irrevocably collapse and require reconstruction.

A further example of this is accidentally adding an app to a drag pile - there’s no way to drop a single app back out of it.

Frustration and Slowness

Think about iWork for iOS for a second. In early versions of those apps, you had to select an object, then wait, and then tap again in order to make the black edit bar appear over an object.

This made the interface slow at best - there was a built-in waiting period every single time you wanted to Cut or Animate an object - and more often than not error prone. If you tapped and then tapped again too quickly, instead of getting the edit bar, you would begin editing the text inside the shape.

Some time in the evolution of iWork this was changed so that the edit bar appears immediately on object selection. This dramatically improved the perceived speed of the interface and helped make iWork the delightful suite of tools it is today.

Unfortunately, the Home Screen in iOS 11 suffers from exactly the same problem that the early versions of iWork did: large variations in behaviour from tiny variations in user performance.


Multitasking Interface

My second area of concern in iOS 11 is the design of the multitasking interface.

I deeply appreciate many of the advances in iOS 11 multitasking, such as allowing 25/75, 50/50 and 75/25 splits; in-place swapping of apps and moveable Slide Over apps. These are all great enhancements.

However, as I use iOS 11 more, I find myself increasingly frustrated by the process of getting apps into these multitasking spaces.

Enforced Waiting

Firstly, let’s consider trying to set up split-screen multitasking. The simple and obvious way to do this is to access the Dock and drag an item from there into either side of the screen. This is what I sometimes think of as the “golden path” for multitasking.

I still have an issue with even this, though, as it requires several taps, swipes and waits just to set up:

  • Swipe the Dock up (this sometimes requires two swipes if the status bar is hidden)
  • Tap and hold an app until it lifts off (wait)
  • Drag to the edge of the screen
  • Wait again
  • Drop it

Still, slow as that is, it’s at least simple and understandable. The more difficult situation is where you want to split-screen with an app that is not in the Dock.

I have already had several very savvy iOS users tell me that they simply thought it wasn’t possible to multitask an app that wasn’t in the Dock.

I know that there are a number of ways to do this but none are very discoverable and some are very difficult to execute.

The first method:

  • Go to the Home Screen
  • Drag an app (with enforced pause)
  • Use another hand/finger to get back to the first app
  • Hover over side (another pause)
  • Drop

This method has the fewest enforced pauses but is physically difficult to execute.

The “spotlight method”:

  • Invoke Spotlight
  • Search
  • Drag the app to the edge of the screen (pause)
  • Drop

This method is only possible if you have a physical keyboard attached and invoke Spotlight with Cmd-Space. Without a physical keyboard, there’s just no way to do this at all.

My main issue in pointing out all of these methods and the enforced waits is that the whole multitasking interface just constantly feels slow. It’s slow to find an app, it’s slow to get it into multitasking. Over the course of days, weeks and months all of these pauses will add up.

The Golden Path and the Long Path

One of the other results of the “privileging” of apps in the Dock is that you often run into a situation where you try one method to get an app (via the Dock), fail, and have to restart down the other path.

I find myself doing this a lot:

  • Desire another app
  • Pull up Dock
  • App is not in the Dock
  • Home screen
  • (follow one procedure for multitasking that app)

I have observed a number of friends on Twitter posting screenshots of their iOS 11 app arrangement which can probably best be described as “everything in the dock and a junk drawer folder at the end” just so they can be guaranteed that the first path to an app (going via the Dock) will always yield a result. This feels distinctly like a workaround.

I have adopted the “junk drawer” approach to app layout myself and it is still constantly annoying. To get to any app that’s not one of the top 13 that have made it into the Dock, I have to:

  • Swipe once (or twice) to get the Dock up
  • Tap to open the folder
  • Possibly swipe to a secondary page of the folder
  • Tap the app or begin dragging it into multi-tasking

This is another reason why iOS 11 feels slow to me - there are short paths and long paths. Reliably deciding which path to take requires the user to develop and maintain a mental model of the state of the Dock and Home Screen.

People are working around this path-difference by forcing every app to be accessed through the tiny window of the Dock. This at least has the benefit of making all apps accessible through the same kind of path, but that path is tedious and slow.

App Pairings

The iOS 11 idea of there being multiple split screens and switching between them is, initially, an interesting idea. In practice, I have found it quite difficult to make any practical use of and have already mostly abandoned it.

First of all, my experience as an iOS-only user is that we constantly freely mix and match applications at will to solve various problems throughout the day. It’s not at all clear to me that there exist multiple stable pairs of apps that should always stay together.

There appears to be a level of impedance mismatch between the Mac-like idea of “multiple spaces” and the iOS model of “apps, not windows”. When I first started using iOS 11, I tried to identify certain apps that might work well together - Mail and Calendar, for example.

In practice I found that, too often, I needed to create arbitrary pairs of apps, and this caused all my bespoke app pairings to be dismantled in the background. This gave me a sense of instability in the iOS 11 UI. Things I had built were being dismantled invisibly and were not the way I had left them. This is another of those “functional cliffs” - a short-term use of an app leads to the invisible dismantling of a pairing in the background.

Again, as with the perceived speed issues in the Home Screen, I’m talking here about perceived stability. I’m not referring to bugs but to conceptual structures in the UI that don’t survive my general use patterns on iOS.

Combining all of this with the idea that Cmd-Tab now switches between spaces and not apps means that the user has to build and maintain a mental model of the state of all the spaces and apps in order to reliably predict what is going to happen when they switch to another app.

As a result, I find myself using Cmd-Tab less and less. This is because, for any app I might want to use, I have to remember if it was paired with something else. If it was, Cmd-tabbing to it or launching it from Spotlight will cause both it and its buddy to come to the front, replacing everything I was doing.

If the app I want is not paired with another app and I launch it, the app will appear full screen - again replacing everything I was doing.

Therefore, if I want to work with two apps - but constantly varying one or other app as I go - I am forced to use one of the dragging methods (Dock, Home Screen or Spotlight).

To me, this is one of the reasons why iOS 11 constantly feels slow: I spend too much time thinking about what the effect of certain actions might be and then I end up forced down one of the slower paths for rearranging my workspace - dragging.

Losing Apps

Another “functional cliff” that I’ve come across is the behaviour of split screen multitasking when you entirely collapse the split to get rid of one app. Where does that app go?

Sometimes, you just need to expand one of the two apps up to full screen temporarily in order to do some detailed task. However, when you do that, there is no simple or fast way to recover the previous state of your workspace. When an app is collapsed out of multitasking, it is basically “gone” and you have to reconstruct it from scratch.

As I have already discussed, if the app you’re using isn’t immediately accessible, that can send you back down the path of wondering whether it’s in the Dock or not, then going to the home screen and so on.

The “recents” area in the Dock might be an amelioration but it’s not clear to me what the heuristic is for having an app appear in that space. It doesn’t appear to be a strict “most recently used” algorithm so, again, it’s hard to make a mental model of how that area of the dock can be used consistently to develop a workflow. It’s helpful in an opportunistic sense, but it’s not clearly dependable. I would estimate it contained the app I wanted about 40% of the time and I have since disabled it - again to force all app access down an annoying-but-at-least-consistently-predictable path.

Mental Models

Ultimately, my big issue with the iOS 11 multitasking model is the demands it places on the user’s mental model. In iOS 11, you have to develop and maintain a mental model of:

  • The apps in your Dock
  • The apps not in your Dock
  • The state of your app pairings
  • Which apps were in Slide Over in which spaces

...in order to predict what a behaviour is going to do to your workspace.

Because spaces are switched rather than apps, if your workflow doesn’t have obvious and persistent pairings, you end up spending an inordinate amount of time tediously rearranging your workspaces to get the apps you want on screen together.

In a system which has pervasive drag and drop, it will often be desirable to arrange things in such a way that the source and target of the drag are both visible at the same time. In that way, it becomes even more important to have access to easy reorganisation of apps in a side-by-side view.


In summary, there are many things I love about iOS 11 but I find these two fundamental areas of the system to be incredibly frustrating to use in their current form.

We use these areas so much in a typical day. I’m constantly moving between different pairings of apps. All of these enforced pauses in dragging, along with the model of swapping spaces rather than apps, add up to make serious work on iOS 11 a very frustrating experience in practice.

On Switching from an iPad Pro and a MacBook to a Pixelbook

The stranger and more complex story of change in my computing life is the decision to move from an iPad Pro and a MacBook to a Chromebook. My computing life is complicated and I wear too many hats, but what I’m talking about here is my day-to-day main portable computing device.

In late 2015, I decided to go all-in on the iPad Pro. That actually worked pretty well for a while and, for all of 2016, I didn’t have a Mac at home and relied solely on my iPad Pro for daily work. I made a lot of progress with that setup and was even able to launch a successful podcast which has almost entirely been recorded, edited and published using iOS.

My work was evolving and so was iOS. In late 2017, I was appointed Head Teacher of the school where I work, taking up the post in August 2018. That was a more significant shift in my workloads than I had perhaps anticipated - not just in the volume of work but also the type of work that my computer was required to support.

At the same time, iOS 11 had shipped and it simply broke my relationship with iOS on the iPad. There was nothing about its new multitasking system that I liked and little that I could even tolerate. From its unhelpful app-pairing feature to its devaluation of the Home Screen and overloading of the Dock, I just could not get comfortable with it at all. Then iOS 12 came along and made it even worse in some ways with its aping of iPhone X gestures and ever-finer distinctions between a little swipe up to display the dock and an ever so slightly longer swipe up to show multitasking. Touch just isn’t built for such fine distinctions.

My school is known as an “iPad school” - we were the first whole-school 1:1 iPad deployment in the world - but we have been a “Google school” even longer. We have been on what is now GSuite since it was called “Google Apps for Your Domain”. The first thing I ever did when I started at the school was to sign up for GAFYD and we have been on it ever since.

When Google Drive launched in 2012, we started making more use of it and Google Docs. In the six years since, we have really gone all-in on these apps. I was never a huge fan of web-based software but we started with one particular project where we cut so much time and effort out of the process that I couldn’t help but get interested.

That project was the process of writing report cards. Previously, every teacher used to produce and print a Pages document for their pupils. That pile of documents would get delivered to the secretary who was then responsible for dividing them up and assembling report cards from them. This process was time-consuming and hugely error prone. We replaced that with a process where I create a template file in Google Drive for each pupil and then teachers write their reports in a single shared file per pupil and then we print them all at the end. That was when I got interested in Google Docs in a big way.
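The gain came from inverting the data flow: instead of collating many per-teacher documents by hand, each teacher writes into a per-pupil document. A toy sketch of that consolidation step - the real workflow lives in shared Google Docs, and the record shape and function name here are my inventions for illustration:

```python
# Toy sketch of report-card consolidation, assuming submissions arrive
# as (pupil, subject, comment) records. The real system uses one shared
# Google Doc per pupil, not this in-memory assembly.
from collections import defaultdict

def assemble_report_cards(submissions):
    """Group each teacher's comment under the pupil it belongs to,
    mirroring the one-shared-file-per-pupil workflow."""
    cards = defaultdict(list)
    for pupil, subject, comment in submissions:
        cards[pupil].append(f"{subject}: {comment}")
    return {pupil: "\n".join(lines) for pupil, lines in cards.items()}

submissions = [
    ("Alice", "Maths", "Strong progress this term."),
    ("Bob", "Maths", "Needs to show more working."),
    ("Alice", "English", "Excellent essay writing."),
]
cards = assemble_report_cards(submissions)
print(cards["Alice"])
```

The point of the inversion is that grouping happens automatically at write time rather than manually at assembly time, which is where the old process lost so much time and accumulated so many errors.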

Fast forward to 2018 and virtually all of the work I do at school is now in Google Docs. I don’t think I’ve created anything new outside Google Docs for a couple of years now. As I was preparing to take over as Head Teacher, more and more of my work became about higher levels of complexity, involving more data sources and requiring larger work spaces than ever before. Big spreadsheets to build timetables, school budgets, pupil information and the like.

If I can refer back to an article I wrote in 2013 called “Beyond Consumption vs Creation” - which I still think is the most coherent thing I’ve written on the topic - what happened, mainly, is that my work took me outside the boundary of complexity and duration that the iPad can support.

At the same time, Chromebooks have been on the rise. They are killing Apple in US K-12 education, but it’s still not clear exactly what impact they are having on the wider market, if any. It does seem obvious to me, though, that Google knows exactly where their strength lies and it actually has very little to do with ChromeOS itself.

My school runs on GSuite but we usually access it through iPads. What I have found, though, is that the GSuite iOS apps are not very good. They lack important (and sometimes basic) functionality found in the web version of GSuite and they take a long time to adopt iOS platform features.

The point, though, is that GSuite is so powerful and so much at the heart of everything I do at school that if you asked me to decide between giving up GSuite and giving up iPad, I’m afraid iPad has to go. It is for this reason that I have been vocally advocating that Apple make iOS Safari as close to a “desktop class” browser as it can be. I don’t know the technical reasons why GSuite can’t be accessed in Safari on iOS. I don’t know if the browser has limitations that mean the apps genuinely can’t run in it, or whether they could but Google just chooses not to allow it.

I’m entirely willing to believe that this isn’t Apple’s fault. That doesn’t mean it’s not their problem. Lack of feature-complete access to GSuite is, I believe, as serious a risk to Apple in K-12 as the potential lack of Photoshop and Office on Mac OS X was to its role in business back in the early 2000s.

None of this is to say that iPad and iOS have suddenly become a bad platform. They have not - although I could make a strong case that every change made to multitasking in iOS 11 worsened the experience in every way. iPad is still a good platform for the things it was good at back in 2015/16 when I was using it full time. What has really changed more than anything are my own personal computing needs and the strength of the competition.

iOS has a particular software lack in a particular sector. It just so happens that I work in that sector and I work extensively with deep features of that suite of software that iOS lacks. We might fairly ask why Apple has chosen not to compete with GSuite. It has, from time to time and piece by piece, but in no way is it a realistic proposition for a school heavily invested in GSuite to consider switching to iCloud. The concept simply doesn’t make sense. It’s not that Apple’s equivalent features are better or worse; it’s that in many cases they simply don’t exist. There are no shared folders in iCloud Drive, no organisational units, precious little control over iCloud feature availability, no auditing or security tools. Collaboration tools do exist in iWork apps but the workflow is ad-hoc and on a per-document basis. There’s nothing like Google Drive’s Team Drives feature. There are no fine-grained controls over spreadsheet editing. If we were to take the top 20 daily-actively-used features of GSuite, I don’t know if Apple’s cloud infrastructure has equivalents for almost any of them.

So, you see, my point is not that the iPad is a bad computer or that iOS is a bad operating system. Neither of these things are true. I do have a question, though, about what Apple thinks people should be buying from their product line.

Notice that the premise of this article is how I came to switch from a 12.9” iPad Pro and a 2015 MacBook to a Chromebook. It honestly seems to me that Apple might seriously expect people to own more than one £1000+ computing device just to get the benefits of both a laptop and a touch screen. Macs can do some things that an iPad can’t do - like access the full GSuite - and an iPad can do some things that a Mac can’t do. Should people really have to buy two computers from Apple to get the best of both worlds?

The Google Pixelbook I have been using seems to do a much more balanced job of providing laptop features and tablet features in one coherent device. It is virtually identical to the new 12.9” iPad Pro in terms of physical dimensions and weight. It opens like a traditional laptop with a very nice keyboard and trackpad but then folds back to become a tablet when required. It makes you wonder what might have been if Apple hadn’t been so determined to keep the Unibody MacBook Pro form factor essentially inviolate for a decade. The Google Pixelbook isn’t cheap in Chromebook terms but it’s significantly cheaper than the iPad Pro/MacBook dyad that Apple wants to sell you.

Is the Google Pixelbook a genuinely great tablet computer? No, certainly not. It’s a very good laptop that does a passable job of some kinds of tablet tasks. In tablet mode, it’s great for Netflix, YouTube, casual web browsing and that sort of thing. Would I put the Pixelbook into tablet mode to go deep on a Google Sheets document? Of course not. Having said that, I reflect on how often I used my iPad Pro in pure “tablet mode” too - it wasn’t all that often either. It seems that as laptops get more tablet-y, tablets are getting more like laptops. The Pixelbook isn’t a better laptop than a MacBook, and it isn’t a better tablet than an iPad, but this one device satisfies 98% of my computing needs in a single package. It also costs less than half of what I would need to buy from Apple to get the same set of capabilities.

As for doing “real work” on the Chromebook, I’ve been in that world for some time already. That part wasn’t even a concern for me. I had proven all that some time ago - my MacBook literally had no non-default software on it except Chrome. It effectively was a Chromebook in all but name. Sure, the Chromebook doesn’t run all the software in the world, but it runs the software that I actually need and use every day absolutely flawlessly.

One other thing that hurts to say but I believe is true is this: ChromeOS is getting better faster than iOS on iPad. Apple seems now to be on a two-year cadence for meaningful iPad-related software updates and, honestly, that’s just not fast enough. ChromeOS is moving very quickly. Probably, iOS is ahead for now but I hate waiting on an “iPad year” WWDC and then hoping that something will happen for the OS features I happen to care about. There are some parts of iOS that have lain fallow for years now - Mail, Calendar, Safari - that need some serious investment. Third party apps might fill some of the gaps but iOS doesn’t let them be full replacements for the system apps. Honestly, I'm bored waiting for progress on some of these platform basics that have been on iPad users' wish lists for literally half a decade or more now.

So where do we go next? On the one hand, we have Apple with a wildly successful phone product and a tablet product sharing the same basic OS. They also have a legacy desktop platform that's adopting mobile app APIs to fill functionality gaps. That said, after nearly a decade of iOS on iPad, we seem to still be staggering our way along the road to a full-power iOS. Every single year, we iPad users talk amongst ourselves about how “next WWDC” will fix everything - many people who have dropped laptop money on the 3rd generation iPad Pro are really buying it on more of a hope, even, than a promise that iOS 13 will make it sing.

Google, on the other hand, has a numerically successful phone platform that still has some quality challenges and an all-but-abandoned tablet strategy. They have a moderately up-and-coming hybrid laptop/tablet platform in ChromeOS that is seeing significant work and investment and is shipping significant feature updates on a regular basis. (Interestingly, said platform is also adopting mobile APIs/runtimes to fill functionality gaps.) Google also have a genuinely wonderful collaboration platform in GSuite that has become the most important software in my life bar none. Clearly, it’s become more important to me than any software that Apple makes.

Ironically, the web browser was what opened the door for the first Mac revival in the original iMac era. The lack of a fully-capable web browser is what’s closing the door on the iPad for me.

Wasn’t there an app for that?

On Switching from iOS to Android

I may be having the most boring mid-life crisis that any man has ever had, or I may be opening a whole new chapter of my computing life. I don’t know.

This year, I decided to make a change. I switched from an iPhone X to a Google Pixel 2XL phone, and then switched from a combination of Apple portables - a 12.9” iPad Pro and a 2015 MacBook - to a Google Pixelbook.

This change has been so surprising to some people that I am told that friends are asking friends if I’m psychologically OK. Don’t worry about me - I’m absolutely fine. I thought I would get back into longer-form writing by explaining my thinking here and seeing if it’s really what I think.

There is no single reason why I decided to make these changes, and the reasons are different for the phone and the laptop cases. Some have been brewing for a long time and others are more recent. In this post, I’ll concentrate on the phone question and come back to the laptop question later.

The phone story is perhaps simpler, and mostly revolves around price. I had misgivings about the value of the iPhone when the iPhone X broke the £1000 barrier. I wondered whether Apple would be able to continue to increase the price of iPhones much more. Still, I ended up with a new iPhone X last December. That deal ran out and I wondered what to do next.

This year, Apple released iOS 12 and along with that came a feature called Screen Time. In iOS 12 you can look in Settings and find out how much time you spend in particular apps and what you really do with your phone.

I always imagined myself to be a high-end iPhone user. I had all the powerful iOS apps installed: Keynote, Pages, Word, Excel, Ulysses, OmniFocus. Then I turned Screen Time on early in the beta versions and started to watch what I really did with the phone.

It turned out that, consistently, what I did with my phone was exactly this, in order of screen time:

  • Twitter
  • iMessage
  • YouTube
  • Google Maps
  • Instagram
  • Overcast (although Screen Time doesn’t count screen-off time, which would have put Overcast at #1 by a country mile.)

Everything else was typically minutes per day at most. This started to sow a seed of doubt in my mind - why do I have this £1,000 phone to do such, well, basic things?

Then the iPhone XS and XS Max came along. The iPhone Upgrade Programme prices in the UK ranged from £51.45 per month for the 64GB XS to £73.95 for the 512GB model - and that’s without any carrier service added. Even the lower-cost XR ran from £41.45 to £48.95.

At the same time, I have three children - two of whom are now old enough to be phone users too. I suppose I just started feeling the pinch, imagining how I would fund even infrequent iPhone purchases for three people.

As I was coming to the end of my year on the iPhone Upgrade Programme, I decided to see what life might be like on the other side. I saw a deal for the Google Pixel 2XL phone which was £33/month. I was already paying £18/month on top of my iPhone program costs for carrier service on my iPhone, so this was effectively a brand new flagship Google phone for £15/month.

So one Friday, armed with my Screen Time data and the knowledge that all but one of my most-used iPhone apps were also available for Android, I just took the leap and signed on for a 2 year plan on the Pixel 2XL.

My expectation was that I would find it almost entirely fine but that I would eventually come across something that I really hated about it. So far, that honestly hasn’t happened.

I’m aware that I’m still using Android with a heavy iOS accent. My launcher is organised to be mostly like an iPhone; I can’t stop opening Chrome before starting a search and I cannot swipe-to-type for all the tea in China. I have no clue about home screen widgets. However, I’m getting used to it and I’m as productive as I need to be on a phone.

Like many others, I thought that the loss of iMessage would have been a serious limitation. It simply hasn’t been. I never cared about iMessage stickers or apps and it turns out that, in Europe at least, almost literally everyone is also on WhatsApp. Chats with some American friends have moved to Twitter DMs as WhatsApp doesn’t seem to be so big over there.

I replaced Overcast with Pocket Casts. I still liked Overcast better but Pocket Casts is honestly fine. I replaced OmniFocus with Todoist. Honestly, I was doing a terrible job of my GTD system in the six months before the switch, so there was practically nothing that had to be moved over.

Virtually every app that I used on my iPhone also exists on the Android side of the house and they’re all virtually identical - not just at the feature level but almost down to the pixel level. It’s interesting that so much of Google’s Material Design influence has rubbed off on iOS apps - particularly Google’s apps but others too - that the shift from iOS to Android hardly even looks different in many apps.

One thing that is very alien, but very interesting as an iOS user, has been watching the various parts of Google update their Android apps on a rolling basis. As Apple users, we are used to watching WWDC with the hopeful expectation that whatever part of iOS or macOS we particularly care about will get its moment in the sun this year.

Unfortunately, several parts of the Apple ecosystem seem to go years and years without being significantly improved. Look at Mail, Calendar, Contacts and even Safari. They’ve had virtually no engineering resources devoted to new user-level features in multiple years now. At the same time, though, I watch the updates rolling through on this Android phone and I see the Contacts app getting an update, and the Calendar and Gmail apps getting regular feature improvements. Even the Camera app just delivered its incredible new Night Sight feature in an overnight update.

Maybe I’m just old and boring now. Maybe I just want to clear my inbox and go home earlier and maybe I don’t want to do any of this in an Augmented Reality battle-scape. This is what I want now - I want my email to help me go on a trip. I want the lock screen of my phone to surface useful and timely information. I love being able to just quickly and precisely search my email on my phone.

I also like that it came with a rapid charger in the box. The physical design of the phone is fine to me. It’s another black rectangle. The placement of the fingerprint sensor on the back is weird to an iPhone user but it works OK - although it’s much less forgiving than Touch ID ever was. The only thing I really don’t like about the physical design of the phone is that the only shortcut to launching the camera when the phone is locked is to double-press the power button. The button is small and narrow and I find it very difficult to do the double-press accurately and fast enough for the OS to recognise it.

Honestly, the switch from iOS to Android has been fine. Much, much easier than I expected and far more interesting. The iOS 12 team should take great heart - I am spending much less time on my iPhone since I started using it!