Last week I was at the Canada-UK Colloquium on AI in Toronto. These are some things I learned and thoughts I had while there, in no particular order.

  1. On the role of “anchor firms”: Big tech firms help support a startup ecosystem by acting as a backstop for technologists, allowing them to take the risk of working for startups as they know they won’t be left completely high and dry if the startup fails. They also perform a useful role mapping academic approaches into the real world in the form of code, online services and so on that can be plugged together to build new applications quickly: no one else has the resources/capability/motivation to do this mapping. It’s interesting to think about the extent to which government should be doing this, or subsidising it, and the degree to which this mapping is done for data as well as code.

  2. On the role of the third sector: When talking about AI and data, we focus a lot on the role of the state, of business and of academia. But the third sector is important too. Consumer rights organisations have a role to play in assessing and informing consumers about how services use data about them. Trade unions need to have a vision for how the demands on the workforce will change and how workplaces and conditions should adapt. It was striking to me that through all the discussion of bodies supporting good governance of AI and data, the Ada Lovelace Institute was not mentioned.

  3. On the hype cycle: All the AI practitioners urged caution and were concerned about hyperbole in the media narrative about AI. They pointed out that deep learning and reinforcement learning are only suitable for particular tasks, and that much of the AI vision we are being fed requires techniques that haven’t been invented yet. There’s a danger that when the current wave of AI (machine learning) fails to meet these high expectations, we will enter another AI winter, with reduced research funding slowing progress once again.

  4. On what your phone can sense about you: Well-intentioned academics in Canada are prototyping applications to monitor levels of social anxiety, in a bid to provide better mental health care. With permission, they can do things like work out what kind of places you go to, listen to your conversations, monitor movement, light, how much you touch your screen and so on. It felt creepy and invasive, but it got through the university ethics board. Not news, but to me it highlighted that these APIs and data are available to other Android apps, with the only check being the permissions dialog everyone clicks through (there’s a small sketch of what that dialog gates at the end of this post). We probably don’t need to worry too much about well-intentioned academics with ethics approval: how do we find out about everyone else?

  5. On diversity: Canada has a strong commitment to increasing (particularly gender) diversity. There are warm words about diversity in the UK too. I have Opinions, highly influenced by Ellen Broad, that appear to be unusual:

    • Having a diverse team will not necessarily mean you avoid bias in your algorithms/products. Saying you need diversity to create products that work for everyone gives non-diverse teams an excuse for poor practices that they really shouldn’t be allowed to use. What about user research? What about empathy? It is impossible to represent everyone by having someone exactly like them within a team: we should focus on finding good ways of engaging with people outside development teams, and on holding those teams to a higher standard in using them.

    • We should be careful to quote local statistics, or statistics relevant to particular subfields, rather than make diversity out to be a general problem across technology. I also have a lingering concern that making a big deal about women being less prevalent in technology makes technology less attractive to women (no one likes to be in places where they’re a minority).

    • In contrast to software development, there are many women in the field of ethics and algorithmic accountability. Is ethics subtly being thought of as women’s work (emotional labour)? (In the UK, this is even spelled out in the names of our institutes: Alan Turing for computer science, Ada Lovelace for ethics.)

  6. On geopolitics: Canada and the UK have a lot in common. This may become even more true if Brexit goes ahead and Britain becomes a third country to Europe, with similar values but needing to prove data adequacy while having strong surveillance powers. France was the other ally most often mentioned by Canadian representatives. The sense was that despite its strong investment in AI research and work by CIFAR, Canada was behind on thinking about data and data governance; there were also hints that its information commissioner’s office was not as helpful (to businesses) as the UK’s. As is common in these fora, there was a lot of talk about China, and state-led AI, but a general feeling that we need to engage and create international norms around AI rather than enter into a race.

  7. On the stories we tell: Quite a lot of debunking went on in the room. There were requests never to treat or talk about Sophia as AI; never to use the trolley problem as if it had anything to do with the choices autonomous cars would make; not to believe Babylon’s figures about triage accuracy; not to spread the falsehood that a sexbot was manhandled at an Australian trade fair; not to mischaracterise how DeepMind Health use patient data in Streams. Even a room of “experts” needed to be corrected on occasion. It is good to challenge each other, the examples we repeat, and the evidence we quote.

  8. On data trusts: Everyone is interested in data trusts. More precisely, everyone is interested in how to get data shared more readily while preserving privacy. When people say “data trusts” they mean very different things; they project their own notion of what well governed data sharing might look like. I really hope our work at ODI, and the concrete pilots we’ll be taking forward over the next few months, help to make the notion more tangible and highlight other models for sharing.

  9. On regulation / government intervention: I find that whenever we start talking about how government should intervene around AI, we get sucked into a personal data ethics black hole. It is hard to see past what should or shouldn’t be done with personal data and into other issues such as public procurement, competition policy or worker rights. Particularly in the UK, where there’s already lots of activity around data & AI ethics, we should avoid the black hole by creating venues for discussions that don’t revolve around personal data.

  10. On populism & fear of technology: We listened to a fascinating presentation (similar to this recording) about the correlation between populism and fear of technology. Recent displacement from work is more likely to arise from technology than from immigration, but immigration is more likely to get blamed. The good news is that those who fear automation, and particularly populists who fear automation, are happy with any policy response, including positive ones like supporting retraining. The lesson is to have a vision.

  11. On the role of humans: Both humans and computers are biased and sometimes make poor decisions. (When people feel there’s too much emphasis on AI being good, they remind us of AI’s failings; when they feel there’s too much emphasis on it being bad, they remind us of human failings.) We are more concerned about the black boxes of silicon-based neural networks than we are about the ones in our heads, or perhaps in our organisations. I lazily insist that decisions are made by humans, informed by data, but that’s because my mental model is medical diagnosis or parole recommendations. In a battle, there’s no time for a system that detects and destroys incoming torpedoes to refer to a human. I have started to think that the same things are needed whatever the decision-making entity: transparency, explanation, accountability (a means of recompense for harm and a correction for the future). The trap we need to avoid is thinking any system (human or machine) is faultless.

  12. More on the role of humans: Robots are common in automobile manufacturing, but customers are now demanding more customisation in their cars, which robots aren’t as good at providing. So there are new roles for humans, working with machines. They call them “cobots”. On the railroad, there are now “portals” that photograph every outwardly visible inch of railcars as they pass through, and detect faults in minutes that used to take hours of inspection. Railcar engineers can concentrate on maintenance rather than finding faults. The current crop of AI is good at dull operational tasks, leaving the more interesting work for people (but do some humans like doing dull things some of the time? I know I do).

  13. On intelligence: People are building more expressive bots, whether physical or virtual, that mimic human emotions through their appearance or behaviour. They are also getting better at reading emotion. At some point the mimicry gets so good we start reacting as if it’s real; that’s the point of the Turing test. On the other hand, knowing that you are talking to a machine rather than a human may be liberating: we learned about a chatbot designed to help people decide to stop smoking - one of its benefits was that people could talk to it without feeling judged. If a bot could fake care, would you prefer to tell a machine your woes?
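
Postscript on point 4: the gate between an Android app and most of those data streams is nothing more than a manifest declaration plus the runtime permission prompt. Below is a minimal, hypothetical sketch of what sits behind that dialog; the permission constants are Android’s real ones, but the class and method names are mine, not anything from the researchers’ app.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical activity, for illustration only. Each permission below must also be
// declared in AndroidManifest.xml before it can be requested at runtime.
class SensingActivity : AppCompatActivity() {

    // The kinds of data mentioned in point 4, mapped to standard Android permissions.
    private val sensingPermissions = arrayOf(
        Manifest.permission.RECORD_AUDIO,          // listen to conversations
        Manifest.permission.ACCESS_FINE_LOCATION,  // work out what kind of places you go to
        Manifest.permission.ACTIVITY_RECOGNITION   // monitor movement (Android 10+)
    )
    // Ambient light (read via SensorManager) and touches on the app's own screen
    // need no permission at all, so they never even appear in the dialog.

    private val permissionRequestCode = 42

    fun requestSensingPermissions() {
        // Ask for anything not already granted; the user sees a single system prompt.
        val missing = sensingPermissions.filter { permission ->
            ContextCompat.checkSelfPermission(this, permission) !=
                PackageManager.PERMISSION_GRANTED
        }
        if (missing.isNotEmpty()) {
            ActivityCompat.requestPermissions(this, missing.toTypedArray(), permissionRequestCode)
        }
    }
}
```

Once that prompt has been clicked through there is no further check on what the app does with the data, which is exactly why the question at the end of point 4 matters.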