THANK YOU, SPONSORS!

Thank you for your support for this event. Without you and our sponsors, this conference would not have been the success that it was!

Stay tuned to the CMIA website and social media for more information about the theme of CANIC 2019 and how you can be part of it as a sponsor, contributor, or participant!

CANIC 2018 Summary

Prepared by Dr. David Charters

Please note: As the conference was held under the Chatham House Rule, the following remarks are not attributed to any specific speaker.

The theme of this year’s conference was “Artificial Intelligence and the Implications for the Canadian Intelligence Enterprise”. Eleven speakers and panelists from the military, government, academic, and industry communities offered their expertise and perspectives on this topic. What follows is a summary of the key discussion points and issues that emerged from the presentations.

One of the first speakers provided a simple but useful definition of Artificial Intelligence (AI): “Embedding human-level intelligence into machines.” AI is not a single thing unto itself – it is many things. As the conference proceeded, several speakers pointed out that there are varying degrees and types of AI, some more advanced than others: classic AI; machine learning and its offshoot, deep learning (neural networks); and machine intelligence (which mimics the human brain). What we use today is weak, ‘narrow’ AI, which is very basic and not sentient: it can do facial recognition, for example, but not general problem-solving. Super AI will exceed human reasoning capacity – but we are not there yet. AI is also disruptive; it simultaneously empowers us and makes us vulnerable. The “Internet of Things” that we rely on for day-to-day life is a vulnerability. AI can allow devices to do things they were never designed or intended to do.

In the military context, AI’s human-machine interactions are changing the way we think about and conduct warfighting, which means the military needs to change its operational culture to adapt to AI. AI must be seen as integral to military organizations and operations. For military intelligence, AI is an enabler for intelligence, surveillance, and reconnaissance (ISR) collection operations, fusion, analysis, and more. But it won’t revolutionize the military intelligence enterprise by itself. People in positions of authority and leadership will need to take ownership of AI’s potential and lead the charge to incorporate it. They need to persuade higher military, political, and bureaucratic decision-makers that AI innovation is a value proposition – even if we can’t see all of that value in concrete terms today or tomorrow.

The speakers, panelists, and members of the audience raised a host of issues that need to be considered as armed forces develop and incorporate AI capabilities. These are not listed in any order of priority.

  1. How can we manage the torrent of data? We are drowning in it – like trying to drink from a firehose. Can AI serve as our multi-port nozzle, allowing us to select which data ports to drink from and which to ignore?
  2. Adopting AI does not require choosing between machines and people. We will need both: a human-machine team. What we have to ask is: how many people, doing what things, and how can AI help them be more capable? What training and education does the future workforce need? We may be able to leave routine, lower-order tasks to AI, freeing people to do vital reasoning tasks, assisted by AI.
  3. Procurement – AI already outruns normal procurement timelines by several orders of magnitude. How do we solve that dilemma? How do we persuade bureaucracies to invest in ‘experimental’ technologies whose outcomes can’t be predicted? The private sector has a vital role to play in this as a partner. Start small if necessary, but start. 
  4. Accountability – what does that mean in the AI world? Who (or what) is responsible for the use of AI, especially if something goes wrong? Is it the user, the builder, the data source, or the system design? Does it depend on what the problem is: user error, data error, programming error, or a hack?
  5. The Fear Factor. This takes several different forms. The most widespread is the fear that AI will be misused or that it will run amok on its own, ultimately turning against us. This highly emotive fear could easily dominate debate about whether to use AI and prevent us from doing so, even if a good rational case can be made in its favour. The second is Risk Aversion – fear of failure, because failure can be expensive and politically inconvenient. Yet we need to be able to fail in order to learn how to use AI better. So there is a need to fail early (or fail fast) and fail ‘forward’ (to treat failure as a learning opportunity). Finally, there is fear of the very real threats AI poses. Don’t be complacent about them. We need to make our systems and people sensitive to subtle changes that may presage an emerging major problem or threat, such as a cyber attack or a deception operation.
  6. Lawful Access – What Big Data can the military intelligence enterprise use in a Canadian legal/Charter context, when our opponents and some of our allies are not so constrained?
  7. Autonomous Weapons – these raise a host of legal and ethical questions, because machines can’t make moral decisions. What is autonomy, and is it ‘all or nothing’ or a spectrum? How do these weapons relate to the Law of Armed Conflict? Are all of the implications negative, or are there positives beyond simple efficiency? Finding answers to these questions will influence political decisions, laws, procurement, and military practices, doctrines, and operations.
  8. Technological Boundaries – are there any beyond which only governments (and militaries) should be allowed to go? Or are there some beyond which they should be prevented from going? (Autonomous weapons are part of this debate).
  9. Go Slow equals No Go. It is tempting to wait to see what the next step is, then try to catch up later. That would be a mistake. Our opponents aren’t waiting to see what we are going to do in the AI domain. They are forging ahead. We may never again have information dominance. The best we might hope for is information parity, and even that may be temporary. AI is not a ‘silver bullet’ and there is no AI ‘Easy Button’.
  10. Prepare for AI failure in war. Both sides will try to take down each other’s systems, and they will probably succeed, at least for some period of time. So, our forces (including military intelligence) will need to be able to fight in an AI-free environment – to be adaptable enough that they can fall back on ‘traditional’ means of fighting and doing intelligence: map and compass, Mark 1 Eyeball, and human contact.     

To conclude with an observation from one of the speakers: “warfare will continue to evolve in unexpected ways”. His admonition was to “embrace the chaos” – in other words, to face the challenges and opportunities presented by AI.