Keynote Speakers

Main Keynote Speakers

Requirements We Live By

Bashar Nuseibeh, The Open University (UK) and Lero (Ireland)

Enlightened requirements researchers and practitioners generally accept that RE is as much about understanding the world as it is about understanding the software and systems that will be built to inhabit that world. As a result, the RE field has fostered a multi-disciplinary following of researchers and practitioners who are prepared to engage deeply in application domains, to apply a range of technical and socio-technical skills to understand those domains, and to accept that the outcome of an effective RE process may not deliver a software system at all. The RE community has also developed, deployed, and evaluated a wide range of contributions that reflect such enlightenment: conceptual models that reflect the relationships between the world and the machine, domain models and scenarios that reflect understandings of problem domains, and enterprise models that reflect the organisations and processes that build and deploy systems. All these in addition to the models that capture the all-important behaviour of systems and software.

It seems to me, however, that the RE discipline is at a crossroads. The mechanics of the discipline appear to be established: much of the published research is now empirical, or technical only in so far as it responds to technological advances elsewhere, such as the mobile and ubiquitous technologies represented by the Internet of Things, richer application domains such as Industrie 4.0 and Smart Cities, or more advanced computational techniques that are maturing, such as AI, machine learning, and blockchains. As a community, we reassure ourselves that our discipline is safe and thriving; after all, RE is a “forever problem”: all systems we wish to build will have requirements, now and forever. But this is to be complacent. RE has no protected status to study and deploy requirements. The formal models we elicit, design, and build are increasingly deployable by other disciplines, as are the values that we seek our modern, AI-driven systems to embody.

A new and potentially radical re-framing of our discipline may be needed, and I will speculate on what this may look like. It may require letting go of what we have considered to be the boundaries of our discipline, while embracing new but fluid boundaries. I have advocated and explored “software without boundaries” as one such framing that challenges the separation of the ‘world and the machine’, not because I don't accept the separation of the ‘what’ and the ‘how’, the ‘indicative’ and the ‘optative’, or the ‘problem’ and the ‘solution’, but because the world we live in no longer accepts these separations. Society, more often than not, does not think of systems, of technology, or indeed of software; it thinks of ways of working, ways of interacting, ways of living. Requirements, such as they are, are ‘requirements we live by’, not requirements of systems in the world. At an extreme, if one believes the AI hype, ‘the world and the machine’ will increasingly be replaced by the ‘world in the machine’. Where does the RE community stand on this, and what can this community do to contribute to the framing and solving of this new reality?

My own work in recent years has evolved to reflect the above. I still revisit, with some pride, the ‘RE Roadmap’ that Steve Easterbrook and I published in 2000 – many of the fundamental RE principles we presented still hold today. But I cringe at how we missed the changing nature of the world in which we operate: a world populated by autonomous and adaptive systems, populated by big data and associated analytics, and populated by stakeholders whose multiple perspectives reflect a multitude of ethical and social values, not all of which are wholesome, and many of which are actively subversive or malicious. My own research on security and privacy requirements only scratches the surface of this evolving reality. I invite the RE community to reflect on how it frames its own research in this context.

Bashar Nuseibeh is Professor of Computing at The Open University and a Professor of Software Engineering and Chief Scientist at Lero - The Irish Software Research Centre. He is also a Visiting Professor at University College London (UCL) and the National Institute of Informatics (NII), Tokyo, Japan. Previously, he was a Reader in Computing at Imperial College London and Head of its Software Engineering Laboratory. He has had a career-long research interest in requirements engineering (RE), having helped organise the first RE conference in 1993, co-founded the first national (British) RE specialist group in 1994, and served as programme chair of RE’01. His research interests in RE have broadened over the years, continuing to advocate a problem-driven perspective for software engineering of mission critical systems, and increasingly engaging with socio-technical systems that cut across digital, physical, and social spaces. Much of his research in recent years has explored security and privacy requirements of modern software systems, and the engineering of autonomy and adaptation in those systems.

Bashar is a longstanding and founding member of the editorial board of the RE Journal and served as Editor-in-Chief of IEEE Transactions on Software Engineering and of the Automated Software Engineering Journal. He currently serves as Editor-in-Chief of ACM Transactions on Autonomous and Adaptive Systems and is an associate editor of IEEE Security & Privacy Magazine. He chaired the IFIP Working Group 2.9 on Requirements Engineering, of which he has been a founding member since 1995. He received an ICSE Most Influential Paper Award, a Philip Leverhulme Prize, an Automated Software Engineering Fellowship, and a Royal Academy of Engineering Senior Research Fellowship. He received an IFIP Outstanding Service Award (2009) and an ACM SIGSOFT Distinguished Service Award (2015). His Open University research team received the 2017 IET Innovation Award in Cyber Security, and, as Chief Scientist of Lero, he was a recipient of the IEEE TCSE Distinguished Synergy Award for research-industry and innovation collaboration. He is the recipient of a Royal Society-Wolfson Merit Award and two European Research Council (ERC) awards, including an ERC Advanced Grant on ‘Adaptive Security and Privacy’.

He is a Fellow of the British and Irish Computer Societies, a Fellow of the Institution of Engineering & Technology, and a Member of Academia Europaea.

Uncertain Requirements, Assurance and Machine-Learning

Marsha Chechik, University of Toronto, Canada

From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development to standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well justified methods and articulate these justifications in an assurance case that is ultimately judged by a human.

Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the one hand, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other, safety assessment and assurance of such components becomes very challenging.

In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.

Marsha Chechik is Professor in the Department of Computer Science at the University of Toronto. She received her Ph.D. from the University of Maryland in 1996. Prof. Chechik’s research interests are in the application of formal methods to improve the quality of software. She has authored numerous papers in formal methods, software specification and verification, computer safety and security, and requirements engineering. In 2002-2003, Prof. Chechik was a visiting scientist at Lucent Technologies in Murray Hill, NJ and at Imperial College London, UK, and in 2013 at Stony Brook University. She is a member of IFIP WG 2.9 on Requirements Engineering and an Associate Editor-in-Chief of the Journal on Software and Systems Modeling. She was an associate editor of IEEE Transactions on Software Engineering in 2003-2007 and 2010-2013. She regularly serves on program committees of international conferences in the areas of software engineering and automated verification. Marsha Chechik has been Program Committee Co-Chair of the 2018 International Conference on Software Engineering (ICSE'18), the 2016 International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'16), the 2016 Working Conference on Verified Software: Theories, Tools, and Experiments (VSTTE'16), the 2014 International Conference on Automated Software Engineering (ASE'14), the 2008 International Conference on Concurrency Theory (CONCUR'08), CASCON'08, and the 2009 International Conference on Fundamental Approaches to Software Engineering (FASE'09). She will be PC Co-Chair of ESEC/FSE'2021. She is a Member of ACM SIGSOFT and the IEEE Computer Society.

Human Values in Software: A New Paradigm for Requirements Engineering?

Jon Whittle, Monash University, Australia

Requirements engineering (RE) has generally done a good job of helping to define software systems that deliver the intended functionality at the intended cost and that are safe, secure and reliable. However, there is a broader set of human values, such as transparency, integrity, diversity, compassion and social justice, that are largely ignored when we develop software systems. In this talk, I will argue that RE methods should place more emphasis on these human values so we do a better job of building software that aligns with our individual, corporate or societal values. Furthermore, drawing on recent evidence from case studies in industry, I will argue that dealing with human values in software systems is not just of interest to a small group of organisations; rather, all software projects should think about human values, build them in where appropriate, test for them, and use them to drive design decisions. When they are not dealt with in this way, there can be severe social and economic consequences.

Jon Whittle is Executive Dean of the Faculty of Information Technology at Monash University, Melbourne, and Professor of Software Engineering. Before joining Monash, Jon was Head of the School of Computing and Communications at Lancaster University, UK. Jon’s research spans software engineering and human-computer interaction. In software engineering, he is best known for his work on program and design synthesis, model-driven development and aspect-oriented modelling. He is a past recipient of the Royal Society’s Wolfson Merit Fellowship, a Pilkington Teaching Award for his studio-based approach to software engineering education, and an IET Software Premium Award. He has also received a number of Best Paper awards or nominations at ICSE, ASE, RE, MODELS, CSEE&T and CHI. Jon has chaired a number of prestigious software engineering conferences and recently co-chaired ICSE 2019 with Tevfik Bultan. Currently, Jon’s research focuses on IT for social good and, in particular, how to reimagine software design methodologies to embed social values.


Industrial Innovation Track Keynote Speaker

The next step in Requirements Management: Artificial Intelligence

Hazel Woodcock, IBM Watson IoT, USA

In this session, we look at automation in requirements management, starting with a view of automation in the review process. We look at the benefits to be gained and the practicality of automation. Choosing a standard to follow is a necessary step before implementation, and we discuss some of the options, from domain-specific templates to an industry-agnostic standard.

We then look at IBM Watson IoT's approach, the Requirements Quality Assistant: how it works, how it was trained, and what value it can add to a project.

Finally, we look to the future of Artificial Intelligence and Machine Learning support for the Requirements Management discipline, and how this can aid the requirements engineer at any career stage.

Hazel Woodcock is an Industry Solution Architect with IBM Watson IoT. The role involves helping clients with both process and tool issues around requirements management and broader systems engineering. With a strongly pragmatic approach, she works to help people on a journey of achievable, worthwhile improvement in process, tool use, and individual skills.

Hazel has a background in the defence and automotive industries, and has been in the realm of requirements definition and management for twenty years, both as a practitioner and in a consultancy role. Hazel's recent work has included advising on IBM's Requirements Quality Assistant as a Subject Matter Expert, bringing her long association with INCOSE (International Council on Systems Engineering) to bear on the topic.

Hazel is currently the INCOSE UK Communications Director, and an active member of the wider INCOSE organisation.