Documents

Keynote Lectures

Rethinking our Defensive Strategy
Inge Bryan, Chair of the Dutch Institute for Vulnerability Disclosure, Netherlands

How India Navigates Between Binding Government Regulation and Self-Regulation
Pavan Duggal, Advocate, Supreme Court of India, Chairman, International Commission on Cyber Security Law India, and Chief Executive, Artificial Intelligence Law Hub, India

Rules for AI, Governability and the Common Interest of States to Create an International Artificial Intelligence Agency
Paul Nemitz, Principal Adviser European Commission, Belgium

 

Rethinking our Defensive Strategy

Inge Bryan
Chair of the Dutch Institute for Vulnerability Disclosure
Netherlands
 

Brief Bio
Inge Bryan, supervisory board member at the Clingendael Institute and chair of the Dutch Institute for Vulnerability Disclosure, former CEO of Fox-IT, is a trusted advisor to boards and policymakers. Her career spans two decades of intelligence and criminal investigations and eight years leading cyber security businesses. She has vast experience in leading investigations, driving organizational change and managing crises, and is intimately familiar with all sides of cybercrime, espionage and warfare. Since leaving law enforcement in 2016, she has led cyber security programs in large organizations, primarily in the public sector and critical infrastructure.
Inge’s ancillary positions are: Board member of Royal Dutch Society for the Sciences, Chair of the Supervisory Board of Datenna, Advisory board member at the National Archives, Supervisory Board Member of the Victim Support Fund, member of the advisory council of the Dutch federation of employers and advisory board member of the Global Cyber Alliance.


Abstract
Cybercrime as we know it is changing, powered by AI. Defensive measures are predominantly small scale and customized. We need to rethink our defensive strategy. This is not only a technical challenge but an organizational and societal one. In her presentation, Ms Bryan will explain the current state of cyber defense and explore viable solutions.



 

 

How India Navigates Between Binding Government Regulation and Self-Regulation

Pavan Duggal
Advocate, Supreme Court of India, Chairman, International Commission on Cyber Security Law India, and Chief Executive, Artificial Intelligence Law Hub
India
 

Brief Bio
Dr. Pavan Duggal is an internationally renowned thought leader in the field of cyber law, cybersecurity, and internet governance. He has been a pioneer in shaping the legal framework for the internet in India, and his contributions have been instrumental in protecting the rights of individuals and businesses in cyberspace. His expertise spans a wide range of areas, including data protection, privacy, e-commerce, intellectual property, and cybercrime.


Abstract
Of all the countries in the world, India is uniquely positioned for an AI revolution, for two important reasons: (1) India has a vast youth population, and (2) its young people are ardent users of Large Language Models (LLMs). Artificial Intelligence is widely recognized as the cornerstone of transformative solutions. From this position, India participates in the UN’s Global Digital Compact (GDC) and the Global Partnership on Artificial Intelligence (GPAI).
A large driver of local innovation is open-source LLMs such as Llama 3.1.
In the West, the main fears around AI are job losses and misinformation, but in India AI is seen as an opportunity. Some Indian researchers advocate minimal AI-specific regulation and reliance on existing legal frameworks to address potential fallout.
Three Types of Regulation: Two types of regulation are well established: (1) binding government regulation (as in the EU and China) and (2) a self-regulation approach (as in the US, UK, Japan, and Singapore). A third type is (3) binding government regulation that is still under discussion, as in Australia, Brazil and Canada. India is very active in this field but has indeed not yet decided which path to take.
So far, there are no formalised rules bridging the three types of systems. At present, peace between states is to some extent tied to trade and services between them; in practice, however, the situation is precarious, and governments should remain alert.
India’s Aim: India currently aims to develop an AI policy focused on “AI for All”, emphasizing global ethical standards and a responsible approach applicable to all AI-related research in India.
Topics of importance:
Governability: AI presents a unique set of risks, stemming from its rapid development, ambitious goals, the variety of its applications, and its potential for significant societal impact.
Unpredictable Outcomes: The rapid development of AI systems can lead to unexpected and potentially dangerous emergent behaviours, making it challenging to fully understand and control their actions.
Superintelligence: A primary goal for some AI researchers is to develop artificial general intelligence (AGI) or even superintelligence – AI that surpasses human intelligence in all aspects. While offering immense potential, such advanced AI could also pose existential risks if not developed and controlled responsibly.
Autonomous Systems: The development of autonomous systems, such as self-driving cars and AI-powered weapons, raises concerns about safety, accountability, and the potential for misuse.
Job Displacement: Automation powered by AI is expected to displace many jobs across various sectors, leading to significant economic and social disruption.
Bias and Discrimination: AI systems can inherit and amplify biases present in the data they are trained on, leading to discriminatory outcomes in areas like loan applications, criminal justice, and hiring.



 

 

Rules for AI, Governability and the Common Interest of States to Create an International Artificial Intelligence Agency

Paul Nemitz
Principal Adviser European Commission
Belgium
 

Brief Bio
Paul Nemitz was appointed Principal Adviser by the European Commission on 12 April 2017, following a six-year appointment as Director for Fundamental Rights and Citizens’ Rights in the same Directorate General.
As Director, Nemitz led the reform of Data Protection legislation in the EU, the negotiations of the EU – US Privacy Shield and the negotiations with major US Internet Companies of the EU Code of Conduct against incitement to violence and hate speech on the Internet.
Before joining the Directorate-General for Justice and Consumers, Nemitz held posts in the Legal Service of the European Commission, the Cabinet of the Commissioner for Development Cooperation and in the Directorates General for Trade, Transport and Maritime Affairs.
Nemitz has represented the European Commission in numerous cases before the European Court of Justice and has published widely on EU law.
He is a visiting Professor of Law at the College of Europe in Bruges; Member of the Board of the Verein Gegen Vergessen – Für Demokratie e.V., Berlin; Trustee of the Leo Baeck Institute, New York; Member of the Board of the Association for Accountability and Internet Democracy, AAID, Paris; Member of the Scientific Council of the Foundation for European Progressive Studies, Brussels. He is also a member of the Tönissteiner Kreis e.V., Berlin, the Commission for Media and Internet policy of the SPD, Berlin; the German Association for European Law and the Arbeitskreis Europäische Integration, Heidelberg.
Nemitz studied Law at Hamburg University. He passed the state examinations for the judiciary and for a short time was a teaching assistant for Constitutional Law and the Law of the Sea at Hamburg University.
He obtained a Master of Comparative Law from George Washington University Law School in Washington, D.C., where he was a Fulbright grantee. He also passed the first and second cycle of the Strasbourg Faculty for Comparative Law.


Abstract
Neither peace between states nor trade in services between states exists without rules. It is a misconception to believe that a free-for-all, an unregulated competition for more powerful weapons or a larger market share, benefits any state. Historical experience demonstrates that trade functions better with legal rules, whether it is for goods or services. And AI services across borders are a form of trade in services. Similarly, weapons of mass destruction, as well as weapons that lose control after deployment, like landmines or small arms, pose a risk to every state. This has led to international agreements on these types of weapons.
AI is deployed globally via the internet and will become as ubiquitous as electricity. If control of AI is lost and it mutates into a risk, it could undermine the governability of any state. Therefore, maintaining governability is a common interest among states. Thus, there is a shared interest in adopting rules that ensure AI (regardless of how it is developed or how it mutates) does not undermine governability. The costs of hedging against potential intentional or unintentional threats to governability through AI will rise exponentially with the advancement of AI systems. To focus resources on the productive use of AI, it is in the interest of states to agree on multilateral rules that guarantee AI never undermines governability.
Recognizing that we live in a global community of risks and opportunities, the International Atomic Energy Agency, the World Health Organization, and many other international organizations were created to manage specific risks for the benefit of states and humanity. There is no doubt that a similar international organization will be necessary for AI, and the costs for all states will be lower if this authority is established quickly. The question is not whether the world will follow the example of the EU's AI Regulation and the 'Brussels effect.'
The question is whether the states of this world can define their interests rationally in light of a technology that, the more opportunities it offers, the more inherent risks it poses.


