Artificial Intelligence for Cyber Security (online)

Overview

Artificial Intelligence for Cybersecurity at the University of Oxford is a pioneering course that blends the domains of cyber security and artificial intelligence (AI).  This course has been designed for cyber security professionals who want to understand AI, and AI professionals who want to work with cyber security. 

Artificial intelligence impacts all the personas in cyber security (threat actors, defenders, regulatory and government agencies, etc.).  In this course, we aim to create an overall framework spanning personas, technology components, and platforms and study the impact of artificial intelligence on this ecosystem.  

Where coding is needed, Python will be used.  You are expected to be familiar with coding but are not required to master any specific language or code in class. Some code will be used in demonstrations, but you will not need to do any coding yourself.  We expect that you have an understanding of Cybersecurity but not Artificial Intelligence. 

The course uses the book: Machine Learning Security Principles - an electronic copy will be provided to you as part of the course.

Programme details

Fundamentals of Artificial Intelligence

Covers the basics of machine learning and deep learning as applicable to Cyber security. 

Fundamentals of Cyber Security

This section explores the fundamentals of cyber security. 

The themes covered include: 

  • Identity Authentication 
  • Confidentiality 
  • Privacy 
  • Anonymity 
  • Availability and integrity 
  • Cryptographic algorithm 
  • Major attack types
  • High-level security protocols 
  • Authentication
  • Compliance
  • Security assessment

Fundamentals of AI for Security

This module discusses the fundamentals of AI and cyber security, including the algorithms, benefits, and threats to AI models. 

Here, we take a case study approach and discuss the strategies of specific vendors. 

Actors in the Cyber Security ecosystem and how Artificial Intelligence impacts their roles

This section discusses each participant in the cyber security ecosystem and how they are impacted by artificial intelligence. 

Securing a Machine Learning System

In this section, we discuss how to secure a machine learning system with the aim of understanding the types of attacks on a machine learning system and its mitigations. 

Mitigating Risk at Training by Validating and Maintaining Datasets 

Data is one of the most significant risks to AI and cyber security. 

This section covers issues like dataset-related threats, data corruption, feature manipulation threats, and dataset modification risks. 

Detecting, Analysing and Mitigating Anomalies 

This section is concerned with detecting and analysing various anomalies and threats

  • Network-level threats and mitigation using machine learning
  • IoT threats and mitigation using machine learning 
  • Emerging threats and mitigations

 Protective Technologies:

  • Log Technologies
  • Intrusion Prevention System
  • Anti-virus / Anti- Malware Solutions
  •  Response planning and case management          

DevSecOps and MLSecOps  

Identify the need of DevSecOps and MLSecOps in the development lifecycle.

Have a conceptual understanding of the evolving processes, tools and technologies for securing AI enabled products.

GRC (governance risk and compliance) 

  • Security Governance covering principles, frameworks, standards (ISO/IEC 42001, ISO/IEC 27001, NIST AI Risk Management Framework)  
  • Emerging and future threats

LLM security standards

  • Conversation agent vulnerabilities
  • Phishing and Social Engineering with LLMs 
  • LLM-Aided Social Media Influence Operations
  • LLMs Red Teaming 
  •  Private Information Leakage in LLMs
  • Prompt injection and LLM control flow hijacking
  • Vulnerabilities Introduced by LLMs related to code generation
  • Vulnerabilities Introduced by LLMs related to media (images and video generation) Maintaining the privacy and security of training data
  • Maintaining the privacy and security of AI models
  • Adversarial Evasion on LLMs 
  • On-prem LLM deployment vulnerabilities
  • LLM vulnerabilities due to open source LLMs 

The AI Risk Register

The course capstone is based on the idea of an AI Risk Register which considers Threat agents, Assets and vulnerabilities, giving us risk scenarios and risk likelihood. When we additionally factor in the likelihood and impact, we get an inherent risk. Additionally, we then apply AI controls giving us the residual risk. 

The potential risks considered include: 

Data and algorithm risks

  • Data Privacy Risks
  • Algorithmic Bias and Fairness Risks
  • Security Risks
  • Ethical Risks
  • Transparency and Explainability Risks
  • Data Quality and Integrity Risks
  • User and Societal Impact Risks
  • Performance and Scalability Risks
  • the training data 
  • the model parameters, 
  • and the trained model itself. 

Operational risks

  • Business downtime risks
  • Regulatory and Compliance Risks
  • Vendor and Third-Party Risks
  • Financial Risks
  • Reputational Risks
  • Innovation and Competitive Risks
  • Legal Risks
  • Identity (management, authentication, access)

The above may be subject to minor changes and revisions.

Course Delivery

This course will run over six live online sessions on Mondays, Wednesdays and Fridays.

Session dates: Monday 11, Wednesday 13, Friday 15, Monday 18, Wednesday 20 and Friday 22 November 2024. 

Sessions will be 14:00 to 18:30 UK time (with a half-hour break) and delivered online via Microsoft Teams.

A world clock, and time zone converter can be found here: https://bit.ly/3bSPu6D

No attendance at Oxford is required and you do not need to purchase any software.

Accessing Your Online Course 

Details about accessing the private MS Teams course site will be emailed to you during the week prior to the course commencing.  

Please get in touch if you have not received this information within three working days of the course start date. 

Digital Certification

To complete the course, you will be required to attend and participate in all of the live sessions on the course in order to be considered for a certificate. Participants who complete the course will receive a link to download a University of Oxford digital certificate. Information on how to access this digital certificate will be emailed to you after the end of the course.

The certificate will show your name, the course title and the dates of the course you attended. You will also be able to download your certificate or share it on social media if you choose to do so.

Fees

Description Costs
Course Fee £1295.00

Payment

All courses are VAT exempt.

Register immediately online 

Click the “book now” button on this webpage. Payment by credit or debit card is required.

Request an invoice

If you require an invoice for your company or personal records, please complete an online application form. The Course Administrator will then email you an invoice. Payment is accepted online, by credit/debit card, or by bank transfer. Please do not send card or bank details via email

Tutors

Ajit Jaokar

Course Director

Ajit is a dedicated leader and teacher in Artificial Intelligence (AI), with a strong background in AI for Cyber-Physical Systems, research, entrepreneurship, and academia. 

Currently, he serves as the Course Director for several AI programs at the University of Oxford and is a Visiting Fellow in Engineering Sciences at the University of Oxford. His work is rooted in the interdisciplinary aspects of AI, such as AI integration with Digital Twins and Cybersecurity. 

His courses have also been delivered at prestigious institutions, including the London School of Economics (LSE), Universidad Politécnica de Madrid (UPM), and as part of The Future Society at the Harvard Kennedy School of Government.

As an Advisory AI Engineer, Ajit specialises in developing innovative, early-stage AI prototypes for complex applications. His work focuses on leveraging interdisciplinary approaches to solve real-world challenges using AI technologies.

Ajit has shared his expertise on technology and AI with several high-profile platforms, including the World Economic Forum, Capitol Hill/White House, and the European Parliament.

Ajit is currently writing a book aimed at teaching AI through mathematical foundations at the high school level.

Ajit resides in London, UK, and holds British citizenship. He is actively engaged in advancing AI education and innovation both locally and globally. He is neurodiverse - being on the high functioning autism spectrum. 

Ajit's work in teaching, consulting, and entrepreneurship is grounded in methodologies and frameworks he developed through his AI teaching experience. These methodologies help to rapidly develop complex, interdisciplinary AI solutions in a relatively short time. These include:
1. The Jigsaw Methodology for low-code data science to non-developers.
2. The AI Product Manager framework and AI product market fit framework 
3. Software engineering with the LLM stack 
4. Agentic RAG for cyber-physical systems.
5. AI for Engineering sciences: 
6. The ability of AI to reason using large language models 

He also consults at senior advisory levels to companies.

His newsletter on AI in Linkedin has a wide following 
https://www.linkedin.com/newsletters/artificial-intelligence-6793973274368856064/

Raj Sharma

Course Director

Raj Sharma has over 24 years of experience in software consulting, entrepreneurship with artificial intelligence (machine learning and deep learning), big data technologies, cybersecurity, and AI. He focuses on digital sustainability.

As a consultancy's founder and principal enterprise architect, Raj delivered data and AI strategy and architecture full-stack data science projects to startups, scale-ups, MNCs, and the UK government. He focuses on Designing and building Tech using AI and Big Data for cybersecurity, Insurance, and healthcare domains. He was also involved in Designing strategy and architecture for  Enterprise Data platforms and securing Machine learning pipelines for development, training, testing, and deploying ML algorithms in a production environment. He has been involved in implementing artificial intelligence cyber security algorithms based on Generative AI.

He has a Master's Degree in Information Security certified by GCHQ, the UK Government Communications Headquarters, with a Research Project in AI. He has a Master's Degree in Software Development and Algorithm Design and a strong software engineering background in mathematics, statistics, and physics. He is pursuing PhD from Royal Holloway University of London (Generative AI applications in Information Security).

Nadeem Bukhari

Course Tutor

With over two decades dedicated to Information Security Governance, Risk, and Compliance (GRC), Nadeem’s career encompasses a variety of high-impact roles, from strategic positions within top management consultancies to key CISO appointments, across a global landscape. His leadership in transformative security initiatives has contributed to the integration of AI technologies within multinational organisations positioning him as a forward-thinking leader in the cybersecurity field.

Nadeem’s educational approach is aimed at equipping professionals with the necessary skills to adeptly manage security governance, risk, and compliance, enriched with AI advancements. By bridging the gap between theoretical knowledge and practical application, he ensures a deep understanding of AI’s transformative role in information security.

Beyond his corporate achievements, Nadeem is an influential voice within the information security community, shaping the discourse on the integration of cybersecurity and AI through participation in conferences and contributions to renowned publications. His role extends to serving as a board advisor for an AI startup, where he leverages his expertise to guide the strategic direction of pioneering AI solutions in cybersecurity.

In leading sessions that highlight the crucial role of AI in reshaping Security GRC, Nadeem equips professionals to utilise technological advancements in developing secure, forward-thinking frameworks. His insights into AI strategies against evolving digital threats provide the tools necessary for constructing secure and resilient security organisations.

Vikram Tegginamath

Course Tutor

Vikram Tegginamath is a Cyber Security leader at McKinsey & Company, and a technologist with over two decades of experience in developing, managing and securing information systems for high-growth Consumer Electronics (Philips, NXP), Broadcast (Sky and BBC), Data Science, Artificial Intelligence (QuantumBlack) companies. Earlier in the career, his focus was software development and integration, working with numerous Fortune 500 companies in the design, implementation and management of large Systems Integration projects (SkyQ, YouView, etc).

In the past 10+ years, he has applied that experience in the software and digital assets space focused on Practice Security leadership, Team leadership, Cloud Security, Design, Development and Integration of Security Tools, Third-party Vendor risk management, DevSecOps, Big data security domain, AI transformation and MLSecOps.

He currently serves as the Head of Practice Security for Global Operations Practice at Mckinsey & Company, serving clients across various industries from FinTech, Investment\Retail Banking to Manufacturing and Public Sector. As a trusted advisor to senior business and security leadership, his current role involves securely adopting new technologies (such as GenAI), building successful internal security teams, creating security programme and taking the organization’s security capability forward in an accelerated timeframe.

With extensive experience of DevSecOps and Cloud Security, he is involved in the research areas of evolving AI Security and MLSecOps with active contribution to the security community, been a panelist in Cyber workshops and conferences.

With a MSc in Cyber Security (GCHQ certified) from the University of Oxford and BEng in E&C, it has helped him to apply the engineering principles within the diverse domains he works with, including Data Science & AI. He holds industry certifications, including the AI for Cybersecurity and ML for Cybersecurity from reputed universities, AWS Certified Solution Architect - Associate, PRINCE2 and ISEB certs in Solution and Enterprise.

Outside of work, he is passionate about cricket, believes in giving back to the community. He is a qualified ECB Cricket coach, working to help young players improve, have fun, be safe and learn at every stage of their development.

Anjali Jain

Course Tutor

Digital Solutions Architect, Metrobank 

Anjali is a Digital Solutions Architect at Metrobank, where she helps to deliver advanced technology driven business solutions around diverse themes of Internet Banking, Mobile App, Business banking, and Open banking/PSD2, using agile methodology.

She has over 16 years of IT experience and worked across Banking, Telecom and logistics domains, from inception to the delivery of complex projects.

Anjali is passionate about AI and Machine learning and completed the course "Data science for internet of things" in February 2019 from the University of Oxford.

David Stevens

Course Tutor

Regional Director for Customer Success, Neo4j 

David is the Regional Director for Customer Success at Neo4j, where he helps customers realise their business goals with Graph database solutions.  

He has over 25 years of experience and before joining Neo4j, he was a member of the Office of CTO at Hewlett Packard and held the title of Distinguished Architect, working on emerging technologies and designing data driven solutions. 

David has a strong passion for solution architecture and ensuring technology delivers the outcomes desired by business users and sponsors.
His experience with Graph databases spans almost 10 years, designing CMDB, NLP engines and HR and workforce developments solutions. He won a Graph industry award in 2018.  

Olu Odeniyi

Course Tutor

Olu Odeniyi has over 30 years’ experience helping organisations maximise commercial gain from technology solutions.
During this time, Olu held several key senior leadership, strategic and operational positions in the public and private sectors where he gained awards for innovation and exceeding objectives.  

He advises companies on cyber security, information security and digital transformation.
Working with the University of West London Enterprise hub, Olu created and led workshops on cyber security, AI, and information security. He has also created and taught cyber security courses to audiences from the public and private sectors.  

Olu led the cyber security and big data themes for the Science and Innovation Audit (SIA) sponsored and by the department of Business, Energy and Industrial Strategy (BEIS) led by Brunel University. He also led a workshop on “cyber security for airports” involving experts from Glassgow and Royal Holloway university.
Olu co-authored the final report published by BEIS.  

As a virtual CISO, Olu has led several cyber security incident responses and liaised with forensic investigators.
He leverages his practical expertise and scholarly inquiry to produce insights and guidance on emerging cyber security challenges. He has also reviewed books on cyber security and is quoted regarding his acuities on “the threat of deepfakes”. 

Olu is currently researching application of AI for cyber security within IoT, working with the University of West London. He is also working on a new start-up focused on educating boardrooms on cyber security governance. Olu speaks on various cyber security topics at public events and private gatherings. 

Researching the use of Large Language Models (LLMs) within the cyber security domain is another current research interest for Olu. 

Olu has a degree in Electronic Systems Engineering and is a professional member of BCS (Chartered Institute for IT).
Special Interest groups Olu has joined include Artificial Intelligence, Cybercrime Forensics, Information Risk Management Assurance, Information Security, Agile Methods.

Ayşe Mutlu

Course Tutor

Data Scientist

Ayşe Mutlu is a data scientist working on Azure AI and devops technologies. Based in London, Ayşe’s work involves building and deploying Machine Learning and Deep Learning models using the Microsoft Azure framework (Azure DevOps and Azure Pipelines).

She enjoys coding in Python and contributing to Open Source Initiatives in Python.

Mike Dillinger

Course Tutor

My mission is to push the boundaries of cognitive computing and AI by melding human expertise with machine intelligence.

I'm a cognitive scientist with training in linguistics, epistemology, and experimental cognitive psychology as well as many years of experience applying this expertise to solve gnarly computational problems in the world of Tech at companies such as LinkedIn/Microsoft, eBay, Intel, and others. 

My most important takeaway from all this experience is that the key to fusing human expertise with machine intelligence is representing knowledge in ways that are accessible and valuable to both humans and machines. Any kind of next-generation, human+machine hybrid intelligence will have shared, interoperable knowledge as its foundation.

So that's what I'm working on: making knowledge as shareable, interoperable, accessible, and valuable as possible to human and machine stakeholders.

Joylynn Kirui

Course Tutor

Joylynn Kirui is an infosec evangelist who believes in empowering developers and users in general on security best practices. She has vast experience in web and mobile app security testing, DevSecOps and GSM security having previously worked in the telco industry. She has a passion for mentorship and training students and empowering them. She has spoken in several conferences where she shares her knowledge in cyber security and software development.

She is among the Top 50 Women in Cyber Security Africa 2020 finalists, Woman Hacker of the year Africa 2020 finalists and Young CISO Vanguard 2022 among others. She is a co-Author of 'DevSecOps for Azure' available on Amazon.

She is a Senior Cloud Security Advocate at Microsoft. She focuses on DevSecOps on GitHub and Azure which includes application security.

Christoffer Noring

Course Tutor

Senior Cloud Advocate, Microsoft 

Chris is Senior Cloud Advocate at Microsoft with more than 15 years's experience in the IT industry. He's a published author on several books about web development as well as the Go language. He's also a recognized speaker as well as keynote speaker and holds a Google developer expert title.   

Application

If you would like to discuss your application or any part of the application process before applying, please click Contact Us at the top of this page.

IT requirements

This course is delivered online using Microsoft Teams. You will be required to follow and implement the instructions we send you to fully access Microsoft Teams on the University of Oxford's secure IT network.

This course is delivered online; to participate you will need regular access to the Internet and a computer meeting our recommended Minimum computer specification.

It is advised to use headphones with working speakers and microphone.