
AI for Executives

 

Executive Masterclass:
Generative, Agentic & Trustworthy AI

Strategic and Tactical Insights for Modern Business Leaders

This intensive two-session masterclass equips executives with the strategic knowledge to:

  • Understand Generative and Agentic AI fundamentals from foundational models to autonomous agents
  • Assess when to deploy Agentic AI and understand the inherent risks vs. benefits
  • Implement Trustworthy AI across all seven lifecycle stages with proper controls
  • Establish executive-level AI policies that protect and grow your business

Through practical case studies and strategic insights, you’ll gain actionable knowledge to guide your organization’s AI transformation.

1. Morning Session: Understanding Generative & Agentic AI

ChatGPT, Claude, and autonomous AI agents in practice. Learn to identify high-impact use cases, understand capabilities and limitations, and develop implementation strategies for your business context.

2. Afternoon Session: Deploying Trustworthy AI

Improving risk management, ethics, security and reliability. Moving beyond simple compliance, learn the deeper benefits of Trustworthy AI and how it enables you to build better, more secure AI products while maintaining the trust of your employees, customers and partners.

Led by Recognized AI Expert

Dr. David Stephenson

Internationally recognized AI strategist, author, and faculty member at Amsterdam Business School

+ Leading industry experts (lineup varies per session)

Masterclass Details

📅 Date: November 14, 2025
Full-day intensive (9:00 AM – 5:00 PM GMT+1)

📍 Location: Amsterdam + Online Hybrid
In person at the Amsterdam Arena or join via Zoom

💶 Investment: €750 per session (excl. VAT)
Academic and non-profit discounts available

 Enrollment is Limited

Reserve Your Spot →

Questions? Email us at info@dsianalytics.com

Session 1: Fundamentals of Generative and Agentic AI

What is Generative AI and why is it important? 

ChatGPT is the best-known example of Generative AI (GenAI), but GenAI has been around for many years and in many forms. The basic concept is that an AI is able to create (generate) something meaningful. In the case of ChatGPT, that means generating conversational replies, summaries of documents, or even entire computer programs. GenAI can also generate images, video, audio, and even entire songs. You’ll also often hear these GenAI tools referred to as Large Language Models (LLMs) or Foundation Models.
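To make the idea concrete, here is a minimal sketch of asking an LLM to generate a summary. It assumes the OpenAI Python SDK (openai>=1.0) and an API key set in the environment; the model name and prompt are purely illustrative, not a recommendation of any particular vendor or model.

    # Minimal GenAI example: ask an LLM to generate a short summary.
    # Assumes the openai package (>=1.0) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Summarize this quarterly report in three bullet points: ...",
        }],
    )
    print(response.choices[0].message.content)  # the generated text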

This technology promises to be extremely valuable to businesses, NGOs and governments because it has the potential to automate a very wide range of tasks, especially those that do not require expert knowledge. Examples include customer support, boilerplate software development, translation, and article research and summarization.

What is Agentic AI? 

Agentic AI is when we build a system that uses AI to perform tasks on our behalf. This involves putting one or more AIs in the lead (typically an LLM), giving them a set of tools to use (such as our booking or payments systems), giving them shared memory (so they can remember details and share information), and providing guardrails (to keep them from doing something they shouldn’t).
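To make the pattern concrete, here is a minimal, framework-free Python sketch of an agent loop with those four ingredients: an LLM in the lead, a small tool registry, shared memory, and a guardrail. The call_llm function, the check_booking tool, and the "TOOL name arg" action format are hypothetical placeholders for illustration only, not any specific product or library.

    # A minimal sketch of the agentic pattern: LLM in the lead, tools,
    # shared memory, and a guardrail. Everything here is illustrative.
    from typing import Callable

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM call (e.g. an API request)."""
        return "TOOL check_booking id=42"  # canned response for the sketch

    def check_booking(booking_id: str) -> str:
        """Example tool: a read-only lookup in our booking system."""
        return f"Booking {booking_id}: confirmed"

    TOOLS: dict[str, Callable[[str], str]] = {"check_booking": check_booking}

    memory: list[str] = []  # shared memory: details the agent can recall later

    def guardrail_ok(action: str) -> bool:
        """Guardrail: only allow actions that use an approved tool."""
        return any(action.startswith(f"TOOL {name}") for name in TOOLS)

    def agent_step(task: str) -> str:
        prompt = f"Task: {task}\nMemory: {memory}"
        action = call_llm(prompt)        # the LLM decides what to do next
        if not guardrail_ok(action):     # block anything outside approved tools
            return "Action blocked by guardrail"
        _, tool_name, arg = action.split(maxsplit=2)
        result = TOOLS[tool_name](arg.split("=", 1)[1])
        memory.append(result)            # remember the outcome for later steps
        return result

    print(agent_step("Confirm the customer's booking"))

A production agent would replace the canned call_llm with a real model call and add logging, authentication and human-in-the-loop checks, but the control points (tools, memory, guardrails) are the same ones discussed above.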

Should I be using Agentic AI?

There has been tremendous interest in Agentic AI since people realized the capabilities of LLMs (ChatGPT, Claude, Gemini, DeepSeek, etc.). If, instead of asking a chatbot for advice and then carrying out that advice ourselves, we allow the chatbot to act on its own advice, we can scale and automate the benefits of the AI and reduce human workload.

But Agentic AI comes with significant risk. GenAI is inherently unpredictable, and it introduces a number of risks that we cannot fully control. We need to think carefully about the settings in which we want to deploy Agentic AI and the risks we are inviting, and then decide whether the potential benefits outweigh the inherent risks.

Session 2: Trustworthy AI

What is the business value of Trustworthy AI?

There are three broad areas in which Trustworthy AI benefits an organization:

  1. Improved Business Results: This includes top- and bottom-line growth, enhanced product quality, and competitive advantage.
  2. Decreased Risk: Here we think not only of fines from violating regulations on the use of data and AI, but also reputational/branding risk, cybersecurity risks, and leaks of sensitive personal or corporate information.
  3. Stronger Relationships: An organization’s own employees are often the biggest advocates for its use of Trustworthy AI, and many will leave an employer over related concerns. Studies have also shown that many organizations will refuse to work with companies that have shown themselves untrustworthy in their use of data or AI.

What do we mean by the term ‘Trustworthy AI’?

Trustworthy AI refers to applications of AI that are safe, secure, and resilient, and which have proper considerations and controls for bias, explainability, transparency, privacy (in training and usage), and accountability. Each of these considerations is, of course, complex in its own right.

What is the difference between Trustworthy AI, Ethical AI and Responsible AI?

An organization generally has a preexisting ethical foundation, and the goal of Ethical AI is to ensure that development and deployment of AI is in line with those ethics.  For example, if an organization is dedicated to sustainability and reduction of emissions, then an ethical pursuit of AI for that organization would be one that minimizes carbon footprint (such as from data centers).

Taking a broader view, organizations also have strategic and tactical goals, such as maximizing revenue and minimizing faults and defects. Responsible AI is the development and deployment of AI tooling in a robust, responsible way: developing AI that is fit for purpose, that works reliably, and that does not create legal or reputational risk.

Trustworthy AI is then the combination of Ethical and Responsible AI. It is the careful consideration of the opportunities and risks involved with AI as it impacts our shareholders, employees, stakeholders, partners, and customers. It also involves the processes and controls whereby we conceptualize, develop, launch and monitor AI tooling across the AI lifecycle.

How do executives ensure that the AI in their organizations is trustworthy?

It’s important to consider all seven stages of the AI lifecycle, along with the business units that are typically involved. For tooling developed in-house, this generally means involving five separate business units (including the AI specialists, the data team, and the infrastructure team). And for off-the-shelf / SaaS AI, it is even more crucial that business leaders understand the processes, considerations and risks involved in using modern AI.

When establishing a top-of-the-house policy for Trustworthy AI, we start by forming the policy at the executive level. Second, we create internal training and education programs and ensure that there are internal and external feedback loops to keep policies current and relevant. Third, we establish the internal processes that clarify roles and responsibilities across the AI lifecycle, including post-deployment monitoring.

Email trainings@dsianalytics.com with questions or to reserve your spot.  

Feedback from participants of past classes

“Material and presentation was very structured and clear. Provided practical checklists of the topics and themes one might want to consider and plan for in an AI strategy implementation. Very practical. I will go to his slides and pull out a few to use in planning implementation.”

“Clear, engaging and accessible… the materials remain a valuable reference for my day-to-day work”
 
“<David> had great lecturing skills. He kept my attention the whole day with a mix of examples, small jokes and assignments. Great mix!”

“Truly enjoyed this session. I kept my attention from the first to the last minute. Very pleasant to speak to as well”

“I really like the calm and clear style of the lecturer, he does a great job bringing across the message and does a very good job in keeping you engaged”
 
“Great presentation and slides very structured lecture and the content already answered a lot of questions before they arise.”

 

Register to hear about future trainings and master classes 



On a very limited basis, I offer 1-1 coaching to assist analytics professionals in preparation for presentations. Please contact me for more information.