“Without psychological safety, there can be no culture of AI experimentation.”
Interview with AI expert and LEARNTEC keynote speaker Dr. Pirita Pyykkönen-Klauck
Like all disruptive technologies, artificial intelligence (AI) is rapidly transforming media, education, and organizations—and often, the actual use of the technology outpaces the legal framework. Many companies are still grappling with its strategic positioning. In this interview, AI expert Dr. Pirita Pyykkönen-Klauck, CEO of ZDF Sparks, explains what skills leadership needs now, how to foster a genuine culture of experimentation, and why clear guidelines are crucial. As a keynote speaker at LEARNTEC, the trade fair and conference for digital education in Karlsruhe, she highlights the potential AI can unlock in the future.
How do you personally use AI—both professionally and privately?
Dr. Pirita Pyykkönen-Klauck: Professionally, I use AI primarily as a sparring partner and efficiency booster. Since German and English (the two languages I actively use) are not my native languages, I’ve been using technical tools for text correction for some time now. I now also frequently use AI to check the exact tone of my messages. In addition, I have summaries and to-do lists generated from many of my notes. A major advantage for me is the generation of visualizations and templates. This saves an enormous amount of time because I’m not particularly good at visual storytelling.
In my personal life, I’m an experimentalist through and through. I continuously test AI solutions and tools, far outside my comfort zone. One long-standing project is home automation. I’m also experimenting a bit with automating my own needs—such as adjusting music and lighting to match my exact mood. I haven’t quite got the algorithms right yet, though. After all, I don’t store much of my important personal data in databases. At the moment, my husband is still better at recognizing my mood than the AI.
What specific role does artificial intelligence currently play in the media industry?
Pyykkönen-Klauck: A current topic in the media industry is the variety of impressive AI-generated videos being shared on various social media platforms. The reality behind them, however, is often neither as simple nor as quick as it appears in these posts. As a consulting CEO, I view the media industry holistically across the entire value chain: from audience understanding through programming and production to distribution and administration.
AI offers enormous potential everywhere. In practice, however, the current focus is on complex data analysis and on tasks for which people simply lack the time or capacity. Many employees welcome both the reduction of these work steps and the greater accuracy of results that AI makes possible.
Strategically speaking, I am currently observing the fundamental developments of the new so-called “World Models.” These hold the technological potential to fundamentally overcome many limitations of today’s models. As soon as they become available, we will be able to develop AI solutions for creative professionals much faster than we can today.
Especially with disruptive developments, people are already far ahead in daily use, while organizations and lawmakers are lagging behind. How do you see this in the context of artificial intelligence?
Pyykkönen-Klauck: An open discourse is essential here. As the question also shows, technologically speaking, far more is possible today than we as a society might wish for from AI. That is precisely why clear laws and regulations are absolutely necessary.
However, this requires much closer collaboration between decision-makers and AI experts who can explain the technology in depth. We must distance ourselves from the extremes regarding both the opportunities and the risks of AI; instead, we should develop evidence-based scenarios and simulations that demonstrate what AI can be as a technology. Without this deep understanding, we run the risk of missing the mark: we would then not only regulate risky uses but also prevent good and urgently needed innovations.
What skills are needed to not only use AI, but to integrate it strategically? How must leadership be structured in the age of AI?
Pyykkönen-Klauck: The principles of sound strategic leadership still apply, albeit on a drastically shortened timeline. Leaders must define precise KPIs that operational teams can implement immediately. We need shorter evaluation cycles and the willingness to pivot quickly.
Implementing AI is not a multi-year project, nor should it be treated as a purely IT or technology project. The first KPIs that mark the start of the process should be met within the same year, and financial benefits can follow quickly once they are. But without the courage to make real investments, the progress needed to stay competitive today is not possible. These investments must ensure that employees have the time and resources they need to implement AI. Ultimately, consistent “leading by example” determines the success of the transformation.
At the LEARNTEC Congress, AI readiness is a major topic—for both companies and individual employees. How can a culture of innovation and AI experimentation be established within a large organization?
Pyykkönen-Klauck: Psychological safety comes first. A culture of experimentation requires safe environments and clear guidelines. Companies need transparent guidelines: Who is allowed to test what, how, and with which data? To avoid critical errors, it must be clearly defined in advance where extended risk analyses are required. This is not only a technical challenge but also an organizational one. Ultimately, every employee must know exactly how the company’s intellectual property is uncompromisingly protected in all AI experiments. This allows employees to experiment with a sense of security.
Furthermore, companies need the appropriate expertise and autonomy—or must bring it in-house—to design these processes professionally. I often observe significant uncertainty in the market regarding which AI topics actually make sense in which cloud environments. What is needed here is an objective and holistic assessment of what creates real added value and how, within the specific corporate context. In such assessments, a certain degree of distance is often helpful to evaluate these decisions objectively.
In conclusion—since we are an education fair: AI in education—what can AI do that the educator (trainer, teacher, coach) cannot?
Pyykkönen-Klauck: There are many aspects, but I would like to highlight two points:
- Availability: AI is available 24/7. No human can or should be expected to do that.
- Hyper-personalization: It is impossible for a human educator to identify the exact needs of every single learner in real time without comprehensive data analysis. Even if a trainer performs this analysis, the adaptation always comes with a delay. AI, on the other hand, analyzes during direct interaction and immediately offers automated, tailored measures. It adapts formats—whether text, video, or interactive tests—to maximize learning effectiveness, or responds individually to learning difficulties by, for example, providing more concrete examples or explaining them differently. This saves learners an enormous amount of time and energy and immediately gives them the feeling of achieving learning success.
With these two advantages, learners can plan more effectively and have the assurance that everyone has the same opportunities for optimal learning success. Well-designed AI-based learning solutions also provide feedback to instructors, allowing them to use their time to design training sessions that address remaining learning challenges or further enhance the quality of learning.
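The adaptation loop described above (analyze answers during the interaction, then switch format or difficulty on the spot) can be illustrated with a minimal sketch. This is purely an illustration of the principle, not any actual product: the function name, the response history format, and the fixed accuracy thresholds are all assumptions made for the example.

```python
def adapt(history, formats=("text", "video", "interactive_test")):
    """Pick the next content format and a difficulty adjustment.

    history: list of (correct: bool, format: str) tuples, most recent last.
    Returns (next_format, difficulty_delta). Names and thresholds are
    illustrative only.
    """
    recent = history[-3:]  # look at the last few answers
    accuracy = sum(1 for correct, _ in recent if correct) / max(len(recent), 1)

    if accuracy < 0.5:
        # Struggling learner: switch format and ease the difficulty --
        # the "explain it differently, with more concrete examples" case.
        last_format = recent[-1][1] if recent else formats[0]
        alternatives = [f for f in formats if f != last_format]
        return alternatives[0], -1
    if accuracy == 1.0:
        # Everything correct: keep the working format, raise difficulty.
        return recent[-1][1], +1
    # Mixed results: hold format and difficulty steady.
    return recent[-1][1], 0
```

A real system would of course learn these decisions from data rather than hard-code them, but the structure, immediate per-interaction analysis driving an immediate tailored response, is the point being made in the interview.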
Thank you very much for the interview!
Dr. Pirita Pyykkönen-Klauck’s keynote address, “Future by Design: Governing Fearless Labs for Sovereign AI Returns,” will take place on Thursday, May 7, 2026, at 1:00 p.m. on the Main Stage in Hall 2.
