
IDC Digital Enterprise event: ‘Driving leadership in the digital age of AI everywhere’

On April 24th, I attended the IDC Digital Enterprise event in Nieuwegein, where presentations delved into the digital transformation of organisations across various sectors. The focus centred primarily on optimizing organisational processes through AI innovation. Among the topics discussed, one standout was GenAI: its application through personas and use cases, and what is needed for this transformation to take place. From the event, there are some key takeaways relevant to education and to my PhD research on Conversational Agents and trust.


GenAI is used by KLM, ING and NN to save time and reduce waste
Starting with day-to-day cases, GenAI is being used more and more, perhaps more widely than we thought. Real-world examples showcased how organisations like KLM leverage AI to predict and reduce food waste, while ING Bank utilizes GenAI to enhance customer service experiences. Another organisation, ServiceNow, talked about “triage conversation” and underscored the overarching organisational objective of leveraging GenAI to enhance productivity, showcasing how GenAI increased the efficiency of handling customer requests. A further case study, shared by Nationale Nederlanden (NN), described their initial experimentation with GenAI, focused on productivity: using GenAI to generate summaries of customer calls. NN employees save valuable time writing and summarising such reports, though not without manual quality checks. For all the potential, NN highlighted that public access to their GenAI platform is restricted due to concerns about hallucinations. NN also acknowledged several inherent risks, including overreliance on these models and tools, data security concerns, potential legal issues, and the importance of maintaining ownership and validation of generated code, which is particularly crucial as coders use GenAI to develop and refine their code. This shows that the private sector is already employing GenAI for a variety of applications, often directly with users, while acknowledging the obstacles; what is required of these professionals, and of future creatives and creators, is to be prepared for those challenges.
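NN described this workflow only at a high level, so purely as a thought experiment, here is a minimal sketch of what a summarise-then-review step could look like. It is not NN’s implementation: the openai client, the model name, the prompt and the reviewed_summary helper are all my own assumptions for illustration.

```python
# Hypothetical sketch: draft a call summary with a chat model, then gate it
# behind a manual quality check, mirroring the human review NN described.
# Assumptions: the `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name and prompt are placeholders, not NN's setup.
from openai import OpenAI

client = OpenAI()

def summarise_call(transcript: str) -> str:
    """Ask the model for a short, factual summary of one customer call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # keep the draft as deterministic as possible
        messages=[
            {"role": "system",
             "content": ("Summarise the customer call in 3-5 bullet points. "
                         "Use only information present in the transcript.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def reviewed_summary(transcript: str) -> str:
    """Draft with GenAI, then require a manual check before the summary is used."""
    draft = summarise_call(transcript)
    print("--- Draft summary ---\n", draft)
    if input("Approve this summary? [y/N] ").strip().lower() != "y":
        raise ValueError("Summary rejected; write or correct it manually.")
    return draft
```

The point of the sketch is the shape of the process rather than the API call: the time saving comes from the drafting, while the manual gate keeps a human responsible for what is stored.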

What skills are needed for this wave?
The event underscored the importance of redefining the skill sets needed to engage with such technologies, emphasizing critical thinking and creativity as key traits. During Eviden’s presentation, they highlighted the concept of chaos engineering: deliberately disrupting systems to gain insights and learn from the resulting chaos. They also encouraged a blended approach to innovation, combining traditional methods with experimentation, and stressed the importance of high-quality data, noting that the quality of outcomes is directly linked to the quality of the input data: “garbage in, garbage out.” Addressing the surge in demand for these new skills, NN emphasized the significance of encouraging creativity, critical thinking, communication, collaboration, and curiosity-driven experimentation. The importance of these skills (creativity, open-mindedness, and constant experimentation) was repeated by various presenters and attendees. Notably, GET (a collaboration between Gall & Gall and Etos) demonstrated the practical application of GenAI for public use, specifically in enhancing keyword searches through an AI-driven search function. Once again, the emphasis was on relentless experimentation to drive innovation and adaptation in a rapidly evolving landscape. At the same time, they are employing GenAI for mind-numbing manual work, leaving creative and critical duties to humans, and drawing on numerous tools to create their own process and framework.
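GET did not go into implementation details, but the kind of AI-driven search they described is often built on text embeddings rather than exact keyword matching. As a generic illustration only, here is a small sketch using the open-source sentence-transformers library; the model choice and the toy catalogue entries are placeholders, not GET’s actual system or data.

```python
# Generic sketch of embedding-based ("semantic") search that complements
# keyword matching. Assumptions: the sentence-transformers library; the
# catalogue entries and model below are placeholders, not GET's real system.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

catalogue = [
    "shampoo for dry and damaged hair",
    "red wine from the Bordeaux region",
    "vitamin C tablets for daily use",
    "whisky aged twelve years in oak barrels",
]
catalogue_embeddings = model.encode(catalogue, convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Return the catalogue entries whose meaning is closest to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, catalogue_embeddings)[0]
    best = scores.argsort(descending=True)[:top_k]
    return [(catalogue[i], float(scores[i])) for i in best]

# A query that shares no keywords with the matching entry can still find it.
print(search("something for brittle hair"))
```

The design point is simply that matching on meaning lets a shopper’s everyday phrasing reach products whose descriptions use different words, which is the sort of keyword-search enhancement GET talked about.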

Open-source and Big Tech Tools
Tools such as OpenAI’s models and GitHub Copilot are frequently used, with organisations such as Epam sometimes adopting open-source models to build their own solutions in-house; GitHub Copilot and AI DIAL chat were highlighted as examples. Dutch tech developers often customize GitHub Copilot for their own use and collaborate on code there. The discussion also touched upon the next wave, particularly the concept of autopoietic agents: systems that continually develop themselves, writing and refining code to enhance productivity and employee happiness by automating small tasks. Speakers emphasized the importance of remaining open-minded and experimenting with emerging technologies, and Microsoft’s self-study introductory course on AI, hosted on GitHub, was mentioned as a resource for continuous learning and exploration. Interestingly, it was also noted that a real danger is overreliance on these tools, which are sometimes framed as ‘open’.

Trust: A Hot Topic
My PhD topic focuses on trust and Conversational Agents, mostly in the Dutch public sector. It was encouraging to see that trust was treated as significant throughout the event, with speakers frequently discussing trust in systems, trust in organisations, and trust in technology. One presentation that particularly resonated with my research came from Amsterdam Smart City, where the emphasis was on placing citizens at the forefront, providing tools to assist them, and ensuring accessibility rather than expecting citizens to become more tech-savvy; the core aim is to enhance citizens’ quality of life. However, when organisations use algorithms, ensuring ethics and trust becomes challenging. Amsterdam Smart City has introduced a registry called ALGAMS, where algorithms are documented and made publicly available. Registration ensures that algorithms, particularly those designed for public services, adhere to established rules and remain transparent to the public, and the algorithms are subject to regular ethical audits. Another noteworthy initiative is TADA (tada.city), which facilitates discussions about data ownership and aims to ensure that data is owned by the people. While it remains uncertain whether citizens will fully embrace these initiatives, their openness is praiseworthy.

A challenge highlighted during the event was bringing the citizen’s perspective into AI development: it was stressed that citizen perspectives must be considered, especially on issues like hallucinations, data quality and bias, where biases can inadvertently influence outcomes. Trust in these systems mirrors trust in the organisations behind them, so ensuring that the systems are fair and built for the benefit of citizens is of the utmost importance. Alongside these initiatives, regulations such as the EU AI Act are being put in place to ensure these systems do not damage trust; the Act also poses a challenge for organisations that engage in profiling, as compliance will be mandatory.

Ultimately, the main takeaways emphasise the importance of creativity and critical thinking (and making) in the ongoing implementation of ‘novel’ technologies like GenAI to increase productivity. For these advancements to continue, perseverance in the face of challenges appears essential. Risks and challenges aside, organisations are already embedding these technologies in everyday life, influencing not only their employees but also everyone who interacts with their services, and they appear to be aware of the risks and of the impact these implementations may have on trust.

Image: Linus Zoll & Google DeepMind / Better Images of AI / Generative Image models / CC-BY 4.0
