In May 2024, I enrolled on the Advanced Artificial Intelligence course provided by the University College Dublin Professional Academy. It felt like AI hype was at its peak (I was wrong). Having always worked with new technologies in publishing, I wanted a framework to cope with the daily vortex of AI-related news and opinion columns and figure out a way to navigate the coming wave.
Logistics
In terms of practicalities, the course ran online via Zoom, one evening a week (6.30pm to 9.30pm) between May and August, followed by a couple of weeks for end-of-course assignments. There were up to 20 participants each week, and the course was run by a very experienced tutor with an international background in software development and teaching.
Each week included a presentation, discussion, and a breakout activity based on the week’s topic. Participants also self-organized into ‘communities of interest’, finishing each class with a discussion of a shared topic that ran for the duration of the course. Four of us set up a Responsible AI group, representing information security, investment banking, finance, and academic publishing. One colleague was particularly valuable for his knowledge of the impending EU AI Act, and shared useful online tools for navigating it.
It was refreshing to learn about and discuss the implications of a new technology with people from a hugely diverse range of industries. Coming from academic publishing, I’m used to professional development courses and conferences that include only participants from the same industry, which can lead to an echo chamber effect.
The industries represented on the course included insurance, aviation, cybersecurity, automotive, finance, agritech, consulting, government, and others. Roles represented included customer experience, risk management, software engineering, and training. Whatever the long-term future of AI, the transferability of AI skills across industries is one of its most interesting aspects.
Responsible AI
Before this course, much of my exposure to AI focused on the latest tools and trends. It was good to see that the initial module of the course examined universal issues relating to AI – an area broadly termed ‘Responsible AI’. Having an undergraduate background in Philosophy (also from UCD) made some of this familiar, but viewed through a lens that was unimaginable in 1990s BA lectures…
Ethical aspects of AI covered included the following:
- Transparency
- Bias and discrimination
- Privacy
- Human rights and dignity
- Security
- Concentration of power
- Dependence on AI
- Job displacement
- Economic inequality
- Misinformation and disinformation
- Autonomous warfare
- Ethical decision-making
- Accountability and liability
- Human autonomy
- Data protection
There was much discussion of Responsible AI in the context of lessons learned from the introduction of social media platforms, which lacked any significant ethical framework and now face retroactive regulation after significant negative impacts on society (“Social media executives to be ‘personally accountable’ for harmful content“). Can AI companies do better?
The practical element of this module was to review different technology companies’ Ethical AI Frameworks and then develop an AI governance framework for a fictional company. Many of these frameworks are quite superficial, sometimes necessarily so: there is a tension between communicating complex ethical issues simply and ensuring that frameworks don’t constrain future technology strategy. If AI is to shape society positively, society must proactively shape AI.
AI and Organizational Change
My assumption before the course was that implementing AI technologies in organizations would be broadly similar to other technology projects where good change management is a primary requirement for success.
Change management is important, but perhaps of greater importance is AI literacy at all organizational levels. In the same way that data privacy training has become standard post-GDPR, AI literacy is now essential given the requirements of EU AI Act compliance (“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems.“). From understanding bias in algorithms to ethical risks, staff must be equipped to engage critically with AI tools.
As with many technologies, early adopters will already be using AI tools in our organizations without official support or supervision. It’s important that we stay ahead of developments and provide adequate training so that staff at all levels are aware of AI issues and risks.
It was reassuring that aspects of organizational culture required for successful AI projects are qualities which I would naturally identify with and seek out: openness, continuous learning, diversity of thought and open innovation.
Human-AI Interaction
AI, particularly Generative AI, is challenging our concept of machine intelligence, and how humans and technology interact. Generative AI’s uniquely persuasive, human-like responses lead to a complex relationship between humans and AI-enhanced technology.
We see this in one of the most pervasive applications of AI, chatbots, where users often don’t know whether they’re communicating with a person or a machine (the essence of the ‘Turing test’).
AI in popular culture is often characterized as malevolent. We also have a degree of suspicion around the motivation of large AI technology companies. It is essential, therefore, that companies implementing AI systems have frameworks or policies covering human-AI engagement that address human concerns around the capabilities of AI technology, demonstrating the advantages of AI while being clear about its limitations.
I personally believe there is a huge task ahead of us as a society in terms of public education around AI. It has been fascinating to see that public perception of this new technology is largely negative – unlike previous technologies.
Technology companies’ ability to productize and sell AI has vastly overtaken society’s ability to understand the uses and potential misuses of AI. Employees and even students have more access to AI training than the public at large: this is a very risky situation that needs to be addressed.
Trust, transparency, and authorship in AI
One of the most interesting aspects of a framework for human-AI engagement is around trust and transparency.
Recently, I had a personal experience of this: listening to an audiobook on Spotify about AI, I gradually suspected some text was AI-generated. Although I thought the book was an excellent summary of AI in general, I began to distrust it. The text hadn’t changed, but my relationship to it had.
This may be irrational in some ways, but when deploying AI systems we need to remember the importance of emotional reactions to human-like technology, and explain as far as possible what the AI system is doing and what it can and cannot do. Perhaps a statement at the start of my audiobook about whether it was partly AI-generated would have set my expectations.
This also leads to questions of responsibility in AI. In my own industry, publishers are experimenting with chatbots as interfaces to large bodies of legal textbooks, where a lawyer can ask a question and get a response from a GenAI system (more recently via a retrieval-augmented generation, or RAG, architecture).
This leads to fundamental questions around authorship. The GenAI interface ‘wrote’ the reply, not the textbook author – so who is the author of the response? Who is responsible if that response is misleading? Who owns the copyright? Some of these questions are still under debate, but what I learned from the lectures on this topic was the importance of having a framework into which to put questions like this so that answers may be found through further analysis and discussion.
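To make the RAG idea above concrete, here is a minimal sketch of the pattern: retrieve the most relevant passages from the source texts, then ask the model to answer using only those passages. It is illustrative only: the passages, scoring function, and prompt wording are invented for this example, and a real system would use an embedding model and a vector database rather than keyword overlap.

```python
# Minimal sketch of a RAG-style pipeline (illustrative only).
# Real deployments use an embedding model and a vector database;
# here a toy keyword-overlap score stands in for semantic retrieval.

passages = {
    "contract-law-ch3": "A contract requires offer, acceptance, and consideration ...",
    "tort-law-ch1": "Negligence requires a duty of care, breach, causation, and damage ...",
}

def score(query: str, text: str) -> int:
    """Toy relevance score: number of words shared between query and passage."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    ranked = sorted(passages.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved source text and ask it to cite."""
    context = "\n\n".join(retrieve(query))
    return (
        "Answer the question using only the source passages below, "
        "and cite the passage you relied on.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What are the elements of negligence?"))
```

Note that the model composes the final wording from the retrieved text: the textbook author supplied the sources, but the response itself is the model’s, which is exactly where the authorship and liability questions above come from.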
Operational optimization
The course looked at a case study of the MD Anderson Cancer Center–IBM Watson project. There is still organizational memory in my previous employer (which licensed scientific content to the project) of the ‘failure’ of that project: over-selling and under-delivery to a degree that arguably damaged the IBM Watson brand. If the project were started today, thanks to the leap forward in LLM technologies, it would likely have been far more successful.
The ‘black box’ nature of AI systems leads to an over-expectation of their capabilities. This problem is further exacerbated when the media label everything ‘AI’ without qualifying whether something is GenAI, machine learning, NLP, etc.
It may be an old-fashioned framework, but SMART objectives are essential for any AI project, along with analysis and real understanding of the business process to be AI-enhanced, careful choice of AI tools, a clear scope of what is and is not feasible, and a defined pilot phase structured around a People, Process, Technology, and Data (PPTD) framework.
Awareness of where AI sits on the ‘hype cycle’ is also important for the operational implementation of AI systems. Today, in 2024, AI is at the peak of inflated expectations: many businesses will experiment with AI, get disappointing results, and slide into disillusionment. As noted above, the Watson project would have a significantly better chance of success with today’s AI infrastructure.
Generative AI and prompt engineering
Before the course, my approach to GenAI was limited to rudimentary ‘one-shot’ prompts. The modules on GenAI emphasized the importance of careful prompt structuring, as well as the improvements that come from multiple, iterative requests, adopting personas, and asking the AI to ‘mark its own homework’ (e.g. “What’s missing from the above response?”).
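As a rough illustration of these techniques, the sketch below chains a persona-based prompt with a follow-up asking the model to critique and revise its own answer. It assumes the OpenAI Python client (v1+) with an API key configured in the environment; the model name is a placeholder, and the same pattern applies to any chat-style LLM API.

```python
# Sketch of an iterative, persona-based prompt instead of a single one-shot request.
# Assumes the OpenAI Python client (v1+) and OPENAI_API_KEY set in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

messages = [
    # Persona: constrain the tone and domain of the answer.
    {"role": "system", "content": "You are an experienced editor in academic publishing."},
    {"role": "user", "content": "Draft a 3-point policy on staff use of generative AI."},
]

draft = client.chat.completions.create(model=MODEL, messages=messages)
draft_text = draft.choices[0].message.content

# 'Mark its own homework': feed the draft back and ask what is missing.
messages += [
    {"role": "assistant", "content": draft_text},
    {"role": "user", "content": "What's missing from the above response? Revise it to address the gaps."},
]

revised = client.chat.completions.create(model=MODEL, messages=messages)
print(revised.choices[0].message.content)
```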
Further experimentation on multiple GenAI LLMs clearly demonstrated the wide spectrum of capabilities across LLM services. This underlined the importance of knowing the capabilities of different AI systems, and selecting the most appropriate tool for the work in hand.
Further reading on Generative AI in the Harvard Business Review (provided as a benefit of the course) led to this interesting graphic on the real-world use cases for GenAI. It will be interesting to see whether this analysis is repeated in future to show how, or if, usage develops beyond experimentation.

[Source: Harvard Business Review, Jul/Aug 2024, Vol. 102, Issue 4, p. 29.]
