National Maritime College of Ireland
1. Context, definition of the problem
Feedback elicited from candidates completing MTU maritime and offshore safety training courses fulfils a key quality management function and is therefore an essential component of the MTU teaching system (as it is for all maritime training centres). It is used to:
- Confirm candidates’ learning outcome expectations have been met;
- Ensure candidates are satisfied with training quality;
- Identify shortcomings in course content that can inform course evolution decisions;
- Capture suggestions for improving training delivery style;
- Facilitate internal training delivery quality audits;
- Provide evidence that training quality satisfies global certification body standards.
Trainees are canvassed immediately after their final course session by means of pen-and-paper forms comprising 13 closed questions (with 5-point Likert-scale responses) and 1 open comment text field. Instructors strongly suspect that the current means of eliciting feedback delivers poor insight into trainee mindsets because 1) candidates' desire to depart promptly (or their fatigue) minimises form-filling effort, biasing choices towards the convenient median ('satisfactory') score and leading candidates to skip the open comment field, and 2) only textually adept individuals are inclined to use the open comment field.
Feedback gathering is further limited by the brevity of this time window, currently the sole opportunity for post-training candidate engagement. The time that remains after feedback forms have been completed, but before candidates disperse, is taken up by a review of the open comment responses. This review permits misunderstandings to be clarified and defects already under review to be explained, reducing the likelihood that disgruntled candidates will broadcast negative reports detrimental to MTU business; however, this formal element of MTU quality management leaves insufficient time to probe negative responses to the 13 closed questions, which constitute the bulk of the feedback. Later follow-up is impossible because the forms are completed anonymously.
This lack of post-training engagement is an undesirable state of affairs, particularly engagement with dissatisfied customers, whose suggestions are more likely to reveal weaknesses in the current training regime, and with trainees whose responses identify groups of individuals with specific learning needs that improvements might target. It hampers attempts to improve quality, both in the short term, to make MTU stand out from its competition, and in the long term, to foster business growth (the key reasons for undertaking feedback surveys).
2. Challenge definition. Description of need
MTU strives to reinforce its position in the maritime safety training market both by improving the current generation of training products and by embracing emerging digitization trends to protect future profitability.
However, MTU must tread the path towards future marine safety training carefully to avoid pitfalls in a field that is rich with possibilities for digital education yet beset with complex curriculum requirements: learning outcomes must be equally positive for first-time learners and for trainees re-engaging with material after multiple iterations of recurrent training; courses have dual focuses on practical skills and classroom-acquired knowledge; and groups of trainees span diverse educational backgrounds, experience levels and personalities. A thorough understanding of end-user perspectives on the introduction of digitization is key to preventing training innovation from disrupting programmes, discouraging customers or marginalising regulatory bodies, and is needed to ensure that investment in change delivers the most reward for the least risk.
MTU therefore needs to engage with delegates post-training in order to:
1. Remedy and evolve its current curriculum, and
2. Gauge trainees' enthusiasm for, and the value they attribute to, course content and delivery techniques that embrace digital transformation.
The challenge of inspiring training candidates to reveal end-user insights will be met by supplementing the existing pen-and-paper feedback system with a conversational software system. This system should share the basic strengths of chatbots: a) offering candidates the convenience of choosing when and where to engage with feedback collection, b) being inexpensive to operate, and c) running without human oversight. It must not, however, fall into the trap of emulating an automated survey system (i.e. merely a synthetic voice reading out questions about which candidates are expected to state preferences). Trainees will be incentivized to participate in feedback gathering, and motivated to return to MTU for further training, by the knowledge that their input will shape future training offered by MTU.
In addition to the core chatbot software development work package, the successful proponent’s plan should include workshops in which the following topics will be explored:
- Marine safety training domain knowledge relating to current courses and delivery style, relevant to existing feedback questions.
- Trends in education technology relevant to future maritime safety training.
- 3rd party media and web resources suitable for inclusion as examples or resources during, or as endpoints to, chatbot conversations.
- Conversation scripting.
- System training (if necessary).
- System testing and validation (in which MTU will provide users).
- Hand-over to MTU and MTU staff training.
Proponents may customize suitable off-the-shelf educational feedback-gathering software (e.g. hubert.ai), so long as data security and user confidentiality can be assured, or may tackle development from the ground up. In the latter case open-source software should be used: the system might also serve as an educational tool, so access to source code is necessary. It is possible that design criteria for elements of the system will only become apparent during development; proponents should account for this eventuality by employing an iterative, user-centric plan in which MTU staff will play the role of trainee delegates.
The system will supplement, but not replace, the handwritten form currently in use, and will therefore play no role in the formal quality assurance process. The proponent will be responsible for integrating suitable data storage (e.g. a Web server SQL database), but not for providing the means to process analysed data or front-ends to display it.
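As a minimal sketch of what such storage might look like, the fragment below models conversation branching and anonymously logged answers in SQL. All table and column names are assumptions introduced for illustration, and SQLite stands in for whatever server database MTU ultimately selects:

```python
import sqlite3

# Illustrative schema only: table/column names are assumptions, and an
# in-memory SQLite database stands in for the Web-server SQL database
# the proponent would integrate.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conversation_node (
    node_id INTEGER PRIMARY KEY,
    prompt  TEXT NOT NULL
);
CREATE TABLE conversation_edge (          -- a given answer leads to the next node
    parent_id INTEGER NOT NULL REFERENCES conversation_node(node_id),
    answer    TEXT NOT NULL,              -- e.g. Likert score '1'..'5'
    child_id  INTEGER NOT NULL REFERENCES conversation_node(node_id),
    PRIMARY KEY (parent_id, answer)
);
CREATE TABLE feedback_response (          -- anonymous: no trainee identifier
    response_id INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id  TEXT NOT NULL,            -- anonymous chat-session token
    node_id     INTEGER NOT NULL REFERENCES conversation_node(node_id),
    answer      TEXT NOT NULL,
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO conversation_node VALUES (1, 'Rate the course content (1-5)')")
conn.execute(
    "INSERT INTO feedback_response (session_id, node_id, answer) VALUES ('abc123', 1, '3')"
)
row = conn.execute(
    "SELECT answer FROM feedback_response WHERE session_id = 'abc123'"
).fetchone()
print(row[0])  # → 3
```

Keeping the conversation graph (nodes and edges) in the same store as the responses would let later analysis relate every logged answer back to the exact prompt and branch that elicited it.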
Chatbot-led conversations are not expected to involve the same intensity of natural language processing as client-initiated 'free' discourse, so the budget will be adequate to meet the ambition of this project in terms of the HR effort required to script conversations. Furthermore, because the client profile will be homogeneous in terms of motivation and expectations, and because of the 'serious' nature of the domain, the ambition of this proposal is in line with the capabilities of available conversational software.
The starting point for conversational flow must be the existing closed feedback questions, in order to obtain insights into these key areas of dissatisfaction, which are high-priority targets for improvement. These conversational paths should prompt users to divulge insights into current course content failings, course delivery defects and resource shortfalls.
Conversation branches should also explore trainees’ opinions on, and preferences for, digital enhancements for future marine safety training. Questions should therefore investigate options such as pace and style of classroom activity, blended learning options, training schedules revisions, and innovation such as digital learning spaces, personalized learning, AR/VR/MR, and serious computer games.
In order to encourage trainees:
1) conversation tone should engender a sense of ownership in the direction of future training, and
2) question branches should terminate with suggested links to third-party websites hosting relevant self-improvement safety training material (e.g. games, quizzes), either replacing or supplementing open comment fields.
The system will be deployed on PCs and mobile devices and interaction will be via a Web interface: design effort should be focused on conversation development rather than deployment issues.
4. Expected Outcomes
The proponent will deliver a textual conversational system solution to be deployed on a server provided and operated by MTU, configured to support a year-long experiment during which feedback from up to 1,000 training customers will be logged.
Deliverables will comprise:
- The chatbot software solution, expressed in English.
- A compilation of documents recording the domain knowledge, conversation scripts, and background media material relevant to present and future maritime safety training collected during workshops.
- System architecture documentation.
- Storage schema for a) conversational relationships (arguments and expressions), and b) feedback results.
A budget of €15,000 will be provided.
This sum will cover the proponent’s human resources costs only. MTU will procure software, if required for the project, and will provide IT infrastructure, arrange workshop venues and cover workshop overheads from a separate budget.
This call is now closed and no further applications will be accepted.