A Markov Chain Approach for Modelling Normally Distributed Online Assessment Time in a University Setting Masikini Lugoma, Masengo Ilunga, Violet Patricia Dudu, Amuli Bukanga (Pages: 1-5)
The Markov chain (MC) technique is applied to an online invigilated assessment situation to predict writing time in a typical university setting. Individual students' writing times cannot be determined accurately in advance and are therefore subject to randomness. This preliminary study simulates data for the time to download the question paper and the writing time from a normal distribution. The time variable is simulated to approximate real settings reasonably well, where most students' writing times are spread around the expected value, namely the mean. Simulations are conducted based on the experience and knowledge of researchers in the online teaching and learning environment. Computer simulations demonstrate that the writing time estimates converge stably, giving clear insights for optimising online assessment implementation. The findings showed that the average writing time of a selected trial stabilises at 1.498 hours (approximately 90 minutes), within the 95% confidence interval [0.6, 2.5] hours. These results therefore offer a more realistic range of feasible times to guide academic practitioners in the planning and implementation of invigilated online assessments.
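As a rough illustration of the simulation idea described in this abstract, the sketch below draws download and writing times from normal distributions and tracks the running mean of the writing time until it stabilises. All parameters (means, standard deviations, trial count) are illustrative assumptions, not the paper's values:

```python
# Minimal Monte Carlo sketch of the convergence idea in the abstract.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulated trials (assumed)

# Hypothetical normal distributions for the two simulated time variables (hours),
# clipped so that no simulated time is negative.
download_time = np.clip(rng.normal(loc=0.10, scale=0.02, size=N), 0.0, None)
writing_time = np.clip(rng.normal(loc=1.50, scale=0.45, size=N), 0.0, None)

# Running mean of writing time: the estimate should stabilise as trials
# accumulate, mirroring the "stable convergence" reported in the abstract.
running_mean = np.cumsum(writing_time) / np.arange(1, N + 1)
print(f"Mean writing time after {N} trials: {running_mean[-1]:.3f} h")
print(f"Mean total session time: {(download_time + writing_time).mean():.3f} h")

# Empirical 95% interval of individual writing times.
lo, hi = np.percentile(writing_time, [2.5, 97.5])
print(f"95% interval of writing times: [{lo:.2f}, {hi:.2f}] h")
```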
A Results-Driven Curriculum for Engineering Education Ngaka Mosia, Kemlall Ramdass, Masengo Ilunga, Lusiwe Maduna (Pages: 6-12)
The field of engineering education faces an ongoing challenge: effectively preparing students for the demands of a rapidly evolving technological landscape. Traditional engineering curricula often emphasise theoretical knowledge over practical application and outcomes. As a result, engineering graduates are not equipped with the skills needed to excel in industry, leading to a gap between industry requirements and graduate competencies. To bridge this gap, a results-driven curriculum for engineering education is proposed to provide students with the knowledge, skills, and abilities required to succeed in industry. In addressing the question "Where do most curriculum programs fall short?", it is noted that curriculum programs focus on delivering content rather than delivering experiences that support and enable change in teaching and learning. When curriculum design is driven by content, it ends up with numerous blurred boundaries regarding the scope, audience, and applicability of what the curriculum contains. Consequently, the curriculum may result in education that falls short of producing the graduates required by current and future markets and industry. In an e-learning and distance education environment, a curriculum might look well designed and meet all the set criteria, yet ultimately match the wrong objectives. The result of this mismatch is that students complete the curriculum carrying a great deal of information but lacking practical skills they can apply in the workplace. This indicates that the graduate attributes required by the Engineering Council of South Africa (ECSA) and by industry are not attained by the student on completion of the enrolled curriculum. The research adopts a qualitative case study approach to explore and explain the steps, activities, and tools that can be used to design a results-driven, engineering-focused curriculum solution with a clear goal tied to a well-articulated pedagogical strategy.
AI-Enhanced Transdisciplinary Data Encoding for LLMs Training Rusudan Makhachashvili, Natalia Bober (Pages: 13-19)
The rapid advancement of artificial intelligence (AI) has reshaped linguistic data encoding, particularly for Large Language Models (LLMs). AI-driven annotation techniques enable efficient lexical processing, semantic disambiguation, and automated neology tagging, refining computational language modeling across transdisciplinary domains.
This study explores AI-enhanced methodologies for encoding linguistic data for LLM training. AI-assisted lexicographic workflows enable LLMs to adjust dynamically to linguistic evolution while ensuring scalable annotation across diverse transdisciplinary corpora. LLMs trained on transdisciplinary lexicons can generate cross-modal language interpretations, refining machine-generated discourse across domains. The objective of the inquiry is to investigate the innovative philosophical aspects of cyberspace through the lens of language development processes, as these inform the elaboration of AI models, LLM training, and digital communication. The study is designed to disclose cyberspace as both an ontology model and a logosphere model. Two data encoding projects, developed by the authors, serve as foundational elements for this investigation.
A methodology, together with AI-augmented and AI-performed protocols for identifying the phenomenological features of innovative elements of computer vocabulary, is introduced, supplying the template for a new field of study: phenomenological, AI-enhanced digital neology, neography, and neosemiotics. Transdisciplinary educational applications of these approaches to data encoding include: training AI-enhanced NLP models for transdisciplinary communication; developing standardized linguistic annotation protocols that ensure interoperability across AI-driven lexicographic systems; and integrating transdisciplinary discourse structures into machine-learning lexicons, refining AI adaptive language comprehension.
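As a loose illustration of what a standardized, interoperable annotation record for a neologism might look like, the sketch below defines a hypothetical schema; the field names and example entry are our assumptions, not the authors' actual protocol:

```python
# Hypothetical sketch of a standardised annotation record for a neologism,
# of the kind the encoding projects above might exchange. The schema and
# field names are assumptions, not the authors' protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class NeologismRecord:
    lemma: str           # the new lexical item itself
    domain: str          # transdisciplinary domain of use
    semantic_tags: list  # AI-assigned sense labels
    first_attested: str  # ISO date of earliest observed use

record = NeologismRecord(
    lemma="doomscroll",
    domain="digital communication",
    semantic_tags=["media consumption", "negative affect"],
    first_attested="2020-03-01",
)
print(json.dumps(asdict(record), indent=2))  # interoperable JSON exchange
```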
Assessing SVM and Logistic Regression Models for Live Birth Prediction in IVF: A Barbadian Case Study Steven Cumberbatch, Adrian Als, Peter Chami, Juliet Skinner (Pages: 20-26)
The success rates of in vitro fertilization (IVF) have significantly improved over recent decades due to advancements in both clinical practice and biomedical technologies. Clinicians rely on the analysis of large volumes of patient data to inform treatment decisions. Aggregated longitudinal data from multiple patients may reveal latent patterns that can further enhance IVF outcomes. In this study, three machine learning models — linear-kernel SVM, RBF-kernel SVM, and logistic regression — were developed and implemented to predict live births from IVF clinical and demographic data, and their performances were compared. Results show that the linear SVM achieved the highest global discrimination (ROC-AUC = 0.72) and the strongest cross-validated F1-score (0.56). Logistic regression followed closely in global discrimination (ROC-AUC = 0.69), but its cross-validation recall for the minority class was notably low (0.26). The RBF SVM demonstrated a higher recall for the minority class compared to the linear SVM (0.45 vs 0.36), yet its overall discriminative performance was weaker, as reflected by a lower ROC-AUC of 0.63. This research serves as an initial exploration of machine learning applications in IVF within developing countries in the Eastern Caribbean, such as Barbados. The findings may contribute to improved clinical decision-making, reduced treatment cycles, and lower healthcare costs in resource-constrained settings.
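A minimal sketch of the model comparison described above might look as follows, using a synthetic imbalanced dataset as a stand-in for the IVF clinical data (which is not available here); the features, class weights, and cross-validation settings are illustrative assumptions, not the paper's pipeline:

```python
# Illustrative comparison of the three model families named in the abstract
# on synthetic, imbalanced data. Parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 30% minority class, loosely mimicking live-birth imbalance.
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "Linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features, then classify
    scores = cross_validate(pipe, X, y, cv=5,
                            scoring=("roc_auc", "f1", "recall"))
    print(f"{name}: ROC-AUC={scores['test_roc_auc'].mean():.2f} "
          f"F1={scores['test_f1'].mean():.2f} "
          f"recall={scores['test_recall'].mean():.2f}")
```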
Case Study on Understanding the Power of Retrieval Augmented Generation (RAG) Venkata Jaipal Reddy Batthula, Richard S. Segall, Sreejith Sivasubramony (Pages: 27-35)
This paper explores how Generative AI is evolving with the use of Retrieval-Augmented Generation (RAG). RAG improves Artificial Intelligence (AI) systems by making them more capable, efficient, and accurate. The paper explains how to build a Retrieval-Augmented Generation model, covering key steps such as preparing the data, creating embeddings, and setting up the retrieval system. Through a case study, we examine the main components of RAG, how it works with Large Language Models (LLMs), and why it matters in everyday digital tools. One goal is to compare different strategies for RAG, including choices of embeddings, similarity metrics, and language models, to identify an approach that generalizes well. This helps us understand how these factors affect performance and suggests ways to build better and more efficient systems.
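The pipeline the paper describes (prepare data, embed, retrieve, generate) can be sketched minimally as follows, with TF-IDF standing in for the neural embedding models under comparison and the LLM call left as a placeholder:

```python
# Minimal RAG skeleton: embed documents, retrieve nearest chunks, and
# assemble an augmented prompt. TF-IDF is a stand-in embedding; the
# corpus and query are toy placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG combines retrieval with generation.",
    "Embeddings map text to dense vectors.",
    "Similarity metrics rank candidate passages.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)   # step 1: index the corpus

def retrieve(query, k=2):
    """Step 2: rank passages by cosine similarity to the query."""
    q = vectorizer.transform([query])
    sims = cosine_similarity(q, doc_vectors).ravel()
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

query = "How does RAG use embeddings?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# Step 3: pass `prompt` to an LLM of choice (omitted here).
print(prompt)
```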
Detecting AI-Generated Text: A Comparative Study of Machine Learning Algorithms Li-jing Arthur Chang (Pages: 36-41)
As ChatGPT and other large language model (LLM) tools have made generating text via AI far easier than before, there is an increasing need to determine whether the text we read was indeed written by humans. This study used six machine learning and deep learning algorithms to detect AI-generated text. On a balanced sample of AI-generated and human-written text, the results showed that deep learning algorithms outperformed their machine learning counterparts. A hybrid deep learning algorithm achieved the top accuracy of 0.974 (97.4%). Post hoc analysis showed that with only a small fraction of the full sample, such as 10%, the hybrid algorithm still achieved an accuracy of 0.928.
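A simple machine learning baseline for this detection task, shown here as a hedged sketch rather than the paper's hybrid deep learning model, pairs TF-IDF features with logistic regression; the texts and labels below are toy placeholders, not the study's dataset:

```python
# Hedged baseline sketch, not the paper's hybrid deep learning model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "The meeting was moved to Friday afternoon.",                # human (toy)
    "I can't believe how good that concert was!",                # human (toy)
    "As an AI language model, I can summarise this topic.",      # AI (toy)
    "In conclusion, the aforementioned factors demonstrate...",  # AI (toy)
] * 5                      # repeated placeholders; a real labelled corpus is needed
labels = [0, 0, 1, 1] * 5  # 0 = human-written, 1 = AI-generated

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                     LogisticRegression(max_iter=1000))
pipe.fit(X_tr, y_tr)
print(f"Toy accuracy: {pipe.score(X_te, y_te):.3f}")
```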
Key Aspects for a Secure Migration of Databases to the Cloud: Challenges and Solutions Yadira-Jazmín Pérez-Castillo, Sandra-Dinora Orantes-Jiménez, Eleazar Aguirre-Anaya (Pages: 42-46)
In the digital age, migrating databases to the cloud has become an essential strategy for organizations seeking greater flexibility, scalability, and operational efficiency. However, this process poses significant challenges related to information security, including cyberattacks, regulatory compliance, data loss, and access control. This article explores the main challenges of migrating and managing databases in the cloud, analyzing the most common risks and their impact on protecting critical data. In addition, practical solutions such as encryption, multi-factor authentication, and disaster recovery strategies are presented to enable organizations to mitigate risks and ensure the confidentiality, integrity, and availability of information. Finally, the article highlights the benefits of adopting good security practices during migration, promoting a smooth transition to the cloud while safeguarding sensitive data. By proactively addressing these challenges, organizations can achieve a more secure and efficient cloud environment.
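One of the mitigations named above, encrypting data before it leaves the organisation's premises, can be sketched with the `cryptography` package's Fernet recipe; the dump contents below are hypothetical, and key management would in practice go through a KMS or secret vault:

```python
# Sketch of client-side encryption of a database dump prior to migration.
# The dump is a hypothetical placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store in a KMS/vault, never alongside the data
fernet = Fernet(key)

dump = b"-- hypothetical database dump\nINSERT INTO accounts VALUES (1, 'alice');"
ciphertext = fernet.encrypt(dump)   # only this ciphertext leaves the premises

# After migration, the key holder restores the plaintext.
assert fernet.decrypt(ciphertext) == dump
```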
Knowledge and Understanding: Differences and Relationships Nagib Callaos, Jeremy Horne (Pages: 47-67)
We explore in this article the distinction and interaction between knowledge and understanding. While knowledge is often defined epistemologically as justified belief, understanding emerges from the interpretation and application of that knowledge. Importantly, one may exist without the other, though the two can also intersect and reinforce one another through dynamic feedback loops.
These relationships can be understood cybernetically: negative feedback loops reduce discrepancies between knowledge and understanding, while positive feedback loops strengthen congruence. The interplay becomes particularly evident in Action Research, Action Learning, and Action Design, where applying knowledge generates or deepens understanding.
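A toy formalisation of the negative-feedback idea (our illustration, not the authors' model) treats each application of knowledge as closing a fraction of the gap between knowledge K and understanding U:

```python
# Toy negative-feedback model: each cycle of applying knowledge closes a
# fraction of the discrepancy between knowledge K and understanding U.
# Initial levels and gain are arbitrary assumptions.
K, U = 1.0, 0.2   # arbitrary initial levels
gain = 0.4        # assumed corrective gain per feedback cycle

for cycle in range(8):
    discrepancy = K - U
    U += gain * discrepancy   # negative feedback shrinks the discrepancy
    print(f"cycle {cycle}: discrepancy = {K - U:.3f}")
```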
A special case is transdisciplinary communication, which requires intellectual effort to share knowledge effectively across domains. This effort often produces neurological effects that transform knowledge into understanding or raise its level. Consequently, applying knowledge to real-world problems may generate understanding in two ways: (1) through the application itself, which links abstract knowledge with specific contexts, and (2) through transdisciplinary communication, when problem-solving requires it in multidisciplinary teams.
Understanding is, therefore, both a prerequisite for and a result of transdisciplinary communication. Conveying knowledge requires a minimal level of understanding, yet successful communication almost inevitably enhances the communicator's own understanding. In both application and communication, the intellectual effort involved increases neural complexity, raising the likelihood of understanding as an emergent property.
Using Workbook Templates to Improve Teaching Russell Jay Hendel (Pages: 68-72)
The paper advocates the use of templates to significantly improve pedagogy. A template, an approach also sometimes referred to as a workbook approach, provides model solutions with key words or phrases omitted; the student, after training in the use of the template, fills in these omitted words or phrases when attacking a new problem. However, to accomplish pedagogic improvement, templates must be accompanied by higher-order instructional strategies, including contrasts, decisions, evaluations, and componential analysis. The theory presented is fully consistent with a variety of educational hierarchies, such as those of Bloom, Anderson, Van Hiele, and Marzano, as well as with the four educational pillars of Hendel. The theory is supported by the literature; illustrations are provided from statistics and literary analysis.