In the current computer age, computer-assisted software (CAS) is widely available and routinely used in both university teaching and industry training. The purpose of this talk is to present a new approach to assessing the usefulness of CAS. The typical approach, in both university and industry settings, asks, "Does it work? What are the before-and-after scores, and are they statistically significant?" This approach is flawed for three reasons. I) Instruction vs. Software: We already have a rich literature on good instruction, itself supported by before-and-after analysis; this instructional literature should be both necessary and sufficient for evaluating software. II) Software Omissions: If the software lacks an important instructional feature, the prevailing attitude is to wait for the next software version before addressing it; by contrast, we advocate supplementing the software concurrently with the necessary instructional aids. III) Contradictory Statistical Results: The over-emphasis on software necessarily leads to contradictory statistical results on efficacy, since instructional methodology, the key driver, is typically absent from the experiments. As time permits, examples are given from several disciplines using the four pillars of good instructional pedagogy advocated by Hendel in a recent book.