2015
Authors
Moreira, IC; Ventura, SR; Ramos, I; Rodrigues, PP;
Publication
JOURNAL OF MEDICAL INTERNET RESEARCH
Abstract
Background: Mammography is considered the best imaging technique for breast cancer screening, and the radiographer plays an important role in its performance. Therefore, continuing education is critical to improving the performance of these professionals and thus providing better health care services. Objective: Our goal was to develop an e-learning course on breast imaging for radiographers, assessing its efficacy, effectiveness, and user satisfaction. Methods: A stratified randomized controlled trial was performed with radiographers and radiology students who already had mammography training, using pre- and post-knowledge tests and satisfaction questionnaires. The primary outcome was the improvement in test results (percentage of correct answers), using intention-to-treat and per-protocol analysis. Results: A total of 54 participants were assigned to the intervention (20 students plus 34 radiographers), with 53 controls (19 + 34). The intervention was completed by 40 participants (11 + 29), with 4 (2 + 2) discontinued interventions and 10 (7 + 3) lost to follow-up. Differences in the primary outcome were found between intervention and control: 21 versus 4 percentage points (pp), P<.001. Stratified analysis showed an effect in radiographers (23 pp vs 4 pp; P=.004) but was unclear in students (18 pp vs 5 pp; P=.098). Nonetheless, differences in students' posttest results were found (88% vs 63%; P=.003), which were absent in the pretest (63% vs 63%; P=.106). The per-protocol analysis showed a higher effect (26 pp vs 2 pp; P<.001), both in students (25 pp vs 3 pp; P=.004) and radiographers (27 pp vs 2 pp; P<.001). Overall, 85% were satisfied with the course, and 88% considered it successful. Conclusions: This e-learning course is effective, especially for radiographers, which highlights the need for continuing education.
2015
Authors
Spiliopoulou, M; Rodrigues, PP; Menasalvas, E;
Publication
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Abstract
In 2015, we are experiencing a proliferation of scientific publications, conferences, and funding programs on KDD for medicine and healthcare. However, medical scholars and practitioners work differently from KDD researchers: their research is mostly hypothesis-driven, not data-driven. KDD researchers need to understand how medical researchers and practitioners work, what questions they have and what methods they use, and how mining methods can fit into their research frame and their everyday business. The purpose of this tutorial is to contribute to this learning process. We address medicine and healthcare, where the expertise of KDD scholars is needed and familiarity with medical research basics is a prerequisite. We aim to provide basics for (1) mining in epidemiology and (2) mining in the hospital. We also address, to a lesser extent, the subject of (3) preparing and annotating Electronic Health Records for mining.
2015
Authors
Spiliopoulou, M; Rodrigues, PP; Menasalvas, E;
Publication
Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15
Abstract
2015
Authors
Traina, C Jr.; Rodrigues, PP; Kane, B; Marques, PMdA; Traina, AJM;
Publication
CBMS
Abstract
2015
Authors
Abdulrahman, SM; Brazdil, P; Van Rijn, JN; Vanschoren, J;
Publication
CEUR Workshop Proceedings
Abstract
Identifying the best machine learning algorithm for a given problem continues to be an active area of research. In this paper we present a new method which exploits both meta-level information acquired in past experiments and active testing, an algorithm selection strategy. Active testing attempts to iteratively identify an algorithm whose performance will most likely exceed the performance of previously tried algorithms. The novel method described in this paper uses tests on smaller data samples to rank the most promising candidates, thus optimizing the schedule of experiments to be carried out. The experimental results show that this approach leads to considerably faster algorithm selection.
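To fix the active-testing idea, here is a minimal Python sketch; it is illustrative only and not the authors' implementation. The meta-level matrix perf (algorithms x past datasets) and the function evaluate_on_sample are hypothetical stand-ins: each iteration picks the untried algorithm that most often beat the current incumbent in past experiments, evaluates it cheaply on a small sample, and updates the incumbent.

import numpy as np

rng = np.random.default_rng(0)
perf = rng.random((8, 30))  # hypothetical past performances: 8 algorithms x 30 datasets

def evaluate_on_sample(algo_idx):
    # Stand-in for training/testing one algorithm on a small data sample.
    return float(perf[algo_idx].mean() + rng.normal(0, 0.05))

def active_testing(n_iter=5):
    tried = {0: evaluate_on_sample(0)}  # start from an arbitrary first algorithm
    best = 0
    for _ in range(n_iter):
        # For each candidate, the fraction of past datasets where it beat
        # the current incumbent (a simple "relative landmark" estimate).
        wins = (perf > perf[best]).mean(axis=1)
        wins[list(tried)] = -1.0        # never re-test an algorithm
        cand = int(np.argmax(wins))     # most promising untried candidate
        tried[cand] = evaluate_on_sample(cand)
        if tried[cand] > tried[best]:
            best = cand                 # update the incumbent
    return best, tried

best, tried = active_testing()
print(f"selected algorithm {best} after {len(tried)} small-sample evaluations")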
2015
Authors
van Rijn, JN; Abdulrahman, SM; Brazdil, P; Vanschoren, J;
Publication
Advances in Intelligent Data Analysis XIV
Abstract
One of the challenges in Machine Learning is to find a classifier and parameter settings that work well on a given dataset. Evaluating all possible combinations typically takes too much time, hence many solutions have been proposed that attempt to predict which classifiers are most promising to try. As the first recommended classifier is not always the correct choice, multiple recommendations should be made, making this a ranking problem rather than a classification problem. Even though this is a well-studied problem, there is currently no good way of evaluating such rankings. We advocate the use of Loss Time Curves, as used in the optimization literature. These visualize the amount of budget (time) needed to converge to an acceptable solution. We also investigate a method that utilizes the measured performances of classifiers on small samples of data to make such recommendations, and adapt it so that it works well in Loss Time space. Experimental results show that this method converges extremely fast to an acceptable solution.
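To illustrate what a Loss Time Curve conveys, the following minimal Python sketch uses made-up runtimes and accuracies (not data from the paper) and plots the loss of the best classifier found so far against the cumulative time budget, on a logarithmic time scale.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sequence of recommended classifiers: per-trial runtime and accuracy.
runtimes = np.array([2.0, 5.0, 1.0, 30.0, 12.0])       # seconds per evaluation
accuracies = np.array([0.71, 0.78, 0.74, 0.85, 0.82])  # made-up results
best_possible = accuracies.max()                        # reference solution

time_axis = np.cumsum(runtimes)                  # budget spent after each trial
best_so_far = np.maximum.accumulate(accuracies)  # incumbent performance over time
loss = best_possible - best_so_far               # gap to the best known solution

plt.step(time_axis, loss, where="post")
plt.xscale("log")
plt.xlabel("time budget (s, log scale)")
plt.ylabel("loss of best classifier found so far")
plt.title("Loss Time Curve (synthetic example)")
plt.show()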