    Fig. 1. The proposed system diagram for the assessment of feature ranking techniques.
… changing the trends in reported mortality in other countries [2]. Risk prediction models become an important tool to identify people at increased risk of developing CRC and to uncover the risk factors (features, in the context of this work) for this disease [23].
Feature selection is a key step in many classification problems [4,17,36]. The size of the training data set needed to calibrate a model grows exponentially with the number of dimensions, while the number of instances may be limited by the cost of data collection. In cancer risk prediction applications in particular, reducing the data dimensionality can avoid overfitting and improve model performance [3,10,11,27]. Additionally, knowledge discovery from the data is simplified when noisy and irrelevant features are removed.
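To make the dimensionality argument concrete, the following is a minimal sketch (not the authors' code) of filter-based dimensionality reduction with scikit-learn; the synthetic dataset, the mutual information scoring function, and k = 10 are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of filter-based dimensionality
# reduction with scikit-learn; the synthetic dataset, mutual information
# scoring, and k=10 are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for a high-dimensional, small-sample clinical dataset.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)

# Keep only the k highest-scoring features before fitting any classifier.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)  # (200, 10)
```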
Numerous works have examined feature selection with respect to classification performance [9,25,31], but a difficulty that arises in many practical settings is that small variations in the data lead to different outcomes of the feature selection algorithm. Perhaps this disparity among research findings is what has made the study of the robustness (or stability) of feature selection techniques a topic of recent interest [1,18,28,37,39].
When developing a cancer risk prediction model, performance is not the only goal; extracting a subset of the most relevant features is also important in order to better understand the data and the underlying process. Thus, in this work, we assess several feature ranking techniques in the context of a colorectal cancer prediction model. Fig. 1 shows the proposed system diagram. From the whole dataset, different data samples are extracted, and the feature ranking technique applied to each of these subsets leads to a different feature ranking. The feature selection method is then evaluated both with respect to classifier performance (after combining these individual rankings) and with respect to its stability (or robustness), by measuring the similarity among the rankings; this workflow is sketched below.
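As a rough illustration of the workflow in Fig. 1, the sketch below draws bootstrap samples from a dataset, applies a feature ranking technique to each, and collects the resulting rankings; the use of mutual information as the ranker and ten bootstrap samples are assumptions made for illustration, not choices taken from the paper.

```python
# A rough illustration of the Fig. 1 workflow; the ranker (mutual
# information) and the number of bootstrap samples are illustrative
# assumptions, not choices taken from the paper.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def bootstrap_rankings(X, y, n_samples=10, seed=0):
    """Return one feature ranking (best feature first) per bootstrap sample."""
    rng = np.random.default_rng(seed)
    rankings = []
    for _ in range(n_samples):
        # Draw a bootstrap sample of the data (sampling with replacement).
        idx = rng.integers(0, len(X), size=len(X))
        scores = mutual_info_classif(X[idx], y[idx], random_state=0)
        # Sort feature indices by descending relevance score.
        rankings.append(np.argsort(scores)[::-1])
    return rankings
```

The collected rankings feed both evaluation branches of Fig. 1: they are combined for the performance assessment and compared with one another for the stability assessment.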
Several (scalar) metrics [19,22] have been proposed to evaluate the stability of the feature selection process. In this work, assessment is conducted following some of these metrics (one such metric is sketched below), and we also propose a graphical approach [5] that enables us to analyze the similarity between feature ranking techniques as well as their individual stability. We also compare the performance achieved by risk prediction models that use features selected with feature selection techniques against models that rely on features that experts consider to be state-of-the-art.
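As one hedged example of such a scalar metric, the sketch below estimates stability as the mean pairwise Spearman rank correlation between the rankings produced on the resampled datasets; this is one of several metrics discussed in the stability literature [19,22], not necessarily the one used in this work.

```python
# One scalar stability estimate: the mean pairwise Spearman rank
# correlation between the rankings obtained on the resampled datasets.
# This is one of several metrics in the literature [19,22], shown here
# only as a sketch.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

def ranking_stability(rankings):
    """rankings: list of 1-D arrays of feature indices, best feature first."""
    # Convert each ordering into per-feature rank positions so that the
    # same feature is compared across rankings.
    positions = [np.argsort(r) for r in rankings]
    corrs = []
    for a, b in combinations(positions, 2):
        rho, _ = spearmanr(a, b)
        corrs.append(rho)
    return float(np.mean(corrs))
```

A value close to 1 indicates that the resampled datasets yield nearly identical rankings, i.e., a stable ranker.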
The rest of this paper is organized as follows: Section 2 describes the feature selection process and its stability. Experimental evaluation is shown in Section 3 and discussion in Section 4. Finally, Section 5 summarizes the main conclusions.
    2. Methods
Feature selection techniques measure the importance of a feature or a set of features according to a given measure. There are many goals of these techniques, but the most important ones are [36]: (a) to mitigate the curse of dimensionality, (b) to gain a deeper insight into the underlying processes that generated the data, and (c) to provide faster and more cost-effective prediction models.

[…] examples and a class label $d$ associated with each sample. Each instance $x_i$ is a $p$-dimensional vector $x_i = (x_{i1}, x_{i2}, \dots, x_{ip})$, where each […]

[…] Perceptron architecture. In both cases, model performance is estimated by the area under the ROC (Receiver Operating Characteristic) curve (AUC), where the ROC curve plots the true positive rate […]
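As a complement to the AUC description above, the following is a minimal sketch of AUC-based performance estimation with scikit-learn; the logistic regression classifier and the synthetic data are illustrative placeholders, not the models described in the paper.

```python
# A minimal sketch of AUC-based evaluation with scikit-learn; the
# classifier (logistic regression) and the synthetic data are
# illustrative placeholders, not the models described in the paper.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit on the training split and score the held-out split by AUC, which
# summarizes the ROC curve (true positive rate vs. false positive rate).
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```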