Text Mining in Biomedical Domain with Emphasis on Document Clustering
Abstract
Objectives
With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from them. Text mining techniques enable the extraction of unknown knowledge from unstructured documents.
Methods
This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain.
Results
Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail.
Conclusions
Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.
I. Introduction
Text mining [12] is the process of extracting new information on a particular topic from a set of documents. Text mining is useful when the data are in the form of unstructured text (documents) that cannot be processed using traditional methods, such as data mining [3]. Text mining differs from a normal search query in that it can also discover unknown information from a set of documents. Text mining is based on natural language processing, which helps computers understand and process human language [4].
II. Roles of Text Mining and Its Applications in Biomedical Field
Biomedical researchers have started using text mining techniques [15] due to the vast amount of unstructured information available in the biomedical domain in the form of research articles, case reports, Electronic Health Records (EHRs), and so forth. The following section highlights the applications of text mining in the biomedical domain.
1. Extraction of Knowledge from Biomedical Literature
The number of articles and papers published in the biomedical domain is increasing at a fast rate due to the expansion of online publishing. The total number of articles indexed in MEDLINE has exceeded 23 million [6], and the number of citations added annually has reached more than 806,000. Thus, knowledge extraction from this vast collection of articles on a particular topic can be very time-consuming. Text mining techniques can facilitate the extraction of unknown knowledge from the vast number of articles available. Some of the research carried out in this area has focused on the following applications. (1) In the field of cancer research, text mining offers a means to improve the diagnosis, treatment, and prevention of cancer through mining of the cancer literature [7]. (2) In pharmacology, it can help extract drug-drug interactions, protein interactions, and microbial interactions through mining of the biomedical literature.
2. Text Mining in Systematic Reviews
Systematic reviews [8] normally involve searching, screening, and synthesizing information from articles meeting the inclusion criteria and combining the results to address the research problem. Searching, screening, and classifying articles requires enormous time and effort from the researchers involved in systematic reviews. Text mining offers tools for automatic searching, clustering, and classification of documents and for information extraction during the various stages of a systematic review, such as searching, screening, and synthesis of information.
3. Text Mining in Information Extraction from Electronic Health Records
EHR systems store huge amounts of structured and unstructured information. Data mining methods are helpful in analyzing the structured part. Clinical texts [9], such as patient pathology reports, personal medical histories, and notes related to findings during examinations or procedures form the unstructured part of EHRs, and they can be analyzed using text mining techniques to explore hidden information [9]. The following are some of the applications of text mining in EHR systems. (1) EchoInfer [10], a text mining software tool, can be used to extract data pertaining to cardiovascular structures and functions from heterogeneously formatted echocardiographic data sources. (2) A text mining system using Bayesian networks can be used to mine narrative text from mammography reports to aid cancer diagnosis [11].
4. Biomarkers
Text mining techniques are useful for identifying disease-related biomarkers and associated genes from the literature [12]. Text mining is applied through named entity recognition (NER), a sub-task of information extraction that identifies entities and classifies them into various classes, such as gene, name of a person, organization, etc. The following are some of the applications of text mining in the field of biomarkers. (1) MeInfoText [13] is a tool that provides knowledge about associations between gene methylation and cancer through the mining of large amounts of literature. (2) Whatizit [14] is a tool that identifies terms from a web source, such as PubMed abstracts, and then links them to the corresponding entries in bioinformatics databases.
5. Disease Surveillance
Web mining, a branch of text mining, helps detect disease outbreaks in disease surveillance systems. (1) Biocaster [15] is a web-based, open-source, ontology-based text mining system for detecting and tracking the outbreak of diseases from web-based sources. (2) MedISys and PULS [16] are information retrieval and extraction systems used to analyze disease epidemics.
6. Other Areas
Some recent advances in biomedical text mining are in the areas of pharmacogenomics, toxicology, precision medicine, and drug repositioning [5]. Text mining can help identify named entities, such as genes, proteins, drugs, and diseases, and identify relations among them. Text mining can also help extract genotype and phenotype data to support care in precision medicine, as well as identify relations between existing drugs and new diseases by mining the biomedical literature to reposition drugs.
III. Text Mining Software in Biomedical Domain
Currently there are several free and commercial software tools available to carry out text mining on various research databases. Table 1 lists the free and commercial software tools available for text mining in the field, and Table 2 compares the biomedical text mining software tools presented in Table 1.
IV. Text Mining Process
The following processes are involved in text mining: search and retrieval of documents [24], creation of a corpus of documents, pre-processing of documents [25], preparation of a document matrix, clustering of documents [26], finding associations, preparation of a word cloud, and processing of the language itself using natural language processing techniques [410]. Once this process is completed, the next level classifies the documents using the naïve Bayes classifier [112], support vector machine [112], or decision tree method. The vector space model [112] takes center stage in the text mining process; in this model, documents are represented as n-dimensional vectors of terms.
1. Search and Retrieval of Documents
The first step involved in text mining is to search and retrieve documents using the information retrieval process [24], which automatically retrieves documents relevant to the user's information need from a large, usually web-based, collection of documents.
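As an illustration of this step, the sketch below queries PubMed through the NCBI E-utilities esearch endpoint using the Python requests library; the search term and result limit are arbitrary examples, not taken from the reviewed studies.

```python
import requests

# Query PubMed through the NCBI E-utilities esearch endpoint.
# The search term and retmax value are illustrative only.
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "text mining AND document clustering",
    "retmax": 20,
    "retmode": "json",
}
response = requests.get(url, params=params)
response.raise_for_status()

# The JSON result contains the PubMed IDs of the retrieved articles.
id_list = response.json()["esearchresult"]["idlist"]
print(f"Retrieved {len(id_list)} PubMed IDs:", id_list)
```

The returned PubMed IDs could then be passed to the efetch endpoint to download abstracts and build the document corpus.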
2. Pre-processing of Documents
The pre-processing of documents [25] involves the steps of stop word removal and stemming.
1) Stop word removal
Stop words, such as ‘the’ and ‘a’, are removed. There are a number of methods available to remove stop words. The classic method removes words found on a pre-defined stop word list, and the Zipf's law [25] based method removes the most frequent words, words appearing only once in the document, and words with a low inverse document frequency.
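A minimal sketch of the classic, list-based approach, assuming NLTK's English stop word list (downloaded on first use); the sample sentence is invented for illustration.

```python
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)  # fetch the precompiled list on first use

stop_words = set(stopwords.words("english"))
document = "the patient was admitted to the hospital with a fever"

# Classic method: drop every token that appears on the predefined stop word list.
filtered_tokens = [word for word in document.split() if word not in stop_words]
print(filtered_tokens)  # ['patient', 'admitted', 'hospital', 'fever']
```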
2) Stemming
After the removal of stop words, the next step is ‘stemming’, which reduces each term to its root so that only the roots of terms are used. For example, the terms ‘analyze’, ‘analyzed’, and ‘analyzing’ can all be represented by a common root such as ‘analyz’.
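A minimal sketch of stemming with NLTK's Porter stemmer; the example terms mirror those in the text.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
terms = ["analyze", "analyzed", "analyzing"]

# Each inflected form is reduced to a common stem (here, 'analyz').
print([stemmer.stem(term) for term in terms])
```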
3. Term Document Matrix
Once the pre-processing has been completed, the next step is to prepare the term document matrix (TDM), in which terms are represented by rows and documents are represented by columns. The term frequency–inverse document frequency (TF-IDF) weight [27] is an important measure in the text mining process and is commonly used to fill the cells of the TDM.
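A minimal sketch of building a TF-IDF-weighted matrix with scikit-learn; the three short documents are invented for illustration, and the matrix is transposed so that terms appear as rows and documents as columns.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "gene expression in cancer cells",
    "drug interactions in cancer treatment",
    "gene therapy for rare diseases",
]

vectorizer = TfidfVectorizer(stop_words="english")
document_term_matrix = vectorizer.fit_transform(documents)

# Transpose so that rows are terms and columns are documents (the TDM layout).
term_document_matrix = document_term_matrix.T

print(vectorizer.get_feature_names_out())       # the terms (rows of the TDM)
print(term_document_matrix.toarray().round(2))  # TF-IDF weight per term and document
```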
4. Natural Language Processing
Natural language processing [410] is used to analyze the language content of text documents through automated systems. Broadly, language processing is divided into the processing of words (morphology), their different forms (lexicon), sentence structures (syntax), sentence meanings (semantics), coreference analysis, and the relationships between sentences (discourse). Natural language processing systems widely use statistical techniques to remove the ambiguity present in the processing of texts. They automatically process text using a probabilistic approach and carry out tasks such as segmentation of sentences into words, named entity recognition, part-of-speech tagging, coreference resolution [428], etc.
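A minimal sketch of some of these tasks (sentence segmentation, word segmentation, and part-of-speech tagging) using NLTK; the resource names assume a recent NLTK release, and the example sentence is invented.

```python
import nltk

# Models required on first use (names assume a recent NLTK release).
for resource in ("punkt", "averaged_perceptron_tagger"):
    nltk.download(resource, quiet=True)

text = "The patient was treated with aspirin. Symptoms improved within two days."

sentences = nltk.sent_tokenize(text)       # sentence segmentation
tokens = nltk.word_tokenize(sentences[0])  # segmentation of a sentence into words
tagged = nltk.pos_tag(tokens)              # part-of-speech tagging

print(sentences)
print(tagged)
```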
5. Methods for Text Clustering
Documents are grouped according to their document vectors, and each cluster can be denoted by a representative document vector. Clustering [26] of documents can be carried out using techniques such as hierarchical clustering and partitioning clustering (e.g., K-means clustering).
1) Hierarchical clustering technique
Hierarchical clustering [29] creates a hierarchy of clusters of documents using a top-down (divisive) or bottom-up (agglomerative) approach. In the agglomerative method, clustering starts with each document as a single cluster; in each subsequent step, the two clusters with the closest distance or greatest similarity are combined to form a new cluster. This process is repeated until a single cluster is formed. In the divisive method, initially all the documents are combined to form a single cluster, and the cluster is divided into two sub-clusters that have the maximum distance or dissimilarity between them. This process continues until each document forms its own cluster. In hierarchical clustering, prior knowledge of the number of clusters is not required. The outcome of hierarchical clustering is a graphical representation called a dendrogram, a hierarchical tree structure in which the documents are represented as its branches.
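A minimal sketch of agglomerative (bottom-up) clustering of TF-IDF document vectors, with the resulting dendrogram drawn by SciPy and Matplotlib; the four short documents are invented for illustration.

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "breast cancer gene expression",
    "lung cancer gene mutation",
    "diabetes insulin therapy",
    "insulin resistance in diabetes",
]

vectors = TfidfVectorizer().fit_transform(documents).toarray()

# Agglomerative clustering: repeatedly merge the two closest clusters
# until a single cluster remains; Ward linkage defines cluster distance.
linkage_matrix = linkage(vectors, method="ward")

dendrogram(linkage_matrix, labels=["doc1", "doc2", "doc3", "doc4"])
plt.show()
```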
2) Partitioning (K-means) clustering
K-means clustering [30] starts with a predefined number of clusters of documents, for instance, k clusters. Documents are relocated to different clusters based on their nearness to the cluster centroid (mean). After each relocation of documents, the cluster centroids are recalculated. This process is repeated until the cluster means or centroids no longer change. Generally, the K-means clustering algorithm is faster than the hierarchical clustering algorithm.
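A minimal sketch of K-means clustering of the same kind of TF-IDF vectors with scikit-learn, assuming the number of clusters k is fixed in advance (here k = 2); the documents are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "breast cancer gene expression",
    "lung cancer gene mutation",
    "diabetes insulin therapy",
    "insulin resistance in diabetes",
]

vectors = TfidfVectorizer().fit_transform(documents)

# Documents are repeatedly reassigned to the nearest centroid and the
# centroids recalculated until the assignments no longer change.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
print(kmeans.labels_)  # cluster label assigned to each document
```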
3) Similarity measures
The efficiency of the clustering process depends on the choice of similarity measure, such as the cosine, Euclidean, Manhattan, or Mahalanobis measure [28]. The cosine measure is the simplest and most widely used similarity measure for clustering documents. It calculates the normalized dot product of two document vectors. Cosine values range from 0 to 1; when two documents do not share any words, the cosine value is 0.
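A minimal sketch of the cosine measure computed over TF-IDF vectors with scikit-learn; the three documents are invented, and the third shares no terms with the first two.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "gene expression in cancer",
    "cancer gene expression profiling",
    "hospital appointment scheduling",
]

vectors = TfidfVectorizer().fit_transform(documents)

# Pairwise normalized dot products: values close to 1 indicate similar
# documents, while documents sharing no terms have a cosine value of 0.
print(cosine_similarity(vectors).round(2))
```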
6. Methods for Text Classification
Documents can be automatically classified into specific categories using classifier algorithms such as the naïve Bayes, support vector machine, and decision tree methods.
1) Naïve Bayes
The naïve Bayes classifier classifies documents based on a probabilistic concept in which the terms in each document are assigned probabilities based on their frequencies in the document corpus. During a supervised training process, the naïve Bayes classifier learns, for each predefined category, the probabilities of the terms occurring in the training documents assigned to that category. The trained classifier can then be used to classify a new set of documents: a new document is assigned to the predefined category with the highest posterior probability given the terms it contains.
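A minimal sketch of a naïve Bayes text classifier with scikit-learn; the tiny training corpus and its ‘cancer’/‘diabetes’ labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_docs = [
    "tumor growth and chemotherapy response",
    "oncogene mutation in carcinoma",
    "insulin dosage for blood glucose control",
    "glucose monitoring in diabetic patients",
]
train_labels = ["cancer", "cancer", "diabetes", "diabetes"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_docs)

# Term probabilities per category are estimated during supervised training.
classifier = MultinomialNB().fit(X_train, train_labels)

# A new document is assigned to the category with the highest posterior probability.
X_new = vectorizer.transform(["chemotherapy for tumor patients"])
print(classifier.predict(X_new))
print(classifier.predict_proba(X_new))
```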
2) Support vector machine
The support vector machine is used to find a separator between two document categories. Documents are assumed to lie in a linear vector space, and the separator is a hyperplane (for example, a line in a two-dimensional space) that separates the two categories of documents.
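A minimal sketch of a linear support vector machine separating the same two invented document categories; scikit-learn's LinearSVC fits the separating hyperplane in the TF-IDF space.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_docs = [
    "tumor growth and chemotherapy response",
    "oncogene mutation in carcinoma",
    "insulin dosage for blood glucose control",
    "glucose monitoring in diabetic patients",
]
train_labels = ["cancer", "cancer", "diabetes", "diabetes"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)

# The fitted hyperplane separates the two categories in the document vector space.
svm = LinearSVC().fit(X_train, train_labels)
print(svm.predict(vectorizer.transform(["blood glucose test results"])))
```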
3) Decision tree
The decision tree method represents category conditions as nodes and document categories as leaves. It works recursively and classifies documents into categories based on these conditions.
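A minimal sketch of a decision tree classifier over term counts; the two training documents and their labels are invented, and each internal node of the fitted tree tests the count of a single term.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

train_docs = [
    "tumor growth and chemotherapy response",
    "insulin dosage for blood glucose control",
]
train_labels = ["cancer", "diabetes"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_docs)

# Nodes test conditions on term counts; leaves hold the document categories.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, train_labels)
print(tree.predict(vectorizer.transform(["chemotherapy for tumor patients"])))
```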
4) Evaluation of classification
The evaluation of the classification process is carried out using recall, precision, and F measures:
TP (true positive): number of instances correctly classified as belonging to a class;
FP (false positive): number of instances falsely classified as belonging to a class;
FN (false negative): number of instances belonging to a class that are not correctly classified.
Precision is calculated as TP / (TP + FP), and recall as TP / (TP + FN). The F-measure is the harmonic mean of precision and recall; it is calculated as F = 2 × (precision × recall) / (precision + recall).
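A minimal sketch of these measures computed from hypothetical counts; the TP, FP, and FN values are invented for illustration.

```python
def precision_recall_f(tp: int, fp: int, fn: int) -> tuple:
    """Compute precision, recall, and F-measure from classification counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts for one class: 40 correct, 10 falsely assigned, 20 missed.
print(precision_recall_f(tp=40, fp=10, fn=20))  # approximately (0.8, 0.667, 0.727)
```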
V. Conclusion
This paper provided a comprehensive overview of text mining methods. The paper discussed the roles of text mining in biomedical applications and presented software available to carry out text mining. The paper also presented an overview of techniques to find similarities between studies on a given topic from available research articles.
Notes
Conflict of Interest: No potential conflict of interest relevant to this article was reported.