Apache cTAKES
{{Short description|Natural language processing system}}
{{Infobox software
| name = Apache cTAKES
| logo = File:Apache Ctakes logo.jpg
| screenshot =
| caption =
| collapsible =
| developer = Apache Software Foundation
| latest release version = 6.0.0
| latest release date = {{Start date and age|2024|09|16}}
| latest preview version =
| latest preview date =
| operating system = Cross-platform
| repo = {{URL|https://github.com/apache/ctakes|cTAKES repository}}
| programming language = Java, Scala, Python
| genre = Natural language processing, Bioinformatics, Text mining, Information extraction
| license = Apache License 2.0
| website = {{official website}}
}}
'''Apache cTAKES''' (clinical Text Analysis and Knowledge Extraction System) is an open-source natural language processing (NLP) system that extracts clinical information from the unstructured text of electronic health records. It processes clinical notes and identifies clinical named entities such as drugs, diseases/disorders, signs/symptoms, anatomical sites, and procedures. Each named entity carries attributes for its text span, its ontology mapping code, its context (for example, family history of, current, or unrelated to the patient), and whether or not it is negated.{{Cite book|chapter-url={{google books|yVp4CgAAQBAJ|plainurl=yes}}|title=Health Web Science: Social Media Data for Healthcare|last=Denecke|first=Kerstin|date=2015-08-31|publisher=Springer|isbn=978-3-319-20582-3 |chapter=Tools and Resources for Information Extraction |page=[{{google books|yVp4CgAAQBAJ|page=67|plainurl=yes}} 67] |via=Google Books }}
cTAKES was built using the Unstructured Information Management Architecture (UIMA) framework and the OpenNLP natural language processing toolkit.{{Cite journal|last=Khalifa|first=Abdulrahman|last2=Meystre|first2=Stéphane|date=2015-12-01|title=Adapting existing natural language processing resources for cardiovascular risk factors identification in clinical notes|journal=Journal of Biomedical Informatics|series=Proceedings of the 2014 i2b2/UTHealth Shared-Tasks and Workshop on Challenges in Natural Language Processing for Clinical Data|volume=58|issue=Supplement|pages=S128–S132|doi=10.1016/j.jbi.2015.08.002|pmid=26318122|pmc=4983192}}{{Cite press release|url=https://globenewswire.com/news-release/2017/04/25/970806/0/en/The-Apache-Software-Foundation-Announces-Apache-cTAKES-v4-0.html|title=The Apache Software Foundation Announces Apache® cTAKES™ v4.0|publisher=The Apache Software Foundation|first=Sally |last=Khudairi |date=2017-04-25 |location=Forest Hill, MD |agency=Globe Newswire |access-date=2017-09-20}}
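Because the cTAKES annotators are packaged as UIMA analysis engines, a pipeline can be invoked programmatically from Java through the uimaFIT API. The example below is an illustrative sketch rather than code from the cTAKES documentation: it assumes the <code>ClinicalPipelineFactory</code> helper and type-system classes distributed with the cTAKES 4.x examples, which may differ in other releases, and the default dictionary lookup additionally requires a configured UMLS account.

<syntaxhighlight lang="java">
// Illustrative sketch only: runs an assumed default cTAKES clinical pipeline
// over one note and prints the attributes of each extracted entity.
import org.apache.ctakes.clinicalpipeline.ClinicalPipelineFactory;
import org.apache.ctakes.typesystem.type.textsem.IdentifiedAnnotation;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.fit.factory.JCasFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;
import org.apache.uima.fit.util.JCasUtil;
import org.apache.uima.jcas.JCas;

public class CtakesSketch {
    public static void main(String[] args) throws Exception {
        // Assumed helper from the cTAKES 4.x examples: builds the default clinical
        // pipeline (sentence detection, tokenization, POS tagging, chunking,
        // dictionary lookup, negation and context detection, ...).
        AnalysisEngineDescription pipeline = ClinicalPipelineFactory.getDefaultPipeline();

        // Wrap a clinical note in a UIMA CAS and run the pipeline over it.
        JCas jCas = JCasFactory.createJCas();
        jCas.setDocumentText("Patient denies chest pain. Mother has a history of diabetes.");
        SimplePipeline.runPipeline(jCas, pipeline);

        // Each extracted entity covers a text span and carries attributes such as
        // polarity (negation status) and the subject of the mention.
        for (IdentifiedAnnotation entity : JCasUtil.select(jCas, IdentifiedAnnotation.class)) {
            System.out.printf("%s [%s] polarity=%d subject=%s%n",
                    entity.getCoveredText(),
                    entity.getType().getShortName(),
                    entity.getPolarity(),   // -1 conventionally marks a negated mention
                    entity.getSubject());
        }
    }
}
</syntaxhighlight>

In this sketch, the values printed for each <code>IdentifiedAnnotation</code> correspond to the span, negation, and subject attributes described above.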
== Components ==
Components of cTAKES are specifically trained for the clinical domain, and create rich linguistic and semantic annotations that can be utilized by clinical decision support systems and clinical research.{{Cite journal|last=Savova|first=Guergana K|last2=Masanz|first2=James J|last3=Ogren|first3=Philip V|last4=Zheng|first4=Jiaping|last5=Sohn|first5=Sunghwan|last6=Kipper-Schuler|first6=Karin C|last7=Chute|first7=Christopher G|date=2010|title=Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications|journal=Journal of the American Medical Informatics Association|volume=17|issue=5|pages=507–513|doi=10.1136/jamia.2009.001560|issn=1067-5027|pmc=2995668|pmid=20819853}}
These components, which can be chained into a UIMA pipeline as sketched after the list, include:
* Named section identifier
* Sentence boundary detector
* Rule-based tokenizer
* Formatted list identifier
* Normalizer
* Context-dependent tokenizer
* Part-of-speech tagger
* Phrasal chunker
* Dictionary lookup annotator
* Context annotator
* Negation detector
* Uncertainty detector
* Subject detector
* Dependency parser
* Patient smoking status identifier
* Drug mention annotator
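A minimal sketch of how individual components of this kind might be chained into a UIMA aggregate with uimaFIT is shown below. The annotator class names (<code>SentenceDetector</code>, <code>TokenizerAnnotatorPTB</code>, <code>POSTagger</code>, <code>Chunker</code>) are taken from the cTAKES 4.x source tree and should be treated as assumptions here; the model and descriptor parameters that most of these annotators require are omitted for brevity.

<syntaxhighlight lang="java">
// Illustrative sketch only: chains a few cTAKES components into a UIMA aggregate.
import org.apache.ctakes.chunker.ae.Chunker;
import org.apache.ctakes.core.ae.SentenceDetector;
import org.apache.ctakes.core.ae.TokenizerAnnotatorPTB;
import org.apache.ctakes.postagger.POSTagger;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.fit.factory.AggregateBuilder;
import org.apache.uima.fit.factory.AnalysisEngineFactory;

public class ComponentPipelineSketch {
    public static AnalysisEngineDescription buildPipeline() throws Exception {
        AggregateBuilder builder = new AggregateBuilder();
        // Order matters: later components consume the annotations produced earlier.
        builder.add(AnalysisEngineFactory.createEngineDescription(SentenceDetector.class));
        builder.add(AnalysisEngineFactory.createEngineDescription(TokenizerAnnotatorPTB.class));
        builder.add(AnalysisEngineFactory.createEngineDescription(POSTagger.class));
        builder.add(AnalysisEngineFactory.createEngineDescription(Chunker.class));
        // The dictionary lookup, negation, uncertainty and other annotators listed
        // above would be appended here in the same way before running the aggregate.
        return builder.createAggregateDescription();
    }
}
</syntaxhighlight>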
== History ==
Development of cTAKES began at the Mayo Clinic in 2006. The development team, led by Dr. Guergana Savova and Dr. Christopher Chute, included physicians, computer scientists and software engineers. After its deployment, cTAKES became an integral part of Mayo's clinical data management infrastructure, processing more than 80 million clinical notes.{{cite web |date=2015-06-22 |title=History |website=Apache cTAKES™ - clinical Text Analysis Knowledge Extraction System |url=http://ctakes.apache.org/history.html |access-date=2018-01-11 }}
When Dr. Savova moved to Boston Children's Hospital in early 2010, the core development team grew to include members there. Further external collaborations include:
* University of Colorado
* Brandeis University
* University of Pittsburgh
* University of California, San Diego
Such collaborations have extended cTAKES' capabilities into areas such as temporal reasoning, clinical question answering, and coreference resolution for the clinical domain.
In 2010, cTAKES was adopted by the [http://www.i2b2.org i2b2] program, and it became a central component of [https://web.archive.org/web/20170430025922/https://www.healthit.gov/policy-researchers-implementers/secondary-use-ehr-data SHARP Area 4] (Secondary Use of EHR Data).
In 2013, the project made its first release as an Apache Software Foundation incubator project, [http://incubator.apache.org/ctakes/ cTAKES 3.0].{{citation needed|date=July 2020}}
In March 2013, cTAKES became an Apache Software Foundation Top Level Project (TLP).
== References ==
{{Reflist}}
== External links ==
* [https://github.com/apache/ctakes?tab=readme-ov-file#apache-ctakes cTAKES Official Website]
* [https://projects.apache.org/project.html?ctakes Apache cTAKES Project Information page] from ASF
* [http://jamia.bmj.com/content/17/5/507.abstract Abstract of the JAMIA article describing the cTAKES architecture]
* [http://ohnlp.org/ Open Health Natural Language Processing (OHNLP) Consortium]
* [https://web.archive.org/web/20111007234355/http://healthit.hhs.gov/portal/server.pt?open=512&mode=2&objID=3128&PageID=20708 Strategic Health IT Advanced Research Projects (SHARP) Program]
* [http://sharpn.org/ SHARP Area 4 - Secondary Use of EHR Data]
* [https://web.archive.org/web/20130712065633/http://maveric.org/mig/arc.html The Automated Retrieval Console (ARC)]
* [https://www.i2b2.org/software/projects/hitex/hitex_manual.html Health Information Text Extraction (HITEx)]: a rule-based NLP pipeline built on the GATE framework, developed as part of the [http://www.i2b2.org Informatics for Integrating Biology and the Bedside (i2b2)] project.
* [http://code.google.com/p/cleartk/ Computational Language and Education Research toolkit (ClearTK)] (no longer maintained): developed at the University of Colorado at Boulder, it provides a framework for developing statistical NLP components in Java and is built on top of Apache UIMA.
* [http://code.google.com/p/negex/ NegEx]: a tool developed at the University of Pittsburgh to detect negated terms in clinical text. The system uses trigger terms to determine likely negation scenarios within a sentence.
* [https://web.archive.org/web/20111204211529/http://www.dbmi.pitt.edu/blulab/ConText.html ConText]: an extension of NegEx, also developed at the University of Pittsburgh, that detects not only negated concepts but also temporal context (recent, historical, or hypothetical) and the subject of experience (patient or other).
* [http://metamap.nlm.nih.gov/ MetaMap] (United States National Library of Medicine): a comprehensive concept tagging system built on top of the Unified Medical Language System (UMLS). It requires an active UMLS Metathesaurus License Agreement (and account) for use.
* [https://web.archive.org/web/20150913120033/https://knowledgemap.mc.vanderbilt.edu/research/content/medex-tool-finding-medication-information MedEx]: a tool for extracting medication information from clinical text. MedEx processes free-text clinical records to recognize medication names and signature information such as drug dose, frequency, route, and duration. Use is free with a UMLS license. It is a standalone application for Linux and Windows.
* [https://web.archive.org/web/20150626172745/http://knowledgemap.mc.vanderbilt.edu/research/content/sectag-tagging-clinical-note-section-headers SecTag] (section tagging hierarchy): recognizes note section headers using NLP, Bayesian, spelling correction, and scoring techniques. Use is free with either a UMLS or LOINC license.
* [http://nlp.stanford.edu/software/CRF-NER.shtml Stanford Named Entity Recognizer (NER)]: a conditional random field sequence model with well-engineered features for named entity recognition in English and German.
* [http://nlp.stanford.edu/software/corenlp.shtml Stanford CoreNLP]: an integrated suite of natural language processing tools for English in Java, including tokenization, part-of-speech tagging, named entity recognition, parsing, and coreference resolution.
{{Apache Software Foundation}}
{{Health software}}
[[Category:Electronic health record software]]
[[Category:Natural language processing software]]