Home

I am a fourth-year Ph.D. student at the School of Engineering and Applied Science of the University of Virginia. I am fortunate to be advised by Professor Laura E. Barnes and co-advised by Professor Donald E. Brown. I earned my Master of Science from The George Washington University in 2014, where I was advised by Professor Simon Berkovich and began a research project applying the Golay code technique to clustering big data. My experience includes numerous industry and academic projects. My research interests are medical informatics, mobile health, machine learning, mathematical modeling, algorithms and data structures, data mining, biomedical computing, and visualization.

Recent Publications

Text Classification Algorithms: A Survey

Figures from the survey:

Confusion Matrix
Convolutional Neural Networks (CNN) for Text Classification
Hierarchical Attention Networks for Document Classification
Random Multimodel Deep Learning (RMDL)
HDLTex: Hierarchical Deep Learning for Text Classification
Model Interpretability
Hierarchical Classification Method
Support Vector Machine (SVM)
k-Nearest Neighbor Classification
GRU and LSTM Cells
Standard LSTM/GRU Recurrent Neural Networks
Fully Connected Deep Neural Network Classifier
Random Forest
Autoencoder (Z is the code layer; two hidden layers are used for encoding and two for decoding)
A Recurrent Autoencoder Structure
Boosting Technique Architecture
Bagging Technique
Random Projection
Limitation of Document Feature Extraction at the Per-Sentence Level

Kowsari, K.; Jafari Meimandi, K.; Heidarysafa, M.; Mendu, S.; Barnes, L.; Brown, D. Text Classification Algorithms: A Survey. Information 2019, 10, 150.

RMDL: Random Multimodel Deep Learning for Classification

The continually increasing number of complex datasets each year necessitates ever-improving machine learning methods for robust and accurate categorization of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL solves the problem of finding the best deep learning structure and architecture while simultaneously improving robustness and accuracy through ensembles of deep learning architectures. RMDL can accept a variety of data as input, including text, video, images, and symbolic data. This paper describes RMDL and shows test results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB, and 20 Newsgroups. These test results show that RMDL produces consistently better performance than standard methods over a broad range of data types and classification problems.

Kamran Kowsari, Mojtaba Heidarysafa, Donald E. Brown, Kiana Jafari Meimandi, and Laura E. Barnes. 2018. RMDL: Random Multimodel Deep Learning for Classification. In Proceedings of the 2nd International Conference on Information System and Data Mining (ICISDM '18), April 9–11, 2018, Lakeland, FL, USA. ACM, New York, NY, USA.
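
To illustrate the idea described in the abstract above, the sketch below trains a few feed-forward networks with randomly drawn depths and widths and combines their predictions by majority vote. This is a minimal Keras sketch of the RMDL concept, not the released RMDL package: the function names (build_random_dnn, rmdl_style_predict), the hyperparameter ranges, and the assumption of dense TF-IDF-style feature vectors are all illustrative choices.

```python
# Minimal sketch of an RMDL-style ensemble (illustrative, not the authors' code):
# train several deep models with randomly drawn structures and combine their
# predictions by majority vote. Assumes dense feature vectors and integer labels.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_random_dnn(input_dim, n_classes, rng):
    """Build a feed-forward classifier with a randomly chosen depth and width."""
    n_layers = rng.integers(1, 4)       # random depth: 1-3 hidden layers
    n_units = int(rng.integers(64, 513))  # random width per hidden layer
    model = keras.Sequential([keras.Input(shape=(input_dim,))])
    for _ in range(n_layers):
        model.add(layers.Dense(n_units, activation="relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def rmdl_style_predict(X_train, y_train, X_test, n_classes,
                       n_models=3, epochs=10, seed=0):
    """Train n_models randomly structured DNNs and return majority-vote labels."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        model = build_random_dnn(X_train.shape[1], n_classes, rng)
        model.fit(X_train, y_train, epochs=epochs, batch_size=128, verbose=0)
        votes.append(model.predict(X_test).argmax(axis=1))
    votes = np.stack(votes)             # shape: (n_models, n_test_samples)
    # Majority vote across the ensemble for each test sample.
    return np.array([np.bincount(col, minlength=n_classes).argmax()
                     for col in votes.T])
```

The full method described in the paper draws from multiple architecture families (DNN, CNN, RNN) and randomizes more hyperparameters; this sketch varies only the depth and width of a plain DNN to keep the voting mechanism clear.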

CONTACT US

kk7nc@virginia.edu

kamran@kowsari.net

Tel: (202) 812-3013

PROFILE

© Copyright 2018. All Rights Reserved. www.kowsari.net  Last Update: 01/09/2018