Browsing by Author "Erbay, Hasan"
Now showing 1 - 3 of 3
Article: 3-State Protein Secondary Structure Prediction Based on SCOPe Classes
(INST TECNOLOGIA PARANA, RUA PROF ALGACYR MUNHOZ MADER 3775-CIC, 81350-010 CURITIBA-PARANA, BRAZIL, 2021)
Authors: Atasever, Sema; Azgınoglu, Nuh; Erbay, Hasan; Aydın, Zafer (ORCID: 0000-0002-2295-7917)
Affiliation: AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü
Abstract: Improving the accuracy of protein secondary structure prediction has been an important task in bioinformatics: it is not only the starting point for obtaining the tertiary structure in hierarchical modeling, but it also enhances sequence analysis and sequence-structure threading, helping to determine structure and function. Here we present a model based on the DSPRED classifier, a hybrid method combining dynamic Bayesian networks and a support vector machine, to predict the 3-state secondary structure of proteins. We used the SCOPe (Structural Classification of Proteins-extended) database to train and test the model. The results show that DSPRED reached a Q3 accuracy of 82.36% when trained and tested on proteins from all SCOPe classes. We compared our method with the popular PSIPRED on the SCOPe test datasets and found that it outperformed PSIPRED.

Conference Object: Open Source Slurm Computer Cluster System Design and a Sample Application
(IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2017)
Authors: Azginoglu, Nuh; Atasever, Mehmet Umt; Aydin, Zafer; Celik, Mete; Erbay, Hasan
Affiliation: AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü
Abstract: Cluster computing combines the resources of multiple computers so that they act as a single high-performance computer. In this study, a computer cluster consisting of a Lustre distributed file system, one cluster server based on the Slurm resource management system, and thirteen compute nodes was built from available idle computers with different processors.
Different bioinformatics algorithms were run on different data sets in the cluster, and the cluster's performance was evaluated by the amount of time it took to finish the jobs.

Article: Sample Reduction Strategies for Protein Secondary Structure Prediction
(MDPI, ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND, 2019)
Authors: Atasever, Sema; Aydın, Zafer; Erbay, Hasan; Sabzekar, Mostafa
Affiliation: AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü
Abstract: Predicting the secondary structure from a protein sequence plays a crucial role in estimating the 3D structure, which has applications in drug design and in understanding protein function. As new genes and proteins are discovered, the size of the protein databases and datasets available for training prediction models grows considerably. A two-stage hybrid classifier that employs dynamic Bayesian networks and a support vector machine (SVM) has been shown to provide state-of-the-art accuracy for protein secondary structure prediction. However, the SVM is not efficient for large datasets because of the quadratic optimization involved in model training. In this paper, two techniques for reducing the number of samples in the SVM training set are evaluated on the CB513 benchmark. The first method randomly selects a fraction of the data samples from the training set using a stratified selection strategy. This approach can remove approximately 50% of the data samples and reduce model training time by 73.38% on average without significantly decreasing prediction accuracy. The second method clusters the data samples with a hierarchical clustering algorithm and replaces the training set samples with the nearest neighbors of the cluster centers in order to improve training time.
To cluster the feature vectors, hierarchical clustering is applied, with the number of clusters and the number of nearest neighbors optimized as hyper-parameters by computing prediction accuracy on validation sets. Clustering is found to reduce the size of the training set by 26% without reducing prediction accuracy. Among the clustering techniques, Ward's method provided the best accuracy on test data.
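The stratified selection strategy described in the last abstract can be sketched in a few lines: keep a fixed fraction of the samples from each class so the class distribution of the reduced training set matches the original. The helper below is a hypothetical illustration (names and toy data are assumptions, not the authors' code):

```python
import random
from collections import defaultdict

def stratified_subsample(X, y, fraction, seed=0):
    """Keep `fraction` of the samples from each class, preserving
    the class distribution of the original training set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    keep = []
    for label, idxs in by_class.items():
        # sample without replacement within each class (stratum)
        k = max(1, round(len(idxs) * fraction))
        keep.extend(rng.sample(idxs, k))
    keep.sort()
    return [X[i] for i in keep], [y[i] for i in keep]

# toy data with 3-state secondary-structure labels H/E/C
X = [[float(i)] for i in range(100)]
y = ["H"] * 50 + ["E"] * 30 + ["C"] * 20
Xs, ys = stratified_subsample(X, y, 0.5)  # keeps 25 H, 15 E, 10 C
```

Because each class is sampled independently, removing ~50% of the data leaves the H/E/C proportions unchanged, which is what keeps the SVM's prediction accuracy close to that of the full training set.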
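The cluster-based reduction can likewise be sketched: build clusters with Ward linkage, then keep only the member nearest each cluster center. The naive O(n³) merge loop below is an illustrative pure-Python assumption, not the paper's implementation, which would use an optimized clustering library:

```python
def ward_distance(a, b):
    # Ward linkage: merge cost is the squared centroid distance,
    # weighted by the cluster sizes n_a * n_b / (n_a + n_b).
    na, nb = len(a), len(b)
    dim = len(a[0])
    ca = [sum(p[d] for p in a) / na for d in range(dim)]
    cb = [sum(p[d] for p in b) / nb for d in range(dim)]
    sq = sum((x - y) ** 2 for x, y in zip(ca, cb))
    return na * nb / (na + nb) * sq

def ward_clusters(points, k):
    # start from singleton clusters, greedily merge the cheapest pair
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = ward_distance(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def reduce_to_center_neighbors(points, k):
    # replace each cluster by the member nearest its centroid
    reduced = []
    for cluster in ward_clusters(points, k):
        dim = len(cluster[0])
        c = [sum(p[d] for p in cluster) / len(cluster) for d in range(dim)]
        nearest = min(cluster,
                      key=lambda p: sum((x - y) ** 2 for x, y in zip(p, c)))
        reduced.append(nearest)
    return reduced

# three well-separated pairs collapse to three representatives
points = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0],
          [10.0, 11.0], [20.0, 0.0], [21.0, 0.0]]
reduced = reduce_to_center_neighbors(points, 3)
```

Each representative is an actual training sample rather than a synthetic centroid, so the reduced set stays inside the original feature distribution; the number of clusters (and, in the paper, the number of nearest neighbors kept per center) is then tuned on validation data.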