Scholarly works
Permanent URI for this collection: https://repository.ui.edu.ng/handle/123456789/408
Item: A comparison of the predictive capabilities of artificial neural networks and regression models for knowledge discovery (2013). Ojo, A. K.; Adeyemo, A. B.
In this paper, Artificial Neural Network (ANN) and regression analysis models were compared to determine which performs better at prediction. Prediction was done with an ANN model using one hidden layer and three processing elements, and with a regression model whose parameters were estimated by the least squares method. The mean square error (MSE) of each model was used to decide which predicts better. Seven real series were fitted and predicted with both models. The MSE of the ANN model was smaller than that of the regression model, making ANN the better model for prediction.
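A minimal sketch of the comparison this abstract describes, using scikit-learn's MLPRegressor with one hidden layer of three neurons (mirroring the stated topology) against ordinary least squares; the synthetic series and all settings here are illustrative assumptions, not the paper's data.

```python
# Hypothetical MSE comparison of a small ANN vs. least-squares regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))            # stand-in series; the paper used seven real series
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)         # parameters estimated by least squares

print("ANN MSE:", mean_squared_error(y_te, ann.predict(X_te)))
print("OLS MSE:", mean_squared_error(y_te, ols.predict(X_te)))
```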
Item: A mobile students' industrial work experience scheme logbook application (Science and Education Publishing, 2020). Olojakpoke, D. M.; Ojo, A. K.
Monitoring students on the Students' Industrial Work Experience Scheme (SIWES) programme is difficult for school-based supervisors because the paper-based logbook system currently in use gives little insight into how well students are progressing. Supervisors cannot tell whether students fill their logbooks daily, recording what they have done, or complete them in bulk at the end of a long period, in which case the entries are very likely fraudulent. Supervisors therefore try to visit students on the programme to monitor them physically, but owing to distance and other logistical issues they manage only one or two visits, or sometimes none. The application was developed following the incremental model: Node.js was used for the back end, MongoDB as the database, and React Native for the front end. It helps school-based supervisors monitor students on the SIWES programme more effectively and makes grading and commenting on logbook entries much easier. It can therefore be deployed to tertiary institutions in Nigeria to support the running of their SIWES programmes.

Item: A model for conflicts' prediction using deep neural network (2021-10). Olaide, O. B.; Ojo, A. K.
Conflict is part of human social interaction and may arise from a mere misunderstanding among groups of settlers. In recent times, advanced Machine Learning (ML) techniques have been applied to conflict prediction; strategic frameworks for improving ML settings in conflict research are emerging and are being tested with new algorithm-based approaches. These developments motivated the development of a Deep Neural Network model that predicts conflicts. In this study, two Artificial Neural Network models were developed using a dataset extracted from https://www.data.world, uploaded by the Armed Conflict Location and Event Data Project (ACLED) in four separate CSV files (January 2015 to December 2018). The 2015 data has 2,697 instances and 28 features; 2016 has 2,233 instances with the same features; 2017 has 2,669 instances; and 2018 has 1,651. The baseline Artificial Neural Network achieved 95% accuracy and 5% loss on the training data, and 90% accuracy and 10% loss on the test set. The Deep Neural Network model achieved 98% accuracy and 2% loss on the training set, with 89% accuracy and 11% loss on the test set. It was concluded that further improving conflict prediction requires addressing issues in the dataset so that a better, more robust model can be developed.
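A hedged sketch of the baseline-versus-deeper comparison described above, using Keras; the layer widths, the synthetic labels, and the training settings are assumptions for illustration, not the study's configuration — only the 28-feature input mirrors the abstract.

```python
# Illustrative baseline ANN vs. deeper network for binary conflict prediction.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

def build(hidden_layers):
    model = Sequential([Dense(units, activation="relu") for units in hidden_layers]
                       + [Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 28))               # stand-in for the 28 ACLED-derived features
y = (X[:, 0] + X[:, 1] > 0).astype("float32")     # synthetic labels for demonstration only

baseline = build([16])                            # single hidden layer
deep = build([64, 32, 16])                        # deeper variant
baseline.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
deep.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```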
Item: Predicting phishing websites using support vector machine and multi-class classification based on association rule techniques (2018-06). Woods, N. C.; Agada, V. E.; Ojo, A. K.
Phishing is a semantic attack that targets the user rather than the computer, and is a relatively new Internet crime compared with forms such as viruses and hacking. Considering the damage phishing websites have caused to various economies by collapsing organisations, stealing information and diverting finances, researchers have explored many ways of detecting phishing websites, but there is no agreement on the best prediction algorithm. This study integrates the strengths of two algorithms, Support Vector Machines (SVM) and Multi-class Classification based on Association Rules (MCAR), to establish a stronger and better means of predicting phishing websites. A total of 11,056 websites from PhishTank and the Yahoo directory were used to verify the effectiveness of the approach. Feature extraction and rule generation were done by the MCAR technique; classification and prediction were done by SVM. The technique achieved 98.30% classification accuracy with a computation time of 2,205.33 s and a minimal error rate, an Area Under the Curve (AUC) of 98% reflecting the proportion of phishing websites classified correctly, and 82.84% explained variance in the prediction of phishing websites based on the coefficient of determination. Using the two techniques together produced a more accurate result because it combined the strengths of both; this work centred on that advantage by building a hybrid of the two techniques.
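A minimal, hedged sketch of the SVM classification stage only; the MCAR rule-generation stage is the paper's own technique and is not reproduced, so the binary URL/page features and labels below are illustrative assumptions.

```python
# Illustrative SVM classifier over hypothetical phishing-detection features;
# in the paper these features come from MCAR feature extraction and rules.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical binary features, e.g. has_ip_in_url, uses_https, long_url, ...
X = rng.integers(0, 2, size=(2000, 10))
y = ((X[:, 0] == 1) & (X[:, 1] == 0)).astype(int)   # synthetic phishing labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```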
Item: Ako, A. (2019-09). Ojo, A. K.
This study presents an approach to extracting data from an Amazon dataset and preprocessing it, combining Bi-directional Long Short-Term Memory and a 1-dimensional Convolutional Neural Network to classify opinions into targets. After parsing the dataset and identifying the desired information, data gathering and preprocessing were carried out. The feature selection technique extracted structural features, which refer to the content of a review (part-of-speech tagging), along with behavioural features, which refer to the review's metadata. Both behavioural and structural features of reviews and their targets were extracted, and a feature vector was created for each entity. In the evaluation phase, these feature vectors were used as classifier inputs to identify whether entities were fake or genuine. The proposed solution achieved over 90% correct predictions, compared with 77% in prior work; this increase resulted from combining the bidirectional long short-term memory and convolutional neural network algorithms.

Item: An algorithmic framework for hybrid adaptive protocol (HAP) to manage broadcast storm problems in mobile ad-hoc networks (MANETs) (2008-09). Onifade, O. F. W.; Ojo, A. K.; Okiyi, K. U.
Pure flooding, one of the simplest and most straightforward approaches to broadcasting, leads to redundant broadcasts, contention and collisions, collectively referred to as the broadcast storm problem (BSP). This results from plain broadcasting approaches causing signal overlap within a geographical area of wireless communication. The counter-based scheme was developed to reduce the broadcast storm problem; however, maintaining a high delivery ratio in both sparse and dense networks requires different thresholds. Because of the nature of MANETs, determining this threshold requires a level of dynamism, without which the scheme's operation is marred. This work therefore proposed an algorithmic framework that addresses the BSP by using knowledge of a node's neighbourhood density to determine the threshold dynamically, adapting to both dense and sparse networks while limiting the constraints stated above. A sketch of such a decision rule follows.
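A hedged sketch of a counter-based rebroadcast decision with a density-dependent threshold, in the spirit of the framework above; the threshold rule and all constants are illustrative assumptions, not HAP's actual algorithm.

```python
# Illustrative counter-based rebroadcast decision for a MANET node: the node
# counts duplicate receptions of a broadcast during an assessment delay and
# rebroadcasts only if the count stays below a density-derived threshold.

def dynamic_threshold(num_neighbours, sparse_c=5, dense_c=2, cutoff=10):
    """Higher threshold (rebroadcast more) in sparse areas, lower in dense ones."""
    return sparse_c if num_neighbours < cutoff else dense_c

def should_rebroadcast(duplicates_heard, num_neighbours):
    return duplicates_heard < dynamic_threshold(num_neighbours)

# A node with 3 neighbours that heard the packet twice rebroadcasts;
# a node with 25 neighbours that heard it 4 times stays silent.
print(should_rebroadcast(2, 3))    # True
print(should_rebroadcast(4, 25))   # False
```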
Item: An electronic shopping system with a recommendation agent (2009). Ojo, A. K.; Emuoyibofarhe, O. J.; Emuoyibofarhe, O. N.; Lala, O. G.; Chukwuemeka, C. U.
There is an inevitable need to improve the operational portfolio of the boutique and to remove problems such as time consumption and inconsistency, among others encountered by most business enterprises. This study focused on the design of a web-based shopping system. Every shopping software system is precipitated by some business need: to correct a defect in an existing application, to adapt a legacy system to a changing business environment, to extend the functions and features of an existing application, or to create a new product, service or system. A feasibility study was carried out by interviewing an entrepreneur (a business proprietor) to learn the boutique's mode of operation; specialists in fashion design were also interviewed to acquire the knowledge the proposed software agent uses to give recommendations online. The existing system was studied, and deficiencies such as long queues, customer dissatisfaction and staff impatience were identified, as well as the need for customers to get professional guidance. The database was developed in MySQL using SQLyog, an SQL client compatible with MySQL Server running under WAMP, XAMPP or Zend. The system accepts input from a user, whether an administrator or a customer, processes it as specified by the system design, and produces output: either a completed transaction report and receipt or an outfit recommendation. Interfaces were designed with PHP on the Dreamweaver platform, and MySQL on SQLyog was used to develop, organise and store all vital details about customers, suppliers, sales, products and product categories. The proposed system is designed to improve speed, accuracy, storage capability, customer satisfaction, job flexibility for staff, shopping flexibility for customers, and consistency in the boutique; its simplicity suits both trained personnel and the general public. The work elaborates on the implementation and use of software agents in global transactions: people can transact from various locations and have their goods delivered to their doorstep, saving the time and stress of shopping physically.

Item: Angular displacement scheme (ADS): providing reliable geocast transmission for mobile ad-hoc networks (MANETs) (2008-08). Onifade, O. F. W.; Ojo, A. K.; Akande, O. O.
In wireless ad hoc environments, two approaches can be used for multicasting: multicast flooding or a multicast tree-based approach. Existing multicast protocols, mainly based on the latter, may not work properly in mobile ad hoc networks because dynamic movement of group members causes frequent tree reconfiguration, with excessive channel overhead and resulting loss of datagrams. Since keeping the tree structure up to date in the tree-based approach is nontrivial, multicast flooding is sometimes considered an alternative for multicasting in MANETs. The scheme presented in this research attempts to reduce the forwarding space for multicast packets beyond earlier schemes, and examines the effect of these improvements on control packet overhead, data packet delivery ratio and end-to-end delay, by further reducing the number of nodes that rebroadcast multicast packets while still maintaining a high degree of accuracy of delivered packets. Simulations were carried out with OMNeT++ to compare the angular scheme against flooding and the LAR box scheme; the results showed a clear improvement over both.
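A hedged geometric sketch of an angular forwarding test of the kind the scheme's name suggests: a node forwards only if it lies within a bounded angular displacement of the direction toward the geocast destination. The angle bound and geometry are assumptions for illustration, not the published scheme.

```python
# Illustrative angular forwarding test: forward a geocast packet only if the
# candidate node lies within max_angle of the source->destination bearing.
import math

def angular_displacement(src, dst, node):
    """Angle (radians) at src between the directions to dst and to node."""
    ax, ay = dst[0] - src[0], dst[1] - src[1]
    bx, by = node[0] - src[0], node[1] - src[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def in_forwarding_zone(src, dst, node, max_angle=math.radians(30)):
    return angular_displacement(src, dst, node) <= max_angle

print(in_forwarding_zone((0, 0), (10, 0), (5, 1)))   # True: nearly on the line
print(in_forwarding_zone((0, 0), (10, 0), (1, 9)))   # False: far off the bearing
```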
Item: Characterisation of academic journal publications using text mining techniques (Science and Education Publishing, 2017). Ojo, A. K.; Adeyemo, A. B.
The ever-growing volume of published academic journals, and the implicit knowledge derivable from them, has not fully enhanced knowledge development but has instead produced information and cognitive overload. Publication data are textual, unstructured and anomalous; analysing such high-dimensional data manually is time-consuming, which limits the ability to derive projections and trends from the patterns hidden in publications. This study was designed to develop and use intelligent text mining techniques to characterise academic journal publications. The journal scoring criteria of nineteen rankers from 2001 to 2013 in the 50th edition of the Journal Quality List (JQL) were used to select highly rated journals. The text-miner software developed was used to crawl and download paper abstracts and their bibliometric information from articles in these journals. The datasets were transformed into structured data and cleaned using filtering and stemming algorithms, then grouped into word features based on a bag-of-words document representation. The highly rated journals were clustered using the Self-Organising Map (SOM) method, with attribute weights reported for each cluster.

Item: DEVELOPMENT OF ADVANCED DATA SAMPLING SCHEMES TO ALLEVIATE CLASS IMBALANCE PROBLEM IN DATA MINING CLASSIFICATION ALGORITHMS (2015-09). FOLORUNSO, SAKINAT OLUWABUKONLA
Classification is the process of finding a set of models that distinguish data classes in order to predict unknown class labels in data mining. The class imbalance problem occurs when standard classifiers are biased towards the majority class while the minority class is ignored. Existing classifiers tend to maximise overall prediction accuracy and minimise error at the expense of the minority class; however, research has shown that the misclassification cost of the minority class is higher and should not be ignored, since it is the class of interest. This work was therefore designed to develop advanced data sampling schemes that improve the classification performance of imbalanced datasets, with the aim of increasing the recall of the minority class. The Synthetic Minority Oversampling Technique (SMOTE) was extended to SMOTE+300% and combined with existing under-sampling schemes: Random Under-Sampling (RUS), Neighbourhood Cleaning Rule (NCL), Wilson's Edited Nearest Neighbour (ENN) and Condensed Nearest Neighbour (CNN). Five advanced data sampling scheme algorithms, SMOTE300ENN, SMOTE300RUS, SMOTE300NCL, SMOTENCL and SMOTERUS, were coded in Java and implemented in the data mining tool WEKA as an Application Programming Interface. The existing and developed schemes were applied to 886 Diabetes Mellitus (DM), 1,163 Senior Secondary School Certificate Result (SSSCR) and 786 Contraceptive Methods (CM) datasets, collected in Ilesha and Ibadan, Nigeria. Their performance was assessed with different classification algorithms using Receiver Operating Characteristic (ROC), recall of the minority class, and performance gain metrics. Friedman's test at p = 0.05 was used to analyse the schemes against the classification algorithms. On the ROC metric, the mean rank values for the DM, SSSCR and CM datasets treated with the advanced schemes ranged from 6.9-13.8, 3.8-12.8 and 6.6-13.5 respectively, against 3.4-7.8, 2.6-12.6 and 2.8-7.9 for the existing schemes, signifying improved classification performance. On the recall metric, the advanced schemes ranged from 9.4-13.0, 6.3-14.0 and 7.3-13.6 respectively, against 2.0-7.5, 2.5-8.9 and 2.1-7.4 for the existing schemes, showing increased detection of the minority class. Performance gains by the advanced schemes over the original datasets (DM, SSSCR and CM) were: SMOTE300ENN (27.1%), SMOTE300RUS (11.6%), SMOTE300NCL (15.5%), SMOTENCL (8.3%) and SMOTERUS (7.3%). Significant differences were observed among all the schemes; the higher the mean rank value and performance gain, the better the scheme. The SMOTE300ENN scheme gave the highest ROC and recall values on the three datasets (13.8, 12.8, 12.3 and 13.0, 14.0, 13.6 respectively). The developed SMOTE 300 with Wilson's Edited Nearest Neighbour scheme significantly improved classification performance and increased the recall of the minority class over the existing schemes on the same datasets, and is therefore recommended for the classification of imbalanced datasets. Keywords: Imbalanced dataset, Receiver operating characteristics, Data reduction techniques. Word count: 445
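The thesis's SMOTE300ENN is its own extension, but the underlying SMOTE-plus-ENN combination it builds on can be sketched with the stock imbalanced-learn library; this illustrates the base technique only, not the author's 300% scheme.

```python
# Stock SMOTE + Edited Nearest Neighbour resampling with imbalanced-learn;
# the thesis extends SMOTE to 300% oversampling before cleaning with ENN.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTEENN

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                     # heavily imbalanced classes

X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))                 # minority class boosted, noise cleaned
```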
Item: Development of English to Yoruba machine translator, using syntax-based model (2020-06). Ojo, A.; Obe, O.; Adebayo, A.; Oladunjoye, M.
Machine translators are required to produce the best possible translation without human assistance; every machine translator requires programs, automated dictionaries and grammars to support translation. Studies have shown that the fluency of a machine translator depends on the approach or model adopted for its development. Machine translation does not simply substitute words in one language for another: it applies complex linguistic knowledge to decode the contextual meaning of the source text in its entirety. Approaches to machine translation divide into single and hybrid approaches. Aiming to improve the translation quality of existing English-to-Yoruba translator systems, this paper adopts a syntax-based hybrid approach to translating sentences. The grammar for translation was designed and tested with Joshua, an open-source natural language toolkit. The procedure comprised data collection, data preparation, preprocessing, parsing, training the translation model, extracting grammar rules, implementing the grammar, and evaluating translations with the Bilingual Evaluation Understudy (BLEU) metric. The paper presents the translation quality of phrase-based and syntax-based machine translators in tabular and graphical form; the syntax-based translator showed higher translation quality than the phrase-based one.

Item: Ensuring QoS with adaptive frame rate and feedback control mechanism in video streaming (2012-12). Onifade, O. F. W.; Ojo, A. K.
Video over best-effort packet networks is encumbered by a number of factors, including unknown and time-varying bandwidth, delay and losses, as well as additional issues such as how to share network resources fairly among many flows and how to perform one-to-many communication efficiently for popular content. This research investigates video streaming formats and encoding and compression techniques, towards the development and simulation of a rate adaptation model that reduces packet loss. The thrust of the research is to enrich and enhance the quality of video streaming over wireless networks. Mathematical models were developed and then simulated to demonstrate the need to advance the existing packet scheduling solutions towards recovery from packet loss and error handling in video streaming.
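A hedged sketch of a feedback-driven rate adaptation loop of the kind the abstract describes: the sender lowers its frame rate when receiver reports show loss and probes upward cautiously otherwise. The step sizes, thresholds and loop structure are illustrative assumptions, not the paper's mathematical model.

```python
# Illustrative additive-increase / multiplicative-decrease frame-rate control
# driven by receiver loss reports (e.g. RTCP-style feedback).
def adapt_frame_rate(fps, loss_rate, fps_min=5.0, fps_max=30.0,
                     loss_threshold=0.02, increase=1.0, decrease=0.5):
    if loss_rate > loss_threshold:
        fps *= decrease          # back off quickly when the path is losing packets
    else:
        fps += increase          # probe for headroom slowly
    return max(fps_min, min(fps_max, fps))

fps = 30.0
for loss in [0.00, 0.01, 0.08, 0.10, 0.01, 0.00]:   # simulated feedback reports
    fps = adapt_frame_rate(fps, loss)
    print(f"loss={loss:.2f} -> fps={fps:.1f}")
```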
Item: Forecasting Nigerian equity stock returns using long short-term memory technique (2024). Ojo, A. K.; Okafor, I. J.
Investors and stock market analysts face major challenges in predicting stock returns and making wise investment decisions. Predictable equity stock returns can boost investor confidence, but prediction remains a difficult task. To address this, a Long Short-Term Memory (LSTM) model was built to predict future stock market movements, using a historical dataset from the Nigerian Stock Exchange (NSE) that was cleaned and normalised for the model design. The model was evaluated with performance metrics and compared with other deep learning models such as Artificial Neural Networks and Convolutional Neural Networks (CNN). The experimental results showed that the LSTM model can predict future stock market prices and returns with over 90% accuracy when trained with a reliable dataset. The study concludes that well-trained LSTM models are useful for financial time-series problems; future studies should explore combining LSTM with other deep learning techniques such as CNN to create hybrid models that mitigate the risks of relying on a single model for future equity stock predictions.

Item: FORMALISING THE LOGIC OF SPATIAL QUALIFICATION USING A QUALITATIVE REASONING APPROACH (2014-04). BASSEY, PATIENCE CHARLES
The spatial qualification problem, an aspect of spatial reasoning, concerns the impossibility of knowing an agent's presence at a specific location and time. An agent's location determines its ability to carry out an action given its known spatial antecedents. Works formalising this problem are sparse. Qualitative reasoning is the most widely used approach for spatial reasoning because of its ability to reason with incomplete knowledge or a reduced data set; it has been applied to spatial concepts such as shape, size, distance and orientation, but not to spatial qualification. This work therefore aimed at formalising a logical theory for reasoning about the spatial qualification of an agent to carry out an action based on prior knowledge, using a qualitative reasoning approach. The notions of persistence, discretisation and commutative distance coverage were used as parameters in formalising the concept of spatial qualification. The axioms and derivation rules of the theory were formally represented in quantified modal logic, and the theory was compared with the standardised axiom systems S4 (containing Kripke's minimal system K with axioms T and 4) and S5 (containing K, T, 4 and axiom B). The characteristics of the theory's domain were compared with the Barcan axioms, and its semantics were described using Kripke's possible world semantics (PWS) with a constant domain across worlds. A proof system for reasoning with the theory was developed using the analytic tableau method. The theory was applied to an agent's local distribution planning task with a set deadline; cases with known departure times and routes were considered to determine the possibility of an agent's presence at a location. The formalisation yielded a body of axioms named the Spatial Qualification Model (SQM), in which the presence log and the reachability of locations determine an agent's spatial presence. Examined against the S4 and S5 systems, the formalised model exhibited properties KP1 and KP2 (equivalent to axiom K), TP and 4P (equivalent to axioms T and 4 respectively); the SQM therefore demonstrated the characteristics of an S4 system of axioms but fell short of being an S5 system. The Barcan formula held, confirming a constant domain across possible worlds in the formalised model. Explicating the axioms in the SQM using PWS clarified the tableau proof rules. Through closed tableaux, the SQM was shown to be semi-decidable, in the sense that the possibility of an agent's presence at a certain location and time is provable only in the affirmative, while its negation is not. Depending on the route, applying the SQM to the product distribution planning domain yielded the agent's feasible availability times, within or outside the set deadline, to assess the agent's spatial qualification in agreement with possible cases in the planning task. The spatial qualification model specified the spatial presence log and reachability axioms required for reasoning about an agent's spatial presence, and successfully assessed plans for product distribution tasks between locations for van availability. Keywords: Spatial qualification model, Quantified modal logic, Tableau proof, Possible world semantics. Word count: 497
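For reference, the standard axiom schemas of the modal systems the thesis compares against, stated in LaTeX; these are textbook modal logic, while the SQM's own axioms (KP1, KP2, TP, 4P) are not reproduced here.

```latex
% Axiom schemas of S4 (K + T + 4); adding (B) yields S5, which the SQM
% does not satisfy. (BF) is the Barcan formula, which held for the SQM.
\begin{align*}
  \textbf{(K)}  &\quad \Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi) \\
  \textbf{(T)}  &\quad \Box\varphi \rightarrow \varphi \\
  \textbf{(4)}  &\quad \Box\varphi \rightarrow \Box\Box\varphi \\
  \textbf{(B)}  &\quad \varphi \rightarrow \Box\Diamond\varphi \\
  \textbf{(BF)} &\quad \forall x\,\Box\varphi \rightarrow \Box\,\forall x\,\varphi
\end{align*}
```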
Item: A FRAMEWORK FOR DEPLOYMENT OF MOBILE AGENTS AS WINDOWS OPERATING SYSTEM SERVICE FOR INFORMATION RETRIEVAL IN DISTRIBUTED ENVIRONMENTS (2013-12). OYATOKUN, BOSEDE OYENIKE
Mobile Agent Technology (MAT), remote method invocation and remote procedure calls are the three most widely used techniques for information storage and retrieval in network environments. Previous studies have shown that MAT provides a more efficient and dynamic approach than the others. However, for mobile agents to perform their various tasks effectively, a static agent platform must be installed on each computer; these platforms consume memory, increase access time and prevent other tasks from running on the computer. An alternative framework that eliminates the problems associated with agent platforms is therefore imperative, and this work aimed to develop a more efficient framework that deploys a mobile agent system as an operating system service. Two classes of existing information retrieval agents were adapted to develop the Embedded Mobile Agent (EMA) system, which was embedded into the Windows operating system kernel so that it could run as an information retrieval service, eliminating the overheads of the middleware provided by agent platforms. The targeted operating systems were Windows XP, Windows Vista and Windows 7. Mathematical models were simulated to assess the performance of EMA by measuring service delay, memory utilisation, fault tolerance, turnaround time at fixed bandwidth with varying numbers of network nodes, and percentage denial of service. Denied services were generated by a random number generator modelled on a Bernoulli random variable with 0.1 probability of failure. The model's performance was then compared with the Java Agent DEvelopment framework (JADE), a widely used open-source mobile agent system that runs on platforms. The implementation used four computers running the targeted Windows versions on an existing local area network, and data were analysed using descriptive statistics and independent t-tests at p = 0.01. The EMA model effectively retrieved information from the network without an agent platform, reducing access times and saving memory regardless of the Windows version. Mean service delay for EMA (15,067.5 ± 8,489.6 ms) was lower than for JADE (15,697.0 ± 8,844.5 ms). The embedded agent required 3 KB of memory to run, compared with 2.83 × 10³ KB for the JADE platform. Mean fault recovery time for EMA was approximately 50% of JADE's (327.8 ± 193.1 ms). Mean turnaround time was 499.7 ± 173.0 ms for EMA and 843.3 ± 321.6 ms for JADE, the difference owing to the time JADE spent activating platforms. Mean percentage denial of service was 14.3 ± 9.8 for EMA against 24.7 ± 18.5 for JADE. Memory requirements and service delay increased with the number of nodes, while the other parameters showed no systematic change; for all parameters tested, there were significant differences between the two schemes. The embedded mobile agent provided a more efficient, dynamic and flexible solution than the Java Agent DEvelopment framework for distributed information retrieval applications, and could be incorporated into new versions of operating systems as a service for universal distributed information retrieval. Keywords: Mobile agent technology, Embedded mobile agent, Operating system service, Java agent development framework. Word count: 497
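A small sketch of the failure-injection step the evaluation describes: denied services drawn from a Bernoulli random variable with failure probability 0.1. The request count and seed are illustrative choices.

```python
# Simulate denied services as Bernoulli(0.1) failures, as in the evaluation above.
import random

random.seed(42)                     # illustrative seed for reproducibility
P_FAIL, N_REQUESTS = 0.1, 1000

denied = sum(random.random() < P_FAIL for _ in range(N_REQUESTS))
print(f"percentage denial of service: {100 * denied / N_REQUESTS:.1f}%")
```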
Item: Improved model for detecting fake profiles in online social network: a case study of Twitter (2019). Ojo, A. K.
An Online Social Network (OSN) is like a virtual community where people build social networks and relationships with one another. Open access to the Internet has accelerated the growth of OSNs, attracting intruders who exploit the weaknesses of the Internet and of OSNs for their own gain. The rise in OSN usage poses security threats to users, who share personal and sensitive information online that intruders can exploit by creating profiles to carry out a series of malicious activities on the social network. The intent behind creating fake accounts is harmful, and the Internet makes it quite easy to conceal one's identity, so fake accounts are difficult to detect because they imitate real ones. This study proposed a model that accurately identifies fake profiles in OSNs, using Natural Language Processing techniques to reduce the size of the dataset and thereby improve the model's overall performance, with Principal Component Analysis used for feature selection. After extraction, six attributes/features that influenced the classifier were found. Support Vector Machine (SVM), Naïve Bayes and an Improved Support Vector Machine (ISVM) were used as classifiers; ISVM introduced a penalty parameter into the standard SVM objective function to reduce the inequality constraints between the slack variables. This gave a better result of 90%, against 77.4% for SVM and 77.3% for Naïve Bayes.

Item: Improved model for facial expression classification for fear and sadness using local binary pattern histogram (2020). Ojo, A. K.; Idowu, T. O.
In this study, a Local Binary Pattern Histogram model was proposed for classifying the facial expressions of fear and sadness. A number of supervised machine learning models have been developed and used for facial recognition in past research, but classifiers that require human effort for feature extraction suffer from unmodelled changes in facial expression, incomplete feature extraction and low accuracy. This study proposed a model to improve classification accuracy for fear and sadness and to extract features that distinguish between them. Images of different people of varying ages were extracted from two datasets: the Japanese Female Facial Expression (JAFFE) dataset and the Cohn-Kanade dataset obtained from Kaggle. To achieve incremental development, classification was done with a Linear Support Vector Machine (LSVM) and a Random Forest Classifier (RFC). The accuracy rates of the LSVM models, LSVM1 and LSVM2, were 88% and 87% respectively, while those of the RFC models, RFC1 and RFC2, were 81% and 82%.
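A hedged sketch of the local-binary-pattern feature step underlying the model above, using scikit-image's local_binary_pattern; the radius, neighbour count and the random stand-in image are illustrative choices, not the study's settings.

```python
# Illustrative LBP-histogram feature extraction for expression classification.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1                          # 8 neighbours at radius 1 (assumed settings)

def lbp_histogram(gray_image):
    """Return a normalised histogram of uniform LBP codes for one face image."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist                      # P + 2 bins exist under the 'uniform' method

face = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
features = lbp_histogram(face)       # feed this vector to an SVM or random forest
print(features.shape)                # (10,)
```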
Item: Improved privacy protection model for prevention of data over-collection in smart devices (2022-12). Oketayo, A. M.; Ojo, A. K.
In this study, a machine learning algorithm was applied to user data stored in a mobile cloud framework to address the problem of data over-collection. A model was designed around the security risk level of each application and the corresponding class level of the user's data on the smartphone, to prevent smartphone apps from accessing and collecting users' private data even while operating within their permission scope. Users can store information in the cloud environment, where huge numbers of users are involved. A mobile agent simulator was developed to generate data and to determine the security risk level of apps against the class level of the data; a permission model was designed to decide whether an app is granted permission to access a user's data. The data were trained using a neural network, and the model was evaluated on accuracy and by comparison with an existing algorithm. The analysis showed that apps were restricted from accessing users' data: deployed on a smartphone, the model prevents apps from over-collecting user data even within their permission scope. The study showed that a neural network combined with mobile cloud computing can be applied to prevent data over-collection in smart devices.

Item: Improvement on emotional variance analysis technique (EVA) for sentiment analysis in healthcare service delivery (Foundation of Computer Science FCS, New York, USA, 2024-05). Agada, V. E.; Ojo, A. K.
This research introduces an innovative approach to improving sentiment analysis in healthcare service delivery by integrating Emotion and Affect Recognition (EAR) techniques into Emotional Variance Analysis (EVA). Leveraging logistic regression, the modifications, which include adjusting confidence thresholds and using the Rectified Linear Unit (ReLU) function, aim to address high polarity and enable real-time analysis. The methodology sets out a systematic process for EAR integration, offering practical insights for healthcare practitioners. Additional datasets, including the Healthcare Patient Satisfaction Data Collection, the 9 Popular Patient Portal App Reviews for November 2023, and the HCAHPS Hospital Ratings Survey, were incorporated to strengthen the robustness and reliability of the approach. Results across three healthcare centres demonstrate the effectiveness of this augmented approach, with comparisons against existing models on standard performance metrics. The approach shows promise, but further research is needed to explore its scalability and generalisability.
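A hedged sketch of the confidence-threshold adjustment mentioned above, applied to a stock logistic regression: predictions are accepted only above a tuned probability threshold rather than the default 0.5. The threshold value and data are illustrative, and the paper's ReLU modification is not reproduced.

```python
# Illustrative confidence-threshold adjustment on logistic-regression scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)[:, 1]           # confidence of the positive class
THRESHOLD = 0.7                              # assumed, stricter than the default 0.5

labels = np.where(proba >= THRESHOLD, 1,     # confident positive
         np.where(proba <= 1 - THRESHOLD, 0, # confident negative
                  -1))                       # abstain / route to human review
print("abstained on", np.sum(labels == -1), "of", len(labels))
```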
