Scholarly works
Permanent URI for this collection: https://repository.ui.edu.ng/handle/123456789/408
Item: Angular displacement scheme (ADS): providing reliable geocast transmission for mobile ad-hoc networks (MANETs) (2008-08)
Onifade, O. F. W.; Ojo, A. K.; Akande, O. O.
In wireless ad hoc environments, two approaches can be used for multicasting: multicast flooding or the multicast tree-based approach. Existing multicast protocols, mainly based on the latter approach, may not work properly in mobile ad hoc networks, as dynamic movement of group members can cause frequent tree reconfiguration with excessive channel overhead, resulting in loss of datagrams. Since the task of keeping the tree structure up to date in the multicast tree-based approach is nontrivial, multicast flooding is sometimes considered an alternative approach for multicasting in MANETs. The scheme presented in this research attempts to reduce the forwarding space for multicast packets beyond earlier schemes, and examines the effect of our improvements on control packet overhead, data packet delivery ratio, and end-to-end delay by further reducing the number of nodes that rebroadcast multicast packets while still maintaining a high degree of accuracy of delivered packets. The simulation was carried out with OMNeT++ to provide a comparative analysis of the performance of the angular scheme against flooding and the LAR box scheme. Our results showed an improvement over both flooding and the LAR box scheme.

Item: An algorithmic framework for hybrid adaptive protocol (HAP) to manage broadcast storm problems in mobile ad-hoc networks (MANETs) (2008-09)
Onifade, O. F. W.; Ojo, A. K.; Okiyi, K. U.
The consequences of pure flooding, which is among the simplest and most straightforward approaches to broadcasting, include redundant broadcasts, contention and collision, collectively referred to as the broadcast storm problem (BSP). This results from plain broadcasting approaches causing signal overlap in a geographical area with wireless communication. The counter-based scheme was developed to reduce the broadcast storm problem; however, to maintain a high delivery ratio in either a sparse or a dense network, different thresholds are required. Because of the nature of MANETs, determining this threshold requires a level of dynamism, without which the scheme's operation will be marred. This research work thus proposed an algorithmic framework to address the BSP, using knowledge of a node's neighbourhood density to dynamically determine the threshold so as to adapt to both dense and sparse networks while limiting the above-stated constraints.

Item: Improving node reachability QoS during broadcast storm in MANETs using neighbourhood density knowledge (NDK) (2008-12)
Onifade, O. F. W.; Ojo, A. K.; Lala, O. G.
The counter-based scheme was developed to reduce the broadcast storm problem; however, to maintain a high delivery ratio in either a sparse or a dense network, different thresholds are required. Because of the nature of MANETs, determining this threshold requires a level of dynamism, without which the scheme's operation will be marred. Our earlier research work proposed an algorithmic framework to address the BSP, using knowledge of a node's neighbourhood density to dynamically determine the threshold so as to adapt to both dense and sparse networks while limiting the above-stated constraints. In this work, we present the simulation results of our attempt to improve reachability of nodes in MANETs using Neighbourhood Density Knowledge (NDK). The major characteristics of MANETs remain indeterminate behaviour in the number of participating nodes, mobility, and sporadic topology changes driven by nodal movement, so any supporting protocol must be able to function under both sparse and dense populations of nodes. With the counter-based threshold value derived from neighbourhood information, an important metric considered is reachability, defined as the ratio of nodes that received the broadcast message to the total number of nodes in the network. Overall, the NDK approach performs best on both sparse and dense networks.
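As an illustration of the density-driven counter-based rule and the reachability metric described in the two items above, here is a minimal Python sketch. The cutoff and threshold values, the function names and the toy network are invented for illustration; they are not the exact formulation used in these papers.

```python
def dynamic_threshold(neighbour_count, density_cutoff=6,
                      sparse_threshold=4, dense_threshold=2):
    """Illustrative rule (all values assumed): sparse neighbourhoods get
    a higher counter threshold so nodes rebroadcast more readily and keep
    reachability up; dense ones get a lower threshold to suppress
    redundant rebroadcasts."""
    return sparse_threshold if neighbour_count < density_cutoff else dense_threshold

def should_rebroadcast(duplicates_heard, neighbour_count):
    """Counter-based rule: rebroadcast only if fewer duplicate copies
    were overheard than the density-derived threshold."""
    return duplicates_heard < dynamic_threshold(neighbour_count)

def reachability(received_nodes, total_nodes):
    """Reachability: ratio of nodes that received the broadcast
    to all nodes in the network."""
    return len(received_nodes) / total_nodes

# Toy check: a sparsely connected node that heard one duplicate rebroadcasts.
print(should_rebroadcast(duplicates_heard=1, neighbour_count=3))  # True
print(reachability(received_nodes={1, 2, 3, 4}, total_nodes=5))   # 0.8
```

In a real protocol the cutoff would itself be derived from observed neighbourhood statistics rather than fixed constants.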
Item: An electronic shopping system with a recommendation agent (2009)
Ojo, A. K.; Emuoyibofarhe, O. J.; Emuoyibofarhe, O. N.; Lala, O. G.; Chukwuemeka, C. U.
There is an inevitable need to improve the operational portfolio of the boutique and eliminate problems such as time consumption, inconsistency and a host of other problems encountered by most business enterprises. This research study focused on the design of a web-based shopping system. The development of this system, like that of every shopping software system, was precipitated by business needs: the need to correct a defect in an existing application, to adapt a legacy system to a changing business environment, to extend the functions and features of an existing application, or to create a new product, service or system. A feasibility study was carried out by interviewing an entrepreneur (business proprietor) in order to acquire knowledge about the mode of operation of the boutique; specialists in the field of fashion design were also interviewed to acquire the knowledge used by the proposed software agent to give recommendations online. The existing system was studied, and deficiencies such as long queues, customer dissatisfaction and staff impatience, as well as the need for customers to get professional guidance, were identified. The database was developed in MySQL using SQLyog, an SQL application compatible with MySQL servers such as WAMP, XAMPP or Zend. The system accepts input from the user, whether an administrator or a customer, processes the input (carries out the required action on the input collected, as specified by the system design) and produces an output (either a completed transaction report and receipt or an outfit recommendation). Interfaces were designed using PHP on the Dreamweaver platform. MySQL on the SQLyog platform was used as the database tool to develop, organise and store all vital details about customers, suppliers, sales, products and product categories. The proposed system is designed to improve speed, accuracy, storage capability, customer satisfaction, job flexibility for the staff, shopping flexibility for the customer and consistency in the boutique; it can be used by trained personnel as well as the general public due to its simplicity. This work elaborates on the implementation and use of software agents in global transactions: people can transact from various locations and have their goods delivered to their doorstep, saving the time and stress involved in physically doing the shopping.

Item: Voice over IP gateway for internet telephony (2011)
Ojo, A. K.; Onifade, O. F. W.
This paper presents the design and implementation of a Voice over IP gateway, with special attention to better voice communication and added functionality for existing Internet communication. It makes use of the Speex codec software to reduce the latency and jitter faced by most VoIP systems. It offers an alternative means of Internet communication that is accessible anytime, anywhere in the world. A modem is used as the hardware platform, and the gateway is implemented by running several programs on the modem under the Windows environment. The paper also summarises the evolution of IP telephony products, and how the newly designed gateway adds to pre-existing functionality in terms of voice communication improvement and standardisation. This paper is aimed at building a web-based Internet telephony system that uses the Internet protocol to transmit voice data over an intranet. It not only examines VoIP technologies, but also designs and implements a VoIP gateway that allows users to make telephone calls over a PSTN network and an IP network, and addresses the issues posed by existing systems, such as limited available bandwidth, packet loss, jitter, echo, security and reliability. As a result, users will not need to buy and install any applications before making a phone call.
Item: Ensuring QoS with adaptive frame rate and feedback control mechanism in video streaming (2012-12)
Onifade, O. F. W.; Ojo, A. K.
Video over best-effort packet networks is encumbered by a number of factors, including unknown and time-varying bandwidth, delay and losses, as well as additional issues such as how to fairly share network resources amongst many flows and how to efficiently perform one-to-many communication for popular content. This research investigates video streaming formats, encoding and compression techniques towards the development and simulation of a rate adaptation model to reduce packet loss. The thrust of this research was to enrich and enhance the quality of video streaming over wireless networks. We developed mathematical models which were thereafter simulated to demonstrate the need to advance the existing solutions for packet scheduling towards recovery from packet loss and error handling in video streaming.

Item: A comparison of the predictive capabilities of artificial neural networks and regression models for knowledge discovery (2013)
Ojo, A. K.; Adeyemo, A. B.
In this paper, Artificial Neural Network (ANN) and regression analysis models were compared to determine which performs better. Prediction was done using one hidden layer and three processing elements in the ANN model, and likewise using regression analysis, with the parameters of the regression model estimated using the least squares method. To determine the better predictor, the mean square error (MSE) of each model was used. Seven real series were fitted and predicted with both models. It was found that the MSE of the ANN model was smaller than that of the regression model, making the ANN the better model for prediction.
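A hedged sketch of the comparison just described, on synthetic data: a one-hidden-layer network with three processing elements (matching the stated architecture) against a least-squares regression, both scored by MSE. The use of scikit-learn and the toy series are assumptions; the paper does not name its tooling.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))                    # synthetic stand-in series
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)   # nonlinear target with noise

# One hidden layer with three processing elements, as in the abstract.
ann = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000,
                   random_state=0).fit(X, y)
reg = LinearRegression().fit(X, y)                        # least-squares estimation

print("ANN MSE:       ", mean_squared_error(y, ann.predict(X)))
print("Regression MSE:", mean_squared_error(y, reg.predict(X)))
```

On a nonlinear series like this one, the network's MSE is typically the smaller of the two, mirroring the paper's finding.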
Item: Knowledge discovery in academic electronic resources using text mining (2013-02)
Ojo, A. K.; Adeyemo, A. B.
Academic resource documents contain important knowledge and research results, and they carry high-quality information. However, they are lengthy and noisy, so analysing them manually takes a great deal of human effort. Text mining can be used to analyse these textual documents and extract useful information from large numbers of documents quickly and automatically. In this paper, abstracts of electronic publications from the African Journal of Computing and ICTs, an IEEE Nigerian Computer Chapter publication, were analysed using text mining techniques. A text mining model was developed and used to analyse the abstracts collected. The texts were transformed into structured data in frequency form and cleaned up, the documents were split into series of word features (adjectives, verbs, adverbs, nouns), and the necessary words were extracted from the documents. The corpus collected had 1637 words. The word features were then analysed by classifying and clustering them. The text mining model developed is capable of mining texts from academic electronic resources, thereby identifying the weak and strong issues in those publications.
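The pipeline this item describes (clean-up, word-frequency features, clustering) can be sketched in a few lines. The three stand-in abstracts, the stop-word filtering as a proxy for the paper's filtering and stemming steps, and the choice of k-means are all illustrative assumptions, not the paper's exact model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Text mining extracts useful information from documents",
    "Neural networks predict time series values",
    "Clustering groups similar documents together",
]  # stand-in corpus; the study used journal abstracts

# Bag-of-words frequencies with stop-word filtering (a simple proxy
# for the paper's cleaning, filtering and stemming steps).
vectoriser = CountVectorizer(stop_words="english")
X = vectoriser.fit_transform(abstracts)

# Cluster the word-frequency vectors, as the abstract describes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(abstracts, labels)))
```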
Item: A FRAMEWORK FOR DEPLOYMENT OF MOBILE AGENTS AS WINDOWS OPERATING SYSTEM SERVICE FOR INFORMATION RETRIEVAL IN DISTRIBUTED ENVIRONMENTS (2013-12)
OYATOKUN, BOSEDE OYENIKE
Mobile Agent Technology (MAT), remote method invocation and remote procedure calls are the three most widely used techniques for information storage and retrieval in network environments. Previous studies have shown that MAT provides a more efficient and dynamic approach to information storage and retrieval than the others. However, for mobile agents to perform their various tasks effectively, a static agent platform must be installed on the computers. These platforms consume more memory, increase access time and prevent other tasks from running on the computer. Therefore, an alternative framework that will eliminate the problems associated with the agent platform is imperative. Consequently, this work was aimed at developing a more efficient framework for deploying a mobile agent system as an operating system service. Two classes of existing information retrieval agents were adapted to develop the Embedded Mobile Agent (EMA) system. The EMA was embedded into the Windows Operating System (OS) kernel so that it could run as a service for information retrieval. This was done to eliminate the overheads associated with the middleware provided by agent platforms. The targeted OS were Windows XP, Windows Vista and Windows 7. Mathematical models were simulated to assess the performance of EMA by measuring service delay, memory utilisation, fault tolerance, turnaround time at fixed bandwidth with varying numbers of network nodes, and percentage denial of service. Denied services were generated by a random number generator modelled after the Bernoulli random variable with 0.1 probability of failure. The model's performance was then compared with the Java Agent DEvelopment framework (JADE), a widely used open-source mobile agent system that runs on agent platforms. The implementation was done using four computer systems running the targeted Windows versions on an existing local area network. Analysis of data was done using descriptive statistics and the independent t-test at p = 0.01. The EMA model effectively retrieved information from the network without an agent platform, thereby reducing access times and saving memory, regardless of the version of the Windows OS. The mean service delay for EMA (15067.5 ± 8489.6 ms) was lower than that of JADE (15697.0 ± 8844.5 ms). The embedded agent required 3 KB of memory to run, compared to the JADE platform's 2.83 × 10³ KB. The mean fault tolerance in terms of fault recovery time for EMA was approximately 50% of that of JADE (327.8 ± 193.1 ms). The mean turnaround time for EMA was 499.7 ± 173.0 ms, while that of JADE was 843.3 ± 321.6 ms, consequent on the time JADE spent activating platforms. The mean percentage denial of service for EMA was 14.3 ± 9.8, while that of JADE was 24.7 ± 18.5. Memory requirements and service delay increased with increasing numbers of nodes, while the other metrics showed no systematic change. For all the parameters tested, there were significant differences between the two schemes. The embedded mobile agent provided a more efficient, dynamic and flexible solution than the Java Agent DEvelopment framework for distributed information retrieval applications. It could be incorporated into new versions of operating systems as an operating system service for universal distributed information retrieval. Keywords: Mobile agent technology, Embedded mobile agent, Operating system service, Java agent development framework. Word count: 497
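The denial-of-service figures above come from a Bernoulli failure model with probability 0.1, which is simple to reproduce. The sketch below is only that failure model; the request count and seed are arbitrary, and the EMA/JADE measurement harness is not reproduced.

```python
import random

def simulate_denials(n_requests: int, p_fail: float = 0.1, seed: int = 42) -> float:
    """Percentage of requests denied under a Bernoulli failure model
    with failure probability p_fail (0.1 in the thesis)."""
    rng = random.Random(seed)
    denied = sum(1 for _ in range(n_requests) if rng.random() < p_fail)
    return 100.0 * denied / n_requests

print(f"Denied: {simulate_denials(10_000):.1f}%")  # about 10% in expectation
```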
Item: FORMALISING THE LOGIC OF SPATIAL QUALIFICATION USING A QUALITATIVE REASONING APPROACH (2014-04)
BASSEY, PATIENCE CHARLES
The spatial qualification problem, an aspect of spatial reasoning, is concerned with the impossibility of knowing an agent's presence at a specific location and time. An agent's location determines its ability to carry out an action given its known spatial antecedents. There are sparse works on the formalisation of this problem. The qualitative reasoning approach is the most widely used approach for spatial reasoning due to its ability to reason with incomplete knowledge or a reduced data set. This approach has been applied to spatial concepts such as shape, size, distance and orientation, but not to spatial qualification. Therefore, this work was aimed at formalising a logical theory for reasoning about the spatial qualification of an agent to carry out an action based on prior knowledge, using a qualitative reasoning approach. The notions of persistence, discretisation and commutative distance coverage were used as parameters in formalising the concept of spatial qualification. The axioms and derivation rules for the theory were formally represented using quantified modal logic. The formalised theory was compared with standardised systems of axioms: S4 (containing Kripke's minimal system K and axioms T and 4) and S5 (containing K, T, 4 and axiom B). The characteristics of the domain of the formalised theory were compared with Barcan's axioms, and its semantics were described using Kripke's possible world semantics (PWS) with a constant domain across worlds. A proof system for reasoning with the formalised theory was developed using the analytic tableau method. The theory was applied to an agent's local distribution planning task with a set deadline. Cases with known departure times and routes were considered to determine the possibility of an agent's presence at a location. From the formalisation, a body of axioms named the Spatial Qualification Model (SQM) was obtained. The axioms showed the presence log and reachability of locations as determinants of an agent's spatial presence. The properties exhibited by the formalised model, when examined in light of the S4 and S5 systems of axioms, were KP1 and KP2 (equivalent to axiom K), and TP and 4P (equivalent to axioms T and 4 respectively) in an S4 system. The SQM therefore demonstrated the characteristics of an S4 system of axioms but fell short of being an S5 system. Barcan's axiom held, confirming a constant domain across possible worlds in the formalised model. Explicating the axioms in the SQM using PWS enabled the understanding of the tableau proof rules. Through closed tableaux, the SQM was shown to be semi-decidable, in the sense that the possibility of an agent's presence at a certain location and time was only provable in the affirmative, while its negation was not. Depending on the route, the application of the SQM to the product distribution planning domain resulted in the agent's feasible availability times, within or outside the set deadline, to assess the agent's spatial qualification in agreement with possible cases in the planning task. The spatial qualification model specified the spatial presence log and reachability axioms required for reasoning about an agent's spatial presence. The model successfully assessed plans for product distribution tasks from one location to another for van availability. Keywords: Spatial qualification model, Quantified modal logic, Tableau proof, Possible world semantics. Word count: 497
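For reference, here are the standard modal axiom schemata this abstract measures the SQM against, in LaTeX (S4 = K + T + 4; S5 additionally contains B). The SQM's own axioms KP1, KP2, TP and 4P are not reproduced here, only the classical schemata they are said to mirror.

```latex
\begin{align*}
  \text{K:}      &\quad \Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi) \\
  \text{T:}      &\quad \Box\varphi \rightarrow \varphi \\
  \text{4:}      &\quad \Box\varphi \rightarrow \Box\Box\varphi \\
  \text{B:}      &\quad \varphi \rightarrow \Box\Diamond\varphi \\
  \text{Barcan:} &\quad \forall x\, \Box\varphi(x) \rightarrow \Box\,\forall x\, \varphi(x)
\end{align*}
```

In Kripke semantics these correspond, respectively, to no condition on the accessibility relation (K), reflexivity (T), transitivity (4) and symmetry (B); the Barcan formula holds when the domain is constant across worlds, as the abstract reports.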
Item: Improving information acquisition via text mining for efficient e-governance (2015-03)
Adeyemo, A. B.; Ojo, A. K.
In this paper we proposed a framework for integrating text mining with e-governance. We suggested that users of electronic governance can use text terms to describe their interests, which can then be processed for clustering and term extraction. The words thus expressed by users are tracked and processed, making it possible to generate content from them. We provided the framework and tested it on a few websites, using clustering and pre-processing for content management. The results are encouraging, and it is possible to extend such exercises to other text mining processes.

Item: Trend analysis in academic journals in computer science using text mining (IJCSIS Publication, 2015-04)
Ojo, A. K.; Adeyemo, A. B.
Text mining is the process of discovering new, hidden information from texts: structured, semi-structured and unstructured. Many benefits, valuable insights, discoveries and useful information can be derived from unstructured or semi-structured data. In this study, text mining techniques were used to identify trends of different topics that exist in text and how they change over time. Keywords were crawled from the abstracts in the Journal of Computer Science and Technology (JCST), one of the ISI-indexed journals in the field of Computer Science, from 1993 to 2013. The results of our analysis clearly showed a varying trend in the representation of various subfields in a Computer Science journal from decade to decade. It was discovered that the research direction was changing from pure mathematical foundations and Theory of Computation to Applied Computing and Artificial Intelligence, in the form of Robotics and Embedded Systems.

Item: MODELLING AND MITIGATING MINOR-THREATS IN NETWORK THREAT MANAGEMENT (2015-07)
ORIOLA, OLUWAFEMI
Network Threat Management (NTM) is used to model and mitigate network threats, classified as major-threats and minor-threats, without exceeding Cost of Detection (CD), Time of Detection (TD) and False Positive Rate (FPR) limits. Existing network threat modelling and mitigation frameworks focused on major-threats because, until recently, only major-threats were usually harmful, while minor-threats were perceived as non-harmful. Recent studies, however, have shown that some minor-threats are harmful. This study was designed to model and mitigate minor-threats in NTM. The Threat Prediction Model (TPDM) and Threat Prioritisation Model (TPRM) were used for modelling, while the Threat Mitigation Model (TMTM) was used for mitigation. The TPDM was modified to identify minor-threats by incorporating actionable attributes. The modified TPDM's accuracy was compared with the TPDM's based on confidence, with a 1.0 benchmark. The TPRM was modified to rate minor-threats using the Dempster-Shafer method and compared with the snort-classifier and the Common Vulnerability Scoring System (CVSS) as standards. A rating between 0 and 5 meant 'less harmful', while a rating above 5 meant 'moderately harmful'. The modified TPDM and TPRM were implemented in Java. The TMTM was modified using Hillson's risk mitigation model. The CD (based on number of rules), TD and FPR were used to compare the modified TMTM and the TMTM for snort and suricata implementations. Real-life minor-threats known as Plymouth University Advanced Persistent Threats (PUAPT) were developed using metasploit for analysis. Existing Lincoln Lab Denial of Service (LLDOS) minor-threats were also analysed for standardisation. The CD, TD and FPR limits for the PUAPT analysis were set at 5_rules, 60_seconds and 25% respectively, while those for LLDOS were 5_rules, 90_seconds and 25%. Data were analysed using descriptive statistics. In the PUAPT analysis, the modified TPDM was accurate with a confidence of 1.0, compared to 0.0 for the existing TPDM. The modified TPRM rated harmful minor-threats as moderately harmful and non-harmful ones as less harmful. The snort-classifier rated both harmful and non-harmful minor-threats as less harmful, while CVSS rated none of the minor-threats. With the modified TMTM for the snort implementation, CD, TD and FPR of 5_rules, 1_second and 2.7% respectively were incurred, compared to 19082_rules, 240_seconds and 99.1% for the existing TMTM. With the modified TMTM for the suricata implementation, CD, TD and FPR of 5_rules, 1_second and 1.2% respectively were incurred, compared to 18701_rules, 240_seconds and 99.8% for the existing TMTM. The modified TPDM for LLDOS was accurate with a confidence of 1.0, compared to 0.1 for the existing TPDM. The modified TPRM rated harmful minor-threats as moderately harmful and non-harmful ones as less harmful; the snort-classifier rated both harmful and non-harmful minor-threats as less harmful, and CVSS rated only minor-threats with vulnerabilities. With the modified TMTM for the snort implementation, CD, TD and FPR of 5_rules, 3_seconds and 21.1% respectively were incurred, compared to 19082_rules, 480_seconds and 99.9% for the existing TMTM. With the modified TMTM for the suricata implementation, CD, TD and FPR of 5_rules, 75_seconds and 1.3% respectively were incurred, compared to 18701_rules, 480_seconds and 99.0% for the existing TMTM. The modified models accurately modelled and mitigated minor-threats without exceeding the cost of detection, time of detection and false positive rate limits. The modified models are recommended for modelling and mitigating minor-threats in network threat management. Keywords: Network threat management, Minor-threat, Threat modelling, Threat mitigation. Word count: 500
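The modified TPRM above rates minor-threats with the Dempster-Shafer method; the following is a minimal sketch of Dempster's rule of combination, the core of that method. The two-hypothesis frame of discernment and the sensor mass values are invented for illustration and are not the thesis's actual evidence model.

```python
def combine(m1, m2):
    """Dempster's rule of combination over mass functions keyed by
    frozensets of hypotheses; conflicting mass is normalised away."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Invented evidence over {harmful, benign} from two alert sources.
HARMFUL, BENIGN = frozenset({"harmful"}), frozenset({"benign"})
EITHER = HARMFUL | BENIGN
source1 = {HARMFUL: 0.6, EITHER: 0.4}
source2 = {HARMFUL: 0.7, BENIGN: 0.1, EITHER: 0.2}
print(combine(source1, source2))  # mass on "harmful" rises to about 0.87
```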
Item: DEVELOPMENT OF ADVANCED DATA SAMPLING SCHEMES TO ALLEVIATE CLASS IMBALANCE PROBLEM IN DATA MINING CLASSIFICATION ALGORITHMS (2015-09)
FOLORUNSO, SAKINAT OLUWABUKONLA
Classification is the process of finding a set of models that distinguish data classes in order to predict unknown class labels in data mining. The class imbalance problem occurs when standard classifiers are majority-biased and the minority class is ignored. Existing classifiers tend to maximise overall prediction accuracy and minimise error at the expense of the minority class. However, research has shown that the misclassification cost of the minority class is higher and should not be ignored, since it is the class of interest. This work was therefore designed to develop advanced data sampling schemes that improve the classification performance of imbalanced datasets, with a view to increasing the recall of the minority class. The Synthetic Minority Oversampling Technique (SMOTE) was extended to SMOTE+300% and combined with existing under-sampling schemes: Random Under-Sampling (RUS), Neighbourhood Cleaning Rule (NCL), Wilson's Edited Nearest Neighbour (ENN) and Condensed Nearest Neighbour (CNN). Five advanced data sampling scheme algorithms (SMOTE300ENN, SMOTE300RUS, SMOTE300NCL, SMOTENCL and SMOTERUS) were coded in Java and implemented in the data mining tool WEKA as an Application Programming Interface. The existing and developed schemes were applied to 886 Diabetes Mellitus (DM), 1,163 Senior Secondary School Certificate Result (SSSCR) and 786 Contraceptive Methods (CM) datasets. The datasets were collected in Ilesha and Ibadan, Nigeria. Their performances were determined with different classification algorithms using Receiver Operating Characteristics (ROC), recall of the minority class and performance gain metrics. Friedman's test at p = 0.05 was used to analyse these schemes against the classification algorithms. The ROC metric revealed that the mean rank values for the DM, SSSCR and CM datasets treated with the advanced schemes ranged from 6.9-13.8, 3.8-12.8 and 6.6-13.5 respectively, compared with the existing schemes, which ranged from 3.4-7.8, 2.6-12.6 and 2.8-7.9 respectively. These results signify improved classification performance. The recall metric analysis for the DM, SSSCR and CM datasets under the advanced schemes ranged from 9.4-13.0, 6.3-14.0 and 7.3-13.6 respectively, compared with the existing schemes' 2.0-7.5, 2.5-8.9 and 2.1-7.4 respectively. These results show increased detection of the minority class. Performance gains by the advanced schemes over the original datasets (DM, SSSCR and CM) were: SMOTE300ENN (27.1%), SMOTE300RUS (11.6%), SMOTE300NCL (15.5%), SMOTENCL (8.3%) and SMOTERUS (7.3%). A significant difference was observed amongst all the schemes. The higher the mean rank value and performance gain, the better the scheme. The SMOTE300ENN scheme gave the highest ROC and recall values in the three datasets, which were 13.8, 12.8, 12.3 and 13.0, 14.0, 13.6 respectively. The developed Synthetic Minority Oversampling Technique 300 with Wilson's Edited Nearest Neighbour scheme significantly improved classification performance and increased the recall of the minority class over the existing schemes on the same datasets. It is therefore recommended for the classification of imbalanced datasets. Keywords: Imbalanced dataset, Receiver operating characteristics, Data reduction techniques. Word count: 445
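A hedged sketch of the oversample-then-clean idea behind schemes like SMOTE300ENN, using the imbalanced-learn library's SMOTEENN (SMOTE oversampling followed by Edited Nearest Neighbours cleaning) on synthetic data. The thesis's own WEKA/Java implementation and its fixed 300% oversampling rate are not reproduced here; availability of the imbalanced-learn package is assumed.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTEENN

# Synthetic imbalanced data standing in for the thesis datasets:
# roughly 9:1 majority to minority.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE oversampling of the minority class, then Wilson's Edited
# Nearest Neighbour cleaning of noisy samples.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))
```

A fixed oversampling amount closer to the thesis's 300% could be approximated by configuring SMOTE's sampling_strategy before the cleaning step.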
Item: Characterisation of academic journal publications using text mining techniques (Science and Education Publishing, 2017)
Ojo, A. K.; Adeyemo, A. B.
The ever-growing volume of published academic journals, and the implicit knowledge that can be derived from them, has not fully enhanced knowledge development but has instead resulted in information and cognitive overload. Publication data, moreover, are textual, unstructured and anomalous. Analysing such high-dimensional data manually is time-consuming, and this has limited the ability to make projections and identify trends derivable from the patterns hidden in various publications. This study was designed to develop and use intelligent text mining techniques to characterise academic journal publications. The Journal Scoring Criteria of nineteen rankers from 2001 to 2013, from the 50th edition of the Journal Quality List (JQL), were used as criteria for selecting the highly rated journals. The text-miner software developed was used to crawl and download the abstracts of papers and their bibliometric information from the articles selected from these journals. The datasets were transformed into structured data and cleaned using filtering and stemming algorithms. Thereafter, the data were grouped into series of word features based on a bag-of-words document representation. The highly rated journals were clustered using the Self-Organising Map (SOM) method, with attribute weights in each cluster.

Item: Projecting the future direction of publication patterns using text mining (2017-07)
Ojo, A. K.
In this study, text mining techniques were used to identify various research trends in academic journal publications. These techniques were applied to uncover trends in research patterns related to various specialisation areas in Computer Science academic journal articles over a period of two decades. The corpus mined was crawled online, pre-processed and transformed into structured data using filtering and stemming algorithms. The data were grouped into series of word features based on a bag-of-words document representation. The abstracts and keywords of the articles selected from these journals were used as the dataset. It was discovered that the publication trends have changed tremendously over time, from communications and security to artificial intelligence.

Item: Predictive analysis for journal abstracts using polynomial neural networks algorithm (2017-07)
Ojo, A. K.
Academic journals are an important outlet for the dissemination of academic research. In this study, a neural network model was used in the prediction of abstracts from the Institute of Electrical and Electronics Engineers (IEEE) Transactions on Computers. Simulation of results was done using the Polynomial Neural Networks algorithm. This algorithm, which is based on the Group Method of Data Handling (GMDH), utilises a class of polynomials such as linear, quadratic and modified quadratic. The prediction was done for a period of twenty-four months using a predictive model of three layers and two coefficients. The performance measures used in this study were mean square error, mean absolute error and root mean square error.
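Polynomial neural networks in the GMDH tradition build predictions from small quadratic units over pairs of inputs. The sketch below fits one such Ivakhnenko quadratic unit by least squares and reports its MSE; the synthetic data and the use of NumPy's lstsq are assumptions standing in for the full layered algorithm used in the paper.

```python
import numpy as np

def quad_features(a, b):
    """Ivakhnenko quadratic unit basis: [1, a, b, ab, a^2, b^2]."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
# Synthetic target with a known quadratic structure plus noise.
y = 1.5 * x1**2 - 0.8 * x1 * x2 + 0.3 + 0.05 * rng.standard_normal(300)

F = quad_features(x1, x2)
coef, *_ = np.linalg.lstsq(F, y, rcond=None)   # least-squares fit of the unit
mse = np.mean((F @ coef - y) ** 2)
print("coefficients:", np.round(coef, 3))
print("MSE:", round(float(mse), 5))
```

In the full GMDH algorithm, many such units are fitted over all input pairs, the best are kept by validation error, and their outputs feed the next layer.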
Item: Self-disciplinary time-restricted smartphone addiction management system using (Android) mobile technology (2018)
Ojo, A. K.; Ohajinwa, R. S.
The value of smartphone devices has increased tremendously over the last few years, especially with the development of mobile applications, which have been beneficial. According to Google, there were over a billion active users of mobile applications deployed on the Android Play Store as of 2015. This means there are over a billion users who actually spend time on their smartphones. However, despite the tremendous advantages of smartphones and the applications that can be installed on them, there are negative consequences of spending countless hours on a smartphone, and applications that help curb these addictions are scarce. A large inefficiency still exists in the current systems that help to curb or manage smartphone addiction. In view of the above, this study set out to present a time-restricted smartphone addiction management system that is effective, efficient and relevant. The study explored the use of mobile technology in the design and development of the system (the application), which enables users to select the applications they want to lock within a specified period of time, and at the same time gives them an overall view of how often they use the applications installed on the device.

Item: Predicting phishing websites using support vector machine and multi-class classification based on association rule techniques (2018-06)
Woods, N. C.; Agada, V. E.; Ojo, A. K.
Phishing is a semantic attack which targets the user rather than the computer. It is a new Internet crime in comparison with other forms such as viruses and hacking. Considering the damage phishing websites have caused to various economies by collapsing organisations, stealing information and diverting funds, various researchers have embarked on different ways of detecting phishing websites, but there has been no agreement about the best algorithm to use for prediction. This study integrates the strengths of two algorithms, Support Vector Machines (SVM) and Multi-Class Classification Rules based on Association Rules (MCAR), to establish a stronger and better means of predicting phishing websites. A total of 11,056 websites from PhishTank and the Yahoo directory were used to verify the effectiveness of this approach. Feature extraction and rule generation were done with the MCAR technique; classification and prediction were done with the SVM technique. The results showed that the technique achieved 98.30% classification accuracy with a computation time of 2205.33 s and a minimal error rate. It showed an Area Under the Curve (AUC) of 98%, indicating the proportion of accuracy in classifying phishing websites, and the model explained 82.84% of the variance in the prediction of phishing websites based on the coefficient of determination. Using the two techniques together to detect phishing websites produced a more accurate result, as it combined the strengths of both techniques; this research work built on this advantage by constructing a hybrid of the two techniques to help produce a more accurate result.
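A hedged sketch of the SVM stage of the hybrid described above. The MCAR rule-mining stage is summarised here as a pre-built binary feature matrix; the feature names, the toy labelling rule and the use of scikit-learn are invented for illustration and are not the paper's actual dataset or tooling.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Invented binary features such as MCAR-style rules might produce
# (e.g. has_ip_in_url, long_url, at_symbol, ...); label 1 = phishing.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 8))
y = ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)   # toy labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)           # SVM classification stage
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In the paper's pipeline, the binary matrix would instead come from association rules mined over the 11,056 labelled websites, with the SVM trained on those rule-derived features.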
