Recent Submissions
- Predicting student performance trajectory by analysing internet technology utilization behavioural patterns: case of Kenyan university students (Strathmore University, 2019) Khakata, E. N. G.
  Learning within universities today is continuously being revolutionized by the presence of, and advancements in, internet technology. The use of internet technology by students in the learning process is greatly influenced by the adoption and utilization of the technology within their learning institutions. However, despite the investments institutions make in providing internet technology, it is not possible to determine whether the technology positively contributes to better student performance. Similarly, students expend a certain level of effort in order to use the technology in the learning process, yet it is not possible to determine whether that effort contributes to positive performance in their studies. Likewise, taking into account a student's behaviour and the result they expect to achieve at the end of a learning process, it is not possible to determine the degree to which the student's effort (the effectiveness of student effort) contributes to improved performance. There is therefore a need for a student performance prediction model that considers the investments made by institutions, the student effort expended and the effectiveness of that effort in the utilization of internet technology. The scientific contribution of this thesis is the generation of the student performance trajectory and the development of a student performance prediction model that focuses on student behaviour within a learning environment at a specific instant in time. This model will help educational practitioners analyse the contextual factors existing within an institution, and how those factors influence student performance, without carrying out longitudinal research that would be time- and resource-intensive.
  This research considered three major factors in the prediction of student performance: the investment costs, the student effort and the effectiveness of student effort. Investment costs cover student behavioural costs such as the time budget, the physical costs and the mental budget. Student effort encompasses the behavioural intentions and actions of the students. The effectiveness of student effort considers the expected outcome from performing an action together with the behavioural costs. The time budget was mainly influenced by time spent using internet technology, while the physical costs were determined by the physical environment and general infrastructure in the universities. The behavioural intentions and actions of a student were examined using the student's capability, the student's attitude, the relevance of the technology in the learning process, the productivity achieved in using the technology and the student's knowledge of using the technology in learning. The key findings of this research showed that internet technology was a useful resource in the students' learning process and that the students had embraced its use with vigour. The students perceived the technology as easy to use and useful in their studies; they had sufficient knowledge of its use in learning and had used it to accomplish a number of tasks in their learning process. Furthermore, some universities had invested sufficiently in the provision of internet technology, and their students had consequently benefited greatly from it. The study concluded by formulating, from the key research findings, the input factors used in predicting the student performance perceptions and the student performance trajectory. These formed the major research output, and they can be used to predict student performance at a given instant in time.
  Keywords: Internet technology, internet utilization, Cobb-Douglas theorem, student performance, predictive model, prediction algorithms, decision tree.
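The keywords name the Cobb-Douglas theorem as the basis for combining investment, effort and effort-effectiveness into a performance estimate. The thesis's actual model is not reproduced in the abstract; the following is a minimal Python sketch of a Cobb-Douglas-style predictor, in which the function name, the exponent values and the assumption of inputs normalized to (0, 1] are all illustrative, not taken from the thesis:

```python
def predict_performance(investment, effort, effectiveness,
                        alpha=0.3, beta=0.4, gamma=0.3, scale=1.0):
    """Cobb-Douglas-style performance estimate:
    P = A * I^alpha * E^beta * F^gamma,
    where I = institutional investment, E = student effort and
    F = effectiveness of that effort, each normalized to (0, 1].
    Exponents are hypothetical weights, not fitted values."""
    return (scale
            * (investment ** alpha)
            * (effort ** beta)
            * (effectiveness ** gamma))

# With alpha + beta + gamma == 1 the form has constant returns to
# scale: scaling all three inputs scales the prediction equally.
p = predict_performance(0.9, 0.7, 0.6)
```

In a Cobb-Douglas form, the exponents express the relative contribution of each factor, which matches the abstract's framing of weighing institutional investment against student effort and its effectiveness.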
- Energy-efficient resources utilization algorithm in cloud data center servers (Strathmore University, 2019) Kenga, D. M.
  In recent years, the use of cloud computing has increased exponentially to satisfy computing needs, a growth attributable to its success in delivering service on a pay-as-you-go basis. As a result, Cloud Service Providers (CSPs) are putting up more data centers to meet the demand. However, the high amount of energy consumed by cloud data center servers has raised concern because CSPs experience high operating costs (electricity bills), which reduce profits. The cause of high energy usage in data center servers is energy wastage resulting from low server utilization. This problem is currently addressed through Virtual Machine (VM) consolidation and Dynamic Voltage and Frequency Scaling (DVFS). Unfortunately, VM consolidation does not consider workload types and VM sizes, which are factors that affect the level of server utilization. DVFS, on the other hand, is designed for processor-bound tasks because the dynamic power ranges of other computing resources, such as memory, are narrower. In this study, the effect of the workload types (heterogeneous or homogeneous) running in VMs, and of VM sizing, on data center server energy consumption was investigated. The results obtained from the conducted experiments show that, from an energy consumption perspective, heterogeneous workloads are more consolidation-friendly than homogeneous workloads. Further, a review of the literature revealed that oversized VMs lead to low server utilization and thus to energy wastage. Consequently, VM allocation and VM sizing algorithms were proposed and tested. The VM allocation algorithm co-locates heterogeneous workloads, whereas the VM sizing algorithm is used for VM right-sizing.
  To test the applicability of the proposed algorithms in the cloud, they were evaluated through simulations on a cloud simulator (CloudSim Plus) using workload logs obtained from a production data center (Grid Workload Archive Trace 13 (GWA-T-13)). The evaluations showed that the designed VM allocation algorithm reduced data center server energy consumption by 4%, 11% and 17% compared with the Worst Fit (WF), First Fit (FF) and Best Fit (BF) VM allocation algorithms respectively. The VM sizing algorithm, in turn, reduced energy consumption, memory wastage and CPU wastage by at least 40%, 61% and 41% respectively. From the results, we concluded that workload types and VM sizes affect the level of server utilization, which in turn determines energy consumption. Thus, the right workload types combined with right-sized VMs lead to high server utilization and, consequently, to energy savings.
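The abstract says the proposed VM allocation algorithm co-locates heterogeneous workloads but does not give its pseudocode. The sketch below is one plausible greedy reading of that idea, assuming each host tracks remaining CPU and memory and each VM is classified as CPU-bound or memory-bound; all class and function names here are hypothetical, not the thesis's:

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float
    mem: float

@dataclass
class Host:
    cpu: float                      # remaining CPU capacity
    mem: float                      # remaining memory capacity
    vms: list = field(default_factory=list)

def heterogeneity_score(host, vm):
    """Score 1 if the VM's dominant resource differs from the host's
    current dominant load (mixing CPU-bound with memory-bound work),
    0 for an empty host or a same-dominance pairing."""
    if not host.vms:
        return 0
    used_cpu = sum(v.cpu for v in host.vms)
    used_mem = sum(v.mem for v in host.vms)
    return 1 if (used_cpu >= used_mem) != (vm.cpu >= vm.mem) else 0

def allocate(vm, hosts):
    """Place the VM on the feasible host with the best heterogeneity
    score, breaking ties by tightest remaining CPU fit (best fit)."""
    feasible = [h for h in hosts if h.cpu >= vm.cpu and h.mem >= vm.mem]
    if not feasible:
        return None
    best = max(feasible,
               key=lambda h: (heterogeneity_score(h, vm),
                              -(h.cpu - vm.cpu)))
    best.vms.append(vm)
    best.cpu -= vm.cpu
    best.mem -= vm.mem
    return best
```

Packing a memory-bound VM next to a CPU-bound one drives both resources of a host toward full utilization, which is the mechanism the abstract credits for the energy savings of heterogeneous consolidation.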
- A Monte Carlo tree search algorithm for optimization of load scalability in database systems (Strathmore University, 2020) Omondi, Allan Odhiambo
  Variable environmental conditions and runtime phenomena require developers of complex business information systems to expose configuration parameters to system administrators. This allows system administrators to intervene by tuning bottleneck configuration parameters in response to current changes, or in anticipation of future changes, in order to maintain the system's performance at an optimum level. However, these manual performance-tuning interventions are error-prone and unstandardized, owing to varying levels of expertise and over-reliance on inaccurate predictions of future states of a business information system. The purpose of this research was therefore to investigate how to design an algorithm that proactively reconfigures bottleneck parameters without over-relying on an accurate model of a stochastic environment. This was done using a comparative experimental research design that involved quantitative data collection through simulations of different algorithm variants. The research built on the theoretical concepts of control theory and decision theory, coupled with the estimation of unknown quantities using principles of simulation-based inferential statistics. Subsequently, Monte Carlo Tree Search, with a variant of the selection stage, was used as the foundation of the designed algorithm. The selection stage was varied by applying a "lean Last Good Reply with Forgetting" (lean-LGRF) strategy and first tested in the context of a strategy board game, Reversi. Over 1,000 playouts, the lean-LGRF selection strategy recorded the highest number of wins against the baseline Upper Confidence Bound applied to Trees (UCT) selection strategy. The Progressive Bias selection strategy, by contrast, had a win rate of 45.8% against the UCT selection strategy.
  Lastly, as expected, the UCT selection strategy had a win rate of 49.7% (an almost 50-50 win rate) against itself. The results were then subjected to a Chi-square (χ2) test, which provided evidence that the variation technique applied in the selection stage of the algorithm had a significantly positive impact on its performance. The superior selection variant was then applied in the context of a distributed database system. The results here were also compelling: applying the algorithm in a distributed database system yielded a response-time latency 27% lower than the average response-time latency and a transaction throughput 17% higher than the average transaction throughput.
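The abstract does not specify the lean-LGRF variant's internals, but the UCT baseline it is compared against is the standard MCTS selection rule and can be sketched. Below is a minimal Python version of UCT selection over a dictionary-based tree node (the node representation is an assumption for illustration):

```python
import math

def uct_select(parent, c=1.4142):
    """Standard UCT selection: favour the child maximizing
    win_rate + c * sqrt(ln(parent_visits) / child_visits).
    The exploration constant c trades off exploitation (first
    term) against exploration (second term); unvisited children
    are expanded before the formula is applied."""
    for child in parent["children"]:
        if child["visits"] == 0:
            return child
    log_n = math.log(parent["visits"])
    return max(parent["children"],
               key=lambda ch: ch["wins"] / ch["visits"]
                              + c * math.sqrt(log_n / ch["visits"]))
```

Selection variants such as lean-LGRF or Progressive Bias replace or augment this scoring rule while leaving the other MCTS stages (expansion, playout, backpropagation) unchanged, which is why the thesis can compare them head to head.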
- Deriving a transparent dataspace-oriented entity associative algorithm (Strathmore University, 2014-06) Shibwabo, Bernard K.; Wanyembi, Gregory W.; Kiraka, Ruth; Ateya, Ismail; Orero, Joseph
  Organizations possess data residing in varied data sources, yet there is no effective way of integrating these repositories to provide information to end users transparently. This is primarily because the existing data is stored in databases that use varied models and techniques for both storage and access. The main aim of this research was to formulate a set of algorithms to support the development of a dataspace support platform that integrates data residing in divergent data stores. These techniques facilitate the association of data entities in a dataspace by enabling entity coexistence across divergent data stores. The research objectives were to analyze the state of dataspace implementation, to develop a model that outlines the criteria for successful dataspace design, to develop a dataspace support platform that integrates data residing in divergent data stores, and to conduct experiments validating the scalability of the implemented dataspace support platform. To achieve these objectives, soft systems theory was applied. A literature survey approach was adopted, and its findings were supplemented through brainstorming and further experiments. The findings were used to identify facts pertaining to the principles, design and implementation of a dataspace support platform. The final outcome consists of a set of algorithms, models and a test dataspace support platform. Access to information is facilitated through a more scalable, flexible and transparent platform regardless of the underlying data models. This results in an O(log n + k) query response time coupled with an O(n) build time on the entire dataspace.
  In conclusion, the triggers for enterprise systems integration are apparent, and compliance is only one of numerous drivers pushing organizations towards a more integrated view of enterprise data. With the dataspace-oriented entity associative algorithm, users can harness or filter their informational requirements so as to enhance decision-making in terms of time, accuracy and availability of information.
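The reported O(log n + k) query response time is characteristic of a sorted (or tree-structured) index: a binary search locates the first matching entity and a linear scan collects the k matches, while building the index over n entities already gathered from the source stores is linear-time work. The thesis's algorithm itself is not given in the abstract; the following Python sketch only illustrates that complexity pattern, and its class and method names are hypothetical:

```python
import bisect

class EntityIndex:
    """Sorted key index over entities drawn from heterogeneous
    source stores. Associating all entities that share a key costs
    O(log n + k): one binary search plus a scan over the k hits."""

    def __init__(self, entities):
        # entities: iterable of (key, record) pairs; records may
        # originate from any underlying store (SQL, NoSQL, files).
        self._items = sorted(entities, key=lambda kv: kv[0])
        self._keys = [k for k, _ in self._items]

    def associate(self, key):
        """Return every record whose entity key equals `key`."""
        lo = bisect.bisect_left(self._keys, key)
        hi = bisect.bisect_right(self._keys, key)
        return [rec for _, rec in self._items[lo:hi]]
```

Because the index stores only (key, record) pairs, the underlying data models stay opaque to the caller, mirroring the transparency the platform aims for.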