SCES Projects, Theses and Dissertations
Browsing SCES Projects, Theses and Dissertations by Title
Now showing 1 - 20 of 502
- A Bi-Lingual counselling chatbot application for support of Gender Based Violence victims in Kenya (Strathmore University, 2024) Mutinda, S. W.
Gender-based violence (GBV) remains one of the most prevalent human rights violations globally, surpassing national, social, and economic boundaries. However, due to its nature, it is masked within a culture of silence and causes detrimental effects on the dignity, health, autonomy, and security of its victims. The prevalence of GBV is fuelled by cultural nuances and beliefs that justify and promote its acceptability. The stigma surrounding GBV, in addition to fear of the consequences of disclosure, deters victims from seeking help. Additionally, the resources available for addressing GBV, such as legal frameworks and recovery centres, are limited. Technological approaches have been established to tackle GBV as intermediate and supplementary support for victims as part of UN SDG 5. Conversational agents such as Chomi, ChatPal, and Namubot have been developed for counselling GBV victims who struggle with disclosing their predicament to humans. The existing chatbots, however, are not a fit for Kenyan victims because they use languages such as Swedish, Finnish, IsiZulu, Setswana and IsiXhosa, and incorporate referral services specific to their regions. This research addressed this gap by developing a chatbot application suitable for the Kenyan region for counselling GBV victims in both Kiswahili and English, the languages predominantly used in the country, and including contacts for referral services within the country. The methodology involved developing a chatbot application based on the Rasa open-source AI framework by training a model on a pre-processed counselling dataset.
The performance of the model was evaluated using the NLU confidence score to determine the model's certainty in its intent identification, and a confusion matrix was generated; with an 80%/20% training/testing data split, the model achieved 100% accuracy at the classification threshold. Python's fuzzy-matching Token Set Ratio score was also used to determine the response that best matches the input, with results indicating satisfactory performance of the model, ranging between 63% and 92% for GBV query inputs. The developed model was then integrated into a web application as the user interface for user access and interaction with the model, hence achieving the research objective of developing a chatbot application to conduct counselling for GBV victims in Kenya using the English and Kiswahili languages.
Keywords: Gender-based Violence, stigma, chatbot, Rasa open source, NLU Confidence Score, Fuzzy Matching Token Set Ratio score
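The Token Set Ratio scoring described above compares the tokens shared by two strings against each string's full token set, making the score robust to word order and extra words. The thesis used Python's fuzzy-matching libraries; the function below is an illustrative standard-library reimplementation of the idea, not the thesis code:

```python
from difflib import SequenceMatcher

def ratio(a: str, b: str) -> float:
    """Similarity of two strings as a percentage (0-100)."""
    return SequenceMatcher(None, a, b).ratio() * 100

def token_set_ratio(a: str, b: str) -> float:
    """Token-set similarity: score the shared tokens against each
    string's full (sorted) token set and keep the best match."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    inter = " ".join(sorted(tokens_a & tokens_b))
    sa = (inter + " " + " ".join(sorted(tokens_a - tokens_b))).strip()
    sb = (inter + " " + " ".join(sorted(tokens_b - tokens_a))).strip()
    return max(ratio(inter, sa), ratio(inter, sb), ratio(sa, sb))

# A query and a stored counselling intent that share their core tokens
# score highly despite different word order and an extra word or two:
score = token_set_ratio("where can I report gender based violence",
                        "report gender based violence where")
```

Because one side's token set here is a subset of the other's, the intersection string matches it exactly and the score is 100; partially overlapping queries fall between 0 and 100, which is how thresholds like the 63%-92% range above arise.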
- A Blockchain tool to detect and mitigate e-book piracy: a case study of Kenya (Strathmore University, 2023) Nzangi, J. M.
Numerous online e-book markets have emerged along with the growth of e-book readers. This has also increased the speed and ease with which people share books. As a result, piracy has been skyrocketing, since there is no security mechanism ensuring that only one person holds one copy of a purchased book. Globally, e-book piracy has been a significant setback for publishers, as the available solutions cannot offer the necessary content protection. A good example is strict Digital Rights Management (DRM), which occasionally annoys genuine readers by preventing them from accessing their books or forcing them to forfeit ownership if the platform is shut down. Publishers, online platform providers, and writers currently comprise the e-book market. E-book piracy has real-world consequences that affect both publishers' and authors' bottom lines and their ability to produce more books. This work developed a non-fungible-token-based e-book platform that enables writers to self-publish e-books and sell them without the risk of piracy. Non-fungible tokens (NFTs) are digital assets that stand in for real-world things like artwork, collectibles, and game assets, and they use blockchain and smart contracts as their underlying digital infrastructure. When published, each book has a separate NFT attached to it. The study used a trusted and secure e-book transaction system that meets the following security requirements: license verification for each e-book, content confidentiality, right-to-read authorization, authenticating a genuine buyer, confirming the validity and integrity of e-book contents, direct purchase safety, and preventing e-book piracy and illegal downloading.
The developed solution will be a lifesaver for the e-book industry in Kenya and other regions worldwide, since it offers an easy way for readers and authors to make secure e-book transactions with zero risk of piracy or denial of access for legitimate users.
Keywords: Blockchain, E-book, Non-Fungible Tokens, Piracy, Smart Contracts.
- A Blockchain-based prototype for cybersecurity threat intelligence sharing: a case of Kenyan banking and insurance financial institutions (Strathmore University, 2021) Kibuci, Wanjohi Stephen
Cybersecurity threats to financial institutions have become more sophisticated and challenging to deal with. The growing dependence of financial institutions on cyberspace makes cybersecurity preparedness against threats important to achieve a financial institution's mission and vision. In this context, cybersecurity preparedness is the process by which a financial institution can protect against, prevent, mitigate, respond to, and recover from cyber threats. Traditionally, most organizations share threat intelligence through ad hoc methods such as emails and phone calls, but there is a need to automate threat intelligence sharing where possible to improve cybersecurity preparedness. To address this issue and enhance cybersecurity and trust, a blockchain-based approach can be employed to share threat intelligence. This study aimed to leverage blockchain technology by developing a prototype to automate cybersecurity threat intelligence sharing in financial institutions. The study used a quantitative approach, collecting data through structured online questionnaires with close-ended questions and open-source datasets, and analysing it using several analytic tools. The prototype was developed using the Rapid Application Development software development methodology, on the open-source Oracle VirtualBox running a Linux operating system.
- A Credit scoring model for mobile lending (Strathmore University, 2024) Oindi, B.
An exponential increase in mobile usage has made mobile loans far more accessible to most Kenyans, creating a lifeline for those excluded by traditional financial institutions. This easier way to borrow, however, comes with risks, the major one being borrower default. This creates a need for credit scoring, which plays a crucial role in lenders' decision-making to determine borrowers' creditworthiness, thereby minimizing credit risk and managing information asymmetry. In mobile lending, borrowers' financial information is usually limited, making machine learning a favorable tool for credit assessment. Traditionally, the process relied on statistical algorithms and human assessment, which fall short when subjected to large datasets and are time-consuming. The traditional methods also struggle to adjust to changes in borrowers' behavior. Against this backdrop, this research developed a novel credit scoring model for mobile lending using the Random Forest, XGBoost, LightGBM, CatBoost, and AdaBoost algorithms. SMOTE was used to address the class imbalance problem. The best model achieved an accuracy of 86%. The research further analyzes the challenges in credit scoring and reviews related works by several authors. The research also examined the feature importance of the models, which effectively explained the models' behavior. This model can analyze vast volumes of data which would otherwise be resource-intensive to process manually. The machine learning model was then deployed into a Streamlit web application with a user interface where real-time predictions are made based on borrower data. The model can give lenders insights into borrowers' creditworthiness and enable them to make informed decisions before lending.
Keywords: Mobile loans, Credit Scoring, Probability of Default, Machine Learning, Statistical Algorithms, SMOTE
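SMOTE, used above to address class imbalance, synthesises new minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours. The thesis presumably used a library implementation (e.g. imbalanced-learn); the standard-library sketch below, with toy data, illustrates only the core interpolation idea:

```python
import math
import random

def smote_oversample(minority, n_new, k=3, seed=42):
    """Create n_new synthetic minority-class samples by interpolating
    between a random minority sample and one of its k nearest
    minority-class neighbours (the core idea behind SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x, excluding x itself
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

# Toy minority class (defaulters): four borrowers, two scaled features
minority = [[1.0, 2.0], [1.2, 1.9], [0.9, 2.2], [1.1, 2.1]]
new_samples = smote_oversample(minority, n_new=4)
```

Because each synthetic point lies on a segment between two real minority samples, oversampling stays inside the minority region instead of simply duplicating rows.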
- A Customer churn prediction and corrective action suggestion model for the telecommunications industry using predictive analytics (Strathmore University, 2024) Wanda, R. K.
The telecommunications industry is significantly susceptible to customer churn. Customer churn leads to loss of customer base, which in turn leads to reduced revenue, reduced profit margins, increased customer acquisition costs and loss of brand value. Mitigating the effects of customer churn has proved to be a tall order for many organizations in the telecommunications industry. Most companies employ a reactive approach to customer churn and thus do not take any corrective action until the customer has left. This approach does not enable organizations to identify and prevent potential churn before it occurs. Alternatively, some organizations employ a more proactive approach to mitigate customer churn through predictive analytics. Although this approach is more effective, it only predicts which customers will churn without recommending the appropriate corrective action. In this dissertation, a customer churn prediction and corrective action suggestion model using predictive analytics was implemented to predict churn and suggest appropriate corrective actions. The IBM Telco customer churn dataset, accessed via API from the OpenML website, was used for this study. The dataset was subjected to pre-processing and exploratory data analysis to gain valuable insights into the data. To enhance the reliability of the developed model, an 80/20 train/test split was applied to the dataset. The training dataset was then divided into 5 folds before model fitting. Several classification algorithms (Logistic Regression, Gaussian Naive Bayes, Complement Naive Bayes, K-NN, Random Forest and CatBoost) were then fitted to the training data and their performance evaluated. Logistic Regression achieved a recall of 80% and was selected for system implementation.
Logistic Regression feature coefficients were then used to determine the appropriate corrective actions. A locally hosted web interface was then developed using the Python Streamlit library to enable users to feed input into the model and get churn predictions and corrective action suggestions. The developed model demonstrated ease of use and high performance, and will enable telecommunication companies to accurately predict customer attrition and take appropriate corrective actions, reducing attrition's impact on the companies' bottom line.
Keywords: churn, machine learning, predictive analytics, telecommunications industry
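Recall, the metric used above to select Logistic Regression, measures the share of actual churners the model catches; for churn, a missed churner (false negative) is usually costlier than a false alarm, which is why recall rather than accuracy drove the choice. A small illustrative computation (the confusion-matrix counts below are hypothetical, not from the dissertation):

```python
def recall(tp: int, fn: int) -> float:
    """Share of actual positives (churners) the model catches."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Share of predicted positives that really are positives."""
    return tp / (tp + fp)

# Hypothetical confusion-matrix counts for a churn classifier:
tp, fn, fp = 80, 20, 30        # 100 actual churners, 80 of them caught
model_recall = recall(tp, fn)  # 0.8, i.e. 80% of churners identified
model_precision = precision(tp, fp)
```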
- A Framework for evaluating ICT use in teacher educ... Oredo, John Otieno
Teachers are under increasing pressure to use Information and Communication Technology to impart to students the knowledge, skills and attitudes they need to survive in the 21st century. The teaching profession needs to migrate from teacher-centred, lecture-based instruction to a student-centred, interactive learning environment. To attain this aspiration, ICT-enabled teacher education is fundamental. Towards this end, international and national authorities have been spending huge amounts of money to facilitate the implementation of ICT teacher education. This work attempts to evaluate the usage of the available ICT facilities in Kenyan public primary teacher colleges, focusing on the quantity of computer use and the levels attained in terms of ICT support.
- A Framework for sizing solar PV systems adaptable to off-grid areas (Strathmore University, 2024) Nyangoka, I. K.
Solar PV sizing is the process of determining the quantity and capacity of solar PV system components to meet a given energy demand. This process is needed to ensure that the components are neither undersized, resulting in insufficient energy, nor oversized, increasing the system cost. There are several solar PV sizing frameworks currently in use in the market, such as intuitive, numerical, and analytical frameworks. However, these frameworks have neglected some key adaptability factors unique to off-grid areas, such as the household's ability to pay and the type of roofing structure. This neglect has seen the development of solar PV systems that are beyond the budget of most households in off-grid areas and with specifications that technically inhibit their effective use in that setting. Therefore, for enhanced adaptability, there is a need to develop a new solar PV sizing framework that considers the unique adaptability factors of off-grid areas. This study identified these unique adaptability factors and investigated how they influence the size of a solar PV system. Through modification of the existing numerical sizing framework, these adaptability factors were integrated into the sizing process within the context of this study. It was established that by integrating these factors, the resultant PV systems were more adaptable to off-grid areas in terms of cost, mobility, durability and reliability.
- A Fraud investigative and detective framework in the motor insurance industry: a Kenyan perspective Kisaka, George Ngosiah; Onyango-Otieno, Vitalis
Insurance fraud is a serious and growing problem, with fraudsters always perfecting their schemes to avoid detection by basic approaches. This has caused a rise in fraudulent claims that get paid and increased loss ratios for insurance firms, thereby diminishing profitability and threatening their very existence. There is widespread recognition that traditional approaches to tackling fraud are inadequate. Studies of insurance fraud have typically focused upon identifying characteristics of fraudulent claims and putting in place different measures to capture them. This thesis proposes an integrated framework to curtail insurance fraud in the Kenyan insurance industry. The research studied existing fraud detection and investigation expertise in depth. The research methodology identified two available theoretical frameworks, the Bayesian Inference Approach and the Mass Detection Tool (MDT). These were compared with respect to comprehensive motor insurance claims fraud management in the Kenyan insurance industry. The findings show that insurance claims fraud is indeed prevalent in the Kenyan industry. Sixty-five percent of claims-processing professionals deem the motor segment one of the most fraud-prone, yet a paltry 15 percent of them use technology for fraud detection. This is despite the fact that significant strides have been made in developing systems for fraud detection. These findings were used to determine and propose an integrated ensemble motor insurance fraud detection framework for the Kenyan insurance industry. The proposed framework builds upon the Mass Detection Tool (MDT) and provides a solution for preventing, detecting and managing claims fraud in the motor insurance line of business within the Kenyan insurance industry.
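The Bayesian Inference Approach named above scores a claim by updating a prior fraud probability with the likelihood of each observed red-flag indicator. A minimal illustrative Bayes update (the prior and likelihood values below are invented for illustration, not taken from the thesis):

```python
def posterior_fraud(prior, p_ind_given_fraud, p_ind_given_legit):
    """Bayes' rule: P(fraud | indicator observed)."""
    num = p_ind_given_fraud * prior
    den = num + p_ind_given_legit * (1 - prior)
    return num / den

# Sequentially update a prior as red-flag indicators are observed
# (hypothetical indicators and probabilities):
p = 0.10                            # prior: 10% of claims fraudulent
p = posterior_fraud(p, 0.6, 0.1)    # claim filed soon after policy start
p = posterior_fraud(p, 0.5, 0.2)    # no police abstract provided
# p now reflects both indicators; a threshold on it can route the
# claim to manual investigation.
```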
- A GIS decision-based model for determining the best path for connection to a power distribution network: a case study of Kenya Power and Lighting Company Limited Kinuthia, Augustine Muturi; Kimani, Stephen
The purpose of this study is to present a GIS-based decision model for determining the best path for connection to a power distribution network. The model was derived from studies that consider the design of the power distribution system and the GIS field of network analysis, along with the method used by KPLC for connecting premises to the distribution network. A digital map of the study area and the distribution network was generated and, taking into account the distributors and distribution transformers, the best path between the premises and the transformer was derived. In this study it is demonstrated that the distributors' length and size and the distribution transformers' capacity, load and location influence the connection of premises to the distribution network. The results also show that combining geospatial methods with the power distribution network enables engineers to visualize the spatial distribution of data in maps, which yields better insight into the nature of the power distribution network.
- A HDF5 data compression model for IoT applications (Strathmore University, 2022) Chabari, Risper Nkatha
The Internet of Things (IoT) has become an integral part of the modern digital ecosystem. According to current reports, more than 13.8 billion devices were connected as of 2021, and this massive adoption will surpass 30.9 billion devices by 2025. This means that IoT devices will become more prevalent and significant in our daily lives. Miniaturization in form-factor chipsets and modules has contributed to cost-effective and faster-running computer components. As a result of these technological advancements and mass adoption, the number of devices connected to the internet has been on the rise, leading to the generation of data in high volume, velocity, veracity, and variety. The major challenge is the resulting data deluge, which makes it difficult to visualize, store and analyse data generated in various formats. Relational databases like MySQL have been widely adopted to store IoT data; however, they can only handle structured data, because data is organized in tables with high consistency. On the other hand, NoSQL has also been adopted because of its capability to store large volumes of data with no reliance on a relational schema or any consistency requirements, which makes it suitable only for unstructured data. This outlines a clear need to adopt an effective way of storing and managing heterogeneous IoT data in a compressed and self-describing format. Furthermore, there is no one-size-fits-all approach to managing heterogeneous data in an IoT architecture. It is in this paradigm that this research addressed the challenge by creating a tool that compresses heterogeneous data while saving it in the HDF5 format. The data used was in .csv datasets, which were parsed into the HDF5 storage and data tools for compression and conversion.
The tool achieved a good compression ratio, an 89.34% decrease in size from the original file. The output of the compressed file was inspected with an external viewer, HDFView, to validate that the algorithm used was lossless.
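The 89.34% figure above is a size reduction relative to the original file, and losslessness means the original bytes are recoverable exactly. The same metric can be computed for any compressor; the sketch below uses the standard-library zlib on in-memory CSV text purely to illustrate the calculation (the thesis used HDF5's built-in compression, e.g. via h5py's gzip filter, not this code):

```python
import zlib

def size_decrease_pct(original: bytes, compressed: bytes) -> float:
    """Percentage decrease in size after compression."""
    return (1 - len(compressed) / len(original)) * 100

# Highly repetitive sensor CSV compresses well, and losslessly:
csv_data = ("timestamp,device_id,temp_c\n" +
            "2022-01-01T00:00:00,sensor-1,21.5\n" * 1000).encode()
packed = zlib.compress(csv_data, level=9)
assert zlib.decompress(packed) == csv_data   # lossless round-trip
pct = size_decrease_pct(csv_data, packed)
```

Repetitive time-series telemetry like this typically shows very large decreases, which is consistent with the high ratio reported above for IoT datasets.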
- A Loan default prediction and loan amount recommendation tool for SACCOs in Nairobi: a case of Okoa Management SACCO (Strathmore University, 2023) Mwalozi, P. M.
SACCO loan delinquency is a severe danger to the organization's capacity to continue availing loans to applicants and to grow. SACCOs are unable to collect what they have lent out to loan beneficiaries as the default rate rises gradually. This research project aimed to analyse the different factors that determine loan defaults in microfinance institutions, microlending institutions and SACCOs in Kenya, with a focus on Okoa Management Ltd., and how the same factors can be used to predict the likelihood of a borrower defaulting during repayment by applying machine learning algorithms. Credit risk assessment precision is important to the functioning of lending institutions. Traditional and most existing credit score models are developed and designed using demographic characteristics, historical payment data, credit bureau data and application data, with most of them not suitable for developing countries such as Kenya, which consider the employment type (casual, temporary, contractual or permanent) and the fact that lenders can lend up to three times as much as the borrower's savings. With these factors constantly changing and dynamic, credit risk models based on machine learning algorithms provide a higher level of accuracy in predicting default, as they can be continuously retrained with new data sets should the variables used change. Risk management has been a growing issue for credit lending institutions, as the need to determine the likelihood of defaulting by borrowers becomes more evident. By using machine learning, lenders can reduce the uncertainty that comes with borrowing and even go further, recommending lower amounts for borrowers who are predicted to be likely to default on the loan amount they have in mind.
The research focused on three main algorithms for the prediction: logistic regression, decision trees and TensorFlow. The algorithm that provided the best accuracy was the decision tree. The results of the research showed that people with little or no collateral (home ownership/car ownership) were more likely to default, and that there was a low correlation between months since last delinquency and the predicted loan default likelihood status.
Keywords: Loan default prediction, machine learning, credit lending
- A Location-aware nutritional needs prediction tool for type II Diabetic patients: case Kenya (Strathmore University, 2022) Karega, Lulu Amina
Diabetes is a chronic disease caused by a lack of insulin production by the pancreas or by poor utilization of the insulin that is produced, insulin being the hormone that helps glucose get to blood cells and produce energy. Urbanization and busy day-to-day schedules mean patients tend to pay little or no attention to their dietary habits, which results in a preference for fast foods and processed food. The prevalence of type II diabetes in the world, Kenya included, has been steadily rising over the years and is projected to keep growing at an alarming rate. Diabetes, if not properly managed, can result in long-standing, costly and time-consuming complications. Diabetes management and control of blood sugar levels are generally done by the use of medication, namely insulin and oral hypoglycemic agents. However, nutritional therapy can also go a long way towards boosting the general health of a patient and reducing risk factors leading to further complications. Personalised nutrition has been formally defined as healthy eating advice tailored to suit an individual based on genetic data, and alternatively on personal health status, lifestyle, and nutrient intake. Diabetes management falls under the field of health informatics, which can benefit from data analytics. Predictive analytics is the process of utilizing statistical algorithms, software tools and services to analyze, interpret and visualize data with the aim of forecasting trends and predicting data patterns and behavior within or outside the observed data. This study sought to develop a location-aware nutritional needs prediction tool for type II diabetic patients in Kenya.
The prediction tool helps both nutritionists and patients by providing accurate and relevant nutritional advice to support dietary changes that combat type II diabetes, with the added benefit of being location-aware. The tool uses pathological results from nutritional testing to support nutritional therapy. If any deficiencies are identified from the provided nutritional markers, food items likely to improve those nutrient levels are recommended. The amounts of nutrients available in a given food item are determined from the food composition table for Kenya, as published by the Food and Agriculture Organization (FAO) in conjunction with the Kenyan government. The study used a simple implementation of matrix factorization to provide predictions of locally available food items, down to the county level.
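Matrix factorization, the technique mentioned above, approximates a sparse ratings matrix (here one could imagine patients as rows and food items as columns) as the product of two low-rank factor matrices learned by gradient descent; missing entries are then predicted from the dot product of the learned factors. A compact standard-library sketch on invented toy data, not the thesis implementation:

```python
import random

def factorize(R, k=2, steps=4000, lr=0.005, reg=0.02, seed=0):
    """Learn row factors P (n x k) and column factors Q (m x k) so that
    P @ Q.T approximates the observed entries of R (None = missing)."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    P = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(n)]
    Q = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i][j] is None:
                    continue
                err = R[i][j] - sum(P[i][f] * Q[j][f] for f in range(k))
                for f in range(k):
                    p_old = P[i][f]
                    P[i][f] += lr * (err * Q[j][f] - reg * p_old)
                    Q[j][f] += lr * (err * p_old - reg * Q[j][f])
    return P, Q

def predict(P, Q, i, j):
    """Predicted rating for row i, column j."""
    return sum(pf * qf for pf, qf in zip(P[i], Q[j]))

# Toy matrix: rows = patients, columns = food items, None = unrated
R = [[5, 3, None, 1],
     [4, None, None, 1],
     [1, 1, None, 5],
     [1, None, None, 4]]
P, Q = factorize(R)
filled = predict(P, Q, 0, 2)   # prediction for a missing entry
```

Once trained, every `None` cell gets a predicted score, which is what lets the tool rank locally available food items for a patient it has only partial data on.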
- A Machine learning model for support tickets servicing: a case of Strathmore University ICTS client support services (Strathmore University, 2022) Maina, Antony Koimbi
Customer service is a highly vital part of any business; how satisfied your customers are can make or break a company. One of the greatest contributors to customer satisfaction is the ability to respond to customer issues efficiently and effectively. Many businesses therefore opt to establish a customer service department that handles customer queries, including receiving phone calls and replying to emails. Customers are expected to call with issues such as "How do I reset my password?", "How do I access the Student Information System?", "Are the students' marks out yet?" and the like. Often, the issues reported by customers are similar and tend to get similar resolutions. These requests can be overwhelming at times; for example, where users are accessing an online resource and the system goes down, the number of inquiries can be in the order of thousands, depending on the number of system users. This means a human agent may not be able to service all these requests on time. This research aimed to develop an intelligent chatbot model for a support ticketing system using machine learning to deliver an exceptional customer experience. Specifically, it proposed a machine-learning model that can be used to service customer tickets in the context of a university or learning institution. The Rapid Application Development methodology was used to produce a working prototype of a chatbot to test the developed model. Machine learning and natural language processing were used to extract a user's intent from a message and, by leveraging pre-trained frequently-asked-question models from the DeepPavlov library, the model was trained on 80% of the data, with 20% reserved for testing.
All 37 sessions tested on Dialogflow were successful, translating to a 100% success response rate. The prototype was tested by integrating the WhatsApp messaging platform to send messages to the chatbot. The chatbot was able to respond to the user within a fraction of a second, and the average response time remained under one minute throughout testing.
- A Machine learning model to predict non-revenue water with severely unbalanced classes (Strathmore University, 2022) Muriithi, Patrick Kimani
Every household, industry, institution and organization needs clean water for existence. In Kenya, water is used for human consumption, production, and agriculture. The consumption of water therefore contributes to the overall growth of the economy through water bills. The term non-revenue water (NRW) is defined as water produced and 'lost' before it reaches the customers. NRW is also described as the difference between the volume reaching the final consumer for billing and the initial volume released into the distribution network. Based on the assessment of the Public-Private Infrastructure Advisory Facility (PPIAF), an organization that fosters inter-agency cooperation in curbing NRW, physical losses are the main causes of NRW. As per PPIAF, most NRW emanates from physical losses, including burst pipes that are often a result of poor maintenance. Besides physical losses, PPIAF notes numerous other sources of NRW, especially commercial losses arising from the manner in which billing data is handled throughout the billing process. The main issues related to this cause include under-registration of customers' meter readings, data handling errors, theft, and illegal connections. Other causes of NRW include unbilled authorized consumption, such as water used for firefighting, water used by utilities for operational purposes, and water provided to specific groups for free. Therefore, non-revenue water puts the country's revenue collection at risk, which can lead to slow economic growth. This research proposes the development of a machine learning model to be used by water service providers (WSPs). The model will assist WSP companies to reduce non-revenue water by predicting the water consumption of different customers. To achieve these objectives, the focus is on providing tools and methods that will guide WSPs in reducing non-revenue water.
The model was trained with a two-year consumption dataset for Nairobi County. The developed model was able to predict customers' monthly consumption with an accuracy of 95%.
- A Machine learning tool to predict early-stage start-up success in Africa (Strathmore University, 2023) Gichohi, B. W.
Most start-ups do not celebrate their first year in operation, and few survive to see their fifth year of operation. This has been a challenge for all the stakeholders involved. Therefore, an effective tool for predicting the possibility of a start-up surviving its infancy stages and eventually growing into a profitable venture could be a breakthrough for entrepreneurs, innovators, and investors. This study assessed the factors that make early-stage start-ups successful, specifically in Africa, and developed a web-based prototype that uses machine learning algorithms to predict the success of proposed start-ups. The study adopted both a descriptive research design and applied research. Data was collected from a secondary data source called Crunchbase, a global investor platform. This data formed the basis for the development of the prediction tool, which was designed to predict the success or failure of start-ups based on the collected data. To ensure the accuracy and reliability of the prediction model, 80% of the collected data was used for training the model, while the remaining 20% was utilized for testing and validation purposes. The model development employed an Artificial Neural Network (ANN) algorithm, known for its capability to analyze complex patterns and relationships in data. The developed model achieved an accuracy of 86.81%, indicating its effectiveness in predicting the success of start-ups. The tool was implemented using Flask, a Python web framework, along with other Python machine learning frameworks such as Keras and scikit-learn. This allowed for the development of a user-friendly and interactive web-based prototype.
A number of users were given access to the tool for usability testing, and their feedback indicated that the tool was intuitive, easy to use, and effective in predicting the success of start-ups. This study successfully developed a web-based prototype using an agile methodology, integrating machine learning algorithms based on Artificial Neural Networks. The prototype demonstrated high accuracy in predicting start-up success, making it a valuable tool for entrepreneurs, innovators, and investors in Africa and beyond.
Keywords: Business start-ups, machine learning algorithms, prediction tool, start-up success.
- A Model for assessing digital technology readiness in mini grids (Strathmore University, 2024) Koskei, K. K.
In Sub-Saharan Africa, approximately 0.6 billion people lack access to electricity due to challenges with the centralized grid. Mini grids, seen as a solution for rural electrification, face sustainability issues, including technical limitations with renewable energy, outdated monitoring methods, and scalability concerns. To address these challenges and strengthen their value proposition, the integration of current and emerging digital technologies, such as AI, IoT, blockchain, and cybersecurity, is recommended. This study aimed to address a critical gap in the current understanding of Digital Technology Readiness (DTR) in the context of mini grids in Kenya. Recognizing the necessity of a DTR assessment model tailored to end-user preferences and the local environment, this research employed a combination of quantitative methods, inferential analysis, and fuzzy synthetic evaluation (FSE). The research methodology embraced the design thinking process and descriptive analysis to develop the DTR assessment model. Data collection involved online questionnaires, employing purposive and snowball sampling techniques. These instruments sought insights from relevant stakeholders in the mini grid industry. Subsequently, surveys were conducted to validate and test the DTR assessment model, ensuring its validity and efficacy. The study findings, based on responses from diverse industry professionals, were instrumental in identifying 15 critical indicators that collectively contribute to digital technology readiness in mini grids. Through factor analysis, these indicators were categorized into five main dimensions: Digital Literacy (DL), Digital Technology Usefulness (DTU), Digital Technology Preparedness (DTP), Digital Transformation (DT), and Digital Infrastructure Availability (DIA).
The critical index values of these dimensions, in descending order, were as follows: DL (4.443), DTU (4.362), DTP (3.839), DT (3.642), and DIA (3.500). These values serve as a valuable guide, emphasizing the key areas of focus in digital technology readiness. The output of this research was a DTR assessment model for mini grids, which mini grid stakeholders tested and found valid and effective. Stakeholders will use the model to measure DT readiness in mini grids, aiding strategic decision-making and enhancing the industry's adaptability to the challenges and opportunities presented by Industry 4.0.
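A critical index of the kind ranked above can be computed as the response-weighted mean of 5-point Likert scores per dimension. The counts below are hypothetical and do not reproduce the thesis data; they merely illustrate how the indices are derived and ordered.

```python
# hypothetical 5-point Likert response counts for three of the five dimensions
responses = {
    "DL":  {5: 28, 4: 10, 3: 2, 2: 0, 1: 0},
    "DTU": {5: 22, 4: 14, 3: 3, 2: 1, 1: 0},
    "DIA": {5: 8,  4: 12, 3: 14, 2: 4, 1: 2},
}

def critical_index(counts):
    """Response-weighted mean of Likert scores (range 1.0 to 5.0)."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

ranking = sorted(responses, key=lambda d: critical_index(responses[d]), reverse=True)
print(ranking)  # → ['DL', 'DTU', 'DIA']
```

Fuzzy synthetic evaluation extends this idea by weighting membership grades across fuzzy sets; the weighted mean above is the simplest special case.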
- ItemA Model for costing information technology services in public organizations : case study of the Kenya Revenue AuthorityOsiro, Yvonne Wafula; Sevilla, JosephPublic organizations are increasingly embracing technology as a means of achieving operational efficiency and, in the process, reducing the cost of doing business. It is necessary for organizations to have clear financial visibility into their Information Technology operations. However, it is frequently observed that IT continues to drain financial resources without providing any insight into its consumption. This is partially due to the intangible nature of IT and partially due to the lack of standard IT service costing frameworks. IT managers need financial and non-financial information to have proper insight into their operations. Having a costing model enables IT departments to determine the cost of providing an individual service or a group of services. The objective of this research was the development of a service-oriented costing model for IT services offered by public organizations, with the following specific objectives: 1. To investigate the available IT service costing frameworks and models. 2. To establish the policies and factors that determine IT service costs. 3. To develop an IT service costing model for use in public organizations. 4. To validate the effectiveness of the IT service costing framework developed. Using a qualitative approach, this thesis presents an IT service cost model to methodically guide public organizations in the determination of costs associated with the provision of an IT service. The research used a descriptive design to obtain information concerning the current status of the phenomena. A target population of thirty-four senior and middle-level managers from the ICT, Finance and Administration departments of the Kenya Revenue Authority was considered. Purposive sampling was used to select twenty-two target respondents.
Semi-structured interviews consisting of both open-ended and closed questions to provide greater depth were the primary data collection method used, in addition to scholarly journals, books and websites. The model presented provides an approach to cost estimation that can simplify the determination of costs associated with an IT service. The application of suitable abstraction principles in terms of cost categories, cost types, cost activities, cost elements and cost drivers yields a modified IT-service-specific cost model. The approach was verified using a real service as an example to provide insight into cost structures and potential cost drivers. It was applied to a case study of the email service at the Kenya Revenue Authority, and in this way the flexibility and adaptability of the model to given service-oriented scenarios was demonstrated.
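The roll-up from cost elements to a per-service cost via a cost driver, as the abstraction chain above describes, can be sketched as follows. The element names, amounts, and driver share are hypothetical, not figures from the KRA case study.

```python
# hypothetical cost elements attributed (in part) to an email service;
# each carries its cost category and type plus an annual amount
cost_elements = [
    {"category": "hardware", "type": "capital",     "amount": 1_200_000},
    {"category": "software", "type": "capital",     "amount": 800_000},
    {"category": "staff",    "type": "operational", "amount": 2_000_000},
    {"category": "power",    "type": "operational", "amount": 300_000},
]

def service_cost(elements, driver_share):
    """Apportion shared cost elements to one service by its cost-driver share."""
    return sum(e["amount"] * driver_share for e in elements)

email_share = 500 / 4_000   # hypothetical driver: 500 of 4,000 mailboxes
annual_cost = service_cost(cost_elements, email_share)
print(f"estimated annual email service cost: {annual_cost:,.0f}")
```

Different services would use different drivers (mailbox count, storage, tickets); the abstraction lets one element pool feed many service costs.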
- ItemA model for estimating network infrastructure costs : a case for all-fibre LAN networksMaina, Anthony Mbuki; Ateya, Ismail LukanduThe 21st century is an era that has been characterised by phenomenal growth in data rates at the local area network (intranet), extranet and the Internet. This trend has been pushed by the widespread deployment in organisations of "bandwidth-hungry" applications such as VoIP, security surveillance systems, video conferencing and streaming of multimedia content. Due to the demand placed on network resources by these applications and services, physical layer cabling solutions have had to evolve to support faster, improved LAN technologies such as Gigabit Ethernet. Although new network architectures (such as centralised fibre networks) address the current and long-term demands of the modern networking environment, concerns have been raised about their cost viability. The key problem identified in this study was an inadequacy of suitable tools to aid decision-making when estimating the cost of a network infrastructure project. Factors of importance in this regard were collected in a survey and used in the development of a cost model. The model is aimed at being a tool to assist network planners in estimating LAN infrastructure costs. A network was designed based on two architectures – centralised fibre (all-fibre network) and hierarchical star (UTP for horizontal cabling and optical fibre for backbone cabling). Thereafter, the cost of implementing these two architectures was calculated using the model. Based on the results computed from the cost model, the all-fibre network (centralised fibre architecture) was found to be more cost-effective compared to the hierarchical star network.
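A cost comparison of the two architectures reduces to summing quantity times unit price over each bill of materials. The component names, quantities, and unit prices below are illustrative assumptions, not the survey's figures, so the totals do not reproduce the study's results.

```python
def total_cost(bill_of_materials):
    """Sum quantity x unit price over a bill of materials."""
    return sum(qty * unit_price for qty, unit_price in bill_of_materials.values())

# hypothetical quantities and unit prices (component: (qty, unit price))
centralised_fibre = {
    "fibre_drop_cable_m":  (2_000, 1.2),
    "fibre_nic":           (200, 45.0),
    "central_switch_port": (200, 60.0),
}
hierarchical_star = {
    "utp_cable_m":         (2_000, 0.5),
    "copper_nic":          (200, 15.0),
    "edge_switch_port":    (200, 40.0),
    "fibre_backbone_m":    (300, 1.5),
    "telecom_room_fitout": (4, 5_000.0),
}

costs = {"centralised fibre": total_cost(centralised_fibre),
         "hierarchical star": total_cost(hierarchical_star)}
cheaper = min(costs, key=costs.get)
print(cheaper, costs)
```

With these assumed prices the centralised design avoids the per-floor telecom-room cost, which is the kind of trade-off the model is meant to surface.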
- ItemA Model for mapping crime hotspots using neural networks: a case of Nairobi(Strathmore University, 2024) Echessa, R. G.Since the inception of the first modernized police agency, the primary objective of police organizations has been to prevent crime. Law enforcement, police, and crime reduction agencies have commonly used hotspot mapping, an analytical technique, to visually determine the locations where crime was most prevalent. This assisted decision-making on the deployment of resources to target areas. This study aimed to investigate crime mapping techniques in crime analysis and suggest ways to enhance the implementation of crime mapping in Nairobi. Beginning with a historical analysis of GIS and crime mapping, the study then moved on to a consideration of the significance of geography in dealing with crime concerns. Neural Network and K-Means machine learning models were used, and data was collected through quantitative and qualitative means in two phases. X was utilized in the first phase to collect information from the general public and key informants. The second phase involved collecting crime hotspot coordinates using a participative Geographic Information System. The study focused on utilizing social media data and machine learning techniques, particularly a combined K-Means and Neural Network (NN) model, to identify and map crime hotspots in Nairobi. By analyzing crime-related tweets, categorizing them as positive, negative or neutral using the NN model, and then clustering them as high risk or low risk using K-Means, the study achieved high accuracy, precision, recall, and F1-Score, suggesting the effectiveness of this approach for crime prediction and prevention. Keywords: Hotspot mapping, X, Machine learning, crime, Neural Networks, K-Means
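The K-Means clustering stage of such a pipeline can be sketched directly on report coordinates. The Lloyd-style loop below is a minimal textbook implementation, and the sample coordinates are hypothetical points around Nairobi, not the study's data.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

def kmeans(points, k=2, iters=20, seed=1):
    """Minimal Lloyd's algorithm: assign to nearest centre, then recentre."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centres[c])) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # keep the old centre if a cluster empties out
                centres[c] = centroid(members)
    return centres, labels

# hypothetical (latitude, longitude) crime-report coordinates around Nairobi
reports = [(-1.283, 36.817), (-1.285, 36.820), (-1.284, 36.818),
           (-1.320, 36.900), (-1.322, 36.903)]
centres, labels = kmeans(reports, k=2)
print(labels)
```

The resulting cluster labels partition reports into two spatial groups, which the study's approach would then rank as high- or low-risk using the NN sentiment scores.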
- ItemA Model for predicting greenhouse gas emissions from motorcycles in Kenya(Strathmore University, 2024) Cheruiyot, L. C.In Kenya, inefficient public transport systems coupled with rough terrain have made motorcycles the most preferred means of transport. The transport sector is a leading emitter of greenhouse gases, the main driver of global climate change. This is due to the reliance on fossil fuels, which require Internal Combustion Engines to operate. The threat posed by climate change and variability has fueled the ongoing energy transition from fossil fuels to green technologies through E-mobility. Motorcycles have been described as low-hanging fruit in the E-mobility transition from fuel-based engines to electric-powered motors. However, this transition has shown little progress, partly because few adequate models exist to inform E-mobility policy and investment decisions. This study sought to develop a model for calculating GHG emissions from conventional and electric motorcycles under different scenarios. The scenarios were based on traffic conditions and engine efficiency. The study also aimed to analyze existing ICE and electric two-wheeler technologies in Kenya. A descriptive and experimental research design was adopted for the study. Primary data was collected using a structured questionnaire embedded in the Kobo Toolbox and administered to motorcycle operators in Nairobi and Machakos counties. Secondary data was also collected from the NTSA database. The R programming tool was used for data analysis and simulation of GHG emissions under different scenarios. The model was validated using experimental results to increase confidence in the findings. The study results provided comprehensive insights into the determinants of greenhouse gas emissions from both conventional Internal Combustion Engine (ICE) and electric motorcycles.
Through an analysis of rider demographics and of electric and conventional motorcycle characteristics, the study revealed the multifaceted factors that contribute to the environmental impact of motorcycles. The specifics of electric motorcycle technologies, including battery characteristics, charging habits, and daily travel distances, were explored, offering valuable insights into the state of electric mobility in the country. Additionally, the study developed and applied a Generalized Additive Model (GAM) for predicting motorcycle emissions, yielding high predictive accuracy and significant predictors. The model underscored the influence of fuel type and temporal trends on emissions, emphasizing the importance of considering both technological and temporal factors in policy formulation. Projection of emissions to 2045 revealed an alarming exponential increase, necessitating urgent intervention. Keywords: Predictive model, GHG emissions, electric two-wheelers
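The additive structure of a GAM with a fuel-type effect and a temporal trend can be sketched as below. The partial-effect values are hypothetical stand-ins for fitted smooth terms, not estimates from the study, and the piecewise-linear function is only a crude proxy for a fitted spline.

```python
# Minimal sketch of a GAM's additive structure:
#   emissions ≈ intercept + s_fuel(fuel type) + s_trend(year)
# All partial-effect values below are hypothetical, not fitted to the study's data.

def linear_interp(xs, ys, x):
    """Piecewise-linear stand-in for a fitted smooth term."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1] if x > xs[-1] else ys[0]

FUEL_EFFECT = {"petrol": 0.9, "electric": 0.1}   # hypothetical partial effects
YEARS = [2015, 2025, 2035, 2045]
TREND = [1.0, 1.6, 2.6, 4.2]                     # hypothetical accelerating trend

def predict_emissions(fuel, year, intercept=0.5):
    return intercept + FUEL_EFFECT[fuel] + linear_interp(YEARS, TREND, year)

print(round(predict_emissions("petrol", 2030), 2))   # mid-decade petrol estimate
```

The additivity is what lets such a model separate the fuel-type contribution from the temporal trend, which is exactly the policy-relevant decomposition the abstract highlights.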