MSc. CIS Theses and Dissertations
Find here Theses and Dissertations submitted for the award of Master of Science in Computer-Based Information Systems (MSIS). These works have been scanned and passed through OCR. We do not hold liability for the correctness of their content.
Browsing MSc. CIS Theses and Dissertations by Title
Now showing 1 - 20 of 94
- A Bi-Lingual counselling chatbot application for support of Gender Based Violence victims in Kenya (Strathmore University, 2024) Mutinda, S. W. Gender-based violence (GBV) remains one of the most prevalent human rights violations globally, surpassing national, social, and economic boundaries. However, due to its nature, it is masked within a culture of silence and causes detrimental effects on the dignity, health, autonomy, and security of its victims. The prevalence of GBV is fuelled by cultural nuances and beliefs that justify and promote its acceptability. The stigma surrounding GBV, in addition to fear of the consequences of disclosure, deters victims from seeking help. Additionally, the resources available for addressing GBV, such as legal frameworks and recovery centres, are limited. Technological approaches have been established to tackle GBV as intermediate and supplementary support for victims as part of UN-SDG 5. Conversational agents such as Chomi, ChatPal, and Namubot have been developed for counselling of GBV victims who struggle with disclosing their predicament to humans. The existing chatbots, however, are not a fit for Kenyan victims because they utilize languages such as Swedish, Finnish, Isizulu, Setswana and Isixhosa, in addition to incorporating referral services specific to their regions. This research addressed this gap by developing a chatbot application suitable for the Kenyan region for counselling of GBV victims using both Kiswahili and English, the languages predominantly used in the country, in addition to including contacts for referral services within the country. The methodology involved the development of a chatbot application based on the Rasa open source AI framework by training a model using a pre-processed counselling dataset. The performance of the model was evaluated using the NLU confidence score to determine the model's certainty in its intent identification, and a confusion matrix was generated which, with an 80%/20% training and testing data split, resulted in 100% accuracy at the classification threshold. Python's fuzzy matching Token Set Ratio score was also used to determine the response that best matches the input, with results indicating satisfactory performance of the model, ranging between 63% and 92% for GBV queries. The developed model was then integrated into a web application as the user interface for user access and interaction with the model, hence achieving the research objective of developing a chatbot application to conduct counselling for GBV victims in Kenya using English and Kiswahili. Keywords: Gender-based Violence, stigma, chatbot, Rasa open source, NLU Confidence Score, Fuzzy Matching Token Set Ratio score
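The fuzzy response-selection step described in this abstract can be illustrated with a minimal sketch. It assumes the Token Set Ratio scoring is provided by the `thefuzz` library (the abstract does not name the exact package), and the question/answer pairs are placeholders rather than material from the study's counselling dataset.

```python
# Minimal sketch: pick the stored counselling response whose prompt best matches
# the user's message by token-set-ratio score. Library choice and data are assumptions.
from thefuzz import fuzz  # pip install thefuzz

# Placeholder Q&A pairs standing in for the pre-processed counselling dataset.
qa_pairs = {
    "What is gender based violence?": "GBV is any harmful act directed at a person because of their gender.",
    "Where can I get help after experiencing GBV?": "Please reach out to a local GBV referral centre or helpline for support.",
}

def best_response(user_input: str, threshold: int = 60):
    """Return (score, response) for the best token_set_ratio match, or None below the threshold."""
    score, answer = max((fuzz.token_set_ratio(user_input, q), a) for q, a in qa_pairs.items())
    return (score, answer) if score >= threshold else (score, None)

print(best_response("what does gender based violence mean"))
```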
- A Blockchain tool to detect and mitigate e-book piracy: a case study of Kenya (Strathmore University, 2023) Nzangi, J. M. Numerous online e-book markets have emerged along with the growth of e-book readers. This has also increased the speed and ease with which people share books. As a result, piracy has been skyrocketing, since there is no security mechanism for shared books that restricts a purchased copy to a single owner. Globally, e-book piracy has been a significant setback for publishers, as the available solutions cannot offer the necessary content protection. A good example is strict Digital Rights Management (DRM), which occasionally annoys genuine readers by preventing them from accessing their books or forcing them to forfeit ownership if the platform is shut down. Publishers, online platform providers, and writers currently comprise the e-book market. E-book piracy has real-world consequences that affect both publishers' and authors' bottom lines and their ability to produce more books. This work developed a non-fungible-token-based e-book platform that enables writers to self-publish e-books and sell them without the risk of piracy. NFTs, or non-fungible tokens, are digital assets that stand in for real-world things like artwork, collectibles, and game assets. NFTs use blockchain and smart contracts as their underlying digital infrastructure. When published, each book will have a separate non-fungible token (NFT) attached to it. The study used a trusted and secure e-book transaction system that meets the following security requirements: license verification for each e-book, content confidentiality, right-to-read authorization, authenticating a genuine buyer, confirming the validity and integrity of e-book contents, direct purchase safety, and preventing e-book piracy and illegal downloading. The developed solution will be a lifesaver for the e-book industry in Kenya and other regions worldwide, since it offers an easy way for readers and authors to make secure e-book transactions with zero risk of piracy or denial of access for legitimate users. Keywords: Blockchain, E-book, Non-Fungible Tokens, Piracy, Smart Contracts.
- A Credit scoring model for mobile lending (Strathmore University, 2024) Oindi, B. An exponential increase in mobile usage has given most Kenyans easier access to mobile loans, creating a lifeline for those excluded by traditional financial institutions. This easier way to borrow, however, comes with risks, the major one being borrower default. This creates a need for credit scoring, which plays a crucial role in lenders' decision-making to determine borrowers' creditworthiness, thereby minimizing credit risk and managing information asymmetry. In mobile lending, borrowers' financial information is usually limited, making machine learning a favourable tool for credit assessment. Traditionally, the process required statistical algorithms and human assessment, which fall short when subjected to large datasets and are time-consuming. The traditional methods also struggle to adjust to changes in borrowers' behaviour. Against this backdrop, this research developed a novel credit scoring model for mobile lending using the Random Forest, XGBoost, LightGBM, CatBoost, and AdaBoost algorithms. SMOTE was used to address the class imbalance problem. The best model achieved an accuracy of 86%. The research further analyzes the challenges in credit scoring and reviews related works by several authors. The research also looked at the feature importance of the models, which effectively explained the models' behaviour. This model can analyze vast volumes of data which would otherwise be resource-intensive to process manually. The machine learning model was then deployed into a Streamlit web application with a user interface where real-time predictions are made based on borrower data. The model can give lenders insights into determining borrowers' creditworthiness and enable them to make informed decisions before lending. Keywords: Mobile loans, Credit Scoring, Probability of Default, Machine Learning, Statistical Algorithms, SMOTE
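As a rough illustration of the imbalance handling and ensemble modelling this abstract describes, the sketch below oversamples the minority class with SMOTE and trains an XGBoost classifier. It assumes scikit-learn, imbalanced-learn and XGBoost; the dataset file and column names are hypothetical, not the study's data.

```python
# Sketch: SMOTE oversampling on the training split, then an XGBoost classifier.
# Dataset file and column names are hypothetical placeholders.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("mobile_loans.csv")                      # hypothetical dataset file
X, y = df.drop(columns=["defaulted"]), df["defaulted"]    # "defaulted" assumed encoded as 0/1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample the minority (defaulter) class on the training data only,
# so the test set keeps its natural class distribution.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_res, y_res)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Feature importance, as inspected in the study, can be read off the fitted model.
print(pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False).head())
```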
- A Customer churn prediction and corrective action suggestion model for the telecommunications industry using predictive analytics (Strathmore University, 2024) Wanda, R. K. The telecommunications industry is significantly susceptible to customer churn. Customer churn leads to loss of customer base, which in turn leads to reduced revenue, reduced profit margins, increased customer acquisition costs and loss of brand value. Mitigating the effects of customer churn has proved to be a tall order for many organizations in the telecommunications industry. Most companies employ a reactive approach to customer churn and thus do not take any corrective action until the customer has left. This approach does not enable organizations to anticipate and prevent potential churn before it occurs. Alternatively, some organizations employ a more proactive approach to mitigate customer churn through predictive analytics. Although this approach is more effective, it only predicts which customers will churn without recommending the appropriate corrective action. In this dissertation, a customer churn prediction and corrective action suggestion model using predictive analytics was implemented to predict churn and suggest appropriate corrective actions. The IBM telco customer churn dataset, accessed via API from the OpenML (openml.org) website, was used for this study. The dataset was subjected to pre-processing and exploratory data analysis to gain valuable insights into the data. To enhance the reliability of the developed model, an 80/20 train/test split was applied to the dataset. The training dataset was then divided into 5 folds before model fitting. Several classification algorithms (Logistic Regression, Gaussian Naive Bayes, Complement Naive Bayes, K-NN, Random Forest and CatBoost) were then fitted with the training data and their performance was evaluated. Logistic Regression achieved a recall of 80% and was selected for system implementation. Logistic regression feature coefficients were then used to determine the appropriate corrective actions. A locally hosted web interface was then developed using the Python Streamlit library to enable users to feed input into the model and get churn predictions and corrective action suggestions. The developed model demonstrated ease of use and high performance and will enable telecommunication companies to accurately predict customer attrition and take appropriate corrective actions, reducing customer attrition's impact on the companies' bottom line. Keywords: churn, machine learning, predictive analytics, telecommunications industry
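A minimal sketch of the pipeline this abstract describes: an 80/20 split, 5-fold cross-validation, logistic regression scored on recall, and coefficient inspection to suggest corrective actions. scikit-learn is assumed; the dataset path is a placeholder and the churn label is assumed to be encoded as 0/1.

```python
# Sketch: 80/20 split, 5-fold cross-validated recall, logistic regression,
# then coefficient inspection to flag churn-driving features.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import recall_score

df = pd.read_csv("telco_churn.csv")              # placeholder for a local copy of the IBM telco dataset
y = df["Churn"]                                  # assumed already encoded as 0/1
X = pd.get_dummies(df.drop(columns=["Churn"]))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000)
print("5-fold recall:", cross_val_score(clf, X_train, y_train, cv=5, scoring="recall").mean())

clf.fit(X_train, y_train)
print("test recall :", recall_score(y_test, clf.predict(X_test)))

# Features with the largest positive coefficients push the churn probability up;
# these are the candidates for corrective action.
coefficients = pd.Series(clf.coef_[0], index=X.columns).sort_values(ascending=False)
print(coefficients.head(5))
```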
- A Machine learning tool to predict early-stage start-up success in Africa (Strathmore University, 2023) Gichohi, B. W. Most start-ups do not celebrate their first year in operation, and only a few survive to see their fifth year. This has been a challenge for all the stakeholders involved. Therefore, an effective tool for predicting the possibility of a start-up surviving its infancy stages and eventually growing into a profitable venture could be a breakthrough for entrepreneurs, innovators, and investors. This study assessed the factors that make early-stage start-ups successful, specifically in Africa, and developed a web-based prototype that uses machine learning algorithms to predict the success of proposed start-ups. The study adopted both a descriptive research design and applied research. Data was collected from a secondary source, CrunchBase, a global investor platform, and formed the basis for the development of the prediction tool. The tool was designed to predict the success or failure of start-ups based on the collected data. To ensure the accuracy and reliability of the prediction model, 80% of the collected data was used for training the model, while the remaining 20% was utilized for testing and validation purposes. The model development employed an Artificial Neural Network (ANN) algorithm, known for its capability to analyze complex patterns and relationships in data. The developed model achieved an accuracy of 86.81%, indicating its effectiveness in predicting the success of start-ups. The tool was implemented using Flask, a Python web framework, along with other Python machine learning frameworks such as Keras and scikit-learn. This allowed for the development of a user-friendly and interactive web-based prototype. A number of users were provided access to the tool for usability testing, and their feedback indicated that the tool was intuitive, easy to use, and effective in predicting the success of start-ups. This study successfully developed a web-based prototype using an agile methodology, integrating machine learning algorithms based on Artificial Neural Networks. The prototype demonstrated high accuracy in predicting start-up success, making it a valuable tool for entrepreneurs, innovators, and investors in Africa and beyond. Keywords: Business start-ups, machine learning algorithms, prediction tool, start-up success.
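The ANN classifier described here can be sketched with Keras as below. The network width, feature count and synthetic data are placeholders; the study's actual CrunchBase-derived features and architecture are not reproduced.

```python
# Sketch: a small feed-forward ANN for binary start-up success prediction.
# Feature count and data are synthetic placeholders.
import numpy as np
from tensorflow import keras

n_features = 12                                  # placeholder feature count
X = np.random.rand(1000, n_features)             # stands in for CrunchBase-derived features
y = np.random.randint(0, 2, size=1000)           # 1 = "successful" start-up (placeholder label)

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # predicted probability of success
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)   # 80/20 split as in the study
print(model.evaluate(X, y, verbose=0))
```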
- A Prototype for predicting energy consumption in buildings: a case of commercial office buildings (Strathmore University, 2019) Wachira, Paul Manasse Macharia. Energy consumption remains one of the highest cost areas for businesses, together with facilities, people and equipment, but unfortunately it is the only one that is not carefully monitored. For businesses to be able to manage energy consumption they must first be able to predict future consumption, so as to aid in budgeting and planning for cost reduction strategies. This study proposed an energy consumption prediction prototype to help predict future consumption of energy in commercial office buildings, thus aiding proper budgeting and cost reduction. To develop the prediction model the study used the 2012 CBECS (Commercial Buildings Energy Consumption Survey) dataset hosted by the Energy Information Administration (EIA) of the United States of America. After cleaning and reviewing the dataset, 26 features were selected for feature engineering, which enabled the research to choose the best four features used for training and testing different regression-based machine learning algorithms. Using R2 (R squared), MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) to determine performance, the study selected Gradient Boosting Machines as the best algorithm for the prototype, with an accuracy of 97%. The Python packages Pandas, NumPy, Matplotlib, Seaborn and scikit-learn were used in data cleaning, descriptive statistics, feature engineering, data visualization, and training and testing the machine learning algorithms for the energy consumption prediction model. The prototype was developed using Flask (a Python micro web framework) to enable building owners to provide the prototype, via a web browser, with data related to the four features selected for energy consumption prediction. A usability test was done, with 48.1% of the users strongly agreeing and 44.2% agreeing that they would use the prototype in future for prediction of electricity consumption in their buildings.
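As a sketch of the regression set-up this abstract describes, the snippet below trains a gradient boosting regressor on synthetic data with four features and reports R2, MAE and RMSE. scikit-learn is assumed; the data and coefficients are illustrative only.

```python
# Sketch: gradient boosting regression scored with R2, MAE and RMSE on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                  # the study kept four engineered features
y = 10_000 * X[:, 0] + 5_000 * X[:, 1] + rng.normal(0, 100, 500)   # synthetic consumption target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

gbm = GradientBoostingRegressor().fit(X_train, y_train)
pred = gbm.predict(X_test)

print("R2  :", r2_score(y_test, pred))
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```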
- A Prototype of a virtual union catalogue for Kenya Library and Information Service Consortium (KLISC) member libraries. Gichiri, Peter Mwangi; Marwanga (Dr.), Reuben. Kenya lacks a national library union catalogue. As a result, researchers contend with an overwhelming array of independent catalogues whenever they want to do inter-library research. Most libraries in Kenya are individually uploading their catalogues to the World Wide Web. Although this is a positive development, it does not effectively address the role and nature of bibliographic information sharing. A fully functional national union catalogue remains the ultimate solution to inter-library research. This work involved gathering requirements, designing and developing a prototype of a virtual union catalogue for Kenya Library and Information Services Consortium (KLISC) member libraries. We used online questionnaires generated using SurveyGizmo to gather data that informed the design of the virtual union catalogue gateway. The survey period covered 14th December 2010 at 7.00 am to 24th February 2011 at 12.00 pm. This work reveals the state and capacities of different KLISC member libraries to participate in the virtual union catalogue, together with suggestions on its design architecture. The Search/Retrieve via URL (SRU) query interface architecture was used to develop a functional virtual union catalogue prototype for KLISC member libraries. This data retrieval system model was adapted from Purdue University in Indiana, United States of America. The prototype uses a single query form to search individual libraries one at a time. Performing searches from a single portal provides a one-stop shop for bibliographic data held by KLISC member libraries. This improves record retrieval and enhances inter-library loan services, greatly reducing the costs and effort incurred during inter-library loan operations. The achievement is notable in the realization of a union catalogue for the Kenya Library and Information Services Consortium (KLISC).
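A minimal sketch of a single SRU (Search/Retrieve via URL) request of the kind the prototype issues against one member library's catalogue. The endpoint URL is hypothetical; the query parameters and response parsing follow the standard SRU 1.1 conventions rather than any configuration from the study.

```python
# Sketch: one SRU searchRetrieve request against a (hypothetical) library catalogue endpoint.
import requests
import xml.etree.ElementTree as ET

SRU_BASE = "http://catalogue.example-library.ac.ke/sru"   # hypothetical endpoint

params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'dc.title = "information retrieval"',
    "maximumRecords": "10",
}

response = requests.get(SRU_BASE, params=params, timeout=30)
root = ET.fromstring(response.content)

# Number of matching records, as returned in the standard SRU response envelope.
ns = {"sru": "http://www.loc.gov/zing/srw/"}
print(root.findtext("sru:numberOfRecords", namespaces=ns))
```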
- An Air quality prototype for monitoring greenhouse gas emissions (Strathmore University, 2021) Ngugi, Maureen Njeri. In the world today, every human being wishes to live in a healthy, unpolluted and sustainable environment, because such a clean environment enables one to thrive and be productive in all aspects. Such environments are free from anything that may cause disease or physical injury. Unfortunately, as years go by, our world has faced environmental degradation, global warming and high levels of pollution. This has not only affected wildlife and ecosystems in various parts of the world but has also affected human health, as is evident from the respiratory diseases that have emerged, such as pneumonia and bronchitis. This dissertation presents research work that focused on greenhouse gas emissions, which are a contributing factor to environmental degradation. It is important to monitor the amount of greenhouse gases in the atmosphere, as this enables individuals, governments and environmental bodies to take action to tackle these emissions. This research used a prototyping methodology to develop an air quality monitoring system for greenhouse gases in the atmosphere, integrating IoT with wireless sensor networks. Collected data was uploaded to a cloud platform using the Blynk API, which relayed real-time information to a mobile device. The developed prototype achieved 95% accuracy. The developed system can be used by individuals and environmental bodies to draw up strategies on how to lower greenhouse gas emissions and adopt greener technologies that will benefit the environment and support a sustainable ecosystem.
- An Algorithm for predicting road accidents based on traffic offence data (Strathmore University, 2017) Jwan, Levice Obongo. Drivers with multiple records of road traffic violations, for instance speeding, driving under the influence of alcohol and using mobile phones while driving, have been considered a high-risk group for possible involvement in road accidents. Studies have shown that there are links between these reckless behaviours and road accidents. It is therefore critical that such drivers be identified early to reduce that likelihood. Currently, the road traffic offence data collected by the National Transport and Safety Authority, for instance speeding and drunk-driving data, is used solely for reporting and prosecution and hence is not adequately utilized in ensuring road safety. Effective utilization of these data can positively impact road safety management, since authorities can put in place mitigation mechanisms to prevent frequent road accidents. The algorithm-based system developed in this study makes use of traffic offence data to predict the likelihood of a driver causing a road accident. Data was gathered using close-ended questionnaires and interviews, which were intended to determine the causes of road accidents and specific aspects of booking an offender, relaying traffic accident data, and the need for such a system among users within the transport sector in Kenya. Three categories of respondents were used: the National Transport and Safety Authority, the Kenya Police and motorists. Similar questionnaires were given to the police and the NTSA officials, while the motorists had their own set of questions. From the research, it emerged that the major causes of accidents in Kenya were speeding, dangerous overlapping and drunk driving. Of the 37 respondents, 22 supported the algorithm-based system, indicating a 59.46% approval rate. The implication of the research is that more people will be booked for traffic offences and it will be possible for law enforcement to know the risk level of a driver based on the offences committed.
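The abstract does not detail the prediction algorithm itself, so the sketch below shows one hypothetical reading: a weighted risk score over a driver's recorded offences. The weights and thresholds are illustrative, not taken from the study.

```python
# Hypothetical weighted risk score over a driver's offence history.
# Weights and thresholds are illustrative only.
OFFENCE_WEIGHTS = {"speeding": 3, "drunk_driving": 5, "phone_use": 2, "dangerous_overlapping": 4}

def risk_level(offence_counts: dict) -> str:
    """Map a driver's offence counts to a coarse accident-risk band."""
    score = sum(OFFENCE_WEIGHTS.get(offence, 1) * count for offence, count in offence_counts.items())
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(risk_level({"speeding": 2, "drunk_driving": 1}))   # -> "high"
```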
- An Analysis of ICT-based e-learning assistive technologies for blind students: a case study of Kenyatta University (Strathmore University, 2009) Mbui, Samuel Mwangi. The power of ICT in facilitating learning has increased in the recent past. ICT has enabled many people to access educational resources; in particular, it has enabled e-learning programs in many universities the world over. However, ICT-based e-learning poses accessibility issues for blind students, who cannot access e-learning resources through the conventional methods used by sighted students. E-learning platforms such as Ilearn, Moodle, eXe or WikiEducator have been commonly used to post e-learning resources. Several assistive technologies, such as the JAWS screen reader, the Dolphin pen, and braille display software, have been developed to aid blind students in accessing e-learning resources. However, it has not been established whether these assistive technologies can help the blind access all educational resources posted on e-learning platforms. The study therefore analyzed the ICT-based assistive technologies that were being used at Kenyatta University (KU) to incorporate blind students in e-learning programs, assessed the effectiveness of these assistive technologies, and identified technological challenges faced when incorporating blind students in the e-learning program. The study also proposed an abstract framework for addressing the challenges identified. The study adopted a diagnostic case study at KU targeting the academic staff in the special education department, the ICT resource centre for the visually impaired, and the blind students. The study established that most blind students pursued courses that were not their first choice due to the accessibility issues posed by assistive technologies, which limited the level of education the blind could achieve. The study concludes that the available assistive technology was not adequate to fully develop the academic potential of the blind. The researcher recommended the development of an assistive technology that could access and display coded 3-dimensional graphics that could be interpreted through touch, and suggested a framework to develop that technology.
- Analyzing error detection performance of checksums in embedded networks (Strathmore University, 2016) Mirza, Ali Naqi. Networks are required to transport data from one device to another with adequate precision. For a majority of applications a system is expected to ensure that received data and transmitted data are uniform and consistent. Many elements can change one or more bits of a message sent; applications therefore need a procedure for identifying and correcting errors during transmission. Checksums are frequently used by embedded networks for the purpose of identifying errors in data transmission, but decisions regarding which checksum to utilize for error detection in embedded networks are hard to make, since there is an absence of statistics and knowledge about the comparative usefulness of the choices available. The aim of this research was to analyze the error detection performance of the checksums frequently utilized: XOR, one's complement addition, two's complement addition, Adler checksum, Fletcher checksum, and Cyclic Redundancy Check (CRC), and to assess the error detection effectiveness of checksums for those networks prepared to give up error detection efficiency in order to curtail computational costs, for those wanting a balance between error identification and costs, and finally for those networks ready to bear elevated costs for notably enhanced error detection. Even though there is no one-size-fits-all method available, this research gives recommendations on which checksum approach to adopt. A bit-flip fault model with synthetic error simulations was utilized for this research. The mathematical technique used for the proposed fault model was Monte Carlo simulation using the Mersenne Twister random number generator. This study concludes that the error identification performance of the XOR and Adler checksums for random independent bit and burst errors is below an acceptable level; rather, 1's and 2's complement checksums should be utilized for networks prepared to surrender error identification efficiency in order to minimize the cost of calculations. The Fletcher checksum should be utilized by networks wanting symmetry between computational costs and error identification, and CRCs by networks prepared to bear greater costs of computation for notably enhanced error identification.
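The kind of experiment described here can be illustrated with a small Monte Carlo sketch: compute a Fletcher-16 checksum over a message, flip a random bit, and count undetected errors. Python's Mersenne Twister based `random` module is used; the message length and trial count are illustrative, not the study's parameters.

```python
# Sketch: Fletcher-16 checksum plus a Monte Carlo single-bit-flip detection test.
import random

def fletcher16(data: bytes) -> int:
    """Fletcher-16 over the message bytes (two running sums modulo 255)."""
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

def flip_random_bit(data: bytes, rng: random.Random) -> bytes:
    """Flip one randomly chosen bit in a copy of the message."""
    buf = bytearray(data)
    i = rng.randrange(len(buf))
    buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

rng = random.Random(42)                          # Mersenne Twister under the hood
msg = bytes(rng.randrange(256) for _ in range(64))
original = fletcher16(msg)

trials, undetected = 100_000, 0
for _ in range(trials):
    if fletcher16(flip_random_bit(msg, rng)) == original:
        undetected += 1
print(f"undetected single-bit flips: {undetected}/{trials}")
```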
- Application of fingerprint authentication to fortify child safety in school transport (Strathmore University, 2024) Mutuku, S. W. The safety of school-going children has been a great concern to parents, school administrations and transport teams in the recent past. In urban areas like Nairobi, where most parents are busy working and crime is fast increasing, the need for efficient and safe transport for pupils cannot be underestimated. Most current school transport systems use NFC cards or manual attendance records to keep track of the children picked up in the morning or dropped off after school. Using manual attendance is time-consuming, especially where there are many students, while NFC cards can be lost or misplaced, creating a security loophole if they are picked up by someone else who then manages to access the transport. This research uses fingerprint authentication for both learners and staff, where fingerprints are captured and the database queried to authenticate the learner or staff member. The choice of technology is inspired by the fact that fingerprints are unique to every individual, adult or child. The research used the Rapid Application Development (RAD) methodology because it is more flexible in accommodating the changing nature of requirements which are not well defined in the initial stages. The requirements are implemented in the system in separate prototypes until the final prototype is developed; this also allows for fast user feedback and speeds up delivery. Learners' existing records will be used as input to the system and will be combined with the children's fingerprints, then stored in a database. Convenience sampling was used in the research to obtain simulated data. Keywords: Biometrics, safety, fortification, school transport, Facial Emotion Recognition, Biometric Fingerprint scanner, Geofencing
- An Assessment of the audit trail in pension management information systems: a case study of the pension department (Strathmore University, 2012) Mboni, Alphaxard Kyalo. The purpose of this study was to assess the existing audit trail model of the Pension Management Information System (PMIS) in the Pensions Department of the Ministry of Finance, Kenya, used to track the processing of pension claims. The study assessed the weaknesses of the existing audit trail model in use, identified and proposed an audit trail model that will address the needs of the PMIS, and identified ways of improving the use of the audit trail. The study was purposive, targeting the key informants: Pension Directors, PMIS Administrators, Pension Accountants, and Pension Clerks dealing with the PMIS in various units. The retirees were reached through the snowball technique. Both quantitative and qualitative data collection techniques were used; secondary data was also collected from existing newsletters, office memos, textbooks and journals. Quantitative data from questionnaires was analyzed and interpreted with the help of the Statistical Package for the Social Sciences (SPSS). Qualitative analysis was undertaken through summarizing, categorization and structuring of meaning by drawing relationships in the study results of the audit trail model. The study found that the audit trail in the PMIS does not keep a chronological record of events; the information captured is scanty and cannot help Pension Directors make informed decisions. Further, the study found that the audit trail in the PMIS, as used by the pension management, serves only security purposes and tracking of the whereabouts of a particular file. On the basis of information gathered during the research, a more appropriate audit trail model is proposed. The research also provides a foundation for future research on audit trails in pension systems of developing countries and on pension system policies. The conclusions and recommendations on the audit trail model provide the Government, development partners, consultants, retirees and other stakeholders with a clear idea of the contribution it can make, and how it can assist in prioritization and planning of resource allocation in the Pensions Department and towards realizing Vision 2030.
- Asset management system in the Government of Kenya (2013-11-14) Wambugu, Ndumia P. The purpose of this study was to assess the benefits of adopting an asset management system in the public sector in Kenya. The study was guided by the following specific objectives: to determine the present state of asset management, determine the role of an asset management system, analyze the present model for implementing an asset management system, and determine how the present model can be improved for successful implementation of an asset management system in the Government of Kenya. The research design for this study was a case study design. The population of interest was employees of the Ministry of Finance. Stratified random sampling was used to select 70 respondents across the strata. Both quantitative and qualitative data were used in this study. Quantitative data was collected using a semi-structured questionnaire, and qualitative data was collected by the use of interview guides. Secondary data sources included textbooks, journals, newsletters and asset management implementation model manuals. The Statistical Package for Social Science Software (SPSS) was used to help in the analysis of quantitative data. The study found that asset management in the government was not integrated across departments and that the procurement department was most entrusted with the role of asset management. The study observed that the previous AMS implementation attempt did not observe the set timelines due to lack of policy, lack of inclusion in the implementation process and lack of 'buy in' by the employees. Poor integration of activities within departments, lack of standardization in the categorization/cataloguing of assets, lack of AMS flexibility in its operation and poor design of the asset management model were the challenges found to hamper the AMS implementation process. The study concludes that the AM information system in the government faces various design, operation and implementation challenges that have led to it stalling. AM in the government remains exclusively a decision-making process. The system also fails to integrate various departments, whether for regular information flow, monetary or occasional information.
- Business process automation for Legislative and Procedural Services in the National Assembly: a case of the Kenyan Parliament (Strathmore University, 2018) Nduati, Paul Njaaga. Parliament, as an institution and as the legislative arm of government, still uses paper-based office processes to conduct its business. This accentuates the problem of meeting deadlines stipulated by the constitution, and failure to meet these deadlines results in Parliament forfeiting its mandate. This research proposed to develop a prototype automating the table office processes in the Legislative and Procedural Services (LPS) Department of the Kenyan Parliament. Secondary research on existing literature was gathered and analysed on key implementations in parliamentary systems, mainly Portugal, Armenia and the United Nations, and a conceptual framework was developed based on the secondary data. Primary research on the viability of such a system was done incorporating key stakeholders in the Kenyan legislature. Finally, a prototype was proposed, designed and developed. The system prototype aims to improve productivity and efficiency by creating and automating workflows derived from the paper-based processes in the LPS department of the Kenyan Parliament. The goal of the system is to have a paperless office work environment for parliamentary staff. The system was tested by the research participants from the Legislative and Procedural Services Department of the Kenyan Parliament. Further changes and improvements were proposed, such as the inclusion of biometric processes as well as the development of inter-governmental systems to automate communication between different parliaments. All these steps showed that table office automation can be a great improvement on the existing processes in the Kenyan Parliament as well as globally.
- Business Processes Reengineering and Change Management in Public Sector: a case of implementing an Integrated Financial Management Information System (IFMIS) in the Ministry of Finance, Kenya (Strathmore University, 2011) Kaua, Pius Muchai. The Government of Kenya, through the Ministry of Finance, has been implementing IFMIS since the year 2003, intended to bring sanity to public spending. The project is intended to cover budgeting, accounts, procurement, asset management and project management modules. Currently only the accounts and procurement modules have been implemented in all the accounting units, with implementation done in a phased approach. The accounting modules of the project are aimed at increasing the efficiency of accounting services, timely reporting and integrating all its business functions. Efficiency and accountability have presented themselves as major challenges across all public expenditure entities. It is for these reasons that ERP (Enterprise Resource Planning) vendors such as Oracle and SAP, amongst others, have come up with solutions that are meant to address these problems. The study focused on examining how IFMIS has managed Business Process Re-engineering (BPR) of accounting operations in government and the challenges faced in the process of change management. In addition, the research sought to establish how to manage change resulting from business process re-engineering. Specifically, the study examined the aims and objectives of the IFMIS to determine whether the system is addressing the problems it was meant to solve. The study used descriptive and exploratory research methods. The respondents were drawn from three departments in the Ministry of Finance, who are the key users in the accounting unit. The study established that business process re-engineering, change management, top management support and development of an implementation plan are critical success factors in ERP implementation. The implementation of IFMIS led to an increase in work efficiency due to increased accountability and work done within the time limit.
- Centralized public parking management - case study of County Government of Nairobi (Strathmore University, 2020) Kang'ethe, Evans Mungai. Commuting in Nairobi is part of life for anyone living within the city, and this has seen exponential growth in the number of vehicles that operate within the county. The county government is in charge of controlling parking spaces, which are limited. The methods used are mostly manual, alongside several automated parking systems which also have limitations and are highly inefficient. Manual processes are lengthy, with low accuracy of actual operations and accountability, and involve a lot of manpower, which is costly, inconsistent and inefficient. This affects the productivity of the economy since time and revenue are lost. This research is aimed at evaluating the current system and finding ways to make it effective, efficient and convenient for both the county's citizens and the government. Information will be collected on the existing method of parking management and analyzed to establish current gaps, and a recommendation of the best approach will be presented. There is a need for a holistic approach to managing parking in the County of Nairobi: a system that aggregates all parking slots centrally and identifies each uniquely. The system will be accessible from anywhere using a web-based interface. Drivers in the county will be able to log in and reserve slots at a defined time on a first-come, first-served basis. This will coordinate traffic flow more efficiently, since the system is able to predict the estimated number of vehicles expected in the city per unit of time, apart from those that will be in transit and not stopping. It will improve the approach used to coordinate parking, the people involved, the effort required and the technology that is currently being used. Administration of the system will be centralized and accountability methods stringent.
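A minimal, hypothetical model of the proposed design: centrally aggregated, uniquely identified parking slots with first-come, first-served reservations. Persistence, authentication, payments and demand prediction are omitted, and all identifiers are made up.

```python
# Sketch: central registry of uniquely identified slots with first-come, first-served reservations.
from dataclasses import dataclass, field

@dataclass
class ParkingRegistry:
    slots: dict = field(default_factory=dict)       # slot_id -> reserving driver_id (None if free)

    def add_slot(self, slot_id: str) -> None:
        """Register a uniquely identified slot as free."""
        self.slots.setdefault(slot_id, None)

    def reserve(self, slot_id: str, driver_id: str) -> bool:
        """Reserve a slot if it exists and is still free; the first request wins."""
        if slot_id in self.slots and self.slots[slot_id] is None:
            self.slots[slot_id] = driver_id
            return True
        return False

registry = ParkingRegistry()
registry.add_slot("CBD-A-001")
print(registry.reserve("CBD-A-001", "driver-42"))   # True
print(registry.reserve("CBD-A-001", "driver-99"))   # False, already taken
```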
- A Conceptual data mining model (DMM) used in Selective Dissemination of Information (SDI): case study - Strathmore University Library (Strathmore University, 2010) Ambayo, Jackson Alunga. The process of locating and acquiring relevant information from libraries is getting more complicated due to the vast amount of information resources one has to plough through. To serve users purposefully, a library should be able to avail to users tools and services that will lessen the task of searching for documents and be more of an information provider. A data mining model that would be used in the selective dissemination of information is proposed, with the purpose of linking users' information needs to the available and relevant information materials. This requires technologies much like search engines, specialised at rummaging through library databases and mining bibliographical entries and user details, to come up with what could be the closest to determining and anticipating user patterns and demand within libraries. A case study approach was taken to the collection and analysis of data. A random sampling technique facilitated the choice of 100 library users and library staff, from whom data was collected using self-administered and researcher-administered questionnaires. Data was analyzed and presented using descriptive statistics, cross-tabulations and graphs, with the use of the Statistical Package for Social Sciences (SPSS) version 12. It was ascertained that there is a demand for relevant information amongst the students, with academic, financial and employment information being the most sought. The OPAC and classmate referrals were the most relied-upon information-seeking mechanisms. The majority of the users were of the opinion that they would prefer a system that offers them the choice of selectively acquiring relevant resources and being informed of their availability, as well as doing their own searches.
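One way the proposed matching of user needs to new catalogue entries could work is sketched below using TF-IDF cosine similarity (scikit-learn assumed). The study does not prescribe this particular technique, and the profile and titles are invented examples.

```python
# Sketch: score new catalogue entries against a user's interest profile with TF-IDF
# cosine similarity, and flag those above a relevance threshold for dissemination.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_profile = "financial accounting employment opportunities in Kenya"   # invented profile
new_entries = [
    "Introduction to financial accounting",
    "Organic chemistry laboratory manual",
    "Graduate employment trends in East Africa",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform([user_profile] + new_entries)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Notify the user about entries above an (illustrative) relevance threshold.
for title, score in zip(new_entries, scores):
    if score > 0.1:
        print(f"{score:.2f}  {title}")
```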
- Critical success factors for SAP implementation: a model for implementing SAP projects in Kenya (Strathmore University, 2009) Muthamia, Charles Kirimi. ERP implementation is huge in terms of financial and human resource requirements. It is therefore important that proper planning of any ERP project is undertaken to ensure that it achieves its initial objectives. The success of any information systems project is determined by the methods, tools and procedures adopted in its implementation; it is therefore critical that appropriate methods, best practices, tools and procedures are used in project implementation to increase the chances of success. This study focused on Systems Applications Product (SAP) implementation in Kenya and sought to understand the critical success factors for SAP project implementation and the best practices for SAP implementations, and finally to come up with a model for SAP implementation in Kenya. The study established that top management support, effective project management, business process re-engineering, data conversion and a phased project implementation approach are critical to success in SAP project implementation in Kenya. Political interference was noted as a major cause of project delays in Kenya. Accelerated SAP (ASAP), an implementation methodology for SAP, is preferred for SAP implementations in Kenya, as it incorporates all the best practices required for a successful SAP implementation. This methodology was used by more than 90% of the companies that had implemented SAP in Kenya. The study finally recommends a model for SAP implementation that requires senior management to develop and implement a strategy that directs the middle management level in laying out a project and change management infrastructure. Operational teams from various business functions then design the business procedures to accommodate business operations in SAP.
- A Deep learning-based system for de-identification of personal health information on mobile devices (Strathmore University, 2021) Musila, Daniel Mutiso. Communication in healthcare has evolved from older technologies like pagers to present-day smartphone devices. The change has been largely driven by the capability of smartphones to facilitate information exchange at greater speed and efficiency to manage the rising patient numbers, complexity of cases and the multiple disciplines in modern medicine. Instant messaging services like WhatsApp offer a channel which meets most of these needs. This communication often involves exchange of patient clinical data containing Protected Health Information (PHI). Various laws and policies have been enacted in various geographies and jurisdictions to safeguard the confidentiality of patients through strict management of PHI. During the normal course of care provision, healthcare professionals and organizations are expected to maintain full confidentiality and integrity of the data against unauthorized exposure. Whenever patient data needs to be shared with external parties for research use, informed consent must be obtained from the data subjects along with an oversight of their activities by a relevant review board. The widespread use of smartphones and popular instant messaging applications in modern healthcare however presents security and data protection challenges which need urgent addressing. De-identification of the data offers an avenue to address these concerns, allowing clinical data containing PHI to be shared among healthcare providers and/or researchers with minimized risks. Deep learning de-identification systems demonstrate superior performance over other approaches. They are generally deployed on high-end workstations in medical facilities and research centres, or on cloud-based infrastructure. However, on-premises deployments present infrastructural, connectivity and cost implications while cloud de-identification services may involve transmitting sensitive data across different jurisdictions therefore potentially breaching data residency regulations. On the other hand, smartphone use worldwide continues to see incredible growth with mobile processors becoming more powerful and versatile. Deep learning models can be deployed on Android-based smartphones to perform complex tasks such as de-identification of PHI. This is in line with the growing interest and research in edge computing, where computations are carried out as close to data sources as possible as an alternative to cloud computing. Concretely, this research proposes a mobile-based de-identification system, in which the deep learning model is optimized and embedded onto a smartphone application from which de-identification can be done. Specifically, Long Short-Term Memory (LSTM) artificial neural networks will be leveraged to develop a deep learning model which can then be ported onto the Android operating system to be embedded into a mobile de-identification application.
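A rough sketch of the approach this abstract proposes: a small bidirectional LSTM sequence tagger for PHI tokens, converted with the TensorFlow Lite converter for on-device inference. Vocabulary size, tag set and shapes are placeholders, and the conversion options shown are one common configuration rather than the study's exact pipeline.

```python
# Sketch: BiLSTM PHI tagger converted to TensorFlow Lite for Android deployment.
# Sizes, tag set and file names are placeholders.
import tensorflow as tf
from tensorflow import keras

vocab_size, max_len, n_tags = 5000, 64, 3        # e.g. tags: O / NAME / DATE (placeholder)

model = keras.Sequential([
    keras.layers.Input(shape=(max_len,)),
    keras.layers.Embedding(vocab_size, 64),
    keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True)),
    keras.layers.TimeDistributed(keras.layers.Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train on an annotated PHI corpus here ...

# Convert the trained model for on-device (Android) inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,              # standard TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,                # fallback for ops the LSTM may require
]
with open("deid_model.tflite", "wb") as f:
    f.write(converter.convert())
```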