Adjunct (assistant professor) at Jagiellonian University (May 2022 - January 2024), where I researched complex social systems such as urban mobility. I have always been curious about technological innovations, which led me to earn a Ph.D. in Computer Science & Engineering. I strive for a thorough understanding of cloud computing and big data and am always eager to learn something new. In my free time I enjoy writing small programs, either to automate everyday problems or simply for fun, and I am excited by the seemingly endless possibilities. My research interests and competencies encompass applied aspects of distributed systems, cloud computing, and cloud-centric data acquisition. My work has focused on enabling the cloud for data acquisition, synchronization, and persistence.
My Research Focus:
Cloud-centric IoT |
Data Acquisition & Curation |
Cloud Computing |
Wellness-based Ubiquitous Platforms |
Performance-based Applications |
Software Engineering & Architecture |
Parallel & Distributed Systems |
Urban Mobility
List of main publications and preprints
-
Optimizing Ride-Pooling Revenue: Pricing Strategies and Driver-Traveller Dynamics
Akhtar, Usman,
Ghasemi, Farnoud,
and Kucharski, Rafal
arXiv preprint arXiv:2403.13384
2024
Ride-pooling, to gain momentum, needs to be attractive for all the parties involved. This also includes drivers, who are naturally reluctant to serve pooled rides. This can be controlled by the platform’s pricing strategy, which can stimulate drivers to serve pooled rides. Here, we propose an agent-based framework in which drivers serve the rides that maximise their utility. We simulate a series of scenarios in Delft and compare three strategies. Our results show that drivers, when they maximise their profits, earn more than in both the solo-rides and pooled-rides-only scenarios. This shows that serving pooled rides can be beneficial for drivers as well, yet typically not all pooled rides are attractive to them. The proposed framework may be further applied to propose discriminative pricing in which the full potential of ride-pooling is exploited, with benefits for the platform, the travellers and, which is novel here, the drivers.
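The core mechanism of the framework (drivers picking the offer that maximises their own utility) can be sketched in a few lines. This is an illustrative toy only; the utility form, field names, and the `pooling_penalty` parameter are our assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Ride:
    fare: float          # revenue the driver receives for the ride
    duration: float      # hours needed to serve it
    is_pooled: bool      # shared ride or solo ride

def driver_utility(ride: Ride, pooling_penalty: float = 2.0) -> float:
    """Hourly profit-like utility; pooled rides carry an extra
    discomfort cost (a hypothetical, hand-picked penalty)."""
    penalty = pooling_penalty if ride.is_pooled else 0.0
    return ride.fare / ride.duration - penalty

def choose_ride(offers: list[Ride]) -> Ride:
    # The agent serves the offer that maximises its utility.
    return max(offers, key=driver_utility)
```

With such a rule, a pooled ride is chosen only when its higher fare more than compensates the penalty, which is exactly the lever the pricing strategy controls.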
-
Exploring Computational Complexity Of Ride-Pooling Problems
Akhtar, Usman,
and Kucharski, Rafal
arXiv preprint arXiv:2208.02504
2022
Ride-pooling is computationally challenging. The number of feasible rides grows with the number of travellers and the degree (the capacity of the vehicle to perform a pooled ride) and quickly explodes to sizes that make the problem analytically unsolvable. In practice, heuristics are applied to limit the number of searches, e.g., maximal detour and delay, or (as we use in this study) attractive rides (for which detour and delay are at least compensated by the discount).
Nevertheless, solving the ride-pooling problem remains strongly sensitive to the problem settings. Here, we explore it in more detail and provide experimental underpinning for this open research problem. We trace how the size of the search space and the computation time needed to solve the ride-pooling problem grow with increasing demand and with greater discounts offered for pooling. We run over 100 practical experiments in Amsterdam with 10-minute batches of trip requests of up to 3600 trips per hour and trace how challenging it is to solve the pooling problem with our ExMAS algorithm.
We observed strong, non-linear trends and identified the limits beyond which the problem exploded and our algorithm failed to compute. Notably, we found that the demand level (number of trip requests) is less critical than the discount. The search space grows exponentially and quickly reaches enormous sizes. However, beyond some level, a larger ride-pooling problem does not translate into greater pooling efficiency, which opens the opportunity for further search-space reductions.
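The combinatorial blow-up described above is easy to make concrete: before any attractiveness filtering, every subset of travellers up to the vehicle capacity is a candidate shared ride. This upper bound (a back-of-the-envelope sketch, not the ExMAS search procedure itself) shows why the degree matters so much.

```python
from math import comb

def candidate_rides(n_requests: int, max_degree: int) -> int:
    """Crude upper bound on the ride-pooling search space:
    count every subset of travellers of size 1..max_degree,
    i.e. every ride a vehicle of that capacity could serve."""
    return sum(comb(n_requests, d) for d in range(1, max_degree + 1))
```

For 100 requests and degree 2 the bound is already 5,050 candidates; raising the degree to 4 pushes it into the millions, which is why heuristic pruning of unattractive rides is indispensable.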
-
A cache-based method to improve query performance of linked Open Data cloud
Akhtar, Usman,
Sant’Anna, Anita,
Jihn, Chang-Ho,
Razzaq, Muhammad Asif,
Bang, Jaehun,
and Lee, Sungyoung
Computing
2020
The proliferation of semantic big data has resulted in a large amount of content published over the Linked Open Data (LOD) cloud. Semantic Web applications consume these data by issuing SPARQL queries. One of the main challenges faced when querying the LOD cloud, on account of the inherently distributed nature of LOD, is its high search latency and the lack of tools to connect to SPARQL endpoints. In this paper, we propose an Adaptive Cache Replacement strategy (ACR) that aims to accelerate the overall query processing of the LOD cloud. ACR alleviates the burden on SPARQL endpoints by identifying subsequent queries learned from clients’ historical query patterns and caching the results of these queries. For cache replacement, we propose an exponential smoothing forecasting method to replace the less valuable cache content. In the experimental study, we evaluate the performance of the proposed approach in terms of hit rates, query time and overhead. The proposed approach is found to outperform existing state-of-the-art approaches, increasing hit rates by 5.46% and reducing query times by 6.34%.
-
A dynamic, cost-aware, optimized maintenance policy for interactive exploration of linked data
Akhtar, Usman,
Sant’Anna, Anita,
and Lee, Sungyoung
Applied Sciences
2019
Vast amounts of data, especially in biomedical research, are being published as Linked Data. Being able to analyze these data sets is essential for creating new knowledge and better decision support solutions. Many of the current analytics solutions require continuous access to these data sets. However, accessing Linked Data at query time is prohibitive due to high latency in searching the content and the limited capacity of current tools to connect to these databases. To reduce this overhead cost, modern database systems maintain a cache of previously searched content. The challenge with Linked Data is that databases are constantly evolving and cached content quickly becomes outdated. To overcome this challenge, we propose a Change-Aware Maintenance Policy (CAMP) for updating cached content. We propose a Change Metric that quantifies the evolution of a Linked Dataset and determines when to update cached content. We evaluate our approach on two datasets and show that CAMP can reduce maintenance costs, improve maintenance quality and increase cache hit rates compared to standard approaches.
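The decision at the heart of CAMP (quantify how much a Linked Dataset has drifted and refresh the cache only when the drift justifies the maintenance cost) can be illustrated with a simple set-based metric. The function names and the symmetric-difference formulation here are our illustrative assumptions, not the paper's Change Metric.

```python
def change_ratio(old_triples: set, new_triples: set) -> float:
    """Fraction of the combined triple set that was added or removed
    between two snapshots of a dataset (0.0 = unchanged)."""
    if not old_triples and not new_triples:
        return 0.0
    changed = old_triples ^ new_triples        # symmetric difference
    return len(changed) / len(old_triples | new_triples)

def should_refresh(old: set, new: set, threshold: float = 0.2) -> bool:
    # Update the cached copy only when the dataset has evolved
    # past a (hypothetical) cost-aware threshold.
    return change_ratio(old, new) >= threshold
```

Thresholding on a change metric like this is what turns a fixed maintenance schedule into a dynamic, cost-aware one: stable datasets are rarely touched, while fast-evolving ones are refreshed promptly.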
-
Change-aware scheduling for effectively updating linked open data caches
Akhtar, Usman,
Razzaq, Muhammad Asif,
Rehman, Ubaid Ur,
Amin, Muhammad Bilal,
Khan, Wajahat Ali,
Huh, Eui-Nam,
and Lee, Sungyoung
IEEE Access
2018
The Linked Open Data (LOD) cloud is a global information space with a wealth of structured facts, useful for a wide range of usage scenarios. The LOD cloud handles a large number of requests from applications consuming the data. However, the performance of retrieving data from LOD repositories is one of the major challenges. To overcome this challenge, we argue that it is advantageous to maintain a local cache for efficient querying and processing. Due to the continuous evolution of the LOD cloud, local copies become outdated. In order to utilize resources well, improved scheduling is required to maintain the freshness of the local data cache. In this paper, we propose an approach to efficiently capture the changes and update the cache. Our proposed approach, called application-aware change prioritization (AACP), consists of a change metric that quantifies the changes in LOD and a weight function that assigns importance to recent changes. We also propose an update-policy mechanism, called preference-aware source update (PASU), which incorporates previous estimates of changes and establishes when the local data cache needs to be updated. In the experimental evaluation, several state-of-the-art strategies are compared against the proposed approach. The performance of each policy is measured by computing the precision and recall between the local data cache updated using the policy under consideration and the data source, which serves as the ground truth. Both single-update and iterative-update cases are evaluated in this study. The proposed approach is reported to outperform all the other policies, achieving an F1-score of 88% and an effectivity of 93.5%.
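The evaluation described above compares the cached copy against the live source as ground truth using precision, recall, and F1. As a self-contained sketch (treating both sides as simple sets of facts, which is our simplification), the scoring looks like this:

```python
def cache_quality(cached: set, source: set) -> tuple[float, float, float]:
    """Precision, recall and F1 of a local cache against the live
    data source (the ground truth): precision penalises stale facts
    kept in the cache, recall penalises fresh facts it missed."""
    true_pos = len(cached & source)
    precision = true_pos / len(cached) if cached else 0.0
    recall = true_pos / len(source) if source else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

A cache refreshed with a good scheduling policy keeps both numbers high simultaneously, which is why the combined F1-score is the headline metric for comparing update policies.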