Publications

arXiv 2025
Benchmarking Instance-Level Learnability and Interpretability in Multiple Instance Learning

This study presents a unified benchmarking framework that evaluates MIL algorithms at both bag and instance levels, quantifying performance, learnability, and interpretability. Experiments on synthetic and digital pathology datasets reveal that although bag-level performance is robust across aggregation strategies, instance-level metrics are significantly affected by sample size and feature noise.
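
As a rough illustration of what bag-level versus instance-level evaluation means here, below is a minimal attention-based MIL sketch (not the paper's framework); the model, shapes, and the use of attention weights as instance scores are all illustrative assumptions.

```python
# Illustrative sketch only: a tiny attention-based MIL model whose bag-level
# prediction and per-instance attention weights can be evaluated separately.
# Architecture, shapes, and metric choices are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=16, hid_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.attention = nn.Linear(hid_dim, 1)     # per-instance attention scores
        self.classifier = nn.Linear(hid_dim, 1)    # bag-level head

    def forward(self, bag):                        # bag: (n_instances, in_dim)
        h = self.encoder(bag)                      # (n, hid_dim)
        a = torch.softmax(self.attention(h), 0)    # (n, 1) attention over instances
        z = (a * h).sum(dim=0)                     # attention-pooled bag embedding
        return self.classifier(z), a.squeeze(-1)   # bag logit, instance weights

model = AttentionMIL()
bag = torch.randn(12, 16)                          # one bag of 12 instances
bag_logit, inst_weights = model(bag)
# Bag-level evaluation scores bag_logit against the bag label; instance-level
# evaluation compares inst_weights to instance labels (when available) to
# quantify learnability and interpretability.
print(bag_logit.item(), inst_weights.shape)
```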

ICLR Workshop I Can't Believe It's Not Better 2025
On the Role of Structure in Hierarchical Graph Neural Networks

By perturbing the input graph structure at varying depths of a hierarchical graph neural network, this analysis shows that graph structure is learned in the initial convolutional layers, typically before any pooling is applied. In fact, many popular benchmark datasets for graph-level tasks exhibit only limited structural information relevant to the prediction task, with structure-agnostic baselines often matching or outperforming more complex GNNs. These findings shed light on the empirical underperformance of graph pooling schemes and motivate the need for more structure-sensitive benchmarks and evaluation frameworks.
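
As a rough sketch of the kind of probe involved (not the paper's actual setup), the snippet below rewires a fraction of an input graph's edges and shows a structure-agnostic baseline that ignores edges entirely; the rewiring scheme and all names are illustrative assumptions.

```python
# Illustrative sketch only: probing structure sensitivity by rewiring edges of
# an input graph, next to a structure-agnostic readout that ignores edges.
import torch

def rewire_edges(adj, frac, generator=None):
    """Randomly replace a fraction of the graph's edges with random non-edges."""
    n = adj.shape[0]
    triu = torch.triu_indices(n, n, offset=1)
    edges = adj[triu[0], triu[1]].bool()
    k = int(frac * edges.sum())
    if k == 0:
        return adj.clone()
    on = torch.nonzero(edges).squeeze(-1)
    off = torch.nonzero(~edges).squeeze(-1)
    drop = on[torch.randperm(len(on), generator=generator)[:k]]
    add = off[torch.randperm(len(off), generator=generator)[:k]]
    edges[drop], edges[add] = False, True
    new = torch.zeros_like(adj)
    new[triu[0], triu[1]] = edges.float()
    return new + new.T                             # keep the graph undirected

def structure_agnostic_readout(x):
    """Baseline that sees only the node-feature multiset (no edges)."""
    return x.mean(dim=0)                           # e.g. feed this to an MLP head

adj = (torch.rand(8, 8) > 0.7).float().triu(1); adj = adj + adj.T
x = torch.randn(8, 4)                              # node features
print(rewire_edges(adj, frac=0.5).sum(), structure_agnostic_readout(x).shape)
```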

SIGCOMM 2023
Towards Practical and Scalable Molecular Networks

This work introduces MoMA, a molecular multiple access protocol that enables communication between multiple transmitters and a receiver in molecular networks. It addresses key challenges in molecular communication, such as lack of synchronization and high inter-symbol interference, and scales up to four transmitters in the synthetic testbed evaluation.
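
As a toy illustration of the inter-symbol interference challenge only (this is not the MoMA protocol), the snippet below samples a standard 3D diffusion impulse response and shows how residue from earlier symbols leaks into later slots; the channel model and every parameter are assumed for illustration.

```python
# Toy sketch only (not MoMA): why inter-symbol interference is severe in
# diffusion-based molecular channels. All parameters are illustrative.
import math

def diffusion_response(t, distance=1e-6, D=1e-9, released=1e4):
    """Expected molecule concentration at the receiver t seconds after a release."""
    if t <= 0:
        return 0.0
    return released / (4 * math.pi * D * t) ** 1.5 * math.exp(-distance**2 / (4 * D * t))

symbol_period = 0.2          # seconds between transmissions (assumed)
bits = [1, 0, 1, 1]          # one transmitter's on-off keyed symbols
samples = []
for k in range(len(bits)):
    t_sample = (k + 1) * symbol_period            # sample at the end of each slot
    # Molecules from *all* earlier releases still contribute: this tail is the ISI.
    total = sum(b * diffusion_response(t_sample - i * symbol_period)
                for i, b in enumerate(bits[:k + 1]))
    samples.append(total)
print(samples)   # later slots accumulate residue from earlier "1" symbols
```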

TMLR 2023
Contextual Combinatorial Multi-output GP Bandits with Group Constraints

This paper proposes TCGP-UCB, an algorithm for contextual combinatorial bandit problems with privacy-driven group constraints. It balances maximizing cumulative super arm reward against satisfying group reward constraints, can be tuned to favor one objective over the other, and comes with information-theoretic regret bounds.
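
As a minimal sketch of the UCB principle underlying such algorithms (not the paper's TCGP-UCB), the snippet below uses a GP posterior to form optimistic indices and assembles a super arm greedily under a crude stand-in for group constraints; the constraint handling and all parameters are illustrative assumptions.

```python
# Minimal sketch of GP-based optimism, not the paper's TCGP-UCB: a GP posterior
# gives each context-arm pair an index (mean + beta * std) and a super arm is
# built greedily; the group handling below is only an illustrative placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
history_x = rng.standard_normal((20, 3))          # past (context, arm) features
history_r = np.sin(history_x.sum(axis=1)) + 0.1 * rng.standard_normal(20)

gp = GaussianProcessRegressor().fit(history_x, history_r)

arms = rng.standard_normal((10, 3))               # candidate base arms this round
groups = rng.integers(0, 2, size=10)              # each arm belongs to a group
mean, std = gp.predict(arms, return_std=True)
ucb = mean + 2.0 * std                            # optimism in the face of uncertainty

# Greedily build a super arm of size 4 while keeping at least one arm per group,
# a crude stand-in for the paper's reward-based group constraints.
chosen = []
for g in np.unique(groups):
    in_g = np.where(groups == g)[0]
    chosen.append(in_g[np.argmax(ucb[in_g])])
remaining = [i for i in np.argsort(-ucb) if i not in chosen]
chosen += remaining[: 4 - len(chosen)]
print("selected super arm:", sorted(chosen))
```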

UYMS 2021
A Performance Study Depending on Execution Times of Various Frameworks in Machine Learning Inference

This study benchmarks inference latency across multiple machine learning frameworks using a 2-layer neural network model. The model is implemented in PyTorch and converted to TorchScript and ONNX formats. Inference is performed using LibTorch, ONNX Runtime, and TensorRT on both CPU and GPU. Results show that TensorRT with ONNX delivers the fastest performance, demonstrating its efficiency and potential for deployment scenarios.
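
A minimal sketch of this export-and-benchmark workflow is shown below, assuming a toy two-layer model and a CPU-only ONNX Runtime timing loop; the TensorRT path and GPU setup are omitted, and the model size and file names are placeholders.

```python
# Minimal sketch of the export-and-benchmark workflow: a toy two-layer model is
# traced to TorchScript, exported to ONNX, and timed with ONNX Runtime on CPU.
import time
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
example = torch.randn(1, 128)

# TorchScript export (this artifact is what LibTorch loads from C++).
torch.jit.trace(model, example).save("model_ts.pt")

# ONNX export (consumed by ONNX Runtime and, after parsing, by TensorRT).
torch.onnx.export(model, example, "model.onnx",
                  input_names=["input"], output_names=["output"])

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = example.numpy()
for _ in range(50):                               # warm-up runs
    sess.run(None, {"input": x})
n_iters = 1000
start = time.perf_counter()
for _ in range(n_iters):
    sess.run(None, {"input": x})
elapsed = time.perf_counter() - start
print(f"ONNX Runtime mean latency: {elapsed / n_iters * 1e3:.3f} ms")
```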