Research

Research at the Privacy & Security Lab focuses on the design and analysis of privacy-preserving and secure machine learning systems. Our work integrates differential privacy, neural network security, and data analysis to enable trustworthy AI under rigorous, formally grounded privacy and security guarantees.

Research Directions

Differential Privacy Models

We study differential privacy across the central, local (LDP), and shuffle models, with an emphasis on their privacy–utility trade-offs and the system-level implications of deploying each model in practice.
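
As a minimal illustration of the local model, the sketch below implements binary randomized response, the textbook ε-LDP mechanism, together with the standard debiasing step an aggregator applies to recover the population mean; the function and parameter names are our own, not from any specific system of ours.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Binary randomized response: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it. Satisfies eps-LDP."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit

def estimate_fraction(reports: list, epsilon: float) -> float:
    """Debias the noisy reports to estimate the true fraction of 1s:
    observed = f*(2p-1) + (1-p), so f = (observed - (1-p)) / (2p-1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Example: 10,000 users, 30% hold the sensitive bit, eps = 1.0
true_bits = [random.random() < 0.3 for _ in range(10_000)]
reports = [randomized_response(b, 1.0) for b in true_bits]
print(f"estimated fraction: {estimate_fraction(reports, 1.0):.3f}")
```

The utility gap between this local protocol and central-model noise addition is exactly the kind of trade-off this research direction quantifies.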

Privacy Amplification & Accounting

Our research investigates privacy amplification mechanisms, particularly amplification via shuffling, and develops tight privacy accounting techniques based on advanced composition theorems and Rényi Differential Privacy (RDP).
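
To make the accounting side concrete, the following sketch shows a standard RDP calculation (not our specific accountant): repeated releases of the Gaussian mechanism compose by summing their RDP curves, and the total is converted to an (ε, δ)-DP guarantee via the usual bound ε(α) + log(1/δ)/(α − 1), minimized over the order α. The parameter values and α grid are illustrative.

```python
import math

def gaussian_rdp(alpha: float, sigma: float, sensitivity: float = 1.0) -> float:
    """RDP of the Gaussian mechanism at order alpha:
    eps(alpha) = alpha * Delta^2 / (2 * sigma^2)."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

def compose_and_convert(sigma: float, steps: int, delta: float) -> float:
    """Compose `steps` Gaussian releases under RDP (epsilons add at each
    fixed order), then convert to (eps, delta)-DP with
    eps = rdp(alpha) + log(1/delta) / (alpha - 1), minimized over alpha."""
    alphas = [1.0 + x / 10.0 for x in range(1, 1000)]  # grid of orders > 1
    best = float("inf")
    for alpha in alphas:
        rdp_total = steps * gaussian_rdp(alpha, sigma)
        eps = rdp_total + math.log(1.0 / delta) / (alpha - 1.0)
        best = min(best, eps)
    return best

# Example: 100 Gaussian releases with sigma = 10, delta = 1e-5
print(f"eps ≈ {compose_and_convert(sigma=10.0, steps=100, delta=1e-5):.3f}")
```

RDP composition of this kind is typically much tighter than naive (ε, δ) composition over the same sequence of releases, which is why accountants are built on it.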

Privacy-Preserving Data Synthesis

We develop differentially private data synthesis techniques for tabular, image, and time-series data, enabling safe data sharing while preserving essential statistical properties and downstream task utility.
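
As one standard baseline for the tabular case (a textbook approach, not our full method), the sketch below perturbs a contingency table with Laplace noise, clips negative counts, and samples synthetic rows from the normalized result. The data shapes, bin counts, and budget are illustrative.

```python
import numpy as np

def dp_histogram_synthesis(data: np.ndarray, bins_per_col: list,
                           epsilon: float, n_synthetic: int,
                           rng=None) -> np.ndarray:
    """DP synthesis for low-dimensional data via a noisy histogram.

    Builds the full contingency table, adds Laplace noise to each cell
    (L1 sensitivity 1 under add/remove neighbors), clips negatives,
    and samples synthetic rows from the normalized noisy table.
    """
    if rng is None:
        rng = np.random.default_rng()
    counts, edges = np.histogramdd(data, bins=bins_per_col)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)
    probs = (noisy / noisy.sum()).ravel()
    flat_idx = rng.choice(probs.size, size=n_synthetic, p=probs)
    cells = np.unravel_index(flat_idx, counts.shape)
    # Map each sampled cell back to its bin midpoint in each column.
    mids = [0.5 * (e[:-1] + e[1:]) for e in edges]
    return np.column_stack([mids[j][cells[j]] for j in range(len(mids))])

# Example: synthesize 500 rows from 2-D data with a total budget of eps = 1
real = np.random.default_rng(0).normal(size=(2000, 2))
synthetic = dp_histogram_synthesis(real, bins_per_col=[10, 10],
                                   epsilon=1.0, n_synthetic=500)
print(synthetic.shape)  # (500, 2)
```

This baseline degrades quickly as dimensionality grows, which motivates the structured and learned synthesizers studied in this direction.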

Neural Network Security

We analyze security threats to machine learning systems, including data poisoning and backdoor attacks, and design defenses that improve model robustness and reliability.
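
To make the threat model concrete, here is a BadNets-style poisoning step on a toy image classification setup with placeholder data: a small pixel trigger is stamped onto a fraction of the training images, and their labels are flipped to the attacker's target class. At test time the same trigger activates the implanted behavior.

```python
import numpy as np

def poison_with_backdoor(images: np.ndarray, labels: np.ndarray,
                         target_class: int, poison_frac: float = 0.05,
                         patch_size: int = 3, seed: int = 0):
    """BadNets-style poisoning: stamp a white square trigger in the
    bottom-right corner of a random subset of images and relabel
    those images to `target_class`."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = 1.0  # white trigger patch
    labels[idx] = target_class
    return images, labels, idx

# Example: poison 5% of a toy 28x28 grayscale training set
X = np.random.default_rng(1).random((1000, 28, 28)).astype(np.float32)
y = np.random.default_rng(2).integers(0, 10, size=1000)
Xp, yp, poisoned_idx = poison_with_backdoor(X, y, target_class=7)
print(len(poisoned_idx), "images poisoned")
```

Defenses we study aim to detect or neutralize exactly this kind of manipulation, for example by identifying anomalous training points or pruning trigger-sensitive behavior.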

Secure & Trustworthy AI Applications

We apply privacy-preserving and secure machine learning techniques to sensitive domains such as healthcare and behavioral data analysis, aiming to build AI systems that are both trustworthy and compliant with privacy regulations.