Hassan Hamad

I am a PhD candidate working on AI at the University of Southern California (USC), advised by Professor Keith Chugg. I hold a B.E. in Computer & Communications Engineering from Notre Dame University-Louaize (NDU) and an M.S. in Communications Engineering from the Technical University of Munich (TUM). My research focuses on training efficiency in deep learning: I study how training deep learning models can be made more efficient on both the computational side and the data side. Recently, I have been interested in applying my methods to NLP problems involving LLMs, such as information extraction and conversational AI.


HAL Group

I am part of the Hardware Accelerated Learning (HAL) group at USC. We focus on co-designing algorithms and hardware for reduced-complexity training of neural networks.


Current Research

Leveraging Synthetic Data to Train LLMs for Named Entity Recognition and Relation Extraction

Synthetic data has drawn a lot of interest recently, driven by the improved reasoning abilities of large language models. It is especially useful for targeted problems where gathering human-annotated data is expensive or time-consuming. We investigate the use of synthetic data to train LLMs for Named Entity Recognition (NER) and Relation Extraction (RE). We propose a training paradigm that combines synthetic data with real data while avoiding the missing annotation problem, i.e., the case where an LLM generates synthetic examples that omit some of the entities or relations that should be annotated.
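
As a rough illustration (a minimal PyTorch sketch, not our actual method), one way to avoid penalizing missing annotations is to mask the training loss on non-entity ("O") tokens in synthetic examples, since an unlabeled token there may be a missed entity rather than a true negative:

    import torch
    import torch.nn.functional as F

    IGNORE_INDEX = -100  # standard ignore index for cross-entropy

    def ner_loss(logits, labels, is_synthetic, o_label_id=0):
        # logits: (batch, seq, num_labels); labels: (batch, seq) int64;
        # is_synthetic: (batch,) bool flag marking LLM-generated examples.
        labels = labels.clone()
        # In synthetic data, an "O" label may be a missed entity rather
        # than a true negative, so exclude those tokens from the loss.
        unreliable = is_synthetic.unsqueeze(1) & (labels == o_label_id)
        labels[unreliable] = IGNORE_INDEX
        return F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            labels.view(-1),
            ignore_index=IGNORE_INDEX,
        )

Real examples keep their full supervision, so the model still learns to predict "O" where that label is trustworthy.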

Enhancing the Tool-Calling Ability of LLMs in Task-Oriented Dialogue Systems

While LLMs have displayed impressive abilities in conversational dialogue, they are limited by their textual input-output nature. To address this limitation, recent research augments LLMs with tools (also called function calling) so they can accomplish tasks that interact with the outside world, such as sending emails or setting alarms. To improve the function-calling capability of LLMs, we build a diagnostic model that detects mistakes the LLM makes when interacting with external tools and provides detailed feedback the LLM can use to self-correct.
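
As a toy illustration (a hypothetical rule-based stand-in, not our diagnostic model), feedback of this kind can be produced by validating a generated tool call against the tool's schema; the tool names and fields below are assumptions for the example:

    from typing import Optional

    # Assumed example schemas; real tool APIs would define these.
    TOOL_SCHEMA = {
        "send_email": {"required": {"to", "subject", "body"}},
        "set_alarm": {"required": {"time"}},
    }

    def diagnose_tool_call(call: dict) -> Optional[str]:
        # Return feedback text for the LLM if the call is malformed, else None.
        name = call.get("name")
        args = call.get("arguments", {})
        if name not in TOOL_SCHEMA:
            return f"Unknown tool '{name}'. Available tools: {sorted(TOOL_SCHEMA)}."
        missing = TOOL_SCHEMA[name]["required"] - args.keys()
        if missing:
            return f"Tool '{name}' is missing required arguments: {sorted(missing)}."
        return None  # the call is well-formed

    feedback = diagnose_tool_call({"name": "send_email", "arguments": {"to": "a@b.com"}})
    # feedback -> "Tool 'send_email' is missing required arguments: ['body', 'subject']."

A learned diagnostic model can go beyond such schema checks, e.g. flagging semantically wrong argument values, but the feedback-then-self-correct loop is the same.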


Selected Publications

Hassan Hamad, Abhinav Kumar Thakur, Nijil Kolleri, Sujith Pulikodan, Keith M. Chugg. Findings of the Association for Computational Linguistics: NAACL 2024.

William Chang, Hassan Hamad, Keith M. Chugg. 2022 Asilomar Conference on Signals, Systems, and Computers.

Mari Kobayashi, Hassan Hamad, Gerhard Kramer, Giuseppe Caire. 2019 IEEE International Symposium on Information Theory (ISIT), pp. 270-274.

Ghassan M. Kraidy, Hassan Hamad. 2019 16th Canadian Workshop on Information Theory (CWIT).

Wissam Hamad, Marwan Bou Sanayeh, Tobias Siepelmeyer, Hassan Hamad, Werner H. E. Hofmann. IEEE Photonics Journal, 2019.

Hassan Hamad, Ghassan M. Kraidy. 2017 15th Canadian Workshop on Information Theory (CWIT).