Tenghao Huang

Welcome to my personal webpage! I am Tenghao Huang, a fourth-year PhD candidate at the USC Viterbi School of Engineering, advised by Prof. Jonathan May and Prof. Muhao Chen. I have interned at Amazon Alexa AI, IBM Research, Microsoft's Office of Applied Research, and Salesforce AI Research.
Before joining USC, I completed my bachelor's degree at UNC-CH, where I was fortunate to be supervised by Prof. Snigdha Chaturvedi and Prof. Colin Raffel.
I am genuinely interested in how LLM agents can help humans solve real-world problems. Specifically, I study how LLM agents can enhance human productivity in task-oriented scenarios such as writing, coding, and planning. Check out our NAACL 2025 tutorial on Creative Planning with LLMs!
I am also interested in AI creativity, especially how AI can assist humans in composing longer and more engaging narratives. Our paper Are Large Language Models Capable of Generating Human-Level Narratives? received the Outstanding Paper Award 🏆 at EMNLP 2024!
I am on the faculty job market this year and actively seeking academic positions.
news
- Aug 25, 2025: I am excited to return as a part-time research intern at Microsoft in Fall 2025. My spring internship paper Teaching Language Models to Gather Information Proactively has been accepted to EMNLP Findings 2025. I will keep working on post-training LLMs to align with human preferences :)
- May 19, 2025: Excited to start my summer internship at Salesforce!
- May 15, 2025: Our paper R2D2: Remembering, Replaying and Dynamic Decision Making with a Reflective Agentic Memory has been accepted to ACL 2025!
- May 04, 2025: We are excited to present our tutorial, Creative Planning in Large Language Models, at NAACL 2025.
selected publications
- [arXiv] WebDS: An End-to-End Benchmark for Web-based Data Science. arXiv preprint arXiv:2508.01222, 2025.
- [arXiv] DiscoSum: Discourse-aware News Summarization. arXiv preprint arXiv:2506.06930, 2025.
- [EMNLP] Teaching Language Models To Gather Information Proactively. Findings of the Association for Computational Linguistics: EMNLP, 2025.
- [ACL] NewsInterview: A Dataset and a Playground to Evaluate LLMs' Grounding Gap via Informational Interviews. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL), 2025. (Oral)
- [ACL] R2D2: Remembering, Replaying and Dynamic Decision Making with a Reflective Agentic Memory. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL), 2025.
- [Workshop] A Novel Multi-Document Retrieval Benchmark: Journalist Source-Selection in Newswriting. Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for NLP, 2025.
- [KDD] FoodPuzzle: Developing Large Language Model Agents as Flavor Scientists. Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Datasets and Benchmarks Track), 2025. (Oral)
- [EMNLP] Familiarity-Aware Evidence Compression for Retrieval-Augmented Generation. Findings of the Association for Computational Linguistics: EMNLP, 2025.
- [NAACL] Creative Planning with Language Models: Practice, Evaluation and Applications. Proceedings of the 2025 Conference of the North American Chapter of the ACL: Tutorials, 2025. (Tutorial)
- [NAACL] Planning and Editing What You Retrieve for Enhanced Tool Learning. Findings of the Association for Computational Linguistics: NAACL, 2024.
- [EMNLP] Red Teaming Language Models for Contradictory Dialogues. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
- [EMNLP] Are Large Language Models Capable of Generating Human-Level Narratives? Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024. (Outstanding Paper Award)
- [ICML] Git-Theta: A Git Extension for Collaborative Development of Machine Learning Models. Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
- [EMNLP] Affective and Dynamic Beam Search for Story Generation. Findings of the Association for Computational Linguistics: EMNLP, 2023.
- [NAACL] Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach. Findings of the Association for Computational Linguistics: NAACL, 2022.
- [NeurIPS] Few-shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning. Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [ACL] Read Top News First: A Document Reordering Approach for Multi-Document News Summarization. Findings of the Association for Computational Linguistics: ACL, 2022.
- [EMNLP] Uncovering Implicit Gender Bias in Narratives through Commonsense Inference. Findings of the Association for Computational Linguistics: EMNLP, 2021.