Howdy! I am a 5th-year Ph.D. student in Computer Engineering at Texas A&M University. My research began with image and video restoration, where I explored high-fidelity recovery using advanced sequence models such as Transformers and Mamba.
Recently, my focus has shifted toward generative video modeling and multi-modal AI. Current directions include:
(1) 3D Reconstruction & Style Transfer: developing geometry-aware generative models, including Stylos, a VGGT + Gaussian Splatting framework for consistent 3D style transfer.
(2) Video Diffusion: exploring diffusion-based frameworks for high-fidelity video restoration, streaming generation, and temporally consistent 3D-aware video modeling.
(3) LLM-RAG: building retrieval-augmented generation pipelines with large language and vision-language models to enhance reasoning and controllability in generative systems (a minimal sketch of the retrieve-then-generate loop follows this list).
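To give a flavor of direction (3), here is a minimal, purely illustrative sketch of the retrieve-then-generate loop such pipelines build on. Everything in it is a placeholder I made up for illustration: the character-frequency `embed` function stands in for a learned text or vision-language encoder, and `generate` stands in for whatever LLM/VLM backend is plugged in; it is not the actual system built with the Urban Resilience Lab or Resilitix AI.

```python
import numpy as np

def embed(texts):
    """Toy placeholder embedder (character-frequency vectors); a real
    pipeline would use a learned text or vision-language encoder."""
    vecs = np.zeros((len(texts), 256))
    for i, text in enumerate(texts):
        for ch in text.lower():
            vecs[i, ord(ch) % 256] += 1.0
    return vecs

def retrieve(query, docs, doc_vecs, k=3):
    """Rank stored documents by cosine similarity to the query embedding."""
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-8)
    return [docs[i] for i in np.argsort(-sims)[:k]]

def rag_answer(query, docs, doc_vecs, generate):
    """Augment the prompt with retrieved context before calling the model."""
    context = "\n".join(retrieve(query, docs, doc_vecs))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)  # `generate` wraps whichever LLM/VLM backend is used

# Tiny usage example with an echo stub standing in for a real model.
docs = ["Flooded road segments reported near downtown.",
        "Power restored to most neighborhoods by Tuesday."]
print(rag_answer("Where was flooding reported?", docs, embed(docs),
                 generate=lambda prompt: prompt.splitlines()[1]))
```

A production pipeline swaps in a vector store and a real model client, but the control flow stays the same: embed, retrieve the top-k documents, place them in the prompt, generate.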
I am actively seeking a research internship for Spring or Summer 2026, where I can contribute expertise in visual generation and restoration while continuing to grow as an applied AI researcher.
🔥 News
- 2025.09: 🎉🎉 Our paper Stylos is now available on arXiv.
- 2025.08: 🎉🎉 I have joined the Urban Resilience Lab as a research assistant, collaborating with Resilitix AI on an LLM-RAG system.
- 2025.04: 🎉🎉 XYScanNet has been accepted to the NTIRE workshop at CVPR 2025. See you in Nashville.
- 2024.02: 🎉🎉 Mamba4Rec received the Best Paper Award at the KDD'24 Resource-Efficient Learning for Knowledge Discovery Workshop (RelKD'24).
📝 Publications and Preprints

Stylos: Multi-View 3D Stylization with Single-Forward Gaussian Splatting
Hanzhou Liu*, Jia Huang, Mi Lu, Srikanth Saripalli, Peng Jiang*†, arXiv 2025
- Stylos couples VGGT with Gaussian Splatting for cross-view style transfer, introducing a voxel-based style loss to ensure multi-view consistency (a rough illustrative sketch of the voxel-pooling idea is shown below).
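As a rough flavor of the voxel-based style idea: per-point features are pooled into a shared 3D voxel grid and their second-order statistics are matched to a style target, so the constraint lives in 3D space rather than in any single rendered view. The shapes, pooling, and loss below are my own illustrative assumptions, not the exact formulation in the paper.

```python
import torch
import torch.nn.functional as F

def voxel_style_loss(points, feats, style_gram, voxel_size=0.2):
    """Illustrative voxel-pooled style loss (not the paper's exact loss).

    points:     (N, 3) 3D positions, e.g. Gaussian centers
    feats:      (N, C) per-point appearance features
    style_gram: (C, C) Gram matrix of the style image's features
    """
    # Assign each point to a voxel cell and merge duplicate cells.
    cells = torch.floor(points / voxel_size).long()
    uniq, inv = torch.unique(cells, dim=0, return_inverse=True)
    n_vox, c = uniq.shape[0], feats.shape[1]

    # Mean-pool the features of all points falling into the same voxel.
    sums = torch.zeros(n_vox, c, dtype=feats.dtype).index_add_(0, inv, feats)
    counts = torch.zeros(n_vox, dtype=feats.dtype).index_add_(
        0, inv, torch.ones(feats.shape[0], dtype=feats.dtype))
    voxel_feats = sums / counts.clamp(min=1.0).unsqueeze(1)

    # Match second-order feature statistics (a Gram matrix) to the style
    # target; the statistic is computed over 3D voxels, not 2D pixels.
    gram = voxel_feats.t() @ voxel_feats / n_vox
    return F.mse_loss(gram, style_gram)

# Tiny usage example with random tensors in place of real features.
pts, fts = torch.rand(1000, 3), torch.rand(1000, 16)
print(voxel_style_loss(pts, fts, torch.eye(16)).item())
```

Because every view observing the same region contributes to the same voxel statistic, the style constraint cannot drift between cameras, which is one way to encourage multi-view consistency.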

XYScanNet: A State Space Model for Single Image Deblurring
Hanzhou Liu, Chengkai Liu, Jiacong Xu, Peng Jiang, Mi Lu, NTIRE CVPR 2025
- XYScanNet maintains competitive distortion metrics while significantly improving perceptual quality; experiments show a 17% KID improvement over the nearest competitor.

Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models, Chengkai Liu, Jianghao Lin, Jianling Wang, Hanzhou Liu, James Caverlee, RelKD KDD 2024 Best Paper Award

Behavior-Dependent Linear Recurrent Units for Efficient Sequential Recommendation, Chengkai Liu, Jianghao Lin, Hanzhou Liu, Jianling Wang, James Caverlee, CIKM 2024

Real-World Image Deblurring via Unsupervised Domain Adaptation, Hanzhou Liu, Binghan Li, Mi Lu, Yucheng Wu, ISVC 2023
📖 Education
- 2021.08 - present, Texas A&M University, Ph.D. in Computer Engineering.
- 2019.08 - 2021.06, Texas A&M University, MS in Computer Engineering.
- 2014.08 - 2018.06, Jilin University, BS in Electrical Engineering.
💬 Invited Talks
- 2024.07, Mamba4Rec, invited talk at Uber.
🧾 Community Services
- 2025, Reviewer, New Trends in Image Restoration and Enhancement (NTIRE) workshop, in conjunction with CVPR 2025.
- 2024, Reviewer, IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
- 2024, Reviewer, ACM International Conference on Information and Knowledge Management (CIKM).