Chen Zhao
  • Bio
  • News
  • Publications
  • Talks
  • Teaching
  • Awards
  • Talks
    • Invertible Diffusion Models for Inverse Problems
    • Video Understanding for Embodied AI
    • Long-form Video Understanding in the 2020s
    • Reversifying Neural Networks: Efficient Memory Optimization Strategies for Finetuning Large Models
    • Towards More Realistic Continual Learning at Scale
    • Optimizing Memory Efficiency in Pretrained Model Finetuning
    • Toward Long-form Video Understanding
    • Challenges and Advances in Long-form Video Understanding
    • Towards Long-form Video Understanding
    • Challenges and Innovation for Long-form Video Understanding: Compute, Algorithm, and Data
    • Research Highlights at IVUL with a Focus on Video Understanding
    • Towards Long-form Video Understanding
    • A Simple and Effective Approach for Long-form Video Understanding
    • Detecting Actions in Videos via Graph Convolutional Networks
    • Let the Computer See the World as Humans
    • Image/Video Cloud Coding
    • Making a Lighter Encoder: Image/Video Compressive Sensing
    • Compressive Sensing-based Image/Video Coding
  • Projects
    • Pandas
    • PyTorch
    • scikit-learn
  • Awards
    • First place, Ego4D Visual Queries 3D
    • First place, Epic-Kitchens action detection
    • First place, Epic-Kitchens action recognition
    • First place, Epic-Kitchens audio-based interaction detection
    • Recipient of Grant
    • Best Paper Award
    • First place, Visual Queries 3D Localization Challenge in Ego4D Workshop
    • Outstanding Reviewer
    • Finalist
    • Second place, HACS Temporal Action Localization Challenge
    • Finalist
    • Outstanding Graduate
    • Scholarship of Outstanding Talent
    • Best Paper Award
    • First Prize, Qualcomm Innovation Fellowship Contest (QInF)
    • Outstanding Individual in the Summer Social Practice
    • Outstanding Graduate Leader
    • Goldman Sachs Global Leaders Award
    • First-Class Scholarship
    • National Scholarship
  • Experience
  • Courses
    • Hugo Blox
      • Getting Started
      • Guide
        • Project Structure
        • Configuration
        • Formatting
          • Embed Media
          • Buttons
          • Callouts
          • Cards
          • Spoilers
          • Steps
      • Reference
        • Customization
        • Internationalization (i18n)
  • News
  • Publications
    • BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding
    • SMILE: Infusing Spatial and Motion Semantics in Masked Video Learning
    • OSMamba: Omnidirectional Spectral Mamba with Dual-Domain Prior Generator for Exposure Correction
    • Invertible Diffusion Models for Compressed Sensing
    • SEVERE++: Evaluating Benchmark Sensitivity in Generalization of Video Representation Learning
    • Effectiveness of Max-Pooling for Fine-Tuning CLIP on Videos
    • OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection
    • Ego4D: Around the World in 3,000 Hours of Egocentric Video
    • Towards Automated Movie Trailer Generation
    • Dr²Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning
    • Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives
    • End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames
    • Re²TAL: Rewiring Pretrained Video Backbones for Reversible Temporal Action Localization
    • EgoLoc: Revisiting 3D Object Localization from Egocentric Videos with Visual Queries
    • FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model
    • A Unified Continual Learning Framework with General Parameter-Efficient Tuning
    • Large-capacity and Flexible Video Steganography via Invertible Neural Network
    • ETAD: Training Action Detection End to End on a Laptop
    • Just a Glimpse: Rethinking Temporal Information for Video Continual Learning
    • Owl (Observe, Watch, Listen): Localizing Actions in Egocentric Video via Audiovisual Temporal Context
    • R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning
    • End-to-End Active Speaker Detection
    • Evaluation of Diverse Convolutional Neural Networks and Training Strategies for Wheat Leaf Disease Identification with Field-Acquired Photographs
    • When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search
    • MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions
    • SegTAD: Precise Temporal Action Detection via Semantic Segmentation
    • Video Self-Stitching Graph Network for Temporal Action Localization
    • ThumbNet: One Thumbnail Image Contains All You Need for Recognition
    • Improve Baseline for Temporal Action Detection: HACS Challenge 2020 Solution of IVUL-KAUST team
    • G-TAD: Sub-Graph Localization for Temporal Action Detection
    • Optimization-Inspired Compact Deep Compressive Sensing
    • Logistic Regression is Still Alive and Effective: The 3rd YouTube-8M Challenge Solution of the IVUL-KAUST team
    • CREAM: CNN-REgularized ADMM framework for compressive-sensed image reconstruction
    • BoostNet: A Structured Deep Recursive Network to Boost Image Deblocking
    • Better and Faster, when ADMM Meets CNN: Compressive-sensed Image Reconstruction
    • Reducing Image Compression Artifacts by Structural Sparse Representation and Quantization Constraint Prior
    • Video Compressive Sensing Reconstruction via Reweighted Residual Sparsity
    • CONCOLOR: COnstrained Non-Convex Low-Rank Model for Image Deblocking
    • Nonconvex Lp Nuclear Norm based ADMM Framework for Compressive Sensing
    • Compressive-Sensed Image Coding via Stripe-based DPCM
    • A Dual Structured-Sparsity Model for Compressive-Sensed Video Reconstruction
    • An Efficient Image Coding Method Based on Cloud Data
    • Thousand to one: An image compression system via cloud search
    • Adaptive intra-refresh for low-delay error-resilient video coding
    • Video Compressive Sensing via Structured Laplacian Modelling
    • Image Compressive-Sensing Recovery Using Structured Laplacian Sparsity in DCT Domain and Multi-Hypothesis Prediction
    • Image Compressive Sensing Recovery Using Adaptively Learned Sparsifying Basis via L0 Minimization
    • Weakly Supervised Photo Cropping
    • Wavelet Inpainting Driven Image Compression via Collaborative Sparsity at Low Bit Rates
    • A Highly Effective Error Concealment Method for Whole Frame Loss
  • Teaching
    • Machine Learning (CS229), KAUST
    • Deep Learning for Visual Computing (CS323), KAUST
    • Video Understanding, KAUST
    • Machine Learning, KAUST
    • Deep Learning, KAUST
Internationalization (i18n)

Hugo Blox lets you easily edit the interface text and translate your site into multiple languages using Hugo’s multilingual feature.

View the full docs at https://docs.hugoblox.com/reference/language/
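As a minimal sketch, adding a second language comes down to declaring it in the site configuration; the language codes, names, and weights below are illustrative assumptions, not taken from this site:

```yaml
# hugo.yaml — minimal multilingual sketch (illustrative values)
defaultContentLanguage: en
languages:
  en:
    languageName: English
    weight: 1   # weight controls ordering in the language switcher
  zh:
    languageName: 中文
    weight: 2
```

Interface strings can then be overridden per language via translation files such as `i18n/en.yaml`.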

Last updated on September 25, 2025
Chen Zhao
Research Scientist


© All rights reserved by Chen Zhao, 2026