Chen Liang (梁宸) |
lliangchenc@163.com |
Hi! This is Chen, a third-year computer science Ph.D. candidate in the Pervasive Computing Group at Tsinghua University, advised by Prof. Yuanchun Shi and Prof. Chun Yu. I received my Bachelor's degree in Computer Science from Tsinghua University in 2019. My main research direction is Human-Computer Interaction (HCI), and I also keep track of research advances in natural language processing and computer vision. My research focuses on enabling natural and efficient interaction with compact sensor forms in XR and mobile scenarios by leveraging multi-modal sensing (e.g., vision, audio, inertial signals, and RF). My previous and ongoing work has expanded users' input capability in both the spatial domain (e.g., enabling precise, unobtrusive input on the small surface of a fingertip) and the temporal domain (e.g., improving the recognition of fast, transient gestures). My goal is to develop fundamental interaction techniques for the next-generation XR interface, with which users can interact with everything around them seamlessly (just as the mouse and keyboard did for the GUI). |
IMWUT 2022
We present DRG-Keyboard, a gesture keyboard enabled by dual IMU rings, which allows the user to swipe the thumb on the index fingertip to perform word gesture typing as if typing on a miniature QWERTY keyboard. With dual IMUs attached to the user’s thumb and index finger, DRG-Keyboard can 1) measure the relative attitude and map it to 2D fingertip coordinates and 2) detect the thumb’s touch-down and touch-up events by combining the relative attitude data with synchronized frequency-domain data, based on which a fingertip gesture keyboard can be implemented ...
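The core of the attitude-to-coordinate mapping can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's calibrated pipeline: the axis conventions, `scale` factor, and quaternion layout `[w, x, y, z]` are all assumptions.

```python
import numpy as np

def quat_conj(q):
    # Conjugate (= inverse for unit quaternions) of [w, x, y, z]
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product of two quaternions [w, x, y, z]
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def relative_attitude(q_thumb, q_index):
    # Attitude of the thumb IMU expressed in the index-finger IMU's frame
    return quat_mul(quat_conj(q_index), q_thumb)

def attitude_to_keyboard_xy(q_rel, scale=1.0):
    # Map yaw (left/right swipe) and pitch (up/down swipe) of the
    # relative attitude to 2D fingertip-keyboard coordinates.
    w, x, y, z = q_rel
    yaw = np.arctan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))
    pitch = np.arcsin(np.clip(2 * (w*y - z*x), -1.0, 1.0))
    return np.array([yaw * scale, pitch * scale])
```

When both rings report the same attitude, the mapped coordinate stays at the origin; rotating the thumb about the relative z-axis moves it along the x-axis of the virtual keyboard.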
VR 2022 Workshop
Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, which poses challenges to both tap detection and input prediction. In this poster, we demonstrated the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on virtual keyboards of different sizes. Based on an analysis of these data, we derived an input prediction algorithm that incorporates probabilistic touch detection and elastic probabilistic decoding. In an evaluation study where participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with a 2.8% word-level error rate.
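The general shape of such probabilistic decoding can be sketched as a toy noisy-channel decoder: each tap contributes a Gaussian likelihood around the intended key, combined with a word-frequency prior. The key positions, vocabulary, frequencies, and `sigma` below are invented for illustration and are not the poster's actual model.

```python
import math

# Assumed key centers on a unit grid and a tiny hypothetical unigram prior
KEY_POS = {'a': (0, 1), 's': (1, 1), 'o': (8, 0), 'i': (7, 0), 't': (4, 0)}
WORD_FREQ = {'sit': 0.4, 'sat': 0.35, 'sot': 0.25}

def log_touch_likelihood(tap, key, sigma=0.5):
    # Isotropic Gaussian touch model centered on the intended key
    kx, ky = KEY_POS[key]
    dx, dy = tap[0] - kx, tap[1] - ky
    return -(dx * dx + dy * dy) / (2 * sigma * sigma)

def decode(taps, vocab=WORD_FREQ):
    # argmax_w  log P(w) + sum_i log P(tap_i | w_i)
    best, best_score = None, -math.inf
    for word, prior in vocab.items():
        if len(word) != len(taps):
            continue
        score = math.log(prior) + sum(
            log_touch_likelihood(t, c) for t, c in zip(taps, word))
        if score > best_score:
            best, best_score = word, score
    return best
```

A real decoder would use a full keyboard layout, per-key covariances fitted from the collected typing data, and an n-gram or neural language model instead of the unigram prior.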
IMWUT 2021
We present DualRing, a novel ring-form input device that can capture the state and movement of the user's hand and fingers. With two IMU rings attached to the user's thumb and index finger, DualRing can sense not only the absolute hand gesture relative to the ground but also the relative pose and movement among hand segments. To enable natural thumb-to-finger interaction, we develop a high-frequency AC circuit for on-body contact detection ...
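The on-body contact detection idea can be sketched in the frequency domain: one ring injects a high-frequency carrier, and the other ring only picks it up when the thumb and index finger touch, closing a conductive path. The sample rate, carrier frequency, window, and threshold below are assumptions for illustration, not DualRing's actual circuit parameters.

```python
import numpy as np

FS = 100_000      # receiver sample rate in Hz (assumed)
CARRIER = 20_000  # injected carrier frequency in Hz (assumed)

def contact_detected(samples, threshold=5.0):
    # Look for the injected carrier in the received signal's spectrum:
    # a strong peak near CARRIER relative to the noise floor means the
    # fingers are in contact.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    band_peak = spectrum[np.abs(freqs - CARRIER) < 500].max()
    noise_floor = np.median(spectrum)
    return band_peak / (noise_floor + 1e-12) > threshold
```

In practice the same frequency-domain feature could also support the touch-event detection that the ring-based keyboards above rely on.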
CHI 2021
We propose Auth+Track, a novel authentication model that reduces redundant authentication in everyday smartphone usage. Through sparse authentication and continuous tracking of the user’s status, Auth+Track eliminates the “gap” authentication between fragmented sessions and enables “Authentication Free when User is Around”. To instantiate the Auth+Track model, we present PanoTrack, a prototype that integrates body and near-field hand information for user tracking. We install a fisheye camera on the top of the phone to achieve a panoramic view that captures both the user’s body and on-screen hands ...
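The model's control flow can be summarized as a small state machine: one explicit authentication opens a session that stays valid while tracking confirms the same user is present, and re-authentication is only required after track loss. This is a hypothetical simplification of the idea, not PanoTrack's actual logic.

```python
class AuthTrack:
    """Sketch of the Auth+Track session model (illustrative only)."""

    def __init__(self):
        self.authenticated = False

    def on_explicit_auth(self):
        # Sparse authentication: a single fingerprint/face/PIN check
        self.authenticated = True

    def on_track_update(self, user_present):
        # Continuous tracking: losing the user ends the auth-free session
        if not user_present:
            self.authenticated = False

    def needs_auth(self):
        return not self.authenticated
```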
CHI 2019
We present HandSee, a novel sensing technique that can capture the state of the user’s hands touching or gripping a smartphone. We place a prism mirror on the front camera to achieve a stereo vision of the scene above the touchscreen surface. HandSee enables a variety of novel interaction techniques and expands the design space for full hand interaction on smartphones...
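With the prism splitting the front camera's view into two virtual views, fingertip height above the touchscreen follows from standard stereo triangulation. The focal length and baseline below are placeholder values, not HandSee's calibration.

```python
def depth_from_disparity(disparity_px, focal_px=600.0, baseline_mm=10.0):
    # Pinhole stereo relation: Z = f * B / d
    # focal_px and baseline_mm are assumed calibration values.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, with these assumed parameters a 60-pixel disparity corresponds to a point 100 mm from the virtual stereo pair.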
AAAI 2019
We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization. Given any document-summary pair, we estimate a salience score, which is modeled using an attention-based deep neural network, to represent the salience degree of the summary for yielding the document. We devise a contrastive training strategy to learn the salience estimation network, and then use the learned salience score as a guide and iteratively extract the most salient sentences from the document as our generated summary ...
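The training objective can be sketched as a margin-based contrastive loss that pushes the salience of the true document-summary pair above that of a corrupted pair, followed by greedy extraction of the highest-scoring sentences. The bilinear scorer and toy vectors below are illustrative assumptions, not the paper's attention-based network.

```python
import numpy as np

def salience_score(doc_vec, summ_vec, W):
    # Bilinear stand-in for the paper's attention-based salience model
    return float(doc_vec @ W @ summ_vec)

def contrastive_loss(doc_vec, pos_summ, neg_summ, W, margin=1.0):
    # Hinge loss: true pair should beat the corrupted pair by `margin`
    pos = salience_score(doc_vec, pos_summ, W)
    neg = salience_score(doc_vec, neg_summ, W)
    return max(0.0, margin - pos + neg)

def extract_summary(sentence_vecs, doc_vec, W, k=2):
    # Greedily pick the k most salient sentences, in document order
    scores = [salience_score(doc_vec, s, W) for s in sentence_vecs]
    order = np.argsort(scores)[::-1][:k]
    return sorted(order.tolist())
```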
Tsinghua University | 09/2019 - present |
Ph.D. student in Computer Science and Technology |
Tsinghua University | 09/2015 - 06/2019 |
B.Eng. in Computer Science and Technology |
National University of Singapore | 07/2018 - 09/2018 |
Summer Intern |
Reviewer, CHI 2020, TURC 2020, CHI LBW 2021, ACL 2021, CHI 2022, TIOT 2022 | 2019 - |
Teaching Assistant (TA), Fundamentals of Computer Programming | 2019 Fall, 2020 Fall, 2021 Fall, 2022 Fall |
Teaching Assistant (TA), Essentials to Signal Processing and Data Management for AIoT Applications | 2022 Fall |
Teaching Assistant (TA), Financial Big Data and Quantitative Analysis | 2021 Spring |
Teaching Assistant (TA), Calculus A(2) | 2020 Spring |
"84 Innovation Future" Scholarship, Department of Computer Science, Tsinghua University | 2021 |
Comprehensive Excellence Scholarship, Tsinghua University | 2021, 2022 |
Second Prize, 39th Tsinghua University "Challenge Cup" Student Extracurricular Academic Science and Technology Competition | 2021 |
First Prize, 37th Tsinghua University "Challenge Cup" Student Extracurricular Academic Science and Technology Competition | 2019 |
Outstanding Graduate, Department of Computer Science, Tsinghua University | 2019 |
Programming Languages: C/C++, Python, Java, JavaScript, CUDA, R, MATLAB, VHDL, Assembly |
Professional Skills: Machine Learning (CV, NLP, and RL tasks in PyTorch and TensorFlow), Digital Image Processing, Audio Processing, Embedded Systems (e.g., Arduino, nRF BLE) |
Updated in Dec. 2022