Zhenliang Zhang | BIGAI
[TOMM 2024] Demonstrative Learning for Human-Agent Knowledge Transfer
We propose a comprehensive system that combines the SDL paradigm with the TDL paradigm in VR from a top-down perspective.
Xiaonuo Dongye, Haiyan Jiang, Dongdong Weng, Zhenliang Zhang
PDF
Cite
Video
[TOG/SiggraphAsia 2023] Commonsense Knowledge-Driven Joint Reasoning Approach for Object Retrieval in Virtual Reality
We propose a commonsense knowledge-driven joint reasoning approach for object retrieval, where human grasping gestures and context are modeled using an And-Or graph (AOG).
Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Le Luo, Zhenliang Zhang
PDF
Cite
Project
Video
Web
[Engineering 2023] The Tong Test: Evaluating artificial general intelligence through dynamic embodied physical and social interactions
The Tong Test describes a value- and ability-oriented testing system that delineates five levels of AGI milestones through a virtual environment supporting dynamic embodied physical and social interactions (DEPSI), allowing for infinite task generation.
Yujia Peng, Jiaheng Han, Zhenliang Zhang, Lifeng Fan, Tengyu Liu, Siyuan Qi, Xue Feng, Yuxi Ma, Yizhou Wang, Song-Chun Zhu
PDF
Cite
Project
News
[Virtual Reality 2023] DexHand: dexterous hand manipulation motion synthesis for virtual reality
We propose a neural network-based finger movement generation approach, enabling the synthesis of plausible hand motions when interacting with objects.
Haiyan Jiang, Dongdong Weng, Zhen Song, Xiaonuo Dongye, Zhenliang Zhang
Cite
[Engineering 2023] A reconfigurable data glove for reconstructing physical and virtual grasps
We present a reconfigurable data glove design to capture different modes of human hand-object interactions, which are critical in training embodied artificial intelligence (AI) agents for fine manipulation tasks.
Hangxin Liu, Zeyu Zhang, Ziyuan Jiao, Zhenliang Zhang, Mingchen Li, Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu
Cite
Project
[Sensors 2019] HiFinger: One-handed text entry technique for virtual environments based on touches between fingers
We present HiFinger, an eyes-free, one-handed wearable text entry technique for immersive …
Haiyan Jiang, Dongdong Weng, Zhenliang Zhang, Feng Chen
Cite
[SID 2019] Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs
During continuous use of displays, a short rest can relax users’ eyes and relieve visual fatigue. As one of the most important …
Jie Guo, Dongdong Weng, Zhenliang Zhang, Yue Liu, Henry B.L. Duh, Yongtian Wang
Cite
[SID 2019] Vision-tangible interactive display method for mixed and virtual reality: Toward the human-centered editable reality
Building a human-centered editable world can be fully realized in a virtual environment. Both mixed reality (MR) and virtual reality …
Zhenliang Zhang, Yue Li, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang
Cite
[SID 2018] Task-driven latent active correction for physics-inspired input method in near-field mixed reality applications
Calibration accuracy is one of the most important factors affecting the user experience in mixed reality applications. For a typical …
Zhenliang Zhang, Yue Li, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang
Cite
[Optical Engineering 2018] Enhancing data acquisition for calibration of optical see-through head-mounted displays
The single point active alignment method is a widely used calibration method for optical see-through head-mounted displays (OST-HMDs) since …
Zhenliang Zhang, Dongdong Weng, Jie Guo, Yue Liu, Yongtian Wang, Hua Huang
Cite
Project