
Research Projects:

Consistent Correspondence between Arbitrary Manifold Surfaces

We propose a novel framework for consistent correspondence between arbitrary manifold meshes. Unlike most existing methods, our approach directly maps the connectivity of the source mesh onto the target mesh without segmenting the input meshes, thus effectively avoiding unstable extreme cases (e.g., complex boundaries or high genus). First, a novel mean-value Laplacian fitting scheme is proposed, which computes a shape-preserving (conformal) correspondence directly in 3D-to-3D space, efficiently avoiding the local optima caused by nearest-point search and achieving good results even with only a few marker points. Second, we introduce a vertex relocation and projection approach, which refines the initial fitting result in a locally conformal manner: each vertex of the initial result is gradually projected onto the target model's surface to ensure a complete surface match. Furthermore, we provide a fast and effective approach to automatically detect critical points in the context of consistent correspondence. By fitting these critical points, which capture the important features of the target mesh, the output compatible mesh matches the target mesh's profile quite well. Compared with previous approaches, our scheme is robust, fast, and convenient, and thus suitable for common applications.
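As a rough illustration of the fitting step, the sketch below deforms a source mesh so that its Laplacian (differential) coordinates are preserved in a least-squares sense while a few markers are pinned to target positions. A uniform graph Laplacian stands in here for the mean-value weights described above, and the marker weight and input conventions are illustrative assumptions, not the exact formulation.

# Minimal sketch of Laplacian fitting with soft marker constraints (NumPy/SciPy).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def fit_with_markers(V, F, markers, w_marker=10.0):
    """V: (n,3) source vertices, F: (m,3) triangle indices,
    markers: dict {source vertex index: target 3D position}."""
    n = len(V)
    # Uniform graph Laplacian built from mesh edges (a stand-in for mean-value weights).
    rows, cols = [], []
    for tri in F:
        for a, b in ((0, 1), (1, 2), (2, 0)):
            rows.append(tri[a]); cols.append(tri[b])
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    W = ((W + W.T) > 0).astype(float)                 # symmetric adjacency
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()
    delta = L @ V                                     # differential coordinates

    # Soft marker constraints appended as extra least-squares rows.
    idx = np.array(list(markers.keys()))
    C = sp.coo_matrix((np.full(len(idx), w_marker),
                       (np.arange(len(idx)), idx)), shape=(len(idx), n))
    A = sp.vstack([L, C]).tocsr()
    B = np.vstack([delta, w_marker * np.array([markers[i] for i in idx])])

    # Solve the sparse least-squares problem per coordinate (x, y, z).
    X = np.column_stack([lsqr(A, B[:, k])[0] for k in range(3)])
    return X                                          # deformed source vertices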

 

Figure  The morphing result between a girl model and a deer model (applied in a 3D animated feature film)

 

Our 3D correspondence technique was successfully applied in the Chinese 3D animated film 《麋鹿王》 to create the seamless morph between the female lead, the Milu deer princess, and a beautiful girl. The film won the Outstanding Animated Film award at the 13th Huabiao Film Awards and received a nomination for Best Art Film at the 27th China Golden Rooster Awards.

 

Global and Local Isometry-Invariant Descriptor for 3D Shape Comparison and Partial Matching based on Manifold Harmonics (“Shape-DNA”)

In this project, based on manifold harmonics ("Shape-DNA"), we propose a novel framework for 3D shape similarity comparison and partial matching. First, we propose a novel symmetric mean-value representation to robustly construct high-quality manifold harmonic bases on nonuniformly sampled meshes. Then, based on the constructed manifold harmonic bases, a novel shape descriptor is presented to capture both global and local features of a 3D shape. This descriptor is isometry-invariant, i.e., invariant to rigid-body transformations and non-rigid bending. After characterizing 3D models with these shape features, we perform 3D retrieval with an up-to-date discriminative kernel. This kernel is a dimension-free approach to quantifying the similarity between two unordered feature sets, and is thus especially suitable for our high-dimensional feature data. Experimental results show that our framework can be effectively used for both comprehensive comparison and partial matching among non-rigid 3D shapes.
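As a rough illustration of the descriptor, the sketch below computes the lowest manifold-harmonic eigenpairs of a mesh Laplacian with SciPy: the eigenvalue sequence serves as a global, isometry-invariant signature ("Shape-DNA"), while the eigenvector entries at each vertex provide local features. The Laplacian L is assumed to be any sparse symmetric mesh Laplacian (e.g., built as in the previous sketch); the symmetric mean-value construction and the discriminative kernel are not reproduced here.

# Minimal sketch: spectral ("Shape-DNA"-style) features from a mesh Laplacian.
import numpy as np
from scipy.sparse.linalg import eigsh

def spectral_features(L, k=30):
    """L: (n,n) sparse symmetric positive semi-definite mesh Laplacian."""
    # Shift-invert around a tiny negative sigma keeps the factorization
    # non-singular and returns the k smallest eigenpairs.
    evals, evecs = eigsh(L.tocsc(), k=k, sigma=-1e-8, which='LM')
    global_signature = evals          # isometry-invariant spectrum
    local_features = evecs            # per-vertex harmonic values
    return global_signature, local_features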

 

Figure  Characterizing shape features both globally and locally.

 

3D Face Tracking and Editing in Real-Time (Depth) Video

This project presents a novel approach to tracking a 3D face in real time, with large-angle motions carefully handled. Building on the tracker, we propose an interesting application: exchanging/editing faces in real-time video. High-quality results are achieved and can be applied to web-based entertainment scenarios.

 

 

Figure  3D Face Tracking and Editing in Real-Time (Depth) Video.

 

3D Human-Body Pose Reconstruction and Action Analysis in Real-Time Video

Action analysis is an important topic in video-based applications. In this project, inspired by Hornung's method, we work on 3D human-body pose reconstruction and action analysis in real-time video. First, a 3D human-body template is used to reconstruct the 3D human pose in real time. Then, we track the human motion and analyze the subject's actions. Several interesting applications have been developed.

Figure  3D human-body pose reconstruction and action analysis in real-time video (inspired by Hornung's method).

Model Transduction

This project proposes a novel method, called model transduction, to directly transfer pose between different meshes without building skeleton configurations for the meshes. Given a source mesh and a target mesh, a correspondence between the two meshes is established from only a few pairs of markers. The pose of the source mesh is then directly transferred to the target mesh while preserving the surface details of the target mesh. Unlike previous retargeting methods, model transduction does not require an extra reference source mesh to obtain the source deformation, and it provides more flexibility and better control over the deformation. Our approach is numerically efficient, as the solution to the optimization problem is obtained by quickly solving a sparse linear system. Experimental results show that model transduction can successfully transfer both complex skeletal structures and subtle skin deformations.
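Numerically, such problems reduce to a sparse linear least-squares solve; the sketch below shows only the generic normal-equation pattern (not the specific energy terms of model transduction), where A is assumed to stack the deformation and marker rows and B holds the x/y/z right-hand sides. Factoring the system once and reusing the factorization for the three coordinates is what keeps the solve fast.

# Minimal sketch of solving a stacked sparse least-squares system A x = b
# via the normal equations with a single sparse factorization (SciPy).
import numpy as np
from scipy.sparse.linalg import splu

def solve_least_squares(A, B):
    """A: (m,n) sparse system matrix, B: (m,3) right-hand sides."""
    AtA = (A.T @ A).tocsc()
    lu = splu(AtA)                    # factor once ...
    # ... then back-substitute for each coordinate (x, y, z).
    return np.column_stack([lu.solve(A.T @ B[:, k]) for k in range(3)])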

Figure  (a) Deformation transfer [Sumner:2004:SIGGRAPH] cannot produce satisfying results when the source and target have different reference poses. (b) With model transduction, a lion successfully imitates a cat's pose even if the reference source mesh is absent.

Angle-Based Feature-Sensitive Metric and a Sketch-Based Mesh Segmentation Framework

Meaningful mesh segmentation plays an increasingly important role in various graphics applications, such as texture mapping, shape retrieval, and high-quality metamorphosis. This paper proposes a sketch-based interactive framework for real-time mesh segmentation. With an easy-to-use tool, the user can freely segment a 3D mesh with little effort or skill. To segment the mesh meaningfully, two dimensionless feature-sensitive metrics are proposed, which are independent of model and part size. We show that these metrics have a clear physical meaning, reflecting discrete differential-geometric features such as the curvature tensor and the curve length of the Gaussian image. Finally, we discuss three kinds of boundary smoothing methods and present two fast topology-invariant, geometry-invariant adjustment algorithms based on convex-hull features and angle features. Compared with previous methods, our method can easily achieve multi-level segmentation in just one session.
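As one small, hedged ingredient of such angle-based measures, the sketch below computes the dihedral angle across each interior edge of a triangle mesh with NumPy; the actual dimensionless metrics, the boundary adjustment algorithms, and the interactive sketching tool are not reproduced here.

# Minimal sketch: dihedral angles across interior edges of a triangle mesh.
import numpy as np

def dihedral_angles(V, F):
    """V: (n,3) vertices, F: (m,3) triangles (assumed non-degenerate).
    Returns {edge (i,j) with i<j : dihedral angle in radians}."""
    # Unit normal of each triangle.
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    N = np.cross(e1, e2)
    N /= np.linalg.norm(N, axis=1, keepdims=True)

    # Map each undirected edge to the triangles sharing it.
    edge_faces = {}
    for f, tri in enumerate(F):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            key = tuple(sorted((tri[a], tri[b])))
            edge_faces.setdefault(key, []).append(f)

    angles = {}
    for edge, faces in edge_faces.items():
        if len(faces) == 2:                      # interior edge only
            c = np.clip(np.dot(N[faces[0]], N[faces[1]]), -1.0, 1.0)
            angles[edge] = np.arccos(c)          # 0 = flat, large = sharp feature
    return angles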

Figure  Angle-Based Feature Sensitive Metric and the sketch-based mesh segmentation framework

 

Edge-directed Single Image Super-resolution via Adaptive Gradient Magnitude Self-interpolation

Super-resolution from a single image plays an important role in many computer vision systems. However, it remains a challenging task, especially in preserving local edge structures. To construct high-resolution images while preserving sharp edges, an effective edge-directed super-resolution method is presented in this paper. An adaptive self-interpolation algorithm is first proposed to estimate a sharp high-resolution gradient field directly from the input low-resolution image. The obtained high-resolution gradient is then used as a gradient constraint, i.e., an edge-preserving constraint, to reconstruct the high-resolution image. Extensive results show, both qualitatively and quantitatively, that the proposed method produces convincing super-resolution images containing complex and sharp features, compared with other state-of-the-art super-resolution algorithms.
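The reconstruction step can be illustrated with a small gradient-descent sketch: starting from an upsampled image I0, it minimizes a fidelity term plus a term pulling the image gradients toward the estimated sharp gradient field (Gx, Gy). The simple data term, the periodic boundary handling (via np.roll), and the parameters are simplifying assumptions, not the paper's exact formulation.

# Minimal sketch: HR reconstruction under a gradient-domain constraint (NumPy).
import numpy as np

def dx(I):  return np.roll(I, -1, axis=1) - I     # forward difference, x
def dy(I):  return np.roll(I, -1, axis=0) - I     # forward difference, y
def dxT(R): return np.roll(R, 1, axis=1) - R      # adjoint of dx
def dyT(R): return np.roll(R, 1, axis=0) - R      # adjoint of dy

def reconstruct(I0, Gx, Gy, lam=0.2, step=0.2, iters=200):
    """I0: initial upsampled image; (Gx, Gy): estimated sharp HR gradients."""
    I = I0.copy()
    for _ in range(iters):
        # Gradient of  ||I - I0||^2 + lam * ||grad(I) - G||^2.
        grad = (I - I0) + lam * (dxT(dx(I) - Gx) + dyT(dy(I) - Gy))
        I -= step * grad                          # plain gradient descent
    return I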

Figure  Super-resolution comparison (3X magnification) with other edge-directed methods. (a) Back-projection. (b) Laplacian edge-directed [30]. (c) Soft-cut edge-directed [11]. (d) Our result. The bottom row shows close-ups of the corresponding results.

 

Mean-shift Object Tracking with a Novel Back-Projection Calculation Method

In this paper, we propose a mean-shift tracking method using a novel back-projection calculation. Traditional back-projection calculation methods have two main drawbacks: they are either prone to disturbance from the background when calculating the histogram of the target region, or they only consider the importance of a pixel relative to other pixels when calculating the back-projection of the search region. To address these two drawbacks, we explicitly take the background appearance into account based on two priors, i.e., the texture information of the background and the appearance difference between the foreground target and the background. Accordingly, our method consists of two basic steps. First, we present a foreground-target histogram approximation method to effectively reduce the disturbance from the background; this foreground-target histogram is then used for back-projection calculation instead of the target-region histogram. Second, a novel back-projection calculation method is proposed that emphasizes the probability that a pixel belongs to the foreground target. Experiments show that our method is suitable for various tracking scenes and achieves appealing robustness.
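A hedged sketch of the back-projection idea, for a single-channel image: the target-region histogram is down-weighted by a surrounding background-region histogram (in the spirit of background-weighted histograms) to approximate a foreground-only histogram, which is then back-projected over the search region. The box layout, bin count, and weighting rule are illustrative rather than the paper's exact scheme.

# Minimal sketch: foreground-weighted histogram back-projection (NumPy).
import numpy as np

def backproject(image, target_box, bg_box, search_box, bins=16):
    """Boxes are (x, y, w, h); image is a single-channel uint8 array."""
    def region_hist(box):
        x, y, w, h = box
        patch = image[y:y + h, x:x + w]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
        return hist

    h_tgt, h_bg = region_hist(target_box), region_hist(bg_box)
    # Down-weight bins that are also frequent in the background so the
    # histogram approximates the foreground target only.
    nz = h_bg[h_bg > 0]
    wt = np.minimum(nz.min() / (h_bg + 1e-6), 1.0) if nz.size else np.ones(bins)
    h_fg = h_tgt * wt
    h_fg /= h_fg.sum() + 1e-6

    x, y, w, h = search_box
    region = image[y:y + h, x:x + w]
    bin_idx = np.clip(region.astype(int) * bins // 256, 0, bins - 1)
    return h_fg[bin_idx]              # per-pixel foreground probability map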

 

 

Figure  This figure shows three tracking results based on our mean-shift tracking method with the novel back-projection calculation.

 

Adaptive ζLBP for Background Subtraction

Background subtraction plays an important role in many computer vision systems, yet in complex scenes it is still a challenging task, especially under illumination variations. In this work, we develop an efficient texture-based method to tackle this problem. First, we propose a novel adaptive ζLBP operator, in which the threshold is adaptively calculated by balancing two criteria, i.e., description stability and discriminative ability. Then, a naive Bayesian technique is adopted to effectively model the probability distribution of local patterns at the pixel level, using only a single ζLBP pattern instead of a ζLBP histogram over a local region. Our approach is evaluated on several video sequences against traditional methods. Experiments show that our method is suitable for various scenes and, in particular, can robustly handle illumination variations.
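The sketch below computes plain 8-neighbour LBP codes with an explicit comparison threshold; the adaptive rule that balances stability and discriminability, and the pixel-level Bayesian model, are not reproduced here, so the threshold tau is simply an input.

# Minimal sketch: 8-neighbour LBP codes with an explicit threshold (NumPy).
import numpy as np

def lbp_codes(gray, tau):
    """gray: (H,W) float image; tau: scalar or (H,W) threshold map."""
    H, W = gray.shape
    center = gray[1:-1, 1:-1]
    t = tau[1:-1, 1:-1] if np.ndim(tau) == 2 else tau
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((H - 2, W - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        # Set the bit when the neighbour exceeds the centre by the threshold.
        code += (neighbour - center >= t).astype(int) << bit
    return code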

 

Figure  Overview of three local difference sequences over time.

 

Forward-Backward Mean-shift for Visual Tracking with Local Background Weighted Histogram

Object tracking plays an important role in many intelligent transportation systems. Unfortunately, it remains a challenging task due to factors such as occlusion and target appearance variation. In this work, we present a new tracking algorithm to tackle the difficulties caused by these two factors. First, to handle target appearance variation, we introduce the local background weighted histogram (LBWH) to describe the target. In our LBWH, the local background is treated as the context of the target representation; compared with traditional descriptors, the LBWH is more robust to variability or clutter in the potential background. Second, to deal with occlusion, a new forward-backward mean-shift (FBMS) algorithm is proposed that incorporates a forward-backward evaluation scheme, in which the tracking result is assessed by the forward-backward error. Extensive experiments on various scenarios demonstrate the effectiveness and robustness of our tracking algorithm.
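The forward-backward idea can be sketched generically: run one tracking step per frame forward through a clip, then backward from the final position, and treat the distance between the original start and the backward end point as a reliability score. In the sketch, track_step is a placeholder for a single mean-shift update, and the (x, y, w, h) box format is an assumption.

# Minimal sketch of the forward-backward evaluation (tracker-agnostic).
import numpy as np

def forward_backward_error(frames, box0, track_step):
    """frames: list of images; box0: initial (x, y, w, h) box;
    track_step(prev_frame, next_frame, box) -> new box (placeholder)."""
    boxes = [box0]
    for prev, nxt in zip(frames[:-1], frames[1:]):        # forward pass
        boxes.append(track_step(prev, nxt, boxes[-1]))

    back = boxes[-1]
    rev = frames[::-1]
    for prev, nxt in zip(rev[:-1], rev[1:]):              # backward pass
        back = track_step(prev, nxt, back)

    # Large start-to-end gap signals drift, e.g. caused by occlusion.
    (x0, y0, *_), (x1, y1, *_) = box0, back
    return float(np.hypot(x0 - x1, y0 - y1)), boxes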

 

Figure  Illustration of forward-backward mean-shift.

 

Facial Image Composition Based on Active Appearance Model

In this project, based on the Active Appearance Model (AAM), we present an easy-to-use framework for facial image composition, which automatically transfers the source image's face or facial features onto the target image. Manual interaction is simple: the user only needs to specify the semantic ROI (region of interest) to be exchanged, such as 'face' or 'eyes'. Our framework mainly consists of two steps: model fitting and component compositing. Model fitting interprets each input image and obtains a synthesized model face of the image. Component compositing then generates a visually pleasing result by solving a Poisson equation with boundary conditions produced automatically by model fitting. Furthermore, we propose a solution for eliminating artifacts when part of the target face is occluded by hair, glasses, etc. The visually satisfactory results demonstrate the effectiveness of our facial image composition system.
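The compositing step amounts to Poisson (gradient-domain) blending with Dirichlet boundary conditions; as a stand-in for that solver, OpenCV's seamlessClone solves the same kind of guided Poisson problem. The AAM fitting that would produce the mask and placement automatically is not reproduced here, so they are illustrative inputs.

# Minimal sketch of the Poisson compositing step using OpenCV's seamlessClone
# (the AAM fitting that would supply `mask` and `center` is omitted).
import cv2

def composite_face(source_bgr, target_bgr, mask, center):
    """mask: uint8 binary mask of the ROI (e.g. 'face' or 'eyes') drawn on the
    source image; center: (x, y) placement of the ROI in the target image."""
    return cv2.seamlessClone(source_bgr, target_bgr, mask, center,
                             cv2.NORMAL_CLONE)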

 

Figure  Facial Image Composition (Automatically Exchange Faces) Based on Active Appearance Model