Director of Machine INtelligence and Data analytics (MIND) Lab

Cybersecurity Center (Affiliated)
The Center for Scientific Computing and Data Science Research (Affiliated)

Phone: 508-910-6893
Email: mshao[at]umassd[dot]edu

Office Hours (Fall 2022)
Tue/Thu: 10am-11am; Fri: 10am-12pm

Computer and Information Science
College of Engineering
University of Massachusetts Dartmouth

Address: 285 Old Westport Road
Dion 303A, Dartmouth, MA 02747-2300

 

Ming (Daniel) Shao
Associate Professor
About Me [Google Scholar] [LinkedIn] *******Open Positions*******

I am an Associate Professor in the Computer and Information Science Department at UMass Dartmouth, where I started in Fall 2016. My research interests include multi-view learning, transfer learning/domain adaptation, deep learning, large-scale graph approximation/clustering, and social media analytics. I received my Ph.D. from the Department of Electrical and Computer Engineering at Northeastern University in 2016.

 

I am always looking for self-motivated graduate students and visiting students/scholars. Feel free to contact me with your CV.

For prospective students, you will find PhD application information here and here

[Highlighted Research]

  • Adversarial Machine Learning
  1. Adversary for social good
  2. Adversarial multi-modal action modeling
  3. Graph induced adversarial attack
  • Healthcare Informatics and Medical Image Analysis
  1. Clinical state progression, 2. Medical image segmentation, 3. Super-resolution
  • Visual Kinship Understanding. Check recent:
  1. Paper: FIW dataset, survey
  2. Workshop: data challenge I, data challenge II, data challenge III, data challenge IV
  3. Tutorial: tutorial1@ACM-MM2018, tutorial2@CVPR2019, tutorial3@FG2019
  • Multi-view Learning. Check recent:
  1. Paper: survey, action prediction;
  2. Tutorial: tutorial1@CVPR2018, tutorial2@BigData2018, tutorial@IJCAI-2020
 

[NSF REU Site Summer 2023]

  1. Ten-week intensive research on AI, security, and system engineering
  2. $6,000 stipend in total for each participant
  3. On-campus housing and meal allowance
  4. Up to $600 travel expenses to and from the REU site
   
 

[News]

  • [8-2022] Grateful to receive NSF CPS Medium Award ($512k) as site-PI to support our AI-enabled organs-on-chips research. The total amount is $1.1M, and UMassD is the lead institution
  • [8-2022] Grateful to receive grant from NFWF ($207k) as co-PI to support our Electronic Monitoring Program for New England Ground Fish
  • [7-2022] Grateful to receive Mass Skills Capital Grant ($500k) as co-PI to support our industrial robotics and cybersecurity research
  • [7-2022] Grateful to receive NSF CAREER Award ($498k) as sole PI to support my continual multi-view representation learning
  • [2-2022] Grateful to receive MIT Sea Grant from NOAA ($190k) as co-PI to support our Multispecies Groundfish Electronic Monitoring Programs
  • [12-2021] Grateful to receive MSF internal SEED Award ($20k) as PI to support AI research to assess changes in demersal fish population and species assemblage

[Sponsors]

Transfer learning, and multi-modality recognition

Low-rank transfer learning

[Introduction]

Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them.

In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of the source and target domains is preserved. This approach has three benefits.

  • First, good alignment between the domains is ensured, because only relevant data in some subspace of the source domain are used to reconstruct the data in the target domain.
  • Second, the discriminative power of the source domain is naturally passed on to the target domain.
  • Third, noisy information will be filtered out in the knowledge transfer.

Extensive experiments on synthetic data and on important computer vision problems, such as face recognition and visual domain adaptation for object recognition, demonstrate the superiority of the proposed approach over existing, well-established methods.
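As a toy illustration of the core idea (a generic proximal-gradient sketch, not the exact formulation of the papers below): target samples are reconstructed from source samples while the reconstruction coefficient matrix is shrunk toward low rank via singular-value thresholding. All data, dimensions, and parameters here are made up for illustration.

```python
import numpy as np

def svt(M, tau):
    # Singular-value thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
Xs = rng.standard_normal((20, 50))                        # toy source samples (features x n_s)
Xt = Xs[:, :30] + 0.05 * rng.standard_normal((20, 30))    # slightly shifted target domain

# Alternate a gradient step on 0.5*||Xt - Xs Z||_F^2 with a nuclear-norm
# shrinkage of Z (a crude proximal-gradient loop).
Z = np.zeros((50, 30))
step, tau = 1e-3, 0.5
for _ in range(500):
    grad = Xs.T @ (Xs @ Z - Xt)
    Z = svt(Z - step * grad, step * tau)

err = np.linalg.norm(Xt - Xs @ Z) / np.linalg.norm(Xt)    # relative reconstruction error
```

On this toy data the relative reconstruction error drops to a few percent while the shrinkage keeps `Z` from overfitting the noise; the actual papers solve a richer objective with a learned subspace projection.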

[Related Work]

  1. Ming Shao, Carlos Castillo, Zhenghong Gu, and Yun Fu, Low-Rank Transfer Subspace Learning, International Conference on Data Mining (ICDM), pages 1104--1109, 2012. [pdf] [bib]
  2. Ming Shao, Dmitry Kit, and Yun Fu, Generalized Transfer Subspace Learning through Low-Rank Constraint, International Journal on Computer Vision (IJCV), vol. 109, no. 1-2, pages 74--93, 2014. [pdf] [bib]
  3. Zhengming Ding, Ming Shao, and Yun Fu, Latent Low-Rank Transfer Subspace Learning for Missing Modality Recognition, AAAI Conference on Artificial Intelligence (AAAI), pages 1192--1198, 2014. [pdf] [bib]
  4. Zhengming Ding, Ming Shao, and Yun Fu, Missing Modality Transfer Learning via Latent Low-Rank Constraint, IEEE Transactions on Image Processing (TIP), vol. 24, no. 11, pages 4322--4334, 2015. [pdf] [bib]
  5. Hongfu Liu, Ming Shao, and Yun Fu, Structure-Preserved Multi-Source Domain Adaptation, IEEE International Conference on Data Mining (ICDM), pages 1059--1064, 2016. [pdf] [bib]
  6. Ming Shao, Zhengming Ding, Handong Zhao, and Yun Fu, Spectral Bisection Tree Guided Deep Adaptive Exemplar Autoencoder for Unsupervised Domain Adaptation, AAAI Conference on Artificial Intelligence (AAAI), pages 2023--2029, 2016. [pdf] [bib]

Multi-modality learning

[Introduction]

Recent face recognition work has concentrated on different spectra, e.g., near infrared, which can only be perceived by specifically designed devices, in order to avoid the illumination problem. This makes great sense in fighting off lighting factors in face recognition. However, using near infrared inevitably introduces a new problem, namely, cross-modality classification. In brief, images registered in the system are in one modality, while the probe images captured at test time are in another. In addition, there can be many within-modality variations, e.g., pose and expression, leading to an even more complicated problem.

To address this problem, we propose a novel framework called hierarchical hyperlingual-words. First, we design a novel structure, called generic hyperlingual-words, to capture the high-level semantics across different modalities and within each modality in a weakly supervised fashion, meaning only modality pair and variation information is needed in training. Second, to improve the discriminative power of hyperlingual-words, we propose a novel distance metric through the hierarchical structure of hyperlingual-words. Extensive experiments on multi-modality face databases demonstrate the superiority of our method compared to state-of-the-art works on face recognition tasks subject to pose and expression variations.

[Related Work]

  1. Ming Shao, and Yun Fu, Cross-Modality Feature Learning through Generic Hierarchical Hyperlingual-Words, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol 28, no. 2, pages 451–463, 2017. [pdf] [bib]
  2. Ming Shao and Yun Fu, Hierarchical Hyperlingual-Words for Multi-Modality Face Classification, International Conference on Automatic Face and Gesture Recognition (FG), pages 1--6, 2013. [pdf] [bib]

Zero-shot learning

[Introduction]

Zero-shot learning for visual recognition has attracted much interest in recent years. However, the semantic gap between visual features and their underlying semantics is still the biggest obstacle in zero-shot learning. To bridge this gap, we propose an effective Low-rank Embedded Semantic Dictionary learning (LESD) framework through an ensemble strategy. Specifically, we formulate a novel framework to jointly seek a low-rank embedding and a semantic dictionary that link visual features with their semantic representations, which manages to capture shared features across different observed classes. Moreover, an ensemble strategy is adopted to learn multiple semantic dictionaries that constitute the latent basis for the unseen classes. Consequently, our model can extract a variety of visual characteristics within objects that generalize well to unknown categories. Extensive experiments on several zero-shot benchmarks verify that the proposed model outperforms state-of-the-art approaches.

[Related Work]

  1. Zhengming Ding, Ming Shao, and Yun Fu. Low-Rank Embedded Ensemble Semantic Dictionary for Zero-Shot Learning, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [pdf] [bib]

Deep representation learning

Low-rank coding and dictionary learning

[Introduction]

Low-rank and sparse modeling is critical for robust feature learning and transfer learning. Here we investigate two uses of low-rank and sparse modeling: 1. deep robust encoders; 2. deep transfer learning.

In the first work, we propose a novel Deep Robust Encoder (DRE) through a locality preserving low-rank dictionary to extract robust and discriminative features from corrupted data, where a low-rank dictionary and a regularized deep auto-encoder are jointly optimized. First, we propose a novel loss function in the output layer with a learned low-rank clean dictionary and corresponding weights with locality information, which ensures that the reconstruction is noise free. Second, discriminant graph regularizers that preserve the local geometric structure of the data are developed to guide the deep feature learning in each encoding layer. Experimental results on several benchmarks, including object and face images, verify the effectiveness of our algorithm in comparison with state-of-the-art approaches.

In the second work, we develop a novel approach, called Deep Low-Rank Coding (DLRC), for transfer learning. Specifically, discriminative low-rank coding is achieved under the guidance of an iterative supervised structure term in each single layer. In this way, the gaps in both marginal and conditional distributions between the two domains are mitigated. In addition, a marginalized denoising feature transform is employed to guarantee that the learned single-layer low-rank coding is robust despite corruptions or noise. Finally, by stacking multiple layers of low-rank codings, we can build robust cross-domain features from coarse to fine. Experimental results on several benchmarks demonstrate the effectiveness of our proposed algorithm in improving performance on the target domain.

[Related Work]

  1. Zhengming Ding, Ming Shao, and Yun Fu, Deep Robust Encoder through Locality Preserving Low-Rank Dictionary, European Conference on Computer Vision (ECCV), pages 567–582, 2016. [pdf] [bib]
  2. Zhengming Ding, Ming Shao, and Yun Fu, Deep Low-Rank Coding for Transfer Learning, International Joint Conferences on Artificial Intelligence (IJCAI), pages 3453--3459, 2015. [pdf] [bib]

Weak style learning

[Introduction]

Style classification (e.g., Baroque and Gothic architectural styles) is attracting increasing attention in many fields such as fashion, architecture, and manga. Most existing methods focus on extracting discriminative features from local patches or patterns. However, the spread-out phenomenon in style classification has not yet been recognized: visually less representative images in a style class are usually very diverse and easily misclassified. We name them weak style images. Another issue when employing multiple visual features for effective weak style classification is the lack of consensus among different features. That is, the weights for different visual features of the same local patch should be allocated similar values.

To address these issues, we propose a Consensus Style Centralizing Auto-Encoder (CSCAE) for learning robust style feature representations, especially for weak style classification. First, we propose a Style Centralizing Auto-Encoder (SCAE), which centralizes weak style features in a progressive way. Then, based on SCAE, we propose both non-linear and linear versions of CSCAE, which adaptively allocate weights for different features during the progressive centralization process. Consensus constraints are added based on the assumption that the weights of different features of the same patch should be similar. Specifically, the proposed linear counterpart of CSCAE, motivated by the "shared weights" idea as well as group sparsity, improves both efficacy and efficiency. For evaluation, we experiment extensively on fashion, manga, and architecture style classification problems. In addition, we collect a new dataset, Online Shopping, for fashion style classification, which will be publicly available for vision-based fashion style research. Experiments demonstrate the effectiveness of SCAE and CSCAE on both public and newly collected datasets compared with the most recent state-of-the-art works.

[Related Work]

  1. Shuhui Jiang, Ming Shao, Chengcheng Jia, and Yun Fu, Learning Consensus Representation for Weak Style Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017 (in press). [pdf] [bib]
  2. Shuhui Jiang, Ming Shao, Chengcheng Jia, and Yun Fu, Consensus Style Centralizing Auto-encoder for Weak Style Classification, AAAI Conference on Artificial Intelligence (AAAI), pages 1223–1229, 2016. [pdf] [bib]

Large-scale graph approximation and clustering

Graph approximation

[Introduction]

We consider exploiting the data's structure for sparse graph approximation. Graphs play increasingly important roles in learning problems: manifold learning, kernel learning, and spectral clustering. Specifically, in this paper we concentrate on nearest neighbor sparse graphs, which are widely adopted in learning problems due to their spatial efficiency. Nonetheless, we raise an even more challenging problem: can we save more memory space while keeping competitive performance for the sparse graph? To this end, first, we propose to partition the entire graph into intra- and inter-graphs by exploring the graph structure, and use both of them for graph approximation. Therefore, neighborhood similarities within each cluster and between different clusters are well preserved. Second, we improve the space use of the entire approximation algorithm. Specifically, a novel sparse inter-graph approximation algorithm is proposed, and the corresponding approximation error bound is provided. Third, extensive experiments are conducted on 11 real-world datasets, ranging from small to large scale, demonstrating that when using less space, our approximate graph provides comparable or even better performance in terms of approximation error and clustering accuracy. In the large-scale test, we use less than 1/100 of the memory of comparable algorithms, but achieve very appealing results.
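A minimal sketch of the two ingredients above on toy data (not the paper's algorithm): build a k-nearest-neighbor sparse graph, then partition its edges into intra- and inter-cluster sets using a known cluster structure. The data, k, and cluster labels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two well-separated Gaussian clusters in the plane.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
labels = np.repeat([0, 1], 100)

# Brute-force k-nearest-neighbor sparse graph (k = 5).
k = 5
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
np.fill_diagonal(D, np.inf)                  # exclude self-loops
nn = np.argsort(D, axis=1)[:, :k]            # indices of the k nearest neighbors

# Partition the edge set by the cluster structure: intra- vs inter-graph.
edges = [(i, j) for i in range(len(X)) for j in nn[i]]
intra = [(i, j) for i, j in edges if labels[i] == labels[j]]
inter = [(i, j) for i, j in edges if labels[i] != labels[j]]

dense_entries = len(X) ** 2                  # storage for a dense affinity matrix
sparse_entries = len(edges)                  # storage for the kNN graph (n * k)
```

The kNN graph stores n*k entries instead of n^2, and with well-separated clusters almost all edges land in the intra-graph, which is what makes a cheap, structured approximation of the inter-graph attractive.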

[Related Work]

  1. Ming Shao, Xindong Wu, and Yun Fu, Scalable Nearest Neighbor Sparse Graph Approximation by Exploiting Graph Structure, IEEE Transactions on Big Data (TBD), vol. 2, no. 4, pages 365–380, 2016. [pdf] [bib]

Fast deep clustering

[Introduction]

Clustering is one of the most critical unsupervised learning techniques and has been widely applied to data mining problems. As one of its branches, graph clustering enjoys popularity due to its appealing performance and strong theoretical support. However, the eigen-decomposition problems involved are computationally expensive. In this paper, we propose a deep structure with a linear coder as the building block for fast graph clustering, called Deep Linear Coding (DLC). Different from conventional coding schemes, we jointly learn the feature transform function and discriminative codings, and guarantee that the learned codes are robust in spite of local distortions. In addition, we use the proposed linear coders as the building blocks of a deep structure to further refine features in a layerwise fashion. Extensive experiments on clustering tasks demonstrate that our method performs well in terms of both time complexity and clustering accuracy. On a large-scale benchmark dataset (580K), our method runs 1,500 times faster than the original spectral clustering.
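For context, the baseline being accelerated looks like the following sketch of standard spectral clustering: the `eigh` call on the normalized Laplacian is the O(n^3) step that dominates the runtime at scale and that DLC is designed to avoid. The toy data and bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two compact clusters; spectral clustering should separate them cleanly.
X = np.vstack([rng.normal(0, 0.5, (60, 2)), rng.normal(3, 0.5, (60, 2))])
true = np.repeat([0, 1], 60)

# Gaussian affinity and normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-sq / 2.0)
np.fill_diagonal(W, 0.0)
d = W.sum(1)
L = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

# The eigen-decomposition: cubic in n, the bottleneck for large graphs.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                   # second-smallest eigenvector
pred = (fiedler > 0).astype(int)       # two-way split from its sign

# Accuracy up to label permutation.
acc = max((pred == true).mean(), (pred != true).mean())
```

On 120 points this is instantaneous, but the cubic cost of `eigh` is why a 580K-point graph is out of reach for plain spectral clustering and why coder-based alternatives matter.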

[Related Work]

  1. Ming Shao, Sheng Li, Zhengming Ding, and Yun Fu, Deep Linear Coding for Fast Graph Clustering, International Joint Conferences on Artificial Intelligence (IJCAI), pages 3798--3804, 2015. [pdf] [bib]

Visual kinship understanding

[Introduction]

There is an urgent need to organize and manage images of people automatically due to the recent explosion of such data on the web as well as in social media. Beyond face detection and face recognition, which have been extensively studied over the past decade, the most interesting aspect of human-centered images is the relationship of the people in them. In this work, we focus on a novel solution to the latter problem, in particular kin relationships. Our contributions are twofold:

  1. We develop a transfer subspace learning based algorithm to reduce the significant differences in appearance distributions between facial images of children and their older parents. Moreover, by exploring the semantic relevance of the associated metadata, we propose an algorithm to predict the most likely kin relationships embedded in an image.
  2. Motivated by the lack of a single, unified image dataset available for kinship tasks, our goal is to provide the research community with a dataset large enough in scope to inherently provide platforms for multiple benchmarked tasks. We were able to collect and label the largest set of family images to date with a small team and an efficient labeling tool developed to optimize the process of marking complex hierarchical relationships, attributes, and local label information in family photos.

[Related Work]

  1. Siyu Xia*, Ming Shao*, and Yun Fu, Kinship Verification through Transfer Learning, International Joint Conferences on Artificial Intelligence (IJCAI), pages 2539--2544, 2011. (* indicates equal contribution) [pdf] [bib]
  2. Ming Shao, Siyu Xia, and Yun Fu, Genealogical Face Recognition based on UB KinFace Database, IEEE CVPR Workshop on Biometrics (CVPR BIOM), 65--70, 2011. (Database is available now) [pdf] [bib]
  3. Siyu Xia*, Ming Shao*, Jiebo Luo, and Yun Fu, Understanding Kin Relationships in a Photo, IEEE Transactions on Multimedia (T-MM), vol. 14, no. 4, pages 1046--1056, 2012. (* indicates equal contributions) [pdf] [bib]
  4. Joseph Robinson, Ming Shao, Yue Wu, and Yun Fu, Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks, ACM Multimedia Conference, pages 242--246, 2016. [pdf] [bib]
  5. Junkang Zhang, Siyu Xia, Ming Shao, and Yun Fu, Family Photo Recognition via Multiple Instance Learning, ACM International Conference on Multimedia Retrieval (ICMR), 2017. [pdf] [bib]
  6. Joseph P. Robinson, Ming Shao, Yue Wu, Hongfu Liu, Timothy Gillis, and Yun Fu, Visual Kinship Recognition of Families in the Wild, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018 (in press). [pdf] [bib]

Social media analytics

Occupation recognition

[Introduction]

In this work, we investigate the problem of recognizing the occupations of multiple people with arbitrary poses in a photo. Previous work utilizing a single person's nearly frontal clothing information and fore/background context preliminarily proved that occupation recognition is computationally feasible in computer vision. A more challenging task is recognizing the occupations of multiple people with arbitrary poses in a photo. To that end, we propose to use discriminative clothing features and demographics, as well as structured classifiers, for occupation recognition. To evaluate our method's performance, we conduct extensive experiments on a new well-labeled occupation database with 14 representative occupations and over 7K images. Results on this database validate our method's effectiveness and show that occupation recognition is solvable in a more general case.

[Related Work]

  1. Ming Shao, Liangyue Li, and Yun Fu, What Do You Do? Occupation Recognition in a Photo via Social Context, International Conference on Computer Vision (ICCV), pages 3631--3638, 2013. [pdf] [bib]
  2. Ming Shao, Liangyue Li, Yun Fu, Predicting Professions through Probabilistic Model under Social Context, AAAI Conference on Artificial Intelligence (AAAI), pages 122--124, 2013. [pdf] [bib]

Human reidentification

[Introduction]

Person re-identification plays an important role in many safety-critical applications. Existing works mainly focus on extracting patch-level features or learning distance metrics. However, the representation power of extracted features might be limited, due to the various viewing conditions of pedestrian images in complex real-world scenarios. To improve the representation power of features, we learn discriminative and robust representations via dictionary learning in this paper. First, we propose a Cross-view Dictionary Learning (CDL) model, which is a general solution to the multi-view learning problem. Second, we propose a Cross-view Multi-level Dictionary Learning (CMDL) approach based on CDL. CMDL contains dictionary learning models at different representation levels, including image-level, horizontal part-level, and patch-level. Third, we incorporate a discriminative regularization term into CMDL, and propose a CMDL-Dis approach which learns pairs of discriminative dictionaries at the image and part levels. We devise efficient optimization algorithms to solve the proposed models. Finally, a fusion strategy is utilized to generate the similarity scores for test images. Experiments on the public VIPeR, CUHK Campus, iLIDS, GRID and PRID450S datasets show that our approach achieves state-of-the-art performance.
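The final fusion step can be sketched as follows (a generic weighted score fusion, not the specific strategy of the papers below): per-level similarity matrices are normalized per query and combined with fixed weights. The level names, weights, and random scores are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_query, n_gallery = 5, 8
# Hypothetical similarity scores from three representation levels
# (image-, part-, and patch-level); rows are queries, columns gallery ids.
sims = {lvl: rng.random((n_query, n_gallery))
        for lvl in ("image", "part", "patch")}

def fuse(sims, weights):
    # Min-max normalize each level's scores per query, then weighted sum.
    fused = np.zeros((n_query, n_gallery))
    for lvl, S in sims.items():
        lo = S.min(1, keepdims=True)
        hi = S.max(1, keepdims=True)
        fused += weights[lvl] * (S - lo) / (hi - lo + 1e-12)
    return fused

fused = fuse(sims, {"image": 0.5, "part": 0.3, "patch": 0.2})
ranking = np.argsort(-fused, axis=1)   # best-matching gallery ids per query
```

Per-query normalization before fusing keeps one level's score scale from dominating the others, which is the usual motivation for this kind of late fusion.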

[Related Work]

  1. Sheng Li, Ming Shao, and Yun Fu, Person Re-identification by Cross-View Multi-Level Dictionary Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017 (in press). [pdf] [bib]
  2. Sheng Li, Ming Shao, and Yun Fu, Cross-View Projective Dictionary Learning for Person Re-identification, International Joint Conferences on Artificial Intelligence (IJCAI), pages 2155--2161, 2015. [pdf] [bib]

 

Human action recognition/detection

Human action detection

[Introduction]

We present a multi-stream bi-directional recurrent neural network for fine-grained action detection. Recently, two-stream convolutional neural networks (CNNs) trained on stacked optical flow and image frames have been successful for action recognition in videos. Our system uses a tracking algorithm to locate a bounding box around the person, which provides a frame of reference for appearance and motion and also suppresses background noise that is not within the bounding box. We train two additional streams on motion and appearance cropped to the tracked bounding box, along with full-frame streams. Our motion streams use pixel trajectories of a frame as raw features, in which the displacement values corresponding to a moving scene point are at the same spatial position across several frames. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. We test on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we introduce and make available to the community with this paper. The results demonstrate that our method significantly outperforms state-of-the-art action detection methods on both datasets.

[Related Work]

  1. Bharat Singh, Michael Jones, Tim Marks, Oncel Tuzel, and Ming Shao, A Multi-Stream Bi-Directional Recurrent Neural Network for Fine-Grained Action Detection, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1961--1970, 2016. [pdf] [bib]

 


Human action recognition

[Introduction]

In the action recognition work, we primarily focus on two problems. First, we approach action recognition through temporal alignment, and use sparse modeling to precisely align two videos by selecting key frames under different challenges: sub-action, multi-subject, and multi-modality. Second, we approach action recognition with noisy data, that is, video data containing both spatial and temporal corruptions. In this work, we propose a Coupled Stacked Denoising Tensor Auto-Encoder (CSDTAE) model, which approaches this corruption problem in a divide-and-conquer fashion by joining the spatial and temporal schemes together. In particular, each scheme is a Stacked Denoising Tensor Auto-Encoder (SDTAE) designed to handle either spatial or temporal corruption, respectively. An SDTAE is composed of several blocks, each of which is a Denoising Tensor Auto-Encoder (DTAE). Therefore, CSDTAE is built from several DTAE building blocks to solve the spatiotemporal corruption problem.
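The denoising auto-encoder building block can be sketched in its plain (non-tensor) form: corrupt the input with masking noise, then train the weights to reconstruct the *clean* input. This is a generic DAE with tied weights on toy data, not the tensor formulation of the papers below; all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((200, 16))                      # toy clean data, one row per sample

def dae_train(X, hidden=8, noise=0.3, lr=0.1, epochs=300, rng=rng):
    # One denoising auto-encoder block with tied weights W.
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    for _ in range(epochs):
        mask = rng.random(X.shape) > noise      # masking noise: drop ~30% of entries
        Xc = X * mask
        H = np.tanh(Xc @ W + b)                 # encode the CORRUPTED input
        R = H @ W.T + c                         # decode with the tied weights
        E = R - X                               # error against the CLEAN input
        dH = (E @ W) * (1 - H ** 2)             # backprop through tanh
        W -= lr / n * (Xc.T @ dH + E.T @ H)     # encoder + decoder gradients
        b -= lr / n * dH.sum(0)
        c -= lr / n * E.sum(0)
    return W, b, c

W, b, c = dae_train(X)
H = np.tanh(X @ W + b)                          # features from the clean input
recon_err = np.linalg.norm(H @ W.T + c - X) / np.linalg.norm(X)
```

Stacking several such blocks (feeding each block the previous block's codes) gives the "stacked" part; the tensor variants in the papers preserve the spatiotemporal structure of video instead of flattening it into rows.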

[Related Work]

  1. Chengcheng Jia, Ming Shao, and Yun Fu, Sparse Canonical Temporal Alignment with Deep Tensor Decomposition for Action Recognition, IEEE Transactions on Image Processing (TIP), vol. 26, no. 2, pages 738–750, 2017. [pdf] [bib]
  2. Chengcheng Jia, Ming Shao, Sheng Li, Handong Zhao, and Yun Fu, Stacked Denoising Tensor Auto-Encoder for Action Recognition with Spatiotemporal Corruptions, IEEE Transactions on Image Processing (TIP), 2017 (in press). [pdf] [bib]

Pre Print:

  1. Changsheng Lu, Siyu Xia, Ming Shao, and Yun Fu, High-quality Ellipse Detection Based on Arc-support Line Segments, arXiv:1810.03243. [pdf]
  2. Zhengming Ding, and Ming Shao, Robust Knowledge Discovery via Low-rank Modeling, arXiv:1909.13123v1. [pdf]
  3. Donghui Yan, Zhiwei Qin, Songxiang Gu, Haiping Xu, and Ming Shao, Cost-sensitive Selection of Variables by Ensemble of Model Sequences, arXiv:1901.00456v1, 2019. [pdf]

Book Chapters:

  1. Ming Shao, Mingbo Ma, and Yun Fu, Sparse Manifold Subspace Learning, Low-Rank and Sparse Modeling for Visual Analysis, pages 87--115, Springer, 2014. [pdf] [bib]
  2. Ming Shao, Dmitry Kit, and Yun Fu, Low-Rank Transfer Learning, Low-Rank and Sparse Modeling for Visual Analysis, pages 117--132, Springer, 2014. [pdf] [bib]
  3. Sheng Li, Ming Shao, and Yun Fu, Low-Rank Outlier Detection, Low-Rank and Sparse Modeling for Visual Analysis, pages 181--202, Springer, 2014. [pdf] [bib]
  4. Ming Shao, Siyu Xia, and Yun Fu, Identity and Kinship Relations in Group Pictures, Human-Centered Social Media Analytics, pages 191--206, Springer, 2013. [pdf] [bib]
  5. Ming Shao, Yun Fu, Recognizing Occupations through Probabilistic Models: A Social View, Human-Centered Social Media Analytics, pages 175--190, Springer, 2013. [pdf] [bib]

Journals:

  1. Joseph Robinson, Ming Shao, and Yun Fu, Survey on the Analysis and Modeling of Visual Kinship: A Decade in the Making, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
  2. Deepak Kumar, Chetan Kumar, and Ming Shao, Collaborative Knowledge Distillation for Incomplete Multi-view Action Prediction, Journal of Image and Vision Computing (IVC), 2021.
  3. Donghui Yan, Zhiwei Qin, Songxiang Gu, Haiping Xu, and Ming Shao, Cost-Sensitive Selection of Variables by Ensemble of Model Sequences, Knowledge and Information Systems (KAIS), 2021.
  4. Zhangxing Bian, Siyu Xia, Chao Xia, and Ming Shao, VitSeg: Weakly Supervised Vitiligo Segmentation in Skin Image, Computerized Medical Imaging and Graphics, vol. 85, 2020.
  5. Changsheng Lu, Siyu Xia, Ming Shao, and Yun Fu, High-quality Ellipse Detection Based on Arc-support Line Segments, IEEE Transactions on Image Processing (TIP), 2019.
  6. Zhengming Ding, Ming Shao, Wonjun Hwang, Sungjoo Suh, Jae-Joon Han, Changkyu Choi, Yun Fu, Robust Discriminative Metric Learning for Image Representation, IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2018.
  7. Hongfu Liu, Ming Shao, and Yun Fu, Feature Selection with Unsupervised Consensus Guidance, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
  8. Zhengming Ding, Ming Shao, and Yun Fu, Generative Zero-Shot Learning via Low-Rank Embedded Semantic Dictionary, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.
  9. Hongfu Liu, Ming Shao, Zhengming Ding, and Yun Fu, Structure-Preserved Unsupervised Domain Adaptation, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
  10. Joseph P. Robinson, Ming Shao, Yue Wu, Hongfu Liu, Timothy Gillis, and Yun Fu, Visual Kinship Recognition of Families in the Wild, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.
  11. Sheng Li, Ming Shao, and Yun Fu, Multi-View Low-Rank Analysis with Applications to Outlier Detection, ACM Transactions on Knowledge Discovery from Data (TKDD), 2017.
  12. Chengcheng Jia, Ming Shao, Sheng Li, Handong Zhao, and Yun Fu, Stacked Denoising Tensor Auto-Encoder for Action Recognition with Spatiotemporal Corruptions, IEEE Transactions on Image Processing (TIP), 2017.
  13. Sheng Li, Ming Shao, and Yun Fu, Person Re-identification by Cross-View Multi-Level Dictionary Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017. [pdf] [bib]
  14. Shuhui Jiang, Ming Shao, Chengcheng Jia, and Yun Fu, Learning Consensus Representation for Weak Style Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017 (in press). [pdf] [bib]
  15. Hongfu Liu, Ming Shao, Sheng Li, and Yun Fu, Infinite Ensemble Clustering, Data Mining and Knowledge Discovery (DMKD), 2017. [pdf] [bib]
  16. Ming Shao, Yizhe Zhang, and Yun Fu, Collaborative Random Faces Guided Encoders for Pose-Invariant Face Representation Learning, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2017. [pdf] [bib]
  17. Yu Kong, Ming Shao, Kang Li, and Yun Fu, Probabilistic Low-Rank Multi-Task Learning, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2017. [pdf] [bib]
  18. Zhengming Ding, Ming Shao, and Yun Fu, Incomplete Multi-Source Transfer Learning, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2016. [pdf] [bib]
  19. Chengcheng Jia, Ming Shao, and Yun Fu, Sparse Canonical Temporal Alignment with Deep Tensor Decomposition for Action Recognition, IEEE Transactions on Image Processing (TIP), vol. 26, no. 2, pages 738–750, 2017. [pdf] [bib]
  20. Ming Shao, Xindong Wu, and Yun Fu, Scalable Nearest Neighbor Sparse Graph Approximation by Exploiting Graph Structure, IEEE Transactions on Big Data (TBD), vol. 2, no. 4, pages 365–380, 2016. [pdf] [bib]
  21. Ming Shao, and Yun Fu, Cross-Modality Feature Learning through Generic Hierarchical Hyperlingual-Words, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol 28, no. 2, pages 451–463, 2017. [pdf] [bib]
  22. Zhengming Ding, Ming Shao, and Yun Fu, Missing Modality Transfer Learning via Latent Low-Rank Constraint, IEEE Transactions on Image Processing (TIP), vol. 24, no. 11, pages 4322--4334, 2015. [pdf] [bib]
  23. Ming Shao, Dmitry Kit, and Yun Fu, Generalized Transfer Subspace Learning through Low-Rank Constraint, International Journal on Computer Vision (IJCV), vol. 109, no. 1-2, pages 74--93, 2014. [pdf] [bib]
  24. Siyu Xia*, Ming Shao*, Jiebo Luo, and Yun Fu , Understanding Kin Relationships in a Photo, IEEE Transactions on Multimedia (T-MM), vol. 14, no. 4, pages 1046--1056, 2012. (* indicates equal contributions) [pdf] [bib]

Conferences:

2020

  1. Deepak Kumar, Chetan Kumar, Chun Wei Seah, Siyu Xia, and Ming Shao, Finding Achilles’ Heel: Adversarial Attack on Multi-modal Action Recognition, ACM International Conference on Multimedia (ACM-MM), 2020.
  2. Chengyao Zheng, Siyu Xia, Joseph Robinson, Changsheng Lu, Wayne Wu, Chen Qian, and Ming Shao, Localin Reshuffle Net: Toward Naturally and Efficiently Facial Image Blending, Asian Conference on Computer Vision (ACCV), 2020.
  3. Chetan Kumar, Riazat Ryan, and Ming Shao, Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks, AAAI Conference on Artificial Intelligence (AAAI), 2020 (oral presentation).

2019

  1. Riazat Ryan, Handong Zhao, and Ming Shao, CTC-Attention based Non-Parametric Inference Modeling for Clinical State Progression, IEEE International Conference on Big Data (BigData), 2019 (regular paper).
  2. Zhangxing Bian, Siyu Xia, Chao Xia, and Ming Shao, Weakly Supervised Vitiligo Segmentation in Skin Image through Saliency Propagation, International Conference on Bioinformatics & Biomedicine (BIBM), 2019.
  3. Chengyao Zheng, Siyu Xia, Ming Shao, and Yun Fu, Fast Facial Image Analogy with Spatial Guidance, IEEE Conference on Automatic Face and Gesture Recognition (FG), 2019.

2018

  1. Chao Xiao, Siyu Xia, Ming Shao, and Yun Fu, Album to Family Tree: A Graph based Method for Family Relationship Recognition, Asian Conference on Computer Vision (ACCV), 2018.
  2. Zhengming Ding, Ming Shao, Sheng Li, and Yun Fu, Generic Embedded Semantic Dictionary for Robust Multi-label Classification, IEEE International Conference on Big Knowledge (ICBK), 2018 [pdf] [bib]
  3. Zhengming Ding, Sheng Li, Ming Shao, and Yun Fu, Graph Adaptive Knowledge Transfer for Unsupervised Domain Adaptation, European Conference on Computer Vision (ECCV), 2018. [pdf] [bib]
  4. Bin Sun, Ming Shao, Siyu Xia, and Yun Fu, Deep Evolutionary 3D Diffusion Heat Maps for Large-pose Face Alignment, British Machine Vision Conference (BMVC), 2018. [pdf] [bib]
  5. Chao Xia, Siyu Xia, Yuan Zhou, Le Zhang, and Ming Shao, Graph Based Family Relationship Recognition from a Single Image, Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2018. [pdf] [bib]
  6. Zhengming Ding, Ming Shao, and Yun Fu, Robust Multi-view Representation: A Unified Perspective from Multi-view Learning to Domain Adaption, International Joint Conference on Artificial Intelligence (IJCAI), 2018. [pdf] [bib]

2017

  1. Changsheng Lu, Siyu Xia, Wanming Huang, Ming Shao, and Yun Fu, Circle Detection by Arc-Support Line Segments, IEEE International Conference on Image Processing (ICIP), 2017. [pdf] [bib]
  2. Junkang Zhang, Siyu Xia, Ming Shao, and Yun Fu, Family Photo Recognition via Multiple Instance Learning, ACM International Conference on Multimedia Retrieval (ICMR), 2017. [pdf] [bib]
  3. Zhengming Ding, Ming Shao, and Yun Fu, Low-Rank Embedded Ensemble Semantic Dictionary for Zero-Shot Learning, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [pdf] [bib]

2016

  1. Hongfu Liu, Ming Shao, and Yun Fu, Structure-Preserved Multi-Source Domain Adaptation, IEEE International Conference on Data Mining (ICDM), pages 1059–1064, 2016. [pdf] [bib]
  2. Zhengming Ding, Ming Shao, and Yun Fu, Deep Robust Encoder through Locality Preserving Low-Rank Dictionary, European Conference on Computer Vision (ECCV), pages 567–582, 2016. [pdf] [bib]
  3. Joseph Robinson, Ming Shao, Yue Wu, and Yun Fu, Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks, ACM Multimedia Conference, pages 242–246, 2016. [pdf] [bib]
  4. Hongfu Liu, Ming Shao, Sheng Li, and Yun Fu, Infinite Ensemble for Image Clustering, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 1745–1754, 2016. [pdf] [bib]
  5. Bharat Singh, Michael Jones, Tim Marks, Oncel Tuzel, and Ming Shao, A Multi-Stream Bi-Directional Recurrent Neural Network for Fine-Grained Action Detection, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1961–1970, 2016. [pdf] [bib]
  6. Zhengming Ding, Ming Shao, and Yun Fu, Transfer Learning for Image Classification with Incomplete Multiple Sources, International Joint Conference on Neural Networks (IJCNN), pages 2188–2195, 2016. [pdf] [bib]
  7. Chengcheng Jia, Ming Shao, and Yun Fu, Sparse Alignment for Video Analysis in Discriminant Tensor Space, International Joint Conference on Neural Networks (IJCNN), pages 2260–2266, 2016. [pdf] [bib]
  8. Ming Shao, Zhengming Ding, Handong Zhao, and Yun Fu, Spectral Bisection Tree Guided Deep Adaptive Exemplar Autoencoder for Unsupervised Domain Adaptation, AAAI Conference on Artificial Intelligence (AAAI), pages 2023–2029, 2016. [pdf] [bib]
  9. Shuhui Jiang, Ming Shao, Chengcheng Jia, and Yun Fu, Consensus Style Centralizing Auto-encoder for Weak Style Classification, AAAI Conference on Artificial Intelligence (AAAI), pages 1223–1229, 2016. [pdf] [bib]
  10. Hongfu Liu, Ming Shao, and Yun Fu, Consensus Guided Unsupervised Feature Selection, AAAI Conference on Artificial Intelligence (AAAI), pages 1874–1880, 2016. [pdf] [bib]

2015

  1. Handong Zhao, Zhengming Ding, Ming Shao, and Yun Fu, Part-Level Regularized Semi-Nonnegative Coding for Semi-Supervised Learning, IEEE International Conference on Data Mining (ICDM), pages 1123–1128, 2015. [pdf] [bib]
  2. Ming Shao, Sheng Li, Zhengming Ding, and Yun Fu, Deep Linear Coding for Fast Graph Clustering, International Joint Conferences on Artificial Intelligence (IJCAI), pages 3798--3804, 2015. [pdf] [bib]
  3. Sheng Li, Ming Shao, and Yun Fu, Cross-View Projective Dictionary Learning for Person Re-identification, International Joint Conferences on Artificial Intelligence (IJCAI), pages 2155--2161, 2015. [pdf] [bib]
  4. Zhengming Ding, Ming Shao, and Yun Fu, Deep Low-Rank Coding for Transfer Learning, International Joint Conferences on Artificial Intelligence (IJCAI), pages 3453--3459, 2015. [pdf] [bib]
  5. Ming Shao, Zhengming Ding, and Yun Fu, Sparse Low-Rank Fusion based Deep Features for Missing Modality Face Recognition, International Conference on Automatic Face and Gesture Recognition (FG), pages 1--6, 2015. [pdf] [bib]

2014

  1. Sheng Li, Ming Shao, and Yun Fu, Multi-view Low-Rank Analysis for Outlier Detection, SIAM International Conference on Data Mining (SDM), pages 748--756, 2014. [pdf] [bib]
  2. Shuyang Wang, Ming Shao, and Yun Fu, Attractive or Not? Beauty Prediction with Attractiveness-Aware Encoders and Robust Late Fusion, ACM-Multimedia Conference, pages 805--808, 2014. [pdf] [bib]
  3. Zhengming Ding, Ming Shao, and Yun Fu, Latent Low-Rank Transfer Subspace Learning for Missing Modality Recognition, AAAI Conference on Artificial Intelligence (AAAI), pages 1192--1198, 2014. [pdf] [bib]
  4. Ming Shao, Sheng Li, Tongliang Liu, Dacheng Tao, Thomas Huang, and Yun Fu, Learning Relative Features Through Adaptive Pooling For Image Classification, IEEE International Conference on Multimedia and Expo (ICME), pages 1--6, 2014, Best Paper Award Candidate (4 out of 716). [pdf] [bib]
  5. Sheng Li, Ming Shao, and Yun Fu, Locality Linear Fitting One-class SVM with Low-Rank Constraints for Outlier Detection, International Joint Conference on Neural Networks (IJCNN), pages 676--683, 2014. [pdf] [bib]

2013

  1. Ming Shao, Liangyue Li, and Yun Fu, What Do You Do? Occupation Recognition in a Photo via Social Context, International Conference on Computer Vision (ICCV), pages 3631--3638, 2013. [pdf] [bib]
  2. Yizhe Zhang*, Ming Shao*, Edward Wong, and Yun Fu, Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition, International Conference on Computer Vision (ICCV), pages 2416--2423, 2013. (* indicates equal contribution) [pdf] [bib]
  3. Ming Shao, Liangyue Li, and Yun Fu, Predicting Professions through Probabilistic Model under Social Context, AAAI Conference on Artificial Intelligence (AAAI), pages 122--124, 2013. [pdf] [bib]
  4. Ming Shao and Yun Fu, Hierarchical Hyperlingual-Words for Multi-Modality Face Classification, International Conference on Automatic Face and Gesture Recognition (FG), pages 1--6, 2013. [pdf] [bib]
  5. Mingbo Ma, Ming Shao, Xu Zhao, and Yun Fu, Prototype Based Feature Learning for Face Image Set Classification, International Conference on Automatic Face and Gesture Recognition (FG), pages 1--6, 2013. [pdf] [bib]
  6. Gaurav Srivastava, Ming Shao, and Yun Fu, Low-Rank Embedding for Semisupervised Face Classification, International Conference on Automatic Face and Gesture Recognition (FG), pages 1--6, 2013. [pdf] [bib]

2012 and Before

  1. Ming Shao, Carlos Castillo, Zhenghong Gu, and Yun Fu, Low-Rank Transfer Subspace Learning, International Conference on Data Mining (ICDM), pages 1104--1109, 2012. [pdf] [bib]
  2. Siyu Xia, Ming Shao, and Yun Fu, Toward Kinship Verification Using Visual Attributes, International Conference on Pattern Recognition (ICPR), pages 549--552, 2012 (Oral Presentation). [pdf] [bib]
  3. Zhenghong Gu, Ming Shao, Liangyue Li, and Yun Fu, Discriminative Metric: Schatten Norm vs. Vector Norm, International Conference on Pattern Recognition (ICPR), pages 1213--1216, 2012. [pdf] [bib]
  4. Wei Chen, Ming Shao, and Yun Fu, Clustering Based Fast Low-Rank Approximation for Large-Scale Graph, IEEE ICDM Workshop on Large Scale Visual Analytics (ICDM-LSVA), pages 787--792, 2011, Best Paper Award. [pdf] [bib]
  5. Siyu Xia*, Ming Shao*, and Yun Fu, Kinship Verification through Transfer Learning, International Joint Conferences on Artificial Intelligence (IJCAI), pages 2539--2544, 2011. (* indicates equal contribution) [pdf] [bib]
  6. Ming Shao, Siyu Xia, and Yun Fu, Genealogical Face Recognition based on UB KinFace Database, IEEE CVPR Workshop on Biometrics (CVPR BIOM), pages 65--70, 2011. (Database is available now) [pdf] [bib]
  7. Ming Shao, Yunhong Wang, and Xue Ling, A BEMD Based Normalization Method for Face Recognition under Variable Illuminations, International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1114--1117, 2010. [pdf] [bib]
  8. Ming Shao, Yunhong Wang, and Yiding Wang, A Super-Resolution Based Method to Synthesize Visual Images from Near Infrared, International Conference on Image Processing (ICIP), pages 2453--2456, 2009. [pdf] [bib]
  9. Ming Shao, Yunhong Wang, and Peijiang Liu, Face Relighting Based on Multi-Spectral Quotient Image and Illumination Tensorfaces, Asian Conference on Computer Vision (ACCV), pages 108--117, 2009. [pdf] [bib]
  10. Ming Shao and Yunhong Wang, Joint Features for Face Recognition under Variable Illuminations, International Conference on Image and Graphics (ICIG), pages 922--927, 2009. (Oral Presentation) [pdf] [bib]
  11. Ming Shao and Yunhong Wang, Recovering Facial Intrinsic Images from a Single Input, International Conference on Intelligent Computing (ICIC), pages 82--91, 2009. (Oral Presentation) [pdf] [bib]

PhD Students:

Master Students (Thesis):

  • Josue N Rivera Valdez, Fall 2019 -- now
  • Harshitha Srinivas Rao, Summer 2020 -- now

Undergraduate Students (Research Assistant):

  • David O Atunlute, Summer 2020 -- now

Visiting Scholar:

  • Chun Wei Seah (2019 -- 2020)

Tutorial

  1. "Multi-view Data Analytics" at IEEE BigData 2018, IEEE CVPR 2018, IJCAI 2020
  2. "Visual Kinship Understanding" at IEEE CVPR 2018, ACM-MM 2018, FG 2019

Program Chair

  1. 3rd Big Data Transfer Learning Workshop (BDTL) in Conjunction with IEEE Big Data 2018
  2. The 8th IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG2018) in
    conjunction with IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018)
  3. Faces in Multimedia Workshop in conjunction with IEEE International Conference on Multimedia and Expo
    (ICME 2018)
  4. 2nd Recognizing Families In the Wild (FIW) Data Challenge Workshop in conjunction with IEEE Conference
    on Automatic Face and Gesture Recognition (FG 2018)
  5. 2nd International Workshop on Big Data Transfer Learning (BDTL) in conjunction with 2017 IEEE
    International Conference on Big Data (BigData 2017)
  6. 7th IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG2017) in conjunction
    with International Conference on Computer Vision (ICCV 2017)
  7. 1st Recognizing Families In the Wild (FIW) Data Challenge Workshop in conjunction with ACM Multimedia
    Conference (ACM-MM 2017)
  8. Workshop on Textual Customer Feedback Mining and Transfer Learning in conjunction with 2016 IEEE
    International Conference on Big Data (BigData 2016)

Senior Program Committee

  1. The AAAI Conference on Artificial Intelligence (AAAI), 2019

Program Committee Member

  1. 2nd International Workshop on Compact and Efficient Feature Representation and Learning in Computer Vision, 2018
  2. IEEE International Conference on Data Mining (ICDM), 2018
  3. IEEE International Conference on Multimedia Information Retrieval and Processing (MIPR), 2018
  4. Affective Computing and Intelligent Interaction (ACII), 2017
  5. International Joint Conference on Artificial Intelligence (IJCAI), 2017
  6. IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2017, 2018
  7. The AAAI Conference on Artificial Intelligence (AAAI), 2017, 2018
  8. IEEE International Conference on Machine Learning and Applications (ICMLA), 2016, 2017
  9. The 6th IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG2015) in Conjunction with
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015)
  10. The 6th International Workshop on Video Event Categorization, Tagging and Retrieval towards Big Data, in
    conjunction with European Conference on Computer Vision (ECCV2014)

Publicity Chair

  1. IEEE Workshop on Analysis and Modeling of Faces and Gestures in Conjunction with CVPR2013 (AMFG2013)

Conference Reviewer/External Reviewer

Reviewers for: ICMR, ACPR, IEEE BigData, ICDM, ACCV, ICCV, CVPR, ECCV, ACM-MM, SDM, FG, BMVC, WACV, AMFG, ICASSP, ICME, ICPR, AVSS

Journal Reviewer

Reviewers for: ACM-TIST, ACM-TKDD, IJMIR, IEEE-TPAMI, IEEE-TNNLS, IEEE-TIP, IEEE-TKDE, IEEE-TCSVT, IEEE-TAFFC, IEEE-TIFS, IEEE-TMM, IEEE-TBioCAS, IEEE-TETCI, PR, Information Sciences, PRL, Neurocomputing, IJPRAI, JMM, JEI, JVCI, IVC, MVAP

  • CIS 431 - Human and Computer Interaction, Fall 2016
  • CIS 599 - CIS Graduate Seminar, 2016-2019
  • CIS 530 - Advanced Data Mining, Spring 2017, Fall 2019, 2020
  • CIS 465 - Topics in Computer Vision, Fall 2017, Spring 2021
  • CIS 280 - Software Specification and Design, Spring 2018, 2019
  • CIS 550 - Advanced Machine Learning, Fall 2018, Spring 2020, Fall 2021
  • CIS 360 - Algorithms & Data Structures, Fall 2018, 2019
  • CIS 361 - Models of Computation, Spring 2019-2021