Abstract:
Analyzing motion patterns in traffic videos can directly yield high-level descriptions of video content. In this paper, an unsupervised method is proposed to automatically discover the motion patterns occurring in traffic video scenes. Based on optical flow features extracted from video clips, an improved Group Sparse Topical Coding (GSTC) framework is applied to learn semantic motion patterns. Each video clip can then be sparsely represented as a weighted sum of the learned patterns, a representation that can further be employed in a wide range of applications. Compared to the original GSTC, the proposed improved version selects only a small number of relevant words for each topic and hence provides a more compact representation of topic-word relationships. Moreover, to address large-scale video analysis problems, we present an online algorithm for the improved GSTC that can handle not only large video corpora but also dynamic video streams. Experimental results show that the proposed approach finds motion patterns accurately and gives a meaningful representation of the video.
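The core idea of representing a clip as a sparse weighted sum of learned motion patterns can be illustrated with a toy sketch. The dictionary of patterns, the clip's optical-flow word histogram, and the projected-gradient solver below are all illustrative assumptions, not the paper's actual GSTC optimizer; they only show the form of the sparse reconstruction.

```python
import numpy as np

# Toy setup (hypothetical sizes): V optical-flow "words", K learned motion patterns.
rng = np.random.default_rng(0)
V, K = 20, 3
patterns = rng.random((K, V))                   # rows: pattern distributions over words
patterns /= patterns.sum(axis=1, keepdims=True)  # normalize each pattern

# A clip whose motion mixes two of the patterns.
clip = 0.7 * patterns[0] + 0.3 * patterns[2]

# Sparse nonnegative coding: minimize ||clip - w @ patterns||^2 / 2 + lam * sum(w)
# via projected gradient descent (a simple stand-in for the GSTC solver).
w = np.zeros(K)
lam, step = 1e-3, 0.5
for _ in range(500):
    grad = (w @ patterns - clip) @ patterns.T + lam
    w = np.maximum(w - step * grad, 0.0)        # project onto the nonnegative orthant

print(np.round(w, 2))  # weights concentrate on patterns 0 and 2
```

The l1 penalty `lam` drives irrelevant pattern weights to zero, which is what makes the per-clip representation sparse and interpretable.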