Simple MoCap System for Home Usage
DOI: https://doi.org/10.9781/ijimai.2017.4410

Keywords: MoCap, Multimedia, Animations, LED Sensors

Abstract
Many MoCap systems exist today. Generating 3D facial animation of characters is currently realized using motion capture data (MoCap data) obtained by tracking facial markers on an actor or actress. This is generally a professional solution that is both sophisticated and costly. This paper presents an inexpensive alternative: a new, easy-to-use system for home usage with which character animation can be created. In its implementation, attention was paid to eliminating errors found in previous solutions. The authors describe a method for motion-capturing characters on a treadmill, as well as their own Java application that processes the video for further use in Cinema 4D. The paper describes the implementation of this sensing technology so that the animated character authentically imitates human movement on a treadmill.
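The core of the video-processing step described above is locating the LED markers in each frame. The paper does not specify the detection algorithm; the following Java sketch assumes a simple brightness-thresholding approach on a grayscale frame, with the class and method names (`MarkerTracker`, `findMarker`) being hypothetical illustrations rather than the authors' actual code.

```java
/**
 * Hypothetical sketch: locating a bright LED marker in one video frame.
 * A frame is represented as a 2D array of grayscale intensities (0-255);
 * the marker is assumed to be the brightest pixel at or above a threshold.
 */
public class MarkerTracker {

    /**
     * Scans the frame and returns the {x, y} position of the brightest
     * pixel whose intensity is >= threshold, or null if no pixel qualifies.
     */
    public static int[] findMarker(int[][] frame, int threshold) {
        int bestX = -1, bestY = -1;
        int best = threshold - 1; // anything below threshold is ignored
        for (int y = 0; y < frame.length; y++) {
            for (int x = 0; x < frame[y].length; x++) {
                if (frame[y][x] > best) {
                    best = frame[y][x];
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return bestX < 0 ? null : new int[] { bestX, bestY };
    }
}
```

Running such a detector per frame yields a 2D trajectory for each marker, which can then be mapped onto a character rig in Cinema 4D; real footage would additionally need noise filtering and disambiguation between multiple markers.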