Multi Layered Multi Task Marker Based Interaction in Information Rich Virtual Environments.

Authors

I. Ur Rehman, S. Ullah, and D. Khan

DOI:

https://doi.org/10.9781/ijimai.2020.11.002

Keywords:

Augmented Reality, Virtual Reality, Human-Computer Interaction (HCI)

Abstract

Simple and cheap interaction plays a key role in the operation and exploration of any Virtual Environment (VE). In this paper, we propose an interaction technique that provides two different modes of interaction (information and control) with complex objects in a simple and computationally cheap way. The interaction is based on the use of multiple embedded markers in a specialized manner. The proposed marker works as an interaction peripheral, much like a touch pad, and can perform any type of interaction in a 3D VE. It is used not only for interaction with Augmented Reality (AR), but also with Mixed Reality. A biological virtual learning application was developed for evaluation and experimentation. We conducted our experiments in two phases. First, we compared a simple VE with the proposed layered VE. Second, a comparative study was conducted between the proposed marker, a simple layered marker, and multiple single markers. We found that the proposed marker improved learning, eased interaction, and required comparatively less task execution time. The results also showed improved learning for the layered VE compared to the simple VE.
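The two interaction modes described above (information and control) can be illustrated with a minimal dispatch sketch: an outer marker identifies an object and yields information about it, while embedded sub-markers act like regions of a touch pad that trigger control actions. All names, IDs, and the `Detection` structure below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the layered-marker dispatch idea.
# Layer 0 = outer (object) marker -> information task.
# Layer 1 = embedded sub-marker  -> control task (touch-pad-like region).
# All IDs and labels here are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Detection:
    marker_id: int  # ID decoded from the fiducial marker
    layer: int      # 0 = outer marker, 1 = embedded sub-marker


# Assumed lookup tables for the two interaction modes.
OBJECT_INFO = {10: "heart: pumps blood through the body"}
CONTROL_ACTIONS = {1: "rotate", 2: "zoom", 3: "select"}


def dispatch(detections):
    """Map one frame's marker detections to information and control events."""
    events = []
    for d in detections:
        if d.layer == 0 and d.marker_id in OBJECT_INFO:
            events.append(("info", OBJECT_INFO[d.marker_id]))
        elif d.layer == 1 and d.marker_id in CONTROL_ACTIONS:
            events.append(("control", CONTROL_ACTIONS[d.marker_id]))
    return events


frame = [Detection(10, 0), Detection(2, 1)]
print(dispatch(frame))
# [('info', 'heart: pumps blood through the body'), ('control', 'zoom')]
```

In a real system the `Detection` list would come from a fiducial tracking library (e.g. an ARToolKit-style detector, as referenced below), with the layer inferred from which marker region the decoded ID belongs to.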


References

[1] I. U. Rehman, S. Ullah, and M. Raees, “Two Hand Gesture Based 3D Navigation in Virtual Environments,” International Journal of Interactive Multimedia and Artificial Intelligence, vol. 5, pp. 128-140, 2019.

[2] D. A. Bowman, L. F. Hodges, and J. Bolter, “The virtual venue: User-computer interaction in information-rich virtual environments,” Presence, vol. 7, pp. 478-493, 1998.

[3] J. Chen, “Effective interaction techniques in information-rich virtual environments,” Proceedings of the Young VR, 2003.

[4] N. F. Polys, D. A. Bowman, and C. North, “The role of depth and gestalt cues in information-rich virtual environments,” International journal of human-computer studies, vol. 69, pp. 30-51, 2011.

[5] C. Ware, Information visualization: perception for design: Elsevier, 2012.

[6] D. A. Bowman, J. Chen, C. A. Wingrave, J. F. Lucas, A. Ray, N. F. Polys, et al., “New directions in 3D user interfaces,” IJVR, vol. 5, pp. 3-14, 2006.

[7] A. Munro, R. Breaux, J. Patrey, and B. Sheldon, “Cognitive aspects of virtual environments design,” Handbook of virtual environments: Design, implementation, and applications, pp. 415-434, 2002.

[8] M. C. Salzman, C. Dede, R. B. Loftin, and J. Chen, “A model for understanding how virtual reality aids complex conceptual learning,” Presence: Teleoperators & Virtual Environments, vol. 8, pp. 293-316, 1999.

[9] R. E. Mayer, “Cognitive theory and the design of multimedia instruction: an example of the two‐way street between cognition and instruction,” New directions for teaching and learning, vol. 2002, pp. 55-71, 2002.

[10] J. Saiki, “Spatiotemporal characteristics of dynamic feature binding in visual working memory,” Vision Research, vol. 43, pp. 2107-2123, 2003.

[11] H. Tardieu and V. Gyselinck, Working memory constraints in the integration and comprehension of information in a multimedia context: na, 2003.

[12] E. K. Vogel, G. F. Woodman, and S. J. Luck, “Storage of features, conjunctions, and objects in visual working memory,” Journal of Experimental Psychology: Human Perception and Performance, vol. 27, p. 92, 2001.

[13] S. Card, J. Mackinlay, and B. Shneiderman, “Information visualization,” Human-computer interaction: Design issues, solutions, and applications, vol. 181, 2009.

[14] M. Peter and M. Keller, “Visual Cues Practical Data Visualization,” IEEE Computer Society, Los Alamitos, 1993.

[15] D. A. Bowman, L. F. Hodges, D. Allison, and J. Wineman, “The educational value of an information-rich virtual environment,” Presence: Teleoperators & Virtual Environments, vol. 8, pp. 317-331, 1999.

[16] D. A. Bowman, C. North, J. Chen, N. F. Polys, P. S. Pyla, and U. Yilmaz, “Information-rich virtual environments: theory, tools, and research agenda,” in Proceedings of the ACM symposium on Virtual reality software and technology, 2003, pp. 81-90.

[17] N. F. Polys, D. A. Bowman, and C. North, “Information-rich virtual environments: challenges and outlook,” in NASA Virtual Iron Bird Workshop, 2004.

[18] S. Ressler, “A Web-based 3D glossary for anthropometric landmarks,” in Proceedings of HCI International, 2001, pp. 5-10.

[19] I. Rehman and S. Ullah, “The effect of Constraint based Multi-modal Virtual Assembly on Student’s Learning,” Sindh University Research Journal-SURJ (Science Series), vol. 48, 2016.

[20] I. U. Rehman, S. Ullah, and I. Rabbi, “The effect of semantic multi-modal aids using guided virtual assembly environment,” in 2014 International Conference on Open Source Systems & Technologies, 2014, pp. 87-92.

[21] I. U. Rehman, S. Ullah, and I. Rabbi, “Measuring the student’s success rate using a constraint based multi-modal virtual assembly environment,” in International Conference on Augmented and Virtual Reality, 2014, pp. 53-64.

[22] D. Khan, I. Rehman, S. Ullah, W. Ahmad, Z. Cheng, G. Jabeen, and H. Kato, “A Low-Cost Interactive Writing Board for Primary Education using Distinct Augmented Reality Markers,” Sustainability, vol. 11, no. 20, p. 5720, 2019.

[23] K. Tateno, I. Kitahara, and Y. Ohta, “A nested marker for augmented reality,” in 2007 IEEE Virtual Reality Conference, 2007, pp. 259-262.

[24] I. Rabbi and S. Ullah, “Extending the tracking distance of fiducial markers for large indoor augmented reality applications,” Advances in Electrical and Computer Engineering, vol. 15, pp. 59-65, 2015.

[25] T. Höllerer, S. Feiner, T. Terauchi, G. Rashid, and D. Hallaway, “Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system,” Computers & Graphics, vol. 23, pp. 779-785, 1999.

[26] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa, “The go-go interaction technique: non-linear mapping for direct manipulation in VR,” in ACM Symposium on User Interface Software and Technology, 1996, pp. 79-80.

[27] D. A. Bowman and L. F. Hodges, “An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments,” SI3D, vol. 97, pp. 35-38, 1997.

[28] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, “Virtual object manipulation on a table-top AR environment,” in Proceedings IEEE and ACM International Symposium on Augmented Reality (ISAR 2000), 2000, pp. 111-119.

[29] M. Fiala, “Comparing ARTag and ARToolkit Plus fiducial marker systems,” in IEEE International Workshop on Haptic Audio Visual Environments and their Applications, 2005, p. 6.

[30] K. Rainio and A. Boyer, “ALVAR–A Library for Virtual and Augmented Reality User’s Manual,” VTT Augmented Reality Team, 2013.

[31] “ARToolKit: Open Source AR SDK,” ARToolKit.org.

[32] J. Jun, Q. Yue, and Z. Qing, “An extended marker-based tracking system for augmented reality,” in 2010 Second International Conference on Modeling, Simulation and Visualization Methods, 2010, pp. 94-97.

[33] D. Khan, S. Ullah, and I. Rabbi, “Factors affecting the design and tracking of ARToolKit markers,” Computer Standards & Interfaces, vol. 41, pp. 56-66, 2015.

[34] D. Khan, S. Ullah, D.-M. Yan, I. Rabbi, P. Richard, T. Hoang, et al., “Robust tracking through the design of high quality fiducial markers: an optimization tool for ARToolKit,” IEEE access, vol. 6, pp. 22421-22433, 2018.

[35] M. Azhar, S. Ullah, I. U. Rahman, and S. Otmane, “Multi-Layered Hierarchical Bounding Box based interaction in virtual environments,” in Proceedings of the 2015 Virtual Reality International Conference, 2015, p. 14.

[36] J. Brooke, “SUS: A quick and dirty usability scale,” Usability Evaluation in Industry, vol. 189, no. 194, pp. 4-7, 1996.

Published

2020-12-01

How to Cite

Ur Rehman, I., Ullah, S., and Khan, D. (2020). Multi Layered Multi Task Marker Based Interaction in Information Rich Virtual Environments. International Journal of Interactive Multimedia and Artificial Intelligence, 6(4), 57–67. https://doi.org/10.9781/ijimai.2020.11.002