| Peer-Reviewed

Human Activities Detection for Patient Convalescence

Published in Innovation (Volume 2, Issue 4)
Received: 28 October 2021    Accepted: 15 November 2021    Published: 23 November 2021
Abstract

A simple activity recognition method can allow a single person to monitor an entire environment, helping to ensure safety and confidentiality while keeping maintenance costs low and accuracy high. Such a real-time video surveillance system can be deployed for patients and the elderly in hospitals, care homes, and airports, covering numerous human activities. For fast and accurate analysis of complex human behavior, we chose the YOLOv4 (You Only Look Once) algorithm, the latest and fastest of the YOLO family; it highlights each detected action with a bounding box. We collected 4,674 distinct samples in a hospital setting, performed by the authors under varying conditions. The study divides human activity into three patterns: standing, sitting, and walking. The resulting model can detect and recognize multiple patients and various other human activities. It achieves an average accuracy of 94.6667% when recognizing activities in images and about 63.00% when recognizing activities in video clips. In our experiments, YOLOv4 performed better than the TensorFlow and OpenPose platforms. The article proposes outcomes for patients in early recovery based on the investigation and analysis of human activities.
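The abstract describes reducing per-frame, YOLO-style bounding-box detections over three activity classes to a clip-level label. A minimal sketch of how that reduction could look is below; the class list, confidence threshold, detection tuple layout, and the majority-vote step are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter

# Hypothetical class labels matching the three activity patterns in the study.
ACTIVITY_CLASSES = ["standing", "sitting", "walking"]

def best_detection(frame_detections, conf_threshold=0.5):
    """Pick the highest-confidence detection in one frame.

    frame_detections: list of (class_id, confidence, bbox) tuples, where
    bbox is (x, y, w, h) as produced by a YOLO-style detector.
    Returns the activity label, or None if nothing clears the threshold.
    """
    candidates = [d for d in frame_detections if d[1] >= conf_threshold]
    if not candidates:
        return None
    class_id, _, _ = max(candidates, key=lambda d: d[1])
    return ACTIVITY_CLASSES[class_id]

def clip_activity(per_frame_detections, conf_threshold=0.5):
    """Majority vote over per-frame labels to get one label per video clip."""
    labels = [best_detection(f, conf_threshold) for f in per_frame_detections]
    labels = [lbl for lbl in labels if lbl is not None]
    if not labels:
        return None
    return Counter(labels).most_common(1)[0][0]

# Example: three frames of a short clip.
frames = [
    [(2, 0.91, (10, 20, 80, 160))],                                 # walking
    [(2, 0.88, (12, 21, 80, 158))],                                 # walking
    [(1, 0.60, (11, 60, 90, 100)), (2, 0.55, (12, 22, 80, 158))],   # sitting wins this frame
]
print(clip_activity(frames))  # prints "walking"
```

Aggregating noisy per-frame labels this way is one plausible reason image-level accuracy (94.6667%) can exceed video-clip accuracy (63.00%): a single clip label is wrong whenever misdetections dominate its frames.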

DOI 10.11648/j.innov.20210204.15
Page(s) 84-91
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Human Activity, Image, OpenPose, Video Clips, TensorFlow, YOLOv4

References
[1] R. Devakunchari, "Analysis on big data over the years", International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014.
[2] Mike Jude, "Worldwide Video Surveillance Camera Forecast, 2020–2025", International Data Corporation (IDC), July 2021.
[3] Zicong Jiang et al., "Real-time object detection method based on improved YOLOv4-tiny", Computer Vision and Pattern Recognition, Cornell University, 2 Dec 2020.
[4] Vrigkas M. et al., "A Review of Human Activity Recognition Methods", Frontiers in Robotics and AI, November 2015.
[5] Jamie Shotton et al., "Real-Time Human Pose Recognition in Parts from Single Depth Images", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011.
[6] Zeng M. et al., "Convolutional Neural Networks for Human Activity Recognition using Mobile Sensors", 6th International Conference on Mobile Computing, Applications and Services, IEEE, November 2014, pp. 197-205.
[7] Raptis M. and Sigal L., "Poselet Key-framing: A Model for Human Activity Recognition", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2650-2657.
[8] Ong Chin Ann and Lau Bee Theng, "Human Activity Recognition: A Review", IEEE International Conference on Control System, Computing and Engineering, 28-30 November 2014, Penang, Malaysia.
[9] Daniele Ravi et al., "Deep Learning for Human Activity Recognition: A Resource Efficient Implementation on Low-Power Devices", IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN), 2016.
[10] Faeghe F. et al., "Human Activity Recognition: From Sensors to Applications", International Conference on Omni-layer Intelligent Systems (COINS), 2020.
[11] Ankita et al., "An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones", Sensors, MDPI, 2021.
[12] Shujuan Wang and Xiaoke Zhu, "A Hybrid Deep Neural Networks for Sensor-based Human Activity Recognition", 12th International Conference on Advanced Computational Intelligence (ICACI), 2020.
Cite This Article
  • APA Style

    Shammir Hossain, Yeasir Arafat, Shoyaib Mahmud, Dipongker Sen, Jakia Rawnak Jahan, et al. (2021). Human Activities Detection for Patient Convalescence. Innovation, 2(4), 84-91. https://doi.org/10.11648/j.innov.20210204.15


    ACS Style

    Shammir Hossain; Yeasir Arafat; Shoyaib Mahmud; Dipongker Sen; Jakia Rawnak Jahan, et al. Human Activities Detection for Patient Convalescence. Innovation. 2021, 2(4), 84-91. doi: 10.11648/j.innov.20210204.15


    AMA Style

    Shammir Hossain, Yeasir Arafat, Shoyaib Mahmud, Dipongker Sen, Jakia Rawnak Jahan, et al. Human Activities Detection for Patient Convalescence. Innovation. 2021;2(4):84-91. doi: 10.11648/j.innov.20210204.15


  • @article{10.11648/j.innov.20210204.15,
      author = {Shammir Hossain and Yeasir Arafat and Shoyaib Mahmud and Dipongker Sen and Jakia Rawnak Jahan and Ahmed Nur-A-Jalal and Ohidujjaman},
      title = {Human Activities Detection for Patient Convalescence},
      journal = {Innovation},
      volume = {2},
      number = {4},
      pages = {84-91},
      doi = {10.11648/j.innov.20210204.15},
      url = {https://doi.org/10.11648/j.innov.20210204.15},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.innov.20210204.15},
      abstract = {A simple activity recognition method can allow a single person to monitor an entire environment, helping to ensure safety and confidentiality while keeping maintenance costs low and accuracy high. Such a real-time video surveillance system can be deployed for patients and the elderly in hospitals, care homes, and airports, covering numerous human activities. For fast and accurate analysis of complex human behavior, we chose the YOLOv4 (You Only Look Once) algorithm, the latest and fastest of the YOLO family; it highlights each detected action with a bounding box. We collected 4,674 distinct samples in a hospital setting, performed by the authors under varying conditions. The study divides human activity into three patterns: standing, sitting, and walking. The resulting model can detect and recognize multiple patients and various other human activities. It achieves an average accuracy of 94.6667% when recognizing activities in images and about 63.00% when recognizing activities in video clips. In our experiments, YOLOv4 performed better than the TensorFlow and OpenPose platforms. The article proposes outcomes for patients in early recovery based on the investigation and analysis of human activities.},
      year = {2021}
    }
    


  • TY  - JOUR
    T1  - Human Activities Detection for Patient Convalescence
    AU  - Shammir Hossain
    AU  - Yeasir Arafat
    AU  - Shoyaib Mahmud
    AU  - Dipongker Sen
    AU  - Jakia Rawnak Jahan
    AU  - Ahmed Nur-A-Jalal
    AU  - Ohidujjaman
    Y1  - 2021/11/23
    PY  - 2021
    N1  - https://doi.org/10.11648/j.innov.20210204.15
    DO  - 10.11648/j.innov.20210204.15
    T2  - Innovation
    JF  - Innovation
    JO  - Innovation
    SP  - 84
    EP  - 91
    PB  - Science Publishing Group
    SN  - 2994-7138
    UR  - https://doi.org/10.11648/j.innov.20210204.15
    AB  - A simple activity recognition method can allow a single person to monitor an entire environment, helping to ensure safety and confidentiality while keeping maintenance costs low and accuracy high. Such a real-time video surveillance system can be deployed for patients and the elderly in hospitals, care homes, and airports, covering numerous human activities. For fast and accurate analysis of complex human behavior, we chose the YOLOv4 (You Only Look Once) algorithm, the latest and fastest of the YOLO family; it highlights each detected action with a bounding box. We collected 4,674 distinct samples in a hospital setting, performed by the authors under varying conditions. The study divides human activity into three patterns: standing, sitting, and walking. The resulting model can detect and recognize multiple patients and various other human activities. It achieves an average accuracy of 94.6667% when recognizing activities in images and about 63.00% when recognizing activities in video clips. In our experiments, YOLOv4 performed better than the TensorFlow and OpenPose platforms. The article proposes outcomes for patients in early recovery based on the investigation and analysis of human activities.
    VL  - 2
    IS  - 4
    ER  - 


Author Information
  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh

  • Department of Information Technology, Asian University for Women, Chattogram, Bangladesh

  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh

  • Department of Information and Communication Technology, Mahbubur Rahman Mollah College, Dhaka, Bangladesh

  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh

  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh

  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
