Document Type: Original Articles
School of Mechanical Engineering, Shiraz University
Background: Sit-to-stand motion is a frequent and challenging task in daily-life activities, especially for elderly and disabled people. The central nervous system uses several strategies for the sit-to-stand movement, and many studies have been conducted to understand the underlying basis of the optimal approach. Reinforcement learning (RL) is a suitable method for modeling the control strategies that occur in the neuro-musculoskeletal system.
Methods: In this paper, a dynamic model of human sit-to-stand motion was derived, and kinematic data of a healthy subject performing this task were extracted. An optimal control problem was formulated with a minimum-energy criterion, and the Q-learning method was used to find the optimal joint moments during the sit-to-stand movement.
Results: The simulation results were compared with the experimental data. The simulated lower-extremity joint angles tracked the actual human angles extracted from the experiments, and the joint moments were also reproduced with satisfactory precision by the proposed approach.
Conclusion: An RL-based algorithm was used to model human sit-to-stand motion, in which the model explores the state space with a Markov-based approach and finds the best actions (joint moments) at each state (posture). With this approach, the model successfully performs the task while consuming minimum energy; this was achieved by updating the algorithm in every trial using the Q-learning method.
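The Q-learning scheme outlined in the Conclusion can be illustrated with a minimal tabular sketch: discretized postures as states, discretized joint-moment levels as actions, and a reward that penalizes energy use. All sizes, the placeholder dynamics, and the reward constants below are illustrative assumptions, not the paper's musculoskeletal model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 20          # discretized postures from seated (0) to standing
N_ACTIONS = 5          # discretized joint-moment magnitudes 1..5
GOAL = N_STATES - 1    # upright standing posture
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Placeholder dynamics: a larger joint moment advances the posture
    faster but costs more energy (reward = -energy, plus a goal bonus)."""
    moment = action + 1
    next_state = min(state + moment, GOAL)
    energy = float(moment) ** 2                      # crude energy proxy
    reward = -energy + (100.0 if next_state == GOAL else 0.0)
    return next_state, reward

for episode in range(500):
    s = 0                                            # start seated
    while s != GOAL:
        # epsilon-greedy exploration over joint-moment actions
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2
```

After training, the greedy policy `Q[s].argmax()` at each posture gives the moment sequence the table has learned to prefer under the energy penalty, in the same spirit as the trial-by-trial updates described above.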