Comparison on Cloud Image Classification for Thrash Collecting LEGO Mindstorms EV3 Robot
Abstract
The world today faces a severe waste management crisis driven by rapid economic growth, congestion, poor urban planning, and political factors. Many attempts to address this waste management problem have not worked as planned. In this high-technology era, humanoid robots have proven helpful in supporting everyday human life, and industry is moving toward automation to increase productivity while improving the quality of life of local communities. This paper therefore proposes a Thrash Collecting Robot (TCR) to provide automatic control of thrash collection. The TCR, built on the LEGO Mindstorms EV3 platform, can distinguish between static and dynamic barriers and can move according to the programming that has been created. The TCR is composed of sensors selected according to different requirements in order to detect dynamic barriers.
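The abstract does not detail how the sensors separate static from dynamic barriers. As an illustrative sketch only (the function name and the tolerance threshold are hypothetical, not taken from the paper), one simple approach is to compare successive ultrasonic distance readings taken while the robot is stationary: a static barrier yields roughly constant distances, while a dynamic barrier changes them.

```python
def classify_barrier(readings_cm, tolerance_cm=2.0):
    """Classify a barrier as 'static' or 'dynamic' from successive
    ultrasonic distance readings (in cm) taken while the robot is
    stationary. A static obstacle produces near-constant readings;
    a moving obstacle (e.g. a passing person) produces changing ones.
    """
    if len(readings_cm) < 2:
        raise ValueError("need at least two readings")
    # Largest change between consecutive readings.
    max_delta = max(abs(b - a) for a, b in zip(readings_cm, readings_cm[1:]))
    return "dynamic" if max_delta > tolerance_cm else "static"
```

On the actual EV3 hardware, the readings would come from the robot's ultrasonic sensor polled at a fixed interval; the classification logic itself is hardware-independent.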
The TCR is a type of cloud robot that applies image processing techniques to identify the type of waste that has been collected. Image processing in the TCR is implemented through cloud Representational State Transfer (REST) APIs, applied here via the Google Cloud API and Sighthound. These cloud services use machine vision techniques to identify and classify thrash images as plastic, metal, or paper. Experimental results show that Sighthound gives more accurate results than Google Cloud in classifying thrash types.
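To make the REST-based classification concrete, the sketch below builds a request body in the shape used by Google Cloud Vision's `images:annotate` endpoint and maps returned labels to the paper's three thrash categories. The label-to-category table and function names are hypothetical illustrations; the paper does not specify the mapping the authors used.

```python
import base64
import json

# Hypothetical mapping from cloud label strings to the paper's three
# thrash categories (the authors' actual mapping is not given).
LABEL_TO_CATEGORY = {
    "plastic bottle": "plastic",
    "plastic": "plastic",
    "tin can": "metal",
    "aluminium can": "metal",
    "metal": "metal",
    "paper": "paper",
    "cardboard": "paper",
}

def build_vision_request(image_bytes, max_results=5):
    """Build a JSON body for Google Cloud Vision's images:annotate
    endpoint (POST https://vision.googleapis.com/v1/images:annotate),
    requesting label detection on a base64-encoded image."""
    return json.dumps({
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    })

def classify_thrash(label_annotations):
    """Map label annotations (a list of {'description', 'score'} dicts,
    as returned in the API's labelAnnotations field) to plastic, metal,
    or paper, taking the highest-scoring label found in the mapping."""
    for ann in sorted(label_annotations, key=lambda a: a["score"], reverse=True):
        category = LABEL_TO_CATEGORY.get(ann["description"].lower())
        if category:
            return category
    return "unknown"
```

Sighthound exposes its own cloud REST endpoints with a different request shape, but the overall pattern (POST an encoded image, parse the returned labels into a category) is the same.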
References
Z. Othman and A. Abdullah, “An Adaptive
Threshold Based On Multiple Resolution Levels
for Canny Edge Detection,” in IRICT 2017:
Recent Trends in Information and Communication
Technology, 2017, pp. 316–323.
Z. Othman, A. Abdullah, and A. S. Prabuwono,
“Supervised Growing Approach for Region of
Interest Detection in Iris Localisation,” Adv. Sci.
Lett., vol. 24, no. 2, pp. 1005–1011, 2018.
M. K. Nurul Nadirah, S. A. Sharifah Sakinah, and S. Abdul Samad, “Improved fuzzy-PID controller in following complicated path for LEGO Mindstorms NXT,” in Proceedings of Mechanical Engineering Day 2017, 2017, pp. 474–
A. Mohammed, L. Wang, and R. X. Gao,
“Integrated image processing and path planning
for robotic sketching,” Procedia CIRP, vol. 12, pp.
–204, 2013.
F. Umam, “Optimalization of Detection and
Navigation Smart Bin Robot Using Camera,” Adv.
Sci. Lett., vol. 23, no. 12, pp. 12432–12436, 2017.
J. T. C. Tan, K. Okuno, and T. Inamura,
“Integration of work operation and embodied
multimodal interaction in task modeling for
collaborative robot development,” 4th Annu. IEEE
Int. Conf. Cyber Technol. Autom. Control Intell. Syst.
IEEE-CYBER 2014, pp. 615–618, 2014.
P. Kopacek, “Development Trends in Robotics,” IFAC-PapersOnLine, vol. 49, no. 29, pp. 36–41, 2016.
E. Guizzo, “Robots with their heads in the clouds,” IEEE Spectr., vol. 48, no. 3, pp. 17–18, 2011.
I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S.
Mokhtar, A. Gani, and S. Ullah Khan, “The rise of
‘big data’ on cloud computing: Review and open
research issues,” Inf. Syst., vol. 47, pp. 98–115, 2015.
A. G. Del Molino, B. Mandal, J. Lin, J. H. Lim,
V. Subbaraju, and V. Chandrasekhar, “VC-I2R@
ImageCLEF2017: Ensemble of deep learned
features for lifelog video summarization,” CEUR
Workshop Proc., vol. 1866, 2017.
S. Z. Masood, G. Shu, A. Dehghan, and E. G.
Ortiz, “License Plate Detection and Recognition
Using Deeply Learned Convolutional Neural
Networks,” 2017.
A. Dehghan, E. G. Ortiz, G. Shu, and S. Z.
Masood, “DAGER: Deep Age, Gender and
Emotion Recognition Using Convolutional
Neural Network,” 2017.
“Google Cloud Platform.” [Online]. Available:
https://cloud.google.com/apis/docs/overview.
“Sighthound.” [Online]. Available: https://www.sighthound.com/technology/.
Z. Othman, N. A. Abdullah, C. K. Yee, F. Farina,
W. Shahrin, and S. S. Syed, “Image Processing
Technique using Google Cloud API and
Sighthound for Lego Mindstorms EV3 Robot,”
Robot. Autom. Eng. J., vol. 2, no. 3, pp. 2–4, 2018.
License
The articles may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Authors alone are responsible for the contents of their articles. The journal owns the copyright of the articles. However, within the framework of Creative Commons (CC) copyright license, authors can use their published works in non-profit environments and share them on their own platforms on the internet. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of the research material. All authors are requested to disclose any actual or potential conflict of interest including any financial, personal or other relationships with other people or organizations regarding the submitted work.
The author(s) guarantee that the submitted article is their original research. All authors participating in this study assume public responsibility and confirm that the article has not been submitted to another journal. The material in the article does not violate the existing copyright or intellectual property rights of any person or organization, and the article meets the ethical standards applicable to the research discipline.
The authors cannot withdraw an article they have uploaded to the IJHaTI journal and upload it to another journal without the approval of the journal editor.
Authors are responsible for obtaining written permission to include in their articles any images or artwork for which they do not hold copyright, or to adapt any such images or artwork for inclusion in their articles.