Scientific Program

Conference Series Ltd invites all participants from across the globe to attend the 5th International Conference on Automation and Robotics in Las Vegas, Nevada, USA.

Day 2:

Keynote Speaker: Magnus S Magnusson
Abstract:

This talk presents a self-similar pattern type called the T-pattern, a kind of statistical pseudo-fractal recurring with significant translation symmetry on a single discrete dimension. It now comes with a specialized detection (evolution) algorithm implemented as the software THEME™ for Windows (see patternvision.com), which has allowed the discovery of numerous and complex interaction patterns in many kinds of human and animal interactions as well as in neuronal interactions within living brains. T-patterns have also been detected in interactions between robots and humans and also seem characteristic of the structure of DNA and text. A definition of T-patterns is presented as well as the essentials of the current detection algorithms, including examples of detected T-patterns using the specially developed T-pattern diagrams. The T-pattern is now part of a larger set of pattern types and relations called the T-system, which will be briefly described, including examples of patterning detected with specially developed algorithms also implemented in THEME™. The potential importance of T-patterns is finally illustrated through a comparison of human mass societies with the mass societies of proteins within biological cells (sometimes called "Cell City"), where a striking self-similarity of organization, evolved over billions of years and based on self-similar T-patterns, extends from the nano to the human scale, yet appears suddenly among large-brained animals in humans only, based in part on massively copied, standardized T-patterned letter strings such as holy books and constitutions.
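To make the idea concrete, here is a minimal sketch of the critical-interval test at the heart of T-pattern detection: for two event types A and B, it asks whether B follows A within a given time window more often than expected by chance. This is an illustrative simplification in Python, not the THEME™ algorithm; the event times, the window and the uniform null model are assumptions.

```python
# Simplified critical-interval test in the spirit of T-pattern detection.
# Illustrative only: not the THEME algorithm; the uniform null model and the
# fixed window [d1, d2] are assumptions made for this sketch.
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_interval(a_times, b_times, d1, d2, horizon):
    """Count how many A events are followed by a B within [d1, d2], and the
    probability of seeing at least that many hits if B were uniform on [0, horizon]."""
    hits = sum(any(d1 <= b - a <= d2 for b in b_times) for a in a_times)
    p_hit = min(1.0, len(b_times) * (d2 - d1) / horizon)
    return hits, binom_sf(hits, len(a_times), p_hit)

# Toy event series: B tends to follow A by roughly 1-3 time units.
a_times = [2.0, 10.0, 25.0]
b_times = [3.5, 12.0, 27.0, 40.0]
print(critical_interval(a_times, b_times, 1.0, 3.0, horizon=50.0))  # (3, ~0.004)
```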

Biography:

Magnus S Magnusson is a Research Professor, Founder and Director of the Human Behavior Laboratory, University of Iceland. He completed his PhD in 1983 at the University of Copenhagen. He is the author of the T-pattern model and the detection software THEME™ (PatternVision.com), focused on the real-time organization of behavior. He has co-directed DNA analysis. He has authored numerous papers (>1,700 citations) and given talks and keynotes in ethology, neuroscience, mathematics, religion, proteomics and mass spectrometry. He was Deputy Director at the National Museum of Natural History, Paris, from 1983 to 1988, and has repeatedly held invited professorships at the Universities of Paris V, VIII and XIII.

Keynote Forum

Richard Satava

University of Washington, USA

Keynote: Future of healthcare: Surgical robotics and the fifth generation – non-invasive procedures

Time: 10:00-10:30

Biography:

Richard Satava is a Professor of Surgery at the University of Washington Medical Center, and Senior Science Advisor at the US Army Medical Research and Materiel Command at Fort Detrick, MD. His prior positions include Professor of Surgery at Yale University; a military appointment as Professor of Surgery (USUHS) in the Army Medical Corps, assigned to General Surgery at Walter Reed Army Medical Center; and Program Manager of Advanced Biomedical Technology at the Defense Advanced Research Projects Agency (DARPA).

Abstract:

Non-healthcare industries have used a wide spectrum of robotic and energy-based systems for virtually every purpose, from manufacturing to artistic creations, whereas only a small portion of these commercially available systems have been exploited by physicians. Although many of the technologies are large and sophisticated image-guided systems, numerous other technologies are small, hand-held portable systems. Thus, the operating room or procedure room of the future may well need to be reconfigured for some of the larger systems, enabling new forms of non-invasive therapies; on the other hand, many other time-honored procedures will be performed as outpatient or office procedures with small, hand-held devices. Although the military (DARPA) has demonstrated a prototype robotic surgical system for the battlefield, it has yet to complete development. An important area in robotics that has not been exploited by surgery is automation. The great majority (of the few existing) surgical robots are actually single location-based tele-operated systems, with virtually no automation. Even the initial success in true remote tele-surgery has been discontinued. There are opportunities in the area of motion control and feedback that could revolutionize surgical robotics, such as developing 'virtual fixtures' (constraints) or automating simple tasks like suturing. Surgical robotics is in its infancy. Even as this fourth revolution in surgery in 25 years (robotic surgery) is gaining in popularity, a much more disruptive change is beginning with the next revolution: Directed Energy Surgery. While surgeons have been investigating a few different types of energy for decades, including success with some forms such as lithotripsy, photonics, high-intensity focused ultrasound (HIFU), etc., these pioneering techniques are but the tip of the iceberg that heralds the transition to non-invasive surgery. Such systems are based upon what robotics and automation can bring – precision, speed and reliability – especially as surgery 'descends' into operating at the cellular and molecular level. Richard Feynman was right – there is 'room at the bottom'!
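As one concrete illustration of the 'virtual fixtures' mentioned above, the sketch below shows a common guidance-fixture formulation in which the commanded tool velocity is split into a component along a preferred direction and an off-axis component that is attenuated. It is a minimal sketch with assumed parameter names and a simple compliance factor, not the control law of any particular surgical robot.

```python
# Minimal sketch of a guidance "virtual fixture": the commanded tool velocity is
# projected onto a preferred direction, and motion off that direction is attenuated.
# Purely illustrative; parameter names and the compliance factor are assumptions.
import numpy as np

def apply_virtual_fixture(v_cmd, preferred_dir, compliance=0.1):
    """Attenuate the component of the commanded velocity that deviates from the
    preferred direction. compliance=0 gives a hard constraint, 1 gives free motion."""
    d = preferred_dir / np.linalg.norm(preferred_dir)
    v_along = np.dot(v_cmd, d) * d          # component along the fixture
    v_off = v_cmd - v_along                 # component that would violate it
    return v_along + compliance * v_off

# Example: the surgeon pushes diagonally, the fixture guides motion along the x-axis.
print(apply_virtual_fixture(np.array([1.0, 0.5, 0.2]), np.array([1.0, 0.0, 0.0])))
```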

Keynote Forum

Petra Perner

Institute of Computer Vision and Applied Computer Sciences, Germany

Keynote: Multi time-series mining for medical, engineering, and smart maintenance purposes in order to figure out critical system statuses

Time: 10:50-11:20

Biography:

Petra Perner is the Director of the Institute of Computer Vision and Applied Computer Sciences IBaI. She received her Diploma degree in Electrical Engineering and her PhD degree in Computer Science for work on data reduction methods for industrial robots with direct teach-in programming. Her habilitation thesis was about a methodology for the development of knowledge-based image-interpretation systems. She has been the Principal Investigator of various national and international research projects. She has received several research awards for her work and has been awarded three business awards for bringing intelligent image interpretation methods and data mining methods into business. Her research interests include image analysis and interpretation, machine learning, data mining, big data, image mining and case-based reasoning.

Abstract:

In many applications multiple time series of measurement parameters are taken. The aim is not to forecast how a single time series will evolve. The aim of this study was to figure out when a biological system, an engineering system, or a system under observation will go into a critical status that requires immediate action to preserve the system. This task requires different intelligent observations, from prediction to decision making, over multiple time parameters. Often the measurement data points are not equidistant; they are frequently sampled on different time intervals and have to be brought onto a common time interval by adequate interpolation methods. The status of the system in the past and how it will behave in the future also play an important role. This is not a single-point observation but a more complex consideration that needs to take the system status into account. We will show on different applications how such a task can be solved. We will review the state of the art of single time-signal prediction. We will show how the system-theory method has to be applied, demonstrate that it is necessary to take the system-theoretic view into account to solve the problem, no matter whether the object is biological, engineering, or maintenance-related, and finally show on different applications how we solved them with system-theory data mining methods.
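Since the abstract stresses that non-equidistant, differently sampled measurements must first be brought onto a common time interval, the following minimal sketch shows one straightforward way to do that with linear interpolation. The sensor names, sample times and one-second grid are invented for illustration; the author's systems may use other interpolation methods.

```python
# Minimal sketch: align several non-equidistant measurement series onto a common
# time grid by linear interpolation, as a preprocessing step before multi-parameter
# status analysis. The sensors and the one-second grid are invented examples.
import numpy as np

def resample_to_common_grid(series, t_start, t_end, step):
    """series: dict name -> (timestamps, values), each sampled irregularly.
    Returns the common grid and a dict of linearly interpolated values."""
    grid = np.arange(t_start, t_end + step, step)
    return grid, {name: np.interp(grid, t, v) for name, (t, v) in series.items()}

sensors = {
    "temperature": (np.array([0.0, 2.5, 7.0, 10.0]), np.array([20.1, 20.4, 21.0, 21.2])),
    "pressure":    (np.array([0.0, 4.0, 9.5]),       np.array([1.01, 1.05, 1.02])),
}
grid, aligned = resample_to_common_grid(sensors, 0.0, 10.0, 1.0)
print(grid)
print(aligned["temperature"])
```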

Keynote Speaker: David Moss
Biography:

David Moss is the President, CTO, and Co-founder of People Power Company. His talents span the full technology spectrum, including hardware design, embedded firmware, wireless sensor networks, cloud platforms, mobile apps, user experiences, product management, and business development. Prior to People Power, he worked at Bitfone Corporation as both a Java Developer and a Firmware Engineer, where he integrated Bitfone's revolutionary over-the-air firmware updates into tens of millions of Motorola phones. His open source contributions have been used by thousands of developers around the world.

Abstract:

We're in an exciting era of computing beyond the boundaries of mobile phones. Let's face it: app development has hit a wall, smart home adoption is slow and developers are hungry for the next big thing. There's a huge market opportunity for developers in the IoT industry with AI assistants. We're on the cusp of a new economy driven by AI assistants, which provide a 24/7 consciousness that learns from users' personal data to identify patterns and anticipate problems. Today's smart home products are too dumb to create actual value for mass consumer adoption. Consumers don't need more devices to make their lives more convenient; they instead need personalized AI-enabled software services designed to meet their individual needs. AI assistants give service providers the opportunity to deploy sophisticated machine learning-based services. With pattern behavior analysis from deep-learning algorithms that mimic the human brain, AI assistants track adjustments to smart home devices, like smart lighting or connected thermostats, and automatically adapt to consumers' routines. They enable recurring micro-services, like monitoring energy usage, that don't require a phone screen like apps do. AI assistants deliver smart home services beyond simple screen interactions and get deeper into people's lives to better manage their personal and home preferences. The author will demonstrate how to create AI assistant services, leveraging open source tools and IoT platforms. He will focus on the architecture of AI assistants, forming a representative business model which can incorporate intelligent assistants and coordinating devices and data sources to make products work intelligently together.
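As a small, concrete example of the kind of recurring micro-service described above, the sketch below learns a household's typical thermostat setpoint for each hour of the day from observed manual adjustments and then suggests it automatically. Everything here (class names, the hour-of-day model, the sample data) is invented for illustration and is not People Power's implementation.

```python
# Toy routine-learning micro-service: observe manual thermostat adjustments and
# suggest the learned setpoint per hour of day. Illustrative assumptions only.
from collections import defaultdict
from statistics import mean

class ThermostatRoutineLearner:
    def __init__(self):
        self.adjustments = defaultdict(list)   # hour of day -> observed setpoints

    def observe(self, hour, setpoint_c):
        """Record a manual adjustment made by the occupant."""
        self.adjustments[hour].append(setpoint_c)

    def suggest(self, hour, default_c=20.0):
        """Return the learned setpoint for this hour, falling back to a default."""
        history = self.adjustments.get(hour)
        return mean(history) if history else default_c

learner = ThermostatRoutineLearner()
for h, sp in [(7, 21.0), (7, 21.5), (22, 18.0), (22, 18.5)]:
    learner.observe(h, sp)
print(learner.suggest(7), learner.suggest(22), learner.suggest(13))
```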

Keynote Forum

Andy Pandharikar

COMMERCE.AI, USA

Keynote: How is artificial intelligence disrupting commerce?

Time: 13:30-14:00

Biography:

Andy Pandharikar is the CEO of COMMERCE.AI, which develops AI for commerce. Before that, he co-founded Fitiquette, based in San Francisco, which was acquired by the Flipkart group, India's largest online retailer. Fitiquette had developed machine-learning technology for fashion e-commerce and was selected as the top commerce-tech startup at TechCrunch Disrupt SF 2012. He held various product and engineering positions at Cisco. He is also a member of SF Angels Group and has invested in over 12 startups. He studied for an MS in Management Science and Engineering at Stanford University and obtained an Executive Degree at Harvard Business School. He is an outdoor enthusiast, an ultra-marathoner, and a certified lead rock climber.

Abstract:

There are over five billion unique products sold worldwide, with over 30,000 new products introduced every month. From product design to manufacturing, to merchandising, to supply chain, to delivery, all the blocks in commerce are getting dismantled and rebuilt with the help of machine learning and deep learning. How are brands and retailers leveraging algorithms and data to win in this changing landscape? Where will it all lead? Which problems need to be solved in order to realize autonomous commerce? I will share our learnings from building an applied AI startup, along with our vision for self-driving commerce.

Keynote Speaker: Marzieh Jalal Abadi
Biography:

Marzieh Jalal Abadi completed her PhD at the University of New South Wales (UNSW) in 2016 and joined Data61, CSIRO, as a Research Associate in 2017. Her research is in machine learning, sensory data, IoT and cybersecurity.

Abstract:

Today's mobile devices are equipped with a range of embedded sensors. These sensors can be used to infer contextual information such as location, activity, health, etc. and thus enable a range of applications. Recent research has demonstrated that applications with access to data collected from GPS, the accelerometer and even the device battery profile can accurately track the location of users as they move about in urban spaces. In recent years, vibration energy harvesting (VEH) has emerged as a viable option for mobile devices to address the inadequacy of current battery technology. VEH harnesses power from human motions and ambient sources, and it can also be used as a motion sensor, because different ambient vibrations and human motions produce a unique pattern of energy in the VEH circuit. In this paper, we reveal that the VEH signal contains rich information and that it is possible to precisely identify a trip using machine learning techniques. A typical train ride consists of episodes of continuous motion interspersed with brief stoppages at train stations. Our key hypothesis is that the train tracks between any two consecutive stations create a unique vibration characteristic that is reflected in the VEH data, and we model it using machine learning techniques. Then, we leverage the sequential nature of a trip to correct the occasional segment misclassifications and ultimately infer the entire trip. To demonstrate our hypothesis, we collected real-world motion data from 4 distinct train routes in the Sydney metropolitan area. Our data set includes motion data from 36 trips. We apply a thresholding-based segmentation algorithm to extract the individual segments and employ different machine learning classifiers; an ensemble classifier achieves an accuracy of 60.9% for identifying individual segments. Finally, we use the sequential properties of a train trip and achieve a trip inference accuracy of 97.2% for a journey of 7 stations.
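To illustrate how the sequential nature of a trip can correct per-segment misclassifications, the sketch below scores each candidate trip (an ordered list of inter-station segments) against the classifier's per-segment probabilities and picks the most likely trip. The routes, segment labels and probabilities are invented, and the simple log-likelihood scoring stands in for whatever sequence model the authors actually used.

```python
# Toy version of trip inference from noisy per-segment classifications: each
# candidate trip fixes the order of segments, so scoring whole sequences can
# recover the trip even when individual segments are misclassified.
import math

# Candidate trips, each as an ordered list of inter-station segment labels (invented).
candidate_trips = {
    "T1_to_Central": ["A-B", "B-C", "C-D"],
    "T4_to_Bondi":   ["E-F", "F-G", "G-H"],
}

# Classifier output: for each observed segment, a probability per label (invented).
observed = [
    {"A-B": 0.5, "B-C": 0.2, "E-F": 0.3},
    {"B-C": 0.3, "F-G": 0.4, "A-B": 0.3},   # individually misclassified as F-G
    {"C-D": 0.6, "G-H": 0.4},
]

def trip_log_likelihood(trip, observed, eps=1e-6):
    return sum(math.log(probs.get(seg, eps)) for seg, probs in zip(trip, observed))

best = max(candidate_trips, key=lambda name: trip_log_likelihood(candidate_trips[name], observed))
print(best)  # the sequential constraint recovers T1_to_Central despite the middle error
```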

Keynote Speaker: Francis X Govers III
Biography:

Francis X Govers III develops autonomous vehicles for Bell Helicopter. Previously, he worked as Chief Robotics Officer for Gamma 2 Robotics; Chief Engineer of Elbit Systems Land Solutions; Special Missions Manager for Airship Ventures; Lead Engineer for Command and Control of the International Space Station; and Deputy Chief Engineer of the US Army Future Combat Systems. He has also participated in the DARPA Grand Challenge and the DARPA EATR project, and has authored over 40 articles on robotics and technology.

Abstract:

Unmanned vehicles, drones, self-driving cars and other sorts of advanced autonomous vehicles are being announced on an almost daily basis. Uber is working on flying taxis, every car company has a self-driving car in the works, and drones are the hottest Christmas toy for people of all ages. Inside, these autonomous vehicles are systems based on advanced artificial intelligence, including artificial neural networks (ANN), machine learning-based systems, probabilistic reasoning, and Monte Carlo models providing support for complex decision making. One of the common concerns about autonomous vehicles, be they flying or driving, is safety. Safety testing is usually based on deterministic behavior: the aircraft, car, or boat, when faced with a similar situation, behaves the same way every time. But what happens when the vehicle is learning from its environment, just as we humans do? Then it may behave differently each time based on experience. How, then, can we predict and evaluate in advance how safe an autonomous system might be? This paper presents two complementary approaches to this problem. One is a stochastic model for predicting how an autonomous system might behave as it learns over time, providing a range of behavioral responses to be used as a risk assessment tool. The other is a set of methods and standards for writing test procedures for such vehicles.
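To show what a stochastic, Monte Carlo-style risk assessment of a learning vehicle might look like, the sketch below runs many randomized trials of a toy braking controller that adapts after near misses and reports percentiles of the resulting safety margin instead of a single pass/fail result. The vehicle model, learning rule and thresholds are invented for illustration and do not represent the paper's actual models.

```python
# Monte Carlo risk-assessment sketch for a learning controller: run many
# randomized trials and report a distribution of safety margins rather than a
# single deterministic outcome. All models and thresholds are toy assumptions.
import random
import statistics

def simulate_trial(braking_gain, rng):
    """One randomized encounter: an obstacle appears at a random distance and the
    vehicle brakes with the learned gain. Returns the closest-approach margin in metres."""
    obstacle_distance = rng.uniform(20.0, 60.0)
    reaction_error = rng.gauss(0.0, 2.0)
    stopping_distance = 30.0 / braking_gain + reaction_error
    return obstacle_distance - stopping_distance

def monte_carlo_risk(n_trials=10_000, seed=1):
    rng = random.Random(seed)
    braking_gain = 1.0
    margins = []
    for _ in range(n_trials):
        margin = simulate_trial(braking_gain, rng)
        margins.append(margin)
        # Crude "learning": the agent becomes slightly more cautious after a near miss.
        if margin < 2.0:
            braking_gain = min(2.0, braking_gain * 1.01)
    q = statistics.quantiles(margins, n=100)
    return q[0], q[4], statistics.median(margins)  # 1st percentile, 5th percentile, median

print(monte_carlo_risk())  # a range of outcomes, not a single pass/fail answer
```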