Cream of the crop - The growth of Agri-Tech | MVPro 21 | June 2020


CREAM OF THE CROP - THE GROWTH OF AGRI-TECH
PRODUCT SPOTLIGHT: FRAME GRABBERS

INFLUENCER INSIGHT: NEUROMORPHIC VISION EXPLAINED

ROBOTICS: THE IMPACT OF COBOTS

ISSUE 21 - JUNE 2020

mvpromedia.eu MACHINE VISION & AUTOMATION



MVPRO TEAM

Lee McLaughlan - Editor-in-Chief - lee.mclaughlan@mvpromedia.eu
Alex Sullivan - Publishing Director - alex.sullivan@mvpromedia.eu
Cally Bennett - Group Business Manager - cally.bennett@mvpromedia.eu
Spencer Freitas - Campaign Delivery - spencer.freitas@cliftonmedialab.com
Becky Oliver - Graphic Designer
Contributors: Lotte De Kam, Kynan Eng, Mike Grodzki, Carrie Halle, Kane Luo, Andrea Pufflerova, Neil Sandhu, Donal Waide, Paul Wilson

CONTENTS

4 EDITOR'S WELCOME - How you all doing?
7 INDUSTRY NEWS - Who is making the headlines?
12 PRODUCT NEWS - A selection of what's new on the market
14 SICK UK - A prediction comes true
18 CAELESTE - The Mars Webcam: To infinity and beyond
20 TELEDYNE IMAGING - SWIR shines a light on farming
23 YANMAR - Agri-Robotics and sustainable farming
26 PHOTONEO - 3D machine vision breeds new crops

Visit our website for daily updates
www.mvpromedia.eu


28 SCORPION VISION - Intelligent 3D imaging
30 HIKROBOT - Enhancing flat panel display inspection
32 ADVANCED ILLUMINATION - An illuminating idea
34 INFLUENCER Q&A - iniVation CEO Kynan Eng
37 GARDASOFT - Into the future of machine vision
38 BITFLOW - Is CoaXPress 2.0 perfect for multi-camera systems?
40 EURESYS - Coaxlink CXP-12 frame grabbers
43 ONROBOT - 2.5D vision: The 'eyes' have it
44 LMI TECHNOLOGIES - The 3D impact in automotive production
46 RARUK AUTOMATION - How the cobot is saving labour costs
48 ROCKFORD SYSTEMS - Your next co-worker could be a cobot

MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2020. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



HOW YOU ALL DOING?

Let's face it, lockdown has been tough wherever you are in the world. Given humans are creatures of habit, who thrive on routine, these past few weeks have severely tested us all. We have found new ways to occupy our time, whether we're working from home or just taking an enforced break.

But this has also been a time for reflection and a deeper perspective on what we value most and why. This could be family and friends, our work colleagues, a favourite restaurant or our weekly dose of sport (in my case). Whatever it is we have missed or cherish, I am sure we appreciate them all a whole lot more now.

We cannot ignore the impact the coronavirus has had on our lives so far, nor what it will have in the coming weeks and months as the world attempts to return to some semblance of normality. While we have been in lockdown, the industry is to be applauded for the way it has continued to deliver business as usual. I have been particularly impressed with how the main associations - EMVA, AIA, UKIVA - have responded after seeing their events, which bring us all together, decimated.

The online events have been an incredible success and a triumph over adversity. I don't think I have ever attended so many presentations, webinars or Zoom meetings. I have even enjoyed a virtual insight into MVTec's launch of Halcon 20.05, such are the wonders of modern communications and technology. I am sure this is here to stay, in one guise or another, given how successful it has been, but nothing will beat us all being together again.

In terms of this issue, there is some great insight into how machine vision, automation and robotics are making an impact in agriculture; I really enjoyed the Q&A with iniVation CEO Kynan Eng, and I'll let you discover why; plus there is a focus on the impact of cobots in the workplace.

Enjoy the read.

Lee McLaughlan
Editor

Lee McLaughlan - Editor - lee.mclaughlan@mvpromedia.eu
Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB
MVPro - B2B digital platform and print magazine for the global machine vision industry
www.mvpromedia.eu



www.inspekto.com

WORLD'S FIRST AUTONOMOUS VISION INSPECTION SYSTEM

ANY INDUSTRY

ANY PROCESS

ANY PRODUCT

• 1000 times faster to install - set-up in 45 minutes
• No experts needed - plant QA personnel can install it by themselves
• 1/10 the cost of a traditional solution - major savings, come rain or come shine
• 30 good parts only - no defective parts needed

Inspekto has reinvented industrial vision inspection. Our product – the INSPEKTO S70 – has disrupted the industrial vision inspection experience forever. Plug & Inspect™ runs three AI engines in tandem. It merges computer vision, deep learning and real-time software optimization technologies to achieve true plug-and-play vision inspection.

TO ARRANGE FOR A FREE VIRTUAL DEMONSTRATION VISIT WWW.INSPEKTO.COM OR EMAIL INFO@INSPEKTO.COM



INDUSTRY NEWS

XILINX ESTABLISHES ADAPTIVE COMPUTE RESEARCH CLUSTERS

Xilinx is establishing Xilinx® Adaptive Compute Clusters (XACC) at four of the world's most prestigious universities. The XACCs provide critical infrastructure and funding to support novel research in adaptive compute acceleration for high performance computing (HPC). The scope of the research is broad and encompasses systems, architecture, tools and applications. The XACCs will be equipped with the latest Xilinx hardware and software technologies for adaptive compute acceleration. Each cluster is specially configured to enable some of the world's foremost academic teams to conduct state-of-the-art HPC research.

The first of the XACCs is installed at ETH Zurich in Switzerland. XACCs will follow at the University of California, Los Angeles (UCLA) and the University of Illinois at Urbana-Champaign (UIUC). A fourth cluster is being set up at the National University of Singapore (NUS). The XACCs are composed of high-end servers, Xilinx Alveo™ accelerator cards and high-speed networking. Each Alveo card has two connections to a 100Gbps network switch to allow exploration of arbitrary network topologies for distributed computing.

The high-end servers are equipped with the latest Xilinx software including Vitis™, a unified software platform for software engineers, AI researchers and data scientists who want to exploit adaptive compute acceleration. All four XACCs are expected to be operational within the next three months. They will be expanded with the newest 7nm Versal™ Adaptive Compute Acceleration Platform (ACAP) in a future deployment. Researchers who would like to participate in the XACC programme can find out more at www.xilinx.com/university. MV

CHRIS YATES JOINS VISION VENTURES

Vision Ventures has appointed Dr Chris Yates as a director. Dr Yates, who is also the current EMVA president, expands the close-knit team at the German-based company, bringing a wealth of knowledge, experience and business contacts to his new role at Vision Ventures.

He said: "I am very happy to be on board, looking to the future with the Vision Ventures team."

Vision Ventures' managing partner Gabriele Jensen believes it is the perfect time for Dr Yates to join the boutique mergers and acquisitions company, with growth in demand expected for vision tech companies.

"I am very happy to have found in Chris a vision and technology expert with a great network in our industry, many years of entrepreneurial experience as well as a background in large corporations and first-hand experience of M&A processes," she said.


"He will be a great addition to our team, taking care of both sell-side and buy-side projects. Vision tech is one of the most interesting growth areas for strategic and financial buyers and Vision Ventures has established itself as the first-choice advisory for M&A projects in this sphere.

"Demand for corporate transactions will only increase as the Covid-19 pandemic recedes, and therefore it is great to have been able to further strengthen the team at this time." MV




INSPEKTO CHOOSES DETROIT AS ITS US HQ

Inspekto is one step closer to having a strong presence on US soil after striking up a partnership with the Michigan Israel Business Accelerator.

After successfully setting up its European HQ in Heilbronn, Germany, the Israeli start-up and autonomous machine vision pioneer is putting a major focus on its US operations so that it can replicate its achievements in North America.

Since its launch in 2018, Inspekto has gained endorsements from 17 global industrial brands, including Bosch, BMW, BSH, Daimler and Schneider Electric. It has also received major financial investment from the likes of ZFHN Zukunftsfonds, ACE Equity Partners and Grazia Equity. In April 2019, Inspekto entered the US market at Automate, Chicago. The colossal amount of interest that the company gained from the event made opening a US headquarters and hiring a team on the ground an easy decision.

"Detroit is widely known as the US headquarters for some of the world's largest automotive manufacturers, including Ford, General Motors and Fiat Chrysler," explained Zohar Kantor, VP of sales and projects at Inspekto.

"Detroit is home to more than 15,000 industrial robots and the largest engineering workforce in the US. We're eager to inject some of Israel's start-up-nation spirit into the city.

"The Michigan Israel Business Accelerator will help us establish office space, staff and even an assembly facility. It will play a huge part in our successful US launch this year." MV

SICK ACHIEVES 2019 FINANCIAL TARGETS

SICK overcame a challenging market environment as the company announced a seven per cent increase in sales over the 2019 financial year. Reporting in April 2020, it achieved annual sales of €1,750m (2018: €1,636m), which is significantly above the sensor industry average and the AMA Association for Sensor Technology and Measurement's prediction of a one per cent drop. The sales growth, among other measures to increase efficiency, contributed to a 13.1 per cent increase in EBIT to €132.9 million, which remained high at 7.6 per cent of sales.

This ensured SICK met its forecast for the 2019 financial year, even though the company was faced with a slowdown in the global economy, the deepening of global trade disputes and a difficult market situation in factory automation in general and the automotive industry in particular. In addition, SICK maintained its innovation strategy, with 11.5 per cent of its sales income invested back into research and development. The company's financial report noted that the majority of its start-up initiatives founded in 2018 had now reached maturity and contributed to sales with solutions such as AI-supported camera sensors and driverless transport systems.

Sales in Germany dropped 0.6 per cent. Europe, the Middle East and Africa (EMEA) increased sales by 7.9 per cent, sales growth across the Americas was 7.6 per cent, and the strongest growth region remained Asia-Pacific with an increase of 11.3 per cent. Along with the growth in sales, the number of employees also increased in the 2019 financial year, rising by 2.6 per cent to 10,204 worldwide. MV




EMBEDDED VISION AND VISUAL AI MARKET LAUNCH INTERACTIVE MAP

The Edge AI and Vision Alliance and Woodside Capital Partners have announced the launch of a new free web-based tool, the Embedded Vision and Visual AI Industry Map. The map provides a new way to visualise the market and for embedded vision professionals to efficiently identify prospective customers, suppliers and partners.

"Today, hundreds of companies are developing embedded vision and visual AI building-block technologies, such as processors, algorithms and camera modules, and thousands of companies are creating systems and solutions incorporating these technologies," said Rudy Burger, managing director of Woodside Capital Partners. "With so many companies in the space, and new companies entering constantly, it has become difficult to find the companies that match a specific profile or need. We've created the Embedded Vision and Visual AI Industry Map to address this challenge."

The map is a free-to-use tool that provides an easy, efficient way to understand how hundreds of companies fit into the vision industry ecosystem. Interactive and visual, the map displays companies within different layers of the vision value chain, and in specific end-application markets. The map covers the entire embedded vision and visual AI value chain, from sub-components to complete systems. The Embedded Vision and Visual AI Industry Map is available for all to access on the Edge AI and Vision Alliance and Woodside Capital Partners websites at https://www.edge-ai-vision.com/resources/industrymap/ and https://www.woodsidecap.com/embedded-vision-and-visual-ai-industry-map/. MV

EURESYS AGREE DISTRIBUTION DEAL WITH STEMMER IMAGING In order to strengthen the distribution across Europe and Latin America, Euresys has announced a new agreement with STEMMER IMAGING, effective from April 2020. The agreement covers Euresys’ CoaXPress and Camera Link frame grabbers as well as frame grabbers for analogue cameras. Klaus Mählert, product manager at STEMMER IMAGING for Euresys products, said: “Euresys is a leading and innovative high-tech company that designs and provides image and video acquisition components, frame grabbers, FPGA IP cores and software. The Euresys product range meets many different machine vision needs.


Jean Caron, vice president sales and support EMEA at Euresys, said: “Working with STEMMER IMAGING is a very positive move. It reinforces the Euresys presence across Europe and South America giving easy access to our CoaXPress and Camera Link frame grabber series to machine makers and system integrators. “Coupled with its extensive camera portfolio, the large Euresys frame grabber offer will enable STEMMER IMAGING to always provide customers with the adapted solution for their projects.” MV




MVTEC OPENS NEW SUBSIDIARY IN CHINA MVTec Software is strengthening its presence in China by opening MVTec Vision Technology (Kunshan) Co Ltd near Shanghai. Together with its Chinese sales partner, DAHENG IMAGING, MVTec is focused on increasing its presence and support in China to capitalise on the enormous growth potential of that market.

MVTec has been operating successfully in the Chinese machine vision market for more than 10 years and will establish a dual distribution model in China – serving its customers through DAHENG as well as serving customers directly – to take advantage of synergies on both sides. “As the leading international manufacturer of standard machine vision software, we will continue making a significant contribution to the growth and development of the Chinese machine vision market,” explains Martin Krumey, VP Sales at MVTec. “We see China as a strategic growth market and are demonstrating our commitment by opening our own Chinese subsidiary.” MV

SCHNEIDER-KREUZNACH APPOINTS NEW CEO

Dr Wolfgang Ullrich has been named CEO of Jos. Schneider Optische Werke as of April 2020. His arrival concludes the tenure of Heiko Kober (CFO) as interim CEO, as well as of Dirk Christian and Frank Jocham as authorised representatives; they were placed at the helm following the departure of Dr Thomas Kessler at the end of 2019.

"We are delighted that Dr Ullrich has joined us as an experienced image processing and sensor specialist," said Dr Walther Neussel, chairman of the Advisory Board. "He will be able to contribute his expertise especially in the areas of automation technology, robotics, the semiconductor industry and medical technology, and help to move Schneider-Kreuznach forward."

The group intends to continue its strategic focus on industrial applications, cine and photo optics. "Optical technology is a key technology for the future, and we want to support our customers in this area as a reliable and innovative partner. I am looking forward to the new challenge, especially as the Schneider-Kreuznach brand enjoys an excellent global reputation," said Ullrich. MV



USB3 LONG DISTANCE CABLES FOR MACHINE VISION

Active USB3 long-distance cables for USB3 Vision. CEI's USB3 BitMaxx cables offer the industry's first stable Plug & Play active cable solution for USB3 Vision, supporting full 5 Gbps USB3 throughput and power delivery up to 20 meters in length, with full USB2 backward compatibility.

1-630-257-0605 www.componentsexpress.com sales@componentsexpress.com


PRODUCT NEWS

IDS ADDS 5 MEGAPIXEL POLARISATION CAMERA IDS now offers the IMX250MZR 5 MP sensor from Sony with integrated on-pixel polarisers in the uEye CP camera family.

A polarisation sensor makes details visible that remain hidden from other image sensors. The new models ensure better object detection in cases of low contrast or light reflections. They also provide a convenient way of detecting fine scratches on surfaces or the stress distribution within transparent objects. Both USB3 Vision and GigE Vision are available as interfaces.

Using on-pixel polarisation filters, the sensor generates an image with four polarisation directions in a single shot. Based on the intensity of each directional polarisation, the polarisation angle and the degree of polarisation can be determined. This makes it versatile - for example, for checking residues on surfaces before further processing or for removing reflections in traffic monitoring.

Thanks to their patented housing design and manufacturing, the compact industrial cameras also make designers' hearts beat faster. With dimensions of only 29 x 29 x 29 millimetres, the models are ideal for space-critical applications. Screwable cables also ensure a reliable electrical connection. MV
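As an illustration of the maths behind that last step, the minimal sketch below turns four directional intensities from an on-pixel polariser sensor into a degree and angle of linear polarisation using linear Stokes parameters. The array names and the use of NumPy are illustrative assumptions, not IDS or Sony code.

    import numpy as np

    def polarisation_from_channels(i0, i45, i90, i135):
        """Estimate degree and angle of linear polarisation from four
        polariser orientations (0, 45, 90 and 135 degrees), supplied as
        float arrays of equal shape. Illustrative sketch only."""
        # Linear Stokes parameters from the four intensity channels
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        # Degree of linear polarisation (0 = unpolarised, 1 = fully polarised)
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
        # Angle of linear polarisation in radians
        aolp = 0.5 * np.arctan2(s2, s1)
        return dolp, aolp

    # Example use: flag strongly polarised (glare) regions for removal
    # dolp, aolp = polarisation_from_channels(i0, i45, i90, i135)
    # glare_mask = dolp > 0.5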

A NEW ERA IN AUTONOMOUS MOBILE ROBOTS

RARUK Automation is launching a major innovation that will enhance operations for all those looking to automate the internal transportation of goods. The new MiR250 autonomous mobile robot (AMR) is an agile, rapid-charging, cost-effective means of transporting payloads of up to 250kg around premises such as factories, warehouses and healthcare facilities - essentially any business seeking automation of its internal logistics.

Featuring compact dimensions, the specially engineered agility of the MiR250 allows it to move under objects, navigate in narrow spaces and take corners quickly. Able to drive at 7.2km/h (2m/s) while avoiding both static and dynamic obstacles, this latest addition to the company's range of AMRs has a shorter (800mm) and lower (300mm) design, enabling it to navigate in limited spaces. By way of example, the MiR250 can move through door openings with a width of only 800mm.

The MiR250 is designed to function in collaboration with people without external safety measures. For this reason, the device is equipped as standard with more integrated safety functions than any other type of mobile robot. MV




BASLER BOOST CAMERAS AND BUNDLES IN PRODUCTION

Basler has started series production of its new boost CXP-12 camera family and bundles. The boost bundle combines the strengths of the camera - high resolution and speed thanks to modern CMOS sensor technology - with the performance of the new CoaXPress 2.0 standard. Thanks to its features, the camera with its CXP-12 interface is particularly suitable for automated optical inspection tasks such as packaging, PCBs, bottles or surfaces, as well as for inspection applications in the medical sector.

Optionally equipped with either Sony's modern IMX253 sensor, with 12 megapixel resolution and a maximum frame rate of 68 fps, or the IMX255 sensor, with nine megapixels at a maximum of 93 fps, the boost camera series offers excellent image quality. In addition, there is a bandwidth of up to 12.5 Gbps thanks to the modern CoaXPress 2.0 standard; both components - camera and interface card - can be conveniently controlled via the SDK of the pylon Camera Software Suite. This ensures simple integration and operation of the camera.

Since a separate I/O cable is not required, Power-over-CXP (PoCXP) provides a convenient single-cable solution with a maximum possible cable length of 40 meters. This reduces the complexity of the hardware setup and has a positive effect on system costs. Currently four models of the Basler boost camera are available; an expansion of the series with new models is planned for the future. MV
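As a rough sanity check on why a 12.5 Gbps CXP-12 link comfortably carries these sensors, the back-of-envelope calculation below multiplies resolution, frame rate and pixel depth. The 10-bit pixel format and the rounded resolutions are assumptions for illustration, not Basler specifications.

    # Rough throughput estimate: pixels/frame x frames/s x bits/pixel
    def raw_bitrate_gbps(megapixels, fps, bits_per_pixel=10):
        return megapixels * 1e6 * fps * bits_per_pixel / 1e9

    # Assumed figures for illustration only
    print(raw_bitrate_gbps(12, 68))   # ~8.2 Gbps for a 12 MP sensor at 68 fps
    print(raw_bitrate_gbps(9, 93))    # ~8.4 Gbps for a 9 MP sensor at 93 fps
    # Both sit well inside the 12.5 Gbps of a single CXP-12 connection.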

PLUG AND PLAY SOLUTION SIMPLIFIES AI TRAINING AND DEPLOYMENT

Pleora Technologies and Neurocle have announced a technology partnership that simplifies the deployment of deep learning-based classification, segmentation, and object detection capabilities for visual inspection applications. Lead customers are evaluating the AI Gateway and Inspection plug-in developed with Neurocle to reduce errors, false-positives and secondary screenings in packaging, pharmaceutical and finished goods analysis applications.

With the Pleora AI Gateway and Inspection Plug-in, end-users and integrators can deploy machine learning visual inspection capabilities without any additional programming. Images and data are simply uploaded to Neurocle's Neuro-T deep learning vision software, which supports a unique "Auto Deep Learning" technology with predefined parameters optimized for the NVIDIA GPU in Pleora's AI Gateway.

With Pleora's "no code" approach, AI models are transferred and deployed on the AI Gateway for production environments. In comparison to this plug and play approach, traditional AI algorithm development requires multiple time-consuming steps and dedicated coding to input images, label defects, fine-tune training, and optimise models. The AI Gateway handles image acquisition from any vision standard-compliant image source and sends out the processed data over GigE Vision to inspection and analysis platforms. This means end-users can avoid vendor lock-in while maintaining infrastructure, processes, and analysis software. MV



A PREDICTION COMES TRUE

Neil Sandhu, SICK UK product manager for imaging, measurement and ranging, explores the brave new world of 'SensorApps'

Nearly three years ago, SICK began signposting a future in which it would be commonplace to download a ready-made "App" to program and configure a programmable vision sensor, perfect for a specific application. It was a concept that some found difficult to envisage at first, especially coming from a traditional sensor hardware manufacturer. Now that prediction is coming true, and it's all down to SICK's ground-breaking AppSpace software 'ecosystem'. While it can, and will, be a model applied to all kinds of smart sensors, SICK AppSpace offers the tantalising prospect of being a game-changer that demystifies machine vision, and especially 3D vision, for the many and no longer just the few.

With SICK AppSpace, developers have the freedom to design, develop and deploy their own customised solutions, to perfect simple web-based graphical user interfaces for operators and to distribute their applications across multiple hardware platforms and locations. Developers use the SICK AppStudio to create customerspecific applications, then the SICK AppManager to import Apps into the sensor and adapt it to the task in hand. The SICK AppPool cloud service makes it easy to install, manage and download sensor Apps to programmable SICK devices anywhere in the world.

It might seem curious for a sensor company to be taking the lead in the field of software. Yet, according to SICK, once sensors become smart and programmable, investing in software development, and even marketing application ‘kits’ including all the necessary hardware, makes perfect sense.

Developers have access to industry-standard image processing libraries including HALCON. They can work with their preferred programming technologies, including a graphical flow editor and Lua scripting tools, as well as C++ or Java. There are many integrated support functions, such as auto-completion, so programmable sensor app development slots easily into existing development processes.

Machine-builders and systems integrators are already beginning to reap the rewards of cheaper, quicker and easier integration using AppSpace. End-users, too, are benefiting from the plug and play benefits of a growing basket of ‘SensorApps’ rolled out by SICK’s own R&D teams.

Helpful utilities such as emulators, debuggers, resource monitors, and an extensive range of documentation and demo apps also make the development process easy. All software components are combined by the PackageBuilder into a single package that safely defines access rights.



Previously developed 'Apps' can be adapted without having to start from scratch, then set up and configured as required.

Meanwhile end-users can dispense with costly and time-consuming programming by downloading and adapting a ready-made SensorApp that already has most of the hard work done for them. As a result, set up time is dramatically reduced, rapid product changeovers are easily accommodated, and localised edge-based sensor solutions can be easily configured using Sensor Integration Machines.

More recently, SICK launched an innovative 3D SensorApp that has enabled rapid, damage-free guidance of automated and driver-assisted high-bay forklifts into pallet pockets, as well as the precise and efficient pick-up of dollies by automated guided vehicles (AGVs). The SICK Pallet Pocket and SICK Dolly Positioning SensorApps run on SICK's Visionary T-AP 3D time-of-flight snapshot camera. The new SensorApps work by positioning the camera in front of the pocket or dolly chassis. Using a single shot of light, the camera captures a 3D image, then pre-processes and evaluates the co-ordinates of the pallet pocket or the space under the dolly, before outputting to the vehicle controller. The information can also be sent to a driver display to aid manual forklift operation, particularly useful in high-bay warehouses.
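As a rough illustration of the kind of processing such a SensorApp performs (not SICK's actual algorithm), the sketch below takes a single depth image from a time-of-flight camera, assumes the pallet face sits at a roughly known distance, and reports the centroids of the open pockets, which appear as regions noticeably deeper than the face. The depth array, thresholds and minimum region size are hypothetical.

    import numpy as np
    from scipy import ndimage

    def find_pocket_centres(depth_m, face_distance_m, min_area_px=500):
        """Return (row, col) centroids of pallet pockets in a depth image.
        Pockets show up as regions noticeably deeper than the pallet face.
        Illustrative sketch with hypothetical thresholds."""
        # Pixels at least 10 cm beyond the pallet face are candidate openings
        openings = depth_m > (face_distance_m + 0.10)
        labels, n = ndimage.label(openings)
        centres = []
        for region in range(1, n + 1):
            mask = labels == region
            if np.sum(mask) >= min_area_px:   # ignore small noise blobs
                centres.append(ndimage.center_of_mass(mask))
        return centres  # convert to lateral offsets for the vehicle controller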

ROBOT GUIDANCE This is all being made possible by greater accessibility to ‘all-in-one’ intelligent vision sensors, such as SICK’s fully programmable TrispectorP1000 3D vision camera. One of the first Apps to be developed was the SICK Trispector P Beltpick, a complete vision-guided belt-picking solution for industrial and collaborative robots. It offers the improved z-axis control available through 3D vision, so products with complex profiles can be picked from variable heights without risk of damage.

LABEL INSPECTION

Among the first integrators to successfully exploit the opportunities of SICK AppSpace has been AutoCoding Systems. Their new 4Sight Automatic Print Inspection System is a breakthrough innovation for label inspection and validation that operates on SICK's Inspector P smart 2D vision camera.

AutoCoding's 4Sight solution enables direct, closed-loop communication of the printed message between standard inkjet, laser or thermal transfer printers and the smart camera. All processing is undertaken onboard the SICK camera where the 4Sight software resides, so no line-side PC is needed. With AutoCoding's unique ability to self-optimise the inspection, error-proof validation of printed codes such as dates, batch and line numbers is assured.

SIMPLE PASS/FAIL

SICK's Quality Inspection is a freely downloadable, easy to set up SensorApp for the Inspector P programmable vision camera. It supports simple and economical pass/fail quality inspection tasks. It can be downloaded with a convenient user-guided set-up and an accompanying tutorial video and is easily integrated into new or existing machinery to master a wide variety of classic 2D presence inspection tasks.

The SICK LabelChecker, also running on the SICK Inspector P, provides an integral label quality control system without the need for an additional evaluation unit. Compact and stand-alone, it can be set up to recognise and switch between different label types, with custom settings for each type. Its capabilities encompass many standard types of 1D and 2D code, dot matrix printed and indented (peened) text, as well as Optical Character Recognition and Optical Character Verification in multiple regions and lines.

With the MQCS (Modular Quality Control System), SICK is offering a complete package, with pre-written software for its high-performance Ranger3 3D camera, together with a control cabinet and HMI. Originally developed for mould inspection in chocolate bar production for leading European brands, SICK has expanded the concept into a universal and scalable 3D high-speed quality control system. Application modules support specific inspection tasks, such as code comparison, counting, label inspection, volume measurement or 3D object inspection.

Most recently launched is SICK's Colour Inspection and Sorting SensorApp, a low-cost, easy set-up solution to a very common task in FMCG and food & beverage production. The SensorApp can be supplied as a package with the ultra-compact SICK Pico- or midiCam 2D streaming camera, a SICK Sensor Integration Machine, LED illumination and a photoelectric sensor. The system is used to check that goods, assemblies or packs on a conveyor are the right size or colour. It can count objects with different sizes and colours as well as validate the correct colour or colour gradations, e.g. of baked goods. Objects with anomalies can be sorted out, or the integrity and completeness of secondary packaging can be detected.
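A simple way to picture this kind of colour and size check is the OpenCV sketch below: it counts the objects in a conveyor image whose colour falls inside a target HSV range and whose area lies within expected limits. The colour range and area limits are placeholder values, and this is an illustration rather than the SICK SensorApp's implementation.

    import cv2
    import numpy as np

    def count_conforming_objects(bgr_image, hsv_low, hsv_high,
                                 min_area=2000, max_area=50000):
        """Count objects whose colour and size match expectations.
        hsv_low/hsv_high are (H, S, V) bounds; areas are in pixels."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return sum(1 for c in contours
                   if min_area <= cv2.contourArea(c) <= max_area)

    # Example: count golden-brown baked goods (placeholder HSV range)
    # n_good = count_conforming_objects(frame, (10, 80, 80), (30, 255, 255))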

SAVE TIME AND COST AppSpace promises huge flexibility for vision engineers and programmers because they can save a great deal of time and development costs. SICK has established a Developers Club to provide support, training and encourage knowledge-sharing among the community, with its own annual Developers Conference. Meanwhile, SICK will continue to roll out its own SensorApps and make them available as “plug and play” solutions on SICK devices for end users, as well as launching ready-made hardware and software packages. For the future of SensorApps, simply watch this space.

MV



Precision Perfect images at high speed

Precision at high speed. Precision skydiving is a perfect match for extreme athletes – and precision inspections at high speed are a perfect match for the LXT cameras. Thanks to Sony® Pregius™ sensors and 10 GigE interface, you benefit from high resolution, excellent image quality, high bandwidth and cost-efficient integration.

Learn more at: www.baumer.com/cameras/LXT


TO INFINITY AND BEYOND

In the depths of space, the Mars Webcam is active. It was developed by Belgium-based Caeleste, a specialist in bespoke sensors. Bart Dierickx, CTO and founder of Caeleste, and Dirk Uwaerts, project manager and partner at Caeleste, share their insight into the project with Caeleste sales and marketing officer Lotte De Kam.

Q: Bart, Dirk, it was recently brought to our attention that the VMC (visual monitoring camera) - the Mars Webcam - has become the most popular planetary orbiting instrument amongst amateur astronomers, students and teachers. I believe that you were the designers of this instrument.

Dirk: The objective was to design, develop and test a novel micro camera chip, to be used as a miniature monitoring camera for space applications. Compact and with as few components as possible.

Bart: The design was fully CMOS technology and allowed for an easy combination of control logic, ADC, interfaces and image compression circuitry. The "Mars Webcam" is based on the IRIS-1 image sensor (Integrated Radiation-tolerant Imaging System).

Q: What was the origin of this camera? Why was it created?

Bart: It all started in 1997 with its predecessor, the "VTS" - the Visual Telemetry System. After the debacle of the Ariane 501 launch in 1996, ESA commissioned a "little" monitoring camera for the on-board observation of spacecraft activities.


Dirk: We were part of the project as project supervisor and designer and had to create in a few months the first CMOS camera ever used in space missions. The VTS was based on the logarithmic Fuga15d and mounted on the ESA launch vehicle Ariane 502.

Q: Hence the successor of this project was the Visual Monitoring Camera (VMC) IRIS-1?

Bart: After the VTS, the successor was developed, the "VMC". It uses the IRIS-1 colour or black and white camera chip and has direct interfacing to the spacecraft's telemetry system, not requiring a bulky camera master unit.

Dirk: VMC cameras have been used successfully on the XMM and Cluster II (space missions by ESA), to verify spacecraft separation and solar panel deployment.

Bart: The VMC on the actual Mars Express that took off in June 2003 during Europe's first Mars expedition was used to monitor the separation of the Beagle 2 lander. The Beagle dramatically failed, but the Mars Express with the IRIS-1 VMC is still in orbit and alive, and has been re-baptised the "Mars Webcam".

Q: For this reason, the VMC is not just an ordinary camera in an extraordinary place? What are the unique features of the design?

Bart: The IRIS pixel used the "high fill factor patent". It allowed for extraordinarily high light sensitivity.



Dirk: The challenges were particularly interesting for this project. CMOS imagers were immature and there was sparse knowledge on radiation tolerance of CMOS imagers in space. It is still operational today - not really bad…

Bart: We are most challenged by projects that have never been done before and to see how far we can go.

Q: As far as Mars, apparently?

Dirk: To create a Mars "Webcam" was never the intention, but it is nice to know that our baby is a space legacy today.

*Bart and Dirk started the development of the VMC with Imec and FillFactory. Also involved in the realisation of the VMC were Werner Ogiers (now at AMS), Guy Meynants (now KULeuven) and the company OIP in Oudenaarde. MV

For more information:
• http://blogs.esa.int/vmc/
• https://en.wikipedia.org/wiki/Visual_Monitoring_Camera
• https://www.flickr.com/photos/esa_marswebcam/
• https://phys.org/news/2016-05-mars-webcam-pro.html

THE FUTURE DEPENDS ON OPTICS™

NEW: High Resolution Lenses for Large Format Sensors

A new range of lenses specifically designed for large format high resolution sensors.

CA Series: Optimized for APS-C sensors featuring TFL mounts for improved stability and performance.

LH Series: An ultra-high resolution design for 120 MP sensors in APS-H format. Also compatible with 35 mm full frame sensors.

LS Series: Designed to support up to 16K 5 micron 82 mm line scan cameras with minimal distortion.

Get the best out of your sensor and see more with an imaging lens from Edmund Optics. Find out more at: www.edmundoptics.eu

UK: +44 (0) 1904 788600 | GERMANY: +49 (0) 6131 5700-0 | FRANCE: +33 (0) 820 207 555 | sales@edmundoptics.eu


SWIR SHINES A LIGHT ON NEW FARMING TECHNIQUES

Mike Grodzki of Teledyne Imaging explains how infrared imaging is adding precision to agriculture.

Whatever your stance on the causes of climate change, the data related to the results are stark. The United Nations estimates that the world’s farmers will have to produce 70 percent more food using just five percent more land by 2050. That’s a difficult equation to compute. In California’s Central Valley—the seven-million-acre region where more than half of the fruits, vegetables and nuts produced in the US are grown—long-time farmers are proud of their innovation and accustomed to tough equations. After all, since the 1930s, they have managed to consistently harvest almost 10 percent of the country’s agricultural output—by value—on less than one percent of the country’s farmland. They’ve succeeded by mastering long-range irrigation and groundwater extraction. Every year, more than 250 different crops are grown in the Central Valley, with an estimated value of $17 billion.

Over an area southwest of Sacramento, the effect can be clearly seen in these two February satellite images acquired in 2014 and 2003. Vegetation is depicted in shades of red, while barren fields are dark brown and gray. The left image was acquired on Feb. 11, 2014 by Landsat 8, and the right image was acquired 11 years earlier, on Feb. 8, 2003 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA’s Terra spacecraft. The great increase of barren fields, and the bare hills in the southwest corner, are readily apparent. The images cover an area of 7.4 by 13.5 miles (12 by 21.7 kilometers). Image courtesy of NASA

As skilled as the agrarian visionaries of the Central Valley are, they had run out of tricks when a string of drought years in the first two decades of the century threatened to decimate America’s market garden. Between 2012 and 2016 alone, the estimated groundwater volume fell by ten cubic miles. The fact that the western edge of the valley is just 100 miles from the heart of another valley, where silicon is valued more highly than celery, has turned out to be quite fortuitous. Contemporary infrared scanning technology is providing the farmers of the Central Valley, and others much farther afield, with a new technique for boosting the chances for good harvests, using water more efficiently, and moving toward what is being termed precision farming. It required a new type of imaging.



A TRICK OF THE LIGHT

Developed for military use, infrared scanning has during the last decade become a widespread technology for applications such as predicting water stress in crops and fruit yield, planning irrigation scheduling, detecting disease and pathogens in plants, and evaluating fruit maturation. Farther down the food chain, thermal imaging is used to detect the ripeness of fruits and vegetables at distribution points and in retail outlets, and for detecting foreign bodies in foods.

The technology uses the fundamental science of infrared radiation, which Sir William Herschel, a German-born astronomer, discovered in 1800. Working in England, two decades after discovering the presence of the planet Uranus, Herschel observed that sunlight produced significantly more heat when passed through a red filter. Passing the sunlight through a prism and measuring the temperatures produced by various colours, he surmised that the highest temperatures existed beyond the visible light rays, in what he termed "calorific rays." In technical terms, what Herschel concluded was that infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for studying various heat-producing targets. Infrared light is electromagnetic radiation with wavelengths longer than those of visible light, extending from the nominal red edge of the visible spectrum at 700 nanometers (0.7 µm) to 1 millimeter (1000 µm).

As outlined in a 2005 article produced for the Canadian Society for Engineering in Agricultural, Food and Biological Systems (CSAE): "Many characteristics of infrared radiation are similar to visible light. For instance, infrared radiation can be focused, refracted, reflected and transmitted." The paper concluded that thermal imaging "has potential to be used in many pre-harvest and post-harvest operations of agriculture," but noted that the application opportunities were still in the experimental stage.

Just 14 years later, as reported in the New York Times profile of farming in the Central Valley, one-third of the area's "specialty" crops - things like nuts, grapes, broccoli, artichokes and cucumbers - were being monitored using thermal imaging. Among the applications are nursery monitoring, irrigation scheduling, soil salinity detection, disease and pathogen detection, yield estimation, maturity evaluation and bruise detection. While efficiencies are an important result - with farmers using imaging to determine where to focus the efforts of their field workers and reducing the amount of direct inspection of crop rows and individual plants - corrective action is even more critical to combat drought conditions and other threats to plant health.

Imaging can allow farmers to understand crop conditions and fix problems, such as nutrient deficiency or moisture stress, before they create long-term loss. Data collected from imagery can also give them the insight they require to make decisions related to investments. For example, imaging can provide estimated crop yields, which can help determine whether investments in extra irrigation or fertilisers are warranted.

In that sense, the technology is playing a major role in transforming farming from being simply sustainable - using methods like those developed over decades in places like the Central Valley - to the precise application of techniques based on data that rely on things like GPS tracking, sensors, soil sampling, satellite mapping and the use of drones. According to CropLife Canada - part of the CropLife International consortium, a global federation with members across 91 countries - precision farming has the potential to transform the outcome of agriculture during the coming century.

SHORTER WAVES, HIGHER RETURNS

With humanity's demands on our farms increasing, the technology to drive precision farming is changing, too. On the digital imaging front, the evolution includes the move to short-wave infrared (SWIR) imaging, which is defined as using a wavelength range of 1.4-3 µm, somewhat shorter than either mid-wave infrared (MWIR) or long-wave infrared (LWIR) and just above the near-infrared range. Shorter SWIR wavelengths behave similarly to photons in the visible range. While the spectral content of targets in SWIR is different, the images produced are more visual in their characteristics and less like the lower-resolution thermal behaviour of the MWIR and LWIR light bands. Compared to MWIR and LWIR, SWIR's shorter wavelengths enable images with higher resolution and stronger contrast, both of which are important criteria for inspection and sorting.

Because water is highly light-absorbent in the SWIR wavelength, it appears almost black in images of objects illuminated using SWIR cameras. As a result, applying an appropriate filter or light source can help make moisture content highly evident in bruised fruit, well-irrigated crops, or bulk grains. Thanks to this property, scientists can precisely follow water absorption from the roots into the leaves. Conversely, evaporation and desiccation can also be seen. A large number of applications that are difficult or impossible to perform using visible light are possible using SWIR. When imaging in SWIR, water vapor, fog, and certain materials such as silicon are transparent. Additionally, colours that appear almost identical in the visible spectrum may be easily differentiated using SWIR.
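Because wet material absorbs strongly in SWIR and therefore appears dark, a moisture map can be approximated by little more than a threshold on a SWIR reflectance image. The sketch below assumes a normalised single-band SWIR image held in a NumPy array; the 0.2 reflectance threshold is an illustrative assumption, not a Teledyne value.

    import numpy as np

    def moisture_mask(swir_reflectance, threshold=0.2):
        """Flag likely high-moisture pixels in a SWIR reflectance image
        (values scaled 0..1). Water-rich areas absorb SWIR and look dark,
        so low reflectance is treated as high moisture."""
        return swir_reflectance < threshold

    # Example: fraction of a field tile that looks well irrigated
    # wet_fraction = moisture_mask(tile).mean()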

21


Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. These imaging techniques (above) give farmers actionable information about crop and field status by estimating crop development at various stages.

1. True colour maps from aerial digital photography
2. The normalised difference vegetation index (NDVI) is a dimensionless index that describes the difference between visible and near-infrared reflectance of vegetation cover and can be used to estimate the density of green on an area of land
3. Leaf area index (LAI) is defined as the projected area of leaves over a unit of land (m² m⁻²), so one unit of LAI is equivalent to 10,000 m² of leaf area per hectare. The statistical methods use the empirical relationship between the LAI and surface reflectance or vegetation indices.
4. Estimated chlorophyll concentration using thermal and broadband multispectral high-resolution imagery
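NDVI itself reduces to a one-line calculation once red and near-infrared reflectance bands are available. The sketch below assumes two co-registered NumPy arrays; it follows the standard NDVI definition rather than any code from the study cited below.

    import numpy as np

    def ndvi(nir, red):
        """Normalised difference vegetation index from co-registered
        near-infrared and red reflectance bands (float arrays).
        Values near +1 indicate dense green vegetation; bare soil and
        water sit near or below zero."""
        return (nir - red) / np.maximum(nir + red, 1e-9)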

Perhaps best of all, for the future of precision agriculture and the ability of farmers to utilise this promising technology, SWIR offers savings over other types of imaging, too. Since SWIR wavelengths transmit through glass, the lenses and other optical components, like filters, designed for SWIR can be manufactured using the same techniques used for visible components, decreasing manufacturing costs. Paired with a mobile device app, the technology can provide handheld views in a wide range of settings for about $500. In the Central Valley, farmers report that SWIR imaging costs them up to $30/acre, including the cost of delivery mechanisms like drones or aircraft. While a price tag of $500 may still make SWIR impractical for personal use today, it’s not difficult to imagine future shoppers parsing data on their smartphones while cruising the produce aisles of their neighbourhood store. And, thanks to technology discovered more than two centuries ago, there should still be apples, pears and peaches to scan. MV

Source: International Journal of Applied Earth Observation and Geoinformation, 2015; Manal Elarab, Andres M Ticlavilca, Alfonso F. Torres-Rua, Inga Maslova, Mac McKee



AGRI-ROBOTICS FOR A SUSTAINABLE FARMING FUTURE

Agriculture in the future will see increasing use of scientifically precise farming techniques, where automated ‘agro-bots’ monitor, treat and work the land, using advanced technology designed to help maximise yields and minimise disease. With its extensive experience in automation and agricultural mechanisation, Yanmar is now showing the way in advanced field robotics research.

TIME FOR CHANGE IN OUR CHANGING TIMES? It's hard to think of a more important economic sector than agriculture. This is an industry that directly affects the lives of everyone the world over, despite being beset by challenges from all sides. Changes in temperature and precipitation are influencing crop yields; farmers and agricultural workers are directly exposed to the effects of weather extremes, while millions more in food-related jobs are already feeling the impact of our changing climate. Furthermore, consumers today are increasingly aware of the issue of chemicals used in producing their food and demand sustainable production of ever tastier, higher quality produce. Finding the best way to deal with these issues while meeting environmental pressures is causing many governments to turn to automation specialists and technology experts to try and improve the lot of the farmer, meet consumer demands, and tackle the myriad of challenges confronting the industry.

SMART FARMING RESEARCH WITH SMASH IN ITALY Increased automation and technology within agriculture is nothing new in itself of course – in fact, it’s been happening ever since the Industrial Revolution. But what is new is how technology is being used to tackle


problems related to food disease control and unstable weather patterns. The focus is now on achieving desired yields in an environmentally sustainable way, with a continuous focus on reducing the amount and type of chemicals used. Drought, flooding and the appearance of new pests and diseases are, however, now a threat on all continents. Even Europe faces a challenge right across its farming systems. This is especially true of countries such as Italy, which faced a 57 per cent plunge in its 2018 olive harvest – the worst in 25 years – as a result of climate change, according to scientists. With its European research facility nestled in the hills above Florence, Italy, Yanmar R&D Europe (YRE) is well



placed to focus on a variety of field-based studies to bring added value to the agriculture industry – and possibly even attract a new generation of workers to the land. These include the two-year, €4m ‘SMASH’ project being carried out in cooperation with 10 technology partners to develop a mobile agricultural ‘eco-system’ to monitor, analyse and manage agricultural crops. The acronym stands for ‘Smart Machine for Agricultural Solutions Hightech’, and this project was co-financed by the Tuscany government. It consists of the development of a modular robotic platform that employs the latest information communications technology to examine crops and soils, analyse gathered information and provide clear, actionable information to farmers to support crop management. One of Yanmar’s many roles was to develop control systems for the multipurpose robotic arm for mobile manipulation (including precision spraying), sensor integration for positioning technologies, and autonomous navigation and software development for the control of the system’s mobile base (in collaboration with other partners).

For YRE’s modelling and control engineer Manuel Pencelli, developing a prototype agro-bot that could be used to monitor and control crops, take soil samples for analysis and accurately target agricultural chemicals for precision application, required many different areas of expertise from the beginning of the project. Pencelli said: “There have been many partners involved throughout. We needed mechanical expertise for developing the structure of the vehicle, and many ‘communications’ experts because we have a lot of devices that need to ‘talk’ to each other. Our starting point was in fact a tracked vehicle that was originally built for moving along a beach and cleaning the shoreline!” There are two working SMASH prototypes – one for grapevines and the other for spinach – to cover the two different types of crops that were originally slated for research. The former has already undergone significant testing at a vineyard farm in the Pisa province, where Manuel has been instrumental in demonstrating the possibilities that this robotic eco-system could offer farmers. “SMASH is not a single machine, but a series of different devices including a robot, base station, drones and field sensors that together provide vital information to help


farmers. A farmer could programme the task that he wants SMASH to carry out, and while he is involved in other activities, this machine could move autonomously, monitoring crops, detecting and treating diseases, and saving the farmer or his workers significant time out in the fields manually checking crops.”

MAPPING AND MONITORING, WEEDING AND FEEDING

SMASH consists of a mobile base, a robotic arm featuring manipulators and vision systems, a drone and an ancillary ground station. Imagine a system that is designed to function across a range of precision agriculture technologies, offering specific insights on geomatics, robotics, data mining, machine learning and more, while taking into account the environmental and social issues facing farmers.

For Manuel, the possibilities for SMASH are endless: "In addition to all the functions that can be performed by the robotic arm, we also have some attachments that can be mounted on the back of the vehicle for mechanical weeding, or working the soil, as it moves. This work can be done simultaneously, together with the monitoring and detection."

Yanmar's expertise has been in the software development for the agro-bot and the integration and installation of all of the other parties' components. It's a complicated mass of electronics, with wires, sensors, cameras, GPS receivers, and multiple electric motors (eight of them!) competing for space. But it all works - even on a muddy vineyard in late February, where the independent steering system and superior traction are demonstrated on a variety of terrain.

"The sensor fusion was one of the most challenging aspects of this project," adds Manuel. "Because we have a very particular environment within fields, where a number of variables can change, such as the infrastructure, soil, shape of the fields and even other workers moving around the agro-bot. So, the localisation of the vehicle, improving the robustness of it and understanding its physical constraints were interesting - such as speed, steering angle, the positioning of, and communication between, the mounted on-board devices - all these aspects can affect the motion of the vehicle."



STRENGTH IN NUMBERS

YRE joined forces with Florence University's Agriculture Department to further advance research activities in the field. The university has significant experience in sustainable crop management, having recently completed the EU-funded Rhea project that looked at improving crop quality, health and safety for humans, and reducing production costs by using a fleet of small, heterogeneous robots - ground and aerial - equipped with advanced sensors, enhanced end-effectors and improved decision control algorithms.

For the SMASH project, the university's Professor Marco Vieri believed that a holistic approach to research was needed, alongside enabling the latest technologies: "Farming provides food, feed, fibre and fuel for humans, but we also have to consider rural, cultural and historical issues.

"In the past, there was a yearly calendar of agricultural operations, but a new mindset is required these days that allows us to control and mitigate risks such as drought, pests and flooding. We needed to explore increased automation not only to enhance and increase the amount of product, but also to apply an added value.

"Yanmar shares our vision to help farmers realise healthy, high-value production with a true technological system, so our part in SMASH has been to develop equipment and effectors for the two scenarios of vineyards and horticultural field crops like spinach. We have extensive knowledge of farm machinery and new technological possibilities, so it's about helping reduce the use of pesticides that are not safe for the micro-organisms of the soil and plants, while increasing the level of nutrients and useful bacteria."

It's fair to say that farmers are on the front line of the debates surrounding climate, emissions and sustainability. Even when it comes to high-value crops such as the grapes, olives and nuts found in this region of Italy, it's hard to argue against using automated and connected agriculture to bring scientific data and farmers' needs together. After all, robots can work 24 hours a day and they have less impact on the soil than tractors due to their smaller size.

Imagine a fleet of robots a fraction of the size of a conventional tractor and it's easy to see the possibilities that AI-based, technology-driven precision farming can offer in the coming years. The use of drones to map fields and check crops, and agro-bots to harvest fruit, sow seeds, identify and treat weeds with exact doses of pesticide and fertiliser - it's all about targeting efforts only in areas that need work, which allows for a reduction in labour, capital costs and emissions as a result.

With its ongoing research into advanced agricultural robotics, Yanmar is taking on the challenge of showing the possibilities and potential benefits of increased precision farming techniques in the future. Whether automated and robot tractors working the fields will become a familiar sight remains to be seen, but it's hard to argue against using technology to sustainably increase quality and yields from the land. And if the sound of drones hovering over crops means that farmers are able to identify growth patterns and nutrient needs, and then deliver pesticides and fertilisers with pin-point accuracy with a fleet of robots, then surely that will be a welcome addition to the tools currently used in our fields. MV

PARTNERS AND THEIR ROLES IN THE SMASH PROJECT

AvMap: technology and sensors for mobile base navigation.
EDI: a mechanical and electronic engineering company that helped develop the mobile base of the robotic system.
Base s.r.l.: data transmission, data processing, cloud data storage.
Seintech: data analysis, data mining and machine learning.
Florence University, Agriculture Department: development of the end effectors.
IIT (Italian Institute of Technology): robotic system (Plantoid) to monitor and analyse the soil.
Sant'Anna Bio Robotic Institute: development of a manipulator to be installed on the robotic arm, to collect samples and manipulate items.
Copernico: preparation of drone for monitoring and mapping.
DORIAN: technologies and algorithms for machine vision.
Giuntini Filippo: agronomist.




3D MACHINE VISION HELPS BREED NEW WHEAT CROP VARIETIES

Andrea Pufflerova of Photoneo explains how the UK's National Physical Laboratory will use 3D machine vision in crop growing to feed future generations.

It is beyond dispute that one of the most pressing challenges the world faces today is to find ways to feed the ever-growing population in a sustainable way. Experts agree that crop yields will need to increase by 70 per cent to feed the population by 2050. Seed producers therefore endeavour to come up with the next variety of "Super Seed" that would address these concerns for the future global food supply. The traditional, manual method of capturing and analysing crop phenotype data is very time consuming and does not always provide all the information that is needed. Automating this process is therefore essential and represents a big step forward, opening up completely new possibilities for crop breeding programs. Wheat is the most widely grown crop in the world and one of the pillars of the global food grain supply.

AUTOMATION TAKES THE LEAD

Currently, the average wheat yield is about 13 tons per acre but, with the expanding population, this number will need to rise. The National Physical Laboratory in the UK has developed 3D crop scanning technology to address these concerns. The technology aims to automate and improve the process of collecting and analysing crop phenotype data, which is then used for breeding new crop varieties.

DATA COLLECTION AND ANALYSIS

Each year seed producers send their wheat seed samples to be graded. Based on the results, farmers decide which seeds to purchase for next year's planting. To create a better variety of wheat, seed producers use phenotype data to guide their crossbreeding programs. The four basic factors that indicate the quality of wheat and how much a particular seed variety could yield are the ear length, ear height, volume, and the number of spikelets (grains per ear). This data is currently collected manually, using a ruler. However, this is very time consuming, as the measurements need to be conducted across hundreds of field plots, and the data might not be completely accurate as it is taken from manual measurements and pictures. The method developed by the National Physical Laboratory uses a 3D scanner and point cloud analysis, and promises much better results, with benefits in terms of time reduction and cost savings.
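NPL's actual point cloud analysis is proprietary and runs in Halcon, but the basic idea of pulling ear metrics out of a segmented 3D scan can be sketched in a few lines. The following Python/NumPy sketch is purely illustrative; the function name, the 1 mm voxel size and the synthetic data are assumptions, not NPL's implementation.

import numpy as np

def ear_metrics(points_mm):
    """Illustrative metrics for one segmented wheat ear given as an (N, 3) point cloud in mm.

    z is taken as the vertical axis: ear height is the top of the ear above ground (z = 0),
    ear length is the vertical extent of the segmented ear, and an occupied-voxel count
    stands in for volume.
    """
    z = points_mm[:, 2]
    ear_height = z.max()                      # top of the ear above ground level
    ear_length = z.max() - z.min()            # extent of the segmented ear
    voxel = 1.0                               # 1 mm voxels, roughly matching the scanner resolution
    occupied = np.unique((points_mm // voxel).astype(int), axis=0)
    volume = occupied.shape[0] * voxel ** 3   # crude occupied-voxel volume estimate
    return ear_height, ear_length, volume

# Synthetic stand-in for a single segmented ear (real data would come from the PhoXi scans)
cloud = np.random.uniform([0, 0, 650], [15, 15, 740], size=(5000, 3))
print(ear_metrics(cloud))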

26

A wheeled rig equipped with Photoneo PhoXi 3D Scanners promises much better results than the traditional, manual methods of wheat crop phenotype data collection.

mvpromedia.eu


The NPL uses a wheeled rig equipped with a number of 3D imaging techniques. One of them is the structured light technology deployed in PhoXi 3D Scanners. These scanners, produced by the Slovak company Photoneo and supplied by Multipix Imaging in the UK, are mounted on the rig to capture the scene in three dimensions, with a turnaround time of 30 seconds per scan. The standard sample area is two by five metres. The scanners provide point cloud information with millimetre resolution and remarkable detail of the individual grains. This information is then processed in MVTec's Halcon imaging software using proprietary algorithms.

ON THE WAY TO A "SUPER SEED"

The new technology provides farmers with far more data than the traditional methods, enabling them to make better decisions about their crops. Farmers can see in detail how the crops are growing, whether they need more water or fertiliser, and when to harvest. They are therefore able to control the whole process of supply and maximise what they get from the field. Dr Richard Dudley, a science area leader for electromagnetics, 5G and precision farming at the National Physical Laboratory, said: "We can image what's growing in fields and help seed growers and plant growers come up with the next variety of plant.

"We're just observing plants in their natural habitat and picking up the ones that have the best yield and are also more tolerant of disease and drought, which is helping to find the best crops that also have the best nutritional value for people."

Point cloud: A point cloud obtained from the PhoXi 3D Scanner, featuring wheat crops in millimetre resolution.

The NPL opted for the PhoXi 3D Scanner XL based on its high scanning accuracy and resolution - it can capture up to 3.2 million 3D points per scan - as well as its robustness. As the plants need to be observed in their natural outdoor habitat, the scanners have to work reliably under varying light conditions, from bright sunshine to overcast skies. At the same time, they are robust enough to resist dust and the harsh movements that come with being attached to a rig pulled by a tractor.

mvpromedia.eu

This new method of crop phenotype data analysis is also beneficial to consumers. With climate change, growing conditions have become more challenging, which naturally leads to an increasing cost of food. "We can help develop crops that are more tolerant of harsh environments and changeable weather - we can keep the cost low, which is really key for the globe," added Dudley. The crop breeding trials will start this summer, when Dudley's team will go out to customers and measure their field plots. MV

27


INTELLIGENT 3D IMAGING

Paul Wilson, CEO of Scorpion Vision, explains the advantages of intelligent 3D imaging with artificial neural networks in the fresh food packaging industry.

The lack of seasonal workers to pick and pack vegetables and fruit across Europe is acute according to an article in The Food Navigator.

For example, in France it is estimated that up to 800,000 people are needed during the harvest season. Automation systems for harvesting and post-harvest processing can help solve these problems, but it is still relatively early days in terms of the technology available to the horticultural industry. Automated harvesting of crops in the field is one big challenge that will not be solved overnight. One company that appears to be making good headway in this area is Israel-based FFRobotics, which has developed an apple harvesting solution that it claims works as fast as eight people. However, this system cannot be used to harvest other produce. Nevertheless, it's a good start, and perhaps the technology can be adapted for pears and other tree-hanging fruit over time.

Then there is the challenge of preparing the product for packing. This often requires trimming and cutting and is a very labour-intensive task. The most demanding aspect of automation in food production is the sheer diversity of what the produce might look like and how it is presented. If we consider the automated trimming of root vegetables - the top and tailing of leeks, for example - the procedure requires locating the correct cut points without losing too much of the stalk and without undercutting it, which would leave some of the roots in place. To ensure this is done accurately, it requires an 'intelligent' 3D vision system that can analyse the shape of the product to find the optimum cut point.
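As a rough illustration of what finding an optimum cut point from 3D shape data can look like, the hypothetical sketch below scans a depth map of a leek row by row and looks for the point where the narrow stem suddenly widens into the root end. This is not Scorpion Vision's algorithm; the function name, parameters and synthetic data are all assumptions.

import numpy as np

def find_cut_point(depth_map, background=0.0):
    """Hypothetical cut-point search on a depth map (rows run along the leek).

    For each row, measure the width of the foreground (non-background) region;
    the cut point is taken where the width profile steps up sharply, i.e. where
    the narrow stem meets the bushy root end.
    """
    foreground = depth_map > background
    widths = foreground.sum(axis=1).astype(float)               # pixels of product per row
    smooth = np.convolve(widths, np.ones(9) / 9, mode="same")   # suppress noise
    step = np.diff(smooth)                                      # row-to-row width change
    return int(np.argmax(step))                                 # row of the sharpest widening

# Synthetic example: a 200-row scan where the root flares out at row 150
depth = np.zeros((200, 100))
depth[:, 40:60] = 10.0        # stem, 20 px wide
depth[150:, 20:80] = 10.0     # root end, 60 px wide
print(find_cut_point(depth))  # a row index just below 150, where the profile starts to widen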

28

The cutting coordinates can then be sent to a robot or servos so that the final trimming can be completed accurately, with a repeatability - and at a speed - that only a machine can achieve. The benefits are not just more accurate trimming but also the ability to scale up production without increasing the reliance on humans. Such is the efficiency of the system that it is feasible to realise a return on investment within a few months of the initial capital outlay.

Scorpion Vision has provided the technology to enable mass sorting and processing of freshly harvested fruit and vegetables. Combining 3D imaging with artificial neural networks offers great advantages over classic machine vision: the system can look at and analyse each individual vegetable - be it leek, carrot, potato, swede or turnip - before making a decision on how to process it. This means humans are no longer a critical part of the process. Our solution is built around our established 3D camera technology, which is tailored to suit the specific application. It can consist of a stereo vision system using two cameras or optics to create two images, a high-speed 3D scanner or traditional laser triangulation. Typically, though, stereo vision is the technique used for post-harvest processing tasks.
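For readers unfamiliar with stereo vision, the generic OpenCV sketch below shows the route from a rectified image pair to a disparity and depth map. It stands in for the principle only - Scorpion Vision Software is a separate, proprietary system - and the file names, matcher parameters and calibration values are hypothetical.

import cv2
import numpy as np

# Hypothetical rectified stereo pair from the two cameras
left = cv2.imread("leek_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("leek_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters would be tuned for the projected NIR random pattern
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # OpenCV returns fixed-point values

# With a calibrated focal length f (pixels) and baseline B (mm), depth follows from triangulation
f_px, baseline_mm = 1400.0, 60.0      # hypothetical calibration values
depth_mm = np.zeros_like(disparity)
valid = disparity > 0
depth_mm[valid] = f_px * baseline_mm / disparity[valid]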

The post-harvest processing system is built around a high-performance embedded computer running Scorpion Vision Software, which controls the camera array and integrated light sources - usually random pattern projection in NIR plus both white and IR LEDs. Housed in an IP67-rated stainless steel enclosure, the system is classed as a food-safe camera that can be retrofitted to any production line.

The challenges that a machine vision system has to overcome are not just in the variability of the product but in its condition as well. Taking automated leek trimming as an example, the capital investment will only make sense to the farmer or food processor if the leeks are trimmed with high accuracy. Over-trimming the leek is highly undesirable as, over a period of several hours, it can amount to a significant amount of wasted crop. Undercutting it, so that some roots are left attached to the stem, is not accepted by some volume customers. If the vision system simply had to find the point at which the leek stem should end and the roots start, it would be a straightforward task of shape measurement, but leeks come with mud and debris attached. In many cases, leaves have been stripped from the stem and cover the root completely. So, using standard 2D machine vision will not achieve the desired level of accuracy. A more sophisticated inspection method is required, and this is where 3D stereo vision, enhanced with a neural network based algorithm, comes into its own.

Getting from the point when the camera is installed to the point at which it is performing at 99 per cent cutting accuracy can take several weeks. This consists of several phases of image acquisition in which thousands of images are collected, analysed and run through a Scorpion Vision Benchmarking tool that identifies the weaknesses. If a neural network is being used, this data is used to train the software on thousands of images. The same process can be applied to other vegetables, and Scorpion Vision - with its robotics partners - has built systems to trim swedes and sprouts as well. Accuracy is at the millimetre level, which, when you are trimming a leek, is critical to the machine's success. MV

29


ENHANCING FLAT PANEL DISPLAY INSPECTION WITH ONE CAMERA

Kane Luo, European Sales Manager at Hikrobot, explains how the company's 151MP camera has made an impact on FPD inspection.

To save labour costs and reduce the mistakes caused by human inspection, machine vision is widely used across industry, and flat panel display (FPD) inspection has long been one of its most important applications. Display technology has evolved rapidly in recent decades - from CRT to LCD, then to OLED and the upcoming flexible OLED - and that pace of change keeps raising the bar for quality inspection across the FPD industry. In this context, an industrial camera with higher resolution can shorten the work cycle and reduce the complexity of the machine vision system platform, cutting cost and increasing inspection efficiency.

151MP CXP AREA SCAN CAMERA

In response to this demand, Hikrobot has developed the MV-CH1510-10XM, which combines 151MP ultra-high resolution with the large data transmission bandwidth of the CoaXPress interface, enabling high-efficiency FPD inspection without missing a single defect.

The MV-CH1510-10XM is equipped with a Sony IMX411 back-illuminated RS CMOS sensor. It has a 3.76 μm pixel size, 14192 x 10640 resolution and a square pixel array, which allows every detail of the inspected object to be reproduced. Additionally, SLVS-EC (Scalable Low Voltage Signaling with Embedded Clock) technology, together with the CXP data interface, realises a frame rate of up to 6.2 fps.

30

Rich Details under Ultra-High Resolution

With its ultra-high resolution above all, plus a relatively high frame rate, the MV-CH1510-10XM distinguishes itself in many industrial inspection applications, such as LCD/OLED display inspection and PCB AOI inspection.

TFT-LCD 4K TV FPD INSPECTION

TFT-LCD 4K TV FPD inspection is a typical application where the 151MP camera shows its value. In terms of image handling, 4K TV FPD inspection based on a machine vision AOI system can be divided into the following steps:
• Capture image
• Extract screen area
• Eliminate moiré effect
• Image enhancement and defect detection

CAPTURE IMAGE

To apply an AOI system based on machine vision, the precondition is to meet the defect inspection algorithms' requirement, which is principally a requirement on the image resolution of the inspected object. To be more precise, every pixel on the TV screen should be mapped onto at least nine pixels in the captured image (a mapping ratio, MR, of 3) in order to ease the subsequent image handling algorithms. That is to say, a normal 55-inch 4K TV, which has a 3840 x 2160 resolution, requires an image of at least 11520 x 6480 (75MP) resolution.

It is possible to combine multiple high-resolution cameras - for example, four 29MP cameras each capturing part of the TV screen - or to use a single camera to take multiple pictures of every part of the screen and then stitch them together to reproduce the complete screen image. However, an additional stitching procedure is needed, along with supplementary operations to calibrate each captured image, correct distortions and unify their characteristics. In addition, a more complex mounting or motion control platform will be required, and the extra time needed might disrupt the production rhythm. In this context, using an AOI system based on a single 151MP MV-CH1510-10XM for FPD inspection of a 4K TV screen greatly simplifies both the inspection procedure and the system setup: the 151MP camera is able to capture the image of the whole TV screen in one shot.

In order to inspect every pixel of the screen, a possible solution is to have the TV display four pure colours - red, green, blue and white - in ageing mode, and use the camera to take a picture of each colour in turn by linking the trigger signal input to the automatic remote control's output.
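The arithmetic behind that resolution requirement is easy to check. The short Python calculation below assumes the quoted mapping ratio of three camera pixels per screen pixel in each direction (MR = 3).

# Sanity check of the camera-resolution requirement quoted above (MR = 3 in each direction)
screen_w, screen_h = 3840, 2160        # 4K TV panel
mr = 3                                 # each screen pixel maps onto mr x mr camera pixels
need_w, need_h = screen_w * mr, screen_h * mr
print(need_w, need_h, need_w * need_h / 1e6)   # 11520 6480 ~74.6 MP

cam_w, cam_h = 14192, 10640            # MV-CH1510-10XM sensor resolution
print(cam_w >= need_w and cam_h >= need_h)     # True: one shot covers the whole screen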

EXTRACT SCREEN AREA

Capturing an image of the whole screen with the MV-CH1510-10XM is straightforward, but when TVs are transported to the inspection position there may be some positional deviation due to mechanical vibration, so a static ROI usually cannot extract the complete screen for inspection. In this situation, a global threshold segmentation method is a better solution.
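A minimal sketch of that approach is shown below, using OpenCV's Otsu (global) threshold to locate the lit panel and derive the ROI. The file name, and the assumption that the screen is the largest bright region in the frame, are illustrative only.

import cv2
import numpy as np

# frame is a grayscale capture of the lit panel; the filename is hypothetical
frame = cv2.imread("panel_white_field.png", cv2.IMREAD_GRAYSCALE)

# A global (Otsu) threshold separates the bright screen from the darker surroundings,
# so the ROI follows the panel even if it has shifted slightly at the inspection position.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
screen = max(contours, key=cv2.contourArea)    # largest bright blob assumed to be the screen
x, y, w, h = cv2.boundingRect(screen)
roi = frame[y:y + h, x:x + w]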

ELIMINATE MOIRÉ EFFECT

A moiré effect is generated when two patterns with similar spatial frequencies are overlaid. Unfortunately, this is often the case when using a digital camera to photograph a screen, since the pixel array on the camera's sensor has a spatial frequency similar to that of the liquid crystal array on the screen. The moiré pattern will affect the subsequent defect detection, so it is essential to remove this disturbance. There are several ways to reduce the moiré effect; among them, applying a Fourier transform, a low-pass filter and an inverse Fourier transform is an effective and efficient method.
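A hedged sketch of that frequency-domain route is given below in Python/NumPy: forward FFT, a circular low-pass mask, then the inverse FFT. The keep_fraction cut-off is a tuning parameter chosen here for illustration, not a value from Hikrobot.

import numpy as np

def suppress_moire(gray, keep_fraction=0.12):
    """Attenuate high spatial frequencies in a grayscale screen ROI:
    FFT -> circular low-pass mask -> inverse FFT."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    rows, cols = gray.shape
    cy, cx = rows // 2, cols // 2
    radius = keep_fraction * min(rows, cols)          # cut-off radius is application-tuned
    yy, xx = np.ogrid[:rows, :cols]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return np.abs(filtered).clip(0, 255).astype(np.uint8)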

IMAGE ENHANCEMENT & DEFECT DETECTION

Image enhancement is a necessary step before the extraction of defects; it aims to increase the contrast of defects against the background. For example, an image enhancement algorithm based on DoG (Difference of Gaussians) is a useful way to achieve this. For the final step, the conventional approaches are generally based on local threshold segmentation, with various algorithms such as morphological processing used on top to further improve the recognition rate. More recently, deep learning has been introduced for defect inspection. Compared to the traditional methods, deep learning delivers much better performance, although the demands on the PC's hardware are higher as well.
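The sketch below shows one plausible way to combine DoG enhancement with a local (adaptive) threshold in OpenCV. The sigmas, block size and offset are illustrative assumptions rather than production values.

import cv2
import numpy as np

def dog_defect_candidates(gray, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians enhancement followed by a local adaptive threshold."""
    small = cv2.GaussianBlur(gray, (0, 0), sigma_small).astype(np.float32)
    large = cv2.GaussianBlur(gray, (0, 0), sigma_large).astype(np.float32)
    dog = cv2.absdiff(small, large)                            # band-pass: defects of either polarity stand out
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Local threshold: each pixel is compared with its neighbourhood mean
    candidates = cv2.adaptiveThreshold(dog, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, 31, -10)
    return candidates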

BENEFITS

DELICATE COOLING DESIGN

A TEC cooling system - combining smart thermoelectric cooling, an isolated air duct design and an advanced fan system - is integrated into the camera to stabilise the sensor's temperature at a set value. According to test results, without the cooling system the sensor's temperature would be up to 35°C higher than ambient. By using the TEC cooling system to control the temperature, image noise is reduced by a factor of 7.5. The carefully designed cooling system safeguards imaging quality, especially in long-exposure applications.

POWERFUL ISP FUNCTIONS

mvpromedia.eu

The MV-CH1510-10XM uses various ISP algorithms to improve imaging quality. It supports flat field correction, including FPN (fixed pattern noise) correction and PRNU (photo response non-uniformity) correction, as well as defective point correction, defective line correction and lens shading correction (LSC). These features effectively eliminate disturbances introduced by the camera itself during inspection. MV

31


SPONSORED

AN ILLUMINATING IDEA

Advanced Illumination showcases its solution to the lighting challenge in keg production inspection.

Concave, convex and cylindrical objects present a challenge for machine vision inspections, especially when constructed of a highly reflective or specular material such as aluminum or steel. When faced with inspections involving complex geometrical shapes, engineers have the difficult task of creating an appropriate and robust illumination solution that does not cause blooming or shadows, both of which result in inaccurate readings by the vision system.

THE CHALLENGE

Inspection of an aluminum beer keg is a challenge for machine vision. Advanced Illumination was tasked by a regional camera distributor to provide the solution for a two-part inspection of beer kegs - before and after refilling. In the first inspection, the vision system needed to check the top of the keg for roundness, rejecting any dents or damage that might indicate a potential weakness or leak in the keg. Although the concept behind the inspection is simple, the geometry creates difficulties when lighting is applied. The tubular-shaped uppermost rim reflects the most light, with considerable intensity fall-off as the surface curves away from the camera. Increasing the light intensity to adequately illuminate the portions further from the vision system results in excessive glare.

32

THE AI SOLUTION After testing several direct light sources, which resulted in significant blooming or uneven lighting, a diffuse dome light was tested. Unlike spotlights, linear arrays or ring lights, diffuse light is created by aiming light away from the object of the inspection into the reflective surface of a dome. Once it strikes the dome surface, light is scattered, losing its directional nature, thereby decreasing the potential for creating glare or harsh shadows. Initial diffuse light tests appeared promising, but the lights were too small to image the entire diameter of the keg in a single inspection. To compensate for the keg’s size, Ai designed and built a customised 18-inch square diffuse light. Populated with the new generation of high current LEDs, the diffuse light provided sufficient illumination to allow the vision system tools to determine the roundness of the rim. For this inspection routine, a 4mm focal length lens was used.

mvpromedia.eu



The second part of the inspection, performed after refilling, required the vision system to inspect the keg's ball valve for signs of leakage. The recessed valve comprises a black rubber seal and a stainless steel ball that, when functioning properly, create a tight seal that is broken only when the tap is inserted.

THE RESULT

Once again applying the diffuse dome, the vision system - this time with a 25mm focal length lens - was able to detect the presence of liquid. The result was partly due to the foaming that occurs when beer escapes, but also because the presence of liquid alters the reflectivity of the stainless steel, changing the average intensity response in the area of the valve. This application resulted in a still-popular dome light being produced in a revised form. The success of the diffuse dome encouraged Ai to expand its range of Full Bright Field Diffuse Lights, such as the FD Series and FX Series Flat Diffuse Lights, which provide space-saving illumination solutions for objects with high reflectivity.
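The intensity-response check described above can be sketched very simply: compare the mean grey level inside the valve region with a reference learned from dry kegs. The ROI coordinates, reference value and tolerance below are hypothetical.

import cv2
import numpy as np

# Hypothetical sketch of the check described above
image = cv2.imread("keg_valve.png", cv2.IMREAD_GRAYSCALE)
x, y, w, h = 420, 310, 160, 160        # valve ROI, fixed by the mechanical fixture
valve = image[y:y + h, x:x + w]

dry_reference = 148.0                  # mean intensity learned from known-dry kegs
tolerance = 25.0                       # allowed drift before flagging a leak
leaking = abs(float(valve.mean()) - dry_reference) > tolerance
print("Possible leak" if leaking else "Valve dry")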


Other diffuse lights available from Advanced Illumination include larger hemispherical domes, such as the DL080 Large Dome Light and DL180 Extra Large Dome Light, along with cylinder-shaped specialty diffuse lights for use with area scan cameras (DL067 Wide Linear Diffuse Lights) and line scan cameras (DL151 Narrow Linear Diffuse Lights). Web: www.advancedillumination.com/

MV

GOCATOR 2530 3D SMART SENSOR - BLUE LASER

HIGH SPEED 3D SCANNING AND INSPECTION FOR SHINY AND CHALLENGING SURFACES

Gocator® 2530 provides 3D blue laser profiling, measurement, and inspection at speeds up to 10 kHz. Easily integrate this all-in-one smart sensor for robust inline inspection of shiny materials (e.g., batteries, consumer electronics), low contrast targets (e.g., tire sidewall), and for general factory automation applications.

Discover Gocator 2530 - visit www.lmi3D.com/2530


INFLUENCER Q&A: THE GROWTH OF NEUROMORPHIC VISION

In the latest of our series of key insights from leading figures across the machine vision, automation and robotics sectors, serial entrepreneur and iniVation CEO Kynan Eng explains the development and benefits of neuromorphic event-based vision, his brush with Matt Damon and his collaboration with California artist Shana Mabari.

MVPro: iniVation are leaders in neuromorphic event-based vision. What is it? Kynan Eng: Neuromorphic vision aims to achieve quantum leaps in computer vision efficiency and speed, by emulating key aspects of the visual systems of humans and other animals. Our founders have been looking at this question as a research topic since the late 1980s, an effort which I joined in 2000. The study of the nervous systems of biological organisms, and efforts to replicate them in artificial systems, led to the explanation of a few key principles. Some of these principles – such as compute only when and where needed, always process in real-time, and exploit the physics of the “stuff” your system is made of – led to the creation of the neuromorphic technologies which are being adopted today. These principles apply to all types of natural computation, but especially to the visual system because it is so large (about one-third of human neocortex) and so relevant to behaving systems as we understand them.

MVPro: How is it being applied in the wider world and what are the sectors that are really benefiting? What are the opportunities for neuromorphic technology? KE: Very broadly speaking, I see the development of neuromorphic technologies in a series of decades. During the 1990s, the key principles were being discovered and replicated in the lab. In the 2000s, these

34

principles were developed into technologies that could actually be used in real-world systems. We sold our first camera in 2009. During the 2010s, the application space was explored. Interest grew slowly at first, then exponentially. I went full-time into iniVation in 2015 and over the past five years we have sold cameras to more than 300 organisations in sectors as diverse as:
• Automotive – autonomous vehicles, in-cabin observation
• Aerospace – navigation of all types of flying objects, component testing and analysis
• Industrial automation – high-speed, real-time inspection
• Robotics – navigation, rapid object detection
• Consumer electronics – mobile, Internet of Things, AR/VR

This decade we will see the first large-scale deployments of these neuromorphic technologies. If we make an analogy to the development of digital transistors (invented in 1947), we are just now around the year 1975. The possibilities are only starting to come into view, in a very blurry way. The future of neuromorphic technology will be full of surprises, and I believe it will be as fundamental to future society as digital von Neumann technology is to today's world.

MVPro: Where is neuromorphic vision making a difference? KE: We are working on a number of exciting real-world use cases in automation and consumer electronics and will make first announcements later this year.

mvpromedia.eu


MVPro: What are the challenges to iniVation in terms of developing the technology and from rival companies expanding into the market? KE: We and other companies in the neuromorphic vision space share a joint interest – to grow market acceptance of neuromorphic vision technology. The shared challenges include demonstrating real-world use cases, integration with existing investments in legacy technology and managing customer expectations (hype vs reality). Our open approach, in providing development kits to the widest possible number and variety of users aims to address these challenges for the good of both ourselves and of this new industry overall. There is now a flourishing and growing research community in neuromorphic vision, which we do our best to service and continue to grow. From the open research has come a number of potential use cases, taking advantage of different aspects of our technologies, e.g. speed, dynamic range, low power consumption. Our key challenge is to bring the most promising use cases to market, in the right order, while maintaining a core focus.

having key impacts. What is clear, however, is that we are at the beginning of a flowering of many commercial neuromorphic vision activities. These activities will grow, merge, sometimes die, and ultimately end up in the hands of a number of giants, which will generate enormous long-term value from these early efforts.

MVPro: iniVation is now in its fifth year. How do you assess its growth and achievements over the five years? KE: iniVation is an unusual startup. We started our incubation efforts in 2009, and only created iniVation in 2015 when we felt that we were on to something. We bootstrapped the company without any external funding, selling to over 300 customers all over the world, and working with top-10 players in automotive, aerospace, and consumer electronics. It was only in late 2019 that we closed our first partnership agreement with Samsung Catalyst. By doing this, we did not benefit from the faster team growth and exposure created by venture capitalist money. However, we have also avoided many of the traps, and I hope that our long-term goodwill with our customers and our internal fiscal responsibility will serve us well in the long run.

A natural result of opening up a promising field is the emergence of competition. Naturally, there are competing advances in conventional computer vision and other technologies, e.g. optical/photonic computing. Within our currently silicon-based field, a number of competitors have appeared and while we do compete directly with each other for attention, customers and funding, the reality is that we are all in the same boat. We must all deal with the giants of the silicon vision industry, e.g. Samsung, Sony and Omnivision, as these are the companies with the enormous resources required to take silicon to mass markets. To this end, we have concluded a technology and investment partnership with Samsung. No one knows how the neuromorphic vision competition will play out. Beyond the usual financial and business factors, political national-interest factors and even pandemics such as COVID-19 are already

mvpromedia.eu

35


MVPro: What are the plans for the next five years? KE: We plan to be the leading technology provider for high-performance neuromorphic vision systems. Some customers will build their own use cases on top of our hardware/software technology platform. Others will take our ready-to-go applications for key use cases and integrate them directly into their end-user products. We are looking forward to enabling an entire ecosystem of neuromorphic vision applications built on our core technologies.

MVPro: You have a passion for start-ups within the neuromorphic engineering field. How do you divide your time between the companies you are involved in? KE: iniVation is my 100 per cent job and is the only company where I am CEO. In the other companies, I have small supporting roles. I try to use my network to benefit each company where it makes the most sense and I try to apply lessons I learn in one company to the others, where appropriate. One of the other companies I am involved in is an incubator, of which iniVation is a "graduate", so clearly there is a large overlap in interests there.

MVPro: Away from work, in 2016, you found yourself in the spotlight for an amusing hypothesis on Quora of 'How much money has been spent attempting to bring Matt Damon back from distant places'. What was the motivation behind it and tell us more about the experience? KE: In 2015, I opened an account on Quora to teach myself about social media. The long-form question-and-answer format of Quora seemed to be a good place where I could share my knowledge about neuromorphic engineering, AR/VR, mechanical engineering, and other technical topics. After a while, I found myself answering increasingly silly 'what-if' questions that had nothing to do with my work. One of these was an estimation of the total hypothetical cost of "saving" Matt Damon in all of his movies, as he tends to need a lot of rescuing. Like any good engineer, I threw out an answer in a few minutes with numbers pulled out of the air (the total was about $900B). Several months later, the answer blew up worldwide on social media. Radio stations were calling me asking if I worked for NASA, as they were looking for a link to Matt Damon's film The Martian, which had been released earlier in 2015. Again, like any good engineer,


I hurriedly threw together a justification of my original numbers, to make it look as if I had calculated them carefully from the beginning. Finally, the meme reached the actor himself, who was asked about it on the Today US breakfast TV show. (www.today.com/video/mattdamon-and-ridley-scott-reveal-how-they-made-themartian-596699203508).

MVPro: Is Quora a vital tool in your work or somewhere to explore strange theories like the Matt Damon question? KE: Like any good engineer, I tried to replicate the results of my Matt Damon experiment with a few other questions. A few made it into the wider media, such as one question where I estimated the cost of constructing a Star Destroyer. A couple of other answers were also popular, such as one about how much it would cost to buy one of everything on Amazon, or how much it would cost to buy everyone in the world a Coke. With a work focus, my use of Quora was vital as a self-teaching tool for learning to write for online marketing purposes. At the time, LinkedIn did not quite have the dominant position in B2B marketing that it has today. I still don't really understand Twitter, let alone Instagram. However, I am intrigued by the power of Twitter, from an engineering and neuroscience viewpoint. Never before have so few bytes been able to influence the feelings and actions of so many.

MVPro: Finally, we discovered you are involved in a collaboration with the Los Angeles-based artist Shana Mabari. Tell us more? KE: I have known Shana for over 15 years. She is a long-time fan of neuromorphic engineering and is particularly interested in different concepts of light and space. Over the past few years, she collaborated with NASA on a number of sculptures and other works. Recently, I contributed a short chapter to her book of essays and images, entitled Space (https://griffithmoon.com/art/). MV

mvpromedia.eu


SPONSORED

INTO THE FUTURE OF MACHINE VISION

New vision technologies: what are they, and where are they going?

Machine vision is enormously important to industry, and new vision technologies are certain to cause a manufacturing revolution. Machine vision is a fascinating field with constant innovation. New technology will generate novel techniques to utilise advanced cameras, frame grabbers, lenses, lights, and lighting controllers. Embedded systems and Industry 4.0 will become commonplace, and AI techniques will dramatically expand the boundaries of what is possible. Industry worldwide has been severely impacted by the Covid-19 pandemic during 2020. As relaxed government restrictions allow industry to rebuild, there may be an enduring reluctance to employ large workforces. This will accelerate an existing trend towards automation and bring added flexibility to producers and processors.

HYPERSPECTRAL, 3D AND COMPUTATIONAL

Hyperspectral imaging, 3D techniques and computational imaging will soon become mainstream, driven by huge increases in processing power. New 3D techniques, such as laser line triangulation, stereo vision, time of flight and structured light, are already generating massive interest, particularly in robot-guided applications such as pick and place.

Processing power continues to increase, and low-power processors with on-board image processing will bring embedded vision technology to more applications, such as photometric stereo. Computational and multi-shot imaging, where a sequence of images is combined into a composite image, will increasingly be used to create surface topography, reduce glare, deliver ultra-resolution colour, and extend depth of field.

FILTERS, LIGHTS AND LIGHTING CONTROL

New sensors featuring inbuilt polarization filters have created a generation of polarization cameras, making it cheaper to identify surface defects, stress, and birefringence within transparent objects. Lights have seen recent innovation such as the new, thin and flexible OLED panels. Lighting controllers have become capable of driving very low or very high power in applications where backlighting is bright, and for high-frequency, high-power LED pulsing.
PROTOCOLS, INTERFACES AND AI The Precision Time Protocol (PTP) enables exact synchronization of Ethernet devices. PTP with GigE Vision 2.0 standard can be used to trigger components such as lights, lighting controllers, lenses, and shutters with microsecond precision. The recently-announced USB 4 interface promises even faster transfer speeds, better management of video and compatibility with Thunderbolt 3. There is no doubt that AI will be at the forefront of future developments. Deep learning and machine learning are already gaining traction, especially in applications such as the classification of foodstuffs. Massive advances in parallel processing and huge training data sets will make deep learning easier to implement and the recent emergence of ‘inference cameras’, where trained neural networks are implemented directly on a camera processor, may bring AI to mass applications. To read more about the future of machine vision, download the free Gardasoft paper at www.gardasoft.com/future-of-machine-vision.

MV

mvpromedia.eu

37


IS COAXPRESS 2.0 THE PERFECT INTERFACE FOR MULTI-CAMERA SYSTEMS? Donal Waide, director of sales at BitFlow, answers this crucial industry question by comparing CXP to USB3 and GigE.

Designers of high-throughput, multi-camera machine vision systems have grown dissatisfied with aging standards and have found a new champion, CoaXPress (CXP), a high-speed, point-to-point, serial communications interface that runs data over off-the-shelf 75Ω coaxial cables. The original CXP, introduced in 2008, supported a maximum data rate of 6.25 Gbps, approximately six times faster than GigE Vision and 40 percent faster than USB3. Version 2.0 of CXP has added two more speeds: 10 Gb/s (CXP-10) and 12.5 Gb/s (CXP-12). CXP 2.0 is ideal for supporting multiple high-resolution cameras without the complexity or cost of multiple cables and connectors, and it is a platform that is easily adapted and scaled to meet changing requirements. CXP also offers greater flexibility for system integrators who previously were handcuffed to a few metres of cable if using Camera Link or USB3. Below are CXP's maximum cable lengths:

COAXPRESS TRANSMISSION DISTANCES

Data Rate              Maximum Distance
1.25 Gbps (CXP-1)      105 meters
3.75 Gbps (CXP-3)      85 meters
6.25 Gbps (CXP-6)      35 meters
12.5 Gbps (CXP-12)     25 meters

Uncompressed data, power and low-speed uplink are all simultaneously distributed over a single coaxial cable, reducing complexity and potential points of failure. Coaxial has inherently excellent protection against EMC/radio frequency interference, minimizing the risks of costly downtime or latencies. Finally, CXP supports GenICam.

38

Widely adopted by industry partners, GenICam simplifies application development or upgrading components.

IS GIGE VISION THE ANSWER? Only GigE Vision can compete with CXP on cable length in multi-camera systems. Going head-to-head with CXP 2.0 is the new 10 Gigabit Ethernet (10 GigE Vision) interface. It provides a tenfold increase in data transmission speeds over its predecessor, GigE, and was specifically targeted for highspeed testing environments. Unfortunately, there are some serious drawbacks. For one, 10 GigE Vision is exceptionally power hungry. It requires up to seven watts for operation, not including the cameras’ power requirements. Power consumption is roughly twice that of other interfaces. Nor has 10 GigE shown itself to be efficient with handling heavy data loads. It leans on the PC’s CPU and internal memory bus for operation because processing and buffering cannot be offloaded to a frame grabber FPGA or to memory. PnP discovery and operation is also required under all circumstances, making for a complex system subject to bottlenecks. And while power over cable and real-time triggering are said

mvpromedia.eu


to be planned for its next version, 10 GigE does not offer them now. Ironically, high-data applications using 10 GigE actually need a frame grabber to offload the CPU and memory, thereby eliminating what was the principal benefit of the standard over CXP. Cost benefits of GigE Vision are further diminished because expensive, high-end server components serve as its backbone. All things considered, 10 GigE appears to be a step backward, rather than forward. Similarly, USB 3.1 Vision (SuperSpeed+) has fallen short of expectations. Limited to one to three meters with a passive cable, USB 3.1 Vision requires expensive active cables for each camera on a typical system.

CXP IN MULTIPLE CAMERA SYSTEMS Multi-camera systems have been a fixture in machine vision for decades. What is new is CXP. It allows multiple cameras to be linked by a single frame grabber over long, inexpensive and very robust coaxial cables with zero latency and exact synchronization. Various resolution cameras set at high or low frame rates can be linked to a single CXP frame grabber, each performing a different inspection task. Even CMOS and CCD sensors can be mixed in the configuration.

One of the arguments against CXP is that it requires a frame grabber, an expense that USB3 and GigE Vision dodge. Mistakenly, the impression is given that a multiple-camera system based on CXP is therefore more complex and expensive. Yet this ignores the fact that the load on the PC significantly increases with USB3 and GigE Vision. What savings are realised with USB3 are quickly negated by the cost of computing resources, while an expensive network card must be purchased for GigE Vision to operate properly. Here is another point: the precise synchronising of cameras required in a multi-camera configuration is a byproduct of a deterministic interface. Both CXP and Camera Link are inherently deterministic; GigE Vision and USB3 are not. Workarounds are possible, yet these invite unstable performance and latency when nodes are added or when bandwidth is shared due to packets being dropped. As Camera Link is not an option for high-speed multi-camera systems, this leaves CXP as the only practical choice. MV

BITFLOW CLAXON-CXP4

The Claxon-CXP4 supports one to four CXP-12 or CXP-10 cameras simultaneously, plus high-speed inputs and outputs to support an array of sensors and motion encoders. With four cables and four CXP-12 cameras, the maximum data transfer rate is five GByte/s.
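That five GByte/s figure is easy to reproduce if one assumes CoaXPress's 8b/10b line coding, so that 80 per cent of the line rate carries payload - an assumption here, though the arithmetic matches the quoted numbers.

# Quick check of the quoted 5 GByte/s figure for four CXP-12 links,
# assuming 8b/10b line coding (10 line bits per 8 payload bits).
line_rate_gbps = 12.5          # CXP-12 line rate per connection
coding_efficiency = 0.8        # 8b/10b assumption
links = 4

payload_per_link_gbytes = line_rate_gbps * coding_efficiency / 8
total_gbytes = payload_per_link_gbytes * links
print(payload_per_link_gbytes, total_gbytes)   # 1.25 GB/s per link, 5.0 GB/s for four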

BitFlow introduced the new Claxon-CXP4 in Q3 of 2019. It is a quad CoaXPress-12 frame grabber that accelerates video transmission speed to 12.5 Gb/s. The architecture of the Claxon-CXP4 is identical to the company's previous-generation Cyton CXP-6 frame grabber.

mvpromedia.eu

The Claxon-CXP4 accelerates the uplink to 41.6 Mbps, enabling a host to send triggers to a camera at rates of 600 kHz in single trigger mode or almost 300 kHz in dual trigger mode without requiring a dedicated high-speed uplink cable. Based on an x8 PCI Express bus (Gen 3), the Claxon-CXP4 provides 7,800 MB/s peak bus bandwidth and 6,700 MB/s sustained bus bandwidth for communication with the host PC, for faster video uploading and configuration. Performance is also improved by the frame grabber's use of Micro BNC connectors, which are more robust and run at higher speeds than traditional DIN 1.0/2.3 connectors. The software included with the Claxon-CXP4 is compatible with all other BitFlow frame grabbers. MV

39


EURESYS INCREASES COAXLINK CXP-12 FRAME GRABBER OPTIONS

Euresys has expanded its CXP-12 frame grabber range with the addition of the Coaxlink Mono CXP-12 and Duo CXP-12. These one- and two-connection CoaXPress 2.0 frame grabbers complement the four-connection Coaxlink Quad CXP-12, which is already available. The new compact frame grabbers pack a lot of power into a low-profile PCIe card design. As a single CXP-12 connection provides more bandwidth than Camera Link Full, they are a perfect upgrade for most applications. In addition, the Coaxlink CXP-12 range supports extra-long 40-meter cable runs using standard coaxial cables, and users can acquire images from the fastest and highest-resolution CoaXPress cameras.

COAXLINK DUO CXP-12 - TWO-CONNECTION COAXPRESS CXP-12 FRAME GRABBER

AT A GLANCE
• Two CoaXPress CXP-12 connections: 2,500 MB/s camera bandwidth
• PCIe 3.0 (Gen 3) x4 bus: 3,300 MB/s bus bandwidth
• Low-profile card, delivered with standard and low-profile brackets
• Fan-cooled heatsink
• Feature-rich set of 10 digital I/O lines
• Extensive camera control functions
• Memento Event Logging Tool

COAXLINK MONO CXP-12 - ONE-CONNECTION COAXPRESS CXP-12 FRAME GRABBER

AT A GLANCE
• One CoaXPress CXP-12 connection: 1,250 MB/s camera bandwidth
• PCIe 3.0 (Gen 3) x4 bus: 3,300 MB/s bus bandwidth
• Low-profile card, delivered with standard and low-profile brackets
• Passive (fanless) heatsink
• Feature-rich set of 10 digital I/O lines
• Extensive camera control functions
• Memento Event Logging Tool

COAXLINK QUAD CXP-12 - FOUR-CONNECTION COAXPRESS CXP-12 FRAME GRABBER

AT A GLANCE
• Four CoaXPress CXP-12 connections: 5,000 MB/s camera bandwidth
• PCIe 3.0 (Gen 3) x8 bus: 6,700 MB/s bus bandwidth
• Feature-rich set of 20 digital I/O lines
• Extensive camera control functions
• Memento Event Logging Tool

These frame grabbers can be used in a variety of applications and sectors, including the computer vision, machine vision, factory automation, medical imaging and video surveillance markets. MV

40

mvpromedia.eu


CLIMB HIGHER WITH 3D

Easy3DLaserLine - 3D laser line extraction and calibration library
AT A GLANCE
• Single and Dual Laser Line Extraction into a depth map
• Convenient and powerful 3D calibration for laser triangulation setups
• Compatible with the Coaxlink Quad 3D-LLE frame grabber

Easy3D - 3D image processing library
AT A GLANCE
• Point cloud processing and management
• Flexible ZMap generation
• 3D processing functions for cropping, decimating, fitting and aligning point clouds
• Compatible with many 3D sensors
• Interactive 3D display with the 3D Viewer

Easy3DObject - 3D object extraction and measurement library
AT A GLANCE
• Detection of 3D objects in point clouds or ZMaps
• Metric detection criteria
• Compatible with arbitrary regions
• Computation of precise 3D measurements, like size, orientation, area, volume…
• Automatic extraction of object local support plane
• 2D and 3D graphical display of the results
• Full-featured interactive demo application

www.euresys.com


What's inside X? Matrox Imaging Library X

Classification tools
A suite of image classification tools based on deep learning and tree ensemble let MIL X users leverage robust machine learning techniques to quickly and accurately analyze challenging images, whether working directly from the images themselves or features extracted from these images.

3D tools
MIL X offers a full collection of vision tools for performing 3D capture, display, processing, and analysis including metrology and registration. Whether point clouds, depth maps, or profiles, users have the option to work with the best format for a given application.

MIL CoPilot
The MIL CoPilot companion interactive environment lets MIL X users experiment, prototype, and generate functional program code to quickly get vision applications off the ground. New training and inference support assist users in deploying image analysis using deep learning.

All this and much more in the comprehensive toolkit trusted for more than 25 years.

It's what's inside that counts: MIL X
matrox.com/imaging/mil_x/mvpro


THE 'EYES' HAVE IT: ONROBOT LAUNCHES 2.5D VISION

Sometimes you just have to take a fresh look at things. OnRobot's 2.5D vision product, Eyes, is certainly giving a new perspective and attracting a lot of interest. Eyes is a development that is changing how robots pick, as its 2.5D vision adds depth perception and part recognition for all leading robotic arms. Flexibly mounted either on the robot wrist or externally, it is ideal for almost any unstructured application in need of vision guidance. It also delivers seamless integration, one-picture calibration, intuitive programming, and none of the complexity of existing vision systems.

Robotic arms are often tasked with picking items that are not presented in the same orientation, shape or size. To provide consistent positioning, manufacturers frequently add fixtures, bowl feeders and other hardware, adding cost and complexity to what end up being rigid applications that cannot easily pick different objects or achieve quick changeover times.

"A significant part of our customer base does not want to be tied to a fixed incoming position of a product they want to pick," says CEO of OnRobot, Enrico Krog Iversen. "They would love to eliminate complicated, bulky and expensive part feeders and fixtures to achieve this, but until now, vision systems have felt out of reach. Our new Eyes vision system changes all that."

As opposed to other vision systems on the market, Eyes needs to take only a single image for calibration and part recognition, and has automatic focus to work at different distances within the same application. Eyes is ideal for sorting a wide variety of objects or for CNC machine tending with metal parts that are defined by outer shape, as well as many other pick-and-place applications where orientation is important.

"2.5D is rapidly emerging as the perfect technology for vision-guided applications," says Iversen. "Compared to 2D it adds not only length and width but also height information for the specific part, which is ideal when objects may vary in height or if objects must be stacked."

Eyes can be easily mounted and integrates seamlessly with all leading collaborative and light industrial robot arms through OnRobot's One System Solution, a unified mechanical and communications interface based on the company's Quick Changer, now an integrated part of all OnRobot products. The new vision system directly interfaces with other OnRobot devices, making it very easy to use Eyes together with any of OnRobot's grippers. MV

mvpromedia.eu

43


SPONSORED

THE 3D IMPACT IN AUTOMOTIVE PRODUCTION

LMI Technologies shares a case study on how cobot-mounted Gocator 3D snapshot sensors perform automotive gap and flush measurement and inspection.

THE CLIENT Kibele-PIMS is an industrial imaging and robotic automation company with its main management office and R&D center in Izmir, Turkey, and a branch office in Istanbul. Founded in 2002, Kibele-PIMS produces dimensional control, surface control and sorting machines. Kibele-PIMS makes mechatronic applications using advanced software applications and various advanced imaging systems under factory conditions and offers a holistic automation solution with robotic applications.

THE APPLICATION This application involves inspecting unpainted, bare metal automotive car bodies (body-in-white) for correct door and panel gap and flushness measurements, in order to verify that critical assembly tolerances are met.
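As a rough illustration of what gap and flush mean in 3D data terms, the sketch below takes a single cross-section profile across a panel seam and derives both values. It is generic and synthetic - not Kibele-PIMS's or LMI's measurement tools - and the seam-depth threshold and profile data are assumptions.

import numpy as np

def gap_and_flush(x_mm, z_mm, seam_drop_mm=2.0):
    """Estimate gap and flush from one cross-section profile taken across a panel seam.

    Points that drop more than seam_drop_mm below the panel surfaces are treated as the
    seam; gap is its width, flush is the height offset between the two panels.
    """
    quarter = len(z_mm) // 4
    left_level = np.median(z_mm[:quarter])        # reference height, left panel
    right_level = np.median(z_mm[-quarter:])      # reference height, right panel
    in_seam = z_mm < min(left_level, right_level) - seam_drop_mm
    seam_x = x_mm[in_seam]
    gap = seam_x.max() - seam_x.min() if seam_x.size else 0.0
    flush = right_level - left_level
    return gap, flush

# Synthetic profile: two panels at 0.0 mm and 0.4 mm with a 4 mm wide, 5 mm deep seam
x = np.linspace(0, 60, 601)
z = np.where(x < 28, 0.0, 0.4)
z[(x >= 28) & (x <= 32)] = -5.0
print(gap_and_flush(x, z))   # roughly (4.0, 0.4)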

THE CHALLENGE Manual inspection of car bodies at this stage in the manufacturing process is very difficult because the car body consists of bare metal with sharp and shiny edges that require multiple inspections. These inspections demand custom filtering and feature measurement tools for completion in a very short cycle-time. In addition, in this particular plant, car bodies are manufactured using a mixed-model production methodology, which means every car that passes on the line is potentially a different model from the previous. The robot and 3D vision system must be able to communicate directly with the manufacturing controllers and key factory systems in order to adapt to inline production model changes, in real-time. The data reporting from this system must also be very detailed, providing insight to the operator at the retouch station.

THE SOLUTION Gocator 3D snapshot sensors provide high-precision 3D surface data with excellent performance on shiny edges and metal surfaces. Together with fast scan rates of up to 6Hz (using GPU acceleration) and onboard measurement processing, Gocator can maintain production throughput while delivering highly repeatable measurement results. The PC-based GoX Accelerator software package was used to share the processing load and complete all analyses in the required cycle time. Once set up, direct communication between the Gocator sensors and the workstation was easy to program and fast, via custom-made front-end HMI software that was also used for coordinating the Gocator sensors and the industrial cobots. The Gocator sensors were seamlessly integrated into the customer's proprietary code in order to solve all the detailed reporting and communication problems.

Why Factory Automation?
• Reliable, consistent, precise measurement in a challenging environment (unpainted, changing models, moving assembly line)
• Eliminates safety risks (operators interacting with sharp metal edges)
• Identifies rework before downstream processes attempt to mate panels that won't fit due to incorrect or missing assembly features

THE GOCATOR® ADVANTAGE
• High-precision and reliable measurements on shiny metal objects
• Easy integration with factory systems, software, and industrial robots
• Easy job setup, job switching, and measurement tool customisation
• Communication flexibility and the ability to connect to any well-known robot, which benefits the project because engineers are not limited strictly to using UR; they could also use KUKA, Omron, or other brand-name cobots

THE RESULT Shorter cycle times and an increased number of inspection points for more robust measurement and quality inspection.
TESTIMONIAL Erdal Basaraner, managing director of Kibele-PIMS, said: "Kibele-PIMS A.S. chose Gocator for this project not only because it's a perfect fit for the measurement requirements, but also because of its proven industrial reliability, high precision, and overall measurement stability. Our engineers were happy to program and integrate the Gocator sensors because of the easy integration and programming flexibility." MV

mvpromedia.eu

45


HOW THE UR10 COBOT IS SAVING LABOUR COSTS AT BWI

RARUK Automation has supplied a Universal Robots UR10 collaborative robot (cobot) to BWI Group's Luton plant, which specialises in the assembly of complete suspension modules for the global transportation market. The UR10 is currently employed tending a blow-moulding machine, where completed parts are extracted every 25 seconds and placed in containers ready for shipping, saving on the need - and cost - of having a human operative in attendance.

“We invited almost every UK supplier of robot technology to pitch at our facility. However, while this process was ongoing, I attended an automation workshop at JLR, where I happened to see a Universal Robots UR5 cobot in action. We hadn’t given much consideration to cobots until then, but everyone was really impressed, including me.” Consequently, BWI Group invested in a UR10 model with 10 kg payload to suit a number of applications the company had in mind. “First of all, we put the UR10 to work tending a crimping machine,” added Hathway. “The machine crimps outer canisters for air-suspension modules. To explain further, outside the flexible sleeve/ bag that traps the air, there is a cylinder to help prevent over-expansion and punctures. The machine crimps the cylinder on to the bag, with the UR10 moving parts between each stage of the process: pre-check, crimp and after-check. There is one part in each station at any one time to provide continuous flow.”

UR10 tends crimping machine

With origins that can be traced back to the early 1900s, BWI Group is a full service supplier of chassis, suspension and brake products. The company has partnerships with many blue-chip OEMs, including JLR, Porsche and Honda, and has a proven ability to manage tier-two and tier-three suppliers to ensure the just-in-time (JIT) delivery of modules. At Luton, some 200 people help ensure production throughput matches demand. Of course, one way to simplify the task of meeting productivity targets, is to invest in automation. “We did quite an extensive market study before acquiring our first robot,” explains Mike Hathway, industrial engineer at BWI Group.

46

Previously this entire operation was manual, with a BWI Group employee performing all tasks. Not long after the cobot was installed, an internal change of strategy saw the manufacture of these outer canisters relocated to another BWI plant in central Europe, so the UR10 was re-assigned to a different machine. “UR-series cobots are extremely flexible and can be adapted to a whole host of applications, so we had no trouble finding another role for it,” added Hathway.

mvpromedia.eu


"We moved the UR10 to a blow-moulding machine, where it extracts dust covers every 25 seconds and loads them neatly into a container ready for shipment to another plant. Previously, the parts would just exit the machine and land randomly on a conveyor, which, although a bit messy, was fine when we used the dust covers in-house.

UR10 tends blow-moulding machine to remove and load dust covers ready for shipment

"However, the dust covers are now sent to another facility for assembly, so shipping them in a tangled, random way would render them unusable. If we didn't deploy the UR10 we would need a full-time operative at the end of the machine to pack the parts neatly. This job would be particularly dull, comprising five seconds of work and 20 seconds of waiting - hardly an efficient or cost-effective use of labour.

"We've been nothing but impressed with our UR10. Aside from the flexibility and reliability, the collaborative capability means we don't need guarding and can create a far more compact cell - with due care relating to speeds, motion and grippers. Our factory is highly utilised and we don't have sufficient space to be adding large guarded robot cells. Guarding also adds to the cost of a project. We are now looking at introducing further cobots and have a particular interest in the larger UR16."

Another point Hathway highlighted is the support and expertise provided by RARUK Automation.


"The company helped in the early project stages, particularly with proof-of-concept, and they are always on hand if I have a question about programming," he concluded. As a final aside, BWI Group is currently in the process of creating a new business venture, Open Automation, which will semi-integrate UR cobots on a stand complete with PLC and program, ready to sell to other end users. This venture will tap into the full range of Universal Robots available from RARUK Automation. MV

without vignetting


YOUR NEXT CO-WORKER MAY BE A COBOT.

HOW WORRIED SHOULD YOU BE? Carrie Halle, vice president of marketing for US-based Rockford Systems, examines the risks and how to reduce them as more cobots enter the workplace.

In one of his many famous tweets, tech executive and billionaire entrepreneur Elon Musk called for the regulation of robots and Artificial Intelligence (AI), saying their potential, if left to develop unchecked, threatens human existence. Google, Facebook, Amazon, IBM and Microsoft joined in with their own dire forecasts and have jointly set up the consortium “Partnership on AI to Benefit People and Society” to prevent a robotic future that looks not unlike The Terminator movie series. National media heightened panic by broadcasting a video released by a cybersecurity firm in which a hacked industrial robot suddenly begins laughing in an evil, maniacal way and uses a screwdriver to repeatedly stab a tomato. The video demonstrated how major security flaws make robots dangerous, if not deadly.

Is all this just media hyperbole or are robots really that hazardous to our collective health? Are productivity-driven manufacturers unknowingly putting employees at risk by placing robots on the plant floor? What kind of safeguarding is required? Should robots be regulated, as Elon Musk believes?

“DUMB” MACHINES V COBOTS

Until now, the robots used in manufacturing have mostly been “dumb” robots; that is, room-sized, programmed machinery engineered to perform repetitive tasks that are dirty, dangerous or just plain dull. Typical applications would include welding, assembly, material handling and packaging. Although these machines are very large and certainly have enough power to cause injuries, instances of employees actually being injured by robots are relatively rare. In fact, over the past three decades, robots have accounted for only 33 workplace deaths and injuries in the United States, according to data from the Occupational Safety and Health Administration (OSHA).

So, you might ask, why the sudden uproar when there are already 1.6 million industrial robots in use worldwide? Most of the clamour behind calls for regulation stems from a new generation of “cobots” (collaborative robots) that are revolutionising the way people work. Unlike standard industrial robots, which generally work in cages, cobots have much more autonomy and freedom to move on their own, featuring near “human” capabilities and traits such as sensing, dexterity, memory and trainability. The trouble is, in order for cobots to work productively, they must escape from their cages and work side by side with people. This introduces the potential for far more injuries. In the past, most injuries or deaths happened when humans who were maintaining the robots made an error or violated the safety barriers, such as by entering a cage. Many safety experts fear that since the cage has been eliminated with cobots, employee injuries are certain to rise.



Since cobots work alongside people, their manufacturers have added basic safety protections in order to prevent accidents. For instance, some cobots feature sensors so that when a person is nearby, the cobot will slow down or stop whatever function it is performing. Others have a display screen that cues those who are nearby about what the cobot is focusing on and planning to do next. Are these an adequate substitute for proven safeguarding equipment? Only time will tell.
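As a rough illustration of that slow-down-or-stop behaviour (and not any particular manufacturer's implementation), a controller might scale the commanded speed against the measured distance to the nearest person. The thresholds below are illustrative assumptions only.

    # Hedged sketch of distance-based speed scaling, the idea behind
    # speed and separation monitoring. All thresholds are illustrative.

    STOP_DISTANCE_M = 0.5      # closer than this: protective stop
    SLOW_DISTANCE_M = 1.5      # closer than this: reduced speed
    FULL_SPEED = 1.0           # fraction of programmed speed

    def speed_override(distance_to_person_m: float) -> float:
        """Return a speed scaling factor based on proximity to a person."""
        if distance_to_person_m <= STOP_DISTANCE_M:
            return 0.0                                   # stop
        if distance_to_person_m <= SLOW_DISTANCE_M:
            # ramp linearly from 0 at the stop boundary to full speed
            span = SLOW_DISTANCE_M - STOP_DISTANCE_M
            return FULL_SPEED * (distance_to_person_m - STOP_DISTANCE_M) / span
        return FULL_SPEED

    for d in (0.3, 0.8, 1.2, 2.0):
        print(f"{d} m -> {speed_override(d):.2f} x programmed speed")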

There is another, more perilous problem with robots in general: robots are basically computers equipped with arms, legs or wheels. As such, robots are susceptible to being hacked. But unlike with a desktop computer, when a robot is hacked it has the ability to move around. For instance, a disgruntled ex-employee could hack into a robot and re-program it to harm people and destroy property. The more functionality, intelligence and power a robot has, the bigger its potential threat.

TYPES OF INJURIES

OSHA lists four types of accidents resulting from robot use in the Technical Manual “Industrial Robots and Robot System Safety” (Section IV: Chapter 4).

1. Impact or Collision Accidents. Unpredicted movements, component malfunctions, or unpredicted program changes related to the robot’s arm or peripheral equipment could result in contact accidents.

2. Crushing and Trapping Accidents. A worker’s limb or other body part can be trapped between a robot’s arm and other peripheral equipment, or the individual may be physically driven into and crushed by other peripheral equipment.

3. Mechanical Part Accidents. The breakdown of the robot’s drive components, tooling or end-effector, peripheral equipment, or its power source is a mechanical accident. The release of parts, failure of the gripper mechanism, or the failure of end-effector power tools (e.g., grinding wheels, buffing wheels, deburring tools, power screwdrivers, and nut runners) are a few types of mechanical failures.

4. Other Accidents. Other accidents can result from working with robots. Equipment that supplies robot power and control represents potential electrical and pressurized fluid hazards. Ruptured hydraulic lines could create dangerous high-pressure cutting streams or whipping hose hazards. Environmental accidents from arc flash, metal spatter, dust, electromagnetic or radio-frequency interference can also occur. In addition, equipment and power cables on the floor present tripping hazards.


ROBOT SAFETY REGULATIONS

Robots in the workplace are generally associated with machine tools or process equipment. Robots are machines, and as such must be safeguarded in ways similar to those presented for any hazardous remotely controlled machine, falling under the OSHA General Duty Clause, Section 5(a)(1), which requires employers to provide a safe and healthful workplace free from recognized hazards likely to cause death or serious physical harm. Also applicable are OSHA 1910.212(a)(1) “Types of Guarding” and 1910.212(a)(3)(ii): “The point of operation of machines whose operation exposes an employee to injury shall be guarded.”



There are also new requirements in ANSI/RIA R15.06-2012 for collaborative robots, addressed in this case by ISO 10218 and the ISO/TS 15066 technical specification. ISO/TS 15066 clarifies the four types of collaboration: Safety Monitored Stop, Hand Guiding, Speed & Separation Monitoring and Power & Force Limiting. It holds key information including guidance on maximum allowable speeds and minimum protective distances, along with a formula for establishing the protective separation distance, and data to verify threshold limit values for power and force limiting to prevent pain or discomfort on the part of the operator (a simplified sketch of this calculation appears below).

Various techniques are available to prevent employee exposure to the hazards that can be imposed by robots. The most common technique is the installation of perimeter guarding with interlocked gates. A critical parameter is the manner in which the interlocks function. Of major concern is whether the computer program, the control circuit or the primary power circuit is interrupted when an interlock is activated. The various industry standards should be investigated for guidance; however, it is generally accepted that the primary motor power to the robot should be interrupted by the interlock.

In general, OSHA’s view on robot safety is that if the employer is meeting the requirements of ANSI/RIA R15.06, Industrial Robots and Robot Systems – Safety Requirements, then the manufacturer has no issues. For guidance on how to select and integrate safeguarding into robot systems, refer to the Robotic Industries Association’s Technical Report RIA TR R15.06-2014 for Industrial Robots and Robot Systems – Safety Requirements and Safeguarding.

Published by the American National Standards Institute (ANSI) and the Robotic Industries Association (RIA), ANSI/RIA R15.06 is a consensus standard that provides guidance on the proper use of the safety features embedded into robots, as well as how to safely integrate robots into factories and work areas. The latest revision of the standard, ANSI/RIA R15.06-2012, references ISO 10218-1 and -2 for the first time, bringing it into line with international standards already in place in Europe. Part 1 of ISO 10218 details the robot itself; Part 2 addresses the responsibilities of the integrator.
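The protective separation distance idea referenced above can be illustrated with a simplified calculation that combines operator movement, robot reaction and stopping behaviour, an intrusion allowance and measurement uncertainty. The structure and every number below are illustrative assumptions; the actual formula and the values it requires must be taken from ISO/TS 15066 itself.

    # Simplified sketch of the protective separation distance concept in
    # ISO/TS 15066 (speed and separation monitoring). All values are
    # illustrative assumptions, not figures taken from the standard.

    def protective_separation(
        v_human=1.6,        # assumed operator walking speed, m/s
        v_robot=0.5,        # robot speed toward the operator, m/s
        t_reaction=0.1,     # system reaction time, s
        t_stop=0.3,         # robot stopping time, s
        s_stop=0.15,        # robot stopping distance, m
        c_intrusion=0.2,    # intrusion allowance (reach of a body part), m
        z_uncertainty=0.1,  # combined sensor/robot position uncertainty, m
    ):
        s_human = v_human * (t_reaction + t_stop)   # operator movement while system reacts and stops
        s_robot = v_robot * t_reaction              # robot movement before it starts to stop
        return s_human + s_robot + s_stop + c_intrusion + z_uncertainty

    print(f"Minimum protective separation ~ {protective_separation():.2f} m")

With these example numbers the cell would need roughly 1.1 m of clearance; in practice the integrator must substitute measured stopping performance and the operator speeds required by the specification.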


The requirement for risk assessments is one of the biggest changes in the new RIA standard. The integrator, or the end-user if they are performing the job of an integrator, must now conduct a risk assessment of each robotic system and summarize ways to mitigate those risks. This may involve procedures and training, incorporating required machine safeguarding, and basic safety management. Risk assessments calculate the potential severity of an injury, the operator’s exposure to the hazard, and the difficulty of avoiding the hazard to arrive at a specific risk level ranging from negligible to very high (a simple scoring sketch follows below).

In the future, as cobot use rapidly expands throughout industry, regulation of this technology will grow more focused and specific. Consider this: although cobots currently represent only three percent of all industrial robots sold, they are projected to account for 34 percent of industrial robots sold by 2025, a market that itself is set to triple in size and dollar volume over that period.
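The severity, exposure and avoidability weighting described above can be sketched as a simple scoring scheme. The scales and band boundaries here are hypothetical and are not taken from ANSI/RIA R15.06 or any other standard; a real assessment follows the method the standard prescribes.

    # Hedged sketch of a risk-assessment scoring scheme combining severity,
    # exposure and avoidability into a risk band. Scales and thresholds are
    # illustrative assumptions only.

    def risk_level(severity: int, exposure: int, avoidability: int) -> str:
        """Each factor is scored 1 (low) to 5 (high); a higher avoidability
        score means the hazard is harder to avoid."""
        score = severity * exposure * avoidability        # 1..125
        if score <= 8:
            return "negligible"
        if score <= 27:
            return "low"
        if score <= 64:
            return "medium"
        if score <= 100:
            return "high"
        return "very high"

    # Example: serious injury possible, frequent exposure, hard to avoid
    print(risk_level(severity=4, exposure=5, avoidability=4))   # -> "high"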

CONCLUSION

The next 10 years will be pivotal for manufacturing, and success largely depends on companies’ ability to navigate the transition from traditional manufacturing to Industry 4.0-style automation and the widespread use of robots. While few people have as dire a view as Elon Musk on the subject, it is critical that employee safety is not lost in the excitement as we shepherd robots out of their cages to work hand-in-hand with humans. MV


Six Essential Considerations for Machine Vision Lighting

5. Make Setup Easy

A well-designed lighting controller brings significant benefits to machine vision systems. Isolated trigger inputs make connection to signal sources easy and a front panel provides quick configuration. A quality controller has minimal delay between trigger signal and light pulse and should provide full Ethernet compatibility with access to live performance metrics. Gardasoft Vision has used its specialist knowledge to help Machine Builders achieve innovative solutions for over 20 years.

To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations

Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection

Telephone: +44 (0) 1954 234970 | +1 603 657 9026
Email: vision@gardasoft.com
www.gardasoft.com


FILTERS: A NECESSITY, NOT AN ACCESSORY.

INNOVATIVE FILTER DESIGNS FOR INDUSTRIAL IMAGING

MIDOPT.COM

