5 - 7 June 2018
Messe Stuttgart, Germany

Preliminary Conference Programme



Day 1: Tuesday 5 June

Keynote Presentations
09:00 - 12:30

09:00

Automated road friction estimation using a car-sensor suite: a machine learning approach

Mats Jonasson
Technical expert
Volvo Cars
SWEDEN
Automotive active safety systems can benefit significantly from real-time road friction estimates (RFE) to adapt driving style to the road conditions. This work focuses on using the Volvo car-sensor suite for road friction estimation and prediction with machine learning algorithms. First, the most significant sensors for RFE are identified by feature ranking; these include ambient temperature, GPS location, vehicle speed, forces, road surface and tyre type. Next, image processing modules are invoked to detect the drivable surface condition. Finally, information from processed segments of road images is fused with vehicle dynamics sensors to initiate RFE-related warnings.
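
To make the feature-ranking step concrete, here is a minimal sketch in Python; the feature names, the synthetic data and the random-forest importance measure are illustrative assumptions, not the pipeline Volvo describes.

```python
# Minimal sketch of sensor feature ranking for road friction estimation (RFE).
# Feature names, synthetic data and the random-forest importance measure are
# illustrative assumptions, not the production pipeline described in the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["ambient_temp", "vehicle_speed", "wheel_force", "road_surface_class", "tyre_type"]

# Synthetic training set: 1,000 samples, binary label 1 = low-friction road.
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] < -0.5).astype(int)   # pretend low ambient temperature drives low friction

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the sensor signals by their contribution to the friction classification.
for name, score in sorted(zip(features, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:>20s}  importance = {score:.3f}")
```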

09:30

Artificial intelligence in the driver’s seat

Serkan Arslan
Director of automotive
NVIDIA EMEA
GERMANY
Artificial intelligence is transforming every industry, but perhaps none more than transportation. From how we interact with vehicles, to how they drive us, to how our cities’ infrastructure will automatically adjust to reduce traffic congestion, deep learning will play a vital role. NVIDIA is building AI platforms for autonomous vehicles as well as smart cities. The talk will provide insight into, and showcase demonstrations of, the current state of autonomous vehicles in development and in production, and discuss the steps to fully autonomous Level 5 robo-taxis.

10:00

Cognitive IT with storage and software-defined solutions for ADAS

Frank Kraemer
Systems architect
IBM
GERMANY
Advanced driver assistance systems and autonomous driving (ADAS/AD) are becoming part of all vehicles, and all major automotive OEMs and Tier 1s are implementing and testing AD capabilities. We examine how real-time sensors, big data computing, data storage and data archiving are integrated in today's ADAS/AD systems, providing a case study and best practices for workflow design, testing and development, and data storage and archiving that are applicable to all industries. Come hear a fascinating investigation of an industry and technology that will soon affect us all, every day.

10:30 - 11:15

Break

11:15

Accelerating connected and autonomous vehicles through open-source software

Dan Cauchy
Executive director, Automotive Grade Linux
The Linux Foundation
USA
The race to roll out new technology features and autonomous vehicles continues to heat up. To compete at the speed of a tech company, many auto makers have shifted from traditional development processes to agile, rapid development through open-source software. Dan Cauchy will provide an overview of AGL, key milestones and the project roadmap. He will also discuss AGL's vision for functional safety as well as for an open-source platform for autonomous driving that will help accelerate the development of self-driving technology while creating a sustainable ecosystem that can maintain it as it evolves over time.

11:45

Contract-based design of automotive software systems

Fabio Urciuoli
Business development manager
Siemens PLM Software
GERMANY
Organisations have long struggled to break down large, complex embedded applications into manageable, isolated components and to boost cross-functional collaboration; automotive OEMs, suppliers and the system designers involved clearly face several such challenges. We believe that architecture-driven development combined with contract-based design helps software teams stay synchronised from start to finish and manage integration activities efficiently from the outset. The proposed workflow delivers industrial value through early design-error detection, predictable system integration, reduced release time and flexible adoption at organisation level.
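
As a rough illustration of the contract idea only (not the Siemens PLM workflow or tooling), each component can declare the assumptions it makes on its inputs and the guarantees it gives on its outputs, so that integration errors surface at the interface rather than deep inside the system:

```python
# Toy illustration of contract-based design: a component states its input
# assumptions and output guarantees, and violations are caught at integration.
# The component, its signal ranges and the decorator are hypothetical examples.
def contract(assumes, guarantees):
    def wrap(func):
        def checked(*args, **kwargs):
            assert assumes(*args, **kwargs), f"{func.__name__}: input contract violated"
            result = func(*args, **kwargs)
            assert guarantees(result), f"{func.__name__}: output contract violated"
            return result
        return checked
    return wrap

@contract(assumes=lambda speed_kmh: 0.0 <= speed_kmh <= 250.0,
          guarantees=lambda torque_nm: 0.0 <= torque_nm <= 500.0)
def brake_torque_request(speed_kmh: float) -> float:
    """Hypothetical component: map vehicle speed to a brake torque demand."""
    return min(500.0, speed_kmh * 2.0)

print(brake_torque_request(80.0))    # 160.0, within both contracts
# brake_torque_request(300.0)        # would fail the input contract at integration time
```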

12:15

AUTOSAR adaptive platform for intelligent vehicles

Dr Thomas Scharnhorst
Spokesperson
AUTOSAR Development Partnership
GERMANY
AUTOSAR Classic was released more than 10 years ago, with the first release intended entirely for the embedded architectures of classical ECUs. AUTOSAR has now developed a completely new approach – the AUTOSAR Adaptive Platform – to cope with the challenging environment of internet access in cars and to make vehicles intelligent and adaptive. It aims to support dynamic deployment of customer applications by providing an environment for applications that require high-end computing power, connecting deeply embedded and non-AUTOSAR systems smoothly while preserving features, such as safety, that originate in deeply embedded systems.

12:45 - 14:15

Lunch

Afternoon Session
14:15 - 18:00

14:15

Virtual validation techniques for CNN-based ADAS/AD systems

Vignesh Radhakrishnan
Senior ADAS/AD systems engineer
AVL
UK
Virtual validation will become an important aspect of ADAS/AD development. However, there is an urgent need to define techniques for validating AI-based ADAS/AD systems: due to the stochastic nature of AI systems, the challenge is to ensure that they behave according to their defined requirements. AVL is leading a research project – SAVVY (Smart ADAS Verification & Validation Methodology), funded by Innovate UK – which addresses the challenge of defining a process to validate CNN-based ADAS/AD systems.

14:45

Challenges of neural networks for vehicle software

Joshua Davis
Software engineer
Horiba MIRA
UK
In recent years, artificial neural networks have been shown to be very effective at learning high-level semantic information from unstructured data such as audio, images and video, modelling complex non-linear relationships between input and output signals and allowing us to make sense of the world around us. This paper will give a brief introduction to neural networks, discuss some of the big challenges – such as training and validation – that we have seen in putting them to use in autonomous vehicle applications, and review some of the methods being used to overcome these challenges.

15:15

Complex deep learning software stacks – revealing their inner secrets

Illya Rudkin
Principal software engineer/safety-critical software development lead
Codeplay Ltd
UK
The integration of complex software with hardware while meeting tight development constraints is a challenge for all automotive companies, and for functional safety engineers the scope of concerns raised by an expanding code base supporting diverse hardware is immense. Khronos is evolving open-standard APIs such as OpenCL and OpenVX into ISO 26262-compatible software/hardware enablers. Using TensorFlow as a use case, Codeplay will show how a holistic toolchain built on open standards can manage complex software stacks with SYCL and OpenCL, helping developers and functional safety engineers to manage issues such as power usage and concurrency timing while mitigating safety concerns.

15:45 - 16:30

Break

16:30

Preparing for the inevitable data growth challenges of ADAS

Larry Vivolo
Senior business development manager, automotive and electronic design automation
Dell EMC
USA
The growing number of high-resolution sensors in cars is creating new data management challenges for ADAS simulation and development. A single sensor for SAE Level 3 autonomy can consume more than 4 PB of storage to simulate 200,000 driven kilometres; SAE Level 5 may require 240,000,000 km. Legal obligations may require sensor data to be archived for decades, with only days allowed for recovery. Machine learning will set new performance requirements, all while IT budgets shrink. During this session we will review how distributed file systems help resolve these conflicting requirements, and how to architect your data centre to meet future regulations and requirements for capacity, performance, collaboration and growth.
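
A back-of-the-envelope extrapolation from the figures quoted above gives a feel for the scale; linear scaling of storage with driven distance is an assumption made purely for the calculation.

```python
# Back-of-the-envelope extrapolation from the figures quoted in the abstract.
# Linear scaling of storage with driven distance (per sensor) is an assumption.
PB_PER_200K_KM = 4.0            # >4 PB per sensor to simulate 200,000 km (Level 3)
LEVEL5_KM = 240_000_000         # driven kilometres that SAE Level 5 may require

pb_per_km = PB_PER_200K_KM / 200_000
level5_pb = pb_per_km * LEVEL5_KM

print(f"{pb_per_km * 1_000_000:.0f} GB of sensor data per driven km")            # ~20 GB/km
print(f"{level5_pb:,.0f} PB (~{level5_pb / 1000:.1f} EB) per sensor for Level 5")
```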

17:00

Safety-reinforced AI driver development

Dr Edward Schwalb
Lead scientist
MSC Software
USA
Safety must be a major focus of automated vehicle (AV) development. Today, humans drive roughly 100 million miles between fatal crashes. Consequently, proving that a specific revision of an AI driver is safe is not practical using road and track tests; neither the investigations of Tesla's crash nor Waymo's driving records have yielded conclusive results. State-of-the-art machine learning metrics, e.g. the F1 score, are inadequate for measuring failure rates of a few in a billion; even a 99.999% F1 score is not meaningful at that level. We describe an approach for safety-reinforced training, and for analysing perception and decision components using simulation.
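
A quick calculation makes the point concrete; the 95% 'rule of three' bound, the 30 mph average speed and the 10 decisions-per-second rate below are illustrative assumptions, not figures from the talk.

```python
# Why road testing and F1-style metrics struggle at these failure rates.
# The 95% 'rule of three' bound, the 30 mph average speed and the
# 10 decisions-per-second rate are illustrative assumptions.
HUMAN_MILES_PER_FATAL = 100e6        # ~100 million miles between fatal crashes

# Rule of three: with zero failures observed over n miles, the 95% upper
# confidence bound on the failure rate is roughly 3 / n.
miles_needed = 3 * HUMAN_MILES_PER_FATAL
print(f"Failure-free miles needed to claim human-level safety at 95%: {miles_needed:,.0f}")

# Translate the same target into a per-decision error budget.
avg_speed_mph, decisions_per_s = 30, 10
decisions = (HUMAN_MILES_PER_FATAL / avg_speed_mph) * 3600 * decisions_per_s
print(f"Allowed error rate: roughly 1 in {decisions:,.0f} decisions "
      f"(vs 1 in 100,000 implied by a 99.999% score)")
```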

17:30

Using compilers for safety-critical systems

Dr Marcel Beemster
CTO
Solid Sands BV
NETHERLANDS
Compilers are 'just' tools in, for example, the ISO 26262 functional safety standard for the automotive industry, and developers prefer on-target application testing over compiler qualification. However, this does not take into account the complexity of a compiler and the artefacts it introduces into the generated code. Without detailed knowledge of the compilation process from source code to machine code, it is incorrect to assume that high code and branch coverage at the application source level translates into similarly high coverage at the machine code level.

Day 2: Wednesday 6 June

Morning Session
09:00 - 12:45

09:00

Deep learning on Hadoop

Dr Tobias Abthoff
Member of the executive board
NorCom Information Technology AG
GERMANY
The software stack of autonomous vehicles increasingly consists of deep learning networks complementing traditional software. The development and particularly the verification of such networks requires completely new paradigms. We will present a new way of efficiently training and verifying deep neural networks by combining deep learning with state-of-the-art big data technology. The approach targets globally distributed teams in particular, and also distributed non-movable test data. The general architecture will be presented and then an actual use case in the field of image understanding will be shown in detail.

09:30

Making cameras self-aware for autonomous driving

Dr Florian Baumann
Technical director
Adasens Automotive GmbH
GERMANY
Two fundamental algorithms are proposed to make cameras self-aware of their own status: online targetless calibration based on optical flow, and blockage detection based on image quality metrics (e.g. sharpness and saturation). The online calibration builds on vanishing-point theory; the soiling/blockage detection extracts image quality metrics and identifies discriminative feature vectors with a support vector machine. The presentation will include videos and real-world examples of the algorithms running in real time.
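
A minimal sketch of the blockage-detection idea is shown below; the specific quality metrics, the placeholder training data and the SVM settings are assumptions for illustration, not the Adasens implementation.

```python
# Minimal sketch of camera blockage/soiling detection from image quality metrics.
# The metrics, placeholder training data and SVM settings are illustrative only.
import cv2
import numpy as np
from sklearn.svm import SVC

def quality_features(image_bgr):
    """Small feature vector: sharpness (variance of the Laplacian) and mean saturation."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    saturation = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)[:, :, 1].mean()
    return [sharpness, saturation]

# Placeholder training data: random frames with random labels (1 = blocked/soiled lens).
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(200, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=200)

X = np.array([quality_features(f) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)

print("blocked" if clf.predict([quality_features(frames[0])])[0] else "clear")
```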

10:00

The amygdala of the self-driving car

Raul Bravo
CEO
Dibotics
FRANCE
When using the human brain as a model for intelligence, the amygdala allows ultra-fast, deterministic and effortless (low-power) reaction, while the neocortex allows complex thinking (after learning and at significant energy cost). Nobel Prize winner Daniel Kahneman calls these two processes fast thinking (System 1) and slow thinking (System 2). Right now, most of the attention in autonomous car development is on System 2: AI/machine learning is an excellent parallel to the neocortex of the human brain. We have developed the artificial amygdala and will give a live demonstration of how it works.

10:30 - 11:15

Break

11:15

Autonomous fleet management — challenges and opportunities

Pejvan Beigui
CTO
EasyMile
FRANCE
Fleet management is a key component of the coming revolution in autonomous vehicle-based mobility as a service. At EasyMile, we are at the centre of this revolution, having designed both the EZ-10 autonomous shuttle and the software stack required for autonomous driving, and having helped transport operators around the world operate fleets of EZ-10 shuttles. Our fleet management solution has been architected for high availability, fault tolerance and high scalability, while maintaining other key properties such as cybersecurity and agility. In this talk, we will discuss the challenges we have faced building this system and present some of our key results.

11:45

Object detection and classification based on virtually trained neural networks

Ronnie Dessort
Simulation consultant
TESIS DYNAware GmbH
GERMANY
In autonomous vehicles, object detection and the fusion of sensor information enable the vehicle to perceive its environment and decide on driving manoeuvres. Virtual development of such systems allows a large number of traffic scenarios to be defined conveniently and reproduced exactly. In this contribution, a publicly available state-of-the-art deep learning algorithm is trained and tested in a virtual 3D world, with special focus on varying harsh environmental conditions such as rain or soiled traffic signs. The results show that the presented method is a useful complement to conventional development.

12:15

Operating and optimising autonomous vehicle fleets – distributed cloud platform

Zhao Lu
CTO
BestMile
SWITZERLAND
The benefits of autonomous vehicles can only be realised when they are integrated into a coherent and coordinated mobility system. The challenge lies in how mobility providers can offer services with autonomous vehicles when they can no longer rely on drivers and current software is not sufficient. There is a clear need for a platform that creates coordinated, efficient, flexible and sustainable mobility services in which autonomous vehicles are operated and optimised as a fleet, meeting real-time demand or adhering to a schedule while adapting to network disruptions. The paper will detail the technical specifics of such a platform.

12:45 - 14:15

Lunch

Afternoon Session
14:15 - 18:00

14:15

Producing systems that enable the innovation that autonomous vehicles will require

Agustin Benito Bethencourt
Principal consultant - FOSS
Codethink Ltd
SPAIN
In order for autonomous vehicles to react to new situations and demands, software and data will need to be updated regularly, which requires the way software systems are produced today to be turned upside down. Drawing on open-source best practices, agile principles and a background in other industries, Agustin will go over some of the key changes that auto makers and Tier 1s will need to embrace in the near future to enable all that innovation in vehicles in a sustainable way. He will focus on delivery and maintenance processes and practices.

14:45

Autonomous vehicles: providing software features quickly by model-based system design

Sébastien Christiaens
Department manager
FEV Europe GmbH
GERMANY
Autonomous vehicles are complex systems, and existing processes for component-orientated development are reaching their limits; new approaches are required to provide safe and affordable solutions. System modelling approaches offer the opportunity to smooth the steps from customer requirements to software development and to enable reuse and front-loading, leading to a considerable reduction in the effort and time needed for integration and testing. This presentation shows how a systematic approach to system requirements definition can be applied in practice to the development of autonomous driving functions. The different modelling layers will be explained, and the benefits of the approach will be discussed and illustrated through practical examples.

15:15

Building artificial brains that learn driving better and more quickly than humans

Karim Mansour
Vice president and co-founder
Sigra Technologies GmbH
GERMANY
This talk will concentrate on how artificial intelligence can help build sophisticated brains that can drive in complex scenarios. It will look at how these brains can react much more quickly than human brains, and how they can detect future hazards from simple data that human beings may simply ignore or cannot detect at all.

15:45 - 16:30

Break

16:30

Running a functional and open HAD software architecture on an Adaptive AUTOSAR infrastructure

Rudolf Grave
Head of product systems architecture
Elektrobit Automotive GmbH
GERMANY
This talk will explain how a functional software architecture with open interfaces and software modules can be integrated on a high-performance microcontroller using Adaptive AUTOSAR middleware. In addition to the functional challenges, the handling of automotive safety integrity levels will be shown. The benefit of an open software framework for automated driving combined with a dependable operating environment is reduced time to market, thanks to fast integration and early testing at system level.

17:00

Updating autonomous vehicle software remotely – even over non-secure channels

Alberto Troia
Memory system architect
Micron Technology Inc
GERMANY
For autonomous driving to become mainstream, passengers must trust the autonomous vehicle enough to give up driving control. As part of that trust, it will also be essential for OEMs to implement an infrastructure that securely supports software updates over remote, non-secure channels as secure over-the-air (SOTA) software updates become mainstream. In this paper, we will present a methodology based on remote diagnostic software technologies to establish the authenticity of the vehicle and its software updates, preventing cyberattacks while software is being downloaded to or uploaded from the vehicle.
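
As a rough sketch of the verification step only (not Micron's methodology), an update image can be checked against an authentication tag before installation. The HMAC-with-shared-secret scheme below is chosen purely to keep the example within the Python standard library; a production SOTA system would typically rely on asymmetric signatures and hardware-backed keys.

```python
# Sketch of verifying an over-the-air update received over an untrusted channel.
# The shared device secret and HMAC-SHA256 are illustrative stand-ins; real SOTA
# schemes typically use asymmetric signatures and hardware-protected key storage.
import hashlib
import hmac

DEVICE_SECRET = b"provisioned-at-manufacturing"      # hypothetical per-vehicle key

def sign_update(firmware: bytes) -> bytes:
    """Backend side: authenticate the firmware image before publishing it."""
    return hmac.new(DEVICE_SECRET, firmware, hashlib.sha256).digest()

def verify_and_install(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: install only if the authentication tag matches."""
    expected = hmac.new(DEVICE_SECRET, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False             # reject a tampered or forged image
    # ... write the image to the inactive bank and schedule the switch-over ...
    return True

image = b"\x7fELF...new-ADAS-stack"
print(verify_and_install(image, sign_update(image)))                  # True
print(verify_and_install(image + b"tampered", sign_update(image)))    # False
```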

17:30

The zero-defect software factory – myth or reality?

Ingo Nickles
Senior field application engineer
Vector Software
GERMANY
Software development can no longer be treated purely as a creative process in which mastery lies in how function and logic are expressed in the application’s lines of code. Because software now delivers key features and safety-critical functions in so many pieces of equipment and systems, it needs to be constructed with the precision and quality seen in any modern manufacturing process. This presentation will discuss whether a zero-defect software factory can be a reality.

Day 3: Thursday 7 June

Morning Session
09:00 - 10:00

09:00

Hive mind: AI and ML for vehicle and fleet

Herman Coomans
Senior solutions architecture manager
Amazon Web Services
AUSTRALIA
Machine learning models require massive compute resources for training, but can be deployed with much more modest compute in-vehicle. Edge compute with machine learning can be used for fast decision making in the field, and vehicle fleet data, maps and other sources can be used for navigation and fleet training tasks in the cloud. Attendees will learn how to leverage both edge compute and a shared AI/ML platform without having to build the underlying IT infrastructure.

09:30

Only as good as your data

Sheikh Shuvo
Product and solutions manager
Mighty AI
USA
To achieve Level 5 autonomy, vehicles must be able to process and respond to so-called 'edge cases' that rarely occur and are hard to account for. Teams around the world have collected terabytes of raw sensor data and must now sift through that data to uncover the moments that capture these rare objects and scenarios. In this presentation, we discuss the challenges behind developing a system to sort through this immense amount of data, and how to label that data accurately and at scale. Additionally, we examine best practices for using manual annotation when accuracy is critical.

Workshop 1
10:30 - 12:30

10:30

NVIDIA Deep Learning Institute

Join us for a two-hour hands-on DLI workshop, Introduction to Object Detection with TensorFlow: a lightning introduction to object detection and image segmentation for data scientists, engineers and technical professionals. Computer-based image understanding permeates many major fields such as advertising, smart cities, healthcare, national defence, robotics and autonomous driving. The goals of this course are to provide broad context and a clear roadmap from traditional computer vision techniques to the most recent state-of-the-art methods based on deep learning and convolutional neural networks (CNNs).

Working our way from classic CV algorithms up through the R-CNN family of deep-learning-based solutions, we discuss how to incrementally leverage CNNs to iteratively improve performance and expand image understanding capabilities. We then dive deep into the Microsoft Common Objects in Context (COCO) dataset and Google's object detection API for TensorFlow, and get our hands dirty understanding the accuracy vs performance trade-offs between state-of-the-art models such as Single Shot MultiBox Detectors (SSDs) and Faster R-CNN with residual networks. Finally, we set the stage for network deployment, at the edge or on the road in an autonomous vehicle, using NVIDIA’s latest TensorRT release. If you do not have any experience with deep learning yet, we recommend taking at least the Image Classification with DIGITS lab from www.nvidia.co.uk/dlilabs prior to attending.
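
For background on the accuracy side of the accuracy-versus-performance trade-off mentioned above, detector quality is usually scored via the intersection over union (IoU) between predicted and ground-truth boxes; the snippet below is a generic illustration rather than workshop material.

```python
# Generic illustration of intersection over union (IoU), the box-overlap measure
# commonly used when scoring object detectors such as SSD or Faster R-CNN.
def iou(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) in pixels."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection is usually counted as a true positive when IoU >= 0.5.
print(iou((10, 10, 110, 110), (50, 50, 150, 150)))    # ~0.22 -> would be a miss
```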

Workshop 2
14:15 - 16:15

14:15

Siemens

We are delighted to announce that Siemens will be hosting a world-first workshop in Stuttgart this year. Co-hosted with Mentor and TASS International, this session will provide insight into the end-to-end development of autonomous vehicles, with subjects including:
• Model-based development, verification and validation framework for automated vehicles
• Framework consisting of advanced simulation environments (MiL, SiL, HiL, ViL) as well as physical testing facilities (laboratories, test tracks, public roads)
• Facilitating the full spectrum of automated driving technology, ranging from system-on-a-chip design, sensor development and systems integration to full vehicle performance evaluation and traffic impact analysis
All delegates are welcome to attend this exclusive workshop. The Siemens team is focused on delivering an educational platform for participants that will result in a more streamlined, robust and faster automated vehicle development process.
Please Note: This conference programme may be subject to change
