Luxoft Holding Inc., a global IT service provider, has announced that it will exhibit its entire suite of automotive technology services at embedded world 2018 (Hall 4, Stand 4-371) in Nürnberg, Germany from 27 February to 1 March. Demonstrations on the stand will include:
- AUTOSAR Adaptive Demonstrator
- LuxTrace – Trace Management for Timing Testing
- LUI Augmented Reality – AR User Interface Platform
- Automotive Reference Platform co-developed with Intel
- Triton UI – The New Automotive Reference UI for the Qt Automotive Suite
- AllView II – HMI Demonstrator
- Populus – UI Design, Development and Deployment Tool
Luxoft is also taking part in embedded world’s Student Day on 1 March to encourage young developers and programmers to help engineer the cars of the future.
“We can’t wait to show visitors the experience we have integrating embedded systems across our Digital Cockpit, Autonomous Driving and Connected Mobility services in Nürnberg this year,” said Dr Marek Jersak, director of Autonomous Drive at Luxoft.
Dr Jersak has spent nearly 15 years working with developers at leading OEMs and suppliers to integrate embedded systems into cars. He says that, by exhibiting its entire range of services at embedded world, Luxoft is highlighting the breadth of expertise it has to offer. “Car makers are seeking deep partnerships with technology and service providers that can integrate a range of technologies into the vehicle architecture. As an independent service provider, that’s what sets us apart.”
Luxoft Automotive’s senior technical director, Dr Kai Richter, will also be speaking at the Exhibitor Forum on 1 March at 3:30pm in Hall 4, Booth 4-428, about Luxoft’s commitment to AUTOSAR Adaptive, the new software platform designed to support highly automated driving systems.
Dr Richter co-founded timing design and verification specialist Symtavision, now part of Luxoft, with Dr Jersak in 2005. He explains why car makers are now adopting AUTOSAR Adaptive. “AUTOSAR Adaptive enables high-performance computing for automotive platforms by supporting advanced CPU models, modern software frameworks and a flexible software distribution model. It’s basically responding to the need for more data processing through ultrasonic sensors, radar, cameras, laser scanners and highly complex sensor fusion. It’s also increasingly supported by AI and machine learning.”