The Lyft Perception Challenge was organized by Lyft and Udacity.

The goal of the challenge was pixel-wise semantic segmentation of images from a front-facing camera mounted on a vehicle. The camera data for this challenge comes from the open-source CARLA simulator.

Two classes were included in the final scoring: roads and cars. The competition is noteworthy because participants were evaluated on per-class F-beta scores, and the prediction frame rate (FPS) on a target machine was an essential part of the metric.
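
For reference, the generic F-beta score combines precision and recall as in the sketch below; this only illustrates the standard formula, not the challenge's exact per-class beta values or its FPS weighting.

```python
# Minimal sketch of the generic F-beta score: beta > 1 favors recall,
# beta < 1 favors precision. The challenge's exact per-class beta values
# and its FPS penalty are not reproduced here.
def f_beta(precision: float, recall: float, beta: float) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```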

Final result: 4th place out of 155 participants (top 3%). The submitted pipeline was also the fastest one.

Contents:

  1. About Lyft Perception Challenge
  2. Multiclass semantic segmentation with LinkNet34
  3. Discussion of the Lyft Perception Challenge

Final results

The project code is available on GitHub.

Competition posts

05 Jun 2018

Discussion of the Lyft Perception Challenge

How to increase inference speed on a semantic segmentation task, and ideas for further improvement.

4 mins read
05 Jun 2018

Multiclass semantic segmentation with LinkNet34

A CNN approach used for multiclass semantic segmentation during the Lyft Perception Challenge.

6 mins read
31 May 2018

About Lyft Perception Challenge

A semantic segmentation challenge on synthetic images, run in partnership by Lyft and Udacity.

2 mins read