Lyft Perception Challenge
The Lyft Perception Challenge was organized by Lyft and Udacity.
The goal of the challenge was pixel-wise semantic segmentation of images from a front-facing camera mounted on a vehicle. The camera data for the challenge came from the open-source CARLA simulator.
Two classes were included in the final scoring: roads and cars. The competition is noteworthy because participants were evaluated on per-class F-beta scores, and the prediction frame rate (FPS) on a target machine was an essential part of the metric.
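As a minimal sketch of how such a per-class F-beta score can be computed on binary segmentation masks (the function name and the choice of beta values here are illustrative, not the challenge's exact scoring code):

```python
import numpy as np

def f_beta(pred: np.ndarray, true: np.ndarray, beta: float) -> float:
    """F-beta score for a single class on boolean masks.

    beta > 1 weights recall higher; beta < 1 weights precision higher.
    """
    pred = pred.astype(bool)
    true = true.astype(bool)
    tp = np.logical_and(pred, true).sum()
    if pred.sum() == 0 or true.sum() == 0 or tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / true.sum()
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy masks: 2x3 "images" with partial overlap between prediction and truth.
pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
score = f_beta(pred, true, beta=2.0)  # recall-weighted, e.g. for the car class
```

A beta above 1 (penalizing missed pixels more than false alarms) is a natural choice for safety-critical classes such as cars.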
Final result: 4th place out of 155 participants (top 3%). The submitted pipeline was also the fastest one.
- About Lyft Perception Challenge
- Multiclass semantic segmentation with LinkNet34
- Discussion of the Lyft Perception Challenge
The project code is available on GitHub.
Discussion of the Lyft Perception Challenge
How to increase inference speed on a semantic segmentation task and further ideas.
Multiclass semantic segmentation with LinkNet34
A CNN approach used for multiclass semantic segmentation during the Lyft Perception Challenge.
About Lyft Perception Challenge
A semantic segmentation challenge on synthetic images, run in partnership by Lyft and Udacity.