When great customer support meets data
Where do we come from?
Picnic is a Dutch tech startup, founded in 2015 with the aim of revolutionizing the supermarket industry. Over the past three years, we’ve grown from 8 people working in a small office in Amsterdam to over 2,500 people across two countries, 6 Fulfillment Centers, and 30 Hubs. This growth is an incredible challenge for everyone involved, and we are constantly searching for innovative, in-depth solutions.
Why the challenge?
One of our core beliefs is offering our customers the best possible support, for example by allowing them to send in pictures of defective products they wish to be reimbursed for. Processing these pictures, however, is very time-consuming, as it is currently done entirely by hand.
What is the challenge?
The challenge we propose is the following: as a first step in helping customer support, come up with a way of labeling every incoming picture according to the product it shows. In keeping with the Picnic spirit, we encourage you to be as innovative and creative with your solution as possible.
To that end, we offer the following dataset of pictures of defect items. NB: The following items were removed and can't be found in the set:
€1,000 in prizes
Submitting to this hackathon could earn you:
You must be:
- Registered for the hackathon on Devpost (individually or in a team).
- 18 years of age or over.
You cannot be:
- Related in any capacity to the Picnic Group or any of its partners or subsidiaries.
Submission requirements:
- Your project must link to a public repository containing your code.
- The code should allow reviewers to reproduce your results. The labels for the test set may not be hard-coded or added manually.
- Submissions should be in the same general format as the example submission (included in the dataset).
- Submissions should be in the same file format as the example submission (.tsv).
- Please check the rules for the full requirements on submission formatting.
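As a sketch of what producing such a file might look like, the snippet below writes tab-separated predictions with Python's standard `csv` module. The file name, image names, and labels here are placeholders; the exact column layout must match the example submission included in the dataset.

```python
import csv

# Hypothetical predictions: image filename -> predicted product label.
# Replace with your model's actual output, matching the example submission.
predictions = {
    "img_001.jpg": "banana",
    "img_002.jpg": "milk",
}

# newline="" lets the csv module control line endings itself.
with open("submission.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for filename, label in sorted(predictions.items()):
        writer.writerow([filename, label])
```

Writing through `csv.writer` rather than string concatenation avoids subtle breakage if a label ever contains a tab or quote character.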
Submissions are scored on the macro-averaged F1 score. The test set follows a distribution similar to that of the training set.
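For reference, macro-averaged F1 computes a per-class F1 score and then takes the unweighted mean, so rare product classes count just as much as frequent ones. A minimal self-contained implementation (equivalent to `sklearn.metrics.f1_score` with `average="macro"`) could look like this:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for label in labels:
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```

Because every class contributes equally to the average, a model that only ever predicts the most common product will score poorly, even if its plain accuracy looks good.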
In case two (or more) submissions score the same, their rank will be determined chronologically.
In the very unlikely scenario that two submissions obtain the same score and are submitted at the same moment, overall submission quality will be the tiebreaker: documentation, structure, and readability will all be taken into account.