Columns: Title of Competition (string, 168 distinct values), Title of Writeup (string, 9 to 139 characters), User (string, 1 value), Writeup (string, 419 to 61.5k characters)
Deepfake Detection Challenge
Public 2nd (Private 30th) Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Long 4 months have passed. Applaud to participants who have been working hard and thanks to hosts who arranged this dataset and competition.</p> <p>In social aspect, automatic deepfake detection algorithm will be a must in near future. In personal aspect, this competition was held in a good timing for me. So dfdc took priority in my head for the last 4 months.</p> <p>Luckily, I somehow managed to attain 2nd place in public leaderboard. While waiting for the private leaderboard to be revealed, I'll share my solution. </p> <p>Note that all of the following are what <strong><em>I</em></strong> have done for this competition. (This post does not include what my teammates have done)</p> <p>For those who are curious, I'll first share the major methods that I came up with, which improved public lb score.</p> <h1>Ingredients for Public LB</h1> <h3>1. Augmentations</h3> <p>Extensive augmentations significantly improved public lb score and reduced cv-lb gap. I guess it makes model robust on varying data.</p> <h3>2. Face Margin</h3> <p>I set margin when cropping face, where <code>new_face_width=face_width*(1+margin)</code> (same with height). Margin of 0.5~0.7 worked significantly better than 0 and further reduced cv-lb gap. I guess model learns to detect some inconsistency between manipulated region and surrounding region.</p> <p>Tuning augmentations and face margin were the two main ingredients that boosted public lb significantly.</p> <h3>3. Model Capacity</h3> <p>Efficientnet-b4 did quite better than efficientnet-b0. With extensive augmentations, appropriate model size improved the score.</p> <h3>4. Multi-task Learning</h3> <p>I used Unet with classification branch for model architecture. I calculated pixel-wise difference between fake and corresponding original video, generated mask with pixels that have outstanding differences, and used it as segmentation part target. By using mask information, the model is able to know which part of the image is modified and I guess it helped boost performance.</p> <h3>5. Conservative Fix</h3> <p>Train data and test data have quite different distribution, as can be seen in cv-lb discrepancy. Multiplying constant (&lt;1) to the logit then taking sigmoid helped improve logloss in this situation.</p> <h3>6. Ensemble</h3> <p>Ensemble always helps.</p> <h3>7. N Frames when Inferencing</h3> <p>I used simple average of frames to get probability, so the more frames extracted from the video, the better, until it hits 9 hours restriction.</p> <p><br><br></p> <p>Now, I'll go into details of my journey. I'll separate it into 10 phases, each of where I concentrated on certain subject. (number inside parenthesis in the title is approximate public lb score at that phase)</p> <h1>Phase 1. Setting Pipeline (0.69)</h1> <p>This was my first encounter to video type vision task and deepfake detection. 
So at the very beginning of the competition, I started searching google for articles and papers, and also there were nice posts in kaggle discussion that introduced related papers.</p> <p>At first, I tried to go with network architectures that take care of temporal information, but I found out that they lack pretrained weights and are very heavy to train, so I decided to make baseline based on XceptionNet which was introduced in faceforensics paper( <a href="https://arxiv.org/abs/1901.08971" target="_blank">https://arxiv.org/abs/1901.08971</a> ).</p> <p>At my initial experiments, I tried to integrate audio part in the image model by using spectrogram as second input, but it didn't help that much and complicated the process, so I abandoned audio part. Anyway my teammate wanted audio information, so I set up the pipeline which uses ffmpeg with subprocess to extract audio stably in both train and public test videos. (I used PyAv first, but it wasn't stable with public test videos.)</p> <p>Reading video was the main bottleneck for the runtime, so I struggled with optimizing the process. At last, I selected method kindly provided at <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/122328" target="_blank">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/122328</a> which was the fastest among those I tried. I extracted 20 frames evenly. Duration is 10 seconds so I extracted frame every 0.5s.</p> <p>Next step was to select fast but accurate face detector. I searched and found out that <code>retinaface</code> is the best available face detector evaluated on widerface dataset( <a href="http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html" target="_blank">http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html</a> ). Thereafter, I used its pytorch implementation <a href="https://github.com/biubug6/Pytorch_Retinaface" target="_blank">https://github.com/biubug6/Pytorch_Retinaface</a> .</p> <p>To save data loading time when training the network, I first processed each <code>original</code> videos and saved them as compressed joblib files at disk. I extract 20 frames from each video, detect face, crop them, and with additional meta information extracted, save them. The saved video will be a numpy array with shape <code>(n_people, n_detected_frames, height, width, 3)</code>. There were a lot of trivial issues and bugs when processing videos. I'll introduce some.</p> <ul> <li>Cropped frames don't have equal size -&gt; Just resized to match the maximum size within one video</li> <li>Some videos have 2 people -&gt; Set threshold to confidence score and if the confident score of second confident face detected is bigger than the threshold, confirm there are 2 people</li> <li>Detector finds face in some frames but doesn't in other frames within one video -&gt; just ignore no-detected frames (so the output can have &lt;20 frames)</li> <li>In some videos, detector fails to detect face -&gt; lower the confidence threshold for first confident face</li> </ul> <p>After processing all original videos, I used the bounding box information of original videos to process corresponding fake videos. I could use multiprocessing to speed up this process, since it didn't use cuda. After I processed all videos, I updated given metadata with the new extracted information.</p> <p>It took about 12 hours on my computer to process all of the train dataset. Then I trained the model with processed data.</p> <p>Inferencing in Kaggle notebook was another obstacle. 
Initially my notebook failed many times with varying error messages. I concluded there are some corrupt videos in public test set, so I introduced <code>try except</code> block and the submission went well thereafter.</p> <h1>Phase 2. UNet Architecture (0.6~)</h1> <p>In the Understanding Cloud Organization competition( <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion" target="_blank">https://www.kaggle.com/c/understanding_cloud_organization/discussion</a> ), I learned that multi-task learning with classification and segmentation helps improve model's classification score.</p> <p>I could generate masks by selecting pixels from the fake video that have large differences from corresponding real video. All of the training videos are compressed, so the masks were not perfect, but the strategy worked in my validation set and public test set (about 0.01 improvement in lb). </p> <p>By using mask information, the model was able to know which part of the image is modified and I guess it helped boost performance.</p> <p>I used UNet with classification branch. Also, since efficientnets are the state of the art architectures in imagenet, I used them as the encoder of Unet.</p> <p>At this moment, I was using first frame from each videos.</p> <h1>Phase 3. CV-LB Discrepancy (0.6~)</h1> <p>Then this huge problem came. My validation score and public lb score didn't correlate and had large gap (0.2 vs 0.6). It was the main problem in this competition from the start to the end. I managed to decrease the gap to ~0.05 at the end, but still don't fully understand how public test data differ from train data.</p> <h1>Phase 4. Augmentations (0.40)</h1> <p>I was only using horizontal flip since I thought augmentation will distort features that were introduced by manipulation. But it was wrong after all. Augmentation made the model robust on public test set.</p> <p>I added some basic augmentation such as shift, scale, rotate, rgbshift, brightness, contrast, hue, saturation, value. Also, referencing dfdc preview paper( <a href="https://arxiv.org/abs/1910.08854" target="_blank">https://arxiv.org/abs/1910.08854</a> ), I added JpegCompression and Downscale augmentation too.</p> <p>Local validation score improved a bit but the public score jumped massively to 0.4.</p> <h1>Phase 5. Large Models &amp; Learning Rate (0.33)</h1> <p>I was using <code>efficientnet-b0</code> and I switched to <code>efficientnet-b3</code>. It scored 0.36, then I switched to <code>efficientnet-b7</code> and it scored 0.33.</p> <p>By that time I thought the improvement was solely due to model capacity, but some of the improvement was actually due to learning rate - batch size relationship.</p> <h1>Phase 6. Experiments (0.3)</h1> <p>I thought I had some correlation with cv and lb by this time, so I tried a lot of experiment with local validation. I'll introduce some.</p> <h3>What Didn't Work for LB</h3> <ul> <li>Different learning rate between encoder - decoder</li> <li>Pad rather than resize when feeding the image</li> <li>CNN-LSTM architecture</li> <li>Construct model so that it uses statistical information across frames (ex. 
mean, std)</li> <li>Tune segmentation loss weight</li> <li>Guarantee fake-original pair to exist in each batch</li> <li>Use scse option in decoder</li> <li>Decrease face margin</li> <li>Use Vggface2 pretrained inceptionresnetv1 provided by FaceNet pytorch as encoder</li> <li>Validating with compress/downscaled validation set</li> <li>Split validation set with actors grouped by face encoding + KMeans</li> <li>One cycle learning rate scheduler</li> <li>Integrate retinaface confidence score as additional feature</li> <li>Stacking across frames with LGBM</li> </ul> <h3>What Worked for LB</h3> <ul> <li>Tune learning rate considering batch size (batch size 24 - lr 0.0002)</li> <li>RAdam + ReduceLROnPlateau</li> <li>Tune ratio of decreasing the learning rate when plateau (0.1-&gt;0.3)</li> <li>Use different frames of each video every epoch (#0-&gt;#7-&gt;#14-&gt;#1-&gt;#8-&gt;…)</li> <li>Increase resolution</li> <li>Use <code>conservative fix</code>: multiply constant(&lt;1) to the logits than take sigmoid. It helps improve logloss when the training and testing distributions differ.</li> </ul> <h1>Phase 7. More Augmentations (0.27)</h1> <p>After testing the tuned model on public lb, I once again found out cv and lb didn't correlate at most experiments I did. I needed to select cv vs lb at this time, and decided to value lb more, since the hosts implied that the private test set will be different than train set. So I started to validate experiments on public lb. Also, I decided to use b5 for my experiment, since I judged b0 had too low capacity when augmentations are applied.</p> <p>Then I remembered I got significant boost when I did more augmentations, so I added extensive augmentations including noise and blur, and increased the degree of augmentations. Score jumped to 0.269. I did harder augmentations but it didn't improve, so I stopped tuning augmentations.</p> <h1>Phase 8. More Experiments (0.27)</h1> <p>I did more experiments, but sadly nothing worked.</p> <ul> <li>Mixup fake-original pair</li> <li>Label smoothing</li> <li>Undersample rather than weighted loss</li> <li>Generate a mask using landmark information extracted by retinaface, and use it as an augmentation</li> <li>Use difference itself as a segmentation target, not (difference&gt;threshold)</li> </ul> <h1>Phase 9. Increase Face Margin (0.214)</h1> <p>I had an experience of increasing the face margin from 0.05-&gt;0.1, and it seemed to have increased overall public lb score. So I decided to increase face margin more.</p> <p>I experimented face margin of 0.5, and single b5 scored astonishing 0.222. So I increased face margin to 1.0 and started to tune how much I should crop the margins.</p> <p>With margin crop of 0.15 (margin of 0.7), I could get 0.218, and could get 0.214 with b4.</p> <h1>Phase 10. Ensemble (0.199)</h1> <p>I trained 7 b4s and 3 b5s with different seeds and did simple average. Also I increased nframes to 30 when inferencing. It resulted in my final public lb score of 0.199.</p>
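<p>To make the "conservative fix" described above concrete, here is a minimal sketch: logits are scaled by a constant below 1 before the sigmoid, pulling predictions toward 0.5. The 0.7 factor and the function name are illustrative assumptions, not the tuned values from this solution.</p> <pre><code>import numpy as np

def conservative_sigmoid(logits, scale=0.7):
    """Shrink logits toward 0 before the sigmoid (scale is a constant below 1).

    Under train/test distribution shift this trades a small loss on
    confidently-correct frames for a much smaller log-loss penalty on
    confidently-wrong ones.
    """
    logits = np.asarray(logits, dtype=np.float64)
    return 1.0 / (1.0 + np.exp(-scale * logits))

# Example: a confidently wrong prediction (logit -4 on a fake video, label 1)
# costs -log(sigmoid(-4)) ~= 4.02, but -log(conservative_sigmoid(-4)) ~= 2.86.
</code></pre>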
Deepfake Detection Challenge
Journey to 0.3. Public 64th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi to all the competitors and organizers of DFDC, this was my first Kaggle competition and here I want to share the methods I've experimented with during the competition and share the details of how I've finally crossed the 0.3 border. </p> <p>Firstly I would like to write a list of methods that didn't work for me (or they worked but not as good as my final solution).</p> <h1>Classical methods</h1> <ul> <li><a href="https://arxiv.org/abs/1811.00661">Exposing Deep Fakes Using Inconsistent Head Poses</a></li> </ul> <p>Code: <a href="https://bitbucket.org/ericyang3721/headpose_forensic/src/master/">https://bitbucket.org/ericyang3721/headpose_forensic/src/master/</a> In few words this method uses difference between face and head poses. Firstly I've implemented it myself, then found the code from authors on bitbucket. Approach didn't work for the majority of cases. I guess it only works good for FaceSwap faces.</p> <p>Didn't test it against Public LB</p> <ul> <li><a href="https://arxiv.org/abs/1911.00686">Unmasking DeepFakes with simple Features</a></li> </ul> <p>Code: <a href="https://github.com/cc-hpc-itwm/DeepFakeDetection">https://github.com/cc-hpc-itwm/DeepFakeDetection</a> Based on analysis of high frequency components in fourier spectra. I've tested the method on a part of dfdc dataset, found out that it works, yet not as good to beat single CNN model.</p> <p>Didn't test it against Public LB</p> <h1>Deep Learning-based methods</h1> <ul> <li><a href="https://arxiv.org/abs/1811.00656">Exposing DeepFake Videos By Detecting Face Warping Artifacts</a></li> </ul> <p>Code: <a href="https://github.com/danmohaha/CVPRW2019_Face_Artifacts">https://github.com/danmohaha/CVPRW2019_Face_Artifacts</a> This was my first submission to the competition. I've used pretrained model provided by authors.</p> <p><strong>Public Score: 1.16260</strong></p> <ul> <li><a href="https://arxiv.org/abs/2001.07444">Detecting Face2Face Facial Reenactment in Videos</a></li> </ul> <p>I've liked this approach and it even improved my score at some point. I've used ResNet-18 models as stated in the paper and compared it to single ResNet-18 model that was trained on the same data. The improvement in public score was about 0.045. This is either because the approach really works and paying attention to different face regions really matters or because this is just a kind of ensembling (5 similar models, but trained on different image regions). Or maybe both, I don't know 😃. But this approach didn't work with EfficientNet models, which I've used later. </p> <p><strong>Public Score: 0.55683 (for single ResNet-18)</strong> *<em>Public Score: 0.51062 (for ResNets-18)</em>* <strong>Public Score: 0.38760 (for EFNets-4)</strong></p> <ul> <li><a href="https://arxiv.org/abs/1809.00888">MesoNet: a Compact Facial Video Forgery Detection Network</a></li> </ul> <p>Code: <a href="https://github.com/DariusAf/MesoNet">https://github.com/DariusAf/MesoNet</a> I've tested only pretrained models. 
Didn't train it on dfdc dataset.</p> <p><strong>Public Score: 1.54918</strong></p> <ul> <li><a href="https://arxiv.org/abs/1912.11035">CNN-generated images are surprisingly easy to spot... for now</a></li> </ul> <p>Code: <a href="https://github.com/peterwang512/CNNDetection">https://github.com/peterwang512/CNNDetection</a> I've already gave a thought about this approach here: <a href="https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134295#766094">https://www.kaggle.com/c/deepfake-detection-challenge/discussion/134295#766094</a></p> <p>Didn't test it against Public LB</p> <ul> <li><a href="https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf">Deepfake Video Detection Using Recurrent Neural Networks</a></li> </ul> <p>The method which took the majority of my time, for some reason I've thought it has to outperform single CNN model. I've spend several weeks experimenting with Conv-LSTM models. </p> <p>I've firstly tried to use CNN model as feature extractor (removing last prediction layer) and train LSTM model on those features.</p> <p>I've tried CNN-LSTM model (similar to <a href="https://www.kaggle.com/unkownhihi/dfdc-lrcn-inference">https://www.kaggle.com/unkownhihi/dfdc-lrcn-inference</a>), experimented with dropouts, num of parameters, training generators, undersampling, class imbalance, custom losses, augmentations, etc.</p> <p>I've experimented with number of frames (16 and 32) and with changing the gap between them (sequential frames from different parts of video, every third frame and 2-3 frames per second).</p> <p>I've sad that it didn't perform even close to single CNN model, I don't know the reason why. Maybe I needed to experiment more or did some mistakes in architecture or training progress.</p> <p><strong>Public Score: 0.44083 (The best one across many experimental tries. Based on EFNet-4)</strong></p> <ul> <li><a href="https://arxiv.org/abs/1910.12467">Use of a Capsule Network to Detect Fake Images and Videos</a></li> </ul> <p>Code: <a href="https://github.com/nii-yamagishilab/Capsule-Forensics-v2">https://github.com/nii-yamagishilab/Capsule-Forensics-v2</a></p> <p>Trained it for few epochs on subset of DFDC dataset (1:1 real/fake). I suggest it could perform much better with more experiments, but I didn't have time for it. This is the method which I could regret I didn't pay much attention to.</p> <p><strong>Public Score: 0.60641</strong></p> <ul> <li><a href="https://hal.inria.fr/hal-02140558/document">MARS: Motion-Augmented RGB Stream for Action Recognition</a></li> </ul> <p>Code: <a href="https://github.com/craston/MARS">https://github.com/craston/MARS</a></p> <p>This is the only method I didn't have time to fully experiment with. The problem is in preparation of optical flows for training. Even with CUDA compiled OpenCV, it takes around 100 hours to extract x and y direction optical flows from 32 frames (the method requires sequences) on my PC. I so much wanted to train/test this approach, but due to the time constraints I've decided to stop it. And this is the second and the last method I could regret I didn't test.</p> <h1>My Solution</h1> <p><strong>Public score: 0.29477</strong></p> <p>The notebook is available here: <a href="https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15">https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15</a></p> <p>Now I will describe step by step how I've been improving single CNN model and ended up with ensemble. 
I'm kind of disappointed that, after experimenting with the other approaches, none of them came close to a single CNN model. But okay, let's continue.</p> <h3>Data preparation</h3> <p>Video reader: <a href="https://www.kaggle.com/humananalog/deepfakes-inference-demo">https://www.kaggle.com/humananalog/deepfakes-inference-demo</a> (Big thanks to <a href="/humananalog">@humananalog</a>) Face extractor: <a href="https://github.com/1adrianb/face-alignment">https://github.com/1adrianb/face-alignment</a></p> <p>I used this specific face extractor because I wanted to train my model on the face only, not the whole head. Since the lib gives landmarks, I could crop the face region easily. After some time I realised that landmark extraction takes a lot of time (even though it still fits within 8 seconds to extract and process 16-20 frames), so I decided to keep only the detection part and remove everything related to landmarks. The detector used in the lib is S3FD. I additionally used two more face detection models (MTCNN and BlazeFace) for better confidence. Using the additional detectors helped me remove some false positives and recover some false negatives, especially on very dark images. Finally I settled on extracting 32 frames for each person from each video; the reason is that I ran many experiments with Conv-LSTM models. In the end I had around 4M .png images of size 224x224, but I didn't use that many faces for training my final single CNN model. I randomly picked half of them (so I had 16 faces per person).</p> <h3>Training</h3> <p>Training EFNet-4 with Keras <code>class_weights</code> worked well for me, but I found a better and, more importantly, much faster way to train my model. The idea is to balance fake and real data on each epoch. Since the training data has roughly a 5:1 imbalance, each epoch takes all real examples but only one fifth of the fake examples. With this approach, 5 epochs are enough for the model to see every example, but I experimentally found that 6 epochs gave the best results; during the 6th epoch I picked fake examples at random.</p> <p>I was afraid that my model would overfit to real faces, but it seems it didn't. Augmentations helped to solve this issue. A couple of words about augmentations: I used various albumentations functions, and during training I checked whether the image was too dark and, if so, applied brightness augmentation with a larger probability.</p> <p>If someone needs the generator, here is the code (I'm too lazy right now to clean up my code and release it on github). Note that here the <code>fake_images</code> list is 5 times larger than <code>real_images</code>. The amount of images for each epoch was around 750K (350K real and 350K fake).</p> <pre><code>import random
import numpy as np
from PIL import Image

def generator(real_images, fake_images, preprocess_input_fn, batch_size=1,
              input_shape=(224, 224, 3), do_aug=False):
    i = 0
    k = 0
    # Each "epoch" pairs all real images with the k-th slice of fake images,
    # so fakes are cycled through while reals are reused every time.
    images = real_images + fake_images[k * len(real_images):(k + 1) * len(real_images)]
    random.shuffle(images)
    while True:
        x_batch = np.zeros((batch_size, input_shape[0], input_shape[1], 3))
        y_batch = np.zeros((batch_size), dtype=np.single)
        for b in range(batch_size):
            if i == len(images):
                i = 0
                k += 1
                if k &gt; (len(fake_images) // len(real_images) - 1):
                    # All fake slices used up: resample fakes randomly without replacement.
                    images = real_images + [str(p) for p in np.random.choice(fake_images, size=len(real_images), replace=False)]
                else:
                    images = real_images + fake_images[k * len(real_images):(k + 1) * len(real_images)]
                random.shuffle(images)
            x = Image.open(images[i])
            x = np.array(x)
            if do_aug:
                x = augment(x)  # author's albumentations-based augmentation function
            y = 1. if '/fake/' in images[i] else 0.
            x_batch[b] = x
            y_batch[b] = y
            i += 1
        x_batch = preprocess_input_fn(x_batch)
        yield (x_batch, y_batch)
</code></pre> <p>After training several EFNet-4 models, I found that all of them performed best after 6 epochs. Then I started training models for the ensemble without validation. I used the RAdam optimizer with an initial learning rate of 0.001. The learning rate was lowered to 0.0005 for the 4th and 5th epochs and to 0.0001 for the last, 6th epoch. The loss is binary crossentropy.</p> <h3>Inference</h3> <p>In the last two or three days I decided to switch from my ensemble of 13 EFNet-4 models to an ensemble of 5 EFNet-6 models and trained them. The boost was around 0.005. The full inference can be viewed in my released notebook here: <a href="https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15">https://www.kaggle.com/vostankovich/efnet6-ensemble-5-clip-1e-15</a>.</p> <h3>Tricks that worked</h3> <ul> <li>Using <code>read_frames_at_indices()</code> along with <code>capture.get(cv2.CAP_PROP_FRAME_COUNT)</code> instead of <code>read_frames()</code>. I don't know the reason why, but it works a little bit faster.</li> <li>Resizing the video to save detector time.</li> <li>Removing small detected objects. These are mainly false positives and don't contain faces. I used a threshold of <code>min(video.shape)//20</code>.</li> <li>Zooming out from the face. I didn't find a reason why, but it works better.</li> <li>Median averaging across frames. I was surprised, but it gave a boost of more than 0.01 compared to mean averaging.</li> </ul> <h3>Tricks that didn't work</h3> <ul> <li><strong>Clipping</strong>. It worked for single models, but for ensembles of 5+ models it always made things worse. For safety I still submitted two versions, one with a 0.01-0.99 clip and one without it (well, there was a clip, but a very small one of 1e-15).</li> <li><strong>TTA</strong>. I tried complex TTAs as well as a simple horizontal flip, but it turned out to be better to process more frames without TTA than to apply TTA to fewer frames.</li> <li><strong>Judge by one person</strong>. I tested the approach of giving a FAKE prediction if any one of the persons in the video is fake: I sorted the multiple faces by x-coordinate, gave a prediction for each person, and predicted FAKE if at least one was fake. It performs slightly worse than judging all persons equally.</li> </ul> <p><strong>Public score: 0.34 (for single EFNet-4 model)</strong> <strong>Public score: 0.30 (for ensemble of 13 EFNet-4 models)</strong> <strong>Public score: 0.295 (for ensemble of 5 EFNet-6 models)</strong></p>
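<p>As a small illustration of the frame-aggregation point above (median over per-frame predictions beating the mean), here is a minimal sketch; the clip value mirrors the 1e-15 clip mentioned in the notebook title, and the function name is only for illustration.</p> <pre><code>import numpy as np

def video_score(frame_probs, clip_eps=1e-15):
    """Aggregate per-frame fake probabilities into one video-level prediction.

    The median is robust to a few badly-detected or misclassified frames,
    which is one plausible reason it outperformed the mean here.
    """
    p = float(np.median(np.asarray(frame_probs, dtype=np.float64)))
    return float(np.clip(p, clip_eps, 1.0 - clip_eps))
</code></pre>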
TensorFlow 2.0 Question Answering
Rank 93 Solution: Or how you can still win a medal without any model fine-tuning
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TensorFlow 2.0 Question Answering <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We didn't have the time, skill or resources to fine-tune our model, but that did not prevent us from securing a top 8% rank - it took a lot of perseverance with 90-odd submissions.</p> <p>The link to the kernel is <a href="https://www.kaggle.com/rohitagarwal/rank-93-solution-fork-of-bert-joint-14-changes-37">here</a>.</p>
Deepfake Detection Challenge
12th place in a nutshell
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>Disclaimer: It is my first experience in CV competition, so I apologize if my code or approach insult someone.</em> </p> <h2>Data preparation</h2> <ul> <li>1st dataset: extracted single random frame from each video</li> <li>2nd dataset: extracted 10 random frames from each video (used for stacking)</li> <li><strong>Blazeface</strong> for face extraction from <a href="/humananalog">@humananalog</a> kernel</li> <li>output images resized to 256x256</li> <li>main augmentations are jpeg compression and downscale with a chance of 0.5</li> <li>secondary augmentations (not all were used at the same time): blur, gaussian noise, random brightness, horizontal flip</li> <li>classes were balanced by oversampling for single model training and by undersampling for stacked model training</li> </ul> <h2>Models</h2> <ul> <li><strong>EfficientNet B0-B6</strong> (imagenet)</li> <li><strong>EfficientNet B1-B6</strong> (noisy student)</li> </ul> <p>Efficientnet gave me a huge boost on LB, but more importantly was to choose augmentations and hyperparameters right for this kind of models.</p> <h2>Training and Validation</h2> <p>I took #40-49 data chunks as validation data and it correlated with LB pretty well (about 10% difference), input size is 256x256 for every model - single model training: input size depends on model (from 224 to 260), adam 1e-4, StepLR with step 2 and gamma 0.1 - stacked model training: input size 256x256, adam 1e-3</p> <p>For every model it took about 3-5 epochs to train and start overfitting.</p> <h2>Stacking &amp; Ensembling</h2> <p>I trained a bunch of decent single models with public LB 0.38-0.4 on the first dataset and then combined different models into a stacking using second dataset. </p> <p>Then I took best 5 stacked models from this experiment, their validation score and made frame-wise weighted ensemble</p> <h2>What didn't work for me</h2> <ul> <li><strong>MesoNet, RNN</strong> - underfits on train data</li> <li><strong>Strong augmentations</strong> (e.g. cutout, huesaturation, 90 rotate)</li> <li><strong>Stacking of stacked models</strong> - seemed like it overfits, but not too much</li> <li><strong>FaceForensics++</strong> - the most I managed to get from this thing is 0.68 public LB</li> </ul> <h2>Links</h2> <ul> <li><strong>Refactored inference</strong> - <a href="https://www.kaggle.com/ims0rry/dfdc-inference-15-lb-private">https://www.kaggle.com/ims0rry/dfdc-inference-15-lb-private</a></li> <li><strong>Original inference</strong> - <a href="https://www.kaggle.com/ims0rry/inference-demo">https://www.kaggle.com/ims0rry/inference-demo</a></li> <li><strong>The whole project on github</strong> - <a href="https://github.com/1M50RRY/dfdc-kaggle-solution">https://github.com/1M50RRY/dfdc-kaggle-solution</a></li> </ul>
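<p>For the final step above (a frame-wise weighted ensemble of the five stacked models, weighted by their validation scores), a minimal sketch could look like the following; the inverse-loss weighting is an assumption, since the exact weighting rule is not spelled out in the write-up.</p> <pre><code>import numpy as np

def frame_wise_weighted_ensemble(frame_preds, val_losses):
    """frame_preds: (n_models, n_frames) per-frame fake probabilities.
    val_losses: per-model validation log loss; a lower loss gets a larger weight."""
    frame_preds = np.asarray(frame_preds, dtype=np.float64)
    weights = 1.0 / np.asarray(val_losses, dtype=np.float64)
    weights /= weights.sum()
    per_frame = np.average(frame_preds, axis=0, weights=weights)  # blend models frame by frame
    return float(per_frame.mean())                                # then pool frames into a video score
</code></pre>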
Google QUEST Q&A Labeling
1st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all teams and especially to daring solo players! Thanks to organizers as well!</p> <p>Tremendous feeling for the whole team. I happened to finish one place away from the golden zone in 2 NLP competitions, and there it is. A few posts below I wrote down "respect your efforts, not results" and you'll be there. There is always a room for such motivational quotes. I don't know whether it's harder to compete with yourself or with such great minds. Thanks to the team ( @ddanevskiy , <a href="/kashnitsky">@kashnitsky</a> , <a href="/dmitriyab">@dmitriyab</a> ) for this thrilling race. And thanks to the whole community for ideas, criticism, and a sense of humor. </p> <p><strong>EDIT</strong></p> <p>The community seems to be waiting for a killer feature from us. I don’t reckon we’ve got a single one. Our solution is comprised of three cornerstones: - pretrained language models - pseudo-labeling - post-processing (yes we did one)</p> <p><strong>The basic pipeline</strong></p> <p>Less than 3 weeks ago just when we were done with the TF Q&amp;A competition, we jumped into this one. Oh yes, there were already so many great NLP folks seen on the leaderboard. We started from a public pytorch baseline [(0.377 I guess)] (<a href="https://www.kaggle.com/phoenix9032/pytorch-bert-plain">https://www.kaggle.com/phoenix9032/pytorch-bert-plain</a>) (kudos <a href="/phoenix9032">@phoenix9032</a>) and managed to update it effectively (0.396 for BERT-base and 0.402 for BERT-large) with a couple of tricks: - GroupKFold with question_title groups - Multi-Sample Dropout for Accelerated Training and Better Generalization, <a href="https://arxiv.org/pdf/1905.09788.pdf">paper</a> - Different learning rate settings for encoder and head (deserves a paper <a href="/ddanevskyi">@ddanevskyi</a>) - Using CLS outputs from all BERT layers rather than using only the last one. We did this by using a weighted sum of these outputs, where the weights were learnable and constrained to be positive and sum to 1</p> <p>We experimented with separate weights for different targets and with various loss functions but eventually ended up sticking to simple BCE loss. </p> <p><strong>Pretrained language models</strong></p> <p>When <a href="/dmitriyab">@dmitriyab</a> joined our team, he introduced stackexchange (SE) pretrained LMs with 110k-long dictionary including code and other domain tokens. In addition to the common MLM task, the model also predicted 6 auxiliary targets (<code>question_score, question_view_count, question_favorite_count, answer_score, answers_count, is_answer_accepted</code>) with a linear layer on top of pooled LM output. That looked beneficial enough for me to drop my code replacement experiments and update the existing bert pipeline with a huge pretrained embedding layer -&gt; 0.414. In the end <a href="/dmitriyab">@dmitriyab</a> and <a href="/kashnitsky">@kashnitsky</a> trained a few LMs (bert-large, roberta, roberta-large) on 7M SE data using whole-word-masking, but the initial model was still the best. 
</p> <p><strong>Pseudo-labeling</strong></p> <p>Pseudo-labeling is a well known semi-supervised learning technique within the Kaggle community and sometimes I wonder how they reinvent stuff in academic papers like <a href="https://arxiv.org/pdf/1911.04252.pdf">this one</a> (btw a great result). So we labeled additional 100k samples using our three basic bert models. First experiments with pseudo-labels showed huge improvement for SE pretrained bert (0.414 -&gt; 0.445). That was huge. But the leaderboard did not agree.</p> <p>Previously, my teammates and I used models from all folds to create pseudolabels. This strategy worked well in many cases including recent Recursion competition, where my team finished 6th. <a href="/ddanevskyi">@ddanevskyi</a> used the same approach for his 3rd place on Freesound. This way of creating pseudolabels gives quite accurate predictions as you are using a fold ensemble, but has an unobvious pitfall. If the pseudolabeled set contains a lot of similar samples to the training set, you could expect that some actual training labels could leak through those samples, even if there is no “direct” target leakage. </p> <p>Imagine that you’ve trained a 5-fold model on the training set then used those 5 folds to create pseudolabels on some external dataset. Then you add those pseudolabels to the training part of each of 5 folds. Very likely you will see an overoptimistic score on the validation part as 4 out of 5 models used to generate pseudolabels have “seen” the current validation set through their training process.</p> <p>We could expect a lot of very similar questions/answers in SO&amp;SE data, so we clearly couldn’t rely on that sort of pseudolabels. In order to fix the problem, we generated 5 different sets of pseudolabels where for each train/val split we used only those models that were trained using only the current train set. This gave much less accurate pseudolabels, but eliminated any possible source of target leakage.</p> <p>As expected, it didn’t bring such an improvement compared to a “leaked” approach (0.414 -&gt; 0.422) but that was a wise decision in terms of tackling overfitting. </p> <p><strong>BART</strong></p> <p>One week prior to the deadline, we noticed <a href="/christofhenkel">@christofhenkel</a> ’s post in the external data thread - the one about <a href="https://github.com/pytorch/fairseq/tree/master/examples/bart">fairseq BART</a> . Autoregressive sequence-to-sequence model for text classification? Looked strange to me but at the same time that was something new and daring, and we wouldn’t ignore a hint by such a prominent kaggler. It took us three whole days to train BART on 1080ti with batch size 2. The model scored 0.427 locally and it’s predictions decorrelated well with those of BERT. </p> <p>We also trained a few other bert-base, bert-large, roberta-base, and roberta-large models. They all scored around 0.420 locally but didn’t improve drastically our bert+bart baseline. </p> <p><strong>Post-processing</strong></p> <p>And finally, post-processing. Yes we did one. Based on <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">this discussion</a> we found that discretization of predictions for some challenging targets led to better spearman corr. So we used no thresholds but discretized such columns based on their distribution in the training set. That simple heuristics boosted score by 0.027-0.030 for almost every single model (up to 0.448 bert-base, 0.454 bart). 
</p> <p><strong>Finish</strong></p> <p>Our local validation and public score stayed the same for the whole last week - 0.468. Yesterday, since early morning with all this berts, barts, robertas we were not able to reproduce public score :harold:, having only 0.462 submissions but they seem to work the best -&gt; 0.430-0.433 private leaderboard (unfortunately we didn’t select the best one). But that was good enough. </p> <p><strong>What didn’t happen to work well</strong></p> <ul> <li>StackExchange meta-features used to boost our score in the very beginning but in the end they made no significant difference</li> <li>GPT, GPT2-medium, GPT2-large showed mediocre scores</li> <li>backtranslation</li> <li>stacking. Instead, linear blends worked unreasonably well</li> </ul> <p>Wonderful teamwork and team patience (not really haha). Thanks to you all. We are making our submission kernels and datasets public. By the end of the day I’m releasing our code on GitHub. </p> <p><strong>EDIT</strong> Code released <a href="https://github.com/oleg-yaroshevskiy/quest_qa_labeling">here</a>. There are 3 separate branches. I'll update with a nice readme soon. </p> <p><em>Oleg Yaroshevskiy, Dmitriy Danevskiy, Yury Kashnitsky, Dmitriy Abulkhanov</em></p>
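<p>One of the pipeline tricks listed at the top (a weighted sum of the [CLS] outputs from all BERT layers, with weights that are learnable, positive, and sum to 1) can be sketched as below; the softmax parameterization is an assumption, since any positive sum-to-one parameterization satisfies the description.</p> <pre><code>import torch
import torch.nn as nn

class WeightedLayerCLSPool(nn.Module):
    """Blend the [CLS] vector from every encoder layer with learnable weights."""
    def __init__(self, num_layers):
        super().__init__()
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, all_layer_cls):
        # all_layer_cls: (num_layers, batch, hidden), one [CLS] vector per layer
        w = torch.softmax(self.layer_logits, dim=0)          # positive and summing to 1
        return (w[:, None, None] * all_layer_cls).sum(dim=0)  # (batch, hidden)
</code></pre>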
2019 Data Science Bowl
76th place solution with Kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Kaggle, the host and everyone who has actively shared ideas and kernels throughout the competition. My teammate <a href="https://www.kaggle.com/negi009" target="_blank">@negi009</a> and I are excited with the silver medal and the 3 dots that is going to appear under our profile pic (Kaggle expert :P)</p> <h3>Model 1:</h3> <p>I had 5 lgbm models with GroupKFold CV and validation was done using truncated samples. I have included features from event data of games such as: <em>false-steps-average-per-round</em>, <em>game-time-average-per-round</em> etc. for each game. These features were showing higher feature importances but minimal improvement on LB score. </p> <p>CV: 0.555, public LB: 0.545, private LB: 0.538</p> <h3>Model 2:</h3> <p><a href="https://www.kaggle.com/negi009" target="_blank">@negi009</a> had xgb models with 4 GroupKFold CV with validation on truncated samples. Prior assessments in test sets are also included in train data. Within each fold he had truncated samples with 2 different seeds. So he had a total of 8 xgb models. </p> <p>CV: 0.561, public LB: 0.547, private LB: 0.539</p> <h3>Model 3:</h3> <p>5 fold lgbm model from <a href="https://www.kaggle.com/fatsaltyfish/convert-to-regression-feature-test" target="_blank">https://www.kaggle.com/fatsaltyfish/convert-to-regression-feature-test</a></p> <h3><strong>Ensemble:</strong></h3> <p>Final prediction is done using a custom voting function between all the above models. Out of the 18 models if an accuracy group gets more than 12 votes, it is selected as the final prediction. In all other cases, average distance from boundaries of model's thresholds has been used to determine the final prediction. This voting mechanism was giving stable results in multiple submissions as well as in public and private leaderboard.</p> <p>Public LB: 0.549, private LB: 0.549</p>
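<p>The custom voting function described above (accept an accuracy group outright if it gets more than 12 of the 18 votes, otherwise fall back to a tie-break) might look roughly like this; the fallback shown is a simplification, since the original uses the average distance from each model's regression thresholds, which is not reproduced here.</p> <pre><code>import numpy as np
from collections import Counter

def vote_accuracy_group(model_preds, min_votes=13):
    """model_preds: predicted accuracy group (0-3) from each of the 18 models."""
    counts = Counter(int(p) for p in model_preds)
    group, votes = counts.most_common(1)[0]
    if votes &gt;= min_votes:          # "more than 12 votes" wins outright
        return group
    # Simplified fallback: rounded mean of the individual predictions
    # (the write-up instead uses distances to each model's thresholds).
    return int(np.clip(round(float(np.mean(model_preds))), 0, 3))
</code></pre>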
Google QUEST Q&A Labeling
7th place post-processing and ensembling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Post Processing</h2> <p>Our post-processing strategy is very similar to <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129840">the 1st place team</a>. </p> <p>The observation is that spearman is related to the ranks of samples, i.e. it based on the relative values but not the absolute values of the predictions and it prefer predictions that has the similar distribution to the labels.</p> <p>Our assumptions are:</p> <ol> <li><p>The model we built cannot predict the absolute values well for those very imbalanced columns like <code>question_type_spelling</code>, but it can, hopefully, differentiate the examples of higher labels and lower labels.</p></li> <li><p>For each column, the distribution of labels of the public/private test set are close to that of the trainning set.</p></li> </ol> <p>So what we have to do is just help our model to <strong>align</strong> the predictions according to the distribution of this column in the trainning set. It turns out that our assumptions may not hold true for every column, so we analyze the performance of this method on every column and ban the post-processing on several of them.</p> <p>The related notebook is <a href="https://www.kaggle.com/pyxiea/postprocessing-for-google-quest">Here</a></p> <h2>Ensembling</h2> <p>Our final solution consists of 5 models: bert-base-uncased, bert-wwm-uncased, bert-wwm-cased, bert-large-uncased, roberta-large. This schema lacks the diversity, but we cannot get good results on many other pre-trained models. We train each model with 5 folds and 5 epochs. </p> <p>We noticed that generally the better the original prediction, the more benefits we can gain from post-processing, so for each fold, we simply average predictions of 5 models to get a more accurate prediction and then apply PP on it, thus giving us 5 predictions for the test set. Then average the prediction of all folds and get only one prediction.</p> <p>It turns out that the average process, though very powerful and necessary, will change the distribution of the prediction to some extent, so we just heuristically apply the PP again to the prediction.</p>
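<p>The alignment idea above (keep the model's ranking but reshape each column's predictions to match that column's label distribution in the training set) can be sketched like this; the exact mapping in the linked notebook may differ, so treat it as one plausible implementation.</p> <pre><code>import numpy as np

def align_to_train_distribution(preds, train_labels):
    """Map ranked predictions onto the sorted training-label values for one column."""
    preds = np.asarray(preds, dtype=np.float64)
    values = np.sort(np.asarray(train_labels, dtype=np.float64))
    ranks = np.argsort(np.argsort(preds))        # 0 .. n_test-1, ties broken arbitrarily
    idx = (ranks * len(values)) // len(preds)    # stretch ranks onto the train-label histogram
    return values[np.clip(idx, 0, len(values) - 1)]
</code></pre> <p>Because Spearman correlation depends only on relative order, a monotone re-mapping like this largely preserves the ranking while deliberately creating ties on very imbalanced columns such as <code>question_type_spelling</code>, which is where this kind of post-processing is reported to help; as noted above, it is banned on the columns where the assumptions do not hold.</p>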
Deepfake Detection Challenge
Silver zone solo, no credits used.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I regret I did not join this competition earlier, I had just one month of intensive work. This should be my first silver medal. I have read many write ups about getting good score and many of them lack one detail: some guys work pretty hard to climb high - this is very important detail. When I read comments like: "I worked 10 hrs/day on this competition" - I could not believe(maybe it is a bit exaggerated), before that I thought - well, ok, 2-3 hrs/week. This time I quit my job partially because I wanted to join some Kaggle competition and take it seriously plus I had no computer vision experience at my job and DFDC seemed very interesting to me. Finally I can say that I spent 30-40 hrs/week in average on this competition, working also over the weekends. I did not use any external machines for training, I joined late and could not use credits. My PC specs are 32 GB RAM, 128 GB SSD + 256 GB SSD, GeForce GTX 1080 Ti. SSD helped a lot in this competition.</p> <p>I believe the main "magic thing" for me was proper train-validation technique. For example, I have trained EfficientNetB1 which receives 224x224 image as input and outputs a binary number(fake or real) using the next data organization: For each epoch: 1. Select single face image from unique real video(random frame). 2. Select random fake video which was originated from a real video in a way that you <em>do not take two or more fake videos</em> originated from one unique real video. Then you pick the random frame and random face from each fake. Eventually you get the same number of subsampled fakes as for real samples. 3. Every epoch you pick faces/frames/videos randomly. 4. Also you split train/validation set by fixed folder-wise split for all epochs. 1 EPOCH takes just 5-10 minutes(if you have all images extracted on your SSD)! And whole training takes just 1-1.5 hours! Finally I had validation score ~0.22-~0.25 for different models. I extracted 20 frames during inference, classified them and calculated the median of predicitons.</p> <p>I trained LRCNN(EFNB1 as backbone) as well, its input is only 3 consequent frames. My final solution is the average of three single frame models multiply by 0.85 and one LRCNN model multiply by 0.15.</p> <p>I tried toooooons of things, I trained Increption-ResNet V2 during 2 days, attached the fan to my PC and I even was afraid that I will wake up in the house full of smoke :) I believe I could do better if a day had 50 hours instead of 24, if I had teamed up with someone. This competition has no end, we can apply here infinite number of approaches.</p> <p>I would like to thank guys like Human Analog and Shangqiu Li - they deserve many upvotes. And thank you all for participating!</p>
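<p>The per-epoch sampling scheme described above (every real video contributes one random frame, and each real video contributes at most one of its fakes, so the epoch stays balanced) could be sketched as follows; the data-structure names are assumptions made for illustration.</p> <pre><code>import random

def sample_epoch(real_to_fakes):
    """real_to_fakes: dict mapping each real video id to the fake video ids derived from it.

    Returns one balanced list of (video_id, label) pairs for a single epoch;
    frames and faces are then picked at random per selected video."""
    epoch = []
    for real_id, fake_ids in real_to_fakes.items():
        epoch.append((real_id, 0))                       # the real video itself
        if fake_ids:
            epoch.append((random.choice(fake_ids), 1))   # at most one of its fakes
    random.shuffle(epoch)
    return epoch
</code></pre>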
2019 Data Science Bowl
Solo top 3% solution (silver medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Solution</h1> <h3>New Features</h3> <p>all running totals - duration of assessments, activities, games, and clips - correct move percentage (event_id 3021 / (3021 + 3020)) - rate of correct moves (3021 / (assessment duration + game duration)) - proportion of time in assessments - difference between two event_ids for right (3021 and 3121) and wrong moves (3020 and 3120) - I considered <a href="https://en.wikipedia.org/wiki/Learning_curve">learning curve</a> features as the linear regression of correct move percentage within each game world's activities, because each game world teaches a different topic. Players with a high value are at the start of the learning curve and learning quickly, while players with low values have plateaued, having thoroughly learned the topic. The target is the last assessment's result, not average performance, so it makes sense to measure the upward trend. I found that many players would have only had two points, so I did not implement this.</p> <h3><a href="https://www.kaggle.com/c/dont-overfit-ii/discussion/80242">NK-fold Prediction</a></h3> <p>This is running <em>k</em>-fold predictions <em>n</em> times.</p> <p>I chose models based on gradient boosting type and then Bayesian optimized the hyperparameters with Optuna. I can set <em>n</em> arbitrarily high to use up the 9 hours. Scores here are using QWK rounded to the train set's proportions because it was more stable than OptimizedRounder. (GBDT, GOSS, DART, and RF are boosting types in LGB.) &gt;Starting Keras [ 7 N 5 K] score: 0.59346, total fit time: 2 hour 1 min, average fit time: 3 min Starting gbdt [ 20 N 5 K] score: 0.59890, total fit time: 46 min, average fit time: 27 sec Starting goss [ 30 N 5 K] score: 0.59882, total fit time: 23 min, average fit time: 9 sec Starting dart [ 5 N 5 K] score: 0.59875, total fit time: 1 hour 9 min, average fit time: 2 min 47 sec Starting rf [ 10 N 5 K] score: 0.58182, total fit time: 24 min, average fit time: 29 sec Starting CatBoost [ 1 N 5 K] score: 0.59514, total fit time: 26 min, average fit time: 5 min Starting XGB [ 5 N 5 K] score: 0.60283, total fit time: 1 hour 18 min, average fit time: 3 min</p> <p>A neural network particularly benefits from higher <em>n</em> and is less correlated with other algorithms.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2535866%2F98201b802ab2215e4598b18500334198%2Flevel%201%20correlations.png?generation=1579827039622041&amp;alt=media" alt=""></p> <h3>NK-predict and Optuna Interactions</h3> <p>When I Optuna optimized with larger <em>n</em>, it responded by lowering regularization parameters, making slightly overfitted models. Using <em>k</em> of 5 rather than 10 or more produces better results probably also from overfitting. Overfitted models perform better in the ensemble because models reach nuances and retain their strengths while their weak areas cancel out.</p> <h3>Keras Tuning</h3> <ul> <li>dense layers [192, 256, 384, 512]. I arrange ascending powers of two with "half-steps" in between, and pick where the first layer should be. 
I find this useful because there are more complex relationships than simple ones, so you make room for complexity in later layers. Preliminary Optuna testing gives layers in the 200's with a smaller last layer, so I might change this heuristic in the future.</li> <li>dropout 0.45. Optuna recommended over 0.60 for 400 features but for time I chose 0.45.</li> <li>scaled with the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html">Yeo-Johnson power transformation</a> over MinMaxScaler, StandardScaler, RobustScaler, MaxAbsScaler, and QuantileTransformer, all in sklearn. I don't currently have the results but <a href="https://www.kdnuggets.com/2019/04/normalization-vs-standardization-quantitative-analysis.html">here</a> is a detailed comparison of other datasets and algorithms.</li> <li>compared layer order &gt; Starting LayerNormalization Activation [ 10 N 5 K] score: 0.591645, total fit time: 1 hour 22 min, average fit time: 3 min Starting BatchNormalization Activation [ 10 N 5 K] score: 0.591235, total fit time: 1 hour 19 min, average fit time: 3 min Starting Activation LayerNormalization [ 10 N 5 K] score: 0.589965, total fit time: 1 hour 23 min, average fit time: 3 min Starting Activation BatchNormalization [ 10 N 5 K] score: 0.593635, total fit time: 1 hour 13 min, average fit time: 2 min 56 sec</li> </ul> <p>I selected the last one.</p> <h3>3 Level Ensemble</h3> <p><strong>level 2</strong></p> <p>I stacked level 1 predictions with most manually created features and used LGB and linear regression. Old_qwk is rounding to the train set's proportions and new_qwk is the OptimizedRounder. &gt; LGB level 2 RMSE: 0.96753, old_qwk: 0.60749, new_qwk: 0.61472 boundaries: [0.98744333 1.62733313 2.29026382] LR level 2 RMSE: 0.96682, old_qwk: 0.61039, new_qwk: 0.61572 boundaries: [1.01671793 1.49101464 2.28878774]</p> <p>I also used TensorFlow's GradientTape with each weight directly SGD optimizing a weighted average (similar to <a href="https://www.kaggle.com/barnrang/ashrae-leak-validation-gradient-descent-search">this kernel</a>). &gt; SGD level 2 RMSE: 0.96523, old_qwk: 0.61146, new_qwk: 0.61610 boundaries: [1.01336522 1.50943782 2.27812701] weights: keras[0.3242] gbdt[0.0434] goss[0.0296] dart[-0.2894] rf[0.3581] cat[-0.0589] xgb[0.5994], total[1.0064]</p> <p>It commonly gives negative weights for more than four inputs, something a human would never do. I thought about this for a while, looked it up online and didn't find anything, and concluded it likely has an equivalent all-positive solution, and it seems to work, so I keep it.</p> <p><strong>level 3</strong></p> <p>SGD weighted average of level 2 &gt; SGD level 3 RMSE: 0.96533, old_qwk: 0.61089, new_qwk: 0.61606 boundaries: [0.99402423 1.69609969 2.17251209] weights: SGD2[0.4853] LR2[0.1978] LGB2[0.3187], total[1.0018]</p> <p>All weights are positive this time. LGB started off at around 0.2 and rose to 0.3 after Optuna. My level two and three ensembles give similar results. 
Although SGD level 2 had slightly better scores than SGD3, SGD3 had more stable boundaries, so I used SGD3.</p> <h3>Miscellaneous</h3> <ul> <li>didn't normalize the test set's column mean with train, which was good because it would have lowered the private score by an average 0.018</li> <li>used OptimizedRounder over rounding to train's proportions for its higher median and variance</li> <li>tried clustering columns by game world, didn't help for me</li> </ul> <h3>The Last Two Days and Shake-up</h3> <p>On the second to last day, I submitted four combinations of clustering features into game worlds and normalizing test with train, and found that adjusting the test set did not do well. On the last day, I submitted four again with a different seed. That brought me to 26 entries. One entry had the best RMSE and QWK in both optimizations and I picked that one even though the public score was low.</p> <p>| entry | <strong>RMSE (lower is better)</strong> | <strong>proportional QWK (higher is better)</strong> | <strong>OptimizedRounder QWK (selected)</strong> | <strong>public score</strong> | <strong>private score</strong> |
| --- | --- | --- | --- | --- | --- |
| +clu +adj 1 | 0.96954 | 0.60516 | 0.59382 | 0.522 | 0.524 |
| +clu +adj 2 | 0.96854 | 0.60573 | 0.61122 | 0.519 | 0.527 |
| +clu -adj 1 | 0.96768 | 0.60663 | 0.59914 | 0.541 | 0.547 |
| +clu -adj 2 | 0.96907 | 0.60416 | 0.61302 | 0.538 | 0.551 |
| -clu +adj 1 | 0.96894 | 0.60516 | 0.61102 | 0.531 | 0.536 |
| -clu +adj 2 | 0.96732 | 0.60709 | 0.60954 | 0.529 | 0.537 |
| -clu -adj 2 | 0.96761 | 0.60752 | 0.61302 | 0.526 | 0.545 |
| <strong><em>-clu -adj 1</em></strong> | <strong><em>0.96533</em></strong> | <strong><em>0.61089</em></strong> | <strong><em>0.61606</em></strong> | <strong><em>0.525</em></strong> | <strong><em>0.548</em></strong> |</p> <p>In hindsight, that was the one to pick. Clustering would have lowered the private score by 0.004 on average and adjusting 0.018. (The other submission I selected was elsewhere, 0.546 on both public and private.)</p> <p>There is a quirk with QWK that I was not aware of until the competition ended. James Trotman <a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/127203#727005">quoted</a> in his shake-up thread from <a href="https://www.kaggle.com/c/prudential-life-insurance-assessment/discussion/19010#108362">Gábor S</a>:</p> <p>&gt; - this yielded a very interesting insight: given two calibrations with roughly equal average performance (over all folds), if calibration A does better on the public test set, calibration B is very likely to outperform A on the private set (this appears to be a quirk of the kappa metric)</p> <p>That is exactly what I observe here every time: the entry with the lower public score has the higher private score; moreover, the difference is roughly the same amount.</p> <h3>Gradient Descent Hyperparameter Optimization</h3> <p>Gradient descent can handle many more hyperparameters than Bayesian methods. The authors of <a href="https://arxiv.org/pdf/1502.03492.pdf">Gradient-based Hyperparameter Optimization through Reversible Learning</a> used SGD to optimize thousands of neural network hyperparameters at a time, with a separate learning rate for every epoch. I did not get a Keras model to run in a TensorFlow custom train function as I did for a weighted average equation. With gradient descent, you can simultaneously optimize hyperparameters of every model of an ensemble, while creating many different models of diverse strengths. I'll be doing more of this in the future.</p>
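<p>The SGD-optimized weighted average used in the level 2 and level 3 ensembles can be sketched with <code>tf.GradientTape</code> as below; the shapes, loss (RMSE of the blend) and hyperparameters are assumptions for illustration. Note that the weights are left unconstrained, which is exactly why negative weights like those reported above can appear.</p> <pre><code>import numpy as np
import tensorflow as tf

def fit_blend_weights(oof_preds, targets, steps=500, lr=0.05):
    """oof_preds: (n_models, n_samples) out-of-fold predictions; targets: (n_samples,)."""
    preds = tf.constant(np.asarray(oof_preds), dtype=tf.float32)
    y = tf.constant(np.asarray(targets), dtype=tf.float32)
    n_models = preds.shape[0]
    w = tf.Variable(tf.fill([n_models, 1], 1.0 / n_models))   # start from a plain average
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            blended = tf.reduce_sum(w * preds, axis=0)
            loss = tf.sqrt(tf.reduce_mean(tf.square(blended - y)))  # RMSE of the blend
        grads = tape.gradient(loss, [w])
        opt.apply_gradients(zip(grads, [w]))
    return w.numpy().ravel()
</code></pre>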
2019 Data Science Bowl
Silver Medal Solution (that was not selected) :(
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank my team @bhavikapanara , @braquino , @muhakabartay, @yassinealouini , @manojprabhaakr, @kranthi9 and @carlolepelaars for all their efforts in the project. We didn't select this submission and tried building on this kernel (which didn't work)</p> <p>But there is a lot of work done by @braquino and @bhavikapanara in the following kernel, when it comes to feature generation</p> <p><a href="https://www.kaggle.com/roydatascience/silver-medal-solution-data-science-bowl-2019?scriptVersionId=26772494">https://www.kaggle.com/roydatascience/silver-medal-solution-data-science-bowl-2019?scriptVersionId=26772494</a></p> <p>Please upvote this kernel! if you like our ideas. </p>
Bengali.AI Handwritten Grapheme Classification
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We appreciate the efforts Bengali.AI and Kaggle spent on this competition, and congrats to all the winners. We are very impressed by your interesting findings, novol methods, deep understanding of the problem, the dataset, as well as the evaluation metric.</p> <p>Our solution is relatively straightforward and bears similarities with some of the top solutions.</p> <p>Let's call the 1295 graphemes in the train set ID (in-dictionary), and the unknown graphemes in the test set OOD (out-of-dictionary). The first step was training Arcface models and computing the feature centers of each of the 1295 graphemes. We can then tell if a test image is ID or OOD based on its smallest feature distance to every grapheme centers. In our 4th place submission, we set the classification threshold to 0.15 (cosine distance), which was estimated locally. After this ID/OOD classification, we applied the Arcface models to the ID set only and determined the component classes by the grapheme classes. For the OOD set, we trained another group of 1-head and 3-head component models and applied them to the OOD set only.</p> <p><strong>More of the Arcface models:</strong> network structures: inception_resnet_v2 and seresnext101 input image preprocessing: plain resizing input image size: 320x320 loss: Arcface target: 1295 grapheme classification total training epochs: 90 augmentations: cutout only</p> <p><strong>More of the OOD models:</strong> network structures: inceptions, resnets, densnets, efficientnets ... ... ... input image preprocessing: <a href="https://www.kaggle.com/iafoss/image-preprocessing-128x128">https://www.kaggle.com/iafoss/image-preprocessing-128x128</a> input image size: various sizes from 256x256 to 416x416 loss: CE target: 3-head for vowel_diacritic and consonant_diacritic, 1-head for grapheme_root total training epochs: 10 (more training harmed the OOD performance) augmentations: mixup only</p>
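<p>A minimal sketch of the ID/OOD split described above, assuming the Arcface embeddings are already extracted: build one feature center per grapheme from the train set, then route each test image by its smallest cosine distance to the centers (0.15 in the 4th place submission). Function and array names are my own, not the authors'.</p> <pre><code>import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def grapheme_centers(train_feats, train_labels, n_classes=1295):
    # one mean embedding per seen grapheme class
    centers = np.stack([train_feats[train_labels == c].mean(axis=0) for c in range(n_classes)])
    return l2_normalize(centers)

def split_id_ood(test_feats, centers, threshold=0.15):
    cos_dist = 1.0 - l2_normalize(test_feats) @ centers.T   # (n_test, 1295)
    min_dist = cos_dist.min(axis=1)
    nearest_grapheme = cos_dist.argmin(axis=1)
    is_id = np.less(min_dist, threshold)   # in-dictionary if close enough to some center
    return is_id, nearest_grapheme
</code></pre>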
Bengali.AI Handwritten Grapheme Classification
Our simple approach 😮 and Gold Solutions Summary 💪
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, Kaggle community 🖖</p> <p><a href="https://github.com/Mishunyayev-Nikita/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification-2020-solution">Here</a> you can find the code with our solution for this competition with huge shake-up 😲 and a list of cool approaches from the winners that helped them win 💯 </p> <p>Thanks to the whole Kaggle community and good luck!</p>
Bengali.AI Handwritten Grapheme Classification
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats everyone with excellent result.</p> <p>Here is our solution summary.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1620223%2Fb2143c50d9c92271e7b871912dd171d7%2Fbengali_pipeline%20.jpg?generation=1584447993610250&amp;alt=media" alt=""></p> <h1>Solution summary</h1> <h2>Dataset</h2> <ul> <li><strong>pretrain</strong>: triple identities by hflip and vflip</li> <li>image resolution: 137 x236</li> </ul> <h2>Model</h2> <p><strong>for seen grapheme and unseen grapheme</strong> * (phalanx) encoder -&gt; gempool -&gt; batch_norm -&gt; fc * (earhian) encoder -&gt; avgpool -&gt; batch_norm -&gt; dropout -&gt; fc</p> <p><strong>arcface</strong> * encoder -&gt; avgpool -&gt; conv1d -&gt; bn * s 32(train), 1.0(test) * m 0.5</p> <p><strong>stacking</strong> * conv -&gt; relu -&gt; dropout -&gt; fc</p> <h2>Augmentation</h2> <ul> <li>cutmix</li> <li>shift, scale, rotate, shear</li> </ul> <h2>Training</h2> <ul> <li><strong>seen grapheme</strong>: train model for seen grapheme with pretrain dataset, then finetune it with original dataset</li> <li><strong>arcface and unseen grapheme</strong>: train pretrained model for seen grapheme with original dataset</li> <li>replace softmax with <a href="https://arxiv.org/abs/1911.10688">pc-softmax</a></li> <li>loss function: negative log likelihood</li> <li>SGD with CosineAnnealing</li> <li>Stochastic Weighted Average</li> </ul> <h2>Inference</h2> <p><strong>arcface</strong> * use cosine similarity between train and test embedding feature * threshold: smallest cosine similarity between train and validation embedding feature</p> <p><strong></strong></p>
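<p>For readers unfamiliar with the Arcface head used above (s = 32 at training time, m = 0.5), here is a generic sketch of an additive-angular-margin layer. It follows the standard formulation rather than this team's exact code, and the reduced scale at test time (s = 1.0) is left to the caller.</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Generic ArcFace head: cosine logits with an additive angular margin on the target class."""
    def __init__(self, in_features, n_classes, s=32.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, in_features) * 0.01)
        self.s, self.m = s, m

    def forward(self, embeddings, labels=None):
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        if labels is None:                      # inference: plain cosine similarities
            return cosine
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target = torch.cos(theta + self.m)      # add the angular margin on the true class only
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        return self.s * (one_hot * target + (1.0 - one_hot) * cosine)
</code></pre>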
Bengali.AI Handwritten Grapheme Classification
Potential 6th place solution with a single model! [all about layers]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello! After quite a rough LB shakeup , we ended being 317th in the private LB from 79th in public, however , scrolling down through our submissions , we did find one low scoring public kernel , which could have given us ~70th place on private with an LB of 9356. However , on using post-processing as Chris mentioned <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">here</a> , we managed to hit a private LB score of 9557 , with public LB 9703! Which would potentially give us 6th place as the private leaderboard stands right now.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3459992%2F95ceecb84fb83408fa0a27fc8dad1c60%2Foof.PNG?generation=1584697688199528&amp;alt=media" alt=""></p> <p>This was really unexpected for us , because it was a really low score for the public LB and we had been stuck there for about a month. </p> <h2>The Solution -</h2> <p>Briefly , the solution is really simple. - We use a single model , a SEResNeXt50 from pretrainedmodels, with 3 different output heads , one for each class, as used by iafoss in <a href="https://www.kaggle.com/iafoss/grapheme-fast-ai-starter-lb-0-964">this</a> notebook. - Data used was 128x128 images , cropped using <a href="https://www.kaggle.com/iafoss/image-preprocessing-128x128">this</a> notebook. - For training , we used a basic Shift , Scale , Rotate , and Mixup only. Training was done for ONLY 30 epochs over a OneCycle LR policy , and Over9000 optimizer.</p> <p>With this , we get a public LB of ~966 , and private LB ~92. Now, this is where the magic comes. 
With the following changes , we make a straight jump to a private LB of 9557, and public LB 9703, which is the main part of our solution.</p> <ul> <li>Change every AdaptiveAvgPool2D layer to GeM layer</li> <li>Change every AdaptiveConcatPool2D layer to GeM layer</li> <li>Removed Dropout and Weight Decay (refer to <a href="https://arxiv.org/abs/1912.11370">this paper</a> )</li> <li>Used Group Normalization instead of Batch Normalization</li> <li>Added weight standardization to all vanilla Conv2D layers</li> </ul> <p>Codes for the same:-</p> Generalized Mean Pooling (GeM) - <p><code>def gem(x, p=3, eps=1e-6): return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1./p)</code></p> <p><code>class GeM(nn.Module): def __init__(self, p=3, eps=1e-6): super(GeM,self).__init__() self.p = Parameter(torch.ones(1)*p) self.eps = eps def forward(self, x): return gem(x, p=self.p, eps=self.eps) <br> def __repr__(self): return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'</code></p> Con2D with Weight Standardization - <p>`class Conv2d(nn.Conv2d):</p> <pre><code>def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True): super(Conv2d, self).__init__(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias) def forward(self, x): weight = self.weight weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True) weight = weight - weight_mean std = weight.view(weight.size(0), -1).std(dim=1).view(-1, 1, 1, 1) + 1e-5 weight = weight / std.expand_as(weight) return F.conv2d(x, weight, self.bias, self.stride, self.padding, self.dilation, self.groups)` </code></pre> Codes to convert the model - <p><code>def convert_to_gem(model): for child_name, child in model.named_children(): if isinstance(child, nn.AdaptiveAvgPool2d): setattr(model, child_name, GeM()) else: convert_to_gem(child)</code></p> <p><code>def convert_to_conv2d(model): for child_name, child in model.named_children(): if child_name not in ['fc1','fc2']: if isinstance(child, nn.Conv2d): in_feat = child.in_channels out_feat = child.out_channels ker_size = child.kernel_size stride = child.stride padding = child.padding dilation = child.dilation groups = child.groups setattr(model, child_name, Conv2d(in_channels=in_feat, out_channels=out_feat, kernel_size=ker_size, stride=stride, padding = padding, dilation=dilation, groups=groups)) else: convert_to_conv2d(child)</code></p> <p><code>def convert_to_groupnorm(model): for child_name, child in model.named_children(): if isinstance(child, nn.BatchNorm2d): num_features = child.num_features setattr(model, child_name, GroupNorm(num_groups=32, num_channels=num_features)) else: convert_to_groupnorm(child)</code></p> <p>If you have any ideas , or suggestions please let us know! We also have a feeling using non-cropped images , and some other augmentations would have given us even better results. Thanks for reading! </p> <p>EDIT - Here are the notebooks for this -</p> <p>Inference - <a href="https://www.kaggle.com/p4rallax/private-0-9557/">https://www.kaggle.com/p4rallax/private-0-9557/</a></p> <p>Training - <a href="https://www.kaggle.com/virajbagal/potential-6th-place-lb-magic-layers-pp">https://www.kaggle.com/virajbagal/potential-6th-place-lb-magic-layers-pp</a></p>
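<p>For completeness, a short usage sketch of the conversion helpers above applied to the backbone. The <code>pretrainedmodels</code> call is my assumption about how the SEResNeXt50 was instantiated; the three helpers are the ones defined in this post and modify the model in place.</p> <pre><code>import pretrainedmodels

# Assumed backbone construction (the post only says "a SEResNeXt50 from pretrainedmodels").
model = pretrainedmodels.__dict__['se_resnext50_32x4d'](num_classes=1000, pretrained='imagenet')

convert_to_gem(model)        # swaps AdaptiveAvgPool2d layers for GeM pooling
convert_to_conv2d(model)     # swaps plain Conv2d layers for weight-standardized Conv2d
convert_to_groupnorm(model)  # swaps BatchNorm2d layers for GroupNorm with 32 groups
</code></pre>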
Bengali.AI Handwritten Grapheme Classification
[RECAP] 41st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone!</p> <h1>Story</h1> <p>I've joined this competition in the first days of January, with the purpose of learning computer vision from both theoretical and practical sides. I thought that it would be a good idea to finally learn how to write a dl pipeline using config files. I also wanted to dive deeper into Catalyst framework, so Catalyst Config API seemed to fit perfectly.</p> <p>For the first two weeks, I couldn't get a score higher than 0.955 whilst there were 0.96+ kernels. I'd thought that picking a dataset with cropped images is a good idea since I didn't want the model to memorize the position of the character to not overfit on public lb.</p> <p>Following pieces of advice on forum, I started training with mixup and cutmix. At that moment there was no Cutmix implementation in Catalyst and Mixup implementation didn't support multi-output. So, I adopted code from forum and wrote MixupCutmixCallback, which later resulted in PR with Cutmix implementation to Catalyst library. It was my first time contributing something to open-source deep learning project.</p> <p>Mixup + Cutmix had given some improvement and I finally crossed 0.955 border. The next step was to go beyond high scoring public kernels (the best one had a score of 0.966 at that time). I tried different architectures (primarily sticked to Efficient Net series), optimizers, schedulers, lrs, losses, augmentations, but 0.966 was still far from me. I got very close to it after I added finetuning on 224x224 images (the first stage was on 128x128 cropped images). </p> <p>On the very next day after I got close to 0.966 kernel a new 0.97 kernel came out, so I had to do the climb all over again and at that time I'd tried tweaking almost every possible thing in my pipeline. I <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/131358">also</a> felt like I was doing some basic thing very wrong, but couldn't identify which exactly. I thought the problem was in misunderstanding some concept in Catalyst, so I decided to rewrite my whole pipeline to Pytorch. This is definitely not something you want to do taking into account there were 3.5 weeks before the end of the competition. Before I started rewriting my pipeline to Pytorch, I decided to train on uncropped images with Catalyst. On the next day I'd finished my Pytorch pipeline and decided to look at the results of training with uncropped images: it was the first time my CV crossed 0.97, resulting in 0.968 LB. Of course, I decided to stay with Catalyst😄 . I took a deeper model and finally beat the 0.97 kernel. </p> <p>Again, I was out of ideas and again I went on the forum. I skimmed through the main topics and even went through all the posts written by top-scorers individually. I decided to implement the tricks from <a href="https://arxiv.org/abs/1912.11370">this</a> paper posted by <a href="/drhabib">@drhabib</a> , but none of them seem to work. </p> <p>I started reading other papers, which had been posted on the forum and decided to improve my augmentations. I couldn't fit the geometric ones and thought to tweak <code>alpha</code> in Mixup and Cutmix. 
I decided to go aggressive with Mixup and set <code>alpha = 4</code>, which I think is pretty reasonable as there are unseen graphemes in the test set.</p> <p>In the last week, I'd switched to original images, trained 5-fold B3 and added 3-epoch finetuning with no augmentations, which yielded 0.985. I had a pretty small CV/LB gap and decided to choose my highest scoring model.</p> <h1>Configuration</h1> <ul> <li>Efnet B3 with weights from Noisy Student </li> <li>One model with three heads</li> <li>opt_level O2</li> <li>137x236 images</li> <li>CrossEntropyLoss. I guess I could spend some more time tweaking OHEM, it should give a better result.</li> <li>Mixup (alpha = 4) + Cutmix (alpha = 1), each with probability of 0.5</li> <li>loss weights: 7, 1, 2 respectively</li> <li>100 epochs</li> <li>AdamW optimizer</li> <li>OneCycleSchduler</li> <li>finetuning 3 epochs with no augs also on 137x236</li> </ul> <p>You can find my complete config files in the link to my repo provided below.</p> <h1>Forum</h1> <p>Thanks, to the people who shared actively on the forum. Namely: <a href="/drhabib">@drhabib</a> , <a href="/pestipeti">@pestipeti</a> , <a href="/hengck23">@hengck23</a> , <a href="/machinelp">@machinelp</a> , <a href="/haqishen">@haqishen</a> . You are not just giving hints, but implicitly encourage others to participate in discussions and share as well, as a form of expressing gratitude for what we have learned from your posts and kernels (poor wording, but I hope you get the idea😅 ).</p> <p>The other important thing related to the forum is I think in this competition many people trapped themselves into hyper-parameters tweaking (myself included). Most posts for this comp were about architecture, number of epochs, augmentations, and even optimizers. I guess some of us can relate that they were jumping back and forth between parameter options after some high-ranked users posted some information about his/her pipeline. </p> <p>A very few people were discussing validation and strategies to deal with unseen graphemes. </p> <p>The challenge here is the more information we have the harder to find the truth. So, it is very crucial to focus on things that matter. </p> <h1>Takeaways</h1> <ul> <li>Lab book to track your experiments</li> <li>Cam activations to better understand what is going on with you model thx <a href="/cdeotte">@cdeotte</a> </li> <li>Better error-analysis in general</li> <li>Data preprocessing (resize, file extension, etc) which allow you to conduct more experiments</li> <li>Use forum wisely</li> <li>Don't be afraid to ask questions on forum. The worst question is the question that hasn't' been asked.</li> </ul> <p>And congratulations to everyone who had taken part in this competition regardless of their score!</p> <p>You can find my code here:</p> <p><a href="https://github.com/LightnessOfBeing/kaggle-bengali-classification">https://github.com/LightnessOfBeing/kaggle-bengali-classification</a></p>
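<p>As a concrete reading of "Mixup (alpha = 4) + Cutmix (alpha = 1), each with probability of 0.5", here is a hedged sketch that flips a coin per batch between the two augmentations. The exact scheduling inside the original Catalyst callback may differ, so treat this interpretation, and all names in it, as assumptions.</p> <pre><code>import numpy as np
import torch

def rand_bbox(h, w, lam):
    """Random box whose area fraction is roughly (1 - lam), as in the CutMix paper."""
    cut = np.sqrt(1.0 - lam)
    ch, cw = int(h * cut), int(w * cut)
    cy, cx = np.random.randint(h), np.random.randint(w)
    return (np.clip(cy - ch // 2, 0, h), np.clip(cy + ch // 2, 0, h),
            np.clip(cx - cw // 2, 0, w), np.clip(cx + cw // 2, 0, w))

def mixup_or_cutmix(images, mixup_alpha=4.0, cutmix_alpha=1.0):
    images = images.clone()                    # avoid mutating the caller's batch
    perm = torch.randperm(images.size(0))
    if np.random.binomial(1, 0.5):             # Mixup branch
        lam = np.random.beta(mixup_alpha, mixup_alpha)
        images = lam * images + (1.0 - lam) * images[perm]
    else:                                      # CutMix branch
        lam = np.random.beta(cutmix_alpha, cutmix_alpha)
        y1, y2, x1, x2 = rand_bbox(images.size(2), images.size(3), lam)
        images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
        lam = 1.0 - (y2 - y1) * (x2 - x1) / (images.size(2) * images.size(3))
    # per-head loss is then: lam * CE(pred, y) + (1 - lam) * CE(pred, y[perm])
    return images, perm, lam
</code></pre>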
Bengali.AI Handwritten Grapheme Classification
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>At first, congratulations to the winner <a href="/linshokaku">@linshokaku</a> and all participants who finished this competition. <br> We did not imagine such a big shake ( We estimated that there would not be so much unseen data in private test dataset ). </p> <p>Anyway thank you Bengali.AI and kaggle for organizing this competition, also to my teammate <a href="/nejumi">@nejumi</a> !</p> <hr> <p>Below figure shows an overview of our solution. I think our solution is very simple and nothing special. <br> ( BTW 1st place solution is very unique and impressive for me ;-) ) Here I will add naive explanations to some points worth to be noted. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F479538%2F7b541b6993070fbf37181aa631843011%2FBengali_AI.png?generation=1584423861453528&amp;alt=media" alt=""></p> <p>NOTE: If you want to see a higher resolution image, please refer this <a href="https://speakerdeck.com/hoxomaxwell/kaggle-bengali-dot-ai-6-th-place-solution">link</a>.</p> <ol> <li>Intensive augmentations We used CutMix, MixUp, Cutout, width/height shift, rotate, Erosion, <a href="https://www.kaggle.com/haqishen/gridmask">GridMask</a> ( credit to <a href="/haqishen">@haqishen</a> ), Zoom... <br> CutMix and Cutout worked well for both of us, but in my case, height shift and rotation did bad, maybe due to 137 x 236 size (some graphemes become outside image region :-( ). <br> <br></li> <li>Co-occurence There are limited combinations between R, V and C in train dataset. But as a host said, in test dataset, some unseen combinations will come. So I did a simple trick; build a model which consider both co-occurence and non-co-occurence of R, V, C to be robust with unseen data in SE-ResNet50 based model. This model has dual paths after SE-ResNet block. One is straightforward path from GeM2D to each top fc layers (R, V and C). And the other is through 2 fc layers (please see above figure). <br></li> <li>Multi stage learning ( Xentropy, Reduced Focal Loss (credit to <a href="/phalanx">@phalanx</a> ) ) In the middle point of this competition, <a href="/phalanx">@phalanx</a> advised us about OHEM in this nice <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/123198#734490">thread</a>. <br> Indeed, our cv score (recall) became better with the same technique(Reduced Focal Loss). <br></li> <li>Different image size ensemble In some CV competitions, different size image ensemble boosted scores. We thought this will be same in this competition. This ensemble pushes our place from 12th to 6th. <br></li> <li>Post Processing (big impact for us) We adjusted our prediction to improve evaluation metric in this competition (Recall). Optimization was performed with all classes (168 + 11 + 7). And we used (1/each class counts) as initial values for optimization. The optimal coefficients are almost close to initial values. We were very nervous to use this post processing, but as a result this pushes us to gold range.</li> </ol>
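<p>A small sketch of the post-processing in point 5, assuming <code>probs</code> is the softmax output of one head: each class probability is multiplied by a coefficient initialised at one over its train-set count before the argmax (the subsequent optimisation of those coefficients is omitted here). Names and shapes are illustrative.</p> <pre><code>import numpy as np

def recall_adjusted_argmax(probs, class_counts, coeffs=None):
    # probs: (n_samples, n_classes) for one head (168 roots, 11 vowels or 7 consonants)
    # class_counts: (n_classes,) frequencies in the train set
    if coeffs is None:
        coeffs = 1.0 / np.asarray(class_counts, dtype=np.float64)  # initial values before tuning
    return (probs * coeffs).argmax(axis=1)
</code></pre>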
Deepfake Detection Challenge
27th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Approach</h2> <p>We used a simple frame-by-frame classification approach.</p> <h2>Face-Detector</h2> <p>We used simple MTCNN detector. Input size for the face detector was scaled down by 50% to significantly reduce inference time and memory usage without much loss in detection accuracy. We then used the bounding box to extract a square shaped face with image center coordinates and height being the same, but adjusted width.</p> <h2>Final Ensemble</h2> <p>Our final ensemble consists of B5/B6/B7 single model ensemble. <a href="/ratthachat">@ratthachat</a> will give further details on model training in the comment section.</p> <h2>Models</h2> <p>We mostly used EfficientNets as we did not have much success with others Models. (1 Xception was used in the worse performing final essemble) We played around with input sizes and sticked with 224*224 as most cropped faces in the training dataset were around that size.</p> <p>We started with B0 - B2 with some success. But our final solutions of B5-B7 outperformed them.</p> <h2>Memory Usage and Inference Time</h2> <p>Due to memory constraints we used a batch generator that extracts and predicts only on two frames from a video at once. This also enabled us to use parallel inference to significantly reduce run-time and predict on every 27th frame in the end.</p> <h2>Things that may have worked</h2> <p>We considered organizers willl introduce organic videos and different deepfake methods in the test set. Thats why we constantly evaluated our models on organic videos using Predictions and GradCam and used face crops without big margin, we thought this may generalise better. We used a margin of 14 pixels from the MTCNN face detector. </p> <p>The final prediction was a simple average of all predictions per video. We had better results splitting predictions per actor, but due to time reasons and as we considered organizers introducing &gt; 3 actors videos and multiple deep faked faces per video and did not want to run in these edge cases in the test set, we decided against the idea.</p> <p>Using more frames per video was superior to using TTA in our experiments. We clipped prediction using 0.01 and 0.99.</p> <h2>Acknowledgments</h2> <p>Thank you to <a href="/ratthachat">@ratthachat</a> <a href="/hmendonca">@hmendonca</a> <a href="/hmmoghaddam">@hmmoghaddam</a> for the incredible experience and Congratulations to <a href="/hmmoghaddam">@hmmoghaddam</a> on becoming Kaggle expert. </p> <p>Thank you to the organizers for hosting this competition.</p>
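<p>Two of the details above, sketched for clarity: the square crop keeps the MTCNN box's centre and height while adjusting the width, and the video-level score is the clipped mean of the frame predictions. The margin handling and function names are my assumptions, not the team's exact code.</p> <pre><code>import numpy as np

def square_face_crop(frame, box):
    """Crop a square patch whose side equals the detection height, centred on the box centre."""
    x1, y1, x2, y2 = [int(v) for v in box]          # MTCNN bounding box
    side = y2 - y1
    cx = (x1 + x2) // 2
    nx1 = max(cx - side // 2, 0)
    nx2 = min(cx + side // 2, frame.shape[1])
    ny1, ny2 = max(y1, 0), min(y2, frame.shape[0])
    return frame[ny1:ny2, nx1:nx2]

def video_score(frame_probs):
    """Simple average over sampled frames, clipped to [0.01, 0.99]."""
    return float(np.clip(np.mean(frame_probs), 0.01, 0.99))
</code></pre>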
Bengali.AI Handwritten Grapheme Classification
10th Place: Single Model Using Snapshot Ensemble w/ code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>UPDATED</h2> <p>I added the additional study at the end of this post. <br> * link to discussion about DataAugmentation * links to discussion and notebook about visualization of sSE Block * tables of my late submission results (w/o RandomErasing, w/o sSE Block, w/ common sSE Block, and w/ <strong><em>Chris's Magic</em></strong>) * link to the source code repository on GitHub</p> <hr> <p>I was more surprised than pleased when the private leaderboard uncovered. I didn't imagine such a big shake happens. I still don't understand why got this place (sorry, since I had only 4 days, I was not able to do enough experiments) , and want to investigate it by late submission.</p> <p>Although there remain some mysteries, this is my first solo gold medal. I'm so glad to share my solution as a gold place one and become Competitions Master!</p> <p>Lastly, congrats to all the teams got the medal and participants finished this competition! And thanks to Kaggle and Bengali.AI for hosting this competition!</p> <p><br> Here, I'd like to share my solution's summary.</p> <h3>Resources</h3> <p>Kaggle notebooks and a local machine (GTX1080ti x 1)</p> <h2>Approach</h2> <p>At first, for tackling the unseen graphemes problem, I split train data by multi-label stratified group K-fold (regarding a grapheme as a group) and tried training models. However, I was not able to train models successfully. I suspect this is because some components are contained only in one or few grapheme(s).</p> <p>My approach is very simple - training a model using all train data (because I want to include all components in training). This way, However, has a risk of overfitting. For preventing worsening of generalization performance, I train a model using cosine annealing scheduling and do snapshot ensemble.</p> <p>There is no special pre-processing nor post-processing.</p> <h2>Model</h2> <p>The model is based on SE-ResNext50 and not so special, but has something unique <strong>in global pooling step</strong>. The figure below is the model overview.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F473234%2F30dfe0e6ec593a3824841290c0b5df98%2F10th_place_model_overview_v2.png?generation=1584668448561081&amp;alt=media" alt="model overview"></p> <p>Instead of simply applying Global Average Pooling to feature map and using its result as <strong>common input</strong> of each component's head, I used <strong>sSE Block</strong> and Global Average Pooling <strong>for each component</strong>.</p> <p>If you want to know more details about sSE Block, read the original paper:</p> <p><a href="https://arxiv.org/abs/1803.02579">Concurrent Spatial and Channel Squeeze &amp; Excitation in Fully Convolutional Networks(Abhijit Guha Roy, et al., MICCAI 2018)</a></p> <p><strong>NOTE</strong>: For passing 1-channel images to the pre-trained model(3-channel), I use a simple way. 
See <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136815#781819">the comment bellow</a> for more details.</p> <h2>Training</h2> <h3>Data Augmentation</h3> <p>applying augmentations by the following order, implemented by <a href="https://albumentations.readthedocs.io/en/latest/">albumentations</a></p> <p>[Original Image: 137x236] -&gt; Padding(to 140x245) -&gt; Rotate(rotate limit=5, p=0.8) -&gt; Resize(to 128x224) -&gt; RandomScale(scale limit=0.1, p=1.0) -&gt; Padding(to 146x256) -&gt; RandomCrop(to 128x224, p=1.0) -&gt; <strong>RandomErasing(mask-ratio-min=0.02, mask-ratio-max=0.4, p=0.5)</strong> -&gt; Normalize(by channel-wise mean and std of train data) -&gt; [Model Input: 128x224]</p> <p>As you see, <strong>I did not use CutMix/MixUp.</strong></p> <h3>Loss</h3> <p>calculating softmax cross entropy for each component, and averaging them by weights (<code>grapheme_root</code>:<code>vowel_diacritic</code>:<code>consonant_diacritic</code> = 2 : 1 : 1) </p> <h3>Optimization</h3> <ul> <li>Optimizer : SGD + NesterovAG(momentum: 0.9) with weight decay(1e-04)</li> <li>batch size: 64</li> <li>epoch: 105</li> <li>learning schedule: cosine annealing (3 cycle) <ul><li>35 epoch per cycle</li> <li>learning rate: max=1.5e-02, min=0.0</li></ul></li> </ul> <h2>Inference</h2> <h3>transform</h3> <p>applying the following transforms to images before feeding them into models</p> <p>[Original Image: 137x236] -&gt; Padding(to 140x245) -&gt; Resize(to 128x224) -&gt; Normalize(by channel-wise mean and std of train data) -&gt; [Model Input: 128x224]</p> <h3>ensemble</h3> <p>Obtaining models(35, 75, and 105 epoch)' outputs using softmax by component, simply averaging them and applying argmax by component.</p> <h2>Why got this place ?</h2> <p>I don't still understand, but have some hypotheses.</p> <h3>RandomErasing is Good?</h3> <p>Many participants used CutMix or MixUp. While these techniques seem effective in making combinations of components, not effective in disentangling interdependence of components in original combinations, especially about MixUp.</p> <p>Because of this, may be, simple masking method like RandomErasing is good.</p> <h3>sSE Block is Good?</h3> <p>I'm not sure whether or not sSE Block is the best. But simply applying global average pooling looks not good for me because I suspect each component has spacial dependence.</p> <p>I suppose a kind of weighted global average pooling <strong>for each component</strong> is good.</p> <h3>Training a model by all data and Snapshot Ensemble is Good?</h3> <p>In previous competitions, training K-fold and K-fold averaging was the very effective way for me. But in this competition, it was very difficult for me to split K-fold because of unseen graphemes problem and rare components, I trained a model using all train data.</p> <p>Here are each cycle's and snapshot ensemble's scores.</p> <p>| model | Public Score (Rank) | Private Score (Rank) | |:-------------------:|:-------:|:--------| | cycle 1 (35epoch) | 0.9754 (288th) | 0.9438 (23rd) | | cycle 2 (70epoch) | 0.9821 (160th) | 0.9499 (14th) | | cycle 3 (105epoch) | <strong><em>0.9843 (124th)</em></strong> | 0.9497 (14th) | | snapshot ensemble | 0.9840 (131st) | <strong><em>0.9536 (10th)</em></strong> |</p> <p>The model cycle 3 achieved the best public score, but this slightly overfitted. 
Snapshot ensemble achieved the best private score, which is <strong>0.0037</strong> higher than the best single model(cycle 2).</p> <p>I think snapshot ensemble is good when you want to train models <strong>using all train data</strong>.</p> <p><br> That's all. Thank you for reading!</p> <p><br></p> <hr> <h2>Additional Study</h2> <h3>RandomErasing is Good?</h3> <ul> <li>Discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/137029">CutMix/MixUp is <strong>Not</strong> All You Need?</a></li> </ul> <h3>sSE Block is Good?</h3> <ul> <li>Discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/137552">Key of the 10th solution? : Where sSE Block looks</a></li> <li>Notebook: <a href="https://www.kaggle.com/ttahara/visualize-10th-place-model-where-sse-block-looks">Visualize 10th place model: Where sSE Block looks?</a></li> </ul> <h3>Late Submission</h3> All the results <ul> <li>w/o RandomErasing : not using RandomErasing</li> <li>w/o sSE Module : simply apply GAP to feature map extracted from SE-ResNeXt50 and feed it into each component's head (Dense -&gt; ReLU -&gt; Dropout -&gt; Dense)</li> <li>w/ <strong><em>Common</em></strong> sSE Module : apply one common sSE-Pooling to feature map and feed it into each component's head</li> </ul> <p>| model | cycle | Public Score | Private Score | |:-------:|:-------------------:|:-------:|:--------| | w/o RandomErasing | 1 (35epoch) | 0.9425 | 0.9187 | | 〃 | 2 (70epoch) | 0.9479 | 0.9201 | | 〃 | 3 (105epoch) | 0.9648 | 0.9345 | | 〃 | Snapshot Ensemble | 0.9643 | 0.9373 | | w/o sSE Module | 1 (35epoch) | 0.9746 | 0.9431 | | 〃 | 2 (70epoch) | 0.9824 | 0.9490 | | 〃 | 3 (105epoch) | 0.9836 | 0.9485 | | 〃 | Snapshot Ensemble | <strong>0.9843</strong> | 0.9517 | | w/ <strong><em>Common</em></strong> sSE Module | 1 (35epoch) | 0.9739 | 0.9433 | | 〃 | 2 (70epoch) | 0.9809 | 0.9474 | | 〃 | 3 (105epoch) | 0.9827 | 0.9496 | | 〃 | Snapshot Ensemble | 0.9832 | <strong>0.9527</strong> |</p> Compare results using Snapshot Ensemble (&amp; <strong><em>Magical</em></strong> Post-Processing) <p>| model | Public Score (rank) | Private Score (rank) | |:--------:|:----------- -:|:---------------| | w/o RandomErasing | 0.9643 (1210th) | 0.9373 (66th) | | w/o sSE Module | 0.9843 (124th) | 0.9517 (12th) | | w/ <strong><em>Common</em></strong> sSE Module | 0.9832 (149th) | 0.9527 (12th) | | final sub model (<strong><em>component-wise</em></strong> sSE Module) | 0.9840 (131st) | 0.9536 (10th) | | w/ <strong>Chris's Magic</strong>(-0.6,-0.6,-0.4) | <strong><em>0.9848 (112th)</em></strong> | 0.9623 (4th) | | w/ <strong>Chris's Magic</strong>(-0.8,-0.8,-0.7) | 0.9828 (153rd) | <strong><em>0.9653 (3rd)</em></strong> |</p> <p>OMG! It's a really magic!</p> <p>If you want to use this magic (it is very easy to use but effective!), check this discussion: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">https://www.kaggle.com/c/bengaliai-cv19/discussion/136021</a></p> <hr> <h2>code</h2> <p>I've published the source code on GitHub: <a href="https://github.com/tawatawara/kaggle-bengaliai-cv19">https://github.com/tawatawara/kaggle-bengaliai-cv19</a></p>
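<p>A hedged sketch of the component-wise sSE pooling idea: each head gets its own spatial attention map (a 1x1 convolution followed by a sigmoid) over the shared SE-ResNeXt feature map, then global average pooling and a linear classifier. The channel count and head structure are assumptions, not the exact 10th place model.</p> <pre><code>import torch
import torch.nn as nn

class SSEPool(nn.Module):
    """Spatial squeeze-and-excitation followed by global average pooling."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, 1, kernel_size=1)   # channel squeeze to one attention map
    def forward(self, x):                                    # x: (B, C, H, W)
        attn = torch.sigmoid(self.gate(x))                   # (B, 1, H, W)
        return (x * attn).mean(dim=(2, 3))                   # weighted GAP: (B, C)

class ComponentHeads(nn.Module):
    """One sSE pooling + linear head per target component, on a shared feature map."""
    def __init__(self, channels=2048, n_classes=(168, 11, 7)):
        super().__init__()
        self.pools = nn.ModuleList([SSEPool(channels) for _ in n_classes])
        self.fcs = nn.ModuleList([nn.Linear(channels, n) for n in n_classes])
    def forward(self, feat_map):
        return [fc(pool(feat_map)) for pool, fc in zip(self.pools, self.fcs)]
</code></pre>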
Bengali.AI Handwritten Grapheme Classification
Kaggle Competition Experience (Team: Learning at snail pace) | Rank 1473
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Bengali AI for organising this competition. I would like to congratulate all the winners of this competition.</p> <p><code> Public LB: 0.9749(Rank 297) Private LB: 1473 </code></p> <p>I joined this competition when it was started, yet only got actively involved in this competition after <a href="/seesee">@seesee</a> released his public notebooks on using TPUs. At that time, I was participating in Kaggle’s flower recognition competition(Playground) and was able to understand what @see--- Notebook did.</p> <p>After a bit of fine-tuning, with hyperparameters(changing to EfficientNet B4) and training for 30 epochs, I was able to get a score of 0.9729. I was all excited because it was the first time I got a higher score than the best available public kernels available(in a live competition) at that time + I didn’t make many changes to original kernel.</p> <p>Last 7 days of the competition was full of excitement for me(as my college was shut down due to Covid-19 situation in Kerala, India). After getting this initial result, I read through almost all the posts in discussions.I tried a variety of ideas and organised them as github issues(suggestion from my teammate). I tried to implement augmentations like Mixup, Cutmix, Gridmask etc(Thanks to <a href="/cdeotte">@cdeotte</a>, <a href="/xiejialun">@xiejialun</a> Notebook from Flower classification with TPUs notebooks). I tried training on all the architectures of EfficientNet from B3 to B7. I started retraining my models with weights from already available model, yet for me always retraining on weights made my model perform worst.</p> <p>For me this competition was like <a href="/init27">@init27</a> saying, kaggle competition felt like a 100 Mile sprint where you are competing against people on Supercars (GrandMasters with a LOT of experience) while I was running barefoot.</p> <p>I tried changing with other architectures like Densenet 169, 121. Yet single models based on this approach was not so effective. After doing all these experiments for the last 4-6 days, I was not able to improve my model, any further. It gave me a feel when all the Kaggle grandmasters and masters were able to implement ideas and do things quickly, I was not able to perform so well. I saw a lot of failed ideas, and even after reading lots of papers I was not able to transform certain augmentations into Tensorflow for BengaliAI competition from Flower Classification with TPUs competition.</p> <p>On the final day of competition, I trained my model with EfficientNetB4 for 30 epechs, with step learning rate and regularisation. And I was able to obtain my highest score of 0.9749 in public LB(which was able to have a better score than the best public kernel available then). I tried ensembling weights with Densenet, but it didn’t work out any good. </p> <p>For me, this competition was a huge learning experience for me and I was able to spend about 100+ hours for this competition and learned a lot of new things from experienced folks here. I would like to thank <a href="/hengck23">@hengck23</a> for encouraging to share the solution. 
</p> <p>Obviously, after the competition, I got a lot of new insights which I am trying to ponder and experiment more in the coming days(both in Tensorflow and Pytorch).</p> <p>If you interested to look my code do check my kernels:</p> <p><a href="https://www.kaggle.com/kurianbenoy/efficientnetb4-parameter-tweaks">Training Notebook</a></p> <p><a href="https://www.kaggle.com/kurianbenoy/final-submit-inference">Inference Notebook</a></p>
Bengali.AI Handwritten Grapheme Classification
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Congratulations and Thank You</h2> <p>This has been an interesting, challenging, and very educational experience for us. We would like to congratulate all the winners and everyone else who has worked so hard over the past three months on this competition. Hope that even those of you who have been burned by the shakeup at the end can come away from this competition with a lot of new things that you have learned. We certainly are.</p> <p>We had lots of fun teaming up with each other. Keeping each other abreast of all that we were trying to do, and encouraging each other, especially in the last phase of this competition, has been an incredible experience for us. </p> <p>We would like to thank the organizers, Bengali.ai and Kaggle, for putting this competition together. We hope that the results of this competition can be used in a meaningful and productive way for the advancement of the handwritten Bengali language. </p> <p>Chris and Bojan would like to thank Nvidia for its ongoing generous support in terms of time and resources. </p> <h2>Frameworks</h2> <p>For our final solution, we have used both PyTorch and TensorFlow. We have also worked with Rapids on an alternative solution, which showed lots of promise but was not possible to incorporate with our best solution in the allotted submission time limits. </p> <h2>Datasets</h2> <p>Most of our models used just the competition dataset. One of our models (EfficientNet B7) trained partially with some of the external data. We had also tried using synthetic data, but that did not help our local models. It is possible that it did better on the private dataset, but we have not made any submissions with it yet.</p> <h2>Image Resolutions</h2> <p>Like everyone else, throughout the competition, we had experimented with different resolutions and aspect ratios. Our final solution was a combination of the following different resolutions: 128x128, 224x224, 137x236, 128x256. Our single best model used the 256x256 resolution, but the inference with it was taking too long, so we had to make a difficult decision to leave it out of the final ensemble in favor of a more diverse blend. </p> <h2>Validation Scheme</h2> <p>Our main validation scheme has been a 90/10 train/validation split. For most of the models, we have used the same 90/10 split, but in the latter stages of the competition, we had added a few with different splits for the sake of diversity.</p> <h2>Network Architectures</h2> <p>We have tried several different architectures and discovered that for the cutmix augmentation as we had implemented it, bigger networks were better, but they required a longer time to train and larger batch sizes. All but one network were trained with a single head. In hindsight, this might have seriously limited our post-processing as we had implemented it. Our final blend consisted of the following seven network architectures:</p> <ul> <li>EfficientNet B4</li> <li>EfficentNet B6</li> <li>EfficientNet B7</li> <li>DenseNet 201</li> <li>SeNet 154</li> <li>2 SE-ResNext 101</li> </ul> <h2>Augmentations</h2> <p>Two main augmentations that we used were CutMix and CoarseDropout. 
We had also used a mild shift, scale, rotate augmentations. We also used CAM CutMix, Chris describes it <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">here</a>.</p> <h2>Training Regiment(s)</h2> <p>Our main training consisted of one cycle with starting lr of 0.0001, and a decrease at the plateau. We would train between 100 and 200 epochs, with batch sizes between 150 and 300. After the one main cycle, we would retrain for another 70 epochs, which would usually bump our validation for another 0.001. Depending on the machine, network, and batch sizes, one epoch would take between 10 and 30 minutes to train. Our single best network achieved 0.9980 for local validation, and 0.9892 on public leaderboard. </p> <h2>Ensembling</h2> <p>We’ve tried to build some advanced second level models, but our best ensembling ended up being just a weighted average of our models. Since our models were trained to predict logits and not probabilities, finding a good blend was tricky. In the end, we managed to get a pretty decent set of weights, that was the best for both oof predictions as well as the public leaderboard. </p> <h2>Post Processing</h2> <p>This was one of the most important steps in our model, and most of the models that finished at the top in this competition. It prevented us from dropping like a stone after the final shakeup. In a nutshell, it turns out that the recall metric is very sensitive to the distribution of various classes. In order to exploit this insight, it was necessary to scale various predictions in such a way that the probabilities of less frequent items got proportionally higher weight. The most mathematically reasonable way of doing this is to multiply various probabilities by their inverse frequencies. In practice, we had to deviate slightly from the perfect inverse law. We had spent a lot of time in the last days trying to improve this post processing, but in the end, a very simple set of exponent worked best for our models. You can read more about this and other aspects of our solution in <a href="https://www.kaggle.com/cdeotte">Chris Deotte</a>’s <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021">wonderful writeup</a>. </p> <h2>Hardware</h2> <p>The main workhorses for our model building have been two Nvidia DGX Stations which have been generously allocated by Nvidia for the purposes of Kaggle competitions. We have also relied on our own private resources, including a dual RTX Titan workstation, and several 2080 Ti workstations. </p>
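<p>A short sketch of the post-processing family described above: divide each head's probabilities by the class counts raised to a tuned power before the argmax, which recovers the exact inverse-frequency rule at a power of 1 and the "slight deviation" at smaller powers. The power value below is a placeholder, not the team's final setting.</p> <pre><code>import numpy as np

def prior_adjusted_argmax(probs, class_counts, power=0.6):
    # probs: (n_samples, n_classes) for one head; class_counts: (n_classes,) train frequencies
    weights = np.asarray(class_counts, dtype=np.float64) ** (-power)
    return (probs * weights).argmax(axis=1)
</code></pre>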
Bengali.AI Handwritten Grapheme Classification
8th Place Solution w/ Code (minimal ver.)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, everyone</p> <p>First of all, I want to thank Kaggle and the hosts for hosting the competition.</p> <p>Congratulation to all winners!</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F448347%2Fd37b2cfc0fd3524b5be0ba10e5d8b649%2F1.png?generation=1584409865722874&amp;alt=media" alt=""></p> <p>An illustration of my pipeline. It contains 2 models, Seen Model and Unseen Model.</p> <h1>Summary</h1> <ul> <li><p>NO EXTERNAL DATA</p></li> <li><p>I used 2-stage prediction in my pipeline, it contains 2 models as one set (Seen Model and Unseen Model) as shown in the figure above.</p></li> <li>I used Arcface to distinguish unseen graphemes. Just like to distinguish unseen faces ;)</li> <li>If an image is detected to be seen, I use the output of Seen Model as prediction directly. Else if the image is detected to be unseen, I’ll pass it to Unseen Model to get unseen prediction.</li> <li>The different points between Seen Model and Unseen Model are as follow: <ul><li>Seen Model only uses very few augmentation to make sure it can ‘overfit’ to seen graphemes, while Unseen Model uses heavy augmentations to make it generalize to unseen ones.</li> <li>Seen Model has 5-head outputs, including Arcface output. While Unseen Model has normal 4-head outputs.</li> <li>Beside the number of output heads, the architecture of the top is a little bit different as well.</li> <li>I decompose the predicted grapheme to 3 components when the prediction is made by Seen Model, while I use predicted 3 components directly when the prediction is made by Unseen Model.</li></ul></li> </ul> <h1>Code</h1> <p>I’ve published a minimal version of my training &amp; testing pipeline on Kaggle kernel as follow:</p> <ul> <li>Step 1. <a href="https://www.kaggle.com/haqishen/bengali-train-seen-model">Bengali Train Seen Model</a> (trained ~5h on kernel)</li> <li>Step 2. <a href="https://www.kaggle.com/haqishen/bengali-train-unseen-model">Bengali Train Unseen Model</a> (trained ~8.5h on kernel)</li> <li>Step 3. <a href="https://www.kaggle.com/haqishen/bengali-predict-with-seen-unseen-models">Bengali Predict with Seen &amp; Unseen Models</a> (submission ~20min)</li> </ul> <p>The minimal version with two efficientnet-b1 which are training on 128x128 for 30 epochs can give you public LB 0.985+ or private LB 0.935+ If you find my notebooks helpful, please upvote them! </p> <hr> <p>To get over 0.993+ or 0.950+ on public or private LB, what you need to do is just simply: * Change the backbone to efficientnet-b5 / b6 / b7 / b8 * Train on 224x224 * Train for 60~90 epochs * Emsenble more Seen &amp; Unseen Model</p> <p>If you have any questions please feel free to leave a message to me. Thanks!</p>
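<p>A compact sketch of the two-stage routing at inference time: images judged seen take the Seen Model's grapheme prediction, decomposed into its three components through a lookup table built from the train labels, while unseen images take the Unseen Model's three heads directly. All names and shapes here are illustrative assumptions.</p> <pre><code>import numpy as np

def route_predictions(is_seen, seen_grapheme_pred, unseen_component_pred, grapheme_to_components):
    # is_seen: (n,) bool from the Arcface seen/unseen check
    # seen_grapheme_pred: (n,) grapheme ids predicted by the Seen Model
    # unseen_component_pred: (n, 3) [root, vowel, consonant] ids from the Unseen Model
    # grapheme_to_components: (1295, 3) lookup built from the train labels
    out = np.array(unseen_component_pred, dtype=np.int64)
    out[is_seen] = grapheme_to_components[seen_grapheme_pred[is_seen]]
    return out   # (n, 3) final component predictions
</code></pre>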
Bengali.AI Handwritten Grapheme Classification
257th Place Solution and Observations (public LB 55)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulation to all winners! I was definitely shocked by the shake and losing my first solo silver. I hope to share my solution as offering a comparison to winner solutions and looking forward to hear you about what might went wrong.</p> <p>I built my solution based on <a href="/corochann">@corochann</a>'s kernel (<a href="https://www.kaggle.com/corochann/bengali-seresnext-prediction-with-pytorch">https://www.kaggle.com/corochann/bengali-seresnext-prediction-with-pytorch</a>).</p> <p>I first tried around with seresnext50 and some basic augmentations, but the public LB score wandered under 0.967x. Then I stopped cropping images and everything became bright.</p> <p>Next, I experimented with different augmentation combinations and found cutmix only to work the best and sticked to it in different model testing process.</p> <p>Afterwards, I tested with models including seresnext50, seresnext101, densenet121, and res2net using 256x256 images.</p> <p>My final submissions consist of weighted ensembles of seresnext50 and res2net. The highest public LB score resulted is 0.9884 but with private score only 0.9290. </p> <p>I used 80/20 split for all training and never used all data for one single model. I'm not sure that is a problem. In the 0.9884 ensemble, I used 2 splits of res2net and 1 split of seresnext. I used stepLR with milestone of 75 and 90.</p> <p>Interestingly, I found that my highest private LB score is produced by densenet121. The ensemble of 3 different epochs of densenet121 (same split) resulted in public LB of 0.9799 but private LB of 0.9381. I also saw that efficientnet worked poorly on public LB but well on private LB in other people's solutions. Not sure if these simple model would have better performance on private LB. I also observed that densenet has a smaller cv/lb gap of around 0.5%, while seresnext and res2net all have cv/lb gap around 1.3%.</p> <p>I also found that local CV does not correlate with private LB nor public LB in my case. The densenet121 model never made its local CV above 0.99, while seresnext and res2net both achieved 0.995-6. So I guess "trusting your CV" is not a valid strategy for this competition.</p> <p>All in all, an unexpected competition experience. Definitely a bit sad losing a silver medal. Will try my best to get good score in the rest competitions before summer. 
</p> <p>Please comment below if you see any potential problem and upvote if you find this post interesting.</p> <p>Happy kaggling.</p> <p>Below is my full training/testing record:</p> <p>'Se_resnext50_32x4d_kernel_baseline_newlr' mixup/cutmix Cv: 0.9868 Epoch 102 LB: 0.9646 diff : 0.0222</p> <p>Se_resnext50_32x4d_kernel_baseline_augmentation_randomseed666_8020split Epoch 35 Cv 0.9775 Lb 0.9641 Diff = 0.0134</p> <p>Epoch 44 Cv 0.978 Lb 0.9646 Dff: 0.0134</p> <p>Epoch 84 Cv 0.9824 Lb 0.9654 Diff = 0.017</p> <p>Milestone lr scheduler</p> <p>epoch 22 Cv 0.9783 Lb 0.9665 Diff 0.0118</p> <p>Epoch 30 Cv 0.98 Lb 0.9674 Diff 0.0126</p> <p>Epoch 42 Cv 0.9804 Lb 0.9678 Diff 0.0126</p> <p>Epoch 53 Cv 0.9805 Lb 0.9676 Diff 0.0129</p> <p>Milestone lr, ohem loss, stratified</p> <p>Epoch 17 Cv 0.9717 Lb 0.9527 Diff 0.019</p> <p>Epoch 29 Cv 0.9739 Lb 0.9588 Diff 0.015</p> <p>Epoch 39 Cv 0.9751 Lb 0.9584 Diff 0.0167</p> <p>Multilabel stratified - cross entropy loss</p> <p>Epoch 87 Cv 0.9869 Lb 0.9679 Diff 0.019</p> <p>Epoch 135 Cv 0.9886 Lb 0.9679 Diff 0.0207</p> <p>Noconv0</p> <p>Epoch 60 Cv 0.9833 Lb 0.9667 Diff 0.0166</p> <p>Epoch 76 Cv 0.9852 Lb 0.9670 Dff 0.0182</p> <p>Cnn - tail - noaffine Epoch 92 Cv 0.9892 Lb 0.9668 Diff 0.0224</p> <p>Epoch 77 Cv 0.9872 Lb 0.9665 Diff 0.0187</p> <p>Epoch 62 Cv 0.9849 Lb 0.9621 Diff 0.0228</p> <p>Tail + affine Epoch 96 Cv 0.9866 Lb 0.9668 Diff 0.0198</p> <p>Epoch 86 Cv 0.9858 Lb 0.9664 Diff 0.0194</p> <p>Epoch 120 Cv 0.9876 LB 0.9671 Diff 0.0205</p> <p>Size 256 Epoch 134 Cv 0.9896 Lb 0.9694 Diff 0.0202</p> <p>Epoch 126 Cv 0.9896 Lb 0.9689 Diff0.0207</p> <p>Epoch 114 Cv 0.989 Lb 0.9690 Diff 0.020</p> <p>Epoch 107 Cv 0.9886 Lb 0.9688 Diff 0.0198</p> <p>Epoch 98 Cv 0.9884 Lb 0.9691 Diff 0.0193</p> <p>Epoch 48 Cv 0.981 Lb 0.9672 Diff 0.0138</p> <p>Ensemble Epoch 134, 114, 98 Lb 0.9695</p> <p>Resnext 101 size 256 Epoch 81 crop Cv 0.9888 Lb 0.9680 Diff 0.0208</p> <p>Epoch 81 no_crop Cv 0.9888 Lb 0.9651 Diff 0.0237</p> <p>Epoch 78 crop Cv 0.9881 Lb 0.9688 Diff 0.0193</p> <p>Epoch 65 crop Cv 0.9851 Lb 0.9659 Diff 0.0192</p> <p>Resnext50 size 256 no crop</p> <p>Epoch 109 Cv 0.9924 Lb 0.9766 Diff 0.0158</p> <p>Epoch 98 Cv 0.9924 Lb 0.9762 Diff 0.0162</p> <p>Epoch 84 Cv 0.9911 Lb 0.9759 Diff 0.0152</p> <p>Epoch 79 Cv 0.9899 Lb 0.9750 Diff 0.0149</p> <p>Ensemble epoch 84, 98, 109 Lb 0.9767</p> <p>Epoch 164 Cv 0.9937 Lb 0.9769 Diff 0.0168</p> <p>Epoch 155 Cv 0.9934 Lb 0.9765 Diff 0.0169</p> <p>Epoch 143 Cv 0.9928 Lb 0.9756 Diff 0.0172</p> <p>Epoch 130 Cv 0.9928 Lb 0.9767 Diff 0.161</p> <p>ensemble of epoch 164, 130, 109 Lb 0.9770</p> <p>Ensemble (epoch 164) 0.9779</p> <p>Epoch ~180 Cv 0.9940 Lb 0.9767 Diff 0.0173</p> <p>Ensemble of epoch 164, 130, 109 Lb 0.9779</p> <p>Seresnext50 autoaugment Epoch 93 Cv 0.9897 Lb 0.9734 Diff 0.0163</p> <p>Epoch 85 Cv 0.9892 Lb 0.9742 Diff 0.015</p> <p>Epoch 77 Cv 0.9872 Lb 0.9732 Diff 0.014</p> <p>Epoch 65 Cv 0.9852 Lb 0.9709 Diff 0.0143</p> <p>Ensemble: epoch 77 auto augment, epoch 164 no crop: 0.9784</p> <p>Epoch 139 Cv 0.9908 Lb 0.9741 Diff 0.0167</p> <p>Epoch 122 Cv 0.9908 Lb 0.9737 Diff 0.0171</p> <p>Epoch 108 Cv 0.9903 Lb 0.9735 Diff 0.0168</p> <p>Densenet121 no cutmix</p> <p>Epoch 119 Cv 0.9862 Lb 0.9799 Diff 0.0053</p> <p>Epoch 92 Cv 0.9861 Lb 0.9796 Diff 0.0065</p> <p>Epoch 82 Cv 0.9859 Lb 0.9789 Diff 0.007</p> <p>Ensemble of epoch 119, 92, 82 Lb 0.9799</p> <p>Seresnext 50 only cutmix</p> <p>Epoch 65 Cv 0.9947 Lb 0.9831 Diff 0.0116</p> <p>Epoch 48 Cv 0.993 Lb 0.981 Diff 0.012</p> <p>Epoch 120 Cv 0.9963 Lb 0.9849 Diff 0.0114</p> <p>Epoch 
118 Cv 0.9963 Lb 0.9849 Diff 0.0114</p> <p>Epoch 100 Cv 0.996 Lb 0.9847 Diff 0.0113</p> <p>seresnext 50 only cutmix Fold 0</p> <p>Epoch 112 Cv 0.9996 Lb 0.9862 Diff 0.0134</p> <p>Epoch 91 Cv 0.9955 Lb 0.9854 Diff 0.0101</p> <p>Epoch 81 Cv 0.9953 Lb 0.9851 Diff 0.0102</p> <p>Ensemble of epoch 112, 91, shuffle120 Lb 0.9869</p> <p>Epoch 118 Cv 0.9996 Lb 0.9867 Diff 0.0129</p> <p>Ensemble of epoch 112, 91, shuffle120, epoch 118 Lb 0.9869</p> <p>Ensemble of shuffle 120 epoch118 weighted 0.4 0.6 Lb 0.9873</p> <p>Weighted0.3, 0.7 Lb 0.9871</p> <p>Ensemble epoch 112, epoch 118 weighted 0.4 0.6 Lb 0.9865</p> <p>Res2net fold 0</p> <p>Epoch 77 Cv 0.9969 Lb 0.9858 Diff 0.0111</p> <p>Epoch 67 Cv 0.9954 Lb 0.9853 Diff 0.0101</p> <p>Epoch 53 Cv 0.9943 Lb 0.9829 Diff 0.0114</p> <p>Ensemble weighted 0.4 0.6 r2n lb0.9858 seresnext_fold0 lb0.0.9867 Lb 0.9880</p> <p>res2net_fold1_new_fromepoch66_epochrange80(cv0.9996e85).pt</p> <p>Fold 1 Epoch 85 Cv 0.9996 Diff 0.9865</p> <p>Epoch 70 Cv 0.9991 Diff 0.9858</p> <p>ensemble weighted 0.3 0.35 0.35 r2n lb0.9858 fold0 r2n lb0.9865 fold1 seresnext lb0.0.9867 0.9881</p> <p>0.3 0.3 0.4 0.9884</p>
Deepfake Detection Challenge
EDIT) Private 1089th Place Approach (BIG DROP from public 7th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>EDIT) We shared our approach with concern about shake-up before the private leaderboard was revealed. Unfortunately, we experienced a way bigger shake-up than we expected on the private LB. Since we have no objective evidence about our bad private score, we cannot specify which part of our submissions was wrong. So please take care while reading our approach.</p> <p>+) Congratulations to the winners and those who overcame the shake-up. </p> <p>================ Our Sharing Before Private LB ===============</p> <h1>0. Intro</h1> <p>It was a great experience for us to participate in this competition, so we want to say thank you to the organizers of this challenge and to the others who helped us a lot. </p> <p><a href="/pudae81">@pudae81</a> Your great introductory seminar about Kaggle and code competitions helped us a lot in joining this competition. We'll keep our fingers crossed for your journey to a new career. 🤞 </p> <p><a href="/humananalog">@humananalog</a> We couldn't have made it this far without the great work shared by you. We really appreciate your sharing of awesome training and inference kernels. And that's why we share our humble approach with the community, even if we're not sure whether it's overfitted to the public LB or not.</p> <h1>1. Overview of Our Approach</h1> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png" alt="Method Overview"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure1.png</a></p> <p>The figure above shows our approach. We used MTCNN as the face detector and attention-based models as the prediction models.</p> <h1>2. Face Detector : MTCNN</h1> <p>We used MTCNN to detect faces in video frames. We resized each frame to 1280x720 before passing it to MTCNN. The parameters passed to MTCNN are: - thresholds of [0.6, 0.7, 0.95] - min_face_size of 720/30</p> <h1>3. Model Architecture</h1> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png" alt="Model Architecture"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure2.png</a></p> <p>The model architecture we used is shown above. We first extract a feature vector from each face crop, then apply a self-attention-based model to the sequence of feature vectors to aggregate information. </p> <p>An ImageNet-pretrained EfficientNet-b4 was used as our face-crop feature extractor. We froze the first 4 blocks due to our GPU memory capacity. </p> <p>We have two similar models that produce a video-level prediction from the sequence of sampled face features.</p>
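<p>To make the overall shape of this setup concrete, here is a minimal sketch of a "per-face feature extractor + self-attention aggregation" model. This is illustrative only, not the code we used: the timm backbone, the embedding size, the head count and the module names are assumptions, and the max-over-faces pooling mirrors Model 1 described next.</p>
```python
import torch
import torch.nn as nn
import timm

class FaceSequenceClassifier(nn.Module):
    """Extract one feature vector per face crop, mix them with self-attention,
    score each face, and take the max as the video-level logit."""
    def __init__(self, embed_dim=1792, num_heads=8):
        super().__init__()
        # num_classes=0 returns pooled features; EfficientNet-b4 gives 1792-dim vectors.
        self.backbone = timm.create_model("efficientnet_b4", pretrained=True, num_classes=0)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, faces):                    # faces: (batch, n_faces, 3, H, W)
        b, n = faces.shape[:2]
        feats = self.backbone(faces.flatten(0, 1)).view(b, n, -1)
        attn_out, _ = self.attn(feats, feats, feats)
        feats = feats + attn_out                 # residual connection around self-attention
        logits = self.head(feats).squeeze(-1)    # one score per face crop
        return logits.max(dim=1).values          # video prediction = max over faces
```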
<h3>3.2.1. Model 1</h3> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png" alt="Model 1 figure"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure3.png</a></p> <p>Our first model uses self-attention between all sampled faces. The "self-attention module" in the figure contains a KQV projection, self-attention, and a residual connection. A fully-connected layer then follows to get a prediction for each face crop. Finally, we take the maximum prediction value as the video prediction. </p> <p>The best score of this model on the LB was 0.24827 without any TTA.</p> <h3>3.2.2. Model 2</h3> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png" alt="Model 2 figure"> <br> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/figure4.png</a></p> <p>As the figure shows, our second model first applies self-attention between faces in the same frame, then self-attention between all sampled faces. After that, the procedure is the same as for Model 1. </p> <p>The best score of this model on the LB was 0.25178 without any TTA.</p> <h1>4. Training Method</h1> <h2>4.1. Face Crops Preparation</h2> <ul> <li>sample 100 frames uniformly for each video.</li> <li>use MTCNN to detect faces in each frame.</li> <li>2x extra margin added to the face crop box (so the crop covers 9 times the area) for further augmentation at train time </li> <li>faces aligned with five face landmarks.</li> </ul> <h2>4.2. Train/Valid Split</h2> <ul> <li><p>One of our teammates clustered all real videos so that no actor appears in videos from two different clusters. After that, he split off some clusters as the validation set so that the validation set has videos of different resolutions. </p></li> <li><p>With this cluster-based split of the real videos, each fake video is assigned to the group of its original video.</p></li> <li><p>Every training epoch, we used all real videos and randomly sampled fake videos to balance the number of real and fake samples.</p></li> </ul> <h2>4.3. Augmentation at train time</h2> <ul> <li>Sample 1 to 20 frames from the previously sampled 100 frames and load the corresponding face crops.</li> <li>Videowise (i.e. applied identically to all face crops) <ul><li>Quality augmentation based on the DFDC preview dataset paper <ul><li><a href="https://arxiv.org/abs/1910.08854">https://arxiv.org/abs/1910.08854</a></li></ul></li> <li>Various image transforms such as rotate, equalize, solarize, posterize etc.</li> <li>Random horizontal flip</li></ul></li> <li>Imagewise <ul><li>Random Crop <ul><li>Randomness occurs in the margin and the face-center translation</li></ul></li> <li>Resize Interpolation <ul><li>we use three different resize interpolations: bilinear, bicubic, nearest</li></ul></li></ul></li> </ul> <h2>4.4. Optimizer</h2> <p>We used the same optimizer settings for both models (a sketch of the schedule is given below). - SGD with momentum 0.9 - Learning Rate Schedule - 5 epochs of linear warm-up from 0 to 0.001 - Cosine annealing from 0.001 to 0 through 35 epochs (total 40 epochs)</p>
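<p>For reference, a minimal sketch of this schedule with a PyTorch <code>LambdaLR</code> (illustrative only; <code>steps_per_epoch</code> and the dummy model are placeholders, while the 5/35-epoch split and the base LR of 0.001 come from the description above):</p>
```python
import math
import torch

WARMUP_EPOCHS, TOTAL_EPOCHS, BASE_LR = 5, 40, 1e-3
steps_per_epoch = 1000   # placeholder: number of batches per epoch

def lr_factor(step):
    epoch = step / steps_per_epoch
    if epoch &lt; WARMUP_EPOCHS:                                # linear warm-up from 0 to BASE_LR
        return epoch / WARMUP_EPOCHS
    progress = (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))        # cosine annealing down to 0

model = torch.nn.Linear(10, 1)   # stand-in for the real network
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_factor)
# call scheduler.step() once per training step
```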
<h2>4.5. Further Regularization</h2> <ul> <li>weight decay: 0.0001 </li> <li>dropout layer right before the final fully-connected layer: p=0.4</li> <li>label smoothing epsilon: 0.01</li> </ul> <h2>4.6. Validation losses of model 1 while training</h2> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png" alt="Model 1 loss"> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model1.png</a></p> <h2>4.7. Validation losses of model 2 while training</h2> <p><img src="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png" alt="Model 2 loss"> original image: <a href="https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png">https://raw.githubusercontent.com/jinhanpark/Deepfake-Detection-Challenge/master/assets/model2.png</a></p> <h1>5. Inference Strategy</h1> <p>With the inference strategy below, we got 0.23086 on the LB.</p> <h2>5.1. Sample Faces</h2> <ul> <li>Sample 20 frames per video uniformly. </li> <li>Detect and crop at most 3 faces from each frame with the thresholds stated above.</li> </ul> <h2>5.2. Ensemble</h2> <p>We averaged the predictions from the two models.</p> <h2>5.3. Test Time Augmentation for each model</h2> <p>For each face detected, we make 8 different crops with different margins and offsets: - 2 of 8 : two different margins of 1.1 and 1.15 without offsets. - 2 of 8 : two different y-axis offsets of 0.15 and -0.15 with a margin of 1.15 - 4 of 8 : horizontal flips of the above 4 crops</p> <h1>6. Outro</h1> <p>Thank you for reading. All questions are welcome.</p>
Bengali.AI Handwritten Grapheme Classification
1st place solution /w code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1st Place Solution --- CycleGAN Based Zero Shot Learning</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F5231a0ab7a0bf93611fb6fd3d72c6295%2F1.png?generation=1584591840196431&amp;alt=media" alt=""></p> <h2>Classes and Labeling</h2> <p>I did not make inferences about the parts of the character. In other words, all my models classify over the 14784 (168 * 11 * 8) classes. Therefore, I needed to know which combination of labels makes up each grapheme.</p> <p>I inferred the relationship between the label combination and the grapheme from the labels of the given train data, and created the following code.</p>
```python
import pandas as pd

class_map = pd.read_csv('../input/bengaliai-cv19/class_map.csv')
grapheme_root = class_map[class_map['component_type'] == 'grapheme_root']
vowel_diacritic = class_map[class_map['component_type'] == 'vowel_diacritic']
consonant_diacritic = class_map[class_map['component_type'] == 'consonant_diacritic']
grapheme_root_list = grapheme_root['component'].tolist()
vowel_diacritic_list = vowel_diacritic['component'].tolist()
consonant_diacritic_list = consonant_diacritic['component'].tolist()


def label_to_grapheme(grapheme_root, vowel_diacritic, consonant_diacritic):
    if consonant_diacritic == 0:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root]
        else:
            return grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 1:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic] + consonant_diacritic_list[consonant_diacritic]
    elif consonant_diacritic == 2:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[consonant_diacritic] + grapheme_root_list[grapheme_root]
        else:
            return consonant_diacritic_list[consonant_diacritic] + grapheme_root_list[grapheme_root] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 3:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[consonant_diacritic][:2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic][1:]
        else:
            return consonant_diacritic_list[consonant_diacritic][:2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic][1:] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 4:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            if grapheme_root == 123 and vowel_diacritic == 1:
                return grapheme_root_list[grapheme_root] + '\u200d' + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 5:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 6:
        if vowel_diacritic == 0:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic]
        else:
            return grapheme_root_list[grapheme_root] + consonant_diacritic_list[consonant_diacritic] + vowel_diacritic_list[vowel_diacritic]
    elif consonant_diacritic == 7:
        if vowel_diacritic == 0:
            return consonant_diacritic_list[2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[2][::-1]
        else:
            return consonant_diacritic_list[2] + grapheme_root_list[grapheme_root] + consonant_diacritic_list[2][::-1] + vowel_diacritic_list[vowel_diacritic]
```
<p>The generation of synthetic data and the conversion of the prediction results into three components are based on this correspondence.</p> <h2>Split Data for Unseen Cross Validation</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F2fd56df85b7442295bc605b10053aac9%2F2.png?generation=1584591905364753&amp;alt=media" alt=""></p> <p>All are randomly selected and split. Ashamedly, I split the data without thinking, so that a non-existent grapheme root class was created at evaluation time and proper evaluation could not be performed. However, since the training cost was very high, it could not easily be redone, and all local CV scores were evaluated as they were. 😭</p> <h2>(1) Out of Distribution Detection Model</h2> <p>It is a model for distinguishing whether the input image is a Seen class or an Unseen class. This model outputs a confidence for each of the 1295 classes independently. If all confidences are low, the image is judged as Unseen; if at least one confidence is high, it is judged as a Seen class.</p> <ul> <li>No resize and crop</li> <li>Preprocess --- AutoAugment Policy for SVHN (<a href="https://github.com/DeepVoltaire/AutoAugment">https://github.com/DeepVoltaire/AutoAugment</a>)</li> <li>CNN --- EfficientNet-b7 (ImageNet pretrained)</li> <li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> using default values</li> <li>LRScheduler --- WarmUpAndLinearDecay
```python
def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return step/WARM_UP_STEP
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
```</li> <li>Output Layer --- LayerNorm-FC(2560 -&gt; 1295)-BCEWithLogitsLoss + OHEM?
```python
out = model(image)
sig_out = out.sigmoid()
loss = criterion(out, one_hot_label)
train_loss_position = (1-one_hot_label)*(sig_out.detach() &gt; 0.1) + one_hot_label
loss = ((loss*train_loss_position).sum(dim=1)/train_loss_position.sum(dim=1)).mean()
```</li> <li>Epoch --- 200</li> <li>Batch size --- 32</li> <li>dataset split --- 1:0</li> <li>single fold</li> <li>Machine Resource --- 1 Tesla V100, 6 days</li> </ul> <p>| 1168 class Local CV (auroc) |
| -------- |
| 0.9967 |</p>
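<p>To show how the three models fit together at inference time, here is a minimal sketch of the Seen/Unseen routing described above. It is illustrative only: the 0.5 confidence cut-off, the model names and the single-image interface are assumptions, not the actual submission code.</p>
```python
import torch

@torch.no_grad()
def route_and_predict(image, ood_model, seen_model, unseen_model, threshold=0.5):
    """Send the image to the Seen or Unseen classifier depending on the OOD model."""
    conf = ood_model(image).sigmoid()            # (1, 1295) independent confidences
    if conf.max().item() &gt;= threshold:           # at least one class looks familiar
        logits = seen_model(image)               # classifier trained on the seen graphemes
    else:
        logits = unseen_model(image)             # generator + font-classifier path
    grapheme_id = int(logits.argmax(dim=1))
    return grapheme_id                           # later decoded into the 3 components
```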
<h2>(2) Seen Class Model</h2> <p>This model classifies the 1295 classes included in the training data.</p> <ul> <li>No resize and crop</li> <li>Preprocess --- AutoAugment Policy for SVHN (<a href="https://github.com/DeepVoltaire/AutoAugment">https://github.com/DeepVoltaire/AutoAugment</a>)</li> <li>CNN --- EfficientNet-b7 (ImageNet pretrained)</li> <li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> using default values</li> <li>LRScheduler --- WarmUpAndLinearDecay
```python
def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return step/WARM_UP_STEP
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
```</li> <li>Output Layer --- LayerNorm-FC(2560 -&gt; 14784)-SoftmaxCrossEntropy</li> <li>Epoch --- 200</li> <li>Batch size --- 32</li> <li>dataset split --- 9:1 random split</li> <li>single fold</li> <li>Machine Resource --- 1 Tesla V100, 6 days</li> </ul> <p>| CV Score | LB Score |
| -------- | -------- |
| 0.9985 | 0.9874 |</p> <p>A large gap is expected due to the presence of the Unseen classes.</p> <h2>(3) Unseen Class Model</h2> <p>Training this model is done in two stages. The first step is training a classifier on images synthesized from ttf files. The second step is training the generator that converts handwritten characters into the desired synthetic-data-like images. To perform this training, first select one ttf and generate a synthetic dataset. The image size of the synthesized data was 236x137, the same as the training data. Using Pillow and raqm, I drew each grapheme in four sizes, <code>[84, 96, 108, 120]</code>. The size of the dataset is 59136.</p>
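<p>A minimal sketch of this font-rendering step with Pillow (illustrative only; the ttf path, the centering logic and the exact Pillow API calls are assumptions, and the raqm layout engine is needed for correct Bengali shaping):</p>
```python
from PIL import Image, ImageDraw, ImageFont

def render_grapheme(grapheme, font_path="kalpurush.ttf", size=96, canvas=(236, 137)):
    """Draw one grapheme string onto a white 236x137 canvas, roughly centered."""
    font = ImageFont.truetype(font_path, size, layout_engine=ImageFont.LAYOUT_RAQM)
    img = Image.new("L", canvas, color=255)
    draw = ImageDraw.Draw(img)
    x0, y0, x1, y1 = draw.textbbox((0, 0), grapheme, font=font)
    pos = ((canvas[0] - (x1 - x0)) // 2 - x0, (canvas[1] - (y1 - y0)) // 2 - y0)
    draw.text(pos, grapheme, font=font, fill=0)
    return img

# e.g. render every string produced by label_to_grapheme() in sizes [84, 96, 108, 120]
```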
<h3>Font Classifier Pre-training</h3> <ul> <li>crop and resize to 224x224</li> <li>preprocess --- random affine, random rotate, random crop, cutout</li> <li>CNN --- EfficientNet-b0</li> <li>Optimizer --- <code>torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)</code> using default values</li> <li>LRScheduler --- LinearDecay
```python
WARM_UP_STEP = train_steps*0.5

def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return 1.0
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_linear_decay)
```</li> <li>Output Layer --- LayerNorm-FC(2560 -&gt; 14784)-SoftmaxCrossEntropy</li> <li>Epoch --- 60</li> <li>Batch size --- 32</li> <li>Machine Resource --- 1 Tesla V100, 4 hours</li> </ul> <h3>CycleGAN Training</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fac780d7f0e17487c7ac2338346da5e93%2F3.png?generation=1584591955177298&amp;alt=media" alt=""></p> <ul> <li>crop and resize to 224x224</li> <li>preprocess --- random affine, random rotate, random crop (smaller than the pre-training one), and no cutout</li> <li>Model --- Based on <a href="https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix">https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix</a> + pre-trained font classifier (fixed parameters, eval mode)</li> <li>Optimizer --- <code>torch.optim.Adam(params, lr=0.0002, betas=(0.5, 0.999))</code></li> <li>LRScheduler --- LinearDecay
```python
WARM_UP_STEP = train_steps*0.5

def warmup_linear_decay(step):
    if step &lt; WARM_UP_STEP:
        return 1.0
    else:
        return (train_steps-step)/(train_steps-WARM_UP_STEP)

generator_scheduler = torch.optim.lr_scheduler.LambdaLR(generator_optimizer, warmup_linear_decay)
discriminator_scheduler = torch.optim.lr_scheduler.LambdaLR(discriminator_optimizer, warmup_linear_decay)
```</li> <li>Epoch --- 40</li> <li>Batch size --- 32</li> <li>Machine Resource --- 4 Tesla V100, 2.5 days</li> <li>HyperParameter --- lambda_consistency=10, lambda_cls=1.0~5.0</li> </ul> <p>Please see the CycleGAN paper for the base method. The point that differs from a normal CycleGAN is the following: on top of the discriminator loss, the supervised loss of the pre-trained font classifier is added when calculating the loss of the handwritten2font generator.</p> <p>The CV of the run that gave the highest score (lambda_cls = 4.0) is as follows. To avoid the influence of non-existent grapheme root classes, the macro average recall is calculated after excluding non-existent classes. (It does not mean that the recall value of a class that does not exist is counted as 0.0 or 1.0.)</p> <p>| Local CV Score for Unseen Class | Local CV Score for Seen Class |
| ------------------------- | --- |
| 0.8804 | 0.9377 |</p> <p>After performing hyperparameter tuning, I trained on two other ttf files without evaluating local CV. They gave a higher LB score than the model trained on the first ttf, so the CV score of the parameters used in private may be higher.</p> <p>I created two models using different ttf files for submission. At submission time, inference was made using an ensemble of these two.</p> <ul> <li><a href="https://www.omicronlab.com/download/fonts/kalpurush.ttf">https://www.omicronlab.com/download/fonts/kalpurush.ttf</a></li> <li><a href="https://www.omicronlab.com/download/fonts/NikoshLightBan.ttf">https://www.omicronlab.com/download/fonts/NikoshLightBan.ttf</a></li> </ul> <p>CycleGAN's training code will be released after being modified so that it can run on 1 GPU. 
Please wait.</p> <h2>Strange Points</h2> <p>To be honest, I can't justify why this method would increase the generalization of the Unseen class. I expected that the generation of images very similar to the synthetic data would affect Consistency Loss and Discriminator and gain generalization to the Unseen Class. However, as the hyperparameters were adjusted, it was observed that the generalization performance improved while the image became unnatural.</p> <p>| lambda_cls | Input Image | Generated Image | True Synthesis Image | Local CV | | ---------- | ----------- | --------------- | -------------------- | -------- | | 1.0 | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F711934da5f230beddb44a8739df91485%2Fa.png?generation=1584592112177362&amp;alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fe191a2b21f412a494f069c1f20a658bb%2Fc.png?generation=1584592112054776&amp;alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Ff8a1a40285305ea497c1602d149c4177%2Fb.png?generation=1584592116047842&amp;alt=media" alt="">| 0.8618 | | 4.0 | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fb3f573d385e3b9469d0ce6324e51b480%2Fe.png?generation=1584592113807041&amp;alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2F7a2f830547a3a2b8f5a4daec2e0382f3%2Fg.png?generation=1584592111614129&amp;alt=media" alt=""> | <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4114579%2Fb40309c1b75ade826b394b326fba9201%2Ff.png?generation=1584592114057138&amp;alt=media" alt=""> | <strong>0.8804</strong> |</p> <h2>Code</h2> <p>The reduced version of CycleGAN code is now available.</p> <ol> <li><a href="https://www.kaggle.com/linshokaku/cyclegan-classifier">https://www.kaggle.com/linshokaku/cyclegan-classifier</a> Font image classifier training code (7h)</li> <li><a href="https://www.kaggle.com/linshokaku/cyclegan-training">https://www.kaggle.com/linshokaku/cyclegan-training</a> Training code for a generator that converts a handwritten image to a font image (8h)</li> <li><a href="https://www.kaggle.com/linshokaku/cyclegan-submission">https://www.kaggle.com/linshokaku/cyclegan-submission</a> Submission code</li> </ol> <p>Running from 1 to 3 in order is equivalent to the method I used in this competition. Due to execution time and hardware resources, the batch size, training epoch, etc. are set very small, but if you run it with the hyper parameters as mentioned above, you should get results comparable to my execution results. I hope you find it helpful.</p>
Bengali.AI Handwritten Grapheme Classification
55th place solution - A frugal approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks Kaggle and Bengali.ai for hosting this contest. Congratulations to all the winners and all the participants who kept the discussion alive and encouraging throughout the contest, with a few special shoutouts to <a href="/haqishen">@haqishen</a>, <a href="/pestipeti">@pestipeti</a>, <a href="/hengck23">@hengck23</a>, <a href="/ildoonet">@ildoonet</a>, <a href="/bibek777">@bibek777</a>, <a href="/machinelp">@machinelp</a> for really interesting discussions brimming with ideas. Special thanks to <a href="/iafoss">@iafoss</a> for his starter notebook and <a href="/drhabib">@drhabib</a> for posting detailed code of previous competition solutions, which I relied upon. </p> <p>This is my first competition medal on Kaggle in 2 years. The public leaderboard standing is from one single model. Due to a lack of hardware resources, most of my code was trained on Kaggle Kernels (the reason why I call this a frugal approach ;p). </p> <h2>Preprocessing and Augmentations</h2> <p>I resized the images to <strong>240x240x1</strong> and zero-padded them. Simple augmentations such as <em>crop_resize, image_wrapping, rotations, and image_lightening</em> were used. </p> <h2>Validation</h2> <p>I experimented with <strong>5 fold CV</strong> and, for the submission notebook, I performed an <strong>80-20</strong> split of the dataset. </p> <h2>Backbone and architecture</h2> <p>I tried most of the ideas shared on the discussion forums keeping EfficientNet-B1 as the model backbone, which I took from EfficientNet Pytorch, and carried out certain modifications inspired by <a href="/iafoss">@iafoss</a>'s starter kernel. EfficientNet-B3 and above, being computationally expensive for a Kaggle kernel, could not be trained properly in the limited kernel runtime of 32000s.</p> <h2>What didn't work</h2> <p>Although MixUp and CutMix augmentation seemed really promising, they provided no effective improvement for me. They require more epochs for training, which was not at all feasible in my case.</p> <p>Based on advice from some previous contest discussions, it was helpful to maintain experiment log files. I naively created the experiment log, but it turned out to be confusing rather than helpful when selecting the final solution for submission. </p> <p>The lack of hardware was a major setback for me at the beginning of the contest, and no improvement on the public LB was discouraging. The time that was wasted without any submission could have been used crucially. I timed every single epoch and then divided the 32000s budget by that time in order to train my model for as long as possible. I used half precision and had to replace the <em>Mish activation</em> in the code with <em>ReLU</em> for faster computation (it gave me almost one extra epoch).</p> <h3>What I learned during the competition ( note to self )</h3> <ul> <li>Prepare and maintain a proper experiment log file throughout the contest (any help with this would be appreciated).</li> <li>Discussion forums are really interesting and full of ideas to experiment with, but proper experiment documentation is also necessary.</li> <li>Proper selection of the solution for submission is extremely important; this helped me through the LB shakeup.</li> </ul>
2019 Data Science Bowl
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank the BS KIDS team, the Kaggle team and all kagglers for this amazing competition. </p> <p>It was hard to handle the qwk metric with so few test samples without falling into public LB climbing! </p> <p>Here <a href="https://www.kaggle.com/steubk/dsb2019-bs-26-lgb-auc-xgb">https://www.kaggle.com/steubk/dsb2019-bs-26-lgb-auc-xgb</a> you can find the (100% Kaggle kernel) solution.</p> <p>This is a summary of my solution:</p> <ul> <li>The Magic For those about to ask "hey, where is the Kaggle magic?", this is my <em>magic blend</em>:</li> </ul> <p><code>y = (2*0.7*y_lgb + 2*0.3*y_xgb + 2*3*y_auc_solved + 3*y_first_try_success)/5</code></p> <p>with y_lgb as lgb regression (rmse), y_xgb as xgb regression (rmse), y_auc_solved as binary classification (auc) for solved assessments (0/123) and y_first_try_success as binary classification (auc) for first-try-success assessments (012/3) </p> <ul> <li><p>Data Augmentation thanks to @akasyanama13 and his kernel <a href="https://www.kaggle.com/akasyanama13/another-way-of-validation">https://www.kaggle.com/akasyanama13/another-way-of-validation</a> I added about 11% training samples (2,018/17,690) in public and many more (75%?) in private. </p></li> <li><p>Validation Strategy For each learning model I trained a 5-fold GroupKFold on installation_id, with a fixed number of iterations (no early stopping), averaged over 5 seeds and validated with the median of 5,000 shuffled truncated samples: this guaranteed the stability of the rmse and qwk on CV and of the qwk on private (a sketch of this truncated validation is given at the end of this post). </p></li> <li><p>Feature engineering I generated more than a thousand features (1,047), starting from public kernels (@artgor <a href="https://www.kaggle.com/artgor/quick-and-dirty-regression">https://www.kaggle.com/artgor/quick-and-dirty-regression</a> , @braquino <a href="https://www.kaggle.com/braquino/890-features">https://www.kaggle.com/braquino/890-features</a> and others) and then added features for: current-assessment, last-game, last-activity, last-assessment, "correct" event_data, and encoded the timestamp hour as cyclical continuous features.</p></li> <li><p>Feature selection I selected 128 features with RFE. I did the first elimination steps by removing correlated features (corrcoef &gt; 0.9999) and features with minimal gain in the lgb regression model. For the last step (from 148 to 128) I computed rmse and qwk for each single feature and removed features for which the gain for both rmse AND qwk was negative, regardless of the lgb feature importance. 
The last step gave an improvement for qwk in cv and private (cv: 0.5616 --&gt; 0.5624, private: 0.545 --&gt; 0.553)</p></li> <li><p>From RMSE to QWK (aka threshold definition) I built a simple Bayesian optimization and validated with 5,000 shuffled truncated samples from train.</p></li> <li><p>The Magic Revisited (Simple models and ensembling) This was the score before blending: </p></li> </ul> <p>| model | cv | cv std | (public) | private |
| --- | --- | --- | --- | --- |
| xgb | 0.5574 | 0.008 | 0.544 | 0.550 |
| lgb | 0.5638 | 0.008 | 0.538 | 0.553 |</p> <p>A simple blend <code>(0.7*y_lgb + 0.3*y_xgb)</code> of the xgb and lgb models gave no improvement on cv and public but some improvement in private (0.555).</p> <p>In the last days I tried some binary classifications with auc: binary classification for solved assessments (0/123) and binary classification for first-try success (012/3).</p> <p>The ordering induced by auc and the small number of elements for qwk made me think that I could try to blend the auc models directly with the regression models: in fact the blend gives an improvement in cv (0.5680) and an improvement of 0.002 over the simple blend in private (qwk final score: 0.557). </p> <p>Thank you for reading!</p>
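<p>For reference, a minimal sketch of the truncated-sample validation mentioned in the Validation Strategy above (illustrative only; it assumes an out-of-fold dataframe <code>oof</code> with one row per assessment and columns <code>installation_id</code>, <code>y_true</code> and <code>y_pred</code> already rounded to labels):</p>
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def truncated_qwk(oof, n_rounds=5000, seed=42):
    """Median QWK over repeated samples of one random assessment per installation_id,
    mimicking how the hidden test set keeps a single assessment per child."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_rounds):
        sample = oof.groupby("installation_id").apply(lambda g: g.iloc[rng.randint(len(g))])
        scores.append(cohen_kappa_score(sample["y_true"], sample["y_pred"], weights="quadratic"))
    return np.median(scores)
```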
Google QUEST Q&A Labeling
silver solution (44th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! And thank you to Kaggle for hosting this. </p> <p>My Bert model had USE features + Crawl embeddings. USE features improved my score quite a bit, Crawl not so much. I had separate models for answers and questions; a little later I realized I could have saved a lot of time by having a QA branch in the same model (too late). My post-processing was from a public kernel. (All PyTorch) </p> <p>Roberta was surprisingly good, pretty close to Bert on the public LB, considering the amount of time I spent on Roberta vs Bert. This morning I realized that my Roberta model wasn't using USE features - it was too late, I had to let it go. </p> <p><strong>Preprocessing:</strong> None for Bert. For Crawl I used the preprocessing from a public kernel. <strong>Tokenization:</strong> Head + Tail approach (a sketch is given at the end of this post). <strong>Loss:</strong> Custom loss function.</p> <p>I'll have my training code on GitHub and I'll also make my inference kernel public. </p> <p>Looking forward to reading better solutions! </p> <p>Edit: <a href="https://www.kaggle.com/rashmibanthia/googlequest-inference?scriptVersionId=28474231">Inference Kernel</a></p>
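<p>A minimal sketch of the head + tail truncation mentioned in the Tokenization note above (illustrative only; the 128-token head, the tokenizer choice and the helper name are assumptions):</p>
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def head_tail_ids(text, max_len=512, head_len=128):
    """Keep the first `head_len` and the last `max_len - head_len - 2` tokens,
    so long answers keep both their opening and their conclusion."""
    tokens = tokenizer.tokenize(text)
    tail_len = max_len - head_len - 2                 # room for [CLS] and [SEP]
    if len(tokens) &gt; head_len + tail_len:
        tokens = tokens[:head_len] + tokens[-tail_len:]
    return tokenizer.convert_tokens_to_ids(["[CLS]"] + tokens + ["[SEP]"])
```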
Bengali.AI Handwritten Grapheme Classification
9th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Model</strong> My model backbone is simply based on seresnext50, using a CV of 4 randomly split folds for training, no stratification. However, I have two extra loss heads which I found quite helpful to regularize the model - a 1292-grapheme head CE loss and an ArcFace loss on the grapheme head. </p> <p>However, the 1292 head sits on top of the combined 3 task heads with a binary mapping - </p> <p><code>self.logit_to_1292 = nn.Linear(168+11+7, 1292, bias=None)</code></p> <p><code>out_1292 = self.logit_to_1292(out_3tasks)</code></p> <p>where I initialize the <code>self.logit_to_1292</code> layer with a frozen binary weight that has dimension (168+11+7, 1292), which correctly maps each combination of (grapheme_root, vowel_diacritic, consonant_diacritic) to its corresponding grapheme. I don't train this layer, however.</p> <p>During back-propagation, I also used a very low weighting for the 3 task heads and the ArcFace head (0.02), but a very high weight for the grapheme head (1), which is quite counter-intuitive. </p> <p><strong>Augmentation</strong></p> <p>I used 75% CutMix, sample-wise. I also gradually add in grid mask as training goes on, but I used 'grid-mix' instead - so instead of removing those grids, I randomly shuffle them within each image, so the per-image stats won't change compared to cropping those grids out of the image. I found this slightly more helpful. </p> <p>For sample-wise CutMix (this is the most helpful data augmentation) - each image randomly draws a binary CutMix mask in PyTorch's data loading function, and I only shuffle and pair up the images within each batch during training, using this pre-computed mask. </p> <p>I also add in 5% of the augmentation proposed by XingJian Lyu. </p> <p>Overall CutMix is the most useful part. </p> <p><strong>Self-training with external dataset</strong> I would say this is the part that helped my model most in generalizing to the test set. I only discovered the external ekush dataset, disclosed on the external data disclosure thread, in the last week of the competition. </p> <p>Basically I downloaded and resized those 300K unlabeled images, and used my best trained 4-fold models to generate soft targets for the grapheme head (1292) and the 3 task heads (168+11+7). Then, combined with the original training data, I re-trained my above model in 4 folds, with the soft targets as labels. In each fold, I combine the entire 300K external data plus the training folds to train each model, and do this 4 times. </p> <p>This approach is similar to the idea of the noisy student self-training paper shared earlier on in the forum, where I generated soft targets without noise and retrained the model with noise. The student and teacher models are exactly the same as above. </p> <p>The key thing is how you sample from the unlabeled dataset - what works best is to first generate hard grapheme labels, then combine them with our train dataset and use Heng's balanced sampler to sample based on all graphemes. Other sampling strategies simply don't work.</p>
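<p>A minimal sketch of the soft-target generation step described above (illustrative only; the teacher/loader names are placeholders and only the 1292-way grapheme head is shown):</p>
```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_soft_targets(teacher, unlabeled_loader, device="cuda"):
    """Run the trained teacher over the unlabeled images and keep its softmax outputs
    as soft labels, plus argmax hard labels for the balanced grapheme sampler."""
    teacher.eval().to(device)
    soft, hard = [], []
    for images in unlabeled_loader:
        probs = F.softmax(teacher(images.to(device)), dim=1)   # (B, 1292)
        soft.append(probs.cpu())
        hard.append(probs.argmax(dim=1).cpu())
    return torch.cat(soft), torch.cat(hard)

def soft_cross_entropy(logits, soft_targets):
    """Loss for the student when a sample carries a soft (teacher-generated) label."""
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```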
<p>This finally pushes my model from 994 to 996 locally, and improves the LB from 985 to 990, where ensembling the 4 folds further boosts it to my current score of 916 (all based on the public score). </p> <p><strong>Things that didn't work for me</strong></p> <ul> <li>OHEM</li> <li>EfficientNet (slightly worse than seresnext50)</li> <li>senet152</li> <li>dropblock</li> <li>attentiondrop</li> <li>shake-drop</li> <li>intensive augmentation or autoaugment</li> <li>using flip or 180 rotate as additional identities (strategies that worked best for the humpback competition)</li> </ul>
Bengali.AI Handwritten Grapheme Classification
My Solution Writeup + Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was an awesome competition! Congrats to the winners! Here is my final writeup for this comp: <a href="https://github.com/GreatGameDota/Bengali-Character-Classification">https://github.com/GreatGameDota/Bengali-Character-Classification</a> A shame I didn't pick the right submission, but I'll take what I learned and apply it to the next competition! Happy modeling!</p>
Deepfake Detection Challenge
79th Place solution: A Hotchpotch of EfficientNets
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3><strong>Introduction:</strong></h3> <p>Hi Kagglers! </p> <p>Congratulations to everyone who survived the leaderboard shakeup. Some of you probably expected that it would be hard to generalize to new videos, and especially to DeepFake videos “in the wild”. In essence, we tried to include as much model diversity in our ensemble as possible in order to survive a leaderboard shakeup.</p> <p>For us ( <a href="/koenbotermans">@koenbotermans</a> , <a href="/kevindelnoye">@kevindelnoye</a> and I) this competition was 3 months of blood, sweat and tears, with ups and downs and a radical switch from Keras to Pytorch halfway through the competition. I hope the insights in this post will be helpful to other competitors.</p> <p>Like many other competitors, we use a frame-by-frame classification approach for detected faces. We agonized over adding the audio data to our model or doing sequence modeling (e.g. LSTM cells), but ended up not experimenting with it. This gave us a lot of time to focus on creating datasets and training classification models.</p> <p>The final kernel that we used for our submission can be found here: <a href="https://www.kaggle.com/carlolepelaars/efficientnet2xb5-b6200-b6finetuned-b4-2xres-01clip">https://www.kaggle.com/carlolepelaars/efficientnet2xb5-b6200-b6finetuned-b4-2xres-01clip</a></p> <h3><strong>Data Preparation:</strong></h3> <p>Over the course of the competition we created several image datasets of face detections. The first baseline (ImagesFaces1) only contained the first face detection in a video, using MTCNN as our face detector. Fine-tuning EfficientNetB5 on this dataset already gave us a pretty reasonable baseline (0.23083 local validation, 0.39680 on the public leaderboard). Halfway through we switched to RetinaFace as our face detector, as it had better performance and was faster. The first dataset after this switch (ImagesFaces3) contains 200K faces as training data. The final dataset (ImagesFaces6) featured 2M face detections from the training videos. Our final solution has models trained on both ImagesFaces3 and ImagesFaces6. It is arguable that 2M face detections is overkill, but luckily we had sufficient computational resources to experiment with it. In all datasets we took a 20% sample as validation data, avoiding data leakage due to the limited number of actors. The datasets we created were balanced so we had 50% real and 50% fake faces.</p> <h3><strong>Face Detection:</strong></h3> <p>We started out with MTCNN as our face detector, but we thought it was too slow and that we could get better performance. Eventually we settled on RetinaFace as it had better performance and faster inference time.</p> <p>The MTCNN library we used is built on Tensorflow, so naturally our classification models were based on Tensorflow/Keras. We did experiments using XCeptionNet and EfficientNetB0-B8. When we switched to RetinaFace we could only find a Pytorch implementation, and as both Tensorflow and Pytorch allocate all GPU memory we felt forced to choose one framework for the face detection and classification models. 
With 1.5 months left we therefore chose to go with RetinaFace and start experimenting with Pytorch models.</p> <h3><strong>Models and Ensembling strategy:</strong></h3> <p>Our final 81st place solution is a hotchpotch of EfficientNet Models. This haphazard combination was intentional to include diversity and to have a good chance at generalizing well to new data. The final models that were used:</p> <ol> <li>EfficientNetB6, 200x200 resolution, ImagesFaces6</li> <li>EfficientNetB5, 224x224 resolution, ImagesFaces3</li> <li>EfficientNetB5, 224x224 resolution, ImagesFaces6</li> <li>EfficientNetB6, 224x224 resolution, ImagesFaces6, finetuned for one epoch on our validation data.</li> <li>EfficientNetB4, 224x224 resolution, ImagesFaces6, data augmentation and label smoothing.</li> </ol> <p>All models are trained with a starting learning rate of 0.001 and using a “Reduce on Plateau” learning rate schedule with a patience of 2 and with a multiplier of 0.5. All models have a dropout of 0.4 before the final layer and group normalization so we don’t lose performance with small batch sizes (e.g. 20).</p> <p>We settled on a simple mean from all models for each video. We made 25 predictions per video on evenly spread frames. No weighting of models was done. The predictions were clipped between 0.01 and 0.99.</p> <p>In order to speed up the inference process we decreased the frame resolution by a factor of 2. Most original video frames were already high resolution so the face detector was still effective on a frame with a reduced resolution.</p> <h3><strong>Generalization:</strong></h3> <p>Throughout the competition we generally had a large correlation between our local validation scores and public leaderboard scores. A lower log loss on the local validation generally meant a better leaderboard score, but our final validation scores were extremely low (approx. 0.068 log loss). This means that we most likely had some data leakage in our training data, but we don’t know exactly where it was coming from.</p> <h3><strong>Data Augmentation:</strong></h3> <p>One thing I regret is that we underestimated the power of data augmentation for this competition. With only a month left we started using rotation (15 degrees) and flipping to augment the data, but looking at other public solutions we probably could have gotten a much higher score with more extensive data augmentation. </p> <h3><strong>What we would have liked to try but didn’t:</strong></h3> <ul> <li>LSTM cells</li> <li>Mixed precision (with NVIDIA DALI or apex).</li> <li>Include audio data and extract features using LSTM cells.</li> <li>Including ResNeXT101_wsl models. We trained a few models, but the weight files were too large to add it to the ensemble.</li> <li>Stochastic Weight Averaging (SWA).</li> <li>Fine-tuning “Noisy Student” weights (The Noisy Student paper was fairly new and not implemented in the libraries we used yet.)</li> <li>Training with Mixup.</li> <li>Taking the difference between frames and include it as a channel.</li> </ul> <h3><strong>What didn’t work:</strong></h3> <ul> <li>Taking the median of predictions for a video.</li> <li>Naïve postprocessing (Changing a prediction of 0.8 to 0.95, etc.)</li> <li>Clipping more than 0.01 (Sometimes there was a leaderboard improvement, but we decided it was too risky).</li> <li>Test Time Augmentation (TTA) with rotations. </li> </ul> <p>I hope you got some insights from this solution overview! Feel free to ask questions or give feedback on this approach!</p>
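<p>As a small reference for the ensembling strategy described above (25 evenly spread frames, a simple mean over models, clipping to [0.01, 0.99]), here is an illustrative sketch; <code>predict_faces</code> is a placeholder for the face detection plus model inference step, not a real function from our kernel.</p>
```python
import cv2
import numpy as np

def predict_video(path, models, n_frames=25, clip=(0.01, 0.99)):
    """Evenly sample frames, average fake probabilities over frames and models, then clip."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    wanted = set(np.linspace(0, total - 1, n_frames, dtype=int))
    preds = []
    for idx in range(total):
        ok, frame = cap.read()
        if not ok:
            break
        if idx in wanted:
            preds.extend(predict_faces(frame, models))   # placeholder: detect faces, run models
    cap.release()
    p = np.mean(preds) if preds else 0.5                 # neutral guess if nothing was detected
    return float(np.clip(p, *clip))
```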
Deepfake Detection Challenge
Public 34th/Private 47th place insights - Part1 (Inference)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>There is a lot to say about this competition. First I would like to share our inference pipeline here, and then I will make another post about the training part. Our solution uses only videos/frames (no audio at all), and the total number of frames used in inference was quite important for us: the more, the better. We tried to move frames onto the GPU early in the pipeline to benefit from fast GPU computation in the next stages. We fought with NVIDIA DALI and Decord, as they can load/decode videos on the GPU; unfortunately it never worked: Decord had a memory leak and DALI finished with a timeout during submission (impossible to troubleshoot). So we used OpenCV and moved the decoded frames to the GPU right after. </p> <p>Then, for <strong>face extraction</strong>, we used MTCNN from facenet-pytorch with small changes to make it support GPU input tensors. Prior to MTCNN, we applied a GAMMA correction (still on the GPU) to the frames to help MTCNN (some videos are dark or have bad contrast). The outputs are faces resized to 256x256, with aspect ratio preserved and a large margin (30 pixels on each border).</p> <p>The next step is <strong>face tracking</strong> to identify how many faces are in a video and to clean up artifacts (detections that are not faces) with different rules (based on tracked confidences and a maximum number of faces). It is based on box centroids tracked across evenly spaced frames.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2F682f224bba7f3b69371c8903ac717158%2Fpipeline_inference.png?generation=1585728553095780&amp;alt=media" alt=""></p> <p>Next is <strong>models inference</strong>. We have 3 to 4 EfficientNet CV5 models with different input shapes (256x256, 240x240, 224x224) cropped by the normalizers. Each model was trained with a different validation strategy: some with full data, some with partial data to allow hold-out validation.</p> <p>The final step is <strong>post-processing</strong>. One may notice that some videos have blinking fake faces (a few frames real and a few frames fake within the same video). The probabilities of our models were moving up and down, which means they worked, but using a simple average for the final probability would not work:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2F58a2b09dc3e876009533a523b95eb571%2Ffake_frames.png?generation=1585729302156617&amp;alt=media" alt=""></p> <p>So we decided to evaluate what the thresholds could be to detect such behavior, and to use the maximum probability instead of the average in this case only. The question was how many frames with a high probability we should require before considering the video a fake:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F698363%2Fae7596e7d77a9f827e375d39a0ebedcb%2Ftheshold.png?generation=1585729646472824&amp;alt=media" alt=""></p> <p>And the answer was around 20%. 
So if 20% of the frames have a probability higher than around 0.85, we prefer the maximum probability over the average.</p> <p>There is an additional/optional step with ridge regression (not classification) built on hold-out data and applied to the models' outputs.</p> <p>This pipeline works only if we have enough frames, and we were able to run it with up to 100 frames per video. Each single EFB model got around LB=0.32. Ridge regression on the ensemble got LB=0.29, and the thresholds provided a boost to LB=0.27.</p>
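<p>For illustration, a minimal sketch of this rule (not the exact competition code; the 0.85 probability threshold and the 20% frame ratio are the values discussed above):</p>
```python
import numpy as np

def video_probability(frame_probs, high_thresh=0.85, ratio=0.20):
    """If enough frames look strongly fake, trust the maximum instead of the mean
    (handles videos where only a fraction of the frames is manipulated)."""
    frame_probs = np.asarray(frame_probs)
    if (frame_probs &gt; high_thresh).mean() &gt;= ratio:
        return float(frame_probs.max())
    return float(frame_probs.mean())
```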
2019 Data Science Bowl
How to gain +1839 places during shakeup - complete code!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p><a href="https://www.kaggle.com/marcogorelli/fork-of-quick-and-dirty-regression?scriptVersionId=27370451">Here</a>'s the notebook I submitted, which (to my great surprise) got me in the top 12%. Not quite my first bronze, but pretty close :)</p> <p>It's a fork of Andrew Lukyanenko's excellent kernel (thanks Andrew!); the only thing I changed was that I modified some features to use the <code>expanding</code> function from <code>pandas</code>, hoping this would prevent some leakage.</p> <p>So for example, instead of</p> <p><code> df['installation_duration_mean'] = df.groupby(['installation_id'])['duration_mean'].transform('mean') </code></p> <p>I would use</p> <p><code> df['installation_duration_mean'] = df.groupby(['installation_id'])['duration_mean'].expanding().mean().reset_index(drop=True) </code></p> <p>I also removed his hard-coded thresholds for the QWK.</p> <p>Still trying to figure out whether my gain was due to that or just sheer luck (I suspect the latter has a lot to do with it).</p>
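<p>To make the difference concrete, here is a small toy example (illustrative only; it assumes the rows are already in chronological order within each installation_id, and it aligns the expanding result back through the group level of the index):</p>
```python
import pandas as pd

df = pd.DataFrame({
    "installation_id": ["a", "a", "a", "b", "b"],
    "duration_mean":   [10.0, 20.0, 30.0, 5.0, 15.0],
})

# Leaky: every row sees the mean over the whole installation, including future sessions.
df["leaky_mean"] = df.groupby("installation_id")["duration_mean"].transform("mean")

# Leak-free: each row only sees the running mean of rows up to and including itself.
df["expanding_mean"] = (
    df.groupby("installation_id")["duration_mean"]
      .expanding().mean()
      .reset_index(level=0, drop=True)
)
print(df)   # second row of installation "a": leaky_mean = 20.0, expanding_mean = 15.0
```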
Bengali.AI Handwritten Grapheme Classification
My overview solution 0.9361 with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all 🤠 </p> <p>I have published the solution that I didn't choose as my final submission 😏 in this <a href="https://github.com/tanyapohn/kaggle-bengali-2020">github</a> repo, where I explained most of my experiments.</p> <p>With this code you can play around with your own ideas as well, since I wrote it as simply as I could.</p> <p>Also, this <a href="https://www.kaggle.com/moximo13/spatial-transform-network-bengali?scriptVersionId=30662848">kernel</a> is a submission forked from my private one (uncleaned code).</p> <p>The reason I gave up on this experiment is that I couldn't boost my accuracy on the LB anymore (a pretty bad decision). Hope this helps beginners who are just starting their path in Machine Learning.</p> <p>Any comments or suggestions on my code and experiments are appreciated.</p> <p>Thank you! 😊 </p>
2019 Data Science Bowl
40th place solution 0.553 - 0.555, +1050 positions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: 2019 Data Science Bowl <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>I guess that many of those who are reading this understand how difficult it was to sit at a 1000+ position for the whole competition while doing the right things at the same time. 😏</h3> <h2>Of course I did not do:</h2> <ul> <li>Any adjustment of test set features (adjustment by the ratio of mean values was used in almost every public kernel).</li> <li>Tuning of rounding thresholds for the final predictions (when we convert continuous values to discrete labels) to fit the train distribution of classes (that was used in almost every public kernel).</li> </ul> <p><strong>It saved me from overfitting to the public test distribution, which was of course biased</strong> (only 14% of the data, as was clearly seen from my own experiments and this discussion: <a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/122767">https://www.kaggle.com/c/data-science-bowl-2019/discussion/122767</a> ). <strong>But at the same time it held me at a 1000+ LB position...</strong></p> <p>I solved a regression problem. It was the right way because the kappa metric penalizes different errors unequally. Besides some popular features available in public kernels, I used features based on “misses” and “rounds” and found them helpful. Additional custom features based on counters were used. After feature selection only 505 features remained.</p> <h2>Training:</h2> <ul> <li>Using labeled test samples for training.</li> <li><strong>Custom RandGroupKfold</strong> was used because sklearn's GroupKFold does not have a "seed" parameter to provide randomness (a minimal sketch is given right after this list).</li> <li>Ensemble of 15 LGBM models (3 CV iterations with 5 folds each; the "feature fraction" parameter was changed across folds).</li> <li><strong>Adversarial validation</strong> via selection of random samples with unique IDs was used (200 iterations). The <strong>median score</strong> for each model was calculated and saved (it was used as a weight at the model voting stage).</li> <li>Rounding <strong>thresholds optimization</strong> via OptimizedRounder was implemented <strong>for each model individually</strong>, based on the whole train dataset. So each of the 15 models has its own set of rounding thresholds that was used for the final prediction.</li> <li><strong>Prediction confidence values were calculated as the ratio between the distance to the nearest rounding threshold and the length of the interval between adjacent thresholds</strong>. These confidences were used as weights at the model voting stage.</li> <li>The model voting stage was done with weights based on the <strong>median fold score</strong> and <strong>confidence values.</strong></li> </ul>
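<p>A minimal sketch of such a seeded group K-fold (illustrative only, not the exact implementation I used):</p>
```python
import numpy as np

def rand_group_kfold(groups, n_splits=5, seed=42):
    """Yield (train_idx, valid_idx) pairs with whole groups held out, like GroupKFold,
    but with the group-to-fold assignment shuffled by a seed."""
    groups = np.asarray(groups)
    unique_groups = np.unique(groups)
    rng = np.random.RandomState(seed)
    rng.shuffle(unique_groups)
    for fold_groups in np.array_split(unique_groups, n_splits):
        valid_mask = np.isin(groups, fold_groups)
        yield np.where(~valid_mask)[0], np.where(valid_mask)[0]
```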
<p>I ensured full reproducibility of the results by using SortedList, SortedDict and SortedSet (from the sortedcontainers package) for features built with "set" and "dict", as well as "seed" values for all models. It helped me a lot during the model selection and submission stages.</p> <h2>Things that did not help:</h2> <ul> <li>Training dataset augmentation.</li> <li>Using the kappa metric as the stopping criterion in LGBM, CatBoost and XGBoost (it was very unstable).</li> <li>Ensembling LGBM with CatBoost, XGBoost and NN models (results were nearly the same as the standalone LGBM model, but training time increased a lot).</li> </ul> <h3>Thanks for reading and good luck in future competitions!</h3>
Google QUEST Q&A Labeling
My First Medal (Bronze) Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Google and Kaggle for the challenging competition (a good mixture of science and art imo)! </p> <p>Also, congrats to all teams! :)</p> <h2><strong>Pre-Processing:</strong></h2> <ul> <li>unidecode and html unescape</li> </ul> <h2><strong>Modelling:</strong></h2> <ol> <li>Fine-tune Bert base uncased on raw data + statistical features (LB: 0.384)</li> <li>Fine-tune Bert base uncased on pre-processed data + statistical features (LB: 0.384)</li> <li><p>Feature extraction model using GloVe, FastText, USE 5, Paragram and distilBert (LB: 0.371)</p> <ul><li>I used TF + hugging face (in hindsight, reproducibility was really problematic)</li> <li>All had a similar architecture using Adam + BCE loss with sigmoid output on all 30 labels</li> <li>I used spatial dropout to reduce overfitting when fine-tuning Bert</li> <li>Plain vanilla 5 KFolds (lucky seed 42) combined with 2 epochs after early stopping</li> <li>TTA using pre-processed text and raw text</li></ul></li> </ol> <p>Ensembling them using a simple mean (across folds, epochs, and models) got me LB: 0.394</p> <h2>Post-Processing:</h2> <ul> <li>Simple quantile threshold on "easy targets" mimicking train set labels (with a margin), e.g. <code>threshold = submission[c].quantile(0.75)</code> <code>submission[c].apply(lambda x: 1 if x &gt; threshold else x)</code></li> <li>"Easy targets" were selected based on the std and nunique of the train set labels</li> <li>Rounding predictions to 4 significant figures to account for "tied ranks"</li> </ul> <p>Post-processing the outputs gave me a boost to LB: 0.410</p> <h2>What I tried that didn't work:</h2> <ul> <li>GRUs/LSTMs </li> <li>Regression NN with target transformation (similar CV but doesn't generalise well to the public LB)</li> <li>Data augmentation (random deletion, random insertion, switching question/answer sequences, etc.)</li> <li>XLNet base, Roberta base, Albert base and Bert cased could not beat my Bert base uncased baseline</li> <li>Rank averaging only helped if no post-processing was done</li> <li>Rounding predictions to the nearest 1/90 </li> </ul> <h2>Biggest take-away:</h2> <p>Don't focus <em>too much</em> on public kernels and discussion. Just have a look, get some ideas, look at the data again and try your own ideas. (p.s. I think someone mentioned this on the discussion forums but I forgot which thread and who)</p>
Google QUEST Q&A Labeling
11th Solution: A threshold oriented Competition is not good
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The model that I created 23 days earlier is doing better than my final submitted one. After that, I tried a lot, but what really worked was tuning the threshold for what percentage of the smallest values I should zero out.</p> <p>I 'wasted' a lot of my precious time tuning the threshold, but it will help me not at all when I join the next competition.</p>

```
def zero_out(df, column, percent):
    """Zero out all the values that are close to 0"""
    index = df[column].nsmallest(int(percent * len(df))).index
    df.loc[index, column] = 0
    print(column, " remain ", len(df) - len(index))

zero_tupe = [
    ("question_type_spelling", 0.99),
    ("question_not_really_a_question", 0.94),
    ("question_type_consequence", 0.92),
    ("question_type_definition", 0.91),
    ("question_type_entity", 0.88),
    ("question_type_compare", 0.96),
    ("question_type_choice", 0.44),
    ("question_conversational", 0.78),
    ("question_multi_intent", 0.55),
]

for column, percent in zero_tupe:
    zero_out(df_sub, column, percent)
```

<p>The score comparison is as follows (PB, LB):</p> <ul> <li>Without zero-out: 0.386 / 0.409</li> <li>With zero-out (23 days ago): 0.424 / 0.450</li> <li>After 23 days of 'hard work': 0.424 / 0.465</li> </ul> <p>What I did later, after tuning the threshold, actually harmed the PB score.</p>
Google QUEST Q&A Labeling
77th place post-processing method: DBSCAN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all of you who shared your work and ideas during the competition!</p> <p>I started this competition late and without any knowledge about BERT or the current transformer architectures. My goal was just to learn about BERT, so I'm very happy to be in the silver zone. </p> <h2>Base models</h2> <p>My base models were just variants of what was shared in the public kernels:</p> <ol> <li><code>bert-large</code> trained over <code>question_title</code> + <code>question_body</code> + <code>answer</code>.</li> <li>Two <code>bert-base</code> trained over <code>question_title</code>+<code>question_body</code> and <code>answer</code>, to predict answer and question targets separately.</li> <li>Three <code>bert-base</code> trained over <code>question_title</code>+<code>question_body</code>, <code>question_body</code>+<code>answer</code> and <code>answer</code>, to predict all targets jointly. </li> </ol> <p>I used 5-GroupKFold grouped by <code>question_title</code>. For the training process I first trained the regression head with BERT fixed, and then trained the whole network for 3 epochs.</p> <p>My final submission was just the average of these three models.</p> <h2>Post-processing</h2> <p>Two days before the competition end, I was at ~800th position in the public leaderboard with a ~0.39 score and no chance to enter the medal zone. Thanks to @corochann in this <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">discussion</a> I found out why I wasn't able to go up in the leaderboard: even though my models were pretty good at predicting scores, the only thing that matters for the evaluation metric is the order.
<strong>The solution</strong>: find groups of target predictions which are very close and set them to the same value, i.e., <strong>cluster</strong> nearby target predictions, and <a href="https://en.wikipedia.org/wiki/DBSCAN">DBSCAN</a> is pretty good at this task.</p> <blockquote> <p>DBSCAN is a density-based clustering non-parametric algorithm: given a set of points in some space, it groups together points that are closely packed together (points with many nearby neighbors), marking as outliers points that lie alone in low-density regions (whose nearest neighbors are too far away).</p> </blockquote> <p>This algorithm requires two parameters: <code>eps</code>, the maximum distance for two points to be considered neighbors (and be assigned to the same cluster), and <code>min_samples</code>, the number of samples in a neighborhood for a point to be considered a core point.</p> <p>Thus, my post-processing routine was:</p>

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_predictions(submission, q_value=0.95):
    submission = submission.copy(deep=True)
    for column in submission.columns[1:]:
        print(column)
        max_dist = (submission[column]
                    .sort_values()
                    .diff()
                    .quantile(q_value))
        print(max_dist)
        if max_dist == 0:
            continue
        X = submission[column].values.reshape((-1, 1))
        clustering = DBSCAN(eps=max_dist, min_samples=2).fit(X)
        print(np.unique(clustering.labels_))
        for cluster_id in np.unique(clustering.labels_):
            if cluster_id == -1:
                continue
            cluster_idx = np.argwhere(clustering.labels_ == cluster_id)
            cluster_median = np.median(submission.loc[cluster_idx, column])
            submission.loc[cluster_idx, column] = cluster_median
    return submission
```

<p>For each target column, I compute the differences between the sorted predictions and set <code>eps</code> as a high quantile of these differences. I tuned <code>q_value</code> through the 5-GroupKFold predictions and found 0.95 to be a good value. The post-processing gave me a boost of <code>0.39X -&gt; 0.42X</code> in public LB and <code>0.37X -&gt; 0.39X</code> in private LB. </p> <p>With more time and a better ensemble, I think this approach could have achieved a much better correlation on the private leaderboard. </p>
Bengali.AI Handwritten Grapheme Classification
167th Place Solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you very much to the competition organizers at Kaggle and to the Bengali.AI community for organizing the competition. Many thanks to everyone who discussed the competition on the forum. With each competition, I have been learning a lot from the forum and notebook kernels. And finally, congrats to all the winners and teams that participated in this competition! My solution summary is given below:</p> <p><strong>Dataset:</strong></p> <ul> <li>Image size: 137x236 (no preprocessing)</li> </ul> <p><strong>Augmentation:</strong></p> <ul> <li>CutMix</li> </ul> <p><strong>Model:</strong></p> <ul> <li>EfficientNet-B5 with three heads</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1532763%2F9fdbc2230f1ca5639dd271f093bada56%2Fmodel_diagram.PNG?generation=1587308567296419&amp;alt=media" alt=""></p> <p><strong>Training:</strong></p> <ul> <li>5 fold configuration</li> <li>Data split on the basis of grapheme root labels</li> <li>Loss: Cross Entropy Loss</li> <li>Optimizer: Over9000</li> <li>Scheduler: Reduce On Plateau</li> <li>Gradient accumulation</li> <li>Batch size 100</li> <li>Initial learning rate 0.03</li> </ul> <p><strong>Inference:</strong></p> <ul> <li>Best average recall checkpoints were used</li> <li>Simple average of the outputs from 5 folds</li> <li>Inference kernel: <a href="https://www.kaggle.com/mohammadzunaed/efficientnet-b5-inference-kernel-pytorch?scriptVersionId=32245517">https://www.kaggle.com/mohammadzunaed/efficientnet-b5-inference-kernel-pytorch?scriptVersionId=32245517</a></li> </ul> <p><strong>Things that did not work for me:</strong></p> <ul> <li>Preprocessing</li> <li>GridMask, Cutout, AugMix</li> <li>Label smoothing criterions</li> <li>Single head instead of three heads</li> <li>Activation functions and convolutional layers in the heads</li> </ul> <p><strong>Github Link:</strong> <a href="https://github.com/Rafizunaed/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification.git">https://github.com/Rafizunaed/Kaggle-Bengali.AI-Handwritten-Grapheme-Classification.git</a></p>
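<p>A rough PyTorch sketch of the three-head setup described above (using the <code>timm</code> library is my assumption, not necessarily the author's implementation; 168/11/7 are the grapheme root / vowel diacritic / consonant diacritic class counts):</p>

```python
import timm
import torch.nn as nn

class ThreeHeadNet(nn.Module):
    def __init__(self, backbone="tf_efficientnet_b5", pretrained=True):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits
        self.backbone = timm.create_model(backbone, pretrained=pretrained, num_classes=0)
        feat_dim = self.backbone.num_features
        self.head_root = nn.Linear(feat_dim, 168)        # grapheme root
        self.head_vowel = nn.Linear(feat_dim, 11)        # vowel diacritic
        self.head_consonant = nn.Linear(feat_dim, 7)     # consonant diacritic

    def forward(self, x):
        feats = self.backbone(x)
        return self.head_root(feats), self.head_vowel(feats), self.head_consonant(feats)
```

<p>The three cross-entropy losses would then be computed per head and summed.</p>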
Deepfake Detection Challenge
5th (from 20th) place approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3108186%2Fc48aafd2dcdc7ff90d51c894946eb8d3%2Fapproach.PNG?generation=1585749236643739&amp;alt=media" alt=""></p> <p><a href="https://james.dev/img/approach_large.PNG">Full size/res picture available here.</a></p> <h1>Intro</h1> <p>Our solution is largely based around 3D CNNs, which we felt would generalise well to 'different' methods of deepfakes. Instead of focusing on specific pixel patterns in frames, we hoped they might identify temporal problems, much as the often tried-and-failed 2D CNN-&gt;RNN models aim to, but with a much lower tendency to overfit. The 3D models have similar numbers of parameters to 2D models, yet their kernels also have a 'depth', so they are relatively modest in their ability to fit in the 2D plane.</p> <p>Only a month ago we had a solution using a few 3D CNNs, a 2D CNN-&gt;LSTM and a 2D CNN, and we were 3rd on the leaderboard. However, we found it hard to keep up over the last couple of weeks and have dropped to 22nd.</p> <p>Fingers crossed for the private leaderboard result, and good luck everyone!</p> <h1>Face extraction</h1> <p>The basis of the pipeline is the face extractor. This was written within the first week of the competition and since then it's undergone only minor tweaks. In summary, every nth frame (we settled on 10) is passed through MTCNN. The bounding box co-ordinates are then used to set a mask in a 3D array. Faces which are contiguous (overlapping) in 3D are assumed to be a single person's face moving through space and time. We then extract a bounding box which includes this entire face over time and create a video from this region of interest, including every frame (not just every 10th). </p> <p>One nice thing about this method, even ignoring the video aspect, is it greatly reduced false positives, because the 'face' had to be present for a long period of the video to count as a face.</p> <p>Separately, we use the bounding boxes of MTCNN for a traditional 2D CNN inference.</p> <h1>3D CNNs</h1> <p>We trained 7 different 3D CNNs across 4 different architectures (I3D, 3D ResNet34, MC3 &amp; R2+1D) and 2 different resolutions (224 x 224 &amp; 112 x 112). The validation losses of the 3D CNNs ranged between 0.1374 and 0.1905. I initially struggled to fit models with spatial-level augmentation, finding it was so disruptive that the model struggled to tell real apart from fake. I had therefore been using only pixel-level augmentation such as brightness (and Hflip) and some random cropping. However, Ian developed what we hope is a very successful 3D cutmix approach, which allowed models with good CV scores to be trained and added to the ensemble.</p> <p>To put the benefits of ensembling with these models into context, when we last tried the I3D model (lowest validation loss) alone we ended up with a public LB score of 0.341. With ensembling we’ve improved to 0.253.
</p> <p>Both these networks and the 2D CNNs were trained with AdamW (or sometimes Ranger) with a 1-cycle learning rate scheduler.</p> <h1>2D CNNs</h1> <p>We ended up including a single 2D CNN in the ensemble. It was trained with quite aggressive augmentation. Ian settled on SE-ResNeXT50 for the final model, though we tried multiple others including Xceptions and EfficientNets.</p> <h1>Things that didn't work, or didn't work enough to be included.</h1> <ul> <li><p>Using a series of PNGs for videos instead of MP4s. I think this is interesting. We trained these models by generating videos using an identical face extraction pipeline and saving the faces as MP4s (~216,000 fake faces, ~42,000 real, 17Gb of video). Towards the end we wondered if saving the faces as MP4s led to a loss of fidelity, and so we saved every frame of these videos as a PNG and tried retraining the networks that way. This took almost a week, resulted in over 500 Gb of data, and led to the 3D CNNs overfitting very early, even with reference to the validation data. It was a very time-consuming experiment that we still find surprising, given the PNG data is more like what the feature extractor is seeing at test time.</p></li> <li><p>Using fewer frames. This might seem obvious but it's a bit more nuanced. We only used faces for videos if they were contiguous over at least 30 frames, up to a maximum of 100. That 100 was set by time constraints. However, using fewer frames would also mean a face is able to move around less. We found in some videos someone would walk across the entire screen in a second. This meant that the face only took up a tiny proportion of the entire region of interest. We therefore tested reducing the maximum frame number to, say, 64 to ensure we still got a couple of seconds of video (we trained on chunks of 64 frames) while faces moved around less. However, this gave a worse LB score. We also tried more intelligent methods, such as using at least 30 frames, but then ‘stopping’ as soon as a face had moved &gt; 1 face diameter away from its start point. Again, this was worse (this was particularly surprising). It probably shows that deepfakes are more or less detectable at different stages of the video, and so more frames are always better.</p></li> <li><p>CNN -&gt; RNNs did work a <em>little</em>, but not enough to use. Before Ian joined and we became a team, my pipeline did include one of these, but with his better models we dropped it. This seems in contrast to a lot of people who seemed to have 0 success. I suspect this is because we trained these models with the MP4 files rather than image files (see the previous point).</p></li> <li><p>A segmentation network to identify altered pixels. Ian developed masks by 'diffing' the real and fake videos and trained a network to identify altered pixels. It had some success, but overfitting meant it had an LB score of ~0.5 and wasn't enough to provide benefit in ensembling.</p></li> <li><p>Skipping alternate frames as a form of test-time-augmentation to amplify deltas between frames certainly didn't help, and possibly hindered.</p></li> <li><p>Training a fused 3D/2D model which allowed the ensemble weighting to be 'learned' on validation videos did worse than a simple 50:50 pooling.</p></li> <li><p>Using more complicated methods of pooling predictions across faces than taking a mean. I previously thought the 'fakest' value should be used, for example, if we were using very high thresholds for detection.</p></li> <li><p>Some 3D models.
For example, architectures such as 'slowfast' were very easy to train but overfitted profoundly, akin to a 2D CNN. Other networks such as MiCT we struggled to fit at all!</p></li> </ul>
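<p>The 'contiguous in 3D' face grouping described in the face-extraction section can be illustrated with a connected-components pass over a stacked frame/height/width mask. This is only a hypothetical sketch of the idea, not the authors' code; box coordinates are assumed to be pixel integers:</p>

```python
import numpy as np
from scipy import ndimage

def face_tracks(boxes_per_frame, frame_height, frame_width):
    """boxes_per_frame: one list of (x1, y1, x2, y2) detections per sampled frame.
    Returns a union bounding box (plus frame span) per spatio-temporally contiguous face."""
    mask = np.zeros((len(boxes_per_frame), frame_height, frame_width), dtype=bool)
    for t, boxes in enumerate(boxes_per_frame):
        for box in boxes:
            x1, y1, x2, y2 = (int(v) for v in box)
            mask[t, max(y1, 0):y2, max(x1, 0):x2] = True
    labels, _ = ndimage.label(mask)            # boxes overlapping in adjacent frames share a label
    tracks = []
    for t_sl, y_sl, x_sl in ndimage.find_objects(labels):
        tracks.append({"frames": (t_sl.start, t_sl.stop),
                       "box": (x_sl.start, y_sl.start, x_sl.stop, y_sl.stop)})
    return tracks
```

<p>Tracks spanning too few sampled frames could then be discarded, which is what gives the false-positive reduction mentioned above.</p>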
Bengali.AI Handwritten Grapheme Classification
38th Place - ZoneMix - A domain specific augmentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A really big thanks to all the amazing participants for sharing their ideas. @haqishen @hengck23 @bibek777 @machinelp @quandapro and so many other awesome users without whose suggestions I would have been completely lost. And also big thanks to the Bengali.AI organizers for this competition about my native language; the lessons learned from this competition will also be applicable to other languages of the Indian subcontinent like Hindi, Odia, Telugu, etc. </p> <p>I'll be honest, I joined the competition at nearly the end of January, and for most of the time I was just trying to improve the score with simple augmentation/model/hyperparameter tuning. And having only Google Colab to train on doesn't help with that either. Had I continued on that track I surely wouldn't have been in any medal zone. Thanks a lot to my teammate Ram for pointing out the important discussions about unseen graphemes by the organizers, and subsequently Qishen Ha's post on validating on unseen graphemes. </p> <p>With only 10 days remaining, I realized how poorly the model would do on unseen graphemes. In particular, classes with few combinations - which is more than half of the grapheme roots, plus consonants 3 and 6 - generalize the worst. (Check the combination counting kernel <a href="https://www.kaggle.com/dipamc77/bengali-dataset-grapheme-combination-counts?scriptVersionId=30348992">here</a>)</p> <p>I probed for the scores of some of these classes similar to the method described <a href="https://www.kaggle.com/philippsinger/checking-r-c-v-scores">here</a>. From the gaps, I was sure that a huge shakeup was coming.</p> <p>After reading the top solutions I'm inspired to pursue ML-based methods to improve generalization rather than a rule-based method like the one I describe below; I would still like to share it.</p> <h1>ZoneMix - Splitting graphemes by zones to apply cutmix</h1> <p>As a native Bengali speaker, I can say that generally graphemes can be split into particular zones based on the combinations. Below is a diagram of the zones.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Ffc1a8235909e0be649a656d3bece6637%2FZones.png?generation=1584474729279357&amp;alt=media" alt=""></p> <p>Some extra rules have to be made for some combinations, which was kind of a pain to write in code and made me question why I'm even working on such a tedious idea 😭 . I'll share the code surely, it's just a huge mess right now.</p> <p>Each combination takes up some of these zones, and cutting by the zones' coordinates based on the character's bounding box can roughly give the components.
Here is an example:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Fdf89e7805d9fb22cb46320f15a51cf63%2Fgoodexample.PNG?generation=1584475233238792&amp;alt=media" alt=""></p> <p>Of course, real-world data doesn't always follow the rules, and failure cases are common:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F574f7c8d3cd2f39a37347ac672e5c4ab%2Fbadexample.PNG?generation=1584475938169427&amp;alt=media" alt=""></p> <p>Once the zone data is extracted, the zones can be placed on other images by resizing based on the new zone coordinates. This can produce very realistic images, but also unrealistic ones.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F6af3d256dc7ef3fccc04024763b75102%2FGoodMix.PNG?generation=1584476645803135&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2F4e649a79884043537ec5bc181b8e343c%2FBadMix.PNG?generation=1584476950698513&amp;alt=media" alt=""></p> <p>When all the setup was done I had about 3 days left to train with this method. And sadly it didn't work directly; I think it's because the mistakes and unrealistic images are too much noise for the model to deal with.</p> <p>I was able to tone down the noise with the following tricks. The grapheme root zones are roughly correct most of the time, while the vowel and consonants are way off. I figured that since the grapheme root and consonants 3 and 6 are the main classes to tackle, I should focus on those. </p> <p>Using ZoneMix I trained a model to predict only the grapheme root; this reduced the noisy gradients from the vowel and consonants and improved the recall as well as the unseen recall.</p> <p>Consonants 3 and 6 have a particular property: they are basically combinations of consonants 2 and 4, and of consonants 4 and 5, respectively. Dieter also explains this in his <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136129">solution post</a> and I feel he found a much better way to deal with it. Anyway, my approach was to use ZoneMix to place consonant 4 onto images with consonants 2 and 5. Examples are below. With these I trained a new model for consonants.</p> <p>I literally trained both models in parallel on two Colab sessions on the last day and didn't even have time to tune anything; I had to manually stop the runs because only 3 hours were left until the end of the competition. </p> <p><strong>Final solution</strong> - ZoneMix models for grapheme roots with few combinations (&lt;5) and for consonants 3 and 6, along with the old trained ensemble for the rest (a pretty bad one, with public LB only 0.982 and private LB 0.931). But I confidently picked the ZoneMix kernel knowing it would surely be better on the private set 😄 </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3468635%2Ff596e6402b241d8ee5cfe0be5646ce43%2FCapture.PNG?generation=1584474005219015&amp;alt=media" alt=""></p> <p>With some better tuning and validation it could probably have done much better. </p> <h1>Hyperparameter tuning that worked and didn't work for me</h1> <ol> <li>Wasted weeks on cropped and preprocessed images, didn't work at all.
I'm still not convinced why.</li> <li>Did lots of experiments on resnet34 with 128x128 (just resize) - this was practical for fast experimentation on Google Colab.</li> <li>Balanced sampling - the metric is a macro average, so it helped to do balanced sampling, I guess.</li> <li>Cutout/Cutmix - gives better results than mixup.</li> <li>Interestingly, Mixup + GridMask gives much better results on unseen CV.</li> <li>Finally trained SE-ResNeXt50 with 128x128, which gave an improvement on CV (though the same pathetic results on unseen CV).</li> <li>Well-tuned ReduceLROnPlateau converges as much as 1.5x - 2x faster than cosine decay. Didn't try other LR schedules.</li> </ol> <h1>Some ideas that I didn't get to try</h1> <ol> <li>CAM FMix - one of the top solutions uses CAM CutMix</li> <li>Correct zone segmentation via noisy student training methods on the noisy zones.</li> </ol> <h1>Final thoughts and important lessons learned</h1> <ol> <li><p>From the only other competition I took part in, <a href="https://www.kaggle.com/c/severstal-steel-defect-detection">Severstal: Steel Defect Detection</a>, I had this lesson: if there is a gap in CV and LB, trust your CV more and don't overfit LB.</p></li> <li><p>I can change that lesson now - if there is a gap in CV and LB, try to understand why from the discussions, the problem statement and the data. Don't dive into coding new ideas trying to reduce the gap without having a proper understanding of what is causing the gap.</p></li> <li>Try ML-based approaches for solving the problem; after seeing the top solutions I feel like I understand them very well, but it didn't occur to me to use these ideas. Maybe I'm biased towards rule-based ideas and that needs to change.</li> <li>Do lots of EDA from time to time, not just at the beginning of the competition. Seeing the results of trained models changes our intuition about the data.</li> <li>Don't ignore external datasets. Of course, working with them needs time and effort, but with correct usage, external data can work wonders.</li> <li>Lastly - it is important to keep hacks to a minimum, but at the end of the day, it's a competition; do what you need to do to improve your score, like the awesome metric hacking magic.</li> </ol>
Bengali.AI Handwritten Grapheme Classification
1416th place solution + alpha
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners and thanks for sharing your solutions! I was shocked to drop 1356 places, which means my model was good for seen classes but totally failed to predict unseen combinations. Referring to the winners' solutions, I fixed my models, and they finally achieved public 0.9925 / private 0.9552. This was trained only on TPU using Keras, so it may be useful for anyone who wants to use TPU and Keras. <a href="https://www.kaggle.com/bamps53/private0-9552-tpu-keras-metric-learning">https://www.kaggle.com/bamps53/private0-9552-tpu-keras-metric-learning</a></p>

| No | model | cv | public | private | rank |
|----|--------------------------------------|-------|--------|---------|--------|
| 1 | decode 1292 unique labels (original) | 0.999 | 0.9870 | 0.9035 | 1416th |
| 2 | 3 heads for each r, v, c | | 0.9891 | 0.9228 | 639th |
| 3 | 3 models for each r, v, c | | 0.9665 | 0.9383 | 56th |
| 4 | seen: No.1 unseen: No.3 | | 0.9925 | 0.9552 | 8th |
| 5 | No.4 with postprocess | | 0.9941 | 0.9704 | 2nd |
Google QUEST Q&A Labeling
24th place - mixing all I can - BERT+XLNET+GPT2+LGB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My first share of anything in public. Got to Expert tier, so I guess I've got to share back, even if the result is not in the very top!</p> <p>The key ideas which made an impact:</p> <p><strong>Pre-processing:</strong></p> <ul> <li>Feature engineering - number of punctuation marks, nouns and the like, number of common words between question and answer, etc. These are used in a separate model (LGB).</li> <li>Two separate sets of tokenized inputs. The first set consists entirely of the question title and question body and is used to predict categories relying on the question only (first 21 outputs). The second set consists of the question title, question body and answer, and is used for the remaining 9 categories related to how the answer relates to the question and answer-only features.</li> </ul> <p><strong>Modelling:</strong></p> <ul> <li>Train BERT, XLNet and GPT2, all in base configuration - 1 model of each kind for each of the 2 sets of outputs (question only and question + answer). Use one-cycle training, standard Adam, and early stopping on a decrease in the validation metric (spearmanr). Train for 3 folds (but using 10-fold GroupKFold so that each fold has a larger train set).</li> <li>Train LGB using just the engineered features, one LGB model per output variable.</li> </ul> <p><strong>Post-processing:</strong></p> <ul> <li>Mix all the models using softmaxed validation scores as weights, with a beta of 18.5 as this maximised the validation score (optimised in a separate kernel).</li> <li>Iterate through different rounding policies for each output variable and use the best one for each. Rounding policies considered - normal rounding with 1-5 decimal places, and ceil/floor by multiplying the prediction by 10^X (where X is in the range 1-5), applying floor/ceil, and then dividing by 10^X.</li> </ul> <p>PS: the initial kernel is forked from <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">Bert-base TF2.0 (now Huggingface transformer)</a>, big thanks to @akensert for his share! I would not have been able to accomplish such results without this, as I joined relatively late (19 days before the end).</p> <p>PPS: The actual code is relatively dirty and relies on saved model weights. Still, in case anyone wants to have a look, it is <a href="https://www.kaggle.com/bookerd/2nd-gen-predict-only-all-available-models">here</a> (v19 is the final submission)</p>
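<p>A small sketch of the softmax weighting from the post-processing step above (the validation scores and <code>model_predictions</code> are hypothetical placeholders):</p>

```python
import numpy as np

def softmax_weights(val_scores, beta=18.5):
    """Turn per-model validation scores into ensemble weights; larger beta = sharper weighting."""
    s = beta * np.asarray(val_scores, dtype=float)
    e = np.exp(s - s.max())
    return e / e.sum()

weights = softmax_weights([0.392, 0.401, 0.388])
blend = sum(w * p for w, p in zip(weights, model_predictions))  # model_predictions: list of arrays
```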
Google QUEST Q&A Labeling
45th Place Simple Solution - 4 model average
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks everyone for posting your elegant solutions. Thanks to the organizers and my teammates @iru538 @tarandro @mobassir and @shahules.</p> <p>I had joined the competition early and then left it for PKU. After failing to secure a medal spot in PKU, I rejoined this competition and teamed up with Shahul and Mobassir. Both of them had by that time already trained around 13+ vanilla (public kernel) TF BERT + XLNet models predicting in the range of .385 to .387, but we were not able to crack the .400 mark. Then I applied the post-processing below (collected from one of the public kernels):</p>

```
test_preds = final_predictions
y_train = df_train[output_categories].values
for column_ind in range(30):
    curr_column = y_train[:, column_ind]
    values = np.unique(curr_column)
    map_quantiles = []
    for val in values:
        occurrence = np.mean(curr_column == val)
        cummulative = sum(el['occurrence'] for el in map_quantiles)
        map_quantiles.append({'value': val, 'occurrence': occurrence, 'cummulative': cummulative})

    for quant in map_quantiles:
        pred_col = test_preds[:, column_ind]
        q1, q2 = np.quantile(pred_col, quant['cummulative']), np.quantile(pred_col, min(quant['cummulative'] + quant['occurrence'], 1))
        pred_col[(pred_col >= q1) & (pred_col <= q2)] = quant['value']
        test_preds[:, column_ind] = pred_col
```

<p>We crossed .401 for the first time and were able to extend it further with XLNet, RoBERTa, etc. up to .417.</p> <p>In parallel, Toru joined our team with a great single PyTorch model scoring .409 without post-processing. Since I was more comfortable with PyTorch, I used that model as a base and started training GPT2 and other PyTorch BERT models. This took us to .410 based on PyTorch models.</p> <p>After Alexis joined, he pointed us to a post-processing trick used in the Data Science Bowl 2018 1st place solution, "Data Science is Important- Bert says". Here is our OptimizedRounder:</p>

```
from functools import partial
import scipy as sp

class OptimizedRounder(object):
    def __init__(self, correlation):
        self.correlation = correlation
        self.coef_ = 0
        self.score = 0

    def _kappa_loss(self, coef, X, y):
        a = X.copy()
        b = y.copy()
        X_p = pd.cut(a, [-np.inf] + list(np.sort(coef)) + [np.inf], labels=[0, 1, 2])
        a[X_p == 0] = 0
        a[X_p == 2] = 1
        # print("validation score = {}".format(spearmanr(a, b).correlation))
        if spearmanr(a, b).correlation < self.correlation:
            self.score = spearmanr(a, b).correlation
            return - spearmanr(a, b).correlation + (self.correlation - spearmanr(a, b).correlation + 1)**10
        else:
            self.score = spearmanr(a, b).correlation
            return - spearmanr(a, b).correlation

    def fit(self, X, y, coef_ini):
        loss_partial = partial(self._kappa_loss, X=X, y=y)
        initial_coef = coef_ini
        self.coef_ = sp.optimize.minimize(loss_partial, initial_coef, method='nelder-mead')

    def coefficients(self):
        return self.coef_['x']

    def score_fc(self):
        return self.score
```

<p>Once Alexis was onboarded, he started applying post-processing to our PyTorch models and it gave a big boost. So we scrapped the TF models (as they were taking more GPU as well).
</p> <p>Our final selected solution consists of an ensemble of:</p> <ol> <li>BERT_Base, 5 folds (separate question model and answer model; tried three different architectures) + statistical features (boosted the score a lot)</li> <li>Sequence model with statistical features</li> <li>DistilBERT embedding sequence model</li> <li>DistilBERT embedding LSTM model</li> </ol> <p>We got a slightly better private score using GPT2 in place of the sequence model, but we didn't select that as the final score. Our highest scoring kernel has:</p> <ol> <li>BERT_Base, 5 folds (separate question model and answer model; tried three different architectures)</li> <li>GPT2 Base (10 folds, separate question and answer models)</li> <li>Simple LSTM model</li> <li>Sequence model with USE features</li> </ol> <p>Here is a kernel with the inference, which will give some idea of the architectures: <a href="https://www.kaggle.com/phoenix9032/threshold-base-4toru?scriptVersionId=28450007">https://www.kaggle.com/phoenix9032/threshold-base-4toru?scriptVersionId=28450007</a></p> <p>We have not used any trick at training time.</p>
Google QUEST Q&A Labeling
39th place solution, submission errors and memory leakage
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I would like to say thank you to the <strong><em>organizers</em></strong> and <strong><em>all teams</em></strong>! It was a very interesting and fruitful competition.</p> <h3>Agenda:</h3> <ul> <li>Models and Training</li> <li>Post-Processing</li> <li><strong>Pain 1 - submission errors</strong></li> <li><strong>Pain 2 - memory leakage using "load_weights" in keras</strong></li> </ul> <h3>Models and training</h3> <ul> <li>Dedicated BERT models for questions, answers, and both questions and answers</li> <li>GroupKFold with question_title groups</li> <li>A weighted BCE loss function was the best. Higher weights were assigned to columns with a small amount of non-zero elements, like "question type spelling"</li> <li>Blending strategy</li> <li>I tried a weighted MSE loss function and a mix of MSE + BCE, but results were worse than using BCE standalone</li> </ul> <h3>Post-Processing</h3> <ul> <li>"0,1 mapping" - mapping high values to 1.0 and low values to 0.0 worked the best. The percentage of values to be mapped is chosen for each column individually (an optimization problem was solved for each predicted column of the training set); see the sketch at the end of this post</li> <li>"Optimizer rounder" helped me a lot to achieve a silver medal in the previous competition, "2019 Data Science Bowl". I was happy to find discrete classes in our targets (caused by the limited number of assessors). I tried different quantization strategies:</li> </ul> <p><em><strong>standard:</strong></em> global optimization where the set of classes was set up using all unique values in the target columns over the whole training set (single codebook case)</p> <p><em><strong>advanced:</strong></em> column-wise optimization where the number of classes was equal to the number of unique values in each target column of the training set (multiple codebooks case)</p> <p>Using "Optimizer rounder" standalone (standard and advanced) gave a significant LB improvement (+0.02), but using it jointly with "0,1 mapping" led to an LB decrease (see scores below). Also, it is well known that "Optimizer rounder" is a very risky approach from an overfitting point of view. Therefore I finally decided to choose two submissions: with and without "Optimizer rounder". When the private results were published I saw that I was right, and "Optimizer rounder" made the final score lower.</p> <pre><code>public 0.43318 -&gt; private 0.40901 ("0,1 mapping")
public 0.43050 -&gt; private 0.40635 ("0,1 mapping" + "Optimizer rounder")
</code></pre> <p>The solution without "Optimizer rounder" pushed me 30 positions up on the private test set.</p> <h3>Pain 1 - submission errors</h3> <p>A lot of time was spent overcoming submission errors:</p> <ul> <li><strong>Submission CSV Not Found</strong></li> <li><strong>Submission Scoring Error</strong></li> </ul> <p>I have met such errors more than 30 times. It was really annoying. The worst thing is that there is no additional information from the system to understand what went wrong. I experimentally found that <strong>"Submission CSV Not Found"</strong> is usually caused by the RAM memory restriction. It occurs because the private test set is 7 times bigger than the public one.
<strong>"Submission Scoring Error"</strong> is caused by columns with a small amount of non-zero elements. The reason lies in how the correlation metric is calculated.</p> <h3>Pain 2 - memory leakage in keras</h3> <p>I met a problem with the "load_weights" method in keras when performing blending. It loads model weights into RAM but does not free the memory after clear_session(). It is a well-known issue that has been discussed many times in different resources, but nothing has changed (for example here <a href="https://github.com/keras-team/keras/issues/7271">https://github.com/keras-team/keras/issues/7271</a> and here <a href="https://github.com/tensorflow/tensorflow/issues/27511">https://github.com/tensorflow/tensorflow/issues/27511</a>). Unfortunately the latest version of keras, 2.3.1, still has this issue.</p> <p>A possible reason, according to @robieta, is the following: <strong>"The issue is with K.clear_session(), which clears the default graph. In 1.x, Keras uses the default tf graph, but in 2.0 Keras manages its own graph. So the clear session call isn't actually clearing the keras graph and models are accumulating there and preventing memory from being freed. It should be a simple fix to make clear_session clears the right graph in 2.0"</strong></p> <p>I tried multiple things to overcome it, including but not limited to:</p> <ul> <li>garbage collector calls ("del" for all variables + "gc.collect()")</li> <li>a multiprocessing strategy where tensorflow and keras, as well as the models with weights, were imported in a separate thread</li> </ul> <p>But nothing helped. It limited the number of models that I could use in blending. I hope that it will be fixed at some point. Please let me know if you are aware of how to fix this bug.</p> <h3>Thanks for reading and good luck in future competitions!</h3>
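<p>A minimal sketch of the "0,1 mapping" post-processing described above, assuming the per-column fractions have already been tuned on the training predictions (all names are placeholders, not the author's code):</p>

```python
import numpy as np

def map_01(preds, low_frac, high_frac):
    """Set the lowest `low_frac` share of predictions to 0.0 and the highest `high_frac` share to 1.0."""
    preds = preds.copy()
    if low_frac > 0:
        preds[preds <= np.quantile(preds, low_frac)] = 0.0
    if high_frac > 0:
        preds[preds >= np.quantile(preds, 1.0 - high_frac)] = 1.0
    return preds

# e.g. submission[col] = map_01(submission[col].values, low_frac=0.6, high_frac=0.05)
```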
Bengali.AI Handwritten Grapheme Classification
Gold medal - 12 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I want to thank the organizers for an awesome competition. It was really challenging. My team got a good result, we have our first gold medal and, more important than that, there were a lot of useful things to learn.</p> <p>Here is a quick overview of our solution.</p> <p>We used 3 different types of architectures.</p> <p><strong>First Architecture Details:</strong></p> <ul> <li>5 folds created with MultilabelStratifiedKFold (mixed by mean for the final prediction)</li> <li>Image size: 128x128</li> <li>Augmentation: CutMix (0.4) / Mixup (0.4), choosing one of them at random</li> <li>Label smoothing</li> <li>Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5))</li> <li>Trained for 100 epochs</li> <li>One head with 186 outputs, then the results are split into 3 softmaxes (sketched at the end of this post)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2Fff5d567efb67719ae9c74fd1c7204fdc%2F1.png?generation=1584410546592180&amp;alt=media" alt=""></p> <p><strong>Second Architecture Details:</strong></p> <ul> <li>5 folds created with MultilabelStratifiedKFold (mixed by mean for the final prediction)</li> <li>Image size: original image</li> <li>Augmentation: CutMix (chosen at random from 0.3, 0.4, 0.5, 0.6, 0.7) + Cutout(max_holes=6, max_height=12, max_width=12, p=0.5)</li> <li>Label smoothing</li> <li>Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5))</li> <li>Trained for 150 epochs</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2F39752ea74f035ab413744a7dada0717f%2F2.png?generation=1584410030146832&amp;alt=media" alt=""></p> <p><strong>Third Architecture Details:</strong></p> <ul> <li>2 folds created with MultilabelStratifiedKFold (mixed by mean for the final prediction)</li> <li>Image size: 300x300</li> <li>Augmentation: CutMix (chosen at random from 0.3, 0.4, 0.5, 0.6, 0.7) + Cutout(max_holes=6, max_height=12, max_width=12, p=0.5)</li> <li>Label smoothing</li> <li>Adam (lr=0.001, ReduceLROnPlateau(factor=0.8, patience=5))</li> <li>Trained for 150 epochs</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4005865%2Fbc442d6921d47e44b5bd97fb2a7efc46%2F3.png?generation=1584410773226026&amp;alt=media" alt=""></p> <p>The final submission was the mean of all the folds from the 3 architectures (12 folds), each with the same weight. The difference from public to private was 0.9784 -&gt; 0.9507.</p> <p><strong>Conclusions</strong></p> <p>In the end, I can say that our solution did not use any unusual ace up the sleeve or any unusual trick, just respect for good practices in computer vision.</p>
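<p>A tiny sketch of the 186-way head split used in the first architecture (168/11/7 are the grapheme root / vowel / consonant class counts; <code>model</code> and <code>images</code> are placeholders):</p>

```python
import torch

logits = model(images)                           # shape: (batch, 186)
root, vowel, consonant = torch.split(logits, [168, 11, 7], dim=1)
root_probs = root.softmax(dim=1)                 # likewise for vowel and consonant
```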
Google QUEST Q&A Labeling
83rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Github repo: <a href="https://github.com/btrotta/kaggle-google-quest">https://github.com/btrotta/kaggle-google-quest</a></p> <p>My code relies heavily on the following public notebook: <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a> My BERT modelling code is almost identical to that kernel; the only change I made was to insert a special token between the question title and body. (I'm not sure whether this had any real effect.) As in the script, I did 5 folds and 3 epochs.</p> <p>To optimise the Spearman rho metric, I found it was helpful to limit the predicted scores to a smaller number of distinct values. The way I did this was by post-processing using a LightGBM model with small leaf size (3) and few iterations (20). The model just takes a single predictor (the output prediction of the BERT model) and optimises the cross-entropy loss. This gives a boost of around 0.02 on the private leaderboard, for a final score of 0.39507.</p>
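<p>A hedged sketch of the LightGBM post-processing described in this write-up, one target column at a time ("small leaf size (3)" is read here as <code>num_leaves=3</code>, which is an assumption; <code>oof_pred</code>, <code>y_true</code> and <code>test_pred</code> are hypothetical 1-D arrays):</p>

```python
import lightgbm as lgb

# A tiny model on a single predictor: with few leaves and few boosting rounds it
# effectively quantises the BERT predictions into a small number of distinct values.
post_model = lgb.LGBMRegressor(objective="cross_entropy", num_leaves=3, n_estimators=20)
post_model.fit(oof_pred.reshape(-1, 1), y_true)
submission_col = post_model.predict(test_pred.reshape(-1, 1))
```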
Bengali.AI Handwritten Grapheme Classification
22 place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I almost gave up hope, because no matter what I tried the accuracy was not improving on the public LB. Regardless, the lesson here is to deeply trust your local CV; and there is still so much more to learn, since the accuracy gap between the top places (1st, 2nd, 3rd, 4th) and the rest is significant.</p> <p>Solution Outline:</p> <ul> <li>Start with transfer learning, DenseNet201, one-cycle learning</li> <li>Unfreeze layers and re-train (included custom learning rates for individual layers)</li> <li>3 separate model approach</li> <li>Albumentations image augmentation</li> <li>Local CV</li> </ul> <p>Other details:</p> <ul> <li>Training epochs - 5 fold CV with a callback, each fold training for min 20, max 32 epochs</li> <li>Pre-processing - converted the image to size 256*256</li> <li>Image augmentation - applied basic augmentations like VerticalFlip=False, img_size of 256*256, BrightnessContrast, RandomGamma, fastai's jitter</li> <li>Error metric - default fastai error metric for each class</li> <li>No post-processing, except using the average of the 5 fold probabilities</li> </ul>
Bengali.AI Handwritten Grapheme Classification
13th place brief writeup (public 107th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>updated 2020/03/19</p> <hr> <p>Thanks to the hosts and the helpful discussions. Especially, I really appreciated <a href="/hengck23">@hengck23</a>'s kind sharing.</p> <p>I am so lucky to get gold. I expected a random-split public/private and did nothing special for unseen graphemes.</p> <p>It seems postprocessing for the macro recall hack is really important.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2796584%2Fd7d8dab30ab64980f8b92edbf1c29af6%2Fbengali2.png?generation=1584544815730850&amp;alt=media" alt=""></p> <h2>Model</h2> <ul> <li>modified SE-ResNeXt50: the 2-stride conv is replaced by a 1-stride conv + maxblur-pool, basically following <a href="/hengck23">@hengck23</a>'s <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/123757#757358">advice</a></li> <li>phase 1: trained with stem-conv (stride=2); phase 2: replace stem-conv with 3 convs (stride=1)</li> <li>Adam + ReduceLROnPlateau, about 200 epochs</li> <li>5 fold ensemble</li> </ul> <h2>Loss</h2> <ul> <li>CrossEntropy for root, vowel, consonant <ul><li>2 out of 5 folds are finetuned using OHEM loss</li></ul></li> <li>Multilabel BinaryCrossEntropy to classify whether each decomposed grapheme part exists or not <ul><li>a grapheme (or root, vowel or consonant) can be decomposed as shown in the figure. 61 possible decomposed parts exist (except the '0' part). Below is my exact code to make the multi-hot label.</li></ul></li> </ul>

```
# load data
train = pd.read_csv(C.datadir/'train.csv')
train_labels = train[['grapheme_root', 'vowel_diacritic', 'consonant_diacritic']].values.astype(np.int64)

# components label
parts_pre = pd.read_csv(C.datadir/'class_map.csv').component
parts = np.sort(np.unique(np.concatenate([list(e) for e in parts_pre])))
parts = parts[parts != '0']  # 0 has no meaning
print("parts:", parts)  # 61 parts shown

train_labels_comp = []
for grapheme in train['grapheme'].values:
    train_labels_comp.append([part in list(grapheme) for part in parts])
train_labels_comp = np.array(train_labels_comp).astype(np.int64)
print("train_labels_comp.shape", train_labels_comp.shape)
if True:  # debug
    print("train_labels_comp", train_labels_comp[0].tolist())
    print("train_labels_comp", train_labels_comp[1].tolist())
    print("train_labels_comp", train_labels_comp[2].tolist())
```

<h2>Preprocess and Augmentation</h2> <p>100% the same as <a href="/hengck23">@hengck23</a>'s <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/132898#763633">advice</a>. Preprocess: just resizing with cv2.INTER_AREA. Augmentation: OneOf(basic augmentations) + DropBlock.</p> <h2>Postprocess</h2> <ul> <li>the prediction for the decomposed grapheme parts is not used</li> <li><strong>maximize expected recall by argmax( softmax(logits) / np.power(class_count, 1) )</strong>. Note that the final sub uses a factor of 1.15 for grapheme root (no difference on private LB). <ul><li>there may be a better threshold optimization, since the confusion matrix still looks asymmetric after the above optimization, but I gave up further improvement due to the low sample size.</li> <li>below is a plot of "macro recall scores vs.
factor n" for my model (which is not the final sub). You can confirm that the peak is around factor=1. You can also see that the improvement is not huge, unlike on the private LB. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2796584%2F26fd5a4a563451455a39333fe670aab4%2Fmodel033_recall_hack.png?generation=1584545521502984&amp;alt=media" alt=""></li></ul></li> </ul> <hr> <p><strong>What I learned during the competition (note to self)</strong> <em>how to use pytorch lightning / how to freeze a specified layer / how to ensemble / the concept of snapshot ensembling / OHEM loss, class-balanced loss / hengck23's model surgery method / maxblur-pool and its implementation / keeping high resolution may be important for low-resolution images / the interpolation method for resizing is sometimes important / how to implement some augmentation methods (mixup, cutmix, gridmask) / public &amp; private is not always split randomly / vast.ai is low-priced compared to GCP</em></p>
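<p>A minimal numpy version of the recall-maximizing postprocess described above (<code>class_count</code> would be the per-class frequency counted from train; this is an illustrative sketch, not the author's exact code):</p>

```python
import numpy as np

def recall_hack_predict(logits, class_count, power=1.0):
    """Divide softmax probabilities by class_count**power before argmax;
    this up-weights rare classes and raises the expected macro recall."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return np.argmax(probs / np.power(class_count, power), axis=1)
```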
Google QUEST Q&A Labeling
Team Bronze Medal Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Google and Kaggle for the interesting competition, and to my teammates, <a href="/patilaum">@patilaum</a> and <a href="/amancframewala">@amancframewala</a>, for their generous contribution and patience. Our approach involved ensembling several BERT-family architectures like BERT, DistilBERT and XLNet, trained separately on questions and answers to better predict the multi-labels. We used 5-fold GroupKFold CV for validating our models. We were also inspired by some previous winning approaches of concatenating the hidden states of the models for prediction. The models were averaged using rank-averaging, which boosted the score by +0.002. Finally, we used some basic thresholding for an easy boost on highly imbalanced predictors.</p> <p>What didn't work for us (but some of it worked for others):</p> <ul> <li>BART / T5 encoder modelling</li> <li>SWA</li> <li>Training the head separately</li> <li>StackExchange statistical features</li> <li>BERT + USE</li> <li>GloVe LSTM</li> </ul> <p>We wish we had properly implemented things like BART, given that the top solutions got a great boost from them. We learned a lot from this competition and thank all the people who posted on the forums with tips and their results. Special thanks to our lab, CoE @ VJTI, for the hardware infra. Looking forward to more such NLP competitions in the year.</p>
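<p>A short sketch of the rank-averaging step mentioned above, applied to one target column at a time (the prediction arrays are hypothetical placeholders):</p>

```python
import numpy as np
from scipy.stats import rankdata

def rank_average(model_preds):
    """Average the models' predictions in rank space instead of raw probability space."""
    return np.mean([rankdata(p) / len(p) for p in model_preds], axis=0)

blended_col = rank_average([preds_bert, preds_distilbert, preds_xlnet])
```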
Google QUEST Q&A Labeling
A general walk through my solution with trick 55th
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners of this competition. Your hard work paid off!</p> <p>First, I have to say thanks to the authors of the following three published notebooks: <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer</a>, <a href="https://www.kaggle.com/abhishek/distilbert-use-features-oof">https://www.kaggle.com/abhishek/distilbert-use-features-oof</a>, <a href="https://www.kaggle.com/codename007/start-from-here-quest-complete-eda-fe">https://www.kaggle.com/codename007/start-from-here-quest-complete-eda-fe</a>.</p> <p>These notebooks showed awesome ways to build models, visualize the dataset and extract features from non-text data. </p> <p>Our initial plan was to feed <em>question title</em>, <em>question body</em> and <em>answer</em> all into a BERT-based model. But after we analyzed the distribution of the lengths of question bodies and answers, we found two major problems:</p> <ol> <li>If we fitted all three parts as input, we had to adjust the input space for both <em>question body</em> and <em>answer</em> due to the limitation of the input size of the BERT-based models. In order to do this, we had to trim a bunch of text, which was a waste of training data. Also, the question of how to trim a long text immediately presented itself to us.</li> <li>Roughly half of the question bodies and answers had code in them, which caused us some trouble when implementing tokenization. The tokenization of the code looked extremely weird, and we didn’t know whether this would have a large effect on the model predictions.</li> </ol> <p>We later did some experiments (I will list some at the end of the post; these turned out to not help a lot) and finally decided to train two separate models. The first one was to fit BERT-based models with only <em>question title</em> and <em>question body</em>. The second one was to fit with only <em>question title</em> and <em>answer</em>. In this way, almost all question bodies and answers could be fitted into the models. </p> <p>Like other authors in the discussion, we split the 30 output features into two parts. We used the <em>question title</em> and <em>question body</em> to predict features related to questions and used the <em>question title</em> and <em>answer</em> to predict features related to answers. We trained weights several times, and the final output is the average of several models with these weights. </p> <p>After doing all of this, we received a 0.415 on the public LB (ranked around 90th at that time). For the remaining days, we shifted our focus to the data post-processing part, because at that time the highest scores were over 0.480. But we didn’t believe the models we used could account for such a big difference. There must have been some tricks that we had not yet realized. </p> <p>So, I checked and compared the distribution of the model predictions with the distribution of the training set. We then found the “magic” part: the ratings were DISCRETE in the training set. It makes sense, right?
Probably raters could only choose certain scores that “make sense”. No one is going to give a score like 0.76354237 in real life :). So, I just did a simple adjustment: pick out every prediction output, find the closest score that “makes sense”, and then replace it (see the sketch at the end of this post). That simple change raised my scores and rank dramatically (LB over 0.450).</p> <p>I then plotted the distribution of predictions and the distribution of the training set again and fine-tuned (actually hard-coded) every column. This process took me about two days. I submitted many versions, but it did not turn out well, given the risk of overfitting. Finally, I selected one kernel with no fine-tuning and one with fine-tuning (hard coding) as my final submission, and the one with no fine-tuning won out.</p> <p>Things that didn’t work:</p> <ol> <li>Since humans can still understand a sentence even if it is missing some words, I tried to define a function that randomly drops some words from the text and then makes it compatible with the input. This did increase my public LB score by around 0.003-0.004, but after we decided to build a model with two separate parts, it had little effect.</li> <li>I roughly went through the text of the dataset and tried to find something in common among texts that contain code (it took me almost a whole day &gt;_&lt; ). I then used some regular expressions and tried to delete all code and links in all texts. However, I am not entirely sure if it was good to delete them. Creating some special tokens might be a better idea, but we really didn’t have enough time to do that. It turned out to not work that well and even made the score lower, so we decided not to use it.</li> </ol> <h2>UPDATE:</h2> <p>Here is an example from when I compared the distribution of the training dataset and the predictions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1801021%2F496a430d255033e9002f4a38e7b9b828%2Fexample%20of%20dsitribution.png?generation=1581441524382191&amp;alt=media" alt=""></p> <p>In the following image, I was trying to mark which columns I needed to fine-tune. The horizontal yellow lines indicated that I didn't need to do too much about those columns. The scores like 0.5 and 1.0 in the other graphs indicated that I needed to do some extra thresholding at those values. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1801021%2F13ff762ee5a47a3461971b82e3351f10%2Fstandard.png?generation=1581441885285144&amp;alt=media" alt=""></p> <h2>Update Technical Part:</h2> <h3><strong>Feature Engineering:</strong></h3> <p>Inspired by public kernels, we extracted URLs from the <em>url</em> column and applied one-hot encoding to both the URL and category columns. There are 64 new feature columns (51 for <em>url</em> and 13 for <em>category</em>) after one-hot encoding. According to the paper <em>Universal Sentence Encoder</em>, the Universal Sentence Encoder (USE) encodes text into high-dimensional vectors that can be used for diverse tasks. The input is variable-length English text, and the output is a 512-dimensional vector. So, we encoded the question title, question body, and answer into 512-dimensional vectors and then compared the L2 distances and cosine similarities between them. We then added all of these as new features. There are 512 + 512 + 64 + 2 = 1090 new features.
Under the concern of overfitting, I tried to decrease the number of generated features by briefly reading through a few examples of each category and then manually reclassifying 51 categories into four sub-categories: <em>have code, daily life, academic, literature readings</em>. However, this didn’t improve the prediction.</p> <h3><strong>Model Structure:</strong></h3> <p>We used a combo of two pretrained Roberta base models to train the dataset. The first Roberta handles the question title and question body pairs, while the second Roberta handles the question title and answer pairs. We only fitted texts into Roberta models and ignored 1090 new features for now. The Roberta base model has 12 hidden layers and there are 13 layers in total (which includes the final output layer). Every layer has the output dimension *batch_size* x 512 x 768 (*batch _size x max_seq_len x emb_size*). Initially, I tried out convolutions with different filter sizes such as 4 x 768, 6 x 768 or 8 x 768 to capture the meaning of the combination of words. However, it turned out to not work that well, and the average of the whole sequences outperformed them. We took the last three output layers out, concatenated them together and then applied an average-pooling on it. After this, we averaged over 512 tokens for each input in the batch to capture the meaning of the whole sequence and hoped the model would learn some lower level features and added them to the final representation. Finally, we concatenated 1090 features with the average embedding and then added a fully connected layer with 21 units for the title-body pair model and 9 units for the title-answer pair model. I didn’t use the first unit of the output layer of the Roberta model since Roberta removes the <em>Next Sentence Prediction (NSP)</em> task from BERT’s pre-training, and the first unit seems not as useful as in Bert. The final predictions are the average of 6 pairs of Roberta models (6- fold). We were supposed to freeze Roberta models and then fine-tune the added fully connected layer, but we didn’t due to the time limit.</p> <h3><strong>Customized Learning rate:</strong></h3> <p>We used a range test in the beginning, to find out the proper learning rate during training. We then wrote up a customized scheduler inherited from PolynomialDecay to change the learning rate dynamically. We set up warm-up steps proportional to the total decay-steps to avoid the primacy effect at the beginning of the training process. The learning rate increases linearly over the warm-up period and we didn’t set up the learning rate to have cyclic behavior.</p> <h3><strong>Training:</strong></h3> <p>Like most of the other groups, we use GroupKFold with n_splits=5, did 6-fold cv for 6 epochs and saved the training weights with the highest cv scores. We used Adam optimizer with mixed-precision data types, which dynamically and automatically adjusting the scaling to prevent Inf or NaN values and saved training time.</p> <h3><strong>Post-processing:</strong></h3> <p>We first considered using one-hot encoding for all output columns, given that every column has finite discrete values, but this shouldn’t work well since it didn’t take the order of the values into account. Because of the time limit, we just compared the distribution of predictions for each column with the distribution of actual values in the training set and then adjust the threshold based on the distribution. 
We compared the scores before and after post-processing to see whether it improved performance.</p>
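<p>To make the rounding trick above concrete, here is a minimal sketch (not the authors' exact code) of snapping continuous predictions to the closest score that actually occurs in the training labels; the column name in the usage comment is only an illustration.</p> <pre><code>import numpy as np

def snap_to_train_values(preds, train_values):
    """Replace each prediction with the closest value observed in the training labels."""
    allowed = np.array(sorted(set(train_values)))   # the discrete scores that "make sense"
    idx = np.abs(preds[:, None] - allowed[None, :]).argmin(axis=1)
    return allowed[idx]

# usage, one target column at a time (the column name is just an example):
# sub["question_asker_intent_understanding"] = snap_to_train_values(
#     sub["question_asker_intent_understanding"].values,
#     train["question_asker_intent_understanding"].values)
</code></pre>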
Google QUEST Q&A Labeling
79th place - Meta Feature Analysis
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google QUEST Q&A Labeling <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you everyone for sharing your great solutions. I'm learning so much!</p> <p>I had only a few days to work on this competition and I didn't know anything about NLP. So I spent a few days reading about BERT and a few days exploring meta features and analyzing the Spearman's rank correlation metric. </p> <p>I learned a lot from Akensert's great public kernel <a href="https://www.kaggle.com/akensert/bert-base-tf2-0-now-huggingface-transformer">here</a>. If you start with that and incorporate meta features and apply a Spearman rank transformation, you can achieve one position short of a Silver medal a.k.a The Best Bronze Solution :-)</p> <h1>Meta Features</h1> <p>Consider the following meta features * <code>question_user_name</code> * <code>question_title_word_count</code>, <code>question_title_first_word</code>, <code>question_title_last_word</code> * <code>question_body_word_count</code>, <code>question_body_first_word</code>, <code>question_body_last_word</code> * <code>answer_user_name</code> * <code>answer_word_count</code>, <code>answer_first_word</code>, <code>answer_last_word</code> * <code>category</code>, <code>host</code> * frequency encode all the above</p> <p>If we build LGBM models for each target using these 26 meta features and display the top 3 feature importances, we discover some interesting relationships. Above each image below is the target name and LGBM Spearman's rank correlation validation score:</p> <h2>Question Body Critical</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F273250fcf82c1639aa2504e0ab874d50%2Fbody_crit.png?generation=1581440405894463&amp;alt=media" alt=""></p> <p>Question body word count helps predict <code>question_body_critical</code>. That makes sense.</p> <h2>Question Interesting Others</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fa9828b713cfe7aaf7bd3c4e6649ed813%2Finteresting_others.png?generation=1581440530534709&amp;alt=media" alt=""></p> <p>Meta feature <code>host</code> helps predict whether a question is interesting to others. It's amazing that these meta features can achieve a score of 0.314 which is better than public BERT notebooks! Let's see how good <code>host</code> is by itself:</p> <pre><code>train, validate = train_test_split(train, test_size=0.2, random_state=42) mm = train.groupby('host')['question_interestingness_others'].mean().to_dict() validate['cat'] = validate.host.map(mm) spearmanr(validate.question_interestingness_others,validate.cat).correlation # This prints 0.311 </code></pre> <p>This code shows that we can build a model from just the feature host and achieve a validation score of 0.311 predicting <code>question_interestingness_others</code> ! Wow</p> <h2>Question Interesting Self</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fee6342f07a2ee54aa08cc170746098cc%2Finteresting_self.png?generation=1581441808653553&amp;alt=media" alt=""></p> <p>Once again, the meta feature <code>host</code> does a great job predicting <code>question_interesting_self</code>. 
By itself it scores 0.442 which does as good as public notebook BERTs!</p> <h2>Question Type Spelling</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F520ced1e879a9661265024c4b9dda477%2Fspelling.png?generation=1581441968092095&amp;alt=media" alt=""></p> <p>LGBM says that the most important feature to predict <code>question_type_spelling</code> is <code>answer_word_count</code>. From public kernels, we know that <code>host=='ell.stackexchange.com'</code> and <code>host=='english.stackexchange.com'</code> are very predictive too. (LGBM didn't find this which shows that LGBM is not optimizing Spearman's rank correlation). Using <code>host</code> and <code>answer_word_count</code>, we can predict <code>question_type_spelling</code> with score 0.300 ! Wow</p> <pre><code>validate['h_type'] = validate.host=='ell.stackexchange.com' validate['h_type'] |= validate.host=='english.stackexchange.com' validate['h_type'] &amp;amp;= validate.answer.map(lambda x: len(x.split()))&amp;gt;95 spearmanr(validate.question_type_spelling,validate.h_type).correlation # This prints 0.300 </code></pre> <h2>Question Type Definition</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F0dfbb4bf8645dd9f043f7a61f751fcfa%2Fdefinition.png?generation=1581443385397518&amp;alt=media" alt=""></p> <p>Question first word helps predict whether we have <code>question_type_definition</code>.</p> <h2>Question Well Written</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2F69285f93b55a135fd87ba01f1d264754%2Fwell_written.png?generation=1581442904056045&amp;alt=media" alt=""></p> <p>Question first word frequency helps predict whether a question is well written. That's interesting. If your writing is similar to how other people write, it is deemed well written.</p> <h2>Answer Level of Information</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fd2b111e2e208ff739dbce612b5444f5d%2Finfo.png?generation=1581443008426365&amp;alt=media" alt=""></p> <p>Answer word count helps predict <code>answer_level_of_information</code>. That makes sense. The code below shows that this single feature alone can predict better than public BERT notebooks!</p> <pre><code>validate['a_ct'] = validate.answer.map(lambda x: len(x.split())) spearmanr(validate.answer_level_of_information,validate.a_ct).correlation # This prints 0.392 </code></pre> <h2>Answer Plausible</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1723677%2Fbad5aaee8d43f49f3a9282c3381ae25f%2Fplausible.png?generation=1581443262147882&amp;alt=media" alt=""></p> <p>Answer user name frequency helps predict whether an answer is plausible. This makes sense. Active users are probably more plausible.</p> <h1>Meta Features Applied</h1> <p>Now that we know important meta features, we can either incorporate them into our BERT model inputs, or we can ensemble them with BERT's outputs. Using my OOF, I ensembled meta features to my BERT outputs. 
For example, the two best CV/LB increases came from replacing BERT's outputs for <code>question_type_spelling</code> with </p> <pre><code>test['a_ct'] = test.answer.map(lambda x: len(x.split())) test['h_type'] = test.host=='ell.stackexchange.com' test['h_type'] |= test.host=='english.stackexchange.com' test['h_type'] &amp;amp;= test.a_ct&amp;gt;95 sub.question_type_spelling = test.h_type.values </code></pre> <p>And updating <code>answer_level_of_information</code> with:</p> <pre><code>sub.answer_level_of_information += 7 * test.a_ct.values/8158 sub.answer_level_of_information /= sub.answer_level_of_information.max() </code></pre> <h1>Spearman's Rank Correlation</h1> <p>As pointed out in this discussion post <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/118724">here</a>, the Spearman's rank correlation metric can be improved if you convert continuous predictions into more discrete predictions if the true targets are discrete. By applying a grid search on my OOF, I found that applying <code>numpy.clip( pred, clippings[col][0], clippings[col][1] )</code> can maximize this metric using</p> <pre><code>clippings = { 'question_has_commonly_accepted_answer':[0,0.6], 'question_conversational':[0.15,1], 'question_multi_intent':[0.1,1], 'question_type_choice':[0.1,1], 'question_type_compare':[0.1,1], 'question_type_consequence':[0.08,1], 'question_type_definition':[0.1,1], 'question_type_entity':[0.13,1] } </code></pre> <h1>UPDATE</h1> <p>During the competition, I only built and trained one <code>bert-base-uncased</code> model. Other models didn't have good CVs. But after reading all the great solutions, I just built a <code>roberta-base</code> model by first applying <code>def cln(x): return " ".join(x.split())</code> which removes white space (as described in 8th place solution <a href="https://www.kaggle.com/c/google-quest-challenge/discussion/129857">here</a>). </p> <p>It turns out that <code>RobertaTokenizer.from_pretrained('roberta-base')</code> does not remove white space by itself. (whereas the bert-base-uncased tokenizer does). Using the same architecture as my bert-base, my Roberta has CV 0.395. When I ensemble it with my bert-base CV 0.392, the new CV is 0.412. That's an increase of 0.020 over my bert-base. </p> <p>I just submitted to the leaderboard and it increased my LB by 0.015 and my rank by 40 spots! Wow! I just learned how to make better NLP models. I can't wait for the next competition ! </p>
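<p>As a follow-up to the clipping idea above, here is a minimal sketch (my own assumption about how such a grid search could look, not the author's exact code) of searching a lower clipping bound per column on OOF predictions and then applying it to the submission; the column name is illustrative, and an upper bound can be searched the same way.</p> <pre><code>import numpy as np
from scipy.stats import spearmanr

def best_lower_clip(oof_pred, y_true, grid=np.linspace(0.0, 0.5, 26)):
    """Grid-search a lower clipping bound that maximizes Spearman's rank correlation on OOF."""
    best_lo, best_score = 0.0, spearmanr(y_true, oof_pred).correlation
    for lo in grid:
        score = spearmanr(y_true, np.clip(oof_pred, lo, 1.0)).correlation
        if score &gt; best_score:
            best_lo, best_score = lo, score
    return best_lo

# lo = best_lower_clip(oof["question_multi_intent"].values, train["question_multi_intent"].values)
# sub["question_multi_intent"] = np.clip(sub["question_multi_intent"].values, lo, 1.0)
</code></pre>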
Deepfake Detection Challenge
39th Private LB: small and simple
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone, and I hope that those who had scores that errored out get that sorted… Very frustrating.</p> <p><strong>My approach in a nutshell - keep it simple and write off the hard deep fakes:</strong> </p> <p>I figured that a complex model would not get these right anyway, and would end up messing up the easy ones too because it would be prone to overfitting. I ended up with 39th on the private LB (0.477); around 100th public LB (0.328).</p> <p><strong>Details:</strong></p> <p>My main worry was that models were not going to generalise well, primarily because of the training data we had available to us. My reading of the rules was that any external data was at best in a grey area, and more probably out of bounds, so I only used the data given to us.</p> <p>I built as simple and as small a model as possible, knowing that this wouldn’t be able to detect hard deep fakes, but figuring that it might detect the easier ones more robustly.</p> <p><strong>Dataset:</strong></p> <p>I used @humananalog's Blazeface solution to save off faces in 32 frames from each video to the hard drive (evenly spaced throughout the video). I screened the videos, only allowing training cases where all of the face parts (eyes, nose, mouth, etc) were visible in all frames. This reduced the training set by c.25% and I think this was important in getting the models to converge properly. </p> <p><strong>Sampling:</strong></p> <p>I sampled 50-50 real/fake videos in each epoch (all real videos + 1 sampled corresponding fake, each epoch choosing a random fake). Once a video was sampled, I randomly sampled 1 frame from the 10 frames with the highest confidence in that video. Overall, I’m not sure how much the sampling scheme affected the score. It didn’t have a huge impact on my validation set or the public leaderboard, but maybe it counted for something in the private leaderboard.</p> <p><strong>Augmentation:</strong></p> <p>Random horizontal flips, JPEG compression, and brightness/contrast. Randomly selecting frames as outlined above also did some of this job.</p> <p><strong>Models:</strong></p> <p>I used EfficientNet-b0 as it was the smallest model that gave reasonable results.</p> <p><strong>Resolution:</strong></p> <p>I used three different input sizes (separate models for each): 180, 196, 224. I used smaller rather than larger resolutions in a bid to make things more robust (reasons outlined above).</p> <p><strong>Ensemble:</strong></p> <p>Each model was fit over 5-fold samples (folder split). The end result averaged these.</p>
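<p>A minimal sketch of the 50-50 sampling scheme described above (an illustration, not the author's code): <code>real_to_fakes</code> is assumed to be a dict mapping each real video id to the list of fakes derived from it, which can be built from the competition metadata.json files.</p> <pre><code>import random

def build_epoch_samples(real_to_fakes, top_frames=10):
    """One epoch: every real video plus one randomly chosen corresponding fake,
    then one random frame out of the top-confidence frames of that video."""
    samples = []
    for real, fakes in real_to_fakes.items():
        fake = random.choice(fakes)                  # a different random fake can be drawn each epoch
        for video, label in ((real, 0), (fake, 1)):
            frame_idx = random.randrange(top_frames) # pick 1 of the 10 highest-confidence frames
            samples.append((video, frame_idx, label))
    random.shuffle(samples)
    return samples
</code></pre>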
Bengali.AI Handwritten Grapheme Classification
5th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks at Bengali.ai and kaggle for hosting this interesting competition as it was more to it than simple ensembling of computer vision models. Thanks to the authors and contributors of pytorch, pytorch-lightning, apex and pytorchcv for making my life easier.</p> <p><strong>Acknowledgements &amp; notes on GM</strong></p> <p>I am humbled to finally become competition grandmaster. I learned a tremendous amount of tricks in the recent two years and it would not have been able if it wasn’t for all this generous sharing of top solutions and extremely talented teammates I was lucky to have along the way. </p> <h3>Short Summary</h3> <p>My solution is a simple bag (different seeds) of the same model seresnext50 with custom head. Major boost in LB score came from redesign of consonant diacritic target. Minor LB improvement from postprocessing tweaks such as thresholding and finding closest train examples. </p> <h3>Preprocessing</h3> <p>I did not resize the image, the only thing I did was normalize each image by its mean and std, since I experienced a good regularization from that in previous competitions.</p> <h3>Architecture &amp; Training</h3> <p><strong>Backbone</strong> I used a plain seresnext50 but adjusted the very first layer to replace resizing the image and account for single channel. I did that by changing input units and reducing stride from (2,2) to (1,2).</p> <p>``` from pytorchcv.model_provider import get_model as ptcv_get_model</p> <p>backbone = ptcv_get_model('seresnext50_32x4d', pretrained=True) backbone = backbone.features backbone.init_block.conv = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(1, 2)) ```</p> <p><strong>neck</strong></p> <p>same as in maciejsypetkowski solution <a href="https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543">https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543</a> </p> <p><strong>heads</strong></p> <ul> <li>3 heads, each for consonant, vowel and root and </li> <li>auxiliary head for grapheme with arccos loss</li> </ul> <p>So in total my used architecture looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1424766%2F3e968cbc82806b9211bf7609b662a18a%2Fbengali%20arch.001.jpeg?generation=1584456769762003&amp;alt=media" alt=""></p> <p><strong>Training</strong></p> <ul> <li>train on all data</li> <li>4 cycle cosine annealing with augmentation increase <ul><li>cycle 1 &amp; 2 cutmix</li> <li>cycle 3 &amp; 4 cutmix, cutout, scale, translate, rotate</li></ul></li> <li>Adam </li> </ul> <p><strong>Losses</strong></p> <ul> <li>root loss: CrossEntropy</li> <li>consonant loss: Multi label Binary Crossentropy</li> <li>vowel loss : CrossEntropy</li> <li><p>grapheme loss: ArcCos + CrossEntropy</p></li> <li><p>total loss = root loss + consonant loss + vowel loss + grapheme loss</p></li> </ul> <h3>The Magic (jôfôla)</h3> <p>Now to the key aspect of my solution. I realised all models perform poorly on consonant diacritic 3 (I abbreviate with cons3 in the following) and especially confuse it with cons4. 
And with poorly I mean game changing poorly: 89% recall compared to 99% for the other consonant classes. At first I just fought the symptoms (instead of the root cause) as several other teams did, e.g. by transferring low confident predicts from high frequent class cons4 to low frequent class cons3 and hence leveraging the definition of macro recall metric and that gave a good LB boost (0.9900 -&gt; 0.9915). That might have been enough for gold but I wanted to improve my chances. So I wanted to understand why the model did so poor on this one and read through discussions, wikipedia and other domain knowledge related things. Luckily, I found 2 things: - Wikipedia: As the last member of a conjunct, য jô appears as a wavy vertical line (called যফলা jôfôla) to the right of the previous member: ক্য "kyô" খ্য "khyô" গ্য "gyô" ঘ্য "ghyô" etc. In some fonts, certain conjuncts with যফলা jôfôla appear using special fused forms: দ্য "dyô" ন্য "nyô" শ্য "shyô" ষ্য "ṣyô" স্য "syô" হ্য "hyô".</p> <ul> <li>The consonant diacritics are itself composed from lower level components <ul><li>0: []</li> <li>1: ['ঁ']</li> <li>2: ['র', '্']</li> <li>3: ['র', '্', 'য']</li> <li>4: ['্', 'য']</li> <li>5:['্', 'র']</li> <li>6: ['্', 'র', '্', 'য']</li></ul></li> </ul> <p>So I realized that not only 3,4 and 6 have a jô component which might have the special jôfôla form but also that the special forms are in train for <strong>only</strong> cons4 but <strong>not</strong> for cons3, and thats why the models are so bad on cons3. My solution was the following. I recoded the cons classes into a multilabel classification problem using the following function:</p> <p>``` cons_components = ['ঁ', 'য', 'র','্']</p> <p>def is_sub(sub, lst): ln = len(sub) return any(lst[i: i + ln] == sub for i in range(len(lst) - ln + 1))</p> <p>def label2label_v2(label): res = np.zeros((6,),dtype=int) components = cons2components[label] for i,item in enumerate(cons_components): if item in components: res[i] = 1 if is_sub(['্', 'য'], components): res[4] = 1 if is_sub(['্', 'র'], components): res[5] = 1 return res</p> <p>```</p> <p>That function re-codes the 7 consonant class labels into multilabel 6 dimensional vectors</p> <p><code> 0 -&gt; [0, 0, 0, 0, 0, 0] 1 -&gt; [1, 0, 0, 0, 0, 0] 2 -&gt; [0, 0, 1, 1, 0, 0] 3 -&gt; [0, 1, 1, 1, 1, 0] 4 -&gt; [0, 1, 0, 1, 1, 0] 5 -&gt; [0, 0, 1, 1, 0, 1] 6 -&gt; [0, 1, 1, 1, 1, 1] </code></p> <p>That schema not only enabled to learn special forms of cons3 from cons4 but also enables to tune via thresholding (i.e. when to set a continuous prediction to 0 or 1) how the model separates the 6 classes. E.g. by setting a very low threshold on the 2nd logit more predictions would be set to 1 and hence more predictions would move from cons4 to cons3 as these two classes only differ in logit2. Purely changing the target for consonant diacritic in this way improved Public LB from 0.990 to 0.993 !</p> <h3>Postprocessing</h3> <ul> <li>I adjusted threshold for binarizing consonant diacritic. Best was a threshold of 0.1 for all logits classes. I tried not dare to temper which each logit threshold individually as I was afraid of overfitting. Result on Public LB: 9930 -&gt; 9937 </li> <li>I used cosine similarity to find closest train sample for root and vowel when top1 probability &lt; 90% and use its label. 
bestfitting did the same in the 1st place solution of the Human Protein competition, as described at <a href="https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109">https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/78109</a>. Result on Public LB: 9937 -&gt; 9942 </li> </ul> <h3>Choosing subs</h3> <p>I tinkered with the binarization and metric-learning thresholds and validated purely on the LB, so I was aware of a high risk of overfitting to the LB. I therefore chose my best sub plus a failsafe sub that I felt was the least tampered with but still doing ok. (Both were gold on private in the end.)</p> <p>Cheers. </p>
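<p>A minimal sketch of the cosine-similarity post-processing for root and vowel (an illustration, not the author's exact code): <code>test_emb</code> and <code>train_emb</code> are assumed to be pooled backbone features, and 0.9 is the confidence threshold mentioned above.</p> <pre><code>import numpy as np

def refine_with_nearest_train(test_emb, test_probs, train_emb, train_labels, conf=0.9):
    """Where the top-1 probability is below `conf`, replace the prediction with the label
    of the closest training sample by cosine similarity."""
    test_emb = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    train_emb = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    preds = test_probs.argmax(axis=1)
    low_conf = test_probs.max(axis=1) &lt; conf
    sims = test_emb[low_conf] @ train_emb.T      # cosine similarity = dot product of unit vectors
    preds[low_conf] = train_labels[sims.argmax(axis=1)]
    return preds
</code></pre>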
Bengali.AI Handwritten Grapheme Classification
Solution 38, some lessons learned
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, let me thank my team mate Marios (Kazanova). Without him I'm not sure I would have a medal at all here. Second, I want to thank all those who shared so much. This was a goldmine for people with little experience in computer vision like me. I wish I had paid more attention on some of it, for instance discussion about unseen graphemes in test, or predicting graphemes rather than components as a secondary target. Last, but not least, many thanks to Bangali AI sponsor and Kaggle for providing a challenging image classification problems with rather small size dataset. Hardware (access to a number of recent GPU) wasn't the main deciding factor here. </p> <p>I am a bit disappointed to have dropped out of gold after being up to top 7th on public LB of course. At the same time, my first sub is only from 13 days before competition end. And I could team with Marios who had started earlier. Still, issue with little time is that you have to be right in selecting what to try and what you decide to no try. And given training computer vision models takes time, you easily lose a day or two on ideas that don't work. I won't enter computer visions less than 2 weeks before end again. First lesson learned.</p> <p>We didn't looked at which targets where hard to predict. In general, I didn't look at data enough. Second lesson.</p> <p>I mentioned some of what we didn't try unfortunately, here is what we tried, with more or less success.</p> <p>Before teaming I worked on 64x64 images, and a bit on 128x128 images with seresnext50. Small image sizes leads to high epoch throughput and allow for lots of experiments. I didn't work on image augmentation and focused on learning pytorch, understanding losses and image preprocessing. When I teamed, Marios had a efficientnet-b4 model with a good augmentation pipeline. He used cutout, grid mask, mixup, albumentation, and found that cutmix wasn't effective. He also used ohem loss, and cosine lr scheduler. We didn't revisit this except for ohem loss after we teamed.</p> <p>After teaming we explored ways to add diversity, and ways to combine as many models as possible in the scoring kernel. In hindsight it also helped overfit a bit more to the public test data. Our CV LB score gap was rather stable, which also explains why we did't explore more the possible public/private distribution discrepancy. Third lesson learned, pay more attention to this.</p> <p>We used an average of 14 efficientnet-b4 models predictions in the end. Our scoring kernel could handle up to 18 models within 2 hours. For each model we averaged weights of several checkpoints. Models differ by:</p> <ul> <li>input data, original size, or crop resize with same image ratio</li> <li>fold or full train data</li> <li>loss, weighted loss for R, V, C, or weighted loss per class within each of R, V, C.</li> <li>ohem or unmodified cross entropy loss</li> <li>use of external data or not</li> <li>postprocessing </li> </ul> <p>We looked at postprocessing predictions the last few hours before end. 
It was clearly promising, and one sub did improve a bit over our best blend.</p> <p>We only looked at external data (BanglaLekha-Isolated) two days before the end of the competition. We predicted targets using our best blend, then took the majority class and checked the mapping by visual inspection between the BanglaLekha paper and the class_map csv. Our mapping was wrong in a few cases. We then assigned the fixed label for the R target and the 0 label for V and C. CV and LB scores were not really better, but it added diversity to the blend. Here again, we had to crop-resize with the same image ratio.</p> <p>All in all I learned an amazing lot in a short time. This is where Kaggle is great: you can get a glimpse of state-of-the-art practice in a very short time. Again, this is only because of sharing from the community members. I'm now looking forward to the next computer vision competition, to be honest!</p> <p>I hope I haven't missed too much, especially on what Marios did before we teamed. I'll let him expand/correct me if need be.</p>
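<p>Since checkpoint weight averaging is mentioned above without code, here is a minimal sketch of how it can be done in PyTorch (an illustration, not the team's exact code; the file names are placeholders and plain state_dicts are assumed).</p> <pre><code>import torch

def average_checkpoints(paths):
    """Average the parameters of several saved checkpoints of the same model."""
    states = [torch.load(p, map_location="cpu") for p in paths]
    avg = {}
    for key, ref in states[0].items():
        stacked = torch.stack([s[key].float() for s in states], dim=0)
        avg[key] = stacked.mean(dim=0).to(ref.dtype)   # cast back, e.g. BatchNorm counters are ints
    return avg

# model.load_state_dict(average_checkpoints(["ckpt_e36.pth", "ckpt_e38.pth", "ckpt_e40.pth"]))
</code></pre>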
Bengali.AI Handwritten Grapheme Classification
7th Place Data-based Generalization Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and Bengali.AI for hosting such an interesting competition. It is my first all-in kaggle competition, and needless to say none of this would have been possible without our awesome team Igor, Habib, Rinat, and Youhan. <a href="/cateek">@cateek</a> <a href="/drhabib">@drhabib</a> <a href="/trytolose">@trytolose</a> <a href="/youhanlee">@youhanlee</a> </p> <h3>Analysis: This competition has a distinctly stratified LB:</h3> <ul> <li>With a good pipeline, single model with baseline augmentations LB ~.97. </li> <li><strong>Removing crops, training on full-resolution, adding cutmix / cutout, and training enough</strong> should get model up to LB .985+, and with some tweaking ~.989+. These are well-covered in the discussions posts.</li> <li>Our best single model is from Rinat, PNASNet-5-Large from <a href="https://github.com/Cadene/pretrained-models.pytorch">Cadene's repository</a> which scores LB 0.9900, CV .9985. I believe most of the participants LB .9850-.9910 are using ensemble of models scoring around this range.</li> </ul> <p>At this stage (.9900-.9905) we found it very hard to further improve our LB. As <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134601">this post</a> points out, higher CV does not necessary mean better LB anymore, so we were stuck for a while (a long while...).</p> <h3>Improving Generalization</h3> <p>With the CV &amp; LB relationship it is easy to see that the problem is that our model <strong>could not generalize to unseen graphemes</strong> (by grapheme I mean triplet combination of grapheme root, vowel and consonant diacritics). There are only ~1290 graphemes in training set, while there are 12936 possible combinations. Our models might be biased to output seen triplets; for whatever reason, as Qishen Ha's wonderful <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134434">post</a> pointed out, our models perform badly on these graphemes.</p> <p>In hindsight, it's no magic that the most convenient way to improve generalization is to <strong>add more data</strong>. When investigating the grapheme representations, I found that <strong>graphemes are actually encoded as sequence of unicode characters</strong>. The roots and diacritics also have their corresponding sequences. The most magical part is that <strong>the unicode sequence for the grapheme is a combination of the sequences of its roots</strong>. Best exemplified using this image, where the four rows are <em>grapheme, grapheme root, vowel, and conso</em>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1943421%2F82988244ae7fb6d4ca47f6c6094a3116%2Fimage_480.png?generation=1584399876994128&amp;alt=media" alt=""></p> <p>This observation works (with some minor exceptions) on all graphemes in the training set, and using a very basic algorithm I was able to generate correct labels for all 1295 graphemes in training set except for 2. Following Guanshuo Xu's post pointing to <a href="https://github.com/MinhasKamal/BengaliDictionary">this repository</a>, we are able to access a comprehensive list of grapheme combinations. 
Using the decomposition algorithm we selected ~3000 graphemes that could be broken down into the given roots and diacritics. Our LB boost on the final day is from better generalization on these graphemes, which partially overlaps with train &amp; test set.</p> <p>With these graphemes, we rendered them using various fonts and obtained a cleanly-labeled synthetic dataset with ~47K images spanning these characters which can be found <a href="https://www.kaggle.com/roguekk007/bengaliai-synthetic-magic">here</a>, all the ingredients for creating this synthetic dataset has been publicly available. Our hope was that by seeing synthetic images, our models should be able to at least learn the topological (if not stylistic) features of the grapheme. And it turned out they do. Here is a sample of synthetic image in our dataset.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1943421%2Ff582156ca8f174802dbbc28a18fa1562%2F1_EkSharifa.png?generation=1584402247696903&amp;alt=media" alt=""></p> <p>At this point we have only 24 hours until the end of the competition so just finetuned our previous models with the synthetic data added to both original training &amp; validation, thus resulting in the exciting quantum jump on the last day😊 </p> <h3>Some Details</h3> <p>My teammates have wonderful pipelines, here are some approaches which have turned out useful: - Train on 128x128 data for ~100 epochs, then finetune on 224x224 data. It definitely yields faster training and maybe better generalization (used in Rinat's pipeline) - Anneal augmentations except for cutmix gradually to 0 at late stages of training (used in Habib's and Igor's pipeline) - A wonderful new architecture called MixNet, we found that it nicely complements PNAS5 for ensembling and performs pretty well. - <strong>Loss:</strong> Plain cross entropy. Tried arface (did not work), focal (did not work), normalized softmax+label smoothing (looked promising but ditched along with my pipeline) - <strong>Scheduler:</strong> ReduceLRonPleateau for Rinat's pipeline, and hand-monitored dropLR for Habib's and Igor's pipelines.</p> <h3>Finally</h3> <p>It has always been my dream to do a gold-medal solution write-up before 17th birthday😃 . In the end, more than happy to see hundreds of experiments and weeks of stress translate successfully into solid rank. With more time than 24 hours, we would have: 1. Searched for a more comprehensive list of graphemes 2. Created more synthetic data with more fonts 3. Somehow make synthetic data more similar to handwritten data; we are using dimming max_val to 230 and gaussian-blur, but can definitely be improved by maybe CycleGAN 4. Make flipping work as pointed out <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/126761">here</a>. It's really ironic I did not make my own idea work when Qishen Ha did; guess this is the difference🤓 5. Try more model-regularization methods, ShakeDrop was on the list and looked promising but was preceded by other priorities.</p> <p>Overall, this has a very exciting and memorable puzzle-solving competition. Again kudos to the team and: Love Kaggling</p>
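<p>For readers who want to reproduce the synthetic-data idea, here is a minimal sketch of rendering a grapheme string with a TTF font into a grayscale array roughly matching the competition images (an illustration only; the font file name is an assumption, and the real pipeline additionally dimmed, blurred and augmented the renders).</p> <pre><code>from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_grapheme(grapheme, font_path, width=236, height=137, font_size=96, bg_val=230):
    """Render a Bengali grapheme string with a TTF font into a grayscale array."""
    img = Image.new("L", (width, height), color=bg_val)   # dimmed background instead of pure white
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, font_size)
    draw.text((20, 10), grapheme, fill=0, font=font)      # dark ink; centering and blur left out here
    return np.array(img)

# arr = render_grapheme("ক্য", "kalpurush.ttf")   # the font file is just an example
</code></pre>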
Bengali.AI Handwritten Grapheme Classification
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to the host and Kaggle for this very interesting competition. We learned a lot and specifically the issue of predicting unseen graphemes in test data was very intriguing and has led to some very elegant and nice solutions as imminent from all the other solution posts. As always, this has been an incredible Zoo team effort and always amazing to collaborate with <a href="/dott1718">@dott1718</a>.</p> <p>We will try to tell the story of progress a bit, without only explaining the end solution as we believe that the whole process of a competition is important to understand how decisions have been made.</p> <h3>Early efforts</h3> <p>We started the competition by fitting 3-head models for R,C,V and quite quickly got reasonable results landing us somewhere in range of top 40-50 on LB. As always, we tried to understand how test data can be different, and why there is a gap to local CV. It became quite quickly clear to us that it is mostly due to unseen graphemes in test. We tried to assess how such a gap can exist, and checked how R, C and V scored individually on LB (you can figure this out quite easily by predicting all 0s for the rest, which is baseline sub). We saw that most drop comes from R and C, which is impacted by the huge effect of rare case misclassifications. So we tried something funny, which is adding additional C=3 and C=6 predictions based on the next highest indices. Just adding 310 C=3 and 310 C=6 (which reflects one extra grapheme for each) boosted us 30-40 points on LB. So we figured that this will be key in the end, and we need to generate a solution that generalizes well to unseen ones. We did also not want to rely on hardcoding too much, so tried to find a better way. These early subs with no blending at all, but this simple hardcoding, would btw still be rank 10-20 now.</p> <h3>Grapheme models and fitting</h3> <p>Seeing reported CV scores on forums, which were all quite higher than ours, made us think that top teams are doing something different than us, which apparently was different targets. So our, first trick was to switch from predicting R,C,V to predicting individual graphemes. This also had benefits to us for not needing to care about proper loss weights of the 3 heads, different learning weights for the heads, how to deal with soft labels properly, and we could focus fully on NN fitting and augmentations. After we switched, mixing augmentations shined immediately - we replaced cutout with cutmix and then with fmix, which made it to the final models we had. Fmix (<a href="https://arxiv.org/abs/2002.12047">https://arxiv.org/abs/2002.12047</a>) worked clearly better for us than cutmix, and also the resulting images looked way more natural to us due to the way the cut areas are picked. This is an example of a mixed image:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F37166%2F18ef2d72114caf6d37e0fb7c90053969%2FSelection_007.png?generation=1584456239523632&amp;alt=media" alt=""></p> <p>In the end, we mixed either 2 images or 3 images with 50% probability and used beta=4 to mostly do equal mixing. 
</p> <p>Having this grapheme model, the tricky part was to apply the model to test. At first, we tried applying the model as it is, so predicting only the known graphemes by just converting the predicted grapheme to its R,C,V components. LB score was better than for the model predicting R,C,V separately, which tells us there are not that many new unseen graphemes in public LB. To make the model work better on unseen graphemes, we applied the second trick - a post-processing routine. Surprisingly, it also improved the metric on the known graphemes as well.</p> <p>The model outputs 1295 probabilities of each distinct train grapheme. For each component, e.g. C, we calculate the scores of each C=0,..,6 by averaging probabilities of the graphemes having this component. So, for C=3 it is an average of only 4 probabilities as there are only 4 graphemes with consonant diacritic of 3 in train data. For C=0 it is an average of hundreds of probabilities as it is the most common C value. The post-processing ends with picking C value with the highest “score”. The logic behind it was to treat each C value equally regardless of its frequency similar to the target metric, not to limit the model to the set of train graphemes, and pick more likely component values for the graphemes with non-confident predictions. This routine immediately gave us another 50 extra points on LB.</p> <h3>Improving unseen graphemes</h3> <p>The third trick was about improving the predictions of unseen graphemes without hurting predictions of known graphemes too much. As most of you probably know and as elaborated earlier, the gap between cross-validation and public LB was coming from the drop of accuracy in the C component, caused mainly by C=3 and to lesser extent by C=6. It is easy to explain, as these are the rarest classes in train, and most probably public LB has at least one new grapheme with C=3 and C=6. The second largest drop was coming from the R component, while V recall was quite close between cross-validation and public LB.</p> <p>The main problem with both 3-head models and specifically grapheme models is that they overfit to the seen graphemes as they are heavily memorizing them. So to close the gap a bit, we needed models which predict R and C better on unseen graphemes, which led us to fitting individual models for these 2 components. Specifically for C it was very hard to generalize and what helped a bit was to randomly add generated graphemes based on the code we found in this kernel (kudos to the author!) <a href="https://www.kaggle.com/kaushal2896/bengali-graphemes-starter-eda-multi-output-cnn">https://www.kaggle.com/kaushal2896/bengali-graphemes-starter-eda-multi-output-cnn</a>. But, as we learnt after the end of the competition, there was way more room to improve the models this way than we actually did.</p> <h3>Blending</h3> <p>Trick number four was about how to blend the models together. To test the blending approaches, we recreated the hold-out sample by removing a few graphemes completely from the training part, also a couple of graphemes with C=3 and 6. We ended up doing different blending per component. But first, the models we had by the end of the competition are: - 3 grapheme models, fitted on the whole train. The modes use adam and sgd scheduling, fitted for 80 - 130 epochs and all use fmix mixing 2-3 images. - 1 model with 3-heads for all components. 
Cutout instead of fmix, sgd and 40 epochs, similar to all the following models - 1 model for R - 2 models for C</p> <p>For R the blend is: post-processed average of grapheme models + 0.4 * average of 3-heads model and R model</p> <p>For V the blend is: post-processed average of grapheme models + 0.2 * 3-heads model scaled to have equal means</p> <p>For C the blend is: post-processed average of grapheme models + 15 * average of C models * C class weights</p> <p>C class weights were introduced to fix the imbalanced frequencies of C, especially C=3 and 6. They were set to inverse frequencies of the class in train and normalized.</p> <p>6 out of 7 models are SE Resnext50 and one is SE Resnext101. Image size was either original or 224x224. Adam scheduling with decay worked well, but SGD with scaled down every X epochs was even better. Fmix was done on the entire sample, meaning that for each image there were 2 or 3 (random with prob=0.5) random images picked from the whole training sample and mixed. </p> <p>Happy to try to answer any questions!</p>
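<p>A minimal sketch of the post-processing routine described above (an illustration, not the team's exact code): for one component, average the grapheme probabilities over all train graphemes sharing each component value and take the argmax, so every value is treated equally regardless of how frequent it is in train.</p> <pre><code>import numpy as np

def component_prediction(grapheme_probs, grapheme_to_value, n_values):
    """grapheme_probs: (n_samples, n_train_graphemes) softmax outputs.
    grapheme_to_value: component value (e.g. consonant diacritic) of each train grapheme."""
    scores = np.zeros((grapheme_probs.shape[0], n_values))
    for v in range(n_values):
        cols = np.where(np.asarray(grapheme_to_value) == v)[0]
        scores[:, v] = grapheme_probs[:, cols].mean(axis=1)   # e.g. only 4 graphemes for C=3
    return scores.argmax(axis=1)

# c_pred = component_prediction(probs, consonant_of_grapheme, n_values=7)
</code></pre>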
Abstraction and Reasoning Challenge
5th place short notes
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! And thanks to the organizers - this was a very interesting and unique competition.</p> <p>I'll leave longer descriptions to the Top 3 teams and will just write a very brief summary of my approach ;)</p> <p>In general my idea is to use 3 connected functions:</p> <ul> <li>A: cut parts from the input by color/shape/location/etc;</li> <li>B: re-color/modify/rotate/etc the parts from step A's output;</li> <li>C: combine the parts from step B's output in different ways, like AND, OR, XOR, hstack, vstack and many others, and check if this gives a valid solution.</li> </ul> <p>In each step "no changes" is also a valid result; that way the pipeline could also solve tasks where e.g. only cropping is needed, without further changes or assembling parts together, or where only re-coloring is needed.</p> <p>This is pretty much brute force, and as there are a lot of possible outcomes in each step, the number of combinations across all 3 steps was huge - but it helped that public=private, so I used a lot of "feedback" from submissions to tune my functions and delete what wasn't helping.</p> <p>See you in future competitions!</p>
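<p>A minimal sketch of how such an A/B/C brute-force search can be wired up (my own simplification under stated assumptions, not the author's code; the helper names in the comments are hypothetical):</p> <pre><code>import numpy as np

def search_pipeline(task, cut_fns, modify_fns, combine_fns):
    """Keep every (A, B, C) triple that reproduces all train outputs exactly."""
    good = []
    for a in cut_fns:
        for b in modify_fns:
            for c in combine_fns:
                try:
                    if all(np.array_equal(c([b(p) for p in a(np.array(pair["input"]))]),
                                          np.array(pair["output"]))
                           for pair in task["train"]):
                        good.append((a, b, c))
                except Exception:
                    pass          # incompatible shapes etc. simply mean "not a solution"
    return good

# Identity functions make "no changes" a valid step, so pure-crop or pure-recolor tasks
# are covered too (helper names below are hypothetical):
# cut_fns     = [lambda g: [g], split_into_color_regions]
# modify_fns  = [lambda p: p, recolor_to_most_common]
# combine_fns = [lambda parts: parts[0],
#                lambda parts: np.logical_and.reduce([p != 0 for p in parts]).astype(int)]
</code></pre>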
Abstraction and Reasoning Challenge
My part of the 3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is the description of our final solution <a href="https://www.kaggle.com/c/abstraction-and-reasoning-challenge/discussion/154305">https://www.kaggle.com/c/abstraction-and-reasoning-challenge/discussion/154305</a> @golubev has definitely contributed more than I into the final result, but even my solo solution could get at least the high silver so I share it with all the source code.</p> <p>The code and more detailed approach description can be found here: <a href="https://github.com/IliaLarchenko/abstract_reasoning">https://github.com/IliaLarchenko/abstract_reasoning</a></p> <p>Below is the short write up of my approach.</p> <ul> <li><p>In general, my method can be described as a domain-specific language (DSL) based. I have created the abstract representation of the colors, image blocks extracted from the original input image, and different binary masks. Using these abstractions, I wrote several "predictors" that try to derive a set of transformations used to get every output image from the corresponding input image and, if it succeeds – apply these transformations to test input to get the final answer.</p></li> <li><p>By the moment of team-up, I had 5 tasks solved (+2 from public kernels). (<a href="https://www.kaggle.com/ilialar/5-tasks-part-of-3rd-place-solution">my kernel demonstrating how it works</a>). Probably there are other tasks that can be solved as well, but I have not tested my solution separately after the merge. My approach also solves 138 and 96 samples from the train and valid sets, respectively.</p></li> <li><p>The general logic of every predictor is described in the pseudo-code below (although, it can be different for some classes). ``` for n, (input_image, output_image) in enumerate(sample[‘train’]): list_of_solutions = [] for possible_solution in all_possible_solutions: if apply_solution(input_image, possible_solution) == output_image: list_of_solutions.append(possible_solution) if n == 0: final_list_of_solutions = list_of_solutions else: final_list_of_solutions = intersection(list_of_solutions, final_list_of_solutions)</p> <p>if len(final_list_of_solutions) == 0 return None</p></li> </ul> <p>answers = [] for test_input_image in sample[‘test’]: answers.append([]) for solution in final_list_of_solutions: answers[-1].append(apply_solution(test_input_image, solution))</p> <p>return answers ```</p> <ul> <li><p>The best working predictors (on the test set) were those that restore some kind of mosaic on images.</p></li> <li><p>Other important tips: – I have used multiprocessing to significantly speed up computations and control RAM and time each process uses. – I have cached all blocks generated at any step. Without it, 16gb RAM often was not enough. – I have used unit test extensively to control that I have not broken anything with my last changes (I think, it saved me a lot of hours)</p></li> </ul>
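<p>To make the pseudo-code above runnable, here is a compact Python version under the assumption that each candidate solution is a comparable description (e.g. a tuple of operation names) and that <code>apply_solution</code> returns a numpy grid:</p> <pre><code>import numpy as np

def predict(task, all_possible_solutions, apply_solution, max_answers=3):
    """Keep only the candidates that map every train input to its output, then apply them to test."""
    final = None
    for pair in task["train"]:
        inp, out = np.array(pair["input"]), np.array(pair["output"])
        pool = all_possible_solutions if final is None else final
        final = [s for s in pool if np.array_equal(apply_solution(inp, s), out)]
        if not final:
            return None
    return [[apply_solution(np.array(t["input"]), s) for s in final[:max_answers]]
            for t in task["test"]]
</code></pre>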
Abstraction and Reasoning Challenge
ARC - OO Framework + XGBoost Multimodel Solvers
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Visualized Notebook Kernel</h1> <ul> <li><a href="https://www.kaggle.com/jamesmcguigan/arc-oo-framework-xgboost-multimodel-solvers">https://www.kaggle.com/jamesmcguigan/arc-oo-framework-xgboost-multimodel-solvers</a></li> </ul> <h1>Introduction</h1> <p>In many ways, I feel I had only just gotten started with this competition by the time the submission deadline hit. This is a very interesting problem that has let me start understanding the machine learning toolbox in a completely new way. I am curious now about what this dataset can show us about the limits of each of the other tools in the machine learning toolbox.</p> <p>My first attempt was written inside a Kaggle notebook, attempting to solve geometry and tessellation problems by hand visualizing the results as I went along. I had reached the level - <a href="https://www.kaggle.com/jamesmcguigan/arc-geometry-solvers/">https://www.kaggle.com/jamesmcguigan/arc-geometry-solvers/</a></p> <p>This got me sufficiently noticed to be invited to join my first team: "Mathematicians + Experts" who gave me new perspectives on how to approach Kaggle Competitions as well as introducing me to new ideas. Thank you.</p> <p>I realized at this point that I had reached the complexity limit for code written in a jupyter notebook, so ported my code to a multi-file IDE github repo and used my previously written <code>kaggle_compile.py</code> script to create a single <code>submission.py</code> which could easily be copy/pasted into Kaggle. - <a href="https://www.kaggle.com/jamesmcguigan/kaggle-compile-py-python-ide-to-kaggle-compiler/">https://www.kaggle.com/jamesmcguigan/kaggle-compile-py-python-ide-to-kaggle-compiler/</a></p> <p>My next approach was to write a typed Object Oriented Class Model around the dataset, to allow for static typing (yay for linters!) and ease of navigaton between related data-structures. I also converted all the grids into <code>numpy.array(dtype=np.int8)</code>.</p> <p>Attempting to solve the general problem of <code>f(g(h(x)))</code>, I wrote proof of concept code using <code>inspect.signiture()</code> figure out all possible permutations of <code>f(g(h(x)))</code> and implement an IoC dependency injection solver. Then the deadline started getting close.</p> <p>I then went back to my original notebook code and properly ported it into the IDE. This resulted in <code>Solver</code> and <code>ProblemSetSorter</code> and <code>ProblemEncoder</code> base classes containing all the common case logic for looping over the dataset and applying the <code>.detect(task)</code>, <code>.fit(task)</code>, <code>.predict(task)</code>, <code>.test(task)</code>, <code>.solve_grid(np.ndarray, *args)</code> lifecycle. This significantly reduces the effort required to write new solver algorithms, when all that is needed is to implement a subset of the lifecycle methods. The rest of the autowiring is handled by the base class. 
</p> <p>Combined with the OO DataModel, and a modified <code>plot_task()</code> function (that also shows the numbers and algorithm solutions), this provides a very clean interface for showcasing new Solvers within a Jupyter Notebook.</p> <p>Looking through the team's collective codebase, I encountered XGBoost for the first time. I ported the existing code into my Solver framework with significant refactoring and then started playing around with greatly increasing the featuremap and hyperparameter tuning. I was able to greatly improve the success rate of the algorithm:</p> <pre><code>### Original
training   : {'correct': 18, 'guesses': 49, 'total': 416, 'error': 0.9567, 'name': 'training'}
evaluation : {'correct': 3, 'guesses': 19, 'total': 419, 'error': 0.9928, 'name': 'evaluation'}
test       : {'correct': 0, 'guesses': 8, 'total': 104, 'error': 1.0, 'name': 'test'}
time       : 00:00:30

### XGBGridSolver(n_estimators=32, max_depth=10)
training   : {'correct': 43, 'guesses': 266, 'total': 416, 'error': 0.8966, 'name': 'training'}
evaluation : {'correct': 14, 'guesses': 266, 'total': 419, 'error': 0.9666, 'name': 'evaluation'}
test       : {'correct': 3.4, 'guesses': 65, 'total': 104, 'error': 1.0, 'name': 'test'}
time       : 00:14:53
</code></pre> <p>What surprised me was how many seemingly unrelated classes of problem XGBoost could autosolve (in the training and evaluation datasets) when given a sufficiently large multi-dimensional (flattened) feature map, including each pixel's relative "view" of neighbouring pixels. However, this XGBoost approach also led to a large amount of random guessing, as it could quite often find rules that matched all the <code>train</code> examples, but incorrectly matched the <code>test</code>. This approach was able to score <code>0.990</code> == <code>1/104</code> on the public leaderboard.</p> <p>I did a complete rewrite of the XGBoostSolver, turning it into fully subclassable ProblemSetSolver and Encoder base classes that allow configuration-based definition of typed univariate functions, which can be used to auto-generate a feature map. However, it turns out that XGBoost had significant trouble when faced with the seemingly simple Single Color Problem. </p> <p>I also showed how to implement Hyperopt Bayesian Hyperparameter Optimization on XGBoost.</p> <p>I also wrote a few silly Solvers, such as the <code>DoNothingSolver()</code>, and the <code>GlobSolver()</code> which did a hashmap lookup on the <code>training</code> and <code>evaluation</code> datasets in case any of the hidden submission tasks appeared there. Alas, these did not find anything.</p> <h1>Onwards</h1> <p>The full list of solved training and evaluation tasks is visualized in the notebook, along with full code and an extended writeup. - <a href="https://www.kaggle.com/jamesmcguigan/arc-oo-framework-xgboost-multimodel-solvers">https://www.kaggle.com/jamesmcguigan/arc-oo-framework-xgboost-multimodel-solvers</a></p>
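<p>For readers who want to see what a pixel-level feature map for XGBoost can look like, here is a minimal sketch (my own simplification, not the notebook's code) that only covers the case where the input and output grids have the same shape:</p> <pre><code>import numpy as np
import xgboost as xgb

def pixel_features(grid, radius=2):
    """One row per pixel: its coordinates, the grid size, and the colors of the
    surrounding (2*radius+1)^2 window, padded with -1 outside the grid."""
    g = np.array(grid)
    padded = np.pad(g, radius, constant_values=-1)
    rows = []
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            rows.append(np.concatenate(([i, j, g.shape[0], g.shape[1]], window)))
    return np.array(rows)

def fit_task(task):
    """Fit one XGBoost model per task on pixel-level features."""
    X = np.vstack([pixel_features(p["input"]) for p in task["train"]])
    y = np.concatenate([np.array(p["output"]).ravel() for p in task["train"]])
    classes, y_enc = np.unique(y, return_inverse=True)   # XGBoost wants labels 0..n_classes-1
    model = xgb.XGBClassifier(n_estimators=32, max_depth=10)
    model.fit(X, y_enc)
    return model, classes

# pred = classes[model.predict(pixel_features(test_input))].reshape(np.array(test_input).shape)
</code></pre>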
Abstraction and Reasoning Challenge
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a short overview. </p> <p>Part 1, Cropping problems. Here the answer picture is contained in the input. The program solved 9 of these, where 2 were special, explained below. This cropping method perhaps exhibited some tiny intelligence. It is a hand made decision tree with some complicated scoring. It is messy but seems to work.</p> <p>Part 2, Repairing Tiles and repairing symmetry. Original method was a search method and solved 4 of these. Today I worked on a last-minute hastily made version that searched for partial symmetries in a picture in a little more intelligent way and surprisingly it solved 5. (I didn't have time to check if together they would have solved more than 5, but likely 4 of the 5 was already covered by the other method)</p> <p>The old repairing symmetry method was also used in Part 1 for some of the stranger cropping problems. Here you first repair the symmetry and then use cropping algorithm to get what was the missing piece.</p> <p>So Part 1 and Part 2 together solves 7 straight cropping problems, 5 straight repairing symmetry problems and 2 problems that were combined tasks. </p> <p>The rest were just 4 unrelated tasks. One was a public decision tree kernel, one was my public ModuloQ method, one used a simplification method of a picture and then using the coordinates mod p in the smaller simplified picture. The last one was making a larger mosaic from a smaller picture.</p> <p>This was a fun competition! I hope there will be a similar competition in the future where we can try to combine different methods. (I think a more intelligent agent should have different "specialist" parts that work together). </p> <p>Update: The repairing symmetry part is now in a public notebook: <a href="https://www.kaggle.com/szabo7zoltan/repairingmosaicsandsymmetry">https://www.kaggle.com/szabo7zoltan/repairingmosaicsandsymmetry</a></p>
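<p>The write-up above does not include code, so here is a very small sketch of the symmetry-repair idea under my own simplifying assumptions (only left-right / up-down mirror symmetry, and the region to repair marked with a single <code>hole_color</code>; the real tasks are more varied):</p> <pre><code>import numpy as np

def repair_mirror_symmetry(grid, hole_color=0):
    """If the grid (ignoring hole cells) is consistent with a mirror symmetry,
    fill each hole from its mirrored counterpart."""
    g = np.array(grid)
    for flip in (np.fliplr, np.flipud):
        m = flip(g)
        known = (g != hole_color) &amp; (m != hole_color)
        if np.array_equal(g[known], m[known]):   # symmetry holds on all known cells
            filled = np.where(g == hole_color, m, g)
            if (filled != hole_color).all():     # every hole had a visible mirror cell
                return filled
    return None
</code></pre>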
Abstraction and Reasoning Challenge
1st place solution + code and official documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to <a href="/fchollet">@fchollet</a> for making this dataset, and organizing the competition. I think understanding and making intelligence is extremely interesting, and working on this problem and reading the dataset's paper significantly changed my view on AGI. The problem of squeezing prior knowledge and generalization into your model really becomes clear when you need to do it yourself. I had actually read the paper before the contest was announced, but working on the contest gave me a much more detailed view of the practical side of things.</p> <p>Unfortunately, I don't feel like my solution itself brings us closer to AGI. The main component is a DSL which applies up to 4 of 142 unary transformations (42 different functions, some have different variants). This DSL is solved by enumeration (exploiting duplicates) + a greedy stacking combiner. Everything is implemented efficiently in C++ (with no dependencies) and running in parallel.</p> <p>My code is currently &gt; 10k lines long, but (I think) most of it is not used. I will publish it after removing the unused / deprecated / failed code.</p> <p>Edit: Code and official documentation now available at <a href="https://github.com/top-quarks/ARC-solution">https://github.com/top-quarks/ARC-solution</a> and <a href="https://github.com/top-quarks/ARC-solution/blob/master/ARC-solution_documentation.pdf">https://github.com/top-quarks/ARC-solution/blob/master/ARC-solution_documentation.pdf </a></p> <p>Edit 2: Public notebook scoring 0.794 <a href="https://www.kaggle.com/icecuber/arc-1st-place-solution">https://www.kaggle.com/icecuber/arc-1st-place-solution </a></p> <h3>My path</h3> <p>In practice I started out with implementing the first 100 training tasks by hand, extracting and generalizing useful functions from that. I made a brute force combiner of these transformations, choosing to only support transformations from images to a single image. Originally I wanted more types such as positions and colors, but it turned out that it was easier to represent these as images (such as a dot at the position, or a dot of a given color). This decision made it much easier to make an DSL, and got 10 tasks on the LB.</p> <p>I then noticed that distribution of training, evaluation and LB were quite different, so I decided the evaluation dataset was better used as training data. I hand-coded 100 evaluation tasks, which I used to add more transformations, and improve my DSL. I noticed that functions taking more than 1 argument were growing my search space super-exponentially, and that they usually just took either the input or output image size as second argument. This led me to only keep unary functions. I also added lists of images as a type, to solve tasks that required looping. I also started representing my DSL in a DAG to exploit duplicates (multiple transformations giving same output). This required me to rewrite most of my code, but gave me 14 tasks on LB.</p> <p>Optimizations, multithreading, reducing memory usage, etc. gave me 17 on LB. 
And augmenting my samples with diagonally flipped tasks, boosted me up to 21.</p> <p>Note that the above (and below) is simplified, and there are many, many details I'm forgetting / omitting as I write this.</p> <h2>Main solver steps</h2> <h3>1. Recolor inputs</h3> <p>I remap colors in each sample input to be consistent across samples. A typical use of this is when each sample has a single color, but it is different for each sample. The recoloring would make all of them equal. I do this through iteratively, greedily recoloring the most extreme color across all images using features such as size, number of objects or position. So if in each sample input there is a single-colored object considerably larger than all others, I make all those objects the same color. This works surprisingly well, even for cases I did not think I accounted for. It even straight up solves some easy tasks from the training set by making all outputs equal.</p> <h3>2. Maybe add diagonally flipped images</h3> <p>I double the sample inputs by adding the same tasks flipped along a diagonal. This makes for 3 full runs (depth 3 for performance reasons): one normal, and one for each diagonal. This simple addition moved me from 17 to 21 tasks solved on the LB (and hence helped more than all the optimizations needed for doing depth 4 search instead of depth 3).</p> <h3>3. Solve for output size</h3> <p>I first generate pieces as explained in "Overview of DSL". I do combinations and transformations of piece sizes to fit the training samples, and pick the one consistent with the most training samples. I break ties by approximately ordering the combinations by complexity (so transforming a piece with size WxH to get (2W)x(H+1) is more likely than (W+2)x(3H). The output size is correct for about 97% of the evaluation tasks.</p> <h3>4. Generate transformed pieces using output size</h3> <p>Details in "Overview of DSL". Output size is used to give the second argument to functions like embed, which crops / pads images to a given size.</p> <h3>5. Greedily stack pieces to fit a training sample</h3> <p>See "Overview of DSL".</p> <h3>6. Pick the top 3 candidates</h3> <p>I order candidates by the following criteria in decreasing order of importance: - Fits most training samples - Uses fewest transformations (max depth across pieces) - Uses fewest pieces in the stack</p> <h2>Overview of DSL</h2> <p>All the nodes in my DSL are either images, or lists of images. An image has a position, a size, and a 2d array of colors (0-9). Black (0) is treated as transparent in most of the functions.</p> <p>First, sequentially apply up to 3 or 4 unary transformations (see "List of transformations") to the input image. Then optionally apply a subset of the following (in order): - Stack lists of images - Move image to origin - Color all non-black pixels some color (for each color in every training output) - Resize image (crop / pad) to fit output size</p> <p>I call result of the above transformations "pieces". Finally, stack some pieces to produce a final output.</p> <p>My solver mainly enumerates all the pieces, and applies a greedy solver to stack them. Since there are 142 image transformations at each depth, plus transformations at the end, we can't naively generate all the pieces. Fortunately, many different paths of transformations lead to the same image. I therefore make a DAG (Directed Acyclic Graph) for each training input, where each node is a image, and edges are transformations. 
This typically leaves on the order of 10^7 nodes in the DAGs and a few million unique pieces (where pieces are different if they differ for any sample input image or the test image).</p> <p>I run multiple tasks in parallel (4 at depth 3, 2 at depth 4) using a script which tries to schedule all the tasks to fit within memory and 9 hour time limit, this runs maybe 50% of the images at depth 4 and the rest at depth 3 (I didn't really benchmark it, except 67% were run at depth 4 in an old version). The memory limit was typically more restrictive than the time limit, so I reduced memory usage through several tricks, such as huffman coding the images in the DAGs and using a custom memory-efficient hashmap.</p> <p>My greedy stacker produces each candidate output as follows: - Split pieces into black parts and non-black parts (parts not picked are treated as transparent) - Pick a guiding sample - Pick some subset of samples to "care about" - Up to 10 times: - Among all the pieces which do not contradict any of the images we care about - Add the piece which fits the most new pixels in the guiding image - If the test output image contains no holes, add this as a candidate</p> <p>The stacker is optimized using bitsets to be fast and memory efficient.</p> <h2>Some notes about choices and failed ideas:</h2> <p>Greedily stacking outputs worked surprisingly well, even solving many tasks I didn't think it could solve. It consistently beat all my attempts at alternatives and improvements, so it just stayed mostly the same from the beginning.</p> <p>I tried several ideas which didn't make it. One of these was adding general functions of more than one variable. However, none of my many attempts seemed to be able to give a significant improvement.</p> <p>Another idea was a system of deducing transformations from the training outputs, which I really wanted to work. If I could make this in a general way, I could maybe apply meet-in-the-middle to search double the depth! However, after many attempts I found that deductions are way harder to stack and generalize than transformations. This made it much harder to generalize deductions, and ultimately only a single deduction was kept (deducing "kronecker products" as last step).</p> <p>Yet another approach was generating a large number of sample tasks to train ML algorithms and fit hyperparameters (f.ex. different costs for each transformation). However, I concluded that generating new tasks was harder than solving them.</p> <p>I always felt ranking candidates was a much easier problem than generating candidates (f.ex. if all inputs are symmetrical, then output should probably be symmetrical). Ideally this could be used to fill in parts of the image, or generate a huge number of candidates and leave most of the work to the ranking algorithm. However, I never found a way to exploit powerful ranking to my advantage. My simple ranking heuristics worked well enough for its purpose.</p> <h2>Other</h2> <h3>Statistics for the evaluation dataset</h3> <p>Depth 3, without adding flipped images. Faster, but probably worse than a full run.</p> <p>Total: 419 (Some of the 400 tasks have multiple tests) Size : 409 (97.6% deduced the correct output size) Cands: 176 (42% had the correct answer among the generated candidates) Correct: 164 (39% gave the correct answer among top 3 answers)</p> <h3>List of transformations</h3> <p>Many functions have many variations, but are only listed once (f.ex. there are 14 variations of pickMax for different criteria). 
Fill, Move, border, broadcast, center, colShape, composeGrowing, compress, compress2, compress3, connect, count, cut, embed, eraseCol, filterCol, getPos, getSize, getSize0, gravity, half, hull, hull0, insideMarked, interior, interior2, majCol, makeBorder, makeBorder2, mirror, myStack, pickMax, pickMaxes, pickNotMaxes, pickUnique, repeat, rigid, smear, splitAll, splitCols, splitColumns, splitRows, spreadCols, stackLine, toOrigin, wrap.</p> <p>Each transformation takes either an image or a list of images, and produces either an image or a list of images. Image -&gt; image transformations are also broadcast across lists of images to produce list -&gt; list transformations. Transformations which take multiple inputs are given the original sample input image, or a black image with the dimensions of the output image, to make them unary.</p> <p>Transformations are implemented to be O(#input pixels) to ensure no transformation takes significantly more time than any other, although some exceptions were made when they take negligible time for other reasons.</p>
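<p>For illustration of the greedy stacker described in the "Overview of DSL" section, here is a minimal sketch in Python. It is a simplification of the C++ solver: pieces are same-sized images with 0 (black) treated as transparent, there is a single guiding sample instead of a "care about" subset, and the chosen piece indices would then be replayed on the corresponding pieces generated for the test input.</p> <p>```
# Sketch: greedily stack pieces that explain the most new pixels of the guide.
import numpy as np

def greedy_stack(pieces, guide, max_pieces=10):
    """Return indices of pieces that reproduce `guide` without contradictions, or None."""
    guide = np.array(guide)
    filled = np.zeros(guide.shape, dtype=bool)
    chosen = []
    while len(chosen) &lt; max_pieces:
        if (guide[~filled] == 0).all():
            return chosen                         # nothing non-black left unexplained
        best_idx, best_gain = None, 0
        for idx, piece in enumerate(pieces):
            p = np.array(piece)
            if p.shape != guide.shape:
                continue
            visible = p != 0
            if np.any(visible &amp; (p != guide)):    # piece contradicts the guiding sample
                continue
            gain = int(np.sum(visible &amp; ~filled)) # newly explained pixels
            if gain &gt; best_gain:
                best_idx, best_gain = idx, gain
        if best_idx is None:
            return None                           # stuck: some pixel cannot be explained
        chosen.append(best_idx)
        filled |= np.array(pieces[best_idx]) != 0
    return None
```</p>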
Abstraction and Reasoning Challenge
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I guess you are interested in TOP solutions, yeah me too! <br> But please let me write here, as I spent much energy on this challenge. </p> <p>My solution is a DSL plus search. <br> <a href="https://github.com/k-harada/abstraction-and-reasoning-challenge">https://github.com/k-harada/abstraction-and-reasoning-challenge</a> <br> My DSL is applied like this: </p> <p>```
def test_58():
    p = Runner(58, verbose=True)
    p.run("mesh_split")
    p.run("n_cell")
    p.run("keep_max")
    p.run("max_color")
    p.run("paste_color_full")
```</p> <p>I tried some kinds of search, and simple BFS significantly improved my score, 0.970 -&gt; 0.931. <br> I had no ideas for probing. </p> <p>Solved: Train: 146 / 416 <br> Evaluation: 94 / 419 </p>
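<p>To sketch what "DSL plus BFS" can look like in practice, here is a minimal illustration. The toy operation pool and the Runner-free structure are my assumptions, not the author's DSL (whose repository is linked above): the real operations act on objects and meshes (e.g. <code>mesh_split</code>, <code>keep_max</code>), not just geometry.</p> <p>```
# Sketch: breadth-first search over sequences of grid-to-grid operations.
from collections import deque
import numpy as np

OPS = {
    "flip_lr":   lambda g: np.fliplr(g),
    "flip_ud":   lambda g: np.flipud(g),
    "rot90":     lambda g: np.rot90(g),
    "transpose": lambda g: g.T,
}

def apply_program(program, grid):
    g = np.array(grid)
    for name in program:
        g = OPS[name](g)
    return g

def bfs_search(task, max_depth=4):
    """Return the shortest op sequence that solves all training pairs, or None."""
    queue = deque([[]])
    while queue:
        program = queue.popleft()
        if all(np.array_equal(apply_program(program, p["input"]), np.array(p["output"]))
               for p in task["train"]):
            return program
        if len(program) &lt; max_depth:
            for name in OPS:
                queue.append(program + [name])
    return None
```</p>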
Bengali.AI Handwritten Grapheme Classification
24th place solution - Post processing(+0.013 Private LB)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers for holding a very fun competition. Thank you to all kagglers! Through the discussion, I learned a lot.</p> <p>My solution used a simple ensemble model and post-processing to maximize macro recall. Post-processing adds a bias to the logit output by the model. Without post-processing I am out of medal zone😭</p> <p>Here is an overview of the solution:</p> <h2>Inference Label</h2> <p>First, the DL model calculates the logits. As a post-process, add optimal bias to logits. Infer estimated labels using argmax for logits.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2Fdb0652c31dc1ea8a9abbb306f3ba6bd2%2Finference.png?generation=1584433357383197&amp;alt=media" alt=""></p> <h2>Model</h2> <p>The model is an ensemble of five models. All training data was used for learning. Due to my limited time and computational resources, I've done very little cross-validation.</p> <ul> <li>model 1, 2, 3, 4 <br> <ul><li>Seresnext50_32x4d (pretrained model using imagenet) </li> <li>image size 128x128 </li> <li>Three linear head (output is grapheme root logit (168), Vowel diacritic logit (168), Consonant diacritic logit (168)) </li> <li>Mish activation, Drop block</li> <li>Cross Entropy Loss </li> <li>AdaBound </li> <li>Change the following parameters for each model </li> <li>Manifold mixup (alpha, layer), ShiftScaleRotate, Cutout </li></ul></li> <li>model 5 <ul><li>Seresnext50_32x4d (pretrained model using imagenet) </li> <li>image size 137x236 </li> <li>Three linear head (output is grapheme root logit (168), Vowel diacritic logit (168), Consonant diacritic logit (168)) </li> <li>Mish activation, Drop block</li> <li>Cross Entropy Loss </li> <li>AdaBound </li> <li>Manifold mixup (alpha, layer), ShiftScaleRotate, Cutout </li></ul></li> </ul> <h2>Calculation of post-processing bias</h2> <p>Since the model is trained in Cross Entropy Loss, it generally does not maximize macro recall. So I considered using a bias that would be added to logit to optimize macro recall. The optimal bias was calculated using a real coded genetic algorithm as follows: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2F1b04e39fcfe7d6862464b81da5bd0878%2Fsearch_optimal_bias.png?generation=1584434262345894&amp;alt=media" alt=""></p> <ul> <li>Calculate logit for all training data. </li> <li>Calculate bias to maximize macro recall with real-valued genetic algorithm. </li> <li>Calculate logit for all training data with data extension. </li> <li>Calculate bias to maximize macro recall with real-valued genetic algorithm. </li> <li>The above biases are averaged to obtain the final bias.</li> </ul> <p>In estimating the test data, the above bias calculated from the training data is used.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2117884%2Fb8874f1bef7f5f1e5a0125ab21d3d7d0%2Foptimal_bias.png?generation=1584434294711062&amp;alt=media" alt=""></p> <p>There is not much good English literature on real coded genetic algorithms. 
I think the following is helpful for implementation: <a href="https://github.com/statsu1990/Real-coded-genetic-algorithm">https://github.com/statsu1990/Real-coded-genetic-algorithm</a></p> <p>CMA-ES is similar to a real coded genetic algorithm, and there is a lot of English literature on it. Therefore, you may want to try CMA-ES. </p> <h2>Score (Public LB / Private LB)</h2> <p>Single model <br> - model1 (wo bias): 0.9689 / 0.9285 <br> - model2 (wo bias): 0.9680 / 0.9270 <br> - model3 (wo bias): 0.9691 / 0.9317 <br> - model4 (wo bias): 0.9681 / 0.9243 <br> - model5 (wo bias): 0.9705 / 0.9290 </p> <p>Ensemble model <br> - ensemble1 ~ 5 (TTA, wo bias): 0.9712 / 0.9309 <br> - ensemble1 ~ 5 (TTA, with bias): 0.9744 / 0.9435</p> <p>That's all. Thank you for reading!</p>
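<p>As a short addendum, here is a minimal sketch of the bias-search idea described above. It uses a naive (1+1) mutation hill climb rather than the real coded genetic algorithm from the post, the array names are assumptions, and in the competition it would be run separately for each head (grapheme root, vowel diacritic, consonant diacritic).</p> <p>```
# Sketch: search a per-class bias added to logits to maximise macro recall.
import numpy as np
from sklearn.metrics import recall_score

def macro_recall(logits, labels, bias):
    preds = np.argmax(logits + bias, axis=1)
    return recall_score(labels, preds, average="macro")

def search_bias(logits, labels, n_iters=2000, sigma=0.1, seed=0):
    """Naive hill climb over the bias vector (stand-in for the real coded GA)."""
    rng = np.random.default_rng(seed)
    best_bias = np.zeros(logits.shape[1])
    best_score = macro_recall(logits, labels, best_bias)
    for _ in range(n_iters):
        candidate = best_bias + rng.normal(0.0, sigma, size=best_bias.shape)
        score = macro_recall(logits, labels, candidate)
        if score &gt; best_score:
            best_bias, best_score = candidate, score
    return best_bias, best_score

# The bias found on (plain and augmented) training logits is then simply added
# to the test logits before the argmax, as in the inference diagram above:
# test_preds = np.argmax(test_logits + bias, axis=1)
```</p>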
Abstraction and Reasoning Challenge
9th place solution(s)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's fantastic to see the huge improvement of all teams in the last week of this interesting competition. Like many others, my teammates and I have had a sleepless night playing with the last "tickets." </p> <p>Here is a short list to sum up our team's solutions:</p> <ul> <li>2 DSL: 8/102</li> <li>2 Pub: 2/102</li> <li>CA: 2/102</li> <li>CA + voting: 1/102</li> <li>DSL + alignment: 1/102</li> </ul> <hr> <p><a href="/user189546">@user189546</a>'s DSL and CA, in her own words</p> <p>My part of our team solution scores 7 points on the LB. It consists of 2 approaches: - Crop object + symmetry transformations (5 points). It overlapped by 2 tasks with <a href="/yukikubo123">@yukikubo123</a>'s DSL+search solution. - Cellular Automata recurrent CNN public kernel + encoding colors by frequency (2 points; however, the first time it scored 3 points, which didn’t happen again).</p> <p>1) Crop object</p> <p>This competition is hard for many reasons, and one of the reasons is the large output space. When I joined the competition I wanted to get a first small win quickly, so I looked at tasks that have a small output space: crop-object tasks. The first object-crop kernel scored 2 points, and when I added more object extraction functions and symmetry transformations it scored 5 points.</p> <p>The kernel uses brute force to find the right object extraction function and the right symmetry transformation. If the number of possible unique crop candidates is &gt; 3, it predicts the 3 biggest objects. Small objects can be features or noise. The solution runs in 3 min in Python.</p> <p><em>Pseudocode:</em></p> <p>```
bg_color = detect_bg(task)  # background color = color of the figure that touches 3 sides of the input

get_object_funcs = [get_objects_by_color, get_objects_by_connectivity,
                    get_objects_by_color_and_connectivity,
                    get_objects_rectangles_without_noise, ...]

symmetry_funcs = [identity, flip, rotate, tile, zoom, flip_diagonal, ...]

for get_object_func in get_object_funcs:
    for symmetry_func in symmetry_funcs:
        train_prediction_candidates = symmetry_func(get_object_func(task_train_examples))
        if task_train_outputs in train_prediction_candidates:
            break

test_prediction_candidates = symmetry_func(get_object_func(task_test_examples))
test_predictions = unique(sort_by_size(test_prediction_candidates))[:3]
```</p> <p><a href="https://www.kaggle.com/user189546/5-crop-tasks-by-brute-force"><strong>Sample Notebook</strong></a></p> <p>2) CA + ColorFreqEncoder</p> <p>Thanks to <a href="/teddykoker">@teddykoker</a> we had a wonderful CA kernel: <a href="https://www.kaggle.com/teddykoker/training-cellular-automata-part-ii-learning-tasks">https://www.kaggle.com/teddykoker/training-cellular-automata-part-ii-learning-tasks</a>. It solved plenty of various tasks in train and eval, however it didn't score on the LB. I added encoding colors by frequency (0 – most common color, 1 – second most common color, 2 – …) and fixed all random seeds. Kernel performance is highly random, so without fixed random seeds it is not reproducible. 
When I submitted first time without fixing all random seeds it scored 0.97, however when I fixed random seeds it scored 0.98.</p> <p><a href="https://www.kaggle.com/user189546/cellular-automata-learning-with-colorfreqencoder"><strong>Sample Notebook</strong></a></p> <hr> <p><a href="/yukikubo123">@yukikubo123</a>'s DSL</p> <p>I won't post the detail of this solution here because <a href="/yukikubo123">@yukikubo123</a>'s GitLab <a href="https://gitlab.com/yuki1020/abstraction-and-reasoning-challenge/"><em>repo</em></a> alone is enough to show all of his awesome work of art -&gt; <a href="https://gitlab.com/yuki1020/abstraction-and-reasoning-challenge/">https://gitlab.com/yuki1020/abstraction-and-reasoning-challenge/</a></p> <hr> <p><a href="/tarobxl">@tarobxl</a>'s alignment technique</p> <p>When observing the data, <a href="/tarobxl">@tarobxl</a> has thought: “Could we ease the learning process of the model(s), simply by rotating the training pairs of a task in the same direction.” I and he have applied this idea to all available solutions/models and luckily it works with <a href="/yukikubo123">@yukikubo123</a>’s DSL and gives us a new solved task in the LB.</p> <p><a href="https://www.kaggle.com/phunghieu/yukikubo123-s-dsl-with-alignment"><strong>Sample Notebook</strong></a></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1853329%2F1cbdc8c981bf300dfefdf0dc20745529%2Fimage.png?generation=1590764004017060&amp;alt=media" alt="img_1"></p> <hr> <p><a href="/phunghieu">@phunghieu</a>'s notes [Topic Author] - What I’ve tried - Try using CNN to solve ARC’s problem in Deep Learning approach -&gt; failed - Try merging <a href="/yukikubo123">@yukikubo123</a>’s and <a href="/user189546">@user189546</a>’s solutions with 2 public kernels to yield the 0.931 scored solution - Successfully optimize the CA solution to run faster (x2) on GPU accelerator by removing redundancy, duplicate jobs, and trying to use batch-size &gt; 1 whenever possible - Try averaging ensemble technique for CA solution -&gt; failed - (I &amp; <a href="/tarobxl">@tarobxl</a>) Successfully use the voting mechanism to ensemble CA kernel -&gt; 1 more solved task - Try to add augmentation to CA solution -&gt; failed - Successfully apply the alignment technique for <a href="/yukikubo123">@yukikubo123</a>’s DSL solution -&gt; 1 more solved task</p> <ul> <li><p>What I’ve not tried</p> <ul><li>Modify the CA solution to solve tasks that have differences between the input’s size and output’s size</li> <li>Use multi-processing to further improve the CA solution performance, thus we could try more idea within 2 hours limit of a submission uses accelerator (GPU)</li></ul></li> <li><p>My main contributions to the team are</p> <ul><li>Merging all solutions into a big solver</li> <li>Improving the performance of all available solutions to fit them into a time-limited submission</li></ul></li> </ul> <p>(*) A few last words: Although not being a big brainer to the team, I have, somehow, tried my best to make use of my programming skills and techniques which I frequently used in the Deep Learning field to support my teammates. I also have taken advice from <a href="/tarobxl">@tarobxl</a> and <a href="/ysharma1126">@ysharma1126</a> to accelerate my “solution-optimization” job.</p> <hr> <p>Happy Kaggling!!!</p>
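<p>As a small addendum on the ColorFreqEncoder from part 2 above, here is a minimal sketch of encoding colours by frequency. It is my illustration of the idea, not the original encoder; breaking ties by colour index is an assumption.</p> <p>```
# Sketch: re-index colours by frequency so the most common colour becomes 0.
from collections import Counter
import numpy as np

def color_freq_encode(grid):
    """Return (encoded_grid, mapping) with colours renumbered by decreasing frequency."""
    g = np.array(grid)
    counts = Counter(g.flatten().tolist())
    ranked = sorted(counts, key=lambda c: (-counts[c], c))   # ties broken by colour index
    mapping = {color: rank for rank, color in enumerate(ranked)}
    return np.vectorize(mapping.get)(g), mapping

def color_freq_decode(grid, mapping):
    inverse = {rank: color for color, rank in mapping.items()}
    return np.vectorize(inverse.get)(np.array(grid))
```</p> <p>The CA kernel is then trained and run in this normalised colour space and its output mapped back through the inverse mapping, which makes tasks whose colours are merely permuted look much more alike to the network.</p>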
Abstraction and Reasoning Challenge
Our 10th place solution [with code]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone that the competition is finally over! Just after the end I would like to tell you a bit about the solution of our team “Puzzlemaster” that got 10th place.</p> <p>Our solution is an ensemble of 3 parts that solved 3 sets of tasks that has a partial intersection: - DSL-algorithm implemented by <a href="/felipefonte99">@felipefonte99</a> - ML-algorithm implemented by <a href="/nikitaovsov">@nikitaovsov</a>. I think they will create other posts about their solutions or post comments in this thread.</p> <ul> <li>Genetic algorithm that tries to fit a DSL-program that was created by me and <a href="/artyomp">@artyomp</a>. It solved 10-12 tasks. Now I am going to tell you about it.</li> </ul> <p>At first I would like to thank authors of these kernels that inspired me from the algorithm’s perspective:</p> <p><a href="https://www.kaggle.com/arsenynerinovsky/cellular-automata-as-a-language-for-reasoning">https://www.kaggle.com/arsenynerinovsky/cellular-automata-as-a-language-for-reasoning</a> <a href="https://www.kaggle.com/zenol42/dsl-and-genetic-algorithm-applied-to-arc">https://www.kaggle.com/zenol42/dsl-and-genetic-algorithm-applied-to-arc</a></p> <p>And of course the kernel that I opened 1000 times:</p> <p><a href="https://www.kaggle.com/boliu0/visualizing-all-task-pairs-with-gridlines">https://www.kaggle.com/boliu0/visualizing-all-task-pairs-with-gridlines</a></p> <p><strong>The solution</strong></p> <p>We created a Domain-Specific Language based on commands that can be applied to a grid. There are many commands in our language. Each command has a couple of parameters that are randomly sampled during training. We call it “global rules” (in opposite to cellular automata rules that are described below) Examples of the commands: - Draw a line (or some color) from cells (of some color) (until it stops at some color). - Map all cells of (some color) to (some color) - Rotate the grid - Apply gravity to (figures/cells) of (some color) in the direction of ((some border)/some color) - others All that written in brackets are randomly sampled parameters.</p> <p>At the end of the competition we added elements of “Multiple grids” to the DSL. For example, it could split the grid (for example, by figures on it), apply some commands to some of the grids and then merge them back. This boosted our LB score a lot.</p> <p>There were also “Cellular automata” commands that were applied after previous commands. More details about cellular automatas can be find at this kernel: <a href="https://www.kaggle.com/arsenynerinovsky/cellular-automata-as-a-language-for-reasoning">https://www.kaggle.com/arsenynerinovsky/cellular-automata-as-a-language-for-reasoning</a></p> <p>Our DSL contained a couple of randomly parametrized CA-rules. For example, checking of the direct neighbors, checking of colored corners and so on.</p> <p>So the final DSL-program is a sequence of “global” rules, then CA-rules are applied to the result of that.</p> <p><strong>The C++ part</strong> Very important part of our solution was that the “global” rules and “CA-rules” were implemented by <a href="/artyomp">@artyomp</a> using C++. 
He did a great job that accelerated the whole algorithm at 100 times and allowed us to create many new rules and not to care much about the complexity of the search space. Without C++, we would probably solve half of the test tasks we solved.</p> <p><strong>The search algorithm</strong> In fact, it’s similar to the algorithm from <a href="https://www.kaggle.com/zenol42/dsl-and-genetic-algorithm-applied-to-arc">https://www.kaggle.com/zenol42/dsl-and-genetic-algorithm-applied-to-arc</a> but with a number of heuristics. Our algorithm creates random sequences of programs, creates asexual mutations (tries to replace commands with new commands, insert new commands, delete commands) and "sexual" mutations (tries to concatenate random prefixes and suffixes of 2 programs with some mutation in the end).</p> <p><strong>Quality measurement</strong> How to measure if the program is good? Obviously, we should measure accuracy (how many cells are correct). If it equals to 1 for all training pairs, we found a solution. But since we use genetic algorithm with mutations of 1 rule to find a solution, we might need more metrics to allow the population to be bigger. My observation was that we should be careful with that because if we add more metrics, it is more likely that a junky program with many commands will be created by the algorithm that can solve training samples but overfits. </p> <p>So as a result, our final choice of metrics: - Mean accuracy for all training outputs - Accuracies for each output - The mean of the following binary metrics: for each color from 0 to 10 we compute if our prediction of the color is absolutely correct (in terms of both precision and recall). It helped the algorithm to find correct sequences that fit correctly colors one by one.</p> <p><strong>The code</strong> The DSL-solution is about 2.5k lines of code. We are going to clean it a little bit, make some comments and publish as a kernel so you can dive into details. I didn’t mention a huge number of details of the algorithm because there would be too much text. But I am ready to answer your questions about the algorithms if you are curious.</p> <p>UPD.</p> <p>The C++ version is in repo: <a href="https://github.com/artyompal/kaggle-abstract-reasoning">https://github.com/artyompal/kaggle-abstract-reasoning</a></p> <p>The kernel with the python version: <a href="https://www.kaggle.com/alexfritz/genetic-dsl-part-from-10th-place-python-version">https://www.kaggle.com/alexfritz/genetic-dsl-part-from-10th-place-python-version</a></p> <p><strong>Conclusion</strong> I would like to thank organizers and my team <a href="/artyomp">@artyomp</a> <a href="/nikitaovsov">@nikitaovsov</a> <a href="/felipefonte99">@felipefonte99</a>! It was a great experience for me and I learned a lot of new things.</p>
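<p>To make the genetic search over DSL programs more concrete, here is a minimal sketch of the mutation and scoring loop. It is only an illustration: the rule pool, <code>apply_rule</code> and the population mechanics are placeholders, and the team's additional colour-wise metrics and CA rules are omitted.</p> <p>```
# Sketch: mutate / recombine DSL programs, scored by mean per-cell accuracy.
import random
import numpy as np

def cell_accuracy(pred, target):
    pred, target = np.array(pred), np.array(target)
    if pred.shape != target.shape:
        return 0.0
    return float((pred == target).mean())

def fitness(program, task, apply_rule):
    """Mean per-cell accuracy of the program over the training pairs."""
    scores = []
    for pair in task["train"]:
        grid = np.array(pair["input"])
        for rule in program:
            grid = apply_rule(rule, grid)
        scores.append(cell_accuracy(grid, pair["output"]))
    return float(np.mean(scores))

def mutate(program, rule_pool):
    """Asexual mutation: insert, delete or replace one randomly chosen rule."""
    program = list(program)
    op = random.choice(["insert", "delete", "replace"])
    if op == "insert" or not program:
        program.insert(random.randrange(len(program) + 1), random.choice(rule_pool))
    elif op == "delete":
        program.pop(random.randrange(len(program)))
    else:
        program[random.randrange(len(program))] = random.choice(rule_pool)
    return program

def crossover(a, b):
    """'Sexual' mutation: random prefix of one program plus random suffix of another."""
    return a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]
```</p> <p>A population loop then repeatedly scores candidate programs, keeps the best ones and refills the population with mutate/crossover, stopping when some program reaches fitness 1.0 on all training pairs.</p>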
Bengali.AI Handwritten Grapheme Classification
21st Solution using only Kaggle Kernels
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Handwritten Grapheme Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank <a href="/seesee">@seesee</a> a lot, because he was very generous in sharing his notebooks: <a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/134161">https://www.kaggle.com/c/bengaliai-cv19/discussion/134161</a> I was not going to join this competition because I have no GPU, but in the last week I decided to join after seeing these nice kernels. I could only use 2x30 TPU hours for this competition.</p> <p>The baseline was scoring 0.9708 on the public LB. Here is what I have done on top of that:</p> <ul> <li>I was aware that a <strong>random split</strong> for train and validation does not represent the split for the real test set, but I was too lazy to change it. I just kept it in mind.</li> <li>Every class contributes to the metric equally. Therefore, I <strong>multiplied the predictions by inverse class frequencies</strong> before getting the maximum. This gave a 0.007 boost on the public LB (around a 0.003 boost on the validation set).</li> <li>I changed the backbone model from <strong>Efficientnet B3 to B4</strong>. This gave a 0.002 boost.</li> <li>I changed the augmentation from mixup to <strong>cutmix</strong> using the implementation here: <a href="https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu">https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu</a> I only updated it so that it does the cuts as rectangles rather than squares (and fixed a small bug, I believe). Thanks <a href="/cdeotte">@cdeotte</a> for this nice kernel. Cutmix did not improve my LB but improved the local validation.</li> <li>I changed the learning scheme. <strong>Decreasing learning rates gradually</strong> gave me a 0.007 boost. I could have made more use of it, but the TPU run-time limit was 3 hours. If I had a higher limit, I could have ended up in the gold zone. Or maybe the opposite: having an underfit model gave me a shake-up.</li> <li>I ran the model <strong>on the whole data</strong>, rather than an 80% train split. This gave a 0.001 boost.</li> <li>I <strong>blended the last two runs</strong> with different seeds. This gave me a 0.001 boost.</li> <li>Considering that my split was not representative and that multiplying by inverse class frequencies helped more than expected, I decided to exploit it a bit more with <code>weight = np.power(weight, 1.2)</code>. This improved the public LB by 0.0006 but seems to improve private even more, probably because there are more <strong>unseen graphemes</strong> in the private set.</li> </ul> <p>So it was possible to get 21st place using only Kaggle Kernels in a week. Reading discussions and kernels carefully is enough for getting around this position. For top 10, usually more creativity is needed.</p>
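<p>Spelled out in code, the inverse-class-frequency trick above looks roughly like the following minimal sketch (the array names are assumptions), including the extra exponent from the last bullet.</p> <p>```
# Sketch: multiply softmax outputs by inverse class frequencies before the argmax.
import numpy as np

def reweight_predictions(probs, class_counts, power=1.2):
    """probs: (n_samples, n_classes) predictions; class_counts: training count per class."""
    weight = 1.0 / np.array(class_counts, dtype=float)   # rare classes get larger weights
    weight = np.power(weight, power)                     # the extra exponent from the post
    return np.argmax(probs * weight, axis=1)

# Applied separately to each head, e.g.:
# root_pred = reweight_predictions(root_probs, root_train_counts)
```</p>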
University of Liverpool - Ion Switching
19th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A single 1D-CNN WaveNet with batch normalization and a little bit of dropout, trained on the clean Kalman dataset from my public WaveNet notebook.</p> <p>Important: I added some data augmentation to handle the shift in the test set by adding a constant to the signal. Another data augmentation technique that also improved the CV was to add a little bit of random normal noise, like this: <code>train_aug['signal'] + [np.random.normal(0, 0.01) for x in range(train_aug.shape[0])]</code>. What I actually used was <code>train_aug['signal'] + np.random.normal(0, 0.01)</code>, hoping that this would handle the shift, and it worked (this was pure luck :) )</p> <p>I used KMeans to create 22 clusters, then one-hot encoded the cluster, making 22 extra features.</p> <p>I also computed some aggregated stats for each cluster, like the kurtosis, skewness and std.</p> <p>I added the difference between the signal and the nearest cluster centroid, and also the Euclidean distance between them.</p> <p>I used ±3 shifted signal, -3 rolling mean and std, -1 to 1 rolling mean and std, signal ** 2 and signal ** 4.</p> <p>That covers the feature engineering part. </p> <p>To handle overfitting I used a learning rate scheduler like the public WaveNet, so it could stop early (with a constant lr of 0.001 the net overfits badly, giving 0.942 on the public and private leaderboards and an awesome OOF F1 of 0.94452).</p> <p>It was a fun competition. Message me if you want to team up in another competition that we are both interested in and where we have more or less the same skills and knowledge.</p> <p>Sorry for my English, it's not my first language.</p>
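<p>To illustrate the KMeans feature block described above, here is a minimal sketch. The 22 clusters come from the post, while the column names and the exact per-cluster statistics are my assumptions.</p> <p>```
# Sketch: KMeans cluster features (22 clusters): one-hot ids, distance to the
# nearest centroid, and per-cluster aggregate statistics.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def add_cluster_features(df, n_clusters=22, seed=0):
    signal = df["signal"].values.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(signal)
    df["cluster"] = km.labels_

    # one-hot encoding of the cluster id (22 extra features)
    df = pd.concat([df, pd.get_dummies(df["cluster"], prefix="cluster")], axis=1)

    # difference and (1-D Euclidean) distance to the nearest centroid
    nearest_centroid = km.cluster_centers_[km.labels_, 0]
    df["diff_to_centroid"] = df["signal"] - nearest_centroid
    df["dist_to_centroid"] = df["diff_to_centroid"].abs()

    # aggregated stats per cluster: std, skewness, kurtosis
    stats = df.groupby("cluster")["signal"].agg(["std", "skew"])
    stats["kurt"] = df.groupby("cluster")["signal"].apply(pd.Series.kurt)
    df = df.merge(stats.add_prefix("cluster_"), left_on="cluster", right_index=True, how="left")
    return df
```</p>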
University of Liverpool - Ion Switching
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to everyone who participated in this competition. I want to thank Liverpool and Kaggle for putting it on. No competition is perfect but I think that despite it's faults this competition was successful (at least for me) in that it challenged me to think hard and apply machine learning solutions to solve a tough problem. A little over a year and a half ago I participated in my first serious Kaggle competition. I would've never believed that eventually I could achieve a solo gold. It was a lot of work paired with a healthy dose of luck.</p> <h2>Background (My Competition Story)</h2> <p>This competition piqued my interest the day it was launched. It seemed check all the boxes of what I'm interested in, namely: Non traditional data that wasn't too large or too small, a scientific based problem statement, non-synchronous so I didn't have to depend on kernels for training or submitting, and an opportunity to expand my skills building neural nets. After the Bengali competition ended I started working on the problem on nights and weekends like I normally do and keeping ideas in the back of my mind.</p> <p>Then COVID-19 cases started to show up in the US and things changed very quickly with my day job, I was furloughed, and suddenly had more time in my schedule for family and kaggle :)</p> <p>I noticed that many well known kagglers were opting out of this competition, which usually is not a good sign. Some theorized early on that we had reached optimum solution at 0.938 - I was skeptical. Yes, the data is semi-synthetic, but knowing the data was passed through a physical amplifier I figured that signal processing would at least give a boost beyond 0.938.</p> <p>I started modeling as I usually do with LGBM similar to what I shared in my public kernel (<a href="https://www.kaggle.com/robikscube/ion-switching-5kfold-lgbm-tracking">https://www.kaggle.com/robikscube/ion-switching-5kfold-lgbm-tracking</a>) Being inspired by The Zoo's past solutions I wanted to keep my model light on features. After reading <a href="/cdeotte">@cdeotte</a> 's excellent notebook explaining drift it was clear de-drifting was the priority. My first attempt at removing drift was just fitting a linear and parabolic equation on the rolling median values of each batch and subtracting it from the signal. This gave me a relatively clean signal however there was still one issue, the peaks of each channel were not lined up- specifically the 10 channel signal was not properly lined up with the peaks for the associated channels in other groups. To account for this I manually added an offset that would line up these peaks. This "peak alignment" combined with the clean data my LGBM model achieved 0.941 public LB - good enough for 2nd place at the time. I was thrilled but knew it was still very early in the competition. Not soon after that the first version of the "drift free" data was shared and many teams jumped to 0.941.</p> <p><img src="https://i.imgur.com/5Vcl2Bc.png" alt="0.941"></p> <p>I experimented with different features, different CV setups, model settings and was seeing small gains. 
During this time I also focused on better understanding the macroF1 metric and how it could be optimized. After reading some papers and past kaggle posts I learned that MacroF1 could be improved by giving preference to lower occurring classes. I developed some code for doing this and found post processing gave around a 0.0001 to 0.0005 boost on my CV and LB score but was also quite risky because there is always a chance of over post processing and doing more harm than good.</p> <p>That's when I decided to take my best LGBM features and use them in a Neural Network. Thanks to the wonderful notebooks I decided to code with TF2/keras instead of pytorch:</p> <ul> <li>Unet: <a href="https://www.kaggle.com/kmat2019/u-net-1d-cnn-with-keras">https://www.kaggle.com/kmat2019/u-net-1d-cnn-with-keras</a> Thanks- <a href="/kmat2019">@kmat2019</a></li> <li>Wavenet: <a href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a> Thanks- <a href="/siavrez">@siavrez</a> </li> </ul> <p>Wavenet showed the most promise, so I started to experiment with different features and slightly modifying the structure which was enough to push me to 0.942 public LB (3rd place at the time).</p> <p><img src="https://i.imgur.com/bx8nMZP.png" alt="0.942"></p> <p>I experimented with pseudo labeling, target encoding, and many other things but eventually was starting to drop down in the rankings and became very discouraged. I even reached out to a friend/kaggler to see if he was interested in teaming- Luckily (for me) I made my next breakthrough before he could respond.</p> <p>In one of my experiments I tried was adding an LSTM after the wavenet in combination with target encoding on the signal rounded to 2 decimal places. This gave a big improvement on CV but I was skeptical. When I submitted I was surprised that it held up and I was the first to break 0.945! I spent the next few days trying to understand why this worked and recreating it in a more CV stable way. What I found was that the model performed better when the target encoding wasn't scaled: so while the target encoding was in the range 0-10, the signal and other features were scaled 0-1. I also realized that rounding the signal to 2 digits was essentially the same thing as binning the signal - so I switched to using <code>KBinsDiscretizer</code> to allow me to experiment with different bin sizes.</p> <p><img src="https://i.imgur.com/d0IeCid.png" alt="0.945"></p> <p>After many experiments I was able to push my score up to 0.946 (1st place at the time). I also found this great kernel gave me the idea to add a 50Hz and a few additional frequency features which gave my CV/LB a boost: <a href="https://www.kaggle.com/johnoliverjones/frequency-domain-filtering">https://www.kaggle.com/johnoliverjones/frequency-domain-filtering</a> - thanks <a href="/johnoliverjones">@johnoliverjones</a></p> <p><img src="https://i.imgur.com/NZStXz1.png" alt="0.946"></p> <p>By this point many power teams were forming with multiple GMs and strong kagglers. I was preparing myself to be overtaken- but alas I kept experimenting! I tried to keep my GPUs running experiments at all times (<code>while ps -p 12345; do sleep 60; python myscript.py</code> is a nifty trick to kick off a script once the previous completes). I also used google colab for additional experiments. 
Almost all my experiments failed or were inconclusive, but two ended up improving my score to 0.947 (2nd place public):</p> <p>1) Improving upon my "peak shift" code to find the perfect shift in peaks of each batch.</p> <p>2) Removing specific frequencies that contained noise (namely 50Hz) from the signal.</p> <p><img src="https://i.imgur.com/InuRxfZ.png" alt="0.947"></p> <p>And now onto the details:</p> <h2>Preprocessing</h2> <p><strong>Removing bad training data</strong> I started with drift free data as most people did. I also could visually identify that there were some outliers due to extreme noise that didn't resemble anything in the test set, so I removed that section of the training data completely:</p> <p><code> FILTER_TRAIN = '(time &amp;lt;= 47.6 or time &amp;gt; 48) and (time &amp;lt;= 364 or time &amp;gt; 382.4)' train = train.query(FILTER_TRAIN) </code> <strong>Removing noisy frequency</strong> I knew the 50Hz noise was from the hum that is created due to AC current in the amplifier. Having previously studied and worked in power engineering I knew that 50Hz (60Hz in the US) is only the <strong>approximate</strong> frequency of AC power. In reality it can lead or lag 50Hz due to many different factors. I wrote some code to identify where exactly the peak occurred for each batch- some batches it was 50.01Hz others 49.99Hz or 50Hz. The code then automatically tuned a notch filter with a Q value tuned with scipy to minimize the variance of the ~50Hz frequency with the surrounding frequencies amplitudes. The allowed it to "smoothly" filter out the 50Hz frequency with minimal disturbance to the rest of the signal.</p> <p>This notebook was helpful when I was first researching the idea: <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise</a></p> <p>And the code for this "auto 50Hz filter":</p> <p>``` def fix_50hz(tt, batch, Q = 250.0, f0 = 50, fs=10000, plot=True, write=True, domax=True, use_cleaned=False): """ Q: Quality factor f0: Frequency to be removed from signal (Hz) """ if use_cleaned: sig_sample = tt.loc[tt['sbatch'] == batch]['signal_cleaned_10s'].values else: sig_sample = tt.loc[tt['sbatch'] == batch]['signal'].values</p> <pre><code>tmin = tt.loc[tt['sbatch'] == batch]['time'].min() tmax = tt.loc[tt['sbatch'] == batch]['time'].max() chunique = tt.loc[tt['sbatch'] == batch]['open_channels'].nunique() fft = np.fft.fft(sig_sample) psd = np.abs(fft) ** 2 fftfreq = np.fft.fftfreq(len(psd),1/fs) # See if 50 hz is the max amplitude i_45_55 = (fftfreq &amp;gt;= (f0-5)) &amp;amp; (fftfreq &amp;lt;= (f0+5)) i_45_55_no_50 = ((fftfreq &amp;gt;= (f0-5)) &amp;amp; (fftfreq &amp;lt;= (f0+5))) &amp;amp; (fftfreq != f0) i_50 = fftfreq == 50 if domax: freq_with_max_ampl = fftfreq[i_45_55][np.argmax(psd[i_45_55])] if freq_with_max_ampl &amp;gt; (f0-1) and freq_with_max_ampl &amp;lt; (f0+1) and freq_with_max_ampl != f0: print(f'Making f0 {freq_with_max_ampl}') f0 = freq_with_max_ampl else: freq_with_max_ampl = fftfreq[i_45_55][np.argmin(psd[i_45_55])] if freq_with_max_ampl &amp;gt; (f0-1) and freq_with_max_ampl &amp;lt; (f0+1) and freq_with_max_ampl != f0: print(f'Making f0 {freq_with_max_ampl}') f0 = freq_with_max_ampl # Design notch filter to make frequency f0 similar to average def notch_filter(Q): b, a = signal.iirnotch(f0, Q, fs) # Filter and plot y = signal.filtfilt(b, a, sig_sample) fft_ = np.fft.fft(y) psd_ = np.abs(fft_) ** 2 fftfreq_ = np.fft.fftfreq(len(psd_),1/fs) ivar = ((fftfreq &amp;gt;= 
f0-2) &amp;amp; (fftfreq &amp;lt; f0-1)) | ((fftfreq &amp;gt;= f0+2) &amp;amp; (fftfreq &amp;lt; f0+1)) i_f0 = fftfreq_ == f0 abs_diff = np.abs(np.mean(psd[ivar]) - psd_[i_f0]) diff_others = np.mean(psd[ivar] - psd_[ivar]) return abs_diff + np.abs(diff_others / 2) res = minimize(notch_filter, [Q], tol=1e-10, method='Powell') #, bounds=((100, 5000),)) Q = res['x'] print(f'Using Q {Q}') b, a = signal.iirnotch(f0, Q, fs) # Filter and plot y = signal.filtfilt(b, a, sig_sample) fft_ = np.fft.fft(y) psd_ = np.abs(fft_) ** 2 fftfreq_ = np.fft.fftfreq(len(psd_),1/fs) if plot: fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 2)) i = abs(fftfreq) &amp;lt; 100 ax.grid() ax.plot(fftfreq[i], 20*np.log10(psd[i]), linewidth=1) ax.plot(fftfreq_[i], 20*np.log10(psd_[i]), linewidth=1) ax.set_title(f'Time: {tmin} to {tmax} - Unique Channels: {chunique} - Q: {Q}') ax.set_xlabel('Frequency (Hz)') ax.set_ylabel('PSD (dB)') ax.axvline(f0, color='red', alpha=0.2) ax.axvline(-f0, color='red', alpha=0.2) plt.show() if write: # Write Cleaned Signal to dataset tt.loc[tt['sbatch'] == batch, 'signal_cleaned_10s'] = y return tt </code></pre> <p>```</p> <p>This worked rather nicely at removing the 49-51Hz noise. I experimented running this on each batch individually, by 10 second intervals, and 1 second intervals. I also experimented with removing other "noisy" frequencies. Plots were used to validate the change (the blue area shows where the frequency was removed)</p> <p><img src="https://i.imgur.com/zdIlxfB.png" alt="Filter Example"></p> <p><strong>Peak Alignment</strong> The final step of preprocessing was lining up of peaks of the distribution for each batch. To achieve this I wrote a bit of code that minimized the difference in median signal between batches by <code>open_channel</code>. In doing some research I found that these types of "peak alignment" problems are quite common in various field and I skimmed a few papers about it. 
<a href="https://www.researchgate.net/publication/221816197_Model-based_peak_alignment_of_metabolomic_profiling_from_comprehensive_two-dimensional_gas_chromatography_mass_spectrometry">For instance this paper discusses peak alignment for chromatography</a></p> <p>And the code:</p> <p>``` def apply_shift(tt, shift_group, base_groups):</p> <pre><code>te1 = tt.query('sbatch == @shift_group')['time'].min() te2 = tt.query('sbatch == @shift_group')['time'].max() print(f'Running for Group {shift_group} - {te1} to {te2}') shift_channel_freq = tt.query('sbatch == @shift_group')['open_channels'].value_counts().sort_values(ascending=False).index base_channels = tt.query('sbatch in @base_groups')['open_channels'].unique() for m in shift_channel_freq: if m in base_channels: print(f'Lining up on channel {m}') most_freq_channel = m break m1 = tt.query('sbatch in @base_groups and open_channels == @most_freq_channel')['signal_fixed'].median() m2 = tt.query('sbatch == @shift_group and open_channels == @most_freq_channel')['signal'].median() shift_start = m1 - m2 print('Base Shift: ', shift_start) def overlap_weighted(shift, shift_group=shift_group, base_groups=base_groups, tt=tt): shift_data = tt.query('sbatch == @shift_group').copy() shift_data['signal_fixed'] = shift_data['signal'] + shift base_data = tt.query('sbatch in @base_groups').copy() shift_times = shift_data['time'].values common_channels = np.intersect1d(base_data['open_channels'].unique(), shift_data['open_channels'].unique()) base_data = base_data.query('open_channels in <a href="/common">@common</a>_channels') shift_data = shift_data.query('open_channels in <a href="/common">@common</a>_channels') shift_medians = shift_data.groupby('open_channels')['signal_fixed'].median() base_medians = base_data.groupby('open_channels')['signal_fixed'].median() shift_counts = shift_data.groupby('open_channels')['signal_fixed'].count() base_counts = base_data.groupby('open_channels')['signal_fixed'].count() weighted_diff = ((shift_medians - base_medians) * shift_counts).mean() / shift_counts.sum() print(f'Shift: {shift[0]:0.6f} : Weighted Diff {weighted_diff:0.20f}', end="\r", flush=True) return np.abs(weighted_diff) print('') res = minimize(overlap_weighted, shift_start, method='Powell', tol=1e-2) best_shift = res['x'] print('') print('best_shift', best_shift) te1 = tt.query('sbatch == @shift_group')['time'].min() te2 = tt.query('sbatch == @shift_group')['time'].max() tt.loc[(tt['time'] &amp;gt;= te1) &amp;amp; (tt['time'] &amp;lt;= te2), 'signal_fixed'] = \ tt.loc[(tt['time'] &amp;gt;= te1) &amp;amp; (tt['time'] &amp;lt;= te2)]['signal'] + best_shift fig, ax = plt.subplots(1, 1, sharex=True, figsize=(15, 3)) tt.query('sbatch == @shift_group')['signal_fixed'].plot(kind='kde', ax=ax, title='Shift Set') </code></pre> <p>['signal'].median().iteritems(): @shift_group').groupby('open_channels')['signal_fixed'].median().iteritems(): ax.axvline(i[1], color='orange', ls='--') tt.query('sbatch in @base_groups')['signal_fixed'].plot(kind='kde', ax=ax, title='Base set') for i in tt.query('sbatch in @base_groups').groupby('open_channels')['signal_fixed'].median().iteritems(): ax.axvline(i[1], color='red', ls='--') plt.show() return tt</p> <p>``` Here are two examples of the output from this code:</p> <p><img src="https://i.imgur.com/rhXoFNf.png" alt="Peak 1"></p> <p><img src="https://i.imgur.com/6lEbbJk.png" alt="Peak 2"></p> <p>The one issue with the peak alignment process was that for the training set I knew the true open channels. 
For the test set I didn't know them, so I used my previous model's predictions. I tried using out of fold values on the training set- but it didn't work as well. My thinking was that eventually my test alignment would reach an "optimum" shift.</p> <p>Still, I knew this left me open to the possibility of overfitting to my own predictions. I tried many other approaches for lining up the peaks- but decided that this seemed to work the best despite the risks.</p> <p>The final result was a very clean dataset with peaks of each channel aligned.</p> <p><img src="https://i.imgur.com/keGEZ77.png" alt="Clean 1"> <img src="https://i.imgur.com/4NiMJGh.png" alt="Clean 2"></p> <h2>Features</h2> <p>As I mentioned above the most important feature I found was target encoding. I used the <code>KBinsDiscretizer</code> to first divide the signal into different bins and then ran target encoding on that. I didn't scale the target encoding but did scale all other features. Other features were lags +/- 5 samples, signal^2 and the 50Hz features.</p> <p>The following is the function I used to create target encoding. It makes use of the category-encoders package <a href="https://pypi.org/project/category-encoders/">https://pypi.org/project/category-encoders/</a></p> <p>``` def add_target_encoding(tr_df, val_df, test_df, features, n_bins=500, strategy='uniform', feature='signal', smoothing=1): if smoothing == 1: sm = '' else: sm = '_' + str(smoothing) kbd = KBinsDiscretizer(n_bins=n_bins, encode='ordinal', strategy=strategy) tr_df[f'{feature}_{n_bins}bins{sm}'] = kbd.fit_transform(tr_df[feature].values.reshape(-1, 1)) val_df[f'{feature}_{n_bins}bins{sm}'] = kbd.transform(val_df[feature].values.reshape(-1, 1)) test_df[f'{feature}_{n_bins}bins{sm}'] = kbd.transform(test_df[feature].values.reshape(-1, 1))</p> <pre><code>tr_df[f'{feature}_{n_bins}bins{sm}'] = tr_df[f'{feature}_{n_bins}bins{sm}'].astype('category') val_df[f'{feature}_{n_bins}bins{sm}'] = val_df[f'{feature}_{n_bins}bins{sm}'].astype('category') test_df[f'{feature}_{n_bins}bins{sm}'] = test_df[f'{feature}_{n_bins}bins{sm}'].astype('category') te = ce.target_encoder.TargetEncoder(verbose=1, smoothing=smoothing) tr_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.fit_transform(tr_df[f'{feature}_{n_bins}bins{sm}'], tr_df['open_channels']) val_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.transform(val_df[f'{feature}_{n_bins}bins{sm}']) test_df[f'target_encode_{feature}_{n_bins}bins{sm}'] = te.transform(test_df[f'{feature}_{n_bins}bins{sm}']) te_col_name = f'target_encode_{feature}_{n_bins}bins{sm}' if te_col_name not in features: features.append(te_col_name) return tr_df, val_df, test_df, features </code></pre> <p>```</p> <p>Every so often I would permutation importance on my models to see if any features could be dropped.</p> <p>An example of the exact features used in one of my final models is:</p> <p><code> ['signal', 'f_DC', 'f_50Hz', 'f_n100Hz', 'f_2xn100', 'signal_mm_scaled', 'signal_round2', 'signal_shift_pos_1', 'signal_shift_neg_1', 'signal_shift_pos_2', 'signal_shift_neg_2', 'signal_shift_pos_3', 'signal_shift_neg_3', 'signal_shift_pos_4', 'signal_shift_neg_4', 'signal_shift_pos_5', 'signal_shift_neg_5', 'signal_2', 'target_encode_signal_round2_500bins_10', 'target_encode_signal_300bins', 'target_encode_signal_600bins', 'target_encode_signal_100bins', 'target_encode_signal_1000bins'] </code></p> <h2>Model: Wavenet+LSTM</h2> <p>My model ended up being based off of the one found in the wavenet-keras kernel <a 
href="https://www.kaggle.com/siavrez/wavenet-keras">https://www.kaggle.com/siavrez/wavenet-keras</a> (thanks <a href="/siavrez">@siavrez</a>). I wanted to upvote his notebook so badly during the competition but I thought it might give people a clue as to what I was using. I've since upvoted it and wish I could upvote it 10x it was so helpful.</p> <p>Building off of the wavenet from that notebook I added:</p> <ul> <li>LSTM right after the wavenet into a softmax output</li> <li>Added GaussianNoise seemed to help</li> <li>Tuned learning rate scheduler</li> </ul> <p>Things that didn't work:</p> <ul> <li>GRU wasn't as good as LSTM</li> <li>GRU + LSTM and concat results</li> <li>Change activation functions</li> <li>Different levels of GaussianNoise</li> <li>32, 64, 92, 128 sized LSTM (92 seemed to work best)</li> <li>Lots of different settings to the wavenet but nothing conclusively improved.</li> <li>Lots of different learning rate setups</li> <li>Modeling types seperately</li> <li>Removing training data in attempt to more closely resemble public or private test data.</li> </ul> <p>I tried many, many different ideas, an example of one my final models looked like this:</p> <p>``` inp = Input(shape = (shape_)) x = wave_block(inp, 16, 3, 12) x = GaussianNoise(0.01)(x) x = wave_block(x, 32, 3, 8) x = GaussianNoise(0.01)(x) x = wave_block(x, 64, 3, 4) x = GaussianNoise(0.01)(x) x = wave_block(x, 128, 3, 1) x = LSTM(96, return_sequences=True)(x) out = Dense(11, activation = 'softmax', dtype='float32', name='predictions')(x) model = models.Model(inputs = inp, outputs = out)</p> <p>```</p> <h2>Augmentation</h2> <p>Early on I found that flipping the signal during training to double the data size helped. I also flipped for TTA (used flipping on the test data when predicting and averaged the results). I tried adding noise to the raw signal but that only made CV and LB worse.</p> <h2>The final week- training and blending</h2> <p>There were a lot of things that were difficult about doing a competition solo - but this part was probably the hardest. I didn't have anyone to consult with about how to make my final ensemble and blend and had to rely on my gut, in the end it ended up working out - but I also had a bit of luck.</p> <p>Since I was worried about already being slightly overfit in my peak alignment I ended up taking my best 40 or so submissions and blending the raw probabilities before doing post processing. For one of my final submissions I just took a straight average across the 40 models. The second submission I weighed each prediction by it's position on the public LB. I also very closely inspected the correlation of each sub, comparing public vs private LB, and also comparing by model type. My main concern was with the 10 channel section in the private test set. I noticed that was the section that had the highest variance across submissions, and it would have a huge impact on the final score because each class is weighed equally with the macroF1 metric.</p> <p>Example plot correlation of submission files ordered by public LB score: <img src="https://i.imgur.com/8fBLOgr.png" alt="Corr"></p> <h2>Post Processing</h2> <p>For post processing I merged two ideas from other kagglers. The first was from Chris' solution writeup about<a href="https://www.kaggle.com/c/bengaliai-cv19/discussion/136021"> "hacking macro recall" in the bengali competition</a>. The second was the f1 optimization class being used in notebooks. 
What makes this code unique is that it optimizes using the raw probabilities for each class and not the class predictions themselves.</p> <p>The resulting code was this:</p> <pre><code>class ArgmaxF1Optimizer(object):
    """Class for optimizing predictions using argmax"""

    def __init__(self, initial_exp=-1.2, method='nelder-mead', verbose=True):
        self.initial_exp = initial_exp
        self.opt_result = None
        self.method = method
        self.verbose = verbose

    def apply_pp(self, X, EXP):
        preds_argmax = np.argmax(X, axis=1)
        s = pd.Series(preds_argmax)
        vc = s.value_counts().sort_index()
        df = pd.DataFrame({'a': np.arange(11), 'b': np.ones(11)})
        df.b = df.a.map(vc)
        df.fillna(df.b.min(), inplace=True)
        mat1 = np.diag(df.b.astype('float32') ** EXP)
        return np.argmax(X.dot(mat1), axis=1)

    def apply_pp_f1_score(self, EXP, X, y):
        preds_pp = self.apply_pp(X, EXP)
        score = -macro_f1_score(y, preds_pp)
        if self.verbose:
            print(f'Exp {EXP}, score {score}')
        return score

    def fit(self, X, y):
        loss_partial = partial(self.apply_pp_f1_score, X=X, y=y)
        if self.method == 'nelder-mead':
            self.opt_result = minimize(loss_partial, self.initial_exp, method='nelder-mead')
            self.opt_exp = self.opt_result['x']
        elif self.method == 'brute':
            rranges = ([slice(-1, 1, 0.05)])
            self.opt_result = brute(loss_partial, rranges, full_output=True, finish=fmin)
            self.opt_exp = self.opt_result[0][0]
        if self.verbose:
            print(f'Optimal exp value {self.opt_exp}')
        score_argmax = macro_f1_score(y, np.argmax(X, axis=1))
        score_pp = -self.apply_pp_f1_score(self.opt_exp, X, y)
        print(f'Score raw: {score_argmax:0.5f}')
        print(f'Score post processed: {score_pp:0.5f}')
        self.score_argmax = score_argmax
        self.score_pp = score_pp

    def predict(self, X, y=None):
        if self.opt_exp is None:
            print('[ERROR] Must first run fit method')
            return
        return self.apply_pp(X, self.opt_exp)

    def fit_predict(self, X, y):
        self.fit(X, y)
        return self.predict(X)
</code></pre> <p>I wanted to be extremely careful when applying post processing. I knew it helped on the public leaderboard but could never be 100% confident about how much to apply to the private. To help me decide how much post processing to apply, I wrote code that:</p> <ul> <li>Used the OOF predictions from all of the models in the blend.</li> <li>Simulated the exact same distributions as the public and private test set (matching both model type and open channel)</li> <li>Repeated this process 100x with different seeds. The reason why I did this is because the F1 score would fluctuate +/- 0.001 depending on which samples were selected.</li> <li>Ran my F1 optimization code on each simulation.</li> <li>Took the average <code>EXP</code> from the 100 synthetic cases and used that to post process my final submission.</li> <li>Note this was done separately on the public and private test sets, as they are scored separately. 
</li> </ul> <p>The output of this simulation looked something like this:</p> <p><code> 0 Raw public: 0.94290 : -0.14906 : 0.94304 Raw private: 0.94278 : -0.14000 : 0.94289 1 Raw public: 0.94391 : -0.01950 : 0.94390 Raw private: 0.94377 : -0.02000 : 0.94374 2 Raw public: 0.94353 : -0.13569 : 0.94378 Raw private: 0.94325 : -0.12837 : 0.94349 3 Raw public: 0.94336 : -0.02237 : 0.94338 Raw private: 0.94341 : -0.02000 : 0.94344 4 Raw public: 0.94361 : -0.10450 : 0.94381 Raw private: 0.94375 : -0.09656 : 0.94391 5 Raw public: 0.94441 : -0.09394 : 0.94453 Raw private: 0.94439 : -0.04750 : 0.94449 6 Raw public: 0.94412 : -0.10000 : 0.94427 Raw private: 0.94413 : -0.02200 : 0.94429 7 Raw public: 0.94413 : -0.06891 : 0.94418 Raw private: 0.94435 : -0.04828 : 0.94440 8 Raw public: 0.94384 : -0.13325 : 0.94410 Raw private: 0.94374 : -0.12375 : 0.94397 9 Raw public: 0.94363 : -0.01900 : 0.94365 Raw private: 0.94342 : -0.04100 : 0.94344 10 Raw public: 0.94333 : -0.04812 : 0.94341 Raw private: 0.94324 : -0.04550 : 0.94329 11 Raw public: 0.94339 : -0.09656 : 0.94354 Raw private: 0.94356 : -0.08803 : 0.94373 12 Raw public: 0.94378 : -0.09000 : 0.94385 Raw private: 0.94380 : -0.04200 : 0.94382 13 Raw public: 0.94385 : -0.09112 : 0.94403 Raw private: 0.94365 : -0.04750 : 0.94380 14 Raw public: 0.94380 : -0.09562 : 0.94403 Raw private: 0.94394 : -0.08775 : 0.94417 .... </code></p> <p>The distribution then looked like this, and I used the mean value when applying post processing to my submission:</p> <p><img src="https://i.imgur.com/w59k3O5.png" alt="exp selection"></p> <h2>Things I still can't explain.</h2> <p>There are still some lingering questions I have.</p> <ul> <li><strong>What are these anchor points?</strong> [-2.5002, -1.2502, -0.0002, 1.2498, 2.4998, 3.7498]. I noticed in the signal there is a spike in unique value counts just to the right of the signal distribution for each open channel. You can see them clearly in this plot. I called them "anchor points" because they almost always were labeled as the open_channel even though they were to the right of the mean value.</li> </ul> <p><img src="https://i.imgur.com/zAyMkMy.png" alt="spike"></p> <p>I probably spent 2 or 3 days working on this and thought it might be helpful for lining up peaks. I still don't have a good explanation for why they exist. I even wrote an optimization function that shifted the drift batches such that when rounding to 4 decimal places it would maximize these values. It resulted in a worse LB score.</p> <ul> <li><strong>Where do new batches start?</strong> </li> </ul> <p>Batch sizes are 10 seconds or 50 seconds.. I think. I considered seconds 0-50 to be one batch until closer inspection of the rolling median value. You can see a clear shift in the median value at 10 seconds. I found the same thing in the private test data. Because I was unsure I decided to split at these points and did my "peak alignment" on these separately.</p> <p><img src="https://i.imgur.com/oquVORO.png" alt="shift"></p> <h2>Conclusion</h2> <p>I ended up shaking down 9 spots in the private leaderboard to 11th place. My best single model scored 0.94547 private and had the potential for 7th place - but there was no way I would've selected it. On the other hand my best public LB score model had a private LB score of 0.94211! 
I never considered selecting it either.</p> <p><img src="https://i.imgur.com/yzEletG.png" alt="selectedsub"></p> <p>I'm sure there are things I could've done to perform better in the shakeup but I was just happy that it was good enough to stay in the gold range.</p> <p>Depending on time and interest - I may create a notebook that runs my code end-to-end.</p> <p>That's it. I hope you found this writeup helpful!</p>
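<p>As a small appendix to the Augmentation section above, here is a minimal sketch of the flip / TTA idea. This is not the code used in the solution; it assumes "flipping" means reversing the sequence in time and that the model outputs per-timestep class probabilities of shape (n, seq_len, 11).</p>
<pre><code>import numpy as np

def predict_with_flip_tta(model, X):
    """Average predictions over the original and time-reversed signal.

    X is assumed to have shape (n_sequences, seq_len, n_features).
    """
    preds = model.predict(X)
    # Predict on the reversed sequences, then un-reverse so timesteps line up
    preds_flip = model.predict(X[:, ::-1, :])[:, ::-1, :]
    return (preds + preds_flip) / 2.0
</code></pre>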
University of Liverpool - Ion Switching
8th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners and especially to Gilles &amp; Kha Vo &amp; Zidmie for an excellent effort leading the public lb and private non-leak scores. Despite the simulated nature of the data it was fun trying to piece together all of the aspects of the problem.</p> <p><strong>Initial Approach</strong></p> <p>Drift was removed from the raw data in the generally accepted manner using a sine curve as identified by Chris Deotte, Markus and Eunho.</p> <p>A simple regression of the (cleaned of drift) signal against the target produced a reasonably good fit and thus I adopted the idea of defining the residuals from this regression (reg0) as <em>noise</em> which I could then attempt to model subsequently.</p> <p>I also adopt the designation of the batches into five separate models depending on their nature as defined by Chris Deotte in his early notebook “One Feature Model”. I denote these separate models as m1s, m1f, m3, m5 and m10 depending on the maximum number of channels open in each batch and a further subdivision of the seemingly binary sections of the target data into ‘slow’ and ‘fast’.</p> <p>(reg0) signal = 1.2336 x target – 2.7337 + <em>noise</em> m1s,m1f,m3,m5 signal = 1.2336 x target – 2.7337x2 + <em>noise</em> m10</p> <p>Visual inspection of the <em>noise</em> over time suggested two regions of excessive <em>noise</em>, a large patch in m3 and a smaller one in m1s. These regions were excluded from all further analysis.</p> <p>Even though the regression coefficients for the <em>noise</em> regression of signal against target were approximately the same for all models, the standard deviations of the residuals were different for each model suggesting that analysing the five models separately would probably be easier. The intercept for the m10 was almost exactly twice that of the others – see later.</p> <p><em><strong>Noise</strong></em><strong> Modelling</strong></p> <p>A set of simple features were first created for each model, initially for input into lightgbm. These were the signal and its leads and lags up to 14 time periods. In addition, I calculated the rolling 10-period min and max for both lead and lag for each observation as well as the rolling min and max for a 20-period lead and lag ahead of or behind the first 10 periods. For an observation of the signal Xt, the features here are Xt-1…Xt-14, Xt+1…Xt+14, min(Xt-1..Xt-10), min(Xt-11..Xt-30), min(Xt+1..Xt+10), min(Xt+11..Xt+30), max(Xt-1..Xt-10), max(Xt-11..Xt-30), max(Xt+1..Xt+10), max(Xt+11..Xt+30).</p> <p>I first used these features to estimate the signal itself in a lightgbm model. This estimated signal then formed the basis of another set of features in a similar manner to those I first calculated. Finally, I added a number of features based on the rolling min and max of the signal relative to each observation, e.g., min(Xt-1-Xt..Xt-10-Xt).</p> <p>These features then were used in lightgbm to estimate the <em>noise</em> for both test and oof. These <em>noise</em> estimates could then be reversed back using the original regression (reg0) to estimate the target and calculate F1 metrics. 
Note that I found that only including 3 observation leads and lags for the signal (rather than 14) produced the best results here.</p> <p><strong>Further Cleaning of the Signal</strong></p> <p>Frequency analysis of the <em>noise</em> showed up clear contamination with a regular signal at about 50Hz along with higher frequencies with lower amplitudes. Attempting to remove this led me to find that each 10 second minibatch had a slightly different frequency (and phase) around 50Hz. My lack of expertise in signal processing led me to perform an exhaustive search for the best fit for each minibatch. Here I noticed some differences between models. Models m1s, m3 and m5 all had easy to spot signals in 10 second minibatches. Model m1f seems to have 1 second minibatches. Model m10, however, has a more complicated structure with one or two peaks for every 10 second minibatch. More on this later. Removing the 50Hz contamination is reasonably straightforward for the train data but fortunately my <em>noise</em> estimates from the previous lgb also showed the same contamination (though with smaller amplitudes for m5 and m10). I was thus able to estimate the contamination for the test data and remove it. </p> <p>With my newly cleaned signal I was then able to repeat all of the previous steps. I obtained a new <em>noise</em> estimate (which produced much better results in cv and on the public lb). Further frequency analysis showed up harmonics of the 50Hz still contaminating the signal. The most prominent were at (50x23)Hz, (50x25)Hz and (50x29)Hz. For each of these harmonics I removed it from the signal and repeated all the steps so far, seeing whether it improved the cv each time. For m1s and m1f just the original 50Hz contamination removal was as good as it got. Model m3 had 2 cycles, m10 had 4 cycles. The m5 model kept going in this fashion for 7 cycles. (Note the seventh cycle for m5 was not a harmonic – maybe something else was contaminating the signal?)</p> <p>I thus ended up with what I hope are fully clean signals for each model.</p> <p><strong>Different Types of Modelling</strong></p> <p>As well as using the lgb modelling, I also experimented with neural nets using Keras. Ultimately a simple 2-layer dense net trained with same features I built for the lgb model proved to be the best I could do.</p> <p><strong>Funky Loss Objectives</strong></p> <p>Noticing that the exact error doesn’t really matter when it is small or large but mainly when it is close to the boundary between the discrete channel numbers, I experimented with different loss objectives in both lgb and keras. Both of these showed better results on cv and public lb using a cubic error term compared to plain mean squared error. A quartic error term was also ok giving similar results to a cubic term.</p> <p><strong>Model m10 and Data Augmentation</strong></p> <p>There were a couple of clues in preceding sections. The intercept in the original regression (reg0) was twice as large for m10 compared to the others. Also, the 50Hz contamination showed single or double peaks for m10 only. Eventually I realised that the m10 model was comprised of the sum of two separate m5 models. The double peaks appear to be an interference pattern from two peaks from the m5 models. In addition, the variance of the <em>noise</em> for an m10 model is almost exactly twice that of an m5 model.</p> <p>This meant that I could augment the m10 data by creating artificial data using the m5 data that we have and training on that. 
I created new data that looked like the m10 data by combining each 10 second minibatch of m5 data with the other 10 second minibatches for a total of 45 minibatches of new data. The hope that the data was similar enough to use for training was backed up by a small but worthwhile improvement in cv.</p> <p><strong>Ensembling and Optimization</strong></p> <p>For model m10 I used an ensemble of approaches utilising cubic and squared objectives, lgb and keras modelling. [This ensemble was then optimised slightly to obtain the maximum f1_macro score in cv.] For models m5, m3, m1f and m1s I used the single model that was best in cv. The rank order of my best m10 models in cv seemed to show little in common with the rank order on the public lb so it was hard to choose my final submissions. In the end I opted for a weighted ensemble of four m10 models that was good in cv (and fortunately ok on the leaderboards). This ensemble scored 0.94443 in cv overall and gave 0.94685 on public lb, 0.94539 on private lb, good enough for 8th place.</p>
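<p>For readers who want to try the "funky loss objectives" idea, here is a sketch of a cubic-error custom objective for LightGBM. It is not the code used in the solution (the exact form is not given in the write-up); the small constant added to the hessian is an extra assumption to keep it strictly positive.</p>
<pre><code>import numpy as np

def cubic_objective(y_pred, dtrain):
    """LightGBM custom objective for mean |error|^3.

    Returns the gradient and hessian of |e|^3 with respect to the prediction.
    """
    e = y_pred - dtrain.get_label()
    grad = 3.0 * e * np.abs(e)          # d/dpred |e|^3
    hess = 6.0 * np.abs(e) + 1e-6       # second derivative, kept positive
    return grad, hess
</code></pre>
<p>Depending on the LightGBM version, a callable objective like this is passed either as <code>fobj</code> to <code>lgb.train</code> or as the <code>objective</code> entry of the params dict.</p>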
University of Liverpool - Ion Switching
13th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners; during the last week the competition was very hard!</p> <p>The most difficult part was to find a good way to clean the signal micro-drifting (which is different across batches and between train and test). To do that, I fit a Gaussian mixture to batches of 100k points and linearly 'lift' the signal so that the Gaussian means match a set of precalculated fixed signal levels, common for train and test data. The Gaussian mixture fit also gives an unsupervised estimate of the open channel probability for both train and test.</p> <p>Using that probability as a guess for the open channels, it is easy to spot the 50Hz and harmonic AC components using the FFT and remove them surgically (see details of the calculations in this notebook: <a href="https://www.kaggle.com/vicensgaitan/1-remove-drift-ac">https://www.kaggle.com/vicensgaitan/1-remove-drift-ac</a>).</p> <p>I first tried lightgbm for modeling, so some feature engineering was needed. I built some time-symmetrical rolling features (averaging over the signal and the time-reversed signal). Tree-based models using these features and the probabilities from the Gaussian mixture scored around 0.942 on the public LB.</p> <p>The game changer was the WaveNet architecture. Using the same set of variables with a plain WaveNet (no LSTM, no batch normalization, no dropout, no fancy heads) and cross-entropy loss, we can easily achieve 0.946 public (0.945 private). A single model can do almost gold: <a href="https://www.kaggle.com/vicensgaitan/2-wavenet-swa">https://www.kaggle.com/vicensgaitan/2-wavenet-swa</a></p> <p>My 13th position is a bagging of 5 models with different seeds and learning rates.</p> <p>I missed the 5+5 structure of the 10-channel data… too bad.</p> <p>There is still one mystery: the leaderboard results (public and private) are 0.004 points better than the CV value, correcting for batch composition. It seems that the test data has some 'leak' from the train data; maybe this is related to the high scores obtained by the competition winners.</p>
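<p>A minimal sketch of the per-batch Gaussian-mixture alignment described above (not the code from the linked notebook; the number of components and the purely additive shift are assumptions, since the write-up mentions a linear "lift"):</p>
<pre><code>import numpy as np
from sklearn.mixture import GaussianMixture

def align_batch(signal_batch, n_components=6, reference_zero=0.0):
    """Fit a 1-D Gaussian mixture to one 100k-point batch and shift the
    signal so its lowest fitted mean lands on a fixed reference level."""
    gm = GaussianMixture(n_components=n_components, random_state=0)
    gm.fit(signal_batch.reshape(-1, 1))
    lowest_mean = np.sort(gm.means_.ravel())[0]
    return signal_batch - (lowest_mean - reference_zero)
</code></pre>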
University of Liverpool - Ion Switching
59th place solution [Single Model]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Dear all, here is the model (GRU + WaveNet) that I used in this competition. This model achieves 0.943 on the public LB using only the cleaned signals. Additional features: K-Means clustering, transformed into a one-hot encoding. Loss: focal loss. Key points:</p> <ol> <li>Dropout reduces the overfitting.</li> <li>Concatenating the GRU and WaveNet outputs boosts the score.</li> </ol> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1970439%2Fce19354cb230dd9c9c44101ad088941c%2Fkaggle_model.png?generation=1590482812868883&amp;alt=media" alt=""></p>
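<p>A small sketch of the K-Means one-hot feature mentioned above (not the solution's code; the number of clusters and fitting on the raw signal values are assumptions):</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans

def kmeans_onehot_features(signal, n_clusters=11):
    """Cluster the signal values and one-hot encode the cluster id as an
    extra per-timestep feature. Returns an array of shape (len(signal), n_clusters)."""
    km = KMeans(n_clusters=n_clusters, random_state=0, n_init=10)
    labels = km.fit_predict(signal.reshape(-1, 1))
    return np.eye(n_clusters, dtype=np.float32)[labels]
</code></pre>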
University of Liverpool - Ion Switching
10th private (10th public) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We used 2 magic things in this comp: 1. Test was slightly shifted: we made it closer to the train distribution using cross-correlation; 2. Removing of 50Hz sine noise.</p> <p>In our view, models were not so important in this competition but Wavenet variations performed slightly better than other models; Our zoo is 6x Wavenets (cross-entropy &amp; focal losses), 2x CNN2d-Wavenet, 1x CNN2d, 1x LGBM.</p> <p>Our best models (both CV &amp; LB) are Wavenet on 4 features (cleaned signal, signal difference, signal min &amp; max in the 22-window) &amp; CNN2d-Wavenet. </p> <p>We didn't use the noisy part of training data (part of 8th batch) and blended fold predictions (EMA-decay helped a lot!).</p> <p>All our individual models (except LGBM) score gold on the private LB, but every blend performed better.</p> <p>My teammates will describe some parts of the solution in the comment section.</p>
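<p>One possible way to implement the first "magic" step above, shifting the test signal closer to the train distribution with cross-correlation (a sketch under assumed bin width; not the team's code):</p>
<pre><code>import numpy as np

def estimate_offset(train_signal, test_signal, bin_width=0.01):
    """Estimate a constant offset between two signal segments by
    cross-correlating their value histograms.

    Returns the amount to ADD to test_signal to align it with train_signal.
    """
    lo = min(train_signal.min(), test_signal.min())
    hi = max(train_signal.max(), test_signal.max()) + bin_width
    bins = np.arange(lo, hi, bin_width)
    h_train, _ = np.histogram(train_signal, bins=bins, density=True)
    h_test, _ = np.histogram(test_signal, bins=bins, density=True)
    corr = np.correlate(h_train, h_test, mode="full")
    lag = np.argmax(corr) - (len(h_test) - 1)
    return lag * bin_width
</code></pre>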
University of Liverpool - Ion Switching
310th place solution (large shake up solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>We learn more from failure than success</strong> 😊</p> <p>I would like to thank my talented teammates <a href="/kakoimasataka">@kakoimasataka</a> and <a href="/tonychenxyz">@tonychenxyz</a>. You guys are awesome, I learned a lot from you, thanks for all the wisdom :)</p> <p>Some of my favorite learnings from this competition (top 10):</p> <ol> <li><a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">Remove/Pick up electric? noise</a></li> <li><a href="https://www.youtube.com/watch?v=EqUfuT3CC8s">Markov Models</a></li> <li><a href="https://www.youtube.com/watch?v=5araDjcBHMQ">Hidden Markov Models</a></li> <li><a href="https://www.youtube.com/watch?v=spUNpyF58BY">But what is the Fourier Transform? A visual introduction.</a></li> <li><a href="https://en.wikipedia.org/wiki/Band-pass_filter">Band-pass filter</a></li> <li><a href="https://www.kaggle.com/mobassir/understanding-ion-switching-with-modeling">Understanding Ion-Switching with wavenet</a> and <a href="https://www.kaggle.com/sggpls/wavenet-with-shifted-rfc-proba">Wavenet with SHIFTED-RFC Proba</a></li> <li><a href="https://en.wikipedia.org/wiki/The_Hum">The Hum</a></li> <li><a href="https://en.wikipedia.org/wiki/Band-stop_filter">Band-stop filter</a></li> <li><a href="https://www.youtube.com/watch?v=CaCcOwJPytQ">What is a Kalman Filter?</a></li> <li><a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874">What is Drift?</a> and <a href="https://www.kaggle.com/tarunpaparaju/ion-switching-competition-signal-eda">Ion Switching Competition : Signal EDA</a></li> </ol> <h1>Our Model</h1> <p>We create a fake signal using Gaussian noise and 50 Hz noise. We use a Fourier transform to create the hum noise, and the mean and std to create the Gaussian noise. Then we pass our fake signal to the pipeline and a FEATURE is created; the FEATURE mainly uses the GMM and HMM results. We use the mean and std to help convergence, or to avoid convergence to local solutions. The average of each channel's signal is aligned to 0 based on the HMM predictions. We hoped that by doing so, we could use all the data to detect anomalies in even a small class, such as channel 10. We stopped using all the data along the way, though.</p> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2034058%2Ff5d3e7be570e31427da634b4662344ab%2FUntitled%20Diagram%20(1).png?generation=1590458481854840&amp;alt=media" alt=""></p> <p>If you don't understand what we mean by "Aligning the mean with vertical shift", then check this kernel: <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">Remove/Pick up electric? noise</a></p>
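<p>A rough sketch of the fake-signal idea described under "Our Model" above; all levels and amplitudes below are placeholder assumptions, not the team's values:</p>
<pre><code>import numpy as np

def make_fake_signal(open_channels, fs=10_000, gauss_std=0.25,
                     hum_amp=0.05, hum_freq=50.0):
    """Synthesize a signal from an open_channels sequence by adding Gaussian
    noise and a 50 Hz mains ("hum") component."""
    t = np.arange(len(open_channels)) / fs
    base = 1.0 * np.asarray(open_channels)   # assumed gain of 1 per open channel
    noise = np.random.normal(0.0, gauss_std, size=len(open_channels))
    hum = hum_amp * np.sin(2 * np.pi * hum_freq * t)
    return base + noise + hum
</code></pre>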
University of Liverpool - Ion Switching
23th Place Interesting Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, From the beginning of the competition and after reading the paper from the host, I've been thinking about 2 ways to improve the models. 1. Transfer learning from 1f models to 3\5\10 models ==&gt; Failed with HMM\RF\Wavenet... 2. Data augmentation from 1f signals\open_channels to 3\5\10 signals\open_channels: From the intuition that model with 3 max. open channels should be like 3 1f models running with summarized results ==&gt; <strong>Success</strong></p> <p><strong>For the second approach:</strong> 1. Split train\validation set in each fold 2. Random select signals\open_channels sequence (len=4000) from 1f models 3. Randomly picked, for example, 5 sequences from step 2. and sum up to create a sample with 5 max. open channels. 4. For each summed signal, adjust the mean\variance according to the corresponding mean\variance of the original data.</p> <p>My wavenet scored around <strong>cv .9388, lb .941</strong> After <strong>augmenting data for 5\10 open channels</strong>, the wavenet scores <strong>cv .9423, lb: .945</strong> I've shared my code here: <a href="https://www.kaggle.com/khyeh0719/wavenet-with-augmentation-2">https://www.kaggle.com/khyeh0719/wavenet-with-augmentation-2</a> which scores around .944 on public lb for a single model with single fold (no RFC features)</p> <p>Hopefully, this method could be used extensively to create signals\open_channels without actually generating the signal (but assumingly the mean\variance of each open channel state is known in advance :p)</p> <p>Anyways, I hope you like it and feel it interesting! Thanks Kaggle and the host for such an interesting competition, I really enjoyed it, and finally big congrats to all winners!</p>
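<p>A minimal sketch of the four-step augmentation recipe above (not the shared notebook's code; the mean/variance adjustment of step 4 is omitted and plain numpy arrays are assumed as input):</p>
<pre><code>import numpy as np

def augment_from_1f(signal_1f, channels_1f, n_combine=5, seq_len=4000,
                    n_samples=100, seed=0):
    """Create synthetic 'max n_combine open channels' sequences by summing
    randomly chosen 1-channel sequences (both signal and open_channels)."""
    rng = np.random.default_rng(seed)
    signal_1f = np.asarray(signal_1f)
    channels_1f = np.asarray(channels_1f)
    n_seq = len(signal_1f) // seq_len
    sig = signal_1f[: n_seq * seq_len].reshape(n_seq, seq_len)
    chan = channels_1f[: n_seq * seq_len].reshape(n_seq, seq_len)
    aug_sig, aug_chan = [], []
    for _ in range(n_samples):
        idx = rng.choice(n_seq, size=n_combine, replace=False)
        aug_sig.append(sig[idx].sum(axis=0))
        aug_chan.append(chan[idx].sum(axis=0))
    return np.stack(aug_sig), np.stack(aug_chan)
</code></pre>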
University of Liverpool - Ion Switching
Brief 3rd place solution (Kha Vo view)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank Kaggle, the host <a href="/richardbj">@richardbj</a> , all competitors, and especially my two wonderful teammates <a href="/group16">@group16</a> <a href="/zidmie">@zidmie</a> who have quickest and brightest mindsets that I've ever luckily been working with. In the end, we are a little disappointed by the leak present in two top teams, given the tremendous amount of work in this competition as an optimization process. My teammates are writing towards a very detailed blog post about the solution. Now is their bedtime, so I will briefly write our solution. </p> <p>FULL BLOG SOLUTION: <a href="https://medium.com/@gillesvandewiele/334fab86fc85">https://medium.com/@gillesvandewiele/334fab86fc85</a></p> <p>NOTEBOOKS: Authentic Modelling: <a href="https://www.kaggle.com/khahuras/hmm-m318">https://www.kaggle.com/khahuras/hmm-m318</a> (private 0.94575) Leak Exploitation (updated soon)</p> <p>LEAK EXPLAINED <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824">https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824</a></p> <p>----------------- BRIEF SOLUTION ---------------------</p> <p>PREPROCESSING We removed drift, add offset/gain to make signal-round(signal) = noise, and we removed some sine peaks, notably 50Hz. We also found some small peaks around 350Hz and 1150Hz. Removing all these is not a perfect process, so we did that by different ways. In the end, we got a few slightly different signals for modelling.</p> <p>MODEL</p> <p>Ptran and state discovery</p> <p>We use a single HMM model, with highly customized forward-backward algorithm. The key thing in the modelling process is to decompose the signal of high cat into subprocesses of lower cats. Let us denote the signal categories as cat 1, 2, 3, 4, 5, 6, corresponding to maximum open_channels 1, 1, 3, 5, 10, 4 (cat 6 only appears in test). Starting from cat1 which has Ptran as a 4x4 matrix of states [0, 0, 1, 1], we extend this matrix into a bigger one for each higher cat. For example: cat3 is combined by three cat1 processes, then it has Ptran of size 20x20, where each state can get value in [0, 1, 2, 3], and it is the combination of 3 states of 0 or 1, which are 3 states of 3 processes of [0, 0, 1, 1]. The similar thing is done for cat4 and cat5, where cat4 has Ptran of size 56x56, and cat5 has Ptran of size 286x286. We optimize the raw matrix (4x4) as an optimization problem. </p> <p>Algorithm</p> <p>We made a highly customized fw-bw algorithm. If we denote the probability of signal at t falling in each state as s(t) (where each state is a row in the expanded Ptran matrix), then the normal forward pass is (roughly, and here for simplicity I skip normalization step for each time step calculation) we denote alpha0(t) = alpha0(t-1) x Ptran. Similarly the normal backward pass stopped at t would be beta0(t) = beta0(t+1) x Ptran^T, then pred(t) = alpha(t) * beta(t) * Psig. Here, Psig is the Gaussian emission of the noise (signal - round(signal)) at each timepoint, which we can optimize as parameters too, and pred is the prediction. 
We found that by integrating a "memory" of forward in doing backpass, the score can be boosted. So the customized procedure is beta1(t) = [(beta1(t+1) x Ptran^T )^coef1] * [(alpha0(t))^(1 - coef1)].</p> <p>This procedure got us better beta. Similarly, we perform another pass where we use beta1 as "stacked" features for alpha2, which is alpha2(t) = [(alpha2(t-1) x Ptran )^coef2] * [(beta1(t))^(1 - coef2)]. In the end, a customized sum of these alpha's, beta's got us final result as: pred = [ c1 * (beta1) + (1-c1-c2) * alpha2 + c2 * (beta1 * alpha2) ] * Psig, where c1, c2 are parameters to be optimized too.</p> <p>BLENDING and THRESHOLDING</p> <p>Given the pred, now we have the probability of each state (state here is a state of 4 states [0 0 1 1] of cat1, or 20 states [ 0 0 0 1 1 1 2 2 .....] (maximum 3) of cat 3. We take optimized averaged sum of several preds from different input preprocessed signals, and then simply calculate the single float prediction by states @ pred. This gives us a prediction like [0.45, 1.23, 0.24, 0.98, 3.55,...]. Now we need to threshold this pred into integer. We do that by an "unsupervised" method. For each signal batch of 100,000 points, we compute how much percentage that signal-round(signal) has absolute value less than a small value (such as 0.1, or 0.25 depending on cat). Then, in that 100K signal batch, we assign the same amount of labels from the prediction spectrum for each open_channel prediction. This method is to ensure that the label distribution estimation for each batch is applied to the prediction, so the prediction will roughly have the same label distribution as "good signal points" in that batch. This simple method seems to be very robust to overfitting, and quite fast. </p> <p>The fully optimized single HMM model like this can got us as high as 0.94584 on private LB.</p> <p>I am sorry if my write-up is too hard to understand. Questions are welcomed.</p> <p>WHAT DISTILLED IN MY MIND</p> <p>One should approach this competition as an optimization problem with an element of machine learning, rather than a machine learning problem with an element of optimization. We found that pure optimization works really great for this synthetic data, and no serious overfitting has been observed. That's why we can quickly escalate on LB.</p> <p>This is an important result for each of us. Personally I truly thank you to all of you again for this great competition, and again especially, to my two wonderful teammates, which I learn a lot from them. We will make our demo Kaggle notebook code public immediately, after getting the green light from the host <a href="/richardbj">@richardbj</a> . And we will post another detailed solution write-up later, together with a full-scale github repo.</p> <p>Thanks for reading!</p> <p>EDIT: (scroll to the start of topic) + Full blog solution posted + Kaggle solution notebook published + Leak explained </p>
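<p>As an appendix, a small sketch of the rank-based thresholding described in the BLENDING and THRESHOLDING section above (not the team's code; it assumes the per-class counts for a 100K batch have already been estimated, e.g. from the fraction of "good" signal points):</p>
<pre><code>import numpy as np

def threshold_to_distribution(float_preds, class_counts):
    """Turn continuous predictions into integer open_channels so that the
    resulting label distribution matches class_counts (one count per class,
    summing to len(float_preds)), by assigning labels in rank order."""
    order = np.argsort(float_preds)
    labels = np.empty(len(float_preds), dtype=int)
    start = 0
    for cls, count in enumerate(class_counts):
        labels[order[start:start + count]] = cls
        start += count
    return labels
</code></pre>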
University of Liverpool - Ion Switching
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I would like to thank Kaggle and the host @richardbj for organizing this competition. I find the competitions with scientific and mathematical background the most interesting. Of course a leak is always bitter. The solution I created is the average of 4 very similar Keras models. They are based on encoding the signals based on the results of Gaussian Mixture Models, initial processing (head) using Dense and BiGRU layers for feature creation, a Wavenet Model for main processing and final steps creating the classification using Dense layers. Additionally two tree models based on Random Forest Classifier (RFC) and Histogram-based Gradient Boosting Classification (HGB) are used as stacked input. Following is a description of the individual parts</p> <h2>Data Preparation and Preprocessing</h2> <p>The data is separated into batches of 500000 records each. This is done with the dataset df (for train and test) through df.batch = df.index // 500000. In the same manner a split into segments of 100000 records and windows of 10000 records are created. </p> <p>Initially the segments can be classified into segment types 1,...5 based on the values of mean standard deviation over the segments. </p> <p>As identifed by @cdeotte the signals contain a sinusoidal drift. This drift is removed automatically for train and test data using a python script. This is done by fitting and subtracting a sinusoidal curve from the signal over a complete batch and then a segment if not possible for the batch. </p> <p>The important part here as well as for the definition of segment types is to do this automatically without any visual inspection and being able to execute the scripts on new test data. Otherwise the solution would be invalid.</p> <p>The resulting signals are visualized below as discussed in the forum already. For the train data with batch number <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fbc203e2b2a812b4bf9e4eb78b5c72764%2Fimage1.png?generation=1590499148853904&amp;alt=media" alt=""> and the test data with segment id. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fcc4ca985c2b1673c799692ef8e8a5230%2Fimage2.png?generation=1590499399969186&amp;alt=media" alt=""></p> <p>The training data contains outliers in batch 8. I did not see any way of correcting this and therefore cut this out of the training data. When taking the signal values for each open channel and building the mean for each window one gets an interesting plot <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fedc2d509486f36f69aa2a546c59d7a6d%2Fimage3.png?generation=1590499712443407&amp;alt=media" alt=""> This shows that the signal strengths for an open channels value are approximately equal for segment types 1, 2, 3 and 4, but shifted for segment type 5. I evaluated the possibility to use the segment type for models and even split models. I came to the conclusion that this would be an error and could overfit badly. 
Instead I grouped segment type 1, 2, 3 and 4 into a segment group 0 and segment type 5 into segment group 1 and used the segment group as a feature in modeling.</p> <h2>Gaussian Mixture Models</h2> <p>The target variable Y = number of open channels is based on a HMM. The signal X is a function of Y, but including a Gaussian Noise. Evaluating the conditional distribution P(X|Y=k, Segment) gives Normal distributions with mean mu_k = mu_0 + k*delta, but with standard deviation that differ slightly between segment types and are larger for segment group 1. The image below show an approximation of the signal through GMM for one sample segment <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fbf357c97e47e81ec9ea9b18a8ed6bc56%2Fimage4.png?generation=1590500503459258&amp;alt=media" alt=""> The red curve shows the weighted sum of normal distributions with center corresponding to the discrete mean values (vertical lines) associated to the open channels value and the weight corresponding to the fraction of target values equal to that open channels value.</p> <h2>Signal Encoding</h2> <p>The modeling of the signals as GMM enable encoding the signal X as a vector (f_k), k =0,…,10 using the normal distribution density function N(mu, sigma)(X), with mean mu_k and sigma depending on the segment type. This encoding of the signals is one key input into the models.</p> <h2>Models</h2> <p>The main models are based on Keras Wavenet models with specific head and tail processing. Initially I focused on sequence processing using BiGRU, but they are much slower and therefore did not allow as much experimentation. I did not manage to get a better score that 0.942 on the public LB. When changing to Wavenet I initially used RFC and HGB models as additional input, but also the already created Keras models with BiGRU as stack input. This reached 0.946 on the public LB, but I had the impression it would overfit. I therefore dropped the BiGRU models as stack input and included BiGRU as a processing step together with the Wavenet model. The local CV results actually dropped, but the LB results increased and inspecting the predictions gave me seemingly better results. This gave me the final models. For the Keras models I did not use early stopping. I saved 3 models: With best f1 score, with lowest categorical cross entropy and latest (for stacking purposes). When initially testing on the LB the one stored based on “best f1 score” always had better LB score than the “lowest loss”. I therefore sticked to that. The Keras models were trained using Adam optimizer and SWA as well as a One Cycle Scheduler with max learning rate of 1.0e-3 and 300 – 350 epochs. </p> <h2>Cross Validation</h2> <p>For validation I used 5 fold CV. Prior to splitting the train data into train and validation data I split it into sequences of 4000 records and used StratifiedKFold on the batch as group. This makes certain that the batches are distributed equally among the folds. But what about the target variable? For this I looked at the distribution of the target variable for different random state values. The distribution depends strongly on that value and I looked for a value giving similar distribution of each class for both train and validation folds. So which one to choose? The answer is not 42, but one good answer was 43 (also 53 or 132). For the tree models for stacked input it is important not to use the validation folds for early stopping, but only for storing the out of fold predictions. 
Both RFC and HGB support this directly. I also tried Catboost with cross-validation for determining the number of iterations, but did not use this in the final models. I was not able to get anything sensible out of LightGBM, and (my version of) xgboost had errors that did not allow me to use it.</p> <h2>Analysing Predictions on Test Data</h2> <p>The CV results turned out to correlate very well with the LB score, but there were potential issues, in particular with the different test data for segment type 1. For the train data these only had open channels values of 0 or 1, but the test data turned out to have higher values of 2, 3, or even 4. </p> <p>For validating this I created scatter plots of the signals with predicted open channels values for each window (every 10000 records), showing horizontal lines for the relevant mean values from the GMM models. The plot below shows two windows <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F405208%2Fa269cd30482eb4500f7d64c43dcc1a95%2Fimage5.png?generation=1590501512153335&amp;alt=media" alt=""></p> <p>Looking at these different windows one can check whether there are "obvious" errors and try to enhance the models. When using the Wavenet model alone I saw such errors, and they were corrected when adding a BiGRU layer.</p> <h2>Some Final Words</h2> <p>I see the models as a good basis for further work. I had limited hardware resources (a single GTX 1080Ti) and could not analyze as many variants as I wished. It can definitely be improved. Finally: it is unfortunate that a leak occurred and partly spoiled the results.</p>
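<p>To make the "Signal Encoding" step above concrete, a minimal sketch (not the solution's code; mu_0, delta and sigma are assumed to be known per segment type from the GMM fit):</p>
<pre><code>import numpy as np
from scipy.stats import norm

def encode_signal(signal, mu0, delta, sigma, n_levels=11):
    """Encode each signal value x as the vector of Gaussian densities
    N(mu_k, sigma)(x) with mu_k = mu0 + k * delta, k = 0..n_levels-1.
    Returns an array of shape (len(signal), n_levels)."""
    mus = mu0 + delta * np.arange(n_levels)
    return norm.pdf(np.asarray(signal)[:, None], loc=mus[None, :], scale=sigma)
</code></pre>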
University of Liverpool - Ion Switching
7th Place Solution - Part 1: Removing Noise from Signal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners!</p> <p>First of all i want to thank my wonderful teammates <a href="/philippsinger">@philippsinger</a>, <a href="/dott1718">@dott1718</a> and <a href="/cdeotte">@cdeotte</a> for the awesome collaborative teamwork. Everyone gave so much effort to get the best result and I am already looking forward to teaming-up with you again.</p> <p>In this write-up I will get into details of our routine to remove noise from the provided signal. Afterwards, <a href="/cdeotte">@cdeotte</a> will go into more detail about the models that we used in our final submission. See his discussion <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/154253">here</a>.</p> <h1>Data Cleaning</h1> <p>One of the most difficult but at the same time also one of the most rewarding things to do in this challenge was the cleaning of the dataset (train and test). Prior to teaming-up, I had exclusively worked on that part and only used public kernels such as the WaveNet kernel. Thank you <a href="/siavrez">@siavrez</a> for this kernel! I also found HMM (just straightforward hmmlearn) to be a good choice for quickly testing the cleaning but we didn’t use HMM in our final solution.</p> <p>The original data could be divided into batches of 10 s or 50 s and the “signal” already showed a pearson correlation of 0.8017 with “open_channels”.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F651a59c5de2005b4b2c0bf1d878a98ff%2F__results___7_0.png?generation=1590526158397566&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F276ea09f74de3bbf52f13662f6fc177d%2F__results___8_0.png?generation=1590526175204985&amp;alt=media" alt=""></p> <p>For ease of understanding, I will refer to parts of the train and test set with their corresponding maximum open channels and opening probability or their type by "color". The “pink” parts in the test set are assumed to all have a maximum of 5 channels with low opening probability.</p> <h2>Sine Drift</h2> <p>Early on in the competition, an artificial sine drift added to parts of the train and test set has been spotted (<a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874">https://www.kaggle.com/c/liverpool-ion-switching/discussion/133874</a>). 
The drift was identified to be a 50 s long half sine wave (or just the first 10 s of that) with an amplitude of 5 and may be removed with the following code snippet:</p> <p>``` def sine_func(x, a, b): return -a * np.sin(b * x)</p> <p>sine_drift = sine_func(np.linspace(0, 499999, 500000), -5, np.pi / 500000) sine_drift_short = sine_drift[:100000]</p> <h1>Clean the train set</h1> <p>train_csv.loc[train_csv["100k_batch"] == 5, 'signal'] -= sine_drift_short # batch 2 first segment train_csv.loc[train_csv["500k_batch"] == 6, 'signal'] -= sine_drift train_csv.loc[train_csv["500k_batch"] == 7, 'signal'] -= sine_drift train_csv.loc[train_csv["500k_batch"] == 8, 'signal'] -= sine_drift train_csv.loc[train_csv["500k_batch"] == 9, 'signal'] -= sine_drift</p> <h1>Clean the test set</h1> <p>test_csv.loc[test_csv["100k_batch"] == 0, 'signal'] -= sine_drift_short test_csv.loc[test_csv["100k_batch"] == 1, 'signal'] -= sine_drift_short test_csv.loc[test_csv["100k_batch"] == 4, 'signal'] -= sine_drift_short test_csv.loc[test_csv["100k_batch"] == 6, 'signal'] -= sine_drift_short test_csv.loc[test_csv["100k_batch"] == 7, 'signal'] -= sine_drift_short test_csv.loc[test_csv["100k_batch"] == 8, 'signal'] -= sine_drift_short test_csv.loc[test_csv["500k_batch"] == 2, 'signal'] -= sine_drift ```</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F93bb094a57ab4145e141c9285beb27a0%2FBild1.png?generation=1590526366735290&amp;alt=media" alt=""></p> <h2>10 Channel Offset (“Ghost-Drift”)</h2> <p>The next thing that becomes obvious in the data, was a large offset between the 10 channel batches and the rest of the data. We now know where this difference originates from (leak), but at the time of evaluating, we were not sure about the exact offset and we have used various techniques for calculating it. Depending on the technique, this offset was calculated to be between -2.69 and -2.76. By removing that offset, neural nets may have an easier time at prediction by cross learning from other batches. </p> <p>A straightforward method for removing the offset is using a linear regression for the 10 channel batches (open_channels vs signal) and for the rest of the batches. </p> <p>``` def linear_fx(x, m, y0): return (m * x) + y0</p> <p>xdata = train_csv_batch10["open_channels"] ydata = train_csv_batch10["signal"] popt1, pcov = curve_fit(linear_fx, xdata, ydata, method='lm')</p> <p>xdata = train_csv_batch5["open_channels"] ydata = train_csv_batch5["signal"] popt2, pcov = curve_fit(linear_fx, xdata, ydata, method='lm')</p> <p>static_diff = popt1[1] - popt2[1] print("Offset 10c to rest:", static_diff) ```</p> <p>While this method neglects possible differences in slope and minor offsets between batches, it gave a robust first guess of the actual offset.</p> <p>Later on, <a href="/dott1718">@dott1718</a> developed a Gaussian Mixed Model for fitting the Gaussians of each batch using equal spacings and a binomial distribution. In our cross-validation we could not clearly identify which was doing better. </p> <p>When adding a second dimension to this GMM one can clearly identify a correlation between previous states and following states. It appears, that the signal has a decay that is within the magnitude of the recording frequency of 10 kHz. That means, if a previous signal was larger than the current signal, the current signal will be slightly higher than expected. 
We assume this to be an artifact from the signal recording or from the electrical ion channel model (electrical circuit) and tried to make use of that feature (other called it memory) in any way and improve out models on that, but to our amazement, WaveNets (and probably many other ML models) seem to learn that behavior already on their own and any additional cleaning is not yielding improvements. We did gain a slight improvement in our test HMM using that knowledge. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F006251f677ff0d36da2c3381dd73ee51%2F2d_gmm_5chans.PNG?generation=1590526427020665&amp;alt=media" alt=""></p> <h2>Outlier removal</h2> <p>In the first 500k batch and in the 8th 500k batch of the train set, severe outliers have been spotted. </p> <p>The outlier in the first batch, we simply replaced by gaussian noise with an appropriate mean and variance. </p> <p><code> FIRST_OUTLIER = (47.858, 47.8629) noise = np.random.normal(loc=-2.732151, scale=0.19, size=50) train_csv.loc[(train_csv.time &amp;gt;= FIRST_OUTLIER[0]) &amp;amp; (train_csv.time &amp;lt;= FIRST_OUTLIER[1]), "signal"] = noise </code></p> <p>The second sequence with outliers, we just removed for training. <code> SECOND_OUTLIER = (364.290, 382.2899) train_csv = train_csv.loc[(train_csv.time &amp;lt; SECOND_OUTLIER[0]) | (train_csv.time &amp;gt; SECOND_OUTLIER[1]), :] </code></p> <h2>Zero Level Offset</h2> <p>Just like the 10 channel offset, we have a zero level offset when no channels are open. Simple linear regression can be used to shift the datasets from ~2.73 and align the “zero channel” with zero mean. We can also use the results of linear regression to extract the scaling between “signal” and “open_channels” which allows us to extract the noise from the train signal and study it. </p> <h2>50 Hz Line Noise</h2> <p>When plotting the FFT of the signal and of the noise, we can clearly see that most of the noise seems to be white gaussian noise. But we can also identify some sharp peaks at 50 Hz, ~1150 Hz, ~1250 Hz, ~1450 Hz and ~3130 Hz. In Liverpool, they use 50 Hz in their power Lines and applying a narrow hann window to attenuate that frequency greatly improved our modeling capabilities. For best results, we tuned the attenuation with regard to signal to noise ratio at 50 Hz (higher attenuation for slow open channels compared to the fast open channels). With a grid search, we optimized the parameters "hann windows width", "attenuation high" and "attenuation low". We also tried removing of the other peaks, that are somewhat random in the data, but without ever getting a significant improvement in our CV score. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F228cf45c1c04ee6a4b6e2241396c178a%2Fnoise_level_train.png?generation=1590526567559238&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F59a7515a8528ebebe46442b334a0edd1%2F__results___12_1.png?generation=1590526594525784&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2Fd5453821f3ef30f4b64d495f8b98a99a%2F__results___13_1.png?generation=1590526609450668&amp;alt=media" alt=""></p> <h2>Minor Offsets (mostly in test set)</h2> <p>When leveling all batches with the same static offset mentioned above (zero level offset), we still experience significant differences between the batches. 
The very first 100k of the training set seem to be shifted about 0.03 in positive direction.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2Fc1675b0640b5a0c78c696d2bffadf2fa%2F__results___33_0.png?generation=1590526711915787&amp;alt=media" alt=""></p> <p>In the test set, those shifts are much more frequent and careful removal of those minor offsets appeared to have significant influence on the predictions of our models. To remove the offset of the training set we can easily use the ground truth values of open_channels combined with the scaling factor from above. As we don’t have that information in the test set, we evaluated three different techniques to remove that offset and they all led to similar results. To begin with, we have used old predictions for the test set, that scored high on the leaderboard to calculate the noise in the test set and ultimately the minor offsets. As this is a delicate thing, especially in the 10 channel batches with the high variance, we also checked how those values are changing by recursively using the predictions for calculation of the noise. As expected, the difference was somewhat significant in the 10 channel batches and almost non-existent in the remaining batches. To archive a more robust solution, we later based the entire shifting routine on the GMM mentioned above, modelling the gaussians with equal spacings and binomial distributions. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2675447%2F6ba61b4a705885dab777035abd9360c8%2Fscreen_shot_2020-05-22_at_10.23.21_pm.png?generation=1590526726818390&amp;alt=media" alt=""></p> <h2>Extra Data</h2> <p>We used 700k rows of annotated data (3 channels and 5 channels) from Richards GitHub (<a href="https://github.com/RichardBJ/Deep-Channel/tree/master/dataset">https://github.com/RichardBJ/Deep-Channel/tree/master/dataset</a>) which yielded a tiny improvement on CV. All the above steps have also been applied to that extra data. </p> <h2>What didn't work</h2> <ul> <li>Creation of additional synthetic training data</li> <li>Unequal spacing between gaussians</li> <li>Kalman Filtering</li> <li>Removal of ~1150 Hz, ~1250 Hz, ~1450 Hz and ~3130 Hz noise</li> <li>Prior modelling of the signal decay rate before feeding it to WaveNet</li> <li>Unequal scaling of batches</li> <li>Single models for each “type” of batch</li> </ul>
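<p>For reference, one way to implement the narrow Hann-window attenuation of the 50 Hz line noise described above (a sketch; the width and attenuation were tuned per batch by the team, and the numbers here are placeholders):</p>
<pre><code>import numpy as np

def attenuate_50hz(signal, fs=10_000, f0=50.0, width_hz=1.0, attenuation=0.9):
    """Attenuate a narrow band around f0 in the FFT domain with a
    Hann-shaped notch and transform back to the time domain."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = np.abs(freqs - f0) <= width_hz / 2
    n_band = int(in_band.sum())
    if n_band > 0:
        notch = 1.0 - attenuation * np.hanning(n_band)
        spec[in_band] *= notch
    return np.fft.irfft(spec, n=len(signal))
</code></pre>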
Abstraction and Reasoning Challenge
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Abstraction and Reasoning Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to <a href="/icecuber">@icecuber</a>. Scoring 0.79 on this competition going solo is really impressive. Also, thanks a lot to <a href="/fchollet">@fchollet</a> for hosting this competition and making us face this very interesting challenge.</p> <p>Here I want to briefly discuss the algorithm that <a href="/rguigocoro">@rguigocoro</a> and I have developed. It scores 0.87 or 0.88 on the private test set I think. The final score of 0.81 on the LB came as a result of merging teams with <a href="/yujiariyasu">@yujiariyasu</a>, who had done a great job in solving many tasks already, and we luckily had almost no overlap. His approach is explained <a href="https://www.kaggle.com/c/abstraction-and-reasoning-challenge/discussion/138548?fbclid=IwAR0rVWjYYPzF3hDFT109Jsp-ezHsqttJzsVBgnVyG4JrFRUH7tBPIlia4y0">here</a>.</p> <p>For every task, our algorithm follows these steps:</p> <ol> <li>Generate an object of the class <code>Task</code>. At this step, all the preprocessing is done: checking for matrix shapes, common colors, symmetries... There are 6 core classes that are used to store information about the preprocessing: <code>Task</code>, <code>Sample</code>, <code>Matrix</code>, <code>Shape</code>, <code>Grid</code> and <code>Frontier</code>.</li> <li>Based on the information retrieved from the preprocessing, the task might be transformed into something that can be processed more easily. It might make sense to modify some colors, rotate some of the matrices, crop the background or ignore a grid, for example.</li> <li>Once the transformation is done, we generate 3 dummy candidates and enter the loop to get the 3 best candidates. In every iteration of the loop, for each of the current best 3 candidates, we try to execute all sorts of operations that make sense to try, given the properties stored in the object of the class <code>Task</code>. Many different sorts of functions are tried, such as "moveShapes", "replicateShapes", "extendColor" or "pixelwiseXorInGridSubmatrices", just to mention a few examples. The complete list of functions can be found in the GitHub repository linked below. If any function generates a candidate with a better score than any of the current best 3 candidates, we include this new candidate in our list of best 3 candidates and remove the worst one. The score is simply computed by executing the functions that led to that candidate on the training samples and checking how many pixels are incorrect. Thus, a score of 0 is the best possible one.</li> <li>Revert the operations executed in step 2 from the 3 candidates, to obtain the final candidates.</li> <li>If it makes sense, we generate another task imposing that in every sample there can be either only one shape or only one non-common color. Obviously, we then have more training and test samples than in the original task. Then, we also compute the three best candidates for this version, and compare them with the previously generated ones to select the final best candidates.</li> </ol> <p>It was quite hard work to set up this whole pipeline, and the first 1.5 months or so, being unable to go under 0.99, were a bit frustrating. But in the end this has proved to work reasonably well. We try lots of different functions in our algorithm, and we don't really know which ones solve tasks from the private test set, as many of our LB improvements came after general preprocessing steps that are applicable to many different kinds of tasks.</p> <p>All the code, including the version that scores 0.813 on the leaderboard, is on <a href="https://github.com/alejandrodemiquel/ARC_Kaggle">this</a> GitHub repository.</p>
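<p>To make the candidate-scoring step in the pipeline concrete, here is a small sketch (not the repository's implementation; the sample format and the shape-mismatch penalty are assumptions):</p>
<pre><code>import numpy as np

def score_candidate(ops, train_samples):
    """Score a candidate (a list of functions applied in order) by the total
    number of incorrect pixels over the training samples; 0 is a perfect fit.

    Each sample is assumed to be a dict with 'input' and 'output' numpy grids.
    """
    score = 0
    for sample in train_samples:
        pred = sample["input"]
        for op in ops:
            pred = op(pred)
        target = sample["output"]
        if pred.shape != target.shape:
            score += target.size              # penalize a shape mismatch fully
        else:
            score += int((pred != target).sum())
    return score
</code></pre>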
University of Liverpool - Ion Switching
20th place solution [Single Model]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used <a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr">wavenet</a> as my baseline. I found that increasing the receptive field before the wave blocks improved the CV and LB scores (I think this helps because the sequence length was very large). So I applied MaxPooling1D(2) with stride 1 and concatenated its output with the signal as it was before the pooling, then fed that into the wave blocks.</p> <p>The NN has 2 branches, a wavenet and 2 bi-GRUs, whose outputs are concatenated and followed by a fully-connected layer.</p> <p>I found that Tversky loss together with CCE improved NN accuracy and saved training time. The final loss was: CCE + focal + Tversky (without balancing).</p> <p>I used GROUP BATCH SIZE = [4000, 50000, 25000, 12500, 10000, 8000, 20000] for prediction, with averaging.</p>
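<p>A minimal Keras sketch of the "widen the receptive field, then concatenate with the raw signal" trick is shown below. The wavenet branch is omitted, and the layer sizes, sequence length and the 11 open-channel classes are assumptions for illustration rather than the author's exact architecture.</p>
```
import tensorflow as tf
from tensorflow.keras import layers as L

def widen_receptive_field(x):
    # Max-pool with stride 1 keeps the sequence length while slightly widening
    # the receptive field; concatenating with the raw signal preserves detail.
    pooled = L.MaxPooling1D(pool_size=2, strides=1, padding="same")(x)
    return L.Concatenate()([x, pooled])

inp = L.Input(shape=(4000, 1))                      # one group of the signal
x = widen_receptive_field(inp)
# (the wavenet branch would also consume x here; only the GRU branch is shown)
x = L.Bidirectional(L.GRU(64, return_sequences=True))(x)
x = L.Bidirectional(L.GRU(64, return_sequences=True))(x)
out = L.Dense(11, activation="softmax")(x)          # 0-10 open channels
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```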
M5 Forecasting - Accuracy
[178th Place] Recursive Model with Regression & Classification
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Overview</strong></p> <p>I have to say that although I was expecting a massive shake-up, I think this competition took it to a new level. It was really hard to choose a final submission that I thought might generalize to the private LB. First, I want to take a second to thank Kaggle and the competition sponsors for such an awesome competition. I also want to thank the community for their awesome sharing and insights, this is what Kaggle is all about after all.</p> <p>I am actually happy with this solution. I learned a lot and I tried to make my model as generalized as possible without using multipliers. I would like to take a second to talk about the multipliers. The sentiment has been expressed that they are not fair or "garbage" (I read somewhere), but the truth is, they are used quite frequently in many forecasting applications in the business world. They can be referred to as multipliers or growth. My concern with the multipliers here was that I wanted my model to try to pick that effect up in some way, so I chose to go completely free of them.</p> <p><strong>Cross-Validation Scheme</strong></p> <p>My cross-validation scheme is a 3-fold time series cross validation, where the series in the validation folds are the same length as the submissions for the public and private LB. For a long time I struggled with the metric. It was a really interesting metric to implement and until the release of the public LB data I was unsure of whether I was calculating it correctly or not. As you can imagine, this led to a lot of frustration when validating any of my models. I mostly experienced gains in CV but decreases in LB. When the public LB data was released I was able to verify that my CV was set up correctly all along. Once I knew that I could at least see the correct scores, it was all experimentation from there.</p> <p><strong>Final Model</strong></p> <p>My final model is a mix of regression + classification and is a recursive model. After a lot of the literature that I read I really wanted to try and make either a recursive model or a direct model work. I used the same features for both my regression model and classification model. The features I used are mainly found in the public kernels, with the exception of using different lagged values for rolling features. After verifying that my CV was set up correctly, it was obvious that any lag 1 features were highly optimized for the public LB fold (I am calling it a fold because I ended up using it as my third fold in the final month). The lag 1 features performed exceptionally well on this fold and not the other folds. The scores were between 0.65X and 0.7X on other folds. Afterwards, I started to experiment with MANY different features. I finally converged on rolling features at lag 4. This generalized well to most folds but some were still stubborn. By generalized I mean I was able to get all folds of my 3-fold setup below 0.6. I also tried other folds just to see, and results were in the same 0.50 - 0.60 range (still a lot of variance). The public LB score was still decent in the low 0.52X - 0.54X range. 
This was far from the best score on the public LB, but I stuck to trusting my CV because I could not replicate a 0.4X performance without the other folds really suffering.</p> <p>I got more generalization between my folds when I implemented a classification model alongside the regression model. I actually used the classification model in two steps. First, I multiplied the outputs of the classification model by the outputs of the regression model; essentially scaling or smoothing the outputs of the regression by whether we expected the item to have a transaction or not. In retrospect, I believe this may be what helped my model not fall all the way down the LB. It is essentially a multiplier against the original predictions, except that I used another model to build it. Secondly, once all of the predictions were created I set to zero any prediction whose classification probability was under 5%. The best score I saw was 0.50X but I felt good about the generalization.</p> <p>Finally, I ran the model 3 times and shuffled the rows and features to help prevent overfitting. I averaged the outputs of the 3 models. If anyone is interested I would be happy to provide a list of my features.</p> <p><strong>After Thoughts</strong></p> <p>I have to say I was really disappointed in the metric that was used here. The metric felt so convoluted that there were times I felt like I spent more time trying to figure out if my implementation was right than actually modeling. </p> <p>Working through all the ideas in the time series domain in a solo fashion is REALLY hard. There are so many aspects to time series. There are statistical models (ARIMA, ES), machine learning models (LGB, XGB) &amp; deep learning solutions (I spent a month on this alone just trying to figure out how to make it work). Not to mention: should I implement a recursive model, a direct model, or a mixture of both? Then there are different hierarchical methods, transformations, outliers, etc. It felt really hard to pick one direction and make it work.</p> <p>That’s really all there is to my solution. If anyone has questions, I would be happy to answer.</p>
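<p>As a rough illustration of the two-step use of the classification model described above (scaling the regression output by the probability of a transaction, then hard-zeroing items whose classification probability is under 5%), here is a tiny sketch; the arrays are placeholders.</p>
```
import numpy as np

def combine_predictions(reg_pred, clf_pred, zero_threshold=0.05):
    final = reg_pred * clf_pred           # scale regression by P(sale occurs)
    final[clf_pred < zero_threshold] = 0  # hard-zero items very unlikely to sell
    return final

reg_pred = np.array([1.8, 0.4, 3.1])      # regression model outputs (toy values)
clf_pred = np.array([0.9, 0.03, 0.6])     # classification probabilities (toy values)
print(combine_predictions(reg_pred, clf_pred))  # [1.62 0. 1.86]
```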
M5 Forecasting - Accuracy
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to everybody for a great competition. Congratulations to the prize winners. I got my first solo gold as my first medal on Kaggle.</p> <p>The outline of my solution is as follows:</p> <p>1. Model: LGBM for each [store_id] x [dept_id] (70 models in total)
- Objective: tweedie (a custom objective did not improve CV)
- Weight: cumulative actual dollar sales (last 28 days) (I did not use the RMSSE scale.)</p> <p>2. Data period: 2014/1/1~ (before 2014, the trends are different)</p> <p>3. Features
- Lag demand: 28-day shift + mean and std for 7, 14, 28 days, etc. (day by day for 1~7, 14, 21 days, and recursive: I used whichever had the lower CV for each [store_id] x [dept_id])
- Sell prices: some statistics of sell prices, and the sell price of the same item_id in CA_3 (the sell price of the same item_id in CA_3 explains the zero-sales periods of some items.)
- Calendar: target encoding for ids (store, item, ...) x [weekday, events, or month] or ids, over the last 3M, 1Y, 2Y (rolling, no leak). These variables work well. Additionally, I removed some variables following permutation importance.</p> <p>4. CV: last 3 x 28 days, and (sub: last 2 x 28 days + the same period 1 year before)</p>
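<p>A minimal sketch of the "one LGBM per [store_id] x [dept_id], tweedie objective, weighted by last-28-day dollar sales" setup is shown below. The synthetic dataframe, column names and parameter values are illustrative assumptions, not the author's code.</p>
```
import numpy as np
import pandas as pd
import lightgbm as lgb

# Tiny synthetic stand-in for the real long-format grid (column names assumed).
rng = np.random.default_rng(0)
n = 2000
train_df = pd.DataFrame({
    "store_id": rng.choice(["CA_1", "TX_1"], n),
    "dept_id": rng.choice(["FOODS_1", "HOBBIES_1"], n),
    "price": rng.uniform(1, 10, n),
    "lag_28": rng.poisson(1.0, n),
    "sales": rng.poisson(1.2, n),
    "dollar_sales_28d": rng.uniform(0, 100, n),
})
feature_cols = ["price", "lag_28"]

params = {"objective": "tweedie", "tweedie_variance_power": 1.1,
          "learning_rate": 0.05, "verbosity": -1}

models = {}
for (store, dept), grp in train_df.groupby(["store_id", "dept_id"]):
    dtrain = lgb.Dataset(grp[feature_cols], label=grp["sales"],
                         weight=grp["dollar_sales_28d"])  # dollar sales as row weights
    models[(store, dept)] = lgb.train(params, dtrain, num_boost_round=50)
```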
M5 Forecasting - Uncertainty
36th Position Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would first like to thank <a href="/muhakabartay">@muhakabartay</a> for his constant motivation to improve with every competition. Our solution is based on 5 different models.</p> <p><strong>Preprocessing</strong></p> <p>We used preprocessing from the kernels below.</p> <ol> <li>Ulrich GOUE's Kernel : <a href="https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn">https://www.kaggle.com/ulrich07/do-not-write-off-rnn-cnn</a></li> <li>Ulrich GOUE's Kernel : <a href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">https://www.kaggle.com/ulrich07/quantile-regression-with-keras</a> </li> </ol> <p><strong>Modelling</strong></p> <p><em>Model 1 Architecture</em></p> <blockquote> <p>x = L.GRU(400, return_sequences=True, name="d1")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m1")([x, context])
x = L.GRU(400, return_sequences=True, name="d2")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m2")([x, context])
x = L.GRU(400, return_sequences=True, name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=wqloss, optimizer="adam")
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001)</p> </blockquote> <p><em>Model 2 Architecture</em></p> <blockquote> <p>x = L.GRU(410, return_sequences=True, name="d1")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m1")([x, context])
x = L.GRU(410, return_sequences=True, name="d2")(x)
x = L.Dropout(0.5)(x)
x = L.Concatenate(name="m2")([x, context])
x = L.GRU(410, return_sequences=True, name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=wqloss, optimizer="adam")
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.01)</p> </blockquote> <p><em>Model 3 Architecture</em></p> <blockquote> <p>x = L.Concatenate(name="x1")([context, num])
x = L.Dense(500, activation="relu", name="d1")(x)
x = L.Dropout(0.2)(x)  # 0.3
x = L.Concatenate(name="m1")([x, context])
x = L.Dense(500, activation="relu", name="d2")(x)
x = L.Dropout(0.2)(x)  # 0.3
x = L.Concatenate(name="m2")([x, context])
x = L.Dense(500, activation="relu", name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=qloss, optimizer="adam")
return model
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001)</p> </blockquote> <p><em>Model 4 Architecture</em></p> <blockquote> <p>x = L.Concatenate(name="x1")([context, num])
x = L.Dense(400, activation="relu", name="d1")(x)
x = L.Dropout(0.25)(x)  # 0.3
x = L.Concatenate(name="m1")([x, context])
x = L.Dense(400, activation="relu", name="d2")(x)
x = L.Dropout(0.25)(x)  # 0.3
x = L.Concatenate(name="m2")([x, context])
x = L.Dense(400, activation="relu", name="d3")(x)
preds = L.Dense(9, activation="linear", name="preds")(x)
model = M.Model(inp, preds, name="M1")
model.compile(loss=qloss, optimizer="adam")
return model
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.01)</p> </blockquote> <p>Model 5 : Baseline model <a
href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">https://www.kaggle.com/ulrich07/quantile-regression-with-keras</a> </p> <p>Further we ensembled our 5 submissions. Here is our winning ensemble.</p> <p><a href="https://www.kaggle.com/roydatascience/silver-medal-solution-stacking-m5-submissions">https://www.kaggle.com/roydatascience/silver-medal-solution-stacking-m5-submissions</a></p> <p>Congratulations to all the winners. Happy Kaggling everyone.</p>
M5 Forecasting - Accuracy
[36th place] New forecasting Package: tsboost
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, congrats to all winners and participants!</p> <p>I would like to introduce a new open source package I created a while ago for forecasting time series with discrete time steps, installable through pypi and from my github:</p> <p><a href="https://github.com/franck-durand/tsboost">https://github.com/franck-durand/tsboost</a></p> <p>The goal of this framework is to provide a new tool that is ready to go to production for data scientists (like prophet), so that they can prototype faster.</p> <p>I will publish my full solution for M5 on my github, as it uses this package plus some data preprocessing &amp; some feature engineering. But as I am a generally lazy person, I will do it... in a more or less near future.</p> <p>I also finished 15th place in M5 uncertainty, and I plan to implement the methodology I used into this time series framework in "a near future". That way it will be possible, via a meta parameter of the package, to generate confidence intervals along with the point forecasts.</p> <p>It was a really interesting double competition. And I really like the testing methodology for time series:
- 100% of the score is from private LB data (no bias from the public leaderboard in the final evaluation)
- we can submit only one submission (it's like you are in a production context for time series)
- unfortunately it was still possible to have a bias from future external data not available at the time the forecast was supposed to be done (for example US GDP of June 2016, not available at the end of May 2016)
- maybe the metric was not an easy one, but still, I believe it's a good one for hierarchical time series problems.</p> <p>I'm also a little bit sad for @kyakovlev , I believe he has taught a lot of good things and ideas to kagglers, and he really deserved to be at least in the gold zone.</p> <p>Regards !</p>
M5 Forecasting - Accuracy
Group k-fold works (102nd solution)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code: <a href="https://github.com/MitsuruFujiwara/M5-Forecasting-Accuracy">https://github.com/MitsuruFujiwara/M5-Forecasting-Accuracy</a></p> <p>Group k-fold with group=year+month works both in public &amp; private.</p> <p>I prepared 4 models with different cv strategies (TS or group) and different data splits (by category or by week or no split). Finally, I selected a blend of them as the final submission.</p> <p>The scores of the individual models are as below.</p> <p>|name|cv|split|cv score|weight|private|
|:---:|:---:|:---:|:---:|:---:|:---:|
|Weekly Group|Group|Week|0.58741|0.4175|0.57412|
|Weekly TS|TS|Week|0.60825|0.2320|0.60082|
|Category Group|Group|Category|0.60203|0.2052|0.54479|
|No split Group|Group|-|0.58963|0.1732|0.58869|
|Blend|-|-|0.56715|-|0.60101|</p>
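<p>A small sketch of the group k-fold setup with one group per year+month, using scikit-learn's GroupKFold; the dataframe here is a toy stand-in for the long-format sales grid.</p>
```
import pandas as pd
from sklearn.model_selection import GroupKFold

df = pd.DataFrame({"date": pd.date_range("2015-01-01", periods=730, freq="D")})
groups = df["date"].dt.strftime("%Y-%m")        # one group per year+month

gkf = GroupKFold(n_splits=5)
for fold, (tr_idx, va_idx) in enumerate(gkf.split(df, groups=groups)):
    # note: the validation months of a fold are not necessarily contiguous
    print(fold, sorted(groups.iloc[va_idx].unique()))
```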
M5 Forecasting - Accuracy
25th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I learned a lot from this competition. I appreciate the excellent discussions and notebooks. My solution may not be worth much and I wonder why it works so well in private, but I'll share it. I predicted values for levels 1~10 and level 12.</p> <p>[model] lightgbm
- level 12: recursive, objective poisson
- levels 1~10: day-by-day (target -&gt; sale*price), objective rmse (levels 1~9), tweedie (level 10)</p> <p>[features] simple shift/roll/diff, product count, some calendar features, price</p> <p>[validation] d1830-d1857, d1858-d1885, d1886-d1913, d1914-d1941</p> <p>I spent time on the level 1~10 predictions; the CV of the level 1~10 predictions was stable. I tuned the level 1~10 parameters using StratifiedFold.</p> <p>I combine the predictions like this (a small sketch of this step follows below): <code>df[level12]*=df[price]</code> <code>df[level_] = df[level_prediction]*(df[level12]/df.groupby(level_).transform(sum))</code> <code>df[final_prediction] = 0.65*mean(df[level8, level9]) + 0.35*mean(df[level1~7, level10, level12])</code></p> <p>I could not get a good level 12 WRMSSE, but got a better score when combining the predictions of levels 1~10. I believe the level 1~10 predictions boost the WRMSSE score.</p> <p>| |single level 12|combine level 1~10|
|---|---|---|
|d1942-d1969|0.73233|0.55895|
|d1914-d1941|0.60359|0.5635|
|d1886-d1913|0.56219|0.51489|
|d1858-d1885|0.67186|0.60522|
|d1830-d1857|0.71142|0.644|</p> <p>I posted a clean <a href="https://www.kaggle.com/mothermoto/m5-solution/notebook">notebook</a>.</p>
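<p>The level-combination formulas above are essentially the "forecast proportions" idea: distribute an aggregate-level prediction down to items using each item's share of the summed bottom-level prediction, then blend with the direct bottom-level prediction. A toy pandas sketch (store level only; the numbers and the 50/50 blend weights are made up):</p>
```
import pandas as pd

df = pd.DataFrame({
    "store_id": ["CA_1"] * 3 + ["TX_1"] * 3,
    "item_pred": [1.0, 2.0, 3.0, 0.5, 0.5, 1.0],   # bottom-level (level 12) predictions
    "store_pred": [9.0] * 3 + [4.0] * 3,           # aggregate (store-level) predictions
})

# each item's share of the summed bottom-level forecast within its store
share = df["item_pred"] / df.groupby("store_id")["item_pred"].transform("sum")
df["item_from_store"] = df["store_pred"] * share   # aggregate forecast distributed down

# blend the direct bottom-level prediction with the reconciled one
df["final"] = 0.5 * df["item_pred"] + 0.5 * df["item_from_store"]
print(df)
```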
M5 Forecasting - Accuracy
63rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to all the participants. We learned a lot from your discussions and notebooks. Below is a summary of our model:</p> <p><strong>Models</strong> 10 * 28 LGBM ([10 store_id] * [28 day-by-day]) </p> <p><strong>Features</strong></p> <ul> <li><a href="https://www.kaggle.com/kyakovlev/m5-simple-fe">https://www.kaggle.com/kyakovlev/m5-simple-fe</a></li> <li><p><a href="https://www.kaggle.com/kyakovlev/m5-lags-features">https://www.kaggle.com/kyakovlev/m5-lags-features</a> But we didn't use the recursive features.</p></li> <li><p>Target encoding of 8 groups by ordered time series: <code> ['id'],['item_id'],['dept_id'],['cat_id'],['id','tm_dw'],['item_id','tm_dw'],['dept_id','tm_dw'],['cat_id','tm_dw'] </code> We used the average of sales for 28 or 60 days as a feature for the next 28 days. source code: <a href="https://www.kaggle.com/suuuuuu/m5-target-encoding">https://www.kaggle.com/suuuuuu/m5-target-encoding</a></p></li> <li><p>Stacking - Binary prediction (sales = 0 or not) features built from the above features. source code: <a href="https://www.kaggle.com/suuuuuu/binary-challenge-evaluation-ca-1-2">https://www.kaggle.com/suuuuuu/binary-challenge-evaluation-ca-1-2</a></p></li> <li><p>Deviation from the day of the event. source code: <a href="https://www.kaggle.com/suuuuuu/m5-event-lag">https://www.kaggle.com/suuuuuu/m5-event-lag</a></p></li> <li><p>sales lag from 1 to 27 (day-by-day)</p></li> </ul> <p><strong>Objectives</strong></p> <p>We used custom objectives. These were determined based on our metric below.</p> <p>|store_id|objective|
|---|---|
|CA_1|tweedie (power = 1.1)|
|CA_2|custom_asymmetric_train|
|CA_3|tweedie (power = 1.2)|
|CA_4|asymmetric_tweedie (p = 1.1, a = 1.2)|
|TX_1|tweedie (power = 1.1)|
|TX_2|tweedie (power = 1.1)|
|TX_3|tweedie (power = 1.1)|
|WI_1|asymmetric_tweedie (p = 1.08, a = 1.16)|
|WI_2|asymmetric_tweedie (p = 1.1, a = 1.15)|
|WI_3|asymmetric_tweedie (p = 1.06, a = 1.2)|</p> <p><code>
import numpy as np

def custom_asymmetric_train(y_pred, y_true):
    y_true = y_true.get_label()
    a = 1.15
    b = 1
    residual = (y_true - y_pred).astype("float")
    grad = np.where(residual &lt; 0, -2 * residual * b, -2 * residual * a)
    hess = np.where(residual &lt; 0, 2 * b, 2 * a)
    return grad, hess
</code></p> <p>```
def asymmetric_tweedie(y_pred, y_true):
    p = 1.1
    a = 1.2
    b = 1
    y_true = y_true.get_label()
    residual = (y_true - y_pred).astype("float")
    grad = -y_true*np.exp(y_pred*(1-p))+np.exp(y_pred*(2-p))
    grad = np.where(residual &lt; 0, grad*b, grad*a)
    hess = -(1-p)*y_true*np.exp(y_pred*(1-p))+(2-p)*np.exp(y_pred*(2-p))
    hess = np.where(residual &lt; 0, hess*b, hess*a)
    return grad, hess
```</p> <p><strong>Metrics and Validation</strong> We defined WRMSSE for each store. The LGBM hyperparameters were tuned using this evaluation metric. 
- Validation: [1884-1913] and [1914-1941]
- Grid search by Optuna. source code: <a href="https://www.kaggle.com/suuuuuu/m5-ca1-grid-final">https://www.kaggle.com/suuuuuu/m5-ca1-grid-final</a></p> <p><strong>Final training</strong>
- train data: from 1 to 1941
- no early-stopping
- LGBM rounds: 300
- ensemble of num_iteration = 100, 200, and 300
- source code: <a href="https://www.kaggle.com/suuuuuu/m5-ca1-model-final">https://www.kaggle.com/suuuuuu/m5-ca1-model-final</a></p> <p>Finally, we learned a lot from this competition and the community. We want to make use of it in future competitions. Thanks to all Kaggle community members.</p>
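<p>For reference, this is roughly how such a custom objective is plugged into LightGBM. The data below is synthetic, and the <code>fobj</code> argument matches the LightGBM 2.x/3.x API that the functions above appear to be written against (newer LightGBM versions take the callable via <code>params["objective"]</code> instead).</p>
```
import numpy as np
import lightgbm as lgb

# asymmetric_tweedie is the custom objective defined in the write-up above.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = rng.poisson(1.0, size=500).astype(float)
dtrain = lgb.Dataset(X, label=y)

params = {"learning_rate": 0.05, "verbosity": -1}
booster = lgb.train(params, dtrain, num_boost_round=100, fobj=asymmetric_tweedie)

# The gradient above uses exp(y_pred), which suggests the raw score lives on the
# log scale, so raw predictions are exponentiated to get sales-scale forecasts.
preds = np.exp(booster.predict(X))
```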
Deepfake Detection Challenge
92nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Deepfake Detection Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>With the publication of the final results, we will share in this post the details of the machine learning pipeline that got us into the 92nd place (top 5% of all contestants).</p> <p>We want to share insight into what we found worked well and what did not for this challenge.</p> <h2>ML Pipeline</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2541996%2F559ac0932600a1a49a16c482b09fafc1%2Fpipeline_DFDC.png?generation=1592040705943795&amp;alt=media" alt=""> <em>Our ML pipeline, including randomly sampling frames from videos, cropping around the face, inference with a trained model, and averaging the predictions to make a prediction for the video.</em></p> <p>The first choice we had to make involved what to process: should we focus on the entire videos, or on individual frames sampled from the videos? This choice would affect the type of machine learning model we used, the computation time, and probably the final performance. After observing that many fake videos contained obvious facial manipulations and could be identified from still images, we decided early on to focus on detecting a deepfake from individual frames.</p> <p>We first created a balanced training set by sampling 10 frames from real videos and 2 frames per fake video. We then cropped these images to focus on the faces, used transfer learning to train a pretrained model on this face dataset, and finally for inference we used ensembling with several models and video frames.</p> <p>We reached the top 5% of participants thanks to a combination of choices we made throughout the ML pipeline, focusing on:</p> <ul> <li>face detection</li> <li>model selection</li> <li>generalization methods</li> <li>frame and model ensembling</li> </ul> <h2>Face Detection</h2> <p>Since the video manipulations focused on people’s faces, which were only a small part of the videos, it was essential to develop a good face detection system. Specifically, we sought a face detection system that would maximize the detection probability and minimize the false positive rate.</p> <p>We benchmarked and compared several different open source face detectors, and ended up testing two different approaches:</p> <ol> <li>Face detection with no false positives. We hypothesized that noise in the data created by frames wrongly identified as faces could adversely affect training. For this approach, we used a combination of two different face detectors: faced [1] and dlib [2]. By requiring that a face must be detected by both detectors, we reached 0.1% false positives, which came at a cost of a rather low detection rate of 60%.</li> <li>Balanced false positive rate and detection probability. For this approach, we used the BlazeFace [3] detector and immediately reached 97% detection rate and 7% false positives. However, we noticed that this detector performed poorly when the faces were far from the camera. 
By adding an additional step in which we created horizontal, vertical, and matrix sliding windows with overlapping cells, and then taking those windows to perform face detection, we reached a 99% detection rate with 8% false positives.</li> </ol> <p>We found that the second approach yielded better results on the Kaggle leaderboard, and then adopted it as our baseline for face detection. We parallelized this approach to make it faster and we were extracting frames from videos, detecting faces, and performing model inference at 25 frames per second in the Kaggle environment.</p> <h2>Choice of model</h2> <p>By choosing to focus only on frames as opposed to entire videos, we had reduced the challenge to an image classification problem and could rely on pre-existing methods. Specifically, we used transfer learning with Convolutional Neural Networks (CNNs) trained on the ImageNet classification task [4]. Although the ImageNet dataset and the Deepfake dataset are very different, all natural images share common features, and we expect that starting with a pretrained model allows it to focus on the specifics of the problem at hand.</p> <p>As we tested several models, we found a correlation between the model’s results on ImageNet and the validation loss on our dataset. As a result, we selected ResNext [5] and DenseNet [6] models.</p> <h2>Generalization methods</h2> <p>The biggest problem we had to solve in this DeepFake challenge was that of generalization, as the training data provided and the test data used for the leaderboard came from different distributions. Consequently, we found that there was often a significant gap between our validation score and the leaderboard score.</p> <p>We combined three different approaches to improve generalization:</p> <ul> <li>Regularization: we tried standard methods such as dropout, weight decay, and label smoothing. However, we found that these methods only yielded a small improvement.</li> <li>Adding more data: to increase the diversity of images in our training set, we added data from a different source. Specifically, we used the FaceForensics dataset [7], released in 2018, which is also focused on deepfake detection.</li> <li>Data augmentation: We synthetically created more data by randomly introducing slight variations to the existing images using translation, rotation, brightness variation, compression, and resolution reduction.</li> </ul> <p>We found that the data augmentation techniques led to the highest improvement in our score. On our final submissions, we only used data augmentation and regularization.</p> <h2>Ensembling</h2> <p>To make predictions on videos, we performed two types of ensembling:</p> <ul> <li>Frame ensembling: we randomly sampled frames from a video, performed face detection, made individual predictions for each face detected, and then used the average over all the predictions as our prediction for the video. We started out doing this averaging with 5 frames per video, but quickly realized the more frames we used, the better. 
In our final submission, we were taking the average over 100 frames per video.</li> <li>Model ensembling: since different models tend to learn slightly different patterns in the data, to achieve better generalization, we averaged the predictions made by several different ResNext and DenseNet models.</li> </ul> <p>The use of both types of ensembling significantly improved our score and made us shoot up about 100 places in the ranking.</p> <h2>What we did and did not try</h2> <p>There were a few other things we tried that ended up not making the cut in our final pipeline:</p> <ul> <li>We hypothesized that we may be missing out on important information captured in the temporal dimension of the video. We thus trained a Recurrent Neural Network (RNN) on top of the pre-trained Convolutional Neural Network (CNN) with sequences of frames as input. However, we found that our results were not very different from training just a CNN and averaging several predictions. We thus concluded that most information about the real or fake nature of the data could be found in individual frames.</li> <li>We hypothesized that simply averaging predictions made by a CNN over several frames of a video was suboptimal and that better results could be reached by training a separate network to aggregate the predictions. However, despite having a huge impact on our validation score, we found that this procedure led to overfitting to our train dataset and worse generalization to the test dataset.</li> </ul> <p>Some things we did not try but could have improved our score are:</p> <ul> <li>We knew from the start that around 5% of the videos were fake because of manipulations not of the image, but of the audio. Although we decided to focus on the visual fakes, extra gains could have been made, either by sticking with our approach and cleaning our train dataset to remove the fake audios or by building a more comprehensive model that could deal with both fake audios and fake visuals.</li> <li>Training a classifier with an auxiliary (but relevant) objective is known to often lead to improvement. For instance, we could have trained our model to not only classify the video frames into fake and real categories but also predict which pixels were most likely to have been altered. <ul><li>We expect that more data augmentations and fine tuning of our models and pipeline would have led to improved generalization. For example, we found after the competition had officially ended that a wider crop around the faces we detected would have led to a better score.</li></ul></li> </ul> <h2>Our results</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2541996%2F6c5a267127b0fe3e6ac6f75e452342ec%2Fscore_evolution.png?generation=1592041933312850&amp;alt=media" alt=""> <em>Evolution of our score (yellow), ranking in the leaderboard (blue) over time, and main technology driver behind score improvement (green banner)</em></p> <p>The plot above shows the evolution of our leaderboard score (in yellow, smaller is better) and of our ranking among competitors (blue, higher is better). Our score consistently improved over time, as we incrementally tested different approaches and made changes to our pipeline.</p> <h2>Acknowledgments</h2> <p>Thank you to our team members: <a href="/berlitos">@berlitos</a> <a href="/wrclements">@wrclements</a> <a href="/bastien7898">@bastien7898</a> <a href="/bennox">@bennox</a>. 
Thank you to the organizers for hosting this competition.</p> <h2>References</h2> <p>[1] <a href="https://github.com/iitzco/faced">iitzco/faced: 🚀 😏 Near Real Time CPU Face detection using deep learning</a> [2] <a href="http://dlib.net/">dlib C++ Library</a> [3] <a href="https://github.com/hollance/BlazeFace-PyTorch">hollance/BlazeFace-PyTorch: The BlazeFace face detector model implemented in PyTorch</a> [4] <a href="https://paperswithcode.com/sota/image-classification-on-imagenet">ImageNet Leaderboard | Papers with Code</a> [5] ResNext: <a href="https://arxiv.org/abs/1611.05431">https://arxiv.org/abs/1611.05431</a> [6] DenseNet: <a href="https://arxiv.org/abs/1608.06993">https://arxiv.org/abs/1608.06993</a> [7] <a href="https://github.com/ondyari/FaceForensics">ondyari/FaceForensics: Github of the FaceForensics dataset</a></p>
M5 Forecasting - Accuracy
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi kagglers! :)</p> <p>This is my first solution sharing, I hope it's helpful for you.</p> <h2>Thanks to</h2> <p>First of all, I'd like to thank the competition organizers and the data scientists who participated in the competition. I have learned a lot from you. Really, really, thank you.<br> (especially <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a>'s kernels and discussions. Really helpful.)</p> <h2>Solution</h2> <p>The method I've used is very simple and not that great. It is all based on insights from the community. </p> <h3>Pre-processing</h3> <ul> <li>features based on price</li> <li>features based on calendar</li> <li>features based on target lag (recursive / non recursive)</li> <li>features based on target lag rolling mean / std (recursive / non recursive)</li> </ul> <h3>CV strategies</h3> <p><strong>Time based split</strong> : mimic the train/test split</p> <ul> <li>cv1 : d_1830 ~ d_1857</li> <li>cv2 : d_1858 ~ d_1885</li> <li>cv3 : d_1886 ~ d_1913</li> <li>public : d_1914 ~ d_1941</li> <li>private : d_1942 ~ d_1969</li> <li>without early stopping</li> </ul> <h3>Modeling strategies</h3> <ul> <li>recursive / non recursive<ul> <li>by store_id</li> <li>by store_id - cat_id</li> <li>by store_id - dept_id</li></ul></li> </ul> <h3>Model</h3> <ul> <li>LGBM (single)</li> <li>objective = tweedie</li> </ul> <h3>Post-processing</h3> <ul> <li><strong>Without</strong> post-processing (e.g. no magic multiplier).</li> </ul> <h3>Simple summary as diagram</h3> <p><img src="https://github.com/YeonJun-IN/data.science.competition/blob/master/4.%20%5Bkaggle%5DM5-Accuracy/img/simple_summay.png?raw=true"></p> <h2>Opinion</h2> <p>The WRMSSE metric was unfamiliar to me, and the data structure was complicated. So I first read the documents provided by the organizer, and I got a lot of insights from the brilliant notebooks and discussions:</p> <ol> <li>different time series by state, store, cat, dept</li> <li>the effect of disaster and weather on sales (the target)</li> <li>the "out of stock" problem</li> <li>the anomaly of sales</li> <li>the importance of solid CV</li> </ol> <p>and so on…</p> <p>I decided two things through these insights.</p> <ol> <li>divide into groups with similar time series, and model each one.<br> (e.g.) by store, by store cat, by store dept, etc.</li> <li>select the final model using mean(cvs, public score) and std(cvs, public score)<br> (especially, focusing on std.)</li> </ol> <p>At first I made a baseline using the non-recursive method, and I saw that there was a lot of variation in CV and public scores (large std).</p> <p>Then, I made a second baseline using the recursive method based on <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a>'s kernel, but there was still a lot of variation. 
</p> <p>The interesting parts here are:</p> <ol> <li>Overall, the recursive method had a better score than the non-recursive one.</li> <li>The non-recursive method had its best score on cv3.</li> <li>The recursive method had its best score on public.</li> </ol> <p>Based on these insights, I expected that ensembling the non-recursive and recursive models might lead to robustness.</p> <p>Then, I selected that ensemble as the final model.</p> <hr> <p>As I mentioned earlier, my method is very simple and not that great, so I didn't expect this at all. I was very lucky to learn a lot and to get an unbelievable result. </p> <p>Thank you to all the kagglers!!</p> <p>(and very sorry for the many participants who spent a lot of time and effort on this competition. It's all thanks to you.)</p>
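<p>To illustrate the recursive method mentioned above (as opposed to the non-recursive one, which only uses lags of 28+ days so that all features are known in advance), here is a schematic day-by-day prediction loop; the column names and helper function are illustrative, not the winner's code.</p>
```
import pandas as pd

def recursive_forecast(model, history, make_features, horizon=28):
    """history: long DataFrame with columns ['id', 'd', 'sales'] for past days.
    make_features: callable(history, day) -> (ids, X) building lag/rolling
    features for that day from history, including previously predicted days."""
    for _ in range(horizon):
        day = history["d"].max() + 1
        ids, X = make_features(history, day)
        preds = model.predict(X)
        history = pd.concat(
            [history, pd.DataFrame({"id": ids, "d": day, "sales": preds})],
            ignore_index=True,
        )
    return history
```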
M5 Forecasting - Accuracy
Top 1% solution (28th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>So I thought about sharing some points of my strategy with you guys.</p> <p>I decided not to use any multipliers. I'm not a big fan of them and prefer to work on generalizing my model through feature engineering and tuning hyperparameters.</p> <h2>Feature Engineering</h2> <p>I used some ideas published by @kyakovlev at <a href="https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic">https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic</a> such as:
- rolling mean and std - lag [ 28 ] rolling [ 7 14 30 60 180 ]
- rolling mean - lag [ 1 7 14 30 ] rolling [ 7 14 30 60]
- mean and std target encoding for 'item id', 'cat id' and 'dept id'</p> <p>From <a href="https://www.kaggle.com/sibmike/m5-out-of-stock-feature-640x-faster">https://www.kaggle.com/sibmike/m5-out-of-stock-feature-640x-faster</a> I took 'gap e log10' and 'sale prob' and obtained their rolling means with lag 28.</p> <p>I have also used features such as the last 2 digits of 'sell price', 'days from last price change', 'sell price ratio', 'days from last sale', 'release week', 'weeks from release', 'cluster', 'ADI rolling mean' and target encoding taking 'weekend', 'snap' and 'event' + 'preholidays' into account. </p> <p>There are other features as well that don't need mentioning as they were basic ones. I also fixed some outliers I found myself and others mentioned at <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/161718">https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/161718</a>.</p> <p>About the data, I actually calculated the features using the whole dataset since 2011, except the encoding features where I set the earliest date to 2013.</p> <h2>Model</h2> <p>I used lightgbm and created models for each store, starting from @kyakovlev's model and tuning it to my needs. </p> <p>I trained using a walk-forward cross validation I adapted from <a href="https://towardsdatascience.com/time-based-cross-validation-d259b13d42b8">https://towardsdatascience.com/time-based-cross-validation-d259b13d42b8</a></p> <h2>My thoughts</h2> <p>I think my solution can be improved, a lot. By Sunday, my model started to behave very strangely and I decided to rebuild everything from zero, so I didn't have much time to analyze which features were indeed good or not. I cut off a bunch of features, and didn't have time to test Bayesian encoding and other ideas, which I'll do now more calmly.</p> <p>Hope this helps you all understand some approaches of your fellow competitors, although we saw that the result of this competition could also have been reached simply with 'some' multipliers. A big thanks to those that shared their ideas and helped to improve my model.</p> <p>Let me know if I forgot to mention anyone's idea, and I'll give the proper credit/referencing.</p>
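<p>A small pandas sketch of the lag/rolling features mentioned above (lag-28 rollings plus shorter-lag rollings); the toy frame and column names are placeholders.</p>
```
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": np.repeat(["item_A", "item_B"], 400),
    "d": np.tile(np.arange(400), 2),
    "sales": np.random.default_rng(0).poisson(1.0, 800).astype(float),
})

g = df.groupby("id")["sales"]
# lag-28 rolling statistics (always safe for a 28-day horizon)
for w in (7, 14, 30, 60, 180):
    df[f"rmean_28_{w}"] = g.transform(lambda s: s.shift(28).rolling(w).mean())
    df[f"rstd_28_{w}"] = g.transform(lambda s: s.shift(28).rolling(w).std())
# shorter-lag rollings (these need more care in validation, as discussed above)
for lag in (1, 7, 14, 30):
    for w in (7, 14, 30, 60):
        df[f"rmean_{lag}_{w}"] = g.transform(lambda s: s.shift(lag).rolling(w).mean())
```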
M5 Forecasting - Uncertainty
6th solution - Pure statistical forecast
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are really glad to reach the top 10 in our first Kaggle competition. (We did invest countless hours in this project.)</p> <p>One thing to know about our team is that we are all Supply Chain professionals first, data scientists second. We used a lot of domain knowledge to crack the problem, and many of the techniques we've used were recycled from things we do on a daily basis at Lokad (where we all work or used to work). </p> <p><strong>Summary</strong></p> <p>Our solution is a multi-stage state-space model, the states being active or inactive. (A good reference here: <a href="https://gluon.mxnet.io/chapter12_time-series/issm-scratch.html">https://gluon.mxnet.io/chapter12_time-series/issm-scratch.html</a>)</p> <p>Our predictions were generated through Monte Carlo simulation at level 12. We considered demand at different stores to be independent, so we aggregated demand trajectories from level 12 up to level 10. Levels 1 to 9 time-series were handled individually with simple innovation state space models. We modeled emission with a Negative Binomial distribution to represent the fact that demand is discrete. </p> <p>Seasonality factors were carefully hand-crafted and events were modeled with simple linear coefficients. (Calculated at store-department level.)</p> <p><strong>Good properties</strong></p> <ol> <li><p>Our solution is fully explainable (white-box approach) - so we can easily describe each forecast in case we are challenged. </p></li> <li><p>It is linearly scalable: you can give us the whole Walmart dataset and it will still run without a problem (of course we would need some extra cpu =). (And we could still get a factor 10x by moving some core parts of the code to a compiled language.)</p></li> <li><p>It clearly differentiates demand from sales (how much it would sell if we were never out of stock).</p></li> <li><p>It can output full demand distributions for horizons, for example: how much will it sell next week? Note that this is different from how much it will sell day by day.</p></li> <li><p>It can be used for training agent-based models (used for our MEIO solutions). </p></li> </ol> <p><strong>Not so good properties</strong></p> <ol> <li><p>Time consuming. Adding new categories would require additional human analysis.</p></li> <li><p>Does not account for pricing effects (we could not properly model them with the given dataset).</p></li> <li><p>Does not forecast new SKUs. (We would require a separate solution for this case.)</p></li> </ol> <p><strong>Conclusion</strong></p> <p>We do believe that tailored statistical models were a good fit for the problem at hand. We had a fairly stable score across the years and periods and, above all, full control of the solution.</p> <p><a href="https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=904s">https://www.youtube.com/watch?v=bw7fCukUVkA&amp;t=904s</a></p>
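<p>A tiny numpy sketch of the level-12 Monte Carlo idea described above: sample negative-binomial demand trajectories independently per store, sum them to get the item-level (level 10) trajectories, and read off the nine required quantiles. All parameter values are made up for illustration.</p>
```
import numpy as np

rng = np.random.default_rng(0)
QUANTILES = [0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995]

# Hypothetical per-store negative-binomial parameters for one item over 28 days:
# mean mu and dispersion r (numbers are illustrative only).
n_stores, horizon, n_sims = 10, 28, 10_000
mu = rng.uniform(0.5, 3.0, size=(n_stores, horizon))
r = 2.0                                  # dispersion
p = r / (r + mu)                         # numpy's NB parameterisation (mean = mu)

# Sample demand trajectories independently per store, then aggregate to level 10
# (the item across all stores) by summing the simulated trajectories.
sims = rng.negative_binomial(r, p, size=(n_sims, n_stores, horizon))
item_level = sims.sum(axis=1)            # (n_sims, horizon)
pred_quantiles = np.quantile(item_level, QUANTILES, axis=0)
```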
M5 Forecasting - Accuracy
[68th place] Stable solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners! And thanks to all of you who shared your knowledge during this competition. I share my solution because I haven't seen anything similar yet, and because it proved to be very stable over my CV framework and the private leaderboard.</p> <h2>Validation framework</h2> <p>I used 4 folds for validation:</p> <p>F1: <code>2015-04-25</code> - <code>2015-05-22</code>
F2: <code>2015-05-23</code> - <code>2015-06-19</code>
F3: <code>2016-03-28</code> - <code>2016-04-24</code>
F4: <code>2016-04-25</code> - <code>2016-05-22</code></p> <h2>Base models</h2> <p>I only used a single LightGBM with recursive one-step-ahead prediction, trained over the entire dataset, as the base model.</p> <p>These are the hyperparameters:
<code>
model_params = {
    'objective': 'tweedie',
    'tweedie_variance_power': 1.1,
    'metric': 'None',
    'max_bin': 127,
    'bin_construct_sample_cnt': 20000000,
    'num_leaves': 2**10-1,
    'min_data_in_leaf': 2**10-1,
    'learning_rate': 0.05,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 1,
    'lambda_l2': 0.1,
    'boost_from_average': False,
}
</code></p> <p>And these are the features:
* time attributes (year, month, year week, week day) and cyclical transformations.
* Rolling mean, median, std and kurtosis. Window shifts: [1, 7, 28], window sizes: [7, 14, 28]. Kurtosis is a great idea to stabilize the effect of outliers.
* Rolling count of consecutive zeros.
* Rolling sales frequency (number of days with sales / size of the window).
* Standard price features + rolling transformations to detect the effect of price changes.
* For important events (Christmas, New Year, 4th of July, Super Bowl, Easter, NBA finals and Thanksgiving) I created separate binary features. Additionally I created an extra binary feature for events where sales are affected the day before.</p> <p>Important considerations:
1. When using tweedie as the loss, the model is unable to predict 0. This is a problem for features like the rolling count of consecutive zeros or the rolling sale frequency. So, for each step-ahead prediction I truncated to zero all predictions below <code>rho</code>, before predicting the next time step. The value of <code>rho</code> was estimated by observing how the model predicted on the periods with long runs of zeros.
2. By varying the <code>seed</code> value I observed changes of 0.02X in the WRMSSE, i.e., it is not a good idea to use a single model with a single seed (unless you are a lucky person).
3. LightGBM spent almost all its splits on <code>item_id</code> when using the default categorical handling (left figure). On the other hand, if you encode <code>item_id</code> differently instead of the default encoding, more features are used for splits (right figure): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2F8ea5de2184701018474057dfc0618561%2FScreen%20Shot%202020-07-02%20at%2010.33.40.png?generation=1593704069070764&amp;alt=media" alt=""></p> <p>I built 60 models in total, 35 models with <code>item_id</code> default encoding and 25 models with <code>item_id</code> <a href="http://contrib.scikit-learn.org/category_encoders/glmm.html">GLMM encoded</a>. 
To give more diversity to the ensemble, each model was trained with a different seed and the number of iterations was set randomly between 500 and 700 (this range was obtained through cross-validation). </p> <p>For ensembling I simply aggregated all the predictions using the <a href="https://en.wikipedia.org/wiki/Trimean">trimean</a>, which is a robust central tendency statistic. I decided not to use a more complex aggregation strategy after reading this thread: <a href="https://stats.stackexchange.com/questions/47950/ensemble-time-series-model">https://stats.stackexchange.com/questions/47950/ensemble-time-series-model</a>.</p> <p>For each of the 60 models I applied the trend correction and hierarchical reconciliation procedures (described below) <strong>before aggregating</strong>.</p> <h2>Trend Correction</h2> <p>The problem with recursive models is that the error propagation produces overestimation on the aggregated time series. I observed this behavior in all the folds except the public leaderboard fold, where this overestimation was beneficial: <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/155770">discussion</a>.</p> <p>Looking at the predictions of my model on the top levels (level 1 &amp; level 3) I noticed a great deviation. However, the shape of the sales curve was right. From here comes the idea to build a trend correction method on the top levels. This method must have the following properties: <strong>properly correct the trend when it is deviated, and be the identity when the trend is right</strong>.</p> <p>The trend correction method I developed works by fitting 2 trend models for each time series to be corrected:
1. Fit a trend model to the training time series concatenated with the 28 step-ahead predictions. This model should be tightly fitted in order to capture the change in the last 28 time steps.
2. Fit a trend model to the training time series only. This model should generalize the trend well for the next 28 time steps.</p> <p>In this figure you can see the method applied to fold F3 and store <code>CA_1</code> (where predictions are overestimated at level 3): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2F7505a5e0aab18ec5058d1d9eca7fbd0b%2FScreen%20Shot%202020-07-02%20at%2009.36.36.png?generation=1593700730273178&amp;alt=media" alt=""> And in this figure you can see the method applied to fold F3 and store <code>WI_3</code> (where predictions are well estimated at level 3): <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2Fdd453ffa7ca98c2e745a123b1cec7aec%2FScreen%20Shot%202020-07-02%20at%2009.37.20.png?generation=1593700921331810&amp;alt=media" alt=""></p> <p>The trend correction model I used was <a href="https://www.slac.stanford.edu/cgi-bin/getdoc/slac-pub-3477.pdf">Friedman's supersmoother</a>, with this <a href="https://github.com/jakevdp/supersmoother">great python implementation</a>. I applied the trend correction on level 3 (store_id), and tuned the hyperparameters of the supersmoother over the 4 folds using Optuna. </p> <p>Here is the script I used to tune the supersmoother: <a href="https://github.com/mavillan/m5-forecasting-accuracy/blob/master/scripts/trend_correction.py">trend_correction.py</a>.</p> <h2>Hierarchical reconciliation</h2> <p>Once the trends are corrected at store level, it is necessary to perform forecast reconciliation at store-item level. 
</p> <p>I tried almost all the methods described here: <a href="https://otexts.com/fpp2/hts.html">hierarchical time series</a>. I spent a lot of time on the optimal combination approach with weighted least-squares, but couldn't find an appropriate way of weighting the predictions. In the end I used the forecast proportions approach, which gave me the best results in CV. </p> <h2>Final words</h2> <p>Finally, this is how the trend correction applied at store level looks at level 1 for the predictions in the test period: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F350756%2Ffb2462d3815d3004447d10687a6e0b46%2FScreen%20Shot%202020-07-02%20at%2015.15.16.png?generation=1593721195401858&amp;alt=media" alt=""> The orange line (<code>reconc</code>) is the store-level trend-corrected predictions, aggregated at level 1. In contrast to some Kagglers, I think that the decrease in sales in the test period was predictable with the data at hand. However, it is something very hard to capture with a model trained at store-item level, where the time series are quite noisy. </p> <h2>Contributions</h2> <p>During this competition I built two python packages that I would like to share with the Kaggle community:</p> <p>1) <strong>tsforest</strong> (I have been working for a while on this one) <a href="https://github.com/mavillan/tsforest">github repo</a>. This package provides a complete framework for efficiently handling multiple time-series datasets and building GBRT forecast models, using any of LightGBM, XGBoost, CatBoost and H2O GBM as backends. Using this package on a 16-core VM I was able to:
* Build the lagged/rolling window training features in ~10 min (including custom rolling window statistics).
* Train LGBM over the entire dataset in ~30 min.
* Perform the recursive one step ahead predictions in ~10 min. </p> <p>Here is the script I used to build the 60 models using <strong>tsforest</strong>: <a href="https://github.com/mavillan/m5-forecasting-accuracy/blob/master/scripts/forecast.py">forecast.py</a>. As you can see, using tsforest makes the code clean and easy to read.</p> <p>2) <strong>mahts</strong> (model agnostic hierarchical time series) <a href="https://github.com/mavillan/mahts">github repo</a>. This package implements algorithms for hierarchical time series reconciliation, based on the <a href="https://cran.r-project.org/web/packages/hts/index.html">hts R package</a>, but has the benefit of being agnostic to the forecasting model.</p>
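<p>As a side note, the trimean used for ensembling (mentioned in the base models section above) is simple to compute; a sketch over a stack of model predictions with random placeholder numbers:</p>
```
import numpy as np

def trimean(preds, axis=0):
    """Tukey's trimean: (Q1 + 2*Q2 + Q3) / 4, a robust central-tendency
    statistic, applied here across the model dimension."""
    q1, q2, q3 = np.percentile(preds, [25, 50, 75], axis=axis)
    return (q1 + 2.0 * q2 + q3) / 4.0

# e.g. 60 models x (series x days) of predictions (synthetic placeholder data)
all_preds = np.random.default_rng(0).gamma(1.0, 1.2, size=(60, 100, 28))
ensemble = trimean(all_preds, axis=0)   # -> shape (100, 28)
```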
M5 Forecasting - Accuracy
16th Place Solution - Kalman Filters and Boosted Trees
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I just posted my team's solution <a href="https://nousot.com/blog/how-we-won-gold/">on my company Nousot's blog</a> and our code is <a href="https://github.com/nousot/Public/tree/main/Kaggle/M5">on Github</a>. In the article, the section entitled "The Public Notebook Component" is where the methodological description starts. Cheers to <a href="/kyakovlev">@kyakovlev</a> for the excellent data pipelines.</p> <p>Our custom component used Kalman Filters via the R package <a href="https://cran.r-project.org/web/packages/KFAS/index.html">KFAS</a>, an awesome time-series tool if you gravitate towards state space modeling. Our data pipeline for this solution was constructed from scratch and was relatively simple. However, at the end of the contest we realized that our solution was losing badly on the sparse series, so we used a loess fit to help us choose between a boosted tree solution from the public notebooks and the Kalman Filter solution for each time series. I suppose that makes it a "mixture of experts."</p> <p>Here's a case where the Kalman Filter looked more reasonable to us than the boosted tree solution:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F114170%2F43c9ef6e9249743d6884a7f5269d427b%2FKalmanWins.png?generation=1595451979390581&amp;alt=media" alt=""></p> <p>And here's a case where the boosted tree's solution was more reasonable: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F114170%2F827e75866eb49ce3285dc87f63c730e3%2FTreesWin.png?generation=1595452094881138&amp;alt=media" alt=""></p> <p>It was a great contest. Thanks to everyone who competed!</p> <p>Ben</p>
Herbarium 2020 - FGVC7
5th - baseline with 11 epochs CE+SE-ResNext50
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Herbarium 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners. </p> <p>I enjoyed last year's Herbarium 2019, where I managed to beat the Facebook AI team while using 100x less compute (relying on kernels only) to train the final solution, so I hoped I could do something similar this year. Oh boy, how wrong I was....</p> <p>The size of the dataset knocked me out immediately. Still, I decided to run one experiment: rerun the same code from last year and then play with post-processing :). </p> <h2>Training the baseline</h2> <p>I borrowed a 1080Ti for an extended weekend and ran 11 epochs :). I just used cross entropy, trained on the whole dataset, had only some small augmentations to enable fast learning, and did not treat imbalance at all. The input size was 1/2 of the original, I used accumulation to get to a batch size of 256, and scaled the LR with cosine annealing. You could think of my solution as a default or a baseline. </p> <h2>Squeezing out the baseline - submit all you got</h2> <h3>Comparing checkpoints</h3> <p>I submitted after every epoch and tried to get a better understanding of what's going on. The last two checkpoints scored 0.64+ on public. I noticed that as my score increased with later checkpoints, the number of predicted unique classes also increased linearly. It went from 0.55 and 18K unique classes at epoch 5 to 0.64 and 24K at epoch 11.</p> <h3>Postprocess 1</h3> <p>I multiplied the softmax outputs by the inverse of the class frequency and got a boost of 0.05, reaching 0.69+ with 27K unique classes. </p> <h3>Postprocess 2</h3> <p>The classes in the test set had the same distribution as in the training set, with counts clipped at a max of 10. I calculated the difference between the expected counts and my predicted counts, then I adjusted the probabilities accordingly. </p> <p>For example, if I expected a count of 10 but got 15, I needed a reduction factor of about 0.66 (the adjustment). To prevent too much fluctuation I sqrt-normalized the adjustments and repeated the procedure several times. I tracked the number of uniquely predicted classes and stopped when it hit 32K. This gave me 0.04+, so I reached ~0.74. Horizontal flip gave me a bit extra, 0.005.</p>
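<p>A toy numpy sketch of the two post-processing steps described above (inverse-frequency reweighting, then iteratively nudging predicted class counts towards the expected counts). The author describes the adjustment in terms of count differences; the version below uses a smoothed ratio instead, and all numbers are synthetic.</p>
```
import numpy as np

rng = np.random.default_rng(0)
n_test, n_classes = 1000, 50                                  # toy sizes
probs = rng.dirichlet(np.ones(n_classes), size=n_test)        # stand-in for softmax outputs
train_freq = rng.integers(1, 100, size=n_classes).astype(float)

# Postprocess 1: reweight by inverse training frequency, then renormalise.
adj = probs / train_freq
adj /= adj.sum(axis=1, keepdims=True)

# Postprocess 2: expected test counts follow the training distribution, clipped at 10.
expected = np.minimum(train_freq * n_test / train_freq.sum(), 10)
for _ in range(10):
    pred_counts = np.bincount(adj.argmax(axis=1), minlength=n_classes)
    ratio = (expected + 1) / (pred_counts + 1)                # smoothed adjustment factor
    adj *= np.sqrt(ratio)                                     # sqrt to dampen fluctuations
    adj /= adj.sum(axis=1, keepdims=True)

final_labels = adj.argmax(axis=1)
```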
Plant Pathology 2020 - FGVC7
10th place - missed 30+ better submissions :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It's pretty much based on this <a href="https://www.kaggle.com/dimakyn/classification-densenet201-efficientnetb7">kernel</a>, with some minor changes like keeping the aspect ratio while resizing (600x850) and training on the whole training set without cross-validation; I also added TTA with flips (0, H, V, HV). </p> <p>I did this for B5, B6, B7 and IRv2. The best was IRv2 with 0.981 on public, and the ensemble gave 0.984. </p> <p>Then I added pseudo labels and got 0.984 with a single IRv2 and 0.988 with the ensemble.</p> <p>No fancy augmentations or loss functions, and I did not treat imbalance. Still, I ended up getting many high scores on both public and private. But it was hard to pick the best one, as I had no validation set and both the test and training sets are small. </p> <p>So I took the best public submission and randomly chose one out of my 100 best submissions :). The random one ended up above my best public. </p> <p>Also, I missed 30+ better submissions than my current score :D</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1556833%2Ff36bff29350340c3669cdad0a7dd203d%2Fbad_luck.png?generation=1590563501929665&amp;alt=media" alt=""></p>
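<p>The flip TTA mentioned above is framework-agnostic; here is a hedged numpy-style sketch of averaging the four views (0, H, V, HV), where <code>predict_fn</code> is a placeholder for whatever the trained model exposes (for example a Keras <code>model.predict</code>):</p> <p><code>
import numpy as np

def predict_with_flip_tta(predict_fn, images):
    """Average predictions over the four flip views: none, horizontal, vertical, both.
    `images` is a batch shaped (N, H, W, C); `predict_fn` returns class probabilities."""
    views = [images,
             images[:, :, ::-1],      # horizontal flip
             images[:, ::-1, :],      # vertical flip
             images[:, ::-1, ::-1]]   # both flips
    return np.mean([predict_fn(v) for v in views], axis=0)
</code></p>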
M5 Forecasting - Accuracy
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers!</p> <p>First of all I would like to thank Prof. Makridakis, Evangelos Spiliotis, the organizing team, the generous sponsors and everyone else involved for hosting this interesting competition.</p> <p>Also I want to thank all the great public kernel contributors that built the basis for my solution and final rank. Without your great work this would not have been possible! (Please see credits at the end of the summary)</p> <h3>1. Before we start...</h3> <p>The overall approach consists of two separate modelling streams on top and bottom of the grouping hierarchy that are aligned to get a result. It should be reproducible for any given time-frame as well as automatable and therefore valuable for any complex grouped/hierarchical time series problems.</p> <h3>2. The starting point</h3> <p>As a quick reminder of what our baseline was, <a href="https://www.kaggle.com/chrisrichardmiles/simple-model-avg-last-28-days-grouped-by-weekday">a quick look at this notebook</a> is helpful. Pretty clear that if we cannot beat 0.75 significantly we are on the wrong track.</p> <h3>3. The bottom level lgb model - "Probability" of a single item being bought:</h3> <p>The bottom level is based on the lgb models that many participants used. ( <a href="/kyakovlev">@kyakovlev</a> - thank you for your great inputs in this challenge ) One really important aspect is that I did not use any historic sales data as features (no rollings, no lags, no last buy etc.). My hypothesis is that the drivers for a peppermint candy bar being bought in the WI_1 shop are not whether or not this candy bar was bought once yesterday and twice on Monday but only price, date-time, SNAP, etc.</p> <p>The lgb model learns a "probability" for each item(-quantity) being bought in a given shop on a given day based on the external drivers. ( By the way - <a href="https://youtu.be/uHGlCi9jOWY">sorry to all the statisticians reading my crazy ideas here</a> )</p> <p>It will be a major next step to dig into the understanding of these bottom-level models. For example: Why do errors cancel out so nicely when aggregating? (I have a theory too embarrassing to write about yet - will follow up after some testing) </p> <p>Maybe my hypotheses turn out completely wrong but empirically they did hold for this given data set. As a sidenote: lr 0.2, 3600 iterations ... pretty basic settings on this one. Trained by store.</p> <h3>4.) The problem with the correct level</h3> <p>When following this bottom level approach we run into one problem. 
<em>The resulting models do not incorporate any trend or seasonalities.</em> (See comments for discussion - this point is not correct) We cannot (or at least I cannot) "de-trend" the underlying signals due to their intermittent nature.</p> <p>Maybe incorporating some kind of rolling/lag data might work - but I didn't see how to do this in a stable way that I understand or at least believe to understand (sometimes not the same :-) ) and am able to control.</p> <p>A lot of people in this competition chose to tackle this issue with "magic multipliers".</p> <p>What I used to shift the level up or down was <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/140564">one of the custom loss functions discussed in the forums</a>. </p> <p><code>
import numpy as np

def custom_asymmetric_train(y_pred, y_true):
    # y_true arrives as the lgb.Dataset, y_pred as the raw predictions;
    # `multiplier` is set globally before training
    y_true = y_true.get_label()
    residual = (y_true - y_pred).astype("float")
    # under-prediction is penalised by `multiplier`, over-prediction by 1
    grad = np.where(residual < 0, -2 * residual, -2 * residual * multiplier)
    hess = np.where(residual < 0, 2, 2 * multiplier)
    return grad, hess
</code></p> <p>With a multiplier &gt;1 we can train the model to "overshoot" the ground truth; with a multiplier &lt;1 we train the model to "undershoot" the ground truth. I think the loss approach will work better than shifting the result with a multiplier, but I did not compare the two approaches.</p> <p>Top level aggregations for a good fit and an overshot fit look like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F449263%2F89417feac00e6cd0dc83aaa8043f9fd9%2FVal6_quite_good_example.png?generation=1594061478181612&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F449263%2F34365bac17a83cb7b37130ee9709403e%2FVal5_3mult_too_high_example.png?generation=1594061508266861&amp;alt=media" alt=""></p> <h3>So how do we find the right level if we do not want to guess? (i.e. "play the lottery")</h3> <h3>5.) Aligning with an independent top-level prediction</h3> <p>What we need to find the right level is a top level prediction for which we are confident that trends and seasonalities are learned very well. The approach I used was to use N-Beats for the top 5 levels. </p> <p>The resulting predictions are very good and a great alignment basis as you can see for example in this top level (lvl1) validation result:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F449263%2F35b8f0bca46bc996a57c6b9711698e6f%2FN-Beats-TopLvl.png?generation=1594061615328817&amp;alt=media" alt=""></p> <p>Now I trained 15 different bottom level lgb solutions with different loss multipliers. Those bottom level predictions were aggregated and aligned with the N-Beats predictions for levels 1-5. Selection of the best fit was basically done when the mean error between N-Beats and the aggregated bottom level predictions "flipped" from positive to negative or vice versa. (A rough sketch of this selection step is given at the end of this post.)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F449263%2Fd71aea79739d8a17c2c0ef86369ad5d5%2FFinal_Ensemble_Table.png?generation=1594061859331963&amp;alt=media" alt=""></p> <p>After selecting the best fitting bottom level solution I built an ensemble consisting of this solution and the 2 solutions above as well as the 2 solutions below. ( In my earlier experiments I saw that this had a quite beneficial effect for hitting the optimal level - my intuition was that we get a better "loss surface" with the ensemble - but again ... 
a lot of "intuition" here given that there was no time to analyze all ideas I had and given that I am not the best programmer. )</p> <p>The final alignment for Level1 of my submission looked like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F449263%2Fa9e79adaf0b4f6653ff6741c0cce28ba%2FFinal_Ensemble_Lvl1_aligned.png?generation=1594061940630819&amp;alt=media" alt=""></p> <h3>6. ) What did not work</h3> <p>In the beginning I played around with DeepAR and some other ideas. Later on I tried to reconcile the different prediction levels with the R/hts implementation. After renting a 128GB instance on AWS and a sleepless night I came to the conclusion that this would not work for me :-) - I am pretty sure I messed up the residual calculation for MinT but my feeling is that it will not work anyway given that we have &gt;30k bottom level signals and just over 100 top level signals .... OLS/WLS did not work for me either.</p> <h3>7. ) What I would love to see</h3> <p>Solutions that did not use lgb at all. Just saw the 3rd place writeup a minute ago - looking forward to reading more on this one.</p> <h3>8. ) Credits</h3> <p>The kernel I used for all my features is: <a href="https://www.kaggle.com/kyakovlev/m5-simple-fe">https://www.kaggle.com/kyakovlev/m5-simple-fe</a> From the same author the starting point of my lgbm exploration was (many of you may have seen it already): <a href="https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic">https://www.kaggle.com/kyakovlev/m5-three-shades-of-dark-darker-magic</a> Another input to my lgbm exploration was: <a href="https://www.kaggle.com/ragnar123/simple-lgbm-groupkfold-cv">https://www.kaggle.com/ragnar123/simple-lgbm-groupkfold-cv</a></p> <p>The kernel that put me in touch with GluonTS (great framework!!) was this one: <a href="https://www.kaggle.com/steverab/m5-forecasting-competition-gluonts-template">https://www.kaggle.com/steverab/m5-forecasting-competition-gluonts-template</a> Even though I did not end up having success with DeepAR, getting to know GluonTS was key to my final path.</p> <p>The kernel that allowed me to rapidly test hypotheses and gave me visual insight into the results was this one (really love this one!!): <a href="https://www.kaggle.com/tnmasui/m5-wrmsse-evaluation-dashboard">https://www.kaggle.com/tnmasui/m5-wrmsse-evaluation-dashboard</a> </p> <p>My aggregation code was copied from this kernel: <a href="https://www.kaggle.com/lebroschar/aggregate-forecast-eda">https://www.kaggle.com/lebroschar/aggregate-forecast-eda</a></p> <p>You see that a lot of community input was used - thank you everyone also for the great discussions on the kernels.</p> <p>Thank you all for this amazing competition.</p>
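<p>The selection step referenced above can be sketched roughly as follows. This is an illustration under my own assumptions (candidate multipliers stored in a dict, a simple sign-change rule with a tie-break on absolute mean error), not the author's code; the write-up only states that the best fit was taken where the mean error between the aggregated bottom-level forecast and the N-Beats top-level forecast flips sign, and that the chosen solution was then ensembled with the two neighbouring multipliers on each side.</p> <p><code>
import numpy as np

def select_multiplier(nbeats_top_level, aggregated_by_multiplier):
    """aggregated_by_multiplier maps a loss multiplier to the bottom-level
    forecast aggregated up to the same top level as `nbeats_top_level`."""
    mults = sorted(aggregated_by_multiplier)
    mean_err = {m: float(np.mean(aggregated_by_multiplier[m] - nbeats_top_level)) for m in mults}
    # pick the neighbouring pair of multipliers where the mean error changes sign
    for lo, hi in zip(mults[:-1], mults[1:]):
        if np.sign(mean_err[lo]) != np.sign(mean_err[hi]):
            return lo if abs(mean_err[lo]) <= abs(mean_err[hi]) else hi
    # fallback: smallest absolute mean error
    return min(mults, key=lambda m: abs(mean_err[m]))
</code></p>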
Plant Pathology 2020 - FGVC7
[placeholder] 49th solution + 0.981 solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Well done, everyone! Thanks for the kernels and very interesting discussions. I'll post my best private score solution, which is 0.981 (around a top-10 score, I would say), and then my 49th place solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1696514%2F65231b0ddc924eb86ffcd47f4110f7d1%2FScreenshot%202020-05-27%20at%2011.18.56.png?generation=1590572282021536&amp;alt=media" alt=""></p> <h1>Best private score - 0.981</h1> <h2>Augmentation</h2> <p>ImageNetPolicy + HFlip</p> <h2>Backbone</h2> <p>Ensemble of efficientnet-b{2+3+4+5} with noisy-student pretraining, around 20 epochs</p> <h2>Optimizer</h2> <p>Ranger</p> <h2>Loss</h2> <p>BCEWithLogitsLoss</p> <h2>Image sizes</h2> <p>600x600</p> <h2>Other</h2> <p>TTA</p> <h1>49th place score - 0.975</h1> <h2>Augmentation</h2> <p>ImageNetPolicy + HFlip + PCAJitter</p> <h2>Backbone</h2> <p>Ensemble of efficientnet-b{2+3+4+5} with noisy-student pretraining, around 20 epochs</p> <h2>Optimizer</h2> <p>RAdam</p> <h2>Loss</h2> <p>BCEWithLogitsLoss</p> <h2>Image sizes</h2> <p>600x600</p> <h2>Other</h2> <p>TTA, smoothed results</p> <h1>Things I tried without improvement on my LB score</h1> <p>Cutmix, Mixup, weighted loss, focal loss, higher image size</p>
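<p>The write-up lists the configuration but no code. A hedged sketch of how such an ensemble could be assembled, assuming PyTorch and timm's noisy-student EfficientNet checkpoints; the author's actual framework, optimizer setup and training loop are not specified here:</p> <p><code>
import timm
import torch
import torch.nn as nn

# Assumed backbone names: timm's noisy-student EfficientNet checkpoints
BACKBONES = ["tf_efficientnet_b2_ns", "tf_efficientnet_b3_ns",
             "tf_efficientnet_b4_ns", "tf_efficientnet_b5_ns"]

def build_models(n_classes=4):
    """One classifier per noisy-student EfficientNet backbone."""
    return [timm.create_model(name, pretrained=True, num_classes=n_classes) for name in BACKBONES]

criterion = nn.BCEWithLogitsLoss()   # the loss named in the write-up

def ensemble_predict(models, images):
    """Average sigmoid outputs over the backbones (flip TTA would be added on top)."""
    with torch.no_grad():
        preds = [torch.sigmoid(m.eval()(images)) for m in models]
    return torch.stack(preds).mean(dim=0)
</code></p>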
University of Liverpool - Ion Switching
1st (or 16th) place short write up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone, Thanks particularly to Chris Deotte, Vitalii Mokin, Trigram. I'm sorry if I've forgotten anyone; we'll spend the next days upvoting things we've neglected. Thanks to the organizers and the kaggle team, this whole competition meant so much to us, and just getting to gold medal level was a great feeling. </p> <p>Concerning the leak, it's been well described already but there were other clues, and we certainly didn't think we had something new: <br> - The Matlab code has a vague comment about not working above 5. <br> - It isn't seeded with the clock. It's a "feature" of Matlab (not a bug) that the RNG by default produces the same chain on startup. This was probably the original source of the problem. <br> - There are visible beats in the 50Hz of the max10 set. <br> - You can actually see the correlation in the signals visually (we happened to be adding 5+5 to get some extra noise for simulation and checked .corr across everything. It looks like lots of people had the same idea here)</p> <p>The leak was entirely limited (as far as we know) to one max10 private set. Of course everything seems obvious once you know it, but we expected a small advantage, if any, and the big leaderboard shake that everyone was talking about. I didn't think it would be at all appropriate to "disclose" it in the last week.</p> <p>Take this as the first place rundown, or the 16th place rundown as you prefer. We will eventually provide more details, but I didn't plan on having to write anything up so soon. </p> <p>One large improvement came from 50Hz cleaning. We ended up averaging signal minus lgbm predictions over a few cycles after labeling them mod200. This nicely accounts for small drifts around 50Hz in both frequency and amplitude, and is incredibly simple (a real Fourier transform wasn't the right way to go for us).</p> <p>The shifts were also obviously important, as Rob pointed out recently. His fourth point also can't be emphasized enough. </p> <p>We spent a <em>lot</em> of time on a Bayesian inference model, although we didn't use it in the very end since the max10 set suddenly became max5, and Bayes was virtually identical to LGBM after good 50Hz cleaning (but free from any machine learning!). This required sufficient Monte Carlo to find state transition probabilities that we also used for data augmentation.</p> <p>Finally, the max25 set was initialized differently (the second "low" HMM). This turns out to be important for a Bayesian model, as we began with the initial state and updated backward with each signal point (although it's not a significant impact for F1).</p>
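<p>A hedged sketch of the 50Hz cleaning idea: at the 10 kHz sampling rate one mains cycle spans 200 samples, so the residual (signal minus model prediction) can be folded mod 200 and averaged over a block of cycles to estimate the pickup, which is then subtracted. The block size and the exact averaging scheme are my assumptions; the team's own code may differ:</p> <p><code>
import numpy as np

def clean_50hz(signal, model_pred, period=200, cycles_per_block=50):
    """Estimate the 50Hz pickup from the residual and subtract it block by block,
    so that slow drifts in amplitude and phase are tracked."""
    residual = signal - model_pred
    cleaned = signal.astype(float)
    block = period * cycles_per_block
    for start in range(0, len(signal) - block + 1, block):
        seg = residual[start:start + block].reshape(cycles_per_block, period)
        template = seg.mean(axis=0)                      # average shape of one mains cycle
        cleaned[start:start + block] -= np.tile(template, cycles_per_block)
    return cleaned                                       # any leftover tail is left untouched
</code></p>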
M5 Forecasting - Accuracy
[181th place] Silver medal at the lottery (no multiplier)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>About 3 months ago I invested quite some time in this competition, because I thought one could find a training procedure that would optimise the metric (WRMSSE). I tried to: <br> - build my own loss <br> - reweight training samples for W and S <br> - add all aggregation levels to the training set <br> None of the above improved my validation. </p> <p>Then I tried to do some feature engineering and tested against a 5-fold cross-validation + LB (4 + 1), but I soon realised that nothing worked to improve the WRMSSE on my CV, even though quite a few of my engineered features would improve RMSE. <br> At this point I decided I would not spend more time on this and I focused on Twitter, which gave me much more satisfaction. I decided that I would come back to this competition only in the last week to put together the final submission and play the lottery, and so I did.</p> <p>In the end my simple solution is based on the following: <br> - RMSE as loss <br> - around 20 features, mainly calendar (weekday, month, etc.), events, snap and rolling target mean <br> - predict sales * price and divide by price afterwards <br> - average of 5 folds (GroupKFold) trained on the last 3 years <br> - stitch together 28 models, one per day </p> <p>I prepared only 2 submissions using the updated file, in order to avoid regretting not choosing the right submission :) <br> My first submission was just a sanity check, using only the model at horizon=28 to verify that I did not screw up the submission file. After that I found a small bug, prepared the 28 models (one per day), and submitted. And guess what? The first submission (1 model only + bug) was worth 29th place. But again, no regrets, because I would never have chosen that one. </p> <p>Finally many thanks to the many people who inspired me a lot: <a href="/kyakovlev">@kyakovlev</a> , <a href="/sibmike">@sibmike</a> , <a href="/ragnar123">@ragnar123</a> </p> <p>P.S.: I did not use any multiplier.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F862519%2Fb1e79ad4a82b4581c426854bbdfe50cc%2FScreenshot%202020-07-01%20at%2003.21.23.png?generation=1593566728712969&amp;alt=media" alt=""></p>
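<p>A minimal sketch of the "predict sales * price and divide by price afterwards" trick combined with the 5-fold GroupKFold averaging; the feature matrix, grouping column and LightGBM parameters are placeholders, and repeating this per horizon gives the 28 stitched models:</p> <p><code>
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import GroupKFold

def train_one_horizon(X, unit_sales, price, groups, params=None):
    """Fit 5 GroupKFold LightGBM models on revenue (sales * price) for a single horizon.
    X, unit_sales and price are pandas objects aligned on the same rows."""
    params = params or {"objective": "rmse", "verbosity": -1}
    y = unit_sales * price                     # model revenue instead of raw units
    models = []
    for tr_idx, _ in GroupKFold(n_splits=5).split(X, y, groups):
        models.append(lgb.train(params, lgb.Dataset(X.iloc[tr_idx], y.iloc[tr_idx])))
    return models

def predict_units(models, X_test, price_test):
    """Average the folds, then divide by price to get back to unit sales."""
    revenue = np.mean([m.predict(X_test) for m in models], axis=0)
    return revenue / price_test
</code></p>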
M5 Forecasting - Uncertainty
7th Placed Solution - seq2seq LSTM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I am very surprised that I could get 7th place &amp; 1st place among students in this M5 forecasting competition. This is my first year participating in Kaggle competitions, and the first time I have ranked here. I am more than happy to share my solution here (although it may be a bit too late).</p> <h2>Model Summary</h2> <p>In this competition, I first focused on the Accuracy competition (i.e. producing the mean forecast), and then leveraged the model built for the Accuracy competition to produce the uncertainty estimates (i.e. quantile predictions).</p> <p>I built two seq2seq LSTM models to produce the mean forecast for the next 28D. One forecasts at the store-item level, the other at the dept-store level. The final accuracy forecast is a simple average of both models. The uncertainty estimate is built using the estimated residuals of both models, while assuming the forecast residuals are i.i.d. normal. (A small sketch of this step is given at the end of this post.)</p> <h2>Takeaway</h2> <p>I think the key takeaway in this competition is the importance of a reliable local validation scheme, together with model ensembling, which allowed my model to deliver stable performance both locally and on the public / private LB. In the early stage of the competition, when people were optimizing for the public LB, my model's public LB performance was actually not comparable to the top solutions. However, the key benefit of my solution is that it delivers stable performance both locally and on the public / private LB.</p> <p>The WRMSSE / WSPL metric can be volatile given only a 28D validation window (i.e. the public LB), because it is equally weighted across all aggregate levels (meaning that the top aggregate levels' forecasts, even with few observations, already contribute a huge proportion of the loss). It would potentially be overfitting to focus too much on optimizing for the public LB without a reliable local validation scheme.</p> <h2>Model Architecture</h2> <p>The seq2seq LSTM model takes an input window of 28D and an output window of 28D. Basically it is similar to an encoder-decoder model for machine translation, except that I add some exogenous features to the model and avoid using any forecast information in the decoder input (i.e. it is trained &amp; forecasted unconditionally). The rationale behind training unconditionally is to ensure that the model performs similarly at inference time as at training time. In the following graph, y denotes the time series (i.e. sales), z denotes the exogenous features (e.g. weekday, holiday event, is snap, item id), h denotes the hidden state (as well as cell state).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2272940%2Fb8835694a5168bdd3d5ac5d8ac8b59c6%2Fmodel%20architecture.png?generation=1595125709755384&amp;alt=media" alt=""></p> <h2>Features</h2> <p>I do not use any special features. Basically the features are calendar features (e.g. holiday event, holiday type, weekday), id features (item-id, dept-id, store-id) and price features. By SHAP value analysis, the id features and weekday are significant to the model performance. The categorical features with high cardinality are handled using embedding layers.</p> <p>Some important feature processing steps are: <br> 1. Re-scale each time series into [0, 1] using its max. <br> 2. Train the model to forecast the increment of the sales value since the first day of the input window (i.e. remove trend).</p> <h2>Training Method</h2> <p>The LSTM models are trained using the Adam algorithm. The dept-store model is trained with a weighted MSE loss. The store-item model is trained with an unweighted zero-inflated Poisson loss (to deal with cases with many 0s).</p> <h3>Improving Model Stability</h3> <p>During training, an issue of model instability is observed, similar to the observations <a href="https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/43795">here</a>. The following graph shows how the WRMSSE metric (used as the accuracy measure) changes across training epochs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2272940%2Fb6609e390dfd94cb8eda7fe9d162913f%2FTraining%20instability.png?generation=1595126453982905&amp;alt=media" alt=""></p> <p>To resolve the instability issue, each seq2seq LSTM model actually consists of models trained at 20 different checkpoints (i.e. training epochs) and is a simple average of the 20 checkpoint models. The submission model is trained blindly without any early stopping, and is observed to have stable performance.</p> <h2>Hierarchical Forecast</h2> <p>The dept-store level forecast is first converted into a store-item level forecast using the 28D historical average of the ratios. Then both models' forecasts are simply averaged and converted to forecasts on all aggregate levels with a bottom-up approach.</p> <h2>CV Scheme</h2> <p>My local validation scheme consists of 4 folds of data, each of 28D. Particularly: <br> Fold 1: (training) [train st, d1829] --&gt; (validation) [d1830, d1857] <br> Fold 2: (training) [train st, d1857] --&gt; (validation) [d1858, d1885] <br> Fold 3: (training) [train st, d1885] --&gt; (validation) [d1886, d1913] <br> Fold 4: (training) [train st, d1913] --&gt; (validation) [d1914, d1941] <br> where train st is 2.5 yrs before the end of the training data when training the store-item model, and is 0 when training the dept-store model (i.e. training on the full dataset). The model is trained only once, on fold 1, so as to speed up the local validation process. It is observed that the score in each fold has large variance, indicating that it may be unreliable / overfitting to optimize on any 28D of data (i.e. the public LB). Therefore, I focus on the local CV score, which also aligns closely with both the public LB and private LB scores.</p> <p>Thanks a lot to the host, who gave us a chance to join such a rewarding competition. I learnt a lot from this competition. Also special thanks to <a href="/kyakovlev">@kyakovlev</a>, who built a lot of great discussion threads &amp; notebooks (which were super useful for a Kaggle newbie like me).</p>
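<p>A small sketch of the uncertainty step referenced above, under the stated assumption that the forecast residuals are i.i.d. normal: estimate a residual standard deviation from local validation and shift the point forecast by the corresponding normal quantiles (the nine competition quantiles). Pooling all residuals into a single sigma per series is my simplification of the description:</p> <p><code>
import numpy as np
from scipy.stats import norm

M5_QUANTILES = (0.005, 0.025, 0.165, 0.25, 0.5, 0.75, 0.835, 0.975, 0.995)

def quantile_forecast(mean_forecast, val_residuals, quantiles=M5_QUANTILES):
    """mean_forecast: (horizon,) point forecast; val_residuals: residuals from local validation."""
    sigma = np.std(val_residuals, ddof=1)
    return {q: mean_forecast + norm.ppf(q) * sigma for q in quantiles}
</code></p>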
University of Liverpool - Ion Switching
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>UPDATE: All solution code is now available on my GitHub: <a href="https://github.com/stdereka/liverpool-ion-switching">https://github.com/stdereka/liverpool-ion-switching</a></p> <p><strong>Introduction</strong></p> <p>Before I start describing my approach, I would like to give many thanks to the following kagglers: 1. <a href="/eunholee">@eunholee</a> for discovering the most effective way of removing drift, explained in this <a href="https://www.kaggle.com/eunholee/remove-drift-using-a-sine-function">kernel</a>. 2. <a href="/friedchips">@friedchips</a> for diving really deep into the Markovian nature of the data. His posts and kernels spurred me to learn more about HMM (it didn't help me in the last weeks of the competition, but I experienced true pleasure learning the subject). 3. <a href="/kakoimasataka">@kakoimasataka</a> for attracting my attention to the noise component of the data. His <a href="https://www.kaggle.com/kakoimasataka/remove-pick-up-electric-noise">kernel</a> allowed me to understand how I should treat the noise in my preprocessing.</p> <p>There are several things I realized at an early stage: 1. Preprocessing and data augmentations contribute more to the final score than model tuning. One of the early breakthroughs of this competition was drift removal. 2. In the 0.935+ zone even a seemingly negligible improvement of the score (5 or 6 digits after the decimal point, CV or LB) is important and should be taken into account.</p> <p><strong>Preprocessing and data leak</strong></p> <p>I described the process of data cleaning and creating new data <a href="https://www.kaggle.com/stdereka/2nd-place-solution-preprocessing-tricks">here</a>. Unfortunately, the private part of the test dataset is corrupted by a leak. It has already been revealed by another team in this <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153824">post</a>, so I am not going to describe it here. My personal history with this leakage is as follows. With my way of creating new data, I faced this leak in my local CV and had to fight with it (I explain how in the modeling section) in order to get a reliable CV score. Less than two days before the end of the competition I checked the test data for this kind of leakage. The result shocked me: I could never have imagined that I would find it in the private part!</p> <p><strong>Modeling</strong></p> <p>As a baseline for my final model I used a Wavenet-based architecture, which was initially posted in this <a href="https://www.kaggle.com/siavrez/wavenet-keras">kernel</a>. The following ideas worked for me: 1. Early stopping, learning rate scheduler. My model's convergence behavior was slightly different for different folds, so these simple tools helped me to control overfitting. 2. Data augmentation. I used flipping the signal and adding a random shift to the signal (a sketch of these augmentations is given at the end of this post). Experiments showed that model predictions are extremely sensitive to a shift, and this augmentation helped the model to become more generalisable. 3. RFC probabilities as features for NN. Initially proposed in this <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/144645">post</a>, the effectiveness of this stacking was confirmed in my experiments. 4. CV strategy. As I trained on different augmented datasets (with different sizes and target distributions), I had to validate on a sub-dataset common to all synthetic datasets. To do so I chose the original non-synthetic data. The OOF score obtained on this subset was used to compare models trained on possibly different data. The original train data is not corrupted by the leak mentioned above, so it was safe, and as I learned after the competition deadline, it was correlated with the private LB. 5. One model for all groups of data. In the case of NN-based models, it was not necessary to train different models for different groups. Moreover, I observed some sort of synergy between the groups: separate models had lower CV than one general model.</p> <p><strong>Final submissions</strong></p> <ol> <li><p>My first submission is a blend of two models, trained with slightly different scalers and augmentations. No leak is exploited here; this submission was planned before I found the leakage. CV 0.94359, public 0.94664, private LB 0.94529.</p></li> <li><p>The second one differs from the first only in the 7th batch of the test data. To predict this batch I trained a separate model on the reduced signal and then added the subtracted channels to its prediction.</p></li> </ol> <p><strong>Conclusions</strong></p> <p>I have mixed feelings about this competition.</p> <p>On the one hand, I haven't built a model that can handle real-world ion switching data. The other solutions published so far are also barely applicable to data different from the competition dataset. For example, the drift problem hasn't been solved and all top-scoring models are trained on the clean data. The leak I found in the last days of the competition leaves no confidence in the data. I realized that this kind of leakage can be implicitly hidden in the rest of the data, so complex models can exploit it automatically.</p> <p>On the other hand, I worked hard for more than a month and a half and enjoyed improving my models; it was really fun and cool. Concerning the leak, I think that the task of searching for vulnerabilities in the data is valuable in itself. In the long run it can help to build reliable models. I cannot be sure, but probably the knowledge of possible leaks in ion switching data will help electrophysiologists to improve their existing models.</p> <p>In any case, I am grateful to the organizers and participants of the competition for an unforgettable experience.</p> <p><strong>P.S. I am not a native English speaker, so if you find any mistake, please, let me know. If you have any questions and need a clarification on some details, don't hesitate to ask. I'd also appreciate any feedback (positive or negative) you leave in the comments.</strong></p>
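<p>A sketch of the two signal augmentations listed in the modeling section, under my reading that "flipping" means reversing the segment along the time axis (with the channel labels reversed accordingly) and "shift" means adding a small constant offset to the signal values; the probabilities and shift range are placeholders, and the full implementation is in the author's GitHub repository:</p> <p><code>
import numpy as np

def augment(signal, channels, max_shift=0.1, p=0.5):
    """Random time reversal and constant offset for one training segment."""
    sig, ch = np.asarray(signal).copy(), np.asarray(channels).copy()
    if np.random.rand() < p:                 # flip: reverse along the time axis
        sig, ch = sig[::-1].copy(), ch[::-1].copy()
    if np.random.rand() < p:                 # shift: add a small constant offset
        sig = sig + np.random.uniform(-max_shift, max_shift)
    return sig, ch
</code></p>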