{"id":22510,"date":"2020-12-17T10:44:09","date_gmt":"2020-12-17T10:44:09","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/most-useful-techniques-handle-imbalanced-datasets\/"},"modified":"2023-09-18T17:59:15","modified_gmt":"2023-09-18T17:59:15","slug":"most-useful-techniques-handle-imbalanced-datasets","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/most-useful-techniques-handle-imbalanced-datasets\/","title":{"rendered":"The 5 Most Useful Techniques To Handle Imbalanced Datasets"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"22510\" class=\"elementor elementor-22510\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-e165fce elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"e165fce\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-63ffb9c\" data-id=\"63ffb9c\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-1c1da42 elementor-widget elementor-widget-text-editor\" data-id=\"1c1da42\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Have you ever faced an issue where you have such a small sample for the positive class in your dataset that the model is unable to learn?<\/p><p><em><strong>In such cases, you get a pretty high accuracy just by predicting the majority class, but you fail to capture the minority class, which is most often the point of creating the model in the first place.<\/strong><\/em><\/p>\n<p>Such datasets are a pretty common occurrence and are called as an imbalanced dataset.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4381c70 elementor-widget elementor-widget-text-editor\" data-id=\"4381c70\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>Imbalanced datasets are a special case for classification problem where the class distribution is not uniform among the classes. 
Typically, they are composed by two classes: The majority (negative) class and the minority (positive) class<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9aeb180 elementor-widget elementor-widget-heading\" data-id=\"9aeb180\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Imbalanced datasets can be found for different use cases in various domains:<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b71f64d elementor-widget elementor-widget-text-editor\" data-id=\"b71f64d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<ul><li><strong>Finance<\/strong>: Fraud detection datasets commonly have a fraud rate of ~1\u20132%<\/li><li><strong>Ad Serving<\/strong>: Click prediction datasets also don\u2019t have a high clickthrough rate.<\/li><li><strong>Transportation<\/strong>\/<strong>Airline<\/strong>: Will Airplane failure occur?<\/li><li><strong>Medical<\/strong>: Does a patient has cancer?<\/li><li><strong>Content moderation<\/strong>: Does a post contain NSFW content?<\/li><\/ul>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-28cd080 elementor-widget elementor-widget-text-editor\" data-id=\"28cd080\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>So how do we solve such problems?<\/p>\n\n<p><em><strong>This post is about explaining the various techniques you can use to handle imbalanced datasets.<\/strong><\/em><\/p>\n<hr class=\"wp-block-separator\"\/>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-906c3a1 elementor-widget elementor-widget-heading\" data-id=\"906c3a1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">1. 
## 1. Random Undersampling and Oversampling

![Under-sampling and over-sampling](https://www.experfy.com/blog/wp-content/uploads/2021/05/0-3.png)
*[Source](https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets#t1)*

A widely adopted and perhaps the most straightforward method for dealing with highly imbalanced datasets is called resampling. It consists of removing samples from the majority class (under-sampling) and/or adding more examples from the minority class (over-sampling).

Let us first create some example imbalanced data.

```python
import pandas as pd
from sklearn.datasets import make_classification

# 100 samples, two classes with a 90:10 split
X, y = make_classification(
    n_classes=2, class_sep=1.5, weights=[0.9, 0.1],
    n_informative=3, n_redundant=1, flip_y=0,
    n_features=20, n_clusters_per_class=1,
    n_samples=100, random_state=10
)

X = pd.DataFrame(X)
X['target'] = y
```

We can now do random oversampling and undersampling using:

```python
num_0 = len(X[X['target'] == 0])
num_1 = len(X[X['target'] == 1])
print(num_0, num_1)

# Random undersample: shrink the majority class down to the minority size
undersampled_data = pd.concat([X[X['target'] == 0].sample(num_1), X[X['target'] == 1]])
print(len(undersampled_data))

# Random oversample: sample the minority class with replacement up to the majority size
oversampled_data = pd.concat([X[X['target'] == 0], X[X['target'] == 1].sample(num_0, replace=True)])
print(len(oversampled_data))
```

```
OUTPUT:
90 10
20
180
```
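In practice you would typically resample only the training split, so the held-out data keeps the true class distribution. A minimal sketch of that workflow, reusing the `X` built above (the split parameters are illustrative):

```python
from sklearn.model_selection import train_test_split

# Stratify so both splits preserve the original 90:10 imbalance
train, test = train_test_split(X, test_size=0.2, stratify=X['target'], random_state=10)

# Undersample the majority class in the training split only
n_minority = len(train[train['target'] == 1])
train_balanced = pd.concat([
    train[train['target'] == 0].sample(n_minority, random_state=10),
    train[train['target'] == 1],
])
print(train_balanced['target'].value_counts())  # balanced train, untouched test
```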
## 2. Undersampling and Oversampling using imbalanced-learn

imbalanced-learn (`imblearn`) is a Python package to tackle the curse of imbalanced datasets.

It provides a variety of methods to undersample and oversample.

### a. Undersampling using Tomek Links:

One such method it provides is called Tomek links. Tomek links are pairs of examples of opposite classes in close vicinity.

In this algorithm, we end up removing the majority element from the Tomek link, which provides a better decision boundary for a classifier.

![Tomek links illustration](https://www.experfy.com/blog/wp-content/uploads/2020/12/1.png)
*[Source](https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets#t1)*

```python
from imblearn.under_sampling import TomekLinks

# Keep features and target separate; X was given a 'target' column above
X_features = X.drop('target', axis=1)

# Remove majority-class samples that form Tomek links.
# (Recent imblearn versions use sampling_strategy/fit_resample in place of
# the older ratio/fit_sample, and expose the kept indices via sample_indices_.)
tl = TomekLinks(sampling_strategy='majority')
X_tl, y_tl = tl.fit_resample(X_features, y)
id_tl = tl.sample_indices_
```

### b. Oversampling using SMOTE:

In SMOTE (Synthetic Minority Oversampling Technique) we synthesize elements for the minority class, in the vicinity of already existing elements.

![SMOTE illustration](https://www.experfy.com/blog/wp-content/uploads/2020/12/2.png)
*[Source](https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets#t1)*

```python
from imblearn.over_sampling import SMOTE

smote = SMOTE(sampling_strategy='minority')
X_sm, y_sm = smote.fit_resample(X_features, y)
```

There are a variety of other methods in the [imblearn](https://github.com/scikit-learn-contrib/imbalanced-learn#id3) package for both undersampling (ClusterCentroids, NearMiss, etc.) and oversampling (ADASYN and borderline-SMOTE) that you can check out.
## 3. Class weights in the models

![Class weights](https://www.experfy.com/blog/wp-content/uploads/2020/12/3.jpg)

Most [machine learning models](https://www.experfy.com/blog/ai-ml/user-agent-strings-parsing-ml-models/) provide a parameter called `class_weight`. For example, in a random forest classifier, using `class_weight` we can specify a higher weight for the minority class through a dictionary.

```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(class_weight={0: 1, 1: 10})
```

#### But what happens exactly in the background?

In logistic regression, we calculate the loss per example using binary cross-entropy:

```
Loss = -y*log(p) - (1-y)*log(1-p)
```

In this particular form, we give equal weight to both the positive and the negative classes. When we set `class_weight = {0:1, 1:20}`, the classifier in the background tries to minimize:

```
NewLoss = -20*y*log(p) - 1*(1-y)*log(1-p)
```

#### So what happens exactly here?

- If our model gives a probability of 0.3 and we misclassify a positive example, NewLoss acquires a value of -20*log(0.3) = 10.45.
- If our model gives a probability of 0.7 and we misclassify a negative example, NewLoss acquires a value of -log(0.3) = 0.52.

That means we penalize our model around twenty times more when it misclassifies a positive minority example in this case.
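These numbers are easy to check (a two-line verification; the post's figures come out with base-10 logarithms, and the twenty-fold scaling argument holds in any base):

```python
import math

print(-20 * math.log10(0.3))  # misclassified positive: ~10.46
print(-1 * math.log10(0.3))   # misclassified negative: ~0.52
```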
#### How can we compute class_weights?

*There is no one method to do this, and this should be constructed as a hyperparameter search problem for your particular problem.*

But if you want to get class_weights using the distribution of the y variable, you can use the following nifty utility from sklearn.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# 'balanced' weights classes inversely proportionally to their frequencies
class_weights = compute_class_weight('balanced', classes=np.unique(y), y=y)
```
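A small usage sketch (not spelled out in the original): pair the computed weights with their class labels so they can be passed straight to a model.

```python
from sklearn.linear_model import LogisticRegression

# For the 90:10 data above this gives roughly {0: 0.56, 1: 5.0}
weight_dict = dict(zip(np.unique(y), class_weights))
clf = LogisticRegression(class_weight=weight_dict).fit(X_features, y)
```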
## 4. Change your Evaluation Metric

![Evaluation metrics](https://www.experfy.com/blog/wp-content/uploads/2021/05/4-1024x576.png)

Choosing the right evaluation metric is pretty essential whenever we work with imbalanced datasets. Generally, in such cases, the F1 score is what I want as my [**evaluation metric**](https://towardsdatascience.com/the-5-classification-evaluation-metrics-you-must-know-aa97784ff226).

The F1 score is a number between 0 and 1 and is the harmonic mean of precision and recall:

```
F1 = 2 * (precision * recall) / (precision + recall)
```

#### So how does it help?

Let us start with a binary prediction problem. **We are predicting if an asteroid will hit the earth or not.**

So we create a model that predicts "No" for the whole training set.

#### What is the accuracy (normally the most used evaluation metric)?

It is more than 99%, and so according to accuracy, this model is pretty good, but it is worthless.

#### Now, what is the F1 score?

Our precision here is 0. What is the recall of our positive class? It is zero. And hence the F1 score is also 0.

And thus we get to know that a classifier that has an accuracy of 99% is worthless for our case. And hence it solves our problem.

![Precision and recall](https://www.experfy.com/blog/wp-content/uploads/2020/12/6.png)

Simply stated, the **F1 score maintains a balance between the precision and recall of your classifier**. If your precision is low, the F1 is low, and if the recall is low, again your F1 score is low.

> If you are a police inspector and you want to catch criminals, you want to be sure that the person you catch is a criminal (precision), and you also want to capture as many criminals (recall) as possible. The F1 score manages this tradeoff.

### How to Use?

You can calculate the F1 score for binary prediction problems using:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 0, 1, 0, 0, 1]
f1_score(y_true, y_pred)
```

This is one of the functions I use to get the best threshold for maximizing the F1 score for binary predictions. The function below iterates through possible threshold values to find the one that gives the best F1 score.

```python
import numpy as np
from sklearn.metrics import f1_score

# y_pred is an array of predicted probabilities
def best_threshold(y_true, y_pred):
    best_thresh = None
    best_score = 0
    # Scan thresholds from 0.10 to 0.50 in steps of 0.01
    for thresh in np.arange(0.1, 0.501, 0.01):
        score = f1_score(y_true, np.array(y_pred) > thresh)
        if score > best_score:
            best_thresh = thresh
            best_score = score
    return best_score, best_thresh
```
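A hypothetical usage, with made-up probabilities for illustration:

```python
y_true = [0, 1, 1, 0, 1, 1]
y_prob = [0.2, 0.4, 0.8, 0.1, 0.35, 0.9]  # predicted P(class=1)

score, thresh = best_threshold(y_true, y_prob)
print(score, thresh)  # best F1 and the threshold that achieves it
```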
## 5. Miscellaneous

![Miscellaneous techniques](https://www.experfy.com/blog/wp-content/uploads/2021/05/7-1024x768.png)

Various other methods might work depending on your use case and the problem you are trying to solve:

### a) Collect more Data

This is a definite thing you should try if you can. Getting more data with more positive examples is going to help your models get a more varied perspective of both the majority and minority classes.
### b) Treat the problem as anomaly detection

You might want to treat your classification problem as an anomaly detection problem.

**Anomaly detection** is the identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data.

You can use isolation forests or autoencoders for anomaly detection (a minimal isolation-forest sketch appears at the end of this section).

### c) Model-based

Some models are particularly suited for imbalanced datasets.

For example, in boosting models, we give more weight to the cases that get misclassified in each tree iteration.
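A minimal sketch of that boosting idea using scikit-learn's AdaBoost, which reweights misclassified samples after every boosting round (the parameters are illustrative):

```python
from sklearn.ensemble import AdaBoostClassifier

# Misclassified (often minority-class) samples get larger weights each round
boost = AdaBoostClassifier(n_estimators=100, random_state=10)
boost.fit(X_features, y)
```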
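And for the anomaly-detection route in (b), a minimal isolation-forest sketch; setting `contamination` to the known minority fraction is an assumption you would tune in practice:

```python
from sklearn.ensemble import IsolationForest

# Unsupervised: the model never sees the labels
iso = IsolationForest(contamination=0.1, random_state=10)
iso.fit(X_features)

# predict() returns +1 for inliers and -1 for anomalies (the rare class)
is_anomaly = iso.predict(X_features) == -1
```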
## Conclusion

There is no one size fits all when working with imbalanced datasets. You will have to try multiple things based on your problem.

In this post, I talked about the usual suspects that come to my mind whenever I face such a problem.