The Naïve Bayes algorithm is a supervised learning algorithm. Real-world data can be huge and high-dimensional, and more often than not the target variable has more than two classes. Naïve Bayes is a simple yet powerful algorithm that works well on multi-class problems with fast performance. It combines the results of applying Bayes' theorem to each feature to classify a target variable.

The name Naïve Bayes combines two words, Naïve and Bayes, which can be described as:

  • Naïve: it assumes that the features are independent of one another while predicting…
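To make the independence assumption concrete, here is a minimal sketch of a categorical Naïve Bayes classifier in plain Python. The toy weather data and the function names are invented purely for illustration:

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Estimate class priors and per-feature value counts."""
    priors = {c: n / len(y) for c, n in Counter(y).items()}
    cond = defaultdict(Counter)   # (feature index, class) -> value counts
    vocab = defaultdict(set)      # feature index -> distinct values seen
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            cond[(j, yi)][v] += 1
            vocab[j].add(v)
    return priors, cond, vocab

def predict_nb(model, x):
    """Multiply the prior by each feature's smoothed likelihood --
    the 'naïve' step treats the features as independent."""
    priors, cond, vocab = model
    best, best_p = None, -1.0
    for c, prior in priors.items():
        p = prior
        for j, v in enumerate(x):
            total = sum(cond[(j, c)].values())
            p *= (cond[(j, c)][v] + 1) / (total + len(vocab[j]))  # Laplace smoothing
        if p > best_p:
            best, best_p = c, p
    return best

# Toy weather data: (outlook, windy) -> play?
X = [("sunny", "no"), ("sunny", "yes"), ("rainy", "yes"),
     ("rainy", "no"), ("overcast", "no")]
y = ["yes", "no", "no", "yes", "yes"]
model = train_nb(X, y)
print(predict_nb(model, ("sunny", "no")))  # → yes
```

Each feature contributes one likelihood term, and the class with the largest product of prior and likelihoods wins — which is why the method stays fast even with many classes.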

In Machine Learning, real-world problems often involve data sets with more than 20 features, i.e. high-dimensional data. To check the pairwise correlations between them, we would have to visualize 20C2 = 190 2-D scatter plots! That is a lot to visualize, and on top of that, most of them will not be informative. Clearly, when we have many features it becomes unwieldy to analyze them and understand their relations. …
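The count of pairwise plots follows directly from the combination formula nC2 = n(n−1)/2; a quick check in Python (nothing beyond the standard library):

```python
import math

# Number of distinct 2-D scatter plots from 20 features:
# choose 2 features out of 20, order irrelevant -> C(20, 2)
pairs = math.comb(20, 2)
print(pairs)  # → 190
```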


Learning decisions that make the difference

A learning agent playing Atari games (Space Invaders and Breakout) using Reinforcement Learning

Introduction

Designing machines that learn to do a job by themselves has become one of the most researched topics in recent times, for reasons such as increased computational power and the availability of resources to experiment. This has led to significant innovations that have made life simpler. Given just data, an algorithm can provide insights; a trained model can recognize your face; and many other use cases we see around us are built using Machine Learning and Deep Learning. …



Nature is so generous that we depend on it for every little thing in life, and it never ceases to inspire us over time. Early man saw fire erupt from friction, so he rubbed stones together to create fire. Every organism in nature is unique, and its ability to adapt its capabilities to the situation is a striking phenomenon we can draw motivation from. A very popular example: the Wright brothers closely observed pigeons and bats in flight, which sparked the idea of designing aircraft with wings.


Techniques to devise personalized strategies using statistical models

Left Image: https://pixabay.com/illustrations/review-hand-star-human-online-4390160/

Introduction

Customer churn occurs when customers or subscribers discontinue their association with a company or service. There are many Machine Learning models to predict whether a customer is going to churn. But the problem doesn't stop there: businesses have to deploy strategies to retain customers who are on the verge of churning, because it is five times cheaper to retain an existing customer than to acquire a new one. Statistical models can be used to derive and evaluate personalized strategies, which is a core challenge for CPG companies.

We call the event of customer churn a failure, and survival…
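One standard way to estimate the survival function in this failure/censoring framing is the Kaplan–Meier method. Here is a minimal pure-Python sketch; the toy durations and the function name are invented for illustration, and a production analysis would use a library such as lifelines:

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve. events[i] = 1 if churn ("failure")
    was observed at durations[i], 0 if the customer was censored
    (still active when observation ended)."""
    surv, curve = 1.0, []
    for t in sorted(set(durations)):
        d = sum(1 for ti, e in zip(durations, events) if ti == t and e == 1)
        n = sum(1 for ti in durations if ti >= t)   # customers still at risk
        if d:
            surv *= 1 - d / n
        curve.append((t, surv))
    return curve

# Toy data: months until churn; event = 0 marks still-active customers
curve = kaplan_meier([3, 5, 5, 8, 10], [1, 1, 0, 1, 0])
print(curve)  # survival probability drops at each observed churn time
```

Censored customers still count toward the at-risk denominator until their observation ends, which is what distinguishes this from a naïve churn-rate calculation.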


Given a problem statement to categorize a set of data into classes, we can resort to algorithms like logistic regression, decision trees, boosting techniques, etc. There is one more interesting and intuitive concept that helps in classification: support vector machines. To understand SVMs we must have a clear idea of the hyperplane, the margin, and the kernel. Here is my attempt to help you understand these terms :)

Hyperplane:

Assume you have a 2D space with some data points as shown, and a line (ax + by + c = 0) that divides this space into two parts.

fig-1

Similarly, for data in a 3D space, a 2D…
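The "which side of the line" idea can be made concrete with the signed point-to-line distance, which is also what the margin is built from. A small sketch (the example line and function name are my own):

```python
import math

def signed_distance(a, b, c, point):
    """Perpendicular distance from a point to the line ax + by + c = 0;
    the sign tells which side of the hyperplane the point falls on."""
    x, y = point
    return (a * x + b * y + c) / math.hypot(a, b)

# The line x + y - 4 = 0 splits the 2-D plane into two half-planes
print(signed_distance(1, 1, -4, (1, 1)))  # negative -> one side
print(signed_distance(1, 1, -4, (3, 3)))  # positive -> the other
```

An SVM chooses the separating hyperplane whose smallest such distance to any training point (the margin) is as large as possible.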


Whenever we deal with trees in Machine Learning, bagging and boosting are two commonly heard terms. They are methods of ensemble modelling. The idea is similar to dividing a big task into numerous small tasks and aggregating their results to achieve the desired outcome.

Have you ever faced the issue of overfitting in decision trees? We can try changing parameters such as max_depth, but in some cases it doesn't help much. Recall that with decision trees there is a high probability of the model performing excellently on training data and poorly on test…
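Bagging attacks exactly this variance problem: train many models on bootstrap resamples and let them vote. Here is a minimal sketch using a toy one-feature "stump" as the base learner; the data, seed, and function names are invented for demonstration, and a real pipeline would use something like sklearn's BaggingClassifier or RandomForestClassifier:

```python
import random
from collections import Counter

def bootstrap(X, y, rng):
    """Draw n points with replacement -- one bootstrap replicate."""
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    return [X[i] for i in idx], [y[i] for i in idx]

def train_stump(X, y):
    """Toy base learner: a one-feature threshold rule (x > t -> class 1)."""
    best_t, best_acc = None, -1
    for t in X:
        acc = sum((x > t) == (label == 1) for x, label in zip(X, y))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_predict(stumps, x):
    """Majority vote across all bootstrapped stumps (bagging)."""
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

# Toy 1-D data: class 0 below 3.5, class 1 above it
X = [1, 2, 3, 4, 5, 6]
y = [0, 0, 0, 1, 1, 1]
rng = random.Random(0)
stumps = [train_stump(*bootstrap(X, y, rng)) for _ in range(25)]
print(bagged_predict(stumps, 5.5), bagged_predict(stumps, 0.5))
```

Each individual stump can be thrown off by its resample, but the averaged vote is far more stable — the essence of why bagging reduces overfitting.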


Have you ever had difficulty deciding what to do in a situation? As human beings we have this amazing (mostly defective 😛) habit of making a decision in a split second without thinking. But suppose you start thinking about the outcome; then your thought process would be to analyse the situation and draw insights before deciding. Now think about a machine: it too can make a decision in a split second, but by thinking and analyzing. Let us understand how Machine Learning finds its application in decision making.

Fig 1: A simple decision tree to determine whether a patient has recovered from a disease (a classification tree)

Predictor space (the whole set of data points of the…
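A classification tree carves the predictor space into regions by repeatedly choosing the split that leaves the purest children, typically measured by Gini impurity. A minimal sketch of that splitting step (the toy patient data and function names are invented for illustration):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: how mixed a region of the predictor space is."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Try each threshold on a single feature and pick the split that
    minimises the weighted impurity of the two resulting regions."""
    best_t, best_g = None, float("inf")
    for t in sorted(set(X))[:-1]:
        left = [yi for xi, yi in zip(X, y) if xi <= t]
        right = [yi for xi, yi in zip(X, y) if xi > t]
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# Toy data: days of fever vs. recovered / sick
days = [1, 2, 3, 10, 11, 12]
state = ["recovered"] * 3 + ["sick"] * 3
print(best_split(days, state))  # → (3, 0.0): a perfectly pure split
```

A full tree simply applies this search recursively to each resulting region until the regions are pure or a stopping rule such as max_depth kicks in.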


This blog is dedicated entirely to the crucial metrics used in classification problems. You might have come across problem statements where we have to use metrics other than the well-known 'accuracy' score. Let us try to understand the confusion matrix, accuracy, recall, precision, F1 score, the ROC-AUC curve, and their usage.

fig. 1: Metrics in a nutshell

The accuracy score is widely used for evaluating models when type I and type II errors are not a concern, or when the data set is balanced. But for problems like cancer detection or customer churn, where the data is imbalanced, the focus will mainly be on false positives…
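A tiny imbalanced example makes the gap between accuracy and the other metrics concrete. This sketch computes the counts by hand (the toy labels are invented; sklearn.metrics provides the same quantities for real work):

```python
def confusion(y_true, y_pred, positive=1):
    """Return (TP, FP, FN, TN) counts for a binary problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Imbalanced toy data: 8 negatives, 2 positives; one positive is missed
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [1, 0]
tp, fp, fn, tn = confusion(y_true, y_pred)
accuracy  = (tp + tn) / len(y_true)    # 0.9 -- looks great
precision = tp / (tp + fp)             # 1.0
recall    = tp / (tp + fn)             # 0.5 -- half the positives missed
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, round(f1, 3))
```

The model scores 90% accuracy while missing half of the positive cases — exactly the failure mode that makes recall and F1 the better lenses for imbalanced problems.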

Dharani J

Data Analyst | Data Science Enthusiast | ML Blogger
