`K-Means Clustering`

K-means clustering is a type of unsupervised learning, used when you have unlabeled data (i.e., data without defined categories or groups). The goal of the algorithm is to find groups in the data, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of K groups based on the features provided, so data points are clustered by feature similarity.

**Put simply, the algorithm classifies a given data set into a number of clusters, denoted by the letter “k,” which is fixed beforehand. The cluster centers are positioned as points, all observations or data points are associated with the nearest center, the centers are then recomputed and adjusted, and the process starts over with the new positions until the assignments no longer change.**

To process the learning data, the K-means algorithm starts with a first group of randomly selected centroids, which are used as the starting points for every cluster, and then performs iterative (repetitive) calculations to optimize the positions of the centroids.
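To make that loop concrete, here is a minimal NumPy sketch of the procedure (a from-scratch illustration, not scikit-learn's implementation; the function name is ours, and it assumes Euclidean distance and that no cluster ends up empty):

```
import numpy as np

def kmeans_sketch(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k randomly selected data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: attach each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # centroids stopped moving: converged
        centroids = new_centroids
    return centroids, labels
```

Common applications of this kind of clustering include: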

- Behavioral segmentation:
  - Segment by purchase history
  - Segment by activities on an application, website, or platform
  - Define personas based on interests
  - Create profiles based on activity monitoring
- Inventory categorization:
  - Group inventory by sales activity
  - Group inventory by manufacturing metrics
- Sorting sensor measurements:
  - Detect activity types in motion sensors
  - Group images
  - Separate audio
  - Identify groups in health monitoring
- Detecting bots or anomalies:
  - Separate valid activity groups from bots
  - Group valid activity to clean up outlier detection

Consider a simple one-dimensional example with the data set

K={2,3,4,10,11,12,20,25,30}

Let's say we want to create two clusters, so we **take K=2**.

We randomly select two initial mean values, M1 and M2, and assign each point to the cluster whose mean is nearer:

Step 1:

- M1=4 M2=12
- K1={2,3,4} K2={10,11,12,20,25,30}

Step 2:

- Take the mean for K1 and K2
- M1=3 M2=18
- K1={2,3,4,10} K2={11,12,20,25,30}

Step 3:

- Again take the mean for K1 and K2
- M1=4.75 M2=19.6
- K1={2,3,4,10,11,12} K2={20,25,30}

Step 4:

- Again take the mean for K1 and K2
- M1=7 M2=25
- K1={2,3,4,10,11,12} K2={20,25,30}

Step 5:

- Again take the mean for K1 and K2
- M1=7 M2=25
- K1={2,3,4,10,11,12} K2={20,25,30}

**Since the means did not change, we stop.**
The final clusters are:

- K1={2,3,4,10,11,12}
- K2={20,25,30}
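The iteration above can be replayed in a few lines of NumPy (a sketch that fixes the initial means to 4 and 12 as in Step 1; the variable names are illustrative):

```
import numpy as np

K = np.array([2, 3, 4, 10, 11, 12, 20, 25, 30])
m1, m2 = 4.0, 12.0                            # Step 1: initial means
while True:
    k1 = K[np.abs(K - m1) <= np.abs(K - m2)]  # points nearer to m1
    k2 = K[np.abs(K - m1) > np.abs(K - m2)]   # points nearer to m2
    new_m1, new_m2 = k1.mean(), k2.mean()     # recompute the means
    if new_m1 == m1 and new_m2 == m2:         # means unchanged: stop
        break
    m1, m2 = new_m1, new_m2

print(k1, k2)   # [ 2  3  4 10 11 12] [20 25 30]
print(m1, m2)   # 7.0 25.0
```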

To find the number of clusters in the data, the user needs to run the K-means clustering algorithm for a range of K values and compare the results. In general, there is no method for determining the exact value of K, but an accurate estimate can be obtained using the following technique.

One of the metrics that is commonly used to compare results across different values of K is the mean distance between data points and their cluster centroid. Since increasing the number of clusters will always reduce the distance to data points, increasing K will always decrease this metric, to the extreme of reaching zero when K is the same as the number of data points. Thus, this metric cannot be used as the sole target. Instead, mean distance to the centroid as a function of K is plotted and the “elbow point,” where the rate of decrease sharply shifts, can be used to roughly determine K.
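The quantity usually plotted is the within-cluster sum of squares (WCSS), which scikit-learn exposes as the fitted model's `inertia_` attribute. A quick sketch, on toy data of our own, of how it can be computed by hand:

```
import numpy as np
from sklearn.cluster import KMeans

X_toy = np.random.rand(100, 2)
km = KMeans(n_clusters=3, random_state=0).fit(X_toy)

# WCSS: sum of squared distances from each point to its assigned centroid
wcss = sum(np.sum((X_toy[km.labels_ == j] - c) ** 2)
           for j, c in enumerate(km.cluster_centers_))
print(np.isclose(wcss, km.inertia_))  # expected: True
```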

In [2]:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
```

In [3]:

```
# Generate two blobs of 2-D points: 50 in the negative quadrant, 50 shifted positive
X = -2 * np.random.rand(100, 2)
X1 = 1 + 2 * np.random.rand(50, 2)
X[50:100, :] = X1
plt.scatter(X[:, 0], X[:, 1], s=50, c='r')
plt.show()
```

In [4]:

```
# Fit k-means with two clusters and inspect the learned centroids and labels
Kmean = KMeans(n_clusters=2)
Kmean.fit(X)
print(Kmean.cluster_centers_)
print(Kmean.labels_)
```

In [5]:

```
# Plot the data and mark the two fitted centroids, read from the model
# rather than hard-coded (the exact values change from run to run)
plt.scatter(X[:, 0], X[:, 1], s=50)
plt.scatter(Kmean.cluster_centers_[0, 0], Kmean.cluster_centers_[0, 1], s=200, c='g', marker='s')
plt.scatter(Kmean.cluster_centers_[1, 0], Kmean.cluster_centers_[1, 1], s=200, c='r', marker='s')
plt.show()
```

In [6]:

```
# Predict the cluster of a new, unseen point
Kmean.predict([[-3.0, -3.0]])
```

Next, let's apply k-means to a real data set of mall customers, using annual income and spending score as the features, and use the elbow method to choose K.
In [7]:

```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

In [8]:

```
# Importing the dataset
dataset = pd.read_csv('../datasets/Mall_Customers.csv')
X = dataset.iloc[:, [3, 4]].values
dataset
```

In [9]:

```
# Scatter plot of the two features used for clustering
x1 = dataset.iloc[:, 3]
x2 = dataset.iloc[:, 4]
plt.xlabel("Annual Income (k$)")
plt.ylabel("Spending Score (1-100)")
plt.scatter(x1, x2)
plt.show()
```

In [10]:

```
# Using the elbow method to find the optimal number of clusters
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=0)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
```

In [11]:

```
# Fitting K-Means to the dataset with K = 5, the elbow point from the plot above
kmeans = KMeans(n_clusters=5, init='k-means++', random_state=0)
y_kmeans = kmeans.fit_predict(X)
y_kmeans
```

In [12]:

```
# Visualising the clusters
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Standard')
plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Careless')
plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Target')
plt.scatter(X[y_kmeans == 3, 0], X[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Careful')
plt.scatter(X[y_kmeans == 4, 0], X[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Sensible')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```


`Hierarchical Clustering Analysis`

**Clustering is the most common form of unsupervised learning, a type of machine learning algorithm used to draw inferences from unlabeled data.**

Hierarchical clustering, also known as `hierarchical cluster analysis`, is an algorithm that groups similar objects into groups called clusters. The endpoint is a set of clusters, where each cluster is distinct from every other cluster, and the objects within each cluster are broadly similar to each other.

Broadly speaking, there are two ways of `clustering` data points based on the algorithmic structure and operation, namely `agglomerative and divisive`.

**Hierarchical clustering algorithms are of two types:**

i) Agglomerative Hierarchical clustering algorithm or AGNES (agglomerative nesting) and

ii) Divisive Hierarchical clustering algorithm or DIANA (divisive analysis).

**Agglomerative**: An agglomerative approach begins with each observation in a distinct (singleton) cluster, and successively merges clusters together until a stopping criterion is satisfied (a bottom-to-top approach).

**Divisive**: A divisive method begins with all patterns in a single cluster and performs splitting until a stopping criterion is met (a top-to-bottom approach).

The two algorithms are exact mirrors of each other, so we will cover the agglomerative hierarchical clustering algorithm in detail.

**Step 1: Make each data point a single-point cluster**

- That forms N clusters

**Step 2: Take the two closest data points and make them one cluster**

- That forms N-1 clusters

**Step 3: Take the two closest clusters and make them one cluster**

- That forms N-2 clusters

**Step 4: Repeat Step 3 until there is only one cluster left (see the sketch below)**
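As a from-scratch illustration of these steps, here is a sketch on a small 1-D toy set of our own, using the minimum pairwise distance (i.e., single linkage) to define "closest"; the linkage options are discussed next:

```
points = [2, 3, 4, 10, 11, 12, 20, 25, 30]
clusters = [[p] for p in points]        # Step 1: N singleton clusters

while len(clusters) > 1:                # Step 4: repeat until one cluster is left
    # Find the pair of clusters with the smallest single-linkage distance
    i, j = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda ab: min(abs(p - q) for p in clusters[ab[0]] for q in clusters[ab[1]]),
    )
    clusters[i] += clusters.pop(j)      # Steps 2-3: merge the closest pair
    print(clusters)                     # watch N-1, N-2, ... clusters form
```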

**Agglomerative hierarchical clustering** works by grouping the data one by one on the basis of the nearest distance measure over all pairwise distances between the data points. The distances are then recalculated, but which distance should be used once groups have been formed? There are many available methods, including:

1) Single linkage, also called nearest neighbour or single-nearest distance.

2) Complete linkage, also called farthest neighbour.

3) Between-group average linkage, also called average-average distance or simply average linkage.

4) Centroid distance.

5) Ward's method, which minimizes the sum of squared Euclidean distances. Ward's method, or minimal increase of sum-of-squares (MISSQ), is sometimes incorrectly called the "minimum variance" method.

In this way we keep grouping the data until one cluster is formed. Then, on the basis of the dendrogram, we can work out how many clusters should actually be present.
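SciPy implements all of these linkage criteria; here is a short sketch comparing them on random toy data of our own (the Mall Customers example below uses `method = 'ward'`):

```
import numpy as np
import scipy.cluster.hierarchy as sch

X_toy = np.random.rand(30, 2)
for method in ['single', 'complete', 'average', 'centroid', 'ward']:
    Z = sch.linkage(X_toy, method=method)
    # Cut the tree into three flat clusters and compare the cluster sizes
    labels = sch.fcluster(Z, t=3, criterion='maxclust')
    print(method, np.bincount(labels)[1:])
```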

In [2]:

```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

In [3]:

```
# Importing the dataset
dataset = pd.read_csv('../datasets/Mall_Customers.csv')
X = dataset.iloc[:, [3, 4]].values
dataset
```

In [4]:

```
# Using the dendrogram to find the optimal number of clusters
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Euclidean distances')
plt.show()
```

In [5]:

```
# Fitting Hierarchical Clustering to the dataset
from sklearn.cluster import AgglomerativeClustering
# Note: in newer scikit-learn versions (>= 1.2) the 'affinity' argument is named 'metric'
hc = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage = 'ward')
y_hc = hc.fit_predict(X)
y_hc
```

In [6]:

```
# Visualising the clusters
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s = 100, c = 'red', label = 'Careful')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s = 100, c = 'blue', label = 'Standard')
plt.scatter(X[y_hc == 2, 0], X[y_hc == 2, 1], s = 100, c = 'green', label = 'Target')
plt.scatter(X[y_hc == 3, 0], X[y_hc == 3, 1], s = 100, c = 'cyan', label = 'Careless')
plt.scatter(X[y_hc == 4, 0], X[y_hc == 4, 1], s = 100, c = 'magenta', label = 'Sensible')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```
