Commit 22f44079 by Arham Akheel

Migrating Introduction to Text Analytics with R to tutorials repository.

parent 66399e2d
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to the Data Science Dojo webinar
# titled "An Introduction to Data Visualization with R and ggplot2"
#
#install.packages("ggplot2")
library(ggplot2)
# Load the Titanic training data for analysis. Open in spreadsheet view.
titanic <- read.csv("titanic.csv", stringsAsFactors = FALSE)
View(titanic)
# Set up factors.
titanic$Pclass <- as.factor(titanic$Pclass)
titanic$Survived <- as.factor(titanic$Survived)
titanic$Sex <- as.factor(titanic$Sex)
titanic$Embarked <- as.factor(titanic$Embarked)
#
# We'll start our visual analysis of the data focusing on questions
# related to survival rates. Specifically, these questions will use
# the factor (i.e., categorical) variables in the data. Factor data
# is very common in the business context and ggplot2 offers many
# powerful features for visualizing factor data.
#
#
# First question - What was the survival rate?
#
# As Survived is a factor (i.e., categorical) variable, a bar chart
# is a great visualization to use.
#
ggplot(titanic, aes(x = Survived)) +
geom_bar()
# If you really want percentages.
prop.table(table(titanic$Survived))
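# If you would rather see percentages on the chart itself, here is a minimal
# sketch (not part of the original webinar) using ggplot2's computed counts;
# the legacy ..count.. notation is assumed to work on older ggplot2 versions.
ggplot(titanic, aes(x = Survived)) +
  theme_bw() +
  geom_bar(aes(y = ..count.. / sum(..count..))) +
  scale_y_continuous(labels = scales::percent) +
  labs(y = "Percentage of Passengers",
       title = "Titanic Survival Rates")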
# Add some customization for labels and theme.
ggplot(titanic, aes(x = Survived)) +
theme_bw() +
geom_bar() +
labs(y = "Passenger Count",
title = "Titanic Survival Rates")
#
# Second question - What was the survival rate by gender?
#
# We can use color to look at two aspects (i.e., dimensions)
# of the data simultaneously.
#
ggplot(titanic, aes(x = Sex, fill = Survived)) +
theme_bw() +
geom_bar() +
labs(y = "Passenger Count",
title = "Titanic Survival Rates by Sex")
#
# Third question - What was the survival rate by class of ticket?
#
ggplot(titanic, aes(x = Pclass, fill = Survived)) +
theme_bw() +
geom_bar() +
labs(y = "Passenger Count",
title = "Titanic Survival Rates by Pclass")
#
# Fourth question - What was the survival rate by class of ticket
# and gender?
#
# We can leverage facets to further segment the data and enable
# "visual drill-down" into the data.
#
ggplot(titanic, aes(x = Sex, fill = Survived)) +
theme_bw() +
facet_wrap(~ Pclass) +
geom_bar() +
labs(y = "Passenger Count",
title = "Titanic Survival Rates by Pclass and Sex")
#
# Next, we'll move on to visualizing continuous (i.e., numeric)
# data using ggplot2. We'll explore visualizations of single
# numeric variables (i.e., columns) and also illustrate how
# ggplot2 enables visual drill-down on numeric data.
#
#
# Fifth Question - What is the distribution of passenger ages?
#
# The histogram is a staple of visualizing numeric data as it very
# powerfully communicates the distribution of a variable (i.e., column).
#
ggplot(titanic, aes(x = Age)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Passenger Count",
x = "Age (binwidth = 5)",
title = "Titanic Age Distribtion")
#
# Sixth Question - What are the survival rates by age?
#
ggplot(titanic, aes(x = Age, fill = Survived)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Passenger Count",
x = "Age (binwidth = 5)",
title = "Titanic Survival Rates by Age")
# Another great visualization for this question is the box-and-whisker
# plot.
ggplot(titanic, aes(x = Survived, y = Age)) +
theme_bw() +
geom_boxplot() +
labs(y = "Age",
x = "Survived",
title = "Titanic Survival Rates by Age")
#
# Seventh Question - What are the survival rates by age when segmented
# by gender and class of ticket?
#
# A related visualization to the histogram is a density plot. Think of
# a density plot as a smoothed version of the histogram. Using ggplot2
# we can use facets to allow for visual drill-down via density plots.
#
ggplot(titanic, aes(x = Age, fill = Survived)) +
theme_bw() +
facet_wrap(Sex ~ Pclass) +
geom_density(alpha = 0.5) +
labs(y = "Age",
x = "Survived",
title = "Titanic Survival Rates by Age, Pclass and Sex")
# If you prefer histograms, no problem!
ggplot(titanic, aes(x = Age, fill = Survived)) +
theme_bw() +
facet_wrap(Sex ~ Pclass) +
geom_histogram(binwidth = 5) +
labs(y = "Age",
x = "Survived",
title = "Titanic Survival Rates by Age, Pclass and Sex")
# IntroDataVisualizationWithRAndGgplot2
The public GitHub repository for Data Science Dojo's webinar titled "An Introduction to Data Visualization with R and ggplot2".
These materials make use of the data from Kaggle's [Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic) competition.
Additionally, the following are required to use the files for the Meetup:
* [The R programming language](https://cran.rstudio.com/)
* While not required, [RStudio](https://www.rstudio.com/products/rstudio/download/) is highly recommended.
* The [ggplot2](https://cran.r-project.org/web/packages/ggplot2/index.html) package.
Subproject commit b71d0c5cc95860a0e51fbc3c4d9ffd4289c1e876
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to video 1 of the Data Science
# Dojo YouTube series "Introduction to Text Analytics with R" located
# at the following URL:
# <YouTube Video Link Here />
#
# Install all required packages.
install.packages(c("ggplot2", "e1071", "caret", "quanteda",
"irlba", "randomForest"))
# Load up the .CSV data and explore in RStudio.
spam.raw <- read.csv("spam.csv", stringsAsFactors = FALSE, fileEncoding = "UTF-16")
View(spam.raw)
# Clean up the data frame and view our handiwork.
spam.raw <- spam.raw[, 1:2]
names(spam.raw) <- c("Label", "Text")
View(spam.raw)
# Check data to see if there are missing values.
length(which(!complete.cases(spam.raw)))
# Convert our class label into a factor.
spam.raw$Label <- as.factor(spam.raw$Label)
# The first step, as always, is to explore the data.
# First, let's take a look at the distribution of the class labels (i.e., ham vs. spam).
prop.table(table(spam.raw$Label))
# Next up, let's get a feel for the distribution of text lengths of the SMS
# messages by adding a new feature for the length of each message.
spam.raw$TextLength <- nchar(spam.raw$Text)
summary(spam.raw$TextLength)
# Visualize distribution with ggplot2, adding segmentation for ham/spam.
library(ggplot2)
ggplot(spam.raw, aes(x = TextLength, fill = Label)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Text Count", x = "Length of Text",
title = "Distribution of Text Lengths with Class Labels")
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to video 2 of the Data Science
# Dojo YouTube series "Introduction to Text Analytics with R" located
# at the following URL:
# https://www.youtube.com/watch?v=Y7385dGRNLM
#
# Install all required packages.
install.packages(c("ggplot2", "e1071", "caret", "quanteda",
"irlba", "randomForest"))
# Load up the .CSV data and explore in RStudio.
spam.raw <- read.csv("spam.csv", stringsAsFactors = FALSE, fileEncoding = "UTF-16")
View(spam.raw)
# Clean up the data frame and view our handiwork.
spam.raw <- spam.raw[, 1:2]
names(spam.raw) <- c("Label", "Text")
View(spam.raw)
# Check data to see if there are missing values.
length(which(!complete.cases(spam.raw)))
# Convert our class label into a factor.
spam.raw$Label <- as.factor(spam.raw$Label)
# The first step, as always, is to explore the data.
# First, let's take a look at the distribution of the class labels (i.e., ham vs. spam).
prop.table(table(spam.raw$Label))
# Next up, let's get a feel for the distribution of text lengths of the SMS
# messages by adding a new feature for the length of each message.
spam.raw$TextLength <- nchar(spam.raw$Text)
summary(spam.raw$TextLength)
# Visualize distribution with ggplot2, adding segmentation for ham/spam.
library(ggplot2)
ggplot(spam.raw, aes(x = TextLength, fill = Label)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Text Count", x = "Length of Text",
title = "Distribution of Text Lengths with Class Labels")
# At a minimum we need to split our data into a training set and a
# test set. In a true project we would want to use a three-way split
# of training, validation, and test.
#
# As we know that our data has non-trivial class imbalance, we'll
# use the mighty caret package to create a random train/test split
# that ensures the correct ham/spam class label proportions (i.e.,
# we'll use caret for a random stratified split).
library(caret)
help(package = "caret")
# Use caret to create a 70%/30% stratified split. Set the random
# seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(spam.raw$Label, times = 1,
p = 0.7, list = FALSE)
train <- spam.raw[indexes,]
test <- spam.raw[-indexes,]
# Verify proportions.
prop.table(table(train$Label))
prop.table(table(test$Label))
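# For contrast (not in the original video), a plain random 70% split can drift
# from the true ham/spam proportions when classes are imbalanced. A quick check:
rand.indexes <- sample(nrow(spam.raw), size = floor(0.7 * nrow(spam.raw)))
prop.table(table(spam.raw$Label[rand.indexes]))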
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to video 3 of the Data Science
# Dojo YouTube series "Introduction to Text Analytics with R" located
# at the following URL:
# https://www.youtube.com/watch?v=CQsyVDxK7_g
#
# Install all required packages.
install.packages(c("ggplot2", "e1071", "caret", "quanteda",
"irlba", "randomForest"))
# Load up the .CSV data and explore in RStudio.
spam.raw <- read.csv("spam.csv", stringsAsFactors = FALSE, fileEncoding = "UTF-16")
View(spam.raw)
# Clean up the data frame and view our handiwork.
spam.raw <- spam.raw[, 1:2]
names(spam.raw) <- c("Label", "Text")
View(spam.raw)
# Check data to see if there are missing values.
length(which(!complete.cases(spam.raw)))
# Convert our class label into a factor.
spam.raw$Label <- as.factor(spam.raw$Label)
# The first step, as always, is to explore the data.
# First, let's take a look at the distribution of the class labels (i.e., ham vs. spam).
prop.table(table(spam.raw$Label))
# Next up, let's get a feel for the distribution of text lengths of the SMS
# messages by adding a new feature for the length of each message.
spam.raw$TextLength <- nchar(spam.raw$Text)
summary(spam.raw$TextLength)
# Visualize distribution with ggplot2, adding segmentation for ham/spam.
library(ggplot2)
ggplot(spam.raw, aes(x = TextLength, fill = Label)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Text Count", x = "Length of Text",
title = "Distribution of Text Lengths with Class Labels")
# At a minimum we need to split our data into a training set and a
# test set. In a true project we would want to use a three-way split
# of training, validation, and test.
#
# As we know that our data has non-trivial class imbalance, we'll
# use the mighty caret package to create a random train/test split
# that ensures the correct ham/spam class label proportions (i.e.,
# we'll use caret for a random stratified split).
library(caret)
help(package = "caret")
# Use caret to create a 70%/30% stratified split. Set the random
# seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(spam.raw$Label, times = 1,
p = 0.7, list = FALSE)
train <- spam.raw[indexes,]
test <- spam.raw[-indexes,]
# Verify proportions.
prop.table(table(train$Label))
prop.table(table(test$Label))
# Text analytics requires a lot of data exploration, data pre-processing
# and data wrangling. Let's explore some examples.
# HTML-escaped ampersand character.
train$Text[21]
# HTML-escaped '<' and '>' characters. Also note that Mallika Sherawat
# is an actual person, but we will ignore the implications of this for
# this introductory tutorial.
train$Text[38]
# A URL.
train$Text[357]
# There are many packages in the R ecosystem for performing text
# analytics. One of the newer packages is quanteda. The quanteda
# package has many useful functions for quickly and easily working
# with text data.
library(quanteda)
help(package = "quanteda")
# Tokenize SMS text messages.
train.tokens <- tokens(train$Text, what = "word",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, remove_hyphens = TRUE)
# Take a look at a specific SMS message and see how it transforms.
train.tokens[[357]]
# Lower case the tokens.
train.tokens <- tokens_tolower(train.tokens)
train.tokens[[357]]
# Use quanteda's built-in stopword list for English.
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
train.tokens <- tokens_select(train.tokens, stopwords(),
selection = "remove")
train.tokens[[357]]
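# To see exactly what was removed (not in the original video), inspect the
# first few entries of quanteda's default English stopword list.
head(stopwords(), 25)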
# Perform stemming on the tokens.
train.tokens <- tokens_wordstem(train.tokens, language = "english")
train.tokens[[357]]
# Create our first bag-of-words model.
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# Transform to a matrix and inspect.
train.tokens.matrix <- as.matrix(train.tokens.dfm)
View(train.tokens.matrix[1:20, 1:100])
dim(train.tokens.matrix)
# Investigate the effects of stemming.
colnames(train.tokens.matrix)[1:50]
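# Optional (not in the original video): quanteda can also summarize the most
# frequent terms in the document-feature matrix directly.
topfeatures(train.tokens.dfm, 20)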
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to video 4 of the Data Science
# Dojo YouTube series "Introduction to Text Analytics with R" located
# at the following URL:
# https://www.youtube.com/watch?v=IFhDlHKRHno
#
# Install all required packages.
install.packages(c("ggplot2", "e1071", "caret", "quanteda",
"irlba", "randomForest"))
# Load up the .CSV data and explore in RStudio.
spam.raw <- read.csv("spam.csv", stringsAsFactors = FALSE, fileEncoding = "UTF-16")
View(spam.raw)
# Clean up the data frame and view our handiwork.
spam.raw <- spam.raw[, 1:2]
names(spam.raw) <- c("Label", "Text")
View(spam.raw)
# Check data to see if there are missing values.
length(which(!complete.cases(spam.raw)))
# Convert our class label into a factor.
spam.raw$Label <- as.factor(spam.raw$Label)
# The first step, as always, is to explore the data.
# First, let's take a look at the distribution of the class labels (i.e., ham vs. spam).
prop.table(table(spam.raw$Label))
# Next up, let's get a feel for the distribution of text lengths of the SMS
# messages by adding a new feature for the length of each message.
spam.raw$TextLength <- nchar(spam.raw$Text)
summary(spam.raw$TextLength)
# Visualize distribution with ggplot2, adding segmentation for ham/spam.
library(ggplot2)
ggplot(spam.raw, aes(x = TextLength, fill = Label)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Text Count", x = "Length of Text",
title = "Distribution of Text Lengths with Class Labels")
# At a minimum we need to split our data into a training set and a
# test set. In a true project we would want to use a three-way split
# of training, validation, and test.
#
# As we know that our data has non-trivial class imbalance, we'll
# use the mighty caret package to create a random train/test split
# that ensures the correct ham/spam class label proportions (i.e.,
# we'll use caret for a random stratified split).
library(caret)
help(package = "caret")
# Use caret to create a 70%/30% stratified split. Set the random
# seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(spam.raw$Label, times = 1,
p = 0.7, list = FALSE)
train <- spam.raw[indexes,]
test <- spam.raw[-indexes,]
# Verify proportions.
prop.table(table(train$Label))
prop.table(table(test$Label))
# Text analytics requires a lot of data exploration, data pre-processing
# and data wrangling. Let's explore some examples.
# HTML-escaped ampersand character.
train$Text[21]
# HTML-escaped '<' and '>' characters. Also note that Mallika Sherawat
# is an actual person, but we will ignore the implications of this for
# this introductory tutorial.
train$Text[38]
# A URL.
train$Text[357]
# There are many packages in the R ecosystem for performing text
# analytics. One of the newer packages is quanteda. The quanteda
# package has many useful functions for quickly and easily working
# with text data.
library(quanteda)
help(package = "quanteda")
# Tokenize SMS text messages.
train.tokens <- tokens(train$Text, what = "word",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, remove_hyphens = TRUE)
# Take a look at a specific SMS message and see how it transforms.
train.tokens[[357]]
# Lower case the tokens.
train.tokens <- tokens_tolower(train.tokens)
train.tokens[[357]]
# Use quanteda's built-in stopword list for English.
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
train.tokens <- tokens_select(train.tokens, stopwords(),
selection = "remove")
train.tokens[[357]]
# Perform stemming on the tokens.
train.tokens <- tokens_wordstem(train.tokens, language = "english")
train.tokens[[357]]
# Create our first bag-of-words model.
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# Transform to a matrix and inspect.
train.tokens.matrix <- as.matrix(train.tokens.dfm)
View(train.tokens.matrix[1:20, 1:100])
dim(train.tokens.matrix)
# Investigate the effects of stemming.
colnames(train.tokens.matrix)[1:50]
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process. Using CV we can create
# estimates of how well our model will do in Production on new,
# unseen data. CV is powerful, but the downside is that it
# requires more processing and therefore more time.
#
# If you are not familiar with CV, consult the following
# Wikipedia article:
#
# https://en.wikipedia.org/wiki/Cross-validation_(statistics)
#
# Set up the feature data frame with labels.
train.tokens.df <- cbind(Label = train$Label, data.frame(train.tokens.dfm))
# Often, tokenization requires some additional pre-processing (e.g., column
# names that are not syntactically valid R names).
names(train.tokens.df)[c(146, 148, 235, 238)]
# Cleanup column names.
names(train.tokens.df) <- make.names(names(train.tokens.df))
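# For illustration only (not in the original video) - make.names() prepends
# "X" to names starting with a digit and replaces invalid characters with dots:
make.names(c("2nd", "try:this"))   # returns "X2nd" "try.this"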
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples).
set.seed(48743)
cv.folds <- createMultiFolds(train$Label, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv", number = 10,
repeats = 3, index = cv.folds)
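# Optional sanity check (not in the original video) - each fold's training
# indices should preserve the overall ham/spam proportions of the train set.
prop.table(table(train$Label[cv.folds[[1]]]))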
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to complete. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
#install.packages("doSNOW")
library(doSNOW)
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 10 logical cores.
cl <- makeCluster(10, type = "SOCK")
registerDoSNOW(cl)
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
rpart.cv.1 <- train(Label ~ ., data = train.tokens.df, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
# Check out our results.
rpart.cv.1
#
# Copyright 2017 Data Science Dojo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This R source code file corresponds to video 5 of the Data Science
# Dojo YouTube series "Introduction to Text Analytics with R" located
# at the following URL:
# https://www.youtube.com/watch?v=az7yf0IfWPM
#
# Install all required packages.
install.packages(c("ggplot2", "e1071", "caret", "quanteda",
"irlba", "randomForest"))
# Load up the .CSV data and explore in RStudio.
spam.raw <- read.csv("spam.csv", stringsAsFactors = FALSE, fileEncoding = "UTF-16")
View(spam.raw)
# Clean up the data frame and view our handiwork.
spam.raw <- spam.raw[, 1:2]
names(spam.raw) <- c("Label", "Text")
View(spam.raw)
# Check data to see if there are missing values.
length(which(!complete.cases(spam.raw)))
# Convert our class label into a factor.
spam.raw$Label <- as.factor(spam.raw$Label)
# The first step, as always, is to explore the data.
# First, let's take a look at the distribution of the class labels (i.e., ham vs. spam).
prop.table(table(spam.raw$Label))
# Next up, let's get a feel for the distribution of text lengths of the SMS
# messages by adding a new feature for the length of each message.
spam.raw$TextLength <- nchar(spam.raw$Text)
summary(spam.raw$TextLength)
# Visualize distribution with ggplot2, adding segmentation for ham/spam.
library(ggplot2)
ggplot(spam.raw, aes(x = TextLength, fill = Label)) +
theme_bw() +
geom_histogram(binwidth = 5) +
labs(y = "Text Count", x = "Length of Text",
title = "Distribution of Text Lengths with Class Labels")
# At a minimum we need to split our data into a training set and a
# test set. In a true project we would want to use a three-way split
# of training, validation, and test.
#
# As we know that our data has non-trivial class imbalance, we'll
# use the mighty caret package to create a random train/test split
# that ensures the correct ham/spam class label proportions (i.e.,
# we'll use caret for a random stratified split).
library(caret)
help(package = "caret")
# Use caret to create a 70%/30% stratified split. Set the random
# seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(spam.raw$Label, times = 1,
p = 0.7, list = FALSE)
train <- spam.raw[indexes,]
test <- spam.raw[-indexes,]
# Verify proportions.
prop.table(table(train$Label))
prop.table(table(test$Label))
# Text analytics requires a lot of data exploration, data pre-processing
# and data wrangling. Let's explore some examples.
# HTML-escaped ampersand character.
train$Text[21]
# HTML-escaped '<' and '>' characters. Also note that Mallika Sherawat
# is an actual person, but we will ignore the implications of this for
# this introductory tutorial.
train$Text[38]
# A URL.
train$Text[357]
# There are many packages in the R ecosystem for performing text
# analytics. One of the newer packages is quanteda. The quanteda
# package has many useful functions for quickly and easily working
# with text data.
library(quanteda)
help(package = "quanteda")
# Tokenize SMS text messages.
train.tokens <- tokens(train$Text, what = "word",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, remove_hyphens = TRUE)
# Take a look at a specific SMS message and see how it transforms.
train.tokens[[357]]
# Lower case the tokens.
train.tokens <- tokens_tolower(train.tokens)
train.tokens[[357]]
# Use quanteda's built-in stopword list for English.
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
train.tokens <- tokens_select(train.tokens, stopwords(),
selection = "remove")
train.tokens[[357]]
# Perform stemming on the tokens.
train.tokens <- tokens_wordstem(train.tokens, language = "english")
train.tokens[[357]]
# Create our first bag-of-words model.
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# Transform to a matrix and inspect.
train.tokens.matrix <- as.matrix(train.tokens.dfm)
View(train.tokens.matrix[1:20, 1:100])
dim(train.tokens.matrix)
# Investigate the effects of stemming.
colnames(train.tokens.matrix)[1:50]
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process. Using CV we can create
# estimates of how well our model will do in Production on new,
# unseen data. CV is powerful, but the downside is that it
# requires more processing and therefore more time.
#
# If you are not familiar with CV, consult the following
# Wikipedia article:
#
# https://en.wikipedia.org/wiki/Cross-validation_(statistics)
#
# Set up the feature data frame with labels.
train.tokens.df <- cbind(Label = train$Label, data.frame(train.tokens.dfm))
# Often, tokenization requires some additional pre-processing (e.g., column
# names that are not syntactically valid R names).
names(train.tokens.df)[c(146, 148, 235, 238)]
# Cleanup column names.
names(train.tokens.df) <- make.names(names(train.tokens.df))
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples).
set.seed(48743)
cv.folds <- createMultiFolds(train$Label, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv", number = 10,
repeats = 3, index = cv.folds)
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to complete. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
#install.packages("doSNOW")
library(doSNOW)
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 10 logical cores.
cl <- makeCluster(10, type = "SOCK")
registerDoSNOW(cl)
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
rpart.cv.1 <- train(Label ~ ., data = train.tokens.df, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
# Check out our results.
rpart.cv.1
# The use of Term Frequency-Inverse Document Frequency (TF-IDF) is a
# powerful technique for enhancing the information/signal contained
# within our document-frequency matrix. Specifically, the mathematics
# behind TF-IDF accomplish the following goals:
# 1 - The TF calculation accounts for the fact that longer
# documents will have higher individual term counts. Applying
# TF normalizes all documents in the corpus to be length
# independent.
# 2 - The IDF calculation accounts for the frequency of term
# appearance in all documents in the corpus. The intuition
# being that a term that appears in every document has no
# predictive power.
# 3 - The multiplication of TF by IDF for each cell in the matrix
# allows for weighting of #1 and #2 for each cell in the matrix.
# Our function for calculating relative term frequency (TF)
term.frequency <- function(row) {
row / sum(row)
}
# Our function for calculating inverse document frequency (IDF)
inverse.doc.freq <- function(col) {
corpus.size <- length(col)
doc.count <- length(which(col > 0))
log10(corpus.size / doc.count)
}
# Our function for calculating TF-IDF.
tf.idf <- function(x, idf) {
x * idf
}
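# A tiny worked example of the three functions above, using a hypothetical
# 2-document, 3-term count matrix (not part of the original video):
toy <- matrix(c(1, 0, 2,
                1, 1, 0),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("doc1", "doc2"), c("free", "win", "call")))
toy.tf <- apply(toy, 1, term.frequency)       # NOTE - result is terms x documents
toy.idf <- apply(toy, 2, inverse.doc.freq)    # "free" appears in both docs, so its IDF is 0
toy.tfidf <- t(apply(toy.tf, 2, tf.idf, idf = toy.idf))  # back to documents x terms
toy.tfidf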
# First step, normalize all documents via TF.
train.tokens.df <- apply(train.tokens.matrix, 1, term.frequency)
dim(train.tokens.df)
View(train.tokens.df[1:20, 1:100])
# Second step, calculate the IDF vector that we will use - both
# for training data and for test data!
train.tokens.idf <- apply(train.tokens.matrix, 2, inverse.doc.freq)
str(train.tokens.idf)
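# For reference (not shown in this video), the same training IDF vector would
# later be applied to the test data, e.g. with a hypothetical matrix of test
# token counts named test.tokens.matrix:
# test.tokens.tf <- apply(test.tokens.matrix, 1, term.frequency)
# test.tokens.tfidf <- t(apply(test.tokens.tf, 2, tf.idf, idf = train.tokens.idf))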
# Lastly, calculate TF-IDF for our training corpus.
train.tokens.tfidf <- apply(train.tokens.df, 2, tf.idf, idf = train.tokens.idf)
dim(train.tokens.tfidf)
View(train.tokens.tfidf[1:25, 1:25])
# Transpose the matrix
train.tokens.tfidf <- t(train.tokens.tfidf)
dim(train.tokens.tfidf)
View(train.tokens.tfidf[1:25, 1:25])
# Check for incomplete cases.
incomplete.cases <- which(!complete.cases(train.tokens.tfidf))
train$Text[incomplete.cases]
# Fix incomplete cases
train.tokens.tfidf[incomplete.cases,] <- rep(0.0, ncol(train.tokens.tfidf))
dim(train.tokens.tfidf)
sum(which(!complete.cases(train.tokens.tfidf)))
# Make a clean data frame using the same process as before.
train.tokens.tfidf.df <- cbind(Label = train$Label, data.frame(train.tokens.tfidf))
names(train.tokens.tfidf.df) <- make.names(names(train.tokens.tfidf.df))
# IntroToTextAnalyticsWithR
Public repo for the Data Science Dojo YouTube tutorial series [Introduction to Text Analytics with R](https://www.youtube.com/playlist?list=PL8eNk_zTBST8olxIRFoo0YeXxEOkYdoxi). This tutorial series leverages the [Kaggle SMS Spam Collection Dataset](https://www.kaggle.com/uciml/sms-spam-collection-dataset) originally published by [UCI ML Repository](https://archive.ics.uci.edu/ml/datasets/sms+spam+collection)
- [Introduction to Text Analytics with R - Part 1](https://www.youtube.com/watch?v=4vuw0AsHeGw)
- [Introduction to Text Analytics with R - Part 2](https://www.youtube.com/watch?v=Y7385dGRNLM)
- [Introduction to Text Analytics with R - Part 3](https://www.youtube.com/watch?v=CQsyVDxK7_g)
- [Introduction to Text Analytics with R - Part 4](https://www.youtube.com/watch?v=IFhDlHKRHno)
- [Introduction to Text Analytics with R - Part 5](https://www.youtube.com/watch?v=az7yf0IfWPM)
- [Introduction to Text Analytics with R - Part 6](https://www.youtube.com/watch?v=neiW5Ugsob8)
- [Introduction to Text Analytics with R - Part 7](https://www.youtube.com/watch?v=Fza5szojsU8)
- [Introduction to Text Analytics with R - Part 8](https://www.youtube.com/watch?v=4DI68P4hicQ)
- [Introduction to Text Analytics with R - Part 9](https://www.youtube.com/watch?v=SgrLE6WQzkE)
- [Introduction to Text Analytics with R - Part 10](https://www.youtube.com/watch?v=7cwBhWYHgsA)
- [Introduction to Text Analytics with R - Part 11](https://www.youtube.com/watch?v=XWUi7RivDJY)
- [Introduction to Text Analytics with R - Part 12](https://www.youtube.com/watch?v=-wCrClheObk)