Data Mining blog

This is my blog for the Data Mining module.

Analysis in RapidMiner


I have downloaded the data for the Titanic. I have created a scatter graph that plots age against survival. This shows that people who survived were of all ages, and people who died were of all ages. A person’s chance of survival did not seem to depend on age. This would indicate that they did not save the younger people first, or the older people first, as may have been suggested. The data is also colour-coded by class: class 1 is blue, class 2 is green and class 3 is red. The bulk of the red data is in the top left of the graph. This means that the vast majority of the class 3 passengers were young and they died.
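A plot along these lines can be sketched in Python with matplotlib (the rows and column layout below are invented for illustration, not the actual RapidMiner dataset):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical rows: (age, survived, passenger class) -- invented values
passengers = [(22, 0, 3), (38, 1, 1), (26, 1, 3), (35, 1, 1), (54, 0, 2)]

# Same colour coding as the blog plot: class 1 blue, class 2 green, class 3 red
class_colour = {1: "blue", 2: "green", 3: "red"}

fig, ax = plt.subplots()
for age, survived, pclass in passengers:
    ax.scatter(age, survived, color=class_colour[pclass])
ax.set_xlabel("Age")
ax.set_ylabel("Survived (0 = no, 1 = yes)")
fig.savefig("titanic_scatter.png")
```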




I have downloaded data in relation to a golf club. This has details of whether there is play or not, and in what weather conditions. I have created a scatter plot in RapidMiner of temperature against play (yes or no). This seems to indicate that there is play when the temperature is generally cooler, apart from two outlier occasions when there was play at very hot temperatures.


I have downloaded data in relation to the payment method for products from a store. There are three methods: cash, cheque and credit card. I want to see if there is any correlation between age and payment method. I have created a scatter plot which graphs age against payment method. Cash payments are pretty evenly spread across all ages. Credit card payments are also spread across all ages. This may come as a surprise, as it might have been expected that older people would not use credit cards as much as younger people. The payments by cheque occur amongst all ages until about age 60, after which there are very few payments by this method.


Finally, I have downloaded a file that outlines details of working hours, conditions and benefits for the employees of a company. I have created a scatter plot that graphs the latest wage increase against the working hours of the employees. This shows the data to be predominantly in the top left-hand corner of the graph. This means that the main bulk of the workforce got small pay increases of under 5%, apart from a handful of people who received an increase of over 5%.

Regression Analysis in R

I have used R to run a regression analysis.

I have used the enrolment data from a university in the US which shows the number of enrolments each year. I want to see if there is any correlation between the enrolment numbers (ENROL) and other factors such as the unemployment rate (UNEMP), the numbers graduating from high school (HGRAD) and the average income per person (INCOME).

R gave me the following results:


Call:
lm(formula = ENROL ~ UNEMP + HGRAD + INCOME, data = datavar)

Residuals:
     Min       1Q   Median       3Q      Max
 -1156.2   -491.5   -3.121    382.6   1226.6

Coefficients:
              Estimate  Std. Error  t value  Pr(>|t|)
(Intercept) -8.379e+03   2.156e+03   -8.985  0.000594 ***
UNEMP        4.689e+02   1.236e+02    4.236  0.000812 ***
HGRAD        4.328e-01   7.523e-02    6.645  1.64e-05 ***
INCOME       4.018e+00   4.658e-03    8.478  4.89e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 661.5 on 1025 degrees of freedom
Multiple R-squared: 0.9652,  Adjusted R-squared: 0.9548
F-statistic: 232.5 on 3 and 1025 DF,  p-value: < 2.4e-14

From this output, we can determine that the intercept is -8379, the coefficient for the unemployment rate is 468.9, the coefficient for number of spring high school graduates is 0.4328, and the coefficient for per person income is 4.018. Therefore, the complete regression equation is:

Autumn Enrolment = -8379 + 468.9 * Unemployment Rate + 0.4328 * Number of Spring High School Graduates + 4.018 * Income per person.
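The fitted equation can be wrapped up as a simple function, for example in Python (a sketch using the coefficients quoted above; the input values are invented for illustration):

```python
def predict_enrolment(unemp_rate, hs_grads, income):
    """Predicted autumn enrolment from the fitted regression equation."""
    return -8379 + 468.9 * unemp_rate + 0.4328 * hs_grads + 4.018 * income

# Hypothetical inputs: 7% unemployment, 10,000 graduates, income of 2,000
print(round(predict_enrolment(7, 10000, 2000), 1))  # 7267.3
```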

The results from R give us a number of statistics and indicators for us to see how good a fit our model is, and how significant or not is the correlation between the enrolment and the variables we have included.

The stars (for example, ***) indicate the predictive power of each feature in the model. The significance level (as listed by the significance codes shown) provides a measure of how likely it is that the true coefficient is zero given the value of the estimate.

The presence of three stars indicates a p-value below 0.001, which means that the feature is extremely unlikely to be unrelated to the dependent variable. In this case, the intercept and our three variables all have three stars beside them, so we can be confident that the variables we have included are related to the dependent variable.

A common practice is to use a significance level of 0.05 to denote a statistically significant variable. If the model had few features that were statistically significant, it would be cause for concern, since it would indicate that our features are not very predictive of the outcome. In this case, the intercept and our three variables all have significance levels much smaller than the commonly used 0.05 level. Therefore we can conclude that these variables are all statistically significant, and we are correct to include them. The overall p-value for the model is very small, in the order of 10 to the power of -14, so we can conclude that these results did not occur by chance and that there is a strong relationship between the variables.

The Multiple R-squared value (also called the coefficient of determination) provides a measure of how well our model as a whole explains the values of the dependent variable. It is similar to the correlation coefficient in that the closer the value is to 1.0, the better the model explains the data. Since the R-squared value is 0.9652, we know that over 96 percent of the variation in the dependent variable is explained by our model. Because models with more features always explain more variation, the Adjusted R-squared value corrects R-squared by penalizing models with a large number of independent variables. The Adjusted R-squared value here is 0.9548, so we can conclude that our model is a very good fit for the data. With an R-squared value this high, there is little need to search for additional variables.
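The adjustment penalises extra predictors via the formula adjusted R² = 1 - (1 - R²) * (n - 1) / (n - p - 1), where n is the number of observations and p the number of predictors. A quick sketch in Python, using made-up figures rather than this model's values:

```python
def adjusted_r_squared(r2, n, p):
    """Penalise R-squared for the number of predictors p given n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical model: R-squared of 0.90 from 30 observations and 3 predictors
print(round(adjusted_r_squared(0.90, 30, 3), 4))  # 0.8885
```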

The Residuals section provides summary statistics for the errors in our predictions, some of which are apparently quite substantial. Since a residual is equal to the true value minus the predicted value, the maximum error of 1,226 suggests that the model under-predicted by over 1,200 for at least one observation. The minimum residual value is -1,156 which means that the model over-predicted by over 1,100 for at least one observation. The maximum and minimum residuals are approximately 2 standard deviations out from the mean, which can be expected in a model like this. There are no outlier values that are way off from the rest of the data.

Fifty percent of errors fall within the 1Q and 3Q values (the first and third quartile), so half of the predictions were between 491 over the true value and 383 under the true value.
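The five-number summary that R prints can be reproduced for any vector of residuals; a sketch in Python with invented residuals (not the model's actual errors):

```python
import statistics

# Hypothetical residuals (true minus predicted) -- invented for illustration
residuals = [-1100, -500, -480, -10, 5, 300, 390, 400, 1200]

# statistics.quantiles with n=4 returns [1Q, median, 3Q]
quartiles = statistics.quantiles(residuals, n=4)
print(min(residuals), quartiles, max(residuals))
```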

Given the preceding performance indicators, our model is performing very well.

Information Systems


Microsoft Excel is a spreadsheet developed by Microsoft for Windows. Excel is a very powerful tool that enables us to analyse large data files. It features calculation, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications. Excel uses a grid of cells arranged in numbered rows and letter-named columns to organize data. All mathematical operations can be used in Excel to carry out calculations on data. We can change the format and colour of the different cells. The cells can be in general text format, number format or date format, for example. We use formulas and functions to perform calculations on the data in one cell and place the result in another cell. We can highlight header rows in colours so they stand out. We can freeze panes so that certain rows and columns are always displayed when we move around the spreadsheet. There can be many sheets in one spreadsheet, and a formula in one sheet can take data from another sheet.

Excel and Financial Data

Spreadsheets are often used to store financial data. Formulas and functions that are used on this type of data include:

  • Performing basic mathematical operations such as summing columns or rows of numbers.
  • Doing accounts calculations and finding values such as profit or loss.
  • Calculating repayment plans for loans or mortgages.
  • Finding the average, maximum, minimum, and other statistical values in a set of data.
  • Performing actuarial calculations such as present values of pensions, in order to calculate the amount that needs to be paid per annum to receive a certain pension at retirement.
  • Carrying out What-If analysis on data – where variables are modified one at a time to see how the change affects other data, such as expenses and profits.

Special functions in Excel for analysing big data files

VLOOKUP: This function is used when you need to find things in a table or a range by row. For example, look up an employee’s salary by his employee number, or find his phone number by looking up his last name.

HLOOKUP: This function operates like VLOOKUP, except that it searches for a value in the top row of a table or an array of values, and then returns a value in the same column from a row you specify in the table or array.

SUMIF: This function combines the SUM and IF functions. This allows us to sum data from a group but only for the people specified. For example, if we had salary data for a large company, and we want to get the sum of the salaries for the people in a specific location, or people in a certain job title, SUMIF allows us to pick out these people by looking up the location or job title, and summing the salaries for these people only.
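The same conditional-sum idea can be sketched outside Excel, for example in Python (the names, locations and salary figures below are invented):

```python
# Hypothetical payroll rows -- invented figures for illustration
payroll = [
    {"name": "A", "location": "Dublin", "salary": 40000},
    {"name": "B", "location": "Cork",   "salary": 35000},
    {"name": "C", "location": "Dublin", "salary": 45000},
]

# Equivalent of SUMIF(location range, "Dublin", salary range)
dublin_total = sum(row["salary"] for row in payroll if row["location"] == "Dublin")
print(dublin_total)  # 85000
```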

COUNTIF / AVERAGEIF: As in SUMIF, these allow us to take the count or average of values for specific people if they are in a certain group.

GOALSEEK: This feature works in reverse: instead of calculating the result of a formula, you specify the result you want, and Excel finds the input value needed to produce it. For example, Goal Seek can find the interest rate required for an investment to reach a target value.
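Goal Seek essentially searches backwards from a target. The idea can be sketched with a simple bisection search in Python (the savings formula and figures are invented for illustration, and the search assumes the formula is increasing in its input):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with f(x) == target, assuming f is increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative use: what deposit grows to 1500 after 10 years at 4% interest?
def grow(deposit):
    return deposit * 1.04 ** 10

print(round(goal_seek(grow, 1500, 0, 1500), 2))  # 1013.35
```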

Graphing in Excel

Excel can be used to create many types of graphs including bar graphs, line graphs, scatter plot graphs and pie charts.

We can put different bars or lines on the same graph for comparison purposes. For example, we can put a graph of inflation over a certain time period for Ireland, the UK and Germany alongside each other.

Bar graphs are one of the most common types of graph used to display data. Bar graphs are most often used to show amounts or the number of times a value occurs. For example, for a school class you can use a bar graph to show and compare the number of students with different hair colors.

Scatter plot graphs (XY charts) are used to show scientific XY data. Scatter charts are often used to find out if there is a relationship between variables X and Y. For example, X could be the weight of individuals and Y could be their height. X could be the age of cars, and Y could be the sale price, of all cars sold in a year. Excel can also perform a regression analysis to determine how strong the correlation between X and Y is.

Pie charts use a circle format, cut into pieces like a pie. Each piece represents the proportion of that particular item out of the total. For example we can use a pie chart to represent the percentage of people voting for political parties in an election result. Excel allows us to give different colours to the pieces of the pie, in order to make them stand out from each other.

Creating Macros in Excel

Advanced users of Excel often use macros to speed their ability to perform computing tasks. A macro is a series of instructions given to the spreadsheet software. Simple macros record your keystrokes or mouse clicks, allowing you to replicate them quickly and easily. Creating a macro in Excel automates repetitive tasks, such as complex cell formatting. You can create and save a series of macros to perform basic activities and improve your work efficiency. An example of a use of macros would be in producing benefit statements for a large group of employees.

Excel Security Features

Excel may contain personal and confidential data. It is important that the many security features are used. Companies that handle the data of their clients could face legal action and reputational damage if the data is compromised or falls into the wrong hands. Files can be password protected, so that a password has to be used in order to open the file or make changes to the data or formulae. A sheet can be hidden within the workbook, so it can still be linked by formulae, etc. but not seen without the password.

Try R Blog Post


This is my Try R blog post.

I have completed the Try R course from Code School. I attach an image showing completion of this course.

Example case I have created using R Graphics

I have created a barplot of the rate of inflation in Ireland from 1977 to 2015. I first obtained the inflation figures based on the Consumer Price Index from the Central Statistics Office website. In R, I created a vector with this data and called it “inflation”. Then I put names on the data, with the labels 1977 to 2015. Then I used the barplot command to create the barplot.
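The same named-vector-then-barplot workflow can be sketched in Python; the figures below are a small invented subset for illustration, not the actual CSO series:

```python
# Hypothetical inflation rates (%) -- invented subset, not the CSO figures
inflation = {"2012": 1.7, "2013": 0.5, "2014": 0.2, "2015": -0.3}

# A crude text barplot: one '#' per 0.1 percentage points, negatives flagged
for year, rate in inflation.items():
    bar = "#" * int(abs(rate) * 10)
    sign = "-" if rate < 0 else ""
    print(f"{year} {sign}{bar} ({rate}%)")
```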

What information can be gleaned from the barplot?

This barplot shows that inflation is hard to predict over time. There is no discernible pattern over the long term. However, years of high inflation seem to come together, as do years of low inflation. Ireland has had a low-inflation economy for over 30 years now. The last year that inflation was in double figures was 1983. We had deflation in 2009, 2010 and 2015. One major global event could cause inflation to jump up suddenly. For example, in the 1970s, a shortage of oil led to higher petrol prices, which led to higher transport costs, which had a knock-on effect on food and clothing prices, causing inflation to rise sharply. If people’s salaries do not increase in line with inflation, they are getting a pay decrease in real terms, so they will have less disposable income. This can cause businesses to close and people to lose their jobs, which reduces the tax take for the government and increases the social welfare bill, which can lead to a recession.

What other ideas / concepts could be represented via R Graphics?

We could have incorporated other figures for the time period. We could have used R to create a matrix showing inflation, unemployment rate and population for the time period in question, for example. We could use R Graphics to graph inflation versus unemployment rate to see if there is any correlation between them. Is the unemployment rate generally higher or lower if inflation is high, and what if inflation is low? Perhaps it would show that there is no significant correlation between inflation and the unemployment rate. We can use R’s merge function to merge the inflation data with other data for the years in question.
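Merging by year, as R's merge function does, can be sketched in Python with a dictionary join (both series below are invented for illustration):

```python
# Hypothetical series keyed by year -- invented figures
inflation    = {2013: 0.5, 2014: 0.2, 2015: -0.3}
unemployment = {2014: 11.3, 2015: 9.4, 2016: 8.4}

# Inner join on the years present in both series (like merge() in R)
merged = {
    year: (inflation[year], unemployment[year])
    for year in inflation.keys() & unemployment.keys()
}
print(sorted(merged.items()))
```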

We can use data frames to access individual portions of the data.

We can use 3D images in R Graphics.

We can add statistics in relation to the data in the graph. We could add a horizontal line at the mean level of the data. We could also show an interval within which 95% of the inflation values fall.

Another possibility would be to extend the period of time to 100 years, to show the trend over very long periods. We could group the data into periods of 5 or 10 years.

We can add headers or footers. We can use colours to show different colours at different ranges of values.

We can compare Irish inflation with that of other countries by creating a matrix, with, say, UK and EU inflation in the other columns. We could create three line graphs, displaying Irish, UK and EU inflation alongside each other on the same graph. So we can see if there is any correlation between them or if Irish inflation is generally higher or lower than other countries.


Fusion tables – 2011 census data


This is my fusion table blog post.

I used Google Fusion Tables to create heatmaps of Ireland based on the population of the 26 counties according to the 2011 census. I have created one heatmap based on population numbers, and a second heatmap based on the population densities of the counties. I first obtained the population statistics from the Central Statistics Office website. Then I copied the data into an Excel spreadsheet. I obtained a boundary file in KML format, which contains the county boundaries of Ireland. I created the Fusion Tables heatmaps by matching up the county names in the population data file and the KML boundary file.

The colours shown on each county correspond to a certain population density or total population range. Population density measures the number of people per square kilometre living in each county. The density heatmap gives a better idea of how people are concentrated in the counties than the heatmap based on population numbers. The colours get darker as the population density of each county increases. The population total heatmap goes from light orange to dark red, while the population density heatmap goes from light purple to dark blue. The legend at the bottom right shows how the colours correspond to a certain range of values. These maps give a valuable insight into how the population is distributed.
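Population density is simply population divided by land area; a sketch in Python with rounded illustrative figures (not the actual census values):

```python
# Hypothetical county figures -- rounded for illustration, not census values
counties = {
    "Dublin":  {"population": 1_270_000, "area_km2": 920},
    "Leitrim": {"population": 32_000,    "area_km2": 1_590},
}

# People per square kilometre, rounded to the nearest whole number
density = {
    name: round(c["population"] / c["area_km2"])
    for name, c in counties.items()
}
print(density)
```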

As we would expect, the highest population density is in the capital of the Republic, Dublin. The population density of Dublin is more than nine times that of the next highest county, Louth. There are high figures for the counties surrounding Dublin such as Louth, Kildare, Wicklow and Meath, and also for the counties in Munster that contain big cities, Cork and Limerick. Most of the counties with the lowest population densities are in the west of Ireland, such as Leitrim, Mayo and Roscommon.

The most significant feature of this analysis is the high concentration of the population in Dublin and its surrounding counties.

In addition to creating a heatmap based on population, we could also have used the 2011 census data to create a heatmap based on other factors e.g. rate of unemployment, average age, percentage claiming social welfare, male to female ratio etc. This can help the government target the areas that are most in need of investment to provide jobs or to build schools, for example.

We could create the heatmap for the four provinces, instead of separating by county.

We could also have gone deeper into the data. One possibility would be to have separate heatmaps for each county, splitting into the different regions of each county.

We can create similar heatmaps for previous censuses. This would show how the population density has changed over time. This information can be used to show trends in population, which will help in planning for the future. Governments can use the figures to plan for infrastructure, roads, schools, housing needs etc. The areas with growing populations will need better roads, more schools, more houses built, and so on.

The government can use the analysis shown in the data to adopt policies to reverse the trend of population growth in the Dublin region. This would alleviate the pressure on the Dublin area. For example, they could introduce tax incentives to encourage companies to set up in the west of Ireland, in order to create jobs there. This would move many people out of the most densely populated areas into the least. It would have to be accompanied by major investment in the housing, amenities etc. that will be needed to cope with the increased population. People would be encouraged to move by the much lower price of houses outside the Dublin commuter belt.