Have you ever considered a ‘what if’ … what if I could become proactive rather than reactive? And in becoming proactive, what positive influence could be imparted on the organisation?

All network element management systems are only capable of looking at the now, together with archiving some of the past. In a typical Mining setting, there are a handful of Network Managers that handle specific events from specific network elements: individual Network Managers for fibre, microwave, Telemetry, DTVMR systems and so on. It is possible for all these element managers to be combined into a Manager of Managers (MoM). Depending on the number of management products used throughout the network, the total count of element managers can range from a couple to many, multiplied across each function (SCADA, IT etc). So the analytic techniques mentioned below do not only apply to the telecom environment; they can be applied to any data source.

There is an abundance of information that simply disappears or is hard to access. It is therefore difficult or impossible to make quick inferences about a specific fault or event in time. Turning data into information, and information into knowledge, is Business Intelligence – it is about becoming proactive.

This article will discuss simple concepts like data standardisation and finding normality as methods that can be used to quickly isolate faults and predict outcomes (and, using an analytics tool, this can be done in minutes).

Normality, Standardisation and Central Tendency

Normality, standardisation and central tendency in general have more to do with the dataset itself and therefore do not, on their own, offer a comprehensive method for gaining knowledge. Although these methods are used in the process of rationalising datasets, they still must be understood before data visualisation techniques are applied; otherwise the data used could be 100% wrong, 100% right or somewhere in between (no qualified data, no qualified results).

Creating a proactive environment means that some knowledge of pending events is required: predicting the future, without the proverbial crystal ball. There is actually some science behind it. Just to be clear, predicting the future can only be done within the dataset or population observed. The true means of ‘predicting the future’ is prescriptive analysis, which is another topic.

In my last blog, A brief guide to data types for the Resources Sector, I spoke about finding normality, or standardising the population mean. This concept of standardising the population mean is essential to inferring something about the population (dataset).

To expand slightly on ‘normality’, another important aspect is finding ‘Central Tendency’. Simply put, this means finding the mean, median, mode and geometric mean of a population. To reiterate, to truly say something about datasets and trends, other statistical techniques are required, like logistic and linear regression, correlation and R^2 (a measure of goodness of fit). Together with standardising the normal, these statistical techniques provide a simple way to find variances between faults and products. It is these variances between systems, and within systems, that give clues as to the why and the when. Aspects of a dataset can then be inferred to say something about a trend.
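
As a quick illustration only, here is how those central tendency measures could be pulled from a handful of hypothetical daily-average Receive Levels in Python; the numbers are made up, not real link data:

```python
# Illustrative only: the four central-tendency measures on a small set of
# hypothetical daily-average Receive Levels (dBm).
import statistics as st

rx_levels = [-74.0, -73.5, -75.0, -74.0, -74.5, -90.0, -74.0]

print("mean:          ", st.mean(rx_levels))
print("median:        ", st.median(rx_levels))
print("mode:          ", st.mode(rx_levels))   # most frequent value (-74.0 here)
# geometric_mean only accepts positive numbers, so take it on the magnitudes
print("geometric mean:", -st.geometric_mean([abs(x) for x in rx_levels]))
```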

So the tools we need to use are the mean, histograms, scatter plots and Q-Q plots.

Normality and Variance

As mentioned earlier, a lot of network information is discarded and/or only read in the here and now. However, what if the saved historical information was compared against other systems, or within systems? This could be done in real time as well. What additional insight would this give? What if the Receive Levels of microwave links could be compared to the Australian Bureau of Meteorology’s five-day weather forecast? Do Receive Signal levels vary because of the weather, or is it something else, e.g. equipment? Over time you could start to ‘infer’ what may happen to some microwave links against a forecast … predictive maintenance, yes!
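
To make that ‘what if’ concrete, here is a hedged sketch of lining up Receive Levels against a daily rainfall figure in pandas; the file names and column names are assumptions, not real feeds:

```python
# A hedged sketch: lining up daily average Receive Levels with a daily rainfall
# figure. The CSV files and their columns are hypothetical.
import pandas as pd

rx = pd.read_csv("daily_rx_levels.csv")   # assumed columns: date, site, rx_dbm
wx = pd.read_csv("daily_rainfall.csv")    # assumed columns: date, rain_mm

daily_rx = rx.groupby("date", as_index=False)["rx_dbm"].mean()
merged = daily_rx.merge(wx, on="date")

# A strong negative correlation would suggest levels drop on wet days (rain fade)
# rather than pointing at an equipment fault.
print(merged["rx_dbm"].corr(merged["rain_mm"]))
```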

I think you get the picture; you can take data analysis as deep as you wish. The point is that the data is there to be analysed – you only need to create the models to make sense of it.

So here we go … but first, Variance.

Variance is defined as ‘the average of the squared differences from the mean’: take each value’s distance from the overall mean, square it, and average the results. The Standard Deviation is the square root of the variance.

Normality is an underlying assumption in parametric testing (testing that makes defined assumptions about the data): the assumption that the data is normally distributed, i.e. fits the standard bell curve.

A practical illustration makes both ideas more relevant.
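
A minimal sketch of both ideas on a handful of made-up Receive Levels, assuming Python with NumPy and SciPy:

```python
# A minimal sketch of variance, standard deviation and a normality check on
# made-up Receive Levels (dBm).
import numpy as np
from scipy import stats

rx = np.array([-72.0, -74.5, -73.1, -75.8, -74.2, -76.0, -73.6, -74.9])

mean = rx.mean()
variance = np.mean((rx - mean) ** 2)   # average of the squared differences from the mean
std_dev = np.sqrt(variance)            # the Standard Deviation

# Shapiro-Wilk is one common test of the normality assumption
# (a p-value above 0.05 means we cannot reject normality at the 5% level).
stat, p_value = stats.shapiro(rx)
print(f"mean={mean:.1f} dBm, variance={variance:.2f}, std={std_dev:.2f}, Shapiro p={p_value:.3f}")
```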

A Practical Illustration

Let’s say there was an intermittent telemetry connection polling several PLCs. The SCADA Engineer found no issue with their equipment and sent the issue through to the Telecommunications team. The Telecommunications Engineer scratches his/her head and thinks ‘mmmmm, there are no equipment alarms at the moment, but wait I’ll look back through the history’.

After checking the history, the Engineer notices that some links were going up and down in the area but could not immediately identify which link could be affecting the PLC polling, as there are several links serving the area of interest. The Telecommunications Engineer could sit and wait until alarms appear and check back with the SCADA Engineer, or he/she could use the power of data analysis to locate possible causes.

To pinpoint possible causes we could use all the data available, which would be a huge task and a time-waster, or use a sample of the data (a frame). As long as the sample size is greater than 30, we can approximate normality – the ‘normal’ in this example being the sample average Receive Signal Level of each microwave radio.

Suppose we took daily snapshots (30 samples) of Receive Signal levels for various microwave links over the period of concern, something like Table 1. Table 1 shows part of the averaged daily Receive Signal values that will be used for this example. We want to test whether the Receive Signal level for each radio has a bearing on the SCADA Engineer’s complaint, so first we need to find what is normal: a normal Receive Signal.

[Table 1: Averaged daily Receive Signal levels (dBm) for the microwave links used in this example]
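
For illustration only, a hypothetical ‘Table 1’ style frame could be built like this in pandas; the link names, dates and values are all assumed:

```python
# Hypothetical reconstruction of a 'Table 1' style frame: one averaged daily
# Receive Level per link per day. Link names, dates and values are assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
links = [f"DM{n:02d}" for n in range(1, 31)]        # 30 hypothetical sites
days = pd.date_range("2019-06-01", periods=30)      # 30 daily snapshots

table1 = pd.DataFrame(
    rng.normal(loc=-74, scale=3, size=(len(days), len(links))),  # centred on -74 dBm
    index=days, columns=links,
).round(1)

print(table1.iloc[:5, :5])                           # a small corner of the table
print("sample average:", round(table1.values.mean(), 1), "dBm")
```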

Step 1: Assessing Normality and Variance for a System

Remember that assessing for normality gives the Data Analyst the ability to assume that part of a dataset can infer the whole. Assessing for variance gives the Data Analyst the ability to find the Standard Deviation. The Standard Deviation gives the Data Analyst a “standard” way of knowing what is normal, what is extra-large or extra-small, and everything in between. That’s the key to knowing what normal operational conditions look like and when that normal is likely to be exceeded.
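
As a small illustration of the Standard Deviation as that yardstick, here is a sketch that flags any reading more than two standard deviations from the sample mean; the values are made up:

```python
# The Standard Deviation as a yardstick for 'normal': flag any reading more
# than two standard deviations from the sample mean. Values are illustrative.
import numpy as np

rx = np.array([-74.1, -73.6, -75.0, -74.4, -88.2, -73.9, -74.7, -96.5])

mean, std = rx.mean(), rx.std()
z = (rx - mean) / std           # how many 'standards' each reading sits from normal
flagged = rx[np.abs(z) > 2]     # the extra-large / extra-small readings

print(f"sample mean {mean:.1f} dBm, standard deviation {std:.1f} dB")
print("readings worth a second look:", flagged)
```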

The following figures show how normality and variance can be displayed to instantly see whether something should be checked, and how significant the variation is. These plots can be done in Excel, which can take some time, or by hand, which can take even more time, or in a data analytics tool (which I have been using), which takes only minutes.

The Receive Signal levels were taken from 30 sites surrounding the area of concern and sampled over 30 days, and approximately 15 days either side of the complaint. The sample average Receive Signal level was found to be -74 dBm.

After plotting all 30 days, four days appeared to have the most variability and are shown below. The line from left to right represents a normal situation; the dots show the Receive Levels observed. The four sample days show that on day 10 something started to happen; on day 11 the Receive Levels became more spread out, e.g. from -65dBm to -97dBm to -30dBm to -122dBm. On the 12th and 13th days the network appeared to return to normal. So what happened on day 11?
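
A hedged sketch of how the most variable days could be found programmatically rather than by eyeballing 30 plots; the readings are simulated, with day 11 deliberately made noisier than the rest:

```python
# A sketch of ranking days by Receive Level spread; the readings are simulated
# and day 11 is deliberately made noisier than the other days.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
records = [
    {"day": day, "link": f"DM{link:02d}",
     "rx_dbm": rng.normal(-74, 12 if day == 11 else 3)}
    for day in range(1, 31) for link in range(1, 31)
]
df = pd.DataFrame(records)

spread = df.groupby("day")["rx_dbm"].std().sort_values(ascending=False)
print(spread.head(4))      # the four most variable days; day 11 should top the list
```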

Day 11 was chosen because there looked to be more Receive Level spread than on any other day, meaning the radio links’ Receive Levels varied greatly from one value to another during the day, which is not ideal. For example, during day 11 the Receive Level could have been at -34dBm at one point and -110dBm at another, and so on. Day 13 also shows spread, but follows the normal distribution more closely than day 11. However, if required, day 13 could be used as well.

Step 2: Assessing Normality and Variance for a period

It just so happens that I have the Receive Level data for day 11, and the histogram below shows that the Receive Signal Level values ranged from -20dBm to -135dBm, meaning the radio links’ Receive Levels varied from the normal (-74dBm) and were scattered over a large range (spread).
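
For readers following along in Python, a sketch of that day-11 histogram against the -74dBm sample average, using simulated readings in place of the real data:

```python
# A sketch of the day-11 histogram against the -74 dBm sample average,
# using simulated readings in place of the real data.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
day11_rx = np.clip(rng.normal(-74, 20, size=300), -135, -20)   # hypothetical readings

plt.hist(day11_rx, bins=30)
plt.axvline(-74, linestyle="--", label="sample average (-74 dBm)")
plt.xlabel("Receive Signal Level (dBm)")
plt.ylabel("Count")
plt.legend()
plt.show()
```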

Running a normality test (a Q-Q plot in this case) for each link on day 11 showed the link(s) that contributed to the extreme dBm values.
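
A sketch of that per-link check using SciPy’s probplot; the two links and their readings are invented, with ‘DM01’ given a deliberately misbehaving tail so the departure from the straight line is visible:

```python
# A sketch of the per-link normality check: a Q-Q (probability) plot of each
# link's day-11 readings against a normal distribution. The data is invented;
# points that peel away from the straight line are the extreme readings.
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
day11 = {
    "DM01": np.concatenate([rng.normal(-74, 3, 40), rng.normal(-110, 8, 20)]),  # misbehaving
    "DM02": rng.normal(-74, 3, 60),                                             # healthy
}

fig, axes = plt.subplots(1, len(day11), figsize=(10, 4))
for ax, (link, rx) in zip(axes, day11.items()):
    stats.probplot(rx, dist="norm", plot=ax)
    ax.set_title(f"{link} - day 11")
plt.tight_layout()
plt.show()
```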

Step 3: Assessing Normality and Variance for a Link

The resulting data analysis found that site DM01’s Receive Signal Level is spread over a wide range, and mostly in the radio’s non-responsive region, from -87dBm down to beyond -125dBm. The histogram shows DM01 shifting to the right, away from the sample-average normal, and it should be investigated.


The above concept is nothing more than applying basic statistics to datasets, and even if you have a tool that does the heavy lifting, interpretation is still the key to effective system analysis.

The Steps:

  1. Categorise the data into groups, e.g. days vs sites
  2. Assess normality by applying an average (or median, etc.) to the categories
  3. Analyse the data by comparing the means (a combined sketch of all three steps is shown below)
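
Putting the three steps together, a hedged end-to-end sketch in pandas; the input file and its columns are assumptions:

```python
# The three steps in one hedged sketch: categorise (day vs site), average per
# category, then compare those means against the overall sample average.
# 'rx_log.csv' and its columns are assumptions.
import pandas as pd

df = pd.read_csv("rx_log.csv")       # assumed columns: day, site, rx_dbm

# 1. Categorise the data into groups: days vs sites
grouped = df.pivot_table(index="day", columns="site", values="rx_dbm", aggfunc="mean")

# 2. Assess normality by applying an average to the categories
overall_mean = df["rx_dbm"].mean()

# 3. Compare the means: which site drifts furthest from the overall average?
drift = (grouped.mean() - overall_mean).abs().sort_values(ascending=False)
print(drift.head())
```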

And Finally

In this example, the analysis showed that it is possible to quickly filter out the white noise and get to a possible cause. There are other methods, but that depends on your knowledge of statistics. As for predicting the “future”, you’ll have to wait for part 4.