## Surface Profile Explorer

### Quick Start

When we measure the profile of a rough surface we want to know which number best describes that roughness. It is a little-known fact that most surfaces need at least TWO numbers to describe them. One is related to amplitude, the other to frequency.

But which amplitude measure, which frequency measure, and what are all the other numbers? Start to play with the app and answers will quickly appear. Click on the type of surface that interests you, enter a YMax value for the sort of peak height you expect, and you're off to a good start. Each parameter has a tooltip and an internal clickable link so you can find out what it is.

### Surface Profile Explorer

There are many ways to quantify a surface. Most of us make do with Ra or Rz, yet very often these are completely useless in revealing key aspects of our surfaces. The SPE allows you to set up known surfaces and see how the large variety of different measures change when you change parameters. The key fact is that most of us most of the time don't know which surface measurement correlates with the desired surface property. Once you understand the different measurements then sorting out the correlation becomes straightforward, as discussed later. The surfaces are:

- Triangular, a set of triangular shapes
- High Freq., high-frequency random roughness
- Low Freq., a low-frequency sine wave
- Deep, deep troughs in a flat surface
- High, high peaks on a flat base
- Complex, a complex profile with some interesting features
- Wide, wide peaks with narrow troughs
- Narrow, narrow peaks with wide troughs
- Synth, a synthetic surface built from your own frequency components
- Load your own file - see below

The parameters you can change are:

- YMax, the largest height of the surface, in µm
- View length mm, the length, in mm, of the view, assuming that the scan itself is the standard 12.5mm
- Slope, a linear trend imposed on your data, in µm
- Noise, extra noise imposed on your data, in µm
- View Offset, the offset of the view, for exploring things at low view lengths

For Synth you put in relative amplitudes (nominally 0 to 100%) of surface structures with wavelengths from 1mm down to 1µm. The app adjusts the relative values so that the overall surface meets your YMax requirements so you can adjust frequency and amplitude independently. *Note that many more data points have to be used so that calculations are slower. And for accurate results, choose a smaller L, such as 1mm*
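As a rough sketch of the Synth idea (not the app's actual code; the function name and example wavelengths are illustrative), a synthetic profile can be built by summing sine components at your chosen relative amplitudes, then rescaling so the highest point meets YMax:

```python
import numpy as np

# A sketch of the "Synth" idea, not the app's actual code: sum sine
# components with user-chosen relative amplitudes, then rescale so the
# highest point sits at YMax above the mean. Names are illustrative.
def synth_profile(components, length_mm=1.0, n_points=10000, y_max=4.0):
    """components: dict mapping wavelength in µm -> relative amplitude (0-100)."""
    x_um = np.linspace(0.0, length_mm * 1000.0, n_points)
    y = np.zeros_like(x_um)
    for wavelength_um, rel_amp in components.items():
        y += rel_amp * np.sin(2 * np.pi * x_um / wavelength_um)
    y -= y.mean()                 # centre on the mean line
    peak = np.abs(y).max()
    if peak > 0:
        y *= y_max / peak         # meet the YMax requirement
    return x_um, y
```

Because the rescaling happens after the components are summed, frequency content and overall amplitude stay independent, as described above.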

The main graph shows the data plus its *average* value (horizontal line) and 5 *zones* (separated by vertical lines) from which values such as Rz are calculated.

The small graph shows the ADF, the Amplitude Distribution Function, which looks at how the profile varies away from the average value (at 0 on the X axis). A symmetrical curve has a low Skew (as discussed below).
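The ADF is essentially a normalised histogram of heights about the average line. A minimal sketch of how one can be computed (function name illustrative):

```python
import numpy as np

# The ADF as a normalised histogram of heights about the average line.
# A minimal sketch; the function name is illustrative.
def adf(y, bins=50):
    dev = np.asarray(y, dtype=float)
    dev = dev - dev.mean()                 # deviations from the average
    density, edges = np.histogram(dev, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, density
```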

### What do the parameters mean?

To get a quick idea of what each parameter is, move your mouse over the value and a Tooltip will appear with a brief explanation.

**Rz** comes in two flavours; neither is too much affected by a single rogue measurement point.
**Rz(DIN)** or **Rtm** This is the mean of the 5 highest peaks and the 5 deepest valleys found in the 5 adjoining samples.
**Rz(ISO)** This is the mean peak-to-valley height found by measuring the peak-to-valley height in 5 adjoining samples and taking the average of the 5. This is usually identical to Rz(DIN), but odd spikes can change things.
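A minimal sketch of the Rz(ISO)/Rtm recipe described above, assuming the scan is split into equal-length samples (function name illustrative):

```python
import numpy as np

# Split the profile into 5 adjoining samples, take each sample's
# peak-to-valley height, and average the 5. A sketch of the recipe,
# not necessarily the app's exact implementation.
def rz_iso(y, n_samples=5):
    samples = np.array_split(np.asarray(y, dtype=float), n_samples)
    return float(np.mean([s.max() - s.min() for s in samples]))
```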

**Rt** This is the maximum roughness depth and is the biggest peak-to-valley height. One rogue peak or valley can completely distort this reading.

**R3z** and **R3zmax** are equivalents based on the 3rd-highest peak in the 5 samples. They are said to be less affected by rogue points.

**Rmax** or **Ry** or **Rymax** or **Rti** This is similar to Rt. It finds the largest single peak-to-valley height in 5 adjoining samples. If the baseline of the data is flat then Rmax=Rt. However if the baseline slopes, Rt will be larger than Rmax because the highest peak will be made higher by the slope (and the lowest trough is made lower). You can try this out for yourself. Change the value of Slope in the Profile section, and you will see how Rmax is lower than Rt.

**Ra** This is the average (arithmetical mean) roughness. Its value will always be less than or equal to half of Rz as it is a measure of the variation either side of the mean, not the peak-to-peak variation. In practice it is usually much less than half of Rz. It is less dependent than Rz on rogue data points - which may or may not be a good thing! It is the most common measurement simply because early instruments could easily measure it. It is often the *least* useful.

**Rms** or **Rq** This is the Root Mean Square average. It’s not so popular for general surface measurements but it turns out to be crucial for contact mechanics calculations and is popular for those interested in the appearance of surfaces because optical properties can, in principle, be calculated directly from the Rms value.
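Both Ra and Rq are averages of the deviations from the mean line, differing only in whether the deviations are taken as absolute values or squared. A sketch (not necessarily the app's exact implementation):

```python
import numpy as np

# Ra and Rq as described above: deviations are measured from the mean
# line. A sketch, not necessarily the app's exact implementation.
def ra_rq(y):
    dev = np.asarray(y, dtype=float)
    dev = dev - dev.mean()
    ra = float(np.mean(np.abs(dev)))        # arithmetic mean deviation
    rq = float(np.sqrt(np.mean(dev ** 2)))  # root-mean-square deviation
    return ra, rq
```

For a pure sine wave of amplitude A, Ra = 2A/π ≈ 0.64A and Rq = A/√2 ≈ 0.71A, which is why Rq is always slightly larger than Ra.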

**Rp** This is the maximum deviation above the mean line. It is highly susceptible to rogue points. If the profile varies equally above and below the average then Rp will be approximately 0.5 of Rt as Rt measures peak-to-valley height and Rp measures peak-to-mean height.

**Rpm** This is the mean value of Rp measurements over 5 adjoining samples and is less susceptible to rogue points. It is the Peak Mean (hence PM).

**Rvm** This is the mean value of the deepest valleys measured over 5 adjoining samples. So it’s the Valley Mean (hence VM).

Not surprisingly, Rpm + Rvm = Rtm.
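The identity follows directly from the definitions: the mean of (peak minus valley) over the 5 samples is the mean peak height plus the mean valley depth. A sketch, with Rvm taken as a positive depth (names illustrative):

```python
import numpy as np

# Rpm, Rvm and Rtm over 5 adjoining samples, heights relative to the
# mean line. Rvm is reported as a positive depth. A sketch only.
def rpm_rvm_rtm(y, n_samples=5):
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    samples = np.array_split(y, n_samples)
    rpm = float(np.mean([s.max() for s in samples]))   # mean highest peak
    rvm = float(np.mean([-s.min() for s in samples]))  # mean deepest valley
    rtm = float(np.mean([s.max() - s.min() for s in samples]))
    return rpm, rvm, rtm
```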

**Rpm/Rz** This is a useful diagnostic value. If you calculate the W (for Wide) profile you have very little extra height above the mean value so Rpm/Rz is small. If you calculate the opposite, N (for Narrow) you now have lots of extra height above the mean value so Rpm/Rz is large. For general structures, Rpm/Rz will be close to 0.5.

**Lr** is the Length Profile Ratio. Imagine you are a tiny ant walking along the surface. You follow every up and down so you walk a distance X’ which is larger than the straight distance X (e.g. the default 12.5mm) across the sample. The Length Ratio = X’/X. Many people look at a surface profile plot and imagine that this ratio must be very high. But usually this is an illusion. The Y axis is greatly magnified. In the default plots a typical Y value is 4µm and the X length is 12500µm. If you plotted to scale, the surface would look completely flat or, in other words, Length Ratio=1. It is very difficult to construct a surface with a Length Ratio > 1.5. This is an important point for those who believe that increased adhesion of a rough surface comes from the extra surface area. For most “roughened” surfaces, the Length Ratio is probably no more than 1.05, yet the increase of adhesion from roughening might be a factor of 2. This comes from changes in crack propagation, not from increased surface area.
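The ant's walk is easy to compute, and confirms how close to 1 the ratio stays even for a wiggly-looking profile (a sketch; both axes must be in the same units):

```python
import numpy as np

# The ant's walk: sum the segment lengths along the profile and divide
# by the straight span. Both axes must be in the same units (µm here).
def length_ratio(x_um, y_um):
    x = np.asarray(x_um, dtype=float)
    y = np.asarray(y_um, dtype=float)
    walked = np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))
    return float(walked / (x[-1] - x[0]))
```

A 4µm-amplitude sine with a 100µm wavelength over a 12.5mm scan looks dramatic on a magnified plot, yet its Length Ratio comes out below 1.05.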

The **View Length** slider helps you see why the standard view is so deceptive. Because the scan length is always fixed at 12.5mm, as you decrease the View Length you see the scan ever closer to its true scale. If your roughness is ~5µm and your View Length is 50µm you see that the real roughness is nowhere near as exaggerated as it looks at the 12.5mm view. If you want to look at more of the surface at this higher magnification, slide the **View Offset** slider.

### Peak Count Measurements

**Pc** Peak count is a useful, but potentially confusing measure.

Pc is sometimes (depending on standards) specified as peaks/mm or peaks/cm. The app calculates peaks/cm – so divide by 10 if you want it in the alternative peaks/mm standard. In a noisy plot the number of "peaks" looks very large; we want a measure that discards false peaks and counts only true peaks.

So how does Pc ignore the noise? The rule is that if you scan from left to right, as soon as a peak rises above the reference level (0 or 1 depending on Pc0 or Pc1), you say “A peak has started” and increase your Pc by 1. But as you scan along to the right, you must wait till you hit a valley that is at a distance below the average line equal to the height of your reference above the average line. Only when you reach that point can you say “I’ve found the end of a peak” and can start searching for the next one. So a peak needs a reasonably deep valley before it’s classed as a peak. This fits well with human intuition and is a reason why Pc is popular.

For smart Pc measurement the user should choose a threshold. In this app just two ('standard') Pc values are calculated:

- Pc(0) is the peak count with the threshold set to 0 (i.e. the mean position). So a peak must go above 0, go below 0 and come back up to 0 again to be counted.
- Pc(1) has a threshold of 1. So a peak must go above 1, go below -1 and come back up to 1 again to be counted.
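The hysteresis rule above can be sketched as a small state machine (a sketch, not the app's code; thresholds are in µm about the mean):

```python
import numpy as np

# Count a peak when the profile rises above +threshold, then re-arm
# only after it falls below -threshold. Thresholds are in µm about the
# mean; the app then divides the raw count by the scan length in cm to
# report peaks/cm. A sketch only.
def peak_count(y, threshold):
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    count = 0
    armed = True          # ready to count the next peak
    for value in y:
        if armed and value > threshold:
            count += 1    # "A peak has started"
            armed = False
        elif not armed and value < -threshold:
            armed = True  # deep enough valley: end of the peak
    return count
```

With threshold 0 this gives Pc(0); with threshold 1 it gives Pc(1).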

There are some important other peak-related measurements, all based on the *average* as the level where peaks are counted.

**Sm** is the Mean Spacing between peaks. The definition of a peak once again starts with the signal rising above the mean and then having to fall below the mean before the new peak starts at the next rising above the mean.

**λa** is the Average Wavelength. It is calculated from Ra and Δa (see below). It is similar to Sm but is weighted by the amplitudes of the various wavelengths rather than by the predominant wavelength.

**λq** is the RMS equivalent of λa and is important in many contact mechanics calculations.

**Δa** is the average absolute slope. It is calculated using a sophisticated Savitzky-Golay polynomial differential fit.

**Δq** is the RMS equivalent of Δa.
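Given Ra, Rq and the slope measures, the wavelengths follow as λa = 2π·Ra/Δa and λq = 2π·Rq/Δq. A sketch using simple finite differences (np.gradient) rather than the app's Savitzky-Golay fit, so values will differ slightly on noisy data:

```python
import numpy as np

# Δa, Δq and the derived wavelengths λa = 2π·Ra/Δa and λq = 2π·Rq/Δq.
# Finite differences stand in for the app's Savitzky-Golay fit here.
def slope_and_wavelength(x_um, y_um):
    y = np.asarray(y_um, dtype=float)
    y = y - y.mean()
    slope = np.gradient(y, np.asarray(x_um, dtype=float))
    delta_a = np.mean(np.abs(slope))            # average absolute slope
    delta_q = np.sqrt(np.mean(slope ** 2))      # RMS slope
    ra = np.mean(np.abs(y))
    rq = np.sqrt(np.mean(y ** 2))
    lam_a = 2 * np.pi * ra / delta_a            # average wavelength, µm
    lam_q = 2 * np.pi * rq / delta_q            # RMS wavelength, µm
    return float(delta_a), float(delta_q), float(lam_a), float(lam_q)
```

For a pure sine wave both λa and λq recover the sine's wavelength, which is a handy sanity check.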

### Shape information

Two values summarise the overall shape. They are calculated from the **Amplitude Distribution Function**:

**Rsk** is the measure of the Skewness of the distribution above and below the average. A perfectly symmetric distribution has Rsk=0. The more the distribution tails off in one direction, the larger the (+ve or -ve) skew. A large positive skew means thin spikes sticking up. A large negative skew means narrow pits in the surface. In either case, such large Rsk values mean that other measures (such as Ra) are not of great value.

**Rku** measures the Kurtosis of the distribution. A high kurtosis distribution has a sharper "peak" and flatter "tails", while a low kurtosis distribution has a more rounded peak with wider "shoulders". By the (excess kurtosis) definition used here, a normal distribution has a kurtosis of 0.
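Both values come from the moments of the height distribution. A sketch, using the excess-kurtosis convention implied above (normal distribution → 0); not necessarily the app's exact code:

```python
import numpy as np

# Rsk and Rku from the moments of the height distribution. Rku is
# computed as *excess* kurtosis so a normal distribution gives 0.
def rsk_rku(y):
    dev = np.asarray(y, dtype=float)
    dev = dev - dev.mean()
    rq = np.sqrt(np.mean(dev ** 2))
    rsk = np.mean(dev ** 3) / rq ** 3          # skewness
    rku = np.mean(dev ** 4) / rq ** 4 - 3.0    # excess kurtosis
    return float(rsk), float(rku)
```

A sine wave, being symmetric with most of its time spent near the extremes, gives Rsk = 0 and a strongly negative Rku of -1.5.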

### Loading your own data

To keep things simple, the data file must be a simple text file (e.g. .txt or .dat) containing two columns separated by a tab or comma. This means that you have to do a little work (maybe in Excel) to transform your surface data into this simple format, including getting the units right because **all data are assumed to be in µm**. The first column is the x-distance of each measurement. It is assumed that the points are regularly spaced because the program only uses the final value to define the width of the plot. The second column is the y-height. If the first row contains a header, this is ignored, as are any lines that don't contain two meaningful numbers.
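If you want to pre-check your file outside the app, a tolerant loader following the same rules (tab or comma separated, header and malformed lines skipped, values in µm) might look like this sketch; the function name is illustrative and this is not the app's own parser:

```python
import numpy as np

# A tolerant two-column loader matching the rules above: tab- or
# comma-separated, header and malformed lines ignored, values in µm.
def load_profile(path):
    xs, ys = [], []
    with open(path) as f:
        for line in f:
            parts = line.replace(",", "\t").split("\t")
            if len(parts) < 2:
                parts = line.split()   # plain-whitespace fallback
            try:
                x, y = float(parts[0]), float(parts[1])
            except (ValueError, IndexError):
                continue               # header or malformed line: ignore
            xs.append(x)
            ys.append(y)
    return np.array(xs), np.array(ys)
```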

*The 3 sliders you can meaningfully slide ("All") are Slope, View Offset and View Length. Sliding either of the other 2 ("Not File") assumes that you have chosen to revert to one of the standard examples.* If you do slide one of these, "No file chosen" (or the equivalent for your browser) appears as the File name so you know that you are no longer looking at your own data.

### Correlations

Suppose you have 10 surfaces, each of which has a difference in the property of interest to you: touch, feel, gloss, adhesion, friction... How do you go about specifying the optimum surface to give you the best properties? The *wrong* answer is to do what most people do, which is to measure Ra or Rz of the 10 surfaces (writing down the values produced on the screen of the measuring device) and hope to get a correlation. The *right* way is to gather the scan data in digital format and then get your software (or, if necessary, the full Surface Profiler on my website) to analyse the data for all the key parameters.

It's then straightforward to create a grid in Excel of the properties and the measurements and do something like a Pearson Correlation test (a built-in function of Excel) to find which parameter(s) correlate with the data. In addition to the measured parameters you can use your intuition to create combination parameters (multiplied or divided) and include those in the Pearson Correlation. Hopefully you will have a pleasant surprise when a strong correlation emerges. Once you know what it is you then "only" have to create a surface with the correct parameter(s) to get the result that you or your customer requires. Of course, knowing that you want a certain combination of parameters is one thing, getting them is another thing. But the alternative is blind exploration of a very complicated landscape so in the long run it's generally preferable to know in advance what your ideal structure should be, then work out how to make it.