
NDE 4.0 Podcast Transcript

Episode 7 — Improving Safety with Better Data — Dave Hughes, CTO, Novosound


Nasrin Azari: [00:00:00] Hello everyone. Today, we are honored to be speaking with Dave Hughes, the CTO of Novosound, an award-winning Scottish sensor company using thin-film processes to eliminate conventional limitations on ultrasound sensors.

Dr. Dave Hughes is the founder and CTO of Novosound. His background includes degrees in both physics and engineering, and over 10 years of ultrasound research experience. He is now a high-growth entrepreneur overseeing the technical direction and vision of Novosound, which is rapidly revolutionizing industrial ultrasound technology that had remained largely unchanged for 40 years.

Novosound was founded in April 2018 as a spinout from the University of the West of Scotland, where Dave invented Novosound's core IP. Under Dave's leadership, the company has quickly grown to become an award-winning global business manufacturing ultrasound sensors for a range of industries, including the NDT, oil and gas, aerospace, and semiconductor markets.

Welcome, Dave, to Floodlight Software's NDE 4.0 Podcast series, which poses five questions to NDE 4.0 experts.

Dave Hughes: [00:01:07] Thank you very much for having me.

Nasrin Azari: [00:01:11] Great! Well, it’s very exciting to have you on our program today. I’m really interested in the use of sensors for NDT and the future potential of continuous monitoring.

So I'm really excited for this conversation today. How about we start with a basic question. Question number one for you is: what are the different types of sensors that can be used in NDT? How do they work, and what data can they provide?

Dave Hughes: [00:01:40] Okay. So that’s a very wide-ranging question in nondestructive testing.

There's a huge array of sensors available that basically do what it says on the tin: non-destructively test an object. You don't want to break apart an airplane to assess its health. So you've got electrical sensors, such as eddy current sensors, which generate eddy currents in the object and pick up the response as those currents travel.

You have optical sensors: digital cameras and visual inspection-based methods, using scopes and the like, et cetera. Chemical sensors are used a lot in the corrosion industry to look at the chemical changes of a material as a crude sort of road map. And then where Novosound tends to be based is in mechanical or acoustic sensors.

So mechanical sensors could be used for vibration detection, for example. When you're monitoring a part of the machinery that's moving, if it's working properly, it will move in a very constrained, rhythmic manner. When issues or problems start to arise, you start to get vibrations that deviate from the normal signal.

And these sensors tend to pick them up. The other type of mechanical waves aren't the ones you can feel or observe directly; above a certain frequency they become acoustic waves, and an acoustic wave is just a mechanical wave that propagates through an object.

You've got acoustic emissions. So if a pipeline or a part fractures, it will send out a very high-frequency sound that you may not be able to hear with your ears, but that can be detected by acoustic means; a microphone is one version of an acoustic sensor. If you go higher in frequency, you start getting into the ultrasound domain, and that's where Novosound sits.

We concentrate on high-frequency, safe ultrasound sensors, and these are active in that they will send an acoustic pulse into an object. So it's not detecting acoustics that get generated by faults; we are actively putting sound into objects and listening for the echoes coming back.

And from that, you can measure and build up a picture of structural changes. This might be due to erosion that gives rise to wall loss, or even the internal buildup of waste products, for example. You can image these with ultrasound, and that's ultimately what Novosound does. We create sensors that use ultrasound to measure these structural changes.
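
As a rough illustration of the pulse-echo principle Dave describes, here is a minimal sketch of how wall thickness follows from an echo's round-trip time. The sound velocity used is an assumed textbook value for steel, not a Novosound specification.

```python
# Illustrative pulse-echo wall-thickness estimate (not Novosound's implementation).
# A pulse travels through the wall, reflects off the back face, and returns;
# thickness = velocity * round-trip time / 2.

SOUND_VELOCITY_STEEL = 5900.0  # m/s, typical longitudinal velocity in steel (assumed)

def wall_thickness_mm(round_trip_time_s: float,
                      velocity_m_per_s: float = SOUND_VELOCITY_STEEL) -> float:
    """Estimate wall thickness in millimeters from the echo's round-trip time."""
    one_way_distance_m = velocity_m_per_s * round_trip_time_s / 2.0
    return one_way_distance_m * 1000.0

# Example: an echo arriving 3.4 microseconds after the pulse implies ~10 mm of steel.
print(wall_thickness_mm(3.4e-6))  # ~10.03 mm
```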

Nasrin Azari: [00:04:18] Oh, interesting. So when you place them on, say, a pipe or something, how far apart do you have to place them? Is there an optimal range?

Dave Hughes: [00:04:30] That really comes down to what you're trying to look at. And a lot of that came through the history of NDT: NDT engineers have come to understand where problems are going to arise.

For example, rounded joints are more likely to corrode than a straight, flat part. So, for a sensor, you would tend to place them at the parts that are known to corrode or to erode, so you can get a measure of how fast it's happening or whether it's happening at all. There are lots of times where you may believe that corrosion is happening, shut down a pipeline in order to properly inspect it, and find nothing.

It's absolutely healthy. So why did you shut down and potentially risk losing, you know, a large amount of money for that downtime? Novosound solves that problem by letting you continually monitor the part without shutting anything down.

Nasrin Azari: [00:05:17] Interesting. Fascinating. So let's move to question number two for you, Dave, which is: what challenges and/or hurdles need to be overcome in order for an NDT company to successfully deploy and use the technology?

Dave Hughes: [00:05:31] So I've split this train of thought into two parts. As a physicist, since my background's in physics and engineering, I really want to solve the technical challenge first; that's what I was drawn to. But ever since I started moving much more into business and setting up Novosound, you realize that there are no real technical challenges left.

There are all the commercial challenges. The technical part Novosound has solved is how you get a sensor to be simple enough that you can place it in any environment, from low temperature through room temperature right up to very hot, and still measure the mechanical change that you want to monitor.

So the real key technical barrier to continuous monitoring is having the right measurement, one that enables you to assess the structural health of the object you're looking at. Once you have that, you move right into the realm of the commercial problem. What's the return on investment or the economic impact of investing in hardware to monitor this versus sending in rope-access teams or human engineers to do the inspection? So, you know, should you invest $100,000 a year on sensors for a section of pipe when shutting it down for inspection would cost you $1,000,000 in losses for the time it's down?

And so you have to balance these up. Of course, you might spend $100,000 installing sensors on a pipeline that just doesn't corrode as fast as you think it does, so you're not saving the $1,000,000 a year because there's no problem. You don't know that until you get started.
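
As a rough illustration of that balance, here is a minimal sketch of the break-even arithmetic using the figures from the conversation; the number of avoided shutdowns per year is an invented, illustrative assumption.

```python
# Illustrative break-even comparison for continuous monitoring.
# Dollar figures come from the conversation; the shutdown rate is assumed.

sensor_cost_per_year = 100_000      # cost of instrumenting the pipe section ($/yr)
loss_per_shutdown = 1_000_000       # production lost per precautionary shutdown ($)
shutdowns_avoided_per_year = 0.3    # assumed: unnecessary shutdowns avoided per year

expected_saving = shutdowns_avoided_per_year * loss_per_shutdown
net_benefit = expected_saving - sensor_cost_per_year

print(f"Expected saving: ${expected_saving:,.0f}/yr, net benefit: ${net_benefit:,.0f}/yr")
# At these figures, monitoring pays for itself if it prevents roughly one
# unnecessary shutdown every ten years (100,000 / 1,000,000 = 0.1 per year).
```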

Nasrin Azari: [00:07:18] Ok, it sounds like it’s a learning experience.

Dave Hughes: [00:07:22] Yeah. So that would be the third category problem, technical, commercial, and then just luck.

Nasrin Azari: [00:07:28] Yeah. The continuous monitoring side seems really interesting to me, because you can be a lot more proactive about problems versus waiting for them to happen and then having to fix a problem that could be really costly to clean up.

Dave Hughes: [00:07:47] Absolutely. It's all about increasing the flow of data to allow you to do the predictive analytics, to be much more efficient and economical with the cleanup, or even to prevent the problem in the first instance so that you don't have the cleanup at all.

Nasrin Azari: [00:08:03] Right. Right. That’s really interesting. Let’s move to question number three for you, Dave, which is what are the benefits that can be achieved through continuous monitoring and how long until those benefits can be realized? So this kind of points to your ROI question.

Dave Hughes: [00:08:19] Yeah, exactly. So the ROI really stems from what it is that you're wanting to monitor, and if you don't monitor it and it goes unchecked, how much is it going to cost to put it right or to ignore it? Sometimes it can be much, much more catastrophically expensive than just monitoring it and making sure it's okay. By increasing the flow of the data, you move away from a situation where you may only have a data point every single year, and you have to infer or work out what happened to get from the reading in December 2018 to the reading in 2019. And then you're not going to get your next data point until 2020; it's a black-box situation where you may find in 2020 that things that were okay are now not, but what happened in the middle?

So when we are talking to our clients, the main call to action of this idea is to say: let's turn the lights on. Let's get rid of the unknowns in the gap between the 2018 and 2020 samples. You end up with regular unmanned inspection going on, which allows the flow of data to increase and allows much more economical planning and implementation of the traditional NDT in the first instance.

And if you go back as well to the idea that some parts just won't corrode as fast as others: if you know where you have to put your resources in the small window of time that you have to go off and do your inspection, the economic benefit starts to multiply tenfold, a hundredfold versus doing nothing.

Compare that with just staying with the conservative way of doing it once every year, once every 12 months. If you're lucky, you may have an unplanned outage so you can go in and do an inspection, but the reason for an unplanned outage is usually a failure; something's gone wrong to create that opportunity.

So let’s try and increase the data without relying on things going wrong.

Nasrin Azari: [00:10:27] So how much data is involved in a continuous monitoring system? You talk about moving on from one data point a year, so how many data points is it? A data point once every hour? What do you think is typical?

Dave Hughes: [00:10:47] So this all depends on the application or the object you're trying to monitor. For example, in heavy pipework, we're talking duplex steels or highly corrosive pipework, you may only get material loss of, you know, a millimeter a year or so in terms of your wall loss, so the timescale of monitoring is going to be very slow. It's not going to be a real-time live video; that would be like watching paint dry, in some respects. But a data point every week might be enough. We don't need one every minute, we don't need one every hour, because these are micrometer changes in the wall that you may not be able to see.

And then it's a lot of noise. If you go to every week, you may start to see a very slight change; every month, and you're only getting 12 data points a year. So there's always going to be a balance, based on how fast you expect the wall to change from your existing readings, as to how often you're going to need to sample.
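
As a minimal sketch of that trade-off, the snippet below relates the sampling interval to an assumed corrosion rate of about a millimeter a year and an assumed per-reading noise level; both figures are illustrative, not Novosound specifications.

```python
# Illustrative sampling-interval estimate (assumed figures, not real sensor data).
# Wall loss of ~1 mm/year is ~2.7 micrometers/day; if each reading carries
# ~20 micrometers of noise, daily readings mostly show noise, while weekly or
# monthly readings start to show the trend.

WALL_LOSS_MM_PER_YEAR = 1.0   # assumed corrosion rate
READING_NOISE_MM = 0.020      # assumed 1-sigma measurement noise (20 micrometers)

def days_until_change_exceeds_noise(rate_mm_per_year: float, noise_mm: float,
                                    snr: float = 3.0) -> float:
    """Days between readings for the expected change to be `snr` times the noise."""
    rate_mm_per_day = rate_mm_per_year / 365.0
    return snr * noise_mm / rate_mm_per_day

print(days_until_change_exceeds_noise(WALL_LOSS_MM_PER_YEAR, READING_NOISE_MM))
# ~22 days: roughly monthly sampling is enough at these assumed figures.
```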

Oversampling all the time can be useful in order to get a better picture. But there may also be cases where you jump in at the deep end and say, right, let's install these sensors, let's take a measurement twice a day for the next year, and you actually see a lot more going on than you expected.

And that can actually be quite a worrying affair. There may be decision-makers higher up the chain who do not want to see that level of unknown, for example in safety-critical environments like aerospace or oil and gas. But we usually work with those decision-makers in the safety-critical environments to understand what data is needed to make it as safe as possible without going over the top.

Right, in the sense that the faults aren't real faults; they're false friends, almost.

Nasrin Azari: [00:12:47] Yeah. Do you get any pushback from companies? In one of our previous podcast sessions, one of the things we talked about was this sort of interesting dynamic. Obviously, the whole point of the systems that we're putting in place is to increase the safety and integrity of infrastructure. And obviously there are certain regulations that a company needs to adhere to, and an asset operator needs to perform a certain number of inspections to maintain the safety of their systems. What you are offering is much more information than they can get from human inspections, right?

So is there any pushback? Because one of the things we talked about in a different session was that, let's say you're only required to test a particular component once a year. If you decide to test it every month and you learn of a problem two months after your last annual test cycle, for example, you are obligated to fix it. Whereas previously, without having that knowledge, you were only really obligated to look at it in 12 months' time, and you may not have seen the issue.

There's kind of that balance between wanting to do the right thing, but then also being required to do the right thing because you have the information, whereas previously you weren't required to have it.

Dave Hughes: [00:14:30] Absolutely, and that's what I was touching on just before with that move towards continuous monitoring, or NDE 4.0 / NDT 4.0. It increases the amount of insight and data, or information; let's call it information, because data is the raw form of information. If you took a sensor reading every minute for a year, you'd have a lot of data, but you might not necessarily have a lot of information. So let's think about it in terms of information. When you have so much more information, things will come out of the woodwork that you might not have known about until later. So it does increase the workload to an extent. This is how I see it disrupting and also revolutionizing the NDT service industry, and this is still the manned part, because NDT services, as well as going out and doing the inspections, feed the maintenance segment of the market as well, where the actual repairs have to be done.

So in some respects, it will generate more work in that part, which is good, and that may offset some of the decrease in activity on the services side. But at the same time, some of the issues unearthed through the increased information might not necessarily need to be acted on. So in the first period, there'll be a lot of false positives or false friends that will get fixed, and we'll end up with some really great improvements to assets.

Other parts will be inspected and repaired but wouldn't necessarily have caused problems later on, just because the sampling rate is so high. As you move into the different phases and get deeper into NDE 4.0, the sampling rate will decrease, because we'll understand more about the trends. We'll understand more about what predictive analytics actually tells us.

So take, for example, corrosion. You can get situations where, just due to the chemical balance, the material loss happens very, very slowly over a long period of time, but then, because of the makeup of the scale or the pitting that happens inside the pipe, it suddenly just accelerates.

And it's the chemical process that just takes over. All of a sudden, you've got a fairly rapid decrease in the wall thickness. So you move away from a linear wall-change rate to a nonlinear wall change that can make problems happen much faster than you thought. If you only take two data points, one at the start of the year and one at the end, you'll only see the end point, after that fast problem has arisen.

If you take a data point every month, you've got 12 data points, and you should then be able to see that there's a linear change to the wall thickness followed by a drop, and catch the drop before it was going to burst through the wall, to an extent. However, if you concentrate only on that linear regression to start with, you can get into a false sense of comfort, or you may trigger one of these unplanned downtimes early, when you may have had another six months to go before the work was actually needed.
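
As a minimal sketch of that idea, the snippet below fits a line to invented monthly thickness readings and flags readings that fall well below the linear trend; the data and the alarm threshold are made up purely for illustration.

```python
# Illustrative detection of a departure from linear wall loss (made-up data).
import numpy as np

months = np.arange(12)
# Assumed readings: ~10 mm wall losing ~0.08 mm/month, then accelerating loss
# in the last three months (values are invented for illustration).
thickness_mm = np.array([10.00, 9.92, 9.85, 9.76, 9.68, 9.61, 9.52, 9.45,
                         9.36, 9.05, 8.70, 8.30])

# Fit a line to the first eight "healthy" months, then compare later readings
# against what that linear trend predicts.
slope, intercept = np.polyfit(months[:8], thickness_mm[:8], 1)
predicted = slope * months + intercept
residual = thickness_mm - predicted

THRESHOLD_MM = 0.15  # assumed alarm threshold on the shortfall versus the trend
for m, r in zip(months, residual):
    if r < -THRESHOLD_MM:
        print(f"Month {m}: reading {thickness_mm[m]:.2f} mm is "
              f"{-r:.2f} mm below the linear trend -> investigate")
```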

So that kind of ventures into the challenge of when the right time is to get on board and start collecting this data. Because there will be a period where there's a lot of redundancy, a lot of redundant data points, and a lot of activity that will be quite costly. The benefit of continuous monitoring has to be looked at from a long-term point of view, with the pain being the expense and the capital that needs to be invested now to put the sensors out there.

So in two years, three years, five years, up to 30 years, you start getting to a point where you have insight even into parts where you haven't invested in putting sensors, and it all starts to become much more automated. You start to focus your inspection on the parts that need it as well. But it's such a long transition that it's hard to convince the industry to jump in now and spend the money now to save the money later.

And, you know, these are going to be long contracts that sit outside, let's say, the time scale of employment contracts. So someone has to invest in something that is going to save money for their successor. So that's one, I guess, going back to the commercial challenge in terms of long-term vision: we're not talking about tomorrow.

Nasrin Azari: [00:18:58] Yes, you can see that there’s a lot of planning involved with a system like this. Let’s move ahead to question number four. This one is around artificial intelligence. What’s the relationship between continuous monitoring and AI? What types of AI algorithms can be used with this type of data and what results are companies looking for?

Dave Hughes: [00:19:21] Okay, I always look at AI as just being a very complicated way of describing standard statistics or standard trend analysis. And everybody does it. We already spoke just a couple of minutes ago about looking at a pipeline that's decaying. We do that by looking at it visually and applying some intelligence, and at the end of the day, the simplest formula is linear regression.

So, when you were in school, and I don't know if it'd be elementary or the other types of schools they have in America; we have primary, secondary, et cetera. When you're in secondary school, you get taught how to fit a line to an x-y curve. And at the end of the day, that is a simple form of AI, because it can tell you what's happening after the data points on your graph finish. Everything else in AI is built on top of those principles. You have some data, you track trends to point towards where the data is going, and you make a decision based on what that is. In order to get to that point, you have to have enough data points. You can't fit a line over one point.

You can't fit a curve over two points. You need more data in order to get the right trends and the right shape of the trend. So before you even get to AI, we always see it as a four-step process, almost: you sell the sensors to get the data points there, and you collect information from them using manual inspection, rope access inspection.
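
As a minimal sketch of that "fit a line and read past the end of the graph" idea, the snippet below fits a linear trend to invented annual wall-thickness readings and extrapolates to an assumed retirement limit; none of the numbers are real.

```python
# Illustrative trend extrapolation (invented readings; not a real asset).
import numpy as np

years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
thickness_mm = np.array([12.0, 11.6, 11.1, 10.7, 10.2])   # annual wall readings
MIN_ALLOWED_MM = 8.0                                       # assumed retirement limit

# Simple linear regression: thickness ~ slope * year + intercept.
slope, intercept = np.polyfit(years, thickness_mm, 1)

# Extrapolate beyond the last data point to estimate when the limit is reached.
years_to_limit = (MIN_ALLOWED_MM - intercept) / slope
print(f"Loss rate: {-slope:.2f} mm/year; limit reached around year {years_to_limit:.1f}")
```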

Then you collect more of this by automating it with data loggers, so you're starting to flow the data into a database. AI then turns on at that point, using linear regression or k-means analysis and so forth (these are just basic statistical ideas that have been used for hundreds of years, or maybe not hundreds, tens of years, let's say) to create a report.

So, you know, first you're selling the reporting, and then you're selling the monitoring part of continuous monitoring, the predictive analytics. When it starts becoming much more complex than that and you move beyond the statistics, you get into the kind of markets that IBM Watson and similar deep-dive processing platforms are selling into.

With these really advanced neural networks and beyond, you are now moving into the future of predictive analytics. Yeah, I realize I just threw out a lot of buzzwords right there, but at the end of the day, these are just complex terms for fairly simple statistics. The complex part comes from the data that's fed into them to train the models.

You can only train the models if you've got a history of data to build upon. Novosound really bridges that gap between the world at the moment, where there are sparse data points, and a world where it's cost-effective to have a large number of sensors out there feeding data into these models. So in the future you can start reducing the number of data points and move to a point where there's trust in the AI signals and the AI reports that come out of it. Say you have three or four assets, pipelines let's say, and they're all very similar: the same operator transporting the same materials through the same kind of pipeline. In year one you invest in the sensor system on two of the pipelines, maybe half of the third, so you've got two and a half pipelines with lots of data on them. Over the years you collect lots of information through the sensors, you plug it into your complex neural networks and your AI algorithms, and you end up with enough data.

You're now, without that extra capital investment, monitoring the fourth pipeline without a sensor on it, and that's kind of the power of this move into NDE 4.0 / NDT 4.0: when everything's designed on computer, reproducible, and built at scale, the number of errors or changes from part to part should be minimized.

So if you inspect one, you're right into that realm of statistics and predictive analytics, where you can start to infer things about assets that haven't been monitored. But to get to that point, you have to invest in monitoring a lot of stuff right now.

Nasrin Azari: [00:23:35] Right. So I have this vision that I call my NDE 4.0 Nirvana, which is a world where catastrophic failures don't happen, because we're proactive, using these technologies to predict when failures are going to occur. When we talk about continuous monitoring systems and making this type of investment, that's really the world we're looking at. And in the cost comparison companies have to make today, I wonder how much they anticipate or kind of put in the coffers for a potentially catastrophic issue that they might have to resolve. Those are incredibly expensive, right?

Dave Hughes: [00:24:30] So, yeah, really expensive problems happen in an instant, and we've all seen them in the newspapers over the years. And we're probably just around the corner from the next one. A lot of the time, they don't want to put a cost on it, because that gets into predicting that it could happen.

If we can, with Novosound's manufacturing techniques, really bring the cost point down, it becomes less of a challenging decision. It becomes the smoke alarm in your factory, almost. You just forget it's there, because you've not got a bill for $1,000,000 a day to keep your factory safe; it's far less than that. So it becomes just part of the running cost, along with the bricks and mortar of building the pipeline. And through that enhancement of the continuous data and predictive monitoring, safety improves across the industry. You've made the service sector smaller and increased the equipment segment, which is good for hardware manufacturers like Novosound. But through that, you've increased safety, which is the number one priority.

Nasrin Azari: [00:25:35] Right, exactly.

Dave Hughes: [00:25:36] And through that safety, you're much more efficient, because you've not got these massive overheads, insurance payouts, and all the other kinds of costs that come with a disaster in your operation, because people are going to work and coming home each day in the same state they went to work in that morning.

Nasrin Azari: And yeah, so it's been a really interesting discussion so far. Let's go to our last question, which I think you've already touched on, but let's see what additional information you can provide for us here. The last question is: how does an NDT company get started using NDE 4.0 technologies like sensors and set a path for moving forward?

Dave Hughes: Simply, they can drop us an email, and then we'll see what we can do for them. That's the sales pitch, haha. It comes down to that process we've been developing over the last two years as a company. We're a very new company and not many people have heard of us, but hopefully, over the next 12 months, that changes. We've just appointed a new chairman of the business; that was actually announced the day before yesterday, so this is good timing. Our new chairman is Dr. Derek Mathison, who was the chief marketing and technology officer for Baker Hughes, which is one of the biggest NDT companies in the world.

Yeah, he's recently left Baker Hughes, and the first appointment he's made is to become the chairman of this little company from Scotland. He's originally Scottish, so there is a link there. So we're really excited about what that brings. He has a deep interest in data and the digitization of metrology, which is what we do, and that's what attracted him to work with us. We're hoping that, with the expertise he brings from that field into Novosound, we'll really be able to make what we've spoken about so far into a reality.

Nasrin Azari: [00:27:25] Maybe Baker Hughes will be one of your first customers, right?

Dave Hughes: [00:27:31] Yes, absolutely. That's it. It's great for a two-year-old company like us to have someone like that put us on the map. But to answer the question: it's that four-stage process that we see companies need to engage with now to reap the benefits later. It starts with getting the sensors bought, even if they still have to do a manual inspection: buying the sensors, fitting them to the asset, and leaving them there.

You reduce operator error, so your data gets better. The information, the measurements you get, are better through a fixed sensor, and you can have as many measurement points as you need, because the cost-effectiveness is now there. The next stage is you add the data loggers, and there are data loggers that are off the shelf.

So that's not new technology; that's a simple investment. You then move into the proper data analytics, the statistics, and the artificial intelligence by adding the reporting on top of it. And again, there are those who provide that: industry experts and consultants who can take the data and write a report.

So it can be quite a nebulous way of doing it, almost. There's not a defined package that fits all industries; wind turbines will have a different system to oil and gas or to aerospace. But the core experience and expertise that Level III ultrasound, Level III eddy current, or Level III NDT inspectors in general have can all tap into this.

It's just a case of digitizing it and moving forward with it. They can't digitize it until they have enough data to start working with and, as we say, to inspect the parts they're not looking at.

Nasrin Azari: [00:29:14] So just get started with what you can and start collecting data.

Dave Hughes: [00:29:19] Yeah. And who knows what system we'll end up with in three years, five years, 30 years.

As Moore's Law has shown us, we cannot predict what's going to be around the next corner. And if anything, as the COVID-19 pandemic tells us, you can't predict what's happening in six months, let alone long term. I'm hoping, in a small sense, that the pandemic will make a lot of the operators of these pipelines or factories think about how they can do their inspections without putting a whole bunch of people together on a boiler or a pipeline. So you're not only increasing the safety of those who work in the factory or the plant, you're actually improving the safety of your NDT crew as well.

So the benefits are multifaceted. It's just that now is the right time to start, but who knows where we'll end up?

Nasrin Azari: [00:30:11] Yeah, I agree. It's a very exciting time in the world for technology like sensors in NDT. Thank you for all your great information today, and thank you for being on the podcast, Dave.

Dave Hughes: [00:30:25] Thank you for having me.

Nasrin Azari: [00:30:27] You're welcome. I feel like this emerging technology has the potential to greatly disrupt the market and improve testing services. So thank you, listeners, for tuning in today. If you're interested in learning more about Dave or Novosound, please look for links to the appropriate websites on our podcast webpage. Thank you very much.

For more expert views on NDT, subscribe to the Floodlight Software blog at https://floodlightsoftware.com.
