NDE 4.0 Podcast | Transcript | Roundtable Discussion with NDE 4.0 Influencers | Episode 11

NDE 4.0 Podcast Transcript

Episode 11 — NDE 4.0 Ethics Panel – Roundtable Discussion with NDE 4.0 Influencers

Our Guests: Ripi Singh, Tracie Clifford, Matt Litschewski

Editor’s Note: In the interest of time, this transcript is still in rough format and has not been edited for proper grammar and punctuation. If you have a need for a fully edited transcript, please contact us.

[00:00:00] Mat Litschewski: Welcome to the NDE 4.0 Podcast, where we ask five questions of an NDE or NDT expert. This is the show for NDE professionals where we dig into the big questions about NDE inspections and digital transformation. Every episode we ask an NDT expert five questions that can help you

[00:00:25] Nasrin Azari: do your job better. Hello everyone. We have a great session in store for you today.

[00:00:32] Nasrin Azari: We are going to do something a little bit different. I have three NDE 4.0 enthusiasts on deck with me today, and we are going to have a more interactive conversation that will probably take a little bit longer than our standard podcast session. We often focus on the emerging technologies themselves when we’re talking about NDE 4.0,

[00:00:50] Nasrin Azari: which are, of course, the cornerstone of NDE 4.0. But when we think about practical implementation, ethics and related [00:01:00] concerns can stop organizations from adopting useful technologies. So today we’re going to talk about these ethics concerns. I have three guests on our show today. First, I have Ripi Singh, who runs his own coaching practice called Inspiring Next.

[00:01:14] Nasrin Azari: He chairs the NDE 4.0 committee within ASNT and is one of the leaders of the NDE 4.0 initiative. Welcome, Ripi. Thank you. I also have Tracie Clifford with us today. She is the faculty lead and instructor of non-destructive testing and QA/QC programs at Chattanooga State College. Thank you for being with us, Tracie.

[00:01:38] Tracie Clifford: Thank you so much.

[00:01:39] Nasrin Azari: And finally, we have Matt Litschewski, who is a project manager and level three technician at Acuren. Welcome, Matt.

[00:01:49] Mat Litschewski: Hello.

[00:01:51] Nasrin Azari: So, Ripi, Tracie, Matt, thank you for joining us today. And everybody else that’s on the line, thanks for listening in. We will loosely follow our [00:02:00] standard five-question format today.

[00:02:02] Nasrin Azari: And I’d like to start off by asking each of you, in turn, to provide a more detailed introduction of yourself, particularly your interest in ethics, and an answer to this question: what do you believe is the number one ethics-related issue in NDE 4.0, and why? Let’s start with you, Ripi.

[00:02:25] Ripi Singh: Very good.

[00:02:26] Ripi Singh: Thank you, Nasrin. Thank you for hosting this and organizing it in your standard style of five questions. A little bit more about me: I would say my first career was as a professor of aerospace engineering. The second was in corporate America, and now I’m in my third innings, so to say, as a freelance innovation coach. I got into Industry 4.0 about five years ago

[00:02:52] Ripi Singh: and into NDE 4.0 about three and a half years ago, [00:03:00] and it always intrigues me whenever we talk about new technology where there is no precedent of what is right and what is wrong. There’s no regulation in place yet because we don’t know enough about it. And then there are financial pressures that business executives are driven towards,

[00:03:20] Ripi Singh: which drive them to making decisions based on “well, it’s not illegal, so it is okay to do.” And I come at it from saying: just because there is no law or regulation on this, that does not make it okay to do. So we need to take a pause and look at whether it is really okay to do what we are trying to do, for which there is no history.

[00:03:45] Ripi Singh: There’s no story, there’s no lesson to learn from, and there’s no one watching us. How do we control our own emotion, our own greed, from taking over and unnecessarily taking advantage of people? So that’s where my interest in ethics comes [00:04:00] around with Industry 4.0.

[00:04:02] Nasrin Azari: Thank you. Thank you, Ripi. I think that’s very insightful, certainly for folks that are just coming on board with NDE 4.0.

[00:04:15] Nasrin Azari: Again, I think people tend to focus more on the technology aspect and are really focused on “if I can make it happen, I will make it happen.” But focusing on what is actually right is not always the first thing that comes to mind. So I think that’s a great place to focus attention

[00:04:34] Nasrin Azari: when we’re thinking about ethics. And now, Tracie, could you introduce yourself, your interest in ethics related to NDE 4.0, and your answer to that same question?

[00:04:45] Tracie Clifford: Sure. Well, like Ripi, I’m not on my third career, but I’m on my second. Originally, I worked in quality, quality engineering, and regulatory affairs for about 30 years before coming to the academic life, which is different.[00:05:00]

[00:05:00] Tracie Clifford: But it also ties in a lot of the background that I’ve had before. And for me, since I am the purveyor through whom the students get a window into a possible future career, especially in NDE, it’s really important to look at what type of ethics concerns they can maybe expect, or be prepared for, when they go into a workplace.

[00:05:25] Tracie Clifford: It can be as a technician, maybe being asked to compromise your integrity, or it could be throughout the business. And especially with Industry 4.0, they are right there at the cusp of the new technology. And they are often the ones thrown into it: “Oh, you know how to use the technology. We’re going to have you go ahead and be the person who does most of the work with it.”

[00:05:48] Tracie Clifford: But my ethics-related issue is more high-level, and I’ll probably go back to this several times. It’s management, and management buy-in. And it ties right into what Ripi is [00:06:00] saying. You’re always going to get, “Well, how is ethics a value?” And then it goes back to: is this a cost of no quality, or poor quality?

[00:06:09] Tracie Clifford: And, you know, management’s always going to say, “What’s this costing me?” So I think it’s really important to be proactive, instead of reactive after you’ve had a failure. And getting that management buy-in at the beginning is key to being proactive.

[00:06:28] Nasrin Azari: Definitely. Yeah, I like the concept of becoming proactive versus being reactive, which is certainly where the industry is heading when we talk about NDE 4.0.

[00:06:38] Nasrin Azari: So that’s a great topic as well, and I’m sure we’ll get back to it a couple of times. And Matt, let’s hear from you now.

[00:06:51] Mat Litschewski: Yeah, so I’m on, oh gosh, I don’t know, my second or third career. I’ve done quite a bit. I started as a [00:07:00] chemist, worked in marketing, and now I’m a level three and project manager, and a master’s candidate in data science.

[00:07:11] Mat Litschewski: Actually, I’m in my last term, so I almost hold a master’s in data science, I guess. My interest in NDE 4.0 is born out of my data science master’s. I saw where our industry was headed, and I pursued more knowledge on how it’s going to work. And ethics, in every profession I’ve been in, has always been an interest of mine.

[00:07:39] Mat Litschewski: I’ve always believed that we’re only as good as our word, so if we don’t conduct ourselves in a proper way, that matters. A strong ethical foundation in any profession is paramount to a good profession and a good professional life. And to answer your question, I would [00:08:00] say trust is probably one of the biggest foundations

[00:08:06] Mat Litschewski: for NDE 4.0 ethics. As practitioners of non-destructive evaluation, our clients expect us to give them the right answers, to adhere to the codes and standards that we’re applying to the inspections we’re doing. And that speaks to two specific dimensions of trustworthiness and ethics: integrity and benevolence.

[00:08:32] Mat Litschewski: So there’s a lot of room within big data to do things that can end up not being benevolent, without meaning to do them that way. So I think that we need to build trust into any code of ethics as we move forward into this space.

[00:08:57] Nasrin Azari: Yeah. Interesting. So, [00:09:00] you know, we have this, this concept of, of trust.

[00:09:03] Nasrin Azari: And I think with, with NDE 4. 0, what we’re all dancing around is the fact that there, it’s really there, there, there are emerging new ways of doing business. For non-destructive testing. That’s sort of opening the doors to having more information at our fingertips, um, more capabilities at our fingertips, and being able to trust that we’re, you know, following the right sort of moral guidelines versus just being driven by cost is gonna be really important.

[00:09:35] Nasrin Azari: Um, I’m gonna start with, with the next question on our list, which, Kind of goes to what you were talking about, Matt. So I’m going to start with you here. If you were to design a code of ethics for NDE 4. 0, what would it look like?

[00:09:51] Mat Litschewski: So, with a code of ethics, we’ve all [00:10:00] read them: the ASNT code of ethics, or a computer science code of ethics, or a statistical code of ethics.

[00:10:06] Mat Litschewski: It’s usually a list, right? One: do this. Two: don’t do that. Three: whatever. And that’s great, and I think that we need a framework that has some guiding principles, but more importantly, we need to build a system. It needs to be a practicing system that is applied at

[00:10:29] Mat Litschewski: every iteration we do along the steps of building the NDE 4.0 program. And to that end, I would put first and foremost building checklists as part of the practice, to make sure that there’s good ethical practice.

[00:10:48] Nasrin Azari: So what would that checklist look like?

[00:10:50] Mat Litschewski: So I do have an example, and this is actually more of a data science project sort of checklist.

[00:10:58] Mat Litschewski: And it’s pretty [00:11:00] simple. It’d be like: first, have you listed how this technology can be attacked or abused? Have you tested the training data to ensure it’s fair and representative, which gets into algorithmic fairness? Have you studied and understood the possible sources of bias in your data?

[00:11:19] Mat Litschewski: Does the team reflect diversity of opinions, backgrounds, and kinds of thought? What kind of user consent do you need to collect and use the data? Do you have mechanisms for gathering consent from users? Have you explained clearly what users are consenting to? Do you have a mechanism for redress if people are harmed by the results?

[00:11:42] Mat Litschewski: Can we shut down the software in production if it’s behaving badly? Have we tested for fairness with respect to different user groups? Have we tested for disparate error rates among different user groups? Do we test and monitor for model drift to ensure our software remains fair over time? [00:12:00] And do we have a plan to protect and secure user data?

[00:12:03] Mat Litschewski: So some of those are applicable to NDE 4.0, if not all of them, when it comes down to the designing of an AI system. The user data generally is a company’s data, not necessarily consumer data, but I think its security is still as important as if it were consumer data.
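One way to make a checklist like the one Matt describes operational, rather than aspirational, is to treat it as a gate in the project pipeline: nothing ships while items remain open. The sketch below is an illustration added to this transcript, not something described on the show; the item wording and the gate logic are assumptions.

```python
# A minimal sketch (not from the podcast) of an ethics checklist acting as a
# deployment gate for an NDE 4.0 project. Item names are illustrative.

ETHICS_CHECKLIST = [
    "Listed how the technology can be attacked or abused",
    "Tested training data for fairness and representativeness",
    "Studied possible sources of bias in the data",
    "Team reflects diversity of opinions and backgrounds",
    "Consent obtained and clearly explained for data use",
    "Redress mechanism exists if results cause harm",
    "Software can be shut down in production if misbehaving",
    "Tested for disparate error rates across user groups",
    "Monitoring in place for model drift over time",
    "Plan exists to protect and secure the data",
]

def ethics_gate(completed: set[str]) -> list[str]:
    """Return the checklist items still open; an empty list means the gate passes."""
    return [item for item in ETHICS_CHECKLIST if item not in completed]

# Example: only two items have been signed off, so eight still block deployment.
open_items = ethics_gate({
    "Studied possible sources of bias in the data",
    "Plan exists to protect and secure the data",
})
assert len(open_items) == 8
```

The point of the gate shape, as opposed to a document people read once, is that the open items are recomputed at every iteration, which matches Matt's "practicing system" framing.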

[00:12:23] Nasrin Azari: Yeah, I think a lot of companies are really protective of their information.

[00:12:26] Nasrin Azari: I think that’s one of the concerns around NDE 4.0: the exposure of information that they can be very protective of. But that sounds like a really good list.

[00:12:39] Mat Litschewski: Yes, and I’ll have to actually cite that it’s from Ethics and Data Science, from O’Reilly Media, by Mike Loukides, Hilary Mason, and DJ Patil. So I did borrow it from somewhere.

[00:12:49] Nasrin Azari: Yeah. Let me move on to Ripi to see how you would respond to this [00:13:00] concept of developing a code of ethics. Do you agree with Matt, or would there be more?

[00:13:07] Ripi Singh: Whatever Matt mentioned is correct. But I don’t know yet if that is complete, because we don’t know all the intricacies of it.

[00:13:17] Ripi Singh: ASNT has a code of ethics. It has 13 precepts. The Europeans have their own, and the British have their own, whatever it is, 12 or 13. Each of the NDT societies has a code of ethics which, like Matt mentioned, has a list of things you will do and you will not do. So one thing that we ought to do is check whether that list is still valid in the fourth revolution.

[00:13:40] Ripi Singh: And my quick check said, yeah, most of them are still valid, because they’re all related to humans. The thing that is new is that we now have a machine in the mix that could be thinking and acting on its own, for which we do not have any precept in any of the codes of ethics as of now. [00:14:00] So I would submit, maybe we need to add two or three more precepts to that list of 12 or 13 that already exists.

[00:14:09] Ripi Singh: Things around learning machines, right? Eventually, you cannot blame a machine. It must be traceable to some human who trained or developed the machine. Another precept could be, in whatever language the community can come together on: if I develop or train a machine, I feel responsible for it as if it is an extension of me.

[00:14:31] Ripi Singh: You know, if I create something, if I create an automated system and let it go, I can’t say, “Oh, I didn’t do it. The machine did it.” No, if I trained it, I’ve got to take ownership. It was just my hand; it just was not attached to my body when it was acting. We have to start taking some ownership of that. There could also be precepts around data: that I will not add to misinformation or spread disinformation, and that I will make every effort to stop misinformation and disinformation from flowing, things like that.[00:15:00]
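The traceability precept Ripi describes, that a machine's output must always trace back to the humans who trained and approved it, could be captured in data as a provenance record attached to every automated call. This is a hypothetical sketch added for illustration; all field names and types are our own assumptions, not anything specified on the show.

```python
# A hypothetical sketch of "you cannot blame a machine": every automated
# inspection call carries the identities of the humans behind the model.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelProvenance:
    model_version: str
    trained_by: str        # human accountable for training the model
    training_data_id: str  # identifies the dataset the model learned from
    approved_by: str       # human who signed off on deployment

@dataclass(frozen=True)
class InspectionCall:
    indication_id: str
    accepted: bool
    provenance: ModelProvenance  # every result stays traceable to people

# Illustrative values only.
call = InspectionCall(
    indication_id="weld-042",
    accepted=True,
    provenance=ModelProvenance("1.3.0", "j.doe", "ds-2021-07", "q.lead"),
)
assert call.provenance.trained_by == "j.doe"
```

Freezing the records means the accountability trail cannot be edited after the fact, which is the whole point of the precept.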

[00:15:00] Ripi Singh: After adding those precepts, since the subject is so new and we are all learning, we probably need a guidance document, just like Matt said, with checklists, which can also have a bunch of case studies. Tracie and I spoke about it, you know, like the trolley problem. Is there an equivalent trolley problem in NDT that we could define?

[00:15:20] Ripi Singh: Could we actually do some stories to bring the message home, right? That could be a supporting document so that people know how to interpret that code of ethics. So that’s how I would look at it.

[00:15:33] Nasrin Azari: Interesting. Tracie, how about you? Do you have some thoughts on this potential code of ethics for NDE 4.0 implementation?

[00:15:42] Tracie Clifford: I do. And Matt, I like that list; I think it covers quite a bit. But I agree with Ripi: there are probably some things missing, because we don’t know what we don’t know. I think about how you apply this code and who it is applied to. How do you [00:16:00] define the scope? In project management and in quality, we always think of: who are the interested parties that don’t know they’re interested parties? Who should we be applying the code to that don’t know they need to have the code applied to them?

[00:16:18] Tracie Clifford: So that’s the sort of thing that I think of on the larger scale with developing a code of ethics: how do we make sure it is applied purposely, to capture all parties?

[00:16:30] Nasrin Azari: And just a question for all three of you, to follow up. Is there a requirement around meeting this code of ethics today?

[00:16:43] Mat Litschewski: Is there a requirement to meet a code of ethics for NDE 4.0, or a code of ethics just for NDE?

[00:16:48] Nasrin Azari: I guess a code of ethics for NDE.

[00:16:49] Mat Litschewski: Yeah, there’s definitely a requirement to meet a code of ethics for NDE.

[00:16:57] Nasrin Azari: So really this is an evolution, right? This is an evolution of an [00:17:00] existing code of ethics, versus something brand new that the industry needs to adopt.

[00:17:05] Mat Litschewski: Right. Yeah, exactly. This is another iteration of something that started a long time ago, and we just need to move forward with new technology and be flexible in our code. I mean, as a manager, I hope that my guys are out there doing things correctly every day. They’re following the procedure that was written.

[00:17:24] Mat Litschewski: They’re applying it to the applicable code. They’re inspecting to it. They’re not taking shortcuts in their inspection technique. All of those are ethical behaviors that can have long-reaching ramifications. I’ve spoken about this before with other people, but there was a recent pipeline project in Pennsylvania where a guy falsified something like a thousand x-ray films.

[00:17:51] Mat Litschewski: So not only did he potentially put the greater society in that area in danger of a pipeline failure, he also cost millions of dollars to the pipeline [00:18:00] company and his own company, ruined his reputation, and hurt the reputation of all NDE technicians.

[00:18:08] Nasrin Azari: And what was the gain, from his perspective, in doing that in the first place?

[00:18:13] Mat Litschewski: Not having to go out and do actual labor.

[00:18:17] Nasrin Azari: Yep. Because I’ve definitely heard this concern from other folks: inspectors being, what is the word, not coerced, but encouraged to produce more passes than failures, to produce passes where there are failures,

[00:18:40] Nasrin Azari: just so that the company doesn’t need to deal with the consequences of a failure or reject on a particular part or inspection.

[00:18:50] Mat Litschewski: Great. And to continue on for a second: it’s built out of education, when we train these people to be technicians. [00:19:00] I can’t speak to a college setting that has NDE programs, because I went to, you know, not one of those schools; it didn’t exist when I did it.

[00:19:11] Mat Litschewski: But nothing around ethics was taught to me other than, “Hey, this is what you’re supposed to do.” And I think if we take the time up front to actually teach upcoming technicians how to work ethically and honorably, it would go a long way in perpetuating a good ethical framework.

[00:19:35] Nasrin Azari: Definitely. So, kind of along those lines, Tracie, let me jump to our next question, for all of us, but let’s start with you. As we think about building out NDE 4.0 systems, do you think it’s possible to build ethical behavior into NDE 4.0? How would we do it? And what are the challenges [00:20:00] associated with it?

[00:20:02] Tracie Clifford: Well, I definitely think it’s achievable, but that goes all the way back down to the algorithms and the programming for most of that. And then, like Ripi was saying, there’s a person, there’s a human, that is actually attached to that. So there are some caveats that go with that, and there must be qualifiers.

[00:20:21] Tracie Clifford: It kind of goes back to what Matt’s talking about as far as having the code and looking into each of those points on the checklist: making sure that the human aspect is checked just as critically as the validation of the operation of an algorithm, and then testing. And I think it’s really important that we also look at the life cycle of the equipment, because it’s not just the equipment.

[00:20:49] Tracie Clifford: And when I talk about life cycle, I’m thinking all the way from an idea, to design, to testing, validation, production, but then [00:21:00] at the end, who’s buying it? And what are their purchasing specifications or requirements? Are they saying, “I want to purchase a piece of equipment that includes an ethics check”?

[00:21:12] Tracie Clifford: Or that there’s a mandated ethical design to it? I don’t think we’re anywhere near that. So that goes back to: the answer is yes, but we’re not there yet. I think having these conversations is going to be helpful. And then, going back to what I said initially about having management buy-in, to be able to design these systems to answer these ethical questions,

[00:21:36] Tracie Clifford: again, we need to have management right there, agreeing that it is the right thing to do, the ethical thing to do, and not just looking at the cost function, because it will catch up. There will be a failure, just like you were talking about, Matt, someone who didn’t want to go out and do the actual x-rays. Even in programming these NDE 4.0

[00:21:58] Tracie Clifford: systems, [00:22:00] we have to try our best to build the ethics into the whole process, just like we build quality into the whole process. And that’s a new concept.

[00:22:11] Nasrin Azari: I like the idea of the buyer being the one that demands it, right? If the buyers are out there demanding that the product is ethical before they actually make a purchase, then it ties into that cost concept that Ripi was talking about.

[00:22:28] Nasrin Azari: If you can’t sell the product without ethical behavior ingrained in it, then that’s a powerful way of ensuring that it happens: making it something that buyers actually want. Ripi, what do you think about this topic?

[00:22:48] Ripi Singh: Well, can we build ethical behavior into an automated system?

[00:22:55] Ripi Singh: The simple answer is I don’t know, but I can speculate. [00:23:00] I can speculate in the sense that people are wrestling with this in the context of a driverless car. When you are in a situation and you have to make a decision, what do you do? Do you protect the passenger who owns you as a car, or do you protect the pedestrian whom you are about to hit?

[00:23:20] Ripi Singh: I’m sure people are wrestling with that. And people do ask the question, where are you programming all these systems? If the programming is done outside of the US, in some other country, are we building that country’s values and ethics into the program? Some countries value senior citizens more; some value kids more.

[00:23:40] Ripi Singh: So is the car being programmed to work differently? There are issues over there. The way I would speculate, and I don’t know how to do it, but I would speculate, is that you can build ethical behavior into an automated system by having a diverse team of people work on developing the algorithms [00:24:00] and a diverse set of individuals training that AI system to behave in an ethical manner.

[00:24:06] Ripi Singh: And just like I mentioned before, can we have case studies to go along with the code of ethics? Can we use similar case studies with which we will train and test the ethical response of an automated system? And then you may have to do it periodically, because as the system learns, it may drift; it may deviate from what it was originally programmed to do, based on all of the usage and learning that the system got.

[00:24:33] Ripi Singh: So you have to check it again. In some sense, another way to look at it is: if a fresh person comes out of college, and we train that person through ASNT level one, level two, level three, years of experience, and we build the ASNT or whatever entity’s code of ethics into that individual, can something similar be done with an automated system?

[00:24:56] Ripi Singh: I would speculate it is possible. I just don’t know how to do [00:25:00] it yet. I hope someone out there actually takes it seriously and figures out a way to do it. Maybe, Matt, with your data science background, you’ll figure out how to combine all these checklists into a decision algorithm which helps the system behave ethically.
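The periodic re-check Ripi describes can be pictured as a simple drift monitor: freeze a baseline reject rate when the system is qualified, then compare recent calls against it and flag a human review when the rate moves too far. This sketch is our illustration, not from the discussion; the 5% tolerance and the function shape are assumptions.

```python
# A rough sketch (an addition to this transcript, not from the discussion) of
# periodically re-checking an automated NDE system for drift. The tolerance
# value and inputs are illustrative assumptions.

def drift_check(baseline_reject_rate: float,
                recent_calls: list[bool],
                tolerance: float = 0.05) -> bool:
    """Return True if the recent reject rate has drifted past the tolerance."""
    if not recent_calls:
        return False  # nothing to compare yet
    recent_rate = sum(recent_calls) / len(recent_calls)
    return abs(recent_rate - baseline_reject_rate) > tolerance

# At qualification the system rejected 10% of indications; lately it rejects
# only 2%, which should trigger a human review.
recent = [True] * 2 + [False] * 98
assert drift_check(0.10, recent) is True
```

In practice the same idea would be applied per defect type or per user group, echoing the "disparate error rates" item on Matt's checklist, but the core loop of baseline, re-measure, and flag is the same.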

[00:25:20] Mat Litschewski: Yes.

[00:25:20] Nasrin Azari: Yeah, go ahead, Matt.

[00:25:20] Mat Litschewski: So I was going to say, I’ll just carry it on into this question: what it all boils down to is that a system is only as good as the designer who builds the system. The things that Ripi brought up, concepts of drift, cultural differences in ethics and morality, these are all key elements of building

[00:25:47] Mat Litschewski: a good AI system.

[00:25:54] Mat Litschewski: I guess my point is, it comes down to a human being. We can build all these systems, [00:26:00] and we can code lots of amazing, wonderful things, and there are a lot of really good AI systems being built right now to deal with eddy current and x-ray; ultrasound’s a little bit behind the eight ball on this, I think.

[00:26:16] Mat Litschewski: But anyway, it all comes down to people. I think there will always be a human element involved in the final decision-making out of an NDE system. And so, is it possible to build ethical behavior into NDE 4.0? Absolutely, but it comes back to the person designing the system and writing the program, or the group of people.

[00:26:45] Mat Litschewski: Yes.

[00:26:47] Nasrin Azari: I think it’s fascinating, the whole concept of the cultural differences, and the whole definition of what is ethical, and how there can be multiple answers to that question. If we [00:27:00] build a, quote, ethical system, it might be an ethical system to me, but it might not be an ethical system to Matt or to Ripi. It’s an interesting problem that gets a lot more complicated once you dig in and think about the details.

[00:27:18] Nasrin Azari: So let’s circle back around to you, Tracie, real quick. One of the themes that’s come up a couple of times so far is the whole concept of educating, and you are an educator; you’re in an educational system. Do you see these ethical questions getting their fair share of the floor in the educational system? Is that something that you teach, or that any of your compadres

[00:27:44] Nasrin Azari: teach to your students?

[00:27:50] Tracie Clifford: It is. It is. I teach a lot of the introductory classes, with codes and standards especially, and we definitely take a deep dive [00:28:00] into ethics and integrity, and how your work speaks for you as a person. But then we also talk about how the dynamics of how humans deal with each other can impact that also.

[00:28:13] Tracie Clifford: And that’s a great point that you made, Ripi, as far as culture. My students cannot depend on the fact that they’re going to work in the Southeast, or even in the U.S. A lot of them will possibly work in different countries and different cultures. What is acceptable? How do you find out what’s acceptable?

[00:28:32] Tracie Clifford: And maybe even back to what you’re talking about, Matt: I try to say, and I hope, that when you get to an organization you’re working for, they’re not just going to train you on the procedure and the steps of the procedure, and how to use the equipment if you don’t have experience with it, but they’re also going to talk about “this is our culture.”

[00:28:50] Tracie Clifford: Here’s what’s important to us; this is what work ethic is, or what they do. But I always ask them to talk about an ethical dilemma they’ve been placed in. And [00:29:00] the room gets silent when they start talking about where they had to refuse something, where they had to go back, and, you know, it’s really hard to do that to a customer.

[00:29:14] Tracie Clifford: But it’s for the best, right? We’re all trying to be risk-averse before we have a failure. And I think that actually has more impact on my students: people from the walks of NDE telling their story, maybe not telling who the customer is, but sometimes you can guess, and helping that student understand that you’re a professional, and this is a hard thing to do at times, and you’ve got to be willing to protect your integrity.

[00:29:46] Nasrin Azari: Yeah, I think hearing stories of failure is sometimes such a great learning experience for people, like that story you were talking about, Matt, about the individual that caused [00:30:00] so much trouble because he just didn’t want to complete the work, and the ripple effect of that. That’s your responsibility as an individual: how do you handle the effects of a decision like that, one that didn’t work out for you in the end?

[00:30:17] Nasrin Azari: Let’s move on to the next question, which I’m going to start with you, Ripi. What are the NDE 4.0 ethics concerns associated with data transparency, privacy, and security?

[00:30:31] Ripi Singh: Very good. So before I go there, let me add one more statement to the previous topic, in terms of being able to build ethical behavior into NDE 4.0.

[00:30:42] Ripi Singh: You know, one other aspect that we sometimes wrestle with is: if you have an intelligent system, something that’s driven on machine learning and such, and for some reason you discover that the machine is now beginning to behave unethically, you should be able to [00:31:00] shut it down, or unlearn that system.

[00:31:02] Ripi Singh: So, being able to identify and say, okay, let’s extract a certain amount of data out of it: “Whatever you learned in the last six months, please forget.” Can we do that? Maybe there’s a way to protect against it. So we’ve also got to be able to keep the train in the center of the lane as it keeps running.

[00:31:21] Ripi Singh: Okay, coming to your next question about ethical concerns around data transparency and privacy and things like that. Personally, to whatever level I know about NDT, I don’t see a serious issue around data privacy and transparency. If you think about it, we’ve had credit cards for a long time.

[00:31:45] Ripi Singh: We’ve had financial transactions, online shopping, health records, your car maintenance records; everything is getting online. As scientists, engineers, technologists, and business people, we have figured out how to protect the [00:32:00] data. In the end, it’s not the data that is the problem; it’s the people who have access to the data that are the problem.

[00:32:05] Ripi Singh: Ethics is not a data issue. Ethics is a human-abuse-of-the-data issue. So it boils back to the basic precepts that the ASNT Code of Ethics has: that I will not abuse. But there is something else related to data that bothers me, and Matt alluded to it. It’s around data bias, around feedback loops, around misinformation and disinformation creeping into the systems and causing a drift in the models that will eventually hurt

[00:32:33] Ripi Singh: The decision making from the system. So I’m more concerned about how the data gets abused by humans and how the algorithms which are not perfect can deviate from their intended purpose. And do things that we don’t understand. And that one combined with something else. You see how in the last few years we have become so dependent upon our cell phones that I don’t [00:33:00] remember my wife’s phone number.

[00:33:01] Ripi Singh: And I'm so dependent on my GPS that if I'm somewhere in the middle of the city, I cannot find my way back home if the GPS is not working. So there is another concern that comes up: if NDT, NDE becomes highly dependent upon the machines, and for some reason a machine shuts down or misbehaves, we will not know how to deal with it.

[00:33:22] Ripi Singh: And if it misbehaves, and I'm just overly reliant on the machine (oh, the machine is telling me this, and I keep going with it), I will be following along in a box driven by something else that I put too much trust in, which over a period of time, because of data misinformation, has drifted away from its course.

[00:33:41] Ripi Singh: That is what bothers me more than data transparency or privacy, because I think we can handle the transparency.
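The "drift away from its course" worry Ripi raises is something teams can check for mechanically: compare the model's recent outputs against a frozen baseline from a trusted period and raise a flag when they diverge. A minimal, illustrative sketch in Python; the defect-call rates, window sizes, and the 3-sigma threshold are invented for the example, not taken from the discussion:

```python
import statistics

def drift_alarm(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent window's mean deviates from the
    baseline mean by more than z_threshold standard errors."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / (len(recent) ** 0.5)
    z = abs(statistics.mean(recent) - mu) / se
    return z > z_threshold

# Baseline: historical defect-call rates per shift from a trusted period.
baseline = [0.05, 0.06, 0.05, 0.04, 0.06, 0.05, 0.05, 0.06]
# Recent: the model has quietly started calling far fewer defects.
recent = [0.02, 0.01, 0.02, 0.02]

print(drift_alarm(baseline, recent))  # True -> a human should step in
```

A check like this does not explain why the model drifted, but it turns "over-trust" into a monitored assumption rather than a silent one.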

[00:33:47] Nasrin Azari: Interesting. Um, Matt, what do you think about this concept of data transparency, privacy, data security, and ethics associated with it? [00:34:00]

[00:34:00] Mat Litschewski: You know, I'm actually kind of in Ripi's corner on this.

[00:34:02] Mat Litschewski: I think that we're good with security. We know, as NDE practitioners, how to protect our clients' data. And I think we do a good job of teaching that you can't talk about a client to somebody else who's not involved in your company or with that client, and there are probably some things you shouldn't even discuss within that client's organization if they're not directly impacted by your inspection. But Ripi's got a very good point about, sort of,

[00:34:36] Mat Litschewski: over-trust in a system. There are already many examples within data science and AI of society believing in a system that was designed to be free of bias, but the algorithm finding bias within itself. The key one is the parole AI system that is [00:35:00] still used by many criminal justice systems to determine who to consider for parole based on their profile.

[00:35:09] Mat Litschewski: And it overwhelmingly rejects Black applicants over white applicants. They've scrubbed race out of the data set, but there are other factors that key into bias: things that can go to the demographics of the neighborhood they came from, or other things that we don't see as a bias marker but that become a bias marker within the algorithm.

[00:35:33] Mat Litschewski: So I think the real fear is trusting a system without keeping it in check.
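The proxy effect Matt describes, where bias survives even after the protected attribute is scrubbed, can be reproduced in a few lines. This is a hypothetical sketch with synthetic data (the groups, zip codes, and rates are made up for illustration, not drawn from any real parole system):

```python
import random

random.seed(0)

# Synthetic applicants: 'group' is the protected attribute; 'zip' is a
# correlated proxy (group 1 mostly lives in zip "B").
applicants = []
for _ in range(10_000):
    group = random.choice([0, 1])
    zip_code = "B" if random.random() < (0.9 if group else 0.1) else "A"
    # Historically biased labels: group 1 was approved far less often.
    approved = random.random() < (0.3 if group else 0.7)
    applicants.append({"group": group, "zip": zip_code, "approved": approved})

# "Train" a model that never sees 'group': it scores purely by the
# historical approval rate of the applicant's zip code.
rate = {}
for z in ("A", "B"):
    subset = [a for a in applicants if a["zip"] == z]
    rate[z] = sum(a["approved"] for a in subset) / len(subset)

def score(applicant):
    return rate[applicant["zip"]]  # race-blind on paper, biased in effect

# The scores still split along group lines through the proxy feature.
mean_score = {
    g: sum(score(a) for a in applicants if a["group"] == g)
    / sum(1 for a in applicants if a["group"] == g)
    for g in (0, 1)
}
print(mean_score)  # group 1 scores noticeably lower despite 'group' being dropped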

[00:35:42] Nasrin Azari: And, you know, kind of along those lines, to one of the points that we made: if we're so dependent on the system, how do we know that it's gone wrong, if we can't tell that it's gone wrong? And it sounds like that's the case with that system.

[00:35:56] Nasrin Azari: They tried to take race out of the equation, but [00:36:00] it didn't actually work, because they were still profiling people based on factors other than explicit race. So, really interesting. Tracie, what are your thoughts on this question about data and the ethics around data?

[00:36:18] Tracie Clifford: It kind of goes back to what Ripi was talking about in the last question.

[00:36:21] Tracie Clifford: You know, if it starts to have that drift or variation and learns everything from some point in time. But I think the question is: how do you go in and find what was going on in that algorithm that allowed that drift in the first place? And then you have to go and say, if it's a specific programmer or group of programmers writing these algorithms, what other systems that they've also written code for could they possibly be impacting?

[00:36:51] Tracie Clifford: And we have to learn to identify these situations, [00:37:00] and again, not always be reactive: how can we put this into training and into learning how to code, so that we eliminate those problems in the future?

[00:37:13] Nasrin Azari: I feel like there's this dream that we are, sometime in the future, going to have AI systems that replace our human inspectors, first for certain tasks, and it almost seems like that might not be the smartest way to go.

[00:37:34] Nasrin Azari: You might always want to have, you know, a check: a trained individual that you can compare results with. Right? You've got your AI system, but you also have your expert, and you want to make sure that your AI system is finding the same results and practicing the behaviors that you would expect of your expert [00:38:00] inspectors.

[00:38:01] Nasrin Azari: I'm sure that's going to be a challenge down the road. So, really interesting. Let's finish with this final question to all three of you, and we'll start with you, Tracie. What is the first step that we can take as a community to remove or reduce any ethics barriers to adopting an NDE 4.0 program?

[00:38:28] Tracie Clifford: Well, I know I probably sound like a broken record, but again, it goes back to management, because management makes those high-level decisions on where to budget and when to bring in more resources. And not just management buy-in: they need to be aware of, like, the data that Matt is going to be analyzing, and what impact that has on their business.

[00:38:54] Tracie Clifford: And again, to be future-looking and make those good business decisions. So [00:39:00] that, to me, is always the first step. But I also think, even back to what we've discussed: how do we integrate this into the education or training of NDE technicians, to help them understand what they need to be on the lookout for?

[00:39:17] Tracie Clifford: Because some things are discovered just through everyday work, especially if they're working in an NDE 4.0 organization. How will they know to pick up on something? So, two different things there: one at the very top, and one for the person down on the ground actually doing the work. Those are my thoughts.

[00:39:36] Nasrin Azari: Nice. Okay. And I guess one of the things I should have maybe included in this question, and I'll do it for you, Matt, is: how would you characterize what the barriers are, if there are ethics barriers? I mean, I'm sort of making an assumption that some companies are hesitant to adopt NDE 4.0 because of potential [00:40:00] ethics concerns.

[00:40:04] Nasrin Azari: And what do you think those concerns are? And then, what can we do as a community to remove or reduce those concerns?

[00:40:14] Mat Litschewski: So that actually ties in with my answer to the first question, which is: it's all going to be an economic driver. The pocketbook is going to dictate how this kind of goes, unless we as a community, as a society, put pressure on to actually design systems ethically. If it's cheaper to design a system with maybe less ethics, we're going to design a system that works

[00:40:43] Mat Litschewski: and might not be held to the higher standard that we would like to see as we're trying to build this program and decide what its ethics should be. To me, getting buy-in from management [00:41:00] in particular, to drive creating a better ethical system, will then flow down to the bottom practitioner of the system, who will then hopefully hold up the ethics that are put in place.

[00:41:14] Nasrin Azari: Yeah, that's it. So there's this top-down structure that needs to happen within a company.

[00:41:20] Mat Litschewski: Yeah. A good example that's been used is the Toyota system: anybody on the Toyota production line can stop the production line if they see an issue in quality. That's what needs to be put into our system of ethics.

[00:41:38] Mat Litschewski: Everybody along the line of NDE 4.0 should be able to hit a stop button when they see something going wrong.

[00:41:44] Nasrin Azari: Yeah, raise a flag that says there's a problem here. And it might almost be a requirement for them to step up, right? If they saw something, stepping up should be something [00:42:00] they're not just encouraged to do, but assumed to do.

[00:42:06] Nasrin Azari: Ripi, let's finish up with you: your thoughts around ethics as barriers to adopting NDE 4.0 programs, and how we might get around those barriers, or what we might put in place as a community to reduce barriers to adoption.

[00:42:24] Ripi Singh: So I think as a community, the first step, the first major step, we have already taken: awareness and acceptance that there is an issue.

[00:42:34] Ripi Singh: We all need to work together. We all need to learn together and do something about it. Nasrin, this podcast that you organize is a part of that awareness and acceptance campaign, and I hope thousands of people watch it and realize that managers need to think about it. System developers need to think about it.

[00:42:53] Ripi Singh: Programmers need to think about it. Business people need to think about it. And also making the buyers aware, [00:43:00] which Tracie brought out, right? Buyers should demand it. So step one is awareness and acceptance: this podcast, the ASNT Subcommittee on Ethics, and the chapter on ethics that Tracie and I have written for the Handbook of NDE 4.0.

[00:43:16] Ripi Singh: The ICNDT Special Interest Group on NDE 4.0 has identified ethics as an area, and has identified three individuals, from the U.S., the U.K., and a third country, I think Italy, to lead a global team to address common ethical concerns, and to even look outside of NDE first to see what can be adapted to NDE.

[00:43:43] Ripi Singh: Matt shared with us a list of questions that could become a part of this. These are the best practices out there, and they are evolving. We don't have to invent everything from scratch; we can actually adopt it. What's happening with driverless cars will tell us how to look at a few things, right?

[00:43:58] Ripi Singh: So all these things are happening. Look at [00:44:00] the DOD's guidance on AI. It has given five principles, and we could adopt those five principles. The first one is that, eventually, a human should be responsible for it. Second, you should be able to trace it back to the human who's responsible. Third, the human who's responsible should also be trained to make those decisions, or to train the systems accordingly, right?

[00:44:21] Ripi Singh: Fourth, you should be able to shut down the system if it is misbehaving, and things like that. So there is guidance coming out from multiple organizations, including what I'll call the Center for Applied Ethics at Santa Clara. So that's our second step: let's adopt what people have learned and are sharing with us, and then see which pieces can be directly applied to NDE 4.0.

[00:44:44] Ripi Singh: Another aspect that Tracie and I have mentioned in our chapter of the Handbook of NDE 4.0 is having a Digital Transformation Review Board. If you don't know what to do, but you are at least aware that there's an [00:45:00] ethics concern: just like you have a Quality Review Board, a Materials Review Board, a Design Review Board.

[00:45:04] Ripi Singh: You can have an Ethics Review Board, or a Digital Transformation Review Board, made up of people with diverse experiences, including people like Matt, who are, you know, top-notch in their understanding of data science, to actually make sure that when you develop a system and train a system, you're doing it the right way.

[00:45:25] Ripi Singh: So those are the initial steps that we have to go through to get to a point where we can feel better about it. And then, of course, learn as we go.

[00:45:35] Nasrin Azari: So I feel like we've just barely scratched the surface of this topic. I mean, you know, I've been wanting to do this podcast for some time, and I'm really grateful to the three of you for joining us today and providing

[00:45:49] Nasrin Azari: so much thoughtful commentary around ethics, because I think it's extremely interesting and much broader than I [00:46:00] would have thought, especially when I first started thinking about ethics associated with NDE 4.0. Honestly, it hadn't even been a topic that I was aware of until recently, having these conversations with you all, and it's fascinating how deep it goes.

[00:46:18] Nasrin Azari: So I really appreciate your input today. Thank you, Ripi. Thank you, Tracie. Thank you, Matt. And thank you, listeners. Please take this whole ethics conversation very seriously and get yourselves educated. As Ripi mentioned, we've got this podcast out there, and there are a lot of great

[00:46:39] Nasrin Azari: tools out there for people to spread information about NDE 4.0, and ethics is a part of that conversation. So let's keep it going. Thanks, everybody, and thanks for joining. We'll see you next time. Bye bye. For more expert views

[00:46:54] Mat Litschewski: on NDT, subscribe to the Floodlight Software blog at floodlightsoft.com.

