RSTalks #1 - Why AI Matters? This is our RSInsight

This transcript describes the YouTube video "RSTalks #1 - Why AI Matters? This is our RSInsight"

Dan: Hi, I'm Dan, and welcome to the premiere of RSTalks, the Rocscience podcast.

Who am I? I'm a software developer at Rocscience, working on the RS2 team. I started my journey at the University of Waterloo, doing my undergrad in civil engineering. I then completed my Master’s at the University of Toronto, specializing in civil and mineral engineering.

I've taken on several different roles and even earned my Professional Engineering license here in Canada. But I really fell in love with this role because it allows me to work on the most diverse and challenging projects in this field.

When I’m not working, I’m brewing coffee or tea. Whether it’s grinding the coffee beans or whisking matcha, I find that simple, mindful task really helps me reset after a busy day—and I feel like everyone needs to find that moment in their life.

That same passion I have for knowing every step in a perfect brew? I bring it with me when I learn about new developments in our industry. And that’s really the essence of what RSTalks is all about.

I’ll be sharing a drink with industry experts and thought leaders as we talk about new developments in geotechnical engineering. We’ll cover topics like career development, groundbreaking research, and innovative engineering solutions.

And I really want you to join me for that ride.

Text fades in on a burgundy screen, reading: “RSTALKS”

The text fades out, and Dan reappears on screen.

Dan: Now, in today’s episode, we’re going to talk about a very controversial topic—AI development in geotechnical engineering.

AI, in general, has brought out a lot of mixed feelings in our society. In 2014, Stephen Hawking, in a BBC interview, talked about how the full development of artificial intelligence could spell the end of the human race.

In 2017, Jeff Bezos was a bit more optimistic. He referred to this time period as a golden age. He talked about the development of machine learning and artificial intelligence as a possibility for us to solve problems that were once science fiction just several decades ago.

The big questions now are:
Is AI going to take my job?
Are AI tools hurting me or helping me?
Should I be optimistic about the AI revolution?

Now, I’m going to talk to four of my colleagues here at Rocscience—one of whom has developed an AI-powered chatbot that’s here to answer your geotechnical questions.

Please join me in welcoming Steve to the show. Welcome, Steve!

Steve: Oh, thank you, Daniel.

Steve and Daniel clap.

Dan: Not that many people here, but still good!

Okay, so I know that you’re the AI team lead, and you've been developing an AI-powered chatbot called RSInsight. I think the first—actually, before I get into that, I know you have a milk allergy...

Steve: Yes.

Dan: So, I made you an oat milk latte, and I really hope that is better than the coffee you can get at the office. If it’s not, I’ll make you something else—don’t worry about it. The beans are from a local roaster here in Toronto called Velvet Sunrise—please sponsor us. And yeah, I hope you like it! Please take a taste and let me know how it is. Is it the right sweetness, bitterness, and all that stuff? I made myself an Americano.

Steve: It’s really good, thank you.

Dan: Okay, so you’re an AI team lead for RSInsight, which is an AI-powered chatbot here at Rocscience. I think what everyone wants to know right now is just a high-level overview of what that is and what it does.

Steve: Right, so RSInsight is a chatbot that has been fed proprietary geotechnical engineering knowledge. There are a lot of chatbots out there, and depending on which field you're in, there's Galactica for science or BloombergGPT for finance. These domain-specific chatbots are trained on domain-specific knowledge.

So, when we look at the papers and the technology out there, no company on the geotechnical side has attempted to create a chatbot system with a database specifically focused on geotechnical engineering solutions.

What we have done is collected all of the online help from our website, as well as learning resources from Evert Hoek’s Learning Corner, our case studies, and, over time, our interactions with customers we've had over the years. We have thousands of case studies that we have learned from.

So, what if you had a chatbot that has access to all this domain-specific knowledge and can answer questions directly from this database? That was our initiative. We started this project in May of this year. This chatbot will be coming out very soon, and I'm really looking forward to how it can improve the experience for geotechnical engineers working in this domain.

Dan: And how do you look at the workflow? How would you expect a geotechnical engineer to learn about it, and about its limitations, and how do you expect it to change the way we do designs?

Steve: Yeah, that's a good question. I see it as an assistant tool. It's not going to directly give you the right answer, you know; we will still have to use engineering judgment. With any chatbot, you feed in a question and it generates responses, but it is up to the engineer to determine whether the answer is right or wrong. The good thing about this chatbot is that it is able to give you an answer from our domain-specific knowledge first, before it goes looking for other solutions.

So a very simple example is, if you ask a question related to geotechnical engineering with ground improvement and settlement, for instance, it will give you guidance on how you can analyze these step by step, and it will go into software use on how you can use, you know... Settle3, for instance, to run these analyses.

And then if you need other information from theory outside of geotechnical engineering, it still has the ability to do that, pulling from the fundamental theories of physics and mathematics and combining them to generate a response. In theory, you'd have to go look through the books, pull out the equations, and write the summary yourself, right? Now you have a chatbot that does it for you.

Dan: So I think a lot of our viewers might have the same question I have, which is: I type in a question, so what is the quality of the answer I get back? Are there any ways to mitigate against errors, or at least check if it's right?

Steve: Yeah, so definitely. This is the beauty of RSInsight: it is able to give you a full reference for how it generated its responses. You can ask a prompt and it will give you even the page number where it pulled the information from, so you actually know where the information is coming from.

At the same time, yes, because it's a probabilistic model, what they call an LLM, a large language model, a chatbot is susceptible to hallucination. If your query isn't clear, or if you intentionally try to mislead the chatbot into answering differently, sometimes it will actually go off. I'm sure a lot of people have tried this as a fun experiment. To avoid that, don't intentionally toxify the chatbot; instead, reason it out step by step if you think it hasn't given you enough of the information you're looking for. Work with the chatbot. It's like talking with a person: you want to be patient, provide your logic, and steer where you want the conversation to go. LLMs nowadays are very smart at understanding the context of how you ask a question, so the answer is based on the series of queries you've asked. RSInsight also has the ability to read the contextual background of your whole chat history, so it can reason out and get to the answer you want.
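Steve's description, a chatbot that answers from a domain database and cites the page it pulled from, boils down to retrieval plus citation. The sketch below is a deliberately tiny illustration of that idea, with bag-of-words matching standing in for a real embedding model; none of it is RSInsight's actual code, and the documents and scores are made up:

```python
# Toy retrieval-with-citations sketch. A real system would use an LLM and
# vector embeddings; here, simple word-overlap scoring makes the idea visible.
from collections import Counter
import math

# A tiny "domain knowledge base": (source, page, text) triples.
DOCS = [
    ("Settle3 help", 12, "consolidation settlement under an embankment load"),
    ("Hoek's Corner", 4, "rock mass strength and the Hoek-Brown criterion"),
    ("Case study 7", 33, "ground improvement with stone columns to reduce settlement"),
]

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    """Return the best-matching passage plus its citation (source, page)."""
    q = bag_of_words(query)
    scored = [(cosine(q, bag_of_words(text)), src, page, text)
              for src, page, text in DOCS]
    score, src, page, text = max(scored)
    return {"answer_context": text, "source": src, "page": page, "score": score}

hit = retrieve("How do I analyze settlement with ground improvement?")
print(f'{hit["source"]}, p. {hit["page"]}: {hit["answer_context"]}')
```

The citation travels with the retrieved passage, which is what lets a user check where an answer came from rather than trusting the model's prose.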

Dan: So it actually sounds like, kind of like recommendations I make when we recommend stuff for software. It's like you kind of have to be aware of your input because it'll give you something whether it's right or wrong.

Steve: Right.

Dan: But I think what you're saying is that, because it has background knowledge specific to what we do, case studies and so forth, the probability is it's going to pretty much nail what you're looking for? So I guess it would be similar to me just Googling everything, right? Or, I would assume, reading all the papers myself.

Steve: Similar... I'll say similar... faster.

Dan: Similar but faster? Yeah, I can see how that would be. To get a new person to relearn all of the past experience that keeps accumulating as we build more features, that's trouble, so it could be very useful to have some type of tool like that.

Steve: Yeah.

Dan: So I think the final thing I want to ask you about is... okay, so what do users need to do if they want to get started?

Steve: So they first have to go to RocPortal.

Dan: Okay.

Steve: Sign up for an account.

Dan: Sign up, please!

Steve: Yes. If you are a current user of Rocscience, you're already a flexible user, which means you'll be given a certain number of questions or queries per month; but if you're a free user, you still have access to our chatbot. You'll be able to freely try it out and see how well it performs. We also have a feedback system, so if you don't like an answer you can give it a thumbs down and leave feedback, or if you like the response you can give a thumbs up. Over time this will help us improve the performance of the LLM, just like other chatbots improve over time through their interactions with customers.

Dan: Oh excellent, so yeah, thanks so much for stopping by. Really excited to have you here. Yeah, take care, we'll see you... I'll see you at your desk.

Steve: Yeah, for sure, great! Alright, thank you Daniel, appreciate it.

Dan: Byebye!

Steve: Thank you for your time, guys!

Dan: Alright, my second guest is Sina, Product Manager for Rocscience's LEM software, Slide2 and Slide3. He's an adjunct professor at York University. He's also an influencer in the geotechnical space. I have no idea how you have time to do all this. Thanks for joining me on this podcast.

Sina: Thank you, Daniel, for having me here. I don't know about the influencer part, but the first two ones are true.

Dan: You're definitely an influencer, oh my gosh. I aspire to be that impactful.

Sina: Thank you.

Dan: So before we get to that, so you are a tea drinker, right? So this is a special blend that my friends have taught me how to make. It's called hot water tea. I'm just kidding. Please drink it. So I actually made you an oat milk matcha latte.

Sina: Wow.

Dan: So it has a matcha, a little bit of honey, a little bit of oat milk because I remember you prefer alternatives.

Sina: Yup.

Dan: Please take a taste, yeah, let me know how it is. Yeah, yeah, yeah, yeah, yeah. And of course, I have my Americano.

Sina: Wow, that's so nice.

Dan: Have you had matcha before?

Sina: No.

Dan: I'm introducing you to matcha. Oh my god. I hope it's good though?

Sina: Yeah, it's perfect, so thanks so much.

Dan: Yeah, so I think the reason we brought you on... now you've been in the industry for a very long time, that's not anything to do with your age—I'm not saying anything about your age. It’s that you've been impactful.

Sina: I would say I'm almost 40.

Dan: So I'm over 30, so it's, you know, we're all getting there. But I think the main thing is, from your perspective, your demographic, your friends and colleagues—do you guys use AI tools?

Sina: Oh yeah, a lot. It depends on the industry; I have so many friends in different industries. Even my sister-in-law is a professor at New York University, and her job is AI and machine learning. Her background was in electrical engineering, but now the whole thing is about AI. Even ourselves here, we've been investigating basically anything related to AI, because that's the future and we want to be the pioneers, as always. We go in any direction that is related to AI to see if we can implement it in our software. And for the professors I've been talking to, the main part of their research is checking whether AI works for geotechnical engineering.

Dan: But I know that you're quite studious; you can get into any passion, and it sounds like you have friends who also do that. But when you meet people at conferences, not everybody comes from that background. Do they also have that feeling, or...?

Sina: Yeah, especially about the data, because the first thing they start with is boreholes, borehole data. The main question they have is: do you have any AI tool that makes this faster for us and/or minimizes the error in our data? People have approached me and said, "Okay, we are looking to create an AI tool, can you help us?" I don't know why, because they think that they have more time. But yeah, it's everywhere now. This kind of data analysis is something we actually implemented in RSLog as well; it's called OCR, which is an AI-based approach. It's the same thing, and I know geotechnical companies that have their own online AI tools to do that for them, and it's mandatory for members of those companies to use the AI tool for borehole data management.

Dan: Okay, so that seems like it's coming from decision makers? Like, they're going in that direction, but what about the people that have to use these tools day to day? How do you think they feel about...?

Sina: They are educating themselves, I think, because it's like us: a couple of years ago we had no idea what AI or machine learning was. My PhD background is in statistics, so I should have been more familiar, but still we didn't have that background. It's an interesting discussion. When we had statistics in Slide2, one day we said, "Okay, it's taking too long, what should we do? What are the approaches?" And we came across this method called the stochastic response surface method, which is machine learning based, and we implemented it in Slide2 three or four years ago. At the time, it wasn't that hot in geotechnical engineering, but any direction you go where you talk about speed or about data, you come across this and you have to learn it. That's what everybody is doing: they've decided it's necessary to learn, and they're doing it now.

Dan: So what did you do to start learning?

Sina: I started with the very basics, like "what is machine learning," "what is AI." The first thing to investigate is that machine learning is one of several branches of AI. I wanted to do machine learning in Slide2, and we wanted to learn how it works, so we went to textbooks, and we Googled what this response surface method is and what these functions are. Then there was the mathematics: a lot of equations, a lot of things, so we had to go in different directions, even to papers that use it in other fields, because the geotechnical field is, I think, one of the last to pick these things up, because of our backgrounds. So we went to other fields, saw how they used it, learned from them, and then brought it here.

Dan: Okay. For me personally, we're in an age where you can look up anything on the internet, and obviously there's right and wrong information out there. Would you recommend that approach? That they just dive in? I mean, there are tutorials now, I think?

Sina: Yeah, there are tutorials they can learn from, but the thing is, when you are a developer, and you are still a developer, you know that when you want to develop a feature in a software you can't just Google it; you have to go to the root of the problem.

Dan: Yes.

Sina: So you have to learn all the basics to be able to develop something that any user can use. It's a general feature, so they can play with the numbers we put there as defaults, and you have to be prepared for all of that. If you don't know exactly what each of those is doing, you can't develop it. That's why for us it's a bit more about the basics, and in more depth. But general users can just Google it, "what is a stochastic response surface method," for example; they don't have to know everything. As the saying goes, garbage in, garbage out: they should know the parameters they are using as inputs and make sure they're accurate enough, that's all.

Dan: Okay.

Sina: And how to interpret the results: they should understand the method to be able to interpret the results as well.

Dan: Yeah, so it's basically saying, and I do this through all my work anyway, that you're responsible for your input. Understand that it's a tool, but a very powerful tool; I've seen what Slide2 can do. So, would you say there's anything at this point in time that AI tools haven't been able to help you with?

Sina: Maybe we just haven't tried them. There are different aspects. I know there are PhD studies being done now to connect seepage analysis to slope stability, to use the seepage data, because there is a lot of data out there and it can be used in slope stability problems for landslide failures. There has always been seepage involved, so they are doing PhDs on incorporating AI into it. But is it possible to bring it to Slide? I'm not sure, because again, this is commercial software and we need more data to train it. We need to be right 100% of the time, or most of the time, but we are still investigating different aspects.

There have been some features, for example, where I wanted to know if AI could help, but we ended up limited by the amount of available data; that's a main problem in geotechnical engineering. Most of the data is confidential. We have synthetic data, and we can reproduce synthetic data, but that's not enough because those are ideal cases. We want bad data and good data from customers. Most of the models, we are not allowed to use; most of the data, we don't have access to. That's the main limitation, I think, in geotechnical engineering and in some of the aspects we wanted to tackle.

Dan: So is that because if you don't train it properly?

Sina: Exactly.

Dan: Not that useful.

Sina: No, it's not. AI is not a magic tool, so basically, it must be trained properly. If you train it with bad data, you don't get what you want. You may develop a tool that gives you some features and stuff, but I wouldn't trust that.

Dan: That makes sense, yeah. So do you know if companies are doing... oh, you mentioned the online, like their self online tools — so they're using their own data?

Sina: Yeah, the funny part is, they were using their own data, and then they compared it with this OCR feature that we have, and they found that their AI was doing something wrong. So that is possible, because they were using maybe the limited data that they had. You should be aware, and as I said, if you want to use any AI tool, you should use your engineering judgment as well. AI is not going to replace the person with 30 years of experience who looks at the model and says, "Okay, this is going to fail." AI may say no based on the data you provide. It should be a combination, an adaptation process, with experience and general knowledge of geotechnical engineering.

Dan: Yeah, I think I felt that when I got my stamp. At the end of the day, the AI is not stamping the drawings; I'm stamping them.

Sina: Exactly. We worked on a feature for one year. It's called "Intelligent Search" in Slide3; it's an AI-based method, and it's unique to Rocscience. We spent one year brainstorming and developing, with meetings every week, saying, "Okay, what if we do that?" It's based on particle swarm optimization, which is a nature-inspired optimization, a branch of AI. You'd assume that's going to solve the problem, but if you have an open pit, say 3 kilometers by 3 kilometers by 2 kilometers deep, something like that, it's not going to help on its own; it's going to miss something. So you have to tell the approach, "Okay, this is your general approach, but focus on this: put more particles in the region that is steeper, put more particles in the regions that have a weak layer." You have to tell the algorithm to do that. Otherwise, okay, the particles talk to each other and move toward places with a lower factor of safety. It's very smart; it's amazing how it works. But it still needs a touch, and that touch came from the more than 80 or 90 full 3D models that we had. I'd say, "Okay, if I apply this algorithm, it's not going to give me the correct answer. Why is that?" Then we'd dig into the algorithm and adjust it based on what we observed in that model. So you can see, it's the human beside it that creates something way bigger and way better.
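Particle swarm optimization, the technique Sina names, is simple to sketch: particles move through the search space, each pulled toward its own best find and toward the swarm's best find. The toy below minimizes a stand-in function; Slide3's Intelligent Search is far more elaborate (and, as Sina notes, seeds extra particles in steep or weak regions), so treat this only as an illustration of the basic mechanism:

```python
# Minimal particle swarm optimization (PSO) over a continuous search space.
import random

def pso(objective, dim=2, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position

    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and pull strengths
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Particles "talk to each other" through gbest, as Sina describes.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: a simple bowl whose minimum (the "critical surface")
# sits at (1, -2). A real search would evaluate factor of safety instead.
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2)
print(best, best_val)
```

The biasing Sina describes (more particles in steep or weak-layer regions) would correspond to a non-uniform initialization of `pos`, which is exactly the kind of domain "touch" a plain PSO lacks.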

Dan: So you need that experience coupled with these advanced tools, right?

Sina: Of course. If I look at those 3D models, I can tell when the results don't make any sense, but I'm feeding them to my algorithm. So we scrap those. I say, "Okay, make this model better, fix it," and feed it again, and the results come out much improved.

Dan: Now, a lot of our audience are engineers, but I do want to mention that you're also a product manager.

Sina: Yeah.

Dan: You have that business side of you. Do you think those tools are going to help in that regard?

Sina: 100%. This intelligent search in Slide3, for example, fixed an issue that has been there for years in every 3D limit equilibrium software (not finite element software), because in limit equilibrium, the search is the main thing. The Bishop method is the Bishop method; we didn't change the formulation of the Bishop method or the Spencer method. But how do you search for the critical factor of safety? That's the key. For 3D models it's a big challenge, because the models are huge and speed is another problem, so performance and accuracy all come into account compared to 2D analysis. This solved our issue, so now we have the classic search method and intelligent search, and I would go with intelligent as the default, because I know this particle swarm, this AI-based optimization method, fixed issues that have been there in 3D limit equilibrium for years. Or the response surface that I mentioned: a lot of people complain that in 3D probabilistic analysis, each run may take a few minutes. Just imagine you want to do that 1000 times. Now with the response surface, you do it only 30 or 40 times. Just imagine the difference: it went from a few days to two or three hours. You know how much money that is going to save on the consulting and business side? So definitely these tools are meant to fix the business part of the issue.
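The response-surface idea Sina describes, a few expensive runs, a cheap fitted surrogate, then the full Monte Carlo on the surrogate, can be sketched like this. The `expensive_fos` function below is a made-up linear stand-in for a real slope-stability computation, not anything from Slide2 or Slide3, and the input distributions are invented for the example:

```python
# Response-surface sketch: fit a cheap surrogate to a handful of expensive
# analysis runs, then run the thousand-sample Monte Carlo on the surrogate.
import random

def expensive_fos(cohesion, friction):
    # Pretend this is a full analysis taking minutes per call.
    return 0.5 + 0.02 * cohesion + 0.015 * friction

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 1. A few expensive runs at chosen design points (just 3 here).
design = [(10.0, 30.0), (20.0, 30.0), (10.0, 40.0)]
A = [[1.0, c, phi] for c, phi in design]
b = [expensive_fos(c, phi) for c, phi in design]
a0, a1, a2 = solve(A, b)

def surrogate(cohesion, friction):
    # Cheap fitted response surface: instant to evaluate.
    return a0 + a1 * cohesion + a2 * friction

# 2. Monte Carlo on the surrogate: 1000 cheap samples, not 1000 full runs.
rng = random.Random(0)
samples = [surrogate(rng.gauss(10, 5), rng.gauss(35, 3)) for _ in range(1000)]
pf = sum(fos < 1.0 for fos in samples) / len(samples)
print(f"estimated probability of failure: {pf:.3f}")
```

The actual stochastic response surface method uses a higher-order polynomial-chaos style expansion rather than the plain linear fit shown here, but the economics are the same: the cost moves from the number of Monte Carlo samples to the much smaller number of surrogate-training runs.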

Dan: Right, so because time is money, right?

Sina: Exactly.

Dan: Time is your budget. So you're saying the higher-ups would benefit from faster decision-making, whatever it is they decide to do. Have you ever seen AI tools used to manage people?

Sina: Not really.

Dan: Or just engineering?

Sina: There are AI-based tools, because most of the time it's dealing with big data. In terms of managing hours, if you have a company with many employees, optimizing hour management is a main application of AI. It can definitely be done in terms of managing and making people more productive, in terms of performance, yeah.

Dan: Yeah, that makes sense to me. So I think we've got through all of it. I have a cue card here; I'll hide it away. Thanks so much for joining me, and thanks for sharing your experience, obviously...

Sina: It was a great pleasure to be here, and just one thing before I say goodbye.

Dan: Sure.

Sina: There might be resistance against it in the geotechnical industry, and there will be, a lot, because again, people with a lot of experience don't trust easily. It will take some time for AI in geotechnical engineering, and we need data to be able to do it properly.

Dan: Yeah, I think data is going to be a challenge, absolutely. But look at you: you're able to be a professor, you're an influencer, and you're working as product manager for two different products. It's hard for me to find an excuse not to learn more about these things, right?

Sina: Exactly. It has saved time for me as well; even ChatGPT itself helped. And there are a lot of AI tools I didn't know about. I actually got a list: if you want to do this part of the job, use this tool. There's a list, and it's always being updated on LinkedIn. Everybody is posting on LinkedIn about these AI tools, and I actually go look at them and use any that will help me. Let's say you want to do a feature in Slide2: I have to do market research, I have to research other software, so I use these tools to find anything out there that I'm missing, whether there's any value in it, and what the application is in general, because we have to look at the big picture.

Dan: I think, though, if you learn one tool, the others aren't that different, at least in the interface. It's just a matter of how powerful each one is in which areas.

Sina: Exactly, and how you ask your questions.

Dan: Right.

Sina: So basically, that's very important. At some point, I kept asking what the references were for this part of, like, slope stability, because we write papers all the time and I was looking for some references. Whatever came from the AI: all wrong. So you should be smart in terms of what you ask. Because I was looking for my own references, I knew these were not mine; it just put my name in the references and said, okay, these are yours. I had to modify my question several times to get a proper answer. So being smart about what you ask, and about what exactly you're looking for, is the other thing. You can't just throw questions in there and expect something meaningful; you have to play with it. I know that some developers ask questions even about the code that...

Dan: Right.

Sina: They write, these AI tools they write codes for them but they do cross-check all the time and I keep asking, keep modifying, and the results was a little bit scary. They got, they wrote the code themselves before. They got exactly the same code from AI tools as they had, but it had a lot of back and forth. So it is very strong but you need to be smarter.

Dan: So you're saying to, like to the viewers like just don't give up on...

Sina: Yeah.

Dan: Play with it, try it out, and it seems like it's benefiting you a lot so it's excellent.

Sina: A lot yeah. It's helping in every aspect. We write reports. We do research. We write papers. We... a lot of things. It is very...

Dan: It might be so fast that you get another job or another role added on to your list of achievements yeah...

Sina: I'm done. No... just to comment on that: it's not that I'm an influencer per se, being on LinkedIn; I would like users to know how to use our tools. There's a quote that if you are a product manager just saying "my software is the best, my software is the best tool," that doesn't help users. Let them use your product. Me posting things on LinkedIn, being there all the time, and providing users with different models, different papers, different things, is just to show them how they can use the tool. Then they try it. And once they try it, they cannot leave it. That's the goal, and I think that's what every product manager should do.

Dan: I think you bring up a good topic for maybe another podcast about social media? And engineering?

Sina: Yeah, for sure.

Dan: I think I personally found it to be extremely valuable to be present.

Sina: All the users are there, especially with the latest developments on LinkedIn and everything. Everybody in your audience is there, so you should talk to them. You should show them what your software or your product can do. And I found it very interesting: I won't give you numbers, but it's striking how many impressions a post gets if it's technical, whereas if you try to advertise something, it doesn't work. So technical content is the key in terms of presence.

Dan: I think that fits in with the engineers’ mindset. They just want things done and done well, right? So yeah, thanks again so much.

Sina: Thank you for having me here.

Dan: Yeah, I hope I didn't stop you from all the stuff you have to do today. Take care.

Welcome to our final guests! We have Angela and Riana. Angela is a project manager for the Rocscience rock suite: Dips, RocTunnel3, RocSlope3, the list goes on. By the time I finish making her matcha latte, I feel like she'll have designed another product. And Riana, I've actually made you a pour-over coffee, with milk to your liking, of course. I think it goes well with all the treats you brought in, which was a pleasant surprise. Please have a taste; I hope you like it.

Angela and Riana: Yeah, thank you so much.

Riana: Yeah, delicious.

Dan: Thank you. Okay, so the reason we brought you in here: you two are rock stars at the company, and you're also at the beginning of your careers. The main thing I want to know is: are you using AI tools in your personal and professional work? Shall we start with Angela? Yeah, we'll start.

Angela: Yeah, I use it all the time for coding. I use ChatGPT and Copilot, and it helps a lot with the mundane code, where I give it a prompt and it's able to give me something back. It's not always usable, but that's where I guess the engineering judgment comes in. I also use it for writing, anything on the technical side. It gives me ideas about things I didn't think about. But in general, I've found that with geotechnical content, these AI tools typically don't do as well as they do with coding.

Dan: And are you, yeah, you... I guess you would do coding as well, right? You use a lot of similar things?

Riana: Yeah, I would say I use ChatGPT. It's taken a while for me to get used to even thinking about writing a prompt. And I would say it's a skill to figure out what to write in there, because a lot of the time it doesn't give me exactly what I need, but I use it pretty regularly now to help me even just figure out how to get started or structure my code.

Dan: So the thing is, you're also managing Slide2, which is geotech, a lot of geotech questions. Do you end up Google — like not Googling, but using AI tools for geotech questions as well? To like help you out?

Riana: I have been warned against that by Angela...

Dan: Because right, oh yeah, you guys sit near each other.

Riana: So I, I don't tend to use it for geotech...

Dan: Hmm, okay.

Riana: ...questions.

Dan: I think I've had the same problem, where you type in a theory question and you kind of have to check yourself, make sure it's actually what the book says, or whatever the source is. So, you know, that's good, I think. Also, throughout the podcast, people have been saying this: you can't not use your engineering judgment, like you were mentioning before. So that's good, yeah. So the other thing I want to ask you about is: if I'm an up-and-coming engineer — actually, it doesn't even need to be an engineer, now that I think about it — how do I prepare myself? Do I need to learn all these AI tools as they come in? What was the process like when you guys were learning them?

Riana: I would say it's definitely helpful to learn how to use them. But I think it's more important to know how to test them?

Dan: That’s a great point.

Riana: And their responses. So you definitely need to know the underlying theory...

Dan: Yeah, the theory... Same question: what do you think as well?

Angela: I would say, at the end of the day, you know, AI is just a tool, and it's only as useful as how you use it. So even something like giving a prompt, I think, is a learned skill. You kind of treat it like you're talking to someone who has no idea of the context or anything you're talking about: explain really clearly what the problem is, so that even a person could understand it, and I think that's how you structure your prompts nicely for AI too. They don't know what you're talking about. They don't know what your problem is. And they don't know what you want to achieve at the end, so a lot of it comes down to proper communication with the tool to get a good-quality answer.

Dan: So are you saying that we should be solid in our basic understanding, at least, of the problems, the theories, and all that stuff, and then use the tool? Like, don't just jump into it, is what you're saying. Because bad input probably isn't going to help you, I would assume?

Angela: Yeah, I mean, you know, something as simple as, I've asked ChatGPT, “Hey, what's the typical friction angle that I should use for rock?” and it'll say something along the lines of, “Oh, well, use 30° to 35°,” and I say, “Well, that sounds a little bit high,” and then it's like, “Okay, then try 20° to 25°,” but that still sounds a little high, and then it'll say, “Okay, try 15° to 20°,” and now it's just going all the way downhill, and I feel like, you know, it's not really giving me something I need that's usable.

Dan: So you need that, for geotech at least. We need that context.

Angela: You need, yeah, you need the context. You need to know application. And I think these AI tools sometimes just give you an answer just to give you an answer, based on what you asked, right?

Dan: Yeah, I think my personal experience is similar, because I've asked geotech questions, and I've also asked coding questions. But on the coding side, it really helped me once I knew basic data structures and algorithms; then I'd use the AI tool, and it was very helpful. I'm sure, when you were going through school, you'd already learned all these things. But I had to make myself learn that stuff first before I trusted it... because technically you can go to ChatGPT or whatever it is and just build software by copy-pasting, probably, and it might work, but I found that to be not that effective. And I'm getting the sense that you guys are probably thinking the same: you probably shouldn't do that?

Riana: Yeah, I mean, I was dealing with ChatGPT even recently, and the answer looked great, and then I copied and pasted it in. First of all, there were compiler errors in the code that I had to fix, and if I didn't know the proper way of fixing them, there are a lot of ways I could have gone wrong. So I needed my background in data structures and in the classes I was using in order to fix those errors. And then, not just that, but dealing with: okay, the program now compiles, but it's not actually giving me the exact output I want. So you're playing with the response in order to fine-tune it to exactly what you were looking for.

Dan: And I think it goes back to what you were saying about testing. Yeah, I found this to be a very good skill just to have. I mean, even if you're answering a customer's question, you're going to have to test it somehow, or break down a big issue into: where is it failing? So I think that's really key, yeah. So when you guys were picking up, let's say, ChatGPT and Copilot — is that basically what you use? Did you just get into it and start typing prompts? Like, how did you refine your prompt-writing skills?

Riana: Well, trial and error...

Dan: Trial and error, yeah, yeah.

Riana: A lot of the time I will just type something, and then it'll come back and, instead of giving me code, it'll write it out as if I were doing hand calculations. So I'd say, “Okay, don't give me that. Can you write this in C#? Can you write this in C++?” And then it'll do something, but it'll be using entirely different libraries than our software uses. So I say, “Can you use the transformation class to do this?” So, kind of just playing with it in that way to get something that I can more easily copy and paste, and then fine-tune from there.

Dan: Right, and what about you? How do you feel about—like, how did you refine your skills of making it useful for you?

Angela: I don't try to challenge it too much. Like I said, the majority of the time I'll use it for more of the mundane tasks. Like, for simple geometries, you know, we all kind of have to Google, “Oh, is this a dot product or is this a cross product?” And it kind of takes that outside Googling away for some of these really simple math transformations — rotations, projections, etc. So if I said, “Oh, I just want to project A onto B, can you just give me the C# code for that?” it's something that, I think, a lot of websites will have, and that's probably where ChatGPT extracts that information from.
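As an aside, the kind of "project A onto B" snippet Angela describes can be sketched out in a few lines. This is an illustrative version in C++ (rather than the C# she mentions); the function and variable names are ours, not from any Rocscience codebase:

```cpp
#include <array>
#include <cassert>

// Project vector a onto vector b using proj_b(a) = (a·b / b·b) * b.
// Assumes b is non-zero; a robust version would check the magnitude.
std::array<double, 3> project(const std::array<double, 3>& a,
                              const std::array<double, 3>& b) {
    double dot_ab = 0.0, dot_bb = 0.0;
    for (int i = 0; i < 3; ++i) {
        dot_ab += a[i] * b[i];   // a·b
        dot_bb += b[i] * b[i];   // b·b
    }
    const double scale = dot_ab / dot_bb;
    return { scale * b[0], scale * b[1], scale * b[2] };
}
```

For example, projecting {2, 3, 0} onto the x-axis {1, 0, 0} gives {2, 0, 0}, which is easy to verify by hand, which is exactly the kind of check Angela applies to AI-generated snippets.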

So in those cases, you know, probably more than nine times out of ten, it'll actually give me back a piece of code that I can immediately use, and it'll be correct. But on the other hand — because we did talk about how testing is so important — sometimes it's the other way around, where I write the code and I tell it, for example: maybe I wrote a brute-force method and it produces something... “Can you make it better? Can you optimize it? Or can you actually write me some simple test cases for this code that I've already produced?”

Dan: And how do you qualify—or, like, check—whether that new solution is good or bad?

Angela: So I think that's where using test cases as the benchmark is so important — in general coding as well: if you were to go about changing the implementation, the result should still be consistent.

Dan: Okay, I see—so part of this is writing a unit test, or whatever kind of testing you're going to do.

Angela: Yeah.

Dan: So it seems like—almost in the beginning—you’re going to need a good set, and then that’s going to help you optimize, yeah.

Angela: And unit testing is not something that's necessarily black or white, because I think even writing good unit tests takes a little bit of critical thinking and some creativity as well. A lot of what good test coverage is about is thinking about things like edge cases—not just that, “Okay, this returns true,” or “This returns false,” but what about the case where it can't return true or false because the inputs are wrong? Are there certain exceptions that you're expecting? Is there certain behavior that you're expecting out of this code too? It's not necessarily just checking an answer.
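The workflow Angela and Dan describe — pin down behavior with tests first, then let an AI tool (or anyone) swap in a faster implementation — can be sketched like this. The function and its two versions are hypothetical, chosen only to illustrate the benchmark idea:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical example: two implementations of the same calculation.
// The original brute-force version you might hand to an AI tool:
long long sumOfSquaresBrute(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += static_cast<long long>(x) * x;
    return total;
}

// The "optimized" replacement (here just a stand-in). Whatever the AI
// returns, the same benchmark tests below must still pass:
long long sumOfSquaresFast(const std::vector<int>& v) {
    long long total = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += static_cast<long long>(v[i]) * v[i];
    return total;
}
```

The tests then check known answers, agreement between the two versions, and the edge cases Angela mentions, such as empty input, not just the happy path.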

Dan: So I think that goes into one of the points I wanted to ask about, which is basically: what do people think AI tools can do that they really shouldn't? Or, like, what are the limitations? One of the points you brought up is that you're prompting it, you're creating test cases, and you're using those to check the output. So I feel like that's one of the — I wouldn't say negative things — limitations of these tools that people need to be aware of: you're still using your judgment and your experience, and that is almost irreplaceable, I think.

And I think that goes into the next question—which is the big question of the entire podcast—which is: Is it going to take my job? I think a lot of people are afraid. What do you guys think?

Riana: Please.

Dan: Please take your job?

Everyone laughs.

Dan: But don't tell the management that.

Riana: No, because I think that even if my role changes, there still needs to be—like we were talking about—validation of the answers.

Dan: Right.

Riana: And making sure that it's not—I mean, even if it ends up at some point being 99% correct, you have to be prepared for that 1%. And there always needs to be someone pulling the strings of it.

Dan: So, some sort of final person responsible for the outcomes if you release it out, right?

Riana: Yeah, like checking the outputs, making sure that they make sense and aren't just incorrect information the AI has picked up from some website. Yeah.

Dan: What do you think about that?

Angela: I don't think—in the near future at least—because what I think AI is, is it's basically a very, very intelligent five-year-old child. It's very intelligent, and it knows a lot of things, but it's lacking judgment. It doesn't have that maturity.

Dan: But do you think it's going to, like, eventually get there?

Angela: Again, I think it depends on context. Because with a general AI tool — like I was saying before — it's pretty good at math, it's pretty good at coding, it's pretty good at language. But it really underperforms in very specific industries. I don't know why. Is it because it doesn't have that context built in? Or is it because it doesn't have enough information to build its models on? I don't know.

Dan: Okay, something interesting to look into, I guess, for the researchers, yeah. Personally, for me, these AI tools, especially ChatGPT, have opened me up to exploring other passions. Because it's very quick — like you said — it will give you, sort of in point form, a lot of information that you would otherwise have to dig up by brute force, reading through all the articles.

So, for example, when I do my own—I don't know—content or something, and I do script writing or something like that, creativity... like, these sorts of things, it can help me bounce ideas. Or I give it a word salad, and it helps me kind of formulate some type of thing.

And I personally don't think it's going to really take our jobs. I think it's going to make our job roles evolve. I think it's a bad idea not to be aware of these tools and not to use them, even if you don't use them every day. I think they're powerful, for sure. But I also agree with what you're saying, and what I've been saying throughout the podcast: you can't not use your judgment. You can't expect it to take responsibility for your actions at the end of the day.

So, yeah. I think the liability is a big question—like when you copy and paste something from AI, or AI is doing a diagnosis in the medical field—these are the types of interesting topics that we can get into.

Yeah, so basically those are my questions. So thanks so much for joining me here for AI Tools in the Geotechnical Industry. I hope you guys had fun, and I'm glad we were able to get your insight on everything.

Riana: Thanks for having us.

Dan: Yeah, no problem.

Angela: Thank you.

Dan: Any concluding remarks? Anything you want to say before you go? No?

Riana: I think that was a nice take you had on AI: that it will evolve, or our roles will evolve, but we can find a way to maybe have more room for our creativity and passion.

Dan: Yeah, I hope so. I mean, like you were saying — “you can take my job” — but if it can take the mundane jobs, is that a bad thing, right?

Riana: Joking.

Dan: Yeah, all right, all right. So anyways—thanks so much again. Take care. These are your drinks—please take them with you. Enjoy them!

Riana: Thank you.

Angela: Thank you so much.

Dan: And yeah — so thanks so much. And those were our last guests for the podcast.

And that’s a wrap for our premiere episode of RSTalks!
A huge thanks to our guests for coming in and sharing their insights.
Huge thank you to you for tuning in.

If you liked this podcast, remember to follow it on your favorite platform—like Apple Podcasts or Spotify. Please rate the show and share it with your friends, family, pets—everybody!

So, until next time...
Can you guess what we’ll dig into next?

Bye!

Music drops as the screen flashes:

Follow us on social media
LinkedIn · Instagram · Facebook · X
@rocscience

Ends with the Rocscience logo on screen.
