November 25, 2024

00:34:14

Chris Hastie - #TrueDataOps Podcast Ep. 42

Hosted by

Kent Graziano

Show Notes

Kent interviews Chris Hastie, Data Practice Lead at InterWorks. Chris’s passion is not only to personally provide elegant and simple solutions to complex problems but also to provide others with the knowledge and tools to do this themselves.

He has been engineering data since 2014 and using Snowflake since 2018 to solve customer challenges and drive value, becoming a Data Superhero in 2019 following a series of blog posts and other community engagement.

He is also a SnowPro Subject Matter Expert and has been involved in most current SnowPro exams.

At InterWorks, he leads a practice of experienced data engineers and architects who share a love for helping customers unlock the value of their data.


Episode Transcript

[00:00:03] Speaker A: Welcome to the True Data Ops podcast. We will start in a few seconds to allow other folks to get logged on to the live stream. Be back in a few. Thanks. Okay. Welcome to this episode of our show, True Data Ops. I'm your host, Kent Graziano, the Data Warrior. In each episode we try to bring you a podcast discussing the world of DataOps and the people that are making DataOps what it is today. So be sure to look up and subscribe to the DataOps Live YouTube channel. That's where you're going to find all the recordings from our past episodes. If you missed any of the prior episodes, now's a good chance to catch up. Better yet, if you don't want to miss any of our future episodes, you can go to truedataops.org and subscribe to this podcast. My guest today is Snowflake Data Superhero, SnowPro Subject Matter Expert, and Data Practice Lead at InterWorks, my buddy Chris Hastie. Welcome to the show, Chris. [00:01:11] Speaker B: Hey Kent, how you doing? [00:01:14] Speaker A: So for the folks that don't know as much about these programs and about you, you want to give us a little background on you and your world of data architecture and Snowflake and all of those fun things? [00:01:27] Speaker B: Yeah, absolutely. So I've been working with data for about 10 years now. My 10-year anniversary, ish, was about 5 months ago. And the whole time I've been working on almost a mixture of behind the scenes on all the data architecture, data engineering pieces, and face-to-face customer analytics, and I think that's really where the joy is. Right. I don't enjoy just being sat behind the screen all the time, but also it is nice to get your hands dirty on some code too. So it's nice having that bit of a mix. Yeah, my general background: I was at a different company for the first three years of my career, but for the vast majority, the last seven years, I've been at InterWorks.
I've gone from an analytics consultant focused on BI and dashboarding through to a data engineer, data architect, and now data practice lead, which I really enjoy as a role. It's just a general nice split between consulting hands-on with customers and delivery, and a nice split of what solutions we work with, partners we work with. And the thing that I find really enjoyable: control over the blogs that we're outputting for the data practice. So yeah, there's a whole load of blogs that we do which we have a lot of fun with. [00:02:48] Speaker A: Cool. Yeah. And you guys, if I remember correctly, InterWorks was one of the first SI partners with Snowflake. [00:02:57] Speaker B: We were. We were with them right from very near the start. We were Partner of the Year, I think, in 2018, globally, maybe 2019. And, yeah, that was a really exciting time. We were getting really involved in, you know, the delivery and the training that Snowflake offer and all that. And we were helping with the Snowflake Rapid Start, and we've still got our own version of that. And that was also what really brought me into the SnowPro SME stuff, because it meant that I was playing around with the SnowPro Core exam, you know, before it was SnowPro Core, and I had the fun of going through the alpha and the beta and all that stuff way back when, and that kind of gave me a lot of just excitement about then doing all the other ones and helping with that. So, yeah, it's been a lot of fun. [00:03:48] Speaker A: Yeah. Now, as you can see, I've got my Data Superheroes T-shirt on. So I know a lot of people are in the Data Superheroes program, but the SnowPro Subject Matter Expert designation, I think a lot of people probably haven't heard of that. Can you explain that one a little bit? [00:04:08] Speaker B: Yeah, sure. So the SME program is more about helping create the SnowPro exams, and you can get involved in a whole load of different ways.
You can be a beta tester, which is how I started out. So it's just sitting the exams, you know, three, four months before they're officially released. And then, you know, it is a beta exam. It might have some bugs, might have some issues. But you're also. I mean, to me, it was fun because I got to sit the exam early and I got to tell people that I'd passed an exam before it was even live, which is just fun. But other than that, the wider SME program is around writing questions and evaluating questions that other people have written. We have, like, groups, so we're working three or four of us at once to go through a pool of, let's say, 10 questions other people have written. And basically it's like an approval process, but we touch them up and modify them, that kind of thing. And it's really enjoyable if you like the element of kind of helping steer on an education piece. But it's also great just if you're looking to be one of those people that is certified in a load of things. I can promise you it's easier to get certified in it if you. [00:05:23] Speaker A: Helped write the exam, one would think. Yeah, so the SnowPro exams really have a lot of community input then, because obviously you don't work for Snowflake, you work for InterWorks. And I think that's an interesting approach. I like that idea, because I know anytime I've taken exams, there's always one or two questions where you go, boy, there's something wrong with this question. It's just not worded right. The answers are vague, and I could see it being two of the three answers, but I know that that's not right. And so getting that input early on must really help, you know, the exam writers and ultimately the people who take the exam when it's released: that a. We'll say a. A real person, a non.
A non-Snowflake employee who's out in the field working with the product has gotten to look at these questions and say, yeah, these make sense, and this is a valid test of an individual's knowledge and what they really know and understand about Snowflake. [00:06:36] Speaker B: Yeah, no, you're completely right. And one of the things that really stands out to me, that I find quite interesting with it, is that I think it's about a 75 to 25 split in the SME program between people that work for Snowflake and people that don't, at least I think it is. And it works really nicely when you're chatting to them. And they'll be pushing, oh, we think this question and this content and this piece, because that is what Snowflake are seeing as a company as being, like, the thing that is their new hot topic that they want to focus on and that they want to push. And it makes perfect sense until you stop and think, well, the vast majority of people sitting this exam may have been using Snowflake for two years, may have reached all these different criteria, but there's no real reason why they might have used that particular piece of functionality if it's a particularly niche thing. Let's take Kafka streaming, for example. Definitely important, definitely plenty of use cases, but there are also a whole chunk of people that never have that need and requirement. So there's no need for the exam to cover it for more than, you know, 5% of the marks or whatever. [00:07:45] Speaker A: So a little bit of a reality check. [00:07:48] Speaker B: Yeah, it's really useful. That's good. [00:07:50] Speaker A: Yeah, yeah. So that makes the exam more practical and more accessible to folks who are out using the product. Oh, that's good. Yeah. Little rabbit hole there, folks. I didn't know too much about that program. Wanted to find out. Chris is the guy to ask. So this season on our show, we're. We've kind of taken a.
Trying to take a little bit of a step back and think about how the world of what we call True DataOps has really evolved and what we've learned the last couple of years. Now, as we've established, you've obviously been a practitioner in the field. You're helping customers take best advantage of the cloud in general and Snowflake, obviously, in particular. So can you give us a little sense of what you've seen in, you know, your career here in the last bunch of years since you joined InterWorks? You know, how has that evolved, particularly with Snowflake? [00:08:49] Speaker B: Absolutely. I think one of the biggest trends I've seen, and it could also be a sign of just me developing individually, but it felt like in the first half of my career, the first five years, everything was a lot more about people having one problem to solve, trying to solve that problem, kind of trying to hack their way or put something together that does it. But then as soon as another problem came along, it was all about, right, now shift to this next thing and almost start from scratch. And I didn't see as many cases where something was being designed and put together to serve multiple different purposes at the same time and effectively just unlock that value. And I think that's one of the main things I've seen. Maybe it's the shift to cloud and, you know, the separation of storage and compute and all the normal marketing points that people say about basically improving our ability to do things. But I feel like people these days approach problems with a wider mindset: not just how can we achieve this, but we have more insight into our wider business needs, this and this and this, and how can we hit the most of those points as early as we can? And I'm not sure really how that's come about, other than I think it's just the ease of doing things means that more people are in the space and playing around.
And by nature, that playing around leads to more people then upskilling in a particular thing and leads to more people challenging each other. And it's that challenge that then means something fits, you know, four projects instead of one. And I think that, yeah, that's one of the main things I'd say that at least I've seen change, that I think has been interesting. [00:10:49] Speaker A: Yeah. So from your perspective, you know, what do you see, what is DataOps really? And you know, how do you see that fitting into this whole evolution and data landscape that we're in? [00:11:04] Speaker B: The way I see it, the purpose of data is to serve the needs of the business. Like, there's no point gathering it all and shoving it somewhere and then never looking at it. [00:11:15] Speaker A: Right. [00:11:16] Speaker B: So, yeah, to me, DataOps is that process of taking that pool of data and helping it meet what customers or what the end user, what the business needs, and the process through that. I mean, because just doing that is fine, but I wouldn't really call it DataOps, just someone being able to access the file. But to me, DataOps is the whole process around doing that efficiently, effectively, and in a way where if it needs to change, we can change it without breaking everything else. If we need to add things, we can add things. If someone leaves the company and someone else has to take over, it's not a huge learning curve. It's all about doing things in a way that performs well but is also easy to understand. I don't want any black boxes in any of the processes I build because it's just going to cause a problem down the line. [00:12:08] Speaker A: Yeah. And as you said, over the years here, you've seen there's more and more adoption of the cloud and data in the cloud, so making that, you know, more accessible and, you know, more resilient, I guess, if you're going to make the data available for the business.
You know, when I started in data warehousing, it was, I'll be generous and say 50% of the time, more of a build-it-and-they-will-come sort of attitude. Yeah, let's get all the data, we'll put it in this monolithic data warehouse and we'll get it all nicely structured and all of that, and then we'll go, okay, point your reporting tools at it. Have fun. That works some, but not as much. And things have certainly changed a lot in the last 20, 30 years with, like I said, the advent of the cloud. Now, as you know, in True DataOps we talk about the seven pillars. For listeners who haven't looked at that yet, you can find that on the truedataops.org website. Just look for Seven Pillars and you can see we have a nice graphic and detailed explanations of all of it. So it's been really four years since we first put that website up and came up with the seven pillars, and Justin and Guy and I and a couple others, we did the Dummies Guide to DataOps. It was, yeah, originally Datalytyx, which, like you guys, was one of the first SIs for Snowflake, starting in the UK and then ultimately all over Europe. So thinking about those seven pillars, Chris, do you think that those still resonate today with all the changes that have happened? [00:13:53] Speaker B: I definitely do. I think if anything they're more important, just because of what I mentioned before about more people now being able to get on and do things. I think the seven pillars are a very good way to guide people towards doing things, you know, the right way. Because I mean, it's not the only way, but it is, in my opinion, a strong process to follow. I like that the seven pillars include a lot in terms of transferability, in terms of things being well documented, well controlled. I think one of the pillars is all about CI/CD and environments, just making sure that you have things deployed.
You know, dev test prod is great, but I'm much more a fan of production and then a branch for this and a branch for this and a branch for this, and kind of, I would say, I guess a more modern approach to it, but I guess other people might just call it a more software engineering approach. [00:14:52] Speaker A: Absolutely, yes. Yeah. [00:14:54] Speaker B: But yeah, I think all of that just makes so much more sense now because cloud platforms have enabled all these things. In terms of, you know, storage being completely separate means that, in Snowflake's example, you can clone entire environments and then do all your dev work, and you don't need to separately manage something as dev or as test. You can just spin them out, spin them back, and the pillars really help keep you within that. And yeah, I first looked at the pillars about seven, eight months ago, and in my semi-arrogant-minded way of thinking, right, let's go and have a look, and I thought, yeah, this matches everything that I think of doing. So yeah, it was almost an anticlimax for me, to be honest. [00:15:51] Speaker A: Were you hoping that there was going to be some revelation there, or were you hoping that you were going to poke holes in it? [00:15:58] Speaker B: I was hoping for both. I was hoping I was going to look and go, oh, I've never thought of doing that. That's a really good idea. Oh, but look, they're not automating their testing. And then, yeah, you have everything. And I think it does, it works really well. And I mean, you and I chatted, we chatted a few times, but I think you and I had a YouTube video or something 2ish, 3ish years ago. [00:16:23] Speaker A: Oh, probably, yeah.
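The spin-out, spin-back workflow Chris describes maps onto Snowflake's zero-copy cloning, where a cloned database shares storage with production until data diverges. A minimal Python sketch of how per-branch clone SQL might be generated; the database name, branch naming convention, and helper function are illustrative assumptions, not InterWorks' actual tooling, and running the statements would of course need a real Snowflake connection:

```python
# Sketch of a clone-per-branch dev environment workflow.
# Zero-copy cloning is real Snowflake behavior; names are illustrative.

def clone_statements(prod_db: str, branch: str) -> tuple[str, str]:
    """Build the SQL to spin up, and later drop, a dev clone of production."""
    clone_db = f"{prod_db}_DEV_{branch.upper()}"
    # The clone shares storage with prod until data diverges.
    create = f"CREATE DATABASE {clone_db} CLONE {prod_db}"
    # Tear-down once the branch is merged.
    drop = f"DROP DATABASE IF EXISTS {clone_db}"
    return create, drop

create_sql, drop_sql = clone_statements("ANALYTICS", "feature_123")
print(create_sql)  # CREATE DATABASE ANALYTICS_DEV_FEATURE_123 CLONE ANALYTICS
print(drop_sql)    # DROP DATABASE IF EXISTS ANALYTICS_DEV_FEATURE_123
```

Because the clone is cheap and disposable, every feature branch can get its own full copy of production, which is the "branch for this and a branch for this" pattern described above.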
[00:16:23] Speaker B: And I talked a lot about metadata-driven ingestion there, and if anything I've just done more and more and more of it since, and I feel like that whole process lines up really nicely with the seven pillars concept. Because, yeah, I don't like anything to be hard-coded. It should all be metadata-driven, it should all just flow and just be able to be recreated and developed and all that stuff. And the pillars meet that really nicely. [00:16:50] Speaker A: Yeah, and you just morphed right into one of the topics I wanted to talk about, which was the metadata. And as you said that, in my background with agile, doing agile data warehousing and things like that, the efficiencies come from generating things, not hard-coding things. Everything you hard-code, if you need to make a change, well, that means you've got to change the code, and you've got to make sure that you change the code right. And certainly the automated regression testing is really, really critical in that. But in my mind, the more that we can generate, the more efficient we can be in building these systems and making them resilient. Now, when we think about what's happening now with AI and gen AI and machine learning, how do you see that kind of coming together? [00:17:46] Speaker B: I'm still firmly in the camp of: AI looks like it will be great, but I'm not yet willing to let it loose on any of my systems in anything more than a controlled way. But I've certainly seen the copilot functionality, which is present in pretty much every technology you can think of now, guide and just smooth over a lot of things. I was working recently in, I forget the name of the technology.
It was a technology that I don't use as often, and they've now got a copilot, and I could do everything I wanted to do. I was still doing my same way of working, but I didn't know the exact syntax, and I could just say, oh well, I want this to automatically fill in the columns from this thing, and I want it to read it and do it and all that stuff. And basically I recreated my metadata-driven approach, and I was able to do that a lot quicker than when I first had to write it, you know, four or five years ago. So from that perspective, I'm excited. I'm just finding, I think we all see it, AI still has the odd mistake, and sometimes they're easily fixed and moved on from, and other times they send you down a rabbit hole for three hours before you realize it's because it just misunderstood quite what you wanted. And you've relied on it that bit too much to question the response yourself, you know? [00:19:09] Speaker A: Yeah. So I think that's one of the things that comes up a lot. We talk about, you know, having a good data foundation. What you just mentioned to me about how you would use AI speaks to governance. Right. We're not just letting it loose; you're talking about using it in a very controlled way, which to me means governance. Right. You're not just turning it loose, pointing an AI at, like, a massive data lake and saying, okay, tell me what the business needs to know. What should we know about this data? And it goes, I have no idea what this data is, because there's not enough metadata. Right? Yep. And I think that's where your comments and your previous experience looking at metadata-driven things come in. I guess the question is, how do we get people to go about understanding that not only do they need good data, but they actually need better and cleaner metadata in order for all this to work?
[00:20:21] Speaker B: That's a very good question, because as soon as you hit the documentation stage, a lot of people's brains switch off until the next fun challenge comes along. And you're right, you need to have things, in the way I mentioned before, so someone else can pick it up and run with it. If you want to properly leverage something like an AI, a gen AI or whatever, it's the same process. You can think of it similarly to a human. You still need to have something that you can give it, that tells it all the bits and pieces, and that is that metadata. And if you've got a catalog, for example, I find if you've filled in your catalog well, then there are a whole load of things you can tell your AI models. And I've seen one example I thought was really nice. It wasn't much, it was about 50 tables over a few databases. But we documented it well, and we put in that time and effort to document it. And we could point one of these copilots at it, and someone can say, oh, I need a query that's going to tell me this and this and this and this. And it can actually just go grab all that, find it and present it. And if you've not gone through the time and effort to put in that previous information, it will probably just tell you something based on a couple of column names and field names that it found, which might be completely wrong and just, again, lead you down that rabbit hole. I think metadata is definitely important. [00:21:51] Speaker A: You mentioned data catalogs as well. That was part of what you just said. Is there a need, I'll say, within an organization, you know, building the data culture, we talk about building a data culture, to get more business people on board with, like, maybe filling in the blanks in a data catalog to increase that understanding?
[00:22:17] Speaker B: Yeah, because I think probably in the majority of data-focused companies you'll still find that there is a data team, and they can speak very well to the structure of the data and how it flows and all that. But yeah, you're right, they have no concept really of how it's being used downstream. They have very little understanding of actually what it means beyond the things they can read. And in my mind, you always need to have somebody for any data piece, or if we use the mesh terminology, for any data product. I think you do need to have a business owner, not a technical owner, that is involved as well, and they can help guide and steer that in the way that makes sure that it doesn't just end up being yet another thing that people could access but don't, because it doesn't meet what they need, and it's just sat there. [00:23:12] Speaker A: Yeah, so again, we're back to the business-IT interface comment again, that conversation. And yeah, I think in the data mesh world you talk about having a product owner, and you know, the product owner should be somebody from the domain team, not necessarily a technical person, and they have the responsibility for not only, you know, in the data mesh world, they talk about them sourcing the data and being responsible for the quality of the data, all that, but just the understanding of the data. What does it mean, what are the terms that we use, what's the taxonomy and the ontology involved in this data? Because, yeah, I could see that with AI, if you use the wrong words in the prompt, words that don't align with the actual semantic meaning of the data, you could go down a rabbit hole thinking you're solving the problem correctly, and in fact you're completely out in left field somewhere. [00:24:22] Speaker B: Yeah, exactly. And it just, yeah, if you don't have that.
I think one of the other nice examples I've seen of having a non-technical person involved is that often they can actually tell you how things are linking in a way that you've never thought of. Like, to use a really rudimentary example, you might have one thing of sales and one thing of deliveries. And I think we can all understand that deliveries are going to match the sales. But for the sake of an example, a data person might go, we produce a sales table, we produce a deliveries table. And it's the business person that comes along and says, well, why don't those tie in? Why can't we do this and this? And, you know, they're the ones using it, they're the ones that need it, and therefore it makes sense that they're the ones that also have some responsibility towards, first, guiding what's there and, second, actually sharing and documenting that same knowledge, so that other people who come to it later can then also see and go, oh yeah, we know it's to do with this. Oh, that's the business logic. Oh, that's the bits and pieces. And then, as we said, one day maybe we can just point an AI at it and it can just consume the whole thing and say, yep, cool, by the way, your profit's going to drop by 2% next year unless you do this, this and this, and we'd all be very happy. [00:25:35] Speaker A: Right? Yeah. And I think that, you know, that's one of the seven pillars: collaboration. And that's really what we're talking about, right? The ability for technical and non-technical people to collaborate on the understanding of the data, the understanding of the business problem, and for the business people to actually be able to access that and contribute to it, like we were talking about with the data catalogs, rather than just telling an IT person something, and the IT person has to go into some arcane tool and code it all in, and, you know, they may or may not transcribe it properly. [00:26:10] Speaker B: Yeah, exactly, yeah.
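The catalog-documentation idea discussed above, fill in table and column comments well, then hand them to a copilot as context, can be sketched in a few lines of Python. The catalog rows and function name here are invented for illustration; in practice the rows might come from INFORMATION_SCHEMA or a dedicated catalog tool:

```python
# Sketch: flatten documented tables/columns into prompt context for an
# AI assistant. All table, column, and comment values are illustrative.

CATALOG = [
    {"table": "SALES", "column": "ORDER_ID", "comment": "Unique sales order key"},
    {"table": "SALES", "column": "NET_VALUE", "comment": "Order value after discounts, GBP"},
    {"table": "DELIVERIES", "column": "ORDER_ID", "comment": "Sales order being fulfilled"},
]

def build_ai_context(catalog: list[dict]) -> str:
    """One line per documented column, ready to prepend to an LLM prompt."""
    return "\n".join(
        f"{row['table']}.{row['column']}: {row['comment']}" for row in catalog
    )

print(build_ai_context(CATALOG))
```

The better this documentation is filled in, the less the model has to guess from raw column names alone, which is exactly the failure mode Chris describes when the catalog work has been skipped.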
[00:26:12] Speaker A: So have you had a chance, you know, the asking-a-question-LLM-style and it generating a query, I know Snowflake's added a little bit of that into their product already. Have you played around with that at all? [00:26:27] Speaker B: I've played around a little bit. As I said before, I don't use AI massively yet in my day-to-day stuff, because I still find that all of these things just tell you just slightly the wrong stuff. It is a lot better, though, and what I've at least found, the Snowflake one's a nice example. I've used it a couple of times. I've told it to do something, it's given me an answer that I've not liked, but it's given me the SQL that I did like, and then I could just tweak it that little bit to get what I really want. [00:26:54] Speaker A: Okay, that makes sense. Yeah, because it is, as you say, it's not a black box. Right. It's actually turning your question into SQL in this case. [00:27:05] Speaker B: Exactly. Because I've seen, I think, a few tools out there that have been around for a couple of years that have been focused more on that natural language process. And I think most of them started off as: you asked a question, it gave you an answer. And then over the span of a year or two, they all slowly started surfacing the workings going on underneath so that people could see it themselves. And I just think that level of visibility is always needed in these things. [00:27:33] Speaker A: Yeah. [00:27:33] Speaker B: And I think we've seen a few other AI models now where, if you ask a question, it will cite its sources. And that kind of approach is just. Yeah, it's for trust. Because you need trust. Right. If you don't have trust in it, then it's useless. [00:27:47] Speaker A: I mean, that's been the history of data warehousing and analytics as well. You know, if they ran a Tableau report and didn't like the answer, then they'd lose faith and decide the entire repository wasn't built correctly.
And this gives you a little more traceability, like I said, a little more visibility. Because you can take a look at it and say, oh, okay, now I see how it came up with that answer: based on the way I worded the question, it picked these five tables, and one of those tables really wasn't the right one. There wasn't enough metadata, there wasn't enough context, there wasn't enough something for it to actually get the right table. And like you said, you can look at the query and go, okay, it's picked from these five tables. Wait a minute, what's that one doing in there? Yeah, no, that's not the right one. It should be this one. And then you can backward-engineer the prompt and say, okay, how could I have asked the question differently so that it would have known that that's what I meant? Because obviously, from what I said, it didn't know. And it could be, you know, the way you worded it, the terminology you used, or it could be the metadata in the repository that it's accessing. Right, all of the above. So yeah, I'm glad you brought that up, because that's definitely one of my concerns around AI and gen AI: the governance aspect of it, and having a human involved to evaluate, you know, is this really the right answer, and for it to be easier to do that. Rather than, as you said, you might go down a rabbit hole for a couple of hours and go, oh, wait a minute, this is not right. So how do we unwind that? Just like in our software development life cycles, the concept in agile: fail fast. Right. We want to find out faster that the AI is taking us down slightly the wrong path, so we can back up and go a different direction.
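The "look at which tables the AI picked" step described above can be automated as a lightweight guardrail: flag any table in generated SQL that is not on an approved list before a human reviews it. This is a sketch only; the regex is a deliberate simplification rather than a full SQL parser, and the table names and approved list are invented for illustration:

```python
import re

# Sketch: human-in-the-loop guardrail for AI-generated SQL.
# Flags referenced tables that are not on an approved list.

APPROVED_TABLES = {"SALES", "DELIVERIES", "CUSTOMERS"}

def referenced_tables(sql: str) -> set[str]:
    """Pull names that follow FROM/JOIN keywords (simplified, not a parser)."""
    return {m.upper() for m in re.findall(r"\b(?:FROM|JOIN)\s+([A-Za-z_]+)", sql, re.IGNORECASE)}

def unapproved_tables(sql: str) -> set[str]:
    """Tables the generated query touches that a human should review."""
    return referenced_tables(sql) - APPROVED_TABLES

generated = "SELECT * FROM sales s JOIN staff_notes n ON s.order_id = n.order_id"
print(unapproved_tables(generated))  # flags STAFF_NOTES for review
```

A check like this supports the fail-fast idea: the mismatch surfaces immediately instead of hours into a rabbit hole.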
I've seen a few technologies out there now where you can kind of point it at something and AI will go and create basically a whole data warehouse out of it. And I think that's really cool when it works. And normally with those things, if it doesn't work, then you can pick it apart a little bit. You can see where, again, they're getting better at not being black boxes. But I'm always worried that someone somewhere will think, all right, cool, we don't need a person involved in this anymore. [00:30:27] Speaker A: We don't need an architect, don't need an engineer. [00:30:29] Speaker B: Yeah, yeah. And obviously they'll need it when it goes wrong, but they're not thinking about that at the time. And we have had a couple of projects already where the whole project has come about because the customer has tried, oh, we don't need any of this stuff, we're just going to go and get an AI thing to do it for us. And they've just run with that and maybe wasted a couple of months just trying and trying and trying, and then they've come to us. And actually, that AI stuff still works really well, but you just have to have somebody to support it. It doesn't have to be us, I'm not saying that, but just someone, a data engineer, a data architect, should be involved in a data project. You can't just have a C-suite person going, right, cool, let's just drop that in. [00:31:11] Speaker A: Well, yeah, that goes back to the seven pillars of True DataOps, you know, the testing and monitoring and the governance: somebody needs to check these things rather than just letting it loose. Right. [00:31:27] Speaker B: And I think that's one of the things I like about the seven pillars: it's not like, do this and then that's it, that's the end. The whole point of the seven pillars is they facilitate this iterative approach in this long-standing, long-term way of not just delivering the thing, but maintaining it and supporting it and ensuring that it keeps going.
[00:31:46] Speaker A: Yeah, right. Things have to scale, and it's not a one-and-done. Things change so much that one-and-done is just not the way anymore. And if we want to continue to grow as an organization and access more data, it's got to be scalable, it's got to be traceable, auditable, all of that, and maintainable. Right. You don't want to have this big, massive 5 million lines of machine learning code that nobody has ever seen. All right, well, we've got to wrap it up now. It's great talking to you, Chris. What's coming up for you? Any events, meetups or anything that you're going to be hanging out at? [00:32:32] Speaker B: I mean, I'm at Snowflake Summit every year. Always feel free to grab me then, because I love chatting to people at Snowflake Summit. It's a great time of year. Last week, actually, was Snowflake BUILD. I was speaking at that. So I think the QR code's on the screen now if anyone fancies catching the recording. That was all about metadata-driven ingestion, which is very on topic for this. But otherwise, yeah, just hit me up on LinkedIn. I'm always happy having a chat, and there's plenty of stuff that people can talk about in this space. So every single conversation to me ends up being interesting and new, which is nice. It's not always just the same thing over and over. So, yeah, reach out and let me know. And the other one is, if you want to check out our blog, we've got a whole load of stuff on our blog in terms of Snowflake, Tableau, data in general, Matillion, Dataiku, dbt. There's a whole load of stuff. So, yeah, go take a look. [00:33:26] Speaker A: Very good. All right, well, thanks, Chris, for being my guest today. It's always good to catch up with you, and your insights and your experiences, I'm sure, are valuable for all of our listeners. So.
And thanks, everyone who's online joining us today, and those of you who are going to watch us in replay. Be sure to join me again in two weeks. My guest will be the CTO of FWD View, Carl Ferrer. As always, be sure to like the replays of today's show and tell your friends about the #TrueDataOps podcast. And don't forget to go to truedataops.org and subscribe to the podcast so you don't miss any of the future episodes. Until next time, this is Kent Graziano, the Data Warrior, signing off. For now.
