Energy News Beat
Energy News Beat Podcast
Building Trust in AI: Data Squared’s Breakthrough for Energy & Defense

In this episode of Energy Newsbeat – Conversations in Energy, Stuart Turley speaks with Jon Brewton, CEO of Data Squared, about their groundbreaking work in AI and data management. Jon explains how their patented system eliminates AI hallucinations, making AI more trustworthy and transparent for industries like defense, energy, and engineering. They discuss the challenges of scaling AI, its applications in energy grid management, and the value of integrating various data types to optimize decision-making. This conversation sheds light on how AI can drive real-world solutions with reliability and explainability.

This is huge. With new patents in hand, Data Squared is helping add accountability to AI. I have been learning a great deal about AI through the podcast series on AI and Data Centers, and it is significant.

The U.S. government and energy sectors will find Jon's company and his resources critical. For the U.S. to achieve energy dominance, it must also be AI Dominant. I applaud the accountability that Jon's team has successfully implemented.

Jon posted on his LinkedIn the following:

The biggest problem with AI isn't what you think. It's not speed or capability; it's trust.

I just had an incredible conversation with Stuart Turley on Energy Newsbeat about solving AI’s accountability crisis, and it reinforced something I’ve been thinking about for months. Most AI operates as a black box, giving you answers but no idea how it arrived at them, which is dangerous when you’re making critical decisions in defense, energy, and engineering. We’ve patented a system that eliminates AI hallucinations through complete transparency, where every insight traces back to its source and every recommendation shows its reasoning.

Stuart understood immediately why this matters, putting it perfectly: “For the U.S. to achieve energy dominance, it must also be AI Dominant.” But here's the thing: unreliable AI is worse than no AI at all.

Our platform is already transforming how organizations handle data chaos, from energy grid management to defense applications, turning disconnected information into actionable intelligence with full accountability. Instead of hoping your AI got it right, you can verify exactly how it reached its conclusions, which changes everything about how you can trust and deploy these systems.

The conversation revealed something crucial about where we are in this AI revolution: we're not just building better AI, we're building trustworthy AI. That's the difference between technology that impresses and technology that transforms entire industries.

Stuart's team gets it because they've been covering the intersection of AI and energy infrastructure, and they see what we see: the future belongs to organizations that can trust their AI to make mission-critical decisions.
What’s your biggest concern about AI reliability in your industry?

Check out

https://data2.ai/

Also connect with Jon on his LinkedIn here: https://www.linkedin.com/in/jon-brewton-datasquared/

Thank you, Jon, for stopping by the podcast. Your leading the charge in AI security is critical for the United States. - Stu

Highlights of the Podcast

00:00 – Intro

00:39 – What Is Explainable AI?

01:11 – The Problem with AI Hallucinations

03:15 – AI Systems and Trust Issues

06:21 – Data Squared’s Solution

08:32 – Real-World Applications in Energy

10:35 – Filtering Out Inaccurate AI Outputs

12:28 – Beyond Analytics: Building Trustworthy AI

16:12 – The Echo Chamber of AI Training Data

17:57 – Who Can Benefit from Data Squared’s Technology?

18:49 – Opportunities in the Utility Sector

20:08 – Achieving a Patent in AI Technology

21:06 – The Impact on Defense and Engineering

23:25 – The Role of Military Experience in Innovation

26:22 – The Unlikely Team Behind the Innovation

28:29 – Looking Ahead: Future Applications in Energy

29:01 – Closing Remarks

Sponsorships are available for the AI and Data Center in the Energy Series.

Stuart Turley [00:00:07] Hello, everybody. Welcome to the Energy News Beat podcast. My name's Stu Turley, President and CEO of the Sandstone Group. AI is not only here, it's growing like you wouldn't believe. And I happen to have Jon Brewton. He is the CEO of Data Squared. He is a regular now on the podcast. This is part of our AI and data center series that we have going on the Energy News Beat podcast, and I mean, this is some exciting news that they've got from Data Squared. Welcome, Jon. Thank you for stopping by the podcast.

Jon Brewton [00:00:39] Thanks for having me, Stuart. Really appreciate the opportunity to come on and have another conversation about AI and energy.

Stuart Turley [00:00:45] I'll tell you what, this is some exciting news. You just had a story go out on DefenseScoop. This is actually very cool: "Military vets patent hallucination-risk-resistant, explainable AI technology." I'm just sitting here, and I went to Oklahoma State, and I was scratching my head, because I learned how to program Fortran on card keypunches. I was scratching my head on this one. What does this mean?

Jon Brewton [00:01:11] Well, what it means in reality, I think one critical distinction needs to be made. AI as the industry understands it really runs through companies like OpenAI, or Anthropic, or Gemini from a Google perspective, or Grok. And those are the systems that we use to engage with AI, sort of in a commercial sense. If anybody's using AI today, they're probably using those things. We developed a thing called the Review Platform. And the Review Platform is essentially a platform that allows us to take those specific systems and those specific capabilities and apply them with trust, transparency, reliability, and, I'll get to explainability in just a second, to industries that are high-reliability in nature, low tolerance for failure, high cost for failure, like intelligence, like defense, and more importantly, like energy and engineering. And so what we figured out is really important. Those systems kind of work the way that they work, but they work in a way where their core operating function is to generate an answer. So if you ask ChatGPT or Claude or Grok a question right now, you're going to get an answer. In almost every sense, anytime you ask, you will get an answer. That's the objective function of that system. It does not mean that that answer is correct or grounded in reality. And so this is like the core trust problem with these systems: they tend to make things up from time to time. And whenever they do, it's really hard to understand what they're making up, because it looks so plausible. They'll even have manufactured links in answers that say, this is where I pulled this information from. But in many instances, those are just fabricated links, because they know that they need to produce a link, but maybe they don't have the right training data or the right understanding of the question you asked. And so they just produce something, because that's the function of these systems, and that's hallucinations.

Stuart Turley [00:03:15] So AI can be a politician and lie?

Jon Brewton [00:03:19] A hundred percent. It's actually designed in exactly that capacity. That's a great analog, actually. You know, how do you say no without saying no? This is sort of the core objective function of these systems. How do you say yes, and always say yes, is probably the alternative there. Right. They have a problem with reality. And the reason they have a problem with reality is, one, the objective function of those systems is to produce an answer. But two, the training data that they're trained on is, let's call it, an inch deep and a mile wide in concept. There's a lot of high-level information about subject matter areas that are easy to ascertain and easy to contextualize. But when you and I ask a question, as an example, we use the English language. The English language is really nuanced. And so whenever you say "bits," as an example, what are you talking about? In our industry, in oil and gas, are we talking about drill bits? Or are we talking about computer bits? The nuance in words has to be parsed out by these computer systems. And so the burden of the complexity that you put into the system when you ask a question really starts to hinder how it recalls information and why it recalls certain information. The bad part about these systems is not only that they make stuff up at a pretty prodigious rate, quite frankly, but that they also don't explain how they arrived at their answers. And so at Data Squared, we saw this as an opportunity. So instead of trying to build a new language model, or trying to build a new system that would compete with Grok and Gemini and OpenAI, all these systems, we decided to take a different approach and try to build a systematic approach that would allow us to transfer data in a really prescriptive way that would ground these models in that reality.
And so that's kind of like part one of our whole patent is we prepare the data environment in a way that it helps these systems understand the context for that so it doesn't have to guess. And it grounds the reality and focus into a narrow lens. So instead of having the aperture be this wide, we're really looking through, you know, a three inch hole. And so we're really narrowing their focus in a way that makes the transaction between our system and our data and their operating function, effective and transparent. And so that's we, we really raised the confidence level and the answers that you get. And we have full cycle explainability of what data was passed to the system, how that data was used, what was returned from that system, how it actually compares to the fact basis of the information that we have defined and how well the system is answering that question within the context that we're trying to sort of problem solve in. And so we have this really robust layer that we can put on top of those things and make them all work better.

Stuart Turley [00:06:21] For big-system data modeling, I'm sitting here thinking, let's say you've got 1,800 pads in the Permian, and those 1,800 pads are bringing in X number of data points from all these different SCADA points. I could see this coming back to the main office in Houston, and now you've got an overlaying thing. How is this being validated? But then you can also look at exterior data, lay your information on there, and have a validation check. Did I get that right?

Jon Brewton [00:06:53] Pretty much, yeah. And look, we're trying to make this really easy for people. Now, one of the things that we do that's fundamentally different than any other company is we use a graph database model as the grounding mechanism and the source of truth. And we talked a little bit about this last time. There are the relational databases that we're all familiar with, you know, rows and columns. And then there's this graph representation, sort of a 3D, very tangibly connected data model. And what that buys us is the ability not to re-engineer data. So whenever we build this model, we see the context for how all these parts fit together, why they really fit together, and what it means for different elements of this information to be pulled into, say, any question that we would ask of the system. That really solves this sort of fundamental trust problem that we have with any of these AI systems and the adoption of these AI systems at scale for areas like energy and engineering. Luckily, we're in a good position with strategic partners like the US, Microsoft, Dell, NVIDIA, WWT, General Dynamics, and Neo4j, that can really help us start to build a broader and more prolific path to market. And leveraging their brand equity gets us into a really good position where we can sort of use the sweat equity that they've built over time and position ourselves as sort of an additive feature. But the key thing is we're not just another analytics company. We're not just another AI company building a large language model. We are a company that made AI trustworthy and deployable for mission-critical decision-making. And that's kind of like the real big difference here; it's something that helps everyone.
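The graph-as-source-of-truth idea can be illustrated with a tiny in-memory graph: entities are nodes, typed relationships are edges, and an answer comes back with the traversal path that justifies it, so provenance is never lost. This is a minimal sketch; a real deployment would use a graph database such as Neo4j (which Jon names as a partner), and the asset names here are invented.

```python
# Toy graph "source of truth": entities as nodes, typed relationships as edges.
# An answer is returned together with the path that justifies it.
# Illustrative only; not the actual Review Platform data model.

edges = [
    ("Well-12", "LOCATED_ON", "Pad-7"),
    ("Pad-7", "MANAGED_BY", "Permian Ops"),
    ("Permian Ops", "REPORTS_TO", "Houston HQ"),
]

def neighbors(node):
    return [(rel, dst) for src, rel, dst in edges if src == node]

def trace(start, goal):
    """Depth-first search returning the full relationship path as provenance."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for rel, dst in neighbors(node):
            if dst not in path:
                stack.append((dst, path + [f"-{rel}->", dst]))
    return None  # no connection: the system can say "I don't know" instead of guessing

path = trace("Well-12", "Houston HQ")
print(" ".join(path))
```

Because the relationships are explicit, the question "how does Well-12 relate to Houston HQ?" is answered by a concrete chain of facts rather than a statistical guess, which is the grounding property Jon is pointing at.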

Stuart Turley [00:08:32] Just as a personal note, I write maybe 15 articles a day, and of the articles that I write, I either use ChatGPT or any of the others that you mentioned, or Grok. And if you don't know the material, you do get crap back. I mean, it's not a technical term, but you get crap back. And it's like, you can't just put this crap out there, because your reputation is on the line. So you're the crap filter, if you would. I mean, I think crap is okay to use on the podcast and is not a swear word, but you're the crap filter for enterprise-wide kinds of things. And that is huge.

Jon Brewton [00:09:14] Yeah, I think one of the reasons we are that filter is the way that we prepare data. Like I said, we kind of started out with this process because of our background in the military and intelligence and engineering, to say, you know, how can we design this high-reliability system where we can integrate a lot of different types of information and create these explainable, hallucination-resistant outcomes? At that point, it really relies on and necessitates the use of an integrated, unified, harmonized data model. That means taking cross-functional data from the production department, the completions department, the wells department, the planning department, all of our midstream departments, how we're moving barrels from one aspect of the business to the other, how we are exploiting and building capacity in those barrels, and taking all that data together, putting it in this structured, unstructured, time-series harmonized data model. And that creates this ability to filter the crap out of the system, because now we can call balls and strikes. Now we can say what is right and what isn't right, and not only what is and isn't right, but why it is not right or why it is correct. And that's the most important thing: it gives us that performance layer where we can validate the thinking of the AI against the thing that we're trying to solve for with our subject matter expertise.
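The harmonized data model Jon describes, cross-functional records of different shapes unified under a shared key, can be sketched as follows. Structured rows, time-series points, and unstructured notes are grouped per asset and tagged with their source, so one question can draw on all of them. A toy illustration; the department names and fields are hypothetical.

```python
# Toy "harmonized data model": structured, time-series, and unstructured
# records from different departments unified under a shared asset key,
# each tagged with its source. Illustrative only.

from collections import defaultdict

structured = [{"asset": "Well-12", "dept": "planning", "budget_musd": 4.2}]
timeseries = [{"asset": "Well-12", "dept": "production",
               "ts": "2025-03-01T00:00", "oil_bopd": 850}]
unstructured = [{"asset": "Well-12", "dept": "wells",
                 "note": "Tubing leak suspected above packer"}]

def harmonize(*sources):
    """Merge heterogeneous records into one per-asset view, keeping provenance."""
    model = defaultdict(list)
    for source_name, records in sources:
        for rec in records:
            model[rec["asset"]].append({"source": source_name, **rec})
    return dict(model)

unified = harmonize(("structured", structured),
                    ("timeseries", timeseries),
                    ("unstructured", unstructured))
# One asset view now spans planning, production, and wells data.
print(len(unified["Well-12"]))  # 3
```

The "source" tag on every record is what later lets the system say not just what the answer is, but which department's data it came from.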

Stuart Turley [00:10:35] You're going to eliminate the umpire getting beat up at a board meeting. So the IT department comes in, and instead of the CEO firing his whole team, you're going to stop that, because the IT team is going to be able to go, "Hey, look, the umpire was wrong."

Jon Brewton [00:10:54] That's 100% right. Now, it's funny, when we say explainability and trustworthy, that doesn't mean that every answer in the system is inherently correct. What it does mean is that we have enabled the system to explain how it arrived at its conclusions, what data it used to do that, and to really put a value on whether or not the conclusions it came to are really what we were looking for, or whether they are correct at the end of the day. And that's the missing piece from any of these AI systems. No matter what you look at, where you look, all of them are roughly the same. A lot of them use a single mode of data. It'll be unstructured data, or text, something from a PDF. They don't use time-series data, OT data from SCADA sensors. They don't use our financial transactional history, our signals intelligence, or our structured data, which in many instances is planning or spreadsheet data, simultaneously. Like, that's a real bottleneck for performance and scale for those systems. And that's another thing that is really fundamental to what we do. We say, we don't care about the data types. We're going to put it all together, and we're going to give you an opportunity to interface with AI in a way where that stuff is linked tangibly together, so that we can understand how Department A is making decisions, what that means for Department B, and how that filters into our profitability as a company, as an example. And that's really hard to do. It's impossible to do within the construct of how these systems are designed today. I think that's the real important part of what we brought to the market.

Stuart Turley [00:12:28] This is huge. I get excited when I'm just kind of talking about this. And one of the things that I've found, just on a consumer level, drawing it back out of the enterprise and going to the consumer, is that when I write articles and then I reference my own website or my own Substack as the authority, I am trying to see what kind of answers I get back. Like in the last five days, I've written five different articles on why Gavin Newsom's energy policies are destroying California. Just as a side note, I did not know that California was importing Russian oil after it had been shipped to India, refined into gasoline and diesel, and then imported. And that's good for the environment. But the side benefit I have noticed is that when I went ahead and had AI take a look at my sites as an authority, I've noticed an increase of AI bots on my site, reviewing all of my stuff. So who's training whom? Am I training AI, or is AI training me?

Jon Brewton [00:13:40] It's such an interesting question. And it's an astute observation, quite frankly. People are not picking up on this, because there is this echo chamber effect that we saw from traditional algorithms in, say, social media, where we stopped getting both sides of an argument and started only getting one, because that's what we engaged with. And a lot of that still persists today in the way that these things work. AI is no different. And if we start thinking about whether the AI is being trained on human-generated content or AI content, well, that starts to become a real problem, because we start to see these self-fulfilling prophecies around how answers are generated and what answers are provided, because there's a lot of groupthink in these models and the way that they process information; they're very generic or generalized in nature. So that's how you can really start to tell at filter layer one: is this a subject matter expert answer, or is this a BS answer generated by an AI? But if that AI answer gets filtered into our database and knowledge and then becomes a part of that, without being able to clarify whether it's AI-generated or human-generated, now what are we training the future of our system on, the future of our engineers on? Is this a generic AI takeaway for how we should operate assets? Or is this an informed opinion from a subject matter expert who knows what they're talking about and has been doing it over a 30-year period? And that becomes really hard to delineate. That's another feature of what we built. We knew for a fact that we needed to chart the answers that were being generated from the system, the value contribution of those answers, and how that information got stored back into the data models that were used for decision-making in the future.
We had to categorize stuff as either being generated by humans or being generated by AI, and systematically being able to do that really does start to build fluency and awareness into how answers are being generated, and what that means for whether or not they're correct, whether they're generalized, and whether or not it's just a bot training us at the end of the day. Which is what we'd like to avoid. And so traditional systems can't do that. Our system does it inherently. It's sort of another, let's just call it, systematic safeguard that we built into the platform. We want to make these things really easy for people to use, but we also want to be able to, at any point in time, say: within the information we've stored over the last year, how much of that information was generated by bots or AI, how much was generated by people, and then how much the system is leveraging either bucket to generate answers today. And this is really important.
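The provenance safeguard Jon describes, tagging every stored answer as human- or AI-generated so you can later measure and filter what feeds back into decision-making, reduces to a simple ledger in concept. A toy sketch; the records and helper names are invented for illustration.

```python
# Toy provenance ledger: every stored answer is tagged with its origin, so we
# can later ask "how much of our knowledge base is AI-generated vs
# human-authored?" and filter what trains future decisions. Illustrative only.

ledger = []

def store(text, origin):
    assert origin in ("human", "ai")  # refuse untagged content
    ledger.append({"text": text, "origin": origin})

store("Choke back Well-12 to manage drawdown", "human")  # 30-year SME opinion
store("Consider reviewing well parameters", "ai")        # generic model output
store("Packer replaced 2024-11; leak resolved", "human")

def origin_share(origin):
    """Fraction of stored knowledge coming from the given origin."""
    return sum(r["origin"] == origin for r in ledger) / len(ledger)

# Feed only human-authored expertise back into future training, if desired.
human_only = [r["text"] for r in ledger if r["origin"] == "human"]
print(round(origin_share("ai"), 2))  # 0.33
```

The key design point is that the tag is applied at write time; trying to classify human vs AI content after the fact is the hard problem Jon says this sidesteps.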

Stuart Turley [00:16:12] Who could use your service? You've got a lot of business going on with the government. You've also got other ones, but I already know an answer to this, 'cause I know about 16 companies that could use your information. But who do you think needs your products?

Jon Brewton [00:16:28] Yeah, I think at the end of the day, anybody that's trying to build capacity and capability with AI. And that's a really broad statement, but I'll clarify just a little bit further. Anybody that's tried to use these systems can get general functionality out of them on an individual basis. So you or I, Stuart, could log into these systems and use them at work on a day-to-day basis if our company allows it. And that's okay, but that's not a systematic construct for scaling that capability across your business. So I think anybody that's trying to develop solutions, regardless of the use case or the industry, and I know we're talking about energy here, that wants to ensure they have a system that's trustworthy, that's traceable, that's auditable, and I think more importantly at the end of the day, scalable, those are our targets. And right now that's a lot of filtering on our side for high-reliability industries, because we know that's the first place where bottlenecks are going to be created when people try to scale these technologies, and it's because of trust and transparency. And so for us, it's if you're working in energy, if you're working in finance, if you're working in law enforcement, if you're working in intelligence. But those aren't the only places we can apply these things. It's just, where's the best bang for the buck if we're trying to build scalable solutions that we can deploy into an oil company? It's really kind of anywhere.

Stuart Turley [00:17:58] I can also see your company being critical in the utility space. Absolutely critical, because we are seeing in the energy utility space a dramatic change over the next five years. We're going to need to see decentralized management of the grid, and microgrids rolling in to help articulate and stave off blackouts from increased demand coming in on a poorly run centralized grid. This is a global issue. And I see a gigantic opportunity for Data Squared in that area. And I'm sitting here kind of going, where do I sign up to be a sales rep for you? Because this is absolutely huge.

Jon Brewton [00:18:49] No, so energy at scale is a really good application area. But to your point, a lot of the early validation work we did, we worked with a company in New Zealand to look at grid management solutions, grid optimization solutions. You're doing well here, because you're picking good areas that could create a lot of value. And I think that's the more important thing: do we have clear line of sight to the value that we can create with a very systematic, AI-enabled workflow in grid management, grid optimization, spot market optimization? How do we sell and buy the utilities that we generate or that we disperse? How do you micro-size that to a given grid infrastructure? How do you load balance across your network? These are all areas that are really important. And then on the preventative maintenance side, it's, how do we maintain the uptime on the network in the best and most proactive way possible? And these are all applications for the utility space that work exceptionally well. Exceptionally well.

Stuart Turley [00:19:50] That's crazy. I'll tell you, this is kind of cool. I am just excited. I'm going to have the link in the show notes to this article. 'Cause again, congratulations, Jon, to the entire team at Data Squared. This is huge. Getting a patent out there is not easy.

Jon Brewton [00:20:08] No, it's not easy. Especially in this space, anything in the machine learning or AI space gets a very high order of magnitude of review whenever they look to validate some of these things. We got to this really early. We filed our initial patent on November 8th of 2023. We were two months old as a company. We had figured something out that nobody else had figured out, and we got to it really early, so I'm really proud to say we're the first and only company with a patent for how to reduce or eliminate hallucinations and, more importantly, how to build explainability into how these systems work today. And I think that's the most important part of what we've done. And it just creates an opportunity for AI to be applied into different industries and to start to create value in areas where how we got to the answer and the data that we used to generate that answer really matter. And, you know, that's a core outcome here.

Stuart Turley [00:21:06] I mean, this is something that is critical. I can see this in defense as absolutely mission-critical, and they ought to write you a check today. As a taxpayer, I would not mind, because I could see you saving lives with this.

Jon Brewton [00:21:21] Yeah, it's saving lives. And look, we're veterans. My whole company leadership team, we represent all the sort of flavors or colors of the DOD rainbow. You know, I'm an Air Force guy, we have a Navy SEAL veteran, we have a Marine Corps veteran, we have an Army veteran. And, you know, it's core to kind of how we developed the technology and why we developed it the way we did, because we want to apply it to situations where we can save people's lives, prevent problems, optimize how we do our work, eliminate collateral damage, and make sure that we have a clear and full-cycle understanding of not only what we intend to do and how we intend to do it, but what the outcomes might be, what the reactions might be from some of the enemy combatants or some of the people in communities we're engaging with. And those things are so important for eliminating damage at scale. And you can apply the same construct to engineering: how we're engineering solutions, how we have real trust in what we built, how we validate that. The more we can raise our understanding and our efficacy rates in the use of this technology in those workflows, the more value it's going to create and the more damage it's going to eliminate. And that goes to, like, fraud, waste, and abuse in the government. You know, a lot of the conversations we have today with the government are centered on how we can identify fraud, waste, and abuse and optimize our understanding of, say, the VA, and the process around how information comes into the VA, what claims are vetted, how they're vetted, what's approved, how that gets recycled into the system, and just creating this sort of interface layer where we can kind of see that whole workflow in one location and at one time. And if we can see it in one location at one time, the AI can see it.
And when we start to apply that to AI, its ability to parse out opportunity and weakness within the structure of that entire value chain is otherworldly. But you have to prepare it the right way. And so, yes, we're working with the government to try to figure out ways to eliminate fraud, waste, and abuse, increase our mission efficacy, reduce our collateral damage, and reduce risks to the mission and the force. That's a big part of what we're doing.

Stuart Turley [00:23:26] This is really cool, but on a personal note, I want to know who wins the smackdown, the slapping around and talking and bragging rights in the office between the military branches.

Jon Brewton [00:23:41] I mean, look, you know, at the end of the day, we all know, and everybody in the DOD knows, that the Air Force sort of leads the way in smarts and application. There's brute-force application, and that's either the Marine Corps or the Navy SEALs. I guess you could call the Army something similar there, too. But if we want to operate in a really smart and effective way, we sort of need that Air Force lens over everything that we do. It's really strategic in nature, and it's important to delineate between, you know, let's call them hammers and nail guns. So, the Air Force is the one to go with.

Stuart Turley [00:24:09] It sounds like a CEO kind of answer to me, man.

Jon Brewton [00:24:13] Yeah, hey, look, we do give each other a lot of grief about, you know, our military service, and it's a fun part of the company and the people that we built it with. You know, this sort of shared history and shared experience really defines our connection and how well we work together, and I think that is probably how we got to where we are, because this is just anecdotally funny: we're the least likely people to have solved this problem. You take a drilling engineer, an accountant, a data systems manager, and a Navy SEAL, and you throw them in a room and say, why don't you solve the most pressing problem in the world for scalability and application of AI? The likelihood of that group of people coming up with that answer is really, really low. But what that allowed us to do is sort of think about this problem in a different way. Really extrapolate this problem, extract our understanding of our industry experience, the things that we lived through, the problems that we had, and how to solve them. And it all came down to an engineering perspective at the end of the day. It's like, how do we get high-quality, high-reliability answers in anything that we do? And it has a lot to do with how we do the work to get to that answer. And so that's really what we built a system around. And I'll be completely transparent: when we built it, we were like, this is neat. We didn't think it was transformational, and we didn't think it was special when we introduced it to a patent attorney, which was sort of a check-the-box exercise for us, because, again, we didn't think we'd done anything very interesting. We just thought it was neat and a new way to sort of think about things. Their exact words were: if we get this right, you guys should be on a beach, retired, in five years, because this is the coolest thing we have ever seen. And nobody is thinking about solving this problem this way. This could change AI forever.
And that's a light bulb moment where you go, like, oh, neat. But if the story starts with a drilling engineer, an accountant, a data systems manager, and a Navy SEAL walk into a bar, the outcome of that story is a punchline. It's not what we ended up with. And so it's a really interesting case study in kind of taking a different approach, thinking outside the box, and applying different industry lenses and experience to solving problems that are unique.

Stuart Turley [00:26:22] Well, Jon, you've absolutely made my day. And you want to know one of the main reasons that the United States did win the war in Germany? The GIs were taught to think independently. And you, again, validated that independent thought process and camaraderie between the military forces to solve the problem. I love it. That's just made my day. So how do people find Data Squared?

Jon Brewton [00:26:51] Yeah, so you can look us up at data2.ai. That's our website. That'll give you an idea of what we're up to, who we are, and where we play. You can find us on LinkedIn; the handle is data2us, so D-A-T-A, 2, U-S, and that will be us. Then you can find me, Jon Brewton, J-O-N B-R-E-W-T-O-N, on LinkedIn, and those are the primary places that you'll get information and updates from us as a team. But yeah, if you're interested in learning how to use AI, use it effectively, and create real value with respect to your business and what these capabilities have to bring to you, don't hesitate to reach out.

Stuart Turley [00:27:34] I'm sure hoping to get to see you in person, 'cause with our DC tour that we're trying to put together, we may cross paths and have the opportunity to do some live podcasts face to face. So I'm looking forward to those. And honestly, I'm looking forward to our next one in this series, because this is a series on AI as it is changing, and who's training whom is really a pretty good question now. So thank you. Thank you for the podcast.

Jon Brewton [00:28:04] Yeah, no problem. Just one last note: in the coming conversations, I'm really going to focus on direct applications in the energy industry, things that we've done, how we've created value, how we've been successful, things we messed up so that people can learn from these things. And I'm looking forward to getting some of those results in front of you, because it's just going to look and feel different than anything anybody's seen so far when it comes to AI in the industry.

Stuart Turley [00:28:29] And I'd also throw the offer out to your staff, anybody else on the team that wants to hop on to any of these in the future, that would add value. If they're just going to smack me down because of my good looks, I don't need that. But I'd love anybody else on your team on here too, because this series is critical, and I recognize that Data Squared is different. And that's why I am so excited to have this series going on. So again, thanks for your time.

Jon Brewton [00:29:01] All right. Thank you. Really appreciate the conversation.
