SepaIQ: AI-Driven Insights for Smarter Manufacturing

62-minute video


Tom (00:00:03):
Everyone, thank you for joining us today for AI Driven Insights for Smarter Manufacturing. So I’m Tom Hechtman, the CTO and founder of Sepasoft, and I’m joined here with Doug

Doug (00:00:18):
Brandl. I am an MES solutions engineer with Sepasoft.

Tom (00:00:25):
So what we’re going to be going over today: a lot of you are familiar with Sepasoft, but in case anybody out there isn’t familiar with Sepasoft, and I promise we only have one slide, it’ll be brief, and then we’re going to go into the current state of the industry, and then we’re going to spend most of our time in a live demo. This isn’t prerecorded, so Doug is going to be doing that. And then we’re going to talk about what parts of SepaIQ, our new product, we used in that live demo and how that can help companies. And then we’ll go briefly over an onboarding process that we’re offering, and then we’ll finish up with a summary and Q&A. So Sepasoft started in 2010 developing MES modules for the Inductive Automation Ignition platform. We’ve done that for many years and gained a lot of experience. Our modules deal with controlling production on the plant floor, making sure you’ve got the e-signatures, the right material, all that kind of thing. We’ll explain the difference between that and SepaIQ a little bit more later. But we have a thousand-plus implementations using our products worldwide, more than 200 MES-certified system integrators worldwide, and 10 distributors in foreign countries. And we’re a privately held corporation with no outside investors, which just means we continue to innovate and we’ll be here for many years to come supporting our customers.

(00:02:07):
We’re going to talk a little bit about some core concepts. Sometimes there are misunderstandings on these, and we’ll just touch on them, not a deep dive at all. So what is advanced analytics in manufacturing? Well, it’s more than just having a trend chart from a historian or some basic SPC information. In manufacturing, you have the relationships between different events and the related collected data, and it’s bringing that all together in a meaningful way to be useful, to enhance your efficiency. What is machine learning? Machine learning is really just math algorithms that look for patterns in data; we’re going to be looking at patterns in manufacturing data today. But an analogy is credit cards. I have my patterns for using my charge card: where I use it, when I use it, time of day, all kinds of information. The credit card company can look at those patterns and say, oh, here’s a new charge, it doesn’t follow Tom’s patterns, it’s probably fraudulent. They’ve been doing that for years and they’re very good at it. But we’re going to take a look at what this means for manufacturing, and then we’ll take a look at large language models. Think of chatGPT, Copilot, things like that. Those are large language models, and we’ll look at how we can capitalize on them in the manufacturing space.
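The pattern-matching idea behind that credit-card analogy can be sketched in a few lines: learn a baseline from history, then flag values that fall outside it. This is a toy illustration only, not Sepasoft code; real models use far richer features than a mean and standard deviation.

```javascript
// Toy "learn the pattern, flag the outlier" example:
// baseline = mean and standard deviation of historical values,
// anomaly = a new value more than k standard deviations from the mean.
function learnBaseline(history) {
  const mean = history.reduce((sum, v) => sum + v, 0) / history.length;
  const variance =
    history.reduce((sum, v) => sum + (v - mean) ** 2, 0) / history.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

function isAnomalous(baseline, value, k = 3) {
  return Math.abs(value - baseline.mean) > k * baseline.stdDev;
}

// Typical charge amounts for this card...
const cardBaseline = learnBaseline([42, 38, 45, 40, 44, 39, 41]);
console.log(isAnomalous(cardBaseline, 43)); // fits the pattern -> false
console.log(isAnomalous(cardBaseline, 900)); // far outside it -> true
```

The same shape applies to manufacturing data: swap charge amounts for cycle times or reject counts.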

Doug (00:03:42):
What do large language models do? How do they function?

Tom (00:03:45):
So they’re just processing words. They can decipher the words that we type in, get the underlying meaning of them, and then they can start predicting the next words we’re going to use. And so they get pretty exciting, but they have some shortcomings too. So we’re definitely looking at that in the manufacturing space, and we’ll talk about some of the shortcomings. And then artificial intelligence is really just making a computer act like a human

Doug (00:04:19):
And if you put enough electricity into a rock, it starts to act like a human.

Tom (00:04:23):
Well, we haven’t tested that, but you might be right.

Doug (00:04:27):
That’s not on the demo today.

Tom (00:04:28):
Okay. Yeah. So AI is kind of the overarching term. In fact, LLMs and generative AI, what they call generative AI is really the LLM, and there’s a lot of advancements and a lot of new technology coming out. We’ll definitely have our eye on it as time goes on. And then not too much on data lakes, data warehouses, and data lakehouses, but a data lake is unstructured data: images, PDFs, textual data, whatever, just throw it in there. It’s just a melting pot and can frankly be hard to make sense of. And those are usually used to feed LLMs, by the way. A data warehouse is more structured; think of a spreadsheet or database, that kind of thing. A data lakehouse is a combination of those two; it could take structured or unstructured data. I want a data lakehouse, a lake, a house by the lake. Yeah, that’d be nice.

(00:05:28):
So the current trend that we see in the industry right now, talking to lots of companies, is they want to take advantage of AI. They say AI, but it’s really all of it: advanced analytics, predictions, the whole thing. They want to start improving their efficiencies, reduce their costs, be more competitive in their marketplace, and they want to take advantage of it. And I’ve heard of many cases where that comes down from the CEO, and it makes sense. So there’s definitely a demand for that. We also see, and this actually started before AI officially was here, get connected to your devices, your sensors in the field, push that up, throw it into your data lake. The first step is just collect it; we’ll figure out what to do with it later.

Doug (00:06:24):
And here we are.

Tom (00:06:25):
And here we are later, and we’ve heard of small islands of success, but we haven’t heard of major rollouts across enterprises, and it’s been very costly, both time-wise and technology-wise, to get those small islands of success. So we need to transition into a more scalable way to do that. We also see companies having to bring in data scientists or data engineers, and they’re not familiar with the manufacturing process, so that’s been a challenge. I think smaller and medium businesses don’t even know where to start. Yeah, okay. So commonly, a lot of these challenges come from companies having data in all kinds of places, very diverse, and even different reporting tools. You still see spreadsheets a lot. The data can exist in multiple places; it’s not consolidated, and that leads to inconsistencies. Calculations: somebody might have a calculation on the spreadsheet, it might be in the system, it might be on an HMI. That

Doug (00:07:42):
Might be different across.

Tom (00:07:43):
Yes, exactly. Modifying past data. So this is one of the things with that focus of just collecting your data: in the real world of manufacturing, when you’re manufacturing products, you change data after the fact. It might be lab results that come in two days later. It might be that we change the downtime reason. It could be that the wrong lot was selected and we have to go back and correct it. So changing that past data, and incidentally knowing that it was changed, is all very important. When you just focus on collecting the data and getting it up into the data lake, you miss out on that, and now you have inaccurate data, and you’re going to get inaccurate results out. And coming up with a solution that’s cost effective, that can be rolled out: how many years is it going to take to roll out? How much custom code do you have?

(00:08:45):
All these things come into play and are challenging. Alright, so we’re going to get into the demo phase here, and Doug’s going to be bouncing around between a lot of screens, so we thought a graphic was needed to describe what we’re going to do in this demo. We start out at the bottom there, we have Ignition. Notice it does not have a database for our example here in our demo. You might have one, you might not, doesn’t matter. But we are going to collect data from PLCs, Ignition’s great at this, or other databases, or user input. Really, Ignition’s great at getting data from different devices, scanners, whatever else. So we’ll look at tags down there, and we’re going to feed data through those tags up to IQ, do some analytics and predictions, and pass that back down to Ignition and display it on the UI.

(00:09:51):
Okay. We’ll also be relaying that information up to AWS into QuickSight. That’s like Power BI over in Azure, and there are other ones out there, Tableau and whatever else. So we will change data down at the Ignition level and push that on up. And then we’ll also be doing a chat demonstration. So we have a private OpenAI account that we signed up for, and we’re going to show how we can integrate our manufacturing data, actual OEE or loss or what have you, into our chat dialogue, which is really cool. Incidentally, down in Ignition there, that’s where our regular modules live. So if you need electronic signatures, or you need to control the workflow, or you have batch processes, or you’re ensuring you have the right material, or you’re doing the inspections at the right time, all those production control things that you need to make sure happen on the plant floor, our existing modules handle that very well. Whereas SepaIQ is passive; it’s not going to complain if you say, I’m using this material on this production run, and it’s not the right material, you fat-fingered the material, or you don’t have the right labels or whatever else. It’s passive. So that’s the difference between our existing products and this product.

Doug (00:11:22):
Alright, so I’m going to kill the camera so that we get a full screen here of the demo. And first things first, I’m going to get into Ignition just to show you how we are feeding data into this demo, just for some context. So you’ll have to bear with me. Sorry, Ignition. So this is our Ignition environment for this demo project. For those of you who’ve used Ignition, this should be pretty familiar. Along the bottom left here, we’ve got our tag browser, and here is a UDT that I’ve defined where, as I’m simulating data, I’m effectively pushing the data into a tag, and from there we’re feeding it all the way up into SepaIQ. Right now my simulation’s paused because the randomization, while it’s great for generating data, is not great for demonstrations, where we might have to wait for data to come in. Alright, so that is generally the interface to SepaIQ as it relates to this demo.

(00:12:25):
And then let me make sure I get the correct, where did we put that? Pardon me while I pull that up. There we go. Okay. So what we’re going to do is we’re going to start off showing some high-performance analysis. This is plant floor analysis, and here we’ve got all of our enterprise data flowing up into SepaIQ. We’ve got somewhere around 48,000 rows per month across our enterprise, and we’ve established this hierarchy. Now, the nice part about SepaIQ is that we follow the UNS structure, but we’re providing some extra context to it, and we’re not stuck and tied to the ISA-95 hierarchy. It doesn’t always work for everybody. So here, instead of the typical hierarchy, which is enterprise, site, area, line, which is pretty constraining, we’ve broken it out by enterprise, then countries. So we broke out some of this data across our countries, Europe and the United States. We decided that we also wanted an extra layer here for West, Central, and Eastern United States sites. Again, just further breaking it down, we’re drilling down through this hierarchy, and then we get to our lines. Let’s just select one of these. Now, this line here, we’ve got almost 21,000 rows for this particular processing line. This is an execution that is happening on demand at the moment on this screen, so you can see it completed pretty quickly to aggregate 21,000 rows.

Tom (00:14:15):
So that was SepaIQ that put the answer together. So your request for a line went up to Sepa

Doug (00:14:21):
IQ,

Tom (00:14:21):
The results came back.

Doug (00:14:23):
Yes,

Tom (00:14:23):
That quickly.

Doug (00:14:24):
Yeah, it’s very impressive. I know those of you who have struggled with performance in the past should be pretty pleased with this. I mean, the performance of SepaIQ is pretty incredible. Alright, so this is kind of your standard plant floor high-performance analysis, but one of the things that we want to highlight here is the ability to perform predictions on data. You may have a whole series of data of production runs, of existing downtime reasons, and it would be nice to be able to say, given this series of materials, suppliers, crews, shifts, give me a prediction on what my downtime reasons are going to be for this particular run.

Tom (00:15:13):
So just to make sure I have this right, the efficiency was what did happen or is happening Now you’re saying I want to look into the future.

Doug (00:15:23):
Correct.

Tom (00:15:24):
And what could happen?

Doug (00:15:25):
Yeah, so one of the signs of maturity of analysis and of an organization is the ability to stop only looking backwards, or even just at the present. You should always do that, but start looking forward. So what we’re going to do here is we’re going to plug in a value of 12,500 for the target quantity. What this did is it passed the request up to SepaIQ. SepaIQ got the request and ran the prediction through a prediction model, an existing model trained off of our own data. And it tells us where we can expect our losses and how long we can expect the run to last. And this allows us to say, hey, maybe before we perform this run, we should consider startup; maybe we can have some extra people to help speed up the startup process. We can look at variable viscosity, maybe there’s something we can do about that, or conveyor slippage. We can do some work on the front end to reduce our losses, because loss could be lost time, lost production time, or it could be lost units in the form of rejects or rework.
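As a rough mental model of that exchange, the request and response might be shaped something like the following. All field names and values here are invented for illustration; they are not SepaIQ's actual API.

```javascript
// Hypothetical shape of a loss-prediction exchange: run context goes up,
// predicted losses by reason come back. All values are invented examples.
const predictionRequest = {
  analysis: "run-loss-prediction",
  inputs: {
    material: "MAT-204", // illustrative values only
    supplier: "Acme Resins",
    crew: "A",
    shift: "Day",
    targetQuantity: 12500,
  },
};

const predictionResponse = {
  predictedRunMinutes: 480,
  predictedLosses: [
    { reason: "Startup", units: 310 },
    { reason: "Variable Viscosity", units: 145 },
    { reason: "Conveyor Slippage", units: 90 },
  ],
};

// A planner can total the predicted losses before the run even starts:
const totalPredictedLoss = predictionResponse.predictedLosses.reduce(
  (sum, loss) => sum + loss.units,
  0
);
console.log(totalPredictedLoss); // 545
```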

Tom (00:16:33):
So this is just an example that you could really do these predictions on any scenario.

Doug (00:16:40):
Yes. We’ll get into how some of the predictions look later on, and perhaps we can even talk about it if it comes up in the Q&A, or you can reach out to us and we can explain it a little bit more. The next bit here is what we’re calling sentiment analysis. Tom, do you want to explain what sentiment is to the crowd?

Tom (00:17:02):
Yeah, so sentiment: basically we’re taking textual data, deciphering the words, and getting, is this a positive statement or is it a more negative statement? What does it mean? And so this is new in manufacturing, I haven’t seen it before, so I’m very excited to see how companies use this. It could be operator notes, downtime notes, maintenance notes, supervisor notes, whoever enters the notes, and you can get the sentiment of it. So an operator might be struggling with a machine, and that could be reflected in the notes if you’ve got a chronic problem or what have you that doesn’t get detected automatically. But the people and how they’re feeling about it, we can pick up on. And there are different technologies out there to do this. I think Stanford University came up with one that’s a general sentiment analysis. So the word supervisor: very neutral, not positive, not negative. We do it a little differently with our machine learning engine that does the sentiment here. We actually train the actual words that you use within your factory, your industry, all that, against your KPIs. So now supervisor could be negative, every time the supervisor comes around something’s going on, or it could be positive, the supervisor comes and helps.

(00:18:33):
So it adjusts to your specific use case, and you could even train different areas of your plant separately and pick up on that.
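To make the idea concrete, here is a minimal sketch of "training your words against a KPI": score each word by the average loss of the notes it appeared in, then score new notes from their words. The real SepaIQ engine is a proper machine learning model; this bag-of-words average is only an illustration, and all data is invented.

```javascript
// Minimal KPI-trained "sentiment": a word picks up the average loss of
// the notes it appears in, so the same word can read positive at one
// plant and negative at another, depending on local history.
function trainWordScores(notes) {
  const totals = {}; // word -> { sum, count }
  for (const { text, loss } of notes) {
    for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      totals[word] = totals[word] || { sum: 0, count: 0 };
      totals[word].sum += loss;
      totals[word].count += 1;
    }
  }
  const scores = {};
  for (const [word, t] of Object.entries(totals)) {
    scores[word] = t.sum / t.count; // higher = associated with more loss
  }
  return scores;
}

function scoreNote(scores, text) {
  const known = text
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w in scores);
  if (known.length === 0) return 0; // unseen vocabulary -> no signal
  return known.reduce((sum, w) => sum + scores[w], 0) / known.length;
}

const scores = trainWordScores([
  { text: "supervisor helped restart line", loss: 5 },
  { text: "conveyor slippage again", loss: 120 },
  { text: "smooth run", loss: 2 },
]);
console.log(scoreNote(scores, "conveyor slippage")); // 120 (very negative)
console.log(scoreNote(scores, "smooth run")); // 2 (benign)
```

Here "supervisor" scores low (good) only because this plant's history associates it with small losses; a different history would flip it.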

Doug (00:18:43):
So we’re effectively training the words that people type in against a particular factor. In this case, I think we trained this on lost production. So we took all of that text that someone spoke or typed into an operator comment, and then we trained against it. So you can see here we’ve got some existing data with progressively more negative content, more negative sentiments. I’ll type in another one, let’s say: had to shut down production due to a tornado threat. I know some people in chat have experienced this one. And wow, yeah, so that’s a pretty negative sentiment on that. And this is, again, all standard Ignition stuff. We threw a histogram on it, so it’s nice seeing the trend; you can easily tell, hey, there’s something going on, maybe we need to,

Tom (00:19:39):
So when you say standard ignition stuff, you’re talking about all the components? We don’t have

Doug (00:19:44):
Custom, yes, no custom components, no custom stuff.

Tom (00:19:47):
This is using all Perspective?

Doug (00:19:48):
Yes, this is all native Perspective. You don’t have to install a module to get any of this; this is all just native Ignition stuff. So yeah, you can see all of the sentiment percentages here. So this is one example of using predictions against text. And the other example, the one we used for that loss prediction page, we used against values and against features of a particular production run. So for the next bit here, we’re going to show you how you can think about using machine learning and predictions as it relates to SPC, statistical process control. I know statistics makes people’s eyes glaze over; it’s not an exciting topic for most people. We could talk about all the fun SPC rules. It’s exciting for a few people, not the majority of people. But what we’ve got here is we’ve effectively trained on samples we collect, and again, SepaIQ is a lot more passive.

(00:20:51):
We’re not putting controls and constraints on the definitions of the samples that you’re trying to collect. We’re not saying you have to take five samples every time. Sometimes people take three samples, sometimes people will take seven samples; in different industries that’s okay, in some industries it’s not allowed. And what we’ve done is we’ve trained on the sample, we’ve trained on the result of that sample. So suppose we have a production run and the sample comes back and we have to reject the sample. We’re training against those rejects and some of those sample statistics. And then what we can do is, as we collect the next sample, we can come up with a probability of it being defective off of our existing training. So, okay, I’m going to plug in some numbers. We’ve got some pretty wide variability in this sample that I’m going to do, and we’ll see what comes up as the chance of a defect. Now, I’ve added a piece of equipment here, because samples aren’t always just about the values. Sometimes, in order to determine the probability, it’s nice to know maybe which equipment this was generated on, maybe which sampling method we used, which sample tool, sorry, which tester we ended up using, to come up with some of these things

Tom (00:22:11):
Combined with material, whatever else. Whereas today, SPC samples and the SPC rules, Western Electric and Nelson rules and all those, don’t take into account other data. It’s just looking at your sample

Doug (00:22:24):
Only strictly at those values.

(00:22:27):
So here what we’ve got is a pretty high chance of a defect for this set of samples on machine one, and I can change this and adjust the machine, and those defect probabilities will shift around depending on that. And again, that’s all off of the trained prediction model that we’re using. So I’m going to get to the AI chat here in just a minute, that one I’m really excited about, but first we’re going to pull up our run loss screen. This is what that data was simulating, and I’ve paused my simulation. We have a run, and then we have a reason that comes in that’s not just a standard running reason, and we have a loss associated with that reason. So this loss is interpreted off of our run rate, et cetera; I’m not going to get too much into the details. But what we’ve got here is a value of delayed cooling, and I can change this value here to conveyor slippage. And now what’s happened is we’ve modified the value in SepaIQ. Now, I’m not going to show you guys the SepaIQ database unless you really want to see it, it’s all of this big raw data. You know what, actually, let’s do that. I think we should do that. So you guys will have to bear with me; I’m going to pull it up on the other screen and drag it over.

Tom (00:24:05):
So this is the typical raw data that you collect in manufacturing. So as values change, you’re just adding to it. Okay?

Doug (00:24:16):
Yep. So we’ve got our conveyor slippage, and if I change this value, you’ll see, if I go over to mold failures to release, you’ll see, okay, we pushed the change into our database. That’s expected. Let me refresh the data here. Mold failure to release. Now it goes up to SepaIQ, SepaIQ performs this contextualization of the data, and then pushes it up into AWS based off of that one screen that we had. So let’s pull that one up, if I can find it. There we go. So this is the existing delayed cooling, that was the very first value that we had, and provided our Redshift instance hasn’t decided to go down, there we go, now we have mold failures to release. So that first row changed from that one value. But notice the data that we’re seeing on this row. It is our contextualized data. What we’ve done is we’ve effectively flattened all of that data that we’ve got in SepaIQ. We’ve added a lot of context, we’ve added some aggregated values, and then we’re storing it up here. So the contextualization of this UNS data is really, really, really important.
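Conceptually, the contextualization step collapses many raw event rows for a run into one flattened row carrying context and aggregates, the kind of shape that lands in Redshift. This sketch uses invented field names, not SepaIQ's actual schema:

```javascript
// Sketch: flatten many raw loss-event rows for one production run into a
// single contextualized row with aggregates, ready for a BI tool.
function contextualizeRun(rows) {
  const lossByReason = {};
  let totalLoss = 0;
  for (const row of rows) {
    totalLoss += row.loss;
    if (row.reason !== "Running") {
      lossByReason[row.reason] = (lossByReason[row.reason] || 0) + row.loss;
    }
  }
  const sorted = Object.entries(lossByReason).sort((a, b) => b[1] - a[1]);
  const { site, line, run } = rows[0];
  return {
    site,
    line,
    run,
    totalLoss,
    topLossReason: sorted.length > 0 ? sorted[0][0] : null,
  };
}

const rawRows = [
  { site: "US-West", line: "Line 1", run: "R-100", reason: "Running", loss: 0 },
  { site: "US-West", line: "Line 1", run: "R-100", reason: "Conveyor Slippage", loss: 40 },
  { site: "US-West", line: "Line 1", run: "R-100", reason: "Delayed Cooling", loss: 15 },
];
console.log(contextualizeRun(rawRows));
```

When a downtime reason is corrected at the source, rerunning this flattening is what keeps the single contextualized row in the cloud consistent with the raw history below it.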

Tom (00:25:31):
So now that you have it up here, and this isn’t about QuickSight, but now it’s much easier for managers to go in here and start selecting and slicing and dicing and looking at it in different ways, without having to know how to take all this raw data and make sense of it.

Doug (00:25:53):
And you don’t even need, you saw how easy it was right there. We have a change on, let’s call it an operator screen, or maybe they miskeyed a downtime reason or something was reported incorrectly. They make the change, and then in seconds it is already reflected up into the cloud, pushed up through SepaIQ to Redshift and then QuickSight.

Tom (00:26:18):
How much code did that take?

Doug (00:26:19):
That took no code, except we have one small calculation, and I can show that to you now if you guys would like to see it. We’ll go through this. So again, these are examples, we’re not here to demo QuickSight, but it’s easy enough. I think Tom put these together in the span of a couple hours. Yeah, a couple hours.

Tom (00:26:41):
I was learning QuickSight at the time.

Doug (00:26:43):
Yeah, and you’re able to see some pretty interesting trends here. So yeah, we’re not going to highlight QuickSight here; it’s similar to Power BI, it’s an alternative platform. So let’s pull up SepaIQ, and I can show you, if I can find the SepaIQ tab, there, I can show you what some of those calculations ultimately look like. So for the one that gets our information up into AWS, this is the only code that I’ve got. All I do is calculate a quantity of lost units based on a rate and the time difference. So this is seven lines of very simple JavaScript
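We don't have the demo's exact script, but a calculation of that shape, lost units from a standard rate and a time difference, might look something like this (function and variable names are assumptions, not the actual code):

```javascript
// Reconstruction of the kind of custom calculation described: convert a
// loss event's duration into lost units using the line's standard rate.
function lostUnits(ratePerMinute, startMillis, endMillis) {
  const minutes = (endMillis - startMillis) / 60000; // ms -> minutes
  return ratePerMinute * minutes;
}

// 12 minutes of downtime at 50 units/minute -> 600 lost units
console.log(lostUnits(50, 0, 12 * 60000)); // 600
```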

Tom (00:27:33):
Specifically for the custom calculation,

Doug (00:27:36):
Specifically for custom calculation.

Tom (00:27:38):
But all the other,

Doug (00:27:39):
Everything else is,

Tom (00:27:40):
Everything is

Doug (00:27:41):
Drag and drop. So this is the SepaIQ interface. And remember, this is more for the configuration side of things; SepaIQ is passive. So as you’re trying to figure out how to wire that data around through your organization, this is where you go to do that. And so that quantity actual, or the reason, it’s literally drag and drop.

Tom (00:28:03):
So those came from the tags in ignition.

Doug (00:28:06):
Yes.

Tom (00:28:06):
Up into here,

Doug (00:28:08):
You

Tom (00:28:08):
Ran this analysis on it, you have some custom calculations, you have the grouping, I see filtering, whatever else, and then you’re able to configure and say, push this up to AWS. Exactly. Okay. Yeah.

Doug (00:28:23):
Very easy. No code. I know that will make a lot of people happy. Oh, let’s go ahead and run it. Yeah,

Tom (00:28:30):
You can run it here.

Doug (00:28:31):
Yeah. So this is the most recent record, the mold failures to release. This is what it was told to push up to AWS. So what we’ve shown you so far is Ignition. We’re collecting data, it goes up to SepaIQ, we perform an analysis in SepaIQ, we push the data back down to Ignition and show it on the screen. We did the same thing with predictions; the predictions occur on SepaIQ. Then we showed you data from the controller, from the tags, all the way up into the cloud through SepaIQ, and through that process in SepaIQ it is contextualized. Again, that’s very important. It means you don’t have to worry about data scientists and data engineers having to come in and spend hours cleaning and doing the work and trying to understand that process. And then the last bit here is, if I can get to the correct tab, there we are, we can go into the AI chat. This one I’m really excited about. So this is talking to chatGPT and having chatGPT interrogate SepaIQ to come up with answers to your questions. Because, Tom, does chatGPT know about my production process and about my production data?

Tom (00:29:46):
No, in fact, I’ve tried that, and it just feeds me the formula for OEE if I ask it for OEE. It absolutely doesn’t know.

Doug (00:29:57):
Yeah, so the nice part is we get to use some natural language processing so that it understands what the question is, it understands how to get the data from SepaIQ, and it’ll return that back to us. So I’ve got some pre-copy-pasted questions here that I’ll post. The first one is, let’s pretend we’re a supervisor and we’re trying to understand what production loss we accrued over changeover yesterday. What this did is it went out to chatGPT, and chatGPT said, it’s asking about production loss. I’ve defined a production loss request in SepaIQ, and I’ve configured chatGPT so that whenever someone asks for production loss, it goes and executes this analysis against SepaIQ, and then we return the value. That’s a lot simpler than trying to send them to a page to fill out a form, submit, have the results come back, and try to interpret it. This is natural language.

Tom (00:30:58):
You don’t even have to, I had to learn QuickSight in that.

Doug (00:31:03):
Yeah, it took hours.

Tom (00:31:04):
Yeah, I wouldn’t have to do that. I could just ask

Doug (00:31:07):
My questions. You can just ask your question. So I’m going to pose a couple of questions, like: which material supplier has the highest loss rate in the last week? See, it took a little bit longer. It’s sitting there, it goes out, the chat is talking to SepaIQ and returning all these results. So right here we’ve got all of our suppliers and the quantity loss, and we can show you, if you want to see it, these requests and how chatGPT is hitting SepaIQ, because one of the important things is that you want to verify that the data you’re getting is accurate. So I’m going to production time loss, I’m going to pose a handful of questions. And again, for those of you who haven’t used chatGPT, this is kind of how you interact with it. It’s got memory. It’s not just, you ask a question, it gives you an answer. It remembers what came prior.

Tom (00:32:03):
The conversation memory.

Doug (00:32:06):
Yeah. Yeah.

Tom (00:32:07):
And are these coded into SepaIQ, Doug, or

Doug (00:32:11):
No, they are

Tom (00:32:12):
Not.

Doug (00:32:13):
Yeah. So after this, what I can do is show you SepaIQ and the configuration, how we configure the tools. We can explicitly say, I want to expose this request and these parameters to chatGPT. And you give it a description, and you tell chatGPT how it’s supposed to interpret that request. So this is nice. We’re demonstrating here the memory, and it’s also shifting the timeframe, so it’s not stuck to a single timeframe; chatGPT knows what the last 24 hours means, and it passes that timeframe in to us. So one of the nice parts is, because it’s just hitting against a request, and our predictions are the result of a request too, we can ask it to predict losses. You’ll have to excuse that. Here we go. So here you see, I’ve asked for a prediction. I had to build a page here, and people may go to that page and like that, but boy, it’s a lot easier to just type in a single sentence and do it. So here what we’ve got is chatGPT using our predictions. This here, this chart: you’ll notice these percentages don’t align. That’s because these are percentages of the loss. So the values are correct, but the interpretation’s a little weird. But what we’re doing here is, this chart is the data that was queried from SepaIQ, and chatGPT has returned the results for it. So we have the text, and then we have a little extra verification. This is what SepaIQ

Tom (00:33:45):
Ran, that’s what it ran behind the scenes and fed it to chatGPT, and then we’re able to see it as well.
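For readers familiar with OpenAI's function-calling API, exposing a request "with a description and parameters" looks roughly like the tool definition below. The name, description, and parameters here are hypothetical; SepaIQ's actual configuration may differ.

```javascript
// Hypothetical OpenAI-style tool definition for a production-loss request.
// The model reads the description to decide when to call the tool and
// fills in the parameters (e.g. translating "yesterday" into a timeframe).
const productionLossTool = {
  type: "function",
  function: {
    name: "get_production_loss",
    description:
      "Run the production-loss analysis for a production line over a " +
      "time range and return the loss broken down by reason.",
    parameters: {
      type: "object",
      properties: {
        line: { type: "string", description: "Production line name" },
        startDate: { type: "string", description: "ISO 8601 start of range" },
        endDate: { type: "string", description: "ISO 8601 end of range" },
      },
      required: ["line", "startDate", "endDate"],
    },
  },
};

console.log(productionLossTool.function.name); // get_production_loss
```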

Doug (00:33:51):
Okay. All right. We’re going to see if this works. I played and paused our simulator, but what we’ve got now, this next question I think is a really important one, and this is something that really, really excites me, and I know it excites everybody else I’ve shown it to. I’m asking it to chain information. I’m saying, based on my current run’s information, what are my predicted losses? So what chatGPT is going to do is interrogate SepaIQ for the predicted run, take the out-feed of that analysis, and plug it in as an in-feed to the next request. So it’s chaining requests back to back to back. So it’s smart, in a weird way. We’ve put enough electricity through this rock.

Tom (00:34:40):
So it’s picking up the current product and other information for the current run, and now it can get those results back, fed into the next one, which was predicting what the losses were. Exactly. And this is all tied in with machine learning; it did the machine learning prediction and all that.
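The chaining described here, one request's out-feed becoming the next request's in-feed, can be sketched as follows. Both functions are stand-ins returning canned data; in the demo the LLM decides to make these calls and SepaIQ serves them.

```javascript
// Stand-in for "get the current run's context from SepaIQ".
function getCurrentRun() {
  return { line: "Line 1", material: "MAT-204", targetQuantity: 12500 };
}

// Stand-in for the trained loss-prediction request; canned values here.
function predictLosses(runContext) {
  return [
    { reason: "Startup", predictedLoss: 310, line: runContext.line },
    { reason: "Variable Viscosity", predictedLoss: 120, line: runContext.line },
  ];
}

// The chain: out-feed of the first request becomes the in-feed of the next.
const chained = predictLosses(getCurrentRun());
console.log(chained.length); // 2
```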

Doug (00:35:03):
No code. No code, no code. This is the coolest part about it. It was shockingly easy to accomplish this, so I really hope you guys get excited about it. Now, one of the big problems with AI is it has a tendency to hallucinate. It has a tendency to make up answers; it always wants to give you an answer. If you ask coding questions, it will make up code for you that doesn’t compile, that doesn’t work. It’ll give you methods that don’t exist in libraries. It likes to fake it

Tom (00:35:34):
And it doesn’t tell you, oh, I’m only 5% confident this will work. It just tells you, yeah, here you go.

Doug (00:35:41):
Yeah. So I’m going to ask it: what was the OEE for line one this year? I have an analysis that gives me a line one OEE value, and it passed line one into this request. Now, what happens if I ask for a line I don’t have? In most instances, a large language model will make up that information. We’re in manufacturing; that doesn’t work. You cannot have that in manufacturing. You cannot have your data analysis just making up information. And you can see here it says, oh, we encountered an error, so it’s not going to give us fake information. It goes out, it attempts to execute it for line three, and we don’t get an answer back. That’s important. That is very important. Yeah. So this is just this webinar demo project. Do we want to head back to the slides and finish up there, or do you want to jump into SepaIQ? Okay, if I can find out where we put our slides. There

Tom (00:36:47):
We go.

Doug (00:36:48):
And how do we get back to the

Tom (00:36:50):
Slideshow? There we go.

(00:36:53):
Alright. Okay. So what functionality in SepaIQ did we use to be able to do this in a low-code manner? So we use what we’re calling extended UNS. UNS is a really good idea. It organizes your structure and your data, and has metadata with it. It has all these benefits, really good, but you need to extend that. We found we want to contextualize the data. We showed taking numerous rows, maybe hundreds of rows, and consolidating them into a single row, and the benefits of that. That is what we’re calling extended UNS. We’re restructuring that data. We can change names of items, whatever, to get into a unified format across all sites, production processes, all that. And we did calculations in that. So that encompasses the extended UNS

Doug (00:37:54):
And not just custom calculations. Aggregations built into SepaIQ.

Tom (00:37:58):
Oh yeah. Built-in calculations within SepaIQ as well. Good point, Doug. So we saw high-performance analytics, being able to calculate across the entire enterprise that’s spread across the world, going through tens of thousands, hundreds of thousands of rows very quickly; machine learning that supported predictions, focusing us on the future of what could happen; no code, except for our custom calculations. We have the ability to store data. So we have two different types of data groups. We have a managed data group, where you define it in SepaIQ, we’ll create the database structure, and when you tell us to record data, we’ll do calculations, record that data, do the analysis out, do more calculations, grouping, all that. And we have unmanaged groups, where you have data in some other database; it’s created, somebody else manages it, somebody else ingests the data into it.

(00:39:00):
And we just read that data and include that in our own analysis within SepaIQ. Real-time answers on LLMs and chat, we capitalized on that and we didn’t show it, we probably will show it, is that SepaIQ has templates. So once you have a line or different types of lines, whatever, you can create templates for those and then you can rapidly create a new line, all the configuration. So what Doug showed, that drag and drop and calculations. If I am going to apply that to all my lines in this facility, I can rapidly do that by just another instance of the template and managing changes as well.

Doug (00:39:46):
It’s a fantastic effective copy and paste of a

Tom (00:39:51):
Fancy copy, a fancy copy

Doug (00:39:52):
Paste,

Tom (00:39:53):
Yes. And then I mentioned that SepaIQ is a cluster-based architecture. It was designed from the ground up on that. So it can have multiple servers. You could have one, and that’s okay, but if you want redundancy, you put in two, or if you need more load and more redundancy, add three, four, go to 10 servers behind the load balancer. You go to one, you only have to configure one; it shares that configuration across the others. Logs, too: I don’t have to log into each one to get the logs of that server. It’s all combined, and it tells me what server it happened on. So it’s made for that space. And the intent is to have this at a higher level instead of putting it in each plant; have this at a higher level, maybe by business unit, maybe for your entire organization. Data change audit log, I talked a little bit about that. We didn’t actually demonstrate it, but it has those abilities. But definitely we connected and contextualized data.

Doug (00:41:06):
And one of the nice parts about that audit log, Tom, is just you can enable the audit log on a group and then you can make a series of changes to data that you’ve already added to that group. And you can roll back all those changes one by one by one if you want to. And it tracks all of the change, what it was and what it

Tom (00:41:25):
Became. Even the rollback.

Doug (00:41:26):
Even the rollback, you have an audit log of your rolling back of the change so you get a full picture. Full picture.

Tom (00:41:33):
So let’s dive a little bit more into UNS and extended UNS here. So on the left, UNS and its limited context. We see machine one there, we see our timestamps. They’re not legitimate timestamps, but it’s easier to read the numbers. And then you see we recorded a lot and a pressure at those timestamps. And we noticed that at the 400 timestamp, where it’s grayed out, we didn’t have a lot. So during that time we had no product being processed. So that reading is just noise

(00:42:08):
And we do not want to look for patterns or do machine learning on that. We just want to ignore it because it doesn’t matter. And then down below we have machine two. So at a later time we processed those lots that were generated on machine one, and we did it in a different order, and we have new data associated with that. So extended UNS is being able to combine the two of those together into meaningful, full context for the data. And so there we’re taking the average pressure and the quality. Now this is a very simplistic diagram, but in real manufacturing plants, this blows up very quickly. And being able to get that full context is going to give you more relevant data and more relevant predictions.
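
The consolidation described here, collapsing many telemetry rows into one contextualized row per lot and joining it with results recorded later, can be sketched in plain Python. All values are illustrative, not actual SepaIQ data or output:

```python
# A minimal sketch of the extended-UNS consolidation: machine-one telemetry
# rows (many per lot, some with no lot at all) are collapsed into one
# contextualized row per lot, then joined with the machine-two quality
# results recorded later, in a different order.

from collections import defaultdict

machine1 = [  # (timestamp, lot, pressure)
    (100, "LOT-A", 4.1),
    (200, "LOT-A", 4.3),
    (300, "LOT-B", 5.0),
    (400, None,    4.0),   # no lot running: noise, excluded below
    (500, "LOT-B", 5.2),
]
machine2 = {"LOT-B": "good", "LOT-A": "bad"}  # processed later, other order

pressures = defaultdict(list)
for ts, lot, p in machine1:
    if lot is not None:            # drop rows with no product context
        pressures[lot].append(p)

# One row per lot: average pressure joined with downstream quality.
contextualized = {
    lot: {"avg_pressure": sum(vals) / len(vals), "quality": machine2[lot]}
    for lot, vals in pressures.items()
}
print(contextualized["LOT-A"])
```

Each output row now carries the full story of a lot, which is the shape of data that the patterns and predictions are run against.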

Doug (00:43:03):
I like to think of it as what is the story we’re trying to tell with this data? We’ve got our telemetry off of maybe some vessel and then we’ve got some testing occurring on it after the fact. What is the story of our production? That’s what people want to know. They don’t want to know the raw data. They want to know the story. And this is the story. This was your average temperature. It was good or bad. Yeah.

Tom (00:43:23):
Okay. So what would a data scientist or company have to do if they didn’t have these things that SepaIQ took care of? So one, they’d have to educate their data scientists on manufacturing. SepaIQ is in a space where people that are familiar with manufacturing and its processes can configure it easily without the code. We can handle changing data that was initially recorded, and we saw that pushed all the way up into the cloud, into AWS. In fact, I talked to a company and the guy said, oh my gosh, I had to do a lot of code to allow them to go back two weeks and be able to change data. I limited it to the two weeks. But it was custom code that had to be done.

(00:44:15):
And now he was faced with, how do I roll that out? Text vectorization, that’s really important for machine learning, because machine learning just deals with numbers; it doesn’t deal with text. So you have to convert that text to a number, and you have to do it in a consistent manner. So if you’ve got EOP pressed and it’s number 105, you always have to refer to EOP pressed as 105, otherwise your predictions are going to be meaningless. Contextualized data: they’re going to have to start doing that extended UNS type functionality, tying events together with telemetry data and doing all kinds of stuff. So they’ll have to figure that out. The queries get really weird really fast when you start trying to do that. You’ve done it. Okay, perform time shifting. We showed that in the extended UNS; we processed these lots at a different time than another one. How do you tie that together? Because something that you did on machine one might affect what the outcome is on machine two. How do we detect those patterns to identify that? It may happen days

Doug (00:45:24):
And weeks later,

Tom (00:45:25):
Days or weeks later. So it requires time shifting to do that. And then clean data is extremely important in any machine learning or AI technology, because garbage in, garbage out still applies. And MES data tends to be extremely clean, because if it’s not, it gets fixed; that’s how you control your production. And so that goes along with the accuracy. And then you’ve got different data in different sources. So how do you structure that and get it into a common format? SepaIQ can ingest the different formats and then reformat. You want to rename things, you want to put the date in a different format, you want to format the number differently; it has no-code methods of doing all those things. And they would have to come up with a solution. And then, okay, we got it working here, we’ve got to roll this out. How many years is it going to take to roll out across your organization to where you benefit from it? And is the technology going to change before you get it rolled out? Oh yeah. You’re talking
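
Consistent text vectorization, as described above, can be sketched with a small encoder that assigns each distinct event text a stable numeric code. The starting code of 105 mirrors the EOP-pressed example and is otherwise arbitrary:

```python
# Sketch of consistent text vectorization: each distinct event text gets a
# stable numeric code, so "EOP pressed" always maps to the same number no
# matter when or where it appears in the data.

class TextEncoder:
    def __init__(self, start=105):
        self.codes = {}        # text -> assigned code
        self.next_code = start

    def encode(self, text):
        # Assign a code on first sight, then reuse it forever after.
        if text not in self.codes:
            self.codes[text] = self.next_code
            self.next_code += 1
        return self.codes[text]

enc = TextEncoder()
print(enc.encode("EOP pressed"))   # 105 on first sight
print(enc.encode("Jam cleared"))   # 106
print(enc.encode("EOP pressed"))   # 105 again: consistent for ML
```

In practice the mapping would have to be persisted and shared across every site feeding the model; if two sites assign different codes to the same event text, the predictions become meaningless, which is exactly the pitfall described above.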

Doug (00:46:29):
Potentially years. Yes. Yeah, many years. Yep.

Tom (00:46:33):
Okay. The onboarding process, I’ll talk briefly about this. We really want to make sure that our customers succeed with SepaIQ, and there are a lot of new technologies, and navigating those means knowing what to expect. We get our predictions; are they exact? No, they’re not. They’re fuzzy, but they are possibilities of what can happen, and that can be very useful. So just to navigate all that, we have this onboarding process that we actually require if you purchase SepaIQ. Here’s what we include in it. This isn’t everything, but these are the highlights. We’re going to go through training. We’re going to teach about machine learning and what types of data are best for it and all that. We’re going to talk about AI and LLMs and the shortcomings there, things to watch out for and how to address those. We’re going to talk about advanced analytics and how to organize your data and how to group it, and best practices and all that.

(00:47:39):
And just generally about SepaIQ. We’re going to establish some minimum success criteria, because people get excited and the scope just blows up. They shoot for the moon. So we’ll kind of keep you in line there. And at the end we’ll evaluate whether we achieved that success criteria, and if not, pivot: where do we need to get some additional data or different data, all that kind of stuff. And then we’ll go through the architecture, how many servers you need, determining the data collection, what data to collect, analytics, all that kind of stuff. We’ll be part of that team, meeting on a regular cadence just to make sure you end up with success.

Doug (00:48:26):
So it’s a very hands-on approach.

Tom (00:48:29):
Okay. Pricing and quoting. So it’s server-based pricing. Even though it’s a cluster, you add more servers to that, and we have to make our money, so you pay by server. We prefer the subscription method, and that’s really the best value for the customer. But we know some customers want perpetual, so we are going to offer perpetual licenses as well. The subscription includes SepaIQ support; with the perpetual, you do need to buy the support. And then other considerations that we can help you with and have conversations about: how many servers, what’s your data storage and how much, load balancing, cloud hosting, infrastructure costs. We can have those conversations as well. So what we have out today is the highest tier, SepaIQ Premium. It includes everything you saw today. So we can connect to AI, BI, machine learning, LLM integration, extended UNS, restructuring, all that, a full RESTful API, the advanced analytics. All that does not require the Sepasoft modules, but if you do have ’em, that’s fine; they can work together. And Ignition: you might have other systems, other sources of data, but Ignition’s a great product that works extremely well with it. And this is available now.

(00:50:06):
So looking at the price. Now keep in mind this is not per Ignition instance or per site or anything. This is overarching your business unit or even your entire organization. You start out with your first server in a cluster at $65,000 a year on the subscription, which includes support; each server after that is $15,000. And then the onboarding is a flat one-time $5,000 fee. We will have other versions. We know of one other version right now, but we might very well have some others. With SepaIQ Standard, we’re going to take some features out; you’re not going to have the LLM integration and some of that, and it will be cheaper. So it will be very aligned, feature to feature, with other competitive products we see on the market, and we will be very competitive at that same price.

Doug (00:51:06):
And it will still do the extended UNS

Tom (00:51:09):
And advanced analytics? Correct. Okay. Looking at a release timeline, we have this SepaIQ Premium. You saw the LLM and SLM, or small language model, integration in there. So that’s going to be coming out very soon; we’re playing with that in-house here. And then in the fall we’re looking at adding scheduling, and this will be constraint-based scheduling to handle a variety of different types of use cases and all that. I know the scheduling is very complex in some companies, so we’re excited about that. And then we can roll some machine learning in there to kind of advance what options it tries as it figures out the best schedule. And then, targeting the end of the year, travelers and traceability: being able to do that at a higher level, on steroids, being able to ingest images and PDFs and whatever other data along with that traceability.

(00:52:12):
So for a serialized item, you would have all that data aligned with it. We are going to have a user portal that you can go into and manage your licenses, so we’re very excited to get that going as well; we have developers working on that now. So in summary, we really want to bring advanced analytics and AI benefits to manufacturers at a reduced cost, so they’re able to do it quicker, roll it out across their entire infrastructure, and be able to grow. Now, a lot of companies have initiatives like OEE: we’re going to focus on OEE. And I encourage them to get a product, because this industry is changing so quickly right now. They get a product, and yes, they can do the OEE, but it’s a platform that can also support their next initiative and be able to do it. We feel SepaIQ is that: full API support, supports existing Ignition infrastructure, all of that. With that,

Doug (00:53:24):
Yeah, we’ve got some questions. So I see about six and a half minutes remaining on our webinar, and we’ve got a handful of questions from Gary and David. Let’s see. So the first one: since SepaIQ isn’t sitting on the plant’s Ignition gateway, does it need the Web Services module to share its capabilities? It has its own API, so you’re not required to use the Web Services module. We’re using the standard system.net.httpClient built-in Ignition class in scripting. And so that is the case at the moment, and I know that in the future we’re planning on having an interface module for SepaIQ.

Tom (00:54:17):
That’s correct. And then also, data can be ingested using anything really, but Sparkplug works with the Cirrus Link modules.

Doug (00:54:26):
Right? And then sentiment analysis, does it support multiple languages?

Tom (00:54:32):
Yeah, so you train it against your text. And an interesting thing on the LLM in the chat: it does support multiple languages as well. All right.

Doug (00:54:47):
Cost for perpetual license.

Tom (00:54:50):
A lot more. I think this one can go to our sales department and they can answer that, but it’s a multiple more.

Doug (00:55:02):
And then are SepaIQ capabilities exposed through web services endpoints? Yes, it is. You know what? Let me show you the SepaIQ API at the very end. I’ll answer the rest of the questions and then I’ll come back to that. Okay. And then does SepaIQ run in the cloud or on-prem?

Tom (00:55:20):
Yes, either one. So again, it’s just like Ignition. You can run it on Unix, you can run it on Windows. It runs behind a load balancer if you have multiple servers in a cluster; if you have a single one, you don’t need that load balancer. And it can run in a Docker container. Very flexible.

Doug (00:55:44):
This is a great question. Some customers are concerned about their data privacy. Can a client’s own chat GPT enterprise account be used?

Tom (00:55:52):
Yes, yes. In fact, that’s what we did here. We have OpenAI, which is ChatGPT. We created our own account, and you pay per token, and it’s amazing how cheap it is. All our playing around with this has been like 5 cents. Awesome. I thought I was charging you a lot of money with my tests. But it doesn’t matter if you’re using a private OpenAI ChatGPT account, or if you’re doing Bedrock up in AWS, and Copilot in Azure is OpenAI.

Doug (00:56:28):
And if you have one for your organization within your organization’s network, we can connect to that

Tom (00:56:34):
Yeah. Yeah.

Doug (00:56:34):
Yeah. And then, alright, we’re getting a handful of these coming in. Let’s see. Oh boy. Doug, how long do you think it will take to use SepaIQ instead of the existing way we do analysis? Ah, so yeah, migration from existing modules. We’re still working our way through that. It’s incredibly quick to spin up examples and ingest the data. It’s very, very simple. You simply make a handful of calls, you throw data into a group that you’ve created, and then do the analysis. If I had to compare using the existing modules and then performing the same analysis in SepaIQ, I mean, it is five times faster, 10 times faster, to actually get to the end result of having that data on a page. It is significantly

Tom (00:57:31):
Faster. And you can have both running in parallel. You don’t have to have a cutover.

Doug (00:57:40):
We’re still working through our migration plan for those who want to start bringing data in. But at the moment I think we’re looking at a run it in parallel for a little while. So we’re still trying to work out the details.

Tom (00:57:51):
So here’s a good question. How do you see SepaIQ fitting into the existing legacy seus off architectures?

(00:58:01):
So keeping in mind, really all we’re replacing is the analysis in our existing architecture. So we plan to have a version where you can keep exactly what you have with no change, and then you can start echoing that data over into SepaIQ. So you might have shorter-term analysis or whatever, and if you’ve got that today, you can keep it, and now you can get some more advanced analytics and see that in Ignition and higher up. If you want to start phasing out, you can over time; it’s not like you have to do it all at once. So the long-term plan is that this is the best place to do analysis. An Ignition server is typically a single server, and it’s doing a lot of other very important things on the factory floor. Going up and asking it for the OEE for the last year is probably not one of the best things that you should ask it to do, and I know it happens a lot. So this is a better place, in a better architecture, the right place to really do that analysis. And we’ll have more on that, and on being able to pull your data across automatically. And again, that data doesn’t have to convert all at once and then cut over. It can be happening

(00:59:25):
Behind the scenes over time. So we will have more details on that

Doug (00:59:32):
Following up, we’ve got: how does it integrate with reports, with the reporting module in Ignition? Yeah, again, you saw all of the screens. Everything that I did in this project is driven off of tags. So think of our current modules, if you’re familiar with them. You have live analysis; you can load data into the tags. Same way it works in SepaIQ: you have a mechanism where data is pushed into a group, the analysis gets executed if it is relevant for the time period for that analysis, and then that analysis result is pushed into a tag in Ignition. So everything is tag driven. So there are ways of architecting it. All right. And I think that’s the last question that I’m seeing, and I apologize, there’s quite a few questions coming in.

Tom (01:00:24):
There’s one question I think is a good one, and then we’ll wrap up. And if we didn’t get to your question, we’ll follow up with you. Are you breaking away from the try before you buy? Yes, we are with this product, because we can’t just throw it over the fence, say good luck, and let you struggle.

Doug (01:00:44):
A lot of people will fail with that.

Tom (01:00:45):
And there are a lot of new technologies here, and we also want to know the outcome. We want to see the outcome be successful, but we also want to know the outcome. So if it’s a particular use case and we don’t support it, we won’t sell it for it. And if it is a shortcoming in the product, we’ll take care of it. So because of that, yeah, we’re not just going to throw it over the fence and let you try. If you are an integrator, though, we do have training programs, and we can share more details on that, where we’ll get you up to speed on being able to sell it and the initial details of how to configure it.

Doug (01:01:33):
Yeah, this is not just a relational database. It is a lot more than that. And the conventions that you have to follow and your understanding of the underlying data, it’s pretty important. So I think a lot of people have pitfalls if they were to try and just run it on their own and then they have a bad taste in their mouth and it’s because they did something wrong, because it’s again, new stuff

Tom (01:01:52):
And this industry is just kind of emerging. And when you look at the other companies, a lot of ’em are, Hey, pay us and then we’ll get going.

(01:02:01):
So, this industry is still trying to figure it out, and we are too. So yeah. Alright. Well, thank you everyone. I hope you really enjoyed it. And do reach out. We had the contact information there and the QR code for scheduling a demo. Please reach out and we can follow up on the conversations, get you more information, answer your questions, and dive deeper into how some of the things

Doug (01:02:30):
Work. Perfect. Thank you everybody. Have a great day.

Manufacturing data isn’t just numbers—it tells the story behind every process, machine, and decision. But when production values, historical records, and business data are stored in islands and analyzed separately, key relationships get lost, errors creep in, and making informed decisions becomes more difficult than it should be.

SepaIQ combines real-time and historical data in one place, establishing relationships between time-series, relational, and transactional data for a complete picture. By eliminating inconsistencies and reducing redundant data silos, it ensures that operators, engineers, and decision-makers always have accurate, contextualized information to work from—whether on the plant floor or in high-level AI and BI systems.

About the Speakers

Tom Hechtman
Tom Hechtman is the CTO and Co-Founder of Sepasoft. Since founding Sepasoft in 2010, he has led the development of MES modules for Ignition by Inductive Automation, focusing on open standards and technologies. In 2023, he transitioned to the role of CTO, where he continues to drive product innovation and shape the future of Sepasoft’s MES solutions.

Doug Brandl
Doug Brandl is a MES Solutions Engineer at Sepasoft, Inc., bringing over a decade of experience in pharmaceutical automation engineering and application development. Having grown up immersed in manufacturing execution systems (MES) and industry standards, he possesses a deep, ingrained understanding of the field. At Sepasoft, Doug focuses on implementing advanced MES solutions, including batch procedures, electronic batch records (EBR), and real-time production workflows.


Excited to learn more? Reach out to us to schedule a live demo today!