Model Monitors and Alerting at Scale with RStudio Connect | Adam Austin, Socure
Deploying a predictive model signals the end of one long journey and the start of another. Monitoring model performance is crucial for ensuring the long-term quality of your work. A monitor should provide insights into model inputs and output scores, and should send alerts when something goes wrong. However, as the number of deployed models increases or your customer base grows, the maintenance of your monitor portfolio becomes daunting. In this talk we'll explore a solution for orchestrating monitor deployment and maintenance using RStudio Connect. I will show how applications of R Markdown, Shiny, and Plumber can unburden data scientists of time-consuming report upkeep by empowering end users to deploy, update, and own their monitors.

Timestamps:
- 2:01 - Start of presentation
- 3:43 - About Socure
- 6:12 - Model performance matters; deployment isn't the end of the story
- 7:26 - What is a monitor?
- 8:55 - What is an alert?
- 11:00 - Monitor example
- 18:09 - Firing an alert from RStudio Connect
- 19:00 - Why monitor from RStudio Connect?
- 24:33 - How monitoring drives success at Socure
- 30:00 - Git-backed deployment in RStudio Connect
- 36:00 - The Shiny app their account managers see
- 46:00 - Architecture of a monitoring system
- 56:00 - Connect hot tip: system-wide packages
- 57:00 - Why did we try Connect for monitoring
- 59:02 - Why do we keep using it for that :)

Resources shared:
- blastula package: https://github.com/rstudio/blastula
- connectapi package: https://github.com/rstudio/connectapi
- rsconnect package: https://rstudio.github.io/rsconnect/
- Intro to APIs blog post: https://www.rstudio.com/blog/creating-apis-for-data-science-with-plumber/

Speaker Bio: Adam Austin is a senior data scientist and RStudio administrator at Socure, a leading provider of identity verification and fraud prevention services. His work focuses on data science enablement through tools, automation, and reporting.
Transcript
This transcript was generated automatically and may contain errors.
Thank you so much for joining us today. Welcome to the RStudio Enterprise Community Meetup. I'm Rachel. I'm actually calling in from Connecticut today. We are streaming out to LinkedIn and YouTube Live right now. If you just joined now, feel free to say hi through the chat window and maybe where you're calling in from. Today we are joined by my friend Adam Austin at Socure, who's going to be sharing how they monitor models and alert at scale with RStudio Connect.
If this is your first time joining one of these meetup sessions, welcome. This is a friendly and open meetup environment for teams to share use cases, teach lessons learned, and just meet each other and ask questions. Together, we are all dedicated to making this an inclusive and open environment for everyone, no matter your experience, industry, or background.
Adam is a senior data scientist and was previously an RStudio administrator at Socure, a leading provider of identity verification and fraud prevention services. His work focuses on data science enablement through tools, automation, and reporting.
All right. Thank you so much and welcome everybody. Thanks so much for joining today. Like Rachel said, our topic today is model monitors and alerting at scale with RStudio Connect. And I just want to give a shout-out: I believe Gordon Shotwell, a colleague of mine at Socure, and Jonathan Keene, a former Socurean, as we call them, are in the audience. They were both instrumental in developing the foundation of this whole monitoring and alerting framework. So shout-out to them.
So a bit about me, I've been doing data science for about eight years. I've been at Socure for about a year and a half. And if you want to follow me on Twitter or like go find my GitHub, you're welcome to do that.
About Socure
So I do want to talk about Socure briefly, just to give you a little bit of context for where I'm coming from. It's a company that provides identity verification and fraud prediction models. And so we're essentially a business to business provider of model scores and insights via our API. So we're not consumer facing. So if you haven't heard of us, that's just because we don't really sell to the general public, we sell to businesses.
And the idea there is that a number of firms (banks are a good example) have regulatory requirements around who they can do business with, and around a certain minimum standard of authentication and verification of their users. And so we provide services around that to them. Financial institutions, and any firm that has an interest in moving money around, handling financial transactions, or working with your private personal data, might also want to prevent fraud, because it's just good for business.
So when I say we deploy a model, what I mean is we develop a predictive model based on all kinds of attributes about a person. And those models basically say, are you who you say you are? And do you have good intentions? So that's kind of what our business is. And so we will put a model in production, which means it's behind an API that a business can then make a call to and say, here's somebody applying for an account at our institution. Can you tell us, is this person legit? And do they have good intentions?
Why model monitoring matters
So with that out of the way, why are we here today? So we deploy models, like I said, but that really isn't the end of the story. The performance of that model matters, obviously. It matters over time. It's going to matter at the model level. And it's also going to matter at the customer level. So we want to make sure that our models over time are producing reliable, stable scores. And we also want to make sure that each individual customer is receiving the kinds of scores that they expect and that they can make use of.
So what we'll cover today, well, the name of this talk is model monitoring and alerting at scale with RStudio Connect. So I thought we would cover, first of all, model monitoring and alerting, how we scale it up, and how we do that specifically in the RStudio Connect framework. So I think of this as the what we do, who, when, and why you might want to scale it for, and then where it's all done.
What is a monitor?
All right. So first of all, model monitoring and alerting, what does that even mean? So what is a monitor? In its simplest terms, to me, a monitor is a visual presentation of model metrics that is refreshed regularly and will notify you if something is wrong. This is sort of the three components I think of when I think of monitoring. So basically, making sure that everything is running smoothly. And it's a place you can go to visually see what's happening. It's something that should be constantly updating, so it's always kept current, so you're never caught off guard by anything that goes wrong. And it should alert you if things do go wrong.
So I use that word alert. What actually is an alert? So this is going to be some kind of notification of abnormal data inputs or perhaps model outputs. And you want to be alerted if something goes wrong so that you can respond quickly. Maybe you need to retrain your models, or maybe you need to look into a problem with an upstream data source, or maybe you just need to communicate with a customer. Or in our case, in Socure's case, we want to communicate with the customer and say, hey, we noticed that our scores are changing. Is something different on your end, or should we do some more digging on our end?
So an example of an alert might be: suddenly you're getting too few records from an input data source. Maybe you have a sudden spike in bad input data, so too many missing values. Maybe you're getting the wrong factor levels coming through and your model can't score those. Maybe you have certain input values that are exceeding expected thresholds, so they're too high or too low, and that just looks weird to you. And also, very importantly, you want to make sure that your models are producing the right kinds of scores. So if your model scores change considerably, that might have impacts, in Socure's case, for our clients in terms of how they're thinking about their customers, so we want to make sure we alert them to that.
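Checks like these boil down to comparing summary statistics against thresholds. A minimal sketch in R (the function and field names are illustrative, not Socure's actual implementation):

```r
# Compare one refresh's input statistics against configured thresholds
# and return a character vector of alert messages (empty when all is well).
check_alerts <- function(stats, thresholds) {
  alerts <- character(0)
  if (stats$n_records < thresholds$min_records) {
    alerts <- c(alerts, sprintf(
      "Too few input records: %d (minimum %d)",
      stats$n_records, thresholds$min_records
    ))
  }
  if (stats$pct_missing > thresholds$max_pct_missing) {
    alerts <- c(alerts, sprintf(
      "Missing-value rate %.1f%% exceeds %.1f%%",
      100 * stats$pct_missing, 100 * thresholds$max_pct_missing
    ))
  }
  if (abs(stats$mean_score - thresholds$expected_score) >
      thresholds$max_score_drift) {
    alerts <- c(alerts, "Mean model score has drifted beyond tolerance")
  }
  alerts
}
```

The monitor can then branch on whether the returned vector is empty to decide if an alert should fire.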
Using R Markdown for monitoring
All right, so how are we actually going to do this monitoring and alerting? Well, this is a great job for R Markdown, or of course Quarto, which is sort of the new R Markdown, if you will. We've been using R Markdown since we developed this system, and it works really well for us. So why would we use R Markdown in this case? Well, it has, obviously, infinite flexibility. You can compute any function of your data.
And this is important because there are a lot of really great monitoring solutions out there. Different kinds of companies provide ways for you to monitor your data and your models. A lot of those solutions are a little bit limited in how you actually compute functions of your data. So maybe you're forced into using certain kinds of SQL queries, or maybe you're forced into using a certain proprietary language to compute and construct your statistics, and that can be a little bit limiting. With R Markdown, you're just using R, or even Python if you want, and so you can compute any function of your data.
R Markdown is great for hosting because you can publish and refresh your reports on RStudio Connect, and we'll see more about that in just a minute. And R Markdown is really nice because R has a huge suite of integrations. You can choose to send out alerts via email. You can use Slack. You can use GChat. You can use Microsoft Teams, and so on and so forth. There's all kinds of great integrations where you can choose different channels to send your alerts to based on your needs.
Anatomy of a monitor
Again, it's just R Markdown here at Socure. We have a nice custom template that we apply to R Markdown so that when it renders, it has this cool theming. And again, it's just going to be a report, and it's going to visually present to you some information. So, maybe you're interested in, again, the number of input records you have over time, the percent of missing values you get over time, so on and so forth. So, you can go to this report. You can examine all these things over time. And then also, once this is deployed to Connect, this can refresh and send you an alert if there is a problem. So, you don't have to be constantly watching this thing all the time because as you scale up and you have more and more monitors, that just becomes too time consuming, too burdensome.
So, let's discuss what the anatomy of this thing really is. I think of this as having three parts. There's going to be a header section. There's going to be sort of the bulk of the monitor, which is around making sure you can get your data, you transform that data, and then you can visualize it, create plots and things like that, tables. And then there's a final component, which is taking that output data and deciding whether or not you're going to alert.
So, first of all, the header, if you've written an R Markdown file before, you've certainly seen this YAML header before. So, that's where you specify your title, your author, the kind of output you're going to create. But I do want to call out one particular thing that you can add here, and this is especially for RStudio Connect. If you're going to be alerting from Connect, you want to think about when those alerts are going to fire. So, when you deploy a report to Connect, you can choose to set that report to refresh on a regular basis. And when it refreshes, it's going to want to send an email. What you can do is you can actually suppress that email until you absolutely need to send it, and that's going to be when you have an alert.
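Concretely, that suppression pattern might look like this at the top of a monitor (a sketch; the title is a placeholder, and `rsc_email_suppress_scheduled` is the output metadata field Connect reads to suppress the scheduled email):

````markdown
---
title: "Wonka input monitor"
output: html_document
---

```{r setup, include=FALSE}
# Suppress the scheduled email by default; an alerting chunk at the
# end of the report turns it back on only when something is wrong.
rmarkdown::output_metadata$set(rsc_email_suppress_scheduled = TRUE)
```
````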
In the middle part of this monitor, this is just where you're going to have all of your logic, all of your R code, maybe your Python code to pull your data, compute your statistics, plot things, create tables, et cetera. This is all just what you would expect from a standard R Markdown file.
Now, the alerting part is kind of where some of the magic happens. So, again, it's just part of the standard R Markdown workflow. This is just another R code chunk in this case, but we're going to be relying on two packages in particular. One is obviously the R Markdown package, and one is the blastula package, and blastula is this really cool package to help you email out reports. It makes it super easy to do, especially in the RStudio Connect environment.
So, this is just kind of a high-level overview. Here, I'm taking my previously created statistics and functions of my data, and I'm going to process some alerts, and at this point, I'm going to say, do I have any alerts? That is, do I have anything that I see concerning in my data? Do I have too many missing values? Do I have model scores that are too high or too low? Whatever it is, and then you can decide in this code chunk whether or not you want to fire an alert. So, if you do have any alerts, at that point, you're going to use the R Markdown package to then say, I'm no longer going to be suppressing my emails. I'm going to tell Connect, I'm ready to send an email now, and then next, I'm going to use the blastula package to render my Connect email, and I'm going to use this alert email template, which I'll show you in a second, and I'm going to attach this email, and I'm going to send it off to my recipients.
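That decision chunk might look roughly like this, assuming an `alerts` character vector was built earlier in the report and a companion `alert-email.Rmd` is deployed alongside the monitor (a sketch, not Socure's exact code):

```r
# Final chunk of the monitor: decide whether to fire an alert.
if (length(alerts) > 0) {
  # Stop suppressing the scheduled email: tell Connect we want to send.
  rmarkdown::output_metadata$set(rsc_email_suppress_scheduled = FALSE)

  # Render the helper R Markdown as a blastula email (it runs in this
  # monitor's environment, so `alerts` and any plots are in scope),
  # then attach it so Connect mails it when rendering completes.
  email <- blastula::render_connect_email(input = "alert-email.Rmd")
  blastula::attach_connect_email(
    email,
    subject = "Monitor alert: please review"
  )
}
```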
So, I mentioned this alert email template, and let me just show you what I mean by that. So here you have a monitor markdown file, and then what you're doing is calling a secondary markdown file. In this case, it's just going to be an alert email template, and I call this the alerting buddy system. So, if you've deployed this monitor to Connect, you're also going to want to deploy this little helper markdown, which is just going to consist of a header and a body. The header now is going to contain this blastula function, which specifically renders this as a blastula email. And then the body of this alert markdown is just going to be any content that you want to report. If you want to put a table in there, or just some custom text that says, hey, there's an alert, go check your monitor, you can do that.
One thing to be aware of here is that this alert email markdown file, when it's called in your code, it's being rendered within the scope of your original monitor. So, you don't need to recompute anything here. You don't need to pull new data. All of the objects that are available in this environment here in your monitor are going to be available in your alert email as well.
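A minimal alert email buddy might look like this; note the `blastula::blastula_email` output format in the header, and that `alerts` is not recomputed because the file renders in the monitor's scope (the body content is illustrative):

````markdown
---
output: blastula::blastula_email
---

There were `r length(alerts)` alert(s) on this monitor today.
Please check the full report for details.

```{r, echo=FALSE}
# `alerts` comes from the parent monitor's environment.
knitr::kable(data.frame(alert = alerts))
```
````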
So, what does that actually look like when it fires an alert? In this case, we have Gringotts Bank, and maybe they had some model score drift, and maybe they had some input variables that have exceeded their thresholds. And so, in this case, this alert goes out and comes into my inbox overnight because that's when the report ran. And this just gives me a little bit of information. So, this text was rendered in my alert email R Markdown. And then what's nice about this is this comes into my inbox. I get in first thing in the morning, I see there was an alert, and now I can go click down here, and I can just go right to the report and see those plots and make sure that everything is looking good.
Why RStudio Connect for monitoring?
Okay. So, that's what a monitor is, sort of how the email system works here. Why would we want to monitor from RStudio Connect in general? Well, it's unreasonably easy to deploy reports there. It's super nice to be able to tag and categorize your content. And so, what those tags do is it allows you to search and filter and find your monitors, find any reports or content that you deployed. You can also really easily schedule your refreshes there. So, you can schedule a report to run overnight, you can schedule it to run every hour if you want more frequent updates. And there are tools for scaling. And that is the crux of what I want to get into next.
Scaling up: the journey at Socure
So, let's think about the scaling process and sort of the level one. The first step you might take in monitoring a client or monitoring a model is just to create something bespoke or just to create a custom monitor for every individual client or every individual need. When you're small, it kind of makes sense to start out that way because, you know, maybe you only have a few customers or a few models, and you're kind of learning which metrics are useful and what's the right refresh cadence and how do we send alerts and so forth.
So, perhaps it looks like this: you get a new client, and the account manager comes and says, we need to monitor Cocoa inputs and the Bad Egg model scores for this new client Wonka, Willy Wonka. He's our customer now. So, data science says, okay, cool, we can do that. We're going to build a custom monitor. We're going to publish that to Connect, and now our account managers can view it and receive alerts.
The trouble here is that this can be very time-consuming. Maybe every monitor looks different, so, you know, it's a little bit hard to go from one monitor to the other and understand sort of what's happening in each one. There's a little bit of, like, cognitive overhead there. So, as time goes on, you will want to begin to standardize via template.
So, in level two, you're going to be thinking about, can we monitor some key inputs and model scores across all the clients? So, maybe there are some clients that require a little bit of extra customization, and that's totally fine, but can we figure out maybe a templated solution where, for most of our clients, we just know these are the inputs we care about, and these are the outputs we care about? So, customization kind of becomes secondary, and you just want that consistency, those standardized reporting practices.
So, what might that look like? Data Science might decide to create a report template, and this is how we did it for a while as well, and this template is going to contain a lot of logic in the body of that report, and all that logic is going to be essentially the same for each client. But what you'll do is you can go in in each report and say, you know, this report is for Willy Wonka, or this report is for Gringotts, or whoever it is, and then the downstream logic can then go and grab the data specific to those clients.
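One common way to implement that kind of template is with R Markdown parameters, so the client name is the only thing that changes from report to report (a sketch; `pull_client_data()` is a hypothetical helper):

````markdown
---
title: "`r params$client` model monitor"
output: html_document
params:
  client: "wonka"
---

```{r get-data}
# Everything downstream keys off params$client, so the same template
# serves every client; only the parameter differs per deployment.
dat <- pull_client_data(params$client)
```
````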
So, this was really helpful. This is a really nice system for a while. The problem here is that this is a little bit time-consuming, and there's a multi-step deployment process that can be a little bit cumbersome, again, as you continue to scale up, which is what happened to us. So, time goes by, and all of a sudden we have a lot more clients, and our account managers are asking for a lot more monitors, and so we have to begin, again, thinking about how do we reduce a little bit of the friction on the data science team, and how do we make this, again, even still more automated.
And to that end, we came up with a package, and this package was all about deploying a report by combining or marrying this template and now a client-specific config or a configuration, and those two things together create a monitor.
So, broadly speaking, in this auto-monitor package, and I'll talk more about these specifics later, but broadly speaking, there's this deploy function, and deploy just says: give me your account name and give me your configuration, and I will put together a template and a configuration, create a monitor, and deploy it. The template can live inside your package, so you just throw it in the inst directory, and then the configuration is just going to be a list of client-specific details, monitoring thresholds, and so forth.
Also, inside this auto-monitor package are just your standard monitoring functions, and so those are what get used inside your template, and you can use – and this is, like, super cool – you can use the rsconnect package and the connectapi package to add viewer permissions. You can set your refresh schedule. You can tag deployed content so it's easy to find. You can do all these really cool things programmatically, so you don't have to deploy a report and then go to Connect and configure everything manually. You can do all of this programmatically from R.
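A sketch of what such a deploy function might look like, using the real `rsconnect` and `connectapi` packages but with illustrative function and file names (this is not Socure's actual auto-monitor code):

```r
# Deploy a templated monitor for one account, then finish configuring
# the deployed content programmatically via the Connect API.
deploy_monitor <- function(account_name, config) {
  title <- paste("Monitor:", account_name)

  # Publish the template that ships inside the package.
  template <- system.file("monitor.Rmd", package = "automonitor")
  rsconnect::deployDoc(template, appTitle = title)

  # connect() reads CONNECT_SERVER / CONNECT_API_KEY from the environment.
  client  <- connectapi::connect()
  items   <- connectapi::get_content(client)
  content <- connectapi::content_item(client, items$guid[items$title == title])

  # From here you can add viewer permissions, set the refresh schedule,
  # and tag the content so it is easy to find on Connect.
  invisible(content)
}
```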
So, now, with auto-monitor, we can just say, I want to deploy. I'm going to deploy for, you know, this specific client. Maybe you have a set of config defaults, so you don't have to worry about that either. You can just programmatically run through all of these client names and then push out all these reports from your console.
So, now, you're cooking with gas. Everything feels good, except the problem here becomes you have all these reports now, and unless you take the time to go in and customize all of the configs and all of the alerting thresholds – and this gets back to our earlier question – what's going to happen? You have these default set of thresholds, and because they're not client specific and customized for the client needs, you're just going to get tons of alerts, right? So, you kind of want to set these overly cautious alerting thresholds so you're not missing anything, but because each client has different needs based on their business, you're probably just going to end up with a mess of alerts, and this becomes overwhelming. You get a lot of false positives. You get a lot of alert fatigue, which just means people are just going to stop reading their alerts because they're just tired of staring at them.
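Customizing a client's config generally means layering a small set of overrides on top of the shared defaults; in R, base `modifyList()` does exactly that (the field names here are illustrative):

```r
# Shared defaults used for every client unless overridden.
default_config <- list(
  reporting_window_days = 90,
  min_records           = 100,
  max_pct_missing       = 0.05,
  max_score_drift       = 0.10
)

# A client-specific config only needs to state what differs.
client_config <- function(overrides = list()) {
  modifyList(default_config, overrides)
}
```

For example, `client_config(list(reporting_window_days = 180))` keeps every default except the reporting window.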
Self-service monitoring with Shiny
So, time goes by, and what we came up with is a self-service monitoring ecosystem. Data science says: down with Jira tickets, long live self-service. In the old world, account managers are getting all these alerts and could submit tickets. In our case, what we chose to do instead is deploy a Shiny app. So now this is a web app that faces our account managers, and our account managers can go in, choose their clients in that Shiny app, and change the configurations on those configs, so they're changing the thresholds at which alerts are fired. That app will then go and update those monitors so that you're actually getting fewer alerts. And what you're getting is not just fewer alerts, you're getting the right alerts, so you're getting alerts only when they matter.
This is actually an example of what our account managers see, and it's basically a list of all of our clients. It gives the account ID, and then it tells you what was the last alert date, so the reason we're showing this here, and we actually sort by this date, is if you wake up in the morning, and you get all these alerts, and you're just sick and tired of it, you can go to this app, and you can find the accounts that are alerting right away.
So let's update Gringotts, so I'll click on Gringotts here, and what pops up is a panel where I can configure my config file, so what I'll do is I'll open this general settings tab, and now you can see that this is all of the parameters and things that go into our alerting framework. So let's say we just deployed a new model for Gringotts, so I want to go here to my model names, and I want to add this new industry-specific model for Gringotts, and I want to get rid of this old fraud model. Okay, do that, and now I notice that my monitor is only showing me the past 90 days of history. I'm actually interested in 180 days of history for that client, and with that, I will say, okay, I'm ready to go. I'm going to hit update. Okay, so I've updated the config for that client, so the next time that report refreshes, it's going to use those settings.
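A stripped-down sketch of such a panel in Shiny (the real app is richer, and `update_config()` stands in for the call to the plumber API, so it is hypothetical):

```r
library(shiny)

ui <- fluidPage(
  selectInput("account", "Account", choices = c("gringotts", "wonka")),
  textInput("models", "Model names (comma-separated)"),
  numericInput("window", "Reporting window (days)", value = 90),
  actionButton("update", "Update")
)

server <- function(input, output, session) {
  observeEvent(input$update, {
    # Push the new settings to the config API; the monitor picks
    # them up automatically at its next scheduled refresh.
    update_config(
      account = input$account,
      model_names = strsplit(input$models, ",\\s*")[[1]],
      reporting_window_days = input$window
    )
    showNotification("Config updated; applied at the next refresh.")
  })
}

# shinyApp(ui, server)
```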
Architecture: how it all fits together
Okay. We've been talking about Connect this whole time, but I want to dig a little bit more into how Connect is playing a role here because it's sort of the glue that holds everything together, and we're going to get more in the weeds about what exactly is happening behind the scenes. So, let's talk kind of about the architecture.
An account manager goes to configure the monitor from this web app, and so now what's actually happening? Well, behind the scenes, the web app (at least in Socure's case) is interacting with a plumber API. An API, if you're not familiar with them, is just a system for computers or programs to talk to one another, to communicate, to send data back and forth. So what actually happens here is that when you hit that update button in the Shiny app, Shiny sends a request to this API to say, hey, I need you to update this one particular config file. The API says, okay, I'm going to go do that, and it goes and updates a set of config files that are living in the Connect storage.
So, Connect is just sitting on a server, and the server has storage, and so we have a bunch of config files sitting on Connect. The API will just go and make those changes to the config file for the client you requested. From there, the next time the monitors refresh, they pull in that information from the config files, and that's actually where they're getting their thresholds from. So, we've separated the monitors from that configuration and those settings, all of those thresholds and things, and that makes it really easy now to make changes to how those monitors are actually alerting.
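A config API along these lines might be sketched in plumber as follows (the file paths and endpoint names are illustrative, not Socure's actual API):

```r
# plumber.R: a minimal config service. Configs live as <account>.json
# in CONFIG_DIR on the Connect server's storage.
CONFIG_DIR <- "configs"

#* Fetch the config for one account
#* @param account The account name, e.g. "wonka"
#* @get /config/<account>
function(account) {
  jsonlite::read_json(file.path(CONFIG_DIR, paste0(account, ".json")))
}

#* Replace the config for one account
#* @param account The account name
#* @post /config/<account>
function(account, req) {
  cfg <- jsonlite::fromJSON(req$postBody)
  jsonlite::write_json(cfg, file.path(CONFIG_DIR, paste0(account, ".json")),
                       auto_unbox = TRUE, pretty = TRUE)
  list(status = "updated", account = account)
}
```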
So, what actually is this config file? You know, I said there's files sitting on the server, and that's true. In this case, they're just JSON files. So, whereas previously I said, you know, a configuration could be like a list of thresholds, now in this case, it's essentially just like a language agnostic form of that same thing. So, in this case, it's just JSONs. So, if we're looking at the Willy Wonka company, we might have the wonka.json file sitting here on Connect, and this just has things like the account name, the account ID, the model names they're interested in, what's your reporting window, and then all of your thresholds. And basically, the deeper you get into data science, the more you realize it's all just JSON and YAML, and that's pretty much what everything converges to.
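An illustrative `wonka.json` along those lines (the field names are made up for this example, matching the kinds of settings described above):

```json
{
  "account_name": "Wonka Industries",
  "account_id": "wonka",
  "model_names": ["bad_egg_v2"],
  "reporting_window_days": 90,
  "thresholds": {
    "min_records": 100,
    "max_pct_missing": 0.05,
    "max_score_drift": 0.10
  }
}
```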
So, then when a monitor goes and refreshes, what it's actually doing inside that monitor is we have these functions from the automonitor package, and they're basically a suite of API interaction functions that just go out and talk to the plumber API, and then make a request for that config. So, when this monitor refreshes, it's just going to say, hey, I need the latest version of my config file, and the API says, great, here it is, and from there, we can now use that data downstream in our monitors. So, in our case, here we refresh these monitors every night, and so every night, each monitor goes out and grabs its specific set of thresholds and brings those in. So, if an account manager had made a change during that day, that will be reflected the next day in the report.
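One of those API interaction functions might look roughly like this; the base URL, endpoint path, and function name are all illustrative:

```r
# Fetch the latest config for an account from the plumber API.
get_config <- function(account, base_url = Sys.getenv("MONITOR_API_URL")) {
  resp <- httr::GET(paste(base_url, "config", account, sep = "/"))
  httr::stop_for_status(resp)
  jsonlite::fromJSON(httr::content(resp, as = "text", encoding = "UTF-8"))
}

# At refresh time, each monitor pulls its own thresholds:
# config     <- get_config("wonka")
# thresholds <- config$thresholds
```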
Okay, so you might be saying that sounds really convoluted, and it kind of is. Although I will say, plumber APIs, if you're not familiar with APIs, sound kind of scary and kind of magical. Actually, the plumber package makes it so easy to deploy an API that we should all be doing it just for fun.
So, but the broader question is, why would we use an API in the first place? And the basic idea is that it really makes it simple to add configs to our Connect storage during deployment. So, you think about this deployment process, again, it's a template plus the config file, and you get a monitor. Okay, so I'll use auto monitor to deploy a monitor, and that sends a monitor to Connect. That's great. But my config file actually gets pushed through our plumber API and then ends up on the server. And then why is that important? Why do we do it that way? Well, it's actually the easiest way to get something right onto your Connect server.
So, now I can just be sitting here. I'm a data scientist. I can be totally abstracted away from where this stuff lives and how it gets updated. I just have this nice API sitting there that I can then use. And then also in our auto monitor package, we include API interaction functions that allow you to create configs, retrieve configs, update configs, delete configs, and so forth. So, all of our data scientists have a nice set of tools at their disposal if they wanted to interact with these configs sitting on Connect.
And this also makes it really nice for programmatic deployments or updates. In addition, any other sort of local development, app testing, or access by other applications company-wide is made super easy by having this API there. So, say we wanted to migrate the monitor system somewhere else. The question is, well, how would we communicate to those other systems what thresholds we're currently using? And the answer is: just stick an API on Connect, and now all these systems company-wide can talk to that API.
All right. APIs are also really cool on Connect because they come with Swagger docs for free. So, when you deploy an API, you get a nice set of documentation that comes with it. This is essentially what the documentation looks like when you deploy an API. APIs have these things called endpoints, which are just like URLs that you can go and send information to and get information from. And this is the Swagger documentation that Plumber will build for you, which is so nice when you create an API. So now, if somebody has a question about how this API works, or how to interact with it, or some engineer somewhere else in the company wants to use it, you just point them here, and they have this nice set of documentation.
Current state and future possibilities
But, you know, this is kind of our current state of monitoring. There's a future state you could imagine where you don't necessarily have to involve data science in every single deployment. So, I showed before how data science uses the auto-monitor package to deploy monitors programmatically, and that works great. If you wanted to remove data science from the equation even a little bit more, you could think about providing a deploy feature in your Shiny app, so the account managers can go in and deploy new monitors as they get new accounts.
You could even think about maybe having a process that grabs all of our data, looks for new accounts, and if it sees a new account, auto deploys a monitor, sets up a config, etc. So, you can think of a world that, you know, is even more hands-off. We haven't moved there yet, but that's something that's also within the realm of possibility.
One thing I do want to call out with Connect that really helped enable this process for us is the usage of what they call external packages. These are like system-wide packages. The idea here is that we as data scientists can build a package, like in our case auto monitor, and then we can create an installation on Connect and that is the version that our API is going to use or that our monitors are going to use and that our Shiny app is going to use. The idea there is that you just have this one nice standardized version that every system is using and you know that it's always up to date because you can go in and configure that.
That's just been a really nice way to make sure that we know exactly how our monitors are going to behave, because we are controlling the version of that package on Connect. The documentation does say that external packages decrease reproducibility and isolation of the content on Connect and should only be used as a last resort. I think that's fair. We said YOLO, and it's actually been working out well for us in this particular instance.
Why we keep using Connect
Okay, just to wrap things up, why did we try Connect for monitoring in the first place? You know, like we said before, there's all kinds of great monitoring solutions out there. Even if you or your company is exploring options for monitoring and looking at different vendors, the fact that if you have Connect and you're prototyping today, that has a very high value. So, you don't have to wait, you know, for a system to be stood up and then to go learn that system in order to start deploying and seeing the value of monitors. If you have Connect, have R Markdown, you can start doing this stuff today and just kind of figuring out what works.
And, you know, there's just no waiting around for engineering resources or prioritization, you can just start working. Scheduling and user access control is really effortless on Connect. It's super nice to just put something up and then you know who can see it, you know how often it gets refreshed, you can change that really easily. And then also our non-coding partners, like our account managers, can be super engaged and empowered in this whole process.
So, that's why we started using Connect and now this has been a journey that's, you know, longer than my tenure at Socure. And so, the question is, why do we keep using Connect for monitoring even when we have other options at our disposal? It's, well, it's super cost effective, for one. There's no marginal expenditure for hosting additional monitors, for building additional APIs and hosting those, and for pushing out web apps. It's low maintenance, so a well-designed ecosystem can run indefinitely without intervention. I don't think I've actually touched this system or the code in a year because it just runs so well.
And when we say well-designed, that's going to include making sure your packages are well-tested. I haven't gotten into it in this talk, but auto-monitor is thoroughly tested, including all of our API interaction calls. So, make sure you're doing that. But once you do, yeah, it can run really well. And then there's just the extra control. I mean, again, there's no waiting around for someone to go and configure something for you if you need that. If you're a data scientist, you can own this product and respond to user needs as quickly as you'd like.
Okay. So, with that, thank you very much. Yeah. And Connect has been a lifesaver. And I tweeted out last February just like how Connect has been super helpful just in empowering me and empowering our end users to own their work. So, I hope that you can find a way to do that, too.