Unless your user is actually experiencing a service or website as it’s meant to be experienced, it doesn’t matter if your monitoring tool says it’s up or down.
These are words to live by, especially as digital environments are increasingly accountable for helping users along their buying journey, and a smooth journey ultimately benefits the bottom line.
Today’s landscape: digital user experience
Today, when users encounter a website or application issue, they typically try the function once more (e.g., hitting “Place Order” in an online shopping cart) and then abandon the page if it still doesn’t work, which means retail dollars lost through no fault of the user. There are countless other examples, in B2B scenarios as well as B2C, which is why we need to monitor the user experience: so we can catch issues before they reach users and avoid a bottom-line loss. Put simply,
real user experience monitoring is a critical element for understanding potential availability problems before they become an issue for current and prospective customers.
Beyond the e-commerce web performance example, say you’re in software sales and need to prepare for a sales meeting using a lead generation web application, or you’re a finance department employee reconciling biweekly payroll in a web app. If the website or application runs on servers across different cloud providers (AWS, Azure, Google Cloud, or a private cloud) and is geographically dispersed across North America and Europe, a U.S.-based user will have a different experience than their counterpart in Germany. In both B2C and B2B examples, every app and website has global consumers, and each should experience seamless, full availability.
Real user monitoring is important, but so is collaboration. User experience monitoring can’t be done without mobilizing the web, operations, and development teams to collaborate on monitoring user experience, metrics, traces, and logs. These teams need to work in tandem to monitor and manage web and application environments, but when the inevitable workload latency or regional server disruption happens, monitoring the application’s performance from the user’s perspective is what makes the real difference in service.
Case study in real user monitoring
Let’s dive into a case study.
- If your users are experiencing login errors in their web app, you’ll likely see this crop up in your log management tool first thanks to alerts, leading you to dig into the event logs and discover those failed logins.
- Then, jump into an APM solution to explore the metrics and trace the issue across your distributed systems. But because metrics are aggregated over time, you can’t drill into the regional data needed to see the problem. So the metrics and traces aren’t showing the issue that was clearly surfaced in the log tool.
- So, perhaps it’s an issue with the web front end. You take a look at your user experience monitoring solution and run a page speed test on the U.S. login screen; again, things look normal. Then you change the view to EMEA and see that the page speed test is showing long wait times. Digging into component times, you spot a significant increase in latency compared to North America and trace it to the provider-hosted authentication database, which is experiencing network issues in the EMEA region.
- Problem solved! Your user can go back to using the application without latency or other challenges.
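The walkthrough above hinges on one idea: a global aggregate can look healthy while a single region suffers. The following is a minimal sketch of that per-region breakdown, using hypothetical page-load samples rather than any specific APM or RUM product’s API:

```python
from statistics import mean

# Hypothetical page-load samples in seconds, grouped by region.
# In practice these would come from RUM agents or synthetic probes.
samples = {
    "us-east": [0.8, 0.9, 0.7, 0.8],
    "us-west": [0.9, 1.0, 0.8, 0.9],
    "emea":    [4.2, 3.9, 4.5, 4.1],  # degraded authentication backend
}

def regional_report(samples, threshold=2.0):
    """Return the global average, per-region averages, and a sorted
    list of regions whose average load time exceeds `threshold`."""
    averages = {region: mean(times) for region, times in samples.items()}
    overall = mean(t for times in samples.values() for t in times)
    slow = sorted(r for r, avg in averages.items() if avg > threshold)
    return overall, averages, slow

overall, averages, slow = regional_report(samples)
# With these numbers, the global average stays under the 2-second alert
# threshold even though EMEA is far outside it -- which is why the U.S.
# view looked normal while German users were stuck waiting.
```

The design point is simply that the regional dimension must survive aggregation: an average computed across all users will dilute a regional outage into noise, while a per-region view flags it immediately.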
Real user monitoring in the future
In short, it doesn’t matter if your metrics and traces show normal activity levels in your monitoring system when your event logs and user reports say otherwise. Today’s users expect 100% service and uptime; otherwise, they’ll simply disengage and potentially turn to a competitor instead. Even more importantly, in the age of customer centricity, it’s incumbent on web, dev, and ops teams to have visibility into the user’s experience so they can ensure full availability of the service they’re providing.