This page presents example monitor configurations for some common alerting use cases.
To receive a notification on all occurrences of an error, create a match monitor where the filter conditions match the events reporting the error.
To receive only certain attributes in the notification message, use the `project` operator.
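As a sketch, a match monitor filter of this kind might look like the following. It assumes the `['sample_dataset']` dataset and `status.code` attribute used elsewhere on this page; the `trace_id` attribute is hypothetical and stands in for whatever fields you want in the notification:

```kusto
['sample_dataset']
// Match only events that report an error
| where ['status.code'] == 'ERROR'
// Keep only the attributes you want in the notification message
| project _time, ['status.code'], trace_id
```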
To receive a notification when the error rate exceeds a threshold, create a threshold monitor with an APL query that identifies the rate of error messages.
For example, logs in your dataset `['sample_dataset']` have a `status.code` attribute that takes the value `ERROR` when a log is about an error. In this case, the following example query tracks the error rate every minute:
Other options: You can configure the threshold comparison so that the monitor triggers when the value is above or equal to the threshold.
To receive a notification when the number of error messages of a given type exceeds a threshold, create a threshold monitor with an APL query that counts the different error messages.
For example, logs in your dataset `['sample_dataset']` have an `error.message` attribute. In this case, the following example query counts errors by type every 5 minutes:
Other options:
By default, the monitor enters the alert state when any of the counts returned by the query cross the threshold, and remains in the alert state until no counts cross the threshold. To alert separately for each message value instead, enable Notify by group.
To receive a notification whenever your response times spike without having to rely on a single threshold, create an anomaly monitor with an APL query that tracks your median response time.
For example, you have a dataset `['my_traces']` of trace data with the following:

- a `route` field
- a `duration` field
- a `parent_span_id` field that is empty for root spans

The following query gives median response times by route in one-minute intervals:
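The query is missing from this page; a minimal sketch, assuming the `['my_traces']` dataset with the `route`, `duration`, and `parent_span_id` fields described above, could be:

```kusto
['my_traces']
// Keep only root spans, where parent_span_id is empty
| where isempty(['parent_span_id'])
// Median (50th percentile) duration per route, per minute
| summarize percentile(duration, 50) by ['route'], bin(_time, 1m)
```

An anomaly monitor on this query then alerts when the median response time for a route deviates from its recent pattern, without a fixed threshold.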