Log Streams: Common uses
With Convex, you can see information about each function executed by Convex, such as whether it succeeded and how long it took to execute, as well as any log lines from `console.log`s within your functions. These are useful for understanding what your Convex deployment is doing and for debugging unexpected issues. Recent events are visible in the dashboard and from the CLI with `npx convex logs` or with the `--tail-logs` argument to `npx convex dev`.
However, you can also set up Log Streams to send these events to Axiom or Datadog.
Log streams give you more control over your logs and errors:
- Retain historical logs as long as you want (Convex itself only keeps logs for the last 1000 function executions)
- Add more powerful filtering and data visualizations based on logs
- Integrate your log streaming platform with other tools (e.g. PagerDuty, Slack)
This article will go over a few common ways to use log streams and how to set them up with either Axiom or Datadog:
- Replicating the Convex dashboard logs page
- Filtering to relevant logs by request ID
- Searching for logs containing a particular string
- Emitting + filtering namespaced logs with structured metadata
- Visualizing Convex usage
- Alerting on approaching Convex limits
How to set up a log stream
Follow our docs to set up a log stream. You’ll need to set up an account for whichever tool you’re using. I’ve personally liked using Axiom for logs and Sentry for exception reporting.
Common ways to use log streams
The full schema of Convex log events is documented here, and the log stream provider of your choosing will have its own docs on how to filter and visualize data, but in this section we’ll go through a few common scenarios.
Recreating the dashboard logs page
The dashboard logs page shows `console` log lines and function executions sorted by time. To recreate this with a log stream, we can filter to logs where `topic` is either `console` or `function_execution`.
Some useful columns to display:
- `function.path`, `function.type`, `function.request_id`
- For function executions: `function.cached`, `status`, `error_message`
- For console events: `log_level`, `message`
Since the columns differ between console log events and function execution events, you might set up two different views for them. Once you have these set up how you want, save the queries or add them to a dashboard for easy use later on.
Below is an example showing console logs in Axiom and an example of showing function executions in Datadog.
Console logs in Axiom
Function executions in Datadog
Filtering to a request ID
In the dashboard, clicking on an entry in the logs page will open up a view filtered to that request using the Request ID. You can also do this in Axiom or Datadog by filtering your events further on `function.request_id`. The request ID shows up in error messages and in Sentry, so this can be useful for investigating an error found in Sentry or reported by a user.
Request ID filtering in the dashboard
Request ID in Sentry
Axiom: In the Axiom “Explore” tab, filter with something like this:

```
your_dataset
| where ['data.function.request_id'] == "your request ID here"
```
Datadog: In the Datadog logs page:

```
@function.request_id:"your request ID here"
```
Filtering to `console` events with a particular message
Axiom:

```
your_dataset
| where ['data.topic'] == "console"
| where ['data.message'] contains "hello"
```
Datadog:

```
@message:hello
```
Namespaced logs + structured metadata
As an example, if I have an app where users play games against each other, I might want to log information about each game with some specific attached metadata (like the game ID).
In my Convex functions, I’ll do something like this:
```typescript
console.log(JSON.stringify({
  topic: "GAME",
  metadata: { gameId: "my game ID here" },
  message: "Started",
}));
```
Then I can parse these logs in Axiom or Datadog and filter to all events with topic `"GAME"` and a particular game ID.
To make this a little easier, we can make this a helper function:
```typescript
function logEvent(topic: string, metadata: Record<string, unknown>, message: string) {
  console.log(JSON.stringify({ topic, metadata, message }));
}
```
Going further, we could use `customFunctions` to wrap `console.log` and handle logging these structured events. A usage of this might look something like:

```typescript
ctx.logger.log(LOG_TOPICS.Game, { gameId }, "Started");
```

An example implementation of `ctx.logger` and some examples of its usage can be found here.
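As a rough sketch of what such a logger could look like (the names `LOG_TOPICS` and `makeLogger` are my own, not part of the Convex API; in a real app a customFunction wrapper would attach the logger to `ctx`):

```typescript
// Hypothetical structured logger that a customFunction wrapper could
// expose as ctx.logger. Names here are illustrative, not Convex APIs.
const LOG_TOPICS = { Game: "GAME", Matchmaking: "MATCHMAKING" } as const;
type LogTopic = (typeof LOG_TOPICS)[keyof typeof LOG_TOPICS];

function makeLogger(sink: (line: string) => void = console.log) {
  return {
    log(topic: LogTopic, metadata: Record<string, unknown>, message: string) {
      // Emit one JSON object per line so the log stream can parse it.
      sink(JSON.stringify({ topic, metadata, message }));
    },
  };
}

// Usage as it would appear inside a Convex function body; here we collect
// lines into an array instead of console.log to show what gets emitted.
const lines: string[] = [];
const logger = makeLogger((l) => lines.push(l));
logger.log(LOG_TOPICS.Game, { gameId: "game_123" }, "Started");
```

Keeping the topic names in a single `const` object makes the set of namespaces easy to audit and keeps typos out of your filters.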
Axiom:
(Optional) Add a virtual field `parsed_message` so we can use this field in filters. This saves us from having to repeat the parsing logic in our query.
```
['your_dataset']
| extend parsed_message = iff(
    isnotnull(parse_json(trim("'", ['data.message']))),
    parse_json(trim("'", ['data.message'])),
    parse_json('{}')
  )
```
Adding a virtual field in Axiom
In the “Explore” page:

```
your_dataset
| where ['data.topic'] == "console"
| where parsed_message["topic"] == "GAME"
| where parsed_message["metadata"]["gameId"] == <your id>
| project ['data.timestamp'], ['data.log_level'], parsed_message["message"]
```
Filtering to logs for a game in Axiom
Datadog:
Add a pipeline with a Grok parser to parse the `message` field as JSON on all events with `topic` set to `console`. I used:

```
rule '%{data:structured_message:json}'
```
Adding a Grok parser in Datadog
Filter logs as follows:

```
@structured_message.topic:GAME @structured_message.metadata.gameId:<specific ID>
```
Filtering to logs for a game in Datadog
Note: `message` is formatted using object-inspect, so printing a string requires removing the outer single quotes.
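If you are post-processing these events yourself, the same quote-stripping that the Axiom `trim("'", ...)` call performs can be sketched in TypeScript (the function name `parseConvexMessage` is mine, not a Convex API):

```typescript
// Sketch: a logged JSON string arrives wrapped in single quotes because
// Convex formats console arguments with object-inspect. Strip the outer
// quotes, then attempt JSON.parse; return null for unstructured lines.
function parseConvexMessage(raw: string): unknown {
  const trimmed = raw.replace(/^'/, "").replace(/'$/, "");
  try {
    return JSON.parse(trimmed);
  } catch {
    return null; // not a structured log line
  }
}

const parsed = parseConvexMessage(`'{"topic":"GAME","message":"Started"}'`);
// parsed is { topic: "GAME", message: "Started" }
```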
Visualizing usage
Function executions contain the `usage` field, which can be used to track usage stats like database bandwidth and storage per function.
Axiom:
```
your_dataset
| where ['data.topic'] == "function_execution"
| extend databaseBandwidthKb = (todouble(['data.usage.database_read_bytes']) + todouble(['data.usage.database_write_bytes'])) / 1024
| summarize sum(databaseBandwidthKb) by ['data.function.path'], bin_auto(_time)
```
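The per-event arithmetic in that query is straightforward; in TypeScript terms (the field names follow the Convex log event schema, while the helper name is mine):

```typescript
// Sketch of the per-event calculation the Axiom query performs: total
// database bandwidth in KiB, from the usage field of a function_execution
// log event. Field names match the Convex log event schema.
interface Usage {
  database_read_bytes: number;
  database_write_bytes: number;
}

function databaseBandwidthKb(usage: Usage): number {
  return (usage.database_read_bytes + usage.database_write_bytes) / 1024;
}

const kb = databaseBandwidthKb({ database_read_bytes: 1024, database_write_bytes: 2048 });
// kb === 3
```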
Datadog:
You will want to make a “measure” for the usage fields you care about, and you might want to make a “facet” for `function.path`. Below is an example of making a measure for `database_write_bytes`.
Defining a measure in Datadog
Making a pie chart in Datadog
Convex system warnings
Convex automatically adds warning messages when a function is nearing its limits (e.g. total bytes read, execution time). These have a `system_code` field, which is a short string summarizing the limit. Adding an alert for events with `system_code` set is a good way of automatically detecting functions that are approaching limits before they exceed them and break.
An alert in Datadog for Convex system warnings
Summary
Log streams like Axiom and Datadog provide powerful querying and alerting on logs and errors from your Convex functions, helping you debug issues when they come up and giving you early insight to catch small issues before they become big ones.
This article covers how to do the following common things with either Axiom or Datadog hooked up as a Convex log stream:
- Replicating the Convex dashboard logs page, but with more history
- Filtering to relevant logs by request ID
- Searching for logs containing a particular string
- Emitting + filtering namespaced logs with structured metadata
- Visualizing Convex usage
- Alerting on approaching Convex limits
Convex is the sync platform with everything you need to build your full-stack project. Cloud functions, a database, file storage, scheduling, search, and realtime updates fit together seamlessly.