
Dev Diary: Log Streaming ft. Axiom
Hey, my name is Rakeeb, and I'm one of the engineers at Convex who built our new log streaming feature, so I'm going to tell you a little bit about it.

For context, Convex has been supporting larger and larger customers, and one of the most important parts of a production-ready backend is being able to monitor and observe your application. For Convex, that meant we wanted users to be able to monitor performance and support more complex query and storage patterns using the logs generated in their Convex functions. Log streaming takes those logs, along with a bunch of other metadata and events that happen in your backend, and sends them out to your favorite logging destination. We currently support three log streams: Datadog, which we started with; Axiom, which was a user request; and webhooks, which are kind of our escape hatch, where we POST log events to whatever URL you configure. If there's another log stream you'd like to see here, please let us know on Discord or send us an email, and we'll be happy to build it.

One of the tricky things about getting log streams to work in Convex was thinking through the consistency guarantees we wanted to provide. It's challenging because logs are some of the highest-throughput events many backends generate: it's really easy to busy-loop and produce a ton of logs. So we had to consider what delivery and ordering guarantees we could offer, and you can see some of our decisions on the log streams page in the documentation. We took a lot of inspiration from Kafka, and we also looked at other stream processing systems like Apache Storm, so that was a really gratifying and interesting part of the engineering effort.

To show you some of the cool things you can do with log streams, I'm going to set up a log stream in Axiom for our project, AI Town. Here I'm on
my AI Town dashboard inside the Convex dashboard, and I've navigated to Settings and then the Log Streams tab. I want to configure an Axiom log stream, so I open up the Axiom configuration modal, and I can see that I need a dataset name, an API key, and, optionally, a list of attributes. So let's do that: copy this, go back to Datasets. I can just call this ai-town, put in an API key, and maybe add an attribute here; call it "project" with the value "ai-town". These attributes will show up on all the payloads that get sent to Axiom. As you can see, the Axiom sink is verified and now active. If I navigate to the Stream page and click ai-town, I can see that my function logs are all being streamed in a structured format to Axiom.

Now I can do all kinds of cool things here. Maybe I want to find which functions are erroring, as a way to debug. I can look at the topic: every log event in Convex has a topic, and specifically I'm interested in execution records. Let's start with that, and look for data.
status equals "failure". Any Convex function that throws an uncaught error will generate this failure event, and every Convex function generates an execution record log event after executing. So I can see here that there are a bunch of functions erroring; specifically, this runAgentBatch function has an uncaught server error, so that's probably something I should go and debug. There are a bunch more here as well.

Another cool thing you can do is build dashboards. I can go to the Dashboards tab and create a new dashboard, which I'll call "AI Town test". Specifically, I'd like to visualize which functions on my Convex backend are slow, and then take a finer look at how I can improve their performance. So I'll create a chart plotting the percentiles of execution times; as you can see, Axiom already suggests this, which is nice. Maybe I also want to group by the function names themselves, and it's right here: data.function_path, which identifies the functions. Now I can save this, make it a little bigger, and see the latencies of my functions. Specifically, it's this runAgentBatch function that's taking a lot more time than all the others, so maybe I need to take a finer-grained look at that. And of course you can plot whatever latencies you want.

Another cool thing you can do is create monitors. I'm not going to set one up right now, but you can create alerts based on these metrics as well. Maybe I want some kind of alert on function execution milliseconds: if any function takes longer than 2 seconds, that's something I'd want to know about. I just need to write a query here, something like: let's say that
if the average execution time gets too high, fire this monitor. What this is telling me is that if the average execution time of my functions exceeds 2 seconds within any 5-minute period, it'll alert me, and I can configure the monitor's notifications, like who to alert and whether it's over email or something else. So this is a great way to get visibility into your Convex deployments.

Thanks for checking out this video! Log streaming has been a pleasure of a feature to work on, so if you have Convex Pro, make sure to give it a try.
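As a footnote on the webhook stream mentioned in the video: it just POSTs batches of log events to whatever URL you configure. Here's a rough TypeScript sketch of what a receiver might do with such a batch, replicating the failure filter from the Axiom demo. The field names (`topic`, `data.status`, `data.function_path`, `data.execution_time_ms`) and the example function paths are inferred from the demo, not a documented schema, so treat them as assumptions.

```typescript
// Hypothetical shape of a Convex log event, inferred from the demo.
// The real payload schema may differ; treat these fields as assumptions.
interface LogEvent {
  topic: string; // e.g. "execution_record"
  data: {
    status?: "success" | "failure";
    function_path?: string;
    execution_time_ms?: number;
  };
}

// Pick out failed executions from a webhook batch, mirroring the
// data.status == "failure" query from the demo.
function failedExecutions(events: LogEvent[]): string[] {
  return events
    .filter((e) => e.topic === "execution_record" && e.data.status === "failure")
    .map((e) => e.data.function_path ?? "<unknown>");
}

// Example batch, as a webhook endpoint might receive it after parsing JSON.
const batch: LogEvent[] = [
  { topic: "execution_record", data: { status: "success", function_path: "world:heartbeat", execution_time_ms: 12 } },
  { topic: "execution_record", data: { status: "failure", function_path: "agents:runAgentBatch", execution_time_ms: 2400 } },
  { topic: "console", data: {} },
];

console.log(failedExecutions(batch)); // ["agents:runAgentBatch"]
```

In practice this logic would sit behind the HTTP endpoint you point the webhook stream at.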
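Similarly, the dashboard chart from the demo (percentiles of execution time, grouped by data.function_path) boils down to a group-by plus a percentile. A minimal sketch of that aggregation, with the same assumed field names:

```typescript
// Compute a percentile (default p95) of execution times per function,
// mirroring the "group by data.function_path" chart from the demo.
function percentileByFunction(
  samples: { functionPath: string; executionTimeMs: number }[],
  p = 0.95,
): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const s of samples) {
    const g = groups.get(s.functionPath) ?? [];
    g.push(s.executionTimeMs);
    groups.set(s.functionPath, g);
  }
  const result = new Map<string, number>();
  for (const [path, times] of groups) {
    times.sort((a, b) => a - b);
    // Nearest-rank percentile: index ceil(p * n) - 1.
    const idx = Math.max(0, Math.ceil(p * times.length) - 1);
    result.set(path, times[idx]);
  }
  return result;
}

const stats = percentileByFunction([
  { functionPath: "agents:runAgentBatch", executionTimeMs: 1200 },
  { functionPath: "agents:runAgentBatch", executionTimeMs: 2400 },
  { functionPath: "world:heartbeat", executionTimeMs: 15 },
]);
console.log(stats.get("agents:runAgentBatch")); // 2400
```

Axiom does this server-side, of course; the point is just to show what the chart is computing.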
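And the monitor at the end of the demo is essentially a rolling-window aggregation: alert if the average execution time within any 5-minute window exceeds 2 seconds. A sketch of that check, again under the same assumptions about the event fields:

```typescript
interface Execution {
  timestampMs: number; // event time
  executionTimeMs: number; // how long the function ran
}

// Return true if the average execution time within any rolling window
// exceeds the threshold (2000 ms over 5 minutes, as in the demo).
function shouldAlert(
  events: Execution[],
  windowMs = 5 * 60 * 1000,
  thresholdMs = 2000,
): boolean {
  const sorted = [...events].sort((a, b) => a.timestampMs - b.timestampMs);
  let start = 0;
  let sum = 0;
  for (let end = 0; end < sorted.length; end++) {
    sum += sorted[end].executionTimeMs;
    // Slide the window start forward until it spans at most windowMs.
    while (sorted[end].timestampMs - sorted[start].timestampMs > windowMs) {
      sum -= sorted[start].executionTimeMs;
      start++;
    }
    if (sum / (end - start + 1) > thresholdMs) return true;
  }
  return false;
}

console.log(shouldAlert([
  { timestampMs: 0, executionTimeMs: 100 },
  { timestampMs: 60_000, executionTimeMs: 150 },
])); // false: average well under 2 seconds

console.log(shouldAlert([
  { timestampMs: 0, executionTimeMs: 3000 },
])); // true: the 5-minute average exceeds 2 seconds
```

Axiom evaluates the real monitor for you; this just makes the trigger condition concrete.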
We just released log streaming in Convex Pro. We invited Rakeeb, our awesome engineer behind the feature, to come on the channel and share his thoughts and give a quick demo.
Read more about the feature in the docs here.
Convex is the backend platform with everything you need to build your full-stack AI project. Cloud functions, a database, file storage, scheduling, workflow, vector search, and realtime updates fit together seamlessly.