Weekly Reports for Early Adopter Organizations


#1

Here at Sentry, we care a lot about software quality and aim to build tools that help you deliver a better experience to your users. Most of the functionality that Sentry provides focuses on helping you identify and resolve issues in your production environment as quickly as possible.

In addition to the ability to react to errors as they happen, a large (and often under-appreciated) element in building quality software is looking back at your past performance and understanding if your software is getting more or less reliable.

With this in mind, we’ve been working on a new feature for Sentry that we’ve been calling Weekly Reports. These reports are intended to show larger trends in your organization’s performance across all of the projects you’re involved in, over a longer time interval than you’d typically view in the Sentry web interface.

For example, here’s a rendered report (with mocked data):

We’re sending these reports out every Monday morning to hosted Sentry organizations that have opted into the Early Adopter program. If you’d like to receive these emails and haven’t already enabled the Early Adopter setting on your organization, you can do so by ticking the “Early Adopter” checkbox in your organization settings:

If your organization has the Early Adopter setting on but you don’t want to receive these emails for some reason, you can disable these emails in your notification settings (but please take a few seconds and let us know what you didn’t find useful — we’d really appreciate it):

We still have some additional features on deck that we’re planning to implement before we release these to everybody, but we’d love to hear your thoughts, ideas, or other feedback here as we’re working on these reports.


More detailed stats
#2

This is great and I can’t wait to try it. PMs who need to share information internally could get a lot of use out of it.

Companies that have their own custom tags might want reports broken down per tag, but maybe that would be configurable? To get started, this could be limited to “native” tags like environment, release, or context (os.name).


#3

Yes! Everything @bsergean said. We use a custom tag to distinguish between about a dozen highly integrated projects (whose JavaScript all runs together but must be debugged separately). I generate a janky text-based report every week, so a classy report like this will be amazing!
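For reference, the janky weekly report I generate is roughly like this (a minimal sketch with made-up event data and a hypothetical `project` tag value per event — not anything Sentry-specific):

```python
from collections import Counter

# Mock event stream: (custom "project" tag, error message). In reality these
# would come from our logs or the Sentry API; the tag values here are made up.
events = [
    ("checkout", "TypeError: undefined is not a function"),
    ("checkout", "TypeError: undefined is not a function"),
    ("search",   "ReferenceError: fuse is not defined"),
    ("checkout", "RangeError: invalid array length"),
    ("profile",  "TypeError: undefined is not a function"),
]

# Count this week's events per tag value.
counts = Counter(tag for tag, _ in events)

# Emit the text-based report: one line per tag, busiest project first.
for tag, n in counts.most_common():
    print(f"{tag:10s} {n:4d} events")
```

Having Sentry break a report down by tag like this, instead of scripting it by hand, is exactly what I’m hoping for.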


#4

Yeah, we’re hoping this provides general, high-level “what is going on with our account” type statistics. The plan is to follow this up with a project-specific report that breaks things down in more detail. We decided against doing any of that organization-wide, as it’s really hard to keep context when you’re spanning multiple projects on multiple platforms.


#5

With respect to the screenshot, the “Filtered” and “Rate limited” numbers are not that useful to us. But the user count shown on the project dashboard (tsdb) would be good to have there.

Lately our organization has been looking into per-OS-version crashes (say, for the latest iOS 10 release), and it’s been super useful to look at the tags subpage on crash aggregates to get insights, but this is always limited to just one crash. Maybe this kind of feature would be doable for a single project, though, like David was saying. So should the “dashboard” view for a project become the new home for tag-distribution charts across all crashes in a project? Should this be its own discussion on the forum?

Or maybe the weekly report could also be available per project? It is useful to have something per organization, as we already have 4 projects in our main organization (or team, actually … which might be the same thing).


#6

Thanks everyone for the thoughtful feedback.

This is a great idea, and I’ll look at adding this into the report. This is already tracked at the project level today, but I don’t think we display it anywhere in the application UI.

We try not to use this as a basis for direct comparison between projects, since some projects/applications don’t have a concept of a user at all, such as batch-processing jobs or asynchronous tasks that don’t directly impact a request workflow. It’s good information to display when it’s available, though.

We’ve actually talked a little bit about this internally, and I wouldn’t be surprised if that’s the direction we eventually move in for the project dashboards. (We haven’t formally decided that, though, so no promises here.)

As David mentioned, the plan is to first release a high-level organization report that doesn’t focus on specific projects, environments, or issues, but instead provides more of a manager’s-eye view of organization performance for metrics that are generally comparable across all projects (number of events, users, etc.). After this, we’ll focus on per-project reports that’ll have a lot more of the details that I think you’re interested in as an engineer or engineering manager for a specific project, such as breakdowns of releases that have had lots of issues, platforms that are particularly problematic, etc. I think this was basically just a long way of saying: yes, per-project reports are where our head is at, too. :grinning: (I think that this addresses @mwcz’s point also.)


#8

Nice work on the new report! One piece of feedback: two of the primary charts emphasize day-over-day views instead of week-over-week. However, I’m not really concerned with whether Friday was different from Thursday; that’s too fine-grained a view to steer development work with.

Rather, I want to know whether this week was worse or better than last week, and to drill in by project. The final chart gets at that, but obscures it by preferring a calendar view to a direct week-over-week comparison.

Just a few ideas; perhaps you can validate with other folks you’re talking to.


#9

Just got a weekly update email, love it! I do wish there was a way to hide or segment warnings from errors, though.


#10

Thanks for the feedback!

We’ve had some similar conversations internally, and we’re going to make some changes to the top chart to show a longer time range.

We also have project reports on the roadmap, and we’ll be able to bring some more detailed information into those reports. There’s only so much information we can fit in a single email, though. :slight_smile:

I think we’ll probably do something like this in the project report mentioned above. We’ll take a look at it for organization reports, but it might be a little too much detail for what we’re trying to communicate at that level.

Also, for anybody who’s viewed the report on mobile — we know it’s pretty ugly! We’ve got some changes on the way to make it significantly more readable on mobile devices.


#11

I see a lot of improvements were discussed in Oct 2016, and now it’s Apr 2017. Has there been any movement in this direction, and can we expect improvements any time soon?


#12

Also, is there any report for a longer period than 12h / 24h / several days / a week?

It would be very useful to view a stats diagram that shows exceptions (distinct and total) over any long period of time, since a product’s release period and lifetime span months and years.