Archive for the 'Mozilla' Category

How I Do Code Reviews at Mozilla

Wednesday, October 22nd, 2014

Since I received some good feedback about my prior post, How I Hire at Mozilla, I thought I’d try to continue this as a mini-series about how I do other things at Mozilla. Next up is code review.

Even though I have found new module owners for some of the code I own, I still end up doing 8-12 review/feedback cycles per week. Reviews are only as good as the time you spend on them, so I approach reviews in a fairly systematic way.

When I load a patch for review, I don’t read it top-to-bottom. I also try to avoid reading the bug report: a code change should be able to explain itself either directly in the code or in the code commit message which is part of the patch. If bugzilla comments are required to understand a patch, those comments should probably be part of the commit message itself. Instead, I try to understand the patch by unwrapping it from the big picture into the small details:

The Commit Message

  • Does the commit message describe accurately what the patch does?
  • Am I the right person to make a decision about this change?
  • Is this the right change to be making?
  • Are there any external specifications for this change? This could include a UX design, or a DOM/HTML specification, or something else.
  • Should there be an external specification for this change?
  • Are there other experts who should be involved in this change? Does this change require a UX design/review, or a security, privacy, legal, localization, accessibility, or addon-compatibility review or notification?

Read the Specification

If there is an external specification that this change should conform to, I will read it or the appropriate sections of it. In the following steps of the review, I try to relate the changes to the specification.

Documentation

If there is in-tree documentation for a feature, it should be kept up to date by patches. Some changes, such as Firefox data collection, must be documented. I encourage anyone writing Mozilla-specific features and APIs to document them primarily with in-tree docs, and not on developer.mozilla.org. In-tree docs are much more likely to remain correct and be updated over time.

API Review

APIs define the interaction between units of Mozilla code. A well-designed API that strikes the right balance between simplicity and power is a key component of software engineering.

In Mozilla code, APIs can come in many forms: IDL, IPDL, .webidl, C++ headers, XBL bindings, and JS can all contain APIs. Sometimes even C++ files can contain an API; for example, Mozilla has a mostly-unfortunate pattern of using the global observer service as an API surface between disconnected code.

In the first pass I try to avoid reviewing the implementation of an API. I’m focused on the API itself and its associated doccomments. The design of the system and the interaction between systems should be clear from the API docs. Error handling should be clear. If it’s not perfectly obvious, the threading, asynchronous behavior, or other state-machine aspects of an API should be carefully documented.
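To illustrate the level of doccomment detail I look for, here is a sketch in Python-docstring form; the DownloadManager API is invented for this example, not real Mozilla code:

class DownloadManager:
    """Coordinates background downloads.

    Threading: all methods must be called on the main thread, and
    completion callbacks are invoked back on the main thread.
    """

    def start_download(self, url, on_done):
        """Begin downloading `url` asynchronously.

        Returns immediately. When the transfer finishes, `on_done(result)`
        is invoked exactly once. Network failures are reported through
        `result.error`; this method itself never raises for them.
        """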

During this phase, it is often necessary to read the surrounding code to understand the system. None of our existing tools are very good at this, so I often have several MXR tabs open while reading a patch. Hopefully future review-board integration will make this better!

Brainstorm Design Issues

In my experience, the design review is the hardest phase of a review, the part which requires the most experience and creativity, and provides the most value.

  • How will this change interact with other code in the tree?
  • Are there edge cases or failure cases that need to be addressed in the design?
  • Is it likely that this change will affect performance or security in unexpected ways?

Testing Review

I try to review the tests before I review the implementation.

  • Are there automated unit tests?
  • Are there automated performance tests?
  • Is there appropriate telemetry/field measurement, especially for error conditions?
  • Do the tests cover the specification, if there is one?
  • Do the tests cover error conditions?
  • Do the tests strike the right balance between “unit” testing of small pieces of code versus “integration” tests that ensure a feature works properly end-to-end? (A toy sketch follows this list.)
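As a toy sketch of the kind of error-condition coverage I mean (the parse_version helper is invented for this example):

import unittest

def parse_version(version):
    """Parse a "major.minor" string into a tuple of ints."""
    major, minor = version.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_normal_input(self):
        # "Unit" coverage of the specified behavior.
        self.assertEqual(parse_version("10.3"), (10, 3))

    def test_error_conditions(self):
        # Error cases deserve coverage just as much as the happy path.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()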

Code Review

The code review is the least interesting part of the review. At this point I’m going through the patch line by line.

  • Make sure that the code uses standard Mozilla coding style. I really, desperately want somebody to automate lots of this as part of the patch-submission process. It’s tedious both for me and for the patch author if style issues delay a patch and require another review cycle.
  • Make sure that the number of comments is appropriate for the code in question. New coders/contributors sometimes have a tendency to over-comment things that are obvious just by reading the code. Experienced contributors sometimes make assumptions that are correct based on experience but should be called out more explicitly in the code.
  • Look for appropriate assertions. Assertions are a great form of documentation and runtime checking.
  • Look for appropriate logging. In Mozilla we tend to under-log, and I’d like to push new code especially toward more aggressive logging. (See the sketch after this list.)
  • Mostly this is scanning the implementation. If there is complexity (such as threading!), I’ll have to slow down a lot and make sure that each access and state change is properly synchronized.
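To make the assertion and logging points concrete, here is a toy Python sketch (the update-applying scenario is invented):

import logging

log = logging.getLogger("updater")

def apply_update(update):
    # The assertion documents and enforces the caller's obligation.
    assert update["downloaded"], "apply_update requires a finished download"
    log.info("applying update %s", update["version"])
    update["applied"] = True
    log.info("update %s applied", update["version"])

logging.basicConfig(level=logging.INFO)
apply_update({"downloaded": True, "version": "1.2", "applied": False})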

Re-read the Specification

If there is a specification, I’ll briefly re-read it to make sure that it was covered by the code I just finished reading.

Mechanics

Currently, I primarily do reviews in the bugzilla “edit” interface, with the “edit attachment as comment” option. Splinter is confusing and useless to me, and review-board doesn’t seem to be ready for prime-time.

For long or complex reviews, I will sometimes copy and quote the patch in emacs and paste or attach it to bugzilla when I’m finished.

In some cases I will cut off a review after one of the earlier phases: if I have questions about the general approach, the design, or the API surface, I will often try to clarify those questions before proceeding with the rest of the review.

There’s an interesting thread in mozilla.dev.planning about whether it is discouraging to new contributors to mark “review-” on a patch, and whether there are less-painful ways of indicating that a patch needs work without making them feel discouraged. My current practice is to mark r- in all cases where a patch needs to be revised, but to thank contributors for their effort so that they still feel appreciated, and to be as specific as possible about the required changes while avoiding any words that could be perceived as an insult.

If I haven’t worked with a coder (paid or volunteer) in the past, I will typically ask them to submit an updated patch with any changes for re-review. This allows me to make sure that the changes were completed properly and didn’t introduce any new problems. Once I’ve gained some experience working with somebody, I will often trust them to make the necessary changes and simply mark “r+ with review comments fixed”.

How I Hire at Mozilla

Thursday, October 2nd, 2014

As a manager, one of my most important responsibilities is hiring. While I’m hiring, I spend a lot of time sifting through resumés, screening candidates, and eventually doing interviews. This is fun and depressing at the same time: you get to meet and learn a lot about some interesting people, but you also have to wade through a lot of terrible resumés and phone screens. I want to share some things that I look for during the hiring process, and some tips for potential job-seekers about how to deal with recruiting:

Read the Job Description

Please read the job description before applying for a job! I put a lot of effort into writing a job description, and I try very hard to describe both the job responsibilities and the necessary and desirable skills. Your cover letter should show some evidence that you’ve thought about applying for this particular job.

Write a Good Cover Letter

Periodically, I see articles advising job-seekers to ditch the cover letter. This is terrible advice. I read cover letters carefully. In fact, the cover letter sometimes gives me a better sense for your skill level and ability to do the job than your resumé.

Grammar and Spelling Matter

Every job I’ve ever posted has required good communication skills, and your cover letter is the first evidence of your communication skills. It doesn’t need to be more than a paragraph or two; I’d love to know why you think that this job is a good fit, and some evidence that you read the job description. Spelling and grammar are important.

I’m Picky

The last time I posted a job position, I screened more than 800 resumés, did almost 100 phone screens, and conducted five interviews before I found the right person. It took six months. It’s better for Mozilla if I’m too picky and reject a qualified candidate than if I’m not picky enough and accept a bad one. Please don’t take rejection as a comment on your personal worth; I’m going to reject many people during this process.

Smart and Gets Things Done

Joel Spolsky is right: I’m primarily looking for somebody who is smart and gets things done. When I’m scanning through resumés, I’m looking for evidence that you’ve gotten things done in the past. If you’re just coming out of school, internship and open-source project experience helps. If you’ve been working, I want to see some description of the things you’ve gotten done. Be specific: “Led multiple projects to completion” is meaningless. You can expect phone-screen questions about your prior projects. Be prepared to talk about what worked, what didn’t, and how you solved problems.

No Assholes

It may seem obvious, but assholes need not apply. I will reject a technically-qualified candidate for even a whiff of assholery. I use reference checks to try to guard against assholes.

Passion Isn’t Sufficient

At Mozilla we get a lot of applicants who are passionate, either about the open web in general or about Firefox. Passion is great, perhaps even necessary, but it’s not sufficient.

Interview Questions Are Based on Your Resumé

In phone screens and interview panels, I try very hard to base my questions on the things that you already know. If your resumé says that you are a master of C++, you should expect that there is at least one person on the interview panel who will grill you on C++ in excruciating detail. If you say that you know statistics, you had better be able to answer detailed questions about statistics.

Don’t overstate your skills. If you have written a couple of scripts in Python, you are familiar with Python, not proficient. More than once I’ve rejected people because they claimed to be masters of C++ debugging but didn’t know how a vtable is structured in memory. Knowing vtable layout isn’t usually a job prerequisite, but if you are a master of C++ debugging, you’d better know how it works in detail.

If you claim to be a master of both Python and JavaScript, expect me to ask you detailed questions about how closures and methods work in each language, how they differ between JS and Python, how JS this-binding differs from Python bound methods, and other details of the languages. I will be impressed if you can discuss these questions intelligently, and reject your application if you can’t.
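For example, I would expect such a candidate to explain why this Python snippet (the Counter class is invented for illustration) prints 2, and why the equivalent property access in JS would lose its this binding:

class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

c = Counter()
inc = c.increment  # a *bound* method: it remembers `c`
inc()
inc()
print(c.count)  # 2

# In JS, `var inc = c.increment; inc();` would not remember `c`:
# `this` is bound at call time, not when the method is looked up.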

I Value Code Reading

I value people who can learn new code quickly. You’re going to need to be comfortable learning new systems, libraries, and languages. My team in particular often touches large swaths of the Mozilla codebase; you will be expected to be able to read and understand new code quickly. You can expect an interview session entirely dedicated to reading and explaining a piece of code that you’ve never seen before. Perhaps it will be reviewing a patch, or trying to find a bug in a piece of code. I’ll try to find a problem in a language that you are already comfortable with, so see above about keeping your resumé honest!

Do You Love Your Tools?

Every good programmer has a toolbox. I don’t care whether you use vim or emacs or Visual Studio or Eclipse or XCode, but I do care that you have an editor and use it productively. I also care that you are proficient using a debugger (or two, or three) and hopefully a profiler (or two, or three). If you can’t tell me why you love your editor or your debugger, it’s likely that you won’t be a successful software engineer.

You need to be proficient in a scripting language. I expect you to be able to use at least one scripting language to process text data, read CSV files, write JSON, and that kind of thing. Mozilla has gravitated toward Python for scripting, and you’ll probably be expected to learn Python if you don’t know it already.
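This is the kind of everyday task I mean; a minimal sketch (file names invented):

import csv
import json

# Read rows from a CSV file and re-emit them as JSON.
with open("measurements.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("measurements.json", "w") as f:
    json.dump(rows, f, indent=2)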

Also, can you type? I’m constantly surprised by applicants who are unable to type quickly and accurately. A significant portion of your job is going to be writing code, and not having mastered the act of typing is probably not a good sign.

Phone Screens

When I conduct a phone-screen, it is usually over Skype video. Please make sure that you have a decent headset and that I will be able to hear you. During a phone screen, I try to include at least one coding question which is typically conducted via etherpad. You should be at a computer with access to the web in order to do this successfully.

By the way, did I mention I’m hiring?

Using Software Copyright To Benefit the Public

Friday, March 21st, 2014

Imagine a world where copyright on a piece of software benefits the world even after it expires. A world where eventually all software becomes Free Software.

The purpose of copyright is “To promote the Progress of Science and useful Arts”. The law gives a person the right to profit from their creation for a while, after which everyone gets to profit from it freely. In general, this works for books, music, and other creative works. The current term of copyright is far too long, but at least once the term is up, the whole world gets to read and love Shakespeare or Walter de la Mare equally.

The same is not true of software. In order to be useful, software has to run. Imagine the great commercial software of past decades: Excel, Photoshop, PageMaker. Even after copyright expires on Microsoft Excel 95 (in 2090!), nobody will be able to run it! Hardware that can run Windows 95 will not be available, and our only hope of running the software is to emulate the machines and operating systems of a century ago. There will be no opportunity to fix or improve the software.

What should we reasonably require from commercial software producers in exchange for giving them copyright protection?

The code.

In order to get any copyright protection at all, publishers should be required to make the source code available. This can either happen immediately at release, or by putting the code into escrow until copyright expires. This needs to include everything required to build the program and make it run, but since the same copyright rules would apply to operating systems and compilers, it ought to all just work.

The copyright term for software also needs to be rethought. The goal when setting a copyright term should be to balance the competing desires of giving a software author time to make money by selling software, with the natural rights of people to share ideas and use and modify their own tools.

With a term of 14 years, the following software would be leaving copyright protection around now:

  • Windows 95
  • Excel 95
  • Photoshop 6.0
  • Adobe InDesign 1.0

A short copyright term is an incentive to software developers to constantly improve their software, and make the new versions of their software more valuable than older versions which are entering the public domain. It also opens the possibility for other companies to support old software even after the original author has decided that it isn’t worthwhile.

The European Union is currently holding a public consultation to review its copyright laws, and I’ve encouraged Mozilla to propose source availability and a shorter copyright term for software in our official contribution/proposal to that process. Maybe eventually the U.S. Congress could be persuaded to make such significant changes to copyright law, although recent history, moneyed interests, and lobbying make that difficult to imagine.

Commercial copyrighted software has done great things, and there will continue to be an important place in the world for it. Instead of treating the four freedoms as ethical absolutes and treating non-Free software as a “social problem”, let’s use copyright law to, after a period of time, make all software Free Software.

Use -debugexe to debug apps in Visual Studio

Monday, March 10th, 2014

Many people don’t know how awesome the Windows debuggers are. I recently got a question from a volunteer mentee: he was experiencing a startup crash in Firefox and wanted to know how to get the debugger attached to Firefox before the crash.

On other systems, I’d say to use mach debug, but that currently doesn’t do anything useful on Windows. Debugging on Windows is still pretty simple, though. You have two options:

Debug Using Your IDE

Both Visual Studio and Visual C++ Express have a command-line option for launching the IDE ready for debugging.

devenv.exe -debugexe obj-ff-debug/dist/bin/firefox.exe -profile /c/builds/test-profile -no-remote

The -debugexe flag tells the IDE to launch your Firefox build with the command-line arguments you specify. Firefox will then launch when you use the “Go” command (F5).

For Visual C++ express edition, run WDExpress.exe instead of devenv.exe.

Debug Using Windbg

windbg is the Windows command-line debugger. As with any command-line debugger, it has an arcane debugging syntax, but it is very powerful.

Launching Firefox with windbg doesn’t require any flags at all:

windbg.exe obj-ff-debug/dist/bin/firefox.exe -profile /c/builds/test-profile -no-remote

Debugging Firefox Release Builds

You can also debug Firefox release builds on Windows! Mozilla runs a symbol server that allows you to automatically download the debugging symbols for recent prerelease builds (I think we keep 30 days of nightly/aurora symbols) and all release builds. See the Mozilla Developer Network article for detailed instructions.

Debugging official builds can be a bit confusing due to inlining, reordering, and other compiler optimizations. I often find myself looking at the disassembly view of a function rather than the source view in order to understand what exactly is going on. Also note that if you are planning on debugging a release build, you probably want to disable automatic crash reporting by setting MOZ_CRASHREPORTER_DISABLE=1 in your environment.
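For example, in a Windows command prompt you might set up the symbol path and disable crash reporting before launching the debugger. This is a sketch: the cache directory is your choice, and you should check the MDN article for the canonical symbol server URL:

set _NT_SYMBOL_PATH=srv*c:\symcache*https://symbols.mozilla.org/
set MOZ_CRASHREPORTER_DISABLE=1
windbg.exe "C:\Program Files\Mozilla Firefox\firefox.exe"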

Don’t Use Mozilla Persona to Secure High-Value Data

Tuesday, February 11th, 2014

Mozilla Persona (formerly called BrowserID) is a login system that Mozilla has developed to make it easier for users to sign in to sites without having to remember passwords. But I have seen a trend recently of people within Mozilla insisting that we should use Persona for all logins. This is a mistake: the security properties of Persona are simply not good enough to secure high-value data such as the Mozilla security bug database, user crash dumps, or other high-value information.

The chain of trust in Persona has several attack points:

The Public Key: HTTPS Fetch

When the user submits a login “assertion”, the website (Relying Party or RP) fetches the public key of the email provider (Identity Provider or IdP) using HTTPS. For instance, when I log in as benjamin@smedbergs.us, the site I’m logging into will fetch https://smedbergs.us/.well-known/browserid. This relies on the public key and CA infrastructure of the internet. Attacking this part of the chain is hard because it’s the network connection between two servers. This doesn’t appear to be a significant risk factor to me except for perhaps some state actors.
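A minimal sketch of that fetch, assuming the standard BrowserID support-document format with its “public-key” field:

import json
import urllib.request

# How an RP obtains the IdP's public key. Trust here rests entirely
# on HTTPS and the CA infrastructure.
url = "https://smedbergs.us/.well-known/browserid"
with urllib.request.urlopen(url) as response:
    support_document = json.load(response)

print(support_document["public-key"])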

The Public Key: Attacking the IdP HTTPS Server

Attacking the email provider’s web server, on the other hand, becomes a very high value proposition. If an attacker can replace the .well-known/browserid file on a major email provider (gmail, yahoo, etc) they have the ability to impersonate every user of that service. This puts a huge responsibility on email providers to monitor and secure their HTTPS site, which may not typically be part of their email system at all. It is likely that this kind of intrusion will cause signin problems across multiple users and will be detected, but there is no guarantee that individual users will be aware of the compromise of their accounts.

Signing: Accessing the IdP Signing System

Persona email providers can silently impersonate any of their users just by the nature of the protocol. This opens the door to silent identity attacks by anyone who can access the private key of the identity/email provider, either by subverting the signing server or through legal means such as subpoenas or national security letters. In these cases, the account compromise is almost completely undetectable by either the user or the RP.

What About Password-Reset Emails?

One common defense of Persona is that email providers already have access to users’ accounts via password-reset emails. This is partly true, but it ignores an essential property of those emails: when a password is reset, the user will become aware of the attack the next time they try to log in. Being unable to log in will likely trigger a cautious user to review the details of their account or ask for an audit. Attacks against the IdP, on the other hand, are silent and not as likely to trigger alarm bells.

Who Should Use Persona?

Persona is a great system for the multitude of lower-value accounts people keep on the internet. Persona is the perfect solution for the Mozilla Status Board. I wish the UI were better and built into the browser: the current UI requires JS, shim libraries, and popup windows, and it is not a great experience. But the tradeoff for not having to store and handle passwords on the server is worth that small amount of pain.

For any site with high-value data, Persona is not a good choice. On bugzilla.mozilla.org, we disabled password-reset emails for users with access to security bugs. That decision implies that Persona should also be considered an unacceptable security risk for these users. Persona as a protocol doesn’t have the right security properties.

It would be very interesting to combine Persona with some other authentication system such as client certificates or a two-factor system. This would allow most users to use the simple login system, while providing extra security properties when users start to access high-value resources.

In the meantime, Mozilla should be careful how it promotes and uses Persona; it’s not a universal solution and we should be careful not to bill it as one.

Mozilla Summit: Listen Hard

Tuesday, October 1st, 2013

Listen hard at the Mozilla Summit.

When you’re at a session, give the speaker your attention. If you are like me and get distracted easily by all the people, take notes using a real pen and paper. Practice active listening: don’t argue with the speaker in your head, or start phrasing the perfect rebuttal. If a speaker or topic is not interesting to you, leave and find a different session.

At meals, sit with at least some people you don’t know. Introduce yourself! Talk to people about themselves, about the project, about their personal history. If you are a shy person, ask somebody you already know to make introductions. If you are a connector who knows lots of people, one of your primary jobs at the summit should be making introductions.

In the evenings and downtime, spend time working through the things you heard. If a presentation gave you a new technique, spend time thinking about how you could use it, and what the potential downsides are. If you learned new information, go back through your old assumptions and priorities and question whether they are still correct. If you have questions, track down the speaker and ask them in person. Questions that come the next day are one of the most valuable forms of feedback for a speaker (note: try to avoid presentations on the last day of a conference).

Talk when you have something valuable to ask or say. If you are the expert on a topic, it is your duty to lead a conversation even if you are naturally a shy person. If you aren’t the expert, use discretion so you don’t disrupt a conversation.

If you disagree with somebody, say so! Usually it’s better to disagree in a private conversation, not in a public Q&A session. If you don’t know the history of a decision, ask! Be willing to change your mind, but also be willing to stay in disagreement. You can build trust and respect even in disagreement.

If somebody disagrees with you, try to avoid being defensive (it’s hard!). Keep sharing context and asking questions. If you’re not sure whether the people you’re talking to know the history of a decision, ask them! Don’t be afraid to repeat information over and over again if the people you’re talking to haven’t heard it before.

Don’t read your email. Unfortunately you’ll probably have to scan your email for summit-related announcements, but in general your email can wait.

I’ve been at two summits, a mozcamp, and numerous all-hands and workweeks. They are exhausting and draining events for introverted individuals such as myself. But they are also motivating, inspiring, and in general awesome. Put on a positive attitude and make the most of every part of the event.

More great summit tips from Laura Forrest.

Click-To-Play Plugin Telemetry

Friday, September 13th, 2013

Last week we finally turned on click-to-play plugins as the default state for all plugins except Flash in Nightly builds (which will be Firefox 26). This is a milestone in giving Firefox users control over plugins and helping protect them from being exploited via unused and unwanted plugins.

As part of this feature, we have started to measure how users interact with the click-to-play UI. Nightly users aren’t typical, so this data probably doesn’t mean much yet, but it’s nice to see it in action:

PLUGINS_NOTIFICATION_PLUGIN_COUNT

This data shows how many different kinds of plugins were present in the plugin notification UI when each user saw it. When designing the notification, we wanted to streamline the common case, which we believed to be a page with only one kind of plugin. This telemetry data will help verify that assumption. The current Nightly data shows that a single type of plugin is indeed the most common case, but not by as much as I originally thought:

# of Plugins    Notification Count
1                            32994
2                             5935
3                              179
4                                3
5 or more                        0
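A quick back-of-the-envelope check on those numbers:

# Share of notifications by the number of plugin kinds shown.
counts = {"1": 32994, "2": 5935, "3": 179, "4": 3, "5 or more": 0}
total = sum(counts.values())
for kinds, n in counts.items():
    print(f"{kinds}: {n / total:.1%}")
# The single-plugin case is about 84%: common, but the multi-plugin
# cases are too frequent to ignore in the UI design.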

PLUGINS_NOTIFICATION_SHOWN

This data shows what user action triggered showing the plugin notification.

User Action                      Notification Count
Click on in-content plugin UI                 23706
Click on location bar icon                    15405

I’m surprised that so many users are clicking on the location bar icon. That may just be inquisitive users checking what each button does, but I’ll be monitoring this as it goes up the trains to the more representative beta population. If this stays very high, then we may have a problem with distracting users with unnecessary UI.

PLUGINS_NOTIFICATION_USER_ACTION

This data shows what action users are choosing to take in the plugin notification. Note that when multiple plugins are shown in the same notification, there will be a separate action for each plugin:

User Action     Notification Count
Allow Now                    16705
Allow Always                  9196
Block                         2199

I’m a little surprised at the distribution of “Allow Now” and “Allow Always”. When designing this UI, we expected that most users would want the “Allow Always” option, and we wanted to highlight that. But again, Nightly users are atypical and may not be a good sample. I’ll be watching this data also in beta.

I’m wary of drawing any significant conclusions from early data, but I’m happy that we appear to be collecting the correct data, and with the new telemetry dashboard it’s not hard to get at simple measurements such as this. Kudos to Taras, Mark Reid, and Chris Lonnen for getting that running and for the small daily improvements that make all our lives better.

Graph of the Day: Virtual and Physical Memory Starvation

Wednesday, April 24th, 2013

Today’s graph is a scatter plot of out-of-memory crashes. It categorizes crashes according to the smallest block of available VM and the amount of available pagefile space.

There were roughly 1000 crashes due to bug 829954 between 10-April and 15-April 2013. Click on individual crash plots to see memory details and a link to the crash report.

Direct link to SVG file. Link to raw data.

Conclusions

After graphing these crashes, it seems clear that there are two distinct issues:

  • Crashes which are above the blue line and to the left have free space in their page file, but we have run out of contiguous virtual memory space. This is likely caused by the virtual memory leak from last week.
  • Crashes which are below the blue line and to the right have available virtual memory, but don’t have any real memory for allocation. It is likely that the computer is already thrashing pretty heavily and appears very slow to the user.

I was surprised to learn that not all of these crashes were caused by the VM leak.

The short-term solution for this issue remains the same: the Mozilla graphics engine should stop using the infallible/aborting allocator for graphics buffers. All large allocations (network and graphics buffers) should use the fallible allocator and take extra effort to be OOM-safe.

Long-term, we need Firefox to be aware of the OS memory situation and continue to work on memory-shrinking behavior when the system starts paging or running out of memory. This includes obvious behaviors like throwing away the in-memory network and graphics caches, but it may also require drastic measures such as throwing away the contents of inactive tabs and reloading them later.

Charting Technique

With this post, I am starting to use a different charting technique. Previously, I was using the Flot JS library to generate graphs. Flot makes pretty graphs (although it doesn’t support labeling axes without a plugin!) and features a wide range of plugins which add missing features. But often it doesn’t do exactly what I want, and I’ve had to dig deep into its guts to get things to look good. It is also cumbersome to include dynamically generated JS graphs in a blog post, so the prior graphs have been screenshots.

This time around, I generated the graph as an SVG image using the svgwrite python library. This allows me to put the full SVG graph directly into the blog post, and it also allows me to add dynamic features such as rollovers directly in these posts. Currently I’m setting up the axes and labels manually in python, but I expect that this will turn into a library pretty quickly. I experimented with svgplotlib, but the installation requirements were too complex for my needs.
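For the curious, the core of the technique is tiny; a minimal sketch with invented data points:

import svgwrite

# Plot a handful of (x, y) points as circles in a standalone SVG file.
points = [(10, 80), (30, 45), (50, 60), (70, 20)]

dwg = svgwrite.Drawing("scatter.svg", size=("200px", "100px"))
for x, y in points:
    dwg.add(dwg.circle(center=(x, y), r=2, fill="steelblue"))
dwg.save()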

I’m not sure whether the embedded SVG will make it through feed aggregators and readers. Leave comments if you see weird results.

Graph of the Day: Empty Minidump Crashes Per User

Monday, April 22nd, 2013

Sometimes I make a graph to confirm a theory. Sometimes it doesn’t work. This is one of those days.

I created this graph in an attempt to analyze bug 837835. In that bug, we are investigating an increase in the number of crash reports we receive which have an empty (0-byte) minidump file. We’re pretty sure that this usually happens because of an out-of-memory condition (or an out of VM space condition).

Robert Kaiser reported in the bug that he suspected two date ranges of causing the number of empty dumps to increase. Those numbers were generated by counting crashes per build date. But they were very noisy, partly because they didn’t account for the differences in user population between nightly builds.

In this graph, I attempt to account for crashes per user. This was a slightly complicated task, because it assembles information from three separate inputs:

  • ADU (Active Daily Users) data is collected by Metrics. After normalizing the data, it is saved into the crash-stats raw_adu table.
  • Build data is pulled into the crash-stats database by a tool called ftpscraper and saved into the releases_raw table. Anything called “scraper” is finicky, and changes to other systems can break it.
  • Crash data is collected directly in crash-stats and stored in the reports_clean table.

Unfortunately, each of these systems has its own way of representing build IDs, channel information, and operating systems:

  • raw_adu: Product “Firefox”; Build ID is the string “yyyymmddhhmmss”; Channel “nightly”; OS “Windows”.
  • releases_raw: Product “firefox”; Build ID is the integer yyyymmddhhmmss; Channel “Nightly”; OS “win32”.
  • reports_clean: Product “Firefox” (from product_versions); Build ID is the integer yyyymmddhhmmss; Channel is “Nightly” when selecting from reports_clean.release_channel, but “nightly” when selecting from reports.release_channel; OS is “Windows NT”, but only when a valid minidump is found: when there is an empty minidump, os_name is actually “Unknown”.

In this case, I’m only interested in the Windows data, and we can safely assume that almost all of the empty-minidump crashes occur on Windows. The script/SQL query to collect the data simply limits each data source separately to Windows nightly builds, users, and crashes, and then combines them.
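As a sketch of the normalization involved before the sources can be joined (field names are illustrative, not the real schema):

def normalize(record):
    # Bring the three tables' representations onto a common footing.
    return {
        "product": record["product"].lower(),  # "Firefox" vs. "firefox"
        "buildid": int(record["buildid"]),     # string vs. integer build IDs
        "channel": record["channel"].lower(),  # "Nightly" vs. "nightly"
    }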

Frequency of Empty Dump crashes on Windows Nightlies

The missing builds are the result of ftpscraper failures.

I’m not sure what to make of this data. It seems likely that we fixed part of the problem in the 2013-01-25-03-10-18 nightly. But I don’t see a distinct regression range within this time frame. Perhaps around 25-December? Of course, it could also be that the dataset is so noisy that we can’t draw any useful conclusions from it.

Graph of the Day: Old Flash Versions and Blocklist Effectiveness

Friday, April 19th, 2013

Today’s graph charts the percentage of Firefox users who have known-insecure versions of Flash. It also allows us to visually see the impact of various plugin blocks that have been staged over the past few months.

We are gradually rolling out blocks for more and more versions of Flash. In order to make sure that the blocklist was not causing significant user pain, we started out with the oldest versions of Flash that have the fewest users. We have since been expanding the block to include more recent versions of Flash that are still insecure. We hope to extend these blocks to all insecure versions of Flash in the next few months.

Flash Insecure Release Distribution

From the data, we see that users on very old versions of Flash (Flash 10.2 and earlier) are not changing their behavior because of the blocklist. This either means that the users never see Flash content, or that they always click through the warning. It is also possible that they attempted to upgrade but for some reason are unable.

Users with slightly newer versions seem more likely to upgrade. Over about a month, almost half of the users who had insecure versions of Flash 10.3-11.2 have upgraded.

Finally, it is interesting that these percentages drop down on the weekends. This indicates that work or school computers are more likely to have insecure versions of Flash than home computers. Because there are well-known exploits for all of these Flash versions, this represents a significant risk to organizations who are not keeping up with security updates!

View the HTML version of the chart and the raw data. This data was brought to you by Telemetry, and so the standard cautions apply: telemetry is an opt-in sample on the beta/release channels, and may under-represent certain populations, especially enterprise deployments which may lock telemetry off by default. This data represents Windows users only, because we only recently started collecting Flash version information on Mac, and the Linux Flash player doesn’t expose its version at all.

Raw aggregates for Flash usage can be found in my dated directories on crash-analysis.mozilla.com, for example yesterday’s aggregate counts. You are welcome to scrape this data if you want to play with it; I am also willing to provide interested researchers with additional data dumps on request.