Electrolysis: Making Mozilla Faster and More Stable Using Multiple Processes
For a long while now (even before Google Chrome was announced), Mozilla has been examining ways to make Firefox better by splitting the work of displaying web pages up among multiple processes. There are several possible benefits of using multiple processes:
- Increased stability: if a plugin or web page tries to use all of the processor or memory, or simply crashes, a separate process can isolate that bad behavior from the rest of the browser.
- Performance: By splitting work up among multiple processes, the browser can make use of multiple processor cores available on modern desktop computers and the next generation of mobile processors. The user interface can also be more responsive because it doesn’t need to block on long-running web page activities.
- Security: If the operating system can run a process with lower privileges, the browser can isolate web pages from the rest of the computer, making it harder for attackers to infect a computer.
Now that we’re basically done with Firefox 3.5, we’ve formed a project team, which we’re calling “Electrolysis”. Because we can’t do everything at once, we are currently focusing on performance and stability; a security sandbox will be implemented after the initial release. Details of the plan are available on the Mozilla wiki, but the outline is simple:
- Sprint as fast as possible to get basic code working, running simple testcase plugins and content tabs in a separate process.
- Fix the brokenness introduced in step one: shared networking, document navigation and link targeting, context menus and other UI functions, focus, drag and drop, and probably many other aspects of the code will need modifications. Many of these tasks can be performed in parallel by multiple people.
- Profile for performance, and fix extension compatibility to the extent possible.
- Ship!
We’re currently in the middle of stage one: Ben Turner and Chris Jones have borrowed the IPC message-passing and setup code from Chromium. We even have some very simple plugins loading across the process boundary! Most of the team is in Mountain View this week and we’re sprinting to see if we can implement a very basic tab in a separate process today and tomorrow.
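For the curious, here is a rough sketch of what message passing across a process boundary looks like in the small. This is not the Chromium IPC code we’re borrowing (that layer adds typed messages, serialization, and asynchronous dispatch), and none of the names below come from our tree; it’s just a minimal POSIX illustration of a parent exchanging length-prefixed messages with a forked child over pipes:

```cpp
// Minimal illustration only: a "chrome" parent and a "content" child
// exchanging length-prefixed messages over pipes. Real browser IPC
// layers typed messages and async dispatch on top of primitives like these.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <string>

// Write a 4-byte length prefix followed by the payload.
static bool SendMsg(int fd, const std::string& payload) {
  uint32_t len = payload.size();
  if (write(fd, &len, sizeof(len)) != static_cast<ssize_t>(sizeof(len)))
    return false;
  return write(fd, payload.data(), len) == static_cast<ssize_t>(len);
}

// Read one length-prefixed message; returns false on EOF or error.
static bool RecvMsg(int fd, std::string* payload) {
  uint32_t len = 0;
  if (read(fd, &len, sizeof(len)) != static_cast<ssize_t>(sizeof(len)))
    return false;
  payload->resize(len);
  return len == 0 ||
         read(fd, &(*payload)[0], len) == static_cast<ssize_t>(len);
}

int main() {
  int toChild[2], toParent[2];
  if (pipe(toChild) != 0 || pipe(toParent) != 0) return 1;

  pid_t pid = fork();
  if (pid == 0) {
    // Child ("content") process: receive a request, send back a reply.
    close(toChild[1]);
    close(toParent[0]);
    std::string request;
    if (RecvMsg(toChild[0], &request))
      SendMsg(toParent[1], "loaded: " + request);
    return 0;
  }

  // Parent ("chrome") process: send a request, wait for the reply.
  close(toChild[0]);
  close(toParent[1]);
  SendMsg(toChild[1], "http://example.org/");
  std::string reply;
  if (RecvMsg(toParent[0], &reply))
    printf("parent got: %s\n", reply.c_str());
  waitpid(pid, nullptr, 0);
  return 0;
}
```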
For the moment we’re focusing on Windows and Linux, because the team is most familiar and comfortable with those environments. I sat down with Josh Aas on Friday and we discussed some of the unknowns and difficulties we face on Mac. As soon as our initial sprint produces working code we’d love to have help from interested Mac hackers!
If you’re interested in helping, or just lurking to see what’s going on, the Electrolysis team is using the #content channel on IRC and the mozilla.dev.tech.dom newsgroup for technical discussions and progress updates. We’ll also cross-post important status updates to mozilla.dev.platform.
If you’ve emailed me volunteering to help and I haven’t gotten back to you, I apologize! Until we get the stage-one sprint done there aren’t really any self-contained tasks which can be done in parallel.
June 16th, 2009 at 1:42 pm
I appreciate the goal here: improvements in performance, stability, and security. However, I’m still not sold on this design. I realize that this has become the trendy new thing now that Google is doing it (yes, I know there was talk of this well before Chrome came out, but it wasn’t until it was released that this really got high priority). I feel like they had to do this initially because they were so crash-happy.
I feel like the overhead of an entire process (startup time and memory usage) suggests that this model would actually have the opposite effect on performance. This is probably more true on Windows than on other platforms. I’m imagining an average computer user viewing the task manager, trying to understand why their machine is running so slowly, and seeing a bunch of Firefox processes. I suppose it would be neat if there were an easy way to identify which process was driving a particular page so a user could find the pages causing trouble, but this probably wouldn’t be easy to determine from the task manager. It also seems like this problem (identifying troublesome pages) could be tackled in the current architecture.
Regarding stability, Firefox crashes on me only rarely (maybe twice this past year), and session restore works fine for recovering from that.
Anyways, I’m sure all of this has been thought out previously, and I imagine tons of performance metrics will be collected to prove me wrong, so I’m probably just being overly cautious. That being said, are there any plans to maintain the current single-process model once this feature is implemented? :)
June 16th, 2009 at 3:22 pm
Exciting stuff! Why the name Electrolysis?
June 16th, 2009 at 6:42 pm
Maybe this will finally allow chrome to be displayed over content?
https://bugzilla.mozilla.org/show_bug.cgi?id=130078
June 16th, 2009 at 6:49 pm
Also, will this take place in a separate repository to mozilla-central? If so, at what phase do you envision merging with mozilla-central?
June 17th, 2009 at 2:44 pm
Jason: startup time is only a problem if you do lots of stuff at startup. On Windows and mobile devices in particular I/O is very expensive, so we’ll need to make sure that the content rendering process doesn’t need to do much I/O. Sharing DLL memory and avoiding network and NSS initialization in the child process should avoid most of the startup issues.
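To make that concrete, here’s the rough shape of the idea as an illustrative sketch only (not Gecko code; all of the names are invented): expensive, I/O-heavy subsystems are brought up lazily and only in the parent process, and a content process that needs such a service asks the parent over IPC instead of initializing the subsystem itself.

```cpp
// Illustrative sketch, not actual Gecko code: I/O-heavy subsystems
// (networking, NSS) are initialized lazily and only in the parent
// process, keeping content-process startup cheap.
enum class ProcessType { Parent, Content };

class Subsystems {
public:
  explicit Subsystems(ProcessType type) : mType(type) {}

  // Returns false in a content process: networking stays in the parent,
  // and the content process would proxy its requests over IPC instead.
  bool EnsureNetworking() {
    if (mType != ProcessType::Parent)
      return false;
    if (!mNetworkingUp) {
      // ... the expensive, I/O-heavy part: load certificate databases,
      // initialize NSS, open disk caches, and so on ...
      mNetworkingUp = true;
    }
    return true;
  }

private:
  ProcessType mType;
  bool mNetworkingUp = false;
};
```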
Identifying troublesome sites is much easier using OS tools than by tracking per-site usage within a single process… to do it in a single process you’d need a VM architecture, which is very different from our current C++ codebase.
kiroset: no, this project will not make bug 130078 easier at all. You’re probably thinking of the compositor project.
Dan: http://en.wikipedia.org/wiki/Electrolysis_of_water and http://www.articlesbase.com/technology-articles/an-introduction-to-chrome-plating-8022.html might help answer your question.
June 17th, 2009 at 5:15 pm
[…] The developers of Firefox are already working on getting the same thing into a future version of Firefox. Benjamin Smedberg blogs about this project, which has been given the name […]
June 18th, 2009 at 3:35 am
Having the GUI in a separate process will make Firefox feel and act much snappier. Dual core is normal, quad core is high end, and 8+ threads will become the high end within two years. I hope that Firefox will be able to use all the threads that are available, and 3D cards of course.
But apart from the GUI freezing until it is ready, I haven’t had any complaints about the stability of Firefox. Even the nightlies are more stable than any Chrome version I have used. I am looking forward to the snappiness of Chrome.
June 18th, 2009 at 6:52 am
A problem that Google mentioned was that they had to run plugins with high privileges, but with changes made to the plugins it would be possible to run them at lower privileges.
Any comments on this? It sounds like something that would get a lot more traction if Mozilla (and other browser makers) joined Google in asking for it.
June 22nd, 2009 at 1:30 am
[…] Smedberg recently discussed the motivation for splitting Firefox into multiple process, so I won’t recap that here. […]
June 22nd, 2009 at 8:56 am
finally! non-application-blocking auth dialogs! i hate it when all my tabs lock because one needs authentication.
June 22nd, 2009 at 9:37 am
schnalle: multi-process Firefox has little or nothing to do with auth dialogs not blocking the application. That bug can be fixed independently.
June 23rd, 2009 at 10:58 am
Your 2nd point, performance, is wrong. Multiple threads (what Firefox already uses) will use multiple processors just fine.
June 23rd, 2009 at 11:07 am
“west nile virus”: although threads and processes could achieve similar performance, using processes involves fewer changes to our code (because of global variables) and is necessary for the stability aspect (to protect the entire browser against crashes of one piece).
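As a toy illustration of the stability point (again, not our actual code): a crash in a separate process is something the parent can observe and survive, which isn’t true of the same crash on a thread inside the browser process.

```cpp
// Illustrative only: a null-pointer crash in a forked child kills just
// the child; the parent sees the signal and keeps running, much as a
// browser could show a "tab crashed" page instead of going down itself.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
  pid_t pid = fork();
  if (pid == 0) {
    // "Content" child: crash the way a buggy page or plugin might.
    volatile int* p = nullptr;
    *p = 42;  // SIGSEGV, but only in this process
    return 0;
  }

  int status = 0;
  waitpid(pid, &status, 0);
  if (WIFSIGNALED(status)) {
    printf("child died with signal %d; the parent keeps running\n",
           WTERMSIG(status));
  }
  return 0;
}
```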
July 3rd, 2009 at 4:05 am
Having run Chrome on my dual core CPU laptop, I don’t expect the change you’re talking about here to make Firefox faster. Chrome often works in a choppy way and doesn’t feel faster than FF at all. And it seems to crash more often than Firefox (loaded with extensions and having 50 to 100 open tabs).
July 7th, 2009 at 12:17 pm
Some suggestions:
1) Have separate settings so I can enable per-process tabs/pages, per-process plugins, or both. IMHO it’s more important to keep plugins in their own process because they are usually the worst offenders in loading time (freezing the whole browser), stability, security, and resource usage.
2) Keep your IPC stuff and sandboxing compatible, or at least reasonably aligned, with Chrome’s. This could make it easier for plugin providers to develop next-gen plugins that are more cooperative with new browsers, either reducing their need for privileges or using these new IPC channels explicitly for better performance and features.
3) When this stuff is mature, talk to Sun about the idea of running the new Java Plug-In’s (6u10+) launcher process inside the browser’s process. This launcher is tiny and it manages new processes per applet/application; we don’t need TWO layers of process separation, which would only increase overhead and decrease the browser’s ability to manage stuff. Right now this is already a problem for Chrome: if you bring up its Task Manager you won’t see the actual JVM processes, only the jp2launcher process.
July 23rd, 2009 at 11:52 am
1. Seems like memory usage will skyrocket, much like Chrome’s.
2. Can’t you implement a construct similar to AppDomain in MS .NET, which provides isolation without the overhead and cumbersome binary communication of multi-process computing? You could use code from the Mono project.
3. Speaking of increased performance with multi-process browsing requires analysis of the workflow and code to determine the extent to which tasks can be broken up to execute in parallel. Has this been done, or is it just speculation on your behalf?
July 23rd, 2009 at 2:29 pm
jasmith: 1) Let’s measure instead of making assumptions; I think we can do a lot better than Chrome. 2) VM domains can’t help protect against crashes. 3) Did you read the planning wiki, or are you simply being snarky for the fun of it?
January 5th, 2010 at 3:41 am
Hi,
great ideas, I hope you succeed.
I wanted to know if it is possible to develop a browser with the same idea as SELinux. That is to say, block code that can cause a buffer overflow, or at least prevent some basic buffer overflow attempts. This could guard against lots of evil web pages.
Thanks for your great work!
January 5th, 2010 at 6:39 am
Just like some earlier posters, I don’t get why you want to use multiple processes and not threads:
1. IPC is way more complex than threads’ shared memory and basic mutexes/semaphores.
2. Threads are faster and consume less memory.
3. POSIX threads work on W$, MacOS, and most Unices, so there is no need for different code on each (boost::thread might be even better).
Now I have no idea how (and if) threads can be salvaged in case of a crash.
January 15th, 2010 at 11:25 am
PM, the aim is to isolate tabs so they can’t affect each other; you can’t do that with threads.
Firefox uses many threads already.
June 26th, 2010 at 12:22 am
[…] will be the first of the two making an appearance in Fennec. Benjamin Smedberg’s kickoff post for Electrolysis summarizes the project […]