Things I’ve Learned

Wednesday, May 27th, 2009

Things I’ve learned recently:

  • Using hg log on a file that was removed will not list the revision in which the file was removed. You want hg log --removed.
  • Waist Watcher sodas are sweetened with Splenda, but don’t have the metallic taste that some other diet sodas do. I especially like the Citrus Frost (grapefruit) flavor. It’s like Fresca without the hidden Aspartame. (I have bad digestive reactions to Aspartame.)
  • Linking libxul on my Linux machine takes between 3 and 10 seconds, but apparently this is unusual. Several other people have reported link times that can range into the minutes. I recommend buying as much RAM as you can: if your entire object directory fits in the filesystem memory cache, builds are much faster.
  • When Microsoft Visual C++ mangles function symbols, the return type is encoded in the mangled name. When GCC mangles names, the return type is not encoded:
    Signature             GCC             MSVC
    int testfunc(int)     _Z8testfunci    ?testfunc@@YAHH@Z
    void testfunc(int)    _Z8testfunci    ?testfunc@@YAXH@Z

    This means that separate functions in different translation units may not differ only by return type. We found this trying to combine the Chromium IPC code with Mozilla: const std::string& EmptyString() and const nsAFlatString& EmptyString() differ only by return type. On Windows this links correctly, but on Linux this causes multiple-definition errors.

Perils in Rewriting

Thursday, November 8th, 2007

Automatically rewriting code is perilous business. Can you spot the bug in the following rewrite to remove stack nsCOMPtrs?

@@ -432,8 +432,8 @@ LoadAppDirIntoArray(nsIFile* aXULAppDir,
   if (!aXULAppDir)
     return;
-  nsCOMPtr<nsIFile> subdir;
-  aXULAppDir->Clone(getter_AddRefs(subdir));
+  nsIFile* subdir;
+  aXULAppDir->Clone(&subdir);
   if (!subdir)
     return;

The patch produced by garburator matches the spec I originally wrote for Taras, bug and all… the bug is that nsCOMPtr automatically initializes itself to null, while a raw pointer doesn’t.

What makes automatic rewriting hard is that the specification of the rewrite is usually incomplete. I could say to Taras “get rid of nsCOMPtrs on the stack” or even provide sample patches, but reality is much more complicated. For the automatic rewriting of stack nsCOMPtrs, this was an iterative process. Taras would run the tool (named garburator), apply the patch, and test-build. When something didn’t work right, he’d analyze the problem (often with help from the IRC channel) and then refine the rewriting tool. Sometimes refining the tool wasn’t worth it, and he’d simply fix the code by hand or hide it from the rewriter.

Quick quiz: how do you deal with functions that take an nsCOMPtr<nsIFoo>&?

Answer: It depends… is this a hashtable enumerator callback function? If so, don’t rewrite the signature at all. If not, the correct rewriting is probably nsIFoo**… but then if the code passes a heap variable to the function, you have to wrap it in getter_AddRefs to get the correct write-barrier semantics.

Automatic rewriting is not yet a complete solution: in particular, it doesn’t attempt to rewrite templated functions, and it will often give up on items which are passed to templated functions. And of course, we can really only run the rewriting tools on Linux, which means that platform-specific Mac and Windows code will need to be rewritten manually. If you want horror stories, ask Taras about nsMaybeWeakPtr, the helper-class from hell! In any case, I now have an XPCOMGC tree which runs through component registration before crashing… it feels like quite an accomplishment, even though it hasn’t gotten past the first GC yet.

What If?

Monday, November 5th, 2007

Is C++ hurting Mozilla development? Is it practical to move the Mozilla codebase to another language? Automated rewriting of code can help, but basic language limitations restrict how far rewriting within C++ can go. What if Mozilla invented a mostly-C++-compatible language that solved these problems better than C++?

Why!?

Am I crazy? Well, maybe a little. But what are the disadvantages of C++?

  • Poor memory safety;
  • Lack of good support for UTF strings/iterators;
  • Difficulty integrating correctly with garbage-collection runtimes (MMgc), especially if we want to move toward an exact/moving garbage collector with strong type annotations;
  • Difficulty integrating exception handling with other runtimes (Tamarin);
  • Lack of features Mozilla hackers want: see some of roc’s ideas;
  • Static analysis: it would be good to statically prevent certain code patterns (such as stack-allocated nsCOMPtr), or to make code behave differently than C++ normally would (e.g. change the behavior of a stack-allocated nsCOMArray versus a heap-allocated one);
  • We are hoping to use trace-based optimization to speed up JavaScript. If the C++ frontend shared a common runtime with the JS frontend, these optimizations could occur across JS/C++ language boundaries. The Moz2 team brainstormed using a C++ frontend that would produce Tamarin bytecode, but Tamarin bytecode doesn’t really have the primitives for operating on binary objects.

On the other hand, C++ (or something like it) has advantages that we should preserve:

  • C++ can be very low-level and performant, if coded carefully;
  • The vast majority of our codebase is written in C++.

Is it practical?

I don’t know. Some wild speculation is below. Please don’t take it as anything more than a brainstorm informed by a little bit of IRC conversation.

LLVM is a project implementing a low-level bytecode format with strong type annotations, together with a code generator, optimizer, and compilers (static and JIT) that operate on this bytecode. There is a tool which uses the GCC frontend to compile C/C++ code into LLVM bytecode. It is already possible to use the llvm-gcc4 frontend to compile and run Mozilla; it compiles the Mozilla codebase to an intermediate LLVM form and from there into standard binary objects. We would not invent an entirely new language: rather, we would take the GCC frontend and gradually integrate features as needed by Mozilla.

In addition, we could translate SpiderMonkey bytecode into LLVM bytecode for optimization and JITting.

Finally, optimization passes for trace-based optimizations could be added which operate on the level of LLVM bytecode.

Problems…

The most pressing problem from my perspective is that using the G++ frontend requires compiling Mozilla with a mingw-llvm toolchain on Windows. The GCC toolchain on Windows does not produce MS COM-compatible vtables, so Mozilla code which uses or implements MS COM interfaces will fail. In addition, we would be tied to the mingw win32api libraries, which are not the supported Microsoft SDKs and may not always be up to date, because they are clean-room reverse-engineered headers and libraries. This mainly affects the accessibility code, which makes extensive use of MS COM and ATL.

  • Is it feasible to teach gcc/mingw to use the MS SDK headers and import libraries?
  • Is it feasible to teach the mingw compiler about the MSVC ABI so that our code can continue to call and implement MS COM interfaces?
  • Is there another solution so that the accessibility code continues to work?

Questions

  • Am I absolutely crazy?
  • As a Mozilla developer, what features do you think are most important for ease of Mozilla development?
  • Are there solutions other than the g++ frontend that would allow our C++ codebase to evolve to a new language?
  • Is this a silly exercise? Would we be spending way too much time on the language and GCC and LLVM and whatnot, and not enough on our codebase? Are there other ways to modernize our codebase gradually but effectively? Is LLVM or a custom language something we should consider for mozilla2, or keep in mind for later releases?