
November 13, 2013

Turn off Windows Defender on your builds

I spent some time last weekend profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files.

Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive.

A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6 seconds.
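The post doesn't show the cache itself; a minimal sketch of that kind of memoization (class and method names are mine, not the original tool's) might look like this:

```python
import os
import stat

class StatCache(object):
    """Memoize os.stat results. Fine for a build tool that scans a
    fixed tree; there is no invalidation if files change mid-run."""

    def __init__(self):
        self._cache = {}

    def stat(self, path):
        path = os.path.normpath(path)
        if path not in self._cache:
            try:
                self._cache[path] = os.stat(path)
            except OSError:
                self._cache[path] = None   # cache negative results too
        return self._cache[path]

    def isdir(self, path):
        st = self.stat(path)
        return st is not None and stat.S_ISDIR(st.st_mode)
```

Callers (and wrappers around os.walk) then route their os.stat and os.path.isdir checks through one shared instance.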

Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have.
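A rough way to reproduce that measurement is to time the close() calls separately from the writes. This sketch is my own, with the file count and total size mirroring the numbers above:

```python
import os
import shutil
import tempfile
import time

def time_file_writes(count=77, size=1024 * 1024 // 77):
    """Write `count` files of `size` bytes each and return the total
    time spent in close() alone. On-access virus scanners typically
    hook the close, so this isolates their cost from the writes."""
    payload = b"x" * size
    tmpdir = tempfile.mkdtemp()
    close_seconds = 0.0
    try:
        for i in range(count):
            f = open(os.path.join(tmpdir, "out%d.txt" % i), "wb")
            f.write(payload)
            start = time.time()
            f.close()
            close_seconds += time.time() - start
    finally:
        shutil.rmtree(tmpdir)
    return close_seconds
```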

I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe.

What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction.

The moral of this story is: don't let your virus checker run on your builds.

Windows Defender: Exclude directory for scanning

September 28, 2011

Setting up Git in a headless Windows environment

The documentation for setting up Git to work well in a headless Windows environment is surprisingly sparse, and the process is (in my experience) extremely frustrating. Hopefully this will help!

Set Up Git

It is advisable to run services (like a build service) as a pre-defined user, as opposed to the SYSTEM user. However, if Git really must be used as the SYSTEM user, the following will help emulate that environment.
  • To run commands as the SYSTEM user, you can use psexec.exe from SysInternals.
    • From an Administrator cmd.exe prompt, psexec -i -s cmd.exe will open a new shell as the SYSTEM user.

General Advice when Setting Up Git

  • Define a HOME env var equal to %USERPROFILE%.
  • Create passphrase-less rsa keys and put them in %HOME%/.ssh. These keys should be setup on whatever server hosts the Git repos. In GitHub, for example, you would need to add the public keys to your account.
  • Do an initial ssh [email protected] to add GitHub to the known_hosts.
  • Get rid of any GIT_SSH env vars if using the default OpenSSH client for auth (as opposed to plink.exe, etc). GIT_SSH=c:\…\plink.exe may exist if you have previously used PuTTY/Pageant/TortoiseGit/etc to access Git repos.
  • ssh git@github.com (or wherever your repo is) is very useful for debugging. One to three -v flags (e.g., ssh -vv git@github.com) may be added to help debug the connection process.
  • Set the %HOME%/.ssh/config to specify which authentication to use:
    User git
    PreferredAuthentications publickey
  • If you see the following error message and your files do have the correct perms (0600), then you are suffering from a bug in the msysGit ssh executable. Unix permissions don't map to NTFS ACLs; MSYS just fakes the behavior of chmod, and it can't fake a chmod to a sufficiently restrictive permission set. Steps to fix are below.
Permissions 0644 for '/path/to/key' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /path/to/key
  • Assuming Cygwin is installed at c:\cygwin and msysGit is installed at c:\Program Files\Git, this will replace the ssh executable in msysGit with the one from Cygwin, which recognizes file perms:
@rem From an Administrator cmd.exe
@rem This works for 32bit Windows. Adjust accordingly for 64bit.
ren "C:\Program Files\Git\bin\ssh.exe" "C:\Program Files\Git\bin\ssh.bak.exe"
copy "C:\cygwin\bin\ssh.exe" "C:\Program Files\Git\bin\ssh.exe"
copy "C:\cygwin\bin\cyg*.dll" "C:\Program Files\Git\bin\"

*** This is an excerpt from Jenkins Windows Slave and Git, originally published on Thomas Van Doren's blog.

Appreciated feedback from: George Reilly

July 05, 2011

Git fsck Is Your Friend, or "How I recovered lost commits in Git"

The following is a little tale of mine about how to uncover lost work in Git. If you just want the answer, skip to the end.

I realized that some of my code in the Git tree was not working as expected, so I went into the code to look at it and... WHERE ARE MY CHANGES?!?! The code looks as if I never even touched it! I could have sworn I had committed and pushed my changes long ago, but apparently I had not. Now going back to look for them, the local branch I was working on no longer exists. Whether I deleted the branch or not, I don’t remember, but in any case, it's not there. FREAK OUT ensues. I frantically start combing git log, GitHub, gitk and gitg for any evidence of my old changes. Nothing. More freaking out. The Git Gnomes are out to get me!!! Luckily for me, something called The Internet exists and I can search for things on it. I find a page that tells me git reflog can save me.

git reflog show master
git reflog show my_old_branch
git help reflog
git reflog --all

Good try! But to no avail. None of the messages show my commits or any existence of the hideous, horrible ol' my_old_branch. Those damned gnomes have run off with my codes! More searching of Los Internetes. Then I come across The illustrated guide to recovering lost commits with Git. After a decent amount of relatively useless reading not unlike the first part of this little tale, I come to the Recovering a Lost Commit section. "git fsck is your savior, Tristan." It says.

git fsck --lost-found
dangling commit 2846188e27ed330bc2925509d926835b783382cc
dangling blob 9654b4c62f6b4e129b1f066efb0349f37fd35ac2
dangling commit 5c7e8ca2f9ebdbcbc2685170bab9210748015a36
dangling commit 7ccd8f8d6bfed71248f43091fd6ffac0afa571e0
dangling commit 88d907d7b03936c9dd843673d67d70cab50e427b
dangling blob 99f73f63b515398a0702bc2975ff3c0fb347084a

I get some 38 lines of old commits and blobs that seem to have come out of nowhere. Doing a git show on each one appears to be getting me nowhere... until!

git show fe475203edfd01b7007809c82310077b0fe87750

Yes! My changes! And not just that, as I go on I see each of the 8 commits I made to good ol' my_old_branch. The evil Git Gnomes have been vanquished and I have conquered the world of Gitarnia™! After a quick git merge my changes are in place! An hour lost, a year of my life bequeathed back unto me.

In conclusion, git fsck --lost-found is your friend, treat her well!


Note: git fsck --lost-found may not work if you have run git gc recently, as it removes dangling commits.


April 22, 2011

SerializationException: the constructor was not found


I spent some time earlier this week trying to fix a SerializationException in some .NET code. The fix ultimately turned out to be quite simple, but it took a while for me to understand what the exception message was actually telling me.

We were seeing occasional SerializationExceptions in our logs, with callstacks like this:

System.Runtime.Serialization.SerializationException: The constructor
to deserialize an object of type 'Cozi.WebClient.AccountTagsMap'
was not found.
  at System.Runtime.Serialization.ObjectManager.GetConstructor(
        Type t, Type[] ctorParams)
  at System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(
        Object obj, SerializationInfo info, StreamingContext context)
  --- End of inner exception stack trace ---
  at System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(
        Object obj, SerializationInfo info, StreamingContext context)
  at System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder)
  at System.Runtime.Serialization.ObjectManager.DoFixups()
  at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(
        HeaderHandler handler,__BinaryParser serParser, Boolean fCheck,
        Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
  at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(
        Stream serializationStream, HeaderHandler handler, Boolean fCheck,
        Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
  at System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader)
  at System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert()
  at System.Web.SessionState.SessionStateItemCollection.DeserializeItem(
        String name, Boolean check)
  at System.Web.SessionState.SessionStateItemCollection.DeserializeAllItems()
  at System.Web.SessionState.SessionStateItemCollection.get_Keys()
  at System.Web.SessionState.HttpSessionStateContainer.get_Keys()
  at System.Web.SessionState.HttpSessionState.get_Keys()

where AccountTagsMap looked like this:

[Serializable]
public class AccountTagsMap : Dictionary<string, DateTime>
{
    // Various irrelevant methods, but no additional data
    // and no constructors
}

It needed to be marked [Serializable] as it was sometimes being stashed in HttpContext.Current.Session.

Understandably, I assumed from the error message that simply adding a default constructor, AccountTagsMap(), would fix the problem.

It took me some time, with the help of Reflector, to realize that I also needed to add a serialization constructor:

[Serializable]
public class AccountTagsMap : Dictionary<string, DateTime>
{
    public AccountTagsMap() : base() { }

    protected AccountTagsMap(
        SerializationInfo info, StreamingContext context)
        : base(info, context) { }

    // ...
}

This isn't a full implementation of serialization. That's handled by the base class, Dictionary<K,V>. We wrongly assumed that inheriting from Dictionary was all we needed to do.

Here's some test code that demonstrates the problem. Comment out the serialization constructor to see the tests fail.

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.IO;
using NUnit.Framework;

namespace Cozi.Test
{
    [TestFixture]
    public class MapSerializationTests
    {
        [Test]
        public void SerializationCtor()
        {
            Assert.IsNotNull(
                SerializationTester.GetConstructor(typeof(AccountTagsMap)));
        }

        [Test]
        public void SerializationRoundtrip_AccountTagsMap()
        {
            AccountTagsMap map1 = new AccountTagsMap();
            map1["foo"] = DateTime.Now;
            map1["bar"] = DateTime.Now;
            AccountTagsMap map2 = SerializationTester.RoundTrip(map1);
            Assert.AreEqual(map1["foo"], map2["foo"]);
            Assert.AreEqual(map1["bar"], map2["bar"]);
        }
    }

    public static class SerializationTester
    {
        public static T RoundTrip<T>(T value)
        {
            if (value == null)
                throw new ArgumentNullException("value");
            using (MemoryStream stream = new MemoryStream())
            {
                BinaryFormatter formatter = new BinaryFormatter();
                formatter.Serialize(stream, value);
                stream.Seek(0, SeekOrigin.Begin);
                return (T)formatter.Deserialize(stream);
            }
        }

        // Reverse-engineered from System.Runtime.Serialization
        // .ObjectManager.GetConstructor with Reflector
        public static ConstructorInfo GetConstructor(Type t)
        {
            ConstructorInfo info = t.GetConstructor(
                BindingFlags.NonPublic | BindingFlags.Public
                    | BindingFlags.Instance,
                null,
                new Type[] { typeof(SerializationInfo),
                             typeof(StreamingContext) },
                null);
            if (info == null)
                throw new SerializationException(String.Format(
                    "Serialization_ConstructorNotFound: {0}", t.FullName));
            return info;
        }
    }
}
I hope this saves you some time.

April 08, 2011

Security 101 for Developers

Why should we care about Security?

A few weeks ago, Andrew Abrahamowicz and I gave an introductory presentation on Secure Programming for Developers to the Cozi Dev Team.

As it may be of interest to other developers, I uploaded it to SlideShare.

Other Resources:

April 30, 2010

Generating UUIDs in JavaScript

UUID format

A UUID is a universally unique 128-bit number, which can be generated without recourse to a central registry. UUIDs are used in many places, including database keys.

Version 1 UUIDs were generated from a combination of a network MAC address and a timestamp. Due to privacy concerns, most UUIDs are now RFC 4122 Version 4 UUIDs, generated from random data.

Most languages provide facilities to generate UUIDs, often built on top of the operating system's entropy pool of random data. JavaScript, however, has a mediocre standard library and provides no built-in way to generate UUIDs.
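Python, for example, covers this with one call to its standard uuid module:

```python
import uuid

u = uuid.uuid4()   # a random (version 4) UUID, per RFC 4122
print(u)
assert u.version == 4
assert u.variant == uuid.RFC_4122
```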

Robert Kieffer supplies one implementation. It uses Math.random, JavaScript's built-in pseudo-random number generator.

I looked around and found a much better seedable random number generator for JavaScript by David Bau. The only downside is that if you don't specify an explicit random seed, its default method of gathering entropy is a recursive traversal of the window object. This traversal takes several hundred milliseconds on Firefox, though it's much faster on other modern browsers. Better, then, to supply seed data randomly generated on the server, such as Math.seedrandom('<%= Guid.NewGuid %>').

Here's another implementation of UUID generation. If you're using Bau's seedrandom.js, it overrides Math.random; otherwise, you're using JavaScript's built-in RNG.

var UUID = {
  // Return a randomly generated v4 UUID, per RFC 4122
  uuid4: function() {
    return this._uuid(
      this.randomInt(), this.randomInt(),
      this.randomInt(), this.randomInt(), 4);
  },

  // Create a versioned UUID from w1..w4, 32-bit non-negative ints
  _uuid: function(w1, w2, w3, w4, version) {
    var uuid = new Array(36);
    var data = [
      (w1 & 0xFFFFFFFF),
      (w2 & 0xFFFF0FFF) | ((version || 4) << 12), // version (1-5)
      (w3 & 0x3FFFFFFF) | 0x80000000,             // RFC 4122 variant
      (w4 & 0xFFFFFFFF)
    ];
    for (var i = 0, k = 0; i < 4; i++) {
      var rnd = data[i];
      for (var j = 0; j < 8; j++) {
        if (k == 8 || k == 13 || k == 18 || k == 23) {
          uuid[k++] = '-';
        }
        var r = (rnd >>> 28) & 0xf; // Take the high-order nybble
        rnd = (rnd & 0x0FFFFFFF) << 4;
        uuid[k++] = this.hex.charAt(r);
      }
    }
    return uuid.join('');
  },

  hex: '0123456789abcdef',

  // Return a random integer in [0, 2^32).
  randomInt: function() {
    return Math.floor(0x100000000 * Math.random());
  }
};
This should do a good-enough job of generating UUIDs. If these UUIDs are being sent to your servers, remember that you must never trust data from a client. If you're expecting unique UUIDs, an attacker might get past your defenses by repeatedly sending you an already used UUID.
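As a sanity check on that bit layout (the version nybble starts the third group, the variant bits start the fourth), the same assembly can be sketched in Python. This is my translation, not part of the original code:

```python
import random

def uuid4_from_words(w1, w2, w3, w4):
    """Assemble an RFC 4122 v4 UUID string from four 32-bit ints,
    using the same masks as the JavaScript version above."""
    words = [
        w1 & 0xFFFFFFFF,
        (w2 & 0xFFFF0FFF) | (4 << 12),        # version 4
        (w3 & 0x3FFFFFFF) | 0x80000000,       # RFC 4122 variant
        w4 & 0xFFFFFFFF,
    ]
    s = ''.join('%08x' % w for w in words)
    return '-'.join((s[0:8], s[8:12], s[12:16], s[16:20], s[20:32]))

u = uuid4_from_words(*(random.getrandbits(32) for _ in range(4)))
assert u[14] == '4' and u[19] in '89ab'
```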

April 02, 2010

SQLAlchemy Sharding


Some time ago, we sharded our production database, splitting accounts onto a couple of partitions. We're using Consistent Hashing on account_ids to determine in which partition a particular account is located.

Our older services are written in C#; our newer, in Python. We use the SQLAlchemy object-relational mapper in the Python codebase, and we based our sharding code on the supplied example for SQLAlchemy 0.5.

Recently, it became apparent that some database queries that should have been sharded, weren't. These queries were querying all partitions. That's not too costly when there are only two partitions, but it will not scale.

SQLAlchemy's ShardedQuery calls our query_chooser for each query, to compute a set of shard ids on which to execute the query. To restrict a query to just one of our shards, the query must include an explicit account_id, so that we can compute the shard id.
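In outline, the choice looks something like the following sketch. This is not Cozi's actual code: the criteria dict stands in for walking the query's WHERE clause, and the md5-modulo mapping stands in for their consistent-hash ring:

```python
import hashlib

SHARD_IDS = ['shard0', 'shard1']

def shard_for_account(account_id):
    # A stable hash is required here: Python's built-in hash() is
    # randomized per process and would scatter accounts across runs.
    digest = hashlib.md5(str(account_id).encode('utf-8')).hexdigest()
    return SHARD_IDS[int(digest, 16) % len(SHARD_IDS)]

def query_chooser(criteria):
    """Return the shard ids a query must run on.

    `criteria` maps column names to constant values. Without an
    account_id, the query must fan out to every partition.
    """
    if 'account_id' in criteria:
        return [shard_for_account(criteria['account_id'])]
    return list(SHARD_IDS)
```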

There were several reasons why these queries didn't shard properly.

  • ForeignKey relationships were not fully declared in some tables. SQLAlchemy was smart enough to create queries that worked, but did not have enough information to infer that the account_id should be included in the query.
class CalendarItem(Base):
  __tablename__ = 'calendar_items'
  calendar_item_id = Column(UUID, nullable=False, primary_key=True)
  calendar_id = Column(UUID, ForeignKey(Calendar.calendar_id),
                       nullable=False, index=True)
  account_id  = Column(UUID, ForeignKey(Account.account_id),
                       nullable=False, index=True)
  # ...

class CalendarItemRecurrence(Base):
  __tablename__ = 'calendar_item_recurrences'
  calendar_item_recurrence_id = Column(UUID, nullable=False,
                                       primary_key=True)
  account_id = Column(UUID, ForeignKey(CalendarItem.account_id),
                      nullable=False, index=True)
  calendar_item_id = Column(UUID,
                       ForeignKey(CalendarItem.calendar_item_id),
                       nullable=False, index=True)
  # ...
  • ForeignKeys need to be directly to the foreign table, not to the transitive closure. In this case, CalendarItemRecurrence.account_id is a FK to CalendarItem.account_id, not to Account.account_id.
  • Some inter-table relations needed explicit primaryjoins:
CalendarItem.recurrence_dates = relation(
  CalendarItemRecurrence,
  primaryjoin=and_(
      CalendarItemRecurrence.account_id
          == CalendarItem.account_id,
      CalendarItemRecurrence.calendar_item_id
          == CalendarItem.calendar_item_id),
  cascade='all, delete-orphan')
  • Some explicit queries in our code should have filtered by account_id, but didn't.
  • The provided example assumes that constants occur only on the right-hand side of binary expressions. We found that in lazy-load relations, constants could occur on the left-hand side.
  • Some SQLAlchemy-generated queries had _BindParamClauses instead of constants:
SELECT shopping_list_items.version AS shopping_list_items_version
FROM shopping_list_items
WHERE shopping_list_items.account_id = :param_1
  AND shopping_list_items.shopping_list_item_id = :param_2

I posted to the SQLAlchemy discussion list about the last two issues. After a couple of iterations, Michael Bayer came up with a much cleaner solution for traversing a query to choose ids, which has been checked into the 0.6 trunk.

Update (2010/4/5): We discovered our non-sharded queries by adding some logging to the end of query_chooser, in the case where len(ids) == 0.

March 26, 2010

Engineering Jobs at Cozi

Cozi is hiring. We have positions in Web Development, Software Engineering, and System Engineering at our headquarters in Seattle.

Full details at the Careers Page.

April 09, 2009

Iframes: thinking outside the box



Iframes have their uses, but they are not easy to deal with.

I added some text advertisements to our product this week. The standard technique for including advertising is to use an iframe. This works well for banner ads which come in well-known sizes.

I immediately ran into a problem with text ads in an iframe: there's no easy way to apply CSS to the contents of the iframe. Styles do not cascade through the iframe barrier. Normally, this is what you want: a self-contained unit on the page. That's fine for a banner ad, which requires no styling, but Times Roman text is jarring in a page of Arial.

It's difficult, perhaps outright impossible, to inject styles into an iframe coming from another domain.

Another problem is knowing how big to make the iframe. They don't autosize and the ad text could be one or more lines long.

I needed another way.

Making an Ajax call to fetch just the raw data that I cared about (title, body copy, link) was the obvious answer. A little wad of JSON would be much easier to deal with than trying to style an iframe. Unfortunately, the XMLHttpRequest object cannot make cross-domain calls. But I read up on JSONP last week, so I knew that I could inject a script tag into my HTML DOM and set the src attribute to the adserver.

jQuery makes this easy: jQuery.getScript injects the script tag and removes it after the script has loaded.

We uploaded a custom template to the adserver:

setAdText({ "adTitle": "%%TITLE%%", "bodyCopy": "%%BODYCOPY%%", "clickUrl": "%%CLICKURL%%" });

I put this call to invoke the adserver's template in my page:

$.getScript(adServerUrl + cache_busting_random_token());

And the setAdText handler in my HTML page looks like this:

function setAdText(data) {
  console.log("setAdText: adTitle=[%s], bodyCopy=[%s], clickUrl=%s.",
              data.adTitle, data.bodyCopy, data.clickUrl);
  // add the ad to the DOM
}

Problem solved.

March 30, 2009

Augmenting Python's strftime


The strftime function is the prescribed way to format dates and times in Python (and other languages). It has limitations, such as forcing a leading zero on days of the month, 01-31, and on 12-hour clock hours, 01-12.

I noticed that we were repeatedly writing expressions like these:

d.strftime('%A, %B ') + str(d.day)
t.strftime("%I:%M").lstrip('0') + ('a' if t.hour < 12 else 'p')

and realized that there had to be a better way.

Here's a straightforward way to augment the directives: preprocess the format string, replacing new directives with their values, then let the underlying strftime implementation take care of the rest.

import re

_re_aux_format = re.compile("%([DiP])")

def strftime_aux(d, format):
    """Augmented strftime that handles additional directives.

    %D  Day of the month as a decimal number [1,31] (no leading zero)
    %i  Hour (12-hour clock) as a decimal number [1,12] (no leading zero)
    %P  'a' for AM, 'p' for PM

    >>> import datetime
    >>> d = datetime.datetime(2009, 4, 1, 9+12, 37)
    >>> strftime_aux(d, '%A, %B %d, %I:%M %p')
    'Wednesday, April 01, 09:37 PM'
    >>> strftime_aux(d, '%A, %B %D, %i:%M%P')
    'Wednesday, April 1, 9:37p'
    """
    # Precompute the values of the augmented directives
    directive_map = {
        'D': str(d.day),
        'i': '12' if d.hour in (0, 12) else str(d.hour % 12),
        'P': 'a' if d.hour < 12 else 'p',
    }
    # Substitute those values into the format string
    new_format = _re_aux_format.sub(
        lambda match: directive_map.get(match.group(1), ''), format)
    # Let the stock implementation of strftime handle everything else
    return d.strftime(new_format)

if __name__ == "__main__":
    import doctest
    doctest.testmod()


