This week I added some additional tests to my Bloggulus project. In the process, I discovered a couple of places where timestamp values weren’t matching what I expected. For some background: I generate, process, and store all timestamps in the UTC time standard.

The two gotchas were:

  1. Unexpected conversion of UTC timestamps to local time
  2. Mismatched precision between Go and PostgreSQL timestamps

Unexpected Conversion

I store timestamps in the database using PostgreSQL’s timestamptz data type. The problem I ran into was that even when I was inserting proper UTC timestamps (generated by Go’s time package), they were being returned from the database in my local time zone. What’s going on? Why are these UTC timestamps being converted to Central Standard Time when selected from the database?
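
Concretely, the mismatch looked something like this. This is just a rough sketch (assuming a ctx and an open connection conn from the driver I’ll get to in a moment), but it captures the shape of the problem:

// Send a UTC timestamp through the database and back.
sent := time.Now().UTC()

var got time.Time
err := conn.QueryRow(ctx, "SELECT $1::timestamptz", sent).Scan(&got)
if err != nil {
    return err
}

// Same instant, but the location has changed out from under us.
fmt.Println(sent.Location()) // UTC
fmt.Println(got.Location())  // Local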

All of my Go projects use the amazing pgx driver for communicating with PostgreSQL databases. After quite a bit of research, I realized that I was not alone in experiencing this inconsistency: someone else noticed and opened a GitHub issue for pgx/v4 back in 2022. The initially proposed solutions were a bit fussy (in my opinion) and involved creating a custom codec for the timestamptz type.

Thankfully, enough time has passed that the package’s author was able to include a more convenient fix in pgx/v5! A new ScanLocation field was added to the TimestamptzCodec type that controls which time zone timestamptz values are read into.

In short, I just needed to override the type’s codec:

config, err := pgx.ParseConfig(uri)
if err != nil {
    return nil, err
}

conn, err := pgx.ConnectConfig(ctx, config)
if err != nil {
    return nil, err
}

// Ensure timestamps read from a timestamptz column retain their UTC location.
conn.TypeMap().RegisterType(&pgtype.Type{
    Name:  "timestamptz",
    OID:   pgtype.TimestamptzOID,
    Codec: &pgtype.TimestamptzCodec{ScanLocation: time.UTC},
})

The code is similar when using pgxpool, but we need to set up the override inside the pool’s AfterConnect hook:

config, err := pgxpool.ParseConfig(uri)
if err != nil {
    return nil, err
}

// Ensure timestamps read from a timestamptz column retain their UTC location.
config.AfterConnect = func(ctx context.Context, conn *pgx.Conn) error {
    conn.TypeMap().RegisterType(&pgtype.Type{
        Name:  "timestamptz",
        OID:   pgtype.TimestamptzOID,
        Codec: &pgtype.TimestamptzCodec{ScanLocation: time.UTC},
    })

    return nil
}

pool, err := pgxpool.NewWithConfig(ctx, config)
if err != nil {
    return nil, err
}
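
With either setup in place, a quick sanity check is to scan a timestamptz and look at the location of what comes back (again, just a sketch):

var ts time.Time
if err := pool.QueryRow(ctx, "SELECT now()").Scan(&ts); err != nil {
    return nil, err
}

// With the ScanLocation override, the scanned value keeps its UTC location.
fmt.Println(ts.Location()) // UTC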

Digging Deeper

Why did this happen in the first place? Why is there even any ambiguity with respect to what time zone should be used when reading the timestamp from the database? Isn’t the time zone included with the timestamp since I’m using timestamptz?

PostgreSQL offers two timestamp types:

  1. timestamp without time zone (aka timestamp)
  2. timestamp with time zone (aka timestamptz)

When I started building the app, I figured that timestamptz was what I wanted since my timestamps include a specific time zone: UTC (I’ve since learned that UTC isn’t really a time zone; it’s a time standard). I thought that the main difference between timestamp and timestamptz was that timestamptz stored a few extra bits of data to represent the timestamp’s time zone. However, that is not the case.

I dug deeper into the documentation and found some important notes:

For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, GMT).

When a timestamp with time zone value is output, it is always converted from UTC to the current timezone zone, and displayed as local time in that zone.

It turns out that the timestamptz type does not work how I’d originally assumed. Instead, timestamps are checked for a time zone upon insertion but then converted to and stored as UTC. The original time zone of the timestamp is lost once the value has been normalized and inserted. When selected, PostgreSQL converts timestamptz values to the time zone configured for the session (which is correctly set to UTC in my case).
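
Go’s time package works the same way conceptually: a time.Time identifies an instant, and its location only affects how that instant is displayed. For example:

chicago, err := time.LoadLocation("America/Chicago")
if err != nil {
    return err
}

local := time.Date(2024, 3, 1, 9, 30, 0, 0, chicago)
utc := local.UTC()

fmt.Println(local)            // 2024-03-01 09:30:00 -0600 CST
fmt.Println(utc)              // 2024-03-01 15:30:00 +0000 UTC
fmt.Println(local.Equal(utc)) // true: same instant, different presentation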

When pgx reads a timestamptz value from the database and converts it into Go’s time.Time type, it has to make a decision about what to do with the timestamp’s time zone. Should it be left as UTC or converted to the program’s local time zone? Historically, pgx unconditionally chose the latter. But as of v5, the package now allows us to influence the decision. Neat!
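
And if overriding the codec ever feels like too much ceremony, the same effect can be had per value by normalizing right after scanning. This is just my own fallback sketch, not something pgx requires:

var got time.Time
if err := conn.QueryRow(ctx, "SELECT now()").Scan(&got); err != nil {
    return err
}

// Convert whatever location pgx chose back to UTC.
got = got.UTC()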

Mismatched Precision

This one is quite a bit simpler! In short, Go’s timestamps are represented with nanosecond precision, but PostgreSQL’s only support microseconds. Therefore, you lose precision upon insertion, and any logic that “round-trips” a Go timestamp to and from the database yields a different, less precise value at the end. This hiccup has been discovered and discussed before, too. The suggestion from that question that made the most sense to me was to round any timestamps generated by Go to microsecond precision as soon as they are created.
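
The mismatch is easy to reproduce without touching the database at all. Compare a fresh timestamp against its microsecond-rounded counterpart, which is roughly what PostgreSQL hands back:

original := time.Now().UTC()
stored := original.Round(time.Microsecond) // what survives the round trip

// Almost always false: the sub-microsecond digits are gone.
fmt.Println(original.Equal(stored))
fmt.Println(original.Nanosecond(), stored.Nanosecond())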

This, combined with defaulting all newly-created timestamps to UTC, led me to write a simple timeutil package to encapsulate the growing “incantation” required to generate a valid timestamp:

package timeutil

import "time"

// Now returns the current UTC time, rounded to microsecond precision (to match PostgreSQL).
func Now() time.Time {
    return time.Now().UTC().Round(time.Microsecond)
}
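
Every code path that needs “the current time” now goes through the helper, so values written to the database compare equal to values read back. A quick round-trip sketch (reusing the ctx and conn from earlier):

createdAt := timeutil.Now()

var got time.Time
if err := conn.QueryRow(ctx, "SELECT $1::timestamptz", createdAt).Scan(&got); err != nil {
    return err
}

// Equal instants, and (with the ScanLocation fix) matching UTC locations, too.
fmt.Println(createdAt.Equal(got)) // true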

Conclusion

That’s all for today. I figure that these “gotchas” could easily happen to someone else and are worth sharing. At the end of the day, I learned something new, and documenting the journey will help me remember the details someday.

Thanks for reading!