
A round-up of Go 1.24's performance improvements
We all want our code to run as fast as possible. But have you ever stopped to wonder how many nanoseconds it takes to do a map lookup in Go? Or how much memory gets allocated when you create a small object?
If you’re like most developers, you probably don’t think about these things too often. You’re busy building features, fixing bugs, and trying to keep your services running smoothly. But the Go team thinks about this stuff constantly, and in Go 1.24 they’ve landed a set of runtime improvements that, taken together, cut CPU overhead by roughly 2-3% across a representative set of benchmarks.
The Big Changes
Swiss Tables: Making Maps Faster
The Go team has completely revamped how maps work under the hood. They’ve switched to something called Swiss Tables, which, despite sounding like a fancy piece of furniture, is actually a hash table design that keeps compact metadata alongside groups of entries, so lookups touch less memory and make better use of CPU caches.
Here’s what a map looks like in your code (don’t worry, you don’t need to change anything):
```go
package main

import "fmt"

func main() {
	// An ordinary map: it picks up the new Swiss Table implementation automatically.
	myMap := make(map[string]int)
	myMap["gopher"] = 1
	fmt.Println("Value:", myMap["gopher"])
}
```
Behind the scenes, this code now runs faster and uses memory more efficiently. If for some reason you need to stick with the previous implementation (maybe you’re running some very specific benchmarks), you can disable it with GOEXPERIMENT=noswissmap when building. We dug into this further here.
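If you’re curious how much this buys you on your own hardware, a micro-benchmark along these lines can be run under Go 1.23 and 1.24 and compared. This is just a minimal sketch I put together for illustration; the key count and key format are arbitrary choices of mine, not anything from the release notes.

```go
package bench

import (
	"strconv"
	"testing"
)

// BenchmarkMapLookup exercises string-keyed map lookups, the operation
// the new Swiss Table implementation speeds up. Put this in a *_test.go
// file and run it with: go test -bench=MapLookup
func BenchmarkMapLookup(b *testing.B) {
	m := make(map[string]int, 1000)
	keys := make([]string, 1000)
	for i := range keys {
		keys[i] = "key-" + strconv.Itoa(i)
		m[keys[i]] = i
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = m[keys[i%len(keys)]]
	}
}
```

Run it on each Go version and compare the ns/op numbers to see the difference for yourself.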
Small Object Allocations: The Little Things Matter
As someone who deals with real-time data processing (my blood glucose monitoring system generates new readings every few minutes), I get particularly excited about improvements to small object allocations. The Go team has made this significantly more efficient in 1.24, reducing memory fragmentation and improving overall performance.
Here’s a simple example of the kind of code that benefits:
```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	// A short-lived buffer and a small string: the kind of small allocation
	// that 1.24's allocator handles more efficiently.
	buf := bytes.NewBuffer(make([]byte, 0, 1024))
	buf.WriteString("Hello, Go 1.24!")
	fmt.Println(buf.String())
}
```
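If you want to check whether the allocator changes show up in your own workloads, an allocation-counting benchmark is the easiest way to find out. The Reading struct below is just a made-up stand-in for the kind of small value a data pipeline churns through; everything else is standard testing.B machinery.

```go
package bench

import (
	"testing"
	"time"
)

// Reading is a small struct, the kind of object the improved
// small-object allocator is aimed at.
type Reading struct {
	Timestamp time.Time
	Value     float64
}

var sink *Reading

// BenchmarkSmallAlloc allocates one small object per iteration and
// reports allocs/op and B/op so runs on different Go versions compare easily.
func BenchmarkSmallAlloc(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		sink = &Reading{Timestamp: time.Now(), Value: float64(i)}
	}
}
```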
Runtime Mutex Improvements
The Go runtime itself got an upgrade to how it handles internal synchronization. While this doesn’t directly affect the mutexes in your application code, it makes the Go runtime itself more efficient at handling concurrent operations.
If you need to stick with the previous mutex implementation for the runtime, you can use GOEXPERIMENT=nospinbitmutex when building.
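You never call the runtime’s internal mutex yourself; it sits underneath ordinary concurrent code like the toy sketch below, where a handful of goroutines push work through a shared channel and the scheduler and channel internals do their locking behind the scenes. The numbers and structure here are purely illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make(chan int, 128)
	var wg sync.WaitGroup

	// Each worker sends a few results; the coordination underneath
	// (scheduler, channel locks) is where the runtime's new internal
	// mutex implementation lives.
	for w := 0; w < 8; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 10; i++ {
				results <- id*10 + i
			}
		}(w)
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	total := 0
	for range results {
		total++
	}
	fmt.Println("received", total, "results")
}
```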
Real World Impact
These improvements might seem small on paper, but they add up in real-world applications. If you’re building something that:
- Handles lots of concurrent requests
- Processes real-time data
- Needs to be memory efficient
- Deals with high-throughput scenarios
…then you’re already getting these performance improvements just by using Go 1.24. The best part? You don’t need to change any code: just upgrade and enjoy the speed boost.
Configuration Options
While most of us should just take these improvements and run with them (they’re enabled by default), sometimes you might need to disable specific features. Here’s a quick reference:
| Feature | How to Disable |
| --- | --- |
| Swiss Table Maps | GOEXPERIMENT=noswissmap |
| New Runtime Mutex System | GOEXPERIMENT=nospinbitmutex |
You probably don’t need to do this though!
Wrapping Up
As someone who builds monitoring systems for both work and staying alive (literally), I get pretty excited about performance improvements like these. Go 1.24 makes everything a bit faster and more efficient, and you don’t have to change any code to get the benefits.
If you’re running something where performance really matters, I’d recommend doing some before-and-after benchmarks when you upgrade. You might be surprised by how much these “small” improvements add up!
Let me know if you build anything cool with Go 1.24 - I’d love to hear about it!