# Timeline: High-Performance Task Scheduling in Go
This library provides a simple and efficient way to schedule and manage tasks based on time. It offers a fine-grained resolution of 10 milliseconds and uses a bucketing system to efficiently manage scheduled tasks. The library is designed to be thread-safe and can handle concurrent scheduling and execution of tasks.
## Advantages

- **High Performance:** This library is optimized for speed, handling a large number of tasks with minimal overhead. For instance, it's ideal for real-time game servers where tasks like player movements or AI decisions need frequent scheduling.
- **Fine-grained Resolution:** With its 10ms resolution, this package offers precise scheduling. This resolution is useful for applications where tasks need to be scheduled at a high frequency.
- **Efficient Memory Management:** The library's bucketing system ensures linear and predictable memory consumption. This efficiency is beneficial in cloud environments where memory usage impacts costs.
- **Thread-safe:** Timeline is designed for concurrent scheduling and execution, making it suitable for multi-threaded applications like web servers handling simultaneous requests.
## Disadvantages

- **Not Suitable for Long-term Scheduling:** This library is optimized for short-term tasks. It is not intended for tasks scheduled days or weeks in advance, making it less ideal for applications like calendar reminders.
- **Requires Active Ticking:** The library needs active ticking (via the `Tick` method) to process tasks. This design might not be suitable for scenarios with sporadic task scheduling.
## Quick Start

```go
scheduler := timeline.New()
cancel := scheduler.Start(context.Background())
defer cancel()

// A task receives the current time and the elapsed duration since its
// previous run, and returns true to remain scheduled.
task := func(now time.Time, elapsed time.Duration) bool {
	fmt.Printf("Task executed at %02d.%03d, elapsed=%v\n",
		now.Second(), now.UnixMilli()%1000, elapsed)
	return true
}

scheduler.Run(task)                     // run on the next tick
scheduler.RunEvery(task, 1*time.Second) // run every second
scheduler.RunAfter(task, 5*time.Second) // run once, 5 seconds from now
time.Sleep(10 * time.Second)
```
It outputs:

```
Task executed at 04.400, elapsed=0s
Task executed at 05.000, elapsed=600ms
Task executed at 06.000, elapsed=1s
Task executed at 07.000, elapsed=1s
Task executed at 08.000, elapsed=1s
Task executed at 09.000, elapsed=1s
Task executed at 09.400, elapsed=5s
Task executed at 10.000, elapsed=1s
Task executed at 11.000, elapsed=1s
Task executed at 12.000, elapsed=1s
Task executed at 13.000, elapsed=1s
Task executed at 14.000, elapsed=1s
```
## Event Scheduling (Integration)
The github.com/kelindar/timeline/emit sub-package seamlessly integrates the timeline scheduler with event-driven programming. It allows you to emit and subscribe to events with precise timing, making it ideal for applications that require both event-driven architectures and time-based scheduling.
```go
// The emit sub-package is imported here under the alias "event":
// import event "github.com/kelindar/timeline/emit"

type Message struct {
	Text string
}

// Type returns a unique numeric identifier for this event type.
func (Message) Type() uint32 {
	return 0x1
}

func main() {
	// Emit the event on the next tick, then emit another one every second.
	event.Next(Message{Text: "Hello, World!"})
	event.Every(Message{Text: "Are we there yet?"}, 1*time.Second)

	// Subscribe to Message events; the returned function cancels the subscription.
	cancel := event.On[Message](func(ev Message, now time.Time, elapsed time.Duration) error {
		fmt.Printf("Received '%s' at %02d.%03d, elapsed=%v\n",
			ev.Text, now.Second(), now.UnixMilli()%1000, elapsed)
		return nil
	})
	defer cancel()

	time.Sleep(5 * time.Second)
}
```
## Benchmarks

The following benchmarks were run on a 13th Gen Intel(R) Core(TM) i7-13700K. Two scenarios are compared:

- Immediate event scheduling (next tick)
- Delayed event scheduling, where the bigger the batch, the farther the delay
| Type  | Input Size | Nanoseconds/Op | Runs/Sec (Million) | Allocs/Op |
|-------|-----------:|---------------:|-------------------:|----------:|
| next  | 1          | 37.56          | 32.0               | 0 |
| next  | 10         | 191.8          | 62.83              | 0 |
| next  | 100        | 1746.0         | 68.57              | 0 |
| next  | 1000       | 17213.0        | 70.59              | 0 |
| next  | 10000      | 170543.0       | 69.66              | 0 |
| next  | 100000     | 2074903.0      | 51.4               | 4 |
| after | 1          | 38.53          | 31.17              | 0 |
| after | 10         | 198.9          | 60.45              | 0 |
| after | 100        | 1761.0         | 68.57              | 0 |
| after | 1000       | 23361.0        | 48.58              | 0 |
| after | 10000      | 730699.0      | 7.252              | 0 |
| after | 100000     | 3436339.0      | 0.06827            | 7 |