Games Task Scheduler (GTS)
A multi-processor scheduling framework for game engines
Quick Start Guide

This section is a brief overview of how to get started with GTS. See Programming Model for the gritty details.

Worker Pool

A Worker Pool is the fundamental executor of CPU work. It determines how software (SW) Worker threads are mapped to a processor's hardware (HW) threads. By default, GTS creates one SW Worker thread per HW thread to avoid oversubscription.

// Create an uninitialized WorkerPool.
gts::WorkerPool workerPool;
// Initialize it. With no arguments, #workers == #HW threads.
workerPool.initialize(/* you can also explicitly specify the number of workers. */);


Micro-scheduler

A Micro-scheduler defines the scheduling policy of work onto a Worker Pool. It is the fundamental scheduler of work onto the CPU, creating and consuming Tasks as its fundamental unit of work. A Micro-scheduler is always mapped to a Worker Pool, with a 1:1 mapping between its Local Schedulers and the pool's Workers.

// Create an uninitialized MicroScheduler.
gts::MicroScheduler microScheduler;
// Initialize it with a WorkerPool.
microScheduler.initialize(&workerPool);


Example

This program increments the values of all elements in a vector using a parallel-for pattern.

#include "gts/micro_scheduler/WorkerPool.h"
#include "gts/micro_scheduler/MicroScheduler.h"
#include "gts/micro_scheduler/patterns/ParallelFor.h"
...
// Create an uninitialized WorkerPool.
gts::WorkerPool workerPool;
// Initialize it.
bool result = workerPool.initialize();
GTS_ASSERT(result);
// Create an uninitialized MicroScheduler.
gts::MicroScheduler microScheduler;
// Initialize it with a WorkerPool.
result = microScheduler.initialize(&workerPool);
GTS_ASSERT(result);
// Create a ParallelFor object attached to the scheduler.
gts::ParallelFor parFor(microScheduler);
// Increment a vector of items in parallel.
std::vector<int> vec(1000000, 0);
parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });
// Verify results.
for (auto const& v : vec)
{
GTS_ASSERT(v == 1);
}
printf("SUCCESS!\n");
// These resources can be shut down explicitly, or their destructors will shut them down implicitly.
microScheduler.shutdown();
workerPool.shutdown();

Output:

SUCCESS!


Macro-scheduler

A Macro-scheduler is a high-level scheduler of persistent task DAGs. It produces Schedules that execute on Compute Resources, such as the Micro-scheduler. The Task DAGs are formed from Nodes, with each Node containing Workloads that define which Compute Resources it can be scheduled onto.

// Create a MicroScheduler_ComputeResource that will execute Workloads through the MicroScheduler.
MicroScheduler_ComputeResource microSchedulerCompResource(&microScheduler, 0, 0);
// Add the microSchedulerComputeResource to the description struct for the MacroScheduler.
gts::MacroSchedulerDesc macroSchedulerDesc;
macroSchedulerDesc.computeResources.push_back(&microSchedulerCompResource);
// Create and initialize the MacroScheduler with the macroSchedulerDesc.
gts::MacroScheduler* pMacroScheduler = new gts::CentralQueue_MacroScheduler;
pMacroScheduler->init(macroSchedulerDesc);


Example 1

This program builds a DAG of Nodes, where each Node prints its name. For this example we use the CentralQueue_MacroScheduler implementation of the Macro-scheduler.

#include "gts/micro_scheduler/WorkerPool.h"
#include "gts/micro_scheduler/MicroScheduler.h"
#include "gts/macro_scheduler/Node.h"
#include "gts/macro_scheduler/compute_resources/MicroScheduler_Workload.h"
#include "gts/macro_scheduler/compute_resources/MicroScheduler_ComputeResource.h"
#include "gts/macro_scheduler/schedulers/homogeneous/central_queue/CentralQueue_MacroScheduler.h"
...
// Create an uninitialized WorkerPool.
gts::WorkerPool workerPool;
// Initialize it.
bool result = workerPool.initialize();
GTS_ASSERT(result);
// Create an uninitialized MicroScheduler.
gts::MicroScheduler microScheduler;
// Initialize it with a WorkerPool.
result = microScheduler.initialize(&workerPool);
GTS_ASSERT(result);
// Create a MicroScheduler_ComputeResource that will execute Workloads through the MicroScheduler.
gts::MicroScheduler_ComputeResource microSchedulerComputeResource(&microScheduler);
// Add the microSchedulerComputeResource to the description struct for the MacroScheduler.
gts::MacroSchedulerDesc macroSchedulerDesc;
macroSchedulerDesc.computeResources.push_back(&microSchedulerComputeResource);
// Create and initialize the MacroScheduler with the macroSchedulerDesc.
gts::MacroScheduler* pMacroScheduler = new gts::CentralQueue_MacroScheduler;
GTS_ASSERT(pMacroScheduler);
pMacroScheduler->init(macroSchedulerDesc);
//
// Build a DAG of Nodes
/*
+---+
+---->| B |-----+
| +---+ |
| v
+---+ +---+
| A | | D |
+---+ +---+
| ^
| +---+ |
+---->| C |-----+
+---+
*/
Node* pA = pMacroScheduler->allocateNode();
pA->addWorkload<gts::MicroSchedulerLambda_Workload>([](WorkloadContext const&){ printf("A "); });
Node* pB = pMacroScheduler->allocateNode();
pB->addWorkload<gts::MicroSchedulerLambda_Workload>([](WorkloadContext const&){ printf("B "); });
Node* pC = pMacroScheduler->allocateNode();
pC->addWorkload<gts::MicroSchedulerLambda_Workload>([](WorkloadContext const&){ printf("C "); });
Node* pD = pMacroScheduler->allocateNode();
pD->addWorkload<gts::MicroSchedulerLambda_Workload>([](WorkloadContext const&){ printf("D "); });
pA->addChild(pB);
pA->addChild(pC);
pB->addChild(pD);
pC->addChild(pD);
// Build and execute the schedule
gts::Schedule* pSchedule = pMacroScheduler->buildSchedule(pA, pD);
pMacroScheduler->executeSchedule(pSchedule, microSchedulerComputeResource.id());

Output:

A B C D

or

A C B D


Example 2

This program builds a DAG of Nodes, where each Node runs a finer-grained computation using a Micro-scheduler. The DAG can be seen as an execution flow that organizes the lower-level computation. In this example we divide work on a vector across the Nodes.

//... Init Boilerplate as seen in the previous examples.
//
// Build a DAG of Nodes
/*
+---+
+---->| B |-----+
| +---+ |
| v
+---+ +---+
| A | | D |
+---+ +---+
| ^
| +---+ |
+---->| C |-----+
+---+
A: Increments all elements in a vector by 1.
B: Increments elements [0, n/2) by 2.
C: Increments elements [n/2, n) by 3.
D: Increments all elements by 1.
Result: { 4, 4, ..., 4, 5, 5, ..., 5}.
*/
gts::ParallelFor parFor(microScheduler);
std::vector<int> vec(1000000, 0);
// Increments all elements in a vector by 1.
Node* pA = pMacroScheduler->allocateNode();
pA->addWorkload<gts::MicroSchedulerLambda_Workload>([&parFor, &vec](WorkloadContext const& ctx)
{
parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });
});
// Increments elements [0, n/2) by 2.
Node* pB = pMacroScheduler->allocateNode();
pB->addWorkload<gts::MicroSchedulerLambda_Workload>([&parFor, &vec](WorkloadContext const&)
{
parFor(vec.begin(), vec.begin() + vec.size() / 2, [](std::vector<int>::iterator iter) { (*iter) += 2; });
});
// Increments elements [n/2, n) by 3.
Node* pC = pMacroScheduler->allocateNode();
pC->addWorkload<gts::MicroSchedulerLambda_Workload>([&parFor, &vec](WorkloadContext const&)
{
parFor(vec.begin() + vec.size() / 2, vec.end(), [](std::vector<int>::iterator iter) { (*iter) += 3; });
});
// Increments all elements by 1.
Node* pD = pMacroScheduler->allocateNode();
pD->addWorkload<gts::MicroSchedulerLambda_Workload>([&parFor, &vec](WorkloadContext const&)
{
parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });
});
pA->addChild(pB);
pA->addChild(pC);
pB->addChild(pD);
pC->addChild(pD);
// Build and execute the schedule
gts::Schedule* pSchedule = pMacroScheduler->buildSchedule(pA, pD);
pMacroScheduler->executeSchedule(pSchedule, microSchedulerComputeResource.id());
// Validate result = { 4, 4, ..., 4, 5, 5, ..., 5}.
for (auto iter = vec.begin(); iter != vec.begin() + vec.size() / 2; ++iter)
{
GTS_ASSERT(*iter == 4);
}
for (auto iter = vec.begin() + vec.size() / 2; iter != vec.end(); ++iter)
{
GTS_ASSERT(*iter == 5);
}
printf("SUCCESS!\n");

Output:

SUCCESS!

Examples Source