This section is a brief overview of how to get started with GTS. See Programming Model for the gritty details.
Worker Pool
A Worker Pool is the fundamental executor of CPU work. It determines how SW Worker threads are mapped to a processor's HW threads. By default, GTS creates one SW thread per HW thread to avoid oversubscription.
A WorkerPool is a collection of running Worker threads that a MicroScheduler can be run on (WorkerPool.h). Key API: bool initialize(uint32_t threadCount = 0); passing 0 (the default) creates one Worker per HW thread.
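To make the SW-to-HW thread mapping concrete, here is a plain C++ sketch using only the standard library (no GTS types; runWorkers is a name invented for this illustration). It launches one SW thread per HW thread, the same default mapping WorkerPool uses to avoid oversubscription:

```cpp
#include <thread>
#include <vector>

// Launch one SW worker thread per HW thread, mirroring the WorkerPool
// default that avoids oversubscription. Returns the number of workers.
unsigned runWorkers()
{
    unsigned hwThreads = std::thread::hardware_concurrency();
    if (hwThreads == 0)
        hwThreads = 1; // hardware_concurrency() may report 0 when unknown.

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < hwThreads; ++i)
        workers.emplace_back([] { /* a real Worker would loop here, executing Tasks */ });

    for (std::thread& t : workers)
        t.join();

    return hwThreads;
}
```

A real Worker Pool keeps these threads alive and parked between bursts of work instead of joining them immediately.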
Micro-scheduler
A Micro-scheduler defines the scheduling policy of work onto a Worker Pool. It is the fundamental scheduler of work onto the CPU. It creates and consumes Tasks as a fundamental unit of work. A Micro-scheduler is always mapped to a Worker Pool with a 1:1 mapping between Local Schedulers and Workers.
A MicroScheduler is a work-stealing task scheduler executed by the WorkerPool it is initialized with (MicroScheduler.h). Key API: bool initialize(WorkerPool* pWorkerPool), which attaches the scheduler to pWorkerPool so that each of its Workers executes scheduled Tasks.
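The work-stealing policy can be sketched with a plain deque (standard C++ only; this Worker is a toy type invented for illustration, not the GTS Worker): the owning thread pushes and pops its own tasks at one end, while idle thieves take from the other end, so stolen work tends to be the oldest and largest:

```cpp
#include <deque>

// Toy sketch of a work-stealing deque. Task ids stand in for real Tasks.
struct Worker {
    std::deque<int> tasks;

    void push(int t) { tasks.push_back(t); }

    // Owner takes the newest task (LIFO): hot in cache, depth-first.
    bool popLocal(int& t)
    {
        if (tasks.empty()) return false;
        t = tasks.back();
        tasks.pop_back();
        return true;
    }

    // A thief takes the oldest task (FIFO): usually the largest chunk of work.
    bool steal(int& t)
    {
        if (tasks.empty()) return false;
        t = tasks.front();
        tasks.pop_front();
        return true;
    }
};
```

A production scheduler makes these operations lock-free and atomic; the asymmetry (owner at one end, thieves at the other) is the essential idea.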
Example
This program increments the values of all elements in a vector using a parallel-for pattern.
#include "gts/micro_scheduler/WorkerPool.h"
#include "gts/micro_scheduler/MicroScheduler.h"
#include "gts/micro_scheduler/patterns/ParallelFor.h"
...
// Create a Worker Pool with one Worker per HW thread.
gts::WorkerPool workerPool;
workerPool.initialize();
// Create a Micro-scheduler and attach it to the Worker Pool.
gts::MicroScheduler taskScheduler;
taskScheduler.initialize(&workerPool);

// Increment each element of the vector in parallel.
std::vector<int> vec(1000000, 0);
gts::ParallelFor parFor(taskScheduler);
parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });

// Verify the result.
for (auto const& v : vec)
{
    GTS_ASSERT(v == 1);
}
printf("Success!\n");

taskScheduler.shutdown();
workerPool.shutdown();
ParallelFor (ParallelFor.h) is a construct that maps parallel-for behavior onto a MicroScheduler. GTS_ASSERT(expr) (Assert.h) causes execution to break when expr is false.
Output:
Success!
Macro-scheduler
A Macro-scheduler is a high-level scheduler of persistent task DAGs. It produces Schedules that execute on Compute Resources, such as the Micro-scheduler. The task DAGs are formed from Nodes, with each Node containing Workloads that define which Compute Resources it can be scheduled to.
// Wrap an initialized gts::MicroScheduler in a ComputeResource.
gts::MicroScheduler_ComputeResource microSchedulerCompResource(&microScheduler, 0, 0);

// Describe a Macro-scheduler that can schedule to that ComputeResource.
gts::MacroSchedulerDesc macroSchedulerDesc;
macroSchedulerDesc.computeResources.push_back(&microSchedulerCompResource);

// Create a central-queue Macro-scheduler: a generalized work-stealing DAG
// scheduler that delegates execution to its ComputeResources
// (CentralQueue_MacroScheduler.h), and initialize it from the description.
gts::MacroScheduler* pMacroScheduler = new gts::CentralQueue_MacroScheduler;
pMacroScheduler->init(macroSchedulerDesc);
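The scheduling obligation a Macro-scheduler takes on, running each Node only after all of its parents have finished, can be sketched in plain C++ with Kahn's algorithm (DagNode and executeDag are invented for this illustration; a real Schedule dispatches ready Nodes to ComputeResources rather than running them serially):

```cpp
#include <queue>
#include <vector>

// Toy DAG node: a name, the indices of dependent nodes, and a counter
// of unfinished parents.
struct DagNode {
    const char* name;
    std::vector<int> children;
    int remainingParents;
};

// Runs the DAG in dependency order; returns the order of execution.
std::vector<const char*> executeDag(std::vector<DagNode>& nodes)
{
    // Count each node's parents.
    for (DagNode& n : nodes)
        n.remainingParents = 0;
    for (const DagNode& n : nodes)
        for (int c : n.children)
            nodes[c].remainingParents++;

    // Nodes with no parents are immediately ready (the DAG's sources).
    std::queue<int> ready;
    for (int i = 0; i < (int)nodes.size(); ++i)
        if (nodes[i].remainingParents == 0)
            ready.push(i);

    std::vector<const char*> order;
    while (!ready.empty()) {
        int i = ready.front();
        ready.pop();
        order.push_back(nodes[i].name); // "execute" the Node's Workload here
        for (int c : nodes[i].children)
            if (--nodes[c].remainingParents == 0)
                ready.push(c); // all parents finished: the child becomes ready
    }
    return order;
}
```

A real Macro-scheduler performs the same readiness bookkeeping, but ready Nodes are handed to ComputeResources so independent Nodes (like B and C below) run concurrently.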
Example 1
This program builds a DAG of Nodes, where each Node prints its name. For this example we use the CentralQueue_MacroScheduler, a Micro-scheduler-backed implementation of the Macro-scheduler.
#include "gts/micro_scheduler/WorkerPool.h"
#include "gts/micro_scheduler/MicroScheduler.h"
#include "gts/macro_scheduler/Node.h"
#include "gts/macro_scheduler/compute_resources/MicroScheduler_Workload.h"
#include "gts/macro_scheduler/compute_resources/MicroScheduler_ComputeResource.h"
#include "gts/macro_scheduler/schedulers/homogeneous/central_queue/CentralQueue_MacroScheduler.h"
...
// Set up a Worker Pool and a Micro-scheduler to execute the Workloads.
gts::WorkerPool workerPool;
workerPool.initialize();
gts::MicroScheduler microScheduler;
microScheduler.initialize(&workerPool);

// Wrap the Micro-scheduler in a ComputeResource and describe the Macro-scheduler.
gts::MicroScheduler_ComputeResource microSchedulerCompResource(&microScheduler, 0, 0);
gts::MacroSchedulerDesc macroSchedulerDesc;
macroSchedulerDesc.computeResources.push_back(&microSchedulerCompResource);

gts::MacroScheduler* pMacroScheduler = new gts::CentralQueue_MacroScheduler;
pMacroScheduler->init(macroSchedulerDesc);

// Build the diamond DAG: A -> B, A -> C, B -> D, C -> D.
// Each Node gets a lambda Workload (MicroScheduler_Workload.h) that prints its name.
gts::Node* pA = pMacroScheduler->allocateNode();
pA->addWorkload<gts::MicroScheduler_LambdaWorkload>([](gts::WorkloadContext const&) { printf("A "); });
gts::Node* pB = pMacroScheduler->allocateNode();
pB->addWorkload<gts::MicroScheduler_LambdaWorkload>([](gts::WorkloadContext const&) { printf("B "); });
gts::Node* pC = pMacroScheduler->allocateNode();
pC->addWorkload<gts::MicroScheduler_LambdaWorkload>([](gts::WorkloadContext const&) { printf("C "); });
gts::Node* pD = pMacroScheduler->allocateNode();
pD->addWorkload<gts::MicroScheduler_LambdaWorkload>([](gts::WorkloadContext const&) { printf("D\n"); });

pA->addChild(pB);
pA->addChild(pC);
pB->addChild(pD);
pC->addChild(pD);

// Build a Schedule from the DAG and execute it on the Micro-scheduler,
// blocking until the whole DAG has completed.
gts::Schedule* pSchedule = pMacroScheduler->buildSchedule(pA, pD);
pMacroScheduler->executeSchedule(pSchedule, microSchedulerCompResource.id(), true);
Output:
A B C D
or
A C B D
(B and C have no ordering constraint between them, so either interleaving is valid.)
Example 2
This program builds a DAG of Nodes, where each Node runs a more fine-grained computation using a Micro-scheduler. The DAG can be seen as an execution flow that organizes the lower-level computation. In this example we divide work on a vector across the Nodes.
std::vector<int> vec(1000000, 0);

// parFor is a gts::ParallelFor bound to the Micro-scheduler (see the
// Micro-scheduler example above).
// Node A: increment every element.
gts::Node* pA = pMacroScheduler->allocateNode();
pA->addWorkload<gts::MicroScheduler_LambdaWorkload>([&](gts::WorkloadContext const&)
{
    parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });
});
// Node B: add 2 to the first half.
gts::Node* pB = pMacroScheduler->allocateNode();
pB->addWorkload<gts::MicroScheduler_LambdaWorkload>([&](gts::WorkloadContext const&)
{
    parFor(vec.begin(), vec.begin() + vec.size() / 2, [](std::vector<int>::iterator iter) { (*iter) += 2; });
});
// Node C: add 3 to the second half.
gts::Node* pC = pMacroScheduler->allocateNode();
pC->addWorkload<gts::MicroScheduler_LambdaWorkload>([&](gts::WorkloadContext const&)
{
    parFor(vec.begin() + vec.size() / 2, vec.end(), [](std::vector<int>::iterator iter) { (*iter) += 3; });
});
// Node D: increment every element again.
gts::Node* pD = pMacroScheduler->allocateNode();
pD->addWorkload<gts::MicroScheduler_LambdaWorkload>([&](gts::WorkloadContext const&)
{
    parFor(vec.begin(), vec.end(), [](std::vector<int>::iterator iter) { (*iter)++; });
});
pA->addChild(pB);
pA->addChild(pC);
pB->addChild(pD);
pC->addChild(pD);
// Build and execute the Schedule, blocking until the DAG completes.
gts::Schedule* pSchedule = pMacroScheduler->buildSchedule(pA, pD);
pMacroScheduler->executeSchedule(pSchedule, microSchedulerCompResource.id(), true);
// Verify: first half = 1 + 2 + 1 = 4; second half = 1 + 3 + 1 = 5.
for (auto iter = vec.begin(); iter != vec.begin() + vec.size() / 2; ++iter)
{
    GTS_ASSERT(*iter == 4);
}
for (auto iter = vec.begin() + vec.size() / 2; iter != vec.end(); ++iter)
{
    GTS_ASSERT(*iter == 5);
}
printf("SUCCESS!\n");
Output:
SUCCESS!
Examples Source