Using xdc.runtime Gates

From RTSC-Pedia

offline version generated on 24-Feb-2019 01:02 UTC

How to serialize access to shared data structures

Introduction

The xdc.runtime package contains modules that work together to provide support for serializing access to shared data structures. This section focuses on the Gate module which is used to "enter" and "leave" critical sections of code that access shared data.

Use of this module is illustrated below by a series of examples that take advantage of one other distinguished xdc.runtime module:

  • GateNull - a module that is used to eliminate unnecessary serialization overhead for those situations where data is known to be accessed by one thread at a time; e.g., in single threaded systems or for modules that are known to be used by just one thread in a multi-threaded application.

Obviously the most interesting examples will require a "real" gate; i.e., one that serializes access to a shared data structure in a multi-threaded application. Extending xdc.runtime Gates provides an example that uses posix mutexes and illustrates how, using services commonly provided by an RTOS, the xdc.runtime package can be made thread-safe with respect to that RTOS.

Overview

Each embedded application runs in a unique environment with specific mechanisms for managing multiple threads; some use a real-time operating system while others simply run a control loop that coordinates with interrupt service routines. The Gate module provides an RTOS independent interface that allows applications to portably protect shared data structures in multi-threaded environments by simply "entering a gate" prior to using the data and "leaving the gate" once access is no longer required.

Gates are also used internally by the xdc.runtime package. In order to allow the xdc.runtime functions to be called from different threads, it is important that any global data managed by the xdc.runtime modules is updated "atomically". The xdc.runtime package atomically updates global data by always "entering the System gate" (via Gate_enterSystem()) prior to accessing the global data and "leaving the System gate" (via Gate_leaveSystem()) once the update is complete.

While gates make it easy to protect shared data structures and ensure the functional correctness of a multi-threaded application, it is important to minimize the time spent within a gate; the longer a thread operates within a gate, the greater the chance that it will interfere with the timely operation of other threads in the system. If two (or more) threads need to enter the same gate, all but one will be suspended until the thread in the gate leaves, even if the thread inside the gate has lower priority than the blocked threads. Worse, for performance reasons, some gates work by disabling all scheduling while inside the gate; even threads that never touch the data protected by the gate are potentially affected.

By carefully selecting the type of gate used to protect shared data structures and associating each shared data structure with a unique gate, it is possible to strike a balance between the runtime overhead caused by frequent calls to enter and leave gates and the scheduling latency engendered by unnecessarily blocking the execution of threads that are operating on unrelated data.

The xdc.runtime package

  • enables creation of portable thread-safe code with shared data structures,
  • provides a distinguished "System gate" to efficiently protect very short critical sections,
  • supports gates on both a per-module and a per-instance basis for modules that need to operate within a gate for periods longer than the worst-case scheduling latency allowed by the application, and
  • allows system integrators to configure gates to achieve the proper balance between runtime overhead and scheduling latency for their application.

Architecture

Any module may declare that it is "gated"; i.e., that the module protects internal data shared among multiple threads in the system via an xdc.runtime.Gate instance. Gated modules always have at least one gate instance created implicitly. However, it is possible to override the creation of this gate by explicitly creating it and assigning it to the module's common$.gate configuration parameter during configuration.

Modules enter and leave gates via the methods provided by the xdc.runtime.Gate module. This Gate module provides methods that enable modules to enter (and leave) their gate as well as dynamically create and delete gates associated with their instance objects. Users of gated modules don't need to use these methods (they are for use within the implementation of RTSC modules) and should consult the documentation of each gated module used to understand how best to balance the module's need to protect shared data structures and the application's scheduling latency requirements.

In addition to the services provided to implement thread-safe modules, the Gate module also provides access to the distinguished "System gate" via Gate_enterSystem() and Gate_leaveSystem(). In fact, this System gate is the gate associated with the gated xdc.runtime.System module. This one gate is used throughout the xdc.runtime package to serialize access to global data, and it is only used when the duration of the critical section is known to be deterministic and very short.

Using the System gate to protect logically independent data structures is contrary to the design principle that, to minimize scheduling latency effects, you should use a unique gate for each independent data structure. This exception exists because, if the time required to update these independent structures is always very short (less than any application's maximum scheduling latency requirement), the System gate can be implemented very efficiently by simply disabling all scheduling while inside the gate. While this may sound excessive, it often results in a significant overall performance gain, keeps data space requirements to a minimum with just one gate, and has no impact on scheduling latency.

[Figure: RuntimeGates.png]

Deadlocks.  Although it is possible to "nest" gates — enter a gate while inside another gate — care should be taken to avoid deadlocks; e.g., thread 1 enters gate a, thread 2 enters gate b, thread 1 tries to enter gate b and blocks waiting for thread 2 to leave, but thread 2 tries to enter gate a and blocks waiting for thread 1 to leave. One way to avoid deadlocks is to ensure that nested gates are always entered in the same order.

Since the System gate is used throughout the xdc.runtime package and the modules in this package are widely used, you should never enter another gate while inside the System gate. If another thread already holds that other gate and calls an xdc.runtime method, such as System_printf(), that itself enters the System gate, the two threads will deadlock.

The System gate should only be used to protect low-level data structures where the total execution time within the gate has a constant upper bound independent of the data being protected.

Scheduling Latency.  Undisciplined use of gates can lead to scheduling latencies that violate a system's real-time constraints. Since the System gate is used to protect a wide variety of independent data structures, it is important to keep the time within the System gate to an absolute minimum. Similarly, when assigning gates to gated modules it's important to minimize sharing gates between different modules; while sharing can reduce your application's data footprint, you risk creating unnecessary scheduling blockages.

Configuration

Users can configure their application to specify

  • on a per-application basis, the System gate used to protect global data (including the xdc.runtime shared data),
  • on a per-module basis, the gate used to protect the module's global data structures, as well as the parameters used to create the additional gates the module uses internally to protect the data in its individual instances.

The System gate.  Since the System gate is used throughout the xdc.runtime package, it's important to configure the System module's gate before using the xdc.runtime modules in a multi-threaded environment. Fortunately, some RTOS providers automatically configure the xdc.runtime.System gate to ensure correct operation in a multi-threaded environment. For example, if you use the ti.sysbios package, lines similar to the following are effectively added to your configuration by the ti.sysbios package itself:

var System = xdc.useModule("xdc.runtime.System");
var GateHwi = xdc.useModule("ti.sysbios.gates.GateHwi");
System.common$.gate = GateHwi.create();

If the System gate is not configured by some package in the system, it will default to an instance of GateNull. Since GateNull provides no synchronization, it should only be used for modules that are never called by more than one thread at a time. Multi-threaded applications must either

  1. configure the System module's gate with a mutex that serializes all threads that access the xdc.runtime modules, or
  2. ensure that there are no concurrent accesses to the xdc.runtime modules.

It's easy to identify those modules that provide gates; only modules that inherit the xdc.runtime.IGateProvider interface can be used to create gates. For posix-based applications, you can use the Lock module developed in Extending xdc.runtime Gates/Example 1.

For more information about how the System gate is used within the xdc.runtime package, see the Multi-Threading Support section of Working with xdc.runtime.

Configuring gated modules.  Every gated module, Mod, has two configuration parameters related to gates:

  1. Mod.common$.gate — the gate instance object used by Mod to protect module-wide shared data.
  2. Mod.common$.gateParams — gate instance creation parameters used by Mod to create gates for its instance objects. These parameters are used with the same module that manages the Mod.common$.gate instance.

Suppose there is a heap memory module, say Heap, whose instances manage independent heaps and, to avoid unnecessary scheduling blockages, Heap uses a separate gate per instance to protect each heap. The following configuration script uses a hypothetical gate module named GateSemaphore, with a timeout instance parameter, to provide gates to Heap.

var Heap = xdc.useModule("...Heap");
var GateSemaphore = xdc.useModule("...GateSemaphore");
Heap.common$.gate = GateSemaphore.create();
Heap.common$.gateParams = new GateSemaphore.Params();
Heap.common$.gateParams.timeout = ...;

Examples

In the table below, we provide examples that illustrate the key capabilities of the Gate module.

Example     Description                         Purpose
Example 1   Using Gate to protect global data   a minimal example to illustrate using Gate in existing code bases

In addition to the "client-side" examples above, the table below lists examples of how to create IGateProvider modules that can be used to manage simple concurrency requirements.

Example     Description                 Purpose
Example 1   posix-based IGateProvider   a simple but complete example of a Gate provider for pthread-based applications

Performance Considerations

There are important scheduling latency and performance considerations that affect the "type" of gate used to protect each data structure. For example, the best way to protect a shared counter on a single core CPU is to simply disable all interrupts before the update and restore the interrupt state after the update; disabling all interrupts prevents all thread switching, so the update is guaranteed to be "atomic". Although highly efficient, this method of creating atomic sections causes serious system latencies when the time required to update the data structure can't be bounded.

For example, a memory manager's list of free blocks can grow indefinitely long during periods of high fragmentation. Searching such a list during an allocation operation with interrupts disabled would cause system latencies to also become unbounded. In this case, the best solution is to provide a gate that suspends the execution of threads that try to enter a gate that has already been entered; i.e., the gate "blocks" the thread until the thread already in the gate leaves. The time required to enter and leave the gate is greater than simply enabling and restoring interrupts, but since the time spent within the gate is relatively large, the overhead caused by entering and leaving gates will not become a significant percentage of overall system time. More importantly, threads that do not need to access the shared data structure are completely unaffected by threads that do access it.

TODO:  describe system gate, gate qualities, and the latency-performance tradeoff

Copyright © 2008 The Eclipse Foundation. All Rights Reserved