Multithreaded execution is commonly required in software development, especially for complex test systems and large projects, and single-threaded programs often cannot meet these requirements. This article summarizes key concepts and practical considerations for thread programming in LabVIEW.
Overview
This tutorial covers:
- Basic concepts of single-thread and multi-thread execution in LabVIEW
- How LabVIEW schedules and runs code across threads and CPU cores
- The LabVIEW execution systems and VI priority settings
- Use of timing structures and parallel loops
1. Single-thread vs. Multi-thread
LabVIEW has supported multithreading since version 5.0; earlier versions ran single-threaded. Single-threaded execution does not mean code is permanently bound to a particular OS thread; the system may move execution between threads, but at any given moment the code occupies only one thread and is never executed simultaneously by multiple threads. Multithreading means the system can execute code on multiple threads at the same time.
In LabVIEW, any portions of a block diagram that can run in parallel (for example, two independent While loops with no data dependency between them) are automatically scheduled to run on separate threads.
Typically LabVIEW creates at least two threads when running a VI: a user interface thread responsible for front panel updates and user interactions, and one or more execution threads that perform non-UI work.
2. LabVIEW Multithreading in Practice
If two While loops are independent, LabVIEW will assign them to two threads during execution. When those loops have no wait functions and perform heavy work, LabVIEW may schedule them onto different CPU cores on multicore systems to balance load and avoid saturating a single core.
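The same scheduling idea can be sketched outside LabVIEW. A minimal Python analogy (the two loop bodies are hypothetical stand-ins for independent While loop work) shows two loops with no data dependency each getting their own OS thread:

```python
import threading

results_a, results_b = [], []

def loop_a():
    # Independent loop: no data flows to or from loop_b.
    for i in range(5):
        results_a.append(i * i)

def loop_b():
    for i in range(5):
        results_b.append(i + 100)

# Like two independent While loops on a block diagram, each loop
# runs on its own thread and the two may execute concurrently.
t1 = threading.Thread(target=loop_a)
t2 = threading.Thread(target=loop_b)
t1.start(); t2.start()
t1.join(); t2.join()

print(results_a)  # [0, 1, 4, 9, 16]
print(results_b)  # [100, 101, 102, 103, 104]
```

Because the loops share no data, their relative execution order does not matter, which is exactly the property that lets LabVIEW schedule them in parallel.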
3. LabVIEW Execution Systems
LabVIEW has an internal scheduler built around execution systems. Current LabVIEW versions include six: User Interface, Standard, Instrument I/O, Data Acquisition, Other 1, and Other 2 (a VI may also be set to Same as Caller, inheriting its caller's execution system). SubVIs within an application can be assigned to different execution systems via VI Properties.
The User Interface execution system uses a single thread (the UI thread). Other execution systems may have multiple threads available for code execution.
4. VI Priority Settings
VI priority is set in VI Properties, on the Execution page. The five priority levels, from lowest to highest, are: Background (lowest), Normal, Above Normal, High, and Time Critical (highest); a higher-priority VI has a greater chance of obtaining CPU time ahead of lower-priority VIs. There is also a Subroutine setting: a Subroutine VI has its front panel and debugging support removed (it cannot serve as a user interface or be debugged), and it runs within its caller's thread without interruption until it completes. Subroutine VIs therefore receive maximum CPU availability and are suitable for short-running algorithmic VIs that should not be interrupted.
Notes when setting VI priority:
- Raising one VI's priority does not reduce the total CPU time consumed by the application; it only increases that VI's chance to obtain CPU time relative to others.
- Higher priority does not guarantee immediate execution; it only increases the likelihood that the VI obtains the CPU ahead of lower-priority code.
On single-core systems, LabVIEW can create up to four threads per priority level for each execution system. On multicore systems, the number of available threads scales with the number of CPU cores. In practice, most applications do not use so many threads.
Be aware that creating too many threads can degrade performance because thread creation, teardown, and context switching consume resources.
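As a rough text-language analogy (not LabVIEW itself), the cost of oversubscribing threads can be illustrated by spawning far more threads than cores for trivial work; the result is still correct, but the creation, teardown, and context-switching time is pure overhead compared with a small, core-sized pool:

```python
import threading

counter = 0
lock = threading.Lock()

def trivial_task():
    # Work so small that thread creation and teardown dominate the cost.
    global counter
    with lock:
        counter += 1

# 200 threads for 200 tiny tasks: functionally correct, but far more
# scheduling overhead per task than a pool sized to the CPU core count.
threads = [threading.Thread(target=trivial_task) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200
```

The lesson carries over to LabVIEW: more threads than the hardware can run concurrently adds bookkeeping cost without adding throughput.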
5. Timing Structures
On multicore computers, the OS scheduling of threads to cores may not always be optimal for every workload. When precise control of core assignment is required for performance reasons, LabVIEW provides timing structures that support explicit processor allocation and high-precision timing. The timing structures include the Timed Loop structure and the Timed Sequence structure.
Timed structures include configuration terminals on the left side that can be set dynamically through wires at run time or configured statically via a configuration panel. The processor affinity option allows manual selection of which CPU core(s) the timed structure should run on. In practice, assign long-running or compute-intensive tasks to dedicated cores and balance other tasks across remaining cores to improve overall performance.
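Processor affinity itself is a LabVIEW dialog setting, but the underlying OS mechanism can be sketched in Python. This sketch uses `os.sched_setaffinity`, which exists on Linux but not on all platforms, hence the guard (the function name `pin_current_process` is our own):

```python
import os

def pin_current_process(core):
    """Pin the current process to a single CPU core, if the OS allows it.

    Returns the resulting affinity set, or None when the platform
    (e.g. macOS) does not expose affinity control.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {core})   # 0 means the current process
    return os.sched_getaffinity(0)

affinity = pin_current_process(0)  # core 0 always exists
print(affinity)
```

A Timed Loop's processor input plays the same role: it restricts where the structure's code may run, which is useful for reserving a core for a long-running, compute-intensive task.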
6. Parallel For Loops
When two loops run in parallel with no data dependency, LabVIEW assigns them to separate threads. For a single loop that is compute-intensive and slow, LabVIEW normally uses a single thread. To accelerate such loops, configure the loop to run iterations in parallel by enabling parallel loop iterations.
Right-click the For Loop border and choose Configure Iteration Parallelism to open the configuration dialog. Enable parallel iterations and set the number of generated parallel loop instances. A practical guideline is to set the instance count no higher than the number of CPU cores; more instances than cores typically offers no additional benefit.
After enabling parallel iterations and setting the instance count, execution time can drop significantly for compute-bound loops.
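The distribution pattern behind parallel iterations can be mimicked in a text language. The sketch below uses Python's `concurrent.futures` with one worker per core; `square` is a hypothetical stand-in for an independent iteration body:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def square(i):
    # Stand-in for an independent iteration body: the result for
    # iteration i depends only on i, never on another iteration.
    return i * i

n = 1000
workers = os.cpu_count() or 1  # guideline: instances <= number of cores

# map() hands independent iterations to the worker pool, analogous to
# LabVIEW dividing iterations among generated parallel loop instances.
with ThreadPoolExecutor(max_workers=workers) as pool:
    parallel = list(pool.map(square, range(n)))

print(parallel == [i * i for i in range(n)])  # True
```

Note this only demonstrates the distribution pattern, not the speedup: for compute-bound pure-Python work, the interpreter's global lock means processes rather than threads would be needed, whereas LabVIEW's parallel instances genuinely run on separate threads.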
Important considerations:
- Parallel loop configuration requires that iterations are independent. Do not use feedback nodes or shift registers that depend on prior iterations, because iteration dependencies prevent parallel execution and will cause configuration errors.
- Parallel loops do not allow probes or certain debugging techniques inside the loop. If debugging is necessary, enable the "Allow debugging" option in the parallel loop settings; this forces iterations to run sequentially and reduces performance to single-threaded levels. Remember to disable this option after debugging is complete.
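The independence requirement above can be made concrete: an iteration body that reads the previous iteration's result (the role a shift register plays) imposes a strict order and cannot be parallelized, while an independent body can run in any order. A small Python sketch of the distinction:

```python
# Independent iterations: the result for i depends only on i.
# These can execute in any order, so they parallelize safely.
independent = [i * 2 for i in range(5)]

# Loop-carried dependency: each value needs the previous one, like a
# shift register feeding a result back into the next iteration.
# This ordering requirement is what blocks parallel execution.
dependent = []
acc = 0
for i in range(5):
    acc = acc + i        # reads the previous iteration's acc
    dependent.append(acc)

print(independent)  # [0, 2, 4, 6, 8]
print(dependent)    # [0, 1, 3, 6, 10]
```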
7. Summary
- Understand the distinction between single-thread and multi-thread execution in LabVIEW.
- LabVIEW schedules parallelizable code automatically across threads and cores.
- Familiarize yourself with LabVIEW execution systems and how to assign VIs to them.
- Use VI priority settings with care; they affect relative CPU access but not total CPU usage.
- Use timed structures and processor affinity when fine-grained control over core assignment is necessary.
- Use parallel loop iterations to accelerate independent, compute-intensive loops, and respect the constraints on iteration independence and debugging.