In Focus | Multicore/multiprocessor design

Optimise software for multi-core processors

(continued from page 1)

...software execution, whereas AMP does not.

AMP
In the AMP model, a separate OS, or a separate copy of the same OS, runs on each core. Each OS manages a pre-configured portion of system memory and system I/O; this configuration typically takes place during system bootup (figure 1).

Figure 1: In AMP, each core has a separate OS that manages its own memory regions and I/O.

In AMP, no OS manages the whole system. Consequently, the system designer must assume the complex task of managing shared hardware resources. Also, if an application running on one core needs to access a resource (for instance, an Ethernet port) owned by another core, it must make the request through an application-level communications protocol. Such protocols must also be used to synchronise tasks running on different cores.

In effect, a quad-core AMP system appears as four separate systems, just like four processor cards in a multi-card set-up. That said, the quad-core system consumes less power and has a smaller footprint than four processor cards. It also allows applications running on different cores to communicate with one another via high-speed shared memory; in a system of discrete processors, the applications would need to use a slower interface such as Ethernet, PCI or VME.

In AMP, the system designer statically assigns a given application to an individual OS and CPU core. This static load balancing yields lower overall throughput than SMP, since it leads to scenarios where one or two cores become heavily utilised while the other cores sit idle. Additionally, an application and all of its threads can run on only one core, so adding more cores cannot make the application faster.

SMP
In SMP, a single instantiation of an OS manages all processor cores simultaneously and has access to all system memory and I/O (figure 2). The OS can then execute tasks or threads in true parallel fashion because it has all the cores at its disposal. This approach, combined with multi-threaded programming techniques, allows developers to speed up compute-intensive operations by using multiple cores simultaneously.

Developers can also employ semaphores, mutexes, barriers and other thread-level primitives to synchronise applications across cores. These primitives offer synchronisation with much lower overhead than the application-level protocols required by AMP.

From the developer's perspective, writing software for an SMP system is relatively straightforward, since the OS hides the actual number of hardware processing units from the application programmer. The OS also manages the complex task of allocating and sharing resources among the processor cores. As a result, existing applications can often run on an SMP system without source-code modifications.

Implementing parallelism
An SMP-capable RTOS running on an SMP system schedules threads dynamically on any core. Much like its uniprocessor equivalent, the RTOS's pre-emptive scheduler guarantees that, at any given time, the highest-priority ready threads are running. However, because each processor in an SMP system can run any thread, multiple threads can, in fact, run at the same time. If a higher-priority thread becomes ready to run, it will pre-empt the lowest-priority running thread.

To improve processing throughput and speed in an SMP system, developers can introduce threads to take advantage of the multiple cores or CPUs.
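As a rough illustration of this last point, the sketch below assumes a POSIX-style SMP OS with pthreads; the sysconf() core-count query is one common mechanism, not something prescribed by the article. It creates one worker thread per available core and uses a mutex, one of the thread-level primitives mentioned above, to protect a shared counter.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long work_done = 0;   /* shared state protected by the mutex */

static void *worker(void *arg)
{
    /* Each thread performs its share of the work; on SMP the scheduler
       is free to run these threads on any available core. */
    pthread_mutex_lock(&lock);
    work_done++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    /* Query the number of online cores (a common POSIX extension;
       a given RTOS may expose this differently). */
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncores < 1)
        ncores = 1;

    pthread_t *threads = malloc(ncores * sizeof(*threads));
    for (long i = 0; i < ncores; i++)
        pthread_create(&threads[i], NULL, worker, NULL);

    for (long i = 0; i < ncores; i++)
        pthread_join(threads[i], NULL);

    printf("%ld threads ran on up to %ld cores\n", work_done, ncores);
    free(threads);
    return 0;
}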
Non-threaded software will still benefit from the increased processing capacity offered by SMP, since the system can run multiple processes in parallel. However, to speed up an existing process, the developer must divide it into multiple parallel threads.

Figure 2: In SMP, a single instance of the OS manages all processor cores and can access all system memory and I/O.

float array[NUM_ROWS][NUM_COLUMNS];

void fill_array()
{
    int i, j;

    for ( i = 0; i < NUM_ROWS; i++ ) {
        for ( j = 0; j < NUM_COLUMNS; j++ ) {
            array[i][j] = ((i/2 * j) / 3.2) + 1.0;
        }
    }
}

Code listing 1: Single-threaded fill_array() function.

To achieve parallelism, the developer can employ a variety of design patterns:
• Worker threads: A main thread creates several threads to execute a workload in parallel (a sketch of this pattern follows below). The number of worker threads should equal the number of CPU cores so that each core can handle an equal share of the work. The main thread may monitor the worker threads and/or wait

(continued on page 9)
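The article's own multi-threaded treatment continues on page 9; purely as an illustrative sketch of the worker-thread pattern described above, the code below splits fill_array() by rows across a fixed number of pthreads workers. The NUM_WORKERS, NUM_ROWS and NUM_COLUMNS values are placeholders, not taken from the article.

#include <pthread.h>

#define NUM_ROWS     1000     /* placeholder sizes for illustration */
#define NUM_COLUMNS  1000
#define NUM_WORKERS  4        /* ideally equal to the number of CPU cores */

float array[NUM_ROWS][NUM_COLUMNS];

struct range { int first_row; int last_row; };   /* half-open: [first_row, last_row) */

static void *fill_rows(void *arg)
{
    struct range *r = arg;
    /* Each worker fills its own band of rows, so the threads never
       write to the same element and no locking is needed. */
    for (int i = r->first_row; i < r->last_row; i++)
        for (int j = 0; j < NUM_COLUMNS; j++)
            array[i][j] = ((i/2 * j) / 3.2) + 1.0;
    return NULL;
}

void fill_array_parallel(void)
{
    pthread_t workers[NUM_WORKERS];
    struct range ranges[NUM_WORKERS];
    int rows_per_worker = NUM_ROWS / NUM_WORKERS;

    for (int w = 0; w < NUM_WORKERS; w++) {
        ranges[w].first_row = w * rows_per_worker;
        ranges[w].last_row  = (w == NUM_WORKERS - 1) ? NUM_ROWS
                                                     : (w + 1) * rows_per_worker;
        pthread_create(&workers[w], NULL, fill_rows, &ranges[w]);
    }

    /* The main thread waits for all workers to finish. */
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(workers[w], NULL);
}

Because each worker writes a disjoint band of rows, no mutex is required here; the only synchronisation is the final pthread_join() in the main thread.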