The VxWorks network can also be used for communication among multiple processors on a common backplane. In this case, data is passed through shared memory. This is implemented in the form of a standard network driver so that all the higher levels of network components are fully functional over this shared-memory "network." Thus, all the high-level network facilities provided over an Ethernet are also available over the shared-memory network.
A multiprocessor backplane bus constitutes a separate Internet network, with its own network (or subnet) number. As usual, each processor (host) on the shared-memory network has a unique Internet address.
CAUTION: This is different if you are using proxy ARP. See 4.7 ARP and Proxy ARP for Transparent Subnets for additional information.
In the example shown in Figure 3-2, two CPUs are on a backplane. The shared-memory network's Internet address is 161.27.0.0. Each CPU on the shared-memory network has a unique Internet address, 161.27.0.1 for vx1 and 161.27.0.2 for vx2.
The routing capabilities of the VxWorks IP layer allow processors on a shared-memory network to reach systems on other networks over a gateway processor on the shared-memory network. The gateway processor has connections to both the shared-memory network and an external network. These connections allow higher-level protocols to transmit data between any processor on the shared-memory network and any other host or target system on the external network.
The low-level data transfer mechanism of the shared-memory network driver is also available directly. This allows alternative protocols to be run over the shared-memory network in addition to the standard ones.
The following features allow the VxWorks shared-memory network driver to send network packets from one processor on the backplane to another:
The processors on the backplane are each assigned a unique backplane processor number starting with 0. The assignment of numbers is arbitrary, except for processor 0, which by convention is the shared-memory network master, described in the next section.
The processor numbers are established by the parameters supplied to the boot ROMs when the system is booted. These parameters can be burned into ROM, set in the processor's NVRAM (if available), or entered manually.
One of the processors on the backplane is the shared-memory network master. The shared-memory network master has the following responsibilities:
No processor can use the shared-memory network until the master has initialized it. However, the master processor is not involved in the actual transmission of packets on the backplane between other processors. After the shared-memory pool is initialized, the processors, including the master, are all peers.
The configuration module target/src/config/usrNetwork.c sets the processor number of the master to 0. The master usually boots from the external (Ethernet) network directly. The master has two Internet addresses in the system: its Internet address on the Ethernet, and its address on the shared-memory network. See the reference entry for usrConfig.
The other processors on the backplane boot indirectly over the shared-memory network, using the master as the gateway. They have only an Internet address on the shared-memory network. These processors specify the shared-memory network interface, sm, as the boot device in the boot parameters.
The location of the shared-memory pool depends on the system configuration. In many situations, you want to allocate the shared memory at run-time rather than fixing its location at the time the system is built.
Of course, all processors on the shared-memory network must be able to access the shared-memory pool, even if its location is not assigned at compile time. The shared-memory anchor serves as a common point of reference for all processors. The anchor is a small data structure assigned at a fixed location at compile time. This location is usually in low memory of the dual-ported memory of one of the processors. Sometimes the anchor structure is stored at some fixed address on the separate memory board.
The anchor contains a pointer to the actual shared-memory pool. The master sets this pointer during initialization. The value of the pointer to the shared-memory pool is actually an offset from the anchor itself. Thus, the anchor and pool must be in the same address space so that the offset is valid for all processors.
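The anchor-to-pool calculation can be sketched in C. This is an illustrative sketch only: it assumes, per the layout described below (and shown in Figure 3-3), that the pool offset is stored in the fifth 4-byte word of the anchor and is relative to the anchor itself; the function name is hypothetical.

```c
#include <stdint.h>

#define SM_POOL_OFFSET_WORD 4   /* fifth 32-bit word of the anchor */

/* Return the local address of the shared-memory pool, given the
 * anchor address as seen by this processor.  Because the stored
 * value is an offset from the anchor, the result is correct in each
 * processor's own address space. */
static volatile uint32_t *smPoolAddr (volatile uint32_t *anchor)
    {
    uint32_t offset = anchor[SM_POOL_OFFSET_WORD];   /* e.g. 0x170 */
    return (volatile uint32_t *) ((char *) anchor + offset);
    }
```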
The backplane anchor address is established by configuration constants or by boot parameters. For the shared-memory network master, the anchor address is assigned in the master's configuration at the time the system image is built. The shared memory anchor address, as seen by the master, is also set during configuration. The relevant configuration macro is SM_ANCHOR_ADRS.
For the other processors on the shared-memory network, a default anchor address can also be assigned during configuration in the same way. However, this requires burning boot ROMs with that configuration, because the other processors must, at first, boot from the shared-memory network. For this reason, the anchor address can also be specified in the boot parameters if the shared-memory network is the boot device. To do this, enter the address (separated by an equal sign, "=") after the shared-memory network boot device specifier sm. For example, the following line sets the anchor address to 0x800000:
boot device: sm=0x800000
In this case, this is the address of the anchor as seen by the processor being booted.
The processors on the shared-memory network cannot communicate over that network until the shared-memory pool initialization is finished. To let the other processors know when the backplane is "alive," the master maintains a shared-memory heartbeat. This heartbeat is a counter that is incremented by the master once per second. Processors on the shared-memory network determine that the shared-memory network is alive by watching the heartbeat for a few seconds.
The shared-memory heartbeat is located in the first 4-byte word of the shared-memory pool. The offset of the shared-memory pool is the fifth 4-byte word in the anchor, as shown in Figure 3-3.
Thus, if the anchor were located at 0x800000:
[VxWorks Boot]: d 0x800000
800000:  8765 4321 0000 0001 0000 0000 0000 002c   *.eC!...........,*
800010:  0000 0170 0000 0000 0000 0000 0000 0000   *...p............*
800020:  0000 0000 0000 0000 0000 0000 0000 0000   *................*
The offset to the shared-memory pool is 0x170. To view the start of the shared-memory pool, display 0x800170:
[VxWorks Boot]: d 0x800170
800170:  0000 0050 0000 0000 0000 0bfc 0000 0350   *...P...........P*
In this example, the value of the shared-memory heartbeat is 0x50. Examine this location again to determine whether the network is alive. If the value has changed, the network is alive.
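The alive-check described above can be sketched as a small C routine. This is a hypothetical sketch: the function names are not part of the driver's API, and the delay routine is passed in as a parameter (standing in for something like a one-second task delay) so the logic stays self-contained.

```c
#include <stdint.h>

/* Decide whether the shared-memory network is alive by sampling the
 * heartbeat counter (the first 4-byte word of the pool) twice.  If the
 * value changes between samples, the master is incrementing it. */
static int smNetIsAlive (volatile uint32_t *pool, void (*delayFn)(void))
    {
    uint32_t first = pool[0];   /* heartbeat counter */
    delayFn ();                 /* wait long enough for an increment */
    return pool[0] != first;    /* changed => network is alive */
    }
```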
As mentioned previously, shared memory is either assigned a fixed location at compile time or allocated dynamically at run-time. The location is determined by the shared-memory address set during configuration (configuration constant: SM_MEM_ADRS). This constant can be specified either as an absolute address or as a value requesting run-time allocation from the master's memory.
The size of the shared-memory pool is set during configuration. The relevant configuration macro is SM_MEM_SIZE.
The size required for the shared-memory pool depends on the number of processors and the expected traffic. There is less than 2KB of overhead for data structures. After that, the shared-memory pool is divided into 2KB packets. Thus, the maximum number of packets available on the backplane network is (poolsize - 2KB) / 2KB. A reasonable minimum is 64KB. A configuration with a large number of processors on one backplane and many simultaneous connections can require as much as 512KB. Having too small a pool slows down communications.
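The sizing rule above can be captured in a short C helper. This is an illustrative sketch of the arithmetic in the text, not a driver routine; the function and macro names are hypothetical.

```c
/* After roughly 2KB of overhead for data structures, the pool is
 * divided into 2KB packets, so the maximum number of packets is
 * (poolsize - 2KB) / 2KB. */
#define SM_PKT_SIZE   2048u
#define SM_OVERHEAD   2048u

static unsigned smMaxPackets (unsigned poolSize)
    {
    return (poolSize - SM_OVERHEAD) / SM_PKT_SIZE;
    }
```

For example, the recommended 64KB minimum yields 31 packets, and a 512KB pool yields 255.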
The VxWorks configuration includes a conditional-compilation constant that makes it easy to select between two typical configurations: an off-board shared-memory pool and an on-board shared-memory pool. The relevant configuration macro is SM_OFF_BOARD.
A typical off-board configuration establishes the backplane anchor and memory pool at an absolute address of 0x800000 on a separate memory board with a pool size of 512KB.
The on-board configuration establishes the shared-memory anchor at a low address in the master processor's dual-ported memory. The shared-memory pool size is set to 64KB allocated from the master's own memory at run time.
NOTE: These configurations are provided as examples. Change them to suit your needs.
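The two typical configurations might look like the following fragment. This is illustrative only: the actual definitions belong in the BSP's configuration (for example, config.h); the off-board values are the example ones given above, and the assumption here is that leaving the address unset (NONE) requests run-time allocation from the master's own memory.

```c
#define SM_OFF_BOARD    TRUE

#if (SM_OFF_BOARD == TRUE)
#define SM_ANCHOR_ADRS  ((char *) 0x800000)   /* separate memory board */
#define SM_MEM_ADRS     ((char *) 0x800000)   /* pool on that board */
#define SM_MEM_SIZE     0x80000               /* 512KB pool */
#else
#define SM_MEM_ADRS     NONE                  /* allocate from master's RAM */
#define SM_MEM_SIZE     0x10000               /* 64KB pool */
#endif
```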
Because the shared-memory pool is accessed by all processors on the backplane, that memory must be configured as non-cacheable. On some systems, this requires that you change the sysPhysMemDesc[ ] table in sysLib.c. Specifically, any board whose MMU is enabled (the default) must disable caching for off-board memory. Fortunately, if the VME address space used for the shared-memory pool already has a virtual-to-physical mapping in the table, the memory is already marked non-cacheable. Otherwise, you must add the appropriate mapping (with caching disabled).
For the MC680x0 family of processors, virtual addresses must equal physical addresses. For the 68030, if the MMU is off, caching must be turned off globally; see the reference entry for cacheLib. Note that the default for all BSPs is to have their VME bus access set to non-cacheable in sysPhysMemDesc[ ]. See VxWorks Programmer's Guide: Virtual Memory Interface.
Unless some form of mutual exclusion is provided, multiple processors can simultaneously access certain critical data structures of the shared-memory pool and cause fatal errors. The VxWorks shared-memory network uses an indivisible test-and-set instruction to obtain exclusive use of a shared-memory data structure. This translates into a read-modify-write (RMW) cycle on the backplane bus.
The selected shared memory must support RMW cycles on the bus and guarantee the indivisibility of such cycles. This is especially problematic if the memory is dual-ported, because the memory must then also lock out one port during an RMW cycle on the other.
Some processors do not support RMW indivisibly in hardware, but do have software hooks to provide the capability. For example, some processor boards have a flag that can be set to prevent the board from releasing the backplane bus, after it is acquired, until that flag is cleared. You can implement these techniques for a processor in the sysBusTas( ) routine of the system-dependent library sysLib.c. The shared-memory network driver calls this routine to set up mutual exclusion on shared-memory data structures.
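A software test-and-set of the kind described above might be sketched as follows. This is a hypothetical sketch with a simplified signature (the real sysBusTas( ) in sysLib.c returns BOOL); BUS_LOCK( )/BUS_UNLOCK( ) stand in for the board-specific "hold the backplane bus" flag and are no-op placeholders here so the logic is self-contained.

```c
#define BUS_LOCK()     /* board-specific: keep the bus once acquired */
#define BUS_UNLOCK()   /* board-specific: release the bus */

/* Return 1 if the lock was obtained, 0 if it was already set. */
int sysBusTas (volatile char *adrs)
    {
    int gotIt;

    BUS_LOCK ();            /* make the read-modify-write indivisible */
    gotIt = (*adrs == 0);   /* test */
    if (gotIt)
        *adrs = 1;          /* set */
    BUS_UNLOCK ();
    return gotIt;
    }
```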
CAUTION: Configure the shared memory test-and-set type for VxWorks (configuration constant: SM_TAS_TYPE) to either SM_TAS_SOFT or SM_TAS_HARD. If even one processor on the backplane lacks hardware test and set, all processors in the backplane must use the software test and set (SM_TAS_SOFT).
Each processor on the backplane has a single input queue for packets received from other processors. There are three methods processors use to determine when to examine their input queues: polling, bus interrupts, and mailbox interrupts.
When using polling, the processor examines its input queue at fixed intervals. When using interrupts, the sending processor notifies the receiving processor that its input queue contains packets. Interrupt-driven communication is much more efficient than polling.
However, most backplane buses have a limited number of interrupt lines available on the backplane (for example, the VMEbus has seven). Although a processor can use one of these interrupt lines as its input interrupt, each processor must have its own interrupt line. In addition, not all processor boards are capable of generating bus interrupts. For these reasons, bus interrupts cannot always be used.
As an alternative interrupt mechanism, you can use mailbox interrupts, also called location monitors because they monitor the access to specific memory locations. A mailbox interrupt specifies a bus address that, when written to or read from, causes a specific interrupt on the processor board. Each board can be set, with hardware jumpers or software registers, to use a different address for its mailbox interrupt.
To generate a mailbox interrupt, a processor writes to that location. There is effectively no limit to the number of processors that can use mailbox interrupts, because each interrupt requires only a single address on the bus. Most modern processor boards include some kind of mailbox interrupt.
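The sender's side of this mechanism is just a write. The following is a hypothetical sketch (the function name is not a driver routine); the mailbox address would come from the destination processor's entry in the shared-memory data structures.

```c
#include <stdint.h>

/* Notify a destination processor by writing to its mailbox address.
 * The bus access itself, not the value written, raises the interrupt
 * on the destination board. */
static void smMailboxNotify (volatile uint8_t *mailboxAdrs)
    {
    *mailboxAdrs = 1;   /* any write triggers the mailbox interrupt */
    }
```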
Each processor must tell the other processors which notification method it uses. Each processor enters its interrupt type and up to three related parameters in the shared-memory data structures. This information is used by the shared-memory network drivers of the other processors when sending packets.
The interrupt type and parameters for each processor are specified during configuration. The relevant configuration macro is SM_INT_TYPE (also SM_INT_ARGn). The possible values are defined in the header file smNetLib.h. Table 3-5 summarizes the available interrupt types and parameters.
Sequential addressing is a method of assigning IP addresses to processors on the network based on their processor number. Addresses are assigned in ascending order, with the master having the lowest address, as shown in Figure 3-4.
Using sequential addressing, a target on the shared-memory network can determine its own IP address. Only the master's IP address need be entered manually. All other processors on the backplane determine their IP address by adding their processor number to the starting IP address.
Sequential addressing provides a more uniform environment for the shared-memory network. Because a target can determine both its own Internet address and the Internet addresses of all other targets on the shared-memory network, hardware-to-IP translation (ARP) is unnecessary over the VxWorks shared-memory network, and is therefore eliminated.
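The rule itself is simple arithmetic, sketched below. This is an illustrative sketch, not a driver routine; addresses are treated as 32-bit host-order values for clarity.

```c
#include <stdint.h>

/* A processor's backplane address is the starting (master's) address
 * plus its processor number; processor 0 is the master. */
static uint32_t smSeqIpAddr (uint32_t masterAddr, unsigned procNum)
    {
    return masterAddr + procNum;
    }
```

For example, with a master address of 150.12.17.1, processor 1 derives 150.12.17.2.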
When setting up a shared-memory network with sequential addressing, choose a block of IP addresses and assign the lowest address in this block to the master.
When the shared-memory network driver is initialized by the master with smNetInit( ), the starting IP address is passed as a parameter and stored in the shared-memory pool.
Each target sets its interface address with ifAddrSet( ). This routine checks that the assigned address matches the expected address for its location on the backplane, based on the processor number from the boot parameters. If any other address is specified, the operation fails. To determine the starting address for an active shared-memory network, use smNetShow( ).
In the following example, the master's IP address is 150.12.17.1.
-> smNetShow
value = 0 = 0x0
The following output displays on the standard output device:
Anchor Local Addr: 0x800000, SOFT TAS
Sequential addressing enabled.  Master address: 150.12.17.1
heartbeat = 453, header at 0x800170, free pkts = 235.

cpu   int type     arg1       arg2       arg3       queued pkts
----- -----------  ---------  ---------  ---------  -----------
0     mbox-1       0x2d       0x803f     0x10       0
1     mbox-1       0x2d       0x813f     0x10       0

input packets = 366   output packets = 376
input errors = 0      output errors = 1     collisions = 0
With sequential addressing, when booting a slave, the backplane IP address and gateway IP boot parameters are no longer necessary. The default gateway address is the address of the master. Another address can be specified if this is not the desired configuration.
[VxWorks Boot]: p
boot device      : sm=0x800000
processor number : 1
file name        : /folk/fred/wind/target/config/bspname/vxWorks
host inet (h)    : 150.12.1.159
user (u)         : darger
flags (f)        : 0x0

[VxWorks Boot]: @
boot device      : sm=0x800000
processor number : 1
file name        : /folk/fred/wind/target/config/bspname/vxWorks
host inet (h)    : 150.12.1.159
user (u)         : darger
flags (f)        : 0x0

Backplane anchor at 0x800000... Attaching network interface sm0... done.
Backplane inet address: 150.12.17.2
Subnet Mask: 0xffffff00
Gateway inet address: 150.12.17.1
Attaching network interface lo0... done.
Loading... 364512 + 27976 + 20128
Starting at 0x1000...
Sequential addressing can be enabled during configuration. The relevant configuration macro is INCLUDE_SM_SEQ_ADDR.
For UNIX, configuring the host to support a shared-memory network uses the same procedures outlined earlier in this chapter for other types of networks. In particular, the host must know the names and addresses of the target systems, permit remote access from them, and have a route to the shared-memory network through the gateway. The example that follows illustrates each of these steps.
To illustrate the previous discussion, this section presents an example of a simple shared-memory network. The network contains a single host and two target processors on a single backplane. In addition to the target processors, the backplane includes a separate memory board for the shared-memory pool, and an Ethernet controller board. The additional memory board is not essential, but provides a configuration that is easier to describe.
Figure 3-5 illustrates the overall configuration. The Ethernet network is assigned network number 150.12.0.0, and the shared-memory network is assigned 161.27.0.0. The host h1 is assigned the Internet address 150.12.0.1.
The master is vx1, and functions as the gateway between the Ethernet and shared-memory networks. It therefore has two Internet addresses: 150.12.0.2 on the Ethernet network and 161.27.0.1 on the shared-memory network.
The other backplane processor is vx2; it is assigned the shared-memory network address 161.27.0.2. It has no address on the Ethernet because it is not directly connected to that network. However, it can communicate with h1 over the shared-memory network, using vx1 as a gateway. Of course, all gateway use is handled by the IP layer and is completely transparent to the user. Table 3-6 shows the example address assignments.
Table 3-6: Example Address Assignments

    System    Ethernet Address    Backplane Address
    ------    ----------------    -----------------
    h1        150.12.0.1          -
    vx1       150.12.0.2          161.27.0.1
    vx2       -                   161.27.0.2
To configure the UNIX system for our example, the /etc/hosts file must contain the Internet address and name of each system. Note that the backplane master has two entries. The second entry, vx1.sm, is not actually necessary, because the host system never accesses that system with that address--but it is useful to include it in the file to ensure that the address is not used for some other purpose.
The entries in /etc/hosts are as follows:
150.12.0.1   h1
150.12.0.2   vx1
161.27.0.1   vx1.sm
161.27.0.2   vx2
To allow remote access from the target systems to the UNIX host, the .rhosts file in your home directory, or the file /etc/hosts.equiv, must contain the target systems' names:
vx1
vx2
To inform the UNIX system of the existence of the Ethernet-to-shared-memory network gateway, make sure the following line is in the file /etc/gateways at the time the route daemon routed is started.
net 161.27.0.0 gateway 150.12.0.2 metric 1 passive
Alternatively, you can add the route manually (effective until the next reboot) with the following UNIX command:
% route add net 161.27.0.0 150.12.0.2 1
The target system's configurations include the parameters shown in Table 3-7. The backplane master, vx1, uses the following boot parameters:
boot device          : gn
processor number     : 0
host name            : h1
file name            : /usr/wind/target/config/bspname/vxWorks
inet on ethernet (e) : 150.12.0.2
inet on backplane (b): 161.27.0.1
host inet (h)        : 150.12.0.1
gateway inet (g)     :
user (u)             : darger
ftp password (pw) (blank=use rsh) :
flags (f)            : 0
NOTE: For more information on boot devices, see the Tornado User's Guide: Getting Started. To determine which boot device to use, see the BSP's documentation.
The other target, vx2, has the following boot parameters:[1]
boot device          : sm=0x800000
processor number     : 1
host name            : h1
file name            : /usr/wind/target/config/bspname/vxWorks
inet on ethernet (e) :
inet on backplane (b): 161.27.0.2
host inet (h)        : 150.12.0.1
gateway inet (g)     : 161.27.0.1
user (u)             : darger
ftp password (pw) (blank=use rsh):
flags (f)            : 0
Getting a shared-memory network configured for the first time can be tricky. If you have trouble, here are a few troubleshooting procedures you can use; take one step at a time.

First, confirm that the master has initialized the backplane and attached the shared-memory interface. On the master's console, look for a message of the form:

    Backplane anchor at anchor-addrs... Attaching network interface sm0... done.

Next, check the state of the shared-memory network from the master's shell:

    -> smNetShow ["interface"] [, 1]
    value = 0 = 0x0

Finally, boot each slave over the backplane, specifying sm and the anchor address as the boot device:

    boot device: sm=0x800000
[1] The parameters inet on backplane (b) and gateway inet (g) are optional with sequential addressing.