3.1  XTI/TLI Programming

The X/Open Transport Interface (XTI) and Transport Level Interface (TLI) define an independent transport service interface that allows multiple applications to communicate at the transport level of the transport provider. XTI is a newer and enhanced superset of TLI. (In this supplement, the two interfaces are generally referred to together, XTI/TLI, since the discussion applies to both.) XTI/TLI provides generalized routines that support network communications involving many possible transport protocols. XTI/TLI routines provide an interface between an application program and the protocol software, as shown in Figure 2.

XTI and TLI are implemented as interfaces to the STREAMS transport provider protocol. They are media- and protocol-independent, and allow applications to run over any transport provider that complies with the Transport Provider Interface standard published in the Transport Provider Interface Specification (UNIX International). The XTI and TLI specifications describe transport characteristics supported by a wide range of transport-layer protocols.

The functionality provided by the transport provider includes:

  • Connection establishment.

  • State change support.

  • Event handling.

  • Data transfer.

  • Options manipulation.

All transport protocols support these characteristics.

XTI and TLI provide transport users with considerable independence from the underlying transport provider. For example, XTI/TLI applications written for the TCP transport protocol can be easily adapted to work over the OSI transport protocol. XTI and TLI help transport users interact with different transport providers as if there were only small differences between providers, such as their protocol addresses.

For a list of the differences between XTI and TLI, see §3.1.8 Examining the Relationship Between XTI and TLI.

3.1.1  Transport Endpoints

A transport endpoint specifies a communication path between a transport user and a transport provider, as shown in Figure 3. When a user opens a transport provider by invoking t_open( ), a file descriptor is returned that identifies the transport endpoint. A transport provider is the transport protocol that provides the services of the transport layer. All requests to the transport provider must pass through the transport endpoint. To be active, a transport endpoint must have a transport address associated with it. This association is established by the t_bind( ) call. A transport connection is established by routines that create an association between two active endpoints.
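The following fragment is a minimal sketch of this sequence; the device name /dev/tpit is illustrative and error handling is abbreviated. The t_open( ) call returns the file descriptor that identifies the endpoint, and t_bind( ) associates an address with it.

{
...
int fd;                 /* transport endpoint descriptor */
struct t_info info;     /* service characteristics returned by t_open() */

/* open the transport provider; the returned descriptor identifies the endpoint */
if ((fd = t_open("/dev/tpit", O_RDWR, &info)) < 0)
    t_error("t_open failed");

/* bind an address to the endpoint; a NULL request lets the provider
 * choose the address
 */
else if (t_bind(fd, NULL, NULL) < 0)
    t_error("t_bind failed");
...
}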

3.1.2  Transport Providers

The transport provider is a protocol module that provides transport services. It implements the transport-level protocol used for communication. Multiple transport protocols can co-exist in the STREAMS environment and function simultaneously. The transport provider identifier passed to t_open( ) determines which transport provider is used. Applications that manage multiple transport providers must call t_open( ) for each provider and listen for connect indications on each of the associated file descriptors.
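As a sketch of this pattern, the following fragment opens an endpoint on each of two providers and then waits for connect indications on both descriptors; the device names /dev/tcp and /dev/xns are only examples of TPI-compliant providers.

{
...
int tcpFd;    /* endpoint for the TCP transport provider */
int xnsFd;    /* endpoint for the XNS transport provider */

if ((tcpFd = t_open("/dev/tcp", O_RDWR, NULL)) < 0)
    t_error("t_open of /dev/tcp failed");
if ((xnsFd = t_open("/dev/xns", O_RDWR, NULL)) < 0)
    t_error("t_open of /dev/xns failed");

/* bind a listening address to each endpoint with t_bind(), then wait
 * for connect indications on each descriptor with t_listen()
 */
...
}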

3.1.3  Transport User

The transport user is an application program that accesses the services of the transport provider by issuing the appropriate service requests, as illustrated in Figure 4. One example of a transport user operation is a request to transfer data over a connection. In turn, the transport provider notifies the user of various events in response to requests, such as the arrival of data on a connection.
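For example, a transport user can ask the provider which event, if any, is outstanding on an endpoint. The following minimal sketch assumes fd is an open transport endpoint:

{
...
int event;

/* t_look() returns the current event on the endpoint, or 0 if there is none */
event = t_look(fd);

if (event == T_DATA)
    {
    /* data has arrived; retrieve it with t_rcv() */
    }
else if (event == T_DISCONNECT)
    {
    /* the connection was aborted; retrieve the reason with t_rcvdis() */
    }
...
}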

3.1.4  Transport Connection Establishment

The connection establishment phase enables two transport users to create a connection between them, as illustrated in Figure 5.

An example of the connection establishment phase is a client-server relationship between two transport users. The server typically advertises some service to a group of users and then listens for requests for its service. As clients require the service, they attempt to connect themselves to the server using the server's advertised transport address. The t_connect( ) routine initiates the connect request. One argument to t_connect( ) is the transport address that identifies the server the client wishes to access. The server is notified of each incoming request while waiting in the t_listen( ) routine. The server can then call t_accept( ) to accept the connection or can invoke t_snddis( ) to reject the request. If the request is accepted, the transport connection is established. If it is rejected, t_snddis( ) notifies the client of the rejection. For more information about client-server programming, see §3.1.11 XTI Client-Server Applications.
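The following fragment is a minimal sketch of the client side of this exchange; it assumes fd is an endpoint that has already been opened and bound, and SRV_ADDR is the server's advertised address, as in Example 1, which shows the complete client and server.

{
...
struct t_call *sndCall;

/* allocate a t_call structure sized for the provider's addresses */
if ((sndCall = (struct t_call *) t_alloc(fd, T_CALL, T_ADDR)) == NULL)
    t_error("t_alloc failed");
else
    {
    /* fill in the server's advertised transport address, then request
     * the connection
     */
    sndCall->addr.len = sizeof(int);
    *(int *) sndCall->addr.buf = SRV_ADDR;

    if (t_connect(fd, sndCall, NULL) < 0)
        t_error("t_connect failed");
    }
...
}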

3.1.5  Transport Provider Interface (TPI)

The Transport Provider Interface (TPI) is a STREAMS service interface designed to implement the OSI transport layer service interface. A user communicates with a transport provider over a full-duplex path known as a stream, as shown in Figure 6. This stream provides a mechanism by which messages can be passed between the transport user and the transport provider. The XTI/TLI library works in close cooperation with timod, a STREAMS module that serves as a medium for TPI primitives passing from the library to the transport provider.

There are four kinds of primitives defined by the TPI:

  • Local management primitives

  • Connection-establishment primitives

  • Data-transfer primitives

  • Connection-release primitives.

The primitives are implemented as M_PROTO and M_PCPROTO STREAMS messages with the first four bytes in the data buffer indicating the primitive.

XTI and TLI are implemented using TPI primitives. The XTI/TLI library routines map to specific TPI primitives. For example, t_bind( ) translates to a TPI message primitive T_BIND_REQ, and expects a T_BIND_ACK as a response. The t_connect( ) routine translates to a TPI primitive T_CONN_REQ.
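For illustration only, the following sketch shows how a primitive can be identified from the first four bytes of the control part of such a message. It bypasses the XTI/TLI library and reads from the stream directly with getmsg( ); it assumes the SVR4-style definitions in sys/tihdr.h are available and that fd is a stream to the transport provider.

{
...
struct strbuf ctl;       /* receives the control part of the message */
long ctlBuf[32];
int flags = 0;

ctl.maxlen = sizeof(ctlBuf);
ctl.buf = (char *) ctlBuf;

/* for control-only messages such as T_OK_ACK or T_CONN_CON, the control
 * part begins with the four-byte primitive type
 */
if (getmsg(fd, &ctl, NULL, &flags) >= 0 && ctl.len >= (int) sizeof(long))
    printf("TPI primitive = %ld\n", ((union T_primitives *) ctlBuf)->type);
...
}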

Figure 7 shows an application making a t_connect( ) request and the primitives involved in establishing the connection. The client transport user sends a connect request to the transport provider by using the T_CONN_REQ primitive. If the transport provider encounters an error, it sends a T_ERROR_ACK primitive to the client program; otherwise, it responds to the transport user with a T_OK_ACK.

At the server end, the client's connect request is delivered to the transport user as a T_CONN_IND primitive. If the server wishes to accept the connection request, it sends a T_CONN_RES primitive to the transport provider. This message is then sent to the client transport provider, which sends a connect confirmation, T_CONN_CON, to the client program.

For a complete description of the STREAMS-based Transport Provider Interface, see STREAMS Modules and Drivers: UNIX SVR4.2 (UNIX Press) and UNIX System V Network Programming, by Stephen Rago (Addison-Wesley).

3.1.6  WindNet STREAMS XTI/TLI Library Routines

WindNet STREAMS provides an XTI/TLI library containing the XTI-compatible and TLI-compatible routines listed in Table 1. Configure the library into VxWorks by defining INCLUDE_STREAMS_TLI in configAll.h. For information about configuring WindNet STREAMS, see §6.2 WindNet STREAMS Configuration. For more specific information about these routines, see the UNIX System V Release 4 Programmer's Guide: Networking Interfaces.

Table 1.   WindNet STREAMS XTI/TLI Routines


Routine           Description

t_accept( )       Accept a connection on an endpoint.
t_alloc( )        Allocate a library structure.
t_bind( )         Bind an address to a transport endpoint.
t_close( )        Close a transport endpoint.
t_connect( )      Establish a connection with another transport user.
t_error( )        Produce an error message.
t_free( )         Free a library structure.
t_getinfo( )      Get protocol-specific service information.
t_getstate( )     Get the current state.
t_listen( )       Listen for a connect request.
t_look( )         Look at the current event on a transport endpoint.
t_open( )         Establish a transport endpoint in XTI mode.
tli_open( )       Establish a transport endpoint in TLI mode.
t_optmgmt( )      Manage options for a transport endpoint.
t_rcv( )          Receive data or expedited data sent over a connection.
t_rcvdis( )       Retrieve information from a disconnect.
t_rcvrel( )       Acknowledge receipt of an orderly release indication.
t_rcvudata( )     Receive a data unit.
t_rcvuderr( )     Receive a unit data error indication.
t_snd( )          Send data or expedited data over a connection.
t_snddis( )       Send a user-initiated disconnect request.
t_sndrel( )       Initiate an orderly release.
t_sndudata( )     Send a data unit.
t_sync( )         Synchronize a transport library.
t_unbind( )       Disable a transport endpoint.

3.1.7  Run-time Configuration of XTI/TLI

You can configure your application at run-time to use either XTI or TLI by choosing t_open( ) or tli_open( ), respectively. Calls for connection establishment, event handling, data transfer, and connection release are the same in both XTI and TLI. For additional information on XTI/TLI, consult the XTI/TLI references listed in Appendix A, WindNet STREAMS Reference List.

To configure XTI semantics into an application at run-time, use t_open( ), as shown in the following code fragment. The /dev/tcp in the example specifies a TPI-compliant transport provider.

{
...
t_open("/dev/tcp", O_RDWR, &info);
...
} 

Applications that require TLI semantics must make a tli_open( ) call instead. To configure TLI semantics, use the following code fragment. The /dev/xns in the example also specifies a TPI-compliant transport provider.

{
...
tli_open("/dev/xns", O_RDWR, &info);
...
} 

3.1.8  Examining the Relationship Between XTI and TLI

XTI is a refinement of TLI, the older of the two interfaces. The following features are XTI extensions of TLI:

  • Additional features have been introduced to t_snd( ), t_rcv( ), t_sndrel( ), and t_rcvrel( ) to allow fuller use of transport providers and to address service and protocol problems. For example, when t_snd( ), t_rcv( ), or t_sndrel( ) is called out of state in TLI, it returns -1 and does not set the TLI errno variable, t_errno; in XTI, the call checks the state, returns ERROR, and sets t_errno to TOUTSTATE if the endpoint is not in the data-transfer state. (A minimal sketch of this behavior follows the list below.)

  • XTI has modified the packing of the t_opthdr structure as follows:

struct t_opthdr 
    {
    unsigned long len;
    unsigned long level;
    unsigned long name; 
    unsigned long status;
    };
In TLI, by contrast, the t_opthdr structure is as follows:

struct t_opthdr 
    {
    long level;
    long name;
    long len;
    };
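
The following fragment is a minimal sketch of the out-of-state behavior described in the first item above; it assumes fd is a transport endpoint that has not yet reached the data-transfer state, and that buf and nBytes describe the data to send.

{
...
/* in XTI, t_snd() called out of state returns ERROR and sets t_errno */
if (t_snd(fd, buf, nBytes, 0) < 0 && t_errno == TOUTSTATE)
    t_error("t_snd called before the endpoint reached the data-transfer state");
...
}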

3.1.9  Differences Between the WindNet STREAMS and UNIX XTI/TLI Libraries

The WindNet STREAMS XTI/TLI library is source-compatible with the UNIX XTI/TLI library; however, the library differs from its UNIX counterpart in the following way: WindNet STREAMS provides the routine tliUserInit( ) to convert the global variable t_errno into a task variable. This conversion is mandatory in order to preserve the value of t_errno on a task-by-task basis. Without calling tliUserInit( ), you cannot be certain of the reason for the failure of an XTI/TLI service; therefore, invoke the routine before any XTI/TLI services are used.

The following code fragment demonstrates the use of tliUserInit( ):

{
extern int t_errno;
tliUserInit();
...
t_open(...);
...
}

3.1.10  Read/Write Interface

The XTI/TLI interface does not directly support a read/write interface to the transport provider; however, the WindNet STREAMS XTI/TLI library supports one through a STREAMS module called tirdwr. An application requiring the read/write interface pushes the tirdwr module onto the stream associated with the transport endpoint on which the connection was established. Use tirdwr as follows:

{
...
ioctl(fd, I_PUSH, "tirdwr");
read(fd,...);
write(fd,...);
...
}

3.1.11  XTI Client-Server Applications

The following example illustrates a client-server XTI application and demonstrates the use of various XTI services. The client establishes a connection with the server, receives data from the server, and writes it to standard output. The server establishes a connection with the client and transfers data to the client.



NOTE: The transport provider, /dev/tpit, mentioned in Example 1 is not supported by Wind River Systems.

Example 1  XTI Client-Server Program

Client Program
#include "stdio.h"
#include "tiuser.h"
#include "fcntl.h"
#include "stropts.h"
#include "vxWorks.h"

#define SRV_ADDR 1
void xtiClient( void )
    {
    int xFd;                 /* XTI endpoint descriptor */
    int nBytes;              /* Number of bytes returned by t_rcv */
    int flags = 0;           /* Flags passed to the t_rcv() call */
    char buf[30];            /* Buffer to store data in t_rcv */
    struct t_call *sndCall;  /* sndCall parameter stores server's address */
    extern int t_errno;      /* XTI errno variable */

    /* The tliUserInit() call is used to make the TLI variable "t_errno"
     * a task variable. This call must be made before any other XTI/TLI
     * services are used. 
     */
     */
    
    tliUserInit();
    
    /* Here the t_open call opens the transport provider named "tpit". 
     * The first argument to t_open call is a transport provider 
     * name. Here the /dev/tpit is a STREAMS clone device node that
     * identifies a connection-based transport protocol. The third
     * argument may be used to return the service characteristics of the 
     * transport provider to the user. 
     */ 
    
    if(( xFd = t_open("/dev/tpit", O_RDWR, NULL)) < 0) 
        {
        t_error("t_open failed\n");
        return;
        } 
    
    /* The t_bind call is used to bind a transport endpoint to a port.
     * The first argument identifies the transport endpoint, the second 
     * argument describes the address the user would like to bind to 
     * the endpoint and the third argument is set on return from t_bind to
     * specify the address that the provider bound. Normally a client does 
     * not care what its address is. A NULL second argument directs the 
     * transport provider to choose an address for the user. 
     */ 

    if(t_bind(xFd,NULL,NULL) <0)
        {
        t_error("t_bind failed\n");
        return;
        } 

    if((sndCall = (struct t_call *)t_alloc(xFd,T_CALL,T_ADDR)) == NULL) 
        {
        t_error("t_alloc failed\n");
        return;
        } 

    sndCall->addr.len = sizeof(int);
    *(int *)sndCall->addr.buf = SRV_ADDR;

    /* The t_connect call establishes the connection with the server. The
     * first argument to t_connect identifies the transport endpoint 
     * through which the connection is established; the second argument 
     * identifies the destination server. This argument is a pointer to a 
     * t_call structure, and the destination server address is the addr
     * field of the t_call structure. 
     */

    if(t_connect(xFd, sndCall, NULL) < 0) 
        {
        t_error("t_connect failed for xFd\n");
        return;
        } 

    /* The client continuously calls t_rcv to process incoming data. If no
     * data is available, t_rcv blocks until data arrives.
     */ 

    while (1) 
        {
        if((nBytes = t_rcv(xFd, buf, sizeof(buf), &flags)) != ERROR)
            {
            printf("Server wrote %.*s\n", nBytes, buf);
            continue;
            } 

        /* The client processes the connection release. When it receives an
         * orderly release indication, it proceeds with the release procedure
         * by calling t_rcvrel to process the indication and t_sndrel to
         * inform the server that it is ready to release the connection.
         */ 

        if((t_errno == TLOOK) && (t_look(xFd) == T_ORDREL)) 
            {
            if(t_rcvrel(xFd) < 0) 
                {
                t_error("t_rcvrel failed");
                return;
                } 
            if(t_sndrel(xFd) < 0) 
                {
                t_error("t_sndrel failed");
                return;
                } 
            return;
            }
        t_error("t_rcv failed");
        return;
        }
    }
Server Program
#include "tiuser.h"
#include "stropts.h"
#include "fcntl.h"
#include "stdio.h"
#include "vxWorks.h"
#include "taskLib.h"

#define DISCONNECT (-1)    /* returned by acceptCall1() on a client disconnect */
#define SRV_ADDR 1

int connFd;
extern int t_errno;
int acceptCall1();
void runServer1();

void xtiServer() 
    {
    int listenFd;                   /* file descriptor server is listening on */
    struct t_bind *tBind;           /* port address to be bound by the server */
    struct t_bind *retBind;         /* port address actually bound by the server*/
    struct t_call *call;            /* Address of the client */ 

    /* The tliUserInit() call is used to make the TLI variable "t_errno" 
     * a task variable. This call has to be made before any other XTI/TLI
     * services are used
     */ 

    tliUserInit();

    /* Server makes a t_open to establish a transport endpoint with the
     * /dev/tpit transport provider. This descriptor "listenFd" will be
     * used to listen for connect indications. 
     */ 

    if((listenFd = t_open("/dev/tpit", O_RDWR, NULL)) <0) 
        {
        t_error("t_open failed for listenFd");
        return;
        }

    if((tBind = (struct t_bind *)t_alloc(listenFd, T_BIND, T_ALL)) == NULL) 
        {
        t_error("t_alloc of t_bind structure failed");
        return;
        }

    if((retBind = (struct t_bind *)t_alloc(listenFd, T_BIND, T_ALL)) == NULL) 
        {
        t_error("t_alloc of t_bind structure failed");
        return;
        }

    tBind->qlen = 5;
    tBind->addr.len = sizeof(int);
    *(int *)tBind->addr.buf = SRV_ADDR;

    /* The server binds its well-known address to the transport endpoint */ 

    if(t_bind(listenFd, tBind, retBind) < 0) 
        {
        t_error("t_bind failed for listenFd");
        return;
        }

    if( *(int *)retBind->addr.buf != SRV_ADDR) 
        {
        printf("t_bind bound wrong address %d\n", *(int *)retBind->addr.buf);
        return;
        }

    if((call = (struct t_call *)t_alloc(listenFd, T_CALL, T_ALL)) == NULL) 
        {
        t_error("t_alloc of t_call structure failed");
        return;
        }

    /* The server loops forever, processing each connect indication.
     * When one arrives, the server calls acceptCall1 to accept the
     * connect request.
     */

    while (1) 
        {
        if(t_listen(listenFd, call) < 0) 
            {
            t_error("t_listen failed for listen");
            return;
            }
        if((connFd = acceptCall1(listenFd, call)) != DISCONNECT)
            runServer1(listenFd);
        } 
    }

int acceptCall1
    (
    int listenFd,             /* XTI endpoint descriptor the server listens on */
    struct t_call *call       /* client's address, as returned by t_listen */
    )
    {
    int resfd;                /* endpoint on which the server accepts the connection */

    /* The server establishes another transport endpoint to establish the
     * connection. A connection could have been established on the listenFd
     * but that would prevent other clients from accessing the server for 
     * the duration of the connection.
     */ 

    if((resfd = t_open("/dev/tpit", O_RDWR, NULL)) < 0) 
        {
        t_error("t_open for responding fd failed");
        return (ERROR);
        } 

    if(t_bind(resfd, NULL, NULL) < 0) 
        {
        t_error("t_bind for responding fd failed");
        return (ERROR);
        } 

    /* The first two arguments to t_accept are the listening transport
     * endpoint and the endpoint where the connection will be accepted. The
     * third argument points to the t_call structure associated with the
     * connect indication. This structure contains the address of the
     * calling user and the sequence number returned by t_listen.
     */

    if(t_accept(listenFd, resfd, call) < 0) 
        {
        if(t_errno == TLOOK) 
            {
            /* If t_accept fails and a disconnect indication arrives, the
             * server retrieves the disconnect indication using t_rcvdis.
             */

            if(t_rcvdis(listenFd, NULL) < 0) 
                {
                t_error("t_rcvdis failed for listenFd");
                return (ERROR);
                } 

            /* The server closes the responding transport endpoint and
             * returns DISCONNECT, which informs the server that the
             * connection was disconnected by the client.
             */

            if(t_close(resfd) < 0) 
                {
                t_error("t_close failed for responding fd");
                return (ERROR);
                } 
            return (DISCONNECT);
            } 
        t_error("t_accept failed");
        return (ERROR);
        } 
    return (resfd);
    }

void connrelease1( void )
    {
    if(t_look(connFd) == T_DISCONNECT) 
        {
        printf("connection aborted\n")
        return;
        } 
    return;
    }

void runServer1
    (
    int listenFd                /* Endpoint where connection is setup */
    )
    {
    char buf[50];               /* Buffer to store data to send to client */

    strcpy(buf, "This is SERVER calling to the client");

    if(t_close(listenFd) < 0) 
        {
        t_error("t_close failed for listenFd");
        return;
        } 

    if(t_look(connFd) != 0) 
        {
        printf("t_look : unexpected error\n");
        return;
        } 

    /* The server sends the data to the client using t_snd. The fourth
     * argument can contain flags to specify that the data is expedited.
     */

    if(t_snd(connFd, buf, sizeof(buf), 0) < 0) 
        {
        t_error("t_snd failed");
        return;
        } 

    /* When all data has been transferred, the server invokes t_sndrel to
     * perform the orderly release of the connection.
     */

    if( t_sndrel(connFd) < 0) 
        {
        t_error("t_sndrel failed");
        return;
        } 
    }