Prefetching Support

Data prefetching refers to loading data from a relatively slow memory into a relatively fast cache before the application needs it. Data prefetch behavior depends on the architecture.

Issuing prefetches improves performance in most cases; however, in some cases prefetch instructions can slow application performance. Experiment with prefetching; it can be helpful to turn prefetching on or off with a compiler option while leaving all other optimizations unaffected, so that a suspected prefetch performance issue can be isolated. See Prefetching with Options for information on using compiler options for prefetching data.
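As a sketch of this experiment, the two builds below differ only in the prefetch optimization. The option spellings are an assumption for recent Intel® compilers; older compiler versions used different names (see Prefetching with Options for the options your version supports):

```shell
# Build twice, changing only the prefetch insertion optimization
# (-qopt-prefetch / -qno-opt-prefetch are recent Intel compiler
# spellings; older versions used other names):
ifx -O3 -qopt-prefetch    -o app_prefetch   app.f90
ifx -O3 -qno-opt-prefetch -o app_noprefetch app.f90
# Compare the timings of ./app_prefetch and ./app_noprefetch to
# isolate the effect of prefetching.
```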

There are two primary methods of issuing prefetch instructions. One is by using compiler directives and the other is by using compiler intrinsics.

PREFETCH and NOPREFETCH Directives

The PREFETCH and NOPREFETCH directives are supported on Itanium® processors only. They assert that data prefetches should, or should not, be generated for certain memory references; this influences the prefetching heuristics used by the compiler.

If a loop includes the expression A(j), placing PREFETCH A in front of the loop instructs the compiler to insert prefetches for A(j + d) within the loop, where d, the number of iterations ahead to prefetch the data, is determined by the compiler. Prefetches are inserted only when option -O3 (Linux*) or /O3 (Windows*) is in effect; at optimization levels -O1 and -O2 (Linux) or /O1 and /O2 (Windows) the directives are accepted but have no effect. Remember that -O2 or /O2 is the default optimization level.

Example

!DEC$ NOPREFETCH c
!DEC$ PREFETCH a
  do i = 1, m
    b(i) = a(c(i)) + 1
  enddo

The following example is for IA-64 architecture only:

Example

do j = 1, lastrow-firstrow+1
  i = rowstr(j)
  iresidue = mod( rowstr(j+1)-i, 8 )
  sum = 0.d0
!DEC$ NOPREFETCH a,p,colidx
  do k = i, i+iresidue-1
    sum = sum + a(k)*p(colidx(k))
  enddo
!DEC$ NOPREFETCH colidx
!DEC$ PREFETCH a:1:40
!DEC$ PREFETCH p:1:20
  do k = i+iresidue, rowstr(j+1)-8, 8
    sum = sum + a(k  )*p(colidx(k  ))                         &
              + a(k+1)*p(colidx(k+1)) + a(k+2)*p(colidx(k+2)) &
              + a(k+3)*p(colidx(k+3)) + a(k+4)*p(colidx(k+4)) &
              + a(k+5)*p(colidx(k+5)) + a(k+6)*p(colidx(k+6)) &
              + a(k+7)*p(colidx(k+7))
  enddo
  q(j) = sum
enddo

memref_control Pragma

The memref_control pragma is supported on Itanium® processors only. This pragma provides a method for controlling load latency and temporal locality at the variable level: for each named array or pointer, you can specify the cache level in which the data should be kept and the load latency the compiler should assume or overlap.

The syntax for this pragma is shown below:

Syntax

#pragma memref_control name1[:<locality>[:<latency>]][,name2...]

The following list describes the supported arguments.

name1, name2

Specifies the name of an array or pointer. You must specify at least one name; you can specify multiple names, each with associated locality and latency values.

locality

An optional integer value that indicates the desired cache level in which to keep the data for future access. This determines the load/store hint (or prefetch hint) used for this reference. The value can be one of the following:

  • l1 = 0

  • l2 = 1

  • l3 = 2

  • mem = 3

To use this argument, you must also specify name.

latency

An optional integer value that indicates the load latency (or the latency that must be overlapped if a prefetch is issued for this address). The value can be one of the following:

  • l1_latency = 0

  • l2_latency = 1

  • l3_latency = 2

  • mem_latency = 3

To use this argument, you must also specify name and locality.

When you specify the latency and data locality information at the source level for a particular data access, the compiler decides how best to use that information. If the compiler can prefetch profitably for the reference, it issues a prefetch with a distance that covers the specified latency and then schedules the corresponding load with a smaller latency. It also uses the hints on the prefetch and load appropriately to keep the data in the specified cache level.

If the compiler cannot compute the address in advance, or decides that the overheads for prefetching are too high, it uses the specified latency to separate the load and its use (in a pipelined loop or a Global Code Scheduler loop). The hint on the load/store will correspond to the cache level passed with the locality argument.

You can use this pragma with the prefetch and noprefetch pragmas to further tune the hints and prefetch strategies.

The following example illustrates a case where the address is not known in advance, so prefetching is not possible. In this case, the compiler will schedule the loads of the tab array with an L3 load latency of 15 cycles (inside a software-pipelined loop or GCS loop).

Example: gather

#pragma memref_control tab : l2 : l3_latency
for (i=0; i<n; i++)
{
   x = <generate 64 random bits inline>;
   dum += tab[x&mask]; x >>= 6;
   dum += tab[x&mask]; x >>= 6;
   dum += tab[x&mask]; x >>= 6;
}

The following example illustrates one way of using memref_control, prefetch, and noprefetch together.

Example: sparse matrix

if( size <= 1000 ) {
#pragma noprefetch cp, vp
#pragma memref_control x:l2:l3_latency
#pragma noprefetch yp, bp, rp
#pragma noprefetch xp
  for (iii=0; iii<rag1m0; iii++) {
    if( ip < rag2 ) {
      sum  -= vp[ip]*x[cp[ip]];
      ip++;
    } else {
      xp[i] = sum*yp[i];
      i++;
      sum  = bp[i];
      rag2 = rp[i+1];
    }
  }
  xp[i] = sum*yp[i];
} else {
#pragma prefetch cp, vp
#pragma memref_control x:l2:mem_latency
#pragma prefetch yp, bp, rp
#pragma noprefetch xp
  for (iii=0; iii<rag1m0; iii++) {
    if( ip < rag2 ) {
      sum  -= vp[ip]*x[cp[ip]];
      ip++;
    } else {
      xp[i] = sum*yp[i];
      i++;
      sum  = bp[i];
      rag2 = rp[i+1];
    }
  }
  xp[i] = sum*yp[i];
}

See General Compiler Directives for more information about these directives.

Intrinsics

Before inserting compiler intrinsics, experiment with all other supported compiler options and directives. Compiler intrinsics are less portable and less flexible than either a compiler option or compiler directives.

Directives enable compiler optimizations, while intrinsics perform them directly. As a result, programs that use directives are more portable, because the compiler can adapt the optimizations to different processors, whereas programs that use intrinsics may have to be rewritten or ported for different processors, because intrinsics are closer to assembly programming.

The compiler supports the intrinsic subroutine mm_prefetch. In contrast to the way the PREFETCH directive enables data prefetches from memory, the mm_prefetch subroutine prefetches data from the specified address on one memory cache line. The mm_prefetch subroutine is described in the Intel® Fortran Language Reference.