author	Chuck Lever <chuck.lever@oracle.com>	2016-05-02 14:41:05 -0400
committer	Anna Schumaker <Anna.Schumaker@Netapp.com>	2016-05-17 15:47:58 -0400
commit	302d3deb20682a076e1ab551821cacfdc81c5e4f (patch)
tree	f379f2c56120ba7d6f970306d4520c4cc5517726 /init
parent	949317464bc2baca0ccc69e35a7b5cd3715633a6 (diff)
xprtrdma: Prevent inline overflow
When deciding whether to send a Call inline, rpcrdma_marshal_req doesn't take into account header bytes consumed by chunk lists. This results in Call messages on the wire that are sometimes larger than the inline threshold.

Likewise, when a Write list or Reply chunk is in play, the server's reply has to emit an RDMA Send that includes a larger-than-minimal RPC-over-RDMA header.

The actual size of a Call message cannot be estimated until after the chunk lists have been registered. Thus the size of each RPC-over-RDMA header can be estimated only after chunks are registered; but the decision to register chunks is based on the size of that header. Chicken, meet egg.

The best a client can do is estimate header size based on the largest header that might occur, and then ensure that inline content is always smaller than that.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
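To make the "largest header that might occur" idea concrete, here is a minimal, self-contained C sketch. It is not the xprtrdma code from this patch; the segment counts, per-segment sizes, and function names are illustrative assumptions. It estimates a worst-case RPC-over-RDMA Call header and admits a payload inline only if that estimate plus the payload still fits under the negotiated inline threshold.

/*
 * Sketch only: worst-case header budgeting for inline sends.
 * All sizes and limits below are assumed values for illustration.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define HDRLEN_MIN	28	/* assumed fixed RPC-over-RDMA header size */
#define MAX_SEGMENTS	8	/* assumed per-chunk segment cap */
#define SEG_SIZE	16	/* assumed handle + length + 64-bit offset */

/* Worst case: a chunk list carrying the maximum number of segments. */
static size_t max_call_header_size(void)
{
	size_t len = HDRLEN_MIN;

	/* each segment: 4-byte position plus the segment body */
	len += MAX_SEGMENTS * (4 + SEG_SIZE);
	len += 4;		/* list discriminator / terminator */
	return len;
}

/*
 * A Call may go inline only if the worst-case header plus the RPC
 * payload stays at or below the inline threshold.
 */
static bool fits_inline(size_t payload_len, size_t inline_threshold)
{
	return max_call_header_size() + payload_len <= inline_threshold;
}

int main(void)
{
	size_t threshold = 1024;	/* example inline threshold */

	printf("max header estimate: %zu bytes\n", max_call_header_size());
	printf("800-byte payload inline? %s\n",
	       fits_inline(800, threshold) ? "yes" : "no");
	printf("900-byte payload inline? %s\n",
	       fits_inline(900, threshold) ? "yes" : "no");
	return 0;
}

The point of budgeting against the worst case is that it breaks the circular dependency in the commit message: the inline decision no longer needs the exact post-registration header size, only an upper bound that is known in advance.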
Diffstat (limited to 'init')
0 files changed, 0 insertions, 0 deletions