Channel: VMware Communities: Message List

Re: Solaris 11.1 - Jumbo Frames?


Hi,

I am trying the same on an OpenIndiana system:

root@filer01:~# uname -a
SunOS filer01 5.11 oi_151a7 i86pc i386 i86pc Solaris
root@filer01:~# dladm show-linkprop -p mtu
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
eth1         mtu             rw   9000           1500           9000
eth0         mtu             rw   9000           1500           9000
root@filer01:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eth0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 12
        inet 10.20.34.2 netmask ff000000 broadcast 10.255.255.255
        ether 0:50:56:8e:5c:cc
eth1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 10
        inet 10.20.30.2 netmask ff000000 broadcast 10.255.255.255
        ether 0:50:56:8e:5c:c5
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
root@filer01:~# ping -s 10.20.34.252 1472 3
PING 10.20.34.252: 1472 data bytes
1480 bytes from 10.20.34.252: icmp_seq=0. time=0.751 ms
1480 bytes from 10.20.34.252: icmp_seq=1. time=0.184 ms
1480 bytes from 10.20.34.252: icmp_seq=2. time=0.209 ms

----10.20.34.252 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.184/0.381/0.751/0.320
root@filer01:~# ping -s 10.20.34.252 8000 3
PING 10.20.34.252: 8000 data bytes
8008 bytes from 10.20.34.252: icmp_seq=0. time=0.380 ms
8008 bytes from 10.20.34.252: icmp_seq=1. time=0.423 ms
8008 bytes from 10.20.34.252: icmp_seq=2. time=0.311 ms

----10.20.34.252 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.311/0.371/0.423/0.057
root@filer01:~#
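For anyone wondering about the payload sizes in the pings above: the largest ICMP echo payload that fits in a single packet is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header, so 1472 bytes for an MTU of 1500 and 8972 bytes for an MTU of 9000. A quick sketch of that arithmetic:

```shell
# Largest ICMP echo payload that fits in one packet of a given MTU:
# MTU minus 20 bytes of IPv4 header minus 8 bytes of ICMP header.
max_icmp_payload() {
  echo $(( $1 - 20 - 8 ))
}

max_icmp_payload 1500   # 1472 -- the classic non-jumbo test size used above
max_icmp_payload 9000   # 8972 -- so an 8000-byte payload needs jumbo frames to pass unfragmented
```

That is why the 8000-byte ping is a valid jumbo-frame test: it cannot fit in a standard 1500-byte frame.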

eth1 has mtu 1500 set in nwamcfg, and even though mtu 9000 shows up everywhere, jumbo frames do not work:

root@filer01:~# ping -s 10.20.30.252 1400 3
PING 10.20.30.252: 1400 data bytes
1408 bytes from 10.20.30.252: icmp_seq=0. time=2.399 ms
1408 bytes from 10.20.30.252: icmp_seq=1. time=0.259 ms
1408 bytes from 10.20.30.252: icmp_seq=2. time=0.264 ms

----10.20.30.252 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip (ms)  min/avg/max/stddev = 0.259/0.974/2.399/1.234
root@filer01:~# ping -s 10.20.30.252 8000 3
PING 10.20.30.252: 8000 data bytes

----10.20.30.252 PING Statistics----
3 packets transmitted, 0 packets received, 100% packet loss
root@filer01:~#
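For reference, this is roughly how I would set the MTU on the datalink itself (a sketch only: the link usually has to be unplumbed before dladm accepts the change, the address 10.20.30.2/8 is just the one from the ifconfig output above, and with NWAM managing the interface the profile can override the setting again):

```shell
# Sketch: set the link MTU directly with dladm, then re-plumb the interface.
ifconfig eth1 unplumb                     # MTU cannot be changed while plumbed
dladm set-linkprop -p mtu=9000 eth1       # set the datalink MTU
ifconfig eth1 plumb 10.20.30.2/8 up       # re-plumb with the old address
dladm show-linkprop -p mtu eth1           # verify the effective value
```

With NWAM in use, the equivalent change also has to land in the NWAM profile, or it will be reverted on the next profile activation.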

I got it to work using this KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2032669

Basically, the driver has to be enabled for jumbo frames.

After unloading and reloading the driver, it worked:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010071
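For the record, the unload/reload sequence I mean looks roughly like this (a sketch along the lines of that KB article; run it from the VM console, never over a session carried by the NIC being reloaded, and note that the interface name and address are the ones from my setup above):

```shell
# Sketch: reload the vmxnet3s driver so it re-reads its configuration.
ifconfig eth1 unplumb                        # detach IP from the driver instance
id=$(modinfo | awk '/vmxnet3s/ {print $1; exit}')
modunload -i "$id"                           # unload the driver module
devfsadm -i vmxnet3s                         # re-attach / reload the driver
ifconfig eth1 plumb 10.20.30.2/8 up          # plumb the interface again
```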

And to receive jumbo frames, I still had to apply this KB article on the interface that receives them:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012445

ndd -set /dev/vmxnet3s0 accept-jumbo 1
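Keep in mind that ndd settings do not survive a reboot. One way to reapply the setting at boot (an assumption on my side — the rc directory, script name, and runlevel are my choices, adapt them to your system) is a small legacy rc script:

```shell
# Sketch: reapply the non-persistent ndd setting at boot via a legacy rc script.
cat > /etc/rc2.d/S99accept-jumbo <<'EOF'
#!/sbin/sh
/usr/sbin/ndd -set /dev/vmxnet3s0 accept-jumbo 1
EOF
chmod 744 /etc/rc2.d/S99accept-jumbo
```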

Best regards

Rainer

