It's easy to forget to check the obvious sometimes. A user complains that performance of a RAC cluster is appalling, a quick dive into OEM shows everything spending forever in cluster waits, and it's tempting to dive straight into SQL performance tuning mode. Alternatively, you can start looking for hot blocks or small tables being pinged back and forth across the interconnect. Luckily I noticed something odd in sar first: my interconnect was only transferring 1.2 MB/s, yet everything was still hung up on cluster waits. A quick check of the network card:
racnode1:~ # ethtool eth4
Settings for eth4:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: umbg
        Wake-on: g
        Current message level: 0x00000003 (3)
        Link detected: yes
10Mb/s? So much for the fancy switch management and alerting software. Failing the bond over to the other network link fixed things until a network engineer can get to it and move the cable.
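Since this kind of thing is worth catching before a user complains, a simple check of the negotiated link speed can be scripted. This is only a sketch: the interface name eth4 and the 1000 Mb/s expectation are taken from the situation above, and the sample string stands in for real ethtool output so the parsing can be shown on its own.

```shell
#!/bin/sh
# parse_speed: extract the negotiated speed (in Mb/s) from `ethtool <iface>`
# output read on stdin, e.g. the "Speed: 10Mb/s" line seen above.
parse_speed() {
    awk -F': *' '/Speed:/ { gsub(/Mb\/s/, "", $2); print $2 }'
}

# Illustrative sample; in practice pipe `ethtool eth4` straight in.
sample="Speed: 10Mb/s"
speed=$(printf '%s\n' "$sample" | parse_speed)

# Expected interconnect rate (assumption: a gigabit NIC, as advertised above).
expected=1000
if [ "$speed" -lt "$expected" ]; then
    echo "WARNING: link at ${speed}Mb/s, expected ${expected}Mb/s"
fi
```

Run from cron against each interconnect interface on each node, a warning like this would have flagged the bad negotiation long before the cluster waits piled up.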
Sometimes it isn't a complex SQL tuning task: sometimes it really is a database performance issue, and not the fault of the developers.