[README] Tehuti 10 Gigabits TOE SmartNIC Driver

Reference:
http://umz.kr/04Ic5


                   Tehuti 10 Gigabits TOE SmartNIC Driver
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TOC
1. Supported Hardware
2. Supported Operating Systems
3. Features
4. Installation
5. Configuration
6. Usage
7. Misc.


1. Supported Hardware
        1. x86
        2. x86_64

2. Supported Operating Systems
   The Linux driver supports the following Linux kernels and distributions:
        1. RHEL/CentOS 5.5 (kernels 2.6.18-194*)
        2. Standard Linux kernels 2.6.24-2.6.36
        3. Pre-release Linux kernel 2.6.37-rc1

   This release was tested on the following distributions:
        1. RedHat Advanced Server
        2. CentOS 5.5 (x86_64)
        3. openSUSE 11.3
        4. Fedora 13
        5. Ubuntu 10.04

3. Features
   Along with the required functionality, the driver also implements the
   following additional features:

        1. TSO - TCP Segmentation Offload - the driver can accept TCP
           packets much larger than the MTU (e.g. 1500 bytes), say 64K
           packets, and fragment them in the NIC HW into TCP packets
           smaller than the MTU. The TCP header of the original packet
           serves as a template for the TCP headers of the smaller
           packets (see the ethtool sketch after this list).

        2. HW Checksum Offload - move the IP, UDP, TCP checksum computation
           from the host CPU to the NIC.

        3. Scatter/Gather IO - driver can send a packet consisting of
           several distinct memory segments. The driver also does HW
           checksumming on each segment.

        4. High DMA - the NIC can access the entire 64-bit address space,
           including memory above 1 GB, a.k.a. high memory.

        5. NAPI - Polling vs Interrupt-Driven. This API implements switching
           between polling and interrupt-driven workflow. When the driver
           sees that there are always more packets to process, it disables
           the HW interrupts and switches to polling mode. When the OS finds
           time, it calls the driver's poll routine to accept the new
           packets. No expensive ISR cost is ever paid. When the incoming
           packet rate drops, the driver switches back to interrupt-driven
           flow.

        6. ethtool - extra diagnostics and statistics. Used by the ethtool
           program.

        7. HW multicast filtering of incoming traffic.

        8. HW VLAN filtering of incoming traffic.
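
   The offload features above can be inspected and toggled with the
   standard ethtool options (a minimal sketch; eth1 is an example
   interface name, and the driver must support each feature):

        ethtool -k eth1             # show current offload settings
        ethtool -K eth1 tso on      # TCP Segmentation Offload
        ethtool -K eth1 rx on tx on # RX/TX checksum offload
        ethtool -K eth1 sg on       # scatter/gather IO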

4. Installation

   o To build the driver run
     make

   o To install the driver to /lib/modules/$KERNEL_VER/kernel/drivers/net
     do one of the following:
     o Run:
         make install
         depmod `uname -r`
     o Or run:
         make driver
       Running make driver is equivalent to the above operations.

     o Or run:
         tinstall
        tinstall will copy the driver from the current directory to
        /lib/modules/$KERNEL_VER/kernel/drivers/net and will then run
        depmod `uname -r`.

    o To install a precompiled driver, copy the driver (tehuti.ko) and the
      tinstall script to a flash drive/cd/dvd/network accessible directory.
      Both files need to be in the same folder. On the target machine do
      the following:
      o Login as root.
      o cd to the distribution directory
      o Run: sh tinstall
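
    o A quick sanity check after any of the install methods above
      (assumes the module is named tehuti):
        modinfo tehuti    # prints driver version and supported devices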

5. Configuration

 o The driver itself does not require any user configuration and will
   automatically adjust to its working environment.

 o The kernel can be instructed to use more memory (buffers, caches, socket
   data) to prevent memory from becoming a bottleneck.

   Run "sysctl -p sysctl_luxor.conf" to set the relevant parameters.

 o Linux 2.6.35 and above supports a feature called RPS (Receive Packet
   Steering). This feature allows spreading received packets over
   specified processors. To use this feature, each interface has two
   configuration variables. These variables are located in:
      /sys/class/net/eth?/queues/rx-0/rps_cpus
      /sys/class/net/eth?/queues/rx-0/rps_flow_cnt

   The rps_cpus variable is a bitmap that specifies which CPUs will handle
   received packets; every processor has a bit in the bitmap (e.g.
   "echo 0e > rps_cpus" will enable processors 1, 2 and 3). The
   rps_flow_cnt variable contains the size of the hash table (e.g. 256)
   that is used for associating a CPU with a packet. To use RPS both
   values need to be initialized. This can be done automatically by a
   script after booting; the provided init_rps script may be used to set
   up these variables.
   Usage: init_rps [options] eth? ...

          -m <hex mask>         # Default: all cpus
          -t <hash table size>  # Default: 256

   Example: init_rps eth1 eth2

   The mask and the hash table size are calculated automatically unless
   specified explicitly by the user. init_rps requires python.
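
   For a one-off manual setup (this is what init_rps automates; eth1 and
   the values are example choices):

     echo 0e  > /sys/class/net/eth1/queues/rx-0/rps_cpus      # CPUs 1-3
     echo 256 > /sys/class/net/eth1/queues/rx-0/rps_flow_cnt  # hash size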

o On Linux 2.6.35 and above the driver includes one parameter, which is
  only used for testing purposes. The variable is located in:
      /sys/module/tehuti/parameters/paged_buffers

  It defaults to 1; setting it to zero disables paged buffers, which will
  slow down the driver.
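
  To check or change it (writing the sysfs file at runtime assumes the
  parameter is writable on your build; otherwise set it at load time):

    cat /sys/module/tehuti/parameters/paged_buffers       # read value
    echo 0 > /sys/module/tehuti/parameters/paged_buffers  # disable
    modprobe tehuti paged_buffers=0                       # set at load time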


6. Usage

o The driver should load automatically when needed. To load the driver
  manually (not really needed), do one of the following:
    o Run: modprobe tehuti
    o Or run: insmod /lib/modules/`uname -r`/kernel/drivers/net/tehuti.ko

o To verify that the driver is loaded run: lsmod | grep -i tehuti

o To manually unload the driver run: rmmod tehuti

o Now you can run dmesg and ifconfig -a to see which interface was added
  and configure an IP address on it. For example:
       ifconfig eth1 10.0.0.1/24
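
  On distributions that ship the iproute2 tools, the equivalent commands
  are (same example interface and address):
       ip addr add 10.0.0.1/24 dev eth1
       ip link set eth1 up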

o Permanently configuring network interfaces. This procedure involves
  setting up configuration files and is specific to the actual Linux
  distro. Below are notes for various popular Linux distributions:

    o RHEL and family (CentOS, Fedora):
      Network configuration files are located in the
      /etc/sysconfig/network-scripts directory. Each interface has a
      configuration file that is named after the interface name
      (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0). The file may be
      used to set up the IP using dhcp or manually. Examples:

        o Static IP:
          DEVICE=eth5
          BOOTPROTO=static             # Static ip
          ONBOOT=yes
          HWADDR=00:30:4F:76:AB:09
          BROADCAST=192.168.5.255
          IPADDR=192.168.5.1
          NETMASK=255.255.255.0
          NETWORK=192.168.5.0

        o Dynamic IP using dhcp:
          DEVICE=eth0
          HWADDR=00:04:23:C7:33:EF
          BOOTPROTO=dhcp
          ONBOOT=yes
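
          To apply a new or changed ifcfg file without rebooting, bring
          the interface up with the distribution's own tools, e.g.:

            ifup eth5                 # bring up a single interface
            service network restart   # reinitialize all interfaces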

    o openSUSE:
      Network configuration files are located in the
      /etc/sysconfig/network directory. Each interface has a configuration
      file that is named after the interface name
      (e.g. /etc/sysconfig/network/ifcfg-eth0). The file may be used
      to set up the IP using dhcp or manually. The syntax differs slightly
      from RHEL. Examples:

        o Static IP:
          STARTMODE=auto           # {manual*|auto|hotplug|ifplugd|
                                   #  nfsroot|off}
                                   # hotplug is similar to auto but
                                   # rcnetwork will not fail if it
                                   # cannot configure the interface.
          LLADDR=00:30:4F:76:AB:09
          BOOTPROTO=static         # {static*|dhcp|autoip|dhcp+autoip|6to4}
          NETMASK=255.255.255.0
          BROADCAST=192.168.5.255
          NETWORK=192.168.5.0
          IPADDR=192.168.5.1
          # IPADDR=192.168.5.1/24  # This is shorthand for the above.
          MTU=1500

        o Dynamic IP using dhcp:
          STARTMODE=auto
          LLADDR=00:04:23:C7:33:EF
          BOOTPROTO=dhcp

    o Debian and family (Debian, Ubuntu, Kubuntu, etc.):
      Network configuration is located in a single file,
      /etc/network/interfaces.

      Examples:
       auto  lo
       iface lo inet loopback

       auto  eth0
       iface eth0 inet static
             address    192.168.5.1
             hwaddress  ether 00:30:4f:76:ab:09
             netmask    255.255.255.0
             broadcast  192.168.5.255
             network    192.168.5.0
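
       To apply changes to /etc/network/interfaces without rebooting:
         ifup eth0                        # bring up a single interface
         /etc/init.d/networking restart   # reinitialize all interfaces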

o Configuration files setup notes:

  o These configuration files will set up the interface and the routing
    tables. No additional scripting or setting of other configuration
    files is needed in most cases. One exception may be enabling RPS,
    which may require some additional scripting.

  o Make sure that the driver is installed properly (including running
    depmod) before attempting to set and use the configuration files.

  o Hardware detection programs have the bad habit of messing with these
    files so keep a backup somewhere in case they are removed or modified.

  o You may need to reboot after setting up these files in order for the
    changes to take effect.


7. Misc.

o The provided lseth script is a handy tool which combines the output of
  ifcfg and ethtool in a compact and easy to understand format. lseth
  requires the following programs:
    o python
    o ifcfg
    o ethtool

  Usage:
  lseth [-6][-d <driver>][-h][-i][-I][-k][-l][-n] [interfaces]

   -6               Show ipv6 addresses
   -d <driver>      Show only drivers of the specified type
   -h               Help
   -i               NIC info (symbolic names)
   -I               NIC info (numeric codes)
   -k               List offload information
   -l               Long list
   -n               Do not resolve ip addresses

   The -d option limits the display to a specific driver type (e.g. -d
   tehuti). If no interfaces are specified all interfaces will be listed,
   otherwise only the specified interfaces will be listed. Wildcard
   characters may also be specified (e.g. lseth eth[1-3]).
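
   A few example invocations, using only the options listed above:

     lseth -d tehuti -k    # offload settings of tehuti interfaces only
     lseth -l eth[1-3]     # long listing of eth1, eth2 and eth3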



Article excerpt

The latest 2.6.35 release of the Linux kernel is expected to further accelerate network traffic processing using two protocols developed by Google.

The new version, announced by Linus Torvalds last Sunday, adds several new features along with the usual bug fixes and optimizations, with an emphasis on improving usefulness in today's multi-core network environments.

The most notable of the new features are Google's RPS (Receive Packet Steering) and RFS (Receive Flow Steering). RPS distributes incoming packets across all CPUs available in the system. RFS calculates which core is best suited for the work, taking into account factors such as which application will consume the network traffic.
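
For reference, RFS is enabled through two standard kernel knobs; the
values below are illustrative only:

    echo 32768 > /proc/sys/net/core/rps_sock_flow_entries     # global flow table
    echo 2048  > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt # per-queue count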
 
The Kernel Newbies website also published benchmark results using a server based on an 8-core Intel CPU and an Intel e1000e network adapter. In the test, network-based transaction performance nearly tripled from 104,000 tps to 303,000 tps when running with RPS and RFS, while CPU utilization was 30% and 61%, respectively.

These features are seen as timely, given that network traffic protocols have recently been evolving to support more I/O. Indeed, Ethernet equipment vendors are working on upgrading Ethernet to new standards for 40Gbps and 100Gbps. "Network cards have now evolved to bandwidths that are too much for a single CPU to handle," Kernel Newbies explained.

Other additions include a new memory compaction feature, a front end for the debugger contributed by SGI, support for managing multiple multicast route tables, and an XFS file system change that groups logging tasks together to reduce I/O traffic.
This version arrives one month after the release of the previous version, 2.6.34.
