
Understanding the TCP Protocol and Its Source Code in Depth


This experiment traces the TCP three-way handshake process.

 

Experiment environment: Ubuntu 18.04, with the linux-5.0.1 kernel loaded under qemu, and simple TCP communication demo commands (replyhi/hello) added to MenuOS.

First, a theoretical look at the three-way handshake. Strictly speaking it should be called a three-segment (three-message) handshake rather than "three handshakes", because only one handshake takes place; it simply consists of three segments.

 

[Figure: TCP three-way handshake sequence diagram]

 

 

Step 1: The client sets the SYN flag to 1, picks a random initial sequence number seq=J, and sends the segment to the server. The client then enters the SYN_SENT state and waits for the server's confirmation.
Step 2: On receiving the segment, the server sees SYN=1 and knows the client is requesting a connection. It sets both the SYN and ACK flags to 1, sets ack=J+1, picks its own random initial sequence number seq=K, and sends the segment back to the client to acknowledge the connection request. The server then enters the SYN_RCVD state.
Step 3: On receiving the confirmation, the client checks that ack equals J+1 and that the ACK flag is 1. If so, it sets the ACK flag to 1, sets ack=K+1, and sends the segment to the server. The server checks that ack equals K+1 and that ACK is 1; if correct, the connection is established, both client and server enter the ESTABLISHED state, the handshake is complete, and data transfer between them can begin.
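To make the step-3 check concrete, here is a minimal, self-contained C sketch. It is purely illustrative (the struct and function names are invented for this example, not kernel code): it validates a SYN-ACK the way the client conceptually does before replying with the final ACK.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the relevant TCP header fields. */
struct tcp_seg {
    uint32_t seq;      /* sender's sequence number */
    uint32_t ack_seq;  /* acknowledgment number */
    uint8_t  syn;      /* SYN flag */
    uint8_t  ack;      /* ACK flag */
};

/* Step-3 check on the client: the SYN-ACK must carry SYN=1, ACK=1 and ack=J+1. */
static bool synack_is_valid(const struct tcp_seg *s, uint32_t j)
{
    return s->syn == 1 && s->ack == 1 && s->ack_seq == j + 1;
}

int main(void)
{
    uint32_t j = 1000;                         /* client's initial seq (J) */
    struct tcp_seg synack = { .seq = 5000,     /* server's initial seq (K) */
                              .ack_seq = j + 1, .syn = 1, .ack = 1 };

    if (synack_is_valid(&synack, j))
        printf("SYN-ACK ok, reply with ACK, ack=%u\n", synack.seq + 1); /* K+1 */
    return 0;
}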

 

From the application layer's point of view, the handshake corresponds to the client's connect function and the server's accept function. The handshake itself is carried out in the kernel: both functions enter the kernel via system calls, and the kernel performs the handshake on their behalf (a minimal userspace sketch of the two sides follows below). We then trace the kernel code to see which kernel functions are involved.
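As a reference point, here is a minimal sketch of how the two sides might look in userspace, roughly mirroring what the MenuOS replyhi/hello demo does (this is an assumption; the port 5001, the loopback address, and the omission of error handling are all for illustration only). connect() on the client triggers the SYN, and accept() on the server returns only once the kernel has completed the handshake.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define DEMO_PORT 5001   /* hypothetical port; the MenuOS demo may use a different one */

int main(void)
{
    /* Server side: create, bind and listen before the client connects. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(DEMO_PORT),
                                .sin_addr   = { .s_addr = htonl(INADDR_ANY) } };
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 5);

    if (fork() == 0) {
        /* Client side: connect() sends the SYN and returns when the handshake is done. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = { .sin_family = AF_INET,
                                   .sin_port   = htons(DEMO_PORT) };
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
        connect(fd, (struct sockaddr *)&srv, sizeof(srv));   /* -> tcp_v4_connect in the kernel */
        close(fd);
        _exit(0);
    }

    /* Server side: accept() returns a fully established connection. */
    int cfd = accept(lfd, NULL, NULL);                        /* -> inet_csk_accept in the kernel */
    printf("handshake done, accepted fd=%d\n", cfd);
    close(cfd);
    wait(NULL);
    close(lfd);
    return 0;
}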

 

Let's start with an analysis of the kernel source code.

The TCP-related code lives mainly under linux-5.0.1/net/ipv4/. In linux-5.0.1/net/ipv4/tcp_ipv4.c, the structure variable struct proto tcp_prot specifies the interface functions through which the TCP protocol stack is accessed; the socket interface layer dispatches into TCP through this table, so sk->sk_prot->connect actually calls tcp_v4_connect, and sk->sk_prot->accept actually calls inet_csk_accept.

struct proto tcp_prot = {
    .name            = "TCP",
    .owner            = THIS_MODULE,
    .close            = tcp_close,
    .pre_connect        = tcp_v4_pre_connect,
    .connect        = tcp_v4_connect,
    .disconnect        = tcp_disconnect,
    .accept            = inet_csk_accept,
    .ioctl            = tcp_ioctl,
    .init            = tcp_v4_init_sock,
    .destroy        = tcp_v4_destroy_sock,
    .shutdown        = tcp_shutdown,
    .setsockopt        = tcp_setsockopt,
    .getsockopt        = tcp_getsockopt,
    .keepalive        = tcp_set_keepalive,
    .recvmsg        = tcp_recvmsg,
    .sendmsg        = tcp_sendmsg,
    .sendpage        = tcp_sendpage,
    .backlog_rcv        = tcp_v4_do_rcv,
    .release_cb        = tcp_release_cb,
    .hash            = inet_hash,
    .unhash            = inet_unhash,
    .get_port        = inet_csk_get_port,
    .enter_memory_pressure    = tcp_enter_memory_pressure,
    .leave_memory_pressure    = tcp_leave_memory_pressure,
    .stream_memory_free    = tcp_stream_memory_free,
    .sockets_allocated    = &tcp_sockets_allocated,
    .orphan_count        = &tcp_orphan_count,
    .memory_allocated    = &tcp_memory_allocated,
    .memory_pressure    = &tcp_memory_pressure,
    .sysctl_mem        = sysctl_tcp_mem,
    .sysctl_wmem_offset    = offsetof(struct net, ipv4.sysctl_tcp_wmem),
    .sysctl_rmem_offset    = offsetof(struct net, ipv4.sysctl_tcp_rmem),
    .max_header        = MAX_TCP_HEADER,
    .obj_size        = sizeof(struct tcp_sock),
    .slab_flags        = SLAB_TYPESAFE_BY_RCU,
    .twsk_prot        = &tcp_timewait_sock_ops,
    .rsk_prot        = &tcp_request_sock_ops,
    .h.hashinfo        = &tcp_hashinfo,
    .no_autobind        = true,
#ifdef CONFIG_COMPAT
    .compat_setsockopt    = compat_tcp_setsockopt,
    .compat_getsockopt    = compat_tcp_getsockopt,
#endif
    .diag_destroy        = tcp_abort,
};
EXPORT_SYMBOL(tcp_prot);

 

Now let's verify this with breakpoints (with gdb attached to qemu's gdbserver, e.g. via target remote :1234, and the vmlinux symbols loaded):

b tcp_v4_connect
b inet_csk_accept
c


 

Then, typing replyhi in MenuOS, execution stops at the breakpoint in inet_csk_accept.


 

Next, typing hello in MenuOS, execution stops at the breakpoint in tcp_v4_connect.


This shows that the handshake indeed goes through these two functions. Ideally we would also capture the packets with tcpdump to confirm the three segments on the wire, but for lack of time the packet capture is skipped here.

 

Let's now look at the definitions of these two functions, tcp_v4_connect and inet_csk_accept, in turn.

 

The main job of tcp_v4_connect is to initiate an outgoing TCP connection. Establishing a TCP connection naturally needs support from the layer below, so in this function we can see calls into services provided by the IP layer, such as ip_route_connect and ip_route_newports, whose roles are easy to guess from their names. Since we are focusing on the three-way handshake at the TCP level, we will not dig into the details of those lower-layer services. Note that the function sets the socket state to TCP_SYN_SENT and then calls tcp_connect(sk) to actually build the SYN segment and send it out.

/* This will initiate an outgoing connection. */
int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
{
    struct sockaddr_in *usin = (struct sockaddr_in *)uaddr;
    struct inet_sock *inet = inet_sk(sk);
    struct tcp_sock *tp = tcp_sk(sk);
    __be16 orig_sport, orig_dport;
    __be32 daddr, nexthop;
    struct flowi4 *fl4;
    struct rtable *rt;
    int err;
    struct ip_options_rcu *inet_opt;
    struct inet_timewait_death_row *tcp_death_row = &sock_net(sk)->ipv4.tcp_death_row;

    if (addr_len < sizeof(struct sockaddr_in))
        return -EINVAL;

    if (usin->sin_family != AF_INET)
        return -EAFNOSUPPORT;

    nexthop = daddr = usin->sin_addr.s_addr;
    inet_opt = rcu_dereference_protected(inet->inet_opt,
                         lockdep_sock_is_held(sk));
    if (inet_opt && inet_opt->opt.srr) {
        if (!daddr)
            return -EINVAL;
        nexthop = inet_opt->opt.faddr;
    }

    orig_sport = inet->inet_sport;
    orig_dport = usin->sin_port;
    fl4 = &inet->cork.fl.u.ip4;
    rt = ip_route_connect(fl4, nexthop, inet->inet_saddr,
                  RT_CONN_FLAGS(sk), sk->sk_bound_dev_if,
                  IPPROTO_TCP,
                  orig_sport, orig_dport, sk);
    if (IS_ERR(rt)) {
        err = PTR_ERR(rt);
        if (err == -ENETUNREACH)
            IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
        return err;
    }

    if (rt->rt_flags & (RTCF_MULTICAST | RTCF_BROADCAST)) {
        ip_rt_put(rt);
        return -ENETUNREACH;
    }

    if (!inet_opt || !inet_opt->opt.srr)
        daddr = fl4->daddr;

    if (!inet->inet_saddr)
        inet->inet_saddr = fl4->saddr;
    sk_rcv_saddr_set(sk, inet->inet_saddr);

    if (tp->rx_opt.ts_recent_stamp && inet->inet_daddr != daddr) {
        /* Reset inherited state */
        tp->rx_opt.ts_recent       = 0;
        tp->rx_opt.ts_recent_stamp = 0;
        if (likely(!tp->repair))
            tp->write_seq       = 0;
    }

    inet->inet_dport = usin->sin_port;
    sk_daddr_set(sk, daddr);

    inet_csk(sk)->icsk_ext_hdr_len = 0;
    if (inet_opt)
        inet_csk(sk)->icsk_ext_hdr_len = inet_opt->opt.optlen;

    tp->rx_opt.mss_clamp = TCP_MSS_DEFAULT;

    /* Socket identity is still unknown (sport may be zero).
     * However we set state to SYN-SENT and not releasing socket
     * lock select source port, enter ourselves into the hash tables and
     * complete initialization after this.
     */
    tcp_set_state(sk, TCP_SYN_SENT);
    err = inet_hash_connect(tcp_death_row, sk);
    if (err)
        goto failure;

    sk_set_txhash(sk);

    rt = ip_route_newports(fl4, rt, orig_sport, orig_dport,
                   inet->inet_sport, inet->inet_dport, sk);
    if (IS_ERR(rt)) {
        err = PTR_ERR(rt);
        rt = NULL;
        goto failure;
    }
    /* OK, now commit destination to socket.  */
    sk->sk_gso_type = SKB_GSO_TCPV4;
    sk_setup_caps(sk, &rt->dst);
    rt = NULL;

    if (likely(!tp->repair)) {
        if (!tp->write_seq)
            tp->write_seq = secure_tcp_seq(inet->inet_saddr,
                               inet->inet_daddr,
                               inet->inet_sport,
                               usin->sin_port);
        tp->tsoffset = secure_tcp_ts_off(sock_net(sk),
                         inet->inet_saddr,
                         inet->inet_daddr);
    }

    inet->inet_id = tp->write_seq ^ jiffies;

    if (tcp_fastopen_defer_connect(sk, &err))
        return err;
    if (err)
        goto failure;

    err = tcp_connect(sk);

    if (err)
        goto failure;

    return 0;

failure:
    /*
     * This unhashes the socket and releases the local port,
     * if necessary.
     */
    tcp_set_state(sk, TCP_CLOSE);
    ip_rt_put(rt);
    sk->sk_route_caps = 0;
    inet->inet_dport = 0;
    return err;
}
EXPORT_SYMBOL(tcp_v4_connect);

tcp_v4_connect calls tcp_connect, passing in the sock structure sk, whose state has already been set to TCP_SYN_SENT.

 

tcp_connect is responsible for building a TCP segment carrying the SYN flag and sending it out; it also arms a retransmission timer so that the SYN is resent if no answer arrives in time.

int tcp_connect(struct sock *sk)
{
    struct tcp_sock *tp = tcp_sk(sk);
    struct sk_buff *buff;
    int err;

    tcp_call_bpf(sk, BPF_SOCK_OPS_TCP_CONNECT_CB, 0, NULL);

    if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk))
        return -EHOSTUNREACH; /* Routing failure or similar. */

    tcp_connect_init(sk);

    if (unlikely(tp->repair)) {
        tcp_finish_connect(sk, NULL);
        return 0;
    }

    buff = sk_stream_alloc_skb(sk, 0, sk->sk_allocation, true);
    if (unlikely(!buff))
        return -ENOBUFS;

    tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN);
    tcp_mstamp_refresh(tp);
    tp->retrans_stamp = tcp_time_stamp(tp);
    tcp_connect_queue_skb(sk, buff);
    tcp_ecn_send_syn(sk, buff);
    tcp_rbtree_insert(&sk->tcp_rtx_queue, buff);

    /* Send off SYN; include data in Fast Open. */
    err = tp->fastopen_req ? tcp_send_syn_data(sk, buff) :
          tcp_transmit_skb(sk, buff, 1, sk->sk_allocation);
    if (err == -ECONNREFUSED)
        return err;

    /* We change tp->snd_nxt after the tcp_transmit_skb() call
     * in order to make this packet get counted in tcpOutSegs.
     */
    tp->snd_nxt = tp->write_seq;
    tp->pushed_seq = tp->write_seq;
    buff = tcp_send_head(sk);
    if (unlikely(buff)) {
        tp->snd_nxt    = TCP_SKB_CB(buff)->seq;
        tp->pushed_seq    = TCP_SKB_CB(buff)->seq;
    }
    TCP_INC_STATS(sock_net(sk), TCP_MIB_ACTIVEOPENS);

    /* Timer for repeating the SYN until an answer. */
    inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
                  inet_csk(sk)->icsk_rto, TCP_RTO_MAX);
    return 0;
}
EXPORT_SYMBOL(tcp_connect);

 

Here tcp_transmit_skb is the function that sends the TCP data out; it is a thin wrapper that calls __tcp_transmit_skb.

static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
                gfp_t gfp_mask)
{
    return __tcp_transmit_skb(sk, skb, clone_it, gfp_mask,
                  tcp_sk(sk)->rcv_nxt);
}

That is as far as we trace tcp_v4_connect. Roughly, the client-side call chain is: the connect() system call → inet_stream_connect → tcp_v4_connect → tcp_connect → tcp_transmit_skb → __tcp_transmit_skb, which hands the SYN segment down to the IP layer.

 

Now let's look at the inet_csk_accept function.

struct sock *inet_csk_accept(struct sock *sk, int flags, int *err, bool kern)
{
    struct inet_connection_sock *icsk = inet_csk(sk);
    struct request_sock_queue *queue = &icsk->icsk_accept_queue;
    struct request_sock *req;
    struct sock *newsk;
    int error;

    lock_sock(sk);

    /* We need to make sure that this socket is listening,
     * and that it has something pending.
     */
    error = -EINVAL;
    if (sk->sk_state != TCP_LISTEN)
        goto out_err;

    /* Find already established connection */
    if (reqsk_queue_empty(queue)) {
        long timeo = sock_rcvtimeo(sk, flags & O_NONBLOCK);

        /* If this is a non blocking socket don't sleep */
        error = -EAGAIN;
        if (!timeo)
            goto out_err;

        error = inet_csk_wait_for_connect(sk, timeo);
        if (error)
            goto out_err;
    }
    req = reqsk_queue_remove(queue, sk);
    newsk = req->sk;

    if (sk->sk_protocol == IPPROTO_TCP &&
        tcp_rsk(req)->tfo_listener) {
        spin_lock_bh(&queue->fastopenq.lock);
        if (tcp_rsk(req)->tfo_listener) {
            /* We are still waiting for the final ACK from 3WHS
             * so can't free req now. Instead, we set req->sk to
             * NULL to signify that the child socket is taken
             * so reqsk_fastopen_remove() will free the req
             * when 3WHS finishes (or is aborted).
             */
            req->sk = NULL;
            req = NULL;
        }
        spin_unlock_bh(&queue->fastopenq.lock);
    }
out:
    release_sock(sk);
    if (req)
        reqsk_put(req);
    return newsk;
out_err:
    newsk = NULL;
    req = NULL;
    *err = error;
    goto out;
}
EXPORT_SYMBOL(inet_csk_accept);

On the server side, calling inet_csk_accept takes a connection request off the accept (request) queue; if the queue is empty, it blocks in inet_csk_wait_for_connect waiting for a client to connect, or, for a non-blocking socket, returns -EAGAIN immediately.
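The non-blocking path (timeo == 0 leading to error = -EAGAIN above) is visible from userspace. Here is a minimal sketch, assuming a listening socket lfd already exists (the function name is made up for illustration, and error handling is minimal):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>

/* Poll a non-blocking listening socket: when the accept queue is empty,
 * inet_csk_accept returns -EAGAIN instead of sleeping in
 * inet_csk_wait_for_connect, and userspace sees errno == EAGAIN. */
int try_accept(int lfd)
{
    fcntl(lfd, F_SETFL, fcntl(lfd, F_GETFL, 0) | O_NONBLOCK);

    int cfd = accept(lfd, NULL, NULL);
    if (cfd < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            printf("no pending connection yet\n");
            return -1;
        }
        perror("accept");
        return -1;
    }
    return cfd;   /* a fully established connection; the handshake is already done */
}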

 

inet_csk_wait_for_connect is essentially an infinite for loop: the task sleeps and breaks out of the loop as soon as a connection request arrives (or on a listener state change, a pending signal, or timeout).

/*
 * Wait for an incoming connection, avoid race conditions. This must be called
 * with the socket locked.
 */
static int inet_csk_wait_for_connect(struct sock *sk, long timeo)
{
    struct inet_connection_sock *icsk = inet_csk(sk);
    DEFINE_WAIT(wait);
    int err;

    /*
     * True wake-one mechanism for incoming connections: only
     * one process gets woken up, not the 'whole herd'.
     * Since we do not 'race & poll' for established sockets
     * anymore, the common case will execute the loop only once.
     *
     * Subtle issue: "add_wait_queue_exclusive()" will be added
     * after any current non-exclusive waiters, and we know that
     * it will always _stay_ after any new non-exclusive waiters
     * because all non-exclusive waiters are added at the
     * beginning of the wait-queue. As such, it's ok to "drop"
     * our exclusiveness temporarily when we get woken up without
     * having to remove and re-insert us on the wait queue.
     */
    for (;;) {
        prepare_to_wait_exclusive(sk_sleep(sk), &wait,
                      TASK_INTERRUPTIBLE);
        release_sock(sk);
        if (reqsk_queue_empty(&icsk->icsk_accept_queue))
            timeo = schedule_timeout(timeo);
        sched_annotate_sleep();
        lock_sock(sk);
        err = 0;
        if (!reqsk_queue_empty(&icsk->icsk_accept_queue))
            break;
        err = -EINVAL;
        if (sk->sk_state != TCP_LISTEN)
            break;
        err = sock_intr_errno(timeo);
        if (signal_pending(current))
            break;
        err = -EAGAIN;
        if (!timeo)
            break;
    }
    finish_wait(sk_sleep(sk), &wait);
    return err;
}

From the code we can see that connect sends the connection request out, while accept waits for one; the interval from the moment connect starts until it returns (and until accept returns on the server side) is essentially the time taken by the three-way handshake. A rough way to observe this from userspace is sketched below.
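A minimal userspace sketch (an illustration only, not taken from the kernel or the MenuOS demo) that times connect() against a server assumed to be already listening on 127.0.0.1:5001 (hypothetical address and port); the measured interval is roughly one round-trip time plus SYN and SYN-ACK processing:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port   = htons(5001) };  /* hypothetical port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* connect() returns once the kernel has sent the SYN, received the
     * SYN-ACK and sent the final ACK, i.e. once the handshake completes. */
    int rc = connect(fd, (struct sockaddr *)&srv, sizeof(srv));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("connect() %s, took %.1f us\n", rc == 0 ? "ok" : "failed", us);

    close(fd);
    return 0;
}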

 

The above is only a rough analysis of the handshake. If the handshake alone is already this involved, it gives some idea of just how complex the TCP protocol is.


Original article: https://www.cnblogs.com/shadu/p/12104973.html
