
Difference between global maxconn and server maxconn in haproxy


I have a question about my haproxy config:

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 syslog emerg
    maxconn     4000
    quiet
    user        haproxy
    group       haproxy
    daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will 
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        http
    log         global
    option      abortonclose
    option      dontlognull
    option      httpclose
    option      httplog
    option      forwardfor
    option      redispatch
    timeout connect 10000 # default 10 second time out if a backend is not found
    timeout client 300000 # 5 min timeout for client
    timeout server 300000 # 5 min timeout for server
    stats       enable

listen  http_proxy  localhost:81

    balance     roundrobin
    option      httpchk GET /empty.html
    server      server1 myip:80 maxconn 15 check inter 10000
    server      server2 myip:80 maxconn 15 check inter 10000

As you can see it is straightforward, but I am a bit confused about how the maxconn property works.

In the listen block you have the global one and maxconn on the server. My thinking is this: the global one manages the total number of connections that haproxy, as a service, will queue or process at one time. If the number gets above that, does it kill the connection, or pool it in some Linux socket? I have no idea what happens if the number goes higher than 4000.

Then you have the server maxconn property set to 15. First off, I set that at 15 because my php-fpm, which this is forwarding to on a separate server, only has so many child processes it can use, so I make sure I am pooling the requests here instead of in php-fpm. Which I think is faster.

But back on the subject, my theory about this number is that each server in this block will only be sent 15 connections at a time. And then the connections will wait for an open server. If I had cookies on, the connections would wait for the CORRECT open server. But I don't.

So the questions are:

  • What happens if the global connections get above 4000? Do they die? Or pool in Linux somehow?

  • Are the global connections related to the server connections, other than the fact that you can't have a total number of server connections greater than global?

  • When figuring out the global connections, shouldn't it be the number of connections added up in the server section, plus a certain percentage for pooling? Obviously you have other restraints on the connections, but really it is how many you want to send to the proxies, right?

Thank you in advance.

1 Answer


    Willy got me an answer by email. I thought I would share it. His answers follow each quoted part of my question below.

    I have a question about my haproxy config:

    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
        log         127.0.0.1 syslog emerg
        maxconn     4000
        quiet
        user        haproxy
        group       haproxy
        daemon
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will 
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        mode        http
        log         global
        option      abortonclose
        option      dontlognull
        option      httpclose
        option      httplog
        option      forwardfor
        option      redispatch
        timeout connect 10000 # default 10 second time out if a backend is not found
        timeout client 300000 # 5 min timeout for client
        timeout server 300000 # 5 min timeout for server
        stats       enable

    listen  http_proxy  localhost:81

        balance     roundrobin
        option      httpchk GET /empty.html
        server      server1 myip:80 maxconn 15 check inter 10000
        server      server2 myip:80 maxconn 15 check inter 10000
    

    As you can see it is straightforward, but I am a bit confused about how the maxconn property works.

    In the listen block you have the global one and maxconn on the server.

    And there is also another one in the listen block which defaults to something like 2000.

    My thinking is this: the global one manages the total number of connections that haproxy, as a service, will queue or process at one time.

    Correct. It's the per-process max number of concurrent connections.

    If the number gets above that, does it kill the connection, or pool it in some Linux socket?

    The latter. It simply stops accepting new connections and they remain in the socket queue in the kernel. The number of queueable sockets is determined by the min of (net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and the listen block's maxconn).
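    As a rough sketch of what this means in practice (all values below are hypothetical, not taken from the config above): the kernel-side queue is bounded from both ends, by sysctls on the host and by HAProxy's backlog keyword, which sets the listen() backlog for a proxy explicitly:

        # Host side (hypothetical values); the effective accept queue is
        # capped by the smaller of these and the proxy's own backlog:
        #   sysctl -w net.core.somaxconn=1024
        #   sysctl -w net.ipv4.tcp_max_syn_backlog=1024

        listen  http_proxy  localhost:81
            maxconn 2000     # per-proxy limit; also the default backlog
            backlog 1024     # optional: set the listen() backlog explicitly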

    I have no idea what happens if the number goes higher than 4000.

    The excess connections wait for another one to complete before being accepted. However, as long as the kernel's queue is not saturated, the client does not even notice this, as the connection is accepted at the TCP level but is not processed. So the client only notices some delay to process the request. But in practice, the listen block's maxconn is much more important, since by default it's smaller than the global one. The listen's maxconn limits the number of connections per listener. In general it's wise to configure it for the number of connections you want for the service, and to configure the global maxconn to the max number of connections you let the haproxy process handle. When you have only one service, both can be set to the same value. But when you have many services, you can easily understand it makes a huge difference, as you don't want a single service to take all the connections and prevent the other ones from working.
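    To make the global/per-listen split concrete, here is a hedged sketch (the service names and numbers are invented) of one process hosting two services, sized so that neither can starve the other of connection slots:

        global
            maxconn 4000            # per-process ceiling

        listen  service_a  localhost:81
            maxconn 2500            # service A gets at most 2500 of the 4000

        listen  service_b  localhost:82
            maxconn 1500            # service B keeps its own share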

    Then you have the server maxconn property set to 15. First off, I set that at 15 because my php-fpm, which this is forwarding to on a separate server, only has so many child processes it can use, so I make sure I am pooling the requests here instead of in php-fpm. Which I think is faster.

    Yes, not only should it be faster, but it allows haproxy to find another available server whenever possible, and it also allows haproxy to kill the request in the queue if the client hits "stop" before the connection is forwarded to the server.
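    The behaviour described here pairs with the abortonclose option already present in the config above; adding a queue timeout (the 30s value is an assumption on my part, not something the original config sets) also keeps requests from waiting indefinitely for a server slot:

        defaults
            option  abortonclose    # drop a queued request if the client gives up first
            timeout queue 30s       # hypothetical: abandon a queued connection after 30s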

    But back on the subject, my theory about this number is that each server in this block will only be sent 15 connections at a time. And then the connections will wait for an open server. If I had cookies on, the connections would wait for the CORRECT open server. But I don't.

    That's exactly the principle. There is a per-proxy queue and a per-server queue. Connections with a persistence cookie go to the server queue and other connections go to the proxy queue. However since in your case no cookie is configured, all connections go to the proxy queue. You can look at the diagram doc/queuing.fig in haproxy sources if you want, it explains how/where decisions are taken.
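    For contrast, a sketch of what the cookie-based variant might look like (the cookie name SRVID and the s1/s2 values are invented for illustration); with this, persistent connections would wait in the per-server queue rather than the per-proxy queue:

        listen  http_proxy  localhost:81
            balance     roundrobin
            cookie      SRVID insert indirect nocache
            server      server1 myip:80 maxconn 15 check inter 10000 cookie s1
            server      server2 myip:80 maxconn 15 check inter 10000 cookie s2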

    So the questions are:

    • What happens if the global connections get above 4000? Do they die? Or pool in Linux somehow?

    They're queued in Linux. Once you overwhelm the kernel's queue, they're dropped in the kernel.

    • Are the global connections related to the server connections, other than the fact that you can't have a total number of server connections greater than global?

    No, the global and server connection settings are independent.

    • When figuring out the global connections, shouldn't it be the number of connections added up in the server section, plus a certain percentage for pooling? Obviously you have other restraints on the connections, but really it is how many you want to send to the proxies, right?

    You got it right. If your server's response time is short, there is nothing wrong with queueing thousands of connections to serve only a few at a time, because it substantially reduces the request processing time. Practically, establishing a connection nowadays takes about 5 microseconds on a gigabit LAN. So it makes a lot of sense to let haproxy distribute the connections as fast as possible from its queue to a server with a very small maxconn. I remember one gaming site queuing more than 30000 concurrent connections and running with a queue of 30 per server! It was an apache server, and apache is much faster with small numbers of connections than with large numbers. But for this you really need a fast server, because you don't want all your clients queued waiting for a connection slot because the server is waiting for a database, for instance.

    Also, something which works very well is to dedicate servers. If your site has many statics, you can direct the static requests to a pool of servers (or caches) so that you don't queue static requests on them and so that the static requests don't eat expensive connection slots.

    Hoping this helps, Willy
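    The dedicated-server idea at the end could look roughly like this (the frontend/backend names, the ACL, and the paths are all invented for illustration): static requests are routed to their own pool so they never consume the small per-server slots of the application servers:

        frontend  http_in
            bind *:81
            acl is_static path_beg /static/ /images/
            use_backend static_pool if is_static
            default_backend app_pool

        backend static_pool
            server  cache1 mycacheip:80 maxconn 200 check

        backend app_pool
            server  server1 myip:80 maxconn 15 check inter 10000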
