
A detailed guide to Tomcat load balancing and clustering with Apache and nginx proxies



Lab environment:



1. nginx proxying



nginx proxy:
eth0: 192.168.8.48
vmnet2 eth1: 192.168.10.10


tomcat server1:
vmnet2 eth0: 192.168.10.20


tomcat server2:
vmnet2 eth0: 192.168.10.30



# yum install -y nginx-1.8.1-1.el6.ngx.x86_64.rpm
# vim /etc/nginx/conf.d/default.conf 


location / {
    root  /web/htdocs;
    index index.jsp index.html index.htm;
}

# Dynamic requests (jsp/do/action URLs) go to tomcat server1
location ~* \.(jsp|do|action)$ {
    proxy_pass http://192.168.10.20;
}

# Static content goes to tomcat server2
location ~* \.(jpg|jpeg|gif|png|pdf|doc|rar|exe|zip)$ {
    proxy_pass http://192.168.10.30;
}
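The split above is per-URL proxying rather than true load balancing. If you want nginx itself to balance requests across both tomcats, a minimal sketch looks like the following (the pool name tomcat_pool is illustrative, and it assumes the tomcats serve HTTP on 8080, as the curl tests later in this article do):

upstream tomcat_pool {
    server 192.168.10.20:8080 weight=1;
    server 192.168.10.30:8080 weight=1;
    # ip_hash;    # uncomment for crude source-IP session affinity
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}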


2. Apache proxying

① HTTP-based proxying

# cd /etc/httpd/conf.d/
# vim mod_proxy.conf
Add the following:


ProxyRequests off
ProxyPreserveHost on


ProxyPass / http://192.168.10.20/
ProxyPassReverse / http://192.168.10.20/


<Location />
  Order Allow,Deny
  Allow from all
</Location>
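A quick check that the forward works (assuming httpd already listens on 192.168.8.48): the page returned should be TomcatA's, not httpd's.

# curl http://192.168.8.48/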


② AJP-based proxying

ProxyVia on
ProxyRequests off
ProxyPreserveHost on


ProxyPass / ajp://192.168.10.20/
ProxyPassReverse / ajp://192.168.10.20/


<Location />
  Order Allow,Deny
  Allow from all
</Location>
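AJP proxying only reaches the backend if tomcat's AJP connector is enabled; a stock server.xml ships with it listening on 8009:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />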


③ Configuring Apache with mod_jk



a. Install the apxs tool (shipped with httpd-devel)
# yum install -y httpd-devel
# tar xf tomcat-connectors-1.2.37-src.tar.gz 
# cd tomcat-connectors-1.2.37-src/native
# ./configure --with-apxs=/usr/sbin/apxs
# make && make install
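If the build succeeds, mod_jk.so lands in httpd's module directory (the path below is the 64-bit el6 default; adjust if your layout differs):

# ls -l /usr/lib64/httpd/modules/mod_jk.so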

b. Configure the related files
# cd /etc/httpd/conf.d
# vim mod_jk.conf


LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel notice
# Everything goes to the TomcatA worker, except /status/,
# which the status worker handles
JkMount /* TomcatA
JkMount /status/ statA
                                 
# vim workers.properties


worker.list=TomcatA,statA
# ajp13 worker pointing at tomcat server1's AJP connector
worker.TomcatA.type=ajp13
worker.TomcatA.port=8009
worker.TomcatA.host=192.168.10.20
worker.TomcatA.lbfactor=1
# status worker serving the /status/ page
worker.statA.type=status
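Restart httpd so the module and mount rules take effect:

# service httpd restart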

Visiting the following URL shows the status of tomcat server1:
http://192.168.8.48/status/

④ Configuring a load balancer with mod_jk



Configure the proxy server:
# cd /etc/httpd/conf.d
# cat mod_jk.conf 
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel notice
JkMount /* lbcA
JkMount /status/ statA


# cat workers.properties 
worker.list=lbcA,statA
# the two ajp13 members
worker.TomcatA.type=ajp13
worker.TomcatA.port=8009
worker.TomcatA.host=192.168.10.20
worker.TomcatA.lbfactor=1
worker.TomcatB.type=ajp13
worker.TomcatB.port=8009
worker.TomcatB.host=192.168.10.30
worker.TomcatB.lbfactor=1
# the lb worker that balances across them
worker.lbcA.type=lb
worker.lbcA.sticky_session=1
worker.lbcA.balance_workers=TomcatA,TomcatB
worker.statA.type=status


Configure the backend tomcat servers:
# cd /usr/local/tomcat/webapps/
# mkdir testapp
# cd testapp/
# mkdir -pv WEB-INF/{classes,lib}
mkdir: created directory `WEB-INF'
mkdir: created directory `WEB-INF/classes'
mkdir: created directory `WEB-INF/lib'
# vim index.jsp


On the TomcatA server, index.jsp:
<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA.chinasoft.com</font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("chinasoft.com","chinasoft.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
     </tr>
    </table>
  </body>
</html>


On the TomcatB server, index.jsp:
<%@ page language="java" %>
<html>
  <head><title>TomcatB</title></head>
  <body>
    <h1><font color="blue">TomcatB.chinasoft.com</font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("chinasoft.com","chinasoft.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
     </tr>
    </table>
  </body>
</html>


Test that both backends respond:
# curl http://192.168.10.20:8080/testapp/index.jsp
# curl http://192.168.10.30:8080/testapp/index.jsp
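Through the mod_jk balancer, fresh clients should land on alternating workers (each curl run starts a new session, so stickiness does not pin it; the proxy address 192.168.8.48 is assumed):

# for i in 1 2 3 4; do curl -s http://192.168.8.48/testapp/index.jsp | grep -o 'Tomcat[AB]\.chinasoft\.com'; done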



Configuring session affinity (note: it weakens the balancing effect):
For load balancing with session affinity, give the Engine container of each tomcat instance a jvmRoute attribute. The name must match the worker name used by the front-end balancer (the TomcatB behind worker.TomcatB.lbfactor=1 in workers.properties, for example). When mod_proxy does the balancing, sticky sessions additionally require stickysession=JSESSIONID (uppercase).
Define jvmRoute in each backend tomcat's server.xml:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="TomcatA">
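With jvmRoute set, tomcat appends the route name to the session ID, and the balancer reads that suffix to pin follow-up requests to the same worker. The session cookie then looks like this (value shortened for illustration):

Set-Cookie: JSESSIONID=6C1D8A3B9F24...E7.TomcatA; Path=/testapp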

⑤ Apache mod_proxy: HTTP-based load balancing



# mv mod_jk.conf mod_jk.conf.bak
# mv mod_proxy.conf.bak mod_proxy.conf
# vim mod_proxy.conf 


ProxyVia on
ProxyRequests off
ProxyPreserveHost on


<Proxy balancer://lb>
  BalancerMember http://192.168.10.20:8080 loadfactor=1 route=TomcatA
  BalancerMember http://192.168.10.30:8080 loadfactor=1 route=TomcatB
</Proxy>


ProxyPass / balancer://lb/
ProxyPassReverse / balancer://lb/


<Location />
  Order Allow,Deny
  Allow from all
</Location>
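This setup relies on mod_proxy, mod_proxy_http and mod_proxy_balancer; the stock CentOS 6 httpd.conf already loads them, but confirm if your build differs:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so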


⑥ Configuring session stickiness



ProxyVia on
ProxyRequests off
ProxyPreserveHost on


<Proxy balancer://lb>
  BalancerMember http://192.168.10.20:8080 loadfactor=1 route=TomcatA
  BalancerMember http://192.168.10.30:8080 loadfactor=1 route=TomcatB
</Proxy>


ProxyPass / balancer://lb/ stickysession=JSESSIONID
ProxyPassReverse / balancer://lb/


<Location />
  Order Allow,Deny
  Allow from all
</Location>


⑦ Configuring the management interface

ProxyVia on
ProxyRequests off
ProxyPreserveHost on


<Proxy balancer://lb>
  BalancerMember http://192.168.10.20:8080 loadfactor=1 route=TomcatA
  BalancerMember http://192.168.10.30:8080 loadfactor=1 route=TomcatB
</Proxy>


<Location /lbmanager>
  SetHandler balancer-manager
</Location>


ProxyPass /lbmanager !
ProxyPass / balancer://lb/
ProxyPassReverse / balancer://lb/


<Location />
  Order Allow,Deny
  Allow from all

</Location>
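Outside a lab, restrict who can reach the manager; an illustrative 2.2-style rule (the allowed subnet here is an assumption, adjust to your network):

<Location /lbmanager>
  SetHandler balancer-manager
  Order Deny,Allow
  Deny from all
  Allow from 192.168.8.0/24
</Location>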


Visiting http://192.168.8.48/lbmanager shows the balancer status page.



Cluster configuration (session replication):

Load balancer:

192.168.10.10:
# vim mod_proxy.conf


ProxyVia on
ProxyRequests off
ProxyPreserveHost on


<Proxy balancer://lb>
  BalancerMember http://192.168.10.20:8080 loadfactor=1 route=TomcatA
  BalancerMember http://192.168.10.30:8080 loadfactor=1 route=TomcatB
</Proxy>


<Location /lbmanager>
  SetHandler balancer-manager
</Location>


ProxyPass /lbmanager !
ProxyPass / balancer://lb/
ProxyPassReverse / balancer://lb/


<Location />
  Order Allow,Deny
  Allow from all
</Location>


192.168.10.20:


<Engine name="Catalina" defaultHost="localhost" jvmRoute="TomcatA">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
	 channelSendOptions="8">


  <Manager className="org.apache.catalina.ha.session.DeltaManager"
	   expireSessionsOnShutdown="false"
	   notifyListenersOnReplication="true"/>


  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
		address="228.1.0.4"
		port="45564"
		frequency="500"
		dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
	      address="192.168.10.20"
	      port="4000"
	      autoBind="100"
	      selectorTimeout="5000"
	      maxThreads="6"/>


    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>


  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
	 filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>


  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
	    tempDir="/tmp/war-temp/"
	    deployDir="/tmp/war-deploy/"
	    watchDir="/tmp/war-listen/"
	    watchEnabled="false"/>


  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
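After restarting tomcat on this node, the channel can be sanity-checked from the shell: the receiver should be listening on port 4000 and the host should have joined the multicast group:

# netstat -tunlp | grep 4000
# netstat -g | grep 228.1.0.4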



192.168.10.30:


<Engine name="Catalina" defaultHost="localhost" jvmRoute="TomcatB">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
	 channelSendOptions="8">


  <Manager className="org.apache.catalina.ha.session.DeltaManager"
	   expireSessionsOnShutdown="false"
	   notifyListenersOnReplication="true"/>


  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
		address="228.1.0.4"
		port="45564"
		frequency="500"
		dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
	      address="192.168.10.30"
	      port="4000"
	      autoBind="100"
	      selectorTimeout="5000"
	      maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>


  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>


  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>


  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>


Also add a route for the multicast group on each cluster node, so membership traffic leaves via the right interface:
route add -net 228.1.0.4 netmask 255.255.255.255 dev eth0


Then mark the application as distributable by adding this element to its web.xml:

<distributable/>
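For the test app that means testapp/WEB-INF/web.xml; a minimal sketch (servlet 2.5 schema, adjust the version to your tomcat):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <distributable/>
</web-app>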


With replication working, the session ID stays the same even when requests land on different backends.



Original article: http://blog.csdn.net/reblue520/article/details/51491680
