Preface
In a Tomcat cluster, when one node fails, how do the other nodes take over the failed node's session data? The solution presented here implements session sharing with MSM + Memcached.
Background
MSM
MSM (Memcached Session Manager) is a highly available session-sharing solution for Tomcat. Besides reading session data quickly from local memory (sticky sessions only), it can also store and retrieve sessions in Memcached to achieve high availability.
How It Works
Sticky session mode
# The local Tomcat session is the primary copy; the session in Memcached is the backup
The MSM installed in Tomcat keeps sessions in local memory. When a request finishes, if the corresponding session does not yet exist locally in Memcached (i.e., this was the user's first request), the session is copied to Memcached. When the next request for that session arrives, Tomcat uses its local session; after the request is processed, any session changes are synchronized to Memcached, keeping the data consistent.
When one Tomcat in the cluster goes down, the next request is routed to another Tomcat. The Tomcat handling that request knows nothing about the session, so it looks the session up in Memcached, updates it, and saves it locally. When the request finishes, the modified session is sent back to Memcached as the backup.
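The sticky-mode flow above can be condensed into a minimal Java sketch. This is illustrative only, not MSM's actual code: the class and field names are invented, and a plain HashMap stands in for Memcached.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of sticky mode: the local map is the primary store,
// the "memcached" map is the backup. All names here are hypothetical.
public class StickySessionSketch {
    static Map<String, String> localSessions = new HashMap<>(); // Tomcat's own memory
    static Map<String, String> memcached = new HashMap<>();     // stands in for Memcached

    // Handle one request for the given session id; "update" simulates
    // the request modifying the session (null = read-only request).
    static String handleRequest(String sessionId, String update) {
        String session = localSessions.get(sessionId);
        if (session == null) {
            // This node does not know the session (first request, or
            // failover from a crashed node): restore the backup copy.
            session = memcached.getOrDefault(sessionId, "new-session");
        }
        if (update != null) {
            session = update;                   // request changed the session
        }
        localSessions.put(sessionId, session);  // primary copy stays local
        memcached.put(sessionId, session);      // back up the change after the request
        return session;
    }

    public static void main(String[] args) {
        handleRequest("s1", "user=scholar");
        localSessions.clear();                  // simulate this Tomcat crashing
        // A failover node can still serve the session from the backup:
        System.out.println(handleRequest("s1", null));
    }
}
```

Clearing `localSessions` and calling `handleRequest` again mimics the failover path described above: the new node restores the session from the backup, so the caller sees no interruption.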
Non-sticky session mode
# The local Tomcat session is only a pass-through copy; Memcached holds the primary and backup sessions
When a request arrives, the backup session is loaded into the local container; if loading the backup fails, the session is loaded from the primary.
After the request is processed, session changes are synchronized to Memcached and the local Tomcat session is cleared.
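The non-sticky flow can be sketched the same way. Again this is an illustrative model, not MSM's real API: two HashMaps stand in for the primary and backup memcached nodes.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of non-sticky mode: Memcached holds the primary and
// backup copies; the local container is only a transient pass-through.
public class NonStickySessionSketch {
    static Map<String, String> primary = new HashMap<>(); // primary memcached node
    static Map<String, String> backup = new HashMap<>();  // backup memcached node
    static Map<String, String> local = new HashMap<>();   // Tomcat's transient copy

    static String handleRequest(String sessionId, String update) {
        // Load the backup session into the local container;
        // fall back to the primary if the backup is unavailable.
        String session = backup.get(sessionId);
        if (session == null) {
            session = primary.getOrDefault(sessionId, "new-session");
        }
        local.put(sessionId, session);
        if (update != null) {
            session = update;              // request changed the session
        }
        // After the request: sync changes to Memcached, clear the local copy.
        primary.put(sessionId, session);
        backup.put(sessionId, session);
        local.remove(sessionId);
        return session;
    }
}
```

Because the local copy is cleared after every request, any node can serve the next request for the same session straight from Memcached, which is why no load-balancer stickiness is required in this mode.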
Implementation
Lab topology
# OS: CentOS 6.6
Installing and configuring nginx
Resolve dependencies:

[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y
[root@scholar ~]# yum install openssl-devel pcre-devel -y
[root@scholar ~]# groupadd -r nginx
[root@scholar ~]# useradd -r -g nginx nginx
[root@scholar ~]# tar xf nginx-1.6.3.tar.gz
[root@scholar ~]# cd nginx-1.6.3
[root@scholar nginx-1.6.3]# ./configure \
>   --prefix=/usr/local/nginx \
>   --sbin-path=/usr/sbin/nginx \
>   --conf-path=/etc/nginx/nginx.conf \
>   --error-log-path=/var/log/nginx/error.log \
>   --http-log-path=/var/log/nginx/access.log \
>   --pid-path=/var/run/nginx/nginx.pid \
>   --lock-path=/var/lock/nginx.lock \
>   --user=nginx \
>   --group=nginx \
>   --with-http_ssl_module \
>   --with-http_flv_module \
>   --with-http_stub_status_module \
>   --with-http_gzip_static_module \
>   --http-client-body-temp-path=/usr/local/nginx/client/ \
>   --http-proxy-temp-path=/usr/local/nginx/proxy/ \
>   --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
>   --http-uwsgi-temp-path=/usr/local/nginx/uwsgi \
>   --http-scgi-temp-path=/usr/local/nginx/scgi \
>   --with-pcre
[root@scholar nginx-1.6.3]# make && make install

Provide a SysV init script for nginx:

[root@scholar ~]# vim /etc/rc.d/init.d/nginx
# Create the file /etc/rc.d/init.d/nginx with the following content:

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Configure nginx:

[root@scholar ~]# vim /etc/nginx/nginx.conf

upstream www.scholar.com {
    server 172.16.10.123:8080;
    server 172.16.10.124:8080;
}

server {
    listen       80;
    server_name  www.scholar.com;

    location / {
        proxy_pass http://www.scholar.com;
        index  index.jsp index.html index.htm;
    }
}

[root@scholar ~]# service nginx start
Starting nginx:                                            [  OK  ]

Test page:

[root@node1 local]# cd tomcat/webapps/
[root@node1 webapps]# mkdir -pv test/WEB-INF/{classes,lib}
[root@node1 webapps]# cd test/
[root@node1 test]# vim index.jsp

<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA.scholar.com</font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("scholar.com","scholar.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

# On the other node, replace TomcatA with TomcatB and set the color to blue
[root@node1 test]# service tomcat start
At this point the session information differs between the two nodes; next we configure MSM to share sessions.
Installing memcached
# Resolve dependencies
[root@scholar ~]# yum groupinstall "Development Tools" "Server Platform Development" -y

# Install libevent
# memcached depends on the libevent API, so install it first
[root@scholar ~]# tar xf libevent-2.0.22-stable.tar.gz
[root@scholar ~]# cd libevent-2.0.22-stable
[root@scholar libevent-2.0.22-stable]# ./configure --prefix=/usr/local/libevent
[root@scholar libevent-2.0.22-stable]# make && make install
[root@scholar ~]# echo "/usr/local/libevent/lib" > /etc/ld.so.conf.d/libevent.conf
[root@scholar ~]# ldconfig

# Install and configure memcached
[root@scholar ~]# tar xf memcached-1.4.24.tar.tar
[root@scholar ~]# cd memcached-1.4.24
[root@scholar memcached-1.4.24]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent
[root@scholar memcached-1.4.24]# make && make install
[root@scholar ~]# vim /etc/init.d/memcached

#!/bin/bash
#
# Init file for memcached
#
# chkconfig: - 86 14
# description: Distributed memory caching daemon
#
# processname: memcached
# config: /etc/sysconfig/memcached

. /etc/rc.d/init.d/functions

## Default variables
PORT="11211"
USER="nobody"
MAXCONN="1024"
CACHESIZE="64"
RETVAL=0

prog="/usr/local/memcached/bin/memcached"
desc="Distributed memory caching"
lockfile="/var/lock/subsys/memcached"

start() {
    echo -n $"Starting $desc (memcached): "
    daemon $prog -d -p $PORT -u $USER -c $MAXCONN -m $CACHESIZE
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && touch $lockfile || failure
    echo
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $desc (memcached): "
    killproc $prog
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success && rm -f $lockfile || failure
    echo
    return $RETVAL
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $desc ($prog): "
    killproc $prog -HUP
    RETVAL=$?
    [ $RETVAL -eq 0 ] && success || failure
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    condrestart)
        [ -e $lockfile ] && restart
        RETVAL=$?
        ;;
    reload)
        reload
        ;;
    status)
        status $prog
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|condrestart|status}"
        RETVAL=1
esac

exit $RETVAL

Copy the required jar files into the lib directory of each Tomcat node's installation:

[root@node1 ~]# cd msm/
[root@node1 msm]# ls
javolution-5.4.3.1.jar               msm-javolution-serializer-1.8.1.jar
memcached-session-manager-1.8.1.jar  spymemcached-2.10.2.jar
memcached-session-manager-tc7-1.8.1.jar
[root@node1 msm]# cp * /usr/local/tomcat/lib/
# Perform the steps above on every Tomcat node

[root@node1 msm]# vim /usr/local/tomcat/conf/server.xml

<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved" />
  </GlobalNamingResources>

  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase" />
      </Realm>

      <Host name="www.scholar.com" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Context path="/test" docBase="/usr/local/tomcat/webapps/test/" reloadable="true">
          <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                   memcachedNodes="n1:172.16.10.126:11211,n2:172.16.10.212:11211"
                   failoverNodes="n1"
                   requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                   transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory" />
        </Context>
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="scholar_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>

      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>
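The Manager above runs MSM in its default sticky mode: with failoverNodes="n1", sessions are normally backed up to n2, and n1 is used only if n2 fails. As a hedged sketch of the alternative, non-sticky mode is enabled through MSM's documented sticky attribute; the fragment below is illustrative (attribute values would need tuning for a real deployment), not a configuration tested in this article:

```xml
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:172.16.10.126:11211,n2:172.16.10.212:11211"
         sticky="false"
         sessionBackupAsync="false"
         lockingMode="auto"
         requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
         transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory" />
```

In non-sticky mode no failoverNodes are defined, since every memcached node is used equally rather than held in reserve.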
Access test
As shown, session sharing is now working. Next we simulate a failure of the TomcatB node to see whether the session changes.
Although TomcatB's failure caused the user's request to be scheduled to the TomcatA node, the session ID did not change: every node in the session cluster can obtain the full session information, so user access continues uninterrupted.
If the n2 (memcached) node fails, will the session move to the other memcached node? Let's try.
The session has moved to n1, and the session ID is unchanged. This completes Tomcat session sharing based on MSM + Memcached.
The End
That concludes this walkthrough of Tomcat session sharing based on MSM + Memcached.
Original post (Chinese): http://my.oschina.net/eddylinux/blog/530904