socketRead0 blocking for two and a half hours?

A few months back we saw certain http requests block on the local host for about two and a half hours before finally returning successfully. Investigation showed the problem was not on the remote server: essentially all of those two and a half hours were spent blocked in our own network environment, and the DNS-side logs showed the request only reached the DNS server two and a half hours later as well.

Someone in the geek talk group pointed out that older versions of glibc, after resolving a domain name, take the returned IP and do a reverse lookup on it; if the DNS servers don't support reverse resolution, the query gets forwarded from one DNS server to the next and can become slow. redhat fixed this in a 6u update. But that didn't match our situation.

The stack at the time showed the thread blocked on a socket read (I'm not sure whether this connection was reused from an earlier HttpURLConnection or freshly established with the remote side). Googling turned up others who had hit similar problems, hanging on the same code, suspected to be a bug in the hypervisor (kvm/vmware) or the JDK's networking layer (JDK 1.8.0_65). Since it couldn't be reproduced reliably, we didn't verify further; I'm just recording this strange problem here.

at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:335)
- locked <0x0000000c868c460> (a java.io.BufferedInputStream)
at sun.net.www.MeteredStream.read(MeteredStream.java:134)
- locked <0x0000000c868c420> (a sun.net.www.http.KeepAliveStream)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3336)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
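
We never found the root cause, but one practical safeguard is to keep HttpURLConnection from blocking indefinitely in the first place. A minimal sketch (my addition, not code from the original incident; the URL is a placeholder): setConnectTimeout and setReadTimeout turn a stalled connect or read, the latter being exactly where socketRead0 sits, into a SocketTimeoutException instead of a multi-hour hang. Note that neither timeout covers the DNS lookup itself.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutExample {

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        conn.setConnectTimeout(3000); // abort the TCP connect after 3s
        conn.setReadTimeout(5000);    // abort any blocking read (socketRead0) after 5s
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            // a stalled read now throws SocketTimeoutException instead of hanging
            while (in.read(buf) != -1) {
            }
        } finally {
            conn.disconnect();
        }
    }
}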

Problems with OneApm

1) We were not using its big-data edition. With the regular edition, under load testing, if the server side can't keep up with the throughput, the agent buffers large amounts of data and triggers full GCs; in our case it was occupying 1 GB of memory.

2) Its interception of HttpURLConnection can hit a NullPointerException, which kills the business thread:

java.lang.NullPointerException
 at com.blueware.monitor.bridge.reflect.ClassReflection$2.run(ClassReflection.java:30) ~[?:?]
 at com.blueware.monitor.bridge.reflect.ClassReflection$2.run(ClassReflection.java:28) ~[?:?]
 at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_65]
 at com.blueware.monitor.bridge.reflect.ClassReflection.loadClass(ClassReflection.java:28) ~[?:?]
 at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441) ~[?:1.8.0_65]
 at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2979) ~[?:1.8.0_65]
 at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:489) ~[?:1.8.0_65]
 at com...
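
Until the agent is fixed, the call site can shield itself. A defensive sketch (my addition; the helper name and fallback policy are illustrative, not from the original post): catch the unchecked exception the agent may inject so it degrades the request instead of killing the business thread.

import java.io.IOException;
import java.net.HttpURLConnection;

public class SafeHttp {

    // Hypothetical helper: returns a fallback code instead of letting an
    // agent-injected unchecked exception propagate up the business thread.
    public static int responseCodeOrDefault(HttpURLConnection conn, int fallback) {
        try {
            return conn.getResponseCode();
        } catch (IOException e) {
            return fallback; // ordinary I/O failure
        } catch (RuntimeException e) {
            // e.g. the NullPointerException thrown from the agent's
            // ClassReflection.loadClass in the trace above
            return fallback;
        }
    }
}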

Inconsistent JDK behavior for connection reset on mac and linux?

I ran into a case where the JDK (1.8.0_51-b16 on both platforms) behaves differently on mac and linux. It concerns how the client handles the redis server closing a connection: redis-server is configured with an idle timeout, and once a connection exceeds that idle time the server actively closes it. On my mac the behavior matches expectations: issuing another request on the timed-out connection throws a SocketException "Connection reset". On linux it does not.

To reproduce it simply: set timeout to 3 seconds on the redis server, and have the client sleep 4 seconds (or longer) after establishing the connection, letting the connection time out:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnRestTest {

    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("localhost", 6379));

        InputStream is = socket.getInputStream();
        OutputStream os = socket.getOutputStream();

        // sleep past the server's idle timeout (redis configured with timeout 3)
        Thread.sleep(4000);

        // send a PING command in the redis protocol (RESP)
        os.write("*1\r\n$4\r\nPING\r\n".getBytes());
        os.flush();

        // a healthy reply is "+PONG\r\n" (7 bytes); -1 means EOF
        for (int i = 0; i < 7; i++) {
            int r = is.read();
            System.out.print(r == -1 ? String.valueOf(r) : String.valueOf((char) r));
        }

        socket.close();
    }
}

Running the code above on mac produces the following exception (comment out the Thread.sleep(4000) and you get "+PONG\r\n" instead):

Exception in thread "main" java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:209)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.net.SocketInputStream.read(SocketInputStream.java:223)
    at com.wacai.common.redis.io.Test.main(Test.java:23)    

On linux, however, it prints seven consecutive "-1"s; no "Connection reset" is thrown.

Capture with tcpdump on each platform. On mac:

 ➜  sudo tcpdump -i lo0 port 6379

23:13:43.398003 IP localhost.63864 > localhost.6379: Flags [S], seq 1658673677, win 65535, options [mss 16344,nop,wscale 5,nop,nop,TS val 1063188198 ecr 0,sackOK,eol], length 0
23:13:43.398064 IP localhost.6379 > localhost.63864: Flags [S.], seq 355134851, ack 1658673678, win 65535, options [mss 16344,nop,wscale 5,nop,nop,TS val 1063188198 ecr 1063188198,sackOK,eol], length 0
23:13:43.398075 IP localhost.63864 > localhost.6379: Flags [.], ack 1, win 12759, options [nop,nop,TS val 1063188198 ecr 1063188198], length 0
23:13:43.398085 IP localhost.6379 > localhost.63864: Flags [.], ack 1, win 12759, options [nop,nop,TS val 1063188198 ecr 1063188198], length 0

23:13:47.063640 IP localhost.6379 > localhost.63864: Flags [F.], seq 1, ack 1, win 12759, options [nop,nop,TS val 1063191852 ecr 1063188198], length 0
23:13:47.063671 IP localhost.63864 > localhost.6379: Flags [.], ack 2, win 12759, options [nop,nop,TS val 1063191852 ecr 1063191852], length 0

23:13:48.403144 IP localhost.63864 > localhost.6379: Flags [P.], seq 1:15, ack 2, win 12759, options [nop,nop,TS val 1063193184 ecr 1063191852], length 14
23:13:48.403255 IP localhost.6379 > localhost.63864: Flags [R], seq 355134853, win 0, length 0  

On linux:

$ sudo tcpdump -i lo port 6379

00:26:13.303233 IP localhost.34609 > localhost.6379: Flags [S], seq 1094106697, win 43690, options [mss 65495,sackOK,TS val 8462190 ecr 0,nop,wscale 7], length 0
00:26:13.303272 IP localhost.6379 > localhost.34609: Flags [S.], seq 2755621045, ack 1094106698, win 43690, options [mss 65495,sackOK,TS val 8462190 ecr 8462190,nop,wscale 7], length 0
00:26:13.303298 IP localhost.34609 > localhost.6379: Flags [.], ack 1, win 342, options [nop,nop,TS val 8462190 ecr 8462190], length 0

00:26:17.037992 IP localhost.6379 > localhost.34609: Flags [F.], seq 1, ack 1, win 342, options [nop,nop,TS val 8465925 ecr 8462190], length 0
00:26:17.038450 IP localhost.34609 > localhost.6379: Flags [.], ack 2, win 342, options [nop,nop,TS val 8465926 ecr 8465925], length 0

00:26:18.305591 IP localhost.34609 > localhost.6379: Flags [P.], seq 1:15, ack 2, win 342, options [nop,nop,TS val 8467193 ecr 8465925], length 14
00:26:18.305630 IP localhost.6379 > localhost.34609: Flags [R], seq 2755621047, win 0, length 0     

Compared with linux, the mac capture has one extra ack from server to client right after the connection is established. About 3 seconds later, the connection has exceeded the idle timeout, so the server sends a fin to the client and the client replies with an ack. The client program then writes to the socket (the segment flagged P in the tcpdump output), and the server responds with a reset.

The connection state transitions on linux also look as expected:

$ netstat -antp | grep 6379
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:6379          127.0.0.1:34607         ESTABLISHED -
tcp6       0      0 127.0.0.1:34607         127.0.0.1:6379          ESTABLISHED 3683/java

tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:6379          127.0.0.1:34607         FIN_WAIT2   -
tcp6       1      0 127.0.0.1:34607         127.0.0.1:6379          CLOSE_WAIT  3683/java

Given that the server does send a reset, mac's behavior of throwing the exception seems the more correct one. Perhaps an implementation detail differs in the JDK's networking layer (or the underlying OS's TCP stack), or the tcpdump captures already explain the cause and I just can't see it.

Graceful shutdown and session draining

I recently noticed that the paid nginx plus has a concept called session draining: when nginx is used as a reverse proxy and you want to stop a backend application gracefully, you first stop routing new requests to that node at the nginx layer while keeping already-established connections alive, and only stop the application once those connections have all finished.

The free nginx/tengine can actually do this too. In an earlier post on gracefully taking a backend node offline behind an nginx reverse proxy, I tested a very slow request against a backend tomcat: even though tengine's health check had already marked the backend node as unavailable, tengine did not abort the in-flight request; it waited for the backend response to complete and then returned it to the client.

Gracefully shutting down an application is not something the load balancer or reverse proxy layer can fully solve on its own; the backend application itself has to handle many details. One example: how to shut down Actors in an orderly way. See the Governor pattern I shared earlier: all worker Actors are created by the Governor, and each gets a priority assigned at creation time.

When the application shuts down, send a PoisonPill to the master; sample code:

// Trigger shutdown when your container (or micro-container) observes the
// application-close event; in the simplest case, from a shutdown hook.
def shutdown(): Unit = {
    gracefulStopActors()
    system.terminate()
    Await.result(system.whenTerminated, Duration.Inf)
    springContext.close()
}

private def gracefulStopActors(): Unit = {
    master ! PoisonPill
    signal.acquire() // block until the governor signals that all actors have stopped
}

In the governor role, when the Terminated message arrives, stop the child actors in order:

...
case Terminated(actor) if actor == observable => {
  logger.info("===receive terminated message from: " + actor)
  // stop all managed child actors in order, then pipe the result back to self
  stopAll(managedActors.toList.sorted) pipeTo self
}
...

// stop all child actors sequentially, in priority order
protected def stopAll(kids: List[OrderedActorRef]): Future[Any] = {
    kids match {
      case first :: Nil =>
        logger.info("===graceful stop: " + first.actor)
        gracefulStop(first.actor, stopTimeout).flatMap { _ =>
          Future { AllDead }
        }
      case first :: rest =>
        logger.info("===graceful stop: " + first.actor)
        gracefulStop(first.actor, stopTimeout).flatMap { _ =>
          stopAll(rest)
        }
      case Nil =>
        Future { AllDead }
    }
}

Another example: the application has a thread that keeps polling redis for data; how do you terminate that thread gracefully when the application shuts down? The code below is a simple simulation of the scenario. Suppose the application runs without any container and is stopped simply by a kill signal: then the ShutdownHook must block, waiting (for some bounded time) for the worker thread to finish, before the process can exit safely.

public class Test {
    static volatile boolean running = true;

    public static void main(String[] args) {
        final Thread worker = new Thread() {
            public void run() {
                while (running) {
                    try {
                        System.out.println("i'm running.1");
                        Thread.sleep(1000);
                        System.out.println("i'm running.2");
                        Thread.sleep(1000);
                        System.out.println("i'm running.3");
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        };
        worker.start();

        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                System.out.println("receive shutdown signal.");
                running = false;

                // block until the worker notices the flag and exits its loop
                while (worker.isAlive()) {
                    System.out.println("waiting for worker thread finish.");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                }
                System.out.println("worker thread finished!");
            }
        });
    }
}
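
The hook above waits for the worker indefinitely. Since the point is to wait only for a bounded time, a variant (my sketch, not from the original code) uses Thread.join with a timeout and falls back to interrupt():

Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        System.out.println("receive shutdown signal.");
        running = false;
        try {
            worker.join(5000); // wait at most 5 seconds for the loop to exit
        } catch (InterruptedException ignored) {
        }
        if (worker.isAlive()) {
            worker.interrupt(); // wake the worker out of Thread.sleep as a last resort
        }
    }
});

With the interrupt fallback, the worker's sleep throws InterruptedException, the loop re-checks running (now false), and the thread exits.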