You can profile a Java application in Linux with `perf`.
Install `perf`:
sudo yum install perf
Profile a Java application by attaching `perf` to its PID (1722 here) with the following command:
sudo perf record -g -p 1722
and press Ctrl+C once enough samples have been collected.
Create a report with the following command:
sudo perf report
and navigate it. Note that without a `/tmp/perf-<pid>.map` file (generated by a tool such as perf-map-agent), JIT-compiled Java frames show up as raw addresses, and the JVM needs `-XX:+PreserveFramePointer` for accurate call graphs.
References:
http://www.evanjones.ca/java-native-leak-bug.html
http://superuser.com/questions/380733/how-can-i-install-perf-a-linux-peformance-monitoring-tool
Sunday, May 29, 2016
How to analyze a native memory leak in Java
Analyzing a memory leak on the heap is easy, but how can you analyze a native memory leak in Java?
You can use `jemalloc` for this purpose.
Install `jemalloc`:
wget https://github.com/jemalloc/jemalloc/releases/download/4.2.0/jemalloc-4.2.0.tar.bz2
bzip2 -d jemalloc-4.2.0.tar.bz2
tar xvf jemalloc-4.2.0.tar
cd jemalloc-4.2.0
./configure --prefix=/home/izeye/programs/jemalloc --enable-prof
make
make install
Profile your application simply by running it with the following environment variables set:
export LD_PRELOAD=/home/izeye/programs/jemalloc/lib/libjemalloc.so
export MALLOC_CONF=prof_leak:true,lg_prof_sample:0,prof_final:true
Create a report from the final heap profile (dumped on exit as `jeprof.<pid>.<seq>.f.heap` because of `prof_final:true`) with the following command:
/home/izeye/programs/jemalloc/bin/jeprof --show_bytes --svg `which java` jeprof.65301.0.f.heap > result.svg
Now you can spot the leak by examining the resulting SVG.
In my case, the culprit was `Deflater`. More precisely, it was me: I forgot to call `end()` to release the native resources.
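For reference, here is a minimal sketch of the fix: call `end()` in a `finally` block so the native zlib memory is always released (this is an illustrative compress helper, not my actual code):

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterExample {

    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[1024];
            while (!deflater.finished()) {
                int count = deflater.deflate(buffer);
                out.write(buffer, 0, count);
            }
            return out.toByteArray();
        } finally {
            // Without this call, the native zlib buffers leak.
            deflater.end();
        }
    }

}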
References:
http://www.evanjones.ca/java-native-leak-bug.html
https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/
https://github.com/jemalloc/jemalloc/releases
https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
sh: dot: command not found
If you get the following error:
sh: dot: command not found
try the following command:
sudo yum install graphviz -y
Reference:
http://johnjianfang.blogspot.kr/2009/10/sh-dot-command-not-found.html
Thursday, May 19, 2016
How to find logs by the OOM killer
When your application has been killed by the OOM killer, you can find its log entries with the following command:
sudo grep oom /var/log/*
Reference:
http://unix.stackexchange.com/questions/128642/debug-out-of-memory-with-var-log-messages
Check Kafka offset lag
To check Kafka's offset lag, use the following command:
$ ./bin/kafka-consumer-offset-checker.sh --broker-info --group test-group --zookeeper localhost:2181 --topic test-topic
[2016-05-19 16:57:30,771] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Group Topic Pid Offset logSize Lag Owner
test test-topic 0 294386 4349292 4054906 none
BROKER INFO
0 -> 1.2.3.4:9092
$
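As the warning says, `ConsumerOffsetChecker` is deprecated. With the newer Java consumer API (Kafka 0.10.1+ for `endOffsets()`), you can compute the same lag programmatically; here is a minimal sketch, assuming the topic, partition, and group from the output above:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetLagChecker {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("test-topic", 0);
            // Lag = latest offset in the log - the group's committed offset
            OffsetAndMetadata committed = consumer.committed(partition);
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(Collections.singletonList(partition));
            long lag = endOffsets.get(partition) - (committed == null ? 0 : committed.offset());
            System.out.println("Lag for " + partition + ": " + lag);
        }
    }

}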
Wednesday, May 18, 2016
Setup JMX in Kafka
To set up JMX in Kafka, start the broker with the `JMX_PORT` environment variable set:
JMX_PORT=10000 ./bin/kafka-server-start.sh config/server.properties >> kafka.log 2>&1 &
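Once the broker is up, you can attach JConsole to port 10000, or read a metric programmatically with the standard `javax.management` API. A minimal sketch (the `MessagesInPerSec` MBean name is the usual broker metric, but verify it against your Kafka version):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaJmxClient {

    public static void main(String[] args) throws Exception {
        // Connects to the JMX port set via JMX_PORT=10000 above.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:10000/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object rate = connection.getAttribute(name, "OneMinuteRate");
            System.out.println("MessagesInPerSec (1m rate): " + rate);
        }
    }

}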
Monday, May 16, 2016
How to clean up `marked for deletion` in Kafka
After deleting a topic as follows:
$ ./bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-topic
Topic my-topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
$
the topic will still be listed as `marked for deletion`:
$ ./bin/kafka-topics.sh --list --zookeeper localhost:2181
my-topic - marked for deletion
$
To clean it up, add the following line to `config/server.properties`:
delete.topic.enable=true
and restart Kafka.
The topic will disappear shortly after the restart.
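With newer Kafka versions (0.11+), you can also delete a topic programmatically via the admin client instead of the shell script; a minimal sketch, again assuming `delete.topic.enable=true` on the broker:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;

public class TopicDeleter {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient adminClient = AdminClient.create(props)) {
            // Blocks until the broker acknowledges the deletion.
            adminClient.deleteTopics(Collections.singletonList("my-topic")).all().get();
        }
    }

}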
Logstash doesn't work with Kafka output
With the following configuration:
input {
  file {
    path => "/home/izeye/programs/logstash-2.3.2/some.log"
    start_position => beginning
  }
}
output {
  kafka {
    bootstrap_servers => "1.2.3.4:9092"
    topic_id => "some-topic"
    codec => line
  }
  # stdout {
  # }
}
the following command didn't work:
./bin/logstash -f some.conf >> logstash.log 2>&1 &
Trying with a Java client didn't work, either.
In my case, the cause was an advertised host name that was unreachable.
So I modified `config/server.properties` as follows:
advertised.host.name=1.2.3.4
After that, both Logstash and the Java client worked.
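For anyone reproducing this, a minimal Java producer to test with could look like the following sketch (the topic and broker address mirror the configuration above):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTest {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Must resolve to the broker's advertised host name; an unreachable
        // advertised.host.name is exactly what broke both clients above.
        props.put("bootstrap.servers", "1.2.3.4:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("some-topic", "test message"));
        }
    }

}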
Sunday, May 15, 2016
Kafka commands
The following commands are for quick reference:
./bin/zookeeper-server-start.sh config/zookeeper.properties >> zookeeper.log 2>&1 &
./bin/kafka-server-start.sh config/server.properties >> kafka.log 2>&1 &
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
./bin/kafka-topics.sh --list --zookeeper localhost:2181
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
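The console producer and consumer also have Java equivalents; for instance, a minimal consumer sketch using the 0.9+ Java client (the topic and group names are illustrative):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerTest {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        // Same effect as --from-beginning for a group with no committed offsets
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }

}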
Reference:
http://kafka.apache.org/documentation.html#quickstart
Tuesday, May 10, 2016
connect() failed (110: Connection timed out) while connecting to upstream
If you encounter the following error in Nginx's `logs/error.log`:
2016/05/10 22:59:24 [error] 20722#0: *56830 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.2.3.4, server: localhost, request: "GET /api/v1/events/xxx HTTP/1.1", upstream: "http://127.0.0.1:8080/api/v1/events/xxx", host: "api.izeye.com", referrer: "https://www.izeye.com/"
you can fix it by enabling keepalive connections between Nginx and Tomcat as follows:
upstream backend {
    server localhost:8080;
    keepalive 32;
}

server {
    listen 80;
    ...
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
    ...
}