To install MySQL via Homebrew on Mac, do as follows:
$ brew install mysql
$ mysql.server start
Tuesday, November 29, 2016
Remove MariaDB installed via Homebrew on Mac
To remove MariaDB installed via Homebrew on Mac, do as follows:
$ sudo mysql.server stop
$ brew remove mariadb
$ brew cleanup
Reference:
https://gist.github.com/vitorbritto/0555879fe4414d18569d
Thursday, November 24, 2016
Windows equivalent for grep and wc -l
If you want to find the number of established connections on 8080 in Windows, you can use the following command:
netstat -an | findstr 8080 | findstr ESTABLISHED | find /c /v ""
References:
http://superuser.com/questions/300815/grep-equivalent-for-windows-7
http://superuser.com/questions/959036/what-is-the-windows-equivalent-of-wc-l
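On Unix-like systems, the corresponding filter-then-count pattern uses `grep` and `wc -l`. A minimal sketch, demonstrated on canned sample lines rather than a live `netstat` run:

```shell
# Filter lines matching 8080 and ESTABLISHED, then count them,
# using sample lines in place of live netstat output.
printf 'TCP 1.2.3.4:8080 ESTABLISHED\nTCP 1.2.3.4:8080 TIME_WAIT\nTCP 1.2.3.4:22 ESTABLISHED\n' \
  | grep 8080 | grep ESTABLISHED | wc -l
# prints 1
```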
Tuesday, November 22, 2016
Get PGID in Mac or Linux
To get PGID in Mac or Linux, use the following command:
ps -o pgid= 1115 | tr -d ' '
Reference:
http://stackoverflow.com/questions/392022/best-way-to-kill-all-child-processes
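The referenced thread uses the PGID to signal a whole process group. A minimal sketch, using the current shell's PID (`$$`) in place of the `1115` above; the `kill` line is shown commented out since running it would terminate the group:

```shell
# Get the PGID of the current shell (substitute any PID for $$).
pgid=$(ps -o pgid= $$ | tr -d ' ')
echo "$pgid"
# Send SIGTERM to every process in that group (note the leading dash):
# kill -- -"$pgid"
```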
Friday, November 4, 2016
List files in a jar file
To list files in a jar file, do as follows:
unzip -l build/libs/spring-boot-throwaway-branches-1.0.jar
Thursday, November 3, 2016
Get CPU count in Mac
To get CPU count in Mac, do as follows:
$ sysctl -n hw.ncpu
8
$
Reference:
http://stackoverflow.com/questions/1715580/how-to-discover-number-of-logical-cores-on-mac-os-x
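`sysctl -n hw.ncpu` is macOS-specific. On Linux, the usual counterparts (an assumption, not from the original post) are:

```shell
# Logical CPU count on Linux:
getconf _NPROCESSORS_ONLN   # available on most Unix-like systems
nproc                       # GNU coreutils
```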
Wednesday, November 2, 2016
Get the start and expiry dates of an SSL certificate in a server
To get the start and expiry dates of an SSL certificate in a server, do as follows:
openssl s_client -connect 1.2.3.4:443 | openssl x509 -noout -dates
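If you only need to know whether the certificate is about to expire, `openssl x509 -checkend` works on the same certificate data. A sketch against a saved certificate file (`server.crt` is a hypothetical filename):

```shell
# Exit status 0 means the certificate in server.crt is still valid
# 30 days (2592000 seconds) from now; 1 means it expires before then.
openssl x509 -checkend 2592000 -noout -in server.crt \
  && echo "certificate is valid for at least 30 more days" \
  || echo "certificate expires within 30 days"
```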
Monday, October 31, 2016
/dev/random vs. /dev/urandom
There's no noticeable difference in speed between '/dev/random' and '/dev/urandom' on macOS, as follows:
$ time head -n 1 /dev/random
(binary output omitted)
real 0m0.010s
user 0m0.001s
sys 0m0.008s
$ time head -n 1 /dev/urandom
(binary output omitted)
real 0m0.010s
user 0m0.001s
sys 0m0.008s
$
but there's a big difference on CentOS, as follows:
$ time head -n 1 /dev/random
(binary output omitted)
real 0m43.838s
user 0m0.001s
sys 0m0.004s
$ time head -n 1 /dev/urandom
(binary output omitted)
real 0m0.004s
user 0m0.001s
sys 0m0.002s
$
Reference:
https://docs.oracle.com/cd/E13209_01/wlcp/wlss30/configwlss/jvmrand.html
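Note that `head -n 1` stops at the first newline byte it happens to read, so the amount of random data consumed differs from run to run. A fixed-size read with `dd` gives a more comparable timing (a sketch; the `/dev/random` line is commented out because it can block on Linux for a long time):

```shell
# Read a fixed 16 KiB so timings are comparable across devices.
time dd if=/dev/urandom of=/dev/null bs=1k count=16
# time dd if=/dev/random of=/dev/null bs=1k count=16   # may block on Linux
```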
Wednesday, October 19, 2016
Index documents and search them in Elasticsearch
To index documents and search them in Elasticsearch, do as follows:
curl -XDELETE localhost:9200/persons?pretty=true
curl -XPUT localhost:9200/persons?pretty=true -d '
{
"mappings": {
"persons": {
"properties": {
"firstName": {
"type": "string"
},
"lastName": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}'
curl -XPOST localhost:9200/persons/persons?pretty -d '{firstName: "Johnny", lastName: "Lim", age: 20}'
curl -XPOST localhost:9200/persons/persons?pretty -d '{firstName: "John", lastName: "Kim", age: 30}'
curl localhost:9200/persons/persons/_search?pretty=true -d '{query: {match_all: {}}}'
curl localhost:9200/persons/persons/_search?pretty=true -d '{query: {term: {firstName: "Johnny"}}}'
curl localhost:9200/persons/persons/_search?pretty=true -d '{query: {term: {lastName: "Lim"}}}'
curl localhost:9200/persons/persons/_search?pretty=true -d '{query: {term: {age: 20}}}'
curl localhost:9200/persons/persons/_search?pretty=true -d '{query: {match: {firstName: "Johnny"}}}'
Note that a term query doesn't work with an analyzed string.
For analyzed strings, use a match query.
Thursday, October 13, 2016
Run a command on a remote server using SSH
To run a command on a remote server using SSH, do as follows:
ssh izeye@test ls
Monday, October 10, 2016
Set up Checkstyle with properties in IntelliJ
When you set up Checkstyle with properties in IntelliJ, use absolute paths for properties as follows:
/Users/izeye/IdeaProjects/java-utils/config/checkstyle/checkstyle-header.txt
/Users/izeye/IdeaProjects/java-utils/config/checkstyle/checkstyle-suppressions.xml
Wednesday, October 5, 2016
Use RestTemplate with gzip
To use RestTemplate with gzip, add the following dependency:
compile("org.apache.httpcomponents:httpclient")
Reference:
http://stackoverflow.com/questions/34415144/cannot-parse-gzip-encoded-response-with-resttemplate-from-spring-web
Thursday, September 29, 2016
-bash: _get_comp_words_by_ref: command not found
When you try to use Spring Boot CLI shell completion, you might encounter the following error:
$ . ~/.sdkman/candidates/springboot/current/shell-completion/bash/spring
$ spring -bash: _get_comp_words_by_ref: command not found
-bash: [: -ne: unary operator expected
$
Install 'bash-completion' as follows:
brew install bash-completion
brew tap homebrew/completions
And add the following to '.bash_profile':
if [ -f $(brew --prefix)/etc/bash_completion ]; then
. $(brew --prefix)/etc/bash_completion
fi
Apply the configuration as follows:
source ~/.bash_profile
and now you can see completion candidates as follows:
$ spring
grab init jar shell uninstall war
help install run test version
$
References:
http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#getting-started-installing-the-cli
http://davidalger.com/development/bash-completion-on-os-x-with-brew/
Saturday, September 24, 2016
Run a Spring Boot application on a different port with Gradle 'bootRun'
To run a Spring Boot application on a different port with Gradle 'bootRun', do as follows:
SERVER_PORT=28080 ./gradlew bootRun
Friday, September 23, 2016
error: unmappable character for encoding MS949
When you generate Javadoc with Gradle, you might encounter the following error:
error: unmappable character for encoding MS949
Add the following configuration to your `build.gradle`:
javadoc {
options.encoding = 'UTF-8'
}
Reference:
http://stackoverflow.com/questions/25912190/how-to-set-an-encoding-for-the-javadoc-in-gradle
Tuesday, September 20, 2016
Error:java: Bad service configuration file, or exception thrown while constructing Processor object: javax.annotation.processing.Processor: Provider org.springframework.boot.configurationprocessor.ConfigurationMetadataAnnotationProcessor not found
When you run a test of Spring Boot in IntelliJ, you might encounter the following error:
Error:java: Bad service configuration file, or exception thrown while constructing Processor object: javax.annotation.processing.Processor: Provider org.springframework.boot.configurationprocessor.ConfigurationMetadataAnnotationProcessor not found
I didn't dig into its root cause, but I could work around the problem by commenting out the content of the file `spring-boot-tools/spring-boot-configuration-processor/src/main/resources/META-INF/services/javax.annotation.processing.Processor` as follows:
#org.springframework.boot.configurationprocessor.ConfigurationMetadataAnnotationProcessor
Monday, September 19, 2016
How to change the default Java class comment in IntelliJ
To change the default Java class comment in IntelliJ, do as follows:
IntelliJ IDEA -> Preferences... -> Editor -> File and Code Templates -> Includes -> File Header
Run textsum TensorFlow model
To run the `textsum` TensorFlow model, do as follows:
git clone https://github.com/tensorflow/models.git
mkdir textsum_test
cd textsum_test/
ln -s ../models/textsum/data data
ln -s ../models/textsum/ textsum
touch WORKSPACE
bazel build -c opt textsum/...
bazel-bin/textsum/seq2seq_attention \
--mode=train \
--article_key=article \
--abstract_key=abstract \
--data_path=data/data \
--vocab_path=data/vocab \
--log_root=textsum/log_root \
--train_dir=textsum/log_root/train
You can change max run steps with `--max_run_steps`.
Reference:
https://github.com/tensorflow/models/tree/master/textsum
Sunday, September 18, 2016
AssertionError: Empty filelist.
When you run the `textsum` TensorFlow model as follows:
bazel-bin/textsum/seq2seq_attention \
--mode=train \
--article_key=article \
--abstract_key=abstract \
--data_path=data/training-* \
--vocab_path=data/vocab \
--log_root=textsum/log_root \
--train_dir=textsum/log_root/train
you might encounter the following error:
AssertionError: Empty filelist.
Replace `--data_path=data/training-*` with `--data_path=data/data` as follows:
bazel-bin/textsum/seq2seq_attention \
--mode=train \
--article_key=article \
--abstract_key=abstract \
--data_path=data/data \
--vocab_path=data/vocab \
--log_root=textsum/log_root \
--train_dir=textsum/log_root/train
Reference:
https://github.com/tensorflow/models/issues/370
ImportError: cannot import name pywrap_tensorflow
When you import `tensorflow`, you might encounter the following error:
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
ImportError: cannot import name pywrap_tensorflow
>>>
If you're in the source directory of TensorFlow, moving out of it will solve the issue.
Reference:
https://github.com/tensorflow/tensorflow/issues/3217
Install TensorFlow from source on Mac
To install TensorFlow from source on Mac, do as follows:
git clone https://github.com/tensorflow/tensorflow
brew install bazel swig
sudo easy_install -U six
sudo easy_install -U numpy
sudo easy_install wheel
sudo easy_install ipython
./configure
bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip install /tmp/tensorflow_pkg/tensorflow-0.10.0-py2-none-any.whl
Reference:
https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#installing-from-sources
Check pip version
To check pip version, do as follows:
$ pip -V
pip 8.1.2 from /Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg (python 2.7)
$
Install pip in Mac
To install pip in Mac, do as follows:
sudo easy_install pip
Reference:
http://stackoverflow.com/questions/17271319/installing-pip-on-mac-os-x
Monday, September 12, 2016
How to exclude modules from Maven test
To exclude modules from Maven test, do as follows:
./mvnw clean test -pl \!:spring-boot-loader-tools,\!:spring-boot-cli,\!:spring-boot-gradle-tests
Reference:
http://stackoverflow.com/questions/5539348/how-to-exclude-a-module-from-a-maven-reactor-build
Friday, September 9, 2016
Run MariaDB as a service using Homebrew in Mac
To run MariaDB as a service using Homebrew in Mac, use the following command:
brew services start mariadb
Install Homebrew on Mac
To install Homebrew on Mac, use the following command:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Reference:
http://brew.sh/
Wednesday, September 7, 2016
Create a new space in OS X El Capitan
Press `F3` to open Mission Control.
Press the `+` button to create a new space.
References:
https://support.apple.com/kb/PH18757?locale=en_US
https://support.apple.com/kb/PH22059?locale=en_US&viewlocale=en_US
Concatenate in React
To concatenate in React, do as follows:
<a href={"/restaurants?landmarkId=" + this.props.landmark.id}>{this.props.landmark.name}</a>
Reference:
http://stackoverflow.com/questions/21668025/react-jsx-access-props-in-quotes
Warning: Unknown DOM property class. Did you mean className?
When you use React, you might encounter the following warning:
Warning: Unknown DOM property class. Did you mean className?
Change `class` to `className`.
Set a custom data.sql in Spring Boot
To set a custom `data.sql` in Spring Boot, use `spring.datasource.data` as follows:
spring.datasource.data=classpath*:test/trust_data_20160907.sql
Back up only table data in MySQL
To back up only table data in MySQL, use `--no-create-info` as follows:
mysqldump -u trust -p db_trust --no-create-info > trust_data_20160907.sql
Reference:
http://stackoverflow.com/questions/5109993/mysqldump-data-only
Tuesday, September 6, 2016
Uncaught TypeError: client(...).done is not a function
When you use `rest` as follows:
client({
method: 'GET',
path: '/api/employees'
}).done(response => {
this.setState({
employees: response.entity._embedded.employees
});
});
you might get the following error:
Uncaught TypeError: client(...).done is not a function
Changing `done()` to `then()` as follows fixes the error:
client({
method: 'GET',
path: '/api/employees'
}).then(response => {
this.setState({
employees: response.entity._embedded.employees
});
});
Reference:
https://github.com/cujojs/rest
Uncaught TypeError: React.render is not a function
When you run the following code:
const React = require('react');
ReactDOM.render(
<App />,
document.getElementById('react')
);
you might encounter the following error:
Uncaught TypeError: React.render is not a function
Add `react-dom` as follows:
sudo npm install react-dom --save
Import and use it as follows:
const React = require('react');
const ReactDOM = require('react-dom');
ReactDOM.render(
<App />,
document.getElementById('react')
);
Reference:
http://stackoverflow.com/questions/26627665/error-with-basic-react-example-uncaught-typeerror-undefined-is-not-a-function
Module build failed: SyntaxError: Unexpected token
When you run `webpack` with React, you might encounter the following error:
Module build failed: SyntaxError: Unexpected token
Install `babel-preset-react` as follows:
sudo npm install babel-preset-react --save
Then add `query` to `webpack.config.js` as follows:
module.exports = {
...
module: {
loaders: [
...
{
test: /\.js$/,
loader: 'babel',
query: {
presets: ['react']
}
}
]
}
};
Reference:
http://stackoverflow.com/questions/33460420/babel-loader-jsx-syntaxerror-unexpected-token
Monday, September 5, 2016
Module build failed: ReferenceError: Promise is not defined
If you use webpack as follows:
webpack ./js/entry.js ./js/bundle.js
you might encounter the following error:
ERROR in /Users/izeye/~/css-loader!./css/style.css
Module build failed: ReferenceError: Promise is not defined
at LazyResult.async (/Users/izeye/node_modules/css-loader/node_modules/postcss/lib/lazy-result.js:237:31)
at LazyResult.then (/Users/izeye/node_modules/css-loader/node_modules/postcss/lib/lazy-result.js:141:21)
at processCss (/Users/izeye/node_modules/css-loader/lib/processCss.js:199:5)
at Object.module.exports (/Users/izeye/node_modules/css-loader/lib/loader.js:24:2)
@ /Users/izeye/~/style-loader!/Users/izeye/~/css-loader!./css/style.css 4:14-91
Upgrade your Node.js to the latest version.
Reference:
https://github.com/webpack/css-loader/issues/145
Upgrade Node.js to the latest version
To upgrade Node.js to the latest version, use `n` as follows:
sudo npm install n -g
n stable
Reference:
http://stackoverflow.com/questions/10075990/upgrading-node-js-to-latest-version
Convert `.mp4` to `.mp3` using ffmpeg
To convert `.mp4` to `.mp3` using ffmpeg, do as follows:
ffmpeg -i moon_20160905.mp4 -vn -acodec libmp3lame moon_20160905.mp3
Reference:
http://stackoverflow.com/questions/3032280/how-to-convert-mp4-to-mp3-in-java
Install ffmpeg in Ubuntu
To install ffmpeg in Ubuntu, do as follows:
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg
./configure --disable-yasm --enable-libmp3lame
make
sudo make install
Unknown encoder 'libmp3lame'
When you try to convert `.mp4` to `.mp3` with ffmpeg as follows:
ffmpeg -i moon_20160905.mp4 -vn -acodec libmp3lame moon_20160905.mp3
you might encounter the following error:
Unknown encoder 'libmp3lame'
Add `--enable-libmp3lame` when configuring ffmpeg as follows:
./configure --disable-yasm --enable-libmp3lame
ERROR: libmp3lame >= 3.98.3 not found
When you install ffmpeg with the following command in Ubuntu:
./configure --disable-yasm --enable-libmp3lame
you might encounter the following error:
ERROR: libmp3lame >= 3.98.3 not found
Install `libmp3lame-dev` with the following command:
sudo apt-get install libmp3lame-dev
Friday, September 2, 2016
Prevent RestTemplate from URL-encoding a URL value that has already been encoded
To prevent `RestTemplate` from URL-encoding a URL value that has already been encoded, use `UriComponentsBuilder` as follows:
uri = UriComponentsBuilder.fromHttpUrl(url).build(true).toUri();
echoed = restTemplate.getForObject(uri, String.class);
assertThat(echoed).isEqualTo(message);
Note the invocation of `build(true)`, which marks the URI components as already encoded.
You can see the full sample code in: https://github.com/izeye/spring-boot-throwaway-branches/blob/rest/src/test/java/learningtest/org/springframework/web/client/RestTemplateTests.java
Reference:
http://stackoverflow.com/questions/28182836/resttemplate-to-not-escape-url
Tuesday, August 30, 2016
Use tab characters in text files in IntelliJ
IntelliJ keeps replacing tab characters with spaces, and I'm not sure whether that's its default or I configured it that way before.
Anyway, to use tab characters in text files in IntelliJ, do as follows:
IntelliJ IDEA -> Preferences... -> Code Style -> General -> Default Indent Options -> Use tab character
Friday, August 26, 2016
Back up a database in MySQL
To back up a database in MySQL, use the following command:
mysqldump -u trust -p db_trust > trust.sql
Back up a table in MySQL
To back up a table in MySQL, use the following command:
mysqldump -u trust -p db_trust message > message.sql
Reference:
http://stackoverflow.com/questions/6682916/how-to-take-backup-of-a-single-table-in-the-mysql-database
Error: Cannot find module 'express'
If you encounter the following error:
$ node server.js
module.js:340
throw err;
^
Error: Cannot find module 'express'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/Users/izeye/IdeaProjects/samples-reactjs/samples/docs/tutorial/server.js:3:15)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
$
install `express` manually as follows:
npm install express
or use `package.json`'s `dependencies` as follows:
{
"name": "tutorial",
"version": "1.0.0",
"description": "",
"main": "server.js",
"dependencies": {
"body-parser": "^1.4.3",
"express": "^4.4.5"
},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node server.js"
},
"author": "",
"license": "ISC"
}
and run `npm install`.
Thursday, August 25, 2016
Delete documents having a specific field with a null value in Elasticsearch
To delete documents having a specific field with a null value in Elasticsearch, install `delete-by-query` as follows:
./bin/plugin install delete-by-query
and restart Elasticsearch.
Now you can delete them as follows:
curl -XDELETE http://localhost:9200/answer/_query?pretty -d '
{
"query": {
"constant_score": {
"filter": {
"missing": {
"field" : "body"
}
}
}
}
}'
References:
https://www.elastic.co/guide/en/elasticsearch/plugins/current/plugins-delete-by-query.html
https://www.elastic.co/guide/en/elasticsearch/plugins/current/delete-by-query-usage.html
Search documents having a specific field with a null value in Elasticsearch
To search documents having a specific field with a null value in Elasticsearch, do as follows:
curl http://localhost:9200/answer/_search?pretty -d '
{
"query": {
"constant_score": {
"filter": {
"missing": {
"field" : "body"
}
}
}
}
}'
Reference:
https://www.elastic.co/guide/en/elasticsearch/guide/current/_dealing_with_null_values.html
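Note that the `missing` filter shown above applies to Elasticsearch 2.x; in Elasticsearch 5.0 and later it was removed, and the equivalent is a `bool` query combining `must_not` with `exists` (a sketch, not verified against any specific version):

```json
{
  "query": {
    "bool": {
      "must_not": {
        "exists": { "field": "body" }
      }
    }
  }
}
```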
Wednesday, August 24, 2016
Remove a file from a Git repository but keep it in the local file system
To remove a file from a Git repository but keep it in the local file system, do as follows:
git rm --cached some_project.iml
git commit
You can `git push` if needed.
Reference:
http://stackoverflow.com/questions/1143796/remove-a-file-from-a-git-repository-without-deleting-it-from-the-local-filesyste
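The behavior can be sanity-checked in a throwaway repository (a sketch; the file name is taken from the example above):

```shell
# Create a throwaway repo, track a file, then untrack it with --cached.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
echo 'content' > some_project.iml
git add some_project.iml
git commit -qm 'track file'
git rm --cached some_project.iml
git commit -qm 'untrack file'
test -f some_project.iml && echo 'still on disk'
git ls-files   # no longer lists some_project.iml
```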
Tuesday, August 23, 2016
Sync from a local directory to a remote directory with rsync
To sync from a local directory to a remote directory with rsync, do as follows:
rsync -avr . 1.2.3.4::R/home/izeye/workspaces/izeye/logs/
Saturday, August 20, 2016
Autowire beans to a non-bean object in Spring framework
To autowire beans to a non-bean object in Spring framework, do as follows:
@Autowired
private ApplicationContext applicationContext;
@Bean
public List<AnswerEngine> answerEngines() {
AutowireCapableBeanFactory factory =
this.applicationContext.getAutowireCapableBeanFactory();
List<AppProperties.AnswerEngineSpec> answerEngineSpecs =
this.properties.getAnswerEngineSpecs();
Collections.sort(
answerEngineSpecs,
(o1, o2) -> Integer.compare(o1.getEngineOrder(), o2.getEngineOrder()));
List<AnswerEngine> answerEngines = new ArrayList<>();
for (AppProperties.AnswerEngineSpec spec : answerEngineSpecs) {
AnswerEngine answerEngine =
(AnswerEngine) ClassUtils.createInstance(spec.getEngineClass());
factory.autowireBean(answerEngine);
answerEngines.add(answerEngine);
}
return answerEngines;
}
Note that it gets `AutowireCapableBeanFactory` from `ApplicationContext` and invokes `autowireBean()` on each manually created instance.
Apply a plugin to some sub-projects in Gradle
To apply a plugin to some sub-projects in Gradle, do as follows:
configure(subprojects.findAll {it.name == 'ask-anything-api' || it.name == 'ask-anything-answer-module-ua-analyzer'}) {
apply plugin: 'checkstyle'
checkstyle {
toolVersion = '7.0'
configFile = rootProject.file("config/checkstyle/checkstyle.xml")
configProperties = [
'headerLocation': 'config/checkstyle/checkstyle-header.txt',
'suppressionsLocation': 'config/checkstyle/checkstyle-suppressions.xml'
]
}
}
Thursday, August 18, 2016
Use Octotree with GitHub Enterprise
To use Octotree with GitHub Enterprise, add your GitHub Enterprise URL (say `https://github.ctb.com/`) to `GitHub Enterprise URLs`.
You need a personal access token for GitHub, so go to the following link (Replace `github.ctb.com` with your domain):
https://github.ctb.com/settings/tokens/new
and create one with the following option:
repo Full control of private repositories
Provide the token to `Site access token`.
Reference:
https://github.com/buunguyen/octotree#access-token
Install Octotree for easy navigation in GitHub
To install Octotree, which makes navigating GitHub easier, go to the Chrome Web Store and add `Octotree` to Chrome.
You can navigate a GitHub repository with a tree view and download any file.
Reference:
https://github.com/buunguyen/octotree
Change tab size for views in GitHub
To change tab size for views in GitHub, use the `ts` parameter as follows:
https://github.com/izeye/ask-anything/blob/master/src/main/java/com/ctb/askanything/Application.java?ts=2
You can change tab size for a repository by adding a `.editorconfig` file as follows:
[*.{java,js,html}]
indent_style = tab
indent_size = 2
Now all `.java`, `.js`, and `.html` files are displayed with a tab size of 2.
References:
https://github.com/tiimgreen/github-cheat-sheet#adjust-tab-space
http://stackoverflow.com/questions/8833953/how-to-change-tab-size-on-github
Install Elasticsearch head plugin
To install Elasticsearch head plugin, do as follows:
./bin/plugin install mobz/elasticsearch-head
Restarting Elasticsearch is not necessary.
Reference:
https://github.com/mobz/elasticsearch-head
Thursday, August 11, 2016
Synchronize a Git repository to another Git repository automatically
To synchronize a Git repository to another Git repository automatically, create a script called `sync2bitbucket.sh` as follows:
#!/bin/sh
cd /home/izeye/workspaces/izeye/ask-anything
git pull >> /home/izeye/workspaces/izeye/ask-anything/sync2bitbucket.log 2>&1
git push bitbucket >> /home/izeye/workspaces/izeye/ask-anything/sync2bitbucket.log 2>&1
and add an execution permission to it as follows:
chmod +x sync2bitbucket.sh
Add a job to Cron as follows:
crontab -e
* * * * * /home/izeye/workspaces/izeye/ask-anything/sync2bitbucket.sh
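If a `git pull` or `git push` occasionally takes longer than a minute, the per-minute cron runs can overlap; one common guard (assuming `flock` from util-linux is available, which is not covered in the original setup) is to wrap the crontab entry like this:

```
* * * * * flock -n /tmp/sync2bitbucket.lock /home/izeye/workspaces/izeye/ask-anything/sync2bitbucket.sh
```

With `-n`, a run simply skips if the previous one still holds the lock.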
`git push` to Bitbucket with SSH key
To `git push` to Bitbucket with SSH key, create an SSH key as follows:
ssh-keygen -t rsa -b 4096 -C "izeye@naver.com"
Add `~/.ssh/id_rsa.pub` to your Bitbucket settings as follows:
Bitbucket settings -> SSH keys -> Add key
Change the remote URL as follows:
git remote set-url bitbucket git+ssh://git@bitbucket.org/ctb-return/ask-anything.git
and now you can `git push` as follows:
git push bitbucket
References:
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
http://stackoverflow.com/questions/8588768/git-push-username-password-how-to-avoid
Monday, August 8, 2016
Gradle jacocoTestReport SKIPPED
If you run `jacocoTestReport` without running `test` first, it will be skipped as follows:
$ ./gradlew clean jacocoTestReport
:clean
:compileJava
:processResources
:classes
:jacocoTestReport SKIPPED
BUILD SUCCESSFUL
Total time: 9.444 secs
This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.14.1/userguide/gradle_daemon.html
$
Do `test` first as follows:
$ ./gradlew clean test jacocoTestReport
:clean
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
objc[7046]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
:jacocoTestReport
BUILD SUCCESSFUL
Total time: 29.017 secs
This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.14.1/userguide/gradle_daemon.html
$
Reference:
http://stackoverflow.com/questions/20032366/running-jacocoreport
Change Jenkins port
To change the Jenkins port, do as follows:
java -jar jenkins.war --httpPort=18080
Reference:
https://wiki.jenkins-ci.org/display/JENKINS/Starting+and+Accessing+Jenkins
Remove Elasticsearch license error
When you run Elasticsearch, you might encounter the following error:
[2016-08-08 18:23:17,625][ERROR][license.plugin.core ] [Icarus]
#
# License will expire on [Sunday, September 04, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - marvel
# - The agent will stop collecting cluster and indices metrics
# - The agent will stop automatically cleaning indices older than [marvel.history.duration]
If you don't have a license for Marvel, remove `marvel-agent` and `license` plugins from Elasticsearch as follows:
$ ./bin/plugin remove marvel-agent
-> Removing marvel-agent...
Removed marvel-agent
$ ./bin/plugin remove license
-> Removing license...
Removed license
$
and remove `marvel` plugin from Kibana as follows:
$ ./bin/kibana plugin --remove marvel
Removing marvel...
$
Now starting again should clear the error.
Reference:
https://www.elastic.co/guide/en/marvel/current/installing-marvel.html
Fix Elasticsearch max file descriptors warning
You might encounter the following warning when you run Elasticsearch:
[2016-08-08 13:48:17,065][WARN ][env ] [Michael Nowman] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
Check your max file descriptors as follows:
$ ulimit -n
1024
$ ulimit -Hn
4096
$
Change the value as follows:
$ sudo vi /etc/security/limits.conf
izeye soft nofile 65536
izeye hard nofile 65536
Check again as follows:
$ ulimit -n
65536
$ ulimit -Hn
65536
$
The warning should disappear now.
Reference:
http://stackoverflow.com/questions/21515463/how-to-increase-maximum-file-open-limit-ulimit-in-ubuntu
Tuesday, August 2, 2016
java.lang.ClassNotFoundException: com.puppycrawl.tools.checkstyle.CheckStyleTask
When using Gradle 2.0, the following error occurred:
java.lang.ClassNotFoundException: com.puppycrawl.tools.checkstyle.CheckStyleTask
Upgrading to Gradle 2.14.1 solves the issue.
Reference:
https://github.com/checkstyle/checkstyle/issues/2107
Friday, July 29, 2016
Share Code style schemes in IntelliJ
To share Code style schemes in IntelliJ, do as follows:
File -> Export Settings... -> Select None -> Code style schemes -> OK
Reference:
https://www.jetbrains.com/help/idea/2016.2/exporting-and-importing-settings.html
Install spaCy
To install spaCy, do as follows:
Johnnyui-MacBook-Pro:~ izeye$ python -m pip install -U pip virtualenv
...
Johnnyui-MacBook-Pro:~ izeye$ virtualenv .env -p python2
Running virtualenv with interpreter /Library/Frameworks/Python.framework/Versions/2.7/bin/python2
New python executable in /Users/izeye/.env/bin/python
Installing setuptools, pip, wheel...done.
Johnnyui-MacBook-Pro:~ izeye$ source .env/bin/activate
(.env) Johnnyui-MacBook-Pro:~ izeye$ pip install spacy
(.env) Johnnyui-MacBook-Pro:~ izeye$ python
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 12:54:16)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import spacy
>>>
(.env) Johnnyui-MacBook-Pro:~ izeye$ python -m spacy.en.download
Downloading...
Downloaded 532.28MB 100.00% 0.24MB/s eta 0s
archive.gz checksum/md5 OK
Model successfully installed.
(.env) Johnnyui-MacBook-Pro:~ izeye$ python -c "import spacy; spacy.load('en'); print('OK')"
OK
Reference:
https://spacy.io/docs#getting-started
You can also locate the installed package and run spaCy's test suite as follows:
(.env) Johnnyui-MacBook-Pro:~ izeye$ python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"
/Users/izeye/.env/lib/python2.7/site-packages/spacy
(.env) Johnnyui-MacBook-Pro:~ izeye$ python -m pip install -U pytest
...
(.env) Johnnyui-MacBook-Pro:~ izeye$ python -m pytest /Users/izeye/.env/lib/python2.7/site-packages/spacy --vectors --model --slow
...
(.env) Johnnyui-MacBook-Pro:~ izeye$
Checkstyle RightCurly alone with IntelliJ
To make Checkstyle's RightCurly check (with the `alone` option) happy in IntelliJ, do as follows:
File -> Settings... -> Code Style -> Java -> Wrapping and Braces
* 'if()' statement
'else' on new line -> true
* 'try' statement
'catch' on new line -> true
'finally' on new line -> true
Then run Reformat Code...
ERROR: virtualenv is not compatible with this system or executable
I got the following errors:
$ virtualenv .env
Using base prefix '/Users/izeye/anaconda'
New python executable in /Users/izeye/.env/bin/python
ERROR: The executable /Users/izeye/.env/bin/python is not functioning
ERROR: It thinks sys.prefix is '/Users/izeye' (should be '/Users/izeye/.env')
ERROR: virtualenv is not compatible with this system or executable
$
I just gave up on using Python 3 and worked around it with Python 2 as follows:
$ virtualenv .env -p python2
Running virtualenv with interpreter /Library/Frameworks/Python.framework/Versions/2.7/bin/python2
New python executable in /Users/izeye/.env/bin/python
Installing setuptools, pip, wheel...done.
$
Add @author tags for Javadoc comments in IntelliJ
To add @author tags for Javadoc comments in IntelliJ, do as follows:
Preferences... -> File and Code Templates -> Includes -> File Header
/**
* Fill me.
*
* @author Johnny Lim
*/
Wednesday, July 27, 2016
Use CheckStyle in IntelliJ
To use CheckStyle in IntelliJ, do as follows:
File -> Settings... -> CheckStyle
Add a CheckStyle configuration file and activate it.
Open `Checkstyle` window and click `Check Project`.
Apply a Copyright comment to all Java source files in IntelliJ
To apply a Copyright comment to all Java source files in IntelliJ, do as follows:
IntelliJ IDEA -> Preferences...
Copyright -> Copyright Profiles
Add a profile as follows:
```
Copyright 2016 the original author or authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
In `Copyright`, select one for `Default project copyright`.
Add a scope.
Finally, apply the Copyright as follows:
`src/main/java` -> Update Copyright...
`src/test/java` -> Update Copyright...
Reference:
https://www.jetbrains.com/help/idea/2016.1/generating-and-updating-copyright-notice.html
Monday, July 25, 2016
IllegalArgumentException[No custom metadata prototype registered for type [licenses], node like missing plugins]
If you encounter the following error:
[2016-07-25 16:23:24,384][INFO ][discovery.zen ] [Alex Wilder] failed to send join request to master [{Surtur}{8l2V-7MmSvKyC4oChA1gPA}{1.2.3.4}{1.2.3.4:9300}], reason [RemoteTransportException[[Surtur][1.2.3.4:9300][internal:discovery/zen/join]]; nested: IllegalStateException[failure when sending a validation request to node]; nested: RemoteTransportException[[Alex Wilder][1.2.3.5:9300][internal:discovery/zen/join/validate]]; nested: IllegalArgumentException[No custom metadata prototype registered for type [licenses], node like missing plugins]; ]
install the missing plugins as follows:
./bin/plugin install license
./bin/plugin install marvel-agent
Thursday, July 21, 2016
AWK fields and `if` sample
This is an AWK fields and `if` sample:
cat logs/user_agent/user_agent.log | awk 'BEGIN { FS = "\t" }; { if ($1 == "1234") print $2 }' > user_agent_pc.txt
References:
https://www.gnu.org/software/gawk/manual/html_node/Field-Separators.html
http://www.thegeekstuff.com/2010/02/awk-conditional-statements/
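A self-contained variant (with inline sample input, so it can be run anywhere) behaves the same way: the field separator is set to a tab, and field 2 is printed only when field 1 matches:

```shell
# Two tab-separated records; only the one whose first field is 1234 passes the filter.
printf '1234\tMozilla/5.0\n5678\tcurl/7.49.0\n' \
  | awk 'BEGIN { FS = "\t" }; { if ($1 == "1234") print $2 }'
# prints: Mozilla/5.0
```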
Show histogram on live objects in Java heap
To show histogram on live objects in Java heap, do as follows:
jmap -histo:live 1234
Monday, July 18, 2016
Disable replicas of a new index in Elasticsearch
To disable replicas of a new index in Elasticsearch, do as follows:
curl -XPUT 'localhost:9200/_template/logstash_template' -d '
{
"template" : "logstash-*",
"settings" : {
"number_of_replicas" : 0
}
}'
Reference:
http://stackoverflow.com/questions/24553718/updating-the-default-index-number-of-replicas-setting-for-new-indices
Disable replicas of an existing index in Elasticsearch
To disable replicas of an existing index in Elasticsearch, do as follows:
curl -XPUT 'localhost:9200/logstash-2016.07.18/_settings' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html
Setup Elasticsearch cluster
Add the following configuration to `config/elasticsearch.yml` in each instance of Elasticsearch:
cluster:
name: some-log
network:
host:
- _eth1_
- _local_
discovery.zen.ping.unicast.hosts: ["1.2.3.4", "1.2.3.5", "1.2.3.6", "1.2.3.7", "1.2.3.8"]
discovery.zen.minimum_master_nodes: 1
Note that the value of `discovery.zen.minimum_master_nodes` above (1) is used only for simplicity. For this 5-node cluster, the recommended value is 3 (a majority of nodes):
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
Thursday, July 14, 2016
Change Elasticsearch heap size
To change Elasticsearch heap size, use the `ES_HEAP_SIZE` environment variable as follows:
ES_HEAP_SIZE=8g ./bin/elasticsearch
Reference:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
Wednesday, July 13, 2016
Install Marvel
Install Marvel into Elasticsearch and Kibana as follows:
cd programs/elasticsearch-2.3.3
./bin/plugin install license
./bin/plugin install marvel-agent
cd ../kibana-4.5.1-linux-x64
./bin/kibana plugin --install elasticsearch/marvel/latest
Restart Elasticsearch and Kibana.
Check the following URL:
http://localhost:5601/app/marvel
Reference:
https://www.elastic.co/kr/downloads/marvel
Show all documents in an index in Elasticsearch
To show all documents in an index in Elasticsearch, do as follows:
$ curl 'localhost:9200/logstash/_search?pretty=true&q=*:*'
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 1,
"max_score" : 1.0,
"hits" : [ {
"_index" : "logstash",
"_type" : "logstash",
"_id" : "AVXjVaB4eRCf5XO_Qkwg",
"_score" : 1.0,
"_source" : {
"firstName" : "Johnny",
"lastName" : "Lim"
}
} ]
}
}
$
Reference:
http://stackoverflow.com/questions/8829468/elasticsearch-query-to-return-all-records
Monday, July 11, 2016
ZooKeeper Hello, world!
Install ZooKeeper as follows:
tar zxvf zookeeper-3.4.8.tar.gz
Set up and run ZooKeeper as follows:
cd zookeeper-3.4.8
Create `conf/zoo.cfg` with the following content:
tickTime=2000
dataDir=/Users/izeye/zookeeper-data
clientPort=2181
./bin/zkServer.sh start
Test ZooKeeper as follows:
./bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit
getAcl path
close
connect host:port
[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 2] create /zk_test my_data
Created /zk_test
[zk: localhost:2181(CONNECTED) 3] ls /
[zookeeper, zk_test]
[zk: localhost:2181(CONNECTED) 4] get /zk_test
my_data
cZxid = 0x11
ctime = Mon Jul 11 21:03:22 KST 2016
mZxid = 0x11
mtime = Mon Jul 11 21:03:22 KST 2016
pZxid = 0x11
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
[zk: localhost:2181(CONNECTED) 5] set /zk_test junk
cZxid = 0x11
ctime = Mon Jul 11 21:03:22 KST 2016
mZxid = 0x12
mtime = Mon Jul 11 21:05:11 KST 2016
pZxid = 0x11
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: localhost:2181(CONNECTED) 6] get /zk_test
junk
cZxid = 0x11
ctime = Mon Jul 11 21:03:22 KST 2016
mZxid = 0x12
mtime = Mon Jul 11 21:05:11 KST 2016
pZxid = 0x11
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: localhost:2181(CONNECTED) 7] delete /zk_test
[zk: localhost:2181(CONNECTED) 8] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 9]
Reference:
https://zookeeper.apache.org/doc/r3.4.8/zookeeperStarted.html
Friday, July 8, 2016
How to change Logstash's default max heap size
To change Logstash's default max heap size, do as follows:
LS_HEAP_SIZE=4g ./bin/logstash -f generator.conf
You can check if it works with `jps -v` as follows:
$ jps -v
15582 Main -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx4g -Xss2048k -Djffi.boot.library.path=/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/izeye/programs/logstash-2.3.4/heapdump.hprof -Xbootclasspath/a:/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib/jruby.jar -Djruby.home=/home/izeye/programs/logstash-2.3.4/vendor/jruby -Djruby.lib=/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh
15646 Jps -Dapplication.home=/home/izeye/programs/jdk1.8.0_45 -Xms8m
$
You can see `-Xmx4g` (i.e., 4 GB).
Reference:
https://www.elastic.co/guide/en/logstash/current/command-line-flags.html
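As a quick sanity check, here is a small Python sketch (the helper name is mine) that pulls the `-Xmx` flag out of a `jps -v` output line like the one above:

```python
import re

def max_heap_from_jps(line):
    """Extract the -Xmx value from a single `jps -v` output line."""
    match = re.search(r'-Xmx(\S+)', line)
    return match.group(1) if match else None

# A shortened version of the Logstash line shown above.
jps_line = "15582 Main -XX:+UseParNewGC -Xmx4g -Xss2048k -Djruby.shell=/bin/sh"
print(max_heap_from_jps(jps_line))  # prints "4g"
```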
Logstash's default max heap size
To find Logstash's default max heap size, use `jps -v` as follows:
$ jps -v
15396 Main -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xss2048k -Djffi.boot.library.path=/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/izeye/programs/logstash-2.3.4/heapdump.hprof -Xbootclasspath/a:/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib/jruby.jar -Djruby.home=/home/izeye/programs/logstash-2.3.4/vendor/jruby -Djruby.lib=/home/izeye/programs/logstash-2.3.4/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh
15460 Jps -Dapplication.home=/home/izeye/programs/jdk1.8.0_45 -Xms8m
$
You can see `-Xmx1g` (i.e., 1 GB).
The result is from Logstash 2.3.4.
How to get the JVM's default max heap size
To get the JVM's default max heap size, use the following command:
$ java -XX:+PrintFlagsFinal -version | grep HeapSize
uintx ErgoHeapSizeLimit = 0 {product}
uintx HeapSizePerGCThread = 87241520 {product}
uintx InitialHeapSize := 262144000 {product}
uintx LargePageHeapSizeThreshold = 134217728 {product}
uintx MaxHeapSize := 4179623936 {product}
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
$
In this case, you can see it's roughly 4 GB.
Reference:
http://stackoverflow.com/questions/12797560/command-line-tool-to-find-java-heap-size-and-memory-used-linux
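The `MaxHeapSize` value is reported in bytes; a one-liner confirms it's roughly 4 GB:

```python
max_heap_size = 4179623936  # MaxHeapSize from the output above, in bytes
gib = max_heap_size / 2**30
print(round(gib, 2))  # prints 3.89, i.e., roughly 4 GB
```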
How to get VM parameters of a running Java process
To get the VM parameters of a running Java process, do as follows:
$ jps -v
15286 Jps -Dapplication.home=/home/izeye/programs/jdk1.8.0_45 -Xms8m
$
How to pass an inline environment variable to an application in Linux
To pass an inline environment variable to an application in Linux, do as follows:
$ LS_HEAP_SIZE=4g ./some-script.sh
4g
$ echo $LS_HEAP_SIZE
$
`some-script.sh` simply echoes the environment variable:
echo $LS_HEAP_SIZE
Note that the environment variable is no longer available at the next prompt.
How to unset an environment variable set by `export` in Linux
To unset an environment variable set by `export` in Linux, use `unset` as follows:
$ export LS_HEAP_SIZE=4g
$ echo $LS_HEAP_SIZE
4g
$ unset LS_HEAP_SIZE
$ echo $LS_HEAP_SIZE
$
Reference:
http://stackoverflow.com/questions/6877727/how-do-i-delete-unset-an-exported-environment-variable
Benchmark Logstash Kafka input plugin with no-op output except metrics
Test environment is as follows:
```
CPU: Intel L5640 2.26 GHz 6 cores * 2 EA
Memory: SAMSUNG PC3-10600R 4 GB * 4 EA
HDD: TOSHIBA SAS 10,000 RPM 300 GB * 6 EA
OS: CentOS release 6.6 (Final)
Logstash 2.3.4
```
I used the following configuration:
```
input {
kafka {
zk_connect => '1.2.3.4:2181'
topic_id => 'some-log'
consumer_threads => 1
}
}
filter {
metrics {
meter => "events"
add_tag => "metric"
}
}
output {
if "metric" in [tags] {
stdout { codec => line {
format => "Count: %{[events][count]}"
}
}
}
}
```
I got the following result:
```
./bin/logstash -f some-log-kafka.conf
Settings: Default pipeline workers: 24
Pipeline main started
Count: 9614
Count: 23080
Count: 37087
Count: 50815
Count: 64517
Count: 78296
Count: 91977
Count: 105990
```
The default `flush_interval` is 5 seconds, so throughput is roughly 14K events per 5 seconds (2.8K per second).
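The per-second rate can be derived from the counts above:

```python
counts = [9614, 23080, 37087, 50815, 64517, 78296, 91977, 105990]
flush_interval = 5  # seconds (Logstash metrics filter default)

# Events consumed in each flush window, then averaged per second.
deltas = [b - a for a, b in zip(counts, counts[1:])]
events_per_second = sum(deltas) / len(deltas) / flush_interval
print(round(events_per_second))  # roughly 2754 events/second
```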
With `consumer_threads` set to 10, I got the following result:
```
./bin/logstash -f impression-log-kafka.conf
Settings: Default pipeline workers: 24
Pipeline main started
Count: 9599
Count: 23254
Count: 37253
Count: 51029
Count: 64881
Count: 78868
Count: 92663
Count: 106267
```
It looks like increasing `consumer_threads` doesn't make much difference.
Based on a benchmark of my simple no-op consumer built with the Kafka client Java library on the same machine, I expected around 30K per second and at least 10K, but this is just 1/10 of the expected performance.
I'm not sure whether this could be improved by tuning the configuration.
As a baseline test, I tested with the `generator` input as follows:
```
input {
generator { }
}
filter {
metrics {
meter => "events"
add_tag => "metric"
}
}
output {
#stdout { }
if "metric" in [tags] {
stdout { codec => line { format => "Count: %{[events][count]}"
}
}
}
}
```
I got the following result:
```
./bin/logstash -f generator.conf
Settings: Default pipeline workers: 24
Pipeline main started
Count: 200584
Count: 424425
Count: 651640
Count: 881605
Count: 1110150
```
That's roughly 220K events per 5 seconds (44K per second). It's not as good as I expected, given that my simple no-op consumer built with the Kafka client Java library consumed 30K to 50K per second.
What am I missing here?
References:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html
http://izeye.blogspot.kr/2016/07/benchmark-simple-no-op-kafka-consumer.html
Benchmark a simple no-op Kafka consumer using Kafka client Java library
Test environment is as follows:
```
CPU: Intel L5640 2.26 GHz 6 cores * 2 EA
Memory: SAMSUNG PC3-10600R 4 GB * 4 EA
HDD: TOSHIBA SAS 10,000 RPM 300 GB * 6 EA
OS: CentOS release 6.6 (Final)
Kafka server 0.9.0.0
Kafka client Java library 0.9.0.1
```
I used a custom tool as follows:
```
git clone https://github.com/izeye/kafka-consumer.git
cd kafka-consumer/
./gradlew clean bootRepackage
java -jar build/libs/kafka-consumer-1.0.jar --spring.profiles.active=noop --kafka.consumer.bootstrap-servers=1.2.3.4:9092 --kafka.consumer.group-id=logstash --kafka.consumer.topic=some-log
```
I got the following result:
```
# of consumed logs per second: 29531
# of consumed logs per second: 38848
# of consumed logs per second: 28747
# of consumed logs per second: 49191
# of consumed logs per second: 28797
```
It consumed roughly 30K to 50K logs per second.
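Averaging the samples above:

```python
samples = [29531, 38848, 28747, 49191, 28797]  # consumed logs per second
average = sum(samples) / len(samples)
print(round(average))  # roughly 35023 logs/second on average
```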
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 552313, only 36 bytes available
If you try to connect with Kafka client 0.10.0.0 to a Kafka server 0.9.0.0, you will get the following exception:
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 552313, only 36 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73) ~[kafka-clients-0.10.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380) ~[kafka-clients-0.10.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449) ~[kafka-clients-0.10.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269) ~[kafka-clients-0.10.0.0.jar:na]
Changing the Kafka client version to 0.9.0.1 solves the problem.
Thursday, July 7, 2016
How to extract a range of lines in a text file to another file in Linux
To extract a range of lines in a text file to another file in Linux, use the following command:
sed -n '1000,2000p' some.log > new.log
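For comparison, the same extraction can be sketched in Python's standard library (the helper name and file names are just illustrative):

```python
from itertools import islice

def extract_lines(src, dst, start, end):
    """Copy 1-indexed lines start..end (inclusive) of src into dst,
    mirroring: sed -n 'START,ENDp' src > dst"""
    with open(src) as fin, open(dst, "w") as fout:
        fout.writelines(islice(fin, start - 1, end))

# extract_lines("some.log", "new.log", 1000, 2000)
```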
List Kafka consumer groups
To list Kafka consumer groups, use the following command:
./bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
Transfer logs from Kafka to Elasticsearch via Logstash
You can transfer logs from Kafka to Elasticsearch via Logstash with the following configuration:
input {
kafka {
topic_id => 'some_log'
}
}
filter {
grok {
patterns_dir => ["./patterns"]
match => { "message" => "%{INT:log_version}\t%{INT:some_id}\t%{DATA:some_field}\t%{GREEDYDATA:last_field}" }
}
if [some_id] not in ["1", "2", "3"] {
drop { }
}
}
output {
elasticsearch {
hosts => [ "1.2.3.4:9200" ]
}
#stdout {
#codec => json
# codec => rubydebug
#}
}
Note that the last field can't be `DATA`. If you use `DATA`, the last field won't be parsed.
Reference:
http://stackoverflow.com/questions/38240392/logstash-grok-filter-doesnt-work-for-the-last-field
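The reason the last field can't be `DATA` is that grok's `DATA` is the non-greedy regex `.*?` while `GREEDYDATA` is `.*`; at the end of a pattern, a non-greedy match is satisfied by the empty string. A Python regex analogue, using a simplified two-field line:

```python
import re

line = "1\thello"

# DATA is .*? in grok: as the last field, it happily matches nothing.
as_data = re.match(r"(?P<version>\d+)\t(?P<last>.*?)", line)
# GREEDYDATA is .*: it consumes the rest of the line.
as_greedy = re.match(r"(?P<version>\d+)\t(?P<last>.*)", line)

print(repr(as_data.group("last")))    # ''
print(repr(as_greedy.group("last")))  # 'hello'
```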
How to insert a tab in Mac terminal
To insert a tab in the Mac terminal, do as follows:
Press Control + `V`, then Tab.
Reference:
https://discussions.apple.com/thread/2225213?tstart=0
Tuesday, June 28, 2016
How to install Flask in CentOS 5.3
When you try to install Flask, you might encounter the following error:
$ sudo pip install Flask
Traceback (most recent call last):
File "/usr/bin/pip", line 7, in ?
sys.exit(
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 236, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2097, in load_entry_point
return ep.load()
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python2.4/site-packages/pip-8.1.2-py2.4.egg/pip/__init__.py", line 208
except PipError as exc:
^
SyntaxError: invalid syntax
$
It's a Python version problem.
You can check with the following code:
try:
print 'a'
except PipError as exc:
print 'b'
With Python 2.4.3, you will get the following error:
$ python -V
Python 2.4.3
$ python try_except.py
File "try_except.py", line 3
except PipError as exc:
^
SyntaxError: invalid syntax
$
With Python 2.6.6, you will get no error as follows:
$ python -V
Python 2.6.6
$
$ python try_except.py
a
$
To install the latest Python, do as follows:
sudo yum install zlib-devel
sudo yum install openssl-devel
cd /home/izeye/programs
wget https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tgz
tar zxvf Python-2.7.12.tgz
cd Python-2.7.12
./configure --prefix=/home/izeye/programs/python
make
make install
cd ..
wget https://bootstrap.pypa.io/get-pip.py --no-check-certificate
./python/bin/python get-pip.py
./python/bin/pip install Flask
Now you can check that Flask is working as follows:
$ ./python/bin/python
Python 2.7.12 (default, Jun 28 2016, 23:29:43)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import flask
>>>
ImportError: cannot import name HTTPSHandler
If you encounter the following error:
$ python get-pip.py
Traceback (most recent call last):
File "get-pip.py", line 19177, in <module>
main()
File "get-pip.py", line 194, in main
bootstrap(tmpdir=tmpdir)
File "get-pip.py", line 82, in bootstrap
import pip
File "/tmp/tmpI4QU87/pip.zip/pip/__init__.py", line 16, in <module>
File "/tmp/tmpI4QU87/pip.zip/pip/vcs/subversion.py", line 9, in <module>
File "/tmp/tmpI4QU87/pip.zip/pip/index.py", line 30, in <module>
File "/tmp/tmpI4QU87/pip.zip/pip/wheel.py", line 39, in <module>
File "/tmp/tmpI4QU87/pip.zip/pip/_vendor/distlib/scripts.py", line 14, in <module>
File "/tmp/tmpI4QU87/pip.zip/pip/_vendor/distlib/compat.py", line 31, in <module>
ImportError: cannot import name HTTPSHandler
$
install `openssl-devel` as follows:
sudo yum install openssl-devel
and rebuild and install Python as follows:
./configure --prefix=/home/izeye/programs/python
make
make install
Reference:
http://stackoverflow.com/questions/20688034/importerror-cannot-import-name-httpshandler-using-pip
zipimport.ZipImportError: can't decompress data; zlib not available
If you encounter the following error:
$ python get-pip.py
Traceback (most recent call last):
File "get-pip.py", line 19177, in <module>
main()
File "get-pip.py", line 194, in main
bootstrap(tmpdir=tmpdir)
File "get-pip.py", line 82, in bootstrap
import pip
zipimport.ZipImportError: can't decompress data; zlib not available
$
install `zlib-devel` as follows:
sudo yum install zlib-devel
and rebuild and install Python as follows:
./configure --prefix=/home/izeye/programs/python
make
make install
Reference:
http://stackoverflow.com/questions/6169522/no-module-named-zlib
404 when executing `yum list`
When you're executing `yum list`, you may encounter the following error:
$ sudo yum list
Gathering header information file(s) from server(s)
Server: Red Hat Linux 4AS - x86_64 - Base
retrygrab() failed for:
http://centos.ustc.edu.cn/centos/4/os/i386/headers/header.info
Executing failover method
failover: out of servers to try
Error getting file http://centos.ustc.edu.cn/centos/4/os/i386/headers/header.info
[Errno 4] IOError: HTTP Error 404: Not Found
$
Opening the URL in a web browser returns a 404, too: http://centos.ustc.edu.cn/centos/4/os/i386/headers/header.info
You can see `readme` in http://centos.ustc.edu.cn/centos/4/
and it includes the following:
```
This directory (and version of CentOS) is depreciated.
CentOS-4 is now past EOL
You can get the last released version of centos 4.9 here:
http://vault.centos.org/4.9/
```
Modify `/etc/yum.conf` as follows:
[base]
name=Red Hat Linux $releasever - $basearch - Base
#baseurl=http://centos.ustc.edu.cn/centos/4/os/i386/
baseurl=http://vault.centos.org/4.0/os/i386/
[updates]
name=Red Hat Linux $releasever - Updates
#baseurl=http://centos.ustc.edu.cn/centos/4/updates/i386/
baseurl=http://vault.centos.org/4.0/updates/i386/
and try `yum list` again.
Wednesday, June 22, 2016
How to convert URL to punycode in Java
To convert a URL to punycode in Java, you can use the following code if you're using the Spring Framework:
import java.net.IDN;
import org.springframework.web.util.UriComponentsBuilder;
public static String getPunycodeUrl(String url) {
UriComponentsBuilder uriComponentsBuilder = UriComponentsBuilder.fromHttpUrl(url);
String host = uriComponentsBuilder.build().getHost();
return uriComponentsBuilder.host(IDN.toASCII(host)).toUriString();
}
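Outside Spring, the same conversion can be sketched with Python's standard library (the `idna` codec implements the same RFC 3490 encoding as `java.net.IDN`; this sketch ignores any user-info part of the URL):

```python
from urllib.parse import urlsplit, urlunsplit

def to_punycode_url(url):
    """Re-encode the host of a URL as punycode, leaving the rest intact."""
    parts = urlsplit(url)
    host = parts.hostname.encode("idna").decode("ascii")
    netloc = host if parts.port is None else f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

print(to_punycode_url("http://bücher.example/path"))
# http://xn--bcher-kva.example/path
```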
Friday, June 17, 2016
Spring Boot heap dump script
If you're using Spring Boot and creating an `application.pid`, you can create the following script to dump heap (live objects only):
jmap -dump:live,format=b,file=heap_dump.`date +%Y%m%d_%H%M%S`.hprof `cat application.pid`
Spring Boot thread dump script
If you're using Spring Boot and creating an `application.pid`, you can create the following script to dump threads:
jstack `cat application.pid` > thread_dump.`date +%Y%m%d_%H%M%S`
Monday, June 13, 2016
How to pass parameters for Java compiler like `-Xlint:unchecked` in Gradle
If you encounter the following warning in Gradle:
Note: /Users/izeye/IdeaProjects/trust/src/test/java/com/ctb/trust/SpringBootActuatorTests.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
you can add the following configuration:
tasks.withType(JavaCompile) {
options.compilerArgs << '-Xlint:unchecked'
}
And then you will see the following detail:
Map<String, Object> health = response.getBody();
^
required: Map<String,Object>
found: Map
1 warning
Reference:
https://discuss.gradle.org/t/it-seems-that-javac-compiler-options-is-not-passed-to-compilejava-task-on-gradle2-6-with-jdk8u60/11271
Thursday, June 9, 2016
Use a specific enum value as a column value in JPA
If you're on JPA 2.1, you can use a JPA converter as follows:
@Converter(autoApply = true)
public class RatingScoreConverter implements AttributeConverter<RatingScore, Integer> {

    @Override
    public Integer convertToDatabaseColumn(RatingScore attribute) {
        return attribute.getScore();
    }

    @Override
    public RatingScore convertToEntityAttribute(Integer dbData) {
        return RatingScore.getValueByScore(dbData);
    }
}
Reference:
https://dzone.com/articles/mapping-enums-done-right
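The converter above assumes an enum that exposes its score and a reverse lookup. The post doesn't show it, so here is a hypothetical sketch of what `RatingScore` might look like (constant names and scores are made up):

```java
// Hypothetical enum backing the converter; each constant carries the
// integer that is stored in the database column.
enum RatingScore {

    BAD(1), AVERAGE(3), EXCELLENT(5);

    private final int score;

    RatingScore(int score) {
        this.score = score;
    }

    public int getScore() {
        return score;
    }

    // Reverse lookup used by convertToEntityAttribute().
    public static RatingScore getValueByScore(int score) {
        for (RatingScore value : values()) {
            if (value.score == score) {
                return value;
            }
        }
        throw new IllegalArgumentException("Unknown score: " + score);
    }
}
```

Storing the explicit score rather than `ordinal()` keeps the database stable when constants are reordered or inserted.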
org.hibernate.TransientObjectException: object references an unsaved transient instance
When saving an entity having the following property:
@ManyToMany
private Set<Landmark> landmarks;
the following exception might occur:
org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.ctb.trust.core.restaurant.domain.Landmark
Set `cascade` as follows:
@ManyToMany(cascade = CascadeType.ALL)
private Set<Landmark> landmarks;
but only if cascading saves fits your use case.
Otherwise, save the referenced entities manually first.
Reference:
http://stackoverflow.com/questions/2302802/object-references-an-unsaved-transient-instance-save-the-transient-instance-be
Wednesday, June 8, 2016
How to check errors in logrotate
To check errors in logrotate, run the logrotate cron script as follows:
$ sudo /etc/cron.daily/logrotate
error: skipping "/home/izeye/programs/nginx/logs/access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/home/izeye/programs/nginx/logs/error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
$
If you encounter errors like the above, one fix is to change the owner and group of the `logs` directory to `root`:
sudo chown -R root:root logs
This appears to have no adverse security impact, but I'm not a security expert, so evaluate it for your own environment.
Reference:
http://serverfault.com/questions/381081/where-does-logrotate-save-their-own-log
How to get the last rotation time with logrotate
To get the last rotation time with logrotate, use the following command:
cat /var/lib/logrotate.status
Reference:
http://serverfault.com/questions/189320/how-can-i-monitor-what-logrotate-is-doing
Tuesday, June 7, 2016
Change the number of partitions of a specific topic in Kafka
Check the number of partitions of a specific topic with the following command:
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic event
and change the number of partitions of the topic with the following command:
./bin/kafka-topics.sh --zookeeper localhost:2181 --topic event --alter --partitions 10
Sunday, June 5, 2016
Invalid CSRF Token 'null' was found on the request parameter '_csrf' or header 'X-CSRF-TOKEN'.
If you see the following error:
There was an unexpected error (type=Forbidden, status=403).
Invalid CSRF Token 'null' was found on the request parameter '_csrf' or header 'X-CSRF-TOKEN'.
Check if you're trying to sign out (log out, logout) on a security-ignored path.
`CsrfToken` will be `null` on security-ignored paths.
Saturday, June 4, 2016
Why aren't all my `*.log` files committed to Git?
There's no entry like `*.log` in my project's `.gitignore` file.
But adding a `*.log` file fails with the following message:
C:\Users\izeye\IdeaProjects\test>git add src\test\resources\test.log
The following paths are ignored by one of your .gitignore files:
src\test\resources\test.log
Use -f if you really want to add them.
fatal: no files added
C:\Users\izeye\IdeaProjects\test>
I can add it forcefully:
C:\Users\izeye\IdeaProjects\test>git add src\test\resources\test.log -f
warning: LF will be replaced by CRLF in src/test/resources/test.log.
The file will have its original line endings in your working directory.
C:\Users\izeye\IdeaProjects\test>
But why?
Eventually I found the following configuration in `.gitconfig`:
[core]
autocrlf = true
excludesfile = C:\\Users\\izeye\\Documents\\gitignore_global.txt
and `gitignore_global.txt` has the following configuration:
#ignore thumbnails created by windows
Thumbs.db
#Ignore files build by Visual Studio
*.obj
*.exe
*.pdb
*.user
*.aps
*.pch
*.vspscc
*_i.c
*_p.c
*.ncb
*.suo
*.tlb
*.tlh
*.bak
*.cache
*.ilk
*.log
*.dll
*.lib
*.sbr
That's why I couldn't add the `*.log` file.
Wednesday, June 1, 2016
Limit `kafka-logs` directory disk size
You can limit the `kafka-logs` directory disk size through the log retention hours.
Open `config/server.properties` and change the following property:
log.retention.hours=168
Sunday, May 29, 2016
Profile a Java application in Linux
You can profile a Java application in Linux with `perf`.
Install `perf`:
sudo yum install perf
Profile a Java application with the following command:
sudo perf record -g -p 1722
and press Ctrl + C after waiting for enough samples.
Create a report with the following command:
sudo perf report
and navigate it.
References:
http://www.evanjones.ca/java-native-leak-bug.html
http://superuser.com/questions/380733/how-can-i-install-perf-a-linux-peformance-monitoring-tool
How to analyze native memory leak in Java
A memory leak on the heap is easy to analyze, but how can you analyze a native memory leak in Java?
You can use `jemalloc` for this purpose.
Install `jemalloc`:
wget https://github.com/jemalloc/jemalloc/releases/download/4.2.0/jemalloc-4.2.0.tar.bz2
bzip2 -d jemalloc-4.2.0.tar.bz2
tar xvf jemalloc-4.2.0.tar
cd jemalloc-4.2.0
./configure --prefix=/home/izeye/programs/jemalloc --enable-prof
make
make install
Profile your application just by running it with the following environment variables:
export LD_PRELOAD=/home/izeye/programs/jemalloc/lib/libjemalloc.so
export MALLOC_CONF=prof_leak:true,lg_prof_sample:0,prof_final:true
Create a report with the following command:
/home/izeye/programs/jemalloc/bin/jeprof --show_bytes --svg `which java` jeprof.65301.0.f.heap > result.svg
Now you can spot the leaking point by inspecting the resulting SVG.
In my case, the culprit was `Deflater`. More precisely, it was me: I forgot to call `end()` to release the native resources.
References:
http://www.evanjones.ca/java-native-leak-bug.html
https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/
https://github.com/jemalloc/jemalloc/releases
https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
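To illustrate the fix: `Deflater` holds native zlib memory that the garbage collector doesn't track, so it must be released explicitly with `end()`, ideally in a `finally` block. A small sketch (names are mine):

```java
import java.util.zip.Deflater;

public class DeflaterDemo {

    // Compress the input, making sure the native zlib resources are freed.
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            byte[] buffer = new byte[input.length + 64];
            return deflater.deflate(buffer);
        } finally {
            deflater.end(); // releases native memory; omitting this was the leak
        }
    }

    public static void main(String[] args) {
        byte[] data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".getBytes();
        System.out.println("Compressed " + data.length + " bytes to " + compressedSize(data));
    }
}
```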
sh: dot: command not found
If you get the following error:
sh: dot: command not found
try the following command:
sudo yum install graphviz -y
Reference:
http://johnjianfang.blogspot.kr/2009/10/sh-dot-command-not-found.html
Thursday, May 19, 2016
How to find OOM killer logs
When your application has been killed by the OOM killer, you can use the following command to find its logs:
sudo grep oom /var/log/*
Reference:
http://unix.stackexchange.com/questions/128642/debug-out-of-memory-with-var-log-messages
Check Kafka offset lag
To check Kafka's offset lag, use the following command:
$ ./bin/kafka-consumer-offset-checker.sh --broker-info --group test-group --zookeeper localhost:2181 --topic test-topic
[2016-05-19 16:57:30,771] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Group Topic Pid Offset logSize Lag Owner
test test-topic 0 294386 4349292 4054906 none
BROKER INFO
0 -> 1.2.3.4:9092
$
Wednesday, May 18, 2016
Set up JMX in Kafka
To set up JMX in Kafka, use the following command:
JMX_PORT=10000 ./bin/kafka-server-start.sh config/server.properties >> kafka.log 2>&1 &
Monday, May 16, 2016
How to clean up `marked for deletion` in Kafka
After deleting a topic as follows:
$ ./bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic my-topic
Topic my-topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
$
you will still see the topic with `marked for deletion` as follows:
$ ./bin/kafka-topics.sh --list --zookeeper localhost:2181
my-topic - marked for deletion
$
To clean it up, add the following line to `config/server.properties`:
delete.topic.enable=true
and restart Kafka.
The topic will disappear shortly afterwards.
Logstash doesn't work with Kafka output
With the following configuration:
input {
    file {
        path => "/home/izeye/programs/logstash-2.3.2/some.log"
        start_position => beginning
    }
}
output {
    kafka {
        bootstrap_servers => "1.2.3.4:9092"
        topic_id => "some-topic"
        codec => line
    }
    # stdout {
    # }
}
the following command didn't work:
./bin/logstash -f some.conf >> logstash.log 2>&1 &
Trying with a Java client didn't work, either.
In my case, the cause was an advertised host name that was unreachable.
So I modified `config/server.properties` as follows:
advertised.host.name=1.2.3.4
After that, both Logstash and the Java client worked.
Sunday, May 15, 2016
Kafka commands
The following commands are for quick reference:
./bin/zookeeper-server-start.sh config/zookeeper.properties >> zookeeper.log 2>&1 &
./bin/kafka-server-start.sh config/server.properties >> kafka.log 2>&1 &
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
./bin/kafka-topics.sh --list --zookeeper localhost:2181
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Reference:
http://kafka.apache.org/documentation.html#quickstart
Tuesday, May 10, 2016
connect() failed (110: Connection timed out) while connecting to upstream
If you encounter the following error in Nginx's `logs/error.log`:
2016/05/10 22:59:24 [error] 20722#0: *56830 connect() failed (110: Connection timed out) while connecting to upstream, client: 1.2.3.4, server: localhost, request: "GET /api/v1/events/xxx HTTP/1.1", upstream: "http://127.0.0.1:8080/api/v1/events/xxx", host: "api.izeye.com", referrer: "https://www.izeye.com/"
you can fix it by enabling keepalive between Nginx and Tomcat as follows:
upstream backend {
    server localhost:8080;
    keepalive 32;
}

server {
    listen 80;
    ...
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
    ...
}
Thursday, April 28, 2016
java.lang.NoSuchMethodError: org.hamcrest.Matcher.describeMismatch(Ljava/lang/Object;Lorg/hamcrest/Description;)V
If you get the following error:
java.lang.NoSuchMethodError: org.hamcrest.Matcher.describeMismatch(Ljava/lang/Object;Lorg/hamcrest/Description;)V
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.junit.rules.ExpectedException.handleException(ExpectedException.java:252)
at org.junit.rules.ExpectedException.access$000(ExpectedException.java:106)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:241)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
try to change `mockito-all` to `mockito-core` as follows:
// testCompile("org.mockito:mockito-all:${mockitoVersion}")
testCompile("org.mockito:mockito-core:${mockitoVersion}")
It worked for me.
Reference:
http://stackoverflow.com/questions/7869711/getting-nosuchmethoderror-org-hamcrest-matcher-describemismatch-when-running