Five Years

Alone I fought across three thousand li; with a single sword I once held off a million-strong host.



MySQL Optimization

Posted on 2017-05-12

#MySQL optimization strategies

The MySQL optimization strategies I like best so far come from the Meituan-Dianping tech team.

MySQL optimization strategies worth keeping in your back pocket:

  1. Indexes

    1. Leftmost-prefix matching, the single most important rule: MySQL matches index columns from left to right and stops as soon as it hits a range predicate (>, <, BETWEEN, LIKE). For a = 1 AND b = 2 AND c > 3 AND d = 4, an index on (a,b,c,d) cannot use d, while an index on (a,b,d,c) can use all four columns; the order of a, b, d in the query can be shuffled freely.

    2. = and IN conditions may appear in any order: for a = 1 AND b = 2 AND c = 3, an (a,b,c) index works however the conditions are written, because MySQL's query optimizer rewrites them into a form the index can serve.

    3. Prefer high-selectivity columns for indexes. Selectivity is count(distinct col)/count(*), the proportion of distinct values in the column; the higher it is, the fewer rows a lookup has to scan. A unique key has selectivity 1, while status or gender columns on a large table are close to 0. Is there a rule of thumb for the threshold? It depends on the workload, but for columns used in joins we generally require at least 0.1, i.e. about 10 rows scanned per matching value.

    4. Indexed columns must not take part in expressions; keep them "clean". from_unixtime(create_time) = '2014-05-29' cannot use an index, because the B+ tree stores the raw column values and the function would have to be applied to every entry before comparison, which is far too expensive. Write it as create_time = unix_timestamp('2014-05-29') instead.

    5. Extend existing indexes rather than creating new ones whenever possible. If the table already has an index on a and you now need (a,b), just modify the existing index. (A SQL sketch of rules 1, 3 and 4 follows below.)
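The sketch below is only illustrative: the table `t` and its indexes are hypothetical, not from the Meituan-Dianping article, and the EXPLAIN calls are just there so you can watch the key/rows columns change.

```sql
-- Hypothetical table, used only to illustrate the rules above.
CREATE TABLE t (
  id INT PRIMARY KEY AUTO_INCREMENT,
  a INT, b INT, c INT, d INT,
  create_time INT,                 -- unix timestamp
  KEY idx_abcd (a, b, c, d),
  KEY idx_create_time (create_time)
);

-- Rule 1: the range predicate on c stops index matching, so d cannot use
-- idx_abcd here; an (a, b, d, c) index would let all four columns be used.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2 AND c > 3 AND d = 4;

-- Rule 3: estimate selectivity; values close to 1 make good index candidates.
SELECT COUNT(DISTINCT a) / COUNT(*) AS selectivity_a FROM t;

-- Rule 4: apply the function to the constant, not to the indexed column,
-- so idx_create_time stays usable.
-- Bad:  WHERE FROM_UNIXTIME(create_time) = '2014-05-29'
SELECT * FROM t WHERE create_time = UNIX_TIMESTAMP('2014-05-29');
```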
  2. A lower-effort index check
    Another case from the Meituan-Dianping tech team, this time using their SQLAdvisor tool:

The example run below is a good fit if you don't have the bandwidth to dig deeper into MySQL optimization yourself...

./sqladvisor -h dev-be-mysql.co904fphklgb.ap-southeast-1.rds.amazonaws.com -P 3306 -u langapi -p '!ang!iv*b*' -d db_billing -q "SELECT ca.tid, ca.pfid, ca.cash, ca.gold, ca.order_time, ca.status, ca.op_id, ca.op_time, ca.to_account, ch.name, ch.phone, ch.birthday, ch.address, ch.id_card_address, ch.id_card_img_0, ch.id_card_img_1, ch.bank_card_img, ch.bank_code, ch.bank_name, ch.email, ch.bank_branch_name, cr.company_id, cp.company AS company, ch.cashier_type, ch.op_id AS cashier_op_id FROM tb_cashier AS ca LEFT JOIN tb_cashier_hot AS ch ON ca.tid = ch.tid LEFT JOIN tb_company_relationship AS cr ON cr.pfid = ca.pfid LEFT JOIN tb_company AS cp ON cp.id = cr.company_id LIMIT 100" -v 1
2017-05-12 17:48:12 31770 [Note] 第1步: 对SQL解析优化之后得到的SQL:select `ca`.`tid` AS `tid`,`ca`.`pfid` AS `pfid`,`ca`.`cash` AS `cash`,`ca`.`gold` AS `gold`,`ca`.`order_time` AS `order_time`,`ca`.`status` AS `status`,`ca`.`op_id` AS `op_id`,`ca`.`op_time` AS `op_time`,`ca`.`to_account` AS `to_account`,`ch`.`name` AS `name`,`ch`.`phone` AS `phone`,`ch`.`birthday` AS `birthday`,`ch`.`address` AS `address`,`ch`.`id_card_address` AS `id_card_address`,`ch`.`id_card_img_0` AS `id_card_img_0`,`ch`.`id_card_img_1` AS `id_card_img_1`,`ch`.`bank_card_img` AS `bank_card_img`,`ch`.`bank_code` AS `bank_code`,`ch`.`bank_name` AS `bank_name`,`ch`.`email` AS `email`,`ch`.`bank_branch_name` AS `bank_branch_name`,`cr`.`company_id` AS `company_id`,`cp`.`company` AS `company`,`ch`.`cashier_type` AS `cashier_type`,`ch`.`op_id` AS `cashier_op_id` from (((`db_billing`.`tb_cashier` `ca` left join `db_billing`.`tb_cashier_hot` `ch` on((`ca`.`tid` = `ch`.`tid`))) left join `db_billing`.`tb_company_relationship` `cr` on((`cr`.`pfid` = `ca`.`pfid`))) le
2017-05-12 17:48:12 31770 [Note] 第2步:开始解析join on条件:ca.tid=ch.tid
2017-05-12 17:48:12 31770 [Note] 第3步:开始解析join on条件:cr.pfid=ca.pfid
2017-05-12 17:48:12 31770 [Note] 第4步:开始解析join on条件:cp.id=cr.company_id
2017-05-12 17:48:12 31770 [Note] 第5步:开始选择驱动表,一共有1个候选驱动表
2017-05-12 17:48:12 31770 [Note] explain select * from tb_cashier
2017-05-12 17:48:12 31770 [Note] 第6步:候选驱动表tb_cashier的结果集行数为:182
2017-05-12 17:48:12 31770 [Note] 第7步:选择表tb_cashier为驱动表
2017-05-12 17:48:12 31770 [Note] 第8步:表tb_cashier 的SQL太逆天,没有优化建议
2017-05-12 17:48:12 31770 [Note] 第9步:开始验证 字段tid是不是主键。表名:tb_cashier_hot
2017-05-12 17:48:12 31770 [Note] show index from tb_cashier_hot where Key_name = 'PRIMARY' and Column_name ='tid' and Seq_in_index = 1
2017-05-12 17:48:12 31770 [Note] 第10步:字段tid不是主键。表名:tb_cashier_hot
2017-05-12 17:48:12 31770 [Note] 第11步:开始验证 字段tid是不是主键。表名:tb_cashier_hot
2017-05-12 17:48:12 31770 [Note] show index from tb_cashier_hot where Key_name = 'PRIMARY' and Column_name ='tid' and Seq_in_index = 1
2017-05-12 17:48:12 31770 [Note] 第12步:字段tid不是主键。表名:tb_cashier_hot
2017-05-12 17:48:12 31770 [Note] 第13步:开始验证表中是否已存在相关索引。表名:tb_cashier_hot, 字段名:tid, 在索引中的位置:1
2017-05-12 17:48:12 31770 [Note] show index from tb_cashier_hot where Column_name ='tid' and Seq_in_index =1
2017-05-12 17:48:12 31770 [Note] 第14步:开始输出表tb_cashier_hot索引优化建议:
2017-05-12 17:48:12 31770 [Note] Create_Index_SQL:alter table tb_cashier_hot add index idx_tid(tid)
2017-05-12 17:48:12 31770 [Note] 第15步:开始验证 字段pfid是不是主键。表名:tb_company_relationship
2017-05-12 17:48:12 31770 [Note] show index from tb_company_relationship where Key_name = 'PRIMARY' and Column_name ='pfid' and Seq_in_index = 1
2017-05-12 17:48:12 31770 [Note] 第16步:字段pfid不是主键。表名:tb_company_relationship
2017-05-12 17:48:12 31770 [Note] 第17步:开始验证 字段pfid是不是主键。表名:tb_company_relationship
2017-05-12 17:48:12 31770 [Note] show index from tb_company_relationship where Key_name = 'PRIMARY' and Column_name ='pfid' and Seq_in_index = 1
2017-05-12 17:48:12 31770 [Note] 第18步:字段pfid不是主键。表名:tb_company_relationship
2017-05-12 17:48:12 31770 [Note] 第19步:开始验证表中是否已存在相关索引。表名:tb_company_relationship, 字段名:pfid, 在索引中的位置:1
2017-05-12 17:48:12 31770 [Note] show index from tb_company_relationship where Column_name ='pfid' and Seq_in_index =1
2017-05-12 17:48:12 31770 [Note] 第20步:索引(pfid)已存在
2017-05-12 17:48:12 31770 [Note] 第21步:开始验证 字段id是不是主键。表名:tb_company
2017-05-12 17:48:12 31770 [Note] show index from tb_company where Key_name = 'PRIMARY' and Column_name ='id' and Seq_in_index = 1
2017-05-12 17:48:12 31770 [Note] 第22步:字段id是主键。表名:tb_company
2017-05-12 17:48:12 31770 [Note] 第23步:表tb_company 经过运算得到的索引列首列是主键,直接放弃,没有优化建议
2017-05-12 17:48:12 31770 [Note] 第24步: SQLAdvisor结束!
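If you want to act on the advice from step 14, applying the suggested index and re-checking the join plan is straightforward. The ALTER TABLE below simply reuses the statement printed by SQLAdvisor (try it on a test instance first); the trimmed column list in the EXPLAIN is mine.

```sql
-- Index suggested by SQLAdvisor in step 14.
ALTER TABLE tb_cashier_hot ADD INDEX idx_tid (tid);

-- Re-check the join: the `key` column for tb_cashier_hot should now show idx_tid.
EXPLAIN SELECT ca.tid, ch.name
FROM tb_cashier AS ca
LEFT JOIN tb_cashier_hot AS ch ON ca.tid = ch.tid
LIMIT 100;
```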

Sessions Explained

Posted on 2017-05-12

#Sessions explained

##HTTP
HTTP is a stateless protocol: it does not require the browser to identify itself on every request, and no persistent connection is kept between browser and server across page views. When a user visits a site, the browser sends an HTTP request and the server returns an HTTP response. That is the whole HTTP-based communication model: one request from the client, one reply from the server.

##cookie
A cookie (plural: cookies) is a piece of data, usually encrypted, that a website stores on the user's local machine (the client side) in order to identify the user. Cookies are defined in RFC 2109 and were invented in March 1993 by Lou Montulli, a former Netscape employee.

Cookies as seen in the browser:
(screenshot)

##When cookies are disabled

Once cookies are disabled, the browser still loads pages, but the cookie is gone, so the session_id can no longer be sent to the server and the server has no way to identify the user.

The core problem actually has nothing to do with cookies themselves: it is simply that the session_id (the user's identity token) no longer reaches the server.

Treat the session_id as just another request parameter and the solution falls out: pass the session_id to the server manually.

That is URL rewriting. Whether the request is a GET or a POST, pick whatever transport fits the business.

Once the session_id from the front end arrives at the server, the PHP side handles it as follows:

  1. read the session_id passed in by the client,
  2. set it as the current session id,
  3. then the session contents can be read as usual (see the code below).
<?php
// Restore a session from a client-supplied id (URL rewriting).
// session_id() must be set before session_start() for it to take effect.
session_id('020e163uku6ptt7cm6eoj50992');
session_start();
var_dump($_SESSION);
if (empty($_SESSION['count'])) {
    $_SESSION['count'] = 1;
} else {
    $_SESSION['count']++;
}
?>
<p>
Hello visitor, you have seen this page <?php echo $_SESSION['count'].htmlspecialchars(SID); ?> times.
</p>
<p>
To continue, <a href="<?php echo "nextpage.php?". htmlspecialchars(SID); ?>">click here</a>.
</p>

About the code above:

SID is automatically populated with the session id when cookies are disabled (and is empty when the id is carried by a cookie).
Calling session_id($sid) to set the session id by hand is enough to restore the session state.

##Sessions in PHP

Where PHP stores session data is configurable:
session.save_path = "/tmp/session"

On the server I use, the default location is /tmp.
What a session actually looks like on the server:

| Field | Value |
| --- | --- |
| path | /tmp/sess_ucqqv1ej1lulttmste47hpuj56 |
| info | $key\|type:len:"$value" |

For example, $_SESSION['count'] = 5 is stored on disk as count|i:5;.

Reference:

How SESSION works

Redis Cluster

Posted on 2017-05-05

#Redis

[toc]

##1. Basic configuration parameters

Basic Redis configuration:

| Directive | Description | Example |
| --- | --- | --- |
| port | listening port | 7379 |
| daemonize | run as a daemon (in the background) | yes |
| cluster-enabled | enable cluster mode | yes |
| cluster-config-file | cluster state file, generated and maintained automatically | nodes.conf |
| cluster-node-timeout | timeout (ms) for node health checks; a node unreachable past this limit is marked as failed and a failover election replaces it | 5000 |
| cluster-slave-validity-factor | how stale a slave may be and still attempt failover; 0 means slaves always attempt it. Note that any non-zero value can leave the cluster unavailable after a master failure if no slave is able to fail over | |
| logfile | log file path | "redis.log" |
| cluster-migration-barrier | minimum number of working slaves a master must keep before one of its slaves may migrate to an orphaned master (see the full config reference below) | |
| cluster-require-full-coverage | full slot-coverage check: with yes, the cluster stops accepting writes if any part of the key space is not covered by a node; with no, it keeps serving queries for the subset of keys that is still covered | yes |
| dbfilename | RDB snapshot file name; the default persistence mode, saving at most roughly once a minute under the default save points | 7379.rdb |
| appendonly | enable AOF persistence; adds noticeable I/O pressure, enable with care | yes |
| appendfilename | AOF file name; every write command is appended (I/O pressure) | 7379.appendonly.aof |
| appendfsync | fsync policy: always / everysec / no, in decreasing order of durability | everysec |

##2. High availability

###Architecture goal: Cluster + (master & slaves)
Candidate approaches:

  1. Redis Cluster
  2. master/slave replication
  3. Sentinel (v2)

###Data consistency in the cluster

  1. Replication is asynchronous, so strong consistency cannot be guaranteed.
  2. Node timeout: writes accepted during a partition, before the node timeout elapses, can be lost after failover.

##3. Creating a cluster

Use the official documentation as the primary reference: https://redis.io/topics/cluster-tutorial

** redis.conf file
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
logfile "redis.log"
** cluster creation output
vagrant@vagrant-ubuntu-trusty-64:~$ /data/local/redis/redis-3.2.8/src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: fb165309530288508d643b45381c114ac23246e7 127.0.0.1:7000
slots:0-5460 (5461 slots) master
M: 849c0c5c3401cd429cfc12943773c8c5c3878b63 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
M: a9be2c77dd3c67728315eba597dd13ce5ecd7523 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
S: e8addb041027ab38ef58a1e2a5cfa920ff188fca 127.0.0.1:7003
replicates fb165309530288508d643b45381c114ac23246e7
S: b5d559e19d2a539b16d5b0cb05743a24865854cf 127.0.0.1:7004
replicates 849c0c5c3401cd429cfc12943773c8c5c3878b63
S: 97364828304bcff993cddd696c631f79eeaad7f1 127.0.0.1:7005
replicates a9be2c77dd3c67728315eba597dd13ce5ecd7523
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: fb165309530288508d643b45381c114ac23246e7 127.0.0.1:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: b5d559e19d2a539b16d5b0cb05743a24865854cf 127.0.0.1:7004
slots: (0 slots) slave
replicates 849c0c5c3401cd429cfc12943773c8c5c3878b63
S: 97364828304bcff993cddd696c631f79eeaad7f1 127.0.0.1:7005
slots: (0 slots) slave
replicates a9be2c77dd3c67728315eba597dd13ce5ecd7523
M: 849c0c5c3401cd429cfc12943773c8c5c3878b63 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: e8addb041027ab38ef58a1e2a5cfa920ff188fca 127.0.0.1:7003
slots: (0 slots) slave
replicates fb165309530288508d643b45381c114ac23246e7
M: a9be2c77dd3c67728315eba597dd13ce5ecd7523 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

A node's id is assigned when the node is created; it does not change with the node's IP or port.

###Log output from a cluster node during creation

14079:M 04 May 07:44:31.314 # Server started, Redis version 3.2.8
14079:M 04 May 07:44:31.315 * The server is now ready to accept connections on port 7000
14079:M 04 May 07:52:02.099 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
14079:M 04 May 07:52:02.119 # IP address for this node updated to 127.0.0.1
14079:M 04 May 07:52:06.529 * Slave 127.0.0.1:7003 asks for synchronization
14079:M 04 May 07:52:06.529 * Full resync requested by slave 127.0.0.1:7003
14079:M 04 May 07:52:06.529 * Starting BGSAVE for SYNC with target: disk
14079:M 04 May 07:52:06.529 * Background saving started by pid 14677
14677:C 04 May 07:52:06.531 * DB saved on disk
14677:C 04 May 07:52:06.532 * RDB: 0 MB of memory used by copy-on-write
14079:M 04 May 07:52:06.626 * Background saving terminated with success
14079:M 04 May 07:52:06.628 * Synchronization with slave 127.0.0.1:7003 succeeded
14079:M 04 May 07:52:07.032 # Cluster state changed: ok

###Resharding the cluster (reassigning hash slots)

Resharding always takes slots starting from the front of each source node's slot range.

Start resharding:
/data/local/redis/redis-3.2.8/src/redis-trib.rb reshard 127.0.0.1:7000
Check the result:
/data/local/redis/redis-3.2.8/src/redis-trib.rb check 127.0.0.1:7000
Show this node's own entry:
redis-cli -p 7000 cluster nodes | grep myself

###Scripting a resharding operation

./redis-trib.rb reshard --from <node-id> --to <node-id> --slots <number of slots> --yes <host>:<port>
eg
./redis-trib.rb reshard --from fb165309530288508d643b45381c114ac23246e7 --to 849c0c5c3401cd429cfc12943773c8c5c3878b63 --slots 500 --yes 127.0.0.1:7000

####Raw resharding output

vagrant@vagrant-ubuntu-trusty-64:~$ /data/local/redis/redis-3.2.8/src/redis-trib.rb reshard 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: fb165309530288508d643b45381c114ac23246e7 127.0.0.1:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: b5d559e19d2a539b16d5b0cb05743a24865854cf 127.0.0.1:7004
slots: (0 slots) slave
replicates 849c0c5c3401cd429cfc12943773c8c5c3878b63
S: 97364828304bcff993cddd696c631f79eeaad7f1 127.0.0.1:7005
slots: (0 slots) slave
replicates a9be2c77dd3c67728315eba597dd13ce5ecd7523
M: 849c0c5c3401cd429cfc12943773c8c5c3878b63 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: e8addb041027ab38ef58a1e2a5cfa920ff188fca 127.0.0.1:7003
slots: (0 slots) slave
replicates fb165309530288508d643b45381c114ac23246e7
M: a9be2c77dd3c67728315eba597dd13ce5ecd7523 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? fb165309530288508d643b45381c114ac23246e7
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:all
(the required slots are taken from each of the other source nodes)
Ready to move 1000 slots.
Source nodes:
M: 849c0c5c3401cd429cfc12943773c8c5c3878b63 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: a9be2c77dd3c67728315eba597dd13ce5ecd7523 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
Destination node:
M: fb165309530288508d643b45381c114ac23246e7 127.0.0.1:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
Resharding plan:
Moving slot 5461 from 849c0c5c3401cd429cfc12943773c8c5c3878b63
Moving slot 5462 from 849c0c5c3401cd429cfc12943773c8c5c3878b63

###Failover

List all master nodes:
redis-cli -p 7000 cluster nodes | grep master
Deliberately crash one of them:
redis-cli -p 7002 debug segfault
Check the nodes after the crash:
redis-cli -p 7000 cluster nodes
Cluster node info (from the Redis docs):
The output of the CLUSTER NODES command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:
Node ID
ip:port
flags: master, slave, myself, fail, ...
if it is a slave, the Node ID of the master
Time of the last pending PING still waiting for a reply.
Time of the last PONG received.
Configuration epoch for this node (see the Cluster specification).
Status of the link to this node.
Slots served...

###Manual failover

Manual failovers are supported by Redis Cluster using the CLUSTER FAILOVER command, which must be executed on one of the slaves of the master you want to fail over.
Manual failover is very useful when upgrading the system; it has to be run on the slave that is about to take over.

####Log output

# Manual failover user request accepted.
# Received replication offset for paused master manual failover: 347540
# All master replication stream processed, manual failover can start.
# Start of election delayed for 0 milliseconds (rank #0, offset 347540).
# Starting a failover election for epoch 7545.
# Failover election won: I'm the new master.

###Adding a new node

Add a node without specifying a role (adds the node on port 7006 to the cluster at 7000; by default it joins as a master):
/data/local/redis/redis-3.2.8/src/redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
Add a node as the slave of a specific master:
/data/local/redis/redis-3.2.8/src/redis-trib.rb add-node --slave --master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7006 127.0.0.1:7000

####Raw output

vagrant@vagrant-ubuntu-trusty-64:~$ /data/local/redis/redis-3.2.8/src/redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
>>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: fb165309530288508d643b45381c114ac23246e7 127.0.0.1:7000
slots:500-5961,10923-11421 (5961 slots) master
1 additional replica(s)
S: b5d559e19d2a539b16d5b0cb05743a24865854cf 127.0.0.1:7004
slots: (0 slots) slave
replicates 849c0c5c3401cd429cfc12943773c8c5c3878b63
M: 97364828304bcff993cddd696c631f79eeaad7f1 127.0.0.1:7005
slots:11422-16383 (4962 slots) master
0 additional replica(s)
M: 849c0c5c3401cd429cfc12943773c8c5c3878b63 127.0.0.1:7001
slots:0-499,5962-10922 (5461 slots) master
1 additional replica(s)
S: e8addb041027ab38ef58a1e2a5cfa920ff188fca 127.0.0.1:7003
slots: (0 slots) slave
replicates fb165309530288508d643b45381c114ac23246e7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.
[OK] New node added correctly.

####State of the manually added node

vagrant@vagrant-ubuntu-trusty-64:~$ redis-cli -c -p 7006
127.0.0.1:7006> cluster nodes
b322c8631fe98c26e7e26b5cd8f8d1e3b32da16b 127.0.0.1:7006 myself,master - 0 0 0 connected

###Adding a new node as a replica
cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

###Removing a node

redis-trib del-node 127.0.0.1:7000 b322c8631fe98c26e7e26b5cd8f8d1e3b32da16b

If the removed node should later rejoin the cluster as a slave, delete its cluster state first: rm -rf nodes.conf.
When removing a master, its data has to be emptied out first; see the manual failover section above.

###Replica migration

CLUSTER REPLICATE <master-node-id>

A master that died and is later brought back up automatically becomes a slave of the node that replaced it (its former slave, now promoted to master).

##Full redis.conf reference

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 lookback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
# Accept connections on the specified port, default is 7379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 7379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /usr/local/var/run/redis.pid when daemonized.
daemonize yes
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/usr/local/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_7379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile "/data/log/redis.7379.log"
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename 7379.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /usr/local/var/db/redis/
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a masteer.
#
# The listed IP and address normally reported by a slave is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the slave to connect with the master.
#
# Port: The port is communicated by the slave during the replication
# handshake, and is normally the port that the slave is using to
# list for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly yes
# The name of the append only file (default: "appendonly.aof")
appendfilename "7379.appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
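# A quick way to inspect the slow log at runtime (written as comments so this
# file stays a valid config; these are standard Redis commands):
#
#   redis-cli SLOWLOG GET 10    # fetch the 10 most recent slow entries
#   redis-cli SLOWLOG LEN       # how many entries are currently stored
#   redis-cli SLOWLOG RESET     # clear the log and reclaim its memory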
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
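# For example, to enable the monitor at runtime and read what it collected
# (written as comments so this file stays a valid config):
#
#   redis-cli CONFIG SET latency-monitor-threshold 100
#   redis-cli LATENCY LATEST
#   redis-cli LATENCY RESET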
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
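# For example, to receive expired-key events from db 0 at runtime (written as
# comments so this file stays a valid config; assumes a local instance):
#
#   redis-cli CONFIG SET notify-keyspace-events Ex
#   redis-cli SUBSCRIBE __keyevent@0__:expired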
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

Mac Vim settings

Posted on 2017-03-21 |

#Mac Vim settings

vi ~/.vimrc

set ai " auto indenting
set history=100 " keep 100 lines of history
set ruler " show the cursor position
syntax on " syntax highlighting
set hlsearch " highlight the last searched term
filetype plugin on " use the file type plugins
" When editing a file, always jump to the last cursor position
autocmd BufReadPost *
\ if ! exists("g:leave_my_cursor_position_alone") |
\ if line("'\"") > 0 && line ("'\"") <= line("$") |
\ exe "normal g'\"" |
\ endif |
\ endif

Vagrant, a virtual machine tool

Posted on 2017-03-21 |

#vagrant

[TOC]

##install

Create a working folder, then:

vagrant init ubuntu/trusty64

vagrant up --provider virtualbox

##cmd
|cmd|Description|
|:--|:--|
|vagrant status|show status|
|vagrant halt|power off, immediately frees all RAM|
|vagrant suspend|suspend (save state)|
|vagrant up|start|
|vagrant destroy|destroy the machine|
|vagrant reload|restart the machine|

##Access (port forwarding)

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
Add the following directly below those comments: config.vm.network :forwarded_port, guest: 80, host: 8080
Save the file and start your Vagrant virtual machine using the vagrant up command. If your virtual machine is currently running, you can reload it using the vagrant reload command.
This configuration change will set up port forwarding from port 8080 on the host machine (your computer) to port 80 on the guest machine (your Vagrant virtual machine) when the VM is running. This will allow you to access your web server using the URL http://localhost:8080.
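A quick way to verify the forwarding once the VM has been reloaded (a sketch; it assumes a web server is actually listening on port 80 inside the guest):

vagrant reload
curl -I http://localhost:8080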

##some lessons

Vagrant Commands
We’re now ready to get started working within our Linux virtual machine. If your download hasn’t completed from the initial setup, go ahead and take a break and come back when that has completed. You won’t be able to make further progress until the virtual machine is up and running as much of the course will take place within this environment.
Before we access our machine, let's quickly review a few commands that vagrant provides to make managing your virtual machines much simpler. Remember, your vagrant machine lives within this specific folder on your computer, so make sure you're within that same folder you created earlier; otherwise these commands won't work as expected.
Type vagrant status
This command will show you the current status of the virtual machine. It should currently read “default running (virtualbox)” along with some other information.
Type vagrant suspend
This command suspends your virtual machine. All of your work is saved and the machine is put into a "sleep mode" of sorts. The machine's state is saved and it's very quick to stop and start your work. You should use this command if you plan to take a short break from your work but don't want to leave the virtual machine running.
Type vagrant up
This gets your virtual machine up and running again. Notice we didn’t have to redownload the virtual machine image, since it’s already been downloaded.
Type vagrant ssh
This command will actually connect to and log you into your virtual machine. Once done you will see a few lines of text showing various performance statistics of the virtual machine along with a new command line prompt that reads vagrant@vagrant-ubuntu-trusty-64:~$
Here are a few other important commands that we’ll discuss but you do not need to practice at this time:
vagrant halt
This command halts your virtual machine. All of your work is saved and the machine is turned off - think of this as “turning the power off”. It’s much slower to stop and start your virtual machine using this command, but it does free up all of your RAM once the machine has been stopped. You should use this command if you plan to take an extended break from your work, like when you are done for the day. The command vagrant up will turn your machine back on and you can continue your work.
vagrant destroy
This command destroys your virtual machine. Your work is not saved, the machine is turned off and forgotten about for the most part. Think of this as formatting the hard drive of a computer. You can always use vagrant up to relaunch the machine but you’ll be left with the baseline Linux installation from the beginning of this course. You should not have to use this command at any time during this course unless, at some point in time, you perform a task on the virtual machine that makes it completely inoperable.

References:
Vagrant "first up" help documentation

ELK configuration files

Posted on 2017-02-04 |

#ELK configuration files

[toc]

##Configuration files

###nginx

upstream kibana5 {
    server localhost:5601 fail_timeout=5;
    keepalive 64;
}
server {
    listen 5602;
    server_name kibana_server.com;
    access_log /data/log/nginx/kibana.srv-log-dev.log;
    error_log /data/log/nginx/kibana.srv-log-dev.error.log;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/all.crt;
    # ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        # root /var/www/kibana;
        # index index.html index.htm;
        proxy_pass http://kibana5;  # use the upstream defined above
    }
}
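After editing, a quick syntax check and reload (a sketch; it assumes nginx is on the PATH and you have the required privileges):

sudo nginx -t
sudo nginx -s reload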

###File locations

/etc/kibana/kibana.yml
/data/local/elasticsearch/config/elasticsearch.yml
/data/local/filebeat/filebeat.yml

###logstash.conf

input {
    beats {
        port => 5044
    }
}
filter {
    grok {
        add_field => [ "received_from", "%{from_machine}" ]
    }
}
output {
    file {
        path => "/data/log/logstash/all.log"
        codec => line {format =>"$%{from_machine} %{message}"}
        flush_interval => 0
    }
}

###filebeat.yml (log collection)

###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  # Directories of the logs we collect:
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - /data/log/api/api.log
    - /data/log/php/php.log
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
#
fields_under_root: true
fields:
  from_machine: "10.8.15.106 APP[5]"
output.logstash:
  # The Logstash hosts
  hosts: ["10.8.26.121:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

###filebeat.yml (with credentials)

###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/log/lang-xmpp/api.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ["^DBG"]
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ["^ERR", "^WARN"]
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: [".gz$"]
# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#================================ General =====================================
fields_under_root: true
#fields:
# from_machine: "10.8.17.112 APP[1]"
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.8.17.112:9200"]
  index: "xmpp-%{+yyyy.MM.dd}"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elastic"
  password: "elasticpassword0314"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# hosts: ["10.8.26.121:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

###elasticsearch

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.8.17.112
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
xpack.security.audit.enabled: true

###kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://10.8.17.112:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword0314"
# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.cert: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.cert: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.ca: /path/to/your/CA.pem
# To disregard the validity of SSL certificates, change this setting's value to false.
#elasticsearch.ssl.verify: true
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
#
xpack.security.enabled: true

auth_basic

Posted on 2017-02-04 |

#nginx auth_basic configuration

nginx .conf file

upstream kibana5 {
    server localhost:5601 fail_timeout=5;
    keepalive 64;
}
server {
    listen 5602;
    server_name kibana_server.com;
    access_log /data/log/nginx/kibana.srv-log-dev.log;
    error_log /data/log/nginx/kibana.srv-log-dev.error.log;
    location / {
        auth_basic "secret";
        auth_basic_user_file /data/local/nginx/passwd.db;
        proxy_pass http://kibana5;  # use the upstream defined above
    }
}

Configuration directives:
auth_basic "secret"; auth_basic_user_file /data/local/nginx/passwd.db;

Auth file:
/data/local/nginx/passwd.db

Since I'm running nginx, I simply generated the password file with an online tool for convenience; the format is:
user:passwd
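If you prefer not to use an online generator, the same file can be produced locally; a sketch, where user and yourpassword are placeholders (htpasswd ships with httpd-tools, and the openssl variant avoids that dependency):

htpasswd -c /data/local/nginx/passwd.db user
# or, without htpasswd:
printf 'user:%s\n' "$(openssl passwd -apr1 'yourpassword')" >> /data/local/nginx/passwd.db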

Exception handling (continuously updated...)

Posted on 2017-02-04 |

#nginx 500 errors: collected fixes

[toc]

##Check the php-fpm processes

Cause: a script runs past its timeout and keeps a php-fpm worker occupied.
Solution:
Temporary: increase the number of php-fpm worker processes.
Steps:
1. ps -ef | grep php-fpm to find the php-fpm config file currently in use
2. set pm = static and pm.max_children = 128
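A minimal sketch of the change (the pool file path is an assumption, e.g. /etc/php-fpm.d/www.conf on CentOS; adjust to your install):

# find the config file the running php-fpm was started with
ps -ef | grep php-fpm | grep -v grep | head -1
# in the pool file set:
#   pm = static
#   pm.max_children = 128
# then restart php-fpm
sudo service php-fpm restart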

linux

Posted on 2017-02-04 |

#linux

[TOC]

##System
|Info|Command|
|:--|:--|
|Kernel / OS / CPU info|uname|
|OS version|head -n 1 /etc/issue|
|Hostname|hostname|
|Loaded kernel modules|lsmod|
|Environment variables|env|
|Init system (PID 1)|ps -p 1|

##Resources
|Info|Command|
|:--|:--|
|Memory and swap usage|free -g|
|Uptime, logged-in users, load|uptime|
|Which process is using a port|lsof -i:5601|
|Disk usage per item in a directory|du -sh *|

##Users
|Info|Command|Description|
|:--|:--|:--|
|Add user|adduser|Creates the home directory, sets the shell, and prompts for a password at creation time|
|Add user|useradd|Needs options for those settings; with no options the user has no password, no home directory, and no shell set|
|Grant sudo|vi /etc/sudoers.d/$user, add `$user ALL=(ALL) ALL`, then chmod 400 the file|See the sketch below|
|Change password|sudo passwd $user| |
|Change ownership|sudo chown -R elasticsearch:elasticsearch /data/local/elasticsearch| |
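A sketch of the sudo-grant steps above (the user name deploy is a placeholder):

sudo useradd deploy
echo 'deploy ALL=(ALL) ALL' | sudo tee /etc/sudoers.d/deploy
sudo chmod 400 /etc/sudoers.d/deploy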

##Processes
|Info|Command|Description|
|:--|:--|:--|
|Kill by name|ps -ef \| grep xmpp.php \| grep -v grep \| awk '{print $2}' \| xargs kill| |

##Files
|Info|Command|Description|
|:--|:--|:--|
|Symlink|ln -s /data/local/mysql5.6.16/bin/mysql /usr/bin| |
System
  # uname -a # kernel / OS / CPU info
  # head -n 1 /etc/issue # OS version
  # cat /proc/cpuinfo # CPU info
  # hostname # hostname
  # lspci -tv # list all PCI devices
  # lsusb -tv # list all USB devices
  # lsmod # list loaded kernel modules
  # env # environment variables
Resources
  # free -m # memory and swap usage
  # df -h # usage of each partition
  # du -sh <dir> # size of a given directory
  # grep MemTotal /proc/meminfo # total memory
  # grep MemFree /proc/meminfo # free memory
  # uptime # uptime, users, load
  # cat /proc/loadavg # system load
Disks and partitions
  # mount | column -t # mounted partitions
  # fdisk -l # all partitions
  # swapon -s # all swap partitions
  # hdparm -i /dev/hda # disk parameters (IDE devices only)
  # dmesg | grep IDE # IDE detection status at boot
Network
  # ifconfig # properties of all network interfaces
  # iptables -L # firewall rules
  # route -n # routing table
  # netstat -lntp # all listening ports
  # netstat -antp # all established connections
  # netstat -s # network statistics
Processes
  # ps -ef # all processes
  # top # real-time process status
Users
  # w # active users
  # id <username> # info on a given user
  # last # login history
  # cut -d: -f1 /etc/passwd # all users on the system
  # cut -d: -f1 /etc/group # all groups on the system
  # crontab -l # current user's scheduled jobs
Services
  # chkconfig --list # list all system services
  # chkconfig --list | grep on # list services enabled at boot
Packages
  # rpm -qa # all installed packages

ELK setup guide

Posted on 2017-02-04 |

#ELK setup

[TOC]

##⚠️ Notes

Keep the versions of all ELK components identical across the whole stack.
ELK data grows quickly; split your indices sensibly from day one so old data is easy to clean up (see the example after this list).
This document is for reference only; different servers and versions may differ, so check the official guides.
Treat the official documentation as the primary source.
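For example, with the date-based index name used later in this guide (xmpp-%{+yyyy.MM.dd}), old data can be dropped by deleting whole indices; a sketch, assuming Elasticsearch on localhost:9200 and that action.destructive_requires_name is not enabled (wildcard deletes would otherwise be rejected):

curl -XDELETE 'localhost:9200/xmpp-2017.01.*?pretty'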

##Installation:

###filebeat

####Download

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm

####Install

rpm -vi filebeat-5.1.1-x86_64.rpm

####Configure

vi /etc/filebeat/filebeat.yml

####Symlink

ln -s /etc/filebeat /data/local/filebeat

###logstash:

####Java dependency

yum install java-1.8.0-openjdk
yum install java-1.8.0-openjdk-devel.x86_64

export JAVACMD=`which java`
# JAVA_HOME should point at the JDK directory rather than the java binary:
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))

####Download

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
sudo rpm -i logstash-5.1.1.rpm

####Symlink:

ln -s /etc/logstash /data/local/logstash

warning: logstash-5.1.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY

####Plugins:

The beats input plugin. If the default gem source is unreachable from your network, change it in /usr/share/logstash/Gemfile.

filebeat
/usr/share/logstash/bin/logstash-plugin install logstash-input-beats

#####Startup fix
/usr/bin/filebeat.sh -e -c /data/local/filebeat/filebeat.yml

##Startup

###filebeat

####Test

/usr/share/filebeat/bin/filebeat -e -c filebeat.yml -d "publish"

/usr/share/filebeat/bin/filebeat -e -c /data/local/filebeat/filebeat.yml -d "publish"

####Production

nohup /usr/bin/filebeat.sh -e -c /data/local/filebeat/filebeat.yml &>/dev/null &

####Reset the registry

To re-ship data that was already read during testing, delete the registry file:

rm /usr/share/filebeat/bin/data/registry

###logstash

####Test

/usr/share/logstash/bin/logstash -f /data/local/logstash/conf.d/logstash.conf
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
bin/logstash -f first-pipeline.conf --config.test_and_exit
bin/logstash -f first-pipeline.conf --config.reload.automatic

####Production

nohup /usr/share/logstash/bin/logstash -f /data/local/logstash/conf.d/logstash.conf >> /data/

###elasticsearch:

####Overview:

  • Check your cluster, node, and index health, status, and statistics
  • Administer your cluster, node, and index data and metadata
  • Perform CRUD (Create, Read, Update, and Delete) and search operations against your indexes
  • Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others

####Download

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.tar.gz
tar -xvf elasticsearch-5.1.1.tar.gz
cd elasticsearch-5.1.1/bin

####Start

./elasticsearch
./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name

####Test
Health check:

curl -XGET 'localhost:9200/_cat/health?v&pretty'

List nodes:

curl -XGET 'localhost:9200/_cat/nodes?v&pretty'

List indices:

curl -XGET 'localhost:9200/_cat/indices?v&pretty'

Create an index:

curl -XPUT 'localhost:9200/customer?pretty&pretty'

Let's index a simple customer document into the "customer" index, type "external", with ID 1, as shown below.

Create and update use the same call (a PUT to an existing ID replaces the document):

curl -XPUT 'localhost:9200/customer/external/1?pretty&pretty' -d'
{
"name": "John Doe"
}'

Retrieve:

curl -XGET 'localhost:9200/customer/external/1?pretty&pretty'
curl -XGET 'localhost:9200/customer/external/1?pretty'

Delete the index:

curl -XDELETE 'localhost:9200/customer?pretty'

Delete an index:

curl -XDELETE 'localhost:9200/customer?pretty&pretty'

List indices:

curl -XGET 'localhost:9200/_cat/indices?v&pretty'

elastic_search

Note: this batch of commands gave me quite a bit of trouble.

curl -XPUT 'localhost:9200/customer?pretty'
curl -XPUT 'localhost:9200/customer/external/2?pretty' -d'
{
"name": "John"
}'
curl -XGET 'localhost:9200/customer/external/1?pretty'
curl -XDELETE 'localhost:9200/customer?pretty'

Note that in the above case, we are using the POST verb instead of PUT since we didn't specify an ID.

Auto-generated ID:

curl -XPOST 'localhost:9200/customer/external?pretty&pretty' -d'
{
"name": "Jane Doe"
}'

Update 1 (merge a partial document):

curl -XPOST 'localhost:9200/customer/external/1/_update?pretty&pretty' -d'
{
"doc": { "name": "Jane Doe", "age": 20 }
}'

Update 2 (with a script):

curl -XPOST 'localhost:9200/customer/external/1/_update?pretty&pretty' -d'
{
"script" : "ctx._source.age += 5"
}'

Update: a plain PUT to the same ID also replaces (updates) the document when re-creating it.

curl -XPOST 'localhost:9200/customer/external/1/_update?pretty&pretty' -d'

####Bulk operations
At the moment this does not pass my test: only the first item in the bulk request succeeds (see the note after the example).

curl -XPOST 'localhost:9200/customer/external/_bulk?pretty&pretty' -d'
{"index":{"_id":"1"}}
{"name": "John Doe111" }
{"index":{"_id":"2"}}
{"name": "Jane Doe222" }'

####Working with real data
Prepare the sample data:

wget https://raw.githubusercontent.com/elastic/elasticsearch/master/docs/src/test/resources/accounts.json
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty&refresh' --data-binary "@accounts.json"
curl 'localhost:9200/_cat/indices?v'

Search via REST request URI:

curl -XGET 'localhost:9200/bank/_search?q=*&sort=account_number:asc&pretty&pretty'

Search via REST request body:

curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_all": {} },
"sort": [
{ "account_number": "asc" }
]
}'

More REST request body queries
size defaults to 10

Match all
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_all": {} }
}'
Match all, return 1 document
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_all": {} },
"size": 1
}'
Match all, return 10 documents starting from offset 10
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_all": {} },
"from": 10,
"size": 10
}'
Match all, return only the account_number and balance fields
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_all": {} },
"_source": ["account_number", "balance"]
}'
Match account_number = 20
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match": { "account_number": 20 } }
}'
Match on a text field
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match": { "address": "mill" } }
}'
Match, OR semantics (mill OR lane)
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match": { "address": "mill lane" } }
}'
Match phrase (terms adjacent and in order)
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": { "match_phrase": { "address": "mill lane" } }
}'
bool must (AND)
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": {
"bool": {
"must": [
{ "match": { "address": "mill" } },
{ "match": { "address": "lane" } }
]
}
}
}'
bool must_not (NOT)
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": {
"bool": {
"must_not": [
{ "match": { "address": "mill" } },
{ "match": { "address": "lane" } }
]
}
}
}'
Combine must and must_not
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": {
"bool": {
"must": [
{ "match": { "age": "40" } }
],
"must_not": [
{ "match": { "state": "ID" } }
]
}
}
}'
Range filter
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"query": {
"bool": {
"must": { "match_all": {} },
"filter": {
"range": {
"balance": {
"gte": 20000,
"lte": 30000
}
}
}
}
}
}'
Group by (terms aggregation)
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"size": 0,
"aggs": {
"group_by_state": {
"terms": {
"field": "state.keyword"
}
}
}
}'
Equivalent SQL:
SELECT state, COUNT(*) FROM bank GROUP BY state ORDER BY COUNT(*) DESC
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"size": 0,
"aggs": {
"group_by_state": {
"terms": {
"field": "state.keyword",
"order": {
"average_balance": "desc"
}
},
"aggs": {
"average_balance": {
"avg": {
"field": "balance"
}
}
}
}
}
}'
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"size": 0,
"aggs": {
"group_by_state": {
"terms": {
"field": "state.keyword"
},
"aggs": {
"average_balance": {
"avg": {
"field": "balance"
}
}
}
}
}
}'
This example shows grouping by age bracket (20-29, 30-39, 40-49), then by gender, and finally the average account balance for each gender within each bracket:
curl -XGET 'localhost:9200/bank/_search?pretty' -d'
{
"size": 0,
"aggs": {
"group_by_age": {
"range": {
"field": "age",
"ranges": [
{
"from": 20,
"to": 30
},
{
"from": 30,
"to": 40
},
{
"from": 40,
"to": 50
}
]
},
"aggs": {
"group_by_gender": {
"terms": {
"field": "gender.keyword"
},
"aggs": {
"average_balance": {
"avg": {
"field": "balance"
}
}
}
}
}
}
}
}'

####More Elasticsearch practice

curl -XGET '127.0.0.1:9200/xmpp/_search?pretty' -d'
{
"query": { "match": { "message": "1000010=>1279133" } }
}'

GET /megacorp/employee/_search?pretty
GET /megacorp/employee/_search?q=last_name:Smith
GET /megacorp/employee/_search?pretty
{
"query" : {
"match" : {
"last_name" : "Smith"
}
}
}
GET /megacorp/employee/_search?pretty
{
"query" : {
"bool" : {
"must" : {
"match" : {
"last_name" : "smith"
}
},
"filter" : {
"range" : {
"age" : { "gt" : 30 }
}
}
}
}
}
GET /megacorp/employee/_search
{
"query" : {
"match" : {
"about" : "rock climbing"
}
}
}
GET /megacorp/employee/_search
{
"query" : {
"match_phrase" : {
"about" : "rock climbing"
}
}
}
GET /megacorp/employee/_search
{
"query" : {
"match_phrase" : {
"about" : "rock climbing"
}
},
"highlight": {
"fields" : {
"about" : {}
}
}
}
GET /megacorp/employee/_search
{
"aggs": {
"all_interests": {
"terms": { "field": "interests" }
}
}
}
PUT /megacorp/_mapping/employee?pretty
{
"properties": {
"interests": {
"type": "text",
"fielddata": true
}
}
}
GET /megacorp/employee/_search
{
"query": {
"match": {
"last_name": "smith"
}
},
"aggs": {
"all_interests": {
"terms": {
"field": "interests"
}
}
}
}
GET /megacorp/employee/_search
{
"aggs" : {
"all_interests" : {
"terms" : { "field" : "interests" },
"aggs" : {
"avg_age" : {
"avg" : { "field" : "age" }
}
}
}
}
}

###kibana

####Install
vi /etc/yum.repos.d/kibana.repo

[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

sudo yum install kibana
sudo chkconfig --add kibana
sudo -i service kibana start
sudo -i service kibana stop

####Run
sudo -i service kibana start

####Configure
Front it with an nginx reverse proxy (see the nginx config in the ELK configuration files post)

sudo -i service kibana stop

####Elasticsearch production deployment bug fixes:

#####Q1:
ERROR: bootstrap checks failed system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Cause:
CentOS 6 does not support seccomp, while ES 5.2.0 checks bootstrap.system_call_filter (true by default), so the check fails and Elasticsearch refuses to start.
Fix:
Set bootstrap.system_call_filter to false in elasticsearch.yml, below the Memory section:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

#####Q2:
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

Temporary: ulimit -n 65536 (as the user running Elasticsearch)
Permanent fix: switch to root and raise the open-file limit, e.g. in /etc/security/limits.conf:
* soft nofile 65536
* hard nofile 65536
Log in again so the new limit takes effect.

#####Q3
max number of threads [1024] for user [lishang] likely too low, increase to at least [2048]

Fix: switch to root and raise the process limit, e.g. in /etc/security/limits.d/90-nproc.conf change
* soft nproc 1024
to
* soft nproc 2048
If you also hit the "max virtual memory areas vm.max_map_count ... too low" check, edit /etc/sysctl.conf as root and add:
vm.max_map_count=655360
then run:
sysctl -p
Finally, restart Elasticsearch and it should come up.

###x-pack

x-pack requires a license; keep this in mind when installing. If you do not plan to request a license, configure it as follows:
elasticsearch.yml

xpack.security.enabled: false
xpack.monitoring.enabled: true
xpack.graph.enabled: false
#xpack.reporting.enabled: false

Some warnings during installation:
Storing generated key in [/Users/langlive/Desktop/elasticsearch-5.2.1/config/x-pack/system_key]...
Ensure the generated key can be read by the user that Elasticsearch runs as, permissions are set to owner read/write only

####Install
elasticsearch

bin/elasticsearch-plugin install x-pack

kibana

bin/kibana-plugin install x-pack

logstash

bin/logstash-plugin install x-pack

filebeat

vim /data/local/filebeat/filebeat.yml

####Setup

#####Passwords

curl -XPUT -u elastic '127.0.0.1:9200/_xpack/security/user/elastic/_password' -d '{
"password" : "123456"
}'
curl -XPUT -u elastic '127.0.0.1:9200/_xpack/security/user/kibana/_password' -d '{
"password" : "123456"
}'
curl -XPUT -u elastic '127.0.0.1:9200/_xpack/security/user/logstash_system/_password' -d '{
"password" : "123456"
}'

#####Access control

######Create roles

curl -XPOST -u elastic '127.0.0.1:9200/_xpack/security/role/events_admin' -d '{
"indices" : [
{
"names" : [ "events*" ],
"privileges" : [ "all" ]
},
{
"names" : [ "xmpp*" ],
"privileges" : [ "all" ]
},
{
"names" : [ "log*" ],
"privileges" : [ "all" ]
},
{
"names" : [ ".kibana*" ],
"privileges" : [ "manage", "read", "index" ]
}
]
}'
curl -XPOST -u elastic '127.0.0.1:9200/_xpack/security/role/events_root' -d '{
"cluster" : [ "all" ],
"indices" : [
{
"names" : [ "*" ],
"privileges" : [ "all" ]
}
]
}'
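To check that a role was stored as intended (a sketch, authenticating as the elastic superuser):

curl -XGET -u elastic '127.0.0.1:9200/_xpack/security/role/events_admin?pretty'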

######Create admin users

curl -XPOST -u elastic '127.0.0.1:9200/_xpack/security/user/walter' -d '{
"password" : "enjoyprocess",
"full_name" : "walter.shi",
"email" : "walter.shi@langlive.com",
"roles" : [ "events_admin" ]
}'
curl -XPOST -u elastic '127.0.0.1:9200/_xpack/security/user/dev' -d '{
"password" : "kibana-search",
"full_name" : "langlive.dev",
"email" : "dev@langlive.com",
"roles" : [ "events_admin" ]
}'

#####Change a password

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "elasticpassword"
}
'

References:
filebeat installation guide
filebeat configuration guide
logstash installation guide
logstash plugin installation guide
logstash configuration guide
