Commonly used rsync sync commands

- ### Push from local to the server
- rsync -avz --port=8730 dist.tar.gz liuxinxiu@127.0.0.1::test
- rsync -avzP --port=8730 dist.tar.gz dynamicAssets.json jenkins@172.16.207.22::work-litigation
- ### Pull from the server to local
- rsync -avz liuxinxiu@127.0.0.1::test /test/111
- rsync -avz jenkins@172.16.207.22::work-litigation /var/www/html/mirrors/frontend/injured/work-litigation
- ### Create a symlink (source path first, link path second)
- ln -s /data/apps/rsync/www/release /data/apps/nginx/htdocs/release
Concrete examples:
- ### Push from local to the server
- cd /data/apps/nginx/htdocs/$projectPath/upload
- rsync -avzP --port=8730 dist.tar.gz dynamicAssets.json jenkins@172.16.207.22::$las_dir
- ### Push from local to the server, then publish to the mirror
- cd /data/apps/nginx/htdocs/$projectPath/upload
- rsync -avzP --port=8730 dist.tar.gz dynamicAssets.json jenkins@172.16.207.22::release/$projectPath &&
- cd /data/apps/rsync/www/release/$lat_dir &&
- scp -r $pat_dir root@10.10.9.99:/var/www/html/mirrors/frontend/$pat_dir &&
- curl http://10.10.9.99/frontend/$projectPath
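For reference, the ::test, ::work-litigation and ::release targets above are modules defined on the rsync daemon side. A minimal sketch of such a module in /etc/rsyncd.conf is shown below; the path is inferred from the symlink example above, while the uid/gid and the remaining options are assumptions to adapt to your own environment:
- port = 8730
- [release]
- # module root; inferred from the paths used in the examples above
- path = /data/apps/rsync/www/release
- # allow clients to push into the module (assumption)
- read only = false
- uid = jenkins
- gid = jenkins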
[Linux] Fixing the SecureCRT login error on Rocky Linux 9.1: No compatible hostkey. The server supports these methods: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519

- Key exchange failed. No compatible key exchange method. The server supports these methods: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
- No compatible hostkey. The server supports these methods: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
1. Log in to the system through the web management console.
Edit /etc/ssh/sshd_config.
Append the following line at the bottom:
- KexAlgorithms diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group1-sha1,curve25519-sha256@libssh.org
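After adding the line, restart sshd so the change takes effect. On Rocky Linux 9 the system-wide crypto policy may also block legacy algorithms; the second command below is an optional extra step (an assumption, not part of the original note), and updating SecureCRT or its key-exchange settings is the cleaner long-term fix:
- systemctl restart sshd
- # optional, only if legacy algorithms are still rejected (assumption):
- update-crypto-policies --set LEGACY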
Installing Docker on Rocky Linux 8.6

- yum install -y yum-utils device-mapper-persistent-data lvm2
- yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- yum install docker-ce -y --allowerasing
- systemctl start docker
- systemctl enable docker
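A quick sanity check after the install (not in the original note, just a common verification step):
- docker version
- docker run hello-world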
Docker installed successfully but fails to start: the fix

Docker had been installed, but after the server was migrated from the public-facing (192.168.50.60) machine room to the internal network (192.168.190.60) and its IP address changed, the environment would no longer come up.
After starting Docker, systemctl status docker showed the following error:
[root@joinApp2 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Thu 2016-02-25 17:26:11 CST; 16s ago
Docs: http://docs.docker.com
Process: 16384 ExecStart=/usr/bin/docker daemon $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 16384 (code=exited, status=1/FAILURE)
Feb 25 17:26:10 joinApp2 systemd[1]: Failed to start Docker Application Container Engine.
Feb 25 17:26:10 joinApp2 systemd[1]: Unit docker.service entered failed state.
Feb 25 17:26:10 joinApp2 systemd[1]: docker.service failed.
Feb 25 17:26:11 joinApp2 systemd[1]: docker.service holdoff time over, scheduling restart.
Feb 25 17:26:11 joinApp2 systemd[1]: start request repeated too quickly for docker.service
Feb 25 17:26:11 joinApp2 systemd[1]: Failed to start Docker Application Container Engine.
Feb 25 17:26:11 joinApp2 systemd[1]: Unit docker.service entered failed state.
Feb 25 17:26:11 joinApp2 systemd[1]: docker.service failed.
The problem was shelved at the time without a fix.
After some more googling today it was solved; here is the solution:
vi /etc/sysconfig/selinux
Change the SELINUX value to disabled, reboot the machine, and then restart Docker.
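Spelled out as commands (the sed one-liner is just one way to make the edit; it assumes the stock SELINUX=enforcing line):
- sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
- reboot
- # after the machine is back up:
- systemctl start docker
- systemctl status docker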
Solution 2:
===================
Basic DDoS protection on CentOS

The goal of a DDoS attack is to open a huge number of concurrent connections in a short time, bringing the server down or exhausting network bandwidth and system resources so that legitimate users can no longer reach the site.
DoS Deflate is a lightweight bash shell script that blocks denial-of-service attacks. Installing it and doing a little configuration gives basic DDoS protection.
Install it with:
- wget http://www.inetbase.com/scripts/ddos/install.sh
- chmod 700 install.sh
- ./install.sh
The installer runs on its own; when it finishes it shows a copyright/readme notice, press q to exit.
To uninstall:
- wget http://www.inetbase.com/scripts/ddos/uninstall.ddos
- chmod 700 uninstall.ddos
- ./uninstall.ddos
Once installed, a little configuration sets up the DDoS defence. I am using CentOS 7; the configuration file is /usr/local/ddos/ddos.conf.
You can edit it with vi /usr/local/ddos/ddos.conf and save with :wq when done.
The key settings in ddos.conf are listed below (lines starting with # are comments and can be ignored):
- PROGDIR="/usr/local/ddos" # installation directory
- PROG="/usr/local/ddos/ddos.sh" # the main script
- IGNORE_IP_LIST="/usr/local/ddos/ignore.ip.list" # IP whitelist file (see the example after this list)
- CRON="/etc/cron.d/ddos.cron" # cron job that runs the periodic check
- APF="/etc/apf/apf" # path to the APF binary, used when APF_BAN=1
- IPT="/sbin/iptables" # path to the iptables binary, used when APF_BAN=0
- FREQ=1 # how often the check runs, in minutes (default: 1)
- NO_OF_CONNECTIONS=150 # maximum connections per IP; IPs above this limit get banned
- APF_BAN=0 # 1 = ban via APF, 0 = ban via iptables (iptables recommended)
- KILL=1 # whether to ban offending IPs: 1 = ban, 0 = do not ban
- EMAIL_TO="root" # e-mail address for ban alerts; change to your own
- BAN_PERIOD=600 # how long an IP stays banned, in seconds (default: 600)
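For example, to make sure your own office or monitoring IP is never banned, append it to the whitelist (the address below is just a placeholder):
- echo "203.0.113.10" >> /usr/local/ddos/ignore.ip.list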
If /usr/local/ddos/ddos.sh does not count connections correctly, it may be because IPv6 is enabled.
Edit the script with vi /usr/local/ddos/ddos.sh.
Line 117 reads: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr > $BAD_IP_LIST
Change it to the following:
- netstat -ntu | awk '{print $5}' | cut -d: -f1 | sed -n '/[0-9]/p' | sort | uniq -c | sort -nr > $BAD_IP_LIST
CentOS 7 uses firewalld by default; to work well with DoS Deflate it is recommended to stop firewalld and use iptables instead, as sketched below. If you are not familiar with iptables, there are plenty of guides online.
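A rough sketch of that switch on CentOS 7 (standard package and service names, but verify before running this on a production box):
- systemctl stop firewalld
- systemctl disable firewalld
- yum install -y iptables-services
- systemctl enable iptables
- systemctl start iptables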
Listing the IPs with the most connections to the server

How do you confirm whether you are under a DDoS attack?
Log in to the server as root and run the command below; it lets you check whether the server is being DDoSed:
- netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
The output shows how many connections each IP currently has to the server. A handful, a dozen, or even a few dozen connections per IP is fairly normal; hundreds or thousands from a single IP is definitely not. The command lists the IPs with the largest numbers of connections to the server.
There are also cases that make a DDoS harder to spot: when the attacker uses fewer connections spread over many more IPs, each IP shows only a small count even though the server is under attack. It is therefore also important to check the currently active connections:
- netstat -n | grep :80 | wc -l
The command above shows all active connections open to your server.
You can also use:
- netstat -n | grep :80 | grep SYN | wc -l
The number of connections reported by the first command varies, but if it is above 500 there is almost certainly a problem.
If the second command returns 100 or more, the server is probably under a SYN flood attack.
Once you have the list of IPs attacking the server, blocking them is straightforward, for example as in the sketch below.
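A minimal example of blocking one offending IP with iptables (the address is a placeholder, and the save command assumes iptables-services is installed as above):
- iptables -A INPUT -s 203.0.113.45 -j DROP
- service iptables save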
Google Chrome extension installation directories

macOS:
- /Users/jesse/Library/Application Support/Google/Chrome/Default/Extensions/nhdogjmejiglipccpnnnanhbledajbpd/0.0.2_0
Windows:
- C:\Users\Administrator\AppData\Local\Google\Chrome\User Data\Default\Extensions\nhdogjmejiglipccpnnnanhbledajbpd\3.1.4_0
The difference between /bin/false and /sbin/nologin

Both prevent the user from logging in or being switched to. With /bin/false there is no message at all and the session simply exits; /sbin/nologin behaves the same way except that it first prints a notice such as "This account is currently not available."
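A quick way to see the difference for yourself (testuser is a placeholder account):
- usermod -s /bin/false testuser
- su - testuser      # exits immediately, prints nothing
- usermod -s /sbin/nologin testuser
- su - testuser      # prints "This account is currently not available."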
Summary: finding large files and directories on Linux

On Windows we can use the TreeSize tool to find large files and folders quickly and conveniently. How do we search for large files on a Linux system? Below is a summary of ways to find large files and directories on Linux.
1: How to find large files
Quite often you need to know which large files exist on the system, for example files over 100 MB or 1 GB (the threshold depends on the situation). How do you find them? For example, to search the current directory for files larger than 800 MB:
- [root@getlnx01 u03]# pwd
- /u03
- [root@getlnx01 u03]# find . -type f -size +800M
- ./flash_recovery_area/backup/backupsets/ora_df873519197_s46815_s1
- ./flash_recovery_area/backup/backupsets/ora_df873523646_s46822_s1
- ./flash_recovery_area/backup/backupsets/ora_df873521714_s46818_s1
- ./flash_recovery_area/backup/backupsets/ora_df873522876_s46820_s1
- ./flash_recovery_area/backup/backupsets/ora_df873517396_s46813_s1
- ./flash_recovery_area/backup/backupsets/ora_df873523321_s46821_s1
- ./flash_recovery_area/backup/backupsets/ora_df873515765_s46811_s1
- ./flash_recovery_area/backup/backupsets/ora_df873520789_s46817_s1
- ./flash_recovery_area/backup/backupsets/ora_df873524162_s46823_s1
- ./flash_recovery_area/backup/backupsets/ora_df873518302_s46814_s1
- ./flash_recovery_area/backup/backupsets/ora_df873519953_s46816_s1
- ./flash_recovery_area/backup/backupsets/ora_df873516500_s46812_s1
- ./flash_recovery_area/backup/backupsets/ora_df873513413_s46809_s1
- ./flash_recovery_area/backup/backupsets/ora_df873514789_s46810_s1
- ./oradata/epps/invsubmat_d08.dbf
- ./oradata/epps/gmtinv_d08.dbf
- ./oradata/epps/gmtinv_x01.dbf
- ./oradata/epps/undotbs02.dbf
- ./oradata/epps/gmtinv_d07.dbf
- ./oradata/epps/undotbs01.dbf
- ./oradata/epps/gmtinv_x02.dbf
As shown above, the command only prints the names of the files larger than 800 MB; it tells us nothing else about them (size, ownership and so on). Can we display more detail? Of course, as shown below:
- [root@getlnx01 u03]# find . -type f -size +800M -print0 | xargs -0 ls -l
- -rw-r----- 1 oracle oinstall 2782846976 Mar 6 11:51 ./flash_recovery_area/backup/backupsets/ora_df873513413_s46809_s1
- -rw-r----- 1 oracle oinstall 1878433792 Mar 6 11:53 ./flash_recovery_area/backup/backupsets/ora_df873514789_s46810_s1
- -rw-r----- 1 oracle oinstall 1378492416 Mar 6 11:54 ./flash_recovery_area/backup/backupsets/ora_df873515765_s46811_s1
- -rw-r----- 1 oracle oinstall 1641381888 Mar 6 11:56 ./flash_recovery_area/backup/backupsets/ora_df873516500_s46812_s1
- -rw-r----- 1 oracle oinstall 1564065792 Mar 6 11:58 ./flash_recovery_area/backup/backupsets/ora_df873517396_s46813_s1
- -rw-r----- 1 oracle oinstall 1663492096 Mar 6 12:00 ./flash_recovery_area/backup/backupsets/ora_df873518302_s46814_s1
- -rw-r----- 1 oracle oinstall 1368244224 Mar 6 12:02 ./flash_recovery_area/backup/backupsets/ora_df873519197_s46815_s1
- -rw-r----- 1 oracle oinstall 1629069312 Mar 6 12:04 ./flash_recovery_area/backup/backupsets/ora_df873519953_s46816_s1
- -rw-r----- 1 oracle oinstall 1629954048 Mar 6 12:06 ./flash_recovery_area/backup/backupsets/ora_df873520789_s46817_s1
- -rw-r----- 1 oracle oinstall 1202192384 Mar 6 12:07 ./flash_recovery_area/backup/backupsets/ora_df873521714_s46818_s1
- -rw-r----- 1 oracle oinstall 1189388288 Mar 6 12:10 ./flash_recovery_area/backup/backupsets/ora_df873522876_s46820_s1
- -rw-r----- 1 oracle oinstall 1089257472 Mar 6 12:11 ./flash_recovery_area/backup/backupsets/ora_df873523321_s46821_s1
- -rw-r----- 1 oracle oinstall 1097687040 Mar 6 12:12 ./flash_recovery_area/backup/backupsets/ora_df873523646_s46822_s1
- -rw-r----- 1 oracle oinstall 1051009024 Mar 6 12:13 ./flash_recovery_area/backup/backupsets/ora_df873524162_s46823_s1
- -rw-r----- 1 oracle oinstall 4294975488 Apr 3 15:07 ./oradata/epps/gmtinv_d07.dbf
- -rw-r----- 1 oracle oinstall 4194312192 Apr 1 22:36 ./oradata/epps/gmtinv_d08.dbf
- -rw-r----- 1 oracle oinstall 4294975488 Apr 3 15:54 ./oradata/epps/gmtinv_x01.dbf
- -rw-r----- 1 oracle oinstall 4294975488 Apr 3 15:57 ./oradata/epps/gmtinv_x02.dbf
- -rw-r----- 1 oracle oinstall 4294975488 Apr 1 22:35 ./oradata/epps/invsubmat_d08.dbf
- -rw-r----- 1 oracle oinstall 8589942784 Apr 4 09:55 ./oradata/epps/undotbs01.dbf
- -rw-r----- 1 oracle oinstall 8589942784 Apr 4 09:15 ./oradata/epps/undotbs02.dbf
If you only need to find the files larger than 800 MB and display their exact sizes, use the following command:
- [root@getlnx01 u03]# find . -type f -size +800M -print0 | xargs -0 du -h
- 1.3G ./flash_recovery_area/backup/backupsets/ora_df873519197_s46815_s1
- 1.1G ./flash_recovery_area/backup/backupsets/ora_df873523646_s46822_s1
- 1.2G ./flash_recovery_area/backup/backupsets/ora_df873521714_s46818_s1
- 1.2G ./flash_recovery_area/backup/backupsets/ora_df873522876_s46820_s1
- 1.5G ./flash_recovery_area/backup/backupsets/ora_df873517396_s46813_s1
- 1.1G ./flash_recovery_area/backup/backupsets/ora_df873523321_s46821_s1
- 1.3G ./flash_recovery_area/backup/backupsets/ora_df873515765_s46811_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873520789_s46817_s1
- 1004M ./flash_recovery_area/backup/backupsets/ora_df873524162_s46823_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873518302_s46814_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873519953_s46816_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873516500_s46812_s1
- 2.6G ./flash_recovery_area/backup/backupsets/ora_df873513413_s46809_s1
- 1.8G ./flash_recovery_area/backup/backupsets/ora_df873514789_s46810_s1
- 4.1G ./oradata/epps/invsubmat_d08.dbf
- 4.0G ./oradata/epps/gmtinv_d08.dbf
- 4.1G ./oradata/epps/gmtinv_x01.dbf
- 8.1G ./oradata/epps/undotbs02.dbf
- 4.1G ./oradata/epps/gmtinv_d07.dbf
- 8.1G ./oradata/epps/undotbs01.dbf
- 4.1G ./oradata/epps/gmtinv_x02.dbf
If you also want the results sorted by file size, use the following command:
- [root@getlnx01 u03]# find . -type f -size +800M -print0 | xargs -0 du -h | sort -nr
- 1004M ./flash_recovery_area/backup/backupsets/ora_df873524162_s46823_s1
- 8.1G ./oradata/epps/undotbs02.dbf
- 8.1G ./oradata/epps/undotbs01.dbf
- 4.1G ./oradata/epps/invsubmat_d08.dbf
- 4.1G ./oradata/epps/gmtinv_x02.dbf
- 4.1G ./oradata/epps/gmtinv_x01.dbf
- 4.1G ./oradata/epps/gmtinv_d07.dbf
- 4.0G ./oradata/epps/gmtinv_d08.dbf
- 2.6G ./flash_recovery_area/backup/backupsets/ora_df873513413_s46809_s1
- 1.8G ./flash_recovery_area/backup/backupsets/ora_df873514789_s46810_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873520789_s46817_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873519953_s46816_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873518302_s46814_s1
- 1.6G ./flash_recovery_area/backup/backupsets/ora_df873516500_s46812_s1
- 1.5G ./flash_recovery_area/backup/backupsets/ora_df873517396_s46813_s1
- 1.3G ./flash_recovery_area/backup/backupsets/ora_df873519197_s46815_s1
- 1.3G ./flash_recovery_area/backup/backupsets/ora_df873515765_s46811_s1
- 1.2G ./flash_recovery_area/backup/backupsets/ora_df873522876_s46820_s1
- 1.2G ./flash_recovery_area/backup/backupsets/ora_df873521714_s46818_s1
- 1.1G ./flash_recovery_area/backup/backupsets/ora_df873523646_s46822_s1
- 1.1G ./flash_recovery_area/backup/backupsets/ora_df873523321_s46821_s1
As the output above shows, the ordering is not always strictly by size. That is caused by du's -h option, which mixes units (M and G); displaying everything in MB avoids the problem, as in the sketch below. With that, this command for finding large files on Linux is pretty much complete; if you have further requirements you can keep adjusting it.
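Two ways to get a correctly sorted listing, assuming GNU coreutils (du -m prints sizes in MB; sort -h understands the human-readable suffixes):
- find . -type f -size +800M -print0 | xargs -0 du -m | sort -nr
- find . -type f -size +800M -print0 | xargs -0 du -h | sort -hr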
2: How to find large directories on Linux
For example, when a disk-space alert fires and you have not been monitoring file growth, you need a quick way to see which directories have become large. The du command solves this.
- [root@getlnx01 u03]# du -h --max-depth=1
- 16K ./lost+found
- 33G ./flash_recovery_area
- 37G ./oradata
- 70G .
If you want to know which large directories sit under flash_recovery_area, increase the depth to --max-depth=2; to sort the results, pipe them through sort, as shown below:
- [root@getlnx01 u03]# du -h --max-depth=2 | sort -n
- 3.5G ./flash_recovery_area/EPPS
- 16K ./lost+found
- 29G ./flash_recovery_area/backup
- 33G ./flash_recovery_area
- 37G ./oradata
- 37G ./oradata/epps
- 70G .
- [root@getlnx01 u03]# du -hm --max-depth=2 | sort -n
- 1 ./lost+found
- 3527 ./flash_recovery_area/EPPS
- 29544 ./flash_recovery_area/backup
- 33070 ./flash_recovery_area
- 37705 ./oradata
- 37705 ./oradata/epps
- 70775 .
- [root@getlnx01 u03]# cd /
- [root@getlnx01 /]# du -hm --max-depth=2 | sort -n
Sometimes the result is too long (for example when searching from the root directory) and keeps scrolling off the screen. If you only want the 12 largest directories, pipe the output through head:
- [root@getlnx01 /]# du -hm --max-depth=2 | sort -nr | head -12
- 407480 .
- 167880 ./u04
- 158685 ./u02/oradata
- 158685 ./u02
- 152118 ./u04/oradata
- 70775 ./u03
- 37705 ./u03/oradata
- 33070 ./u03/flash_recovery_area
- 5995 ./u01/app
- 5995 ./u01
- 3551 ./usr
- 1558 ./usr/share
- [root@getlnx01 /]#
Check a specific directory: du -sh /www/cnmo/
Check overall disk usage: df -h
Linux shell snippets

1. Detect the current operating-system type
- #!/bin/sh
- SYSTEM=`uname -s`
- if [ "$SYSTEM" = "Linux" ] ; then
- echo "Linux"
- elif [ "$SYSTEM" = "FreeBSD" ] ; then
- echo "FreeBSD"
- elif [ "$SYSTEM" = "Solaris" ] ; then
- echo "Solaris"
- else
- echo "What?"
- fi