11/23/09

Upgrading CentOS to 5.4 breaks vmware

http://communities.vmware.com/thread/229957

5.4 upgrades glibc to 2.5-42.i686, which causes problems with vmware-hostd: it crashes shortly after being started (I can get it to crash just by logging in and clicking on a VM).

fix ...

Get a copy of the 5.3 glibc. You can get it from an install DVD, Google, or another box. Just make sure you grab the 32-bit package for a 32-bit machine and the 64-bit package for a 64-bit machine.

mkdir /usr/lib/vmware/lib/libc.so.6
cp libc-2.5.so /usr/lib/vmware/lib/libc.so.6/
chown root:root /usr/lib/vmware/lib/libc.so.6/libc-2.5.so
mv /usr/lib/vmware/lib/libc.so.6/libc-2.5.so /usr/lib/vmware/lib/libc.so.6/libc.so.6
vi /usr/sbin/vmware-hostd

Add an "export LD_LIBRARY_PATH=/usr/lib/vmware/lib/libc.so.6:$LD_LIBRARY_PATH" before the last line.

sudo /etc/init.d/vmware restart
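The steps above can be collected into one hedged helper script. This is my own sketch, not part of VMware: `install_old_libc` is a made-up name, and the paths are passed in as arguments so nothing here is hard-coded.

```shell
# Hedged sketch of the libc-swap steps above.
# $1 = path to the 5.3 libc-2.5.so you obtained
# $2 = the private lib dir VMware's wrapper will point at
#      (e.g. /usr/lib/vmware/lib/libc.so.6)
install_old_libc() {
  libc_src=$1
  vmware_libdir=$2
  mkdir -p "$vmware_libdir"
  # copy and rename in one step instead of cp + mv
  cp "$libc_src" "$vmware_libdir/libc.so.6"
  # chown only matters when run as root; ignore failure otherwise
  chown root:root "$vmware_libdir/libc.so.6" 2>/dev/null || true
}
```

After running it, the `export LD_LIBRARY_PATH=...` line still has to be added to /usr/sbin/vmware-hostd by hand as described above.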

10/28/09

Forwarding Samba over SSH - Plus! an easy to use script

Here is a simple example for connecting to a samba server:

ssh -L 22330:SAMBA_SERVER:139 USER_NAME@SSH_SERVER

smbmount //SAMBA_SERVER/SHARE_NAME /PATH/TO/SHARE_MOUNT --verbose -o ip=127.0.0.1,port=22330,credentials=/PATH/TO/CREDS/FILE


Now my problem was I needed an easy way for devs to access Samba without installing a VPN. They also needed to make two hops: from an access server to another SSH server, then finally to the Samba server.

Here is the two hop method on one line:

ssh -t -t -L 22330:localhost:22330 USER_NAME@SSH_SERVER "ssh -t -t -L 22330:SAMBA_SERVER:139 INTERNAL_SSH_SERVER_IP"

-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
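For regular use, the same two-hop forward can live in ~/.ssh/config instead of a one-liner. A hedged sketch: the Host alias samba-tunnel is my name, the placeholders are the same as above, and it assumes nc (netcat) is installed on SSH_SERVER.

```
Host samba-tunnel
    HostName INTERNAL_SSH_SERVER_IP
    User USER_NAME
    ProxyCommand ssh USER_NAME@SSH_SERVER nc %h %p
    LocalForward 22330 SAMBA_SERVER:139
```

Then `ssh -N samba-tunnel` brings the forward up without the nested -t -t dance.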

Now for the script. It's written in two parts: one script for creating the tunnel, the other for mounting the SMB share.

createsshtunnel.sh
----------------------------------------------------------
#!/bin/bash
#This script will forward a randomly generated port for tunneling samba connections

username=user_name
PORT=$(( RANDOM % 1000 + 22000 ))  #random port in the range 22000-22999

echo " "
echo Port Number is: $PORT
echo " "
echo " "
echo "Creating Samba Tunnel"
ssh -t -t -L $PORT:localhost:$PORT $username@ssh_server "ssh -t -t -L $PORT:SAMBA_SERVER:139 INTERNAL_SSH_SERVER"
-------------------------------------------------------------------------
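The port arithmetic in createsshtunnel.sh can be sanity-checked in isolation; this hedged snippet just wraps the same expression in a function (random_tunnel_port is my name) so you can verify it always lands in 22000-22999:

```shell
# Generate a random local-forward port in 22000-22999,
# the same expression createsshtunnel.sh uses ($RANDOM is bash-specific).
random_tunnel_port() {
  echo $(( RANDOM % 1000 + 22000 ))
}
```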

mountsamba.sh
------------------------------------------------------------------------
#!/bin/bash
#This will mount samba shares in combination with the sambassh.sh script
#Enter in the port number returned from sambassh.sh

#Location of a credentials file (chmod 600 file)
#Format:
#username=
#password=

credentials=/PATH/TO/creds

echo " "
echo "Enter Port Number:"
read portnumber
echo "Using port $portnumber"
echo "Which Share?"
read sharename

#Note: ~ does not expand inside double quotes, so use $HOME
if [ -d "$HOME/$sharename" ]
then
echo "Mount point exists, trying to unmount in case it's mounted"
sudo umount "$HOME/$sharename"
echo " "
else
echo "Directory $HOME/$sharename does not exist, creating it for you."
echo " "
mkdir "$HOME/$sharename"
fi

smbmount //SMB_SERVER/$sharename "$HOME/$sharename" --verbose -o ip=127.0.0.1,port=$portnumber,credentials=$credentials
----------------------------------------------------------------------
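The mount-point handling in mountsamba.sh can be factored into a function and exercised without touching Samba at all. Hedged sketch: ensure_mountpoint is my name, not from the script, and the umount failure is deliberately ignored when nothing is mounted there.

```shell
# Ensure a mount point exists; if it already does, try to unmount
# whatever may be mounted there (ignore failure if nothing is).
ensure_mountpoint() {
  dir=$1
  if [ -d "$dir" ]; then
    umount "$dir" 2>/dev/null || true
  else
    mkdir -p "$dir"
  fi
}
```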

10/12/09

HOWTO Install LVS on Centos 5.3

#HOWTO Install LVS on Centos 5.3
#10/12/09

#Install Packages
sudo yum install -y Cluster_Administration-en-US.noarch piranha.i386 #(piranha.x86_64 on 64-bit)

#Set to start on boot
sudo chkconfig pulse on
sudo chkconfig piranha-gui on #(primary node only)

#Start Piranha WebUI and set passwd
sudo /usr/sbin/piranha-passwd
sudo /sbin/service piranha-gui start #(listens on port 3636)

#Set Access restrictions to web interface (localhost only)
sudo vi /etc/sysconfig/ha/web/secure/.htaccess
----
Order deny,allow
Deny from all
Allow from 127.0.0.1
----

#Turn on Packet Forwarding
sudo vi /etc/sysctl.conf
net.ipv4.ip_forward = 1

/sbin/sysctl -w net.ipv4.ip_forward=1 #(manually set)

#Apply Firewall Changes

iptables -A RH-Firewall-1-INPUT -p udp -m udp --dport 539 -j ACCEPT #port for pulse
iptables -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 3636 -j ACCEPT #port for piranha webUI
iptables -A RH-Firewall-1-INPUT -m pkttype --pkt-type multicast -j ACCEPT #allow multicast packets for arp failover


#Layout





#Interfaces

Master                      Backup
----------------------------------
Public:  172.16.1.133       Public:  172.16.1.134
Private: 10.0.1.2           Private: 10.0.1.3

Public floating VIP 172.16.1.136, 172.16.1.137, 172.16.1.138 etc...
Private VIP 10.0.1.254 (gateway for real servers)


#/etc/sysconfig/ha/lvs.cf
serial_no = 91
primary = 172.16.1.133
primary_private = 10.0.1.2
service = lvs
backup_active = 1
backup = 172.16.1.134
backup_private = 10.0.1.3
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 10.0.1.254 eth1:1
nat_nmask = 255.255.255.255
debug_level = 1
monitor_links = 1
syncdaemon = 0
virtual webservers {
active = 1
address = 172.16.1.136 eth0:1
vip_nmask = 255.255.255.0
port = 80
send = "GET / HTTP/1.0\r\n\r\n"
expect = "HTTP"
use_regex = 0
load_monitor = none
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 0
server A {
address = 10.0.1.5
active = 1
weight = 1
}
}

9/28/09

MySQL Master/Master Config

This is a HOWTO for setting up a Master/Master MySQL configuration. It can provide a level of fault tolerance with a hot standby or load balancing, and high availability can be achieved with the addition of keepalived or something similar.


#Master 1/Slave 2 ip: 192.168.1.2 (ServerA)
#Master 2/Slave 1 ip : 192.168.1.3 (ServerB)

#Step 1
#On Master 1 (ServerA), make changes in my.cnf:
-----------------
[mysqld]
datadir=/d2/mysql
socket=/var/lib/mysql/mysql.sock
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
bind-address = 192.168.1.2 #enable tcp access
server-id=1 #server id

log-bin=/d2/mysql/db1-bin-log #Where to store the bin logs for replication TO ServerB
log-bin-index=/d2/mysql/db1-bin-log.index

binlog-do-db=redmine1 #DB to replicate
binlog-ignore-db=mysql #DB's not to replicate
binlog-ignore-db=test

master-host = 192.168.1.3 #Set Master info for ServerA
master-user = replication
master-password = *****************
master-port = 3306

relay-log=/d2/mysql/db1-relay-log #where to store the relay logs for replication FROM ServerB
relay-log-index=/d2/mysql/db1-relay-log.index

#[mysqld_safe]
#log-error=/var/log/mysqld.log
#pid-file=/var/run/mysqld/mysqld.pid
------------------

#Step 2 (granting access to replication users on both boxes)
#On master 1 (ServerA), create a replication slave account on master1 for master2:
mysql -u root -p
mysql> grant replication slave on *.* to 'replication'@'192.168.1.3' identified by '**************';

#Create a replication slave account on master2(ServerB) for master1:
mysql -u root -p
mysql> grant replication slave on *.* to 'replication'@'192.168.1.2' identified by '****************';

#Step 3
#Now edit my.cnf on Slave1 or Master2 (ServerB):
--------------------
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
bind-address = 192.168.1.3
server-id=2

log-bin=/var/lib/mysql/db2-bin-log
log-bin-index=/var/lib/mysql/db2-bin-log.index

binlog-do-db=redmine1
binlog-ignore-db=mysql
binlog-ignore-db=test

master-host = 192.168.1.2
master-user = replication
master-password = *****************
master-port = 3306

relay-log=/var/lib/mysql/db2-relay-log
relay-log-index=/var/lib/mysql/db2-relay-log.index

#[mysqld_safe]
#log-error=/var/log/mysqld.log
#pid-file=/var/run/mysqld/mysqld.pid
--------------------

#Step 4
#Restart mysqld on both servers.

sudo /etc/init.d/mysqld restart

#Step 5
#Start slave 1 and slave 2 (both servers)

mysql -u root -p
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event #Make sure this isn't blank
Master_Host: 192.168.1.2
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: db1-bin-log.000014
Read_Master_Log_Pos: 404
Relay_Log_File: db2-relay-log.000029
Relay_Log_Pos: 543
Relay_Master_Log_File: db1-bin-log.000014
Slave_IO_Running: Yes #Make sure this is yes
Slave_SQL_Running: Yes #Make sure this is yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 404
Relay_Log_Space: 543
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)


#Step 6
#Check on master status (both boxes):
mysql> show master status;
+--------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| db2-bin-log.000001 | 1214 | redmine1 | mysql,test |
+--------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
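The two fields that actually matter in the slave status output (Slave_IO_Running and Slave_SQL_Running) can be checked from a script. Hedged sketch: slave_ok is my name, and it reads the SHOW SLAVE STATUS output on stdin so it works with whatever auth you use, e.g. `mysql -u root -p -e 'show slave status\G' | slave_ok && echo healthy`.

```shell
# Succeed only when both replication threads report "Yes".
# Reads `show slave status\G` output on stdin.
slave_ok() {
  [ "$(grep -c -E 'Slave_(IO|SQL)_Running: Yes')" -eq 2 ]
}
```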

9/26/09

Nagios administrator

Need Nagios installed and configured?

Monitor Web/Mail/FTP/SSH/DNS/SMB Servers etc...
Monitor Devices (Printers, Switches/Routers, Firewalls)
SNMP
PNP integration for graphing performance data, be able to view uptime availability over time.
Phone/Pager/Email/BlackBerry alerts!
Different Time periods for alerts, work day, weekends, 24x7 etc.
Contact groups
Escalations
Interface with climate/motion/light sensors.
Windows Integration (NSClient++)
Security Considerations

Also ask me about using an Amazon EC2 instance for an affordable alternative to expensive external solutions. For as little as $100/year you could have an external monitoring solution on your own dedicated server, fully customizable and secure.

Please email with contact info and I'll give you a call, or send me an email and I can provide my contact information.

- Resume/References available upon request. I currently work as a full-time administrator managing hundreds of servers and several Nagios instances, including fail-over redundant configurations.

9/11/09

Packages and Install for The "Perfect Webserver"

Packages for apache, mysql, passenger, and php


Apache
-------------

sudo yum install -y httpd.x86_64
sudo yum install -y mod_ssl.x86_64

MySQL
-------------
sudo yum install -y mysql-devel.x86_64 mysql.x86_64 mysql-server.x86_64


PHP
------------
sudo yum install -y php-mysql.x86_64
sudo yum install -y php.x86_64


Deps
----------
sudo yum install -y httpd-devel apr-devel

Ruby
------------
sudo yum install -y ruby
sudo yum install -y ruby-devel ruby-docs ruby-ri ruby-irb ruby-rdoc


Ruby Gems
-------------
wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
tar xzvf rubygems-1.3.5.tgz
sudo ruby setup.rb
sudo gem update
sudo gem update --system
sudo gem install -v=2.1.2 rails
sudo gem list

Passenger
-------------
gem install passenger


add to httpd.conf:

LoadModule passenger_module /usr/lib64/ruby/gems/1.8/gems/passenger-2.2.5/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib64/ruby/gems/1.8/gems/passenger-2.2.5
PassengerRuby /usr/bin/ruby

Include conf/sites-enabled/*.conf


Create a file in sites-enabled/:

<VirtualHost *:80>
    ServerName www.yourhost.com
    DocumentRoot /somewhere/public # <-- be sure to point to 'public'!
</VirtualHost>

9/2/09

Getting bonded networking connections working in xen

create mybond in the /etc/xen/scripts dir

------------------
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=bond0
"$dir/network-bridge" "$@" vifnum=1 netdev=bond1
-------------------

edit /etc/xen/xend-config.sxp

comment out:
#(network-script network-bridge)

and add:
(network-script mybond)

restart xen

8/3/09

Xen Centos Host Freebsd Guest

CentOS installs Xen 3.0.1, which is old as hell, and 3.4 is way better. Also, FreeBSD will not install inside 3.0.1 without a lot of extra work. The easy workaround is to install this repo in yum.

http://www.gitco.de/repo/

$ cd /etc/yum.repos.d
$ wget http://www.gitco.de/repo/CentOS5-GITCO_x86_64.repo
$ sudo yum groupremove Virtualization
$ sudo yum groupinstall Virtualization
$ sudo reboot

[root@xen1 xen]# cat freebsd.hvm
name = "FreeBSD7"
builder = "hvm"
memory = "1024"
disk = ['file:/d2/images/bsdsmtpgateway/disk1.img,hda,w','file:/d2/iso/7.2-RELEASE-i386-dvd1.iso,hdc:cdrom,r']
vif = [ "mac=00:16:3e:70:66:ee,bridge=xenbr0" ]
device_model = "/usr/lib64/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc=1
boot="d"
vcpus=1
acpi="0"
pae="0"
serial = "pty" # enable serial console
on_reboot = 'restart'
on_crash = 'restart'


$ xm create freebsd.hvm

$ sudo lsof -i -n -P | grep qemu #find out what port vnc is listening on.
qemu-dm 6045 root 14u IPv4 18110 TCP 127.0.0.1:5900 (LISTEN)


From remote box;

$ ssh -L 5900:localhost:5900 hostname
$ vncviewer localhost

7/13/09

Converting a Mantis database in mysql from latin1 to utf8, so it can be imported into postgres by redmine:migrate_from_mantis

vi my.cnf
---------------------------------
character-set-server=utf8
default-collation=utf8_unicode_ci

[client]
default-character-set=utf8
---------------------------------

sudo service mysqld restart
mysqldump -u root -p --opt --default-character-set=latin1 --skip-extended-insert MANTIS_DB > mantis-latin1.sql
iconv -t LATIN1 -f UTF8 -c mantis-latin1.sql > latest.mantis-utf8.sql
sed 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci/g' latest.mantis-utf8.sql > latest.mantis-utf8.sql2
vi latest.mantis-utf8.sql2
:set encoding=utf-8
:set guifont=-misc-fixed-medium-r-normal--18-120-100-100-c-90-iso10646-1
:w!

mysql -u root -p utf8mantisdb --default-character-set=utf8 < latest.mantis-utf8.sql2
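Before running iconv on the real dump, it's worth a hedged sanity check that it converts in the direction you expect; the sample string here is mine, not from the Mantis data. 0xE9 is "é" in LATIN1 and should come out as the two-byte UTF-8 sequence 0xC3 0xA9.

```shell
# Verify iconv's LATIN1 -> UTF8 behavior on a known byte.
latin1_sample=$(printf 'caf\351')                       # "café" in LATIN1
utf8_result=$(printf '%s' "$latin1_sample" | iconv -f LATIN1 -t UTF8)
```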

sudo rake redmine:migrate_from_mantis RAILS_ENV="production"

7/9/09

Using apache allow/deny with a load balancer forwarding the clients IP as X-Forwarded-For

In this example, I have an HAProxy load balancer set up, and it's forwarding the client's IP so you see that instead of the load balancer's address in the log files.

It is forwarded by changing the log format to:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

Where you substitute %{X-Forwarded-For}i for the default %h

In order for apache to use this value to allow/deny people you need to set it like the following:

SetEnvIfNoCase ^X-Forwarded-For ^1\.2\.3\.4 officeip


Order deny,allow
Deny from all
Allow from env=officeip
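In context, the SetEnvIf line plus the Order/Deny/Allow lines usually sit inside a container directive; here is a hedged httpd.conf sketch where the <Location /> wrapper and the 1.2.3.4 office IP are my assumptions, not from the original config:

```
SetEnvIfNoCase X-Forwarded-For ^1\.2\.3\.4 officeip

<Location />
    Order deny,allow
    Deny from all
    Allow from env=officeip
</Location>
```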

6/5/09

Nagios - Favorite Linux Monitoring Application! by linux journal.

http://www.linuxjournal.com/article/10451

Favorite Linux Monitoring Application

Nagios (51%)

Honorable Mention

Hyperic HQ (15%)

up.time (11%)

Nagios was not only recently dubbed one of the most important open-source apps of all time, but it also is the winner of the new Readers' Choice category, Favorite Linux Monitoring Application. A slim majority 51% of you use Nagios to keep close tabs on your networks of all shapes, sizes and levels of complexity. Most of you not using Nagios opt for the Honorable Mention candidates, Hyperic HQ (with 15%) and up.time (11%). Ganglia and GroundWork also garnered respectable votes in the single digits.

6/4/09

Easy remote syslog-ng setup

This is on CentOS. Of course you already have regular syslog installed, so download the syslog-ng RPM from wherever and install...

Force its install via:

sudo rpm --force -Uvh syslog-ng-1.6.12-1.el5.centos.i386.rpm

or remove the old syslog first via:
rpm -e --nodeps rsyslog
stop syslog and start syslog-ng:

sudo /etc/init.d/syslog stop && sudo /etc/init.d/syslog-ng start

Test that it's working via:

logger "test message" && sudo tail /var/log/messages

remove syslog from starting and setup syslog-ng to start up on boot:

sudo chkconfig syslog off && sudo chkconfig syslog-ng on && sudo chkconfig --list | grep syslog

Enable remote syslogging on the host syslog server
HOST:
sudo vi /etc/syslog-ng/syslog-ng.conf
add:

source s_network {
    tcp(max-connections(5000));
    udp();
};

destination d_network {
    file("/var/log/syslog-ng/$HOST/$FACILITY.log");
};

log {
    source(s_network);
    destination(d_network);
};

Sending messages from your syslog-ng client
CLIENT:
sudo vi /etc/syslog-ng/syslog-ng.conf

destination loghost {
    tcp("192.168.1.5");
};

log {
source(s_sys);
destination(loghost);
};

Add an iptables allow rule for port 514, and optionally add -s with the host's IP (much more secure)
sudo vi /etc/sysconfig/iptables
add:
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 514 -s 192.168.1.5 -j ACCEPT
restart iptables:
sudo /etc/init.d/iptables restart

Test that it's working by running on the client:
logger "test to remote"

and running on the host:
tail -f /var/log/messages

If you see the msg it's working .. if not, you fail... try again.

5/18/09

Amazon New Features! Load balancing and monitoring.... finally!

Dear AWS Community Member,

You signed up to be notified when we released monitoring, auto scaling and load balancing for Amazon EC2. We are excited to announce the public beta of these new features: Amazon CloudWatch, a web service for monitoring AWS cloud resources, Auto Scaling for automatically growing and shrinking Amazon EC2 capacity based on demand, and Elastic Load Balancing for distributing incoming traffic across Amazon EC2 compute instances. Together, these capabilities provide you with visibility into the health and usage of your AWS compute resources, enhance application performance, and lower costs.

Monitoring

Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It provides customers with visibility into resource utilization, operational performance, and overall demand patterns -- including metrics such as CPU utilization, disk reads and writes, and network traffic. To use Amazon CloudWatch, simply select the Amazon EC2 instances that you'd like to monitor; within minutes, Amazon CloudWatch will begin aggregating and storing monitoring data that can be accessed using web service APIs or Command Line Tools.

Auto Scaling

Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you're using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance.

Like all Amazon Web Services and features, Amazon CloudWatch and Elastic Load Balancing are available on a pay-as-you-go basis with no up-front fee, minimum spend or long term commitment. Auto Scaling is free to Amazon CloudWatch customers. Each instance launched by Auto Scaling is automatically enabled for monitoring and the Amazon CloudWatch monitoring charge will be applied.

For more information on these new features and details on how to start using them, please see the resources listed below:

  • Amazon EC2 Detail Page
  • Release Notes

    These have been among the most requested Amazon EC2 features by our customers. We hope they prove useful to you, and we look forward to your feedback.

    Sincerely,

    The Amazon Web Services Team

    5/6/09

    Amazon has a new feature, reserved instances.

    Basically you pay a one time up front fee, and it drastically lowers your hourly $ cost per instance.

    You can have a small instance (Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform)

    For around $65/month ($780/year), you could have a dedicated server ... pretty sweet deal.

    Now if you add in the reserved instance's feature...

    A small instance will cost
    $325(reserved instance fee) + $262/year = $587/year ... even better deal.

    Sign up for 3 years, and you now are paying...
    $587 1st year, and then $262 ... end of 3 years total = $1111 compared to $2340 for the regular price for 3 years.
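The savings math above, spelled out with the 2009 prices as quoted (variable names are mine):

```shell
# EC2 small-instance pricing as quoted above (USD, rounded).
on_demand_year=780        # ~ $65/month on demand
reserved_fee=325          # one-time reserved-instance fee
reserved_usage_year=262   # yearly usage at the reserved hourly rate

year1=$(( reserved_fee + reserved_usage_year ))             # first year
three_year_reserved=$(( year1 + 2 * reserved_usage_year ))  # years 1-3 reserved
three_year_on_demand=$(( 3 * on_demand_year ))              # years 1-3 on demand
```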


    4/9/09

    Rebundling a running ec2 instance....

    ec2-bundle-vol --prefix what_you_want_to_name_it -d /mnt/ami -c pathtocert.pem -k pathtokeyfile.pem -u 123456789 -s 10240 --kernel aki-9b00e5f2 -r i386

    -d, --destination PATH
    -c, --cert PATH
    -k, --privatekey PATH
    --kernel ID Id of the default kernel to launch the AMI with.
    -r, --arch ARCHITECTURE Specify target architecture. One of ["i386", "x86_64"]
    -s, --size MB The size, in MB (1024 * 1024 bytes), of the image file to create. The maximum size is 10240 MB.

    (change the kernel type and arch to suit your needs... might as well use the largest size, 10240)

    This will bundle your running instance, and place the files in /mnt/ami

    ec2-upload-bundle -b bucketname -m /mnt/ami/what_you_named_it.manifest.xml --access-key XYZ --secret-key XYZ

    This will upload your bundled image to your bucket.

    ec2-register /bucket/what_you_named_it.manifest.xml

    The last step is to register your image, you will get back the AMI ID, and can either start your instance on the cmd line, or simply login to the web console and start it.

    Nagios - Instead of a ping check to see if a host is alive, use http instead

    Sometimes (often) ICMP is blocked, so you can't ping check your hosts to see if they are alive.

    You can add this to commands.cfg and hosts.cfg for those hosts to check via http instead.

    commands.cfg
    define command {
    command_name check-host-alive-by-http
    command_line $USER1$/check_http -H $HOSTADDRESS$
    }

    hosts.cfg

    define host{
    host_name hostname.com
    address ip.address
    alias hostnamealias
    use networking_machines_template
    check_command check-host-alive-by-http
    }

    4/8/09

    Installing Nagios

    Installing Nagios
    ---------------------
    *need to have basic centos install, with apache installed.



    Install rpmforge repo

    wget http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
    wget http://apt.sw.be/redhat/el5/en/x86_64/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
    rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
    rpm -K rpmforge-release-0.3.6-1.el5.rf.*.rpm #verify package
    rpm -i rpmforge-release-0.3.6-1.el5.rf.*.rpm #install package
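Only one of the two wgets above is actually needed per box; a hedged helper to pick the right RPM for the machine's arch (pick_rpmforge_url is my name, and the URLs are the same ones listed above):

```shell
# Map `uname -m` output to the matching rpmforge-release package URL.
pick_rpmforge_url() {
  case $1 in
    x86_64) echo "http://apt.sw.be/redhat/el5/en/x86_64/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm" ;;
    *)      echo "http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm" ;;
  esac
}
# usage: wget "$(pick_rpmforge_url "$(uname -m)")"
```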

    Install yum-priorities #this isn't required, but I like to limit the rpmforge repository from affecting any base packages

    yum install yum-priorities
    #Make sure that yum-priorities is enabled by editing the /etc/yum/pluginconf.d/priorities.conf
    #Edit the .repo files in /etc/yum.repos.d/ and set up priorities by adding the line: (lower number = higher priority, 0 = disabled)
    priority=N


    Install nagios packages

    sudo yum install nagios nagios-devel nagios-plugins nagios-plugins-setuid rrdtool


    Configure Nagios

    sudo htpasswd -c /etc/nagios/htpasswd.users kylec #Create htpasswd file for auth
    sudo htpasswd /etc/nagios/htpasswd.users username #for adding users

    sudo vi /etc/nagios/nagios.cfg
    comment out...
    #cfg_file=/etc/nagios/objects/templates.cfg
    #cfg_file=/etc/nagios/objects/localhost.cfg
    add...
    cfg_file=/etc/nagios/objects/hosts.cfg
    cfg_file=/etc/nagios/objects/hostgroups.cfg
    cfg_file=/etc/nagios/objects/services.cfg
    cfg_file=/etc/nagios/objects/contactgroups.cfg
    set...
    process_performance_data=1
    host_perfdata_command=process-host-perfdata
    service_perfdata_command=process-service-perfdata


    sudo vi /etc/nagios/cgi.cfg
    set...
    authorized_for_system_information=*
    authorized_for_configuration_information=*
    authorized_for_system_commands=*
    authorized_for_all_services=*
    authorized_for_all_hosts=*
    authorized_for_all_service_commands=*
    authorized_for_all_host_commands=*


    ----------------------------------------------------------------------------------------------
    Installing Nagios pnp

    wget http://switch.dl.sourceforge.net/sourceforge/pnp4nagios/pnp-0.4.13.tar.gz
    ./configure
    make all
    make fullinstall

    edit hosts.cfg

    add:

    define host{
    use generic-host,host-pnp #add host-pnp to networking_machines_template


    define host {
    name host-pnp
    register 0
    action_url /nagios/pnp/index.php?host=$HOSTNAME$' onmouseover="get_g('$HOSTNAME$','_HOST_')" onmouseout="clear_g()"

    }

    edit services.cfg

    Add to the main template... (in my case basic-service)

    define service{
    use generic-service,srv-pnp (srv-pnp is whats added)
    name basic-service
    .... truncated


    define service {
    name srv-pnp
    register 0
    action_url /nagios/pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$' onmouseover="get_g('$HOSTNAME$','$SERVICEDESC$')" onmouseout="clear_g()"

    }

    edit commands.cfg

    add:

    define command {
    command_name process-service-perfdata
    command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl
    }

    define command {
    command_name process-host-perfdata
    command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl -d HOSTPERFDATA
    }

    comment out the existing SAMPLE PERFORMANCE DATA COMMANDS

    setup configs

    cd /usr/local/nagios/etc/pnp/
    sudo mv npcd.cfg-sample npcd.cfg
    sudo mv process_perfdata.cfg-sample process_perfdata.cfg
    sudo mv rra.cfg-sample rra.cfg
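The three renames above generalize to any number of pnp sample configs; a hedged helper (activate_samples is my name, not part of pnp) that skips configs you have already created:

```shell
# Rename every *.cfg-sample in a directory to its live name,
# leaving any existing live config untouched. Runs in a subshell
# so the caller's working directory is preserved.
activate_samples() (
  cd "$1" || exit 1
  for f in *.cfg-sample; do
    [ -e "$f" ] || continue               # no samples present
    [ -e "${f%-sample}" ] || mv "$f" "${f%-sample}"
  done
)
```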


    If you are seeing ... "File does not exist: /usr/share/nagios/pnp," in your httpd logs.
    cp -R /usr/local/nagios/share/pnp/ /usr/share/nagios/pnp

    edit as you would like them.

    start npcd
    /etc/init.d/npcd start
    -------------------------------------------------------------------------------------------------------




    EXAMPLE .cfgs #this is custom, these are just some examples to get you started.

    contacts.cfg

    define contact{
    contact_name kyle
    alias kyle
    service_notification_period 24x7
    host_notification_period 24x7
    service_notification_options w,u,c,r
    host_notification_options d,u,r
    service_notification_commands notify-by-email
    host_notification_commands host-notify-by-email
    email kyle@email.com
    }

    contactgroups.cfg

    define contactgroup{
    contactgroup_name admins
    alias admins
    members internal_infosec, kylepager
    }


    hosts.cfg

    define host{
    host_name website.com
    address website.com
    alias website.com
    use networking_machines_template
    }


    hostgroups.cfg

    define hostgroup{
    hostgroup_name webservers
    alias webservers
    members server1, server2, server3, website.com
    }

    services.cfg

    define service{
    use basic-service
    hostgroup_name webservers
    service_description HTTP
    check_command check_http!
    contact_groups admins
    }

    4/6/09

    Use nmap to scan for Conficker...

    Use nmap to scan for Conficker...
    http://seclists.org/nmap-dev/2009/q1/0869.html

    get the latest of nmap, and install lua

    Directions for ubuntu ...

    sudo apt-get install lua50

    wget http://nmap.org/dist/nmap-4.85BETA7.tar.bz2
    tar -jxvf nmap-4.85BETA7.tar.bz2
    cd nmap-4.85BETA7
    ./configure
    make

    ./nmap -sC --script=smb-check-vulns --script-args=safe=1 -p445 -d -PN -n -T4 --min-hostgroup 256 --min-parallelism 64 -oA conficker_scan 192.168.1.1/24 | grep Conficker:


    You should see all

    | Conficker: Likely CLEAN

    Just remove the grep filter to see the host if you get any other results.

    4/3/09

    Modified pidgin SMS for added functionality


    This is by no means very well written. But it's a short, easy hack to get IMs sent to you via SMS, and it's somewhat dynamic so you can receive them from anyone you have it set up for.

    1.) Select buddy to add pounce
    2.) Select your bounce options
    3.) Select it to execute your script, and after the script path, add the full buddy name (this will be the path to the log that is the variable part)


    The only part that isn't very dynamic: this will only work the way it's intended if your buddies are all under the same account, and you need a different script for each account you use.

    pidgin.sms.sh
    ------------------------------------------------------------------------------------------------------
    #!/bin/bash
    #Written by kylepike
    #This script will send you an sms txt alert if you have a buddy pounce setup in pidgin
    #The argument you need to pass in is the full "buddy name" in the "New buddy pounce" window


    buddyfolder="/home/kyle/.purple/logs/jabber/kyle@k0rupted.domain.net/$1"

    cd "$buddyfolder" || exit 1

    file=$(ls -rt | tail -n 1)   #newest log file; parsing `ls -l` columns is fragile
    lynx --dump "$file" | tail -n 1 > ~/scripts/emailmessage.txt
    scp ~/scripts/emailmessage.txt kyle@k0rupted.domain.net:~/scripts/emailmessage.txt

    ssh kyle@k0rupted.domain.net /home/kyle/scripts/smsme.sh
    -------------------------------------------------------------------------------------------------------
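The newest-file trick in the script above is worth isolating; a hedged helper (newest_file is my name) that sorts by mtime instead of parsing `ls -l` columns:

```shell
# Print the name of the most recently modified file in a directory.
# `ls -t` sorts newest first; -r reverses, so tail -n 1 is the newest.
newest_file() {
  ls -rt "$1" | tail -n 1
}
```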


    smsme.sh lives on the remote sendmail server. You could fold this into one script if you have a working sendmail server on your computer/laptop, but like I said in the earlier post, I'm always roaming, and paranoid, and I wouldn't want to send my SMS out from an insecure network in clear text.

    #!/bin/bash
    # script to send simple email
    # email subject
    sn="SenderName"
    SUBJECT="IM FROM $sn"
    # Email To ?
    EMAIL="123456789@messaging.sprintpcs.com"
    # Email text/message
    EMAILMESSAGE="/home/kyle/scripts/emailmessage.txt"
    #echo "IM Reminder"> $EMAILMESSAGE
    #echo "From Blah" >>$EMAILMESSAGE
    # send an email using /bin/mail
    /bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE

    4/1/09

    Pidgin SMS via buddy pounce over ssh

    K, so you want to receive an alert that you got an important IM that you were waiting for. Or dreading ... Also if you wanted to know when someone signed on while you were waiting to talk to them; basically any of the buddy pounce rules can trigger the SMS.

    Either way, you need to know that you got it and don't want to wait by your computer. Here is a quick and dirty way to get the job done.

    For my setup I have a laptop that is often roaming, so using a local sendmail server wasn't really an option. I have my own server running at home with a reliable sendmail server, so my best bet was to send the alerts from there. But I also don't want to open the port to the whole world (my ISP won't allow the SMTP port anyway), so I will do this all over ssh. Also because I'm a paranoid nut job.

    What you Need
    ----------------------
    1.) pidgin
    2.) linux, or ssh installed via cmd line in windows (you're on your own there)
    3.) private key auth to your server
    4.) an external server (you could just run this all locally if you can send mail from your desktop/laptop)


    On the server create: smsme.sh ... of course you can change these contents to send whatever you would like.

    vi smsme.sh
    chmod +x smsme.sh
    -----------------------------

    #!/bin/bash
    # script to send simple email
    # email subject
    SUBJECT="While you were away"
    # Email To ?
    EMAIL="123456789@messaging.sprintpcs.com" #use your cell email address
    # Email text/message
    EMAILMESSAGE="/home/scripts/emailmessage.txt"
    echo "IM Reminder"> $EMAILMESSAGE
    echo "From Person's Name" >>$EMAILMESSAGE
    # send an email using /bin/mail
    /bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE


    -----------------------

    On your client/laptop/desktop create pidgin.sh

    echo "ssh username@hostname /home/username/smsme.sh" > pidgin.sh
    chmod +x pidgin.sh

    Then in pidgin, select the important contact whose IMs you don't want to miss, set your desired pounce options, select "Execute a command", and point it to your pidgin.sh.


    All set; based on your buddy pounce rules, you will receive an SMS alert that you got the IM. Next I want to figure out a way to include the contents of the IM itself.
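If a buddy pounce fires repeatedly, the one-liner pidgin.sh above will happily fire an SMS every time. Here is a sketch of a rate-limited variant; the maybe_alert helper, the stamp-file path, and the five-minute window are my own choices, not pidgin features:

```shell
#!/bin/bash
# pidgin.sh (hypothetical variant): run the alert command only if the
# previous alert fired more than 300 seconds ago, so a chatty buddy
# doesn't flood your phone with SMS messages.
STAMP=${STAMP:-/tmp/pidgin-sms.stamp}

maybe_alert() {
    local now last
    now=$(date +%s)
    last=$(cat "$STAMP" 2>/dev/null || echo 0)
    if [ $((now - last)) -gt 300 ]; then
        # record when we last alerted, then run whatever was passed in
        echo "$now" > "$STAMP"
        "$@"
    fi
}

# In the real script, uncomment this so the pounce triggers the same
# remote script as before:
# maybe_alert ssh username@hostname /home/username/smsme.sh
```

With that in place, the pounce can fire as often as it likes but you get at most one SMS per five minutes.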

    3/26/09

    Installing Nagios pnp


    wget http://switch.dl.sourceforge.net/sourceforge/pnp4nagios/pnp-0.4.13.tar.gz
    tar -zxvf pnp-0.4.13.tar.gz && cd pnp-0.4.13
    ./configure
    make all
    make fullinstall


    General Options:
    -------------------------
    Nagios user/group: nagios,nagios
    Install ${prefix}: /usr/local/nagios
    HTML Dir: /usr/local/nagios/share/pnp
    Config Dir: /usr/local/nagios/etc/pnp/
    Path to rrdtool: /usr/bin/rrdtool
    RRD Files stored in: /usr/local/nagios/share/perfdata



    -------------------
    edit nagios.cfg

    process_performance_data=1
    enable_environment_macros=1
    service_perfdata_command=process-service-perfdata
    host_perfdata_command=process-host-perfdata




    ---------------------
    edit commands.cfg

    add:

    define command {
    command_name process-service-perfdata
    command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl
    }

    define command {
    command_name process-host-perfdata
    command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl -d HOSTPERFDATA
    }


    comment out the existing SAMPLE PERFORMANCE DATA COMMANDS


    -----------------
    setup configs

    cd /usr/local/nagios/etc/pnp/
    sudo mv npcd.cfg-sample npcd.cfg
    sudo mv process_perfdata.cfg-sample process_perfdata.cfg
    sudo mv rra.cfg-sample rra.cfg


    If you see "File does not exist: /usr/share/nagios/pnp" in your httpd logs:
    cp -R /usr/local/nagios/share/pnp/ /usr/share/nagios/pnp

    edit as you would like them.

    start npcd
    /etc/init.d/npcd start

    3/18/09

    Installing geo-ip in awstats

    On CentOS (figured I'd save someone else the googling): you don't need to build the GeoIP stuff manually, you can do it all from yum...

    1. yum install perl-Geo-IP
    2. yum install GeoIP-data

    Uncomment:
    LoadPlugin="geoip GEOIP_STANDARD /var/lib/GeoIP/GeoIP.dat"
    Re-run awstats (e.g. perl awstats.pl -config=yourdomain -update)... done!

    3/11/09

    bonding two NICs, on CentOS

    vi /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.20
    NETWORK=192.168.1.0
    NETMASK=255.255.255.0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes


    vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    USERCTL=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none


    vi /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    USERCTL=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none

    vi /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100


    modprobe bonding

    service network restart

    less /proc/net/bonding/bond0
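If you'd rather script the check than eyeball /proc/net/bonding/bond0, you can count the "MII Status: up" lines; this count_up helper is hypothetical, and note the bond device itself reports one such line in addition to each healthy slave:

```shell
# count_up FILE: number of "MII Status: up" lines in a bonding status
# file -- one for the bond itself plus one per slave with a healthy link.
count_up() {
    grep -c "MII Status: up" "$1"
}

# With both NICs healthy, bond0 should report 3 (bond + eth0 + eth1):
# count_up /proc/net/bonding/bond0
```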

    VMware kernel for the kernel clock issue, and performance

    #Issue is basically... " In 2.4 kernels the system timer was normally clocked at 100 Hz, while in 2.6 the default system timer is set to 1000 Hz (some other distros are not following these "rules", and USER_HZ is still 100). 1000 Hz is definitely a good thing for desktop computers requiring fast interactive responses, but there are environments where this causes bad side effects."
    #This caused the time to drift on the guest, which meant problems with files and timestamps. This will fix that, and bring performance gains as well.


    #Get the kernel repo

    cd /etc/yum.repos.d/
    sudo wget http://vmware.xaox.net/centos/5.2/VMware.repo

    #Install yum-protect-packages and yum-priorities
    sudo yum install yum-protect-packages
    sudo yum install yum-priorities

    #Set up the priorities; you want the VMware.repo to have a higher priority (lower number) than all the others. An example:

    [base]
    name=CentOS-$releasever - Base
    mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
    #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
    gpgcheck=1
    gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
    priority=2


    #Then set up the VMware.repo to look similar to this, enabling protect so that packages from other repos won't replace these:
    [vmware]
    name=VMware - Centos $releasever - $basearch
    baseurl=http://vmware.xaox.net/centos/$releasever/$basearch/
    gpgcheck=1
    gpgkey=http://vmware.xaox.net/centos/$releasever/RPM-GPG-KEY
    priority=1
    protect=1

    #wget the kernel packages.
    wget http://vmware.xaox.net/centos/5.2/x86_64/kernel-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm
    wget http://vmware.xaox.net/centos/5.2/x86_64/kernel-devel-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm
    wget http://vmware.xaox.net/centos/5.2/x86_64/kernel-headers-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm

    #install the kernel, devel, and header packages
    sudo rpm -ivh kernel-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm kernel-devel-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm kernel-headers-2.6.18-92.1.10.el5.centos.plus.VMware.x86_64.rpm

    Reboot into the new kernel (confirm with uname -r), and you're done :-)

    Setting up keepalived for use with haproxy

    This is for a 2 box load balancer:

    keepalived:

    Download and install keepalived:
    wget http://www.keepalived.org/software/keepalived-1.1.15.tar.gz
    tar -zxvf keepalived-1.1.15.tar.gz && cd keepalived-1.1.15 && ./configure --prefix=/usr --sysconfdir=/etc && make && sudo make install

    sudo vi /etc/sysctl.conf
    net.ipv4.ip_nonlocal_bind=1 <--- needed so the box can bind to the non-local (virtual) IP
    sudo sysctl -p



    lb1:
    vi /etc/keepalived/keepalived.conf

    vrrp_script chk_haproxy { # Requires keepalived-1.1.13
    script "killall -0 haproxy" # cheaper than pidof
    interval 2 # check every 2 seconds
    weight 2 # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101 # 101 on master, 100 on backup
    virtual_ipaddress {
    192.168.0.99
    }
    track_script {
    chk_haproxy
    }
    }




    lb2:
    vi /etc/keepalived/keepalived.conf
    vrrp_script chk_haproxy { # Requires keepalived-1.1.13
    script "killall -0 haproxy" # cheaper than pidof
    interval 2 # check every 2 seconds
    weight 2 # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 100 # 101 on master, 100 on backup
    virtual_ipaddress {
    192.168.0.99
    }
    track_script {
    chk_haproxy
    }
    }

    I found some of the configs in a HowtoForge article and tweaked them a little to suit my setup; thank you to whoever wrote it.
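To see which box currently holds the VIP, you can check whether the address is configured on the interface; this vip_state helper is a sketch of my own (interface and address are the ones from the configs above, and it assumes iproute2):

```shell
# vip_state IFACE ADDR: print MASTER if ADDR is configured on IFACE
# (i.e. keepalived has claimed the VIP on this box), BACKUP otherwise.
vip_state() {
    if ip addr show "$1" 2>/dev/null | grep -q "$2"; then
        echo MASTER
    else
        echo BACKUP
    fi
}

# vip_state eth0 192.168.0.99
```

Run it on both lb1 and lb2; exactly one should say MASTER, and killing haproxy on that box should flip the roles within a couple of VRRP intervals.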

    Bundling an existing Linux server into a new AMI

    Requirements:
    - A running server to bundle
    - Ec2 ami/api tools installed
    - Amazon ec2 account

    First you need to setup the amazon ec2 tools, follow the directions here:

    http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/setting-up-your-tools.html


    Some notes that I have for my local setup that may be useful:

    sudo yum install ruby


    wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.zip

    Go to http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88 for the API tools.
    Go to http://java.sun.com/j2se/1.4.2/download.html to download and install the JRE environment.


    ---- tools ----
    export EC2_HOME=/home/kylec/ec2-api-tools-1.3-26369
    export EC2_AMITOOL_HOME=/home/kylec/ec2-ami-tools-1.3-26357

    ---- keys -----
    export EC2_PRIVATE_KEY=~/.ec2/pk-$keynumber.pem #set $keynumber
    export EC2_CERT=~/.ec2/cert-$keynumber.pem #set $keynumber

    ---- example .bash_profile -----
    PATH=$PATH:$HOME/bin:/usr/bin/:/home/kylec/ec2-api-tools-1.3-26369/bin/:/usr/sbin:/usr/local/sbin

    export PATH
    unset USERNAME

    export EC2_AMITOOL_HOME=/home/kylec/ec2-ami-tools-1.3-26357
    export EC2_HOME=/home/kylec/ec2-api-tools-1.3-26369
    export EC2_PRIVATE_KEY=~/.ec2/pk-*************.pem
    export EC2_CERT=~/.ec2/cert-**************.pem
    export JAVA_HOME=/usr
    -------------------------------

    ----------------------------
    There are a few things you want to do before you bundle your image: get it the way you want it as far as applications installed, updates applied, configurations set, etc... After all, everything kept on / is non-persistent after reboots.


    #Once everything is setup for the tools, you can bundle the running physical machine with the following.

    ./ec2-bundle-vol -p amazonami1 -d /ami -c ~/.ec2/cert-*****.pem -k ~/.ec2/pk-******.pem -u 123456789 -s 10240 --no-inherit --generate-fstab -e /ami --kernel aki-9800e5f1 --ramdisk ari-a23adfcb

    This will create a new image named "amazonami1" and put it in /ami, with a max size of 10240 MB; it will not inherit metadata from the instance (duh), and will generate an EC2 fstab file... (note the exclude, -e /ami, so the bundle doesn't include its own output)

    It's also a good idea to specify the kernel you want to start the AMI with, as it will default to the oldest one. aki-9800e5f1 = 2.6.18-xenU-ec2-v1.0

    This should take a little while; it will mount a loopback device and write, compress, and encrypt your instance into the specified /ami folder.

    Now the next part is easy, all that is left is to upload your newly bundled image.

    ec2-upload-bundle -b bucketname -m /ami/imageprefixname.manifest.xml -a ************ -s ********************

    Pretty straightforward: just specify the bucket to upload to and the path to the AMI manifest.xml file, then sit back and let it upload.

    Once it's complete, all that's left is to register the AMI:

    ec2-register /bucket/aminame.manifest.xml

    Then just log in to the console and start your new AMI:

    https://console.aws.amazon.com/
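Since a failed bundle makes the upload and register steps pointless, it can help to chain the three steps with a fail-fast wrapper. A sketch (the step helper is hypothetical; substitute the real arguments shown above):

```shell
# step CMD ARGS...: echo the command, run it, and return non-zero if it
# fails, so the pipeline stops before uploading a broken bundle.
step() {
    echo "==> $*"
    "$@" || { echo "FAILED: $*" >&2; return 1; }
}

# step ec2-bundle-vol ...       # arguments as in the example above
# step ec2-upload-bundle ...    && \
# step ec2-register ...
```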

    doing a yum update broke backuppc "backup failed (can't find Compress::Zlib)"

    CentOS's yum perl package doesn't include XS support, causing the BackupPC error "backup failed (can't find Compress::Zlib)".

    The issue was with /usr/lib/perl5/site_perl/5.8.8/Compress/Zlib.pm, specifically this line:
    use Scalar::Util qw(dualvar);

    Test with:

    [root@]# perl -W
    use Scalar::Util qw(dualvar);
    Use of uninitialized value in concatenation (.) or string at /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/Scalar/Util.pm line 30.
    is only available with the XS version at - line 1
    BEGIN failed--compilation aborted at - line 1.

    To Fix:

    cpan: install xs
    cpan: force install Scalar::Util

    rerun

    [root@wailord log]# perl -W
    use Scalar::Util qw(dualvar);

    and you should see no error messages. Rerun an update on a host in backuppc and it should succeed.

    ......

    ADDED NOTES! --

    The next day, for some reason the backups were corrupt; it seems that the zlib package I downgraded to caused some issues as well!

    These were the packages that were updated that seemed to break something:
    Mar 10 09:55:34 Updated: perl - 4:5.8.8-15.el5_2.1.x86_64
    Mar 10 09:55:37 Updated: perl-Compress-Raw-Zlib - 2.015-1.el5.rf.x86_64
    Mar 10 09:55:38 Updated: perl-DBI - 1.607-1.el5.rf.x86_64
    Mar 10 09:55:39 Updated: perl-IO-Compress-Base - 2.015-1.el5.rf.noarch
    Mar 10 09:55:45 Updated: perl-IO-Compress-Zlib - 2.015-1.el5.rf.noarch

    And I downgraded zlib thinking that could fix it to:

    Mar 11 11:35:52 Installed: perl-Compress-Zlib - 1.42-1.fc6.x86_64

    Ended up upgrading back to the following:

    Mar 12 10:42:29 Installed: perl-Compress-Raw-Zlib - 2.015-1.el5.rf.x86_64
    Mar 12 10:42:29 Installed: perl-IO-Compress-Base - 2.015-1.el5.rf.noarch
    Mar 12 10:42:29 Installed: perl-DBD-mysql - 4.010-1.el5.rf.x86_64
    Mar 12 10:42:29 Installed: perl-IO-Compress-Zlib - 2.015-1.el5.rf.noarch
    Mar 12 10:42:29 Updated: perl-Compress-Zlib - 2.015-1.el5.rf.noarch

    This fixed the corrupt backups issue.