Friday, January 6, 2017

Difference between SSL and TLS

SSL and TLS are both cryptographic protocols that provide authentication and data encryption between clients and servers.
 SSL is the predecessor of TLS; over the years, new versions have been released to address vulnerabilities and to support stronger, more secure cipher suites.

SSL was developed by Netscape. SSL 2.0 was released in 1995 and was quickly replaced by SSL 3.0 after a number of vulnerabilities were found.


TLS was released in 1999 as a new version of SSL, based on SSL 3.0.
 Both SSL 2.0 and 3.0 have since been deprecated, vulnerabilities continue to be discovered in these old SSL protocols, and most browsers show a degraded user experience when they are used. For those reasons, you should disable SSLv2 and SSLv3 and enable only the TLS protocols.
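
As an illustration, here is a minimal sketch of what this looks like on an Apache web server with mod_ssl (assuming mod_ssl is already in use; the directive usually lives in ssl.conf or the relevant VirtualHost):

# allow every protocol mod_ssl knows about except SSLv2 and SSLv3
SSLProtocol all -SSLv2 -SSLv3

After restarting the web server, you can verify with a client such as openssl s_client that SSLv3 handshakes are refused while TLS connections still succeed.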

Thursday, July 28, 2016

New Features of vSphere Replication 6.0
There are several new features available since vSphere Replication 6.0:
Network traffic compression - replicated data can now optionally be compressed before it is sent over the network, reducing both replication times and the network bandwidth consumed.
Linux guest OS quiescing - quiescing support for Linux guests provides file-system-level crash consistency for replicated Linux-based VMs.
Network isolation - management and replication traffic can be split over separate networks.
Interoperability with VMware vSphere Storage DRS at the target site - Storage DRS can now detect the replica disks of replicated virtual machines at the target site.

Recover a VM through vSphere Replication 6.0
Recovery with vSphere Replication 6.0 comprises the following steps:
·         Select the VM you want to recover.
·         Recover the VM at the replication destination site, power it on, and verify that it works as expected.
It is worth noting that if the virtual machine is still available at the source site, vSphere Replication allows you to synchronize disk data back from the target site to the source site.

For a proper scenario, assume we have two sites, a primary site and a DR site, each managed by its own vCenter Server, and that the VM at the source site has become corrupted.

To perform the recovery, connect to the vCenter Server at the DR site, go to Monitor -> vSphere Replication -> Incoming Replications, select the VM, right-click it, and then click Recovery.

First, select the correct recovery option. Two options are shown: recover with recent changes, or recover with the latest available data. I will select the second option and proceed to the next step.

Next, select where to recover the VM. In this case I created a dedicated folder in the datacenter to place the recovered VM, and then selected the host for the VM.

Once the recovery has been performed, a new VM is created at the target site (the DR site), and the Summary tab of the VM informs you that the VM was recovered.


The VM is now fully functional at the remote site (the DR site), but how about moving it back to the primary site? To move the VM back to the source site, a reverse replication must be configured. Enabling vSphere Replication on the recovered VM at the target site, back to the source site, will bring the VM back to its original place.



Friday, August 21, 2015

Understanding Amazon Web Services

The cloud has always been a hot topic when discussing security, performance and high availability. Many organizations have already moved to the cloud and many are planning to move. Deciding which cloud architecture is best for your organization is a tough job; there are many ways to understand a cloud architecture and to determine what you really need to run your infrastructure in the cloud, and most organizations follow a traditional infrastructure model.

Amazon provides that same traditional infrastructure, modernized, in the cloud, which makes it a good fit for most organizations. A traditional infrastructure is easily portable to the cloud services provided by AWS with only a few simple modifications. This section will help you evaluate an AWS cloud solution.

It compares deploying your web application in the cloud to an on-premises deployment, presents an AWS cloud infrastructure for hosting your application, and discusses the key components of this solution.

If you are responsible for running a web application, you face a variety of infrastructure and architectural issues for which AWS can provide a seamless and cost-effective solution.

AWS Cloud Architecture for Web Hosting

Below is another look at that classic web application architecture and how it could leverage the AWS cloud computing infrastructure.



The following sections outline some of the key components of a web hosting architecture deployed in the AWS cloud and explain how they differ from a traditional web hosting architecture.

Content delivery - edge caching is still relevant in the AWS cloud; the way to do it in AWS is to use the Amazon CloudFront service for edge caching your website.

Amazon CloudFront can be used to deliver your website, including dynamic, static and streaming content, using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.

Managing DNS - moving a web application to the AWS cloud requires some DNS changes to take advantage of the multiple Availability Zones that AWS provides. To help you manage DNS routing, AWS provides Amazon Route 53, a highly available and scalable DNS web service. Queries for your domain are automatically routed to the nearest DNS server and thus answered with the best possible performance.
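
As a rough sketch of what a DNS change looks like with the AWS CLI (the hosted zone ID, domain name and IP address below are placeholders for illustration, not values from this article):

# create or update an A record in an existing Route 53 hosted zone
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'

In a typical AWS web hosting setup the record would instead be an alias to an Elastic Load Balancer or CloudFront distribution, but the command shape is the same.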

Host security - the security group feature acts as a firewall service that stops unwanted traffic from unwanted networks and allows only specific ports and services for a specific network or tier of the architecture. For the web tier, only HTTP and HTTPS should be allowed, and the application tier should communicate only with the web tier and the database tier internally.
Security groups define which traffic is allowed to reach your EC2 instances.
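
A minimal sketch of such a web-tier security group using the AWS CLI (the group name is a hypothetical example, and a default VPC is assumed):

# create a security group for the web tier
GROUP_ID=$(aws ec2 create-security-group \
  --group-name web-tier-sg \
  --description "Web tier: HTTP/HTTPS only" \
  --query GroupId --output text)

# open only HTTP and HTTPS to the world
aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0

The application-tier group would then allow traffic only from the web-tier group (using --source-group instead of --cidr), and the database tier only from the application tier.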

Storage Gateway - AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure. The service allows you to securely store data in the AWS cloud for scalable and cost-effective storage. It provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of your data, encrypted, in Amazon Simple Storage Service (S3).

AWS Storage Gateway supports three configurations.
Gateway-Cached Volumes - you can store your primary data in Amazon S3 and retain your frequently accessed data locally. Gateway-cached volumes offer substantial cost savings on primary storage.
Gateway-Stored Volumes - in the event that you need low-latency access to your entire data set, you can configure your on-premises gateway to store your primary data locally.
Gateway-Virtual Tape Library - if you have a large amount of archival data that should be securely stored on virtual tape, you can use this configuration to store the data in a virtual tape library for future use.

It makes it very easy to store on-premises data on Amazon S3 and Amazon Glacier, and AWS Storage Gateway reduces the cost, maintenance and scaling challenges associated with managing primary, backup and archive storage environments. Gateway-stored and gateway-cached volumes are designed to integrate seamlessly with Amazon S3, Amazon EBS and Amazon EC2 by letting you store point-in-time snapshots of your on-premises application data as Amazon EBS snapshots, available for future recovery either on-premises or in Amazon EC2.

  
TntDrive is one tool that can help map Amazon S3 storage as a drive on an Amazon EC2 instance.




Monday, June 13, 2011

How to install and configure a DHCP server on Fedora

$ yum install dhcp*

$ chkconfig dhcpd on

Copy DHCP's sample configuration file and use it as the starting point for the DHCP server configuration:

/usr/share/doc/dhcp*/dhcpd.conf.sample

$ cp /usr/share/doc/dhcp*/dhcpd.conf.sample /etc/dhcpd.conf

$ vim /etc/dhcpd.conf

ddns-update-style interim;
ignore client-updates;

subnet 192.168.10.0 netmask 255.255.255.0 {

# --- default gateway
option routers 192.168.10.1;
option subnet-mask 255.255.255.0;

#option nis-domain "abc.com";
#option domain-name "abc.com";
option domain-name-servers 192.168.10.1;

option time-offset -18000; # Eastern Standard Time
# option ntp-servers 192.168.1.1;
option netbios-name-servers 192.168.10.1;
# --- Selects point-to-point node (default is hybrid). Don't change this
# --- unless you understand Netbios very well
# option netbios-node-type 2;

range dynamic-bootp 192.168.10.100 192.168.10.200;   # dynamic pool, must lie inside the subnet above
default-lease-time 21600;
max-lease-time 43200;
}


#If you want to release IP for only your registered MAC address in your configuration

# we want the nameserver to appear at a fixed address

host COMPUTERNAME {
hardware ethernet 00:1A:64:A2:BB:88;
fixed-address 192.168.10.3;
}

host COMPUTERNAME2 {
hardware ethernet 00:1C:90:25:92:91;
fixed-address 192.168.10.4;
}
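
After saving the configuration, restart the service and check the log, assuming a SysV-init Fedora release to match the chkconfig command used above:

# service dhcpd restart
# tail /var/log/messages    # look for dhcpd startup errors and lease activity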

Wednesday, April 20, 2011

Apache server load balancing with Multiple Tomcat Clustering-







Load balancing -: a load balancer accepts requests from external clients and forwards them to one of the available backend servers according to a scheduling algorithm.
We can use dedicated hardware or load balancing software for load balancing.
mod_proxy_balancer -: mod_proxy_balancer is the Apache web server module developed to provide load balancing over a set of web servers. The load balancer can also keep track of sessions.
Sticky session - a single user always deals with the same backend server.

Installation -:

Apache modules - download the mod_proxy module for load balancing from the Apache web site.

Windows -: download the mod_proxy modules and copy them into the modules directory.
Linux - download the mod_proxy modules and run the following commands to compile them:

# ./configure --enable-proxy --enable-proxy-balancer    (run ./configure -h to see all options)
# make
# make install

Configuration-:

Windows -: enable the following modules by adding the required LoadModule lines to the httpd.conf file:

C:\Program Files\Apache Software Foundation\Apache2.2\conf\httpd.conf

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so

Now we can add the following lines for the proxy balancer (the cluster name is domain.abc.net with two members):

ProxyRequests Off
ProxyPreserveHost On

ProxyPass / balancer://domain.abc.net/ lbmethod=byrequests stickysession=JSESSIONID nofailover=On maxattempts=15

<Proxy balancer://domain.abc.net>
BalancerMember http://192.168.10.10:84
BalancerMember http://192.168.100.10:85
</Proxy>



Linux-

We need three servers: one load balancer and two worker nodes.
In the HTTP server configuration file /etc/httpd/conf/httpd.conf, add the following line:

Include conf/extra/httpd-proxy-balancer.conf

Now create the httpd-proxy-balancer.conf file in the /etc/httpd/conf/extra/ directory and add the following lines:

ProxyRequests Off
ProxyPreserveHost On

ProxyPass / balancer://domain.abc.net/ lbmethod=byrequests stickysession=JSESSIONID nofailover=On maxattempts=15

<Proxy balancer://domain.abc.net>
BalancerMember http://192.168.10.10:84
BalancerMember http://192.168.100.10:85
</Proxy>



Load balancing methods -:

There are three load balancing methods available in mod_proxy_balancer:
byrequests -: weighted request count balancing
bytraffic -: weighted traffic byte count balancing
bybusyness -: pending request count balancing

lbmethod is set to one of the three methods listed above; the default is byrequests.



BalancerMember http://192.168.10.10:84 loadfactor=4
BalancerMember http://192.168.100.10:85 loadfactor=6

A load factor is applied to each member of the cluster in order to define how the load is shared between the members of the cluster.

In the example above, 40% of the requests will be forwarded to the first member and the remaining 60% will be forwarded to the second.



ProxyPass / balancer://domain.abc.net/ lbmethod=byrequests stickysession=JSESSIONID

The stickysession value is the name of the cookie or request parameter that stores the session identifier at the application level; for Tomcat this is JSESSIONID.
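
For sticky sessions to work reliably with Tomcat backends, each BalancerMember is usually given a route and each Tomcat instance a matching jvmRoute. A minimal sketch, assuming two Tomcat instances that we label node1 and node2 (the labels are illustrative, not taken from the configuration above):

<Proxy balancer://domain.abc.net>
BalancerMember http://192.168.10.10:84 route=node1
BalancerMember http://192.168.100.10:85 route=node2
</Proxy>

And in each Tomcat's conf/server.xml:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

Tomcat then appends .node1 or .node2 to the JSESSIONID value, and mod_proxy_balancer uses that suffix to keep sending the same user to the same backend.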

Thursday, February 24, 2011

How to Configure Two Tomcat instance in Linux

Download Java SE Development Kit. http://java.sun.com/javase/downloads/index.jsp.

# mkdir -p /opt/test
# cd /opt/test
# chmod 755 jdk-6u10-linux-x64.bin
# sh /opt/test/jdk-6u10-linux-x64.bin

# export JAVA_HOME=/opt/test/jdk1.6.0_10
# export PATH=$JAVA_HOME/bin:$PATH

now check java version-
# java -version

Install Tomcat6
Download tomcat6 from the http://tomcat.apache.org/download-60.cgi
extract in /tmp folder.

Download httpd-2.64.tar.gz
and extract it into the /opt/test directory.

Apache compile process -: for the compile process we need GCC and the required development libraries.
# yum install gcc*
# cd /opt/test/httpd-2.64
# ./configure --prefix=/opt/test/httpd-2.64
# make
# make install
# cd /opt/test/httpd-2.64/bin
# ./apachectl start

Set permanent variables -: in Linux we can set the path permanently in the shell profile:

# vim /root/.bashrc
JAVA_HOME=/opt/test/jdk1.6.0_10
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH


We can select the newly installed JDK as the default java binary with:
# alternatives --config java

Install Tomcat6 and configuration-:

# cp -r apache-tomcat-6.0.20 /opt/test/tomcat-1

Set the CATALINA_HOME environment variable:
export CATALINA_HOME=/opt/test/tomcat-1

#cd /opt/test/tomcat-1/bin
# sh startup.sh

Configure the second Tomcat instance:

# cp -r apache-tomcat-6.0.20 /opt/test/tomcat-2

Set the CATALINA_HOME environment variable:
export CATALINA_HOME=/opt/test/tomcat-2

#cd /opt/test/tomcat-2/bin


Now check whether the first Tomcat instance is working:
http://localhost:8080

Configuring Tomcat network ports -:
Since this is the first Tomcat instance being created here, the default port numbers can be left unchanged in tomcat-1.
# vim /opt/test/tomcat-1/conf/server.xml


<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />



For the second Tomcat instance the ports need to be changed so the two instances do not conflict, for example: shutdown port 8005 -> 8006, HTTP port 8080 -> 8081, AJP port 8009 -> 8010.
# vim /opt/test/tomcat-2/conf/server.xml




<Server port="8006" shutdown="SHUTDOWN">
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8444" />
<Connector port="8010" protocol="AJP/1.3" redirectPort="8444" />



Install tomcat-connectors-1.2.30-src.tar.gz
Extract it; this creates the directory
tomcat-connectors-1.2.30-src

Compile tomcat-connector
# cd tomcat-connectors-1.2.30-src
# cd native
# ./configure --with-apxs=/opt/test/httpd-2.64/bin/apxs
# make
# make install

The build puts the mod_jk.so file into /opt/test/httpd-2.64/modules/.

# chmod 755 /opt/test/httpd-2.64/modules/mod_jk.so

Now create workers.properties
# cd /opt/test/httpd-2.64/conf/
# touch workers.properties
Add the following lines

# Define the list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status

# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.host=localhost
worker.node1.port=8009    # must match the AJP connector port in tomcat-1's server.xml
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.cachesize=10

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.host=localhost
worker.node2.port=8010    # must match the AJP connector port in tomcat-2's server.xml
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.cachesize=10

# Load-balancing behavior
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
#worker.list=loadbalancer

# Status worker for managing load balancer
worker.status.type=status

Edit httpd.conf file-:
# cd /opt/test/httpd-2.64/conf/
# vim httpd.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMount /* loadbalancer
JkMount /jkstatus status


Start the httpd server:
# /opt/test/httpd-2.64/bin/apachectl start

Start both Tomcat servers:
# /opt/test/tomcat-1/bin/startup.sh
# /opt/test/tomcat-2/bin/startup.sh
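
A quick way to sanity-check the setup (a sketch, assuming Apache listens on the default port 80 and that the second Tomcat instance was moved to port 8081 as in the server.xml example above):

# request through Apache/mod_jk - should be answered by one of the Tomcat instances
curl -I http://localhost/
# hit the Tomcat instances directly for comparison
curl -I http://localhost:8080/
curl -I http://localhost:8081/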

Monday, February 21, 2011

Configure Squid with Dansguardian

Scenario-:
1. Configure squid Server
2. Configure DansGuardian
3. Configure Iptables
4. Configure Proxy server as a router.

The purpose of this proxy server is to share the internet connection, improve web browsing performance, and use DansGuardian for content and site blocking.

A. Allow internet access to all users, with restricted web sites and content.
B. Allow a limited set of users to access all sites.
C. Publish local servers as web servers on different ports.
D. All users can send and receive mail from Outlook, but they cannot access restricted sites.
E. Allow VNC, SQL Server and Remote Desktop Connection access between the internet and the internal network.
F. Allow access to the company's website for all users.




Process-:


External LAN Card- eth0 (10.10.10.1)
Internal LAN Card- eth1(192.168.10.1)

1. Configure and install Squid Server-:

# yum install squid*

# cp /etc/squid/squid.conf /etc/squid/squid.conf.bkp

# vim /etc/squid/squid.conf

visible_hostname vsnl.com
http_port 3128

# Restrict Web access by IP address

acl special_client src "/etc/squid/special_client_ip_txt"    # IP list of users allowed to access all sites
acl our_networks src 192.168.10.0/24                         # allowed network
acl bad url_regex "/etc/squid/squid-block.acl"               # list of blocked site names
http_access allow bad special_client    # the special client list may access even the blocked sites
http_access deny bad our_networks       # deny the blocked sites for the rest of the network
http_access allow our_networks          # allow general access for the network

vim /etc/squid/special_client_ip_txt
192.168.10.126
192.168.10.200
192.168.10.251

vim /etc/squid/squid-block.acl
orkut.com
yahoo.com
gmail.com

# service squid start
# service squid stop
# service squid restart
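
To confirm that Squid itself is answering before putting DansGuardian in front of it, a quick test from a client machine (a sketch; 192.168.10.1 is the proxy's internal interface from the scenario above):

curl -I -x http://192.168.10.1:3128 http://www.example.com/

A domain listed in squid-block.acl should come back with an access-denied error page for normal users, while other sites should load.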


Install and configure DansGuardian -:
# yum install dansguardian*

# cp /etc/dansguardian/dansguardian.conf /etc/dansguardian/dansguardian.conf.bkp

# vim /etc/dansguardian/dansguardian.conf

filterip = 192.168.10.1
filterport = 8080
proxyip = 127.0.0.1
proxyport = 3128

vim /etc/dansguardian/lists/bannedsitelist
# list of blocked sites
gmail.com
yahoo.com
facebook.com
orkut.com

vim /etc/dansguardian/lists/bannedregexpurllist

# regular expressions for content blocking

orkut|youtube|sport|gmail|facebook|sex|video|virus|audio

vim /etc/dansguardian/lists/exceptionsitelist
# the following sites will not be filtered by DansGuardian; they are allowed for all users.

www.online-linux.blogspot.com
www.xyz.com

vim /etc/dansguardian/lists/exceptioniplist

# list of IPs that are allowed to access all filtered sites.

192.168.10.126
192.168.10.200
192.168.10.251
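
After editing the lists, restart DansGuardian so the changes are picked up, and then check a banned and an allowed site through the filter port (this assumes the filter is listening on 192.168.10.1:8080 as configured above):

# service dansguardian restart
# curl -I -x http://192.168.10.1:8080 http://www.online-linux.blogspot.com/    # should load
# curl -I -x http://192.168.10.1:8080 http://orkut.com/                        # should be blocked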

Configure iptables -:
# masquerade the local LAN (eth1) out through the external interface (eth0)
# redirect all port 80 requests from eth1 (the local LAN) to 8080
# publish the local website
# allow ports 80 and 8080
$ iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE
$ iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080
$ iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 3128 -j REDIRECT --to-port 8080
$ iptables -t nat -A PREROUTING -p tcp -d 10.10.10.1 --dport 8090 -j DNAT --to-destination 192.168.10.10:8090
$ iptables -I INPUT -s 192.168.10.0/24 -p tcp --dport 80 -j ACCEPT
$ iptables -I INPUT -s 192.168.10.0/24 -p tcp --dport 8080 -j ACCEPT
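
Because the proxy machine also acts as the router for the LAN (step 4 of the scenario), IP forwarding must be enabled as well; that step is not shown above, so here is a minimal sketch:

# enable forwarding immediately
$ echo 1 > /proc/sys/net/ipv4/ip_forward

# make it persistent across reboots by adding this line to /etc/sysctl.conf
net.ipv4.ip_forward = 1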





Client side -

Browser LAN/proxy setting: 192.168.10.1:8080 (the DansGuardian filter address and port).