
Friday, November 26, 2010

How to install Samba on Linux with Screenshots

Samba allows Linux computers to share files and printers across a network connection. By using the SMB protocol, your Linux box can appear in Windows Network Neighborhood or My Network Places just like any other Windows machine, and you can share both files and printers this way. By using Samba on my home network, for example, my Windows machines have access to a printer hooked up directly to my Linux box, and my Linux box has access to a printer hooked up directly to one of my Windows machines. In addition, everyone can access everyone else's shared files. You can see how Samba can be very useful if you have a network of both Windows and Linux machines.
Samba configuration differs slightly depending on the distribution you're using. This article explains the steps on Fedora.
1. Install Samba
$ sudo yum install samba samba-client system-config-samba
Enter your password when prompted.

2. Adding Users
You will need to ensure that people also have a login on the Samba server to do their work. Logins should be provided on an as-needed basis. In most cases, the users accessing the Samba server will be a subset of the total users on the Windows business network.
Create user logins with the Gnome User Manager tool in Fedora. You can find this from the main menu by choosing System Settings, then Users & Groups. The command for this is: system-config-users.
Go to System -> Administration -> Samba.

Under the Preferences menu item, choose Samba Users.

In this window you must add at least one user who will have access to the Samba server. Note that only user accounts created above should be added to this listing.
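The same mapping can be done from the command line. A minimal sketch, assuming a hypothetical user named alice (the Linux account must exist before the Samba password is set):

$ sudo useradd alice        # create the Linux account, if it does not already exist
$ sudo smbpasswd -a alice   # add the user to Samba and set her Samba password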

3. Configure Sharing
You can configure Samba by editing the /etc/samba/smb.conf file directly; here we will use the GUI tool installed by the package "system-config-samba".
Go to System -> Administration -> Samba.
Go to Preferences -> Server Settings.


Enter the workgroup name exactly as it is set on your Windows machines.
In this case it is "workgroup"; your situation may be different.

In this same window, click on the Security tab. It comes by default with the appropriate settings for a basic Samba server: the Authentication Mode should be User. You would need to change this only if you plan to allow logins based on Microsoft ADS.



In the Samba Server Configuration window, you must create at least one Samba share directory.
Press the "+" button and then the Browse button.




Now choose a folder you wish to make available to Samba users. Be careful: some folders have permission settings that do not allow sharing. Be sure to select the Read/Write option to allow people full access. Don't press OK yet!
In the same window, select the second tab, labeled Access. Choose the first option, "Only allow access to specific users", and select the users you wish to give access to this specific shared folder. Press OK when finished.
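Under the hood, the GUI writes a share definition into /etc/samba/smb.conf. A minimal sketch of such an entry, assuming a hypothetical folder /home/alice/shared shared to the user alice:

[shared]
    path = /home/alice/shared
    writable = yes
    valid users = alice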



4. Start Samba Server
The SMB daemon and other core services are usually NOT started by default, so you will need to start the SMB daemon yourself. Using the GUI, from the main menu go to System Settings, then Server Settings, then choose Services. You can also get to this using the command: system-config-services.


Select SMB and press Start. Save the configuration and exit.
Also make sure your firewall is configured to allow inbound requests to the Samba server: go to the firewall settings and mark Samba as a trusted service.
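From a terminal, the equivalent on Fedora releases of this era (a sketch using the SysV service tools) is:

$ sudo service smb start    # start the Samba daemon now
$ sudo chkconfig smb on     # start it automatically at every boot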

Before moving over to Windows, note the IP address of your Linux machine.
Issue the following command:
$ ifconfig
It will show the details of your network interfaces (LAN, WLAN, etc.).

5. Access the Samba Server from Windows
In Windows, go to Start -> Run.
Type the IP address of the Linux system hosting the Samba server, e.g.:

\\192.154.54.1

Windows will ask you for a Samba username and password.
You can now access the shared folders from Windows.
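You can also verify the share from the Linux side first with smbclient (installed above as part of samba-client); a sketch, again assuming the hypothetical user alice:

$ smbclient -L 192.154.54.1 -U alice   # list the shares the server exports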

Monday, November 22, 2010

How to block websites using Squid

This is a very simple way to block a single website, or multiple websites, in Squid proxy server. Just add the lines below to the squid.conf file.
To block a single website:
acl <acl name> url_regex -i <website name>
http_access deny <acl name>
example:
acl block_website url_regex -i www.yahoo.com
http_access deny block_website
To block multiple websites:
acl <acl name> url_regex -i "/etc/squid/<file listing the websites to block>"
http_access deny <acl name>

example:
Create a file named "Blockwebsite" in the /etc/squid directory.
Write the names of the websites you want to block in this file:
www.hotmail.com
www.yahoo.com
www.gmail.com


Edit squid.conf as:
acl Blockwebsite url_regex -i "/etc/squid/Blockwebsite"
http_access deny Blockwebsite
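For the change to take effect, Squid must re-read its configuration; restarting the squid service works, or more quickly:

# squid -k reconfigure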


Below is the link for the installation of Squid:
Proxy Server With Squid.

Steps to set up a proxy server with Squid on Linux

Put Squid between the users and the Internet to cache your web pages. Users surf faster, HTTP traffic uses less bandwidth, and you can save on bandwidth fees -- or use the saved bandwidth for other traffic.
You can install Squid from source or from an rpm package.
On Fedora, if you are online, as the root user type:
# yum install squid
This will install Squid on your system.
After this step you will find the config file at /etc/squid/squid.conf.
Edit squid.conf to suit your requirements.
Basic configuration
  • Check http_port, icp_port, and htcp_port. 3128 is a good default; 8080 is a reasonable alternative for HTTP. Port 80, or any port normally used by some other service, should be avoided if at all possible.
  • Leave cache_mem at 8 Mbytes at first, unless you routinely have between 0.5 Gbytes and 1 Gbyte of RAM free. If so, set cache_mem to 128 Mbytes. Adjust cache_mem once local cache patterns are known.
  • Increase the maximum_object_size to 40 Mbytes. If larger files are routinely downloaded, increase it to 250 Mbytes or even 700 Mbytes.
  • Set cache_dir to an area that has a large amount of space. Technically it belongs under /var, but you might not want it backed up. Don't set it to use more than 70 percent of the available space; Squid uses this directory to store journal files as well. cache_dir ufs /var/cache/squid 80000 16 256 is common.
  • Turn the access_log and cache_log on. The former tells you who is doing what, and the latter tells you when things aren't quite right.
  • cache_swap_log is the location for the journal files mentioned in cache_dir. The default location is in the same directory as cache_dir.
  • pid_filename must be set. /var/log/squid/squid.pid is a good location. Squid uses this to shut down, rotate log files, or reread its configuration.
  • refresh_pattern affects how objects are evaluated for freshness. A reasonable default is refresh_pattern . 0 20% 10080.
  • cache_mgr is the address to which people who use the cache can report problems. Be sure to use an email address that you will actually read.
  • cache_effective_user and cache_effective_group should be set to a "proxy" user and group. Many distributions ship with this user and group preinstalled.
  • Recursively chown the log and cache directories to this user before you start Squid. This user must be able to read the configuration file and the directory that it's in.

    chown -R proxy:proxy /var/log/squid /var/cache/squid

  • Set visible_hostname to the fully qualified domain name, for example, gw.mybox.com.
  • Uncomment dns_testnames. If it can't resolve names like "netscape.com", "internic.net", and "nlanr.net", your system needs fixing.
  • Turn memory_pools off unless there's a lot of free memory on the box.
  • Turn log_icp_queries on. ICP queries come from other proxies -- if you don't have sibling or parent proxies and you're getting them, you'll want to see these in the access.log.
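Pulling the settings above together, a minimal squid.conf might look like the following sketch (the paths, sizes, address, and hostname are illustrative assumptions, not requirements):

http_port 3128
cache_mem 8 MB
maximum_object_size 40 MB
cache_dir ufs /var/cache/squid 80000 16 256
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
pid_filename /var/log/squid/squid.pid
refresh_pattern . 0 20% 10080
cache_mgr webmaster@mybox.com
cache_effective_user proxy
cache_effective_group proxy
visible_hostname gw.mybox.com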

Basic configuration ACLs

Access-control lists manage the access to your network. This basic example limits access to the proxy to the network 1.2.3.4/24. It matches successfully if a request comes from any of the addresses between 1.2.3.0 and 1.2.3.255 (inclusive).
acl our_network src 1.2.3.4/24
http_access allow our_network
http_access deny all
ACLs are checked from top to bottom. Clients with IPs in our_network are permitted, anyone else falls through to the "deny all" and gets a failure message. The format for the class definition is acl listname src network/netmask.
ACLs have an implicit last line that reverses the rule of the previous line. This protects against forgetting to add the http_access deny all, but explicitly adding that line makes the ACL more readable and helps ensure that it's not missed when the ACL is changed.

miss_access

If an object isn't in the cache and marked as fresh, Squid checks with the origin server to see if it is still current and requests a new copy if it isn't. This behavior serves local users well, but is undesirable if the requesting client is a neighboring proxy server. The following ACL lines allow the local network to be passed objects which aren't in the current cache, but deny this service to anyone outside the local network.
miss_access allow our_network
miss_access deny all

icp_access

Caches communicate with ICP messages to find out whether they have fresh content that satisfies a request. The icp_access ACL lines are used to control the caches Squid can communicate with.
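Following the same pattern as the http_access rules above, a minimal sketch that restricts ICP to the local network would be:

icp_access allow our_network
icp_access deny all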

Configuration for speed

To maximize speed, minimize the number of simultaneous requests Squid has to handle. The more requests Squid has to process in parallel, the longer each request takes. Every bit of latency you can remove speeds up the server.
  • Use a multiprocessor machine with asynchronous I/O enabled.
  • Run a version of Squid with internal DNS, or increase the number of DNS servers.
Aim to have 20 or 30 DNS servers. DNS lookups can be slow -- some continental backbones can take a minute or more to resolve a DNS request.
When you have Squid configured, run squid -z to create the cache directory structure. Then you can start Squid.
To configure any application, including a web browser, to use Squid, modify its proxy setting with the IP address of the Squid server and the port number (default 3128).
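Putting it together, a quick start-and-test sketch (assuming the Fedora service scripts and a Squid server at the hypothetical address 192.168.1.10):

# squid -z              # create the cache directory structure
# service squid start   # start the proxy
$ http_proxy=http://192.168.1.10:3128 curl -I http://www.example.com/   # fetch headers through the proxy to verify it works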


Also read:
How to block websites using squid

How to back up a Linux system using dump

The dump tool works by making a copy of an entire file system. The restore tool can then take this copy and pull any and all files from it.
To support incremental backups, dump uses the concept of dump levels. A dump level of 0 means a full backup. Any dump level above 0 is an incremental relative to the last time a dump with a lower dump level occurred. For example, a dump level of 1 covers all the changes to the file system since the last level 0 dump, a dump level of 2 covers all of the changes to the file system since the last level 1 dump, and so on, all the way through dump level 9. Consider a case in which you have three dumps: the first is a level 0, the second is a level 1, and the third is also a level 1. The first dump is, of course, a full backup. The second dump (level 1) contains all the changes made since the first dump. The third dump (also a level 1) again contains all the changes made since the level 0 dump, not just those since the second dump, because an incremental is always taken relative to the last dump at a lower level.
The dump utility stores all the information about its dumps in the /etc/dumpdates file. This file lists each backed-up file system, when it was backed up, and at what dump level. Given this information, you can determine which tape to use for a restore. For example, if you perform level 0 dumps on Monday, level 1 incrementals on Tuesday and Wednesday, and then level 2 incrementals on Thursday and Friday, a file that was last modified on Tuesday but accidentally erased on Friday can be restored from Tuesday night's incremental backup. A file that was last modified during the preceding week will be on Monday's level 0 tape.
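As a sketch, that weekly schedule translates into commands like the following (assuming the /dev/hda1 file system and a tape at /dev/st0; the -u flag records each run in /etc/dumpdates):

# dump -0u -f /dev/st0 /dev/hda1   # Monday: full backup
# dump -1u -f /dev/st0 /dev/hda1   # Tuesday and Wednesday: changes since Monday
# dump -2u -f /dev/st0 /dev/hda1   # Thursday and Friday: changes since the last level 1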


Using dump
The dump tool is a command-line utility that takes many parameters. For example, here is the command to perform a level 0 dump of the /dev/hda1 file system to /dev/st0:

[root@scribe /root]# dump -0 -f /dev/st0 /dev/hda1


dump Parameter Description

-n           Specifies the dump level, where n is a number between 0 and 9. For example, -0 performs a full backup.

-b blocksize Sets the dump block size to blocksize, which is measured in kilobytes. If you are backing up many large files, using a larger block size will increase performance. You may need to carefully adjust this to match the capabilities of your tape system.

-B count     Specifies a number (count) of records per tape to be dumped. If there is more data to dump than there is tape space, dump will prompt you to insert a new tape.

-f filename  Specifies a location (filename) for the resulting dump file. You can make the dump file a normal file that resides on another file system, or you can write the dump file to the tape device. The SCSI tape device is /dev/st0.

-u           Updates the /etc/dumpdates file after a successful dump.

-d density   Specifies the density of the tape in bits per inch.

-s size      Specifies the size of the tape in feet.

-a           Bypasses all tape-length calculations and writes until an end-of-media signal is returned. This works best for most modern tape drives and is particularly useful for appending data to existing tapes. This is the default mode.

-z or -j     Compresses each data block. The -z parameter uses zlib compression, while -j uses bzlib. Either option can be immediately followed by a number to specify the compression level, or by white space to accept the default compression level of 2. Your tape drive must support variable-length blocks to use this feature. If your tape system has hardware compression built in, don't use both the hardware compression and this option together, or your files will likely increase in size.

Using dump to Back Up an Entire System
The dump utility works by making an archive of one file system. If your entire system comprises multiple file systems, you need to run dump for every file system. Since dump creates its output as a single, large file, you can store multiple dumps on a single tape by using a nonrewinding tape device.
Assuming we're backing up to a SCSI tape device, /dev/nst0, we must first decide which file systems we're backing up. This information is in the /etc/fstab file. Obviously, we don't want to back up entries such as /dev/cdrom, so we skip those. Depending on our data, we may or may not want to back up certain partitions (such as swap and /tmp).


Let's assume this leaves us with /dev/hda1, /dev/hda3, /dev/hda5, and /dev/hda6. To back up these to /dev/nst0, compressing them along the way, we would issue the following series of commands:
[root@scribe /root]# mt -f /dev/nst0 rewind
[root@scribe /root]# dump -0uz -f /dev/nst0 /dev/hda1
[root@scribe /root]# dump -0uz -f /dev/nst0 /dev/hda3
[root@scribe /root]# dump -0uz -f /dev/nst0 /dev/hda5
[root@scribe /root]# dump -0uz -f /dev/nst0 /dev/hda6
[root@scribe /root]# mt -f /dev/nst0 rewind
[root@scribe /root]# mt -f /dev/nst0 eject
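To pull files back later, the restore tool reads the same archives. A sketch for browsing the second dump on the tape (/dev/hda3 in the sequence above): position the tape past the first archive with mt, then run restore in interactive mode:

[root@scribe /root]# mt -f /dev/nst0 rewind
[root@scribe /root]# mt -f /dev/nst0 fsf 1    # skip past the first dump file on the tape
[root@scribe /root]# restore -if /dev/nst0    # browse the archive and mark files to extract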

Monday, November 15, 2010

Oracle Certified Master credential for Oracle Database 11g


The Oracle Certified Master credential is designed for Oracle Certified Professionals with advanced training and at least three to four years of professional, enterprise-level experience with Oracle Database. This column introduces two Oracle Database 11g Oracle Certified Master exams, provides details on the target candidates and focus areas of each exam, and describes a few sample tasks that are similar to those in the exams. 

About the Oracle Certified Master Exams

Obtaining the Oracle Certified Master credential requires passing an onsite practical exam—conducted in an Oracle University classroom—that tests candidates on their ability to perform database administration tasks in a live database environment. To become an Oracle Certified Master, a candidate must complete two required advanced DBA courses from Oracle University in addition to passing an Oracle Certified Master exam. 

All candidates must complete a certain number of “skillsets” that test their ability to perform complex technical tasks. They are required to complete the tasks with either the command-line interface or Oracle Enterprise Manager, and because the exams are based on the Oracle Enterprise Linux platform, working knowledge of Linux commands is essential. Candidates also need to be prepared to work with the following tools while completing the skillsets: 
  • Oracle Recovery Manager (Oracle RMAN) utility
  • Oracle Network Manager
  • Oracle Net Configuration Assistant
  • Oracle Database Configuration Assistant
  • Oracle Enterprise Manager
  • Oracle Listener Utility
  • Oracle Management Service
  • Oracle Password Utility
  • Oracle Data Guard command-line interface 
 
As shown in the diagram in Figure 1, each Oracle Certified Master candidate participating in the practical exam is provided with two machines: a database server and a management server. Both servers are provided with Oracle Enterprise Linux Release 5.2 and Oracle Database 11g installed. The database server hosts the production database the candidate uses to perform various database administration tasks. The management server hosts Oracle Management Repository, which the candidate uses to perform Oracle Enterprise Manager Grid Control tasks.

 

Figure 1: Oracle Certified Master examination servers
 
Depending on a candidate’s current certification, that person needs to take and pass one of the following exams to become an Oracle Certified Master on Oracle Database 11g:
  • Oracle Database 11g Certified Master Upgrade Exam
  • Oracle Database 11g Certified Master Exam  
 

Oracle Database 11g Certified Master Upgrade Exam

This exam provides a path for Oracle Certified Masters on Oracle9i Database or Oracle Database 10g to upgrade their certification to Oracle Database 11g Certified Master. Candidates need not upgrade their OCP certification; they will instead be granted 11g OCP certification along with 11g master certification when they pass this exam. The exam is based on Oracle Database 11g Release 1.

This one-day practicum tests candidates on four primary areas: 
  • Oracle Database, Oracle RMAN, Oracle Enterprise Manager, and network configuration
  • Oracle Data Guard
  • Data and data warehouse management
  • Performance management  
 

Oracle Database 11g Certified Master Exam

Candidates need to be Oracle Certified Professionals on Oracle Database 11g before signing up for this exam. The exam is based on Oracle Database 11g Release 2 and includes the use of Oracle Real Application Clusters (Oracle RAC). 

This two-day practicum tests candidates on eight primary areas: 
  • Creating database and network configurations
  • Managing database availability
  • Data warehouse management
  • Data management
  • Performance management
  • Oracle Enterprise Manager Grid Control
  • Oracle Data Guard
  • Grid infrastructure, Oracle Automatic Storage Management, and Oracle RAC  
 

Before You Take the Oracle Database 11g Certified Master Exam

Here are some recommendations for candidates who plan to take the Oracle Database 11g Certified Master exam:

  • Gain considerable hands-on expertise with all the exam objectives.
  • Get comfortable with the Linux command language—you will need to perform certain operating-system-level tasks such as executing a script and moving/copying a file/directory.
  • Be prepared to use all the tools listed in this column.
  • Practice completing tasks within a time limit. Working out each of the skillsets within a stipulated time is one of the key factors in achieving this certification.
   

Sample Tasks

The sample tasks in this section are representative of the types of tasks candidates will be asked to perform on the Oracle Certified Master exams. 

Task: Creating a tablespace. Create a tablespace named HRTBS in the ORCL database with the following specifications: 
  • Tablespace name: HRTBS
  • Block size: 16 K
  • File size: expected to grow to 3 TB
  • Initial extent size: 1 MB
  • Next extent size: 1 MB 
 
This task tests whether the candidate knows how to create a bigfile tablespace with the given specifications. Also, 16 K is a nonstandard block size, and hence the candidate is expected to do the configuration necessary to create the tablespace.
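A sketch of one way to complete this task from the shell (the datafile path and cache size are hypothetical; a buffer cache for the nonstandard 16 K block size must be configured before the tablespace can be created):

sqlplus / as sysdba <<'EOF'
-- allocate a buffer cache for the nonstandard 16 K block size
ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE = BOTH;
-- a bigfile tablespace holds a single datafile that can grow to 3 TB
CREATE BIGFILE TABLESPACE hrtbs
  DATAFILE '/u01/app/oracle/oradata/ORCL/hrtbs01.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 3T
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;  -- 1 MB initial and next extents
EOF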
Task: Creating a partitioned table. Create a partitioned table called PART_EMPLOYEES, using the list-partitioned EMPLOYEES table. The partitions should be stored in the EMPTBS1, EMPTBS2, and EMPTBS3 tablespaces.
The PART_EMPLOYEES table should have the following column specifications: 
EMPLOYEE_ID       NOT NULL NUMBER(6)
EMAIL             NOT NULL VARCHAR2(25)
HIRE_DATE         NOT NULL DATE
MANAGER_ID        NUMBER(6)
DEPARTMENT_ID     NOT NULL NUMBER(4)
 
Next, populate the PART_EMPLOYEES table by executing the populate_parttable.sql script.
This task tests whether the candidate knows how to create a partitioned table with the given specifications. The candidate is expected to create a list-partitioned PART_EMPLOYEES table and store each partition in a separate tablespace. (The EMPLOYEES table is configured accordingly before a candidate starts this task.) The candidate will then populate the table using the script provided.
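A sketch of the kind of statement involved; the partition key, the value lists, and the hr/hr login here are illustrative assumptions (on the exam they would be derived from the list-partitioned EMPLOYEES table):

sqlplus hr/hr <<'EOF'
CREATE TABLE part_employees (
  employee_id    NUMBER(6)     NOT NULL,
  email          VARCHAR2(25)  NOT NULL,
  hire_date      DATE          NOT NULL,
  manager_id     NUMBER(6),
  department_id  NUMBER(4)     NOT NULL
)
PARTITION BY LIST (department_id) (
  PARTITION p_emp1 VALUES (10, 20)   TABLESPACE emptbs1,
  PARTITION p_emp2 VALUES (30, 40)   TABLESPACE emptbs2,
  PARTITION p_emp3 VALUES (DEFAULT)  TABLESPACE emptbs3
);
-- populate the table with the supplied script
@populate_parttable.sql
EOF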

Task: Oracle RMAN configuration. Configure Oracle RMAN according to the following specifications, and perform a backup of the ORCL database to recover the database in the event of a failure:
  • Enable compression for the backups on disk
  • Configure automatic backup of the control file and server parameter file 
 
This task tests whether a candidate knows how to use the Oracle RMAN CONFIGURE command to create persistent settings in the Oracle RMAN environment and then perform a backup. When performing the task, the candidate should use the following settings to enable compression of the backups on disk: 
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET; 

CONFIGURE CONTROLFILE AUTOBACKUP ON;
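With those persistent settings in place, the backup itself can be as simple as the following (a sketch; PLUS ARCHIVELOG assumes the database is running in ARCHIVELOG mode):

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;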

Oracle Enhances Solaris Operating System

As part of its integrated applications-to-disk technology stack strategy, Oracle has announced Oracle Solaris 10 9/10, Oracle Solaris Cluster 3.3, and Oracle Solaris Studio 12.2.

The Oracle Solaris 10 9/10 operating system update features improved networking, performance, and virtualization capabilities and includes updates to the Oracle Solaris ZFS file system. Oracle Solaris 10 9/10 preserves full compatibility with more than 11,000 third-party products and customer applications and is designed to leverage systems based on the latest SPARC and x86 architectures.

Oracle Solaris Cluster 3.3, a multisystem, multisite disaster recovery solution, enables virtual application clusters via Oracle Solaris Containers in Oracle Solaris Cluster Geographic Edition. It integrates with Oracle WebLogic Server, Oracle Siebel Customer Relationship Management, MySQL Cluster, and Oracle Business Intelligence Enterprise Edition 11g and uses the Oracle Solaris Trusted Extensions feature for security.

Oracle Solaris Studio 12.2 provides tools for developing single-threaded, multithreaded, and distributed applications. Its integrated development environment, code-aware editor, workflow, and project functionality help increase developer productivity.

Oracle Solaris provides the proven enterprise-class reliability, security, and performance that customers need for their most mission-critical and essential applications. Oracle Solaris 10 set the standard for mission-critical computing. Now, through Oracle's increased investment in technical innovation and integration with the entire Oracle hardware and software stack, even higher levels of application performance and service can be achieved.

Wednesday, October 13, 2010

Oracle Enterprise Content Management Suite 11g

Oracle Enterprise Content Management Suite 11g, a comprehensive, integrated, and high-performance content management solution, is now available to help organizations increase efficiency, reduce costs, and improve content security.

A component of Oracle Fusion Middleware, Oracle Enterprise Content Management Suite 11g consists of Oracle Universal Records Management 11g, Oracle Imaging and Process Management 11g, and Oracle Information Rights Management 11g.

The new suite is built on a unified content repository and provides an easy-to-use, end-to-end solution spanning imaging, Web, document, and records management. It builds on Oracle's solution for enterprise application documents through preintegration with business processes, business applications, and desktop productivity tools. Its unified repository can support high-volume content ingestion applications, such as invoice processing, and high-volume content delivery applications, such as e-commerce sites.

To maximize content management efficiency, reduce costs, and improve security, organizations need a comprehensive enterprise content management solution that is integrated into business processes and fits with the way people work. Oracle Enterprise Content Management Suite 11g aims to meet those demands while delivering the performance and scalability needed to support the most complex and demanding enterprise content management environments.

New enhancements deliver unparalleled ease of use and help put content management where people work by providing:
  • Next-generation desktop integration – to increase efficiency through access to workflow information and saved searches via Microsoft Windows Explorer, as well as offering the ability to edit managed content, compare managed documents, and insert managed links directly from Microsoft Office.
  • Open Web content management – a revolutionary approach to integrating Web content authoring, design, and presentation capabilities into a broad range of Web sites; portals, including Oracle WebLogic Portal; and Web applications. Among the new features are:
      • Servlets and tag libraries for content management services that enable Web developers to add Web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF), and Oracle Application Development Framework (ADF) Faces applications
      • A plug-in that adds an array of Web content management palettes to Oracle's leading integrated development environment, Oracle JDeveloper
      • A simplified design mode for business users, which makes it easier for them to edit and manage their sites
Oracle Universal Records Management 11g is a scalable, DoD 5015.2 v3-certified electronic and physical records management system. Key new enhancements support business demands by:
  • Improving discovery – to provide access to content from anywhere in the enterprise and enable instant holds, dispositions, and discovery of content from across systems and applications
  • Enhancing usability – to improve insight for business users through an updated physical records management interface, a records management dashboard, and more flexible and complete reporting
  • Delivering real-time discovery and support for third-party archiving – Oracle Universal Records Management 11g is an enterprise-ready solution with real-time discovery capabilities and support for third-party archiving and information management vendors.

Friday, October 8, 2010

Oracle 11g Release 2 installation on Linux: screenshots


Provide your email address if you wish to receive support updates from Oracle.


Select the operation you want to perform.




Enter the details below, including the administrative password.