After being amazed and bewildered for some time, and listening to people talk so geekily about it, I decided to unearth a few of its intricacies. As we all know, in a few parts of the world people make simple things complicated for others, which benefits a few blunt-headed idiots who get cheap credit and save their arses.
Oh! By the way, I forgot to mention (for those of you who ignored the title) that this is all about the intricacies of the LUN (Logical Unit Number). So without much ado, let's dive into it.
The most clear-cut definition I have come across so far is this:
"A LUN is a Logical Unit Number. It can be used to refer to an entire physical disk, or a subset of a larger physical disk or disk volume. The physical disk or disk volume could be an entire single disk drive, a partition (subset) of a single disk drive, or disk volume from a RAID controller comprising multiple disk drives aggregated together for larger capacity and redundancy. LUNs represent a logical abstraction or, if you prefer, virtualization layer between the physical disk device/volume and the applications."
So how can you detect the LUN?
There are two common ways to detect it: one is to check through the /proc filesystem, and the second is to build an initrd image so that LUNs are detected at OS boot time.
Take the first case, scanning the /proc filesystem; we need to do the following to get a sense of it:
bhaskar@bhaskar-laptop_06:55:52_Wed Nov 17:~> sudo cat /proc/scsi/scsi
Password:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST9160821AS Rev: 3.BH
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: Optiarc Model: DVD RW AD-7560A Rev: DH10
Type: CD-ROM ANSI SCSI revision: 05
So that is what a scan of the /proc filesystem shows: one line per attached device, keyed by Host, Channel, Id and Lun.
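As an aside, on systems where the lsscsi utility is installed (it presents the same information), each device's host:channel:target:LUN tuple is printed on one line. The output below is only an illustration matching the scan above, and the /dev node names are assumptions:

bhaskar@bhaskar-laptop:~> lsscsi
[0:0:0:0]    disk    ATA       ST9160821AS       3.BH   /dev/sda
[3:0:0:0]    cd/dvd  Optiarc   DVD RW AD-7560A   DH10   /dev/sr0

Now for the next method: getting LUNs detected when the OS boots.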
Detect LUNs automatically at system boot
The second method of configuring LUNs, for a Linux system on which only LUN 0 is configured, involves setting the parameter of the SCSI mid-layer driver that controls how many LUNs are scanned during a SCSI bus scan. The following procedure works for both 2.4 and 2.6 kernels, but it assumes the SCSI mid-layer driver is compiled as a scsi_mod module that is loaded automatically at system boot time. For Linux 2.4 kernels, to properly detect all volumes, you need to set the max_scsi_luns option of the SCSI mid-layer driver to the maximum number of disk devices. For example, if max_scsi_luns is set to 1, SCSI bus scans are limited to LUN 0 only; it should instead be set to the maximum number of disks the kernel can support, for example 128 or 256. In Linux 2.6 kernels the same procedure applies, except that the parameter has been renamed from max_scsi_luns to max_luns.
1. Edit the /etc/modules.conf file.
2. Add the following line:
* options scsi_mod max_scsi_luns=<n>
(where <n> is the total number of LUNs to probe)
3. Save the file.
4. Run the mkinitrd command to rebuild the ram-disk associated with the current kernel. The following examples show which mkinitrd command to run for your distribution; <kernel> refers to the 'uname -r' output, which displays the currently running kernel level, for example 2.4.21-292-smp.
For SUSE distributions, use the following command:
cd /boot
mkinitrd -k vmlinuz-<kernel> -i initrd-<kernel>
For Red Hat distributions, use the following command:
cd /boot
mkinitrd -v initrd-<kernel>.img <kernel>
5. Reboot the host.
6. Verify that the boot files are correctly configured for the newly created initrd image in the /boot/grub/menu.lst file. (A consolidated sketch of these steps follows.)
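To pull those steps together, here is a minimal sketch of the whole procedure on a 2.4 kernel. It assumes scsi_mod is built as a module and that 128 is the LUN count you want probed; adjust the kernel version to whatever 'uname -r' reports:

echo 'options scsi_mod max_scsi_luns=128' >> /etc/modules.conf

# Rebuild the initrd for the running kernel (SUSE-style invocation):
cd /boot
mkinitrd -k vmlinuz-$(uname -r) -i initrd-$(uname -r)

# Red Hat-style invocation instead:
# mkinitrd -v initrd-$(uname -r).img $(uname -r)

# On a 2.6 kernel the parameter is max_luns, set in /etc/modprobe.conf:
# options scsi_mod max_luns=128

After rebooting, the kernel should probe up to 128 LUNs per SCSI bus scan.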
The basics behind it are somewhat like this: LUNs are created as a basic part of the storage provisioning process, using software tools that typically accompany the particular storage platform. However, there is not a 1-to-1 ratio between drives and LUNs; many LUNs can easily be carved out of a single disk drive.
For example, a 500 GB drive can be partitioned into one 200 GB LUN and one 300 GB LUN, which would appear as two unique drives to the host server. Conversely, storage administrators can employ Logical Volume Manager software to combine multiple LUNs into a larger volume. Veritas Volume Manager from Symantec Corp. is one example of this software. In actual practice, disks are first gathered into a RAID group for larger capacity and redundancy (e.g., RAID-50), and then LUNs are carved from that RAID group.
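As a quick illustration of the volume-manager side, here is a minimal LVM sketch that pools two LUNs into one larger volume. The device names (/dev/sdb, /dev/sdc), the volume group name (datavg) and the sizes are assumptions for the example only:

pvcreate /dev/sdb /dev/sdc          # initialize both LUNs as LVM physical volumes
vgcreate datavg /dev/sdb /dev/sdc   # pool them into a single volume group
lvcreate -L 400G -n datalv datavg   # carve a 400 GB logical volume out of the pool
mkfs.ext3 /dev/datavg/datalv        # put a filesystem on the combined volume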
LUNs are often referred to as logical "volumes," reflecting the traditional use of drive volume letters, such as volume C: or volume F: on your computer. But some experts warn against mixing the two terms, noting that "volume" is often used to denote the large volume created when multiple LUNs are combined with volume manager software. In this context, a volume may involve numerous LUNs, and mixing the terms can confuse storage allocation discussions.
Once created, LUNs can also be shared between multiple servers. For example, a LUN might be shared between an active and standby server. If the active server fails, the standby server can immediately take over. However, it can be catastrophic for multiple servers to access the same LUN simultaneously without a means of coordinating changed blocks to ensure data integrity. Coordinating data changes requires clustering software, such as a clustered volume manager, a clustered file system, a clustered application or a network file system using NFS or CIFS.
LUN scaling and performance
LUNs are based on disks, so LUN performance and reliability will vary for the same reasons. For example, a LUN carved from a Fibre Channel 15K rpm disk will perform far better than a LUN of the same size taken from a 7,200 rpm SATA disk. This is also true of LUNs based on RAID arrays, where the striping of a RAID-0 group may offer significantly different performance than the parity protection of a RAID-5 or RAID-6/dual parity (DP) group. Proper RAID group configuration will have a profound impact on LUN performance.
An organization may utilize hundreds or even thousands of LUNs, so the choice of storage resources has vast implications for a storage administrator. Not only is it necessary to supply an application with adequate capacity (in gigabytes), but the LUN must also be drawn from disk storage with suitable characteristics.
LUN management tools
Since an enterprise array may host more than 10,000 LUNs, software tools are vital for efficient LUN creation, manipulation and reporting. Such management tools are readily available; almost every storage vendor provides some type of management software to accompany products ranging from direct-attached storage (DAS) devices to enterprise arrays.
Administrators typically opt for vendor-specific or heterogeneous tools. A data center with a single storage array, or a single-vendor shop, would do well with the native LUN management tool that accompanies its storage system. Multivendor shops should at least consider heterogeneous tools that allow LUN management across all of their storage platforms.
A LUN management tool should also support the entire storage provisioning process. Features should include mapping to specific array ports and masking specific host bus adapters (HBA), along with comprehensive reporting. The LUN management tool should also be able to reclaim storage that is no longer needed. Although a few LUN management products support autonomous provisioning, some administrators have reservations about such automation.
SAN zoning and masking
LUNs are the basic vehicle for delivering storage, but provisioning SAN storage isn't just a matter of creating LUNs or volumes; the SAN fabric itself must be configured so that disks and their LUNs are matched to the appropriate servers. Proper configuration helps to manage storage traffic and maintain SAN security by preventing any server from accessing any LUN.
Zoning makes it possible for devices in a Fibre Channel network to see each other. By limiting the visibility of end devices, servers (hosts) can only see and access storage devices that are placed into the same zone. In more practical terms, zoning allows certain servers to see one or more ports on a disk array. Bandwidth, and thus minimum service levels, can be reserved by dedicating certain ports to a zone, or incompatible ports can be isolated from one another.
Zoning is an important element of SAN security and high-availability SAN design. Zoning can typically be broken down into hard and soft zoning. With hard zoning, each device is assigned to a zone, and that assignment can never change. In soft zoning, the device assignments can be changed by the network administrator.
LUN masking adds granularity to this concept. Just because you zone a server and disk together doesn't mean that the server should be able to see all of the LUNs on that disk. Once the SAN is zoned, LUNs are masked so that each host server can only see specific LUNs.
Suppose that a disk has two LUNs: LUN_A and LUN_B. If we zoned two servers to that disk, both servers would see both LUNs. However, we can use LUN masking to allow one server to see only LUN_A and mask the other server to see only LUN_B. Port-based LUN masking is granular to the storage array port, so any disks on a given port will be accessible to any servers on that port. Server-based LUN masking is a bit more granular; a server will see only the LUNs assigned to it, regardless of the other disks or servers connected.
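From the host side, you can sanity-check what zoning and masking actually expose by using the same /proc interface shown earlier. The host number (host0) below is an assumption; substitute whichever SCSI host your HBA registers as:

# On a 2.6 kernel, trigger a wildcard rescan of the host, then list what is visible:
echo "- - -" > /sys/class/scsi_host/host0/scan
cat /proc/scsi/scsi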
Adding/Removing a Logical Unit Through rescan-scsi-bus.sh
The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update the logical unit configuration of the host as needed (after a device has been added to the system). The rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more information about how to use this script, refer to rescan-scsi-bus.sh --help.
To install the sg3_utils package, run yum install sg3_utils.
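A few typical invocations, as a sketch (options vary between versions, so check rescan-scsi-bus.sh --help on yours):

rescan-scsi-bus.sh              # scan all SCSI hosts for new logical units
rescan-scsi-bus.sh --hosts=1    # limit the scan to SCSI host 1
rescan-scsi-bus.sh -r           # also remove logical units that have disappeared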
Known Issues With rescan-scsi-bus.sh
When using the rescan-scsi-bus.sh script, take note of the following known issues:
In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical unit. The script can only detect the first mapped logical unit if it is LUN0, and unless it detects the first mapped logical unit it will not be able to scan any other logical units, even if you use the --nooptscan option.
A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped for the first time (a workaround sketch follows this list). During the first scan, rescan-scsi-bus.sh only adds LUN0; all other logical units are added in the second scan.
A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing a change in logical unit size when the --remove option is used.
The rescan-scsi-bus.sh script does not recognize iSCSI logical unit removals.
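As a workaround for the race condition noted above, simply run the script back to back when logical units are mapped for the first time:

rescan-scsi-bus.sh && rescan-scsi-bus.sh   # the second pass picks up everything beyond LUN0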
Resources:
1) http://publib.boulder.ibm.com/infocenter/dsichelp/ds6000ic/index.jsp?topic=%2Fcom.ibm.storage.smric.help.doc%2Ff2c_linuxlunconfig_2hsaga.html
2) http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/html/Online_Storage_Reconfiguration_Guide/rescan-scsi-bus.html
Hope this will help.
Cheers!
Bhaskar