Using SANs and NAS


Both SAN and NAS technologies were developed to help organizations with existing investments in Intel-based servers cope with growing volumes of stored data. Data is the lifeblood of modern business, and modern data centers have extremely demanding requirements for size, speed, and reliability. NAS solutions run over TCP/IP networks using NFS, CIFS, or HTTP, while SAN solutions typically carry SCSI commands encapsulated in Fibre Channel.



NAS is the highest layer of storage and can be built on top of a SAN or DAS. Both SANs and NAS play vital roles in today's enterprises and provide many advantages, avoiding the time and disruption associated with server-attached storage upgrades. NAS is optimized for ease of management and file sharing, using lower-cost Ethernet networks.

An In-Depth Guide to the Differences Between SAN and NAS

A SAN is, at its core, a high-speed network that connects storage devices to servers. Traditionally, application servers had their own storage devices attached directly to them. SCSI is a standard for communication between servers and storage devices; ordinary hard disks, tape drives, and similar devices use SCSI. In the beginning, a server's storage needs were met by storage devices installed inside the server itself, and the server talked to those internal devices using SCSI.

This is very similar to how a normal desktop talks to its internal hard disk. The main advantage of using SCSI to connect devices to a server was its high throughput.


Although this architecture is sufficient for low-end requirements, it has a few limitations. The server can only access data on devices that are directly attached to it. If something happens to the server, access to the data fails, because the storage device is part of the server and is attached to it over SCSI. There is also a limit to the number of storage devices a server can access.

If the server needs more storage space, there may be no way to attach it, because the SCSI bus can accommodate only a finite number of devices. Also, a server using SCSI storage has to be near the storage device, because parallel SCSI, the normal implementation in most computers and servers, has distance limitations.

It can work up to 25 meters.

Low complexity, low investment, and simplicity of deployment led DAS to be adopted by many for ordinary requirements. The solution performed well, too, especially when used with faster media such as Fibre Channel. Internally, the storage device can use RAID (which is normally the case) or some other scheme to provide storage volumes to servers. Although DAS is good for ordinary needs and gives good performance, it has limitations, such as the number of servers that can access it.

The storage device in a DAS setup has to be near the server: in the same rack, or within the distance limits of the medium used. It can be argued that direct-attached storage (DAS) is faster than any other storage method.

This is because it does not involve the overhead of data transfer over a network; all data transfer occurs on a dedicated connection between the server and the storage device.

If you are asking about the differences between DAS, NAS, and SAN, you are in the data storage context. Many technologies exist in this area, and they share a primary common goal: the persistence and availability of your data.

Block devices and filesystems

Most storage devices share the same physical and logical structure. To locate the data you want, you need a way to identify where it resides; this is why hard disk drives have sectors, or simply "blocks", which in many cases reflect the layout of the data written onto the physical medium.

But addressing your data by sector number, while not very complex, is error prone: you have to keep track yourself of what data you wrote and which sectors you wrote it to.
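To make the bookkeeping problem concrete, here is a minimal sketch (in Python, with illustrative names) of a toy in-memory block device that can only be addressed by sector number:

```python
SECTOR_SIZE = 512  # bytes per sector, the classic disk block size

class ToyBlockDevice:
    """A block device simulated in memory: addressable only by sector number."""

    def __init__(self, num_sectors):
        self._data = bytearray(num_sectors * SECTOR_SIZE)
        self.num_sectors = num_sectors

    def write_block(self, lba, payload):
        # The device accepts exactly one sector at a time, padded to 512 bytes.
        if len(payload) > SECTOR_SIZE:
            raise ValueError("payload larger than one sector")
        if not 0 <= lba < self.num_sectors:
            raise ValueError("LBA out of range")
        padded = payload.ljust(SECTOR_SIZE, b"\x00")
        self._data[lba * SECTOR_SIZE:(lba + 1) * SECTOR_SIZE] = padded

    def read_block(self, lba):
        if not 0 <= lba < self.num_sectors:
            raise ValueError("LBA out of range")
        return bytes(self._data[lba * SECTOR_SIZE:(lba + 1) * SECTOR_SIZE])

disk = ToyBlockDevice(num_sectors=8)
disk.write_block(3, b"hello world")
# The device returns the whole sector; *we* must remember the data
# lives at LBA 3 and is 11 bytes long -- the device won't tell us.
sector = disk.read_block(3)
print(sector[:11])  # → b'hello world'
```

Note that the device hands back the whole 512-byte sector; the caller must remember on its own that LBA 3 holds 11 bytes of useful data. Recording exactly that kind of metadata (names, sizes, block maps) on the device itself is the job of a filesystem.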

This explains the difference between a block device and a filesystem: a filesystem must reside on a block device. Hard disk drives also need a well-defined physical interface and protocol so that your computer can talk to them; the most common one for PCs today is SATA (Serial ATA, or Serial Advanced Technology Attachment).

And unless you access your disk by block numbers, you need a filesystem on top of it to put it to good use.

NAS

But what if you could provide access to your filesystem to other computers for transferring files?
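That is the core idea of NAS: the filesystem stays on the server, and clients reach files over the network by name. Real NAS appliances speak NFS, CIFS/SMB, or HTTP; as a rough sketch of the pattern (not any real NAS protocol, and with all names illustrative), the snippet below shares a directory over HTTP using only the Python standard library:

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# A directory with one file in it -- our "shared filesystem".
share = tempfile.mkdtemp()
with open(os.path.join(share, "report.txt"), "w") as f:
    f.write("quarterly numbers")

# Serve the directory over HTTP on an ephemeral port.
handler = lambda *a, **kw: http.server.SimpleHTTPRequestHandler(
    *a, directory=share, **kw)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "client machine" fetches the file by name over the network,
# never touching the server's disk directly.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/report.txt") as resp:
    content = resp.read().decode()
print(content)  # → quarterly numbers
server.shutdown()
```

The client never sees sectors or block numbers; it asks for a path and gets back file content, which is exactly the level of abstraction NAS exposes.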

NAS can also handle unstructured data, such as audio, video, websites, text files, and Microsoft Office documents. NAS appliances can be outfitted with more or larger disks to expand storage capacity; this approach is referred to as scale-up NAS.

They can also be clustered together for scale-out storage. NAS enables POSIX (Portable Operating System Interface)-compliant file access, facilitating centrally managed security and file access and ensuring that multiple applications can share a scale-out NAS device without one application overwriting a file that another application is using.
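That overwrite protection rests on file locking. As a minimal sketch, assuming a Unix system (Python's fcntl module), here is how POSIX-style advisory locking stops a second program from grabbing a file that a first program is still updating; the two "applications" are simulated with two file descriptors on the same file:

```python
import fcntl
import tempfile

# A shared file, as it might appear on a NAS mount.
shared = tempfile.NamedTemporaryFile(delete=False)

# "Application A" opens the file and takes an exclusive lock.
a = open(shared.name, "r+b")
fcntl.flock(a, fcntl.LOCK_EX)

# "Application B" opens the same file and tries a non-blocking lock.
b = open(shared.name, "r+b")
try:
    fcntl.flock(b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_lock_succeeded = True
except BlockingIOError:
    # B is refused: it must wait until A releases the lock,
    # so it cannot overwrite A's in-progress update.
    second_lock_succeeded = False

fcntl.flock(a, fcntl.LOCK_UN)  # A finishes and releases the lock
print(second_lock_succeeded)  # → False
```

Real NAS deployments coordinate locks through the file-sharing protocol itself (for example, NFS lock management or SMB locking), since plain flock() has historically been unreliable over network filesystems.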

A NAS system can slow down even further if too many users overwhelm it with simultaneous requests. However, the use of flash storage in newer NAS systems, either in conjunction with HDDs or as an all-flash system, alleviates the speed problem.

Scalability issues can arise with NAS. Clustered, or scale-out, NAS was devised to mitigate that problem. Data integrity can become an issue, because file systems store metadata and file content across a logical or physical disk volume.

If the file server loses power, the system must perform a file system check, also called a fsck, to validate the state of the data.
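What a fsck does can be illustrated with a toy model: keep a piece of metadata (here just a checksum) for each file, then walk the volume and flag every record whose data no longer matches it. This is only a sketch with made-up names, not how any real filesystem lays out its metadata:

```python
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

# A toy "volume": each record stores content plus the metadata
# (here just a checksum) that a real filesystem keeps separately.
volume = {
    "a.txt": {"data": b"alpha", "sum": checksum(b"alpha")},
    "b.txt": {"data": b"beta",  "sum": checksum(b"beta")},
}

# Simulate a power loss mid-write: the data changed on disk,
# but the metadata was never updated to match.
volume["b.txt"]["data"] = b"be"

def toy_fsck(vol):
    """Walk every record and report the ones whose metadata disagrees."""
    return [name for name, rec in vol.items()
            if checksum(rec["data"]) != rec["sum"]]

print(toy_fsck(volume))  # → ['b.txt']
```

A real fsck must walk every inode and block map on the volume in this fashion, which is why the check takes longer as volumes grow.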

The delay involved in doing a fsck can be significant, depending on the NAS system. Rebuild times can take days, a situation that will only get worse as multiterabyte-capacity drives become more common.

SAN systems are highly scalable; capacity can be added as required.

Other reasons for deploying SANs include continuous availability and resilience. Highly available SANs are designed to have no single point of failure, starting with highly available SAN disk arrays and switches with redundant critical components and redundant connections to the SAN. The hardware for these systems is expensive, and building and managing them require specialized knowledge and skills.

SAN is far more complex than NAS, with dedicated cabling -- usually FC, but Ethernet can be used -- as well as dedicated switches and storage hardware. FC was developed specifically for storage because Ethernet was not reliable enough to transmit block data before advances were made to the protocol over the past decade.

Once the scale-up limit is reached, it's necessary to move to a higher-performance array or to add multiple arrays. An increasing number of SAN disk arrays are avoiding this problem by supporting horizontal scale-out where storage nodes are added that scale capacity and performance simultaneously.
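Why does adding a node to a scale-out array add performance as well as capacity? Because blocks are spread across all nodes, a new node takes over a share of the read/write traffic along with a share of the data. The following toy placement scheme (illustrative only; a simple hash-modulo, not what any particular array uses) shows the effect:

```python
import hashlib

def place(block_id, num_nodes):
    """Toy placement: a stable hash maps each block to exactly one node."""
    digest = hashlib.md5(str(block_id).encode()).hexdigest()
    return int(digest, 16) % num_nodes

def load_per_node(num_blocks, num_nodes):
    """Count how many blocks (and hence how much I/O) each node owns."""
    counts = [0] * num_nodes
    for block_id in range(num_blocks):
        counts[place(block_id, num_nodes)] += 1
    return counts

# With 3 nodes, roughly a third of all blocks -- and a third of the
# read/write traffic -- lands on each node.
print(load_per_node(9000, 3))

# Add a fourth node: total capacity grows, and each node now serves
# only about a quarter of the traffic, so performance scales too.
print(load_per_node(9000, 4))
```

Real scale-out systems typically use consistent hashing or similar schemes so that adding a node moves only about 1/N of the existing blocks; the naive modulo above would reshuffle most of them.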

The simplest example of DAS is a computer's hard drive. To access files on DAS, a user must have access to the physical storage. However, with DAS, the storage on each device must be managed separately, adding a layer of complexity to the management of the system.


DAS systems generally don't offer advanced storage management features, such as replication, snapshots and thin provisioning, that are common in SAN and NAS.

DAS also does not enable shared storage among multiple users. A NAS system or device, by contrast, is attached to a network via a standard Ethernet connection, so it appears to users like any other network-connected device.

Fibre Channel can also run over lower-cost copper cabling, though only for distances up to about 30 meters.

Think of it as an array of disks, probably at some RAID level. Using SANs, for instance, is a way to share multiple devices (tape drives and disk drives) for storage, while NAS is a means of centrally storing files so they can be shared.
