Building a fileserver

From JaxHax

This guide assumes you already have a Debian system built and on the network, with several disks attached. Most of the content here translates readily to other flavors of Linux.

Disks

Generally in a file server you will be dealing with spinning hard disk drives (HDDs). SATA, SCSI, and SAS disks are designated with the prefix 'sd' followed by a letter 'a'-'z': for example sda, sdb, sdc. PATA (parallel ATA) disks are designated with 'hd' followed by a letter; these are rare nowadays.

Viewing Disks using fdisk:

# fdisk -l
...
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdf: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...

Partitions

While you can build a RAID using raw disks, it is generally unwise to do so. You generally want to use a partition; in this case we use partition type 'fd' (Linux raid autodetect).

Building a RAID partition on a blank disk:

# fdisk /dev/sdb
 
Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
 
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xbae456f2.
 
Command (m for help): p
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbae456f2
 
 
 
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-3907029167, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-3907029167, default 3907029167): 
 
Created a new partition 1 of type 'Linux' and of size 1.8 TiB.
 
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
 
Command (m for help): p
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbae456f2
 
Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
 
 
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Repeat the partitioning above for each remaining disk (sdc through sdf). Now that we have our partitions, we can move on:

# fdisk -l | grep sd.1
...
/dev/sdb1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
/dev/sdc1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
/dev/sdd1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
/dev/sde1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
/dev/sdf1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
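The interactive fdisk walkthrough above only covered /dev/sdb; the same layout can be applied to the remaining disks non-interactively. A minimal sketch using sfdisk (disk names are from this example system; adjust to yours) — as written it only prints the commands, so remove the outer echo to actually run them:

```shell
# Dry-run: print one sfdisk command per disk. Each command, if run,
# would create a single full-disk partition of type 'fd'
# (Linux raid autodetect). Remove the outer echo to execute for real.
for d in sdc sdd sde sdf; do
    echo "echo 'type=fd' | sfdisk /dev/$d"
done
```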

Raid

RAID originally stood for Redundant Array of Inexpensive Disks, and is now commonly referred to as Redundant Array of Independent Disks. The type of RAID we will be discussing here is Linux software RAID, managed by mdadm.

There are many types of RAID:

RAID 0 - Striping
Pros: Most space (n disks' worth), fastest access for both reads and writes
Cons: No redundancy; a single disk failure loses all data
RAID 1 - Mirror
Pros: Redundancy without parity math overhead (all disks but one can fail), fast reads with good controllers
Cons: Space usage (usable capacity is only one disk's worth, no matter how many disks you mirror)
RAID 5
Pros: Space (n - 1), speed (reads are about as fast as RAID 0), a single drive can fail without taking out the array
Cons: On large arrays, the stress of a rebuild can fail another drive and kill the array; parity calculation adds CPU overhead during writes; the write hole can destroy data if hit at the same time as a power outage
RAID 6
Pros: Space (n - 2), speed (reads are about as fast as RAID 0), two drives can fail without taking out the array; safer than RAID 5
Cons: Parity calculation adds CPU overhead during writes; the write hole can destroy data if hit at the same time as a power outage

You can combine the above levels (e.g. RAID 10, RAID 60) to build arrays that are more redundant or faster.
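The space figures above are easy to sanity-check with shell arithmetic; a quick sketch for the five 2 TB drives used in this article:

```shell
# Usable capacity per RAID level for n equal-size disks (sketch).
n=5; size_tb=2
raid0=$(( n * size_tb ))          # striping: every disk holds data
raid5=$(( (n - 1) * size_tb ))    # one disk's worth of parity
raid6=$(( (n - 2) * size_tb ))    # two disks' worth of parity
echo "RAID0=${raid0}TB RAID1=${size_tb}TB RAID5=${raid5}TB RAID6=${raid6}TB"
# prints RAID0=10TB RAID1=2TB RAID5=8TB RAID6=6TB
```

The 6 TB RAID 6 figure matches the Array Size mdadm reports for this array (6000.79 GB).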

Example of building a simple RAID 6 array named md0 using the devices from the partition section. You will need the package mdadm.

Notice that the array has to build (resync); this is normal and will take time. It is generally wise to let it finish building before you start using it heavily.
# mdadm --create --level=6 --raid-devices=5 /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Mar 18 12:55:03 2015
     Raid Level : raid6
     Array Size : 5860147200 (5588.67 GiB 6000.79 GB)
  Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent
 
  Intent Bitmap : Internal
 
    Update Time : Wed Mar 18 12:55:08 2015
          State : clean, resyncing 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
  Resync Status : 0% complete
 
           Name : icarus:0  (local to host icarus)
           UUID : fbca31c9:0f830f0d:9d2fcd25:1e7f1e95
         Events : 2
 
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
 
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid6 sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      5860147200 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  resync =  0.0% (150816/1953382400) finish=8193.0min speed=3972K/sec
      bitmap: 15/15 pages [60KB], 65536KB chunk
 
unused devices: <none>
 
## and when complete:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid6 sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      5860147200 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 15/15 pages [60KB], 65536KB chunk
 
unused devices: <none>
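To have the array assembled automatically at boot, Debian typically wants it recorded in /etc/mdadm/mdadm.conf; `mdadm --detail --scan` prints a suitable line, and `update-initramfs -u` refreshes the initramfs afterwards. For the array above the entry would look roughly like this (a sketch based on the UUID shown by --detail):

```
ARRAY /dev/md0 metadata=1.2 name=icarus:0 UUID=fbca31c9:0f830f0d:9d2fcd25:1e7f1e95
```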

Filesystems

You can put most types of filesystems and file structures on top of your raid. We will be discussing BTRFS, XFS and EXT3/4.

BTRFS

# mkfs.btrfs /dev/md0
Btrfs v3.17
See http://btrfs.wiki.kernel.org for more information.
 
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label (null) on /dev/md0
	nodesize 16384 leafsize 16384 sectorsize 4096 size 5.46TiB
# mount /dev/md0 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        5.5T  512K  5.5T   1% /mnt
# btrfs filesystem df /mnt
Data, single: total=8.00MiB, used=256.00KiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=1.00GiB, used=112.00KiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=16.00MiB, used=0.00B

XFS

You will need the package: xfsprogs

# mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=33, agsize=45782272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1465036800, imaxpct=5
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/md0 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        5.5T   34M  5.5T   1% /mnt

EXT

EXT3 - This took an extremely long time to run
# mkfs.ext3 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1465036800 4k blocks and 183132160 inodes
Filesystem UUID: 63570363-908e-4cc3-8226-f125456efc92
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848, 512000000, 550731776, 644972544
 
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done  
# mount /dev/md0 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        5.5T   59M  5.2T   1% /mnt
EXT4
# mkfs.ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
/dev/md0 contains a ext3 file system
	last mounted on Wed Mar 18 15:58:34 2015
Proceed anyway? (y,n) ^C
root@icarus:~# wipefs -a /dev/md0
/dev/md0: 2 bytes were erased at offset 0x00000438 (ext3): 53 ef
root@icarus:~# mkfs.ext4 /dev/md0
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1465036800 4k blocks and 183132160 inodes
Filesystem UUID: cd650454-469d-4255-86ab-3a3e89ef2a3a
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
	102400000, 214990848, 512000000, 550731776, 644972544
 
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done       
 
# mount /dev/md0 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        5.5T   58M  5.2T   1% /mnt
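To mount the filesystem automatically at boot, add it to /etc/fstab; a sketch using the ext4 UUID reported by mkfs above (`nofail` keeps boot from hanging if the array is unavailable):

```
# /etc/fstab
UUID=cd650454-469d-4255-86ab-3a3e89ef2a3a  /mnt  ext4  defaults,nofail  0  2
```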

Services

Now that we have a filesystem to store files on, we need to put it to use. There are many services we can deploy on top of it. The ones we will discuss here are NFS (Network File System), Samba (CIFS/Windows file sharing), and FTP (File Transfer Protocol).

NFS

Network File System - this allows a client to mount a filesystem exported by the server over the network. Clients are available for Windows, Mac, and Linux.

NFS has three major in-use versions: 2, 3, and 4. In this demonstration we use version 3 because its security model is easier to deal with.

https://wiki.debian.org/NFSServerSetup

packages used: nfs-kernel-server portmap

Options generally used in exports:
rw/ro - read-write / read-only
sync/async - in synchronous mode the server replies only after data is committed to stable storage (safer, slower); in asynchronous mode it replies as soon as it has handled the request, before data reaches disk (faster, but risks data loss if the server crashes). sync is the default in current nfs-utils.
no_root_squash - allows remote root users to retain root privileges on this filesystem (dangerous)
no_subtree_check - subtree checking verifies that requested files lie within the exported part of the filesystem; disabling it is pretty common (clients get the entire exported branch of the tree anyway) and allows for faster access

Example /etc/exports:

# /etc/exports: the access control list for filesystems which may be exported
#		to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/mnt *(ro,async,no_subtree_check)
/mnt/foo 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
 
# exportfs -av
exporting 10.0.0.0/24:/mnt/foo
exporting *:/mnt
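On a client (with the package nfs-common installed), the read-write export above can then be mounted with `mount -t nfs server:/mnt/foo /mnt/foo`, or persistently via /etc/fstab; a sketch, where 10.0.0.5 is a placeholder for your server's address:

```
# client-side /etc/fstab line (10.0.0.5 is a hypothetical server address)
10.0.0.5:/mnt/foo  /mnt/foo  nfs  rw,hard,vers=3  0  0
```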

SAMBA

Samba allows you to serve files over CIFS (Common Internet File System) to Windows, Mac, and Linux clients. (This is native Windows file sharing.)

https://wiki.debian.org/SambaServerSimple http://www.tldp.org/HOWTO/SMB-HOWTO-8.html

packages used: samba samba-client

The example config below (/etc/samba/smb.conf) allows you to:
log in as a guest user to two specific directories (public and public2)
log in as a user on the system (see password creation in the next block)
[global]
   workgroup = workgroupname
   dns proxy = no
   log file = /var/log/samba/log.%m
   max log size = 1000
   syslog = 0
   panic action = /usr/share/samba/panic-action %d
   server role = standalone server
   passdb backend = tdbsam
   obey pam restrictions = yes
   unix password sync = yes
   passwd program = /usr/bin/passwd %u
   passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
   pam password change = yes
   map to guest = bad user
   guest account = public
   usershare allow guests = yes
[homes]
   comment = Home Directories
   browseable = no
   read only = yes
   create mask = 0700
   directory mask = 0700
   valid users = %S
[public]
comment = Public Share
path = /mnt/public
guest ok = yes
read only = no
browseable = yes
[public2]
comment = Public2 Share
path = /mnt/public2
guest ok = yes
read only = no
browseable = yes

Next you need to add a user to Samba (the user must already exist in /etc/passwd):

# smbpasswd -a dan
New SMB password:
Retype new SMB password:
Added user dan.

Then view/mount your share:

# smbclient -U guest //10.10.10.10/public
Enter guest's password:  (garbage password)
Domain=[workgroupname] OS=[Unix] Server=[Samba 4.1.13-Debian]
smb: \> ls
... (filelisting) ...
 
# mount -t cifs //10.10.10.10/public /mnt -o username=guest
# df -h /mnt
Filesystem         Size  Used Avail Use% Mounted on
//10.10.10.10/public   22T   18T  4.3T  81% /mnt

FTP