	      Overview of the Linux Virtual File System

	Original author: Richard Gooch <rgooch@atnf.csiro.au>

		  Last updated on October 28, 2005

  Copyright (C) 1999 Richard Gooch
  Copyright (C) 2005 Pekka Enberg

  This file is released under the GPLv2.


Introduction
============

The Virtual File System (also known as the Virtual Filesystem Switch)
is the software layer in the kernel that provides the filesystem
interface to userspace programs. It also provides an abstraction
within the kernel which allows different filesystem implementations to
coexist.

VFS system calls open(2), stat(2), read(2), write(2), chmod(2) and so
on are called from a process context. Filesystem locking is described
in the document Documentation/filesystems/Locking.


Directory Entry Cache (dcache)
------------------------------

The VFS implements the open(2), stat(2), chmod(2), and similar system
calls. The pathname argument that is passed to them is used by the VFS
to search through the directory entry cache (also known as the dentry
cache or dcache). This provides a very fast look-up mechanism to
translate a pathname (filename) into a specific dentry. Dentries live
in RAM and are never saved to disc: they exist only for performance.

The dentry cache is meant to be a view into your entire filespace. As
most computers cannot fit all dentries in RAM at the same time, some
bits of the cache are missing. In order to resolve your pathname into
a dentry, the VFS may have to resort to creating dentries along the
way, and then loading the inode for each of them. How an inode is
looked up is described in the next section.


The Inode Object
----------------

An individual dentry usually has a pointer to an inode. Inodes are
filesystem objects such as regular files, directories, FIFOs and other
beasts.  They live either on the disc (for block device filesystems)
or in the memory (for pseudo filesystems). Inodes that live on the
disc are copied into the memory when required and changes to the inode
are written back to disc. A single inode can be pointed to by multiple
dentries (hard links, for example, do this).

To look up an inode requires that the VFS calls the lookup() method of
the parent directory inode. This method is installed by the specific
filesystem implementation that the inode lives in. Once the VFS has
the required dentry (and hence the inode), we can do all those boring
things like open(2) the file, or stat(2) it to peek at the inode
data. The stat(2) operation is fairly simple: once the VFS has the
dentry, it peeks at the inode data and passes some of it back to
userspace.


The File Object
---------------

Opening a file requires another operation: allocation of a file
structure (this is the kernel-side implementation of file
descriptors). The freshly allocated file structure is initialized with
a pointer to the dentry and a set of file operation member functions.
These are taken from the inode data. The open() file method is then
called so the specific filesystem implementation can do its work. You
can see that this is another switch performed by the VFS. The file
structure is placed into the file descriptor table for the process.

Reading, writing and closing files (and other assorted VFS operations)
is done by using the userspace file descriptor to grab the appropriate
file structure, and then calling the required file structure method to
do whatever is required. For as long as the file is open, it keeps the
dentry in use, which in turn means that the VFS inode is still in use.


Registering and Mounting a Filesystem
=====================================

To register and unregister a filesystem, use the following API
functions:

   #include <linux/fs.h>

   extern int register_filesystem(struct file_system_type *);
   extern int unregister_filesystem(struct file_system_type *);

The passed struct file_system_type describes your filesystem. When a
request is made to mount a device onto a directory in your filespace,
the VFS will call the appropriate get_sb() method for the specific
filesystem. The dentry for the mount point will then be updated to
point to the root inode for the new filesystem.

You can see all filesystems that are registered to the kernel in the
file /proc/filesystems.


struct file_system_type
-----------------------

This describes the filesystem. As of kernel 2.6.13, the following
members are defined:

struct file_system_type {
	const char *name;
	int fs_flags;
	struct super_block *(*get_sb) (struct file_system_type *, int,
				       const char *, void *);
	void (*kill_sb) (struct super_block *);
	struct module *owner;
	struct file_system_type * next;
	struct list_head fs_supers;
};

  name: the name of the filesystem type, such as "ext2", "iso9660",
	"msdos" and so on

  fs_flags: various flags (e.g. FS_REQUIRES_DEV, FS_NO_DCACHE, etc.)

  get_sb: the method to call when a new instance of this
	filesystem should be mounted

  kill_sb: the method to call when an instance of this filesystem
	should be unmounted

  owner: for internal VFS use: you should initialize this to THIS_MODULE in
  	most cases.

  next: for internal VFS use: you should initialize this to NULL
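
As an illustration, a minimal module that registers and unregisters a
filesystem type might look as follows. The "myfs" names are
hypothetical; kill_block_super() is the stock helper for unmounting a
block-device-backed filesystem, and myfs_get_sb() is sketched in the
next section:

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/init.h>

static struct super_block *myfs_get_sb(struct file_system_type *fs_type,
				       int flags, const char *dev_name,
				       void *data);

static struct file_system_type myfs_type = {
	.owner		= THIS_MODULE,
	.name		= "myfs",
	.get_sb		= myfs_get_sb,
	.kill_sb	= kill_block_super,
	.fs_flags	= FS_REQUIRES_DEV,
};

static int __init myfs_init(void)
{
	return register_filesystem(&myfs_type);
}

static void __exit myfs_exit(void)
{
	unregister_filesystem(&myfs_type);
}

module_init(myfs_init);
module_exit(myfs_exit);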

The get_sb() method has the following arguments:

  struct file_system_type *fs_type: describes the filesystem, partly
	initialized by the specific filesystem code

  int flags: mount flags

  const char *dev_name: the device name we are mounting

  void *data: arbitrary mount options, usually comes as an ASCII
	string

The get_sb() method must determine if the block device specified
in dev_name contains a filesystem of the type the method
supports. On success the method returns the superblock pointer, on
failure it returns an error pointer (ERR_PTR).

The most interesting member of the superblock structure that the
get_sb() method fills in is the "s_op" field. This is a pointer to
a "struct super_operations" which describes the next level of the
filesystem implementation.

Usually, a filesystem uses one of the generic get_sb()
implementations and provides a fill_super() method instead. The
generic methods are:

  get_sb_bdev: mount a filesystem residing on a block device

  get_sb_nodev: mount a filesystem that is not backed by a device

  get_sb_single: mount a filesystem which shares the instance between
  	all mounts

A fill_super() method implementation has the following arguments:

  struct super_block *sb: the superblock structure. The method fill_super()
  	must initialize this properly.

  void *data: arbitrary mount options, usually comes as an ASCII
	string

  int silent: whether or not to be silent on error
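
As a hedged sketch, a get_sb() implementation for a filesystem that
lives on a block device would typically wrap get_sb_bdev() and supply
its own fill_super(). All myfs_* names, the magic number, and
MYFS_ROOT_INO are hypothetical:

static int myfs_fill_super(struct super_block *sb, void *data, int silent)
{
	sb->s_blocksize = 1024;		/* illustrative values */
	sb->s_blocksize_bits = 10;
	sb->s_magic = 0x4d594653;	/* made-up magic number */
	sb->s_op = &myfs_super_ops;	/* assumed defined elsewhere */

	/* a real implementation would read and validate the on-disk
	   superblock here before setting up the root dentry */
	sb->s_root = d_alloc_root(iget(sb, MYFS_ROOT_INO));
	if (!sb->s_root)
		return -ENOMEM;
	return 0;
}

static struct super_block *myfs_get_sb(struct file_system_type *fs_type,
				       int flags, const char *dev_name,
				       void *data)
{
	return get_sb_bdev(fs_type, flags, dev_name, data, myfs_fill_super);
}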


The Superblock Object
=====================

A superblock object represents a mounted filesystem.


struct super_operations
-----------------------

This describes how the VFS can manipulate the superblock of your
filesystem. As of kernel 2.6.13, the following members are defined:

struct super_operations {
        struct inode *(*alloc_inode)(struct super_block *sb);
        void (*destroy_inode)(struct inode *);

        void (*read_inode) (struct inode *);

        void (*dirty_inode) (struct inode *);
        int (*write_inode) (struct inode *, int);
        void (*put_inode) (struct inode *);
        void (*drop_inode) (struct inode *);
        void (*delete_inode) (struct inode *);
        void (*put_super) (struct super_block *);
        void (*write_super) (struct super_block *);
        int (*sync_fs)(struct super_block *sb, int wait);
        void (*write_super_lockfs) (struct super_block *);
        void (*unlockfs) (struct super_block *);
        int (*statfs) (struct super_block *, struct kstatfs *);
        int (*remount_fs) (struct super_block *, int *, char *);
        void (*clear_inode) (struct inode *);
        void (*umount_begin) (struct super_block *);

        void (*sync_inodes) (struct super_block *sb,
                                struct writeback_control *wbc);
        int (*show_options)(struct seq_file *, struct vfsmount *);

        ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
        ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
};

All methods are called without any locks being held, unless otherwise
noted. This means that most methods can block safely. All methods are
only called from a process context (i.e. not from an interrupt handler
or bottom half).

  alloc_inode: this method is called by alloc_inode() to allocate memory
 	for struct inode and initialize it.

  destroy_inode: this method is called by destroy_inode() to release
  	resources allocated for struct inode.

  read_inode: this method is called to read a specific inode from the
        mounted filesystem.  The i_ino member in the struct inode is
	initialized by the VFS to indicate which inode to read. Other
	members are filled in by this method.

	You can set this to NULL and use iget5_locked() instead of iget()
	to read inodes.  This is necessary for filesystems for which the
	inode number is not sufficient to identify an inode.

  dirty_inode: this method is called by the VFS to mark an inode dirty.

  write_inode: this method is called when the VFS needs to write an
	inode to disc.  The second parameter indicates whether the write
	should be synchronous or not; not all filesystems check this flag.

  put_inode: called when the VFS inode is removed from the inode
	cache.

  drop_inode: called when the last access to the inode is dropped,
	with the inode_lock spinlock held.

	This method should be either NULL (normal UNIX filesystem
	semantics) or "generic_delete_inode" (for filesystems that do not
	want to cache inodes - causing "delete_inode" to always be
	called regardless of the value of i_nlink)

	The "generic_delete_inode()" behavior is equivalent to the
	old practice of using "force_delete" in the put_inode() case,
	but does not have the races that the "force_delete()" approach
	had. 

  delete_inode: called when the VFS wants to delete an inode

  put_super: called when the VFS wishes to free the superblock
	(i.e. unmount). This is called with the superblock lock held

  write_super: called when the VFS superblock needs to be written to
	disc. This method is optional

  sync_fs: called when VFS is writing out all dirty data associated with
  	a superblock. The second parameter indicates whether the method
	should wait until the write out has been completed. Optional.

  write_super_lockfs: called when VFS is locking a filesystem and
  	forcing it into a consistent state.  This method is currently
  	used by the Logical Volume Manager (LVM).

  unlockfs: called when VFS is unlocking a filesystem and making it writable
  	again.

  statfs: called when the VFS needs to get filesystem statistics. This
	is called with the kernel lock held

  remount_fs: called when the filesystem is remounted. This is called
	with the kernel lock held

  clear_inode: called when the VFS clears the inode. Optional

  umount_begin: called when the VFS is unmounting a filesystem.

  sync_inodes: called when the VFS is writing out dirty data associated with
  	a superblock.

  show_options: called by the VFS to show mount options for /proc/<pid>/mounts.

  quota_read: called by the VFS to read from the filesystem quota file.

  quota_write: called by the VFS to write to the filesystem quota file.
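
As a minimal sketch, a simple disk filesystem might fill in only the
super operations it needs and leave the rest NULL. All myfs_* methods
are hypothetical and assumed to be implemented elsewhere:

static struct super_operations myfs_super_ops = {
	.alloc_inode	= myfs_alloc_inode,
	.destroy_inode	= myfs_destroy_inode,
	.read_inode	= myfs_read_inode,
	.write_inode	= myfs_write_inode,
	.put_super	= myfs_put_super,
	.statfs		= myfs_statfs,
};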

The read_inode() method is responsible for filling in the "i_op"
field. This is a pointer to a "struct inode_operations" which
describes the methods that can be performed on individual inodes.


The Inode Object
================

An inode object represents an object within the filesystem.


struct inode_operations
-----------------------

This describes how the VFS can manipulate an inode in your
filesystem. As of kernel 2.6.13, the following members are defined:

struct inode_operations {
	int (*create) (struct inode *,struct dentry *,int, struct nameidata *);
	struct dentry * (*lookup) (struct inode *,struct dentry *, struct nameidata *);
	int (*link) (struct dentry *,struct inode *,struct dentry *);
	int (*unlink) (struct inode *,struct dentry *);
	int (*symlink) (struct inode *,struct dentry *,const char *);
	int (*mkdir) (struct inode *,struct dentry *,int);
	int (*rmdir) (struct inode *,struct dentry *);
	int (*mknod) (struct inode *,struct dentry *,int,dev_t);
	int (*rename) (struct inode *, struct dentry *,
			struct inode *, struct dentry *);
	int (*readlink) (struct dentry *, char __user *,int);
        void * (*follow_link) (struct dentry *, struct nameidata *);
        void (*put_link) (struct dentry *, struct nameidata *, void *);
	void (*truncate) (struct inode *);
	int (*permission) (struct inode *, int, struct nameidata *);
	int (*setattr) (struct dentry *, struct iattr *);
	int (*getattr) (struct vfsmount *mnt, struct dentry *, struct kstat *);
	int (*setxattr) (struct dentry *, const char *,const void *,size_t,int);
	ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
	ssize_t (*listxattr) (struct dentry *, char *, size_t);
	int (*removexattr) (struct dentry *, const char *);
};

Again, all methods are called without any locks being held, unless
otherwise noted.

  create: called by the open(2) and creat(2) system calls. Only
	required if you want to support regular files. The dentry you
	get should not have an inode (i.e. it should be a negative
	dentry). Here you will probably call d_instantiate() with the
	dentry and the newly created inode

  lookup: called when the VFS needs to look up an inode in a parent
	directory. The name to look for is found in the dentry. This
	method must call d_add() to insert the found inode into the
	dentry. The "i_count" field in the inode structure should be
	incremented. If the named inode does not exist a NULL inode
	should be inserted into the dentry (this is called a negative
	dentry). Returning an error code from this routine must only
	be done on a real error, otherwise creating inodes with system
	calls like creat(2), mknod(2), mkdir(2) and so on will fail.
	If you wish to overload the dentry methods then you should
	initialise the "d_op" field in the dentry; this is a pointer
	to a struct "dentry_operations".
	This method is called with the directory inode semaphore held.
	A sketch of a typical implementation appears after this list.

  link: called by the link(2) system call. Only required if you want
	to support hard links. You will probably need to call
	d_instantiate() just as you would in the create() method

  unlink: called by the unlink(2) system call. Only required if you
	want to support deleting inodes

  symlink: called by the symlink(2) system call. Only required if you
	want to support symlinks. You will probably need to call
	d_instantiate() just as you would in the create() method

  mkdir: called by the mkdir(2) system call. Only required if you want
	to support creating subdirectories. You will probably need to
	call d_instantiate() just as you would in the create() method

  rmdir: called by the rmdir(2) system call. Only required if you want
	to support deleting subdirectories

  mknod: called by the mknod(2) system call to create a device (char,
	block) inode or a named pipe (FIFO) or socket. Only required
	if you want to support creating these types of inodes. You
	will probably need to call d_instantiate() just as you would
	in the create() method

  rename: called by the rename(2) system call to rename the object to
	have the parent and name given by the second inode and dentry.

  readlink: called by the readlink(2) system call. Only required if
	you want to support reading symbolic links

  follow_link: called by the VFS to follow a symbolic link to the
	inode it points to.  Only required if you want to support
	symbolic links.  This method returns a void pointer cookie
	that is passed to put_link().

  put_link: called by the VFS to release resources allocated by
  	follow_link().  The cookie returned by follow_link() is passed
  	to this method as the last parameter.  It is used by
  	filesystems such as NFS where the page cache is not stable
  	(i.e. a page that was installed when the symbolic link walk
  	started might not be in the page cache at the end of the
  	walk).

  truncate: called by the VFS to change the size of a file.  The
 	i_size field of the inode is set to the desired size by the
 	VFS before this method is called.  This method is called by
 	the truncate(2) system call and related functionality.

  permission: called by the VFS to check for access rights on a POSIX-like
  	filesystem.

  setattr: called by the VFS to set attributes for a file. This method
  	is called by chmod(2) and related system calls.

  getattr: called by the VFS to get attributes of a file. This method
  	is called by stat(2) and related system calls.

  setxattr: called by the VFS to set an extended attribute for a file.
  	An extended attribute is a name:value pair associated with an
  	inode. This method is called by the setxattr(2) system call.

  getxattr: called by the VFS to retrieve the value of an extended
  	attribute name. This method is called by the getxattr(2) system
  	call.

  listxattr: called by the VFS to list all extended attributes for a
  	given file. This method is called by the listxattr(2) system call.

  removexattr: called by the VFS to remove an extended attribute from
  	a file. This method is called by the removexattr(2) system call.
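
As referenced in the lookup entry above, here is a hedged sketch of a
typical lookup() method for a disk filesystem. myfs_find_entry() is a
hypothetical directory search; the iget() and d_add() usage follows
the description above:

static struct dentry *myfs_lookup(struct inode *dir, struct dentry *dentry,
				  struct nameidata *nd)
{
	struct inode *inode = NULL;
	unsigned long ino;

	/* hypothetical helper: returns the inode number for the name,
	   or 0 if the name does not exist in the directory */
	ino = myfs_find_entry(dir, &dentry->d_name);
	if (ino) {
		inode = iget(dir->i_sb, ino);	/* invokes read_inode() */
		if (!inode)
			return ERR_PTR(-EACCES);
	}
	/* a NULL inode here creates a negative dentry; that is not
	   an error */
	d_add(dentry, inode);
	return NULL;
}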


The Address Space Object
========================

The address space object is used to identify pages in the page cache.


struct address_space_operations
-------------------------------

This describes how the VFS can manipulate the mapping of a file to the page cache in
your filesystem. As of kernel 2.6.13, the following members are defined:

struct address_space_operations {
	int (*writepage)(struct page *page, struct writeback_control *wbc);
	int (*readpage)(struct file *, struct page *);
	int (*sync_page)(struct page *);
	int (*writepages)(struct address_space *, struct writeback_control *);
	int (*set_page_dirty)(struct page *page);
	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);
	int (*prepare_write)(struct file *, struct page *, unsigned, unsigned);
	int (*commit_write)(struct file *, struct page *, unsigned, unsigned);
	sector_t (*bmap)(struct address_space *, sector_t);
	int (*invalidatepage) (struct page *, unsigned long);
	int (*releasepage) (struct page *, int);
	ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
			loff_t offset, unsigned long nr_segs);
	struct page* (*get_xip_page)(struct address_space *, sector_t,
			int);
};

  writepage: called by the VM to write a dirty page to backing store.

  readpage: called by the VM to read a page from backing store.

  sync_page: called by the VM to notify the backing store to perform all
  	queued I/O operations for a page. I/O operations for other pages
	associated with this address_space object may also be performed.

  writepages: called by the VM to write out pages associated with the
  	address_space object.

  set_page_dirty: called by the VM to set a page dirty.

  readpages: called by the VM to read pages associated with the address_space
  	object.

  prepare_write: called by the generic write path in the VM to set up a
  	write request for a page.

  commit_write: called by the generic write path in the VM to write a
  	page to its backing store.

  bmap: called by the VFS to map a logical block offset within an object
  	to a physical block number. This method is used by the legacy
	FIBMAP ioctl. Other uses are discouraged.

  invalidatepage: called by the VM on truncate to disassociate a page from its
  	address_space mapping.

  releasepage: called by the VFS to release filesystem specific metadata from
  	a page.

  direct_IO: called by the VM for direct I/O writes and reads.

  get_xip_page: called by the VM to translate a block number to a page.
	The page is valid until the corresponding filesystem is unmounted.
	Filesystems that want to use execute-in-place (XIP) need to implement
	it.  An example implementation can be found in fs/ext2/xip.c.
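
As a hedged sketch, a block-device filesystem can often build its
address_space_operations from the generic buffer-head helpers,
supplying only a get_block() callback. myfs_get_block() is a
hypothetical name:

/* hypothetical: maps a block within the file to a block on the disc */
static int myfs_get_block(struct inode *inode, sector_t iblock,
			  struct buffer_head *bh_result, int create);

static int myfs_readpage(struct file *file, struct page *page)
{
	return block_read_full_page(page, myfs_get_block);
}

static int myfs_writepage(struct page *page, struct writeback_control *wbc)
{
	return block_write_full_page(page, myfs_get_block, wbc);
}

static int myfs_prepare_write(struct file *file, struct page *page,
			      unsigned from, unsigned to)
{
	return block_prepare_write(page, from, to, myfs_get_block);
}

static struct address_space_operations myfs_aops = {
	.readpage	= myfs_readpage,
	.writepage	= myfs_writepage,
	.sync_page	= block_sync_page,
	.prepare_write	= myfs_prepare_write,
	.commit_write	= generic_commit_write,
};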


The File Object
===============

A file object represents a file opened by a process.


struct file_operations
----------------------

This describes how the VFS can manipulate an open file. As of kernel
2.6.13, the following members are defined:

struct file_operations {
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*aio_read) (struct kiocb *, char __user *, size_t, loff_t);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	ssize_t (*aio_write) (struct kiocb *, const char __user *, size_t, loff_t);
	int (*readdir) (struct file *, void *, filldir_t);
	unsigned int (*poll) (struct file *, struct poll_table_struct *);
	int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
	int (*mmap) (struct file *, struct vm_area_struct *);
	int (*open) (struct inode *, struct file *);
	int (*flush) (struct file *);
	int (*release) (struct inode *, struct file *);
	int (*fsync) (struct file *, struct dentry *, int datasync);
	int (*aio_fsync) (struct kiocb *, int datasync);
	int (*fasync) (int, struct file *, int);
	int (*lock) (struct file *, int, struct file_lock *);
	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*writev) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*sendfile) (struct file *, loff_t *, size_t, read_actor_t, void *);
	ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
	unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
	int (*check_flags)(int);
	int (*dir_notify)(struct file *filp, unsigned long arg);
	int (*flock) (struct file *, int, struct file_lock *);
};

Again, all methods are called without any locks being held, unless
otherwise noted.

  llseek: called when the VFS needs to move the file position index

  read: called by read(2) and related system calls

  aio_read: called by io_submit(2) and other asynchronous I/O operations

  write: called by write(2) and related system calls

  aio_write: called by io_submit(2) and other asynchronous I/O operations

  readdir: called when the VFS needs to read the directory contents

  poll: called by the VFS when a process wants to check if there is
	activity on this file and (optionally) go to sleep until there
	is activity. Called by the select(2) and poll(2) system calls

  ioctl: called by the ioctl(2) system call

  unlocked_ioctl: called by the ioctl(2) system call. Filesystems that do not
  	require the BKL should use this method instead of the ioctl() above.

  compat_ioctl: called by the ioctl(2) system call when 32 bit system calls
 	 are used on 64 bit kernels.

  mmap: called by the mmap(2) system call

  open: called by the VFS when an inode should be opened. When the VFS
	opens a file, it creates a new "struct file". It then calls the
	open method for the newly allocated file structure. You might
	think that the open method really belongs in
	"struct inode_operations", and you may be right. I think it's
	done the way it is because it makes filesystems simpler to
	implement. The open() method is a good place to initialize the
	"private_data" member in the file structure if you want to point
	to a device structure

  flush: called by the close(2) system call to flush a file

  release: called when the last reference to an open file is closed

  fsync: called by the fsync(2) system call

  fasync: called by the fcntl(2) system call when asynchronous
	(non-blocking) mode is enabled for a file

  lock: called by the fcntl(2) system call for F_GETLK, F_SETLK, and F_SETLKW
  	commands

  readv: called by the readv(2) system call

  writev: called by the writev(2) system call

  sendfile: called by the sendfile(2) system call

  get_unmapped_area: called by the mmap(2) system call

  check_flags: called by the fcntl(2) system call for F_SETFL command

  dir_notify: called by the fcntl(2) system call for F_NOTIFY command

  flock: called by the flock(2) system call
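
As an illustrative sketch, a disk filesystem can often use the generic
helpers for regular files rather than implementing these methods
itself (myfs_file_ops is a hypothetical name):

static struct file_operations myfs_file_ops = {
	.llseek		= generic_file_llseek,
	.read		= generic_file_read,
	.write		= generic_file_write,
	.mmap		= generic_file_mmap,
	.fsync		= file_fsync,
	.sendfile	= generic_file_sendfile,
};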

Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special
support routines in the VFS which will locate the required device
driver information. These support routines replace the filesystem file
operations with those for the device driver, and then proceed to call
the new open() method for the file. This is how opening a device file
in the filesystem eventually ends up calling the device driver open()
method.


Directory Entry Cache (dcache)
==============================


struct dentry_operations
------------------------

This describes how a filesystem can overload the standard dentry
operations. Dentries and the dcache are the domain of the VFS and the
individual filesystem implementations. Device drivers have no business
here. These methods may be set to NULL, as they are either optional or
the VFS uses a default. As of kernel 2.6.13, the following members are
defined:

struct dentry_operations {
	int (*d_revalidate)(struct dentry *, struct nameidata *);
	int (*d_hash) (struct dentry *, struct qstr *);
	int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
	int (*d_delete)(struct dentry *);
	void (*d_release)(struct dentry *);
	void (*d_iput)(struct dentry *, struct inode *);
};

  d_revalidate: called when the VFS needs to revalidate a dentry. This
	is called whenever a name look-up finds a dentry in the
	dcache. Most filesystems leave this as NULL, because all their
	dentries in the dcache are valid

  d_hash: called when the VFS adds a dentry to the hash table

  d_compare: called when a dentry should be compared with another

  d_delete: called when the last reference to a dentry is
	deleted. This means no-one is using the dentry; however, it is
	still valid and in the dcache

  d_release: called when a dentry is really deallocated

  d_iput: called when a dentry loses its inode (just prior to its
	being deallocated). The default when this is NULL is that the
	VFS calls iput(). If you define this method, you must call
	iput() yourself
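
For example, a case-insensitive filesystem might overload d_hash and
d_compare roughly as below. This is a hedged sketch (the myfs_* names
are hypothetical), loosely following what FAT-like filesystems do:

static int myfs_d_hash(struct dentry *dentry, struct qstr *q)
{
	unsigned long hash = init_name_hash();
	int i;

	for (i = 0; i < q->len; i++)
		hash = partial_name_hash(tolower(q->name[i]), hash);
	q->hash = end_name_hash(hash);
	return 0;
}

static int myfs_d_compare(struct dentry *dentry, struct qstr *a,
			  struct qstr *b)
{
	/* return 0 if the names are to be treated as equal */
	if (a->len != b->len)
		return 1;
	return strnicmp((const char *)a->name, (const char *)b->name,
			a->len) ? 1 : 0;
}

static struct dentry_operations myfs_dentry_ops = {
	.d_hash		= myfs_d_hash,
	.d_compare	= myfs_d_compare,
};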

Each dentry has a pointer to its parent dentry, as well as a hash list
of child dentries. Child dentries are basically like files in a
directory.


Directory Entry Cache API
-------------------------

There are a number of functions defined which permit a filesystem to
manipulate dentries:

  dget: open a new handle for an existing dentry (this just increments
	the usage count)

  dput: close a handle for a dentry (decrements the usage count). If
	the usage count drops to 0, the "d_delete" method is called
	and the dentry is placed on the unused list if the dentry is
	still in its parent's hash list. Putting the dentry on the
	unused list just means that if the system needs some RAM, it
	goes through the unused list of dentries and deallocates them.
	If the dentry has already been unhashed and the usage count
	drops to 0, the dentry is deallocated after the "d_delete"
	method is called

  d_drop: this unhashes a dentry from its parent's hash list. A
	subsequent call to dput() will deallocate the dentry if its
	usage count drops to 0

  d_delete: delete a dentry. If there are no other open references to
	the dentry then the dentry is turned into a negative dentry
	(the d_iput() method is called). If there are other
	references, then d_drop() is called instead

  d_add: add a dentry to its parent's hash list and then call
	d_instantiate()

  d_instantiate: add a dentry to the alias hash list for the inode and
	update the "d_inode" member. The "i_count" member in the
	inode structure should be set/incremented. If the inode
	pointer is NULL, the dentry is called a "negative
	dentry". This function is commonly called when an inode is
	created for an existing negative dentry

  d_lookup: look up a dentry given its parent and path name component.
	It looks up the child of that given name from the dcache
	hash table. If it is found, the reference count is incremented
	and the dentry is returned. The caller must use dput()
	to free the dentry when it finishes using it.
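
Putting the API together, here is a hedged sketch of looking up a
child dentry by name and releasing it again; the wrapper function and
the fixed name are purely illustrative:

static int example_child_is_positive(struct dentry *parent)
{
	struct qstr name;
	struct dentry *child;
	int positive = 0;

	name.name = (const unsigned char *)"example";
	name.len = 7;
	name.hash = full_name_hash(name.name, name.len);

	child = d_lookup(parent, &name);	/* takes a reference */
	if (child) {
		/* d_inode may be NULL for a negative dentry */
		positive = (child->d_inode != NULL);
		dput(child);			/* drop our reference */
	}
	return positive;
}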


RCU-based dcache locking model
------------------------------

On many workloads, the most common operation on dcache is
to look up a dentry, given a parent dentry and the name
of the child. Typically, for every open(), stat() etc.,
the dentry corresponding to the pathname will be looked
up by walking the tree starting with the first component
of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it
is a frequent operation for workloads like multiuser
environments and web servers, it is important to optimize
this path.

Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus
for every component during path look-up. Since 2.5.10 onwards,
the fast-walk algorithm changed this by holding the dcache_lock
at the beginning and walking as many cached path component
dentries as possible. This significantly decreased the number
of acquisitions of dcache_lock. However, it also increased the
lock hold time significantly and affected performance on large
SMP machines. Since the 2.5.62 kernel, dcache has been using
a new locking model that uses RCU to make dcache look-up
lock-free.

The current dcache locking model is not very different from the
previous one. Prior to the 2.5.62 kernel, dcache_lock
protected the hash chain, the d_child, d_alias and d_lru lists, as
well as d_inode and several other things like mount look-up. The
RCU-based changes affect only the way the hash chain is protected. For
everything else the dcache_lock must be taken for both traversing and
updating. The hash chain updates too take the dcache_lock.
The significant change is the way d_lookup traverses the hash chain:
it doesn't acquire the dcache_lock for this and relies on RCU to
ensure that the dentry has not been *freed*.


Dcache locking details
----------------------

For many multi-user workloads, open() and stat() on files are
very frequently occurring operations. Both involve walking
of path names to find the dentry corresponding to the
concerned file. In the 2.4 kernel, dcache_lock was held
during look-up of each path component. Contention and
cache-line bouncing of this global lock caused significant
scalability problems. With the introduction of RCU
in the Linux kernel, this was worked around by making
the look-up of path components during path walking lock-free.


Safe lock-free look-up of dcache hash table
===========================================

Dcache is a complex data structure with the hash table entries
also linked together in other lists. In the 2.4 kernel, dcache_lock
protected all the lists. We applied RCU only to hash chain
walking. The rest of the lists are still protected by dcache_lock.
Some of the important changes are:

1. The deletion from a hash chain is done using the hlist_del_rcu()
   macro, which leaves the next pointer of the deleted dentry intact;
   this allows us to walk the chain safely and lock-free while a
   deletion is happening.

2. Insertion of a dentry into the hash table is done using
   hlist_add_head_rcu(), which takes care of ordering the writes -
   the writes to the dentry must be visible before the dentry
   is inserted. This works in conjunction with hlist_for_each_rcu()
   while walking the hash chain. The only requirement is that
   all initialization of the dentry must be done before
   hlist_add_head_rcu(), since we don't have dcache_lock protection
   while traversing the hash chain. This isn't different from the
   existing code; a generic sketch of this pattern appears after
   this list.

3. A dentry looked up without holding dcache_lock cannot be
   returned for walking if it is unhashed. It may then have a NULL
   d_inode or other bogosity, since RCU doesn't protect the other
   fields in the dentry. We therefore use the DCACHE_UNHASHED flag to
   indicate unhashed dentries and use this in conjunction with a
   per-dentry lock (d_lock). Once looked up without the dcache_lock,
   we acquire the per-dentry lock (d_lock) and check if the
   dentry is unhashed. If so, the look-up fails. If not, the
   reference count of the dentry is increased and the dentry is returned.

4. Once a dentry is looked up, it must be ensured that it doesn't go
   away during the path walk for that component. In pre-2.5.10 code,
   this was done by holding a reference to the dentry. dcache_rcu does
   the same.  In some sense, dcache_rcu path walking looks like
   the pre-2.5.10 version.

5. All dentry hash chain updates must take the dcache_lock as well as
   the per-dentry lock in that order. dput() does this to ensure
   that a dentry that has just been looked up in another CPU
   doesn't get deleted before dget() can be done on it.

6. There are several ways to do reference counting of RCU protected
   objects. One such example is the ipv4 route cache, where
   deferred freeing (using call_rcu()) is done as soon as
   the reference count goes to zero. This cannot be done in
   the case of dentries because tearing down a dentry
   requires blocking (dentry_iput()), which isn't supported from
   RCU callbacks. Instead, tearing down of dentries happens
   synchronously in dput(), but actual freeing happens later,
   when the RCU grace period is over. This allows safe lock-free
   walking of the hash chains, but a matched dentry may have
   been partially torn down. The checking of the DCACHE_UNHASHED
   flag with d_lock held detects such dentries and prevents
   them from being returned from look-up.
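
As referenced in item 2, here is a generic sketch of the RCU hlist
insert/delete/walk pattern described above. This is not the actual
dcache code; struct example and the function names are illustrative:

struct example {
	struct hlist_node node;
	int key;
};

/* writer side: runs under the relevant lock (dcache_lock for dentries) */
static void example_insert(struct hlist_head *head, struct example *e)
{
	/* all fields of *e must be fully initialized before this call;
	   hlist_add_head_rcu() orders those writes before publication */
	hlist_add_head_rcu(&e->node, head);
}

static void example_remove(struct example *e)
{
	/* hlist_del_rcu() leaves e->node.next intact, so concurrent
	   lock-free walkers fall off the chain safely */
	hlist_del_rcu(&e->node);
}

/* reader side: lock-free under rcu_read_lock() */
static struct example *example_find(struct hlist_head *head, int key)
{
	struct example *e, *found = NULL;
	struct hlist_node *pos;

	rcu_read_lock();
	hlist_for_each_entry_rcu(e, pos, head, node) {
		if (e->key == key) {
			/* as with dentries, a real caller must take a
			   reference here, under a per-object lock,
			   before leaving the read-side section */
			found = e;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}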


Maintaining POSIX rename semantics
==================================

Since look-up of dentries is lock-free, it can race against
a concurrent rename operation. For example, during rename
of file A to B, look-up of either A or B must succeed.
So, if look-up of B happens after A has been removed from the
hash chain but not added to the new hash chain, it may fail.
Also, a comparison while the name is being written concurrently
by a rename may result in false positive matches violating
rename semantics.  Issues related to races with rename are
handled as described below:

1. Look-up can be done in two ways - d_lookup() which is safe
   from simultaneous renames and __d_lookup() which is not.
   If __d_lookup() fails, it must be followed up by a d_lookup()
   to correctly determine whether a dentry is in the hash table
   or not. d_lookup() protects look-ups using a sequence
   lock (rename_lock). A sketch of this fallback appears after
   this list.

2. The name associated with a dentry (d_name) may be changed if
   a rename is allowed to happen simultaneously. To prevent memcmp()
   in __d_lookup() from going out of bounds due to a rename, and to
   avoid false positive comparisons, the name comparison is done
   while holding the per-dentry lock. This prevents concurrent
   renames during this operation.

3. Hash table walking during look-up may move to a different bucket as
   the current dentry is moved to a different bucket due to rename.
   But we use hlists in the dcache hash table, and they are null-terminated.
   So, even if a dentry moves to a different bucket, hash chain
   walk will terminate. [with a list_head list, it may not since
   termination is when the list_head in the original bucket is reached].
   Since we redo the d_parent check and compare name while holding
   d_lock, lock-free look-up will not race against d_move().

4. There can be a theoretical race when a dentry keeps coming back
   to its original bucket due to double moves. Due to this, look-up may
   consider that it has never moved and can end up in an infinite loop.
   But this is no worse than the theoretical livelocks we already
   have in the kernel.
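
As referenced in item 1, the fallback from __d_lookup() to d_lookup()
looks roughly like this; the wrapper function is illustrative, but the
pattern mirrors what the VFS path-walking code does:

static struct dentry *lookup_child(struct dentry *parent, struct qstr *name)
{
	struct dentry *dentry;

	/* fast and lock-free, but may miss a dentry that is being
	   renamed at this moment */
	dentry = __d_lookup(parent, name);
	if (!dentry)
		/* slow path, protected against renames by rename_lock */
		dentry = d_lookup(parent, name);
	return dentry;
}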


Important guidelines for filesystem developers related to dcache_rcu
====================================================================

1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
   don't change. Only the dcache internal implementation changes.
   However, filesystems *must not* delete from the dentry hash chains
   directly using the list macros as was allowed earlier. They must
   use dcache APIs like d_drop() or __d_drop() depending on the
   situation.

2. d_flags is now protected by a per-dentry lock (d_lock). All
   access to d_flags must be protected by it.

3. For a hashed dentry, checking of d_count needs to be protected
   by d_lock.


Papers and other documentation on dcache locking
================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html


Resources
=========

(Note some of these resources are not up-to-date with the latest kernel
 version.)

Creating Linux virtual filesystems. 2002
    <http://lwn.net/Articles/13325/>

The Linux Virtual File-system Layer by Neil Brown. 1999
    <http://www.cse.unsw.edu.au/~neilb/oss/linux-commentary/vfs.html>

A tour of the Linux VFS by Michael K. Johnson. 1996
    <http://www.tldp.org/LDP/khg/HyperNews/get/fs/vfstour.html>

A small trail through the Linux kernel by Andries Brouwer. 2001
    <http://www.win.tue.nl/~aeb/linux/vfs/trail.html>