Block IO Controller
===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup-based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

In the first phase, this patchset implements a proportional-weight, time-based
division of disk time. It is implemented in CFQ. Hence this policy takes
effect only on leaf nodes when CFQ is being used.

HOWTO
=====
You can do very simple testing by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
        CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio).

        mount -t cgroup -o blkio none /cgroup

- Create two cgroups
        mkdir -p /cgroup/test1/ /cgroup/test2

- Set weights of group test1 and test2
        echo 1000 > /cgroup/test1/blkio.weight
        echo 500 > /cgroup/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk
  (zerofile1, zerofile2) and launch two dd threads in different cgroups to
  read those files.

        sync
        echo 3 > /proc/sys/vm/drop_caches

        dd if=/mnt/sdb/zerofile1 of=/dev/null &
        echo $! > /cgroup/test1/tasks
        cat /cgroup/test1/tasks

        dd if=/mnt/sdb/zerofile2 of=/dev/null &
        echo $! > /cgroup/test2/tasks
        cat /cgroup/test2/tasks

- At the macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script, e.g. the sketch below) at the
  blkio.time and blkio.sectors files of both the test1 and test2 groups. These
  tell how much disk time (in milliseconds) each group got and how many
  sectors each group dispatched to the disk. We provide fairness in terms of
  disk time, so ideally blkio.time of the cgroups should be in proportion to
  their weights.

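  A minimal polling script could look like the following. It is only a sketch:
  it assumes the /cgroup mount point and the test1/test2 group names created
  above, so adjust the paths for your setup.

        # Poll the per-device disk time and sector counts of both groups
        # once per second until interrupted.
        while true; do
                for g in test1 test2; do
                        echo "== $g =="
                        cat /cgroup/$g/blkio.time
                        cat /cgroup/$g/blkio.sectors
                done
                sleep 1
        done
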
Various user visible config options
===================================
CONFIG_BLK_CGROUP
        - Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
        - Debug help. Right now some additional stats files show up in the
          cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
        - Enables group scheduling in CFQ. Currently only 1 level of group
          creation is allowed.

Details of cgroup files
=======================
- blkio.weight
        - Specifies per cgroup weight. This is the default weight of the group
          on all the devices until and unless overridden by a per device rule
          (see blkio.weight_device).

          Currently allowed range of weights is from 100 to 1000.

- blkio.weight_device
        - One can specify per cgroup per device rules using this interface.
          These rules override the default value of group weight as specified
          by blkio.weight.

          Following is the format.

          # echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device

          Configure weight=300 on /dev/sdb (8:16) in this cgroup
          # echo 8:16 300 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

          Configure weight=500 on /dev/sda (8:0) in this cgroup
          # echo 8:0 500 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:0     500
          8:16    300

          Remove specific weight for /dev/sda in this cgroup
          # echo 8:0 0 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

- blkio.time
        - disk time allocated to cgroup per device in milliseconds. First
          two fields specify the major and minor number of the device and
          third field specifies the disk time allocated to group in
          milliseconds.

- blkio.sectors
        - number of sectors transferred to/from disk by the group. First
          two fields specify the major and minor number of the device and
          third field specifies the number of sectors transferred by the
          group to/from the device.

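  For both files above, each line is keyed by device. A hypothetical
  illustration (the device numbers and values here are made up):

          # cat blkio.time
          8:16 245
          # cat blkio.sectors
          8:16 9728
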
- blkio.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of the
          device, third field specifies the operation type and the fourth field
          specifies the number of bytes.

- blkio.io_serviced
        - Number of IOs completed to/from the disk by the group. These
          are further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of the
          device, third field specifies the operation type and the fourth field
          specifies the number of IOs.

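  As an illustration only (hypothetical device and values), a read-mostly
  cgroup might show something along these lines in blkio.io_service_bytes:

          # cat blkio.io_service_bytes
          8:16 Read 1310720
          8:16 Write 0
          8:16 Sync 1310720
          8:16 Async 0
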
- blkio.io_service_time
        - Total amount of time between request dispatch and request completion
          for the IOs done by this cgroup. This is in nanoseconds to make it
          meaningful for flash devices too. For devices with queue depth of 1,
          this time represents the actual service time. When queue_depth > 1,
          that is no longer true as requests may be served out of order. This
          may cause the service time for a given IO to include the service time
          of multiple IOs when served out of order, which may result in total
          io_service_time > actual time elapsed. This time is further divided by
          the type of operation - read or write, sync or async. First two fields
          specify the major and minor number of the device, third field
          specifies the operation type and the fourth field specifies the
          io_service_time in ns.

- blkio.io_wait_time
        - Total amount of time the IOs for this cgroup spent waiting in the
          scheduler queues for service. This can be greater than the total time
          elapsed since it is cumulative io_wait_time for all IOs. It is not a
          measure of total time the cgroup spent waiting but rather a measure of
          the wait_time of its individual IOs. For devices with queue_depth > 1,
          this metric does not include the time spent between dispatching an IO
          to the device and the IO actually being serviced (there might be a
          time lag here due to re-ordering of requests by the device). This is
          in nanoseconds to make it meaningful for flash devices too. This time
          is further divided by the type of operation - read or write, sync or
          async. First two fields specify the major and minor number of the
          device, third field specifies the operation type and the fourth field
          specifies the io_wait_time in ns.

- blkio.io_merged
        - Total number of bios/requests merged into requests belonging to this
          cgroup. This is further divided by the type of operation - read or
          write, sync or async.

- blkio.io_queued
        - Total number of requests queued up at any given instant for this
          cgroup. This is further divided by the type of operation - read or
          write, sync or async.

- blkio.avg_queue_size
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          The average queue size for this cgroup over the entire time of this
          cgroup's existence. Queue size samples are taken each time one of the
          queues of this cgroup gets a timeslice.

- blkio.group_wait_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time the cgroup had to wait since it became busy
          (i.e., went from 0 to 1 request queued) to get a timeslice for one of
          its queues. This is different from the io_wait_time which is the
          cumulative total of the amount of time spent by each IO in that cgroup
          waiting in the scheduler queue. This is in nanoseconds. If this is
          read when the cgroup is in a waiting (for timeslice) state, the stat
          will only report the group_wait_time accumulated till the last time it
          got a timeslice and will not include the current delta.

- blkio.empty_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time a cgroup spends without any pending
          requests when not being served, i.e., it does not include any time
          spent idling for one of the queues of the cgroup. This is in
          nanoseconds. If this is read when the cgroup is in an empty state,
          the stat will only report the empty_time accumulated till the last
          time it had a pending request and will not include the current delta.

- blkio.idle_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time spent by the IO scheduler idling for a
          given cgroup in anticipation of a better request than the existing
          ones from other queues/cgroups. This is in nanoseconds. If this is
          read when the cgroup is in an idling state, the stat will only report
          the idle_time accumulated till the last idle period and will not
          include the current delta.

- blkio.dequeue
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
          gives statistics about how many times a group was dequeued
          from the service tree of the device. First two fields specify the
          major and minor number of the device and third field specifies the
          number of times a group was dequeued from a particular device.

- blkio.reset_stats
        - Writing an int to this file will result in resetting all the stats
          for that cgroup.

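  For example, using the test1 group from the HOWTO above, writing an integer
  triggers the reset:

          # echo 1 > /cgroup/test1/blkio.reset_stats
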
CFQ sysfs tunables
==================
/sys/block/<disk>/queue/iosched/group_isolation
-----------------------------------------------

If group_isolation=1, it provides stronger isolation between groups at the
expense of throughput. By default group_isolation is 0. In general that
means that if group_isolation=0, expect fairness for sequential workloads
only. Set group_isolation=1 to see fairness for random IO workloads also.

Generally CFQ will put a random seeky workload in the sync-noidle category.
CFQ will disable idling on these queues and instead do collective idling on
a group of such queues. Generally these are slow-moving queues, and if there
is a sync-noidle service tree in each group, that group gets exclusive access
to the disk for a certain period. That means it will bring the throughput
down if the group does not have enough IO to drive deeper queue depths and
utilize disk capacity to the fullest in the slice allocated to it. But the
flip side is that even a random reader should get better latencies and
overall throughput if there are lots of sequential readers/sync-idle
workloads running in the system.

If group_isolation=0, then CFQ automatically moves all the random seeky
queues into the root group. That means there will be no service
differentiation for that kind of workload. This leads to better throughput
as we do collective idling on the root sync-noidle tree.

By default one should run with group_isolation=0. If that is not sufficient
and one wants stronger isolation between groups, then set group_isolation=1,
but this will come at the cost of reduced throughput.

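A quick sketch of toggling the tunable (sdb is only an example device name;
substitute your own disk):

        # cat /sys/block/sdb/queue/iosched/group_isolation
        0
        # echo 1 > /sys/block/sdb/queue/iosched/group_isolation
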
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue and a single queue might not
drive deeper request queue depths to keep the storage busy. In such scenarios
one can try setting slice_idle=0 and that would switch CFQ to IOPS
(IO operations per second) mode on NCQ-supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.

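For illustration only (sdb is an example device; the default value shown
below is typical but may differ on your kernel):

        # cat /sys/block/sdb/queue/iosched/slice_idle
        8
        # echo 0 > /sys/block/sdb/queue/iosched/slice_idle
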
/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.

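As a sketch of the suggestion above (example device name only):

        # echo 0 > /sys/block/sdb/queue/iosched/group_idle
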
What works
==========
- Currently only sync IO queues are supported. All buffered writes are
  still system-wide and not per-group. Hence we will not see service
  differentiation for buffered writes between groups.