
Commit c3b94f4

Hugh Dickins authored and torvalds committed
memcg: further prevent OOM with too many dirty pages
The may_enter_fs test turns out to be too restrictive: though I saw no problem with it when testing on 3.5-rc6, it very soon OOMed when I tested on 3.5-rc6-mm1. I don't know what the difference there is, perhaps I just slightly changed the way I started off the testing: dd if=/dev/zero of=/mnt/temp bs=1M count=1024; rm -f /mnt/temp; sync repeatedly, in 20M memory.limit_in_bytes cgroup to ext4 on USB stick.

ext4 (and gfs2 and xfs) turn out to allocate new pages for writing with AOP_FLAG_NOFS: that seems a little worrying, and it's unclear to me why the transaction needs to be started even before allocating pagecache memory. But it may not be worth worrying about these days: if direct reclaim avoids FS writeback, does __GFP_FS now mean anything?

Anyway, we insisted on the may_enter_fs test to avoid hangs with the loop device; but since that also masks off __GFP_IO, we can test for __GFP_IO directly, ignoring may_enter_fs and __GFP_FS.

But even so, the test still OOMs sometimes: when originally testing on 3.5-rc6, it OOMed about one time in five or ten; when testing just now on 3.5-rc6-mm1, it OOMed on the first iteration.

This residual problem comes from an accumulation of pages under ordinary writeback, not marked PageReclaim, so rightly not causing the memcg check to wait on their writeback: these too can prevent shrink_page_list() from freeing any pages, so many times that memcg reclaim fails and OOMs.

Deal with these in the same way as direct reclaim now deals with dirty FS pages: mark them PageReclaim. It is appropriate to rotate these to tail of list when writepage completes, but more importantly, the PageReclaim flag makes memcg reclaim wait on them if encountered again. Increment NR_VMSCAN_IMMEDIATE? That's arguable: I chose not.

Setting PageReclaim here may occasionally race with end_page_writeback() clearing it: lru_deactivate_fn() already faced the same race, and correctly concluded that the window is small and the issue non-critical.

With these changes, the test runs indefinitely without OOMing on ext4, ext3 and ext2: I'll move on to test with other filesystems later. Trivia: invert conditions for a clearer block without an else, and goto keep_locked to do the unlock_page.

Signed-off-by: Hugh Dickins <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ying Han <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Fengguang Wu <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Theodore Ts'o <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
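For reference, a rough sketch of the kind of stress test the commit message describes, assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and a small ext4 filesystem mounted at /mnt; the cgroup name and mount points here are illustrative assumptions, only the dd/rm/sync cycle and the 20M memory.limit_in_bytes are taken from the message above:

# Assumed paths: adjust the cgroup mount point, cgroup name and /mnt to your setup.
mkdir /sys/fs/cgroup/memory/writeback-test
echo 20M > /sys/fs/cgroup/memory/writeback-test/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/writeback-test/tasks   # move this shell into the 20M memcg

# Repeat the write/remove/sync cycle from the commit message indefinitely.
while true; do
	dd if=/dev/zero of=/mnt/temp bs=1M count=1024
	rm -f /mnt/temp
	sync
done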
1 parent e62e384 commit c3b94f4

1 file changed

mm/vmscan.c

+24 −9

@@ -723,23 +723,38 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			/*
 			 * memcg doesn't have any dirty pages throttling so we
 			 * could easily OOM just because too many pages are in
-			 * writeback from reclaim and there is nothing else to
-			 * reclaim.
+			 * writeback and there is nothing else to reclaim.
 			 *
-			 * Check may_enter_fs, certainly because a loop driver
+			 * Check __GFP_IO, certainly because a loop driver
 			 * thread might enter reclaim, and deadlock if it waits
 			 * on a page for which it is needed to do the write
 			 * (loop masks off __GFP_IO|__GFP_FS for this reason);
 			 * but more thought would probably show more reasons.
+			 *
+			 * Don't require __GFP_FS, since we're not going into
+			 * the FS, just waiting on its writeback completion.
+			 * Worryingly, ext4 gfs2 and xfs allocate pages with
+			 * grab_cache_page_write_begin(,,AOP_FLAG_NOFS), so
+			 * testing may_enter_fs here is liable to OOM on them.
 			 */
-			if (!global_reclaim(sc) && PageReclaim(page) &&
-					may_enter_fs)
-				wait_on_page_writeback(page);
-			else {
+			if (global_reclaim(sc) ||
+			    !PageReclaim(page) || !(sc->gfp_mask & __GFP_IO)) {
+				/*
+				 * This is slightly racy - end_page_writeback()
+				 * might have just cleared PageReclaim, then
+				 * setting PageReclaim here end up interpreted
+				 * as PageReadahead - but that does not matter
+				 * enough to care.  What we do want is for this
+				 * page to have PageReclaim set next time memcg
+				 * reclaim reaches the tests above, so it will
+				 * then wait_on_page_writeback() to avoid OOM;
+				 * and it's also appropriate in global reclaim.
+				 */
+				SetPageReclaim(page);
 				nr_writeback++;
-				unlock_page(page);
-				goto keep;
+				goto keep_locked;
 			}
+			wait_on_page_writeback(page);
 		}
 
 		references = page_check_references(page, sc);
