arm: mm: Exclude additional mem_map entries from free

A previous patch addressed the issue of move_freepages_block()
trampling on erroneously freed mem_map entries at the bank end
pfn. We also need to restrict the start pfn in a complementary
manner.

Also make macro usage consistent by adopting round_down() and
round_up().

Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
Authored by Michael Bohan on 2011-07-26 17:28:08 -07:00, committed by Bryan Huntsman
parent 5b7afbc106
commit ccd78a45dc
1 changed file with 8 additions and 8 deletions

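For reference, the stand-alone program below sketches the arithmetic the commit
message describes. It is not kernel code: the macros are simplified power-of-two
versions of the kernel's round_down()/round_up() helpers, and the bank pfns and
the MAX_ORDER_NR_PAGES value of 1024 are made-up example numbers.

#include <stdio.h>

/* Simplified power-of-two variants of the kernel's round_down()/round_up(). */
#define round_down(x, y)        ((x) & ~((y) - 1))
#define round_up(x, y)          (((x) + (y) - 1) & ~((y) - 1))

#define MAX_ORDER_NR_PAGES      1024UL  /* example value only */

int main(void)
{
        /* Two hypothetical memory banks with unaligned boundaries (pfns). */
        unsigned long bank0_end   = 0x12340;    /* bank_pfn_end(bank0)   */
        unsigned long bank1_start = 0x20010;    /* bank_pfn_start(bank1) */

        /* Before this patch: only the end of the previous bank was rounded up. */
        unsigned long old_free_start = round_up(bank0_end, MAX_ORDER_NR_PAGES);
        unsigned long old_free_end   = bank1_start;

        /*
         * After this patch: the start of the next bank is rounded down as
         * well, so no mem_map entry inside the MAX_ORDER block covering
         * bank1's first pages is handed to free_memmap().
         */
        unsigned long new_free_start = old_free_start;
        unsigned long new_free_end   = round_down(bank1_start, MAX_ORDER_NR_PAGES);

        printf("old: free_memmap(%#lx, %#lx)\n", old_free_start, old_free_end);
        printf("new: free_memmap(%#lx, %#lx)\n", new_free_start, new_free_end);
        return 0;
}

With these example numbers the old bounds free mem_map entries up to pfn
0x20010, partway into the 0x20000 MAX_ORDER block that also covers the start
of the second bank; the new bounds stop at the 0x20000 boundary.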

@@ -472,7 +472,10 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 }
 
 /*
- * The mem_map array can get very big. Free the unused area of the memory map.
+ * The mem_map array can get very big. Free as much of the unused portion of
+ * the mem_map that we are allowed to. The page migration code moves pages
+ * in blocks that are rounded per the MAX_ORDER_NR_PAGES definition, so we
+ * can't free mem_map entries that may be dereferenced in this manner.
  */
 static void __init free_unused_memmap(struct meminfo *mi)
 {
@@ -486,7 +489,8 @@ static void __init free_unused_memmap(struct meminfo *mi)
 	for_each_bank(i, mi) {
 		struct membank *bank = &mi->bank[i];
 
-		bank_start = bank_pfn_start(bank);
+		bank_start = round_down(bank_pfn_start(bank),
+			MAX_ORDER_NR_PAGES);
 
 #ifdef CONFIG_SPARSEMEM
 		/*
@@ -503,12 +507,8 @@ static void __init free_unused_memmap(struct meminfo *mi)
 		if (prev_bank_end && prev_bank_end < bank_start)
 			free_memmap(prev_bank_end, bank_start);
 
-		/*
-		 * Align up here since the VM subsystem insists that the
-		 * memmap entries are valid from the bank end aligned to
-		 * MAX_ORDER_NR_PAGES.
-		 */
-		prev_bank_end = ALIGN(bank_pfn_end(bank), MAX_ORDER_NR_PAGES);
+		prev_bank_end = round_up(bank_pfn_end(bank),
+			MAX_ORDER_NR_PAGES);
 	}
 
 #ifdef CONFIG_SPARSEMEM
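
To see why both bank edges have to stay MAX_ORDER-aligned, the simulation below
mimics the access pattern the new comment refers to: given any pfn, the page
migration code rounds down to the enclosing MAX_ORDER block and touches every
mem_map entry in it. This is a loose stand-alone model, not the real
move_freepages_block() or free_memmap(); the block size and bank pfns are again
made-up example values.

#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER_NR_PAGES      1024UL                  /* example block size */
#define NR_PFNS                 (64 * MAX_ORDER_NR_PAGES)

/* Stand-in for the mem_map: true means the entry for this pfn still exists. */
static bool memmap_valid[NR_PFNS];

/* Stand-in for free_memmap(): drop the entries in [start, end). */
static void fake_free_memmap(unsigned long start, unsigned long end)
{
        for (unsigned long pfn = start; pfn < end; pfn++)
                memmap_valid[pfn] = false;
}

/*
 * The access pattern the new comment refers to: starting from one pfn, the
 * page migration code dereferences every entry in the surrounding block.
 */
static bool block_walk_is_safe(unsigned long pfn)
{
        unsigned long start = pfn & ~(MAX_ORDER_NR_PAGES - 1);

        for (unsigned long p = start; p < start + MAX_ORDER_NR_PAGES; p++)
                if (!memmap_valid[p])
                        return false;
        return true;
}

static void reset(void)
{
        for (unsigned long pfn = 0; pfn < NR_PFNS; pfn++)
                memmap_valid[pfn] = true;
}

int main(void)
{
        unsigned long prev_bank_end = 0x1400;   /* already block-aligned */
        unsigned long bank_start    = 0x2010;   /* unaligned example bank start */

        /* Old behaviour: free right up to the raw bank start. */
        reset();
        fake_free_memmap(prev_bank_end, bank_start);
        printf("unrounded free: walk from pfn %#lx is %s\n", bank_start,
               block_walk_is_safe(bank_start) ? "safe" : "UNSAFE");

        /* New behaviour: stop at the bank start rounded down to a block. */
        reset();
        fake_free_memmap(prev_bank_end, bank_start & ~(MAX_ORDER_NR_PAGES - 1));
        printf("rounded free:   walk from pfn %#lx is %s\n", bank_start,
               block_walk_is_safe(bank_start) ? "safe" : "UNSAFE");
        return 0;
}

The first walk dereferences entries that were just freed; rounding the bank
start down, as this patch does, keeps every entry in that block intact, which
complements the earlier fix that rounded the bank end up.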