mm/hmm: support automatic NUMA balancing

While a page is being migrated by NUMA balancing, HMM fails to detect this
condition and still returns the old page. The application will use the newly
migrated page, but the driver passes the old page's physical address to the
GPU, which crashes the application later.

Use pte_protnone(pte) to detect this condition, so that hmm_vma_do_fault()
will allocate a new page.

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Author: Philip Yang, 2019-05-23 16:32:31 -04:00 (committed by Jason Gunthorpe)
Parent: 085ea25064
Commit: 789c2af88f


@@ -548,7 +548,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
 {
-	if (pte_none(pte) || !pte_present(pte))
+	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
 		return 0;
 	return pte_write(pte) ? range->flags[HMM_PFN_VALID] |
 			range->flags[HMM_PFN_WRITE] :