Commit Graph

72392 Commits

Author SHA1 Message Date
Gavin Shan
7ebdf956df powerpc/powernv: PE list based on creation order
The resources (I/O and MMIO) will be assigned on a per-PE basis from
top to bottom so that we can implement the following trick: a resource
that has been assigned to a parent PE can be taken by a child PE if
necessary.

The current implementation already has a per-PHB PE list, but that
list doesn't meet our requirement of tracking PEs by creation time
from top to bottom. So the patch renames the DMA-based PE list and
introduces a list that traces the PEs sequentially by creation time.
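
For illustration, a minimal sketch of the shape of the change, with hypothetical
structure and field names (the actual pnv_ioda_pe/pnv_phb layout may differ):

    #include <linux/list.h>

    /* Hypothetical illustration only -- names are assumptions. */
    struct pnv_ioda_pe_sketch {
            struct list_head dma_link;      /* existing DMA-based PE list link   */
            struct list_head list;          /* new: link in creation-order list  */
            /* ... other PE state ... */
    };

    struct pnv_phb_sketch {
            struct list_head pe_dma_list;   /* renamed DMA-based PE list         */
            struct list_head pe_list;       /* PEs ordered by creation time      */
    };

    /* On PE creation, append so traversal sees parents before children. */
    static void pe_created(struct pnv_phb_sketch *phb, struct pnv_ioda_pe_sketch *pe)
    {
            list_add_tail(&pe->list, &phb->pe_list);
    }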

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Reviewed-by: Richard Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:35:13 +10:00
Gavin Shan
fb446ad075 powerpc/powernv: Create bus sensitive PEs
Basically, there are two types of PCI bus sensitive PEs: (A) the PE
includes a single PCI bus; (B) the PE includes the PCI bus and all
its subordinate PCI buses. At present, we'd like a PCI bus originated
by a PCIe link to form a PE that contains a single PCI bus, while a
PCIe-to-PCI bridge will form the second type of PE. We haven't
figured out how to detect PLX bridges yet. Once we can detect PLX
bridges some day, we will have to put the PCI buses originating from
the downstream ports of a PLX bridge into the second type of PE.

The patch changes the original implementation slightly to support the
two types of PCI bus sensitive PEs described above. Also, the function
used to retrieve the corresponding PE for a given PCI device has been
changed accordingly, because each PCI device should trace the PE it is
directly associated with.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Reviewed-by: Richard Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:34:43 +10:00
Li Zhong
e72bbbab27 powerpc/trace: Fix interrupt tracepoints vs. RCU
There are a few tracepoints in the interrupt code path that run before
irq_enter() or after irq_exit(), such as
trace_irq_entry()/trace_irq_exit() in do_IRQ() and
trace_timer_interrupt_entry()/trace_timer_interrupt_exit() in
timer_interrupt().

If the interrupt arrives while in idle(), then because a tracepoint
contains an RCU read-side critical section, we can see the following
suspicious RCU usage reported:

[  145.127743] ===============================
[  145.127747] [ INFO: suspicious RCU usage. ]
[  145.127752] 3.6.0-rc3+ #1 Not tainted
[  145.127755] -------------------------------
[  145.127759] /root/.workdir/linux/arch/powerpc/include/asm/trace.h:33
suspicious rcu_dereference_check() usage!
[  145.127765]
[  145.127765] other info that might help us debug this:
[  145.127765]
[  145.127771]
[  145.127771] RCU used illegally from idle CPU!
[  145.127771] rcu_scheduler_active = 1, debug_locks = 0
[  145.127777] RCU used illegally from extended quiescent state!
[  145.127781] no locks held by swapper/0/0.
[  145.127785]
[  145.127785] stack backtrace:
[  145.127789] Call Trace:
[  145.127796] [c00000000108b530] [c000000000013c40] .show_stack
+0x70/0x1c0 (unreliable)
[  145.127806] [c00000000108b5e0]
[c0000000000f59d8] .lockdep_rcu_suspicious+0x118/0x150
[  145.127813] [c00000000108b680] [c00000000000fc58] .do_IRQ+0x498/0x500
[  145.127820] [c00000000108b750] [c000000000003950]
hardware_interrupt_common+0x150/0x180
[  145.127828] --- Exception: 501 at .plpar_hcall_norets+0x84/0xd4
[  145.127828]     LR = .check_and_cede_processor+0x38/0x70
[  145.127836] [c00000000108bab0] [c0000000000665dc] .shared_cede_loop
+0x5c/0x100
[  145.127844] [c00000000108bb70] [c000000000588ab0] .cpuidle_enter
+0x30/0x50
[  145.127850] [c00000000108bbe0]
[c000000000588b0c] .cpuidle_enter_state+0x3c/0xb0
[  145.127857] [c00000000108bc60] [c000000000589730] .cpuidle_idle_call
+0x150/0x6c0
[  145.127863] [c00000000108bd30] [c000000000058440] .pSeries_idle
+0x10/0x40
[  145.127870] [c00000000108bda0] [c00000000001683c] .cpu_idle
+0x18c/0x2d0
[  145.127876] [c00000000108be60] [c00000000000b434] .rest_init
+0x124/0x1b0
[  145.127884] [c00000000108bef0] [c0000000009d0d28] .start_kernel
+0x568/0x588
[  145.127890] [c00000000108bf90] [c000000000009660] .start_here_common
+0x20/0x40

This is because RCU usage in interrupt context must occur inside the
region marked by rcu_irq_enter()/rcu_irq_exit(), which are called from
irq_enter()/irq_exit() respectively.

Move the tracepoints inside the irq_enter()/irq_exit() region to avoid
the report.
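
A simplified sketch of the resulting ordering in do_IRQ() (the real function
does more; this only illustrates the placement):

    /* Simplified sketch -- the point is the ordering: the tracepoints now sit
     * inside the irq_enter()/irq_exit() region, where rcu_irq_enter() and
     * rcu_irq_exit() have told RCU the CPU is no longer in an extended
     * quiescent state.
     */
    void do_IRQ(struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);

            irq_enter();                    /* calls rcu_irq_enter() */
            trace_irq_entry(regs);          /* RCU-safe here         */

            /* ... dispatch the interrupt ... */

            trace_irq_exit(regs);
            irq_exit();                     /* calls rcu_irq_exit()  */

            set_irq_regs(old_regs);
    }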

Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:54 +10:00
Aneesh Kumar K.V
78f1dbde9f powerpc/mm: Make some of the PGTABLE_RANGE dependency explicit
slice array size and slice mask size depend on PGTABLE_RANGE.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:53 +10:00
Aneesh Kumar K.V
f033d659c3 powerpc/mm: Update VSID allocation documentation
This updates the proto-VSID and VSID scramble related information to
be more generic by using symbolic names instead of the current values.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:53 +10:00
Aneesh Kumar K.V
048ee0993e powerpc/mm: Add 64TB support
Increase max addressable range to 64TB. This is not tested on
real hardware yet.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:51 +10:00
Aneesh Kumar K.V
735cafc32b powerpc/mm: Use 32bit array for slb cache
With a larger VSID we need to track more bits of the ESID in the SLB
cache for SLB invalidation.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:51 +10:00
Aneesh Kumar K.V
ac8dc2823a powerpc/mm: Use the required number of VSID bits in slbmte
ASM_VSID_SCRAMBLE can leave non-zero bits in the high 28 bits of the
result for a 256MB segment (40 bits for a 1T segment). Properly mask
them before using the values in slbmte.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:50 +10:00
Aneesh Kumar K.V
7aa0727f33 powerpc/mm: Increase the slice range to 64TB
This patch makes the high psizes mask an unsigned char array so that
we can support more than 16TB. Currently we support up to 64TB.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:50 +10:00
Aneesh Kumar K.V
67550080b8 powerpc/mm: Make KERN_VIRT_SIZE not dependent on PGTABLE_RANGE
As we keep increasing PGTABLE_RANGE, we need not increase the virtual
map area for the kernel.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:50 +10:00
Aneesh Kumar K.V
5524a27d39 powerpc/mm: Convert virtual address to vpn
This patch converts various functions to take a virtual page number
instead of a virtual address. The virtual page number is the virtual
address shifted right by VPN_SHIFT (12) bits. This enables us to have
an address range of up to 76 bits.
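
For illustration, the conversion itself is just a shift; a sketch (the helper
name is made up):

    #define VPN_SHIFT 12    /* as described above */

    /* Virtual page number from a virtual address (illustrative helper). */
    static inline unsigned long va_to_vpn(unsigned long va)
    {
            return va >> VPN_SHIFT;
    }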

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:49 +10:00
Aneesh Kumar K.V
dcda287a9b powerpc/mm: Simplify hpte_decode
This patch simplifies hpte_decode to ease switching from virtual
address to virtual page number in a later patch.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:49 +10:00
Aneesh Kumar K.V
f6412b742a powerpc/mm: Use hpt_va to compute virtual address
Don't open-code the same computation.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:48 +10:00
Aneesh Kumar K.V
f038392363 powerpc/mm: Replace open coded CONTEXT_BITS value
To clarify the meaning for future readers, replace the open-coded 19
with CONTEXT_BITS.
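
A sketch of what the change amounts to (the derived macro name is an
assumption):

    /* Sketch: the hashed MMU context allocator used a literal 19 for the
     * number of context bits; the patch names it instead.
     */
    #define CONTEXT_BITS     19
    #define MAX_USER_CONTEXT ((1UL << CONTEXT_BITS) - 1)   /* assumed name */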

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:48 +10:00
Scott Wood
f3d3444572 powerpc/mm: Fix typo in PTRS_PER_PUD
PTRS_PER_PUD should be based on PUD_INDEX_SIZE, not PMD_INDEX_SIZE.  We
got away with it because PUD and PMD had the same index size, but this is
no longer true with Aneesh's patchset to support a 46-bit user effective
address space.
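
The fix itself is a one-liner along these lines:

    /* Before (typo): #define PTRS_PER_PUD (1 << PMD_INDEX_SIZE) */
    #define PTRS_PER_PUD    (1 << PUD_INDEX_SIZE)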

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:47 +10:00
Michael Neuling
b92a66a65c powerpc: Add denormalisation exception handling for POWER6/7
On POWER6 and POWER7 if the input operand to an instruction is a
denormalised single precision binary floating point value we can take
a denormalisation exception where it's expected that the hypervisor
(HV=1) will fix up the inputs before the instruction is run.

This adds code to handle this denormalisation exception for POWER6 and
POWER7.

It also adds a CONFIG_PPC_DENORMALISATION option and sets it in
pseries/ppc64_defconfig.

This is useful on bare metal systems only.  Based on patch from Milton
Miller.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:31:47 +10:00
Benjamin Herrenschmidt
eda485f06d Merge remote-tracking branch 'pci/pci/gavin-window-alignment' into next
Merge Gavin's patches from the PCI tree, as subsequent powerpc
patches are going to depend on them.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-17 16:07:43 +10:00
Bjorn Helgaas
9a5d5bd848 Merge commit 'v3.6-rc5' into pci/gavin-window-alignment
* commit 'v3.6-rc5': (1098 commits)
  Linux 3.6-rc5
  HID: tpkbd: work even if the new Lenovo Keyboard driver is not configured
  Remove user-triggerable BUG from mpol_to_str
  xen/pciback: Fix proper FLR steps.
  uml: fix compile error in deliver_alarm()
  dj: memory scribble in logi_dj
  Fix order of arguments to compat_put_time[spec|val]
  xen: Use correct masking in xen_swiotlb_alloc_coherent.
  xen: fix logical error in tlb flushing
  xen/p2m: Fix one-off error in checking the P2M tree directory.
  powerpc: Don't use __put_user() in patch_instruction
  powerpc: Make sure IPI handlers see data written by IPI senders
  powerpc: Restore correct DSCR in context switch
  powerpc: Fix DSCR inheritance in copy_thread()
  powerpc: Keep thread.dscr and thread.dscr_inherit in sync
  powerpc: Update DSCR on all CPUs when writing sysfs dscr_default
  powerpc/powernv: Always go into nap mode when CPU is offline
  powerpc: Give hypervisor decrementer interrupts their own handler
  powerpc/vphn: Fix arch_update_cpu_topology() return value
  ARM: gemini: fix the gemini build
  ...

Conflicts:
	drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
	drivers/rapidio/devices/tsi721.c
2012-09-13 15:54:57 -06:00
Gavin Shan
271fd03a30 powerpc/powernv: I/O and memory alignment for P2P bridges
The patch implements ppc_md.pcibios_window_alignment for the powernv
platform so that resource reassignment in the PCI core is done
according to the I/O and memory alignment returned by the powernv
platform. The alignments returned from the powernv platform depend
closely on the scheme for PE segmenting. The patch isn't useful by
itself for now, but subsequent patches will build on it.

[bhelgaas: use pci_pcie_type() since pci_dev.pcie_type was removed]
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2012-09-11 16:59:47 -06:00
Gavin Shan
4c2245bb5c powerpc/PCI: Override pcibios_window_alignment()
This patch implements pcibios_window_alignment() so that powerpc
platforms can force P2P bridge windows to larger alignments than the
PCI spec requires.
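
A sketch of the hook described here: a weak default in the PCI core that an
architecture may override, with the powerpc override simply forwarding to
ppc_md (the exact bodies are illustrative):

    /* PCI core (sketch): default is "no constraint beyond the PCI spec". */
    resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
                                                    unsigned long type)
    {
            return 1;
    }

    /* powerpc (sketch): let the platform decide, e.g. powernv PE segmenting. */
    resource_size_t pcibios_window_alignment(struct pci_bus *bus,
                                             unsigned long type)
    {
            if (ppc_md.pcibios_window_alignment)
                    return ppc_md.pcibios_window_alignment(bus, type);
            return 1;
    }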

[bhelgaas: changelog]
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2012-09-11 16:59:46 -06:00
Joe MacDonald
6b5e7229bb powerpc/mm: Match variable types to API
sys_subpage_prot() takes an unsigned long for 'addr', does some
arithmetic with it, and stores the result in a signed int, 'i', which
is eventually used as the size parameter in a copy_from_user() call.
Update 'i' to be an unsigned long as well. Since 'nw' is used in a
size_t context, which may be unsigned int or unsigned long depending
on whether this is 32- or 64-bit, switch it to a size_t so it is
always right.

Finally, since we're in the neighbourhood, make the same changes to
subpage_prot_clear().
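
A small illustrative snippet (not the kernel function) of the type discipline
being applied:

    /* Illustrative sketch only: keep the element count unsigned and the byte
     * count a size_t so the value handed to a copy routine such as
     * copy_from_user() can never be a negative int.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static int fetch_prot_map(uint32_t *dst, const uint32_t *src,
                              unsigned long len)
    {
            unsigned long i  = len >> 12;              /* number of 4K pages */
            size_t        nw = i * sizeof(uint32_t);   /* bytes to copy      */

            memcpy(dst, src, nw);   /* stands in for copy_from_user() */
            return 0;
    }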

Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joe MacDonald <joe.macdonald@windriver.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 14:37:31 +10:00
Alexey Kardashevskiy
11f63d3fb9 powerpc/iommu: Add ppc_md.tce_get() callback for use by VFIO
The upcoming VFIO support requires a way to know which entries in the
TCE map are not empty in order to do cleanup at QEMU exit/crash. This
patch adds such functionality to the POWERNV platform code.
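
A sketch of what such a backend can look like (names and the raw read are
assumptions):

    /* Sketch of a tce_get backend for powernv: return the raw TCE so callers
     * (e.g. VFIO teardown) can tell whether the entry is populated.
     */
    static unsigned long pnv_tce_get(struct iommu_table *tbl, long index)
    {
            return ((u64 *)tbl->it_base)[index - tbl->it_offset];
    }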

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 14:36:17 +10:00
Michael Neuling
4715fed600 powerpc: cleanup old DABRX #defines
These are no longer used, so get rid of them.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:59:16 +10:00
Michael Neuling
cd14457304 powerpc: Dynamically calculate the dabrx based on kernel/user/hypervisor
Currently we mark the DABRX to interrupt on all matches
(hypervisor/kernel/user) and then filter in software. We can be a lot
smarter now that we can set the DABRX dynamically.

This sets the DABRX based on the flags passed by the user.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:59:13 +10:00
Michael Neuling
4474ef055c powerpc: Rework set_dabr so it can take a DABRX value as well
Rework set_dabr to take a DABRX value as well.

Both the pseries and PS3 hypervisors do some checks on the DABRX
values that are passed in the hcall. This patch stops bogus values
from being passed to the hypervisor. Also, in the case where we are
clearing the breakpoint, where DABR and DABRX are zero, we modify the
DABRX value to make it valid so that the hcall won't fail.
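
A sketch of the reworked pseries path; the function name and the fallback
DABRX value are assumptions:

    /* Sketch: pass DABRX through to the hypervisor, but never hand it an
     * all-zero DABRX when clearing the breakpoint.
     */
    static int set_xdabr_sketch(unsigned long dabr, unsigned long dabrx)
    {
            if (!dabr && !dabrx)
                    dabrx = DABRX_USER | DABRX_KERNEL;  /* assumed valid value */

            return plpar_hcall_norets(H_SET_XDABR, dabr, dabrx);
    }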

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:59:10 +10:00
Gavin Shan
3ab96a02e8 powerpc/eeh: Cleanup on EEH PCI address cache
The patch cleans up the EEH PCI address cache, based on the fact that
the EEH core is the only user of the component.

        * Clean up function names so that they all have the prefix
          "eeh" and are shorter.
        * Calls to printk() have been replaced with pr_debug() or
          pr_warning() as appropriate.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:59:00 +10:00
Gavin Shan
f8f7d63fd9 powerpc/eeh: Trace eeh device from I/O cache
The idea comes from Benjamin Herrenschmidt. The EEH cache helps fetch
the PCI device for a given I/O address. Since the EEH cache serves
EEH, it's reasonable for the EEH cache to trace the EEH device rather
than the PCI device.

The patch makes the EEH cache trace EEH devices. Also, the major EEH
entry function eeh_dn_check_failure has been renamed to
eeh_dev_check_failure since it now takes an EEH device as its input
parameter.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:44 +10:00
Gavin Shan
d7bb88629d powerpc/eeh: Probe mode support
When the EEH module is installed, PCI devices are checked one by one
to see if they support EEH. On different platforms, the PCI devices
are referenced in different ways when the EEH module is loaded. For
example, on the pSeries platform that is done through OF nodes, while
on the PowerNV platform it will be done through real PCI devices
(struct pci_dev) in the future. So we need some mechanism to
differentiate those cases by classifying them into probe modes, either
from OF nodes or from real PCI devices.

The patch implements support for the EEH probe mode. EEH on pSeries
sets it to EEH_PROBE_MODE_DEVTREE, which means the probe will be done
based on OF nodes on the pSeries platform.

In addition, the patch moves the probe function from the EEH core to
the platform-dependent backend, with some cleanup applied.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:44 +10:00
Gavin Shan
dbbceee12f powerpc/eeh: Move stats to PE
The patch removes the EEH-related statistics from the EEH device
since they are now maintained by the corresponding EEH PE. Also, the
flags used to trace the state of the EEH device and PE have been
reworked a little.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:43 +10:00
Gavin Shan
9b3c76f081 powerpc/eeh: Handle EEH error based on PE
The patch reworks the current implementation so that EEH errors are
handled based on the PE instead of the EEH device.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:43 +10:00
Gavin Shan
120dc49661 powerpc/eeh: Make EEH handler PE sensitive
Once an EEH error is found, an EEH event is created and put into the
global linked list. Meanwhile, a kernel thread is started to process
it. The handler for the kernel thread was originally EEH device
sensitive.

The patch reworks the kernel thread's handler so that it is PE
sensitive.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:42 +10:00
Gavin Shan
c270a24c59 powerpc/eeh: Do reset based on PE
The patch implements reset based on the PE instead of the EEH device.
Also, the functions used to retrieve the reset type, either hot or
fundamental reset, have been reworked a little. More specifically,
they are implemented based on the EEH device traversal function.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:42 +10:00
Gavin Shan
ff477966c6 powerpc/eeh: I/O enable and log retrieval based on PE
The patch refactors the original implementation in order to enable
I/O and retrieve the EEH log based on the PE.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:42 +10:00
Gavin Shan
9e6d2cf65e powerpc/eeh: Device bars restore based on PE
The patch introduces a function to traverse the devices of the
specified PE and its child PEs. Also, the restore of device BARs is
implemented based on the traversal function.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:41 +10:00
Gavin Shan
371a395d2f powerpc/eeh: Make EEH operations based on PE
Originally, all the EEH operations were implemented based on OF
nodes.

That explicitly breaks the rule that the operation target is the PE
rather than the device. Therefore, the patch makes all the operations
based on the PE instead of the device.

Unfortunately, the backend for config space has to be kept as it is
because it doesn't depend on the PE.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:41 +10:00
Gavin Shan
66523d9f2d powerpc/eeh: Trace error based on PE from beginning
There are two conditions that trigger EEH error detection: an invalid
value returned from reading I/O space or from reading config space. In
each case, the function eeh_dn_check_failure is called to initialize
an EEH event and put it into the pool for further processing.

The patch changes the function a little so that the EEH error is
traced based on the PE instead of the EEH device. Also, the function
eeh_find_device_pe() has been removed since the EEH device now tracks
its PE through struct eeh_dev::pe.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:33 +10:00
Gavin Shan
5b66352944 powerpc/eeh: Trace EEH state based on PE
Since we've introduced a dedicated struct to trace individual PEs,
it's reasonable to trace their state through that struct instead of
through "eeh_dev".

The patch implements state tracing based on the PE. Notably, the PE
state is applied to the specified PE as well as its child PEs. That
complies with the rule that a problematic parent PE prevents its child
PEs from working properly.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:32 +10:00
Gavin Shan
c533b46cc7 powerpc/eeh: Build EEH event based on PE
The original implementation builds EEH events based on the EEH
device. We already have a dedicated struct to describe a PE, so it's
reasonable to build EEH events based on the PE.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:32 +10:00
Gavin Shan
82e8882f7f powerpc/eeh: Remove PE at appropriate time
During PCI hotplug and EEH recovery, the PE hierarchy tree might
change due to PCI topology changes. At a later point, when the PCI
device is added back, the PE will be created dynamically again.

The patch introduces a new function to remove EEH devices from the
associated PE. That can also cause the parent PE to be removed from
the PE tree if the parent PE no longer contains valid EEH devices or
child PEs.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:32 +10:00
Gavin Shan
9b84348c92 powerpc/eeh: Create PEs during EEH initialization
The patch creates PEs and associates the newly created PEs with their
parents and siblings as well as with EEH devices. That makes it more
straightforward to trace EEH errors and recover from them accordingly.

Once the EEH functionality on a PCI IOA has been enabled, we try to
create a PE for it. If there's an existing PE to which the current
PCI IOA should be attached, the existing PE is converted from "device"
type to "bus" type accordingly.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:31 +10:00
Gavin Shan
22f4ab123f powerpc/eeh: Search PE based on requirement
The patch implements PE searching based on the following
requirements:

 * Search for a PE according to its PE address, which is either the
   traditional PE address composed of the PCI bus/device/function
   number, or a unified PE address assigned by firmware or the
   platform.
 * Search for the parent PE according to the given EEH device. This is
   useful when creating a new PE and putting it into the right
   position.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:31 +10:00
Gavin Shan
55037d1761 powerpc/eeh: Create PEs for PHBs
A particular PE is only meaningful within its ancestor PHB domain.
Therefore, each PHB should have its own PE hierarchy tree to trace the
PEs created under that PHB.

The patch creates PEs for the PHBs and puts those PEs into the global
linked list traced by "eeh_phb_pe". This linked list of PEs forms the
first level of the overall PE hierarchy tree across the system.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:30 +10:00
Gavin Shan
646a849940 powerpc/eeh: Introduce global mutex
The patch introduces a global mutex for EEH so that the core data
structures can be protected by it. Also, two inline functions are
exported for this: eeh_lock() and eeh_unlock().
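
A minimal sketch of what the helpers amount to (the mutex name is assumed):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(eeh_mutex);         /* assumed name */

    static inline void eeh_lock(void)
    {
            mutex_lock(&eeh_mutex);
    }

    static inline void eeh_unlock(void)
    {
            mutex_unlock(&eeh_mutex);
    }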

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:30 +10:00
Gavin Shan
968f968f9b powerpc/eeh: Introduce eeh_pe struct
As defined in PAPR 2.4, a Partitionable Endpoint (PE) is an I/O
subtree that can be treated as a unit for the purposes of partitioning
and error recovery. Therefore, the EEH core should be aware of PEs.
With the eeh_pe struct we can support PEs explicitly, and it makes
everything much more data-centralized. Another important reason is for
the EEH core to support multiple platforms: some of them, like
pSeries, figure out PEs through OF nodes, while others, like powernv,
have to do so through the PCI bus/device tree. With explicit PE
support, the EEH core is implemented based on the centralized data,
and the platform-dependent implementations figure out the PEs in
whatever way is feasible for them.

When designing the struct, the following factors were taken into
account (see the sketch after the list):
  * Reflecting the relationships of PEs. A PE might have a parent as
    well as children.
  * Reflecting the association of a PE with its (EEH) devices.
  * PEs have PHB boundaries.
  * A PE should have a unique address assigned within the
    corresponding PHB domain.
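
A sketch of a struct shaped by those factors; field names are illustrative
rather than the merged layout:

    #include <linux/list.h>

    /* Illustrative only -- field names are assumptions. */
    struct eeh_pe_sketch {
            int type;                       /* PHB / bus / device PE            */
            int addr;                       /* unique address in the PHB domain */
            struct pci_controller *phb;     /* PHB boundary                     */
            struct eeh_pe_sketch *parent;   /* parent PE, NULL for a PHB PE     */
            struct list_head children;      /* head: this PE's child PEs        */
            struct list_head sibling;       /* link in the parent's child list  */
            struct list_head edevs;         /* EEH devices attached to this PE  */
    };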

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:29 +10:00
Gavin Shan
3ea1ae989a powerpc/eeh: More logs for EEH initialization
The patch adds more logs to EEH initialization functions for
debugging purpose. Also, the machine type (pSeries) is checked
in the platform initialization to assure it's the correct platform
to invoke it.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:29 +10:00
Gavin Shan
7e4bbaf0bf powerpc/eeh: Use slab to allocate eeh devices
The EEH initialization functions have been postponed until slab/slub
is ready, so we can use slab/slub to allocate the memory chunks for
newly created EEH devices. That saves a lot of memory.

The patch also does a cleanup to replace "kmalloc" with "kzalloc" so
that we needn't clear the allocated memory chunk explicitly.
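
The allocation after the cleanup follows the usual pattern (sketch):

    /* Sketch: kzalloc() returns zeroed memory, so no explicit memset(). */
    static struct eeh_dev *eeh_dev_alloc_sketch(void)
    {
            struct eeh_dev *edev = kzalloc(sizeof(*edev), GFP_KERNEL);

            return edev;    /* NULL on allocation failure */
    }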

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:28 +10:00
Gavin Shan
35e5cfe27e powerpc/eeh: Move EEH initialization around
Currently, we have three phases of EEH initialization on the pSeries
platform, all done through built-in functions: platform
initialization, EEH device creation, and EEH subsystem enablement. All
of them are done no later than ppc_md.setup_arch. That means slab/slub
isn't ready yet, so we have to allocate memory chunks in units of
PAGE_SIZE for those dynamically created EEH devices. That's pretty
expensive.

In order to utilize slab/slub for memory allocation, we have to move
the EEH initialization functions around so that all of them are called
after slab is ready.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:27 +10:00
Michael Ellerman
407821a34f powerpc: Initialise paca.data_offset with poison
It's possible for the cpu_possible_mask to change between the time we
initialise the pacas and the time we set up per_cpu areas.

Obviously impossible cpus shouldn't ever be running, but stranger things
have happened. So be paranoid and initialise data_offset with a poison
value in case we don't set it up later.

Based on a patch from Anton Blanchard.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-10 09:35:27 +10:00
Linus Torvalds
32d687cad3 Merge branch 'fixes-for-3.6' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping
Pull DMA-mapping fixes from Marek Szyprowski:
 "Another set of fixes for ARM dma-mapping subsystem.

  Commit e9da6e9905 replaced custom consistent buffer remapping code
  with generic vmalloc areas.  It however introduced some regressions
  caused by limited support for allocations in atomic context.  This
  series contains fixes for those regressions.

  For some subplatforms the default, pre-allocated pool for atomic
  allocations turned out to be too small, so a function for setting its
  size has been added.

  Another set of patches adds support for atomic allocations to
  IOMMU-aware DMA-mapping implementation.

  The last part of this pull request contains two fixes for Contiguous
  Memory Allocator, which relax too strict requirements."

* 'fixes-for-3.6' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
  ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC
  ARM: dma-mapping: Introduce __atomic_get_pages() for __iommu_get_pages()
  ARM: dma-mapping: Refactor out to introduce __in_atomic_pool
  ARM: dma-mapping: atomic_pool with struct page **pages
  ARM: Kirkwood: increase atomic coherent pool size
  ARM: DMA-Mapping: print warning when atomic coherent allocation fails
  ARM: DMA-Mapping: add function for setting coherent pool size from platform code
  ARM: relax conditions required for enabling Contiguous Memory Allocator
  mm: cma: fix alignment requirements for contiguous regions
2012-09-08 16:22:43 -07:00
Michael Neuling
06c8876668 powerpc: Use the XDABR hcall
We never use the XDABR hcall since we check for the DABR hcall first.
The XDABR hcall is better since it allows us to also set the DABRX.
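
A sketch of the preference described above (the setter function names are
assumptions):

    /* Sketch: pick the XDABR hcall first so DABRX can be programmed as well. */
    static void __init pseries_choose_dabr_hcall(void)
    {
            if (firmware_has_feature(FW_FEATURE_XDABR))
                    ppc_md.set_dabr = pseries_set_xdabr;
            else if (firmware_has_feature(FW_FEATURE_DABR))
                    ppc_md.set_dabr = pseries_set_dabr;
    }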

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-09-07 11:44:42 +10:00