Commit Graph

35 Commits

Author SHA1 Message Date
Marc Zyngier
ac30a11e8e ARM: KVM: introduce per-vcpu HYP Configuration Register
So far, KVM/ARM used a fixed HCR configuration per guest, except for
the VI/VF/VA bits, which control interrupts in the absence of a VGIC.

With the upcoming need to dynamically reconfigure trapping, it becomes
necessary to allow the HCR to be changed on a per-vcpu basis.

The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR
field, initialized at setup time.
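
A minimal userspace sketch of the shape of the change, assuming a
simplified vcpu structure (the struct, the helper and the
HCR_DEFAULT_TRAPS value are illustrative; only the HCR.TVM bit position
is taken from the architecture):

    #include <stdint.h>
    #include <stdio.h>

    #define HCR_DEFAULT_TRAPS 0x3u    /* placeholder default trap bits */
    #define HCR_TVM (1u << 26)        /* HCR.TVM: trap guest VM-register writes */

    struct vcpu {
            uint32_t hcr;             /* per-vcpu HCR value, was a fixed constant */
    };

    static void vcpu_init_hcr(struct vcpu *v)
    {
            v->hcr = HCR_DEFAULT_TRAPS;   /* initialized once at vcpu setup */
    }

    int main(void)
    {
            struct vcpu v;

            vcpu_init_hcr(&v);
            v.hcr |= HCR_TVM;             /* trapping now reconfigurable per vcpu */
            printf("hcr = %#x\n", v.hcr);
            return 0;
    }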

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-03 01:15:23 +00:00
Lorenzo Pieralisi
7604537bbb ARM: kernel: implement stack pointer save array through MPIDR hashing
The current implementation of cpu_{suspend}/cpu_{resume} relies on the MPIDR
to index the array of pointers where the context is saved and restored.
This approach works as long as the MPIDR can be considered a
linear index, so that the pointer array can simply be dereferenced by
using the MPIDR[7:0] value.
On ARM multi-cluster systems, where the MPIDR may not be a linear index,
a mapping function must be applied to it before it can be used to look
up the stack pointer array.

This patch adds code in the cpu_{suspend}/cpu_{resume} implementation
that relies on a shift-and-OR hashing method to map an MPIDR value to a
set of buckets precomputed at boot, giving a collision-free mapping from
MPIDR to context pointers.

The hashing algorithm must be simple, fast, and implementable with few
instructions since in the cpu_resume path the mapping is carried out with
the MMU off and the I-cache off, hence code and data are fetched from DRAM
with no caching available. Simplicity is counterbalanced by a small
increase in memory (allocated dynamically) for the stack pointer buckets,
which should anyway be fairly limited on most systems.

Memory for context pointers is allocated in an early_initcall with
size precomputed and stashed previously in kernel data structures.
Memory for context pointers is allocated through kmalloc; this
guarantees contiguous physical addresses for the allocated memory which
is fundamental to the correct functioning of the resume mechanism that
relies on the context pointer array to be a chunk of contiguous physical
memory. Virtual to physical address conversion for the context pointer
array base is carried out at boot to avoid fiddling with virt_to_phys
conversions in the cpu_resume path, which is quite fragile and should be
optimized to execute as few instructions as possible.
Virtual and physical context pointer base array addresses are stashed in a
struct that is accessible from assembly using values generated through the
asm-offsets.c mechanism.
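
A hedged userspace sketch of the shift-and-OR idea (the real structure
and its boot-time precomputation differ; the shifts here are hand-picked
for the example topology):

    #include <stdint.h>
    #include <stdio.h>

    struct mpidr_hash {
            uint32_t shift_aff[3];    /* per-affinity-level right shifts */
    };

    /*
     * Collapse the sparse MPIDR affinity fields into a dense,
     * collision-free bucket index using only shifts and ORs, cheap
     * enough to run with the MMU and caches off in the cpu_resume path.
     */
    static uint32_t mpidr_hash_index(const struct mpidr_hash *h, uint32_t mpidr)
    {
            return ((mpidr & 0x0000ffu) >> h->shift_aff[0]) |
                   ((mpidr & 0x00ff00u) >> h->shift_aff[1]) |
                   ((mpidr & 0xff0000u) >> h->shift_aff[2]);
    }

    int main(void)
    {
            /*
             * Example: two clusters of two CPUs (MPIDRs 0x0000, 0x0001,
             * 0x0100, 0x0101); the shifts pack the two varying bits into
             * bits [1:0], mapping the four CPUs to buckets 0-3.
             */
            struct mpidr_hash h = { .shift_aff = { 0, 7, 16 } };

            printf("MPIDR 0x0101 -> bucket %u\n", mpidr_hash_index(&h, 0x0101));
            return 0;
    }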

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
2013-06-20 11:24:11 +01:00
Linus Torvalds
01227a889e Merge tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Gleb Natapov:
 "Highlights of the updates are:

  general:
   - new emulated device API
   - legacy device assignment is now optional
   - irqfd interface is more generic and can be shared between arches

  x86:
   - VMCS shadow support and other nested VMX improvements
   - APIC virtualization and Posted Interrupt hardware support
   - Optimize mmio spte zapping

  ppc:
   - BookE: in-kernel MPIC emulation with irqfd support
   - Book3S: in-kernel XICS emulation (incomplete)
   - Book3S: HV: migration fixes
   - BookE: more debug support preparation
   - BookE: e6500 support

  ARM:
   - reworking of Hyp idmaps

  s390:
   - ioeventfd for virtio-ccw

  And many other bug fixes, cleanups and improvements"

* tag 'kvm-3.10-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (204 commits)
  kvm: Add compat_ioctl for device control API
  KVM: x86: Account for failing enable_irq_window for NMI window request
  KVM: PPC: Book3S: Add API for in-kernel XICS emulation
  kvm/ppc/mpic: fix missing unlock in set_base_addr()
  kvm/ppc: Hold srcu lock when calling kvm_io_bus_read/write
  kvm/ppc/mpic: remove users
  kvm/ppc/mpic: fix mmio region lists when multiple guests used
  kvm/ppc/mpic: remove default routes from documentation
  kvm: KVM_CAP_IOMMU only available with device assignment
  ARM: KVM: iterate over all CPUs for CPU compatibility check
  KVM: ARM: Fix spelling in error message
  ARM: KVM: define KVM_ARM_MAX_VCPUS unconditionally
  KVM: ARM: Fix API documentation for ONE_REG encoding
  ARM: KVM: promote vfp_host pointer to generic host cpu context
  ARM: KVM: add architecture specific hook for capabilities
  ARM: KVM: perform HYP initialization for hotplugged CPUs
  ARM: KVM: switch to a dual-step HYP init code
  ARM: KVM: rework HYP page table freeing
  ARM: KVM: enforce maximum size for identity mapped code
  ARM: KVM: move to a KVM provided HYP idmap
  ...
2013-05-05 14:47:31 -07:00
Marc Zyngier
3de50da690 ARM: KVM: promote vfp_host pointer to generic host cpu context
We use the vfp_host pointer to store the host VFP context, should
the guest start using VFP itself.

Actually, we can use this pointer in a more generic way to store
CPU-specific data, and arm64 is using it to dump the whole host
state before switching to the guest.

Simply rename the vfp_host field to host_cpu_context, and the
corresponding type to kvm_cpu_context_t. No change in functionality.
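
A hedged sketch of the shape of the rename (details differ from the
actual tree):

    /* Before: the host slot could only ever hold VFP state. */
    struct vfp_hard_struct;                 /* host VFP register file, elided */

    /*
     * After: a generically named context type. On 32-bit ARM it can
     * simply alias the VFP state for now, while arm64 puts the whole
     * host state behind the same name.
     */
    typedef struct vfp_hard_struct kvm_cpu_context_t;

    struct kvm_vcpu_arch_sketch {
            kvm_cpu_context_t *host_cpu_context;    /* was: vfp_host */
    };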

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-04-28 22:23:13 -07:00
Gleb Natapov
2dfee7b271 Merge branch 'kvm-arm-cleanup' from git://github.com/columbia/linux-kvm-arm.git 2013-04-25 18:23:48 +03:00
Russell King
a126f7c41d Merge branch 'mcpm' of git://git.linaro.org/people/nico/linux into devel-stable 2013-04-25 09:42:42 +01:00
Dave Martin
1ae98561b1 ARM: mcpm_head.S: vlock-based first man election
Instead of requiring the first man to be elected in advance (which
can be suboptimal in some situations), this patch uses a per-cluster
mutex to co-ordinate selection of the first man.

This should also make it more feasible to reuse this code path for
asynchronous cluster resume (as in CPUidle scenarios).

We must ensure that the vlock data doesn't share a cacheline with
anything else, or dirty cache eviction could corrupt it.
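
A loose C sketch of the intent (the real election is a "voting lock"
written in assembler so it can run with caches off; this atomic
compare-and-swap version only illustrates the first-man property):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define FIRST_MAN_NONE (-1)

    /*
     * One election slot per cluster, aligned so it occupies a cache
     * line of its own and cannot be corrupted by eviction of
     * neighbouring data. Initialize owner to FIRST_MAN_NONE.
     */
    struct cluster_vlock {
            _Alignas(64) atomic_int owner;
    };

    /*
     * Returns true on exactly one CPU per cluster: the elected first
     * man, which then performs cluster-level setup while the rest wait.
     */
    static bool try_become_first_man(struct cluster_vlock *vl, int cpu)
    {
            int expected = FIRST_MAN_NONE;

            return atomic_compare_exchange_strong(&vl->owner, &expected, cpu);
    }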

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-04-24 10:37:01 -04:00
Dave Martin
7fe31d28e8 ARM: mcpm: introduce helpers for platform coherency exit/setup
This provides helper methods to coordinate between CPUs coming down
and CPUs going up, as well as documentation of the algorithms used,
so that cluster teardown and setup operations are not performed on a
cluster simultaneously.

For use in the power_down() implementation:
  * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
  * __mcpm_outbound_enter_critical(unsigned int cluster)
  * __mcpm_outbound_leave_critical(unsigned int cluster)
  * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)

The power_up_setup() helper should do platform-specific setup in
preparation for turning the CPU on, such as invalidating local caches
or entering coherency.  It must be assembler for now, since it must
run before the MMU can be switched on.  It is passed the affinity level
for which initialization should be performed.

Because the mcpm_sync_struct content is looked up and modified
with the cache enabled or disabled depending on the code path, it is
crucial to always ensure proper cache maintenance to update main memory
right away.  The sync_cache_*() helpers are used to that end.

Also, in order to prevent a cached writer from interfering with an
adjacent non-cached writer, we ensure each state variable is located
in a separate cache line.
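
As a hedged sketch, a platform's power_down() might sequence these
helpers as follows (signatures as listed above; the cache maintenance
and the real outbound state machine are elided):

    /* Illustrative ordering only; not the complete MCPM state machine. */
    static void example_power_down(unsigned int cluster, unsigned int cpu,
                                   int last_man)
    {
            __mcpm_cpu_going_down(cluster, cpu);

            if (last_man) {
                    __mcpm_outbound_enter_critical(cluster);
                    /* cluster teardown: flush caches, exit coherency, ... */
                    __mcpm_outbound_leave_critical(cluster);
            } else {
                    /* CPU-level teardown only */
            }

            __mcpm_cpu_down(cluster, cpu);
            /* enter WFI or power off */
    }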

Thanks to Nicolas Pitre and Achin Gupta for the help with this
patch.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
2013-04-24 10:37:00 -04:00
Marc Zyngier
7393b59917 ARM: KVM: abstract fault register accesses
Instead of directly accessing the fault registers, use proper accessors
so the core code can be shared.
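
For instance, a hedged sketch of such an accessor (the real accessor
names and the saved-fault layout may differ):

    /*
     * Core code reads the syndrome through an accessor rather than
     * poking the architecture's saved fault registers directly, so the
     * same caller can be shared between 32-bit and 64-bit KVM.
     */
    struct kvm_vcpu_fault_info {
            unsigned long hsr;      /* Hyp Syndrome Register */
            unsigned long hxfar;    /* faulting virtual address */
    };

    static inline unsigned long vcpu_get_hsr(const struct kvm_vcpu_fault_info *f)
    {
            return f->hsr;
    }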

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2013-03-06 15:48:42 -08:00
Will Deacon
8a4e3a9ead ARM: 7659/1: mm: make mm->context.id an atomic64_t variable
mm->context.id is updated under asid_lock when a new ASID is allocated
to an mm_struct. However, it is also read without the lock when a task
is being scheduled and checking whether or not the current ASID
generation is up-to-date.

If two threads of the same process are being scheduled in parallel and
the bottom bits of the generation in their mm->context.id match the
current generation (that is, the mm_struct has not been used for ~2^24
rollovers) then the non-atomic, lockless access to mm->context.id may
yield the incorrect ASID.

This patch fixes this issue by making mm->context.id an atomic64_t,
ensuring that the generation is always read consistently. For code that
only requires access to the ASID bits (e.g. TLB flushing by mm), the
value is accessed directly, which GCC converts to an ldrb.
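
A hedged userspace sketch of the consistent read this buys (ARM uses 8
ASID bits, leaving 24 generation bits in a 32-bit id, hence the ~2^24
figure above; the allocator itself is elided):

    #include <stdatomic.h>
    #include <stdint.h>

    #define ASID_BITS 8    /* low bits: ASID; high bits: generation */

    struct mm_context {
            _Atomic uint64_t id;
    };

    /*
     * Scheduler path: a single atomic load yields generation and ASID
     * together, so a concurrent rollover on another CPU can never be
     * observed half-written.
     */
    static int needs_new_asid(struct mm_context *ctx, uint64_t current_gen)
    {
            uint64_t id = atomic_load(&ctx->id);

            return (id >> ASID_BITS) != (current_gen >> ASID_BITS);
    }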

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-03 22:54:14 +00:00
Marc Zyngier
c7e3ba64ba ARM: KVM: arch_timers: Add timer world switch
Do the necessary save/restore dance for the timers in the world
switch code. In the process, allow the guest to read the physical
counter, which is useful for its own clock_event_device.
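
A hedged C rendering of the save half of the dance (the real code is
world-switch assembler; the read/write helpers stand in for the CP15
generic-timer accesses):

    #include <stdint.h>

    /* Placeholder accessors for the CP15 virtual-timer registers. */
    extern uint32_t read_cntv_ctl(void);
    extern uint64_t read_cntv_cval(void);
    extern void write_cntv_ctl(uint32_t v);

    struct timer_ctx {
            uint32_t cntv_ctl;     /* virtual timer control */
            uint64_t cntv_cval;    /* virtual timer compare value */
    };

    /*
     * On guest exit: stash the guest's virtual timer and disable it so
     * it cannot fire while the host runs; the physical counter remains
     * readable by the guest for its clock_event_device.
     */
    static void timer_save_state(struct timer_ctx *t)
    {
            t->cntv_ctl = read_cntv_ctl();
            t->cntv_cval = read_cntv_cval();
            write_cntv_ctl(t->cntv_ctl & ~1u);    /* clear the ENABLE bit */
    }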

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2013-02-11 19:05:38 +00:00
Marc Zyngier
348b2b0708 ARM: KVM: VGIC control interface world switch
Enable the VGIC control interface to be save-restored on world switch.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2013-02-11 19:00:03 +00:00
Christoffer Dall
f7ed45be3b KVM: ARM: World-switch implementation
Provides a complete world-switch implementation to switch to other guests
running in non-secure modes. Includes Hyp exception handlers that
capture the necessary exception information and store it on
the VCPU and KVM structures.

The following Hyp-ABI is also documented in the code:

Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
   Switching to Hyp mode is done through a simple HVC #0 instruction. The
   exception vector code will check that the HVC comes from VMID==0 and if
   so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
   - r0 contains a pointer to a HYP function
   - r1, r2, and r3 contain arguments to the above function.
   - The HYP function will be called with its arguments in r0, r1 and r2.
   On HYP function return, we return directly to SVC.

A call to a function executing in Hyp mode is performed like the following:

        <svc code>
        ldr     r0, =BSYM(my_hyp_fn)
        ldr     r1, =my_param
        hvc #0  ; Call my_hyp_fn(my_param) from HYP mode
        <svc code>
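
From C, the same convention can be wrapped roughly like this (a hedged
sketch; the kernel's real wrapper is a small assembler stub with its own
signature):

        /*
         * Place the HYP function pointer in r0 and up to three
         * arguments in r1-r3, then issue HVC #0 as described above.
         * Requires a CPU with the virtualization extensions.
         */
        static inline unsigned long call_hyp(void *hypfn, unsigned long a1,
                                             unsigned long a2, unsigned long a3)
        {
                register unsigned long r0 asm("r0") = (unsigned long)hypfn;
                register unsigned long r1 asm("r1") = a1;
                register unsigned long r2 asm("r2") = a2;
                register unsigned long r3 asm("r3") = a3;

                asm volatile("hvc #0"
                             : "+r" (r0)
                             : "r" (r1), "r" (r2), "r" (r3)
                             : "memory");
                return r0;
        }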

Otherwise, the world-switch is pretty straightforward. All state that
can be modified by the guest is first backed up on the Hyp stack and the
VCPU values are loaded onto the hardware. State which is not loaded, but
is theoretically modifiable by the guest, is protected through the
virtualization features to generate a trap and cause software emulation.
Upon guest return, all state is restored from the hardware onto the VCPU
struct and the original state is restored from the Hyp stack onto the
hardware.

SMP support using the VMPIDR calculated on the basis of the host MPIDR
and overriding the low bits with KVM vcpu_id contributed by Marc Zyngier.

Reuse of VMIDs has been implemented by Antonios Motakis and adapted from
a separate patch into the appropriate patches introducing the
functionality. Note that the VMIDs are stored per VM as required by the ARM
architecture reference manual.

To support VFP/NEON we trap those instructions using the HCPTR. When
we trap, we switch the FPU.  After a guest exit, the VFP state is
returned to the host.  When disabling access to floating point
instructions, we also mask FPEXC_EN in order to avoid the guest
receiving Undefined instruction exceptions before we have a chance to
switch back the floating point state.  We are reusing vfp_hard_struct,
so we depend on VFPv3 being enabled in the host kernel; if not, we still
trap cp10 and cp11 in order to inject an undefined instruction exception
whenever the guest tries to use VFP/NEON. VFP/NEON developed by
Antonios Motakis and Rusty Russell.

Aborts that are permission faults, and not stage-1 page table walks, do
not report the faulting address in the HPFAR.  We have to resolve the
IPA, and store it just like the HPFAR register on the VCPU struct. If
the IPA cannot be resolved, it means another CPU is playing with the
page tables, and we simply restart the guest.  This quirk was fixed by
Marc Zyngier.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-23 13:29:12 -05:00
Russell King
9fc31ddc70 ARM: Don't unconditionally bloat thread_info
There is no point reserving space at the bottom of the kernel stack for
per-thread crunch state, and per-thread VFP state if these are not being
supported by the kernel being built.  Remove these members from the
thread union when these features are disabled.
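
A hedged sketch of the shape of the change (the config symbols are the
real ones; the layout is abbreviated):

    /*
     * Reserve stack space for coprocessor state only when the kernel
     * being built actually supports that coprocessor.
     */
    struct thread_info_sketch {
            /* ... common fields ... */
    #ifdef CONFIG_CRUNCH
            struct crunch_state crunchstate;    /* ep93xx Crunch coprocessor */
    #endif
    #ifdef CONFIG_VFP
            union vfp_state vfpstate;           /* VFP register file */
    #endif
    };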

Reported-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-08-29 11:18:17 +01:00
Barry Song
91c2ebb90b ARM: 7114/1: cache-l2x0: add resume entry for l2 in secure mode
We save the l2x0 registers at first initialization, and platform code
can use them to restore the l2x0 state after wakeup.
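
A hedged sketch of the save side (the register offsets are the
documented PL310 ones; struct and function names are illustrative):

    #include <stdint.h>

    struct l2x0_saved_regs {
            uint32_t aux_ctrl;
            uint32_t tag_latency;
            uint32_t data_latency;
            /* ... remaining l2x0 configuration registers ... */
    };

    static struct l2x0_saved_regs l2x0_saved;

    /*
     * Capture the configuration once at first initialization; platform
     * resume code replays these values after wakeup (via secure
     * firmware where the controller is secure-only).
     */
    static void l2x0_save(volatile uint8_t *base)
    {
            l2x0_saved.aux_ctrl     = *(volatile uint32_t *)(base + 0x104);
            l2x0_saved.tag_latency  = *(volatile uint32_t *)(base + 0x108);
            l2x0_saved.data_latency = *(volatile uint32_t *)(base + 0x10C);
    }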

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:11:51 +01:00
Russell King
f8f2a8522a ARM: vfp: fix a hole in VFP thread migration
Fix a hole in the VFP thread migration.  Lets define two threads.

Thread 1, we'll call 'interesting_thread' which is a thread which is
running on CPU0, using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate) and gets migrated off to CPU1, where
it continues execution of VFP instructions.

Thread 2, we'll call 'new_cpu0_thread' which is the thread which takes
over on CPU0.  This has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.

The following code will be executed twice:

		cpu = thread->cpu;

		/*
		 * On SMP, if VFP is enabled, save the old state in
		 * case the thread migrates to a different CPU. The
		 * restoring is done lazily.
		 */
		if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
			vfp_save_state(vfp_current_hw_state[cpu], fpexc);
			vfp_current_hw_state[cpu]->hard.cpu = cpu;
		}
		/*
		 * Thread migration, just force the reloading of the
		 * state on the new CPU in case the VFP registers
		 * contain stale data.
		 */
		if (thread->vfpstate.hard.cpu != cpu)
			vfp_current_hw_state[cpu] = NULL;

The first execution will be on CPU0 to switch away from 'interesting_thread'.
interesting_thread->cpu will be 0.

So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
The hardware state will be saved, along with the CPU number (0) that
it was executing on.

'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
and so the thread migration check is not triggered.

This means that vfp_current_hw_state[0] remains pointing at interesting_thread.

The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
will be 1.  The previous thread executing on CPU1 is not relevant to this
so we shall ignore that.

We get to the thread migration check.  Here, we discover that
interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
now 1, indicating thread migration.  We set vfp_current_hw_state[1] to
NULL.

So, at this point vfp_current_hw_state[] contains the following:

[0] = &interesting_thread->vfpstate
[1] = NULL

Our interesting thread now executes a VFP instruction, takes a fault
which loads the state into the VFP hardware.  Now, through the assembly
we have:

[0] = &interesting_thread->vfpstate
[1] = &interesting_thread->vfpstate

CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate().

	if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
		vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);

BANG, we corrupt interesting_thread's VFP state by overwriting the
more up-to-date state saved by CPU1 with the old VFP state from CPU0.

Fix this by ensuring that we have sane semantics for the various state
describing variables:

1. vfp_current_hw_state[] points to the current owner of the context
   information stored in each CPU's hardware, or NULL if that state
   information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number
   into which the state was loaded, or NR_CPUS if no CPU owns the state.

So, for a particular CPU to be a valid owner of the VFP state for a
particular thread t, two things must be true:

 vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.

and that is valid from the moment a CPU loads the saved VFP context
into the hardware.  This gives clear and consistent semantics to
interpreting these variables.
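
The two conditions fold naturally into a single ownership test, sketched
here against the declarations quoted above:

    /*
     * A CPU validly owns thread t's VFP state in hardware only when
     * both links agree; either one alone may be stale after migration,
     * thread copy or thread flush.
     */
    static int vfp_state_in_hw(unsigned int cpu, struct thread_info *thread)
    {
            return vfp_current_hw_state[cpu] == &thread->vfpstate &&
                   thread->vfpstate.hard.cpu == cpu;
    }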

This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated; otherwise CPU0 may believe it was the last owner.  The
hole can happen thus:

- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and thread_info
  freed.
- New thread allocated from a previously running thread on CPU2, reusing
  memory for thread1 and copying vfp.hard.cpu.

At this point, the following are true:

	new_thread1->vfpstate.hard.cpu == 2
	&new_thread1->vfpstate == vfp_current_hw_state[2]

Lastly, this also addresses thread flushing in a similar way to thread
copying.  The hole is:

- thread runs on CPU0, using VFP, migrates to CPU1 but does not use VFP.
- thread calls execve(), so thread flush happens, leaving
  vfp_current_hw_state[0] intact.  This vfpstate is memset to 0 causing
  thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.

At this point, the following are true:

	thread->vfpstate.hard.cpu == 0
	&thread->vfpstate == vfp_current_hw_state[0]

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-09 17:22:12 +01:00
Russell King
f6b0fa02e8 ARM: pm: add generic CPU suspend/resume support
This adds core support for saving and restoring CPU coprocessor
registers for suspend/resume support.  This contains support for suspend
with ARM920, ARM926, SA11x0, PXA25x, PXA27x, PXA3xx, V6 and V7 CPUs.
Tested on Assabet and Tegra 2.
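
As a loose usage sketch (this reflects the later finisher-style form of
the interface; the entry point's exact signature at the time of this
commit differed):

    /*
     * Platform-supplied "finisher": actually enters the low-power state
     * and does not return on success.
     */
    static int platform_finish_suspend(unsigned long arg)
    {
            /* program the PM hardware here */
            return 1;    /* reached only if the low-power entry failed */
    }

    void platform_enter_sleep(void)
    {
            cpu_suspend(0, platform_finish_suspend);
            /* execution resumes here with coprocessor registers restored */
    }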

Tested-by: Colin Cross <ccross@android.com>
Tested-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-22 17:11:23 +00:00
Russell King
753790e713 ARM: move cache/processor/fault glue to separate include files
This allows the cache/processor/fault glue to be more easily used
from assembler code.  Tested on Assabet and Tegra 2.

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-12 11:52:21 +00:00
Nicolas Pitre
6451d7783b arm: remove machine_desc.io_pg_offst and .phys_io
Since we're now using addruart to establish the debug mapping, we can
remove the io_pg_offst and phys_io members of struct machine_desc.

The various declarations were removed using the following script:

  grep -rl MACHINE_START arch/arm | xargs \
  sed -i '/MACHINE_START/,/MACHINE_END/ { /\.\(phys_io\|io_pg_offst\)/d }'

[ Initial patch was from Jeremy Kerr, example script from Russell King ]

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Eric Miao <eric.miao at canonical.com>
2010-10-20 00:27:46 -04:00
Nicolas Pitre
df0698be14 ARM: stack protector: change the canary value per task
A new random value for the canary is stored in the task struct whenever
a new task is forked.  This is meant to allow for different canary values
per task.  On ARM, GCC expects the canary value to be found in a global
variable called __stack_chk_guard.  So this variable has to be updated
with the value stored in the task struct whenever a task switch occurs.

Because the variable GCC expects is global, this cannot work on SMP
unfortunately.  So, on SMP, the same initial canary value is kept
throughout, making this feature a bit less effective although it is still
useful.

One way to overcome this GCC limitation would be to locate the
__stack_chk_guard variable into a memory page of its own for each CPU,
and then use TLB locking to have each CPU see its own page at the same
virtual address for each of them.
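
A hedged C rendering of the per-switch update (on ARM the actual update
happens in the assembly context-switch path):

    /*
     * GCC's -fstack-protector code on ARM checks the canary against
     * this single global, which is why SMP cannot get per-task values
     * without the per-CPU-page trick described above.
     */
    unsigned long __stack_chk_guard;

    /* Called on every task switch (UP only) so the global matches the
     * incoming task's canary. */
    static inline void switch_stack_canary(struct task_struct *next)
    {
            __stack_chk_guard = next->stack_canary;
    }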

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-06-14 21:31:01 -04:00
Russell King
a9c9147eb9 ARM: dma-mapping: provide per-cpu type map/unmap functions
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:20 +00:00
Christoph Lameter
02cbe4749a arm: use kbuild.h instead of macros in asm-offsets.c
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 08:06:29 -07:00
Paul Brook
48d7927bdf Add a prefetch abort handler
This patch adds a prefetch abort handler similar to the data abort one
and renames the latter for consistency. Initial implementation by Paul
Brook with some renaming by Catalin Marinas.

Signed-off-by: Paul Brook <paul@codesourcery.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-04-18 22:43:07 +01:00
Catalin Marinas
d7f864be83 ARMv7: Add support for the ThumbEE state saving/restoring
This patch adds the detection and handling of the ThumbEE extension on
ARMv7 CPUs.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-04-18 22:43:06 +01:00
Russell King
516793c61b [ARM] ARMv6: add CPU_HAS_ASID configuration
Presently, we check for the minimum ARM architecture that we're
building for to determine whether we need ASID support.  This is
wrong - if we're going to support a range of CPUs which include
ARMv6 or higher, we need the ASID.

Convert the checks to use a new configuration symbol, and arrange
for ARMv6 and higher CPU entries to select it.
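
A hedged sketch of the check conversion on the C side (the Kconfig side
adds a CPU_HAS_ASID symbol and has each ARMv6 and later CPU entry select
it):

    /*
     * Before: keyed to the minimum architecture being built for, which
     * is wrong for kernels supporting a range of CPUs.
     */
    #if __LINUX_ARM_ARCH__ >= 6
    /* ... ASID handling ... */
    #endif

    /*
     * After: keyed to a capability symbol, so any build that merely
     * includes an ARMv6+ CPU gets ASID support.
     */
    #ifdef CONFIG_CPU_HAS_ASID
    /* ... ASID handling ... */
    #endif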

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-05-17 10:19:23 +01:00
Russell King
ee90dabcad [ARM] Include asm/elf.h instead of asm/procinfo.h
These files want to provide/access ELF hwcap information, so they
should include asm/elf.h rather than asm/procinfo.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-11-30 12:24:46 +00:00
Russell King
8799ee9f49 [ARM] Set bit 4 on section mappings correctly depending on CPU
On some CPUs, bit 4 of section mappings means "update the
cache when written to".  On others, this bit is required to
be one, and on others it's required to be zero.  Finally, on
ARMv6 and above, setting it turns on "no execute" and prevents
speculative prefetches.

With all these combinations, no one value fits all CPUs, so we
have to pick a value depending on the CPU type, and the area
we're mapping.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-06-29 18:24:21 +01:00
Lennert Buytenhek
c17fad11f3 [ARM] 3370/2: ep93xx: add crunch support
Patch from Lennert Buytenhek

Add the necessary kernel bits for crunch task switching.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-06-28 17:55:01 +01:00
Uwe Zeisberger
2ceec0c8c6 [ARM] 3517/1: move definition of PROC_INFO_SZ from procinfo.h to asm-offsets.h
Patch from Uwe Zeisberger

The symbol is only used in arch/arm/kernel/head-common.S.  This in turn
is included from arch/arm/kernel/head.S and arch/arm/kernel/head-nommu.S
which include asm-offsets.h.

Signed-off-by: Uwe Zeisberger <Uwe_Zeisberger@digi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-05-16 11:39:30 +01:00
Uwe Zeisberger
2eb9d31571 [ARM] 3496/1: more constants for asm-offsets.h
Patch from Uwe Zeisberger

added the following constants:
- MACHINFO_TYPE
- MACHINFO_NAME
- MACHINFO_PHYSIO
- MACHINFO_PGOFFIO
- PROCINFO_INITFUNC
- PROCINFO_MMUFLAGS

and removed their definition from head.S and head-nommu.S

Signed-off-by: Uwe Zeisberger <Uwe_Zeisberger@digi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-05-05 15:11:14 +01:00
Russell King
cdaabbd74b [ARM] iwmmxt thread state alignment
This patch removes the reliance of iwmmxt on hand coded alignments.
Since thread_info is always 8K aligned, specifying that fpstate is
8-byte aligned achieves the same effect without needing to resort
to hand coded alignments.
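
A hedged sketch of the attribute-based approach:

    union fp_state { unsigned char storage[144]; };  /* register image; illustrative size */

    /*
     * thread_info sits at the base of an 8K-aligned kernel stack, so an
     * 8-byte alignment attribute on the member is guaranteed to hold
     * without any hand-coded adjustment.
     */
    struct thread_info_sketch {
            /* ... */
            union fp_state fpstate __attribute__((aligned(8)));
    };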

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-03-12 22:36:06 +00:00
Andrew Morton
a136564702 [PATCH] remove gcc-2 checks
Remove various things which were checking for gcc-1.x and gcc-2.x compilers.

From: Adrian Bunk <bunk@stusta.de>

    Some documentation updates and removes some code paths for gcc < 3.2.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-08 20:14:02 -08:00
Nicolas Pitre
f09b997999 [ARM] 3060/1: allow constants found in asm/memory.h to be used in asm code
Patch from Nicolas Pitre

This patch allows for assorted types of cleanups by letting assembly code
use the same set of defines for constant values and avoid duplicated
definitions that might not always be in sync, or that might simply be
confusing due to the different names for the same thing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2005-10-29 21:44:55 +01:00
Russell King
925c8a1a8c [PATCH] ARM: pt_regs offsets
Generate pt_regs S_xx offsets from the structure itself instead
of #defining them.
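
A hedged sketch of the asm-offsets mechanism this relies on (the DEFINE
macro emits each constant into the compiler's assembly output, from
which kbuild generates a header of #defines):

    #include <stddef.h>

    struct pt_regs { unsigned long uregs[18]; };
    #define ARM_r0 uregs[0]
    #define ARM_pc uregs[15]

    /* Emit a named constant; kbuild scans for the "->" markers. */
    #define DEFINE(sym, val) \
            asm volatile("\n->" #sym " %0 " : : "i" (val))

    int main(void)    /* stand-in for the kernel's asm-offsets.c entry */
    {
            DEFINE(S_R0, offsetof(struct pt_regs, ARM_r0));
            DEFINE(S_PC, offsetof(struct pt_regs, ARM_pc));
            return 0;
    }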

Signed-off-by: Russell King <rmk@arm.linux.org.uk>
2005-04-26 15:18:59 +01:00
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00