Merge branch 'imx-cleanup' of git://git.pengutronix.de/git/imx/linux-2.6 into next/cleanup

* 'imx-cleanup' of git://git.pengutronix.de/git/imx/linux-2.6: (247 commits)
  ARM: Remove redundant ';' from avic_irq_set_priority()
  ARM: mx3: Let mx31 and mx35 share the same CCM header file
  ARM: plat-mxc: audmux-v1: Remove unneeded ifdef's

Update to v3.3-rc3

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Arnd Bergmann 2012-02-27 13:40:41 +00:00
commit 96a3eab338
256 changed files with 2058 additions and 2373 deletions


@ -102,9 +102,12 @@ X!Iinclude/linux/kobject.h
!Iinclude/linux/device.h
</sect1>
<sect1><title>Device Drivers Base</title>
!Idrivers/base/init.c
!Edrivers/base/driver.c
!Edrivers/base/core.c
!Edrivers/base/syscore.c
!Edrivers/base/class.c
!Idrivers/base/node.c
!Edrivers/base/firmware_class.c
!Edrivers/base/transport_class.c
<!-- Cannot be included, because
@ -113,13 +116,18 @@ X!Iinclude/linux/kobject.h
exceed allowed 44 characters maximum
X!Edrivers/base/attribute_container.c
-->
!Edrivers/base/sys.c
!Edrivers/base/dd.c
<!--
X!Edrivers/base/interface.c
-->
!Iinclude/linux/platform_device.h
!Edrivers/base/platform.c
!Edrivers/base/bus.c
</sect1>
<sect1><title>Device Drivers DMA Management</title>
!Edrivers/base/dma-buf.c
!Edrivers/base/dma-coherent.c
!Edrivers/base/dma-mapping.c
</sect1>
<sect1><title>Device Drivers Power Management</title>
!Edrivers/base/power/main.c
@ -219,7 +227,7 @@ X!Isound/sound_firmware.c
<chapter id="uart16x50">
<title>16x50 UART Driver</title>
!Edrivers/tty/serial/serial_core.c
!Edrivers/tty/serial/8250.c
!Edrivers/tty/serial/8250/8250.c
</chapter>
<chapter id="fbdev">


@ -17,11 +17,11 @@ reports supported by a device are also provided by sysfs in
class/input/event*/device/capabilities/, and the properties of a device are
provided in class/input/event*/device/properties.
Types:
==========
Types are groupings of codes under a logical input construct. Each type has a
set of applicable codes to be used in generating events. See the Codes section
for details on valid codes for each type.
Event types:
===========
Event types are groupings of codes under a logical input construct. Each
type has a set of applicable codes to be used in generating events. See the
Codes section for details on valid codes for each type.
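For illustration only (the device node path below is a hypothetical example), a minimal userspace reader that consumes these types and codes from an evdev node could look roughly like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
	/* Hypothetical node; substitute the event node of the device of interest. */
	int fd = open("/dev/input/event0", O_RDONLY);
	struct input_event ev;

	if (fd < 0)
		return 1;

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		/* ev.type is one of the EV_* types below; ev.code is a code within that type. */
		if (ev.type == EV_KEY)
			printf("key %u %s\n", ev.code, ev.value ? "down" : "up");
		else if (ev.type == EV_SYN)
			printf("--- event packet boundary ---\n");
	}

	close(fd);
	return 0;
}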
* EV_SYN:
- Used as markers to separate events. Events may be separated in time or in
@ -63,9 +63,9 @@ for details on valid codes for each type.
* EV_FF_STATUS:
- Used to receive force feedback device status.
Codes:
==========
Codes define the precise type of event.
Event codes:
===========
Event codes define the precise type of event.
EV_SYN:
----------
@ -220,6 +220,56 @@ EV_PWR:
EV_PWR events are a special type of event used specifically for power
management. Its usage is not well defined. To be addressed later.
Device properties:
=================
Normally, userspace sets up an input device based on the data it emits,
i.e., the event types. In the case of two devices emitting the same event
types, additional information can be provided in the form of device
properties.
INPUT_PROP_DIRECT + INPUT_PROP_POINTER:
--------------------------------------
The INPUT_PROP_DIRECT property indicates that device coordinates should be
directly mapped to screen coordinates (not taking into account trivial
transformations, such as scaling, flipping and rotating). Non-direct input
devices require non-trivial transformation, such as absolute to relative
transformation for touchpads. Typical direct input devices: touchscreens,
drawing tablets; non-direct devices: touchpads, mice.
The INPUT_PROP_POINTER property indicates that the device is not transposed
on the screen and thus requires use of an on-screen pointer to trace user's
movements. Typical pointer devices: touchpads, tablets, mice; non-pointer
device: touchscreen.
If neither INPUT_PROP_DIRECT nor INPUT_PROP_POINTER is set, the property is
considered undefined and the device type should be deduced in the
traditional way, using emitted event types.
INPUT_PROP_BUTTONPAD:
--------------------
For touchpads where the button is placed beneath the surface, such that
pressing down on the pad causes a button click, this property should be
set. Common in clickpad notebooks and macbooks from 2009 and onwards.
Originally, the buttonpad property was coded into the bcm5974 driver
version field under the name integrated button. For backwards
compatibility, both methods need to be checked in userspace.
INPUT_PROP_SEMI_MT:
------------------
Some touchpads, most common between 2008 and 2011, can detect the presence
of multiple contacts without resolving the individual positions; only the
number of contacts and a rectangular shape is known. For such
touchpads, the semi-mt property should be set.
Depending on the device, the rectangle may enclose all touches, like a
bounding box, or just some of them, for instance the two most recent
touches. The diversity makes the rectangle of limited use, but some
gestures can normally be extracted from it.
If INPUT_PROP_SEMI_MT is not set, the device is assumed to be a true MT
device.
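As a sketch of how userspace might consume these properties (the device node path is a placeholder), the property bitmap can be fetched with the EVIOCGPROP ioctl and tested bit by bit:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

static int has_prop(const unsigned long *bits, int prop)
{
	return (bits[prop / (8 * sizeof(long))] >> (prop % (8 * sizeof(long)))) & 1;
}

int main(void)
{
	unsigned long props[INPUT_PROP_CNT / (8 * sizeof(long)) + 1];
	int fd = open("/dev/input/event0", O_RDONLY);	/* hypothetical node */

	if (fd < 0)
		return 1;

	memset(props, 0, sizeof(props));
	if (ioctl(fd, EVIOCGPROP(sizeof(props)), props) < 0)
		return 1;

	if (has_prop(props, INPUT_PROP_DIRECT))
		printf("direct device (touchscreen-like)\n");
	else if (has_prop(props, INPUT_PROP_POINTER))
		printf("pointer device (touchpad/mouse-like)\n");
	else
		printf("undefined: fall back to the emitted event types\n");

	if (has_prop(props, INPUT_PROP_SEMI_MT))
		printf("semi-mt: rectangle only, not true per-contact positions\n");

	close(fd);
	return 0;
}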
Guidelines:
==========
The guidelines below ensure proper single-touch and multi-finger functionality.
@ -240,6 +290,8 @@ used to report when a touch is active on the screen.
BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
contact. BTN_TOOL_<name> events should be reported where possible.
For new hardware, INPUT_PROP_DIRECT should be set.
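A rough driver-side sketch of these touchscreen guidelines (the function name and coordinate ranges are made up) using the standard input core helpers:

#include <linux/input.h>

/* Illustrative only: advertise a direct (touchscreen) device and its events. */
static int example_ts_register(struct input_dev *dev)
{
	__set_bit(INPUT_PROP_DIRECT, dev->propbit);

	__set_bit(EV_KEY, dev->evbit);
	__set_bit(EV_ABS, dev->evbit);
	__set_bit(BTN_TOUCH, dev->keybit);

	/* Hypothetical 10-bit coordinate ranges. */
	input_set_abs_params(dev, ABS_X, 0, 1023, 0, 0);
	input_set_abs_params(dev, ABS_Y, 0, 1023, 0, 0);

	return input_register_device(dev);
}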
Trackpads:
----------
Legacy trackpads that only provide relative position information must report
@ -250,6 +302,8 @@ location of the touch. BTN_TOUCH should be used to report when a touch is active
on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
be used to report the number of touches active on the trackpad.
For new hardware, INPUT_PROP_POINTER should be set.
Tablets:
----------
BTN_TOOL_<name> events must be reported when a stylus or other tool is active on
@ -260,3 +314,5 @@ button may be used for buttons on the tablet except BTN_{MOUSE,LEFT}.
BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
purpose on the device.
For new hardware, both INPUT_PROP_DIRECT and INPUT_PROP_POINTER should be set.


@ -601,6 +601,8 @@ can be ORed together:
instead of using the one provided by the hardware.
512 - A kernel warning has occurred.
1024 - A module from drivers/staging was loaded.
2048 - The system is working around a severe firmware bug.
4096 - An out-of-tree module has been loaded.
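As a rough illustration of how these bits combine (using the values documented above), a userspace check might mask the taint value read from /proc/sys/kernel/tainted:

#include <stdio.h>

int main(void)
{
	unsigned long taint = 0;
	FILE *f = fopen("/proc/sys/kernel/tainted", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%lu", &taint) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	if (taint & 512)
		printf("a kernel warning has occurred\n");
	if (taint & 1024)
		printf("a staging module was loaded\n");
	if (taint & 2048)
		printf("working around a severe firmware bug\n");
	if (taint & 4096)
		printf("an out-of-tree module has been loaded\n");

	return 0;
}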
==============================================================


@ -159,7 +159,7 @@ S: Maintained
F: drivers/net/ethernet/realtek/r8169.c
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
W: http://serial.sourceforge.net
S: Maintained
@ -789,12 +789,6 @@ F: arch/arm/mach-mx*/
F: arch/arm/mach-imx/
F: arch/arm/plat-mxc/
ARM/FREESCALE IMX51
M: Amit Kucheria <amit.kucheria@canonical.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mx5/
ARM/FREESCALE IMX6
M: Shawn Guo <shawn.guo@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -1783,9 +1777,9 @@ X: net/wireless/wext*
CHAR and MISC DRIVERS
M: Arnd Bergmann <arnd@arndb.de>
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
S: Maintained
S: Supported
F: drivers/char/*
F: drivers/misc/*
@ -2287,7 +2281,7 @@ F: drivers/acpi/dock.c
DOCUMENTATION
M: Randy Dunlap <rdunlap@xenotime.net>
L: linux-doc@vger.kernel.org
T: quilt http://userweb.kernel.org/~rdunlap/kernel-doc-patches/current/
T: quilt http://xenotime.net/kernel-doc-patches/current/
S: Maintained
F: Documentation/
@ -2320,7 +2314,7 @@ F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6.git
S: Supported
F: Documentation/kobject.txt
@ -3992,11 +3986,11 @@ M: Rusty Russell <rusty@rustcorp.com.au>
L: lguest@lists.ozlabs.org
W: http://lguest.ozlabs.org/
S: Odd Fixes
F: Documentation/virtual/lguest/
F: arch/x86/include/asm/lguest*.h
F: arch/x86/lguest/
F: drivers/lguest/
F: include/linux/lguest*.h
F: arch/x86/include/asm/lguest*.h
F: tools/lguest/
LINUX FOR IBM pSERIES (RS/6000)
M: Paul Mackerras <paulus@au.ibm.com>
@ -4136,7 +4130,7 @@ L: linux-ntfs-dev@lists.sourceforge.net
W: http://www.linux-ntfs.org/content/view/19/37/
S: Maintained
F: Documentation/ldm.txt
F: fs/partitions/ldm.*
F: block/partitions/ldm.*
LogFS
M: Joern Engel <joern@logfs.org>
@ -5633,7 +5627,7 @@ W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
F: arch/s390/
F: drivers/s390/
F: fs/partitions/ibm.c
F: block/partitions/ibm.c
F: Documentation/s390/
F: Documentation/DocBook/s390*
@ -6276,15 +6270,15 @@ S: Maintained
F: arch/alpha/kernel/srm_env.c
STABLE BRANCH
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: stable@vger.kernel.org
S: Maintained
S: Supported
STAGING SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
L: devel@driverdev.osuosl.org
S: Maintained
S: Supported
F: drivers/staging/
STAGING - AGERE HERMES II and II.5 WIRELESS DRIVERS
@ -6396,11 +6390,6 @@ M: Omar Ramirez Luna <omar.ramirez@ti.com>
S: Odd Fixes
F: drivers/staging/tidspbridge/
STAGING - TRIDENT TVMASTER TMxxxx USB VIDEO CAPTURE DRIVERS
L: linux-media@vger.kernel.org
S: Odd Fixes
F: drivers/staging/tm6000/
STAGING - USB ENE SM/MS CARD READER DRIVER
M: Al Cho <acho@novell.com>
S: Odd Fixes
@ -6669,8 +6658,8 @@ S: Maintained
K: ^Subject:.*(?i)trivial
TTY LAYER
M: Greg Kroah-Hartman <gregkh@suse.de>
S: Maintained
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6.git
F: drivers/tty/
F: drivers/tty/serial/serial_core.c
@ -6958,7 +6947,7 @@ S: Maintained
F: drivers/usb/serial/digi_acceleport.c
USB SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
S: Supported
F: Documentation/usb/usb-serial.txt
@ -6973,9 +6962,8 @@ S: Maintained
F: drivers/usb/serial/empeg.c
USB SERIAL KEYSPAN DRIVER
M: Greg Kroah-Hartman <greg@kroah.com>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
W: http://www.kroah.com/linux/
S: Maintained
F: drivers/usb/serial/*keyspan*
@ -7003,7 +6991,7 @@ F: Documentation/video4linux/sn9c102.txt
F: drivers/media/video/sn9c102/
USB SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
W: http://www.linux-usb.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb-2.6.git
@ -7090,7 +7078,7 @@ F: fs/hppfs/
USERSPACE I/O (UIO)
M: "Hans J. Koch" <hjk@hansjkoch.de>
M: Greg Kroah-Hartman <gregkh@suse.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Maintained
F: Documentation/DocBook/uio-howto.tmpl
F: drivers/uio/


@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 3
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Saber-toothed Squirrel
# *DOCUMENTATION*


@ -198,7 +198,15 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
unsigned long addr)
{
pgtable_page_dtor(pte);
tlb_add_flush(tlb, addr);
/*
* With the classic ARM MMU, a pte page has two corresponding pmd
* entries, each covering 1MB.
*/
addr &= PMD_MASK;
tlb_add_flush(tlb, addr + SZ_1M - PAGE_SIZE);
tlb_add_flush(tlb, addr + SZ_1M);
tlb_remove_page(tlb, pte);
}


@ -790,7 +790,7 @@ __kuser_cmpxchg64: @ 0xffff0f60
smp_dmb arm
rsbs r0, r3, #0 @ set returned val and C flag
ldmfd sp!, {r4, r5, r6, r7}
bx lr
usr_ret lr
#elif !defined(CONFIG_SMP)


@ -469,6 +469,20 @@ static const unsigned armv7_a5_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};
/*
@ -579,6 +593,20 @@ static const unsigned armv7_a15_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};
/*


@ -699,10 +699,13 @@ static int vfp_set(struct task_struct *target,
{
int ret;
struct thread_info *thread = task_thread_info(target);
struct vfp_hard_struct new_vfp = thread->vfpstate.hard;
struct vfp_hard_struct new_vfp;
const size_t user_fpregs_offset = offsetof(struct user_vfp, fpregs);
const size_t user_fpscr_offset = offsetof(struct user_vfp, fpscr);
vfp_sync_hwstate(thread);
new_vfp = thread->vfpstate.hard;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&new_vfp.fpregs,
user_fpregs_offset,
@ -723,9 +726,8 @@ static int vfp_set(struct task_struct *target,
if (ret)
return ret;
vfp_sync_hwstate(thread);
thread->vfpstate.hard = new_vfp;
vfp_flush_hwstate(thread);
thread->vfpstate.hard = new_vfp;
return 0;
}


@ -227,6 +227,8 @@ static int restore_vfp_context(struct vfp_sigframe __user *frame)
if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
return -EINVAL;
vfp_flush_hwstate(thread);
/*
* Copy the floating point registers. There can be unused
* registers see asm/hwcap.h for details.
@ -251,9 +253,6 @@ static int restore_vfp_context(struct vfp_sigframe __user *frame)
__get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err);
__get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err);
if (!err)
vfp_flush_hwstate(thread);
return err ? -EFAULT : 0;
}


@ -194,6 +194,6 @@ MACHINE_START(BCMRING, "BCMRING")
.init_early = bcmring_init_early,
.init_irq = bcmring_init_irq,
.timer = &bcmring_timer,
.init_machine = bcmring_init_machine
.init_machine = bcmring_init_machine,
.restart = bcmring_restart,
MACHINE_END


@ -33,17 +33,11 @@
#include <mach/timer.h>
#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/atomic.h>
#include <linux/sched.h>
#include <mach/dma.h>
/* I don't quite understand why dc4 fails when this is set to 1 and DMA is enabled */
/* especially since dc4 doesn't use kmalloc'd memory. */
#define ALLOW_MAP_OF_KMALLOC_MEMORY 0
/* ---- Public Variables ------------------------------------------------- */
/* ---- Private Constants and Types -------------------------------------- */
@ -53,58 +47,18 @@
#define CONTROLLER_FROM_HANDLE(handle) (((handle) >> 4) & 0x0f)
#define CHANNEL_FROM_HANDLE(handle) ((handle) & 0x0f)
#define DMA_MAP_DEBUG 0
#if DMA_MAP_DEBUG
# define DMA_MAP_PRINT(fmt, args...) printk("%s: " fmt, __func__, ## args)
#else
# define DMA_MAP_PRINT(fmt, args...)
#endif
/* ---- Private Variables ------------------------------------------------ */
static DMA_Global_t gDMA;
static struct proc_dir_entry *gDmaDir;
static atomic_t gDmaStatMemTypeKmalloc = ATOMIC_INIT(0);
static atomic_t gDmaStatMemTypeVmalloc = ATOMIC_INIT(0);
static atomic_t gDmaStatMemTypeUser = ATOMIC_INIT(0);
static atomic_t gDmaStatMemTypeCoherent = ATOMIC_INIT(0);
#include "dma_device.c"
/* ---- Private Function Prototypes -------------------------------------- */
/* ---- Functions ------------------------------------------------------- */
/****************************************************************************/
/**
* Displays information for /proc/dma/mem-type
*/
/****************************************************************************/
static int dma_proc_read_mem_type(char *buf, char **start, off_t offset,
int count, int *eof, void *data)
{
int len = 0;
len += sprintf(buf + len, "dma_map_mem statistics\n");
len +=
sprintf(buf + len, "coherent: %d\n",
atomic_read(&gDmaStatMemTypeCoherent));
len +=
sprintf(buf + len, "kmalloc: %d\n",
atomic_read(&gDmaStatMemTypeKmalloc));
len +=
sprintf(buf + len, "vmalloc: %d\n",
atomic_read(&gDmaStatMemTypeVmalloc));
len +=
sprintf(buf + len, "user: %d\n",
atomic_read(&gDmaStatMemTypeUser));
return len;
}
/****************************************************************************/
/**
* Displays information for /proc/dma/channels
@ -846,8 +800,6 @@ int dma_init(void)
dma_proc_read_channels, NULL);
create_proc_read_entry("devices", 0, gDmaDir,
dma_proc_read_devices, NULL);
create_proc_read_entry("mem-type", 0, gDmaDir,
dma_proc_read_mem_type, NULL);
}
out:
@ -1565,767 +1517,3 @@ int dma_set_device_handler(DMA_Device_t dev, /* Device to set the callback for.
}
EXPORT_SYMBOL(dma_set_device_handler);
/****************************************************************************/
/**
* Initializes a memory mapping structure
*/
/****************************************************************************/
int dma_init_mem_map(DMA_MemMap_t *memMap)
{
memset(memMap, 0, sizeof(*memMap));
sema_init(&memMap->lock, 1);
return 0;
}
EXPORT_SYMBOL(dma_init_mem_map);
/****************************************************************************/
/**
* Releases any memory currently being held by a memory mapping structure.
*/
/****************************************************************************/
int dma_term_mem_map(DMA_MemMap_t *memMap)
{
down(&memMap->lock); /* Just being paranoid */
/* Free up any allocated memory */
up(&memMap->lock);
memset(memMap, 0, sizeof(*memMap));
return 0;
}
EXPORT_SYMBOL(dma_term_mem_map);
/****************************************************************************/
/**
* Looks at a memory address and categorizes it.
*
* @return One of the values from the DMA_MemType_t enumeration.
*/
/****************************************************************************/
DMA_MemType_t dma_mem_type(void *addr)
{
unsigned long addrVal = (unsigned long)addr;
if (addrVal >= CONSISTENT_BASE) {
/* NOTE: DMA virtual memory space starts at 0xFFxxxxxx */
/* dma_alloc_xxx pages are physically and virtually contiguous */
return DMA_MEM_TYPE_DMA;
}
/* Technically, we could add one more classification. Addresses between VMALLOC_END */
/* and the beginning of the DMA virtual address could be considered to be I/O space. */
/* Right now, nobody cares about this particular classification, so we ignore it. */
if (is_vmalloc_addr(addr)) {
/* Address comes from the vmalloc'd region. Pages are virtually */
/* contiguous but NOT physically contiguous */
return DMA_MEM_TYPE_VMALLOC;
}
if (addrVal >= PAGE_OFFSET) {
/* PAGE_OFFSET is typically 0xC0000000 */
/* kmalloc'd pages are physically contiguous */
return DMA_MEM_TYPE_KMALLOC;
}
return DMA_MEM_TYPE_USER;
}
EXPORT_SYMBOL(dma_mem_type);
/****************************************************************************/
/**
* Looks at a memory address and determines if we support DMA'ing to/from
* that type of memory.
*
* @return boolean -
* return value != 0 means dma supported
* return value == 0 means dma not supported
*/
/****************************************************************************/
int dma_mem_supports_dma(void *addr)
{
DMA_MemType_t memType = dma_mem_type(addr);
return (memType == DMA_MEM_TYPE_DMA)
#if ALLOW_MAP_OF_KMALLOC_MEMORY
|| (memType == DMA_MEM_TYPE_KMALLOC)
#endif
|| (memType == DMA_MEM_TYPE_USER);
}
EXPORT_SYMBOL(dma_mem_supports_dma);
/****************************************************************************/
/**
* Maps in a memory region such that it can be used for performing a DMA.
*
* @return
*/
/****************************************************************************/
int dma_map_start(DMA_MemMap_t *memMap, /* Stores state information about the map */
enum dma_data_direction dir /* Direction that the mapping will be going */
) {
int rc;
down(&memMap->lock);
DMA_MAP_PRINT("memMap: %p\n", memMap);
if (memMap->inUse) {
printk(KERN_ERR "%s: memory map %p is already being used\n",
__func__, memMap);
rc = -EBUSY;
goto out;
}
memMap->inUse = 1;
memMap->dir = dir;
memMap->numRegionsUsed = 0;
rc = 0;
out:
DMA_MAP_PRINT("returning %d", rc);
up(&memMap->lock);
return rc;
}
EXPORT_SYMBOL(dma_map_start);
/****************************************************************************/
/**
* Adds a segment of memory to a memory map. Each segment is both
* physically and virtually contiguous.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
static int dma_map_add_segment(DMA_MemMap_t *memMap, /* Stores state information about the map */
DMA_Region_t *region, /* Region that the segment belongs to */
void *virtAddr, /* Virtual address of the segment being added */
dma_addr_t physAddr, /* Physical address of the segment being added */
size_t numBytes /* Number of bytes of the segment being added */
) {
DMA_Segment_t *segment;
DMA_MAP_PRINT("memMap:%p va:%p pa:0x%x #:%d\n", memMap, virtAddr,
physAddr, numBytes);
/* Sanity check */
if (((unsigned long)virtAddr < (unsigned long)region->virtAddr)
|| (((unsigned long)virtAddr + numBytes)) >
((unsigned long)region->virtAddr + region->numBytes)) {
printk(KERN_ERR
"%s: virtAddr %p is outside region @ %p len: %d\n",
__func__, virtAddr, region->virtAddr, region->numBytes);
return -EINVAL;
}
if (region->numSegmentsUsed > 0) {
/* Check to see if this segment is physically contiguous with the previous one */
segment = &region->segment[region->numSegmentsUsed - 1];
if ((segment->physAddr + segment->numBytes) == physAddr) {
/* It is - just add on to the end */
DMA_MAP_PRINT("appending %d bytes to last segment\n",
numBytes);
segment->numBytes += numBytes;
return 0;
}
}
/* Reallocate to hold more segments, if required. */
if (region->numSegmentsUsed >= region->numSegmentsAllocated) {
DMA_Segment_t *newSegment;
size_t oldSize =
region->numSegmentsAllocated * sizeof(*newSegment);
int newAlloc = region->numSegmentsAllocated + 4;
size_t newSize = newAlloc * sizeof(*newSegment);
newSegment = kmalloc(newSize, GFP_KERNEL);
if (newSegment == NULL) {
return -ENOMEM;
}
memcpy(newSegment, region->segment, oldSize);
memset(&((uint8_t *) newSegment)[oldSize], 0,
newSize - oldSize);
kfree(region->segment);
region->numSegmentsAllocated = newAlloc;
region->segment = newSegment;
}
segment = &region->segment[region->numSegmentsUsed];
region->numSegmentsUsed++;
segment->virtAddr = virtAddr;
segment->physAddr = physAddr;
segment->numBytes = numBytes;
DMA_MAP_PRINT("returning success\n");
return 0;
}
/****************************************************************************/
/**
* Adds a region of memory to a memory map. Each region is virtually
* contiguous, but not necessarily physically contiguous.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
int dma_map_add_region(DMA_MemMap_t *memMap, /* Stores state information about the map */
void *mem, /* Virtual address that we want to get a map of */
size_t numBytes /* Number of bytes being mapped */
) {
unsigned long addr = (unsigned long)mem;
unsigned int offset;
int rc = 0;
DMA_Region_t *region;
dma_addr_t physAddr;
down(&memMap->lock);
DMA_MAP_PRINT("memMap:%p va:%p #:%d\n", memMap, mem, numBytes);
if (!memMap->inUse) {
printk(KERN_ERR "%s: Make sure you call dma_map_start first\n",
__func__);
rc = -EINVAL;
goto out;
}
/* Reallocate to hold more regions. */
if (memMap->numRegionsUsed >= memMap->numRegionsAllocated) {
DMA_Region_t *newRegion;
size_t oldSize =
memMap->numRegionsAllocated * sizeof(*newRegion);
int newAlloc = memMap->numRegionsAllocated + 4;
size_t newSize = newAlloc * sizeof(*newRegion);
newRegion = kmalloc(newSize, GFP_KERNEL);
if (newRegion == NULL) {
rc = -ENOMEM;
goto out;
}
memcpy(newRegion, memMap->region, oldSize);
memset(&((uint8_t *) newRegion)[oldSize], 0, newSize - oldSize);
kfree(memMap->region);
memMap->numRegionsAllocated = newAlloc;
memMap->region = newRegion;
}
region = &memMap->region[memMap->numRegionsUsed];
memMap->numRegionsUsed++;
offset = addr & ~PAGE_MASK;
region->memType = dma_mem_type(mem);
region->virtAddr = mem;
region->numBytes = numBytes;
region->numSegmentsUsed = 0;
region->numLockedPages = 0;
region->lockedPages = NULL;
switch (region->memType) {
case DMA_MEM_TYPE_VMALLOC:
{
atomic_inc(&gDmaStatMemTypeVmalloc);
/* printk(KERN_ERR "%s: vmalloc'd pages are not supported\n", __func__); */
/* vmalloc'd pages are not physically contiguous */
rc = -EINVAL;
break;
}
case DMA_MEM_TYPE_KMALLOC:
{
atomic_inc(&gDmaStatMemTypeKmalloc);
/* kmalloc'd pages are physically contiguous, so they'll have exactly */
/* one segment */
#if ALLOW_MAP_OF_KMALLOC_MEMORY
physAddr =
dma_map_single(NULL, mem, numBytes, memMap->dir);
rc = dma_map_add_segment(memMap, region, mem, physAddr,
numBytes);
#else
rc = -EINVAL;
#endif
break;
}
case DMA_MEM_TYPE_DMA:
{
/* dma_alloc_xxx pages are physically contiguous */
atomic_inc(&gDmaStatMemTypeCoherent);
physAddr = (vmalloc_to_pfn(mem) << PAGE_SHIFT) + offset;
dma_sync_single_for_cpu(NULL, physAddr, numBytes,
memMap->dir);
rc = dma_map_add_segment(memMap, region, mem, physAddr,
numBytes);
break;
}
case DMA_MEM_TYPE_USER:
{
size_t firstPageOffset;
size_t firstPageSize;
struct page **pages;
struct task_struct *userTask;
atomic_inc(&gDmaStatMemTypeUser);
#if 1
/* If the pages are user pages, then the dma_mem_map_set_user_task function */
/* must have been previously called. */
if (memMap->userTask == NULL) {
printk(KERN_ERR
"%s: must call dma_mem_map_set_user_task when using user-mode memory\n",
__func__);
return -EINVAL;
}
/* User pages need to be locked. */
firstPageOffset =
(unsigned long)region->virtAddr & (PAGE_SIZE - 1);
firstPageSize = PAGE_SIZE - firstPageOffset;
region->numLockedPages = (firstPageOffset
+ region->numBytes +
PAGE_SIZE - 1) / PAGE_SIZE;
pages =
kmalloc(region->numLockedPages *
sizeof(struct page *), GFP_KERNEL);
if (pages == NULL) {
region->numLockedPages = 0;
return -ENOMEM;
}
userTask = memMap->userTask;
down_read(&userTask->mm->mmap_sem);
rc = get_user_pages(userTask, /* task */
userTask->mm, /* mm */
(unsigned long)region->virtAddr, /* start */
region->numLockedPages, /* len */
memMap->dir == DMA_FROM_DEVICE, /* write */
0, /* force */
pages, /* pages (array of pointers to page) */
NULL); /* vmas */
up_read(&userTask->mm->mmap_sem);
if (rc != region->numLockedPages) {
kfree(pages);
region->numLockedPages = 0;
if (rc >= 0) {
rc = -EINVAL;
}
} else {
uint8_t *virtAddr = region->virtAddr;
size_t bytesRemaining;
int pageIdx;
rc = 0; /* Since get_user_pages returns +ve number */
region->lockedPages = pages;
/* We've locked the user pages. Now we need to walk them and figure */
/* out the physical addresses. */
/* The first page may be partial */
dma_map_add_segment(memMap,
region,
virtAddr,
PFN_PHYS(page_to_pfn
(pages[0])) +
firstPageOffset,
firstPageSize);
virtAddr += firstPageSize;
bytesRemaining =
region->numBytes - firstPageSize;
for (pageIdx = 1;
pageIdx < region->numLockedPages;
pageIdx++) {
size_t bytesThisPage =
(bytesRemaining >
PAGE_SIZE ? PAGE_SIZE :
bytesRemaining);
DMA_MAP_PRINT
("pageIdx:%d pages[pageIdx]=%p pfn=%u phys=%u\n",
pageIdx, pages[pageIdx],
page_to_pfn(pages[pageIdx]),
PFN_PHYS(page_to_pfn
(pages[pageIdx])));
dma_map_add_segment(memMap,
region,
virtAddr,
PFN_PHYS(page_to_pfn
(pages
[pageIdx])),
bytesThisPage);
virtAddr += bytesThisPage;
bytesRemaining -= bytesThisPage;
}
}
#else
printk(KERN_ERR
"%s: User mode pages are not yet supported\n",
__func__);
/* user pages are not physically contiguous */
rc = -EINVAL;
#endif
break;
}
default:
{
printk(KERN_ERR "%s: Unsupported memory type: %d\n",
__func__, region->memType);
rc = -EINVAL;
break;
}
}
if (rc != 0) {
memMap->numRegionsUsed--;
}
out:
DMA_MAP_PRINT("returning %d\n", rc);
up(&memMap->lock);
return rc;
}
EXPORT_SYMBOL(dma_map_add_segment);
/****************************************************************************/
/**
* Maps in a memory region such that it can be used for performing a DMA.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
int dma_map_mem(DMA_MemMap_t *memMap, /* Stores state information about the map */
void *mem, /* Virtual address that we want to get a map of */
size_t numBytes, /* Number of bytes being mapped */
enum dma_data_direction dir /* Direction that the mapping will be going */
) {
int rc;
rc = dma_map_start(memMap, dir);
if (rc == 0) {
rc = dma_map_add_region(memMap, mem, numBytes);
if (rc < 0) {
/* Since the add fails, this function will fail, and the caller won't */
/* call unmap, so we need to do it here. */
dma_unmap(memMap, 0);
}
}
return rc;
}
EXPORT_SYMBOL(dma_map_mem);
/****************************************************************************/
/**
* Setup a descriptor ring for a given memory map.
*
* It is assumed that the descriptor ring has already been initialized, and
* this routine will only reallocate a new descriptor ring if the existing
* one is too small.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
int dma_map_create_descriptor_ring(DMA_Device_t dev, /* DMA device (where the ring is stored) */
DMA_MemMap_t *memMap, /* Memory map that will be used */
dma_addr_t devPhysAddr /* Physical address of device */
) {
int rc;
int numDescriptors;
DMA_DeviceAttribute_t *devAttr;
DMA_Region_t *region;
DMA_Segment_t *segment;
dma_addr_t srcPhysAddr;
dma_addr_t dstPhysAddr;
int regionIdx;
int segmentIdx;
devAttr = &DMA_gDeviceAttribute[dev];
down(&memMap->lock);
/* Figure out how many descriptors we need */
numDescriptors = 0;
for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
region = &memMap->region[regionIdx];
for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
segmentIdx++) {
segment = &region->segment[segmentIdx];
if (memMap->dir == DMA_TO_DEVICE) {
srcPhysAddr = segment->physAddr;
dstPhysAddr = devPhysAddr;
} else {
srcPhysAddr = devPhysAddr;
dstPhysAddr = segment->physAddr;
}
rc =
dma_calculate_descriptor_count(dev, srcPhysAddr,
dstPhysAddr,
segment->
numBytes);
if (rc < 0) {
printk(KERN_ERR
"%s: dma_calculate_descriptor_count failed: %d\n",
__func__, rc);
goto out;
}
numDescriptors += rc;
}
}
/* Adjust the size of the ring, if it isn't big enough */
if (numDescriptors > devAttr->ring.descriptorsAllocated) {
dma_free_descriptor_ring(&devAttr->ring);
rc =
dma_alloc_descriptor_ring(&devAttr->ring,
numDescriptors);
if (rc < 0) {
printk(KERN_ERR
"%s: dma_alloc_descriptor_ring failed: %d\n",
__func__, rc);
goto out;
}
} else {
rc =
dma_init_descriptor_ring(&devAttr->ring,
numDescriptors);
if (rc < 0) {
printk(KERN_ERR
"%s: dma_init_descriptor_ring failed: %d\n",
__func__, rc);
goto out;
}
}
/* Populate the descriptors */
for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
region = &memMap->region[regionIdx];
for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
segmentIdx++) {
segment = &region->segment[segmentIdx];
if (memMap->dir == DMA_TO_DEVICE) {
srcPhysAddr = segment->physAddr;
dstPhysAddr = devPhysAddr;
} else {
srcPhysAddr = devPhysAddr;
dstPhysAddr = segment->physAddr;
}
rc =
dma_add_descriptors(&devAttr->ring, dev,
srcPhysAddr, dstPhysAddr,
segment->numBytes);
if (rc < 0) {
printk(KERN_ERR
"%s: dma_add_descriptors failed: %d\n",
__func__, rc);
goto out;
}
}
}
rc = 0;
out:
up(&memMap->lock);
return rc;
}
EXPORT_SYMBOL(dma_map_create_descriptor_ring);
/****************************************************************************/
/**
* Maps in a memory region such that it can be used for performing a DMA.
*
* @return
*/
/****************************************************************************/
int dma_unmap(DMA_MemMap_t *memMap, /* Stores state information about the map */
int dirtied /* non-zero if any of the pages were modified */
) {
int rc = 0;
int regionIdx;
int segmentIdx;
DMA_Region_t *region;
DMA_Segment_t *segment;
down(&memMap->lock);
for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
region = &memMap->region[regionIdx];
for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
segmentIdx++) {
segment = &region->segment[segmentIdx];
switch (region->memType) {
case DMA_MEM_TYPE_VMALLOC:
{
printk(KERN_ERR
"%s: vmalloc'd pages are not yet supported\n",
__func__);
rc = -EINVAL;
goto out;
}
case DMA_MEM_TYPE_KMALLOC:
{
#if ALLOW_MAP_OF_KMALLOC_MEMORY
dma_unmap_single(NULL,
segment->physAddr,
segment->numBytes,
memMap->dir);
#endif
break;
}
case DMA_MEM_TYPE_DMA:
{
dma_sync_single_for_cpu(NULL,
segment->
physAddr,
segment->
numBytes,
memMap->dir);
break;
}
case DMA_MEM_TYPE_USER:
{
/* Nothing to do here. */
break;
}
default:
{
printk(KERN_ERR
"%s: Unsupported memory type: %d\n",
__func__, region->memType);
rc = -EINVAL;
goto out;
}
}
segment->virtAddr = NULL;
segment->physAddr = 0;
segment->numBytes = 0;
}
if (region->numLockedPages > 0) {
int pageIdx;
/* Some user pages were locked. We need to go and unlock them now. */
for (pageIdx = 0; pageIdx < region->numLockedPages;
pageIdx++) {
struct page *page =
region->lockedPages[pageIdx];
if (memMap->dir == DMA_FROM_DEVICE) {
SetPageDirty(page);
}
page_cache_release(page);
}
kfree(region->lockedPages);
region->numLockedPages = 0;
region->lockedPages = NULL;
}
region->memType = DMA_MEM_TYPE_NONE;
region->virtAddr = NULL;
region->numBytes = 0;
region->numSegmentsUsed = 0;
}
memMap->userTask = NULL;
memMap->numRegionsUsed = 0;
memMap->inUse = 0;
out:
up(&memMap->lock);
return rc;
}
EXPORT_SYMBOL(dma_unmap);


@ -26,15 +26,9 @@
/* ---- Include Files ---------------------------------------------------- */
#include <linux/kernel.h>
#include <linux/wait.h>
#include <linux/semaphore.h>
#include <csp/dmacHw.h>
#include <mach/timer.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/pagemap.h>
/* ---- Constants and Types ---------------------------------------------- */
@ -111,78 +105,6 @@ typedef struct {
} DMA_DescriptorRing_t;
/****************************************************************************
*
* The DMA_MemType_t and DMA_MemMap_t are helper structures used to setup
* DMA chains from a variety of memory sources.
*
*****************************************************************************/
#define DMA_MEM_MAP_MIN_SIZE 4096 /* Pages less than this size are better */
/* off not being DMA'd. */
typedef enum {
DMA_MEM_TYPE_NONE, /* Not a valid setting */
DMA_MEM_TYPE_VMALLOC, /* Memory came from vmalloc call */
DMA_MEM_TYPE_KMALLOC, /* Memory came from kmalloc call */
DMA_MEM_TYPE_DMA, /* Memory came from dma_alloc_xxx call */
DMA_MEM_TYPE_USER, /* Memory came from user space. */
} DMA_MemType_t;
/* A segment represents a physically and virtually contiguous chunk of memory. */
/* i.e. each segment can be DMA'd */
/* A user of the DMA code will add memory regions. Each region may need to be */
/* represented by one or more segments. */
typedef struct {
void *virtAddr; /* Virtual address used for this segment */
dma_addr_t physAddr; /* Physical address this segment maps to */
size_t numBytes; /* Size of the segment, in bytes */
} DMA_Segment_t;
/* A region represents a virtually contiguous chunk of memory, which may be */
/* made up of multiple segments. */
typedef struct {
DMA_MemType_t memType;
void *virtAddr;
size_t numBytes;
/* Each region (virtually contiguous) consists of one or more segments. Each */
/* segment is virtually and physically contiguous. */
int numSegmentsUsed;
int numSegmentsAllocated;
DMA_Segment_t *segment;
/* When a region corresponds to user memory, we need to lock all of the pages */
/* down before we can figure out the physical addresses. The lockedPage array contains */
/* the pages that were locked, and which subsequently need to be unlocked once the */
/* memory is unmapped. */
unsigned numLockedPages;
struct page **lockedPages;
} DMA_Region_t;
typedef struct {
int inUse; /* Is this mapping currently being used? */
struct semaphore lock; /* Acquired when using this structure */
enum dma_data_direction dir; /* Direction this transfer is intended for */
/* In the event that we're mapping user memory, we need to know which task */
/* the memory is for, so that we can obtain the correct mm locks. */
struct task_struct *userTask;
int numRegionsUsed;
int numRegionsAllocated;
DMA_Region_t *region;
} DMA_MemMap_t;
/****************************************************************************
*
* The DMA_DeviceAttribute_t contains information which describes a
@ -568,124 +490,6 @@ int dma_alloc_double_dst_descriptors(DMA_Handle_t handle, /* DMA Handle */
size_t numBytes /* Number of bytes in each destination buffer */
);
/****************************************************************************/
/**
* Initializes a DMA_MemMap_t data structure
*/
/****************************************************************************/
int dma_init_mem_map(DMA_MemMap_t *memMap /* Stores state information about the map */
);
/****************************************************************************/
/**
* Releases any memory currently being held by a memory mapping structure.
*/
/****************************************************************************/
int dma_term_mem_map(DMA_MemMap_t *memMap /* Stores state information about the map */
);
/****************************************************************************/
/**
* Looks at a memory address and categorizes it.
*
* @return One of the values from the DMA_MemType_t enumeration.
*/
/****************************************************************************/
DMA_MemType_t dma_mem_type(void *addr);
/****************************************************************************/
/**
* Sets the process (aka userTask) associated with a mem map. This is
* required if user-mode segments will be added to the mapping.
*/
/****************************************************************************/
static inline void dma_mem_map_set_user_task(DMA_MemMap_t *memMap,
struct task_struct *task)
{
memMap->userTask = task;
}
/****************************************************************************/
/**
* Looks at a memory address and determines if we support DMA'ing to/from
* that type of memory.
*
* @return boolean -
* return value != 0 means dma supported
* return value == 0 means dma not supported
*/
/****************************************************************************/
int dma_mem_supports_dma(void *addr);
/****************************************************************************/
/**
* Initializes a memory map for use. Since this function acquires a
* semaphore within the memory map, it is VERY important that dma_unmap
* be called when you're finished using the map.
*/
/****************************************************************************/
int dma_map_start(DMA_MemMap_t *memMap, /* Stores state information about the map */
enum dma_data_direction dir /* Direction that the mapping will be going */
);
/****************************************************************************/
/**
* Adds a segment of memory to a memory map.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
int dma_map_add_region(DMA_MemMap_t *memMap, /* Stores state information about the map */
void *mem, /* Virtual address that we want to get a map of */
size_t numBytes /* Number of bytes being mapped */
);
/****************************************************************************/
/**
* Creates a descriptor ring from a memory mapping.
*
* @return 0 on success, error code otherwise.
*/
/****************************************************************************/
int dma_map_create_descriptor_ring(DMA_Device_t dev, /* DMA device (where the ring is stored) */
DMA_MemMap_t *memMap, /* Memory map that will be used */
dma_addr_t devPhysAddr /* Physical address of device */
);
/****************************************************************************/
/**
* Maps in a memory region such that it can be used for performing a DMA.
*
* @return
*/
/****************************************************************************/
int dma_map_mem(DMA_MemMap_t *memMap, /* Stores state information about the map */
void *addr, /* Virtual address that we want to get a map of */
size_t count, /* Number of bytes being mapped */
enum dma_data_direction dir /* Direction that the mapping will be going */
);
/****************************************************************************/
/**
* Maps in a memory region such that it can be used for performing a DMA.
*
* @return
*/
/****************************************************************************/
int dma_unmap(DMA_MemMap_t *memMap, /* Stores state information about the map */
int dirtied /* non-zero if any of the pages were modified */
);
/****************************************************************************/
/**
* Initiates a transfer when the descriptors have already been setup.


@ -44,7 +44,7 @@
#include <mach/aemif.h>
#include <mach/spi.h>
#define DA850_EVM_PHY_ID "0:00"
#define DA850_EVM_PHY_ID "davinci_mdio-0:00"
#define DA850_LCD_PWR_PIN GPIO_TO_PIN(2, 8)
#define DA850_LCD_BL_PIN GPIO_TO_PIN(2, 15)


@ -54,7 +54,7 @@ static inline int have_tvp7002(void)
return 0;
}
#define DM365_EVM_PHY_ID "0:01"
#define DM365_EVM_PHY_ID "davinci_mdio-0:01"
/*
* A MAX-II CPLD is used for various board control functions.
*/


@ -40,7 +40,7 @@
#include <mach/usb.h>
#include <mach/aemif.h>
#define DM644X_EVM_PHY_ID "0:01"
#define DM644X_EVM_PHY_ID "davinci_mdio-0:01"
#define LXT971_PHY_ID (0x001378e2)
#define LXT971_PHY_MASK (0xfffffff0)


@ -736,7 +736,7 @@ static struct davinci_uart_config uart_config __initdata = {
.enabled_uarts = (1 << 0),
};
#define DM646X_EVM_PHY_ID "0:01"
#define DM646X_EVM_PHY_ID "davinci_mdio-0:01"
/*
* The following EDMA channels/slots are not being used by drivers (for
* example: Timer, GPIO, UART events etc) on dm646x, hence they are being


@ -39,7 +39,7 @@
#include <mach/mmc.h>
#include <mach/usb.h>
#define NEUROS_OSD2_PHY_ID "0:01"
#define NEUROS_OSD2_PHY_ID "davinci_mdio-0:01"
#define LXT971_PHY_ID 0x001378e2
#define LXT971_PHY_MASK 0xfffffff0


@ -21,7 +21,7 @@
#include <mach/da8xx.h>
#include <mach/mux.h>
#define HAWKBOARD_PHY_ID "0:07"
#define HAWKBOARD_PHY_ID "davinci_mdio-0:07"
#define DA850_HAWK_MMCSD_CD_PIN GPIO_TO_PIN(3, 12)
#define DA850_HAWK_MMCSD_WP_PIN GPIO_TO_PIN(3, 13)


@ -42,7 +42,7 @@
#include <mach/mux.h>
#include <mach/usb.h>
#define SFFSDR_PHY_ID "0:01"
#define SFFSDR_PHY_ID "davinci_mdio-0:01"
static struct mtd_partition davinci_sffsdr_nandflash_partition[] = {
/* U-Boot Environment: Block 0
* UBL: Block 1


@ -153,34 +153,6 @@ static struct clk pll1_sysclk3 = {
.div_reg = PLLDIV3,
};
static struct clk pll1_sysclk4 = {
.name = "pll1_sysclk4",
.parent = &pll1_clk,
.flags = CLK_PLL,
.div_reg = PLLDIV4,
};
static struct clk pll1_sysclk5 = {
.name = "pll1_sysclk5",
.parent = &pll1_clk,
.flags = CLK_PLL,
.div_reg = PLLDIV5,
};
static struct clk pll1_sysclk6 = {
.name = "pll0_sysclk6",
.parent = &pll0_clk,
.flags = CLK_PLL,
.div_reg = PLLDIV6,
};
static struct clk pll1_sysclk7 = {
.name = "pll1_sysclk7",
.parent = &pll1_clk,
.flags = CLK_PLL,
.div_reg = PLLDIV7,
};
static struct clk i2c0_clk = {
.name = "i2c0",
.parent = &pll0_aux_clk,
@ -397,10 +369,6 @@ static struct clk_lookup da850_clks[] = {
CLK(NULL, "pll1_aux", &pll1_aux_clk),
CLK(NULL, "pll1_sysclk2", &pll1_sysclk2),
CLK(NULL, "pll1_sysclk3", &pll1_sysclk3),
CLK(NULL, "pll1_sysclk4", &pll1_sysclk4),
CLK(NULL, "pll1_sysclk5", &pll1_sysclk5),
CLK(NULL, "pll1_sysclk6", &pll1_sysclk6),
CLK(NULL, "pll1_sysclk7", &pll1_sysclk7),
CLK("i2c_davinci.1", NULL, &i2c0_clk),
CLK(NULL, "timer0", &timerp64_0_clk),
CLK("watchdog", NULL, &timerp64_1_clk),


@ -32,7 +32,7 @@
#include <mach/mx31.h>
#include <mach/common.h>
#include "crmregs-imx31.h"
#include "crmregs-imx3.h"
#define PRE_DIV_MIN_FREQ 10000000 /* Minimum Frequency after Predivider */


@ -27,23 +27,7 @@
#include <mach/hardware.h>
#include <mach/common.h>
#define CCM_BASE MX35_IO_ADDRESS(MX35_CCM_BASE_ADDR)
#define CCM_CCMR 0x00
#define CCM_PDR0 0x04
#define CCM_PDR1 0x08
#define CCM_PDR2 0x0C
#define CCM_PDR3 0x10
#define CCM_PDR4 0x14
#define CCM_RCSR 0x18
#define CCM_MPCTL 0x1C
#define CCM_PPCTL 0x20
#define CCM_ACMR 0x24
#define CCM_COSR 0x28
#define CCM_CGR0 0x2C
#define CCM_CGR1 0x30
#define CCM_CGR2 0x34
#define CCM_CGR3 0x38
#include "crmregs-imx3.h"
#ifdef HAVE_SET_RATE_SUPPORT
static void calc_dividers(u32 div, u32 *pre, u32 *post, u32 maxpost)
@ -111,14 +95,14 @@ static void calc_dividers_3_3(u32 div, u32 *pre, u32 *post)
static unsigned long get_rate_mpll(void)
{
ulong mpctl = __raw_readl(CCM_BASE + CCM_MPCTL);
ulong mpctl = __raw_readl(MX35_CCM_MPCTL);
return mxc_decode_pll(mpctl, 24000000);
}
static unsigned long get_rate_ppll(void)
{
ulong ppctl = __raw_readl(CCM_BASE + CCM_PPCTL);
ulong ppctl = __raw_readl(MX35_CCM_PPCTL);
return mxc_decode_pll(ppctl, 24000000);
}
@ -148,7 +132,7 @@ static struct arm_ahb_div clk_consumer[] = {
static unsigned long get_rate_arm(void)
{
unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0);
unsigned long pdr0 = __raw_readl(MXC_CCM_PDR0);
struct arm_ahb_div *aad;
unsigned long fref = get_rate_mpll();
@ -161,7 +145,7 @@ static unsigned long get_rate_arm(void)
static unsigned long get_rate_ahb(struct clk *clk)
{
unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0);
unsigned long pdr0 = __raw_readl(MXC_CCM_PDR0);
struct arm_ahb_div *aad;
unsigned long fref = get_rate_arm();
@ -177,8 +161,8 @@ static unsigned long get_rate_ipg(struct clk *clk)
static unsigned long get_rate_uart(struct clk *clk)
{
unsigned long pdr3 = __raw_readl(CCM_BASE + CCM_PDR3);
unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4);
unsigned long pdr3 = __raw_readl(MX35_CCM_PDR3);
unsigned long pdr4 = __raw_readl(MX35_CCM_PDR4);
unsigned long div = ((pdr4 >> 10) & 0x3f) + 1;
if (pdr3 & (1 << 14))
@ -189,7 +173,7 @@ static unsigned long get_rate_uart(struct clk *clk)
static unsigned long get_rate_sdhc(struct clk *clk)
{
unsigned long pdr3 = __raw_readl(CCM_BASE + CCM_PDR3);
unsigned long pdr3 = __raw_readl(MX35_CCM_PDR3);
unsigned long div, rate;
if (pdr3 & (1 << 6))
@ -215,7 +199,7 @@ static unsigned long get_rate_sdhc(struct clk *clk)
static unsigned long get_rate_mshc(struct clk *clk)
{
unsigned long pdr1 = __raw_readl(CCM_BASE + CCM_PDR1);
unsigned long pdr1 = __raw_readl(MXC_CCM_PDR1);
unsigned long div1, div2, rate;
if (pdr1 & (1 << 7))
@ -231,7 +215,7 @@ static unsigned long get_rate_mshc(struct clk *clk)
static unsigned long get_rate_ssi(struct clk *clk)
{
unsigned long pdr2 = __raw_readl(CCM_BASE + CCM_PDR2);
unsigned long pdr2 = __raw_readl(MX35_CCM_PDR2);
unsigned long div1, div2, rate;
if (pdr2 & (1 << 6))
@ -256,7 +240,7 @@ static unsigned long get_rate_ssi(struct clk *clk)
static unsigned long get_rate_csi(struct clk *clk)
{
unsigned long pdr2 = __raw_readl(CCM_BASE + CCM_PDR2);
unsigned long pdr2 = __raw_readl(MX35_CCM_PDR2);
unsigned long rate;
if (pdr2 & (1 << 7))
@ -269,7 +253,7 @@ static unsigned long get_rate_csi(struct clk *clk)
static unsigned long get_rate_otg(struct clk *clk)
{
unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4);
unsigned long pdr4 = __raw_readl(MX35_CCM_PDR4);
unsigned long rate;
if (pdr4 & (1 << 9))
@ -282,8 +266,8 @@ static unsigned long get_rate_otg(struct clk *clk)
static unsigned long get_rate_ipg_per(struct clk *clk)
{
unsigned long pdr0 = __raw_readl(CCM_BASE + CCM_PDR0);
unsigned long pdr4 = __raw_readl(CCM_BASE + CCM_PDR4);
unsigned long pdr0 = __raw_readl(MXC_CCM_PDR0);
unsigned long pdr4 = __raw_readl(MX35_CCM_PDR4);
unsigned long div;
if (pdr0 & (1 << 26)) {
@ -297,7 +281,7 @@ static unsigned long get_rate_ipg_per(struct clk *clk)
static unsigned long get_rate_hsp(struct clk *clk)
{
unsigned long hsp_podf = (__raw_readl(CCM_BASE + CCM_PDR0) >> 20) & 0x03;
unsigned long hsp_podf = (__raw_readl(MXC_CCM_PDR0) >> 20) & 0x03;
unsigned long fref = get_rate_mpll();
if (fref > 400 * 1000 * 1000) {
@ -345,7 +329,7 @@ static void clk_cgr_disable(struct clk *clk)
#define DEFINE_CLOCK(name, i, er, es, gr, sr) \
static struct clk name = { \
.id = i, \
.enable_reg = CCM_BASE + er, \
.enable_reg = er, \
.enable_shift = es, \
.get_rate = gr, \
.set_rate = sr, \
@ -353,59 +337,59 @@ static void clk_cgr_disable(struct clk *clk)
.disable = clk_cgr_disable, \
}
DEFINE_CLOCK(asrc_clk, 0, CCM_CGR0, 0, NULL, NULL);
DEFINE_CLOCK(pata_clk, 0, CCM_CGR0, 2, get_rate_ipg, NULL);
/* DEFINE_CLOCK(audmux_clk, 0, CCM_CGR0, 4, NULL, NULL); */
DEFINE_CLOCK(can1_clk, 0, CCM_CGR0, 6, get_rate_ipg, NULL);
DEFINE_CLOCK(can2_clk, 1, CCM_CGR0, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(cspi1_clk, 0, CCM_CGR0, 10, get_rate_ipg, NULL);
DEFINE_CLOCK(cspi2_clk, 1, CCM_CGR0, 12, get_rate_ipg, NULL);
DEFINE_CLOCK(ect_clk, 0, CCM_CGR0, 14, get_rate_ipg, NULL);
DEFINE_CLOCK(edio_clk, 0, CCM_CGR0, 16, NULL, NULL);
DEFINE_CLOCK(emi_clk, 0, CCM_CGR0, 18, get_rate_ipg, NULL);
DEFINE_CLOCK(epit1_clk, 0, CCM_CGR0, 20, get_rate_ipg, NULL);
DEFINE_CLOCK(epit2_clk, 1, CCM_CGR0, 22, get_rate_ipg, NULL);
DEFINE_CLOCK(esai_clk, 0, CCM_CGR0, 24, NULL, NULL);
DEFINE_CLOCK(esdhc1_clk, 0, CCM_CGR0, 26, get_rate_sdhc, NULL);
DEFINE_CLOCK(esdhc2_clk, 1, CCM_CGR0, 28, get_rate_sdhc, NULL);
DEFINE_CLOCK(esdhc3_clk, 2, CCM_CGR0, 30, get_rate_sdhc, NULL);
DEFINE_CLOCK(asrc_clk, 0, MX35_CCM_CGR0, 0, NULL, NULL);
DEFINE_CLOCK(pata_clk, 0, MX35_CCM_CGR0, 2, get_rate_ipg, NULL);
/* DEFINE_CLOCK(audmux_clk, 0, MX35_CCM_CGR0, 4, NULL, NULL); */
DEFINE_CLOCK(can1_clk, 0, MX35_CCM_CGR0, 6, get_rate_ipg, NULL);
DEFINE_CLOCK(can2_clk, 1, MX35_CCM_CGR0, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(cspi1_clk, 0, MX35_CCM_CGR0, 10, get_rate_ipg, NULL);
DEFINE_CLOCK(cspi2_clk, 1, MX35_CCM_CGR0, 12, get_rate_ipg, NULL);
DEFINE_CLOCK(ect_clk, 0, MX35_CCM_CGR0, 14, get_rate_ipg, NULL);
DEFINE_CLOCK(edio_clk, 0, MX35_CCM_CGR0, 16, NULL, NULL);
DEFINE_CLOCK(emi_clk, 0, MX35_CCM_CGR0, 18, get_rate_ipg, NULL);
DEFINE_CLOCK(epit1_clk, 0, MX35_CCM_CGR0, 20, get_rate_ipg, NULL);
DEFINE_CLOCK(epit2_clk, 1, MX35_CCM_CGR0, 22, get_rate_ipg, NULL);
DEFINE_CLOCK(esai_clk, 0, MX35_CCM_CGR0, 24, NULL, NULL);
DEFINE_CLOCK(esdhc1_clk, 0, MX35_CCM_CGR0, 26, get_rate_sdhc, NULL);
DEFINE_CLOCK(esdhc2_clk, 1, MX35_CCM_CGR0, 28, get_rate_sdhc, NULL);
DEFINE_CLOCK(esdhc3_clk, 2, MX35_CCM_CGR0, 30, get_rate_sdhc, NULL);
DEFINE_CLOCK(fec_clk, 0, CCM_CGR1, 0, get_rate_ipg, NULL);
DEFINE_CLOCK(gpio1_clk, 0, CCM_CGR1, 2, NULL, NULL);
DEFINE_CLOCK(gpio2_clk, 1, CCM_CGR1, 4, NULL, NULL);
DEFINE_CLOCK(gpio3_clk, 2, CCM_CGR1, 6, NULL, NULL);
DEFINE_CLOCK(gpt_clk, 0, CCM_CGR1, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(i2c1_clk, 0, CCM_CGR1, 10, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c2_clk, 1, CCM_CGR1, 12, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c3_clk, 2, CCM_CGR1, 14, get_rate_ipg_per, NULL);
DEFINE_CLOCK(iomuxc_clk, 0, CCM_CGR1, 16, NULL, NULL);
DEFINE_CLOCK(ipu_clk, 0, CCM_CGR1, 18, get_rate_hsp, NULL);
DEFINE_CLOCK(kpp_clk, 0, CCM_CGR1, 20, get_rate_ipg, NULL);
DEFINE_CLOCK(mlb_clk, 0, CCM_CGR1, 22, get_rate_ahb, NULL);
DEFINE_CLOCK(mshc_clk, 0, CCM_CGR1, 24, get_rate_mshc, NULL);
DEFINE_CLOCK(owire_clk, 0, CCM_CGR1, 26, get_rate_ipg_per, NULL);
DEFINE_CLOCK(pwm_clk, 0, CCM_CGR1, 28, get_rate_ipg_per, NULL);
DEFINE_CLOCK(rngc_clk, 0, CCM_CGR1, 30, get_rate_ipg, NULL);
DEFINE_CLOCK(fec_clk, 0, MX35_CCM_CGR1, 0, get_rate_ipg, NULL);
DEFINE_CLOCK(gpio1_clk, 0, MX35_CCM_CGR1, 2, NULL, NULL);
DEFINE_CLOCK(gpio2_clk, 1, MX35_CCM_CGR1, 4, NULL, NULL);
DEFINE_CLOCK(gpio3_clk, 2, MX35_CCM_CGR1, 6, NULL, NULL);
DEFINE_CLOCK(gpt_clk, 0, MX35_CCM_CGR1, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(i2c1_clk, 0, MX35_CCM_CGR1, 10, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c2_clk, 1, MX35_CCM_CGR1, 12, get_rate_ipg_per, NULL);
DEFINE_CLOCK(i2c3_clk, 2, MX35_CCM_CGR1, 14, get_rate_ipg_per, NULL);
DEFINE_CLOCK(iomuxc_clk, 0, MX35_CCM_CGR1, 16, NULL, NULL);
DEFINE_CLOCK(ipu_clk, 0, MX35_CCM_CGR1, 18, get_rate_hsp, NULL);
DEFINE_CLOCK(kpp_clk, 0, MX35_CCM_CGR1, 20, get_rate_ipg, NULL);
DEFINE_CLOCK(mlb_clk, 0, MX35_CCM_CGR1, 22, get_rate_ahb, NULL);
DEFINE_CLOCK(mshc_clk, 0, MX35_CCM_CGR1, 24, get_rate_mshc, NULL);
DEFINE_CLOCK(owire_clk, 0, MX35_CCM_CGR1, 26, get_rate_ipg_per, NULL);
DEFINE_CLOCK(pwm_clk, 0, MX35_CCM_CGR1, 28, get_rate_ipg_per, NULL);
DEFINE_CLOCK(rngc_clk, 0, MX35_CCM_CGR1, 30, get_rate_ipg, NULL);
DEFINE_CLOCK(rtc_clk, 0, CCM_CGR2, 0, get_rate_ipg, NULL);
DEFINE_CLOCK(rtic_clk, 0, CCM_CGR2, 2, get_rate_ahb, NULL);
DEFINE_CLOCK(scc_clk, 0, CCM_CGR2, 4, get_rate_ipg, NULL);
DEFINE_CLOCK(sdma_clk, 0, CCM_CGR2, 6, NULL, NULL);
DEFINE_CLOCK(spba_clk, 0, CCM_CGR2, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(spdif_clk, 0, CCM_CGR2, 10, NULL, NULL);
DEFINE_CLOCK(ssi1_clk, 0, CCM_CGR2, 12, get_rate_ssi, NULL);
DEFINE_CLOCK(ssi2_clk, 1, CCM_CGR2, 14, get_rate_ssi, NULL);
DEFINE_CLOCK(uart1_clk, 0, CCM_CGR2, 16, get_rate_uart, NULL);
DEFINE_CLOCK(uart2_clk, 1, CCM_CGR2, 18, get_rate_uart, NULL);
DEFINE_CLOCK(uart3_clk, 2, CCM_CGR2, 20, get_rate_uart, NULL);
DEFINE_CLOCK(usbotg_clk, 0, CCM_CGR2, 22, get_rate_otg, NULL);
DEFINE_CLOCK(wdog_clk, 0, CCM_CGR2, 24, NULL, NULL);
DEFINE_CLOCK(max_clk, 0, CCM_CGR2, 26, NULL, NULL);
DEFINE_CLOCK(audmux_clk, 0, CCM_CGR2, 30, NULL, NULL);
DEFINE_CLOCK(rtc_clk, 0, MX35_CCM_CGR2, 0, get_rate_ipg, NULL);
DEFINE_CLOCK(rtic_clk, 0, MX35_CCM_CGR2, 2, get_rate_ahb, NULL);
DEFINE_CLOCK(scc_clk, 0, MX35_CCM_CGR2, 4, get_rate_ipg, NULL);
DEFINE_CLOCK(sdma_clk, 0, MX35_CCM_CGR2, 6, NULL, NULL);
DEFINE_CLOCK(spba_clk, 0, MX35_CCM_CGR2, 8, get_rate_ipg, NULL);
DEFINE_CLOCK(spdif_clk, 0, MX35_CCM_CGR2, 10, NULL, NULL);
DEFINE_CLOCK(ssi1_clk, 0, MX35_CCM_CGR2, 12, get_rate_ssi, NULL);
DEFINE_CLOCK(ssi2_clk, 1, MX35_CCM_CGR2, 14, get_rate_ssi, NULL);
DEFINE_CLOCK(uart1_clk, 0, MX35_CCM_CGR2, 16, get_rate_uart, NULL);
DEFINE_CLOCK(uart2_clk, 1, MX35_CCM_CGR2, 18, get_rate_uart, NULL);
DEFINE_CLOCK(uart3_clk, 2, MX35_CCM_CGR2, 20, get_rate_uart, NULL);
DEFINE_CLOCK(usbotg_clk, 0, MX35_CCM_CGR2, 22, get_rate_otg, NULL);
DEFINE_CLOCK(wdog_clk, 0, MX35_CCM_CGR2, 24, NULL, NULL);
DEFINE_CLOCK(max_clk, 0, MX35_CCM_CGR2, 26, NULL, NULL);
DEFINE_CLOCK(audmux_clk, 0, MX35_CCM_CGR2, 30, NULL, NULL);
DEFINE_CLOCK(csi_clk, 0, CCM_CGR3, 0, get_rate_csi, NULL);
DEFINE_CLOCK(iim_clk, 0, CCM_CGR3, 2, NULL, NULL);
DEFINE_CLOCK(gpu2d_clk, 0, CCM_CGR3, 4, NULL, NULL);
DEFINE_CLOCK(csi_clk, 0, MX35_CCM_CGR3, 0, get_rate_csi, NULL);
DEFINE_CLOCK(iim_clk, 0, MX35_CCM_CGR3, 2, NULL, NULL);
DEFINE_CLOCK(gpu2d_clk, 0, MX35_CCM_CGR3, 4, NULL, NULL);
DEFINE_CLOCK(usbahb_clk, 0, 0, 0, get_rate_ahb, NULL);
@ -422,7 +406,7 @@ static unsigned long get_rate_nfc(struct clk *clk)
{
unsigned long div1;
div1 = (__raw_readl(CCM_BASE + CCM_PDR4) >> 28) + 1;
div1 = (__raw_readl(MX35_CCM_PDR4) >> 28) + 1;
return get_rate_ahb(NULL) / div1;
}
@ -518,11 +502,11 @@ int __init mx35_clocks_init()
/* Turn off all clocks except the ones we need to survive, namely:
* EMI, GPIO1/2/3, GPT, IOMUX, MAX and eventually uart
*/
__raw_writel((3 << 18), CCM_BASE + CCM_CGR0);
__raw_writel((3 << 18), MX35_CCM_CGR0);
__raw_writel((3 << 2) | (3 << 4) | (3 << 6) | (3 << 8) | (3 << 16),
CCM_BASE + CCM_CGR1);
__raw_writel(cgr2, CCM_BASE + CCM_CGR2);
__raw_writel(0, CCM_BASE + CCM_CGR3);
MX35_CCM_CGR1);
__raw_writel(cgr2, MX35_CCM_CGR2);
__raw_writel(0, MX35_CCM_CGR3);
clk_enable(&iim_clk);
imx_print_silicon_rev("i.MX35", mx35_revision());
@ -533,7 +517,7 @@ int __init mx35_clocks_init()
* extra clocks turned on, otherwise the MX35 boot ROM code will
* hang after a watchdog reset.
*/
if (!(__raw_readl(CCM_BASE + CCM_RCSR) & (3 << 10))) {
if (!(__raw_readl(MX35_CCM_RCSR) & (3 << 10))) {
/* Additionally turn on UART1, SCC, and IIM clocks */
clk_enable(&iim_clk);
clk_enable(&uart1_clk);


@ -24,23 +24,36 @@
#define CKIH_CLK_FREQ_27MHZ 27000000
#define CKIL_CLK_FREQ 32768
#define MXC_CCM_BASE MX31_IO_ADDRESS(MX31_CCM_BASE_ADDR)
#define MXC_CCM_BASE (cpu_is_mx31() ? \
MX31_IO_ADDRESS(MX31_CCM_BASE_ADDR) : MX35_IO_ADDRESS(MX35_CCM_BASE_ADDR))
/* Register addresses */
#define MXC_CCM_CCMR (MXC_CCM_BASE + 0x00)
#define MXC_CCM_PDR0 (MXC_CCM_BASE + 0x04)
#define MXC_CCM_PDR1 (MXC_CCM_BASE + 0x08)
#define MX35_CCM_PDR2 (MXC_CCM_BASE + 0x0C)
#define MXC_CCM_RCSR (MXC_CCM_BASE + 0x0C)
#define MX35_CCM_PDR3 (MXC_CCM_BASE + 0x10)
#define MXC_CCM_MPCTL (MXC_CCM_BASE + 0x10)
#define MX35_CCM_PDR4 (MXC_CCM_BASE + 0x14)
#define MXC_CCM_UPCTL (MXC_CCM_BASE + 0x14)
#define MX35_CCM_RCSR (MXC_CCM_BASE + 0x18)
#define MXC_CCM_SRPCTL (MXC_CCM_BASE + 0x18)
#define MX35_CCM_MPCTL (MXC_CCM_BASE + 0x1C)
#define MXC_CCM_COSR (MXC_CCM_BASE + 0x1C)
#define MX35_CCM_PPCTL (MXC_CCM_BASE + 0x20)
#define MXC_CCM_CGR0 (MXC_CCM_BASE + 0x20)
#define MX35_CCM_ACMR (MXC_CCM_BASE + 0x24)
#define MXC_CCM_CGR1 (MXC_CCM_BASE + 0x24)
#define MX35_CCM_COSR (MXC_CCM_BASE + 0x28)
#define MXC_CCM_CGR2 (MXC_CCM_BASE + 0x28)
#define MX35_CCM_CGR0 (MXC_CCM_BASE + 0x2C)
#define MXC_CCM_WIMR (MXC_CCM_BASE + 0x2C)
#define MX35_CCM_CGR1 (MXC_CCM_BASE + 0x30)
#define MXC_CCM_LDC (MXC_CCM_BASE + 0x30)
#define MX35_CCM_CGR2 (MXC_CCM_BASE + 0x34)
#define MXC_CCM_DCVR0 (MXC_CCM_BASE + 0x34)
#define MX35_CCM_CGR3 (MXC_CCM_BASE + 0x38)
#define MXC_CCM_DCVR1 (MXC_CCM_BASE + 0x38)
#define MXC_CCM_DCVR2 (MXC_CCM_BASE + 0x3C)
#define MXC_CCM_DCVR3 (MXC_CCM_BASE + 0x40)


@ -51,7 +51,7 @@
#include <mach/ulpi.h>
#include "devices-imx31.h"
#include "crmregs-imx31.h"
#include "crmregs-imx3.h"
static int armadillo5x0_pins[] = {
/* UART1 */


@ -213,13 +213,12 @@ config MACH_OMAP3_PANDORA
depends on ARCH_OMAP3
default y
select OMAP_PACKAGE_CBB
select REGULATOR_FIXED_VOLTAGE
select REGULATOR_FIXED_VOLTAGE if REGULATOR
config MACH_OMAP3_TOUCHBOOK
bool "OMAP3 Touch Book"
depends on ARCH_OMAP3
default y
select BACKLIGHT_CLASS_DEVICE
config MACH_OMAP_3430SDP
bool "OMAP 3430 SDP board"
@ -265,7 +264,7 @@ config MACH_OMAP_ZOOM2
select SERIAL_8250
select SERIAL_CORE_CONSOLE
select SERIAL_8250_CONSOLE
select REGULATOR_FIXED_VOLTAGE
select REGULATOR_FIXED_VOLTAGE if REGULATOR
config MACH_OMAP_ZOOM3
bool "OMAP3630 Zoom3 board"
@ -275,7 +274,7 @@ config MACH_OMAP_ZOOM3
select SERIAL_8250
select SERIAL_CORE_CONSOLE
select SERIAL_8250_CONSOLE
select REGULATOR_FIXED_VOLTAGE
select REGULATOR_FIXED_VOLTAGE if REGULATOR
config MACH_CM_T35
bool "CompuLab CM-T35/CM-T3730 modules"
@ -334,7 +333,7 @@ config MACH_OMAP_4430SDP
depends on ARCH_OMAP4
select OMAP_PACKAGE_CBL
select OMAP_PACKAGE_CBS
select REGULATOR_FIXED_VOLTAGE
select REGULATOR_FIXED_VOLTAGE if REGULATOR
config MACH_OMAP4_PANDA
bool "OMAP4 Panda Board"
@ -342,7 +341,7 @@ config MACH_OMAP4_PANDA
depends on ARCH_OMAP4
select OMAP_PACKAGE_CBL
select OMAP_PACKAGE_CBS
select REGULATOR_FIXED_VOLTAGE
select REGULATOR_FIXED_VOLTAGE if REGULATOR
config OMAP3_EMU
bool "OMAP3 debugging peripherals"


@ -52,8 +52,9 @@
#define ETH_KS8851_QUART 138
#define OMAP4_SFH7741_SENSOR_OUTPUT_GPIO 184
#define OMAP4_SFH7741_ENABLE_GPIO 188
#define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
#define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
#define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
#define HDMI_GPIO_HPD 63 /* Hotplug detect */
#define DISPLAY_SEL_GPIO 59 /* LCD2/PicoDLP switch */
#define DLP_POWER_ON_GPIO 40
@ -603,8 +604,9 @@ static void __init omap_sfh7741prox_init(void)
}
static struct gpio sdp4430_hdmi_gpios[] = {
{ HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
{ HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
{ HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
{ HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
};
static int sdp4430_panel_enable_hdmi(struct omap_dss_device *dssdev)
@ -621,8 +623,7 @@ static int sdp4430_panel_enable_hdmi(struct omap_dss_device *dssdev)
static void sdp4430_panel_disable_hdmi(struct omap_dss_device *dssdev)
{
gpio_free(HDMI_GPIO_LS_OE);
gpio_free(HDMI_GPIO_HPD);
gpio_free_array(sdp4430_hdmi_gpios, ARRAY_SIZE(sdp4430_hdmi_gpios));
}
static struct nokia_dsi_panel_data dsi1_panel = {
@ -738,6 +739,10 @@ static void sdp4430_lcd_init(void)
pr_err("%s: Could not get lcd2_reset_gpio\n", __func__);
}
static struct omap_dss_hdmi_data sdp4430_hdmi_data = {
.hpd_gpio = HDMI_GPIO_HPD,
};
static struct omap_dss_device sdp4430_hdmi_device = {
.name = "hdmi",
.driver_name = "hdmi_panel",
@ -745,6 +750,7 @@ static struct omap_dss_device sdp4430_hdmi_device = {
.platform_enable = sdp4430_panel_enable_hdmi,
.platform_disable = sdp4430_panel_disable_hdmi,
.channel = OMAP_DSS_CHANNEL_DIGIT,
.data = &sdp4430_hdmi_data,
};
static struct picodlp_panel_data sdp4430_picodlp_pdata = {
@ -829,6 +835,10 @@ static void omap_4430sdp_display_init(void)
omap_hdmi_init(OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP);
else
omap_hdmi_init(0);
omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
}
#ifdef CONFIG_OMAP_MUX


@ -51,8 +51,9 @@
#define GPIO_HUB_NRESET 62
#define GPIO_WIFI_PMENA 43
#define GPIO_WIFI_IRQ 53
#define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
#define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
#define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
#define HDMI_GPIO_HPD 63 /* Hotplug detect */
/* wl127x BT, FM, GPS connectivity chip */
static int wl1271_gpios[] = {46, -1, -1};
@ -413,8 +414,9 @@ int __init omap4_panda_dvi_init(void)
}
static struct gpio panda_hdmi_gpios[] = {
{ HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
{ HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
{ HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
{ HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
};
static int omap4_panda_panel_enable_hdmi(struct omap_dss_device *dssdev)
@ -431,10 +433,13 @@ static int omap4_panda_panel_enable_hdmi(struct omap_dss_device *dssdev)
static void omap4_panda_panel_disable_hdmi(struct omap_dss_device *dssdev)
{
gpio_free(HDMI_GPIO_LS_OE);
gpio_free(HDMI_GPIO_HPD);
gpio_free_array(panda_hdmi_gpios, ARRAY_SIZE(panda_hdmi_gpios));
}
static struct omap_dss_hdmi_data omap4_panda_hdmi_data = {
.hpd_gpio = HDMI_GPIO_HPD,
};
static struct omap_dss_device omap4_panda_hdmi_device = {
.name = "hdmi",
.driver_name = "hdmi_panel",
@ -442,6 +447,7 @@ static struct omap_dss_device omap4_panda_hdmi_device = {
.platform_enable = omap4_panda_panel_enable_hdmi,
.platform_disable = omap4_panda_panel_disable_hdmi,
.channel = OMAP_DSS_CHANNEL_DIGIT,
.data = &omap4_panda_hdmi_data,
};
static struct omap_dss_device *omap4_panda_dss_devices[] = {
@ -473,6 +479,10 @@ void omap4_panda_display_init(void)
omap_hdmi_init(OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP);
else
omap_hdmi_init(0);
omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
}
static void __init omap4_panda_init(void)


@ -405,6 +405,7 @@ static int omap_mcspi_init(struct omap_hwmod *oh, void *unused)
break;
default:
pr_err("Invalid McSPI Revision value\n");
kfree(pdata);
return -EINVAL;
}


@ -103,12 +103,8 @@ static void omap4_hdmi_mux_pads(enum omap_hdmi_flags flags)
u32 reg;
u16 control_i2c_1;
/* PAD0_HDMI_HPD_PAD1_HDMI_CEC */
omap_mux_init_signal("hdmi_hpd",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_cec",
OMAP_PIN_INPUT_PULLUP);
/* PAD0_HDMI_DDC_SCL_PAD1_HDMI_DDC_SDA */
omap_mux_init_signal("hdmi_ddc_scl",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_ddc_sda",


@ -528,7 +528,13 @@ int gpmc_cs_configure(int cs, int cmd, int wval)
case GPMC_CONFIG_DEV_SIZE:
regval = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
/* clear 2 target bits */
regval &= ~GPMC_CONFIG1_DEVICESIZE(3);
/* set the proper value */
regval |= GPMC_CONFIG1_DEVICESIZE(wval);
gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, regval);
break;
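
The case added above updates the two-bit DEVICESIZE field of GPMC_CS_CONFIG1 with a read-modify-write: clear both target bits first, then OR in the requested value, so bits left over from an earlier configuration cannot leak through. A minimal stand-alone sketch of the same idiom, with an assumed field position rather than the real GPMC register layout:

#include <stdint.h>
#include <stdio.h>

#define DEVICESIZE_SHIFT 12                      /* assumed offset, not the real GPMC one */
#define DEVICESIZE(val)  ((uint32_t)(val) << DEVICESIZE_SHIFT)

/* Replace only the two-bit device-size field of a config word. */
static uint32_t set_devicesize(uint32_t regval, uint32_t wval)
{
	regval &= ~DEVICESIZE(3);   /* clear the two target bits */
	regval |= DEVICESIZE(wval); /* set the proper value */
	return regval;
}

int main(void)
{
	printf("0x%08x\n", set_devicesize(0xffffffff, 1));  /* field becomes 01 */
	return 0;
}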


@ -175,14 +175,15 @@ static void hsmmc2_select_input_clk_src(struct omap_mmc_platform_data *mmc)
{
u32 reg;
if (mmc->slots[0].internal_clock) {
reg = omap_ctrl_readl(control_devconf1_offset);
reg = omap_ctrl_readl(control_devconf1_offset);
if (mmc->slots[0].internal_clock)
reg |= OMAP2_MMCSDIO2ADPCLKISEL;
omap_ctrl_writel(reg, control_devconf1_offset);
}
else
reg &= ~OMAP2_MMCSDIO2ADPCLKISEL;
omap_ctrl_writel(reg, control_devconf1_offset);
}
static void hsmmc23_before_set_reg(struct device *dev, int slot,
static void hsmmc2_before_set_reg(struct device *dev, int slot,
int power_on, int vdd)
{
struct omap_mmc_platform_data *mmc = dev->platform_data;
@ -407,14 +408,13 @@ static int __init omap_hsmmc_pdata_init(struct omap2_hsmmc_info *c,
c->caps &= ~MMC_CAP_8_BIT_DATA;
c->caps |= MMC_CAP_4_BIT_DATA;
}
/* FALLTHROUGH */
case 3:
if (mmc->slots[0].features & HSMMC_HAS_PBIAS) {
/* off-chip level shifting, or none */
mmc->slots[0].before_set_reg = hsmmc23_before_set_reg;
mmc->slots[0].before_set_reg = hsmmc2_before_set_reg;
mmc->slots[0].after_set_reg = NULL;
}
break;
case 3:
case 4:
case 5:
mmc->slots[0].before_set_reg = NULL;


@ -388,7 +388,7 @@ static void __init omap_hwmod_init_postsetup(void)
omap_pm_if_early_init();
}
#ifdef CONFIG_ARCH_OMAP2
#ifdef CONFIG_SOC_OMAP2420
void __init omap2420_init_early(void)
{
omap2_set_globals_242x();
@ -400,7 +400,9 @@ void __init omap2420_init_early(void)
omap_hwmod_init_postsetup();
omap2420_clk_init();
}
#endif
#ifdef CONFIG_SOC_OMAP2430
void __init omap2430_init_early(void)
{
omap2_set_globals_243x();


@ -55,27 +55,6 @@ struct omap_hwmod_class omap2_dss_hwmod_class = {
.reset = omap_dss_reset,
};
/*
* 'dispc' class
* display controller
*/
static struct omap_hwmod_class_sysconfig omap2_dispc_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE |
SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
};
struct omap_hwmod_class omap2_dispc_hwmod_class = {
.name = "dispc",
.sysc = &omap2_dispc_sysc,
};
/*
* 'rfbi' class
* remote frame buffer interface


@ -28,6 +28,28 @@ struct omap_hwmod_dma_info omap2xxx_dss_sdma_chs[] = {
{ .name = "dispc", .dma_req = 5 },
{ .dma_req = -1 }
};
/*
* 'dispc' class
* display controller
*/
static struct omap_hwmod_class_sysconfig omap2_dispc_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE |
SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
};
struct omap_hwmod_class omap2_dispc_hwmod_class = {
.name = "dispc",
.sysc = &omap2_dispc_sysc,
};
/* OMAP2xxx Timer Common */
static struct omap_hwmod_class_sysconfig omap2xxx_timer_sysc = {
.rev_offs = 0x0000,


@ -1480,6 +1480,28 @@ static struct omap_hwmod omap3xxx_dss_core_hwmod = {
.masters_cnt = ARRAY_SIZE(omap3xxx_dss_masters),
};
/*
* 'dispc' class
* display controller
*/
static struct omap_hwmod_class_sysconfig omap3_dispc_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE |
SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE |
SYSC_HAS_ENAWAKEUP),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
};
static struct omap_hwmod_class omap3_dispc_hwmod_class = {
.name = "dispc",
.sysc = &omap3_dispc_sysc,
};
/* l4_core -> dss_dispc */
static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_dispc = {
.master = &omap3xxx_l4_core_hwmod,
@ -1503,7 +1525,7 @@ static struct omap_hwmod_ocp_if *omap3xxx_dss_dispc_slaves[] = {
static struct omap_hwmod omap3xxx_dss_dispc_hwmod = {
.name = "dss_dispc",
.class = &omap2_dispc_hwmod_class,
.class = &omap3_dispc_hwmod_class,
.mpu_irqs = omap2_dispc_irqs,
.main_clk = "dss1_alwon_fck",
.prcm = {
@ -3523,12 +3545,6 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = {
&omap3xxx_uart2_hwmod,
&omap3xxx_uart3_hwmod,
/* dss class */
&omap3xxx_dss_dispc_hwmod,
&omap3xxx_dss_dsi1_hwmod,
&omap3xxx_dss_rfbi_hwmod,
&omap3xxx_dss_venc_hwmod,
/* i2c class */
&omap3xxx_i2c1_hwmod,
&omap3xxx_i2c2_hwmod,
@ -3635,6 +3651,15 @@ static __initdata struct omap_hwmod *am35xx_hwmods[] = {
NULL
};
static __initdata struct omap_hwmod *omap3xxx_dss_hwmods[] = {
/* dss class */
&omap3xxx_dss_dispc_hwmod,
&omap3xxx_dss_dsi1_hwmod,
&omap3xxx_dss_rfbi_hwmod,
&omap3xxx_dss_venc_hwmod,
NULL
};
int __init omap3xxx_hwmod_init(void)
{
int r;
@ -3708,6 +3733,21 @@ int __init omap3xxx_hwmod_init(void)
if (h)
r = omap_hwmod_register(h);
if (r < 0)
return r;
/*
* DSS code presumes that dss_core hwmod is handled first,
* _before_ any other DSS related hwmods so register common
* DSS hwmods last to ensure that dss_core is already registered.
* Otherwise some change things may happen, for ex. if dispc
* is handled before dss_core and DSS is enabled in bootloader
* DIPSC will be reset with outputs enabled which sometimes leads
* to unrecoverable L3 error.
* XXX The long-term fix to this is to ensure modules are set up
* in dependency order in the hwmod core code.
*/
r = omap_hwmod_register(omap3xxx_dss_hwmods);
return r;
}


@ -1031,6 +1031,7 @@ static struct omap_hwmod_dma_info omap44xx_dmic_sdma_reqs[] = {
static struct omap_hwmod_addr_space omap44xx_dmic_addrs[] = {
{
.name = "mpu",
.pa_start = 0x4012e000,
.pa_end = 0x4012e07f,
.flags = ADDR_TYPE_RT
@ -1049,6 +1050,7 @@ static struct omap_hwmod_ocp_if omap44xx_l4_abe__dmic = {
static struct omap_hwmod_addr_space omap44xx_dmic_dma_addrs[] = {
{
.name = "dma",
.pa_start = 0x4902e000,
.pa_end = 0x4902e07f,
.flags = ADDR_TYPE_RT


@ -19,6 +19,7 @@
#include "common.h"
#include <plat/cpu.h>
#include <plat/prcm.h>
#include <plat/irqs.h>
#include "vp.h"


@ -897,7 +897,7 @@ static int __init omap_sr_probe(struct platform_device *pdev)
ret = sr_late_init(sr_info);
if (ret) {
pr_warning("%s: Error in SR late init\n", __func__);
return ret;
goto err_iounmap;
}
}


@ -270,7 +270,7 @@ static struct clocksource clocksource_gpt = {
static u32 notrace dmtimer_read_sched_clock(void)
{
if (clksrc.reserved)
return __omap_dm_timer_read_counter(clksrc.io_base, 1);
return __omap_dm_timer_read_counter(&clksrc, 1);
return 0;
}


@ -662,6 +662,7 @@ static struct sh_dmae_pdata usb_dma0_platform_data = {
.dmaor_is_32bit = 1,
.needs_tend_set = 1,
.no_dmars = 1,
.slave_only = 1,
};
static struct resource sh7372_usb_dmae0_resources[] = {
@ -723,6 +724,7 @@ static struct sh_dmae_pdata usb_dma1_platform_data = {
.dmaor_is_32bit = 1,
.needs_tend_set = 1,
.no_dmars = 1,
.slave_only = 1,
};
static struct resource sh7372_usb_dmae1_resources[] = {


@ -225,8 +225,7 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
if ((area->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
continue;
if (__phys_to_pfn(area->phys_addr) > pfn ||
__pfn_to_phys(pfn) + offset + size-1 >
area->phys_addr + area->size-1)
__pfn_to_phys(pfn) + size-1 > area->phys_addr + area->size-1)
continue;
/* we can drop the lock here as we know *area is static */
read_unlock(&vmlist_lock);


@ -46,16 +46,13 @@ EXPORT_SYMBOL_GPL(mxc_audmux_v1_configure_port);
static int mxc_audmux_v1_init(void)
{
#ifdef CONFIG_MACH_MX21
if (cpu_is_mx21())
audmux_base = MX21_IO_ADDRESS(MX21_AUDMUX_BASE_ADDR);
else
#endif
#ifdef CONFIG_MACH_MX27
if (cpu_is_mx27())
audmux_base = MX27_IO_ADDRESS(MX27_AUDMUX_BASE_ADDR);
else
#endif
(void)0;
return 0;


@ -60,7 +60,7 @@ static int avic_irq_set_priority(unsigned char irq, unsigned char prio)
unsigned int mask = 0x0F << irq % 8 * 4;
if (irq >= AVIC_NUM_IRQS)
return -EINVAL;;
return -EINVAL;
temp = __raw_readl(avic_base + AVIC_NIPRIORITY(irq / 8));
temp &= ~mask;


@ -8,6 +8,7 @@ config AVR32
select HAVE_KPROBES
select HAVE_GENERIC_HARDIRQS
select GENERIC_IRQ_PROBE
select GENERIC_ATOMIC64
select HARDIRQS_SW_RESEND
select GENERIC_IRQ_SHOW
select ARCH_HAVE_NMI_SAFE_CMPXCHG


@ -26,7 +26,6 @@
#include <linux/cache.h>
#include <linux/of_platform.h>
#include <linux/dma-mapping.h>
#include <linux/cpu.h>
#include <asm/cacheflush.h>
#include <asm/entry.h>
#include <asm/cpuinfo.h>
@ -227,23 +226,5 @@ static int __init setup_bus_notifier(void)
return 0;
}
arch_initcall(setup_bus_notifier);
static DEFINE_PER_CPU(struct cpu, cpu_devices);
static int __init topology_init(void)
{
int i, ret;
for_each_present_cpu(i) {
struct cpu *c = &per_cpu(cpu_devices, i);
ret = register_cpu(c, i);
if (ret)
printk(KERN_WARNING "topology_init: register_cpu %d "
"failed (%d)\n", i, ret);
}
return 0;
}
subsys_initcall(topology_init);


@ -2356,6 +2356,7 @@ config PCI
depends on HW_HAS_PCI
select PCI_DOMAINS
select GENERIC_PCI_IOMAP
select NO_GENERIC_PCI_IOPORT_MAP
help
Find out whether you have a PCI motherboard. PCI is the name of a
bus system, i.e. the way the CPU talks to the other stuff inside


@ -10,8 +10,8 @@
#include <linux/module.h>
#include <asm/io.h>
static void __iomem *ioport_map_pci(struct pci_dev *dev,
unsigned long port, unsigned int nr)
void __iomem *__pci_ioport_map(struct pci_dev *dev,
unsigned long port, unsigned int nr)
{
struct pci_controller *ctrl = dev->bus->sysdata;
unsigned long base = ctrl->io_map_base;


@ -859,6 +859,7 @@ config PCI
depends on SYS_SUPPORTS_PCI
select PCI_DOMAINS
select GENERIC_PCI_IOMAP
select NO_GENERIC_PCI_IOPORT_MAP
help
Find out whether you have a PCI motherboard. PCI is the name of a
bus system, i.e. the way the CPU talks to the other stuff inside


@ -356,8 +356,8 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
#ifndef CONFIG_GENERIC_IOMAP
static void __iomem *ioport_map_pci(struct pci_dev *dev,
unsigned long port, unsigned int nr)
void __iomem *__pci_ioport_map(struct pci_dev *dev,
unsigned long port, unsigned int nr)
{
struct pci_channel *chan = dev->sysdata;


@ -33,6 +33,7 @@ config SPARC
config SPARC32
def_bool !64BIT
select GENERIC_ATOMIC64
select CLZ_TAB
config SPARC64
def_bool 64BIT


@ -17,23 +17,9 @@ along with GNU CC; see the file COPYING. If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA. */
.data
.align 8
.globl __clz_tab
__clz_tab:
.byte 0,1,2,2,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5
.byte 6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6
.byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
.byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8
.size __clz_tab,256
.global .udiv
.text
.align 4
.global .udiv
.globl __divdi3
__divdi3:
save %sp,-104,%sp


@ -145,13 +145,13 @@ extern void __add_wrong_size(void)
#ifdef __HAVE_ARCH_CMPXCHG
#define cmpxchg(ptr, old, new) \
__cmpxchg((ptr), (old), (new), sizeof(*ptr))
__cmpxchg(ptr, old, new, sizeof(*(ptr)))
#define sync_cmpxchg(ptr, old, new) \
__sync_cmpxchg((ptr), (old), (new), sizeof(*ptr))
__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
#define cmpxchg_local(ptr, old, new) \
__cmpxchg_local((ptr), (old), (new), sizeof(*ptr))
__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
#endif
/*
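
The cmpxchg hunk above moves the parentheses to where they actually matter: sizeof(*(ptr)) instead of sizeof(*ptr), so the macro still measures the pointed-to object when its argument is an expression rather than a bare identifier. A small illustration of the difference, ordinary user-space C rather than kernel code:

#include <stdio.h>

#define BAD_SIZE(ptr)  sizeof(*ptr)      /* breaks for expression arguments */
#define GOOD_SIZE(ptr) sizeof(*(ptr))    /* argument properly parenthesised */

int main(void)
{
	char buf[8];
	char *p = buf;

	/*
	 * With "p + 1" as the argument, BAD_SIZE expands to sizeof(*p + 1);
	 * *p is promoted to int, so it reports sizeof(int), not sizeof(char).
	 */
	printf("bad:  %zu\n", BAD_SIZE(p + 1));
	printf("good: %zu\n", GOOD_SIZE(p + 1));
	return 0;
}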


@ -190,6 +190,9 @@ struct x86_emulate_ops {
int (*intercept)(struct x86_emulate_ctxt *ctxt,
struct x86_instruction_info *info,
enum x86_intercept_stage stage);
bool (*get_cpuid)(struct x86_emulate_ctxt *ctxt,
u32 *eax, u32 *ebx, u32 *ecx, u32 *edx);
};
typedef u32 __attribute__((vector_size(16))) sse128_t;
@ -298,6 +301,19 @@ struct x86_emulate_ctxt {
#define X86EMUL_MODE_PROT (X86EMUL_MODE_PROT16|X86EMUL_MODE_PROT32| \
X86EMUL_MODE_PROT64)
/* CPUID vendors */
#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx 0x68747541
#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx 0x444d4163
#define X86EMUL_CPUID_VENDOR_AuthenticAMD_edx 0x69746e65
#define X86EMUL_CPUID_VENDOR_AMDisbetterI_ebx 0x69444d41
#define X86EMUL_CPUID_VENDOR_AMDisbetterI_ecx 0x21726574
#define X86EMUL_CPUID_VENDOR_AMDisbetterI_edx 0x74656273
#define X86EMUL_CPUID_VENDOR_GenuineIntel_ebx 0x756e6547
#define X86EMUL_CPUID_VENDOR_GenuineIntel_ecx 0x6c65746e
#define X86EMUL_CPUID_VENDOR_GenuineIntel_edx 0x49656e69
enum x86_intercept_stage {
X86_ICTP_NONE = 0, /* Allow zero-init to not match anything */
X86_ICPT_PRE_EXCEPT,


@ -252,7 +252,8 @@ int __kprobes __die(const char *str, struct pt_regs *regs, long err)
unsigned short ss;
unsigned long sp;
#endif
printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
printk(KERN_DEFAULT
"%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
#ifdef CONFIG_PREEMPT
printk("PREEMPT ");
#endif


@ -129,7 +129,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
if (!stack) {
if (regs)
stack = (unsigned long *)regs->sp;
else if (task && task != current)
else if (task != current)
stack = (unsigned long *)task->thread.sp;
else
stack = &dummy;
@ -269,11 +269,11 @@ void show_registers(struct pt_regs *regs)
unsigned char c;
u8 *ip;
printk(KERN_EMERG "Stack:\n");
printk(KERN_DEFAULT "Stack:\n");
show_stack_log_lvl(NULL, regs, (unsigned long *)sp,
0, KERN_EMERG);
0, KERN_DEFAULT);
printk(KERN_EMERG "Code: ");
printk(KERN_DEFAULT "Code: ");
ip = (u8 *)regs->ip - code_prologue;
if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {


@ -39,6 +39,14 @@ static int reboot_mode;
enum reboot_type reboot_type = BOOT_ACPI;
int reboot_force;
/* This variable is used privately to keep track of whether or not
* reboot_type is still set to its default value (i.e., reboot= hasn't
* been set on the command line). This is needed so that we can
* suppress DMI scanning for reboot quirks. Without it, it's
* impossible to override a faulty reboot quirk without recompiling.
*/
static int reboot_default = 1;
#if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
static int reboot_cpu = -1;
#endif
@ -67,6 +75,12 @@ bool port_cf9_safe = false;
static int __init reboot_setup(char *str)
{
for (;;) {
/* Having anything passed on the command line via
* reboot= will cause us to disable DMI checking
* below.
*/
reboot_default = 0;
switch (*str) {
case 'w':
reboot_mode = 0x1234;
@ -295,14 +309,6 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
DMI_MATCH(DMI_BOARD_NAME, "P4S800"),
},
},
{ /* Handle problems with rebooting on VersaLogic Menlow boards */
.callback = set_bios_reboot,
.ident = "VersaLogic Menlow based board",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "VersaLogic Corporation"),
DMI_MATCH(DMI_BOARD_NAME, "VersaLogic Menlow board"),
},
},
{ /* Handle reboot issue on Acer Aspire one */
.callback = set_kbd_reboot,
.ident = "Acer Aspire One A110",
@ -316,7 +322,12 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
static int __init reboot_init(void)
{
dmi_check_system(reboot_dmi_table);
/* Only do the DMI check if reboot_type hasn't been overridden
* on the command line
*/
if (reboot_default) {
dmi_check_system(reboot_dmi_table);
}
return 0;
}
core_initcall(reboot_init);
@ -465,7 +476,12 @@ static struct dmi_system_id __initdata pci_reboot_dmi_table[] = {
static int __init pci_reboot_init(void)
{
dmi_check_system(pci_reboot_dmi_table);
/* Only do the DMI check if reboot_type hasn't been overridden
* on the command line
*/
if (reboot_default) {
dmi_check_system(pci_reboot_dmi_table);
}
return 0;
}
core_initcall(pci_reboot_init);
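
The reboot changes above follow one pattern: reboot_default starts at 1, the reboot= command-line parser clears it, and both DMI initcalls consult it before applying their quirk tables, so an explicit command-line choice is never overridden by a quirk. A compressed sketch of that pattern with placeholder names, not the actual kernel interfaces:

#include <stdio.h>
#include <string.h>

static int reboot_default = 1;   /* still at the built-in default? */
static int reboot_type;

/* stands in for the __setup("reboot=", ...) handler */
static void reboot_setup(const char *str)
{
	reboot_default = 0;                       /* the user chose explicitly */
	reboot_type = !strcmp(str, "bios");
}

/* stands in for reboot_init() / pci_reboot_init() */
static void apply_dmi_quirks(void)
{
	if (reboot_default)
		reboot_type = 1;                  /* pretend a quirk matched */
}

int main(void)
{
	reboot_setup("acpi");                     /* command line wins ...      */
	apply_dmi_quirks();                       /* ... so the quirk is skipped */
	printf("reboot_type=%d\n", reboot_type);  /* prints 0 */
	return 0;
}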


@ -1891,6 +1891,51 @@ setup_syscalls_segments(struct x86_emulate_ctxt *ctxt,
ss->p = 1;
}
static bool em_syscall_is_enabled(struct x86_emulate_ctxt *ctxt)
{
struct x86_emulate_ops *ops = ctxt->ops;
u32 eax, ebx, ecx, edx;
/*
* syscall should always be enabled in longmode - so only become
* vendor specific (cpuid) if other modes are active...
*/
if (ctxt->mode == X86EMUL_MODE_PROT64)
return true;
eax = 0x00000000;
ecx = 0x00000000;
if (ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx)) {
/*
* Intel ("GenuineIntel")
* remark: Intel CPUs only support "syscall" in 64bit
* longmode. Also an 64bit guest with a
* 32bit compat-app running will #UD !! While this
* behaviour can be fixed (by emulating) into AMD
* response - CPUs of AMD can't behave like Intel.
*/
if (ebx == X86EMUL_CPUID_VENDOR_GenuineIntel_ebx &&
ecx == X86EMUL_CPUID_VENDOR_GenuineIntel_ecx &&
edx == X86EMUL_CPUID_VENDOR_GenuineIntel_edx)
return false;
/* AMD ("AuthenticAMD") */
if (ebx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx &&
ecx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx &&
edx == X86EMUL_CPUID_VENDOR_AuthenticAMD_edx)
return true;
/* AMD ("AMDisbetter!") */
if (ebx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ebx &&
ecx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ecx &&
edx == X86EMUL_CPUID_VENDOR_AMDisbetterI_edx)
return true;
}
/* default: (not Intel, not AMD), apply Intel's stricter rules... */
return false;
}
static int em_syscall(struct x86_emulate_ctxt *ctxt)
{
struct x86_emulate_ops *ops = ctxt->ops;
@ -1904,9 +1949,15 @@ static int em_syscall(struct x86_emulate_ctxt *ctxt)
ctxt->mode == X86EMUL_MODE_VM86)
return emulate_ud(ctxt);
if (!(em_syscall_is_enabled(ctxt)))
return emulate_ud(ctxt);
ops->get_msr(ctxt, MSR_EFER, &efer);
setup_syscalls_segments(ctxt, &cs, &ss);
if (!(efer & EFER_SCE))
return emulate_ud(ctxt);
ops->get_msr(ctxt, MSR_STAR, &msr_data);
msr_data >>= 32;
cs_sel = (u16)(msr_data & 0xfffc);
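
The X86EMUL_CPUID_VENDOR_* constants introduced in the header are simply the 12-byte CPUID vendor string ("GenuineIntel", "AuthenticAMD", "AMDisbetter!") packed four little-endian ASCII bytes per register, in the EBX, EDX, ECX order that CPUID leaf 0 uses; em_syscall_is_enabled() compares the guest's leaf-0 output against them. A quick stand-alone check of that packing for the Intel constants quoted above (assumes a little-endian host):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* constants as quoted in the hunk above */
#define VENDOR_GenuineIntel_ebx 0x756e6547
#define VENDOR_GenuineIntel_ecx 0x6c65746e
#define VENDOR_GenuineIntel_edx 0x49656e69

int main(void)
{
	const char *vendor = "GenuineIntel";
	uint32_t ebx, edx, ecx;

	/* CPUID leaf 0 returns the string as EBX ("Genu"), EDX ("ineI"), ECX ("ntel") */
	memcpy(&ebx, vendor + 0, 4);
	memcpy(&edx, vendor + 4, 4);
	memcpy(&ecx, vendor + 8, 4);

	printf("%s\n", (ebx == VENDOR_GenuineIntel_ebx &&
			ecx == VENDOR_GenuineIntel_ecx &&
			edx == VENDOR_GenuineIntel_edx) ?
			"constants match" : "mismatch");
	return 0;
}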


@ -1495,6 +1495,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
bool pr = false;
switch (msr) {
case MSR_EFER:
return set_efer(vcpu, data);
@ -1635,6 +1637,18 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
pr_unimpl(vcpu, "unimplemented perfctr wrmsr: "
"0x%x data 0x%llx\n", msr, data);
break;
case MSR_P6_PERFCTR0:
case MSR_P6_PERFCTR1:
pr = true;
case MSR_P6_EVNTSEL0:
case MSR_P6_EVNTSEL1:
if (kvm_pmu_msr(vcpu, msr))
return kvm_pmu_set_msr(vcpu, msr, data);
if (pr || data != 0)
pr_unimpl(vcpu, "disabled perfctr wrmsr: "
"0x%x data 0x%llx\n", msr, data);
break;
case MSR_K7_CLK_CTL:
/*
* Ignore all writes to this no longer documented MSR.
@ -1835,6 +1849,14 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
case MSR_FAM10H_MMIO_CONF_BASE:
data = 0;
break;
case MSR_P6_PERFCTR0:
case MSR_P6_PERFCTR1:
case MSR_P6_EVNTSEL0:
case MSR_P6_EVNTSEL1:
if (kvm_pmu_msr(vcpu, msr))
return kvm_pmu_get_msr(vcpu, msr, pdata);
data = 0;
break;
case MSR_IA32_UCODE_REV:
data = 0x100000000ULL;
break;
@ -4180,6 +4202,28 @@ static int emulator_intercept(struct x86_emulate_ctxt *ctxt,
return kvm_x86_ops->check_intercept(emul_to_vcpu(ctxt), info, stage);
}
static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
{
struct kvm_cpuid_entry2 *cpuid = NULL;
if (eax && ecx)
cpuid = kvm_find_cpuid_entry(emul_to_vcpu(ctxt),
*eax, *ecx);
if (cpuid) {
*eax = cpuid->eax;
*ecx = cpuid->ecx;
if (ebx)
*ebx = cpuid->ebx;
if (edx)
*edx = cpuid->edx;
return true;
}
return false;
}
static struct x86_emulate_ops emulate_ops = {
.read_std = kvm_read_guest_virt_system,
.write_std = kvm_write_guest_virt_system,
@ -4211,6 +4255,7 @@ static struct x86_emulate_ops emulate_ops = {
.get_fpu = emulator_get_fpu,
.put_fpu = emulator_put_fpu,
.intercept = emulator_intercept,
.get_cpuid = emulator_get_cpuid,
};
static void cache_all_regs(struct kvm_vcpu *vcpu)


@ -673,7 +673,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
stackend = end_of_stack(tsk);
if (tsk != &init_task && *stackend != STACK_END_MAGIC)
printk(KERN_ALERT "Thread overran stack, or stack corrupted\n");
printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");
tsk->thread.cr2 = address;
tsk->thread.trap_no = 14;
@ -684,7 +684,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
sig = 0;
/* Executive summary in case the body of the oops scrolled away */
printk(KERN_EMERG "CR2: %016lx\n", address);
printk(KERN_DEFAULT "CR2: %016lx\n", address);
oops_end(flags, regs, sig);
}


@ -118,7 +118,4 @@ extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
/* Don't build bcopy at all ... */
#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMSCAN
#define memscan memchr
#endif /* _XTENSA_STRING_H */


@ -579,13 +579,6 @@ static int __cpuinit acpi_processor_add(struct acpi_device *device)
goto err_free_cpumask;
}
/*
* Do not start hotplugged CPUs now, but when they
* are onlined the first time
*/
if (pr->flags.need_hotplug_init)
return 0;
/*
* Do not start hotplugged CPUs now, but when they
* are onlined the first time


@ -380,6 +380,7 @@ static int rbd_get_client(struct rbd_device *rbd_dev, const char *mon_addr,
rbdc = __rbd_client_find(opt);
if (rbdc) {
ceph_destroy_options(opt);
kfree(rbd_opts);
/* using an existing client */
kref_get(&rbdc->kref);
@ -406,15 +407,15 @@ done_err:
/*
* Destroy ceph client
*
* Caller must hold node_lock.
*/
static void rbd_client_release(struct kref *kref)
{
struct rbd_client *rbdc = container_of(kref, struct rbd_client, kref);
dout("rbd_release_client %p\n", rbdc);
spin_lock(&node_lock);
list_del(&rbdc->node);
spin_unlock(&node_lock);
ceph_destroy_client(rbdc->client);
kfree(rbdc->rbd_opts);
@ -427,7 +428,9 @@ static void rbd_client_release(struct kref *kref)
*/
static void rbd_put_client(struct rbd_device *rbd_dev)
{
spin_lock(&node_lock);
kref_put(&rbd_dev->rbd_client->kref, rbd_client_release);
spin_unlock(&node_lock);
rbd_dev->rbd_client = NULL;
rbd_dev->client = NULL;
}


@ -1343,7 +1343,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
tasklet_init(&atchan->tasklet, atc_tasklet,
(unsigned long)atchan);
atc_enable_irq(atchan);
atc_enable_chan_irq(atdma, i);
}
/* set base routines */
@ -1410,7 +1410,7 @@ static int __exit at_dma_remove(struct platform_device *pdev)
struct at_dma_chan *atchan = to_at_dma_chan(chan);
/* Disable interrupts */
atc_disable_irq(atchan);
atc_disable_chan_irq(atdma, chan->chan_id);
tasklet_disable(&atchan->tasklet);
tasklet_kill(&atchan->tasklet);


@ -327,28 +327,27 @@ static void atc_dump_lli(struct at_dma_chan *atchan, struct at_lli *lli)
}
static void atc_setup_irq(struct at_dma_chan *atchan, int on)
static void atc_setup_irq(struct at_dma *atdma, int chan_id, int on)
{
struct at_dma *atdma = to_at_dma(atchan->chan_common.device);
u32 ebci;
u32 ebci;
/* enable interrupts on buffer transfer completion & error */
ebci = AT_DMA_BTC(atchan->chan_common.chan_id)
| AT_DMA_ERR(atchan->chan_common.chan_id);
ebci = AT_DMA_BTC(chan_id)
| AT_DMA_ERR(chan_id);
if (on)
dma_writel(atdma, EBCIER, ebci);
else
dma_writel(atdma, EBCIDR, ebci);
}
static inline void atc_enable_irq(struct at_dma_chan *atchan)
static void atc_enable_chan_irq(struct at_dma *atdma, int chan_id)
{
atc_setup_irq(atchan, 1);
atc_setup_irq(atdma, chan_id, 1);
}
static inline void atc_disable_irq(struct at_dma_chan *atchan)
static void atc_disable_chan_irq(struct at_dma *atdma, int chan_id)
{
atc_setup_irq(atchan, 0);
atc_setup_irq(atdma, chan_id, 0);
}


@ -599,7 +599,7 @@ static int dmatest_add_channel(struct dma_chan *chan)
}
if (dma_has_cap(DMA_PQ, dma_dev->cap_mask)) {
cnt = dmatest_add_threads(dtc, DMA_PQ);
thread_count += cnt > 0 ?: 0;
thread_count += cnt > 0 ? cnt : 0;
}
pr_info("dmatest: Started %u threads using %s\n",
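
The dmatest fix above concerns the GNU "?:" shorthand: "cnt > 0 ?: 0" evaluates to the value of the condition itself, so the thread counter only ever grew by one per capability instead of by cnt; spelling out "cnt > 0 ? cnt : 0" restores the intended count. A two-line demonstration (the shorthand is a GCC/Clang extension):

#include <stdio.h>

int main(void)
{
	int cnt = 5, a, b;

	a = cnt > 0 ?: 0;        /* yields 1: the value of the condition */
	b = cnt > 0 ? cnt : 0;   /* yields 5: the intended result */
	printf("a=%d b=%d\n", a, b);
	return 0;
}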


@ -1102,11 +1102,13 @@ static int sdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
case DMA_SLAVE_CONFIG:
if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
sdmac->per_address = dmaengine_cfg->src_addr;
sdmac->watermark_level = dmaengine_cfg->src_maxburst;
sdmac->watermark_level = dmaengine_cfg->src_maxburst *
dmaengine_cfg->src_addr_width;
sdmac->word_size = dmaengine_cfg->src_addr_width;
} else {
sdmac->per_address = dmaengine_cfg->dst_addr;
sdmac->watermark_level = dmaengine_cfg->dst_maxburst;
sdmac->watermark_level = dmaengine_cfg->dst_maxburst *
dmaengine_cfg->dst_addr_width;
sdmac->word_size = dmaengine_cfg->dst_addr_width;
}
sdmac->direction = dmaengine_cfg->direction;


@ -1262,7 +1262,8 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&shdev->common.channels);
dma_cap_set(DMA_MEMCPY, shdev->common.cap_mask);
if (!pdata->slave_only)
dma_cap_set(DMA_MEMCPY, shdev->common.cap_mask);
if (pdata->slave && pdata->slave_num)
dma_cap_set(DMA_SLAVE, shdev->common.cap_mask);


@ -263,6 +263,7 @@ static inline struct fw_ohci *fw_ohci(struct fw_card *card)
static char ohci_driver_name[] = KBUILD_MODNAME;
#define PCI_DEVICE_ID_AGERE_FW643 0x5901
#define PCI_DEVICE_ID_CREATIVE_SB1394 0x4001
#define PCI_DEVICE_ID_JMICRON_JMB38X_FW 0x2380
#define PCI_DEVICE_ID_TI_TSB12LV22 0x8009
#define PCI_DEVICE_ID_TI_TSB12LV26 0x8020
@ -289,6 +290,9 @@ static const struct {
{PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6,
QUIRK_NO_MSI},
{PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID,
QUIRK_RESET_PACKET},
{PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB38X_FW, PCI_ANY_ID,
QUIRK_NO_MSI},
@ -299,7 +303,7 @@ static const struct {
QUIRK_NO_MSI},
{PCI_VENDOR_ID_RICOH, PCI_ANY_ID, PCI_ANY_ID,
QUIRK_CYCLE_TIMER},
QUIRK_CYCLE_TIMER | QUIRK_NO_MSI},
{PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, PCI_ANY_ID,
QUIRK_CYCLE_TIMER | QUIRK_RESET_PACKET | QUIRK_NO_1394A},


@ -96,7 +96,7 @@ static const char *gpio_p2_names[LPC32XX_GPIO_P2_MAX] = {
};
static const char *gpio_p3_names[LPC32XX_GPIO_P3_MAX] = {
"gpi000", "gpio01", "gpio02", "gpio03",
"gpio00", "gpio01", "gpio02", "gpio03",
"gpio04", "gpio05"
};


@ -448,6 +448,7 @@ static int __devinit ioh_gpio_probe(struct pci_dev *pdev,
chip->reg = chip->base;
chip->ch = i;
mutex_init(&chip->lock);
spin_lock_init(&chip->spinlock);
ioh_gpio_setup(chip, num_ports[i]);
ret = gpiochip_add(&chip->gpio);
if (ret) {


@ -392,6 +392,7 @@ static int __devinit pch_gpio_probe(struct pci_dev *pdev,
chip->reg = chip->base;
pci_set_drvdata(pdev, chip);
mutex_init(&chip->lock);
spin_lock_init(&chip->spinlock);
pch_gpio_setup(chip);
ret = gpiochip_add(&chip->gpio);
if (ret) {


@ -2387,27 +2387,30 @@ static struct samsung_gpio_chip exynos4_gpios_3[] = {
};
#if defined(CONFIG_ARCH_EXYNOS4) && defined(CONFIG_OF)
static int exynos4_gpio_xlate(struct gpio_chip *gc, struct device_node *np,
const void *gpio_spec, u32 *flags)
static int exynos4_gpio_xlate(struct gpio_chip *gc,
const struct of_phandle_args *gpiospec, u32 *flags)
{
const __be32 *gpio = gpio_spec;
const u32 n = be32_to_cpup(gpio);
unsigned int pin = gc->base + be32_to_cpu(gpio[0]);
unsigned int pin;
if (WARN_ON(gc->of_gpio_n_cells < 4))
return -EINVAL;
if (n > gc->ngpio)
if (WARN_ON(gpiospec->args_count < gc->of_gpio_n_cells))
return -EINVAL;
if (s3c_gpio_cfgpin(pin, S3C_GPIO_SFN(be32_to_cpu(gpio[1]))))
if (gpiospec->args[0] > gc->ngpio)
return -EINVAL;
pin = gc->base + gpiospec->args[0];
if (s3c_gpio_cfgpin(pin, S3C_GPIO_SFN(gpiospec->args[1])))
pr_warn("gpio_xlate: failed to set pin function\n");
if (s3c_gpio_setpull(pin, be32_to_cpu(gpio[2])))
if (s3c_gpio_setpull(pin, gpiospec->args[2]))
pr_warn("gpio_xlate: failed to set pin pull up/down\n");
if (s5p_gpio_set_drvstr(pin, be32_to_cpu(gpio[3])))
if (s5p_gpio_set_drvstr(pin, gpiospec->args[3]))
pr_warn("gpio_xlate: failed to set pin drive strength\n");
return n;
return gpiospec->args[0];
}
static const struct of_device_id exynos4_gpio_dt_match[] __initdata = {


@ -54,9 +54,10 @@ struct bit_entry {
int bit_table(struct drm_device *, u8 id, struct bit_entry *);
enum dcb_gpio_tag {
DCB_GPIO_TVDAC0 = 0xc,
DCB_GPIO_PANEL_POWER = 0x01,
DCB_GPIO_TVDAC0 = 0x0c,
DCB_GPIO_TVDAC1 = 0x2d,
DCB_GPIO_PWM_FAN = 0x9,
DCB_GPIO_PWM_FAN = 0x09,
DCB_GPIO_FAN_SENSE = 0x3d,
DCB_GPIO_UNUSED = 0xff
};


@ -219,6 +219,16 @@ nouveau_display_init(struct drm_device *dev)
if (ret)
return ret;
/* power on internal panel if it's not already. the init tables of
* some vbios default this to off for some reason, causing the
* panel to not work after resume
*/
if (nouveau_gpio_func_get(dev, DCB_GPIO_PANEL_POWER) == 0) {
nouveau_gpio_func_set(dev, DCB_GPIO_PANEL_POWER, true);
msleep(300);
}
/* enable polling for external displays */
drm_kms_helper_poll_enable(dev);
/* enable hotplug interrupts */


@ -124,7 +124,7 @@ MODULE_PARM_DESC(ctxfw, "Use external HUB/GPC ucode (fermi)\n");
int nouveau_ctxfw;
module_param_named(ctxfw, nouveau_ctxfw, int, 0400);
MODULE_PARM_DESC(ctxfw, "Santise DCB table according to MXM-SIS\n");
MODULE_PARM_DESC(mxmdcb, "Santise DCB table according to MXM-SIS\n");
int nouveau_mxmdcb = 1;
module_param_named(mxmdcb, nouveau_mxmdcb, int, 0400);


@ -379,6 +379,25 @@ retry:
return 0;
}
static int
validate_sync(struct nouveau_channel *chan, struct nouveau_bo *nvbo)
{
struct nouveau_fence *fence = NULL;
int ret = 0;
spin_lock(&nvbo->bo.bdev->fence_lock);
if (nvbo->bo.sync_obj)
fence = nouveau_fence_ref(nvbo->bo.sync_obj);
spin_unlock(&nvbo->bo.bdev->fence_lock);
if (fence) {
ret = nouveau_fence_sync(fence, chan);
nouveau_fence_unref(&fence);
}
return ret;
}
static int
validate_list(struct nouveau_channel *chan, struct list_head *list,
struct drm_nouveau_gem_pushbuf_bo *pbbo, uint64_t user_pbbo_ptr)
@ -393,7 +412,7 @@ validate_list(struct nouveau_channel *chan, struct list_head *list,
list_for_each_entry(nvbo, list, entry) {
struct drm_nouveau_gem_pushbuf_bo *b = &pbbo[nvbo->pbbo_index];
ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
ret = validate_sync(chan, nvbo);
if (unlikely(ret)) {
NV_ERROR(dev, "fail pre-validate sync\n");
return ret;
@ -416,7 +435,7 @@ validate_list(struct nouveau_channel *chan, struct list_head *list,
return ret;
}
ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
ret = validate_sync(chan, nvbo);
if (unlikely(ret)) {
NV_ERROR(dev, "fail post-validate sync\n");
return ret;


@ -656,7 +656,16 @@ nouveau_mxm_init(struct drm_device *dev)
if (mxm_shadow(dev, mxm[0])) {
MXM_MSG(dev, "failed to locate valid SIS\n");
#if 0
/* we should, perhaps, fall back to some kind of limited
* mode here if the x86 vbios hasn't already done the
* work for us (so we prevent loading with completely
* whacked vbios tables).
*/
return -EINVAL;
#else
return 0;
#endif
}
MXM_MSG(dev, "MXMS Version %d.%d\n",


@ -495,9 +495,9 @@ nv50_pm_clocks_pre(struct drm_device *dev, struct nouveau_pm_level *perflvl)
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nv50_pm_state *info;
struct pll_lims pll;
int ret = -EINVAL;
int clk, ret = -EINVAL;
int N, M, P1, P2;
u32 clk, out;
u32 out;
if (dev_priv->chipset == 0xaa ||
dev_priv->chipset == 0xac)


@ -1184,7 +1184,7 @@ static int dce4_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(EVERGREEN_GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
WREG32(EVERGREEN_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
crtc->mode.vdisplay);
target_fb->height);
x &= ~3;
y &= ~1;
WREG32(EVERGREEN_VIEWPORT_START + radeon_crtc->crtc_offset,
@ -1353,7 +1353,7 @@ static int avivo_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(AVIVO_D1GRPH_ENABLE + radeon_crtc->crtc_offset, 1);
WREG32(AVIVO_D1MODE_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
crtc->mode.vdisplay);
target_fb->height);
x &= ~3;
y &= ~1;
WREG32(AVIVO_D1MODE_VIEWPORT_START + radeon_crtc->crtc_offset,


@ -564,9 +564,21 @@ int radeon_dp_get_panel_mode(struct drm_encoder *encoder,
ENCODER_OBJECT_ID_NUTMEG)
panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) ==
ENCODER_OBJECT_ID_TRAVIS)
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
ENCODER_OBJECT_ID_TRAVIS) {
u8 id[6];
int i;
for (i = 0; i < 6; i++)
id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i);
if (id[0] == 0x73 &&
id[1] == 0x69 &&
id[2] == 0x76 &&
id[3] == 0x61 &&
id[4] == 0x72 &&
id[5] == 0x54)
panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
else
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
} else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP);
if (tmp & 1)
panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;


@ -468,27 +468,42 @@ set_default_state(struct radeon_device *rdev)
radeon_ring_write(ring, sq_stack_resource_mgmt_2);
}
#define I2F_MAX_BITS 15
#define I2F_MAX_INPUT ((1 << I2F_MAX_BITS) - 1)
#define I2F_SHIFT (24 - I2F_MAX_BITS)
/*
* Converts unsigned integer into 32-bit IEEE floating point representation.
* Conversion is not universal and only works for the range from 0
* to 2^I2F_MAX_BITS-1. Currently we only use it with inputs between
* 0 and 16384 (inclusive), so I2F_MAX_BITS=15 is enough. If necessary,
* I2F_MAX_BITS can be increased, but that will add to the loop iterations
* and slow us down. Conversion is done by shifting the input and counting
* down until the first 1 reaches bit position 23. The resulting counter
* and the shifted input are, respectively, the exponent and the fraction.
* The sign is always zero.
*/
static uint32_t i2f(uint32_t input)
{
u32 result, i, exponent, fraction;
if ((input & 0x3fff) == 0)
result = 0; /* 0 is a special case */
WARN_ON_ONCE(input > I2F_MAX_INPUT);
if ((input & I2F_MAX_INPUT) == 0)
result = 0;
else {
exponent = 140; /* exponent biased by 127; */
fraction = (input & 0x3fff) << 10; /* cheat and only
handle numbers below 2^^15 */
for (i = 0; i < 14; i++) {
exponent = 126 + I2F_MAX_BITS;
fraction = (input & I2F_MAX_INPUT) << I2F_SHIFT;
for (i = 0; i < I2F_MAX_BITS; i++) {
if (fraction & 0x800000)
break;
else {
fraction = fraction << 1; /* keep
shifting left until top bit = 1 */
fraction = fraction << 1;
exponent = exponent - 1;
}
}
result = exponent << 23 | (fraction & 0x7fffff); /* mask
off top bit; assumed 1 */
result = exponent << 23 | (fraction & 0x7fffff);
}
return result;
}
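
The rewritten comment above describes the conversion completely: mask the input to I2F_MAX_BITS bits, shift the fraction left until the leading 1 reaches bit 23 while decrementing the exponent (initially 126 + I2F_MAX_BITS, biased by 127), then pack exponent and the low 23 fraction bits with a zero sign. A user-space copy of the routine, checked against the compiler's own int-to-float conversion over the full supported range (assumes IEEE-754 single-precision float):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define I2F_MAX_BITS 15
#define I2F_MAX_INPUT ((1 << I2F_MAX_BITS) - 1)
#define I2F_SHIFT (24 - I2F_MAX_BITS)

static uint32_t i2f(uint32_t input)
{
	uint32_t result, i, exponent, fraction;

	if ((input & I2F_MAX_INPUT) == 0)
		return 0;
	exponent = 126 + I2F_MAX_BITS;
	fraction = (input & I2F_MAX_INPUT) << I2F_SHIFT;
	for (i = 0; i < I2F_MAX_BITS; i++) {
		if (fraction & 0x800000)
			break;
		fraction <<= 1;    /* shift until the leading 1 sits at bit 23 */
		exponent--;
	}
	result = exponent << 23 | (fraction & 0x7fffff);
	return result;
}

int main(void)
{
	uint32_t n, bits;
	float f;

	for (n = 0; n <= I2F_MAX_INPUT; n++) {
		f = (float)n;                      /* reference conversion */
		memcpy(&bits, &f, sizeof(bits));
		if (bits != i2f(n)) {
			printf("mismatch at %u\n", n);
			return 1;
		}
	}
	printf("all %u inputs match\n", I2F_MAX_INPUT + 1);
	return 0;
}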


@ -59,8 +59,9 @@ static int radeon_atrm_call(acpi_handle atrm_handle, uint8_t *bios,
obj = (union acpi_object *)buffer.pointer;
memcpy(bios+offset, obj->buffer.pointer, obj->buffer.length);
len = obj->buffer.length;
kfree(buffer.pointer);
return obj->buffer.length;
return len;
}
bool radeon_atrm_supported(struct pci_dev *pdev)
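
The radeon_atrm_call fix above saves obj->buffer.length into a local before kfree(buffer.pointer): obj points into that very allocation, so returning obj->buffer.length after the kfree was a use-after-free. The safe ordering, reduced to its essentials with illustrative names rather than the radeon/ACPI types:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct blob {
	size_t length;
	unsigned char data[32];
};

/* Copy a heap object's payload, free the object, and still report its size. */
static size_t consume(unsigned char *dst)
{
	struct blob *obj = malloc(sizeof(*obj));
	size_t len;

	if (!obj)
		return 0;
	obj->length = sizeof(obj->data);
	memset(obj->data, 0xab, obj->length);

	memcpy(dst, obj->data, obj->length);
	len = obj->length;      /* save before the allocation is released */
	free(obj);
	return len;             /* not obj->length: obj is gone */
}

int main(void)
{
	unsigned char buf[32];
	printf("copied %zu bytes\n", consume(buf));
	return 0;
}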


@ -883,6 +883,8 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
drm_kms_helper_poll_disable(dev);
/* turn off display hw */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
@ -972,6 +974,8 @@ int radeon_resume_kms(struct drm_device *dev)
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
}
drm_kms_helper_poll_enable(dev);
return 0;
}


@ -958,6 +958,7 @@ struct radeon_i2c_chan *radeon_i2c_create_dp(struct drm_device *dev,
i2c->rec = *rec;
i2c->adapter.owner = THIS_MODULE;
i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = &dev->pdev->dev;
i2c->dev = dev;
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"Radeon aux bus %s", name);


@ -548,6 +548,7 @@ static int mousevsc_remove(struct hv_device *dev)
struct mousevsc_dev *input_dev = hv_get_drvdata(dev);
vmbus_close(dev->channel);
hid_hw_stop(input_dev->hid_device);
hid_destroy_device(input_dev->hid_device);
mousevsc_free_device(input_dev);


@ -531,7 +531,6 @@ static int wacom_probe(struct hid_device *hdev,
wdata->battery.type = POWER_SUPPLY_TYPE_BATTERY;
wdata->battery.use_for_apm = 0;
power_supply_powers(&wdata->battery, &hdev->dev);
ret = power_supply_register(&hdev->dev, &wdata->battery);
if (ret) {
@ -540,6 +539,8 @@ static int wacom_probe(struct hid_device *hdev,
goto err_battery;
}
power_supply_powers(&wdata->battery, &hdev->dev);
wdata->ac.properties = wacom_ac_props;
wdata->ac.num_properties = ARRAY_SIZE(wacom_ac_props);
wdata->ac.get_property = wacom_ac_get_property;
@ -547,14 +548,14 @@ static int wacom_probe(struct hid_device *hdev,
wdata->ac.type = POWER_SUPPLY_TYPE_MAINS;
wdata->ac.use_for_apm = 0;
power_supply_powers(&wdata->battery, &hdev->dev);
ret = power_supply_register(&hdev->dev, &wdata->ac);
if (ret) {
hid_warn(hdev,
"can't create ac battery attribute, err: %d\n", ret);
goto err_ac;
}
power_supply_powers(&wdata->ac, &hdev->dev);
#endif
return 0;


@ -1226,14 +1226,14 @@ static int wiimote_hid_probe(struct hid_device *hdev,
wdata->battery.type = POWER_SUPPLY_TYPE_BATTERY;
wdata->battery.use_for_apm = 0;
power_supply_powers(&wdata->battery, &hdev->dev);
ret = power_supply_register(&wdata->hdev->dev, &wdata->battery);
if (ret) {
hid_err(hdev, "Cannot register battery device\n");
goto err_battery;
}
power_supply_powers(&wdata->battery, &hdev->dev);
ret = wiimote_leds_create(wdata);
if (ret)
goto err_free;


@ -922,11 +922,11 @@ void hiddev_disconnect(struct hid_device *hid)
struct hiddev *hiddev = hid->hiddev;
struct usbhid_device *usbhid = hid->driver_data;
usb_deregister_dev(usbhid->intf, &hiddev_class);
mutex_lock(&hiddev->existancelock);
hiddev->exist = 0;
usb_deregister_dev(usbhid->intf, &hiddev_class);
if (hiddev->open) {
mutex_unlock(&hiddev->existancelock);
usbhid_close(hiddev->hid);


@ -1920,9 +1920,26 @@ w83627ehf_check_fan_inputs(const struct w83627ehf_sio_data *sio_data,
fan4min = 0;
fan5pin = 0;
} else if (sio_data->kind == nct6776) {
fan3pin = !(superio_inb(sio_data->sioreg, 0x24) & 0x40);
fan4pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x01);
fan5pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x02);
bool gpok = superio_inb(sio_data->sioreg, 0x27) & 0x80;
superio_select(sio_data->sioreg, W83627EHF_LD_HWM);
regval = superio_inb(sio_data->sioreg, SIO_REG_ENABLE);
if (regval & 0x80)
fan3pin = gpok;
else
fan3pin = !(superio_inb(sio_data->sioreg, 0x24) & 0x40);
if (regval & 0x40)
fan4pin = gpok;
else
fan4pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x01);
if (regval & 0x20)
fan5pin = gpok;
else
fan5pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x02);
fan4min = fan4pin;
} else if (sio_data->kind == w83667hg || sio_data->kind == w83667hg_b) {
fan3pin = 1;


@ -1018,7 +1018,7 @@ omap_i2c_probe(struct platform_device *pdev)
goto err_release_region;
}
match = of_match_device(omap_i2c_of_match, &pdev->dev);
match = of_match_device(of_match_ptr(omap_i2c_of_match), &pdev->dev);
if (match) {
u32 freq = 100000; /* default to 100000 Hz */


@ -808,9 +808,12 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
return PTR_ERR(ctx);
if (cmd.conn_param.valid) {
ctx->uid = cmd.uid;
ucma_copy_conn_param(&conn_param, &cmd.conn_param);
mutex_lock(&file->mut);
ret = rdma_accept(ctx->cm_id, &conn_param);
if (!ret)
ctx->uid = cmd.uid;
mutex_unlock(&file->mut);
} else
ret = rdma_accept(ctx->cm_id, NULL);


@ -1485,6 +1485,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
qp->event_handler = attr.event_handler;
qp->qp_context = attr.qp_context;
qp->qp_type = attr.qp_type;
atomic_set(&qp->usecnt, 0);
atomic_inc(&pd->usecnt);
atomic_inc(&attr.send_cq->usecnt);
if (attr.recv_cq)


@ -421,6 +421,7 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
qp->uobject = NULL;
qp->qp_type = qp_init_attr->qp_type;
atomic_set(&qp->usecnt, 0);
if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) {
qp->event_handler = __ib_shared_qp_event_handler;
qp->qp_context = qp;
@ -430,7 +431,6 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
qp->xrcd = qp_init_attr->xrcd;
atomic_inc(&qp_init_attr->xrcd->usecnt);
INIT_LIST_HEAD(&qp->open_list);
atomic_set(&qp->usecnt, 0);
real_qp = qp;
qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,


@ -89,7 +89,7 @@ static int create_file(const char *name, umode_t mode,
error = ipathfs_mknod(parent->d_inode, *dentry,
mode, fops, data);
else
error = PTR_ERR(dentry);
error = PTR_ERR(*dentry);
mutex_unlock(&parent->d_inode->i_mutex);
return error;


@ -257,12 +257,9 @@ static int ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
return IB_MAD_RESULT_SUCCESS;
/*
* Don't process SMInfo queries or vendor-specific
* MADs -- the SMA can't handle them.
* Don't process SMInfo queries -- the SMA can't handle them.
*/
if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO ||
((in_mad->mad_hdr.attr_id & IB_SMP_ATTR_VENDOR_MASK) ==
IB_SMP_ATTR_VENDOR_MASK))
if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO)
return IB_MAD_RESULT_SUCCESS;
} else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT ||
in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS1 ||


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -233,6 +233,7 @@ static int send_mpa_reject(struct nes_cm_node *cm_node)
u8 *start_ptr = &start_addr;
u8 **start_buff = &start_ptr;
u16 buff_len = 0;
struct ietf_mpa_v1 *mpa_frame;
skb = dev_alloc_skb(MAX_CM_BUFFER);
if (!skb) {
@ -242,6 +243,8 @@ static int send_mpa_reject(struct nes_cm_node *cm_node)
/* send an MPA reject frame */
cm_build_mpa_frame(cm_node, start_buff, &buff_len, NULL, MPA_KEY_REPLY);
mpa_frame = (struct ietf_mpa_v1 *)*start_buff;
mpa_frame->flags |= IETF_MPA_FLAGS_REJECT;
form_cm_frame(skb, cm_node, NULL, 0, *start_buff, buff_len, SET_ACK | SET_FIN);
cm_node->state = NES_CM_STATE_FIN_WAIT1;
@ -1360,8 +1363,7 @@ static int nes_addr_resolve_neigh(struct nes_vnic *nesvnic, u32 dst_ip, int arpi
if (!memcmp(nesadapter->arp_table[arpindex].mac_addr,
neigh->ha, ETH_ALEN)) {
/* Mac address same as in nes_arp_table */
ip_rt_put(rt);
return rc;
goto out;
}
nes_manage_arp_cache(nesvnic->netdev,
@ -1377,6 +1379,8 @@ static int nes_addr_resolve_neigh(struct nes_vnic *nesvnic, u32 dst_ip, int arpi
neigh_event_send(neigh, NULL);
}
}
out:
rcu_read_unlock();
ip_rt_put(rt);
return rc;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU

Some files were not shown because too many files have changed in this diff.