Replace the goto loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.
No race occurs, since the lru lock is never dropped any more. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which is identical to the previous situation, because previously buffers
on the lru list were always guaranteed to be reservable.
This should work, since ttm currently guarantees that items on the lru are
always reservable, and reserving items blockingly with some bo held
is enough to run into a deadlock.
Currently this is not a concern, since removal from the lru list and
reservation are always done atomically, but when this guarantee
no longer holds, we have to handle this situation or end up with
possible deadlocks.
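A minimal sketch of the reworked walk; try_reserve_nolru(), cleanup_one() and lru_list are hypothetical stand-ins for the real TTM names, and the lru lock is held across the whole walk:
	spin_lock(&glob->lru_lock);
	ret = -EBUSY;
	list_for_each_entry(bo, lru_list, lru) {	/* lru_list: whichever list is scanned */
		if (!try_reserve_nolru(bo))		/* skip buffers we cannot reserve */
			continue;
		ret = cleanup_one(bo);			/* reserved: safe to run the cleanup */
		break;
	}
	spin_unlock(&glob->lru_lock);
	return ret;	/* -EBUSY if the list was empty or nothing was reservable */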
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Replace the while loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.
No race occurs, since the lru lock is never dropped any more. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which is identical to the previous situation.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
By removing the unlocking of lru and retaking it immediately, a race is
removed where the bo is taken off the swap list or the lru list between
the unlock and relock. As such the cleanup_refs code can be simplified:
it will attempt to call ttm_bo_wait non-blockingly, and if it fails
it will drop the locks and perform a blocking wait, or return an error
if no_wait_gpu was set.
The need for looping is also eliminated: since swapout and evict_mem_first
will always follow the destruction path, no new fence is allowed
to be attached. As far as I can see this may already have been the case,
but the unlocking / relocking required a complicated loop to deal with
re-reservation.
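Roughly, the wait logic then looks like this (a sketch, assuming that era's ttm_bo_wait(bo, lazy, interruptible, no_wait) signature; the lock handling is only indicated in comments):
	ret = ttm_bo_wait(bo, false, false, true);	/* non-blocking probe */
	if (ret && no_wait_gpu)
		return ret;		/* caller refused to wait on the GPU */
	if (ret) {
		/* drop fence_lock/lru_lock here, then wait blockingly */
		ret = ttm_bo_wait(bo, false, interruptible, false);
		/* retake the locks; the bo stayed on the lru while waiting */
	}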
Changes since v1:
- Simplify no_wait_gpu case by folding it in with empty ddestroy.
- Hold a reservation while calling ttm_bo_cleanup_memtype_use again.
Changes since v2:
- Do not remove bo from lru list while waiting
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The few places that care should have those checks instead.
This allows destruction of bo backed memory without a reservation.
It's required for being able to rework the delayed destroy path,
as it is no longer guaranteed to hold a reservation before unlocking.
However any previous wait is still guaranteed to complete, and it's
one of the last things to be done before the buffer object is freed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race free way to
take lru lock before fence_lock.
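In sketch form the new ordering is as follows (reserve_nolru() is an illustrative helper name, not the exact TTM symbol):
	ret = reserve_nolru(bo);		/* 1. take the reservation first */
	spin_lock(&glob->lru_lock);		/* 2. then the lru lock */
	spin_lock(&bdev->fence_lock);		/* 3. then the fence lock */
	/* ... inspect the fence, queue or clean up ... */
	spin_unlock(&bdev->fence_lock);
	spin_unlock(&glob->lru_lock);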
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Alex writes:
Pretty minor -next pull request. We have some additional new bits waiting
internally for release. Hopefully Monday we can get at least some of
them out. The others will probably take a few more weeks.
Highlights of the current request:
- ELD registers for passing audio information to the sound hardware
- Handle GPUVM page faults more gracefully
- Misc fixes
Merge radeon test
* 'drm-next-3.8' of git://people.freedesktop.org/~agd5f/linux: (483 commits)
drm/radeon: bump driver version for new info ioctl requests
drm/radeon: fix eDP clk and lane setup for scaled modes
drm/radeon: add new INFO ioctl requests
drm/radeon/dce32+: use fractional fb dividers for high clocks
drm/radeon: use cached memory when evicting for vram on non agp
drm/radeon: add a CS flag END_OF_FRAME
drm/radeon: stop page faults from hanging the system (v2)
drm/radeon/dce4/5: add registers for ELD handling
drm/radeon/dce3.2: add registers for ELD handling
radeon: fix pll/ctrc mapping on dce2 and dce3 hardware
Linux 3.7-rc7
powerpc/eeh: Do not invalidate PE properly
Revert "drm/i915: enable rc6 on ilk again"
ALSA: hda - Fix build without CONFIG_PM
of/address: sparc: Declare of_iomap as an extern function for sparc again
PM / QoS: fix wrong error-checking condition
bnx2x: remove redundant warning log
vxlan: fix command usage in its doc
8139cp: revert "set ring address before enabling receiver"
MPI: Fix compilation on MIPS with GCC 4.4 and newer
...
Conflicts:
drivers/gpu/drm/exynos/exynos_drm_encoder.c
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
Need to use the adjusted mode since we are sending native
timing and using the scaler for non-native modes.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
cc: stable@vger.kernel.org
Add requests to get the number of shader engines (SE) and
the number of SH per SE. These are needed for geometry
and tessellation shaders in the 3D driver as well as setting
up PA_SC_RASTER_CONFIG on SI asics.
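From user space the new values can be read through the existing info ioctl, roughly like this (a sketch; it assumes the request names are RADEON_INFO_MAX_SE and RADEON_INFO_MAX_SH_PER_SE and that the libdrm headers are available):
	#include <stdint.h>
	#include <xf86drm.h>
	#include <radeon_drm.h>

	static int radeon_query(int fd, uint32_t request, uint32_t *out)
	{
		struct drm_radeon_info info = {
			.request = request,
			.value = (uintptr_t)out,	/* kernel writes the result here */
		};

		return drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
	}

	/* e.g.: radeon_query(fd, RADEON_INFO_MAX_SE, &num_se); */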
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Force the use of cached memory when evicting from vram on non-AGP
hardware, and force write-combined memory on AGP hardware. This is to
ensure the minimum cache type change when allocating memory and to
improve memory eviction, especially on PCI/PCIe hardware.
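The idea, sketched against that era's TTM placement flags (the exact radeon code differs):
	if (rdev->flags & RADEON_IS_AGP)
		flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_SYSTEM;	 /* write combine on AGP */
	else
		flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM; /* cached on PCI/PCIe */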
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Redirect invalid memory accesses to the default page
instead of locking up the memory controller. Also
enable the invalid memory access interrupts and
start spamming the system log with them.
v2 (agd5f): fix up against 2 level PT changes
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
This patch adds code for composing AVI and AUI info frames
and sending them every VSYNC.
This is important for HDMI certification.
v3:
- Moved enums, macros to exynos_hdmi.c.
- Corrected hex format.
- Added static to hdmi_reg_infoframe.
v2:
- Added few blank lines.
- Corrected comments format.
- Added comments for 2's Complement calculation for check sum.
v1:
- Remove unnecessary blank lines.
- Change the case of hex constants.
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Fahad Kunnathadi <fahad.k@samsung.com>
Signed-off-by: Shirish S <s.shirish@samsung.com>
Acked-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
devm_clk_get is device managed and makes error handling and exit code
simpler.
Also fixes an error related to returning 'ret' without initialising
it with an error code.
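A sketch of the devm_clk_get() pattern (the device and clock names here are illustrative): the clock is released automatically when the device is unbound, so no clk_put() is needed in the error paths or in remove(), and a real error code is always propagated.
	#include <linux/clk.h>
	#include <linux/err.h>
	#include <linux/platform_device.h>

	static int example_probe(struct platform_device *pdev)
	{
		struct clk *clk;

		clk = devm_clk_get(&pdev->dev, "fimd");	/* managed: released on unbind */
		if (IS_ERR(clk)) {
			dev_err(&pdev->dev, "failed to get clock\n");
			return PTR_ERR(clk);		/* propagate a real error code */
		}

		return 0;
	}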
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
devm_* functions are device managed and make error handling and exit code
simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
devm_clk_get is device managed and makes error handling and exit code
simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Pointer was being dereferenced after freeing.
Fixes the following error:
drivers/gpu/drm/exynos/exynos_drm_g2d.c:323 g2d_userptr_put_dma_addr() error:
dereferencing freed memory 'g2d_userptr'
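The fix follows the usual pattern: read everything needed from the object first and free it last (names here are illustrative, not the exact g2d code):
	dma_addr = g2d_userptr->dma_addr;	/* read while the object is still valid */
	list_del(&g2d_userptr->list);
	kfree(g2d_userptr);			/* free only after the last use */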
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
devm_clk_get is device managed and makes error handling and exit code
simpler.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
The 'pages' structure in the exynos gem buffer has been
removed, so fix.smem_start is now taken from the first sgl
of the scatter-gather table.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch preserves the display mode header during mode adjustment.
The display mode header was being overwritten with the adjusted mode header,
which caused a stack dump.
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
drm_get_edid() returns a pointer to an EDID block. The caller
is responsible for freeing this pointer.
Here the pointer gets assigned to the local variable raw_edid.
Therefore it should be freed before the variable goes out of
scope.
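A sketch of the corrected pattern:
	struct edid *raw_edid;

	raw_edid = drm_get_edid(connector, adapter);
	if (raw_edid) {
		/* ... use the EDID block, copy out whatever is needed ... */
		kfree(raw_edid);	/* caller owns the allocation */
	}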
Signed-off-by: Egbert Eich <eich@suse.de>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v2:
Removed redundant check for invalid sgl.
Added a check for valid page_offset at the beginning of exynos_drm_gem_map_buf.
Changelog v1:
The 'pages' structure is not required since we can use the 'sgt'. Even for
CONTIG buffers, an SGT is created (which will have just one sgl). This SGT
can be used during mmap instead of 'pages'. The 'page_size' element of the
structure is also not used anywhere and is removed.
This patch also fixes a memory leak where the 'pages' structure was being
allocated during gem buffer allocation but not freed during deallocation.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v3:
Passing the actual buffer size instead of vm_size to dma_mmap_attrs.
Changelog v2:
Extracting the private data from fb_info structure to obtain the exynos
gem buffer structure. Now, dma address is obtained from the exynos gem
buffer structure and not from smem_start. Also calling dma_mmap_attrs
(instead of dma_mmap_writecombine) with the same attributes used
during allocation.
Changelog v1:
This patch adds an exynos drm specific implementation of fb_mmap
which supports mapping a non-contiguous buffer to user space.
This new function does not assume that the frame buffer is contiguous
and calls dma_mmap_writecombine for mapping the buffer to user space.
dma_mmap_writecombine will be able to map a contiguous buffer as well
as non-contig buffer depending on whether an IOMMU mapping is created
for drm or not.
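A sketch of the v1 approach described above (the final version instead calls dma_mmap_attrs() with the allocation attributes; exynos_gem_buf and its fields are illustrative stand-ins for the driver's private buffer structure):
	static int example_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
	{
		struct exynos_gem_buf *buf = info->par;	/* illustrative lookup */

		vma->vm_pgoff = 0;
		/* works for both contiguous and IOMMU-backed buffers */
		return dma_mmap_writecombine(info->device, vma, buf->kvaddr,
					     buf->dma_addr, buf->size);
	}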
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v2:
fix a small performance issue in the previous patch.
- When drm framebuffer is destroyed, make sure that overlay
data are updated to the real hardware for all encoders
instead of waiting for vblank on every page flip request.
For this, it adds a new function,
exynos_drm_encoder_complete_scanout.
Changelog v1:
This patch removes the wait_for_vblank call from
the exynos_drm_encoder_plane_disable function and moves it to
the exynos_drm_encoder_plane_commit function.
Disabling the dma channel for each plane doesn't need a vblank
signal to update data to the real hardware, but updating
overlay data to the real hardware does need a vblank signal.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v3:
use drm_file's file object instead of gem object's
- gem object's file represents the shmem storage so
process-unique file object should be used instead.
Changelog v2:
call mutex_lock before drm_vm_open_locked is called.
Changelog v1:
This patch makes it take a reference to the gem object when
a specific gem mmap is requested. For this, it sets
dev->driver->gem_vm_ops to vma->vm_ops.
And this patch is based on exynos-drm-next-iommu branch of
git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch adds the userptr feature for the G2D module.
A userptr is a user space address allocated by malloc(),
and the purpose of this feature is to make G2D's dma able
to access that user space region.
To use this feature, user space should set the G2D_BUF_USERPTR flag in the
offset variable of struct drm_exynos_g2d_cmd and fill
struct drm_exynos_g2d_userptr with user space address
and size for it and then should set a pointer to
drm_exynos_g2d_userptr object to data variable of struct
drm_exynos_g2d_cmd. The last bit of offset variable is used
to check if the cmdlist's buffer type is userptr or not.
If userptr, the g2d driver gets user space address and size
and then gets pages through get_user_pages().
(otherwise the buffer is treated as a gem handle)
Below is sample code:
	static void set_cmd(struct drm_exynos_g2d_cmd *cmd,
			    unsigned long offset, unsigned long data)
	{
		cmd->offset = offset;
		cmd->data = data;
	}

	static int solid_fill_test(int x, int y, unsigned long userptr)
	{
		struct drm_exynos_g2d_cmd cmd_gem[5];
		struct drm_exynos_g2d_userptr g2d_userptr;
		unsigned int gem_nr = 0;
		...
		g2d_userptr.userptr = userptr;
		g2d_userptr.size = x * y * 4;

		set_cmd(&cmd_gem[gem_nr++], DST_BASE_ADDR_REG | G2D_BUF_USERPTR,
			(unsigned long)&g2d_userptr);
		...
	}

	int main(int argc, char **argv)
	{
		unsigned long addr;
		...
		addr = (unsigned long)malloc(x * y * 4);
		...
		solid_fill_test(x, y, addr);
		...
	}
Next, the pages are mapped through the iommu table and the device
address is set in the cmdlist so that G2D's dma can access it.
As you may know, the pages from get_user_pages() are pinned.
In other words, they cannot be migrated or swapped out,
so the dma access is safe.
But using the userptr feature has a performance overhead, so
this patch also adds a memory pool for it.
Assume that user space sends a cmdlist filled with a userptr
and its size to the g2d driver every time: without a pool,
get_user_pages() would be called every time.
The memory pool has a maximum size of 64MB, and every userptr
that user space has ever sent is held in the memory pool.
This means that if a userptr from user space matches one
in the memory pool, the device address of the pooled userptr
is set in the cmdlist.
Finally, the pages from get_user_pages() are freed once user space
calls free() and the dma access is completed. Actually,
get_user_pages() takes 2 reference counts if the user process
has never accessed the user region allocated by malloc(). Then, if
the user calls free(), the page reference count becomes 1 and
drops to 0 with the put_page() call (the reverse order holds as well).
This is how the pages backing the buffer are used by the dma and then freed.
This patch is based on "drm/exynos: add iommu support for g2d",
https://patchwork.kernel.org/patch/1629481/
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
The function dma_get_sgtable() allocates an sg table internally, so
it is not necessary to allocate one beforehand. The unnecessary
'sg_alloc_table' call is removed.
Signed-off-by: Prathyush K <prathyush.k@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch fixes the mapping of contiguous and non-contiguous dma buffers.
Currently the page struct is calculated from buf->dma_addr, which is not the
physical address. It is replaced by buf->pages, which points to the page struct
of the first page of the contiguous memory chunk. This gives the correct page
frame number for mapping.
Non-contiguous dma buffers are described using an SG table and SG lists. Each
valid SG list points to a single page or a group of pages which are
physically contiguous. The current implementation just maps the first page of
each SG list and leaves the other pages unmapped, leading to a crash. The
given solution finds the page struct for the faulting page by parsing the SG
table and maps it.
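In sketch form, the faulting page is found by walking the SG table until the entry covering the faulting offset is reached (variable names are illustrative):
	struct scatterlist *sg;
	struct page *page = NULL;
	unsigned long cur = 0;
	int i;

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		if (fault_offset < cur + sg->length) {
			/* offset falls inside this physically contiguous chunk */
			page = sg_page(sg) + ((fault_offset - cur) >> PAGE_SHIFT);
			break;
		}
		cur += sg->length;
	}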
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
With iommu support, non-contiguous buffers are also supported, so
this patch removes these checks from the exynos_drm_gem_get/put_dma_addr
functions.
This patch is based on the below patch set, "drm/exynos: add
iommu support for -next".
http://www.spinics.net/lists/dri-devel/msg29041.html
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v2:
removed unnecessary structure, struct g2d_gem_node.
Changelog v1:
This patch adds iommu support for the g2d driver. For this, it
adds subdrv_probe/remove callbacks to enable or disable
the g2d iommu. And with this patch, when the g2d iommu is used,
we can get or put the device address for a gem handle from user space
through exynos_drm_gem_get/put_dma_addr(). These
functions take a reference to the gem handle so that the gem
object used by the g2d dma is released properly.
And runqueue_node has a pointer to the drm_file object of the current
process to manage gem handles per owner.
This patch is based on the below patch set, "drm/exynos: add
iommu support for -next".
http://www.spinics.net/lists/dri-devel/msg29041.html
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v2:
move iommu support feature to mixer side.
And below is Prathyush's comment.
According to the new IOMMU framework for exynos sysmmus,
the owner of the sysmmu-tv is mixer (which is the actual
device that does DMA) and not hdmi.
The mmu-master in sysmmu-tv node is set as below in exynos5250.dtsi
sysmmu-tv {
	mmu-master = <&mixer>;
};
Changelog v1:
The iommu will be enabled when hdmi sub driver is probed and
will be disabled when removed.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Replace references to and remove the connector property functions, which
have been superseded by the more general object property functions:
+ drm_connector_attach_property -> drm_object_attach_property
+ drm_connector_property_set_value -> drm_object_property_set_value
+ drm_connector_property_get_value -> drm_object_property_get_value
Signed-off-by: Rob Clark <rob@ti.com>
The iommu will be enabled when fimd sub driver is probed and
will be disabled when removed.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Changelog v4:
- fix condition for the drm_iommu_detach_device function.
Changelog v3:
- add dma_parms->max_segment_size setting of drm_device->dev.
- use devm_kzalloc instead of kzalloc.
Changelog v2:
- fix iommu attach condition.
. check archdata.dma_ops of drm device instead of
subdrv device's one.
- code clean to exynos_drm_iommu.c file.
. remove '#ifdef CONFIG_ARM_DMA_USE_IOMMU' from exynos_drm_iommu.c
and add it to driver/gpu/drm/exynos/Kconfig.
Changelog v1:
This patch adds iommu support for the exynos drm framework using the
dma mapping api. In this patch, we use the dma mapping api to allocate
physical memory and map it through the iommu table, removing some
existing code and adding new code for iommu support.
GEM allocation requires one device object to use the dma mapping api, so
this patch uses one iommu mapping for all sub drivers. In other words,
all sub drivers share the same iommu mapping.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Currently the only users of drm_vblank_off() are i915 and gma500,
neither of which holds the event_lock when calling this function.
Fix this by holding the event_lock while traversing the list.
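A sketch of the fixed pattern inside drm_vblank_off(): the pending-event list is only walked with dev->event_lock held.
	unsigned long irqflags;
	struct drm_pending_vblank_event *e, *t;

	spin_lock_irqsave(&dev->event_lock, irqflags);
	list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) {
		if (e->pipe != crtc)
			continue;
		/* complete the event and drop its vblank reference */
	}
	spin_unlock_irqrestore(&dev->event_lock, irqflags);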
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Currently the exynos driver calls drm_vblank_off() with the event_lock
held, while drm_vblank_off() will lock vbl_time and vblank_time_lock.
This lock dependency chain conflicts with the one in drm_handle_vblank()
where we first lock vblank_time_lock and then the event_lock.
Fix this by removing the above drm_vblank_off() calls which are in fact
never executed: drm_dev->vblank_disable_allowed is only ever zero
during driver init, until it's set in {fimd,vidi}_subdrv_probe. Both the
driver init and open code are protected by drm_global_mutex, so the
earliest page flip ioctl can happen only after vblank_disable_allowed is
set to 1. Thus {fimd,vidi}_finish_pageflip - with pending flip events -
will always get called with vblank_disable_allowed being 1.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
It's guaranteed that for each event on pageflip_event_list we have
called drm_vblank_get() - see exynos_drm_crtc_page_flip() - so checking
for this is redundant.
Also we need to call drm_vblank_put() for each event on the list, not
only once, otherwise we'd leak vblank references if there are multiple
events on the list.
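Roughly, the completion loop then looks like this (a sketch following the exynos pageflip handlers of this era; field names are illustrative): one drm_vblank_put() per completed event.
	list_for_each_entry_safe(e, t, &dev_priv->pageflip_event_list, base.link) {
		/* ... fill in the vblank timestamp/sequence for the event ... */
		list_move_tail(&e->base.link, &e->base.file_priv->event_list);
		wake_up_interruptible(&e->base.file_priv->event_wait);
		drm_vblank_put(drm_dev, crtc);	/* one put per event, not one per list */
	}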
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Check that overlay_ops is not NULL, as is done in the previous 'if' condition.
Fixes the following smatch error:
drivers/gpu/drm/exynos/exynos_drm_encoder.c:509 exynos_drm_encoder_plane_disable()
error: we previously assumed 'overlay_ops' could be null (see line 499)
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>