/*
 * Copyright 2008 Advanced Micro Devices, Inc.
 * Copyright 2008 Red Hat Inc.
 * Copyright 2009 Jerome Glisse.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Dave Airlie
 *          Alex Deucher
 *          Jerome Glisse
 */
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/firmware.h>
#include <linux/module.h>
#include <drm/drmP.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
#include "radeon_asic.h"
#include "radeon_mode.h"
#include "r600d.h"
#include "atom.h"
#include "avivod.h"
#include "radeon_ucode.h"

/* Firmware Names */
MODULE_FIRMWARE("radeon/R600_pfp.bin");
MODULE_FIRMWARE("radeon/R600_me.bin");
MODULE_FIRMWARE("radeon/RV610_pfp.bin");
MODULE_FIRMWARE("radeon/RV610_me.bin");
MODULE_FIRMWARE("radeon/RV630_pfp.bin");
MODULE_FIRMWARE("radeon/RV630_me.bin");
MODULE_FIRMWARE("radeon/RV620_pfp.bin");
MODULE_FIRMWARE("radeon/RV620_me.bin");
MODULE_FIRMWARE("radeon/RV635_pfp.bin");
MODULE_FIRMWARE("radeon/RV635_me.bin");
MODULE_FIRMWARE("radeon/RV670_pfp.bin");
MODULE_FIRMWARE("radeon/RV670_me.bin");
MODULE_FIRMWARE("radeon/RS780_pfp.bin");
MODULE_FIRMWARE("radeon/RS780_me.bin");
MODULE_FIRMWARE("radeon/RV770_pfp.bin");
MODULE_FIRMWARE("radeon/RV770_me.bin");
MODULE_FIRMWARE("radeon/RV770_smc.bin");
MODULE_FIRMWARE("radeon/RV730_pfp.bin");
MODULE_FIRMWARE("radeon/RV730_me.bin");
MODULE_FIRMWARE("radeon/RV730_smc.bin");
MODULE_FIRMWARE("radeon/RV740_smc.bin");
MODULE_FIRMWARE("radeon/RV710_pfp.bin");
MODULE_FIRMWARE("radeon/RV710_me.bin");
MODULE_FIRMWARE("radeon/RV710_smc.bin");
MODULE_FIRMWARE("radeon/R600_rlc.bin");
MODULE_FIRMWARE("radeon/R700_rlc.bin");
MODULE_FIRMWARE("radeon/CEDAR_pfp.bin");
MODULE_FIRMWARE("radeon/CEDAR_me.bin");
MODULE_FIRMWARE("radeon/CEDAR_rlc.bin");
MODULE_FIRMWARE("radeon/CEDAR_smc.bin");
MODULE_FIRMWARE("radeon/REDWOOD_pfp.bin");
MODULE_FIRMWARE("radeon/REDWOOD_me.bin");
MODULE_FIRMWARE("radeon/REDWOOD_rlc.bin");
MODULE_FIRMWARE("radeon/REDWOOD_smc.bin");
MODULE_FIRMWARE("radeon/JUNIPER_pfp.bin");
MODULE_FIRMWARE("radeon/JUNIPER_me.bin");
MODULE_FIRMWARE("radeon/JUNIPER_rlc.bin");
MODULE_FIRMWARE("radeon/JUNIPER_smc.bin");
MODULE_FIRMWARE("radeon/CYPRESS_pfp.bin");
MODULE_FIRMWARE("radeon/CYPRESS_me.bin");
MODULE_FIRMWARE("radeon/CYPRESS_rlc.bin");
MODULE_FIRMWARE("radeon/CYPRESS_smc.bin");
MODULE_FIRMWARE("radeon/PALM_pfp.bin");
MODULE_FIRMWARE("radeon/PALM_me.bin");
MODULE_FIRMWARE("radeon/SUMO_rlc.bin");
MODULE_FIRMWARE("radeon/SUMO_pfp.bin");
MODULE_FIRMWARE("radeon/SUMO_me.bin");
MODULE_FIRMWARE("radeon/SUMO2_pfp.bin");
MODULE_FIRMWARE("radeon/SUMO2_me.bin");

static const u32 crtc_offsets[2] =
{
	0,
	AVIVO_D2CRTC_H_TOTAL - AVIVO_D1CRTC_H_TOTAL
};

int r600_debugfs_mc_info_init(struct radeon_device *rdev);

/* r600,rv610,rv630,rv620,rv635,rv670 */
int r600_mc_wait_for_idle(struct radeon_device *rdev);
static void r600_gpu_init(struct radeon_device *rdev);
void r600_fini(struct radeon_device *rdev);
void r600_irq_disable(struct radeon_device *rdev);
static void r600_pcie_gen2_enable(struct radeon_device *rdev);
extern int evergreen_rlc_resume(struct radeon_device *rdev);
extern void rv770_set_clk_bypass_mode(struct radeon_device *rdev);
/**
 * r600_get_xclk - get the xclk
 *
 * @rdev: radeon_device pointer
 *
 * Returns the reference clock used by the gfx engine
 * (r6xx, IGPs, APUs).
 */
u32 r600_get_xclk(struct radeon_device *rdev)
{
	return rdev->clock.spll.reference_freq;
}

int r600_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk)
{
	return 0;
}

void dce3_program_fmt(struct drm_encoder *encoder)
{
	struct drm_device *dev = encoder->dev;
	struct radeon_device *rdev = dev->dev_private;
	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
	struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
	struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
	int bpc = 0;
	u32 tmp = 0;
	enum radeon_connector_dither dither = RADEON_FMT_DITHER_DISABLE;

	if (connector) {
		struct radeon_connector *radeon_connector = to_radeon_connector(connector);

		bpc = radeon_get_monitor_bpc(connector);
		dither = radeon_connector->dither;
	}

	/* LVDS FMT is set up by atom */
	if (radeon_encoder->devices & ATOM_DEVICE_LCD_SUPPORT)
		return;

	/* not needed for analog */
	if ((radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1) ||
	    (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2))
		return;

	if (bpc == 0)
		return;

	switch (bpc) {
	case 6:
		if (dither == RADEON_FMT_DITHER_ENABLE)
			/* XXX sort out optimal dither settings */
			tmp |= FMT_SPATIAL_DITHER_EN;
		else
			tmp |= FMT_TRUNCATE_EN;
		break;
	case 8:
		if (dither == RADEON_FMT_DITHER_ENABLE)
			/* XXX sort out optimal dither settings */
			tmp |= (FMT_SPATIAL_DITHER_EN | FMT_SPATIAL_DITHER_DEPTH);
		else
			tmp |= (FMT_TRUNCATE_EN | FMT_TRUNCATE_DEPTH);
		break;
	case 10:
	default:
		/* not needed */
		break;
	}

	WREG32(FMT_BIT_DEPTH_CONTROL + radeon_crtc->crtc_offset, tmp);
}

/* get temperature in millidegrees */
int rv6xx_get_temp(struct radeon_device *rdev)
{
	u32 temp = (RREG32(CG_THERMAL_STATUS) & ASIC_T_MASK) >>
		ASIC_T_SHIFT;
	int actual_temp = temp & 0xff;

	if (temp & 0x100)
		actual_temp -= 256;

	return actual_temp * 1000;
}

void r600_pm_get_dynpm_state(struct radeon_device *rdev)
{
	int i;

	rdev->pm.dynpm_can_upclock = true;
	rdev->pm.dynpm_can_downclock = true;

	/* power state array is low to high, default is first */
	if ((rdev->flags & RADEON_IS_IGP) || (rdev->family == CHIP_R600)) {
		int min_power_state_index = 0;

		if (rdev->pm.num_power_states > 2)
			min_power_state_index = 1;

		switch (rdev->pm.dynpm_planned_action) {
		case DYNPM_ACTION_MINIMUM:
			rdev->pm.requested_power_state_index = min_power_state_index;
			rdev->pm.requested_clock_mode_index = 0;
			rdev->pm.dynpm_can_downclock = false;
			break;
		case DYNPM_ACTION_DOWNCLOCK:
			if (rdev->pm.current_power_state_index == min_power_state_index) {
				rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
				rdev->pm.dynpm_can_downclock = false;
			} else {
				if (rdev->pm.active_crtc_count > 1) {
					for (i = 0; i < rdev->pm.num_power_states; i++) {
						if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
							continue;
						else if (i >= rdev->pm.current_power_state_index) {
							rdev->pm.requested_power_state_index =
								rdev->pm.current_power_state_index;
							break;
						} else {
							rdev->pm.requested_power_state_index = i;
							break;
						}
					}
				} else {
					if (rdev->pm.current_power_state_index == 0)
						rdev->pm.requested_power_state_index =
							rdev->pm.num_power_states - 1;
					else
						rdev->pm.requested_power_state_index =
							rdev->pm.current_power_state_index - 1;
				}
			}
			rdev->pm.requested_clock_mode_index = 0;
			/* don't use the power state if crtcs are active and no display flag is set */
			if ((rdev->pm.active_crtc_count > 0) &&
			    (rdev->pm.power_state[rdev->pm.requested_power_state_index].
			     clock_info[rdev->pm.requested_clock_mode_index].flags &
			     RADEON_PM_MODE_NO_DISPLAY)) {
				rdev->pm.requested_power_state_index++;
			}
			break;
		case DYNPM_ACTION_UPCLOCK:
			if (rdev->pm.current_power_state_index == (rdev->pm.num_power_states - 1)) {
				rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
				rdev->pm.dynpm_can_upclock = false;
			} else {
				if (rdev->pm.active_crtc_count > 1) {
					for (i = (rdev->pm.num_power_states - 1); i >= 0; i--) {
						if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
							continue;
						else if (i <= rdev->pm.current_power_state_index) {
							rdev->pm.requested_power_state_index =
								rdev->pm.current_power_state_index;
							break;
						} else {
							rdev->pm.requested_power_state_index = i;
							break;
						}
					}
				} else
					rdev->pm.requested_power_state_index =
						rdev->pm.current_power_state_index + 1;
			}
			rdev->pm.requested_clock_mode_index = 0;
			break;
		case DYNPM_ACTION_DEFAULT:
			rdev->pm.requested_power_state_index = rdev->pm.default_power_state_index;
			rdev->pm.requested_clock_mode_index = 0;
			rdev->pm.dynpm_can_upclock = false;
			break;
		case DYNPM_ACTION_NONE:
		default:
			DRM_ERROR("Requested mode for not defined action\n");
			return;
		}
	} else {
		/* XXX select a power state based on AC/DC, single/dualhead, etc. */
		/* for now just select the first power state and switch between clock modes */
		/* power state array is low to high, default is first (0) */
		if (rdev->pm.active_crtc_count > 1) {
			rdev->pm.requested_power_state_index = -1;
			/* start at 1 as we don't want the default mode */
			for (i = 1; i < rdev->pm.num_power_states; i++) {
				if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
					continue;
				else if ((rdev->pm.power_state[i].type == POWER_STATE_TYPE_PERFORMANCE) ||
					 (rdev->pm.power_state[i].type == POWER_STATE_TYPE_BATTERY)) {
					rdev->pm.requested_power_state_index = i;
					break;
				}
			}
			/* if nothing selected, grab the default state. */
			if (rdev->pm.requested_power_state_index == -1)
				rdev->pm.requested_power_state_index = 0;
		} else
			rdev->pm.requested_power_state_index = 1;

		switch (rdev->pm.dynpm_planned_action) {
		case DYNPM_ACTION_MINIMUM:
			rdev->pm.requested_clock_mode_index = 0;
			rdev->pm.dynpm_can_downclock = false;
			break;
		case DYNPM_ACTION_DOWNCLOCK:
			if (rdev->pm.requested_power_state_index == rdev->pm.current_power_state_index) {
				if (rdev->pm.current_clock_mode_index == 0) {
					rdev->pm.requested_clock_mode_index = 0;
					rdev->pm.dynpm_can_downclock = false;
				} else
					rdev->pm.requested_clock_mode_index =
						rdev->pm.current_clock_mode_index - 1;
			} else {
				rdev->pm.requested_clock_mode_index = 0;
				rdev->pm.dynpm_can_downclock = false;
			}
			/* don't use the power state if crtcs are active and no display flag is set */
			if ((rdev->pm.active_crtc_count > 0) &&
			    (rdev->pm.power_state[rdev->pm.requested_power_state_index].
			     clock_info[rdev->pm.requested_clock_mode_index].flags &
			     RADEON_PM_MODE_NO_DISPLAY)) {
				rdev->pm.requested_clock_mode_index++;
			}
			break;
		case DYNPM_ACTION_UPCLOCK:
			if (rdev->pm.requested_power_state_index == rdev->pm.current_power_state_index) {
				if (rdev->pm.current_clock_mode_index ==
				    (rdev->pm.power_state[rdev->pm.requested_power_state_index].num_clock_modes - 1)) {
					rdev->pm.requested_clock_mode_index = rdev->pm.current_clock_mode_index;
					rdev->pm.dynpm_can_upclock = false;
				} else
					rdev->pm.requested_clock_mode_index =
						rdev->pm.current_clock_mode_index + 1;
			} else {
				rdev->pm.requested_clock_mode_index =
					rdev->pm.power_state[rdev->pm.requested_power_state_index].num_clock_modes - 1;
				rdev->pm.dynpm_can_upclock = false;
			}
			break;
		case DYNPM_ACTION_DEFAULT:
			rdev->pm.requested_power_state_index = rdev->pm.default_power_state_index;
			rdev->pm.requested_clock_mode_index = 0;
			rdev->pm.dynpm_can_upclock = false;
			break;
		case DYNPM_ACTION_NONE:
		default:
			DRM_ERROR("Requested mode for not defined action\n");
			return;
		}
	}

	DRM_DEBUG_DRIVER("Requested: e: %d m: %d p: %d\n",
			 rdev->pm.power_state[rdev->pm.requested_power_state_index].
			 clock_info[rdev->pm.requested_clock_mode_index].sclk,
			 rdev->pm.power_state[rdev->pm.requested_power_state_index].
			 clock_info[rdev->pm.requested_clock_mode_index].mclk,
			 rdev->pm.power_state[rdev->pm.requested_power_state_index].
			 pcie_lanes);
}

void rs780_pm_init_profile(struct radeon_device *rdev)
{
	if (rdev->pm.num_power_states == 2) {
		/* default */
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
		/* low sh */
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
		/* mid sh */
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
		/* high sh */
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
		/* low mh */
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
		/* mid mh */
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
		/* high mh */
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
	} else if (rdev->pm.num_power_states == 3) {
		/* default */
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
		/* low sh */
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
		/* mid sh */
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
		/* high sh */
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
		/* low mh */
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
		/* mid mh */
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
		/* high mh */
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 1;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
	} else {
		/* default */
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
		/* low sh */
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
		/* mid sh */
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
		/* high sh */
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 3;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
		/* low mh */
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
		/* mid mh */
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
		/* high mh */
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 2;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 3;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
	}
}

void r600_pm_init_profile(struct radeon_device *rdev)
|
|
|
|
{
|
2011-11-04 22:09:42 +08:00
|
|
|
int idx;
|
|
|
|
|
2010-05-08 03:10:16 +08:00
|
|
|
if (rdev->family == CHIP_R600) {
|
|
|
|
/* XXX */
|
|
|
|
/* default */
|
|
|
|
rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
|
|
|
|
rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
|
|
|
|
rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
|
2010-05-18 07:41:26 +08:00
|
|
|
rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
|
2010-05-08 03:10:16 +08:00
|
|
|
/* low sh */
|
|
|
|
rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
|
|
|
|
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
		/* mid sh */
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
		/* high sh */
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
		/* low mh */
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
		/* mid mh */
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
		/* high mh */
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
	} else {
		if (rdev->pm.num_power_states < 4) {
			/* default */
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 2;
			/* low sh */
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
			/* mid sh */
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1;
			/* high sh */
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 1;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 2;
			/* low mh */
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
			/* mid mh */
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1;
			/* high mh */
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 2;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 2;
		} else {
			/* default */
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 2;
			/* low sh */
			if (rdev->flags & RADEON_IS_MOBILITY)
				idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0);
			else
				idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
			/* mid sh */
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1;
			/* high sh */
			idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 2;
			/* low mh */
			if (rdev->flags & RADEON_IS_MOBILITY)
				idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1);
			else
				idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
			/* mid mh */
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1;
			/* high mh */
			idx = radeon_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = idx;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 2;
		}
	}
}

void r600_pm_misc(struct radeon_device *rdev)
{
	int req_ps_idx = rdev->pm.requested_power_state_index;
	int req_cm_idx = rdev->pm.requested_clock_mode_index;
	struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx];
	struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage;

	if ((voltage->type == VOLTAGE_SW) && voltage->voltage) {
		/* 0xff01 is a flag rather than an actual voltage */
		if (voltage->voltage == 0xff01)
			return;
		if (voltage->voltage != rdev->pm.current_vddc) {
			radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC);
			rdev->pm.current_vddc = voltage->voltage;
			DRM_DEBUG_DRIVER("Setting: v: %d\n", voltage->voltage);
		}
	}
}
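The guard in r600_pm_misc() above is a write-only-on-change caching pattern: skip the sentinel 0xff01 value, and touch the hardware only when the requested voltage differs from the cached one. A minimal standalone sketch of that pattern (the `hw_set_voltage`/`set_vddc_cached` names are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver state: the cached voltage and a
 * counter for how many hardware writes actually happen. */
static uint16_t current_vddc;
static int hw_writes;

static void hw_set_voltage(uint16_t mv)
{
	(void)mv;
	hw_writes++;		/* stands in for radeon_atom_set_voltage() */
}

/* Mirrors the r600_pm_misc() logic: skip the 0xff01 flag value and only
 * touch the hardware when the requested voltage really changes. */
static void set_vddc_cached(uint16_t mv)
{
	if (mv == 0xff01)	/* flag, not an actual voltage */
		return;
	if (mv != current_vddc) {
		hw_set_voltage(mv);
		current_vddc = mv;
	}
}
```

Repeated requests for the same voltage, and the flag value, cost no hardware writes.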

bool r600_gui_idle(struct radeon_device *rdev)
{
	if (RREG32(GRBM_STATUS) & GUI_ACTIVE)
		return false;
	else
		return true;
}

/* hpd for digital panel detect/disconnect */
bool r600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd)
{
	bool connected = false;

	if (ASIC_IS_DCE3(rdev)) {
		switch (hpd) {
		case RADEON_HPD_1:
			if (RREG32(DC_HPD1_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_2:
			if (RREG32(DC_HPD2_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_3:
			if (RREG32(DC_HPD3_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_4:
			if (RREG32(DC_HPD4_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		/* DCE 3.2 */
		case RADEON_HPD_5:
			if (RREG32(DC_HPD5_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_6:
			if (RREG32(DC_HPD6_INT_STATUS) & DC_HPDx_SENSE)
				connected = true;
			break;
		default:
			break;
		}
	} else {
		switch (hpd) {
		case RADEON_HPD_1:
			if (RREG32(DC_HOT_PLUG_DETECT1_INT_STATUS) & DC_HOT_PLUG_DETECTx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_2:
			if (RREG32(DC_HOT_PLUG_DETECT2_INT_STATUS) & DC_HOT_PLUG_DETECTx_SENSE)
				connected = true;
			break;
		case RADEON_HPD_3:
			if (RREG32(DC_HOT_PLUG_DETECT3_INT_STATUS) & DC_HOT_PLUG_DETECTx_SENSE)
				connected = true;
			break;
		default:
			break;
		}
	}
	return connected;
}

void r600_hpd_set_polarity(struct radeon_device *rdev,
			   enum radeon_hpd_id hpd)
{
	u32 tmp;
	bool connected = r600_hpd_sense(rdev, hpd);

	if (ASIC_IS_DCE3(rdev)) {
		switch (hpd) {
		case RADEON_HPD_1:
			tmp = RREG32(DC_HPD1_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD1_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_2:
			tmp = RREG32(DC_HPD2_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD2_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_3:
			tmp = RREG32(DC_HPD3_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD3_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_4:
			tmp = RREG32(DC_HPD4_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD4_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_5:
			tmp = RREG32(DC_HPD5_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD5_INT_CONTROL, tmp);
			break;
		/* DCE 3.2 */
		case RADEON_HPD_6:
			tmp = RREG32(DC_HPD6_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HPDx_INT_POLARITY;
			else
				tmp |= DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD6_INT_CONTROL, tmp);
			break;
		default:
			break;
		}
	} else {
		switch (hpd) {
		case RADEON_HPD_1:
			tmp = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HOT_PLUG_DETECTx_INT_POLARITY;
			else
				tmp |= DC_HOT_PLUG_DETECTx_INT_POLARITY;
			WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_2:
			tmp = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HOT_PLUG_DETECTx_INT_POLARITY;
			else
				tmp |= DC_HOT_PLUG_DETECTx_INT_POLARITY;
			WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, tmp);
			break;
		case RADEON_HPD_3:
			tmp = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL);
			if (connected)
				tmp &= ~DC_HOT_PLUG_DETECTx_INT_POLARITY;
			else
				tmp |= DC_HOT_PLUG_DETECTx_INT_POLARITY;
			WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, tmp);
			break;
		default:
			break;
		}
	}
}

void r600_hpd_init(struct radeon_device *rdev)
{
	struct drm_device *dev = rdev->ddev;
	struct drm_connector *connector;
	unsigned enable = 0;

	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
		struct radeon_connector *radeon_connector = to_radeon_connector(connector);

		if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
		    connector->connector_type == DRM_MODE_CONNECTOR_LVDS) {
			/* don't try to enable hpd on eDP or LVDS to avoid
			 * breaking the aux dp channel on imac; this helps
			 * (but does not completely fix)
			 * https://bugzilla.redhat.com/show_bug.cgi?id=726143
			 */
			continue;
		}
		if (ASIC_IS_DCE3(rdev)) {
			u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) | DC_HPDx_RX_INT_TIMER(0xfa);
			if (ASIC_IS_DCE32(rdev))
				tmp |= DC_HPDx_EN;

			switch (radeon_connector->hpd.hpd) {
			case RADEON_HPD_1:
				WREG32(DC_HPD1_CONTROL, tmp);
				break;
			case RADEON_HPD_2:
				WREG32(DC_HPD2_CONTROL, tmp);
				break;
			case RADEON_HPD_3:
				WREG32(DC_HPD3_CONTROL, tmp);
				break;
			case RADEON_HPD_4:
				WREG32(DC_HPD4_CONTROL, tmp);
				break;
			/* DCE 3.2 */
			case RADEON_HPD_5:
				WREG32(DC_HPD5_CONTROL, tmp);
				break;
			case RADEON_HPD_6:
				WREG32(DC_HPD6_CONTROL, tmp);
				break;
			default:
				break;
			}
		} else {
			switch (radeon_connector->hpd.hpd) {
			case RADEON_HPD_1:
				WREG32(DC_HOT_PLUG_DETECT1_CONTROL, DC_HOT_PLUG_DETECTx_EN);
				break;
			case RADEON_HPD_2:
				WREG32(DC_HOT_PLUG_DETECT2_CONTROL, DC_HOT_PLUG_DETECTx_EN);
				break;
			case RADEON_HPD_3:
				WREG32(DC_HOT_PLUG_DETECT3_CONTROL, DC_HOT_PLUG_DETECTx_EN);
				break;
			default:
				break;
			}
		}
		enable |= 1 << radeon_connector->hpd.hpd;
		radeon_hpd_set_polarity(rdev, radeon_connector->hpd.hpd);
	}
	radeon_irq_kms_enable_hpd(rdev, enable);
}
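r600_hpd_init() accumulates one bit per connector with `enable |= 1 << radeon_connector->hpd.hpd` so the interrupt side can be programmed once for all pins, rather than per connector. A minimal sketch of that mask-building step in isolation (the `hpd_id`/`build_hpd_mask` names are illustrative, not the driver's):

```c
#include <assert.h>

/* Hypothetical 0-based pin numbering mirroring RADEON_HPD_1..6. */
enum hpd_id { HPD_1, HPD_2, HPD_3, HPD_4, HPD_5, HPD_6 };

/* Accumulate one bit per connector pin into a single enable mask,
 * the way r600_hpd_init() feeds radeon_irq_kms_enable_hpd(). */
static unsigned build_hpd_mask(const enum hpd_id *pins, int n)
{
	unsigned mask = 0;
	int i;

	for (i = 0; i < n; i++)
		mask |= 1u << pins[i];
	return mask;
}
```

r600_hpd_fini() uses the same idea with a `disable` mask before tearing the pins down.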

void r600_hpd_fini(struct radeon_device *rdev)
{
	struct drm_device *dev = rdev->ddev;
	struct drm_connector *connector;
	unsigned disable = 0;

	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
		if (ASIC_IS_DCE3(rdev)) {
			switch (radeon_connector->hpd.hpd) {
			case RADEON_HPD_1:
				WREG32(DC_HPD1_CONTROL, 0);
				break;
			case RADEON_HPD_2:
				WREG32(DC_HPD2_CONTROL, 0);
				break;
			case RADEON_HPD_3:
				WREG32(DC_HPD3_CONTROL, 0);
				break;
			case RADEON_HPD_4:
				WREG32(DC_HPD4_CONTROL, 0);
				break;
			/* DCE 3.2 */
			case RADEON_HPD_5:
				WREG32(DC_HPD5_CONTROL, 0);
				break;
			case RADEON_HPD_6:
				WREG32(DC_HPD6_CONTROL, 0);
				break;
			default:
				break;
			}
		} else {
			switch (radeon_connector->hpd.hpd) {
			case RADEON_HPD_1:
				WREG32(DC_HOT_PLUG_DETECT1_CONTROL, 0);
				break;
			case RADEON_HPD_2:
				WREG32(DC_HOT_PLUG_DETECT2_CONTROL, 0);
				break;
			case RADEON_HPD_3:
				WREG32(DC_HOT_PLUG_DETECT3_CONTROL, 0);
				break;
			default:
				break;
			}
		}
		disable |= 1 << radeon_connector->hpd.hpd;
	}
	radeon_irq_kms_disable_hpd(rdev, disable);
}

/*
 * R600 PCIE GART
 */
void r600_pcie_gart_tlb_flush(struct radeon_device *rdev)
{
	unsigned i;
	u32 tmp;

	/* flush hdp cache so updates hit vram */
	if ((rdev->family >= CHIP_RV770) && (rdev->family <= CHIP_RV740) &&
	    !(rdev->flags & RADEON_IS_AGP)) {
		void __iomem *ptr = (void *)rdev->gart.ptr;
		u32 tmp;

		/* r7xx hw bug. write to HDP_DEBUG1 followed by fb read
		 * rather than write to HDP_REG_COHERENCY_FLUSH_CNTL.
		 * This seems to cause problems on some AGP cards. Just use the old
		 * method for them.
		 */
		WREG32(HDP_DEBUG1, 0);
		tmp = readl((void __iomem *)ptr);
	} else
		WREG32(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0x1);

	WREG32(VM_CONTEXT0_INVALIDATION_LOW_ADDR, rdev->mc.gtt_start >> 12);
	WREG32(VM_CONTEXT0_INVALIDATION_HIGH_ADDR, (rdev->mc.gtt_end - 1) >> 12);
	WREG32(VM_CONTEXT0_REQUEST_RESPONSE, REQUEST_TYPE(1));
	for (i = 0; i < rdev->usec_timeout; i++) {
		/* read MC_STATUS */
		tmp = RREG32(VM_CONTEXT0_REQUEST_RESPONSE);
		tmp = (tmp & RESPONSE_TYPE_MASK) >> RESPONSE_TYPE_SHIFT;
		if (tmp == 2) {
			printk(KERN_WARNING "[drm] r600 flush TLB failed\n");
			return;
		}
		if (tmp) {
			return;
		}
		udelay(1);
	}
}

int r600_pcie_gart_init(struct radeon_device *rdev)
{
	int r;

	if (rdev->gart.robj) {
		WARN(1, "R600 PCIE GART already initialized\n");
		return 0;
	}
	/* Initialize common gart structure */
	r = radeon_gart_init(rdev);
	if (r)
		return r;
	rdev->gart.table_size = rdev->gart.num_gpu_pages * 8;
	return radeon_gart_table_vram_alloc(rdev);
}

static int r600_pcie_gart_enable(struct radeon_device *rdev)
{
	u32 tmp;
	int r, i;

	if (rdev->gart.robj == NULL) {
		dev_err(rdev->dev, "No VRAM object for PCIE GART.\n");
		return -EINVAL;
	}
	r = radeon_gart_table_vram_pin(rdev);
	if (r)
		return r;
	radeon_gart_restore(rdev);

	/* Setup L2 cache */
	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
				ENABLE_L2_PTE_CACHE_LRU_UPDATE_BY_WRITE |
				EFFECTIVE_L2_QUEUE_SIZE(7));
	WREG32(VM_L2_CNTL2, 0);
	WREG32(VM_L2_CNTL3, BANK_SELECT_0(0) | BANK_SELECT_1(1));
	/* Setup TLB control */
	tmp = ENABLE_L1_TLB | ENABLE_L1_FRAGMENT_PROCESSING |
		SYSTEM_ACCESS_MODE_NOT_IN_SYS |
		EFFECTIVE_L1_TLB_SIZE(5) | EFFECTIVE_L1_QUEUE_SIZE(5) |
		ENABLE_WAIT_L2_QUERY;
	WREG32(MC_VM_L1_TLB_MCB_RD_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_HDP_CNTL, tmp | ENABLE_L1_STRICT_ORDERING);
	WREG32(MC_VM_L1_TLB_MCB_WR_HDP_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_RD_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_RD_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
	WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
	WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
	WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
	WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
	WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) |
				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);
	WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
			(u32)(rdev->dummy_page.addr >> 12));
	for (i = 1; i < 7; i++)
		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);

	r600_pcie_gart_tlb_flush(rdev);
	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
		 (unsigned)(rdev->mc.gtt_size >> 20),
		 (unsigned long long)rdev->gart.table_addr);
	rdev->gart.ready = true;
	return 0;
}
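The GART programmed above translates GPU addresses in [gtt_start, gtt_end) through a page table with one 64-bit entry per 4 KiB page, which is why the register writes shift addresses by 12 and why table_size is `num_gpu_pages * 8`. A minimal sketch of the translation such a flat table performs, assuming page-aligned entries (the `gart_translate` name and table layout are illustrative, not the hardware's exact format):

```c
#include <assert.h>
#include <stdint.h>

#define GPU_PAGE_SHIFT 12	/* 4 KiB pages, hence the ">> 12" shifts */

/* Look up a GPU virtual address in a flat page table of 64-bit entries
 * (one per page) and return the backing system/VRAM address. */
static uint64_t gart_translate(const uint64_t *table, uint64_t gtt_start,
			       uint64_t gpu_addr)
{
	uint64_t page = (gpu_addr - gtt_start) >> GPU_PAGE_SHIFT;
	uint64_t offset = gpu_addr & ((1ULL << GPU_PAGE_SHIFT) - 1);

	/* low bits of a real entry hold flags; mask them off */
	return (table[page] & ~0xFFFULL) | offset;
}
```

This is why r600_pcie_gart_tlb_flush() must run after the table changes: the TLB caches these lookups.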

static void r600_pcie_gart_disable(struct radeon_device *rdev)
{
	u32 tmp;
	int i;

	/* Disable all tables */
	for (i = 0; i < 7; i++)
		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);

	/* Disable L2 cache */
	WREG32(VM_L2_CNTL, ENABLE_L2_FRAGMENT_PROCESSING |
				EFFECTIVE_L2_QUEUE_SIZE(7));
	WREG32(VM_L2_CNTL3, BANK_SELECT_0(0) | BANK_SELECT_1(1));
	/* Setup L1 TLB control */
	tmp = EFFECTIVE_L1_TLB_SIZE(5) | EFFECTIVE_L1_QUEUE_SIZE(5) |
		ENABLE_WAIT_L2_QUERY;
	WREG32(MC_VM_L1_TLB_MCD_RD_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_RD_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_HDP_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_HDP_CNTL, tmp);
	radeon_gart_table_vram_unpin(rdev);
}

static void r600_pcie_gart_fini(struct radeon_device *rdev)
{
	radeon_gart_fini(rdev);
	r600_pcie_gart_disable(rdev);
	radeon_gart_table_vram_free(rdev);
}

static void r600_agp_enable(struct radeon_device *rdev)
{
	u32 tmp;
	int i;

	/* Setup L2 cache */
	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
				ENABLE_L2_PTE_CACHE_LRU_UPDATE_BY_WRITE |
				EFFECTIVE_L2_QUEUE_SIZE(7));
	WREG32(VM_L2_CNTL2, 0);
	WREG32(VM_L2_CNTL3, BANK_SELECT_0(0) | BANK_SELECT_1(1));
	/* Setup TLB control */
	tmp = ENABLE_L1_TLB | ENABLE_L1_FRAGMENT_PROCESSING |
		SYSTEM_ACCESS_MODE_NOT_IN_SYS |
		EFFECTIVE_L1_TLB_SIZE(5) | EFFECTIVE_L1_QUEUE_SIZE(5) |
		ENABLE_WAIT_L2_QUERY;
	WREG32(MC_VM_L1_TLB_MCB_RD_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_SYS_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_HDP_CNTL, tmp | ENABLE_L1_STRICT_ORDERING);
	WREG32(MC_VM_L1_TLB_MCB_WR_HDP_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_RD_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_A_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_RD_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCD_WR_B_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_GFX_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_WR_PDMA_CNTL, tmp);
	WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
	WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
	for (i = 0; i < 7; i++)
		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);
}

int r600_mc_wait_for_idle(struct radeon_device *rdev)
{
	unsigned i;
	u32 tmp;

	for (i = 0; i < rdev->usec_timeout; i++) {
		/* read MC_STATUS */
		tmp = RREG32(R_000E50_SRBM_STATUS) & 0x3F00;
		if (!tmp)
			return 0;
		udelay(1);
	}
	return -1;
}
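r600_mc_wait_for_idle() is the driver's standard poll-with-timeout shape: read a masked status word once per microsecond until it clears, and give up after `usec_timeout` iterations. The same shape appears in r600_pcie_gart_tlb_flush() and r600_mc_program() above. A standalone sketch of the pattern, with a fake register standing in for the SRBM status read (names are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register: reports busy for a few polls, then idle. */
static int polls_left = 3;

static uint32_t read_busy_bits(void)
{
	return polls_left > 0 ? (polls_left--, 0x3F00u) : 0;
}

/* Same shape as r600_mc_wait_for_idle(): poll the masked status word
 * until it clears, returning -1 if the timeout expires first. */
static int wait_for_idle(unsigned usec_timeout)
{
	unsigned i;

	for (i = 0; i < usec_timeout; i++) {
		if (!read_busy_bits())
			return 0;	/* idle */
		/* udelay(1) in the real driver */
	}
	return -1;		/* timed out */
}
```

Callers treat the -1 case as a warning, not a hard failure, as r600_mc_program() does with its dev_warn().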

uint32_t rs780_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
	unsigned long flags;
	uint32_t r;

	spin_lock_irqsave(&rdev->mc_idx_lock, flags);
	WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg));
	r = RREG32(R_0028FC_MC_DATA);
	WREG32(R_0028F8_MC_INDEX, ~C_0028F8_MC_IND_ADDR);
	spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
	return r;
}

void rs780_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
	unsigned long flags;

	spin_lock_irqsave(&rdev->mc_idx_lock, flags);
	WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg) |
		S_0028F8_MC_IND_WR_EN(1));
	WREG32(R_0028FC_MC_DATA, v);
	WREG32(R_0028F8_MC_INDEX, 0x7F);
	spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}

static void r600_mc_program(struct radeon_device *rdev)
{
	struct rv515_mc_save save;
	u32 tmp;
	int i, j;

	/* Initialize HDP */
	for (i = 0, j = 0; i < 32; i++, j += 0x18) {
		WREG32((0x2c14 + j), 0x00000000);
		WREG32((0x2c18 + j), 0x00000000);
		WREG32((0x2c1c + j), 0x00000000);
		WREG32((0x2c20 + j), 0x00000000);
		WREG32((0x2c24 + j), 0x00000000);
	}
	WREG32(HDP_REG_COHERENCY_FLUSH_CNTL, 0);

	rv515_mc_stop(rdev, &save);
	if (r600_mc_wait_for_idle(rdev)) {
		dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
	}
	/* Lockout access through VGA aperture (doesn't exist before R600) */
	WREG32(VGA_HDP_CONTROL, VGA_MEMORY_DISABLE);
	/* Update configuration */
	if (rdev->flags & RADEON_IS_AGP) {
		if (rdev->mc.vram_start < rdev->mc.gtt_start) {
			/* VRAM before AGP */
			WREG32(MC_VM_SYSTEM_APERTURE_LOW_ADDR,
				rdev->mc.vram_start >> 12);
			WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
				rdev->mc.gtt_end >> 12);
		} else {
			/* VRAM after AGP */
			WREG32(MC_VM_SYSTEM_APERTURE_LOW_ADDR,
				rdev->mc.gtt_start >> 12);
			WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
				rdev->mc.vram_end >> 12);
		}
	} else {
		WREG32(MC_VM_SYSTEM_APERTURE_LOW_ADDR, rdev->mc.vram_start >> 12);
		WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR, rdev->mc.vram_end >> 12);
	}
	WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev->vram_scratch.gpu_addr >> 12);
	tmp = ((rdev->mc.vram_end >> 24) & 0xFFFF) << 16;
	tmp |= ((rdev->mc.vram_start >> 24) & 0xFFFF);
	WREG32(MC_VM_FB_LOCATION, tmp);
	WREG32(HDP_NONSURFACE_BASE, (rdev->mc.vram_start >> 8));
	WREG32(HDP_NONSURFACE_INFO, (2 << 7));
	WREG32(HDP_NONSURFACE_SIZE, 0x3FFFFFFF);
	if (rdev->flags & RADEON_IS_AGP) {
		WREG32(MC_VM_AGP_TOP, rdev->mc.gtt_end >> 22);
		WREG32(MC_VM_AGP_BOT, rdev->mc.gtt_start >> 22);
		WREG32(MC_VM_AGP_BASE, rdev->mc.agp_base >> 22);
	} else {
		WREG32(MC_VM_AGP_BASE, 0);
		WREG32(MC_VM_AGP_TOP, 0x0FFFFFFF);
		WREG32(MC_VM_AGP_BOT, 0x0FFFFFFF);
	}
	if (r600_mc_wait_for_idle(rdev)) {
		dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
	}
	rv515_mc_resume(rdev, &save);
	/* we need to own VRAM, so turn off the VGA renderer here
	 * to stop it overwriting our objects */
	rv515_vga_render_disable(rdev);
}

/**
 * r600_vram_gtt_location - try to find VRAM & GTT location
 * @rdev: radeon device structure holding all necessary information
 * @mc: memory controller structure holding memory information
 *
 * This function tries to place VRAM at the same address as in the CPU (PCI)
 * address space, as some GPUs seem to have issues when VRAM is reprogrammed
 * to a different address space.
 *
 * If there is not enough space to fit the invisible VRAM after the
 * aperture, then we limit the VRAM size to the aperture.
 *
 * If we are using AGP, VRAM is placed adjacent to the AGP aperture, as the
 * two need to be contiguous from the GPU's point of view so that we can
 * program the GPU to catch accesses outside of them.
 *
 * This function never fails; in the worst case it limits VRAM or GTT.
 *
 * Note: GTT start, end, and size should be initialized before calling this
 * function on AGP platforms.
 */
static void r600_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc)
{
	u64 size_bf, size_af;

	if (mc->mc_vram_size > 0xE0000000) {
		/* leave room for at least 512M GTT */
		dev_warn(rdev->dev, "limiting VRAM\n");
		mc->real_vram_size = 0xE0000000;
		mc->mc_vram_size = 0xE0000000;
	}
	if (rdev->flags & RADEON_IS_AGP) {
		size_bf = mc->gtt_start;
		size_af = mc->mc_mask - mc->gtt_end;
AGP:RV100,RV280,R420,RV350,RV620(RPB*),RV730
IGP:RS480(RPB*),RS690,RS780(RPB*),RS880
RPB: resume previously broken
V2 correct commit message to reflect more accurately the bug
and move VRAM placement to 0 for most of the GPU to avoid
limiting VRAM.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-02-18 05:54:29 +08:00
|
|
|
if (size_bf > size_af) {
|
|
|
|
if (mc->mc_vram_size > size_bf) {
|
|
|
|
dev_warn(rdev->dev, "limiting VRAM\n");
|
|
|
|
mc->real_vram_size = size_bf;
|
|
|
|
mc->mc_vram_size = size_bf;
|
|
|
|
}
|
|
|
|
mc->vram_start = mc->gtt_start - mc->mc_vram_size;
|
|
|
|
} else {
|
|
|
|
if (mc->mc_vram_size > size_af) {
|
|
|
|
dev_warn(rdev->dev, "limiting VRAM\n");
|
|
|
|
mc->real_vram_size = size_af;
|
|
|
|
mc->mc_vram_size = size_af;
|
|
|
|
}
|
2012-04-18 04:51:38 +08:00
|
|
|
mc->vram_start = mc->gtt_end + 1;
|
drm/radeon/kms: simplify memory controller setup V2
Get rid of _location and use _start/_end also simplify the
computation of vram_start|end & gtt_start|end. For R1XX-R2XX
we place VRAM at the same address of PCI aperture, those GPU
shouldn't have much memory and seems to behave better when
setup that way. For R3XX and newer we place VRAM at 0. For
R6XX-R7XX AGP we place VRAM before or after AGP aperture this
might limit to limit the VRAM size but it's very unlikely.
For IGP we don't change the VRAM placement.
Tested on (compiz,quake3,suspend/resume):
PCI/PCIE:RV280,R420,RV515,RV570,RV610,RV710
AGP:RV100,RV280,R420,RV350,RV620(RPB*),RV730
IGP:RS480(RPB*),RS690,RS780(RPB*),RS880
RPB: resume previously broken
V2 correct commit message to reflect more accurately the bug
and move VRAM placement to 0 for most of the GPU to avoid
limiting VRAM.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-02-18 05:54:29 +08:00
|
|
|
}
|
|
|
|
mc->vram_end = mc->vram_start + mc->mc_vram_size - 1;
|
|
|
|
dev_info(rdev->dev, "VRAM: %lluM 0x%08llX - 0x%08llX (%lluM used)\n",
|
|
|
|
mc->mc_vram_size >> 20, mc->vram_start,
|
|
|
|
mc->vram_end, mc->real_vram_size >> 20);
|
|
|
|
} else {
|
|
|
|
u64 base = 0;
|
2010-12-04 03:37:22 +08:00
|
|
|
if (rdev->flags & RADEON_IS_IGP) {
|
|
|
|
base = RREG32(MC_VM_FB_LOCATION) & 0xFFFF;
|
|
|
|
base <<= 24;
|
|
|
|
}
|
drm/radeon/kms: simplify memory controller setup V2
Get rid of _location and use _start/_end also simplify the
computation of vram_start|end & gtt_start|end. For R1XX-R2XX
we place VRAM at the same address of PCI aperture, those GPU
shouldn't have much memory and seems to behave better when
setup that way. For R3XX and newer we place VRAM at 0. For
R6XX-R7XX AGP we place VRAM before or after AGP aperture this
might limit to limit the VRAM size but it's very unlikely.
For IGP we don't change the VRAM placement.
Tested on (compiz,quake3,suspend/resume):
PCI/PCIE:RV280,R420,RV515,RV570,RV610,RV710
AGP:RV100,RV280,R420,RV350,RV620(RPB*),RV730
IGP:RS480(RPB*),RS690,RS780(RPB*),RS880
RPB: resume previously broken
V2 correct commit message to reflect more accurately the bug
and move VRAM placement to 0 for most of the GPU to avoid
limiting VRAM.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-02-18 05:54:29 +08:00
|
|
|
radeon_vram_location(rdev, &rdev->mc, base);
|
2010-07-15 22:51:10 +08:00
|
|
|
rdev->mc.gtt_base_align = 0;
|
drm/radeon/kms: simplify memory controller setup V2
Get rid of _location and use _start/_end also simplify the
computation of vram_start|end & gtt_start|end. For R1XX-R2XX
we place VRAM at the same address of PCI aperture, those GPU
shouldn't have much memory and seems to behave better when
setup that way. For R3XX and newer we place VRAM at 0. For
R6XX-R7XX AGP we place VRAM before or after AGP aperture this
might limit to limit the VRAM size but it's very unlikely.
For IGP we don't change the VRAM placement.
Tested on (compiz,quake3,suspend/resume):
PCI/PCIE:RV280,R420,RV515,RV570,RV610,RV710
AGP:RV100,RV280,R420,RV350,RV620(RPB*),RV730
IGP:RS480(RPB*),RS690,RS780(RPB*),RS880
RPB: resume previously broken
V2 correct commit message to reflect more accurately the bug
and move VRAM placement to 0 for most of the GPU to avoid
limiting VRAM.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2010-02-18 05:54:29 +08:00
|
|
|
radeon_gtt_location(rdev, mc);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
static int r600_mc_init(struct radeon_device *rdev)
{
	u32 tmp;
	int chansize, numchan;
	uint32_t h_addr, l_addr;
	unsigned long long k8_addr;

	/* Get VRAM informations */
	rdev->mc.vram_is_ddr = true;
	tmp = RREG32(RAMCFG);
	if (tmp & CHANSIZE_OVERRIDE) {
		chansize = 16;
	} else if (tmp & CHANSIZE_MASK) {
		chansize = 64;
	} else {
		chansize = 32;
	}
	tmp = RREG32(CHMAP);
	switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
	case 0:
	default:
		numchan = 1;
		break;
	case 1:
		numchan = 2;
		break;
	case 2:
		numchan = 4;
		break;
	case 3:
		numchan = 8;
		break;
	}
	rdev->mc.vram_width = numchan * chansize;
	/* Could aper size report 0 ? */
	rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0);
	rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0);
	/* Setup GPU memory space */
	rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE);
	rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE);
	rdev->mc.visible_vram_size = rdev->mc.aper_size;
	r600_vram_gtt_location(rdev, &rdev->mc);

	if (rdev->flags & RADEON_IS_IGP) {
		rs690_pm_info(rdev);
		rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);

		if (rdev->family == CHIP_RS780 || rdev->family == CHIP_RS880) {
			/* Use K8 direct mapping for fast fb access. */
			rdev->fastfb_working = false;
			h_addr = G_000012_K8_ADDR_EXT(RREG32_MC(R_000012_MC_MISC_UMA_CNTL));
			l_addr = RREG32_MC(R_000011_K8_FB_LOCATION);
			k8_addr = ((unsigned long long)h_addr) << 32 | l_addr;
#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
			if (k8_addr + rdev->mc.visible_vram_size < 0x100000000ULL)
#endif
			{
				/* FastFB shall be used with UMA memory. Here it is simply disabled when sideport
				 * memory is present.
				 */
				if (rdev->mc.igp_sideport_enabled == false && radeon_fastfb == 1) {
					DRM_INFO("Direct mapping: aper base at 0x%llx, replaced by direct mapping base 0x%llx.\n",
						(unsigned long long)rdev->mc.aper_base, k8_addr);
					rdev->mc.aper_base = (resource_size_t)k8_addr;
					rdev->fastfb_working = true;
				}
			}
		}
	}

	radeon_update_bandwidth_info(rdev);
	return 0;
}
int r600_vram_scratch_init(struct radeon_device *rdev)
{
	int r;

	if (rdev->vram_scratch.robj == NULL) {
		r = radeon_bo_create(rdev, RADEON_GPU_PAGE_SIZE,
				     PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
				     NULL, &rdev->vram_scratch.robj);
		if (r) {
			return r;
		}
	}

	r = radeon_bo_reserve(rdev->vram_scratch.robj, false);
	if (unlikely(r != 0))
		return r;
	r = radeon_bo_pin(rdev->vram_scratch.robj,
			  RADEON_GEM_DOMAIN_VRAM, &rdev->vram_scratch.gpu_addr);
	if (r) {
		radeon_bo_unreserve(rdev->vram_scratch.robj);
		return r;
	}
	r = radeon_bo_kmap(rdev->vram_scratch.robj,
			   (void **)&rdev->vram_scratch.ptr);
	if (r)
		radeon_bo_unpin(rdev->vram_scratch.robj);
	radeon_bo_unreserve(rdev->vram_scratch.robj);

	return r;
}
void r600_vram_scratch_fini(struct radeon_device *rdev)
{
	int r;

	if (rdev->vram_scratch.robj == NULL) {
		return;
	}
	r = radeon_bo_reserve(rdev->vram_scratch.robj, false);
	if (likely(r == 0)) {
		radeon_bo_kunmap(rdev->vram_scratch.robj);
		radeon_bo_unpin(rdev->vram_scratch.robj);
		radeon_bo_unreserve(rdev->vram_scratch.robj);
	}
	radeon_bo_unref(&rdev->vram_scratch.robj);
}
void r600_set_bios_scratch_engine_hung(struct radeon_device *rdev, bool hung)
{
	u32 tmp = RREG32(R600_BIOS_3_SCRATCH);

	if (hung)
		tmp |= ATOM_S3_ASIC_GUI_ENGINE_HUNG;
	else
		tmp &= ~ATOM_S3_ASIC_GUI_ENGINE_HUNG;

	WREG32(R600_BIOS_3_SCRATCH, tmp);
}
static void r600_print_gpu_status_regs(struct radeon_device *rdev)
{
	dev_info(rdev->dev, "  R_008010_GRBM_STATUS      = 0x%08X\n",
		 RREG32(R_008010_GRBM_STATUS));
	dev_info(rdev->dev, "  R_008014_GRBM_STATUS2     = 0x%08X\n",
		 RREG32(R_008014_GRBM_STATUS2));
	dev_info(rdev->dev, "  R_000E50_SRBM_STATUS      = 0x%08X\n",
		 RREG32(R_000E50_SRBM_STATUS));
	dev_info(rdev->dev, "  R_008674_CP_STALLED_STAT1 = 0x%08X\n",
		 RREG32(CP_STALLED_STAT1));
	dev_info(rdev->dev, "  R_008678_CP_STALLED_STAT2 = 0x%08X\n",
		 RREG32(CP_STALLED_STAT2));
	dev_info(rdev->dev, "  R_00867C_CP_BUSY_STAT     = 0x%08X\n",
		 RREG32(CP_BUSY_STAT));
	dev_info(rdev->dev, "  R_008680_CP_STAT          = 0x%08X\n",
		 RREG32(CP_STAT));
	dev_info(rdev->dev, "  R_00D034_DMA_STATUS_REG   = 0x%08X\n",
		 RREG32(DMA_STATUS_REG));
}
static bool r600_is_display_hung(struct radeon_device *rdev)
{
	u32 crtc_hung = 0;
	u32 crtc_status[2];
	u32 i, j, tmp;

	for (i = 0; i < rdev->num_crtc; i++) {
		if (RREG32(AVIVO_D1CRTC_CONTROL + crtc_offsets[i]) & AVIVO_CRTC_EN) {
			crtc_status[i] = RREG32(AVIVO_D1CRTC_STATUS_HV_COUNT + crtc_offsets[i]);
			crtc_hung |= (1 << i);
		}
	}

	for (j = 0; j < 10; j++) {
		for (i = 0; i < rdev->num_crtc; i++) {
			if (crtc_hung & (1 << i)) {
				tmp = RREG32(AVIVO_D1CRTC_STATUS_HV_COUNT + crtc_offsets[i]);
				if (tmp != crtc_status[i])
					crtc_hung &= ~(1 << i);
			}
		}
		if (crtc_hung == 0)
			return false;
		udelay(100);
	}

	return true;
}
u32 r600_gpu_check_soft_reset(struct radeon_device *rdev)
{
	u32 reset_mask = 0;
	u32 tmp;

	/* GRBM_STATUS */
	tmp = RREG32(R_008010_GRBM_STATUS);
	if (rdev->family >= CHIP_RV770) {
		if (G_008010_PA_BUSY(tmp) | G_008010_SC_BUSY(tmp) |
		    G_008010_SH_BUSY(tmp) | G_008010_SX_BUSY(tmp) |
		    G_008010_TA_BUSY(tmp) | G_008010_VGT_BUSY(tmp) |
		    G_008010_DB03_BUSY(tmp) | G_008010_CB03_BUSY(tmp) |
		    G_008010_SPI03_BUSY(tmp) | G_008010_VGT_BUSY_NO_DMA(tmp))
			reset_mask |= RADEON_RESET_GFX;
	} else {
		if (G_008010_PA_BUSY(tmp) | G_008010_SC_BUSY(tmp) |
		    G_008010_SH_BUSY(tmp) | G_008010_SX_BUSY(tmp) |
		    G_008010_TA03_BUSY(tmp) | G_008010_VGT_BUSY(tmp) |
		    G_008010_DB03_BUSY(tmp) | G_008010_CB03_BUSY(tmp) |
		    G_008010_SPI03_BUSY(tmp) | G_008010_VGT_BUSY_NO_DMA(tmp))
			reset_mask |= RADEON_RESET_GFX;
	}

	if (G_008010_CF_RQ_PENDING(tmp) | G_008010_PF_RQ_PENDING(tmp) |
	    G_008010_CP_BUSY(tmp) | G_008010_CP_COHERENCY_BUSY(tmp))
		reset_mask |= RADEON_RESET_CP;

	if (G_008010_GRBM_EE_BUSY(tmp))
		reset_mask |= RADEON_RESET_GRBM | RADEON_RESET_GFX | RADEON_RESET_CP;

	/* DMA_STATUS_REG */
	tmp = RREG32(DMA_STATUS_REG);
	if (!(tmp & DMA_IDLE))
		reset_mask |= RADEON_RESET_DMA;

	/* SRBM_STATUS */
	tmp = RREG32(R_000E50_SRBM_STATUS);
	if (G_000E50_RLC_RQ_PENDING(tmp) | G_000E50_RLC_BUSY(tmp))
		reset_mask |= RADEON_RESET_RLC;

	if (G_000E50_IH_BUSY(tmp))
		reset_mask |= RADEON_RESET_IH;

	if (G_000E50_SEM_BUSY(tmp))
		reset_mask |= RADEON_RESET_SEM;

	if (G_000E50_GRBM_RQ_PENDING(tmp))
		reset_mask |= RADEON_RESET_GRBM;

	if (G_000E50_VMC_BUSY(tmp))
		reset_mask |= RADEON_RESET_VMC;

	if (G_000E50_MCB_BUSY(tmp) | G_000E50_MCDZ_BUSY(tmp) |
	    G_000E50_MCDY_BUSY(tmp) | G_000E50_MCDX_BUSY(tmp) |
	    G_000E50_MCDW_BUSY(tmp))
		reset_mask |= RADEON_RESET_MC;

	if (r600_is_display_hung(rdev))
		reset_mask |= RADEON_RESET_DISPLAY;

	/* Skip MC reset as it's most likely not hung, just busy */
	if (reset_mask & RADEON_RESET_MC) {
		DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask);
		reset_mask &= ~RADEON_RESET_MC;
	}

	return reset_mask;
}
static void r600_gpu_soft_reset(struct radeon_device *rdev, u32 reset_mask)
{
	struct rv515_mc_save save;
	u32 grbm_soft_reset = 0, srbm_soft_reset = 0;
	u32 tmp;

	if (reset_mask == 0)
		return;

	dev_info(rdev->dev, "GPU softreset: 0x%08X\n", reset_mask);

	r600_print_gpu_status_regs(rdev);

	/* Disable CP parsing/prefetching */
	if (rdev->family >= CHIP_RV770)
		WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1) | S_0086D8_CP_PFP_HALT(1));
	else
		WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1));

	/* disable the RLC */
	WREG32(RLC_CNTL, 0);

	if (reset_mask & RADEON_RESET_DMA) {
		/* Disable DMA */
		tmp = RREG32(DMA_RB_CNTL);
		tmp &= ~DMA_RB_ENABLE;
		WREG32(DMA_RB_CNTL, tmp);
	}

	mdelay(50);

	rv515_mc_stop(rdev, &save);
	if (r600_mc_wait_for_idle(rdev)) {
		dev_warn(rdev->dev, "Wait for MC idle timed out!\n");
	}

	if (reset_mask & (RADEON_RESET_GFX | RADEON_RESET_COMPUTE)) {
		if (rdev->family >= CHIP_RV770)
			grbm_soft_reset |= S_008020_SOFT_RESET_DB(1) |
				S_008020_SOFT_RESET_CB(1) |
				S_008020_SOFT_RESET_PA(1) |
				S_008020_SOFT_RESET_SC(1) |
				S_008020_SOFT_RESET_SPI(1) |
				S_008020_SOFT_RESET_SX(1) |
				S_008020_SOFT_RESET_SH(1) |
				S_008020_SOFT_RESET_TC(1) |
				S_008020_SOFT_RESET_TA(1) |
				S_008020_SOFT_RESET_VC(1) |
				S_008020_SOFT_RESET_VGT(1);
		else
			grbm_soft_reset |= S_008020_SOFT_RESET_CR(1) |
				S_008020_SOFT_RESET_DB(1) |
				S_008020_SOFT_RESET_CB(1) |
				S_008020_SOFT_RESET_PA(1) |
				S_008020_SOFT_RESET_SC(1) |
				S_008020_SOFT_RESET_SMX(1) |
				S_008020_SOFT_RESET_SPI(1) |
				S_008020_SOFT_RESET_SX(1) |
				S_008020_SOFT_RESET_SH(1) |
				S_008020_SOFT_RESET_TC(1) |
				S_008020_SOFT_RESET_TA(1) |
				S_008020_SOFT_RESET_VC(1) |
				S_008020_SOFT_RESET_VGT(1);
	}

	if (reset_mask & RADEON_RESET_CP) {
		grbm_soft_reset |= S_008020_SOFT_RESET_CP(1) |
			S_008020_SOFT_RESET_VGT(1);

		srbm_soft_reset |= S_000E60_SOFT_RESET_GRBM(1);
	}

	if (reset_mask & RADEON_RESET_DMA) {
		if (rdev->family >= CHIP_RV770)
			srbm_soft_reset |= RV770_SOFT_RESET_DMA;
		else
			srbm_soft_reset |= SOFT_RESET_DMA;
	}

	if (reset_mask & RADEON_RESET_RLC)
		srbm_soft_reset |= S_000E60_SOFT_RESET_RLC(1);

	if (reset_mask & RADEON_RESET_SEM)
		srbm_soft_reset |= S_000E60_SOFT_RESET_SEM(1);

	if (reset_mask & RADEON_RESET_IH)
		srbm_soft_reset |= S_000E60_SOFT_RESET_IH(1);

	if (reset_mask & RADEON_RESET_GRBM)
		srbm_soft_reset |= S_000E60_SOFT_RESET_GRBM(1);

	if (!(rdev->flags & RADEON_IS_IGP)) {
		if (reset_mask & RADEON_RESET_MC)
			srbm_soft_reset |= S_000E60_SOFT_RESET_MC(1);
	}

	if (reset_mask & RADEON_RESET_VMC)
		srbm_soft_reset |= S_000E60_SOFT_RESET_VMC(1);

	if (grbm_soft_reset) {
		tmp = RREG32(R_008020_GRBM_SOFT_RESET);
		tmp |= grbm_soft_reset;
		dev_info(rdev->dev, "R_008020_GRBM_SOFT_RESET=0x%08X\n", tmp);
		WREG32(R_008020_GRBM_SOFT_RESET, tmp);
		tmp = RREG32(R_008020_GRBM_SOFT_RESET);

		udelay(50);

		tmp &= ~grbm_soft_reset;
		WREG32(R_008020_GRBM_SOFT_RESET, tmp);
		tmp = RREG32(R_008020_GRBM_SOFT_RESET);
	}

	if (srbm_soft_reset) {
		tmp = RREG32(SRBM_SOFT_RESET);
		tmp |= srbm_soft_reset;
		dev_info(rdev->dev, "SRBM_SOFT_RESET=0x%08X\n", tmp);
		WREG32(SRBM_SOFT_RESET, tmp);
		tmp = RREG32(SRBM_SOFT_RESET);

		udelay(50);

		tmp &= ~srbm_soft_reset;
		WREG32(SRBM_SOFT_RESET, tmp);
		tmp = RREG32(SRBM_SOFT_RESET);
	}

	/* Wait a little for things to settle down */
	mdelay(1);

	rv515_mc_resume(rdev, &save);
	udelay(50);

	r600_print_gpu_status_regs(rdev);
}
static void r600_gpu_pci_config_reset(struct radeon_device *rdev)
{
	struct rv515_mc_save save;
	u32 tmp, i;

	dev_info(rdev->dev, "GPU pci config reset\n");

	/* disable dpm? */

	/* Disable CP parsing/prefetching */
	if (rdev->family >= CHIP_RV770)
		WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1) | S_0086D8_CP_PFP_HALT(1));
	else
		WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1));

	/* disable the RLC */
	WREG32(RLC_CNTL, 0);

	/* Disable DMA */
	tmp = RREG32(DMA_RB_CNTL);
	tmp &= ~DMA_RB_ENABLE;
	WREG32(DMA_RB_CNTL, tmp);

	mdelay(50);

	/* set mclk/sclk to bypass */
	if (rdev->family >= CHIP_RV770)
		rv770_set_clk_bypass_mode(rdev);
	/* disable BM */
	pci_clear_master(rdev->pdev);
	/* disable mem access */
	rv515_mc_stop(rdev, &save);
	if (r600_mc_wait_for_idle(rdev)) {
		dev_warn(rdev->dev, "Wait for MC idle timed out!\n");
	}

	/* BIF reset workaround.  Not sure if this is needed on 6xx */
	tmp = RREG32(BUS_CNTL);
	tmp |= VGA_COHE_SPEC_TIMER_DIS;
	WREG32(BUS_CNTL, tmp);

	tmp = RREG32(BIF_SCRATCH0);

	/* reset */
	radeon_pci_config_reset(rdev);
	mdelay(1);

	/* BIF reset workaround.  Not sure if this is needed on 6xx */
	tmp = SOFT_RESET_BIF;
	WREG32(SRBM_SOFT_RESET, tmp);
	mdelay(1);
	WREG32(SRBM_SOFT_RESET, 0);

	/* wait for asic to come out of reset */
	for (i = 0; i < rdev->usec_timeout; i++) {
		if (RREG32(CONFIG_MEMSIZE) != 0xffffffff)
			break;
		udelay(1);
	}
}
int r600_asic_reset(struct radeon_device *rdev)
{
	u32 reset_mask;

	reset_mask = r600_gpu_check_soft_reset(rdev);

	if (reset_mask)
		r600_set_bios_scratch_engine_hung(rdev, true);

	/* try soft reset */
	r600_gpu_soft_reset(rdev, reset_mask);

	reset_mask = r600_gpu_check_soft_reset(rdev);

	/* try pci config reset */
	if (reset_mask && radeon_hard_reset)
		r600_gpu_pci_config_reset(rdev);

	reset_mask = r600_gpu_check_soft_reset(rdev);

	if (!reset_mask)
		r600_set_bios_scratch_engine_hung(rdev, false);

	return 0;
}
/**
 * r600_gfx_is_lockup - Check if the GFX engine is locked up
 *
 * @rdev: radeon_device pointer
 * @ring: radeon_ring structure holding ring information
 *
 * Check if the GFX engine is locked up.
 * Returns true if the engine appears to be locked up, false if not.
 */
bool r600_gfx_is_lockup(struct radeon_device *rdev, struct radeon_ring *ring)
{
	u32 reset_mask = r600_gpu_check_soft_reset(rdev);

	if (!(reset_mask & (RADEON_RESET_GFX |
			    RADEON_RESET_COMPUTE |
			    RADEON_RESET_CP))) {
		radeon_ring_lockup_update(rdev, ring);
		return false;
	}
	return radeon_ring_test_lockup(rdev, ring);
}
u32 r6xx_remap_render_backend(struct radeon_device *rdev,
			      u32 tiling_pipe_num,
			      u32 max_rb_num,
			      u32 total_max_rb_num,
			      u32 disabled_rb_mask)
{
	u32 rendering_pipe_num, rb_num_width, req_rb_num;
	u32 pipe_rb_ratio, pipe_rb_remain, tmp;
	u32 data = 0, mask = 1 << (max_rb_num - 1);
	unsigned i, j;

	/* mask out the RBs that don't exist on that asic */
	tmp = disabled_rb_mask | ((0xff << max_rb_num) & 0xff);
	/* make sure at least one RB is available */
	if ((tmp & 0xff) != 0xff)
		disabled_rb_mask = tmp;

	rendering_pipe_num = 1 << tiling_pipe_num;
	req_rb_num = total_max_rb_num - r600_count_pipe_bits(disabled_rb_mask);
	BUG_ON(rendering_pipe_num < req_rb_num);

	pipe_rb_ratio = rendering_pipe_num / req_rb_num;
	pipe_rb_remain = rendering_pipe_num - pipe_rb_ratio * req_rb_num;

	if (rdev->family <= CHIP_RV740) {
		/* r6xx/r7xx */
		rb_num_width = 2;
	} else {
		/* eg+ */
		rb_num_width = 4;
	}

	for (i = 0; i < max_rb_num; i++) {
		if (!(mask & disabled_rb_mask)) {
			for (j = 0; j < pipe_rb_ratio; j++) {
				data <<= rb_num_width;
				data |= max_rb_num - i - 1;
			}
			if (pipe_rb_remain) {
				data <<= rb_num_width;
				data |= max_rb_num - i - 1;
				pipe_rb_remain--;
			}
		}
		mask >>= 1;
	}

	return data;
}
int r600_count_pipe_bits(uint32_t val)
{
	return hweight32(val);
}
static void r600_gpu_init(struct radeon_device *rdev)
|
2009-09-08 08:10:24 +08:00
|
|
|
{
|
|
|
|
u32 tiling_config;
|
|
|
|
u32 ramcfg;
|
2010-02-20 05:22:31 +08:00
|
|
|
u32 cc_rb_backend_disable;
|
|
|
|
u32 cc_gc_shader_pipe_config;
|
2009-09-08 08:10:24 +08:00
|
|
|
u32 tmp;
|
|
|
|
int i, j;
|
|
|
|
u32 sq_config;
|
|
|
|
u32 sq_gpr_resource_mgmt_1 = 0;
|
|
|
|
u32 sq_gpr_resource_mgmt_2 = 0;
|
|
|
|
u32 sq_thread_resource_mgmt = 0;
|
|
|
|
u32 sq_stack_resource_mgmt_1 = 0;
|
|
|
|
u32 sq_stack_resource_mgmt_2 = 0;
|
2012-06-01 07:00:25 +08:00
|
|
|
u32 disabled_rb_mask;
|
2009-09-08 08:10:24 +08:00
|
|
|
|
2012-06-01 07:00:25 +08:00
|
|
|
rdev->config.r600.tiling_group_size = 256;
|
2009-09-08 08:10:24 +08:00
|
|
|
switch (rdev->family) {
|
|
|
|
case CHIP_R600:
|
|
|
|
rdev->config.r600.max_pipes = 4;
|
|
|
|
rdev->config.r600.max_tile_pipes = 8;
|
|
|
|
rdev->config.r600.max_simds = 4;
|
|
|
|
rdev->config.r600.max_backends = 4;
|
|
|
|
rdev->config.r600.max_gprs = 256;
|
|
|
|
rdev->config.r600.max_threads = 192;
|
|
|
|
rdev->config.r600.max_stack_entries = 256;
|
|
|
|
rdev->config.r600.max_hw_contexts = 8;
|
|
|
|
rdev->config.r600.max_gs_threads = 16;
|
|
|
|
rdev->config.r600.sx_max_export_size = 128;
|
|
|
|
rdev->config.r600.sx_max_export_pos_size = 16;
|
|
|
|
rdev->config.r600.sx_max_export_smx_size = 128;
|
|
|
|
rdev->config.r600.sq_num_cf_insts = 2;
|
|
|
|
break;
|
|
|
|
case CHIP_RV630:
|
|
|
|
case CHIP_RV635:
|
|
|
|
rdev->config.r600.max_pipes = 2;
|
|
|
|
rdev->config.r600.max_tile_pipes = 2;
|
|
|
|
rdev->config.r600.max_simds = 3;
|
|
|
|
rdev->config.r600.max_backends = 1;
|
|
|
|
rdev->config.r600.max_gprs = 128;
|
|
|
|
rdev->config.r600.max_threads = 192;
|
|
|
|
rdev->config.r600.max_stack_entries = 128;
|
|
|
|
rdev->config.r600.max_hw_contexts = 8;
|
|
|
|
rdev->config.r600.max_gs_threads = 4;
|
|
|
|
rdev->config.r600.sx_max_export_size = 128;
|
|
|
|
rdev->config.r600.sx_max_export_pos_size = 16;
|
|
|
|
rdev->config.r600.sx_max_export_smx_size = 128;
|
|
|
|
rdev->config.r600.sq_num_cf_insts = 2;
|
|
|
|
break;
|
|
|
|
case CHIP_RV610:
|
|
|
|
case CHIP_RV620:
|
|
|
|
case CHIP_RS780:
|
|
|
|
case CHIP_RS880:
|
|
|
|
rdev->config.r600.max_pipes = 1;
|
|
|
|
rdev->config.r600.max_tile_pipes = 1;
|
|
|
|
rdev->config.r600.max_simds = 2;
|
|
|
|
rdev->config.r600.max_backends = 1;
|
|
|
|
rdev->config.r600.max_gprs = 128;
|
|
|
|
rdev->config.r600.max_threads = 192;
|
|
|
|
rdev->config.r600.max_stack_entries = 128;
|
|
|
|
rdev->config.r600.max_hw_contexts = 4;
|
|
|
|
rdev->config.r600.max_gs_threads = 4;
|
|
|
|
rdev->config.r600.sx_max_export_size = 128;
|
|
|
|
rdev->config.r600.sx_max_export_pos_size = 16;
|
|
|
|
rdev->config.r600.sx_max_export_smx_size = 128;
|
|
|
|
rdev->config.r600.sq_num_cf_insts = 1;
|
|
|
|
break;
|
|
|
|
case CHIP_RV670:
|
|
|
|
rdev->config.r600.max_pipes = 4;
|
|
|
|
rdev->config.r600.max_tile_pipes = 4;
|
|
|
|
rdev->config.r600.max_simds = 4;
|
|
|
|
rdev->config.r600.max_backends = 4;
|
|
|
|
rdev->config.r600.max_gprs = 192;
|
|
|
|
rdev->config.r600.max_threads = 192;
|
|
|
|
rdev->config.r600.max_stack_entries = 256;
|
|
|
|
rdev->config.r600.max_hw_contexts = 8;
|
|
|
|
rdev->config.r600.max_gs_threads = 16;
|
|
|
|
rdev->config.r600.sx_max_export_size = 128;
|
|
|
|
rdev->config.r600.sx_max_export_pos_size = 16;
|
|
|
|
rdev->config.r600.sx_max_export_smx_size = 128;
|
|
|
|
rdev->config.r600.sq_num_cf_insts = 2;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}

	/* Initialize HDP */
	for (i = 0, j = 0; i < 32; i++, j += 0x18) {
		WREG32((0x2c14 + j), 0x00000000);
		WREG32((0x2c18 + j), 0x00000000);
		WREG32((0x2c1c + j), 0x00000000);
		WREG32((0x2c20 + j), 0x00000000);
		WREG32((0x2c24 + j), 0x00000000);
	}

	WREG32(GRBM_CNTL, GRBM_READ_TIMEOUT(0xff));

	/* Setup tiling */
	tiling_config = 0;
	ramcfg = RREG32(RAMCFG);
	switch (rdev->config.r600.max_tile_pipes) {
	case 1:
		tiling_config |= PIPE_TILING(0);
		break;
	case 2:
		tiling_config |= PIPE_TILING(1);
		break;
	case 4:
		tiling_config |= PIPE_TILING(2);
		break;
	case 8:
		tiling_config |= PIPE_TILING(3);
		break;
	default:
		break;
	}
	rdev->config.r600.tiling_npipes = rdev->config.r600.max_tile_pipes;
	rdev->config.r600.tiling_nbanks = 4 << ((ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT);
	tiling_config |= BANK_TILING((ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT);
	tiling_config |= GROUP_SIZE((ramcfg & BURSTLENGTH_MASK) >> BURSTLENGTH_SHIFT);

	tmp = (ramcfg & NOOFROWS_MASK) >> NOOFROWS_SHIFT;
	if (tmp > 3) {
		tiling_config |= ROW_TILING(3);
		tiling_config |= SAMPLE_SPLIT(3);
	} else {
		tiling_config |= ROW_TILING(tmp);
		tiling_config |= SAMPLE_SPLIT(tmp);
	}
	tiling_config |= BANK_SWAPS(1);

	cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE) & 0x00ff0000;
	tmp = R6XX_MAX_BACKENDS -
		r600_count_pipe_bits((cc_rb_backend_disable >> 16) & R6XX_MAX_BACKENDS_MASK);
	if (tmp < rdev->config.r600.max_backends) {
		rdev->config.r600.max_backends = tmp;
	}

	cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & 0x00ffff00;
	tmp = R6XX_MAX_PIPES -
		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 8) & R6XX_MAX_PIPES_MASK);
	if (tmp < rdev->config.r600.max_pipes) {
		rdev->config.r600.max_pipes = tmp;
	}
	tmp = R6XX_MAX_SIMDS -
		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
	if (tmp < rdev->config.r600.max_simds) {
		rdev->config.r600.max_simds = tmp;
	}
	tmp = rdev->config.r600.max_simds -
		r600_count_pipe_bits((cc_gc_shader_pipe_config >> 16) & R6XX_MAX_SIMDS_MASK);
	rdev->config.r600.active_simds = tmp;

	disabled_rb_mask = (RREG32(CC_RB_BACKEND_DISABLE) >> 16) & R6XX_MAX_BACKENDS_MASK;
	tmp = (tiling_config & PIPE_TILING__MASK) >> PIPE_TILING__SHIFT;
	tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.r600.max_backends,
					R6XX_MAX_BACKENDS, disabled_rb_mask);
	tiling_config |= tmp << 16;
	rdev->config.r600.backend_map = tmp;

	rdev->config.r600.tile_config = tiling_config;
	WREG32(GB_TILING_CONFIG, tiling_config);
	WREG32(DCP_TILING_CONFIG, tiling_config & 0xffff);
	WREG32(HDP_TILING_CONFIG, tiling_config & 0xffff);
	WREG32(DMA_TILING_CONFIG, tiling_config & 0xffff);

	tmp = R6XX_MAX_PIPES - r600_count_pipe_bits((cc_gc_shader_pipe_config & INACTIVE_QD_PIPES_MASK) >> 8);
	WREG32(VGT_OUT_DEALLOC_CNTL, (tmp * 4) & DEALLOC_DIST_MASK);
	WREG32(VGT_VERTEX_REUSE_BLOCK_CNTL, ((tmp * 4) - 2) & VTX_REUSE_DEPTH_MASK);

	/* Setup some CP states */
	WREG32(CP_QUEUE_THRESHOLDS, (ROQ_IB1_START(0x16) | ROQ_IB2_START(0x2b)));
	WREG32(CP_MEQ_THRESHOLDS, (MEQ_END(0x40) | ROQ_END(0x40)));

	WREG32(TA_CNTL_AUX, (DISABLE_CUBE_ANISO | SYNC_GRADIENT |
			     SYNC_WALKER | SYNC_ALIGNER));
	/* Setup various GPU states */
	if (rdev->family == CHIP_RV670)
		WREG32(ARB_GDEC_RD_CNTL, 0x00000021);

	tmp = RREG32(SX_DEBUG_1);
	tmp |= SMX_EVENT_RELEASE;
	if ((rdev->family > CHIP_R600))
		tmp |= ENABLE_NEW_SMX_ADDRESS;
	WREG32(SX_DEBUG_1, tmp);

	if (((rdev->family) == CHIP_R600) ||
	    ((rdev->family) == CHIP_RV630) ||
	    ((rdev->family) == CHIP_RV610) ||
	    ((rdev->family) == CHIP_RV620) ||
	    ((rdev->family) == CHIP_RS780) ||
	    ((rdev->family) == CHIP_RS880)) {
		WREG32(DB_DEBUG, PREZ_MUST_WAIT_FOR_POSTZ_DONE);
	} else {
		WREG32(DB_DEBUG, 0);
	}
	WREG32(DB_WATERMARKS, (DEPTH_FREE(4) | DEPTH_CACHELINE_FREE(16) |
			       DEPTH_FLUSH(16) | DEPTH_PENDING_FREE(4)));

	WREG32(PA_SC_MULTI_CHIP_CNTL, 0);
	WREG32(VGT_NUM_INSTANCES, 0);

	WREG32(SPI_CONFIG_CNTL, GPR_WRITE_PRIORITY(0));
	WREG32(SPI_CONFIG_CNTL_1, VTX_DONE_DELAY(0));

	tmp = RREG32(SQ_MS_FIFO_SIZES);
	if (((rdev->family) == CHIP_RV610) ||
	    ((rdev->family) == CHIP_RV620) ||
	    ((rdev->family) == CHIP_RS780) ||
	    ((rdev->family) == CHIP_RS880)) {
		tmp = (CACHE_FIFO_SIZE(0xa) |
		       FETCH_FIFO_HIWATER(0xa) |
		       DONE_FIFO_HIWATER(0xe0) |
		       ALU_UPDATE_FIFO_HIWATER(0x8));
	} else if (((rdev->family) == CHIP_R600) ||
		   ((rdev->family) == CHIP_RV630)) {
		tmp &= ~DONE_FIFO_HIWATER(0xff);
		tmp |= DONE_FIFO_HIWATER(0x4);
	}
	WREG32(SQ_MS_FIFO_SIZES, tmp);

	/* SQ_CONFIG, SQ_GPR_RESOURCE_MGMT, SQ_THREAD_RESOURCE_MGMT, SQ_STACK_RESOURCE_MGMT
	 * should be adjusted as needed by the 2D/3D drivers. This just sets default values
	 */
	sq_config = RREG32(SQ_CONFIG);
	sq_config &= ~(PS_PRIO(3) |
		       VS_PRIO(3) |
		       GS_PRIO(3) |
		       ES_PRIO(3));
	sq_config |= (DX9_CONSTS |
		      VC_ENABLE |
		      PS_PRIO(0) |
		      VS_PRIO(1) |
		      GS_PRIO(2) |
		      ES_PRIO(3));

	if ((rdev->family) == CHIP_R600) {
		sq_gpr_resource_mgmt_1 = (NUM_PS_GPRS(124) |
					  NUM_VS_GPRS(124) |
					  NUM_CLAUSE_TEMP_GPRS(4));
		sq_gpr_resource_mgmt_2 = (NUM_GS_GPRS(0) |
					  NUM_ES_GPRS(0));
		sq_thread_resource_mgmt = (NUM_PS_THREADS(136) |
					   NUM_VS_THREADS(48) |
					   NUM_GS_THREADS(4) |
					   NUM_ES_THREADS(4));
		sq_stack_resource_mgmt_1 = (NUM_PS_STACK_ENTRIES(128) |
					    NUM_VS_STACK_ENTRIES(128));
		sq_stack_resource_mgmt_2 = (NUM_GS_STACK_ENTRIES(0) |
					    NUM_ES_STACK_ENTRIES(0));
	} else if (((rdev->family) == CHIP_RV610) ||
		   ((rdev->family) == CHIP_RV620) ||
		   ((rdev->family) == CHIP_RS780) ||
		   ((rdev->family) == CHIP_RS880)) {
		/* no vertex cache */
		sq_config &= ~VC_ENABLE;

		sq_gpr_resource_mgmt_1 = (NUM_PS_GPRS(44) |
					  NUM_VS_GPRS(44) |
					  NUM_CLAUSE_TEMP_GPRS(2));
		sq_gpr_resource_mgmt_2 = (NUM_GS_GPRS(17) |
					  NUM_ES_GPRS(17));
		sq_thread_resource_mgmt = (NUM_PS_THREADS(79) |
					   NUM_VS_THREADS(78) |
					   NUM_GS_THREADS(4) |
					   NUM_ES_THREADS(31));
		sq_stack_resource_mgmt_1 = (NUM_PS_STACK_ENTRIES(40) |
					    NUM_VS_STACK_ENTRIES(40));
		sq_stack_resource_mgmt_2 = (NUM_GS_STACK_ENTRIES(32) |
					    NUM_ES_STACK_ENTRIES(16));
	} else if (((rdev->family) == CHIP_RV630) ||
		   ((rdev->family) == CHIP_RV635)) {
		sq_gpr_resource_mgmt_1 = (NUM_PS_GPRS(44) |
					  NUM_VS_GPRS(44) |
					  NUM_CLAUSE_TEMP_GPRS(2));
		sq_gpr_resource_mgmt_2 = (NUM_GS_GPRS(18) |
					  NUM_ES_GPRS(18));
		sq_thread_resource_mgmt = (NUM_PS_THREADS(79) |
					   NUM_VS_THREADS(78) |
					   NUM_GS_THREADS(4) |
					   NUM_ES_THREADS(31));
		sq_stack_resource_mgmt_1 = (NUM_PS_STACK_ENTRIES(40) |
					    NUM_VS_STACK_ENTRIES(40));
		sq_stack_resource_mgmt_2 = (NUM_GS_STACK_ENTRIES(32) |
					    NUM_ES_STACK_ENTRIES(16));
	} else if ((rdev->family) == CHIP_RV670) {
		sq_gpr_resource_mgmt_1 = (NUM_PS_GPRS(44) |
					  NUM_VS_GPRS(44) |
					  NUM_CLAUSE_TEMP_GPRS(2));
		sq_gpr_resource_mgmt_2 = (NUM_GS_GPRS(17) |
					  NUM_ES_GPRS(17));
		sq_thread_resource_mgmt = (NUM_PS_THREADS(79) |
					   NUM_VS_THREADS(78) |
					   NUM_GS_THREADS(4) |
					   NUM_ES_THREADS(31));
		sq_stack_resource_mgmt_1 = (NUM_PS_STACK_ENTRIES(64) |
					    NUM_VS_STACK_ENTRIES(64));
		sq_stack_resource_mgmt_2 = (NUM_GS_STACK_ENTRIES(64) |
					    NUM_ES_STACK_ENTRIES(64));
	}

	WREG32(SQ_CONFIG, sq_config);
	WREG32(SQ_GPR_RESOURCE_MGMT_1, sq_gpr_resource_mgmt_1);
	WREG32(SQ_GPR_RESOURCE_MGMT_2, sq_gpr_resource_mgmt_2);
	WREG32(SQ_THREAD_RESOURCE_MGMT, sq_thread_resource_mgmt);
	WREG32(SQ_STACK_RESOURCE_MGMT_1, sq_stack_resource_mgmt_1);
	WREG32(SQ_STACK_RESOURCE_MGMT_2, sq_stack_resource_mgmt_2);

	if (((rdev->family) == CHIP_RV610) ||
	    ((rdev->family) == CHIP_RV620) ||
	    ((rdev->family) == CHIP_RS780) ||
	    ((rdev->family) == CHIP_RS880)) {
		WREG32(VGT_CACHE_INVALIDATION, CACHE_INVALIDATION(TC_ONLY));
	} else {
		WREG32(VGT_CACHE_INVALIDATION, CACHE_INVALIDATION(VC_AND_TC));
	}

	/* More default values. 2D/3D driver should adjust as needed */
	WREG32(PA_SC_AA_SAMPLE_LOCS_2S, (S0_X(0xc) | S0_Y(0x4) |
					 S1_X(0x4) | S1_Y(0xc)));
	WREG32(PA_SC_AA_SAMPLE_LOCS_4S, (S0_X(0xe) | S0_Y(0xe) |
					 S1_X(0x2) | S1_Y(0x2) |
					 S2_X(0xa) | S2_Y(0x6) |
					 S3_X(0x6) | S3_Y(0xa)));
	WREG32(PA_SC_AA_SAMPLE_LOCS_8S_WD0, (S0_X(0xe) | S0_Y(0xb) |
					     S1_X(0x4) | S1_Y(0xc) |
					     S2_X(0x1) | S2_Y(0x6) |
					     S3_X(0xa) | S3_Y(0xe)));
	WREG32(PA_SC_AA_SAMPLE_LOCS_8S_WD1, (S4_X(0x6) | S4_Y(0x1) |
					     S5_X(0x0) | S5_Y(0x0) |
					     S6_X(0xb) | S6_Y(0x4) |
					     S7_X(0x7) | S7_Y(0x8)));

	WREG32(VGT_STRMOUT_EN, 0);
	tmp = rdev->config.r600.max_pipes * 16;
	switch (rdev->family) {
	case CHIP_RV610:
	case CHIP_RV620:
	case CHIP_RS780:
	case CHIP_RS880:
		tmp += 32;
		break;
	case CHIP_RV670:
		tmp += 128;
		break;
	default:
		break;
	}
	if (tmp > 256) {
		tmp = 256;
	}
	WREG32(VGT_ES_PER_GS, 128);
	WREG32(VGT_GS_PER_ES, tmp);
	WREG32(VGT_GS_PER_VS, 2);
	WREG32(VGT_GS_VERTEX_REUSE, 16);

	/* more default values. 2D/3D driver should adjust as needed */
	WREG32(PA_SC_LINE_STIPPLE_STATE, 0);
	WREG32(VGT_STRMOUT_EN, 0);
	WREG32(SX_MISC, 0);
	WREG32(PA_SC_MODE_CNTL, 0);
	WREG32(PA_SC_AA_CONFIG, 0);
	WREG32(PA_SC_LINE_STIPPLE, 0);
	WREG32(SPI_INPUT_Z, 0);
	WREG32(SPI_PS_IN_CONTROL_0, NUM_INTERP(2));
	WREG32(CB_COLOR7_FRAG, 0);

	/* Clear render buffer base addresses */
	WREG32(CB_COLOR0_BASE, 0);
	WREG32(CB_COLOR1_BASE, 0);
	WREG32(CB_COLOR2_BASE, 0);
	WREG32(CB_COLOR3_BASE, 0);
	WREG32(CB_COLOR4_BASE, 0);
	WREG32(CB_COLOR5_BASE, 0);
	WREG32(CB_COLOR6_BASE, 0);
	WREG32(CB_COLOR7_BASE, 0);
	WREG32(CB_COLOR7_FRAG, 0);

	switch (rdev->family) {
	case CHIP_RV610:
	case CHIP_RV620:
	case CHIP_RS780:
	case CHIP_RS880:
		tmp = TC_L2_SIZE(8);
		break;
	case CHIP_RV630:
	case CHIP_RV635:
		tmp = TC_L2_SIZE(4);
		break;
	case CHIP_R600:
		tmp = TC_L2_SIZE(0) | L2_DISABLE_LATE_HIT;
		break;
	default:
		tmp = TC_L2_SIZE(0);
		break;
	}
	WREG32(TC_CNTL, tmp);

	tmp = RREG32(HDP_HOST_PATH_CNTL);
	WREG32(HDP_HOST_PATH_CNTL, tmp);

	tmp = RREG32(ARB_POP);
	tmp |= ENABLE_TC128;
	WREG32(ARB_POP, tmp);

	WREG32(PA_SC_MULTI_CHIP_CNTL, 0);
	WREG32(PA_CL_ENHANCE, (CLIP_VTX_REORDER_ENA |
			       NUM_CLIP_SEQ(3)));
	WREG32(PA_SC_ENHANCE, FORCE_EOV_MAX_CLK_CNT(4095));
	WREG32(VC_ENHANCE, 0);
}


/*
 * Indirect registers accessor
 */
u32 r600_pciep_rreg(struct radeon_device *rdev, u32 reg)
{
	unsigned long flags;
	u32 r;

	spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
	WREG32(PCIE_PORT_INDEX, ((reg) & 0xff));
	(void)RREG32(PCIE_PORT_INDEX);
	r = RREG32(PCIE_PORT_DATA);
	spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
	return r;
}

void r600_pciep_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
	unsigned long flags;

	spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
	WREG32(PCIE_PORT_INDEX, ((reg) & 0xff));
	(void)RREG32(PCIE_PORT_INDEX);
	WREG32(PCIE_PORT_DATA, (v));
	(void)RREG32(PCIE_PORT_DATA);
	spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
}

/*
 * CP & Ring
 */
void r600_cp_stop(struct radeon_device *rdev)
{
	if (rdev->asic->copy.copy_ring_index == RADEON_RING_TYPE_GFX_INDEX)
		radeon_ttm_set_active_vram_size(rdev, rdev->mc.visible_vram_size);
	WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1));
	WREG32(SCRATCH_UMSK, 0);
	rdev->ring[RADEON_RING_TYPE_GFX_INDEX].ready = false;
}

int r600_init_microcode(struct radeon_device *rdev)
{
	const char *chip_name;
	const char *rlc_chip_name;
	const char *smc_chip_name = "RV770";
	size_t pfp_req_size, me_req_size, rlc_req_size, smc_req_size = 0;
	char fw_name[30];
	int err;

	DRM_DEBUG("\n");

	switch (rdev->family) {
	case CHIP_R600:
		chip_name = "R600";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV610:
		chip_name = "RV610";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV630:
		chip_name = "RV630";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV620:
		chip_name = "RV620";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV635:
		chip_name = "RV635";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV670:
		chip_name = "RV670";
		rlc_chip_name = "R600";
		break;
	case CHIP_RS780:
	case CHIP_RS880:
		chip_name = "RS780";
		rlc_chip_name = "R600";
		break;
	case CHIP_RV770:
		chip_name = "RV770";
		rlc_chip_name = "R700";
		smc_chip_name = "RV770";
		smc_req_size = ALIGN(RV770_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_RV730:
		chip_name = "RV730";
		rlc_chip_name = "R700";
		smc_chip_name = "RV730";
		smc_req_size = ALIGN(RV730_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_RV710:
		chip_name = "RV710";
		rlc_chip_name = "R700";
		smc_chip_name = "RV710";
		smc_req_size = ALIGN(RV710_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_RV740:
		chip_name = "RV730";
		rlc_chip_name = "R700";
		smc_chip_name = "RV740";
		smc_req_size = ALIGN(RV740_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_CEDAR:
		chip_name = "CEDAR";
		rlc_chip_name = "CEDAR";
		smc_chip_name = "CEDAR";
		smc_req_size = ALIGN(CEDAR_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_REDWOOD:
		chip_name = "REDWOOD";
		rlc_chip_name = "REDWOOD";
		smc_chip_name = "REDWOOD";
		smc_req_size = ALIGN(REDWOOD_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_JUNIPER:
		chip_name = "JUNIPER";
		rlc_chip_name = "JUNIPER";
		smc_chip_name = "JUNIPER";
		smc_req_size = ALIGN(JUNIPER_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_CYPRESS:
	case CHIP_HEMLOCK:
		chip_name = "CYPRESS";
		rlc_chip_name = "CYPRESS";
		smc_chip_name = "CYPRESS";
		smc_req_size = ALIGN(CYPRESS_SMC_UCODE_SIZE, 4);
		break;
	case CHIP_PALM:
		chip_name = "PALM";
		rlc_chip_name = "SUMO";
		break;
	case CHIP_SUMO:
		chip_name = "SUMO";
		rlc_chip_name = "SUMO";
		break;
	case CHIP_SUMO2:
		chip_name = "SUMO2";
		rlc_chip_name = "SUMO";
		break;
	default: BUG();
	}

	if (rdev->family >= CHIP_CEDAR) {
		pfp_req_size = EVERGREEN_PFP_UCODE_SIZE * 4;
		me_req_size = EVERGREEN_PM4_UCODE_SIZE * 4;
		rlc_req_size = EVERGREEN_RLC_UCODE_SIZE * 4;
	} else if (rdev->family >= CHIP_RV770) {
		pfp_req_size = R700_PFP_UCODE_SIZE * 4;
		me_req_size = R700_PM4_UCODE_SIZE * 4;
		rlc_req_size = R700_RLC_UCODE_SIZE * 4;
	} else {
		pfp_req_size = R600_PFP_UCODE_SIZE * 4;
		me_req_size = R600_PM4_UCODE_SIZE * 12;
		rlc_req_size = R600_RLC_UCODE_SIZE * 4;
	}

	DRM_INFO("Loading %s Microcode\n", chip_name);

	snprintf(fw_name, sizeof(fw_name), "radeon/%s_pfp.bin", chip_name);
	err = request_firmware(&rdev->pfp_fw, fw_name, rdev->dev);
	if (err)
		goto out;
	if (rdev->pfp_fw->size != pfp_req_size) {
		printk(KERN_ERR
		       "r600_cp: Bogus length %zu in firmware \"%s\"\n",
		       rdev->pfp_fw->size, fw_name);
		err = -EINVAL;
		goto out;
	}

	snprintf(fw_name, sizeof(fw_name), "radeon/%s_me.bin", chip_name);
	err = request_firmware(&rdev->me_fw, fw_name, rdev->dev);
	if (err)
		goto out;
	if (rdev->me_fw->size != me_req_size) {
		printk(KERN_ERR
		       "r600_cp: Bogus length %zu in firmware \"%s\"\n",
		       rdev->me_fw->size, fw_name);
		err = -EINVAL;
	}

	snprintf(fw_name, sizeof(fw_name), "radeon/%s_rlc.bin", rlc_chip_name);
	err = request_firmware(&rdev->rlc_fw, fw_name, rdev->dev);
	if (err)
		goto out;
	if (rdev->rlc_fw->size != rlc_req_size) {
		printk(KERN_ERR
		       "r600_rlc: Bogus length %zu in firmware \"%s\"\n",
		       rdev->rlc_fw->size, fw_name);
		err = -EINVAL;
	}

	if ((rdev->family >= CHIP_RV770) && (rdev->family <= CHIP_HEMLOCK)) {
		snprintf(fw_name, sizeof(fw_name), "radeon/%s_smc.bin", smc_chip_name);
		err = request_firmware(&rdev->smc_fw, fw_name, rdev->dev);
		if (err) {
			printk(KERN_ERR
			       "smc: error loading firmware \"%s\"\n",
			       fw_name);
			release_firmware(rdev->smc_fw);
			rdev->smc_fw = NULL;
			err = 0;
		} else if (rdev->smc_fw->size != smc_req_size) {
			printk(KERN_ERR
			       "smc: Bogus length %zu in firmware \"%s\"\n",
			       rdev->smc_fw->size, fw_name);
			err = -EINVAL;
		}
	}

out:
	if (err) {
		if (err != -EINVAL)
			printk(KERN_ERR
			       "r600_cp: Failed to load firmware \"%s\"\n",
			       fw_name);
		release_firmware(rdev->pfp_fw);
		rdev->pfp_fw = NULL;
		release_firmware(rdev->me_fw);
		rdev->me_fw = NULL;
		release_firmware(rdev->rlc_fw);
		rdev->rlc_fw = NULL;
		release_firmware(rdev->smc_fw);
		rdev->smc_fw = NULL;
	}
	return err;
}

u32 r600_gfx_get_rptr(struct radeon_device *rdev,
		      struct radeon_ring *ring)
{
	u32 rptr;

	if (rdev->wb.enabled)
		rptr = rdev->wb.wb[ring->rptr_offs/4];
	else
		rptr = RREG32(R600_CP_RB_RPTR);

	return rptr;
}

u32 r600_gfx_get_wptr(struct radeon_device *rdev,
		      struct radeon_ring *ring)
{
	u32 wptr;

	wptr = RREG32(R600_CP_RB_WPTR);

	return wptr;
}

void r600_gfx_set_wptr(struct radeon_device *rdev,
		       struct radeon_ring *ring)
{
	WREG32(R600_CP_RB_WPTR, ring->wptr);
	(void)RREG32(R600_CP_RB_WPTR);
}

static int r600_cp_load_microcode(struct radeon_device *rdev)
{
	const __be32 *fw_data;
	int i;

	if (!rdev->me_fw || !rdev->pfp_fw)
		return -EINVAL;

	r600_cp_stop(rdev);

	WREG32(CP_RB_CNTL,
#ifdef __BIG_ENDIAN
	       BUF_SWAP_32BIT |
#endif
	       RB_NO_UPDATE | RB_BLKSZ(15) | RB_BUFSZ(3));

	/* Reset cp */
	WREG32(GRBM_SOFT_RESET, SOFT_RESET_CP);
	RREG32(GRBM_SOFT_RESET);
	mdelay(15);
	WREG32(GRBM_SOFT_RESET, 0);

	WREG32(CP_ME_RAM_WADDR, 0);

	fw_data = (const __be32 *)rdev->me_fw->data;
	WREG32(CP_ME_RAM_WADDR, 0);
	for (i = 0; i < R600_PM4_UCODE_SIZE * 3; i++)
		WREG32(CP_ME_RAM_DATA,
		       be32_to_cpup(fw_data++));

	fw_data = (const __be32 *)rdev->pfp_fw->data;
	WREG32(CP_PFP_UCODE_ADDR, 0);
	for (i = 0; i < R600_PFP_UCODE_SIZE; i++)
		WREG32(CP_PFP_UCODE_DATA,
		       be32_to_cpup(fw_data++));

	WREG32(CP_PFP_UCODE_ADDR, 0);
	WREG32(CP_ME_RAM_WADDR, 0);
	WREG32(CP_ME_RAM_RADDR, 0);
	return 0;
}

int r600_cp_start(struct radeon_device *rdev)
{
	struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
	int r;
	uint32_t cp_me;

	r = radeon_ring_lock(rdev, ring, 7);
	if (r) {
		DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
		return r;
	}
	radeon_ring_write(ring, PACKET3(PACKET3_ME_INITIALIZE, 5));
	radeon_ring_write(ring, 0x1);
	if (rdev->family >= CHIP_RV770) {
		radeon_ring_write(ring, 0x0);
		radeon_ring_write(ring, rdev->config.rv770.max_hw_contexts - 1);
	} else {
		radeon_ring_write(ring, 0x3);
		radeon_ring_write(ring, rdev->config.r600.max_hw_contexts - 1);
	}
	radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
	radeon_ring_write(ring, 0);
	radeon_ring_write(ring, 0);
	radeon_ring_unlock_commit(rdev, ring);

	cp_me = 0xff;
	WREG32(R_0086D8_CP_ME_CNTL, cp_me);
	return 0;
}

int r600_cp_resume(struct radeon_device *rdev)
{
	struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
	u32 tmp;
	u32 rb_bufsz;
	int r;

	/* Reset cp */
	WREG32(GRBM_SOFT_RESET, SOFT_RESET_CP);
	RREG32(GRBM_SOFT_RESET);
	mdelay(15);
	WREG32(GRBM_SOFT_RESET, 0);

	/* Set ring buffer size */
	rb_bufsz = order_base_2(ring->ring_size / 8);
	tmp = (order_base_2(RADEON_GPU_PAGE_SIZE/8) << 8) | rb_bufsz;
#ifdef __BIG_ENDIAN
	tmp |= BUF_SWAP_32BIT;
#endif
	WREG32(CP_RB_CNTL, tmp);
	WREG32(CP_SEM_WAIT_TIMER, 0x0);

	/* Set the write pointer delay */
	WREG32(CP_RB_WPTR_DELAY, 0);

	/* Initialize the ring buffer's read and write pointers */
	WREG32(CP_RB_CNTL, tmp | RB_RPTR_WR_ENA);
	WREG32(CP_RB_RPTR_WR, 0);
	ring->wptr = 0;
	WREG32(CP_RB_WPTR, ring->wptr);

	/* set the wb address whether it's enabled or not */
	WREG32(CP_RB_RPTR_ADDR,
	       ((rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC));
	WREG32(CP_RB_RPTR_ADDR_HI, upper_32_bits(rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFF);
	WREG32(SCRATCH_ADDR, ((rdev->wb.gpu_addr + RADEON_WB_SCRATCH_OFFSET) >> 8) & 0xFFFFFFFF);

	if (rdev->wb.enabled)
		WREG32(SCRATCH_UMSK, 0xff);
	else {
		tmp |= RB_NO_UPDATE;
		WREG32(SCRATCH_UMSK, 0);
	}

	mdelay(1);
	WREG32(CP_RB_CNTL, tmp);

	WREG32(CP_RB_BASE, ring->gpu_addr >> 8);
	WREG32(CP_DEBUG, (1 << 27) | (1 << 28));

	r600_cp_start(rdev);
	ring->ready = true;
	r = radeon_ring_test(rdev, RADEON_RING_TYPE_GFX_INDEX, ring);
	if (r) {
		ring->ready = false;
		return r;
	}

	if (rdev->asic->copy.copy_ring_index == RADEON_RING_TYPE_GFX_INDEX)
		radeon_ttm_set_active_vram_size(rdev, rdev->mc.real_vram_size);

	return 0;
}

void r600_ring_init(struct radeon_device *rdev, struct radeon_ring *ring, unsigned ring_size)
{
	u32 rb_bufsz;
	int r;

	/* Align ring size */
	rb_bufsz = order_base_2(ring_size / 8);
	ring_size = (1 << (rb_bufsz + 1)) * 4;
	ring->ring_size = ring_size;
	ring->align_mask = 16 - 1;

	if (radeon_ring_supports_scratch_reg(rdev, ring)) {
		r = radeon_scratch_get(rdev, &ring->rptr_save_reg);
		if (r) {
			DRM_ERROR("failed to get scratch reg for rptr save (%d).\n", r);
			ring->rptr_save_reg = 0;
		}
	}
}

void r600_cp_fini(struct radeon_device *rdev)
{
	struct radeon_ring *ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
	r600_cp_stop(rdev);
	radeon_ring_fini(rdev, ring);
	radeon_scratch_free(rdev, ring->rptr_save_reg);
}

/*
 * GPU scratch registers helpers function.
 */
void r600_scratch_init(struct radeon_device *rdev)
{
	int i;

	rdev->scratch.num_reg = 7;
	rdev->scratch.reg_base = SCRATCH_REG0;
	for (i = 0; i < rdev->scratch.num_reg; i++) {
		rdev->scratch.free[i] = true;
		rdev->scratch.reg[i] = rdev->scratch.reg_base + (i * 4);
	}
}

int r600_ring_test(struct radeon_device *rdev, struct radeon_ring *ring)
{
	uint32_t scratch;
	uint32_t tmp = 0;
	unsigned i;
	int r;

	r = radeon_scratch_get(rdev, &scratch);
	if (r) {
		DRM_ERROR("radeon: cp failed to get scratch reg (%d).\n", r);
		return r;
	}
	WREG32(scratch, 0xCAFEDEAD);
	r = radeon_ring_lock(rdev, ring, 3);
	if (r) {
		DRM_ERROR("radeon: cp failed to lock ring %d (%d).\n", ring->idx, r);
		radeon_scratch_free(rdev, scratch);
		return r;
	}
	radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
	radeon_ring_write(ring, ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
	radeon_ring_write(ring, 0xDEADBEEF);
	radeon_ring_unlock_commit(rdev, ring);
	for (i = 0; i < rdev->usec_timeout; i++) {
		tmp = RREG32(scratch);
		if (tmp == 0xDEADBEEF)
			break;
		DRM_UDELAY(1);
	}
	if (i < rdev->usec_timeout) {
		DRM_INFO("ring test on %d succeeded in %d usecs\n", ring->idx, i);
	} else {
		DRM_ERROR("radeon: ring %d test failed (scratch(0x%04X)=0x%08X)\n",
			  ring->idx, scratch, tmp);
		r = -EINVAL;
	}
	radeon_scratch_free(rdev, scratch);
	return r;
}
|
|
|
|
|
2012-09-28 03:08:35 +08:00
|
|
|
/*
|
|
|
|
* CP fences/semaphores
|
|
|
|
*/
|
|
|
|
|
void r600_fence_ring_emit(struct radeon_device *rdev,
			  struct radeon_fence *fence)
{
	struct radeon_ring *ring = &rdev->ring[fence->ring];
	u32 cp_coher_cntl = PACKET3_TC_ACTION_ENA | PACKET3_VC_ACTION_ENA |
		PACKET3_SH_ACTION_ENA;

	if (rdev->family >= CHIP_RV770)
		cp_coher_cntl |= PACKET3_FULL_CACHE_ENA;

	if (rdev->wb.use_event) {
		u64 addr = rdev->fence_drv[fence->ring].gpu_addr;
		/* flush read cache over gart */
		radeon_ring_write(ring, PACKET3(PACKET3_SURFACE_SYNC, 3));
		radeon_ring_write(ring, cp_coher_cntl);
		radeon_ring_write(ring, 0xFFFFFFFF);
		radeon_ring_write(ring, 0);
		radeon_ring_write(ring, 10); /* poll interval */
		/* EVENT_WRITE_EOP - flush caches, send int */
		radeon_ring_write(ring, PACKET3(PACKET3_EVENT_WRITE_EOP, 4));
		radeon_ring_write(ring, EVENT_TYPE(CACHE_FLUSH_AND_INV_EVENT_TS) | EVENT_INDEX(5));
		radeon_ring_write(ring, lower_32_bits(addr));
		radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | DATA_SEL(1) | INT_SEL(2));
		radeon_ring_write(ring, fence->seq);
		radeon_ring_write(ring, 0);
	} else {
		/* flush read cache over gart */
		radeon_ring_write(ring, PACKET3(PACKET3_SURFACE_SYNC, 3));
		radeon_ring_write(ring, cp_coher_cntl);
		radeon_ring_write(ring, 0xFFFFFFFF);
		radeon_ring_write(ring, 0);
		radeon_ring_write(ring, 10); /* poll interval */
		radeon_ring_write(ring, PACKET3(PACKET3_EVENT_WRITE, 0));
		radeon_ring_write(ring, EVENT_TYPE(CACHE_FLUSH_AND_INV_EVENT) | EVENT_INDEX(0));
		/* wait for 3D idle clean */
		radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
		radeon_ring_write(ring, (WAIT_UNTIL - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
		radeon_ring_write(ring, WAIT_3D_IDLE_bit | WAIT_3D_IDLECLEAN_bit);
		/* Emit fence sequence & fire IRQ */
		radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
		radeon_ring_write(ring, ((rdev->fence_drv[fence->ring].scratch_reg - PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
		radeon_ring_write(ring, fence->seq);
		/* CP_INTERRUPT packet 3 no longer exists, use packet 0 */
		radeon_ring_write(ring, PACKET0(CP_INT_STATUS, 0));
		radeon_ring_write(ring, RB_INT_STAT);
	}
}

bool r600_semaphore_ring_emit(struct radeon_device *rdev,
			      struct radeon_ring *ring,
			      struct radeon_semaphore *semaphore,
			      bool emit_wait)
{
	uint64_t addr = semaphore->gpu_addr;
	unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL;

	if (rdev->family < CHIP_CAYMAN)
		sel |= PACKET3_SEM_WAIT_ON_SIGNAL;

	radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1));
	radeon_ring_write(ring, lower_32_bits(addr));
	radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);

	return true;
}

/**
 * r600_copy_cpdma - copy pages using the CP DMA engine
 *
 * @rdev: radeon_device pointer
 * @src_offset: src GPU address
 * @dst_offset: dst GPU address
 * @num_gpu_pages: number of GPU pages to xfer
 * @fence: radeon fence object
 *
 * Copy GPU paging using the CP DMA engine (r6xx+).
 * Used by the radeon ttm implementation to move pages if
 * registered as the asic copy callback.
 */
int r600_copy_cpdma(struct radeon_device *rdev,
		    uint64_t src_offset, uint64_t dst_offset,
		    unsigned num_gpu_pages,
		    struct radeon_fence **fence)
{
	struct radeon_semaphore *sem = NULL;
	int ring_index = rdev->asic->copy.blit_ring_index;
	struct radeon_ring *ring = &rdev->ring[ring_index];
	u32 size_in_bytes, cur_size_in_bytes, tmp;
	int i, num_loops;
	int r = 0;

	r = radeon_semaphore_create(rdev, &sem);
	if (r) {
		DRM_ERROR("radeon: moving bo (%d).\n", r);
		return r;
	}

	size_in_bytes = (num_gpu_pages << RADEON_GPU_PAGE_SHIFT);
	num_loops = DIV_ROUND_UP(size_in_bytes, 0x1fffff);
	r = radeon_ring_lock(rdev, ring, num_loops * 6 + 24);
	if (r) {
		DRM_ERROR("radeon: moving bo (%d).\n", r);
		radeon_semaphore_free(rdev, &sem, NULL);
		return r;
	}

	radeon_semaphore_sync_to(sem, *fence);
	radeon_semaphore_sync_rings(rdev, sem, ring->idx);

	radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
	radeon_ring_write(ring, (WAIT_UNTIL - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
	radeon_ring_write(ring, WAIT_3D_IDLE_bit);
	for (i = 0; i < num_loops; i++) {
		cur_size_in_bytes = size_in_bytes;
		if (cur_size_in_bytes > 0x1fffff)
			cur_size_in_bytes = 0x1fffff;
		size_in_bytes -= cur_size_in_bytes;
		tmp = upper_32_bits(src_offset) & 0xff;
		if (size_in_bytes == 0)
			tmp |= PACKET3_CP_DMA_CP_SYNC;
		radeon_ring_write(ring, PACKET3(PACKET3_CP_DMA, 4));
		radeon_ring_write(ring, lower_32_bits(src_offset));
		radeon_ring_write(ring, tmp);
		radeon_ring_write(ring, lower_32_bits(dst_offset));
		radeon_ring_write(ring, upper_32_bits(dst_offset) & 0xff);
		radeon_ring_write(ring, cur_size_in_bytes);
		src_offset += cur_size_in_bytes;
		dst_offset += cur_size_in_bytes;
	}
	radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
	radeon_ring_write(ring, (WAIT_UNTIL - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
	radeon_ring_write(ring, WAIT_CP_DMA_IDLE_bit);

	r = radeon_fence_emit(rdev, fence, ring->idx);
	if (r) {
		radeon_ring_unlock_undo(rdev, ring);
		radeon_semaphore_free(rdev, &sem, NULL);
		return r;
	}

	radeon_ring_unlock_commit(rdev, ring);
	radeon_semaphore_free(rdev, &sem, *fence);

	return r;
}

int r600_set_surface_reg(struct radeon_device *rdev, int reg,
			 uint32_t tiling_flags, uint32_t pitch,
			 uint32_t offset, uint32_t obj_size)
{
	/* FIXME: implement */
	return 0;
}

void r600_clear_surface_reg(struct radeon_device *rdev, int reg)
{
	/* FIXME: implement */
}

static int r600_startup(struct radeon_device *rdev)
{
	struct radeon_ring *ring;
	int r;

	/* enable pcie gen2 link */
	r600_pcie_gen2_enable(rdev);

	/* scratch needs to be initialized before MC */
	r = r600_vram_scratch_init(rdev);
	if (r)
		return r;

	r600_mc_program(rdev);

	if (rdev->flags & RADEON_IS_AGP) {
		r600_agp_enable(rdev);
	} else {
		r = r600_pcie_gart_enable(rdev);
		if (r)
			return r;
	}
	r600_gpu_init(rdev);

	/* allocate wb buffer */
	r = radeon_wb_init(rdev);
	if (r)
		return r;

	r = radeon_fence_driver_start_ring(rdev, RADEON_RING_TYPE_GFX_INDEX);
	if (r) {
		dev_err(rdev->dev, "failed initializing CP fences (%d).\n", r);
		return r;
	}

	/* Enable IRQ */
	if (!rdev->irq.installed) {
		r = radeon_irq_kms_init(rdev);
		if (r)
			return r;
	}

	r = r600_irq_init(rdev);
	if (r) {
		DRM_ERROR("radeon: IH init failed (%d).\n", r);
		radeon_irq_kms_fini(rdev);
		return r;
	}
	r600_irq_set(rdev);

	ring = &rdev->ring[RADEON_RING_TYPE_GFX_INDEX];
	r = radeon_ring_init(rdev, ring, ring->ring_size, RADEON_WB_CP_RPTR_OFFSET,
			     RADEON_CP_PACKET2);
	if (r)
		return r;

	r = r600_cp_load_microcode(rdev);
	if (r)
		return r;
	r = r600_cp_resume(rdev);
	if (r)
		return r;

	r = radeon_ib_pool_init(rdev);
	if (r) {
		dev_err(rdev->dev, "IB initialization failed (%d).\n", r);
		return r;
	}

	r = r600_audio_init(rdev);
	if (r) {
		DRM_ERROR("radeon: audio init failed\n");
		return r;
	}

	return 0;
}

void r600_vga_set_state(struct radeon_device *rdev, bool state)
{
	uint32_t temp;

	temp = RREG32(CONFIG_CNTL);
	if (state == false) {
		temp &= ~(1<<0);
		temp |= (1<<1);
	} else {
		temp &= ~(1<<1);
	}
	WREG32(CONFIG_CNTL, temp);
}

int r600_resume(struct radeon_device *rdev)
{
	int r;

	/* Do not reset GPU before posting, on r600 hw unlike on r500 hw,
	 * posting will perform necessary task to bring back GPU into good
	 * shape.
	 */
	/* post card */
	atom_asic_init(rdev->mode_info.atom_context);

	if (rdev->pm.pm_method == PM_METHOD_DPM)
		radeon_pm_resume(rdev);

	rdev->accel_working = true;
	r = r600_startup(rdev);
	if (r) {
		DRM_ERROR("r600 startup failed on resume\n");
		rdev->accel_working = false;
		return r;
	}

	return r;
}

int r600_suspend(struct radeon_device *rdev)
{
	radeon_pm_suspend(rdev);
	r600_audio_fini(rdev);
	r600_cp_stop(rdev);
	r600_irq_suspend(rdev);
	radeon_wb_disable(rdev);
	r600_pcie_gart_disable(rdev);

	return 0;
}

/* Plan is to move initialization into that function and use
 * helper functions so that radeon_device_init pretty much
 * does nothing more than calling asic-specific functions. This
 * should also allow us to remove a bunch of callback functions
 * like vram_info.
 */
int r600_init(struct radeon_device *rdev)
{
	int r;

	if (r600_debugfs_mc_info_init(rdev)) {
		DRM_ERROR("Failed to register debugfs file for mc !\n");
	}
	/* Read BIOS */
	if (!radeon_get_bios(rdev)) {
		if (ASIC_IS_AVIVO(rdev))
			return -EINVAL;
	}
	/* Must be an ATOMBIOS */
	if (!rdev->is_atom_bios) {
		dev_err(rdev->dev, "Expecting atombios for R600 GPU\n");
		return -EINVAL;
	}
	r = radeon_atombios_init(rdev);
	if (r)
		return r;
	/* Post card if necessary */
	if (!radeon_card_posted(rdev)) {
		if (!rdev->bios) {
			dev_err(rdev->dev, "Card not posted and no BIOS - ignoring\n");
			return -EINVAL;
		}
		DRM_INFO("GPU not posted. posting now...\n");
		atom_asic_init(rdev->mode_info.atom_context);
	}
	/* Initialize scratch registers */
	r600_scratch_init(rdev);
	/* Initialize surface registers */
	radeon_surface_init(rdev);
	/* Initialize clocks */
	radeon_get_clock_info(rdev->ddev);
	/* Fence driver */
	r = radeon_fence_driver_init(rdev);
	if (r)
		return r;
	if (rdev->flags & RADEON_IS_AGP) {
		r = radeon_agp_init(rdev);
		if (r)
			radeon_agp_disable(rdev);
	}
	r = r600_mc_init(rdev);
	if (r)
		return r;
	/* Memory manager */
	r = radeon_bo_init(rdev);
	if (r)
		return r;

	if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
		r = r600_init_microcode(rdev);
		if (r) {
			DRM_ERROR("Failed to load firmware!\n");
			return r;
		}
	}

	/* Initialize power management */
	radeon_pm_init(rdev);

	rdev->ring[RADEON_RING_TYPE_GFX_INDEX].ring_obj = NULL;
	r600_ring_init(rdev, &rdev->ring[RADEON_RING_TYPE_GFX_INDEX], 1024 * 1024);

	rdev->ih.ring_obj = NULL;
	r600_ih_ring_init(rdev, 64 * 1024);

	r = r600_pcie_gart_init(rdev);
	if (r)
		return r;

	rdev->accel_working = true;
	r = r600_startup(rdev);
	if (r) {
		dev_err(rdev->dev, "disabling GPU acceleration\n");
		r600_cp_fini(rdev);
		r600_irq_fini(rdev);
		radeon_wb_fini(rdev);
		radeon_ib_pool_fini(rdev);
		radeon_irq_kms_fini(rdev);
		r600_pcie_gart_fini(rdev);
		rdev->accel_working = false;
	}

	return 0;
}

void r600_fini(struct radeon_device *rdev)
{
	radeon_pm_fini(rdev);
	r600_audio_fini(rdev);
	r600_cp_fini(rdev);
	r600_irq_fini(rdev);
	radeon_wb_fini(rdev);
	radeon_ib_pool_fini(rdev);
	radeon_irq_kms_fini(rdev);
	r600_pcie_gart_fini(rdev);
	r600_vram_scratch_fini(rdev);
	radeon_agp_fini(rdev);
	radeon_gem_fini(rdev);
	radeon_fence_driver_fini(rdev);
	radeon_bo_fini(rdev);
	radeon_atombios_fini(rdev);
	kfree(rdev->bios);
	rdev->bios = NULL;
}

/*
 * CS stuff
 */
void r600_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
{
	struct radeon_ring *ring = &rdev->ring[ib->ring];
	u32 next_rptr;

	if (ring->rptr_save_reg) {
		next_rptr = ring->wptr + 3 + 4;
		radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
		radeon_ring_write(ring, ((ring->rptr_save_reg -
					 PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
		radeon_ring_write(ring, next_rptr);
	} else if (rdev->wb.enabled) {
		next_rptr = ring->wptr + 5 + 4;
		radeon_ring_write(ring, PACKET3(PACKET3_MEM_WRITE, 3));
		radeon_ring_write(ring, ring->next_rptr_gpu_addr & 0xfffffffc);
		radeon_ring_write(ring, (upper_32_bits(ring->next_rptr_gpu_addr) & 0xff) | (1 << 18));
		radeon_ring_write(ring, next_rptr);
		radeon_ring_write(ring, 0);
	}

	radeon_ring_write(ring, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
	radeon_ring_write(ring,
#ifdef __BIG_ENDIAN
			  (2 << 0) |
#endif
			  (ib->gpu_addr & 0xFFFFFFFC));
	radeon_ring_write(ring, upper_32_bits(ib->gpu_addr) & 0xFF);
	radeon_ring_write(ring, ib->length_dw);
}

int r600_ib_test(struct radeon_device *rdev, struct radeon_ring *ring)
{
	struct radeon_ib ib;
	uint32_t scratch;
	uint32_t tmp = 0;
	unsigned i;
	int r;

	r = radeon_scratch_get(rdev, &scratch);
	if (r) {
		DRM_ERROR("radeon: failed to get scratch reg (%d).\n", r);
		return r;
	}
	WREG32(scratch, 0xCAFEDEAD);
	r = radeon_ib_get(rdev, ring->idx, &ib, NULL, 256);
	if (r) {
		DRM_ERROR("radeon: failed to get ib (%d).\n", r);
		goto free_scratch;
	}
	ib.ptr[0] = PACKET3(PACKET3_SET_CONFIG_REG, 1);
	ib.ptr[1] = ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
	ib.ptr[2] = 0xDEADBEEF;
	ib.length_dw = 3;
	r = radeon_ib_schedule(rdev, &ib, NULL);
	if (r) {
		DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
		goto free_ib;
	}
	r = radeon_fence_wait(ib.fence, false);
	if (r) {
		DRM_ERROR("radeon: fence wait failed (%d).\n", r);
		goto free_ib;
	}
	for (i = 0; i < rdev->usec_timeout; i++) {
		tmp = RREG32(scratch);
		if (tmp == 0xDEADBEEF)
			break;
		DRM_UDELAY(1);
	}
	if (i < rdev->usec_timeout) {
		DRM_INFO("ib test on ring %d succeeded in %u usecs\n", ib.fence->ring, i);
	} else {
		DRM_ERROR("radeon: ib test failed (scratch(0x%04X)=0x%08X)\n",
			  scratch, tmp);
		r = -EINVAL;
	}
free_ib:
	radeon_ib_free(rdev, &ib);
free_scratch:
	radeon_scratch_free(rdev, scratch);
	return r;
}

2009-12-02 02:43:46 +08:00
|
|
|
/*
|
|
|
|
* Interrupts
|
|
|
|
*
|
|
|
|
* Interrupts use a ring buffer on r6xx/r7xx hardware. It works pretty
|
|
|
|
* the same as the CP ring buffer, but in reverse. Rather than the CPU
|
|
|
|
* writing to the ring and the GPU consuming, the GPU writes to the ring
|
|
|
|
* and host consumes. As the host irq handler processes interrupts, it
|
|
|
|
* increments the rptr. When the rptr catches up with the wptr, all the
|
|
|
|
* current interrupts have been processed.
|
|
|
|
*/
|
|
|
|
|
|
|
|
void r600_ih_ring_init(struct radeon_device *rdev, unsigned ring_size)
|
|
|
|
{
|
|
|
|
u32 rb_bufsz;
|
|
|
|
|
|
|
|
/* Align ring size */
|
2013-07-10 20:11:59 +08:00
|
|
|
rb_bufsz = order_base_2(ring_size / 4);
|
2009-12-02 02:43:46 +08:00
|
|
|
ring_size = (1 << rb_bufsz) * 4;
|
|
|
|
rdev->ih.ring_size = ring_size;
|
2010-01-15 21:44:37 +08:00
|
|
|
rdev->ih.ptr_mask = rdev->ih.ring_size - 1;
|
|
|
|
rdev->ih.rptr = 0;
|
2009-12-02 02:43:46 +08:00
|
|
|
}
|
|
|
|
|
2012-03-21 05:18:22 +08:00
|
|
|
int r600_ih_ring_alloc(struct radeon_device *rdev)
|
2009-12-02 02:43:46 +08:00
|
|
|
{
|
|
|
|
int r;
|
|
|
|
|
|
|
|
/* Allocate ring buffer */
|
|
|
|
if (rdev->ih.ring_obj == NULL) {
|
2011-02-19 00:59:16 +08:00
|
|
|
r = radeon_bo_create(rdev, rdev->ih.ring_size,
|
2010-11-18 08:00:26 +08:00
|
|
|
PAGE_SIZE, true,
|
2009-11-20 21:29:23 +08:00
|
|
|
RADEON_GEM_DOMAIN_GTT,
|
2012-05-11 06:33:13 +08:00
|
|
|
NULL, &rdev->ih.ring_obj);
|
2009-12-02 02:43:46 +08:00
|
|
|
if (r) {
|
|
|
|
DRM_ERROR("radeon: failed to create ih ring buffer (%d).\n", r);
|
|
|
|
return r;
|
|
|
|
}
|
2009-11-20 21:29:23 +08:00
|
|
|
r = radeon_bo_reserve(rdev->ih.ring_obj, false);
|
|
|
|
if (unlikely(r != 0))
|
|
|
|
return r;
|
|
|
|
r = radeon_bo_pin(rdev->ih.ring_obj,
|
|
|
|
RADEON_GEM_DOMAIN_GTT,
|
|
|
|
&rdev->ih.gpu_addr);
|
2009-12-02 02:43:46 +08:00
|
|
|
if (r) {
|
2009-11-20 21:29:23 +08:00
|
|
|
radeon_bo_unreserve(rdev->ih.ring_obj);
|
2009-12-02 02:43:46 +08:00
|
|
|
DRM_ERROR("radeon: failed to pin ih ring buffer (%d).\n", r);
|
|
|
|
return r;
|
|
|
|
}
|
2009-11-20 21:29:23 +08:00
|
|
|
r = radeon_bo_kmap(rdev->ih.ring_obj,
|
|
|
|
(void **)&rdev->ih.ring);
|
|
|
|
radeon_bo_unreserve(rdev->ih.ring_obj);
|
2009-12-02 02:43:46 +08:00
|
|
|
if (r) {
|
|
|
|
DRM_ERROR("radeon: failed to map ih ring buffer (%d).\n", r);
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-03-21 05:18:22 +08:00
|
|
|
void r600_ih_ring_fini(struct radeon_device *rdev)
|
2009-12-02 02:43:46 +08:00
|
|
|
{
|
2009-11-20 21:29:23 +08:00
|
|
|
int r;
|
2009-12-02 02:43:46 +08:00
|
|
|
if (rdev->ih.ring_obj) {
|
2009-11-20 21:29:23 +08:00
|
|
|
r = radeon_bo_reserve(rdev->ih.ring_obj, false);
|
|
|
|
if (likely(r == 0)) {
|
|
|
|
radeon_bo_kunmap(rdev->ih.ring_obj);
|
|
|
|
radeon_bo_unpin(rdev->ih.ring_obj);
|
|
|
|
radeon_bo_unreserve(rdev->ih.ring_obj);
|
|
|
|
}
|
|
|
|
radeon_bo_unref(&rdev->ih.ring_obj);
|
2009-12-02 02:43:46 +08:00
|
|
|
rdev->ih.ring = NULL;
|
|
|
|
rdev->ih.ring_obj = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-03-25 01:55:51 +08:00
|
|
|
void r600_rlc_stop(struct radeon_device *rdev)
|
2009-12-02 02:43:46 +08:00
|
|
|
{
|
|
|
|
|
2010-03-25 01:55:51 +08:00
|
|
|
if ((rdev->family >= CHIP_RV770) &&
|
|
|
|
(rdev->family <= CHIP_RV740)) {
|
2009-12-02 02:43:46 +08:00
|
|
|
/* r7xx asics need to soft reset RLC before halting */
|
|
|
|
WREG32(SRBM_SOFT_RESET, SOFT_RESET_RLC);
|
|
|
|
RREG32(SRBM_SOFT_RESET);
|
2012-04-06 02:58:22 +08:00
|
|
|
mdelay(15);
|
2009-12-02 02:43:46 +08:00
|
|
|
WREG32(SRBM_SOFT_RESET, 0);
|
|
|
|
RREG32(SRBM_SOFT_RESET);
|
|
|
|
}
|
|
|
|
|
|
|
|
WREG32(RLC_CNTL, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void r600_rlc_start(struct radeon_device *rdev)
|
|
|
|
{
|
|
|
|
WREG32(RLC_CNTL, RLC_ENABLE);
|
|
|
|
}
|
|
|
|
|
2013-04-13 01:52:52 +08:00
|
|
|
static int r600_rlc_resume(struct radeon_device *rdev)
|
2009-12-02 02:43:46 +08:00
|
|
|
{
|
|
|
|
u32 i;
|
|
|
|
const __be32 *fw_data;
|
|
|
|
|
|
|
|
if (!rdev->rlc_fw)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
r600_rlc_stop(rdev);
|
|
|
|
|
|
|
|
WREG32(RLC_HB_CNTL, 0);
|
2012-03-21 05:18:39 +08:00
|
|
|
|
2013-04-13 01:52:52 +08:00
|
|
|
WREG32(RLC_HB_BASE, 0);
|
|
|
|
WREG32(RLC_HB_RPTR, 0);
|
|
|
|
WREG32(RLC_HB_WPTR, 0);
|
|
|
|
WREG32(RLC_HB_WPTR_LSB_ADDR, 0);
|
|
|
|
WREG32(RLC_HB_WPTR_MSB_ADDR, 0);
|
2009-12-02 02:43:46 +08:00
|
|
|
WREG32(RLC_MC_CNTL, 0);
|
|
|
|
WREG32(RLC_UCODE_CNTL, 0);
|
|
|
|
|
|
|
|
fw_data = (const __be32 *)rdev->rlc_fw->data;
|
2013-04-13 01:52:52 +08:00
|
|
|
if (rdev->family >= CHIP_RV770) {
|
2009-12-02 02:43:46 +08:00
|
|
|
for (i = 0; i < R700_RLC_UCODE_SIZE; i++) {
|
|
|
|
WREG32(RLC_UCODE_ADDR, i);
|
|
|
|
WREG32(RLC_UCODE_DATA, be32_to_cpup(fw_data++));
|
|
|
|
}
|
|
|
|
} else {
|
2013-01-12 04:33:13 +08:00
|
|
|
for (i = 0; i < R600_RLC_UCODE_SIZE; i++) {
|
2009-12-02 02:43:46 +08:00
|
|
|
WREG32(RLC_UCODE_ADDR, i);
|
|
|
|
WREG32(RLC_UCODE_DATA, be32_to_cpup(fw_data++));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
WREG32(RLC_UCODE_ADDR, 0);
|
|
|
|
|
|
|
|
r600_rlc_start(rdev);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void r600_enable_interrupts(struct radeon_device *rdev)
|
|
|
|
{
|
|
|
|
u32 ih_cntl = RREG32(IH_CNTL);
|
|
|
|
u32 ih_rb_cntl = RREG32(IH_RB_CNTL);
|
|
|
|
|
|
|
|
ih_cntl |= ENABLE_INTR;
|
|
|
|
ih_rb_cntl |= IH_RB_ENABLE;
|
|
|
|
WREG32(IH_CNTL, ih_cntl);
|
|
|
|
WREG32(IH_RB_CNTL, ih_rb_cntl);
|
|
|
|
rdev->ih.enabled = true;
|
|
|
|
}
|
|
|
|
|
2010-03-25 01:55:51 +08:00
|
|
|
void r600_disable_interrupts(struct radeon_device *rdev)
|
2009-12-02 02:43:46 +08:00
|
|
|
{
|
|
|
|
u32 ih_rb_cntl = RREG32(IH_RB_CNTL);
|
|
|
|
u32 ih_cntl = RREG32(IH_CNTL);
|
|
|
|
|
|
|
|
ih_rb_cntl &= ~IH_RB_ENABLE;
|
|
|
|
ih_cntl &= ~ENABLE_INTR;
|
|
|
|
WREG32(IH_RB_CNTL, ih_rb_cntl);
|
|
|
|
WREG32(IH_CNTL, ih_cntl);
|
|
|
|
/* set rptr, wptr to 0 */
|
|
|
|
WREG32(IH_RB_RPTR, 0);
|
|
|
|
WREG32(IH_RB_WPTR, 0);
|
|
|
|
rdev->ih.enabled = false;
|
|
|
|
rdev->ih.rptr = 0;
|
|
|
|
}
|
|
|
|
|
2009-12-05 04:12:21 +08:00
|
|
|
static void r600_disable_interrupt_state(struct radeon_device *rdev)
{
	u32 tmp;

	WREG32(CP_INT_CNTL, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
	tmp = RREG32(DMA_CNTL) & ~TRAP_ENABLE;
	WREG32(DMA_CNTL, tmp);
	WREG32(GRBM_INT_CNTL, 0);
	WREG32(DxMODE_INT_MASK, 0);
	WREG32(D1GRPH_INTERRUPT_CONTROL, 0);
	WREG32(D2GRPH_INTERRUPT_CONTROL, 0);
	if (ASIC_IS_DCE3(rdev)) {
		WREG32(DCE3_DACA_AUTODETECT_INT_CONTROL, 0);
		WREG32(DCE3_DACB_AUTODETECT_INT_CONTROL, 0);
		tmp = RREG32(DC_HPD1_INT_CONTROL) & DC_HPDx_INT_POLARITY;
		WREG32(DC_HPD1_INT_CONTROL, tmp);
		tmp = RREG32(DC_HPD2_INT_CONTROL) & DC_HPDx_INT_POLARITY;
		WREG32(DC_HPD2_INT_CONTROL, tmp);
		tmp = RREG32(DC_HPD3_INT_CONTROL) & DC_HPDx_INT_POLARITY;
		WREG32(DC_HPD3_INT_CONTROL, tmp);
		tmp = RREG32(DC_HPD4_INT_CONTROL) & DC_HPDx_INT_POLARITY;
		WREG32(DC_HPD4_INT_CONTROL, tmp);
		if (ASIC_IS_DCE32(rdev)) {
			tmp = RREG32(DC_HPD5_INT_CONTROL) & DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD5_INT_CONTROL, tmp);
			tmp = RREG32(DC_HPD6_INT_CONTROL) & DC_HPDx_INT_POLARITY;
			WREG32(DC_HPD6_INT_CONTROL, tmp);
			tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0, tmp);
			tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1, tmp);
		} else {
			tmp = RREG32(HDMI0_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
			WREG32(HDMI0_AUDIO_PACKET_CONTROL, tmp);
			tmp = RREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
			WREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL, tmp);
		}
	} else {
		WREG32(DACA_AUTODETECT_INT_CONTROL, 0);
		WREG32(DACB_AUTODETECT_INT_CONTROL, 0);
		tmp = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY;
		WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, tmp);
		tmp = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY;
		WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, tmp);
		tmp = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY;
		WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, tmp);
		tmp = RREG32(HDMI0_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
		WREG32(HDMI0_AUDIO_PACKET_CONTROL, tmp);
		tmp = RREG32(HDMI1_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
		WREG32(HDMI1_AUDIO_PACKET_CONTROL, tmp);
	}
}

int r600_irq_init(struct radeon_device *rdev)
{
	int ret = 0;
	int rb_bufsz;
	u32 interrupt_cntl, ih_cntl, ih_rb_cntl;

	/* allocate ring */
	ret = r600_ih_ring_alloc(rdev);
	if (ret)
		return ret;

	/* disable irqs */
	r600_disable_interrupts(rdev);

	/* init rlc */
	if (rdev->family >= CHIP_CEDAR)
		ret = evergreen_rlc_resume(rdev);
	else
		ret = r600_rlc_resume(rdev);
	if (ret) {
		r600_ih_ring_fini(rdev);
		return ret;
	}

	/* setup interrupt control */
	/* set dummy read address to ring address */
	WREG32(INTERRUPT_CNTL2, rdev->ih.gpu_addr >> 8);
	interrupt_cntl = RREG32(INTERRUPT_CNTL);
	/* IH_DUMMY_RD_OVERRIDE=0 - dummy read disabled with msi, enabled without msi
	 * IH_DUMMY_RD_OVERRIDE=1 - dummy read controlled by IH_DUMMY_RD_EN
	 */
	interrupt_cntl &= ~IH_DUMMY_RD_OVERRIDE;
	/* IH_REQ_NONSNOOP_EN=1 if ring is in non-cacheable memory, e.g., vram */
	interrupt_cntl &= ~IH_REQ_NONSNOOP_EN;
	WREG32(INTERRUPT_CNTL, interrupt_cntl);

	WREG32(IH_RB_BASE, rdev->ih.gpu_addr >> 8);
	rb_bufsz = order_base_2(rdev->ih.ring_size / 4);

	ih_rb_cntl = (IH_WPTR_OVERFLOW_ENABLE |
		      IH_WPTR_OVERFLOW_CLEAR |
		      (rb_bufsz << 1));

	if (rdev->wb.enabled)
		ih_rb_cntl |= IH_WPTR_WRITEBACK_ENABLE;

	/* set the writeback address whether it's enabled or not */
	WREG32(IH_RB_WPTR_ADDR_LO, (rdev->wb.gpu_addr + R600_WB_IH_WPTR_OFFSET) & 0xFFFFFFFC);
	WREG32(IH_RB_WPTR_ADDR_HI, upper_32_bits(rdev->wb.gpu_addr + R600_WB_IH_WPTR_OFFSET) & 0xFF);

	WREG32(IH_RB_CNTL, ih_rb_cntl);

	/* set rptr, wptr to 0 */
	WREG32(IH_RB_RPTR, 0);
	WREG32(IH_RB_WPTR, 0);

	/* Default settings for IH_CNTL (disabled at first) */
	ih_cntl = MC_WRREQ_CREDIT(0x10) | MC_WR_CLEAN_CNT(0x10);
	/* RPTR_REARM only works if msi's are enabled */
	if (rdev->msi_enabled)
		ih_cntl |= RPTR_REARM;
	WREG32(IH_CNTL, ih_cntl);

	/* force the active interrupt state to all disabled */
	if (rdev->family >= CHIP_CEDAR)
		evergreen_disable_interrupt_state(rdev);
	else
		r600_disable_interrupt_state(rdev);

	/* at this point everything should be setup correctly to enable master */
	pci_set_master(rdev->pdev);

	/* enable irqs */
	r600_enable_interrupts(rdev);

	return ret;
}

void r600_irq_suspend(struct radeon_device *rdev)
{
	r600_irq_disable(rdev);
	r600_rlc_stop(rdev);
}

void r600_irq_fini(struct radeon_device *rdev)
{
	r600_irq_suspend(rdev);
	r600_ih_ring_fini(rdev);
}

int r600_irq_set(struct radeon_device *rdev)
{
	u32 cp_int_cntl = CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE;
	u32 mode_int = 0;
	u32 hpd1, hpd2, hpd3, hpd4 = 0, hpd5 = 0, hpd6 = 0;
	u32 grbm_int_cntl = 0;
	u32 hdmi0, hdmi1;
	u32 dma_cntl;
	u32 thermal_int = 0;

	if (!rdev->irq.installed) {
		WARN(1, "Can't enable IRQ/MSI because no handler is installed\n");
		return -EINVAL;
	}
	/* don't enable anything if the ih is disabled */
	if (!rdev->ih.enabled) {
		r600_disable_interrupts(rdev);
		/* force the active interrupt state to all disabled */
		r600_disable_interrupt_state(rdev);
		return 0;
	}

	if (ASIC_IS_DCE3(rdev)) {
		hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
		if (ASIC_IS_DCE32(rdev)) {
			hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
			hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
			hdmi0 = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0) & ~AFMT_AZ_FORMAT_WTRIG_MASK;
			hdmi1 = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1) & ~AFMT_AZ_FORMAT_WTRIG_MASK;
		} else {
			hdmi0 = RREG32(HDMI0_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
			hdmi1 = RREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
		}
	} else {
		hpd1 = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hpd2 = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hpd3 = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL) & ~DC_HPDx_INT_EN;
		hdmi0 = RREG32(HDMI0_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
		hdmi1 = RREG32(HDMI1_AUDIO_PACKET_CONTROL) & ~HDMI0_AZ_FORMAT_WTRIG_MASK;
	}

	dma_cntl = RREG32(DMA_CNTL) & ~TRAP_ENABLE;

	if ((rdev->family > CHIP_R600) && (rdev->family < CHIP_RV770)) {
		thermal_int = RREG32(CG_THERMAL_INT) &
			~(THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW);
	} else if (rdev->family >= CHIP_RV770) {
		thermal_int = RREG32(RV770_CG_THERMAL_INT) &
			~(THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW);
	}
	if (rdev->irq.dpm_thermal) {
		DRM_DEBUG("dpm thermal\n");
		thermal_int |= THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW;
	}

	if (atomic_read(&rdev->irq.ring_int[RADEON_RING_TYPE_GFX_INDEX])) {
		DRM_DEBUG("r600_irq_set: sw int\n");
		cp_int_cntl |= RB_INT_ENABLE;
		cp_int_cntl |= TIME_STAMP_INT_ENABLE;
	}

	if (atomic_read(&rdev->irq.ring_int[R600_RING_TYPE_DMA_INDEX])) {
		DRM_DEBUG("r600_irq_set: sw int dma\n");
		dma_cntl |= TRAP_ENABLE;
	}

	if (rdev->irq.crtc_vblank_int[0] ||
	    atomic_read(&rdev->irq.pflip[0])) {
		DRM_DEBUG("r600_irq_set: vblank 0\n");
		mode_int |= D1MODE_VBLANK_INT_MASK;
	}
	if (rdev->irq.crtc_vblank_int[1] ||
	    atomic_read(&rdev->irq.pflip[1])) {
		DRM_DEBUG("r600_irq_set: vblank 1\n");
		mode_int |= D2MODE_VBLANK_INT_MASK;
	}
	if (rdev->irq.hpd[0]) {
		DRM_DEBUG("r600_irq_set: hpd 1\n");
		hpd1 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.hpd[1]) {
		DRM_DEBUG("r600_irq_set: hpd 2\n");
		hpd2 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.hpd[2]) {
		DRM_DEBUG("r600_irq_set: hpd 3\n");
		hpd3 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.hpd[3]) {
		DRM_DEBUG("r600_irq_set: hpd 4\n");
		hpd4 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.hpd[4]) {
		DRM_DEBUG("r600_irq_set: hpd 5\n");
		hpd5 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.hpd[5]) {
		DRM_DEBUG("r600_irq_set: hpd 6\n");
		hpd6 |= DC_HPDx_INT_EN;
	}
	if (rdev->irq.afmt[0]) {
		DRM_DEBUG("r600_irq_set: hdmi 0\n");
		hdmi0 |= HDMI0_AZ_FORMAT_WTRIG_MASK;
	}
	if (rdev->irq.afmt[1]) {
		DRM_DEBUG("r600_irq_set: hdmi 1\n");
		hdmi1 |= HDMI0_AZ_FORMAT_WTRIG_MASK;
	}

	WREG32(CP_INT_CNTL, cp_int_cntl);
	WREG32(DMA_CNTL, dma_cntl);
	WREG32(DxMODE_INT_MASK, mode_int);
	WREG32(D1GRPH_INTERRUPT_CONTROL, DxGRPH_PFLIP_INT_MASK);
	WREG32(D2GRPH_INTERRUPT_CONTROL, DxGRPH_PFLIP_INT_MASK);
	WREG32(GRBM_INT_CNTL, grbm_int_cntl);
	if (ASIC_IS_DCE3(rdev)) {
		WREG32(DC_HPD1_INT_CONTROL, hpd1);
		WREG32(DC_HPD2_INT_CONTROL, hpd2);
		WREG32(DC_HPD3_INT_CONTROL, hpd3);
		WREG32(DC_HPD4_INT_CONTROL, hpd4);
		if (ASIC_IS_DCE32(rdev)) {
			WREG32(DC_HPD5_INT_CONTROL, hpd5);
			WREG32(DC_HPD6_INT_CONTROL, hpd6);
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0, hdmi0);
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1, hdmi1);
		} else {
			WREG32(HDMI0_AUDIO_PACKET_CONTROL, hdmi0);
			WREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL, hdmi1);
		}
	} else {
		WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, hpd1);
		WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, hpd2);
		WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, hpd3);
		WREG32(HDMI0_AUDIO_PACKET_CONTROL, hdmi0);
		WREG32(HDMI1_AUDIO_PACKET_CONTROL, hdmi1);
	}
	if ((rdev->family > CHIP_R600) && (rdev->family < CHIP_RV770)) {
		WREG32(CG_THERMAL_INT, thermal_int);
	} else if (rdev->family >= CHIP_RV770) {
		WREG32(RV770_CG_THERMAL_INT, thermal_int);
	}

	return 0;
}

static void r600_irq_ack(struct radeon_device *rdev)
{
	u32 tmp;

	if (ASIC_IS_DCE3(rdev)) {
		rdev->irq.stat_regs.r600.disp_int = RREG32(DCE3_DISP_INTERRUPT_STATUS);
		rdev->irq.stat_regs.r600.disp_int_cont = RREG32(DCE3_DISP_INTERRUPT_STATUS_CONTINUE);
		rdev->irq.stat_regs.r600.disp_int_cont2 = RREG32(DCE3_DISP_INTERRUPT_STATUS_CONTINUE2);
		if (ASIC_IS_DCE32(rdev)) {
			rdev->irq.stat_regs.r600.hdmi0_status = RREG32(AFMT_STATUS + DCE3_HDMI_OFFSET0);
			rdev->irq.stat_regs.r600.hdmi1_status = RREG32(AFMT_STATUS + DCE3_HDMI_OFFSET1);
		} else {
			rdev->irq.stat_regs.r600.hdmi0_status = RREG32(HDMI0_STATUS);
			rdev->irq.stat_regs.r600.hdmi1_status = RREG32(DCE3_HDMI1_STATUS);
		}
	} else {
		rdev->irq.stat_regs.r600.disp_int = RREG32(DISP_INTERRUPT_STATUS);
		rdev->irq.stat_regs.r600.disp_int_cont = RREG32(DISP_INTERRUPT_STATUS_CONTINUE);
		rdev->irq.stat_regs.r600.disp_int_cont2 = 0;
		rdev->irq.stat_regs.r600.hdmi0_status = RREG32(HDMI0_STATUS);
		rdev->irq.stat_regs.r600.hdmi1_status = RREG32(HDMI1_STATUS);
	}
	rdev->irq.stat_regs.r600.d1grph_int = RREG32(D1GRPH_INTERRUPT_STATUS);
	rdev->irq.stat_regs.r600.d2grph_int = RREG32(D2GRPH_INTERRUPT_STATUS);

	if (rdev->irq.stat_regs.r600.d1grph_int & DxGRPH_PFLIP_INT_OCCURRED)
		WREG32(D1GRPH_INTERRUPT_STATUS, DxGRPH_PFLIP_INT_CLEAR);
	if (rdev->irq.stat_regs.r600.d2grph_int & DxGRPH_PFLIP_INT_OCCURRED)
		WREG32(D2GRPH_INTERRUPT_STATUS, DxGRPH_PFLIP_INT_CLEAR);
	if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT)
		WREG32(D1MODE_VBLANK_STATUS, DxMODE_VBLANK_ACK);
	if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT)
		WREG32(D1MODE_VLINE_STATUS, DxMODE_VLINE_ACK);
	if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT)
		WREG32(D2MODE_VBLANK_STATUS, DxMODE_VBLANK_ACK);
	if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT)
		WREG32(D2MODE_VLINE_STATUS, DxMODE_VLINE_ACK);
	if (rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT) {
		if (ASIC_IS_DCE3(rdev)) {
			tmp = RREG32(DC_HPD1_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HPD1_INT_CONTROL, tmp);
		} else {
			tmp = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, tmp);
		}
	}
	if (rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT) {
		if (ASIC_IS_DCE3(rdev)) {
			tmp = RREG32(DC_HPD2_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HPD2_INT_CONTROL, tmp);
		} else {
			tmp = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, tmp);
		}
	}
	if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT) {
		if (ASIC_IS_DCE3(rdev)) {
			tmp = RREG32(DC_HPD3_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HPD3_INT_CONTROL, tmp);
		} else {
			tmp = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, tmp);
		}
	}
	if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT) {
		tmp = RREG32(DC_HPD4_INT_CONTROL);
		tmp |= DC_HPDx_INT_ACK;
		WREG32(DC_HPD4_INT_CONTROL, tmp);
	}
	if (ASIC_IS_DCE32(rdev)) {
		if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT) {
			tmp = RREG32(DC_HPD5_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HPD5_INT_CONTROL, tmp);
		}
		if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) {
			tmp = RREG32(DC_HPD6_INT_CONTROL);
			tmp |= DC_HPDx_INT_ACK;
			WREG32(DC_HPD6_INT_CONTROL, tmp);
		}
		if (rdev->irq.stat_regs.r600.hdmi0_status & AFMT_AZ_FORMAT_WTRIG) {
			tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0);
			tmp |= AFMT_AZ_FORMAT_WTRIG_ACK;
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET0, tmp);
		}
		if (rdev->irq.stat_regs.r600.hdmi1_status & AFMT_AZ_FORMAT_WTRIG) {
			tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1);
			tmp |= AFMT_AZ_FORMAT_WTRIG_ACK;
			WREG32(AFMT_AUDIO_PACKET_CONTROL + DCE3_HDMI_OFFSET1, tmp);
		}
	} else {
		if (rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG) {
			tmp = RREG32(HDMI0_AUDIO_PACKET_CONTROL);
			tmp |= HDMI0_AZ_FORMAT_WTRIG_ACK;
			WREG32(HDMI0_AUDIO_PACKET_CONTROL, tmp);
		}
		if (rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG) {
			if (ASIC_IS_DCE3(rdev)) {
				tmp = RREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL);
				tmp |= HDMI0_AZ_FORMAT_WTRIG_ACK;
				WREG32(DCE3_HDMI1_AUDIO_PACKET_CONTROL, tmp);
			} else {
				tmp = RREG32(HDMI1_AUDIO_PACKET_CONTROL);
				tmp |= HDMI0_AZ_FORMAT_WTRIG_ACK;
				WREG32(HDMI1_AUDIO_PACKET_CONTROL, tmp);
			}
		}
	}
}

void r600_irq_disable(struct radeon_device *rdev)
{
	r600_disable_interrupts(rdev);
	/* Wait and acknowledge irq */
	mdelay(1);
	r600_irq_ack(rdev);
	r600_disable_interrupt_state(rdev);
}

static u32 r600_get_ih_wptr(struct radeon_device *rdev)
{
	u32 wptr, tmp;

	if (rdev->wb.enabled)
		wptr = le32_to_cpu(rdev->wb.wb[R600_WB_IH_WPTR_OFFSET/4]);
	else
		wptr = RREG32(IH_RB_WPTR);

	if (wptr & RB_OVERFLOW) {
		/* When a ring buffer overflow happens, start parsing interrupts
		 * from the last not overwritten vector (wptr + 16). Hopefully
		 * this should allow us to catch up.
		 */
		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n",
			 wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask);
		rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask;
		tmp = RREG32(IH_RB_CNTL);
		tmp |= IH_WPTR_OVERFLOW_CLEAR;
		WREG32(IH_RB_CNTL, tmp);
		wptr &= ~RB_OVERFLOW;
	}
	return (wptr & rdev->ih.ptr_mask);
}

/* r600 IV Ring
 * Each IV ring entry is 128 bits:
 * [7:0]    - interrupt source id
 * [31:8]   - reserved
 * [59:32]  - interrupt source data
 * [127:60] - reserved
 *
 * The basic interrupt vector entries
 * are decoded as follows:
 * src_id  src_data  description
 *      1         0  D1 Vblank
 *      1         1  D1 Vline
 *      5         0  D2 Vblank
 *      5         1  D2 Vline
 *     19         0  FP Hot plug detection A
 *     19         1  FP Hot plug detection B
 *     19         2  DAC A auto-detection
 *     19         3  DAC B auto-detection
 *     21         4  HDMI block A
 *     21         5  HDMI block B
 *    176         -  CP_INT RB
 *    177         -  CP_INT IB1
 *    178         -  CP_INT IB2
 *    181         -  EOP Interrupt
 *    233         -  GUI Idle
 *
 * Note, these are based on r600 and may need to be
 * adjusted or added to on newer asics
 */
int r600_irq_process(struct radeon_device *rdev)
|
|
|
|
{
|
2011-06-18 11:59:51 +08:00
|
|
|
u32 wptr;
|
|
|
|
u32 rptr;
|
2009-12-02 02:43:46 +08:00
|
|
|
u32 src_id, src_data;
|
2010-11-21 23:59:01 +08:00
|
|
|
u32 ring_index;
|
2009-12-05 05:56:37 +08:00
|
|
|
bool queue_hotplug = false;
|
2012-03-30 20:59:57 +08:00
|
|
|
bool queue_hdmi = false;
|
2013-04-13 02:04:10 +08:00
|
|
|
bool queue_thermal = false;
|
2009-12-02 02:43:46 +08:00
|
|
|
|
2011-06-18 11:59:51 +08:00
|
|
|
if (!rdev->ih.enabled || rdev->shutdown)
|
2010-01-15 21:44:38 +08:00
|
|
|
return IRQ_NONE;
|
2009-12-02 02:43:46 +08:00
|
|
|
|
2011-07-13 14:28:22 +08:00
|
|
|
/* No MSIs, need a dummy read to flush PCI DMAs */
|
|
|
|
if (!rdev->msi_enabled)
|
|
|
|
RREG32(IH_RB_WPTR);
|
|
|
|
|
2011-06-18 11:59:51 +08:00
|
|
|
wptr = r600_get_ih_wptr(rdev);
|
2009-12-02 02:43:46 +08:00
|
|
|
|
2012-05-17 03:45:24 +08:00
|
|
|
restart_ih:
|
|
|
|
/* is somebody else already processing irqs? */
|
|
|
|
if (atomic_xchg(&rdev->ih.lock, 1))
|
2009-12-02 02:43:46 +08:00
|
|
|
return IRQ_NONE;
|
|
|
|
|
2012-05-17 03:45:24 +08:00
|
|
|
rptr = rdev->ih.rptr;
|
|
|
|
DRM_DEBUG("r600_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
|
|
|
|
|
2011-07-13 14:28:19 +08:00
|
|
|
/* Order reading of wptr vs. reading of IH ring data */
|
|
|
|
rmb();
|
|
|
|
|
2009-12-02 02:43:46 +08:00
|
|
|
/* display interrupts */
|
2010-11-21 23:59:01 +08:00
|
|
|
r600_irq_ack(rdev);
|
2009-12-02 02:43:46 +08:00
|
|
|
|
|
|
|
while (rptr != wptr) {
|
|
|
|
/* wptr/rptr are in bytes! */
|
|
|
|
ring_index = rptr / 4;
|
2011-02-12 08:45:38 +08:00
|
|
|
src_id = le32_to_cpu(rdev->ih.ring[ring_index]) & 0xff;
|
|
|
|
src_data = le32_to_cpu(rdev->ih.ring[ring_index + 1]) & 0xfffffff;
|
2009-12-02 02:43:46 +08:00
|
|
|
|
|
|
|
switch (src_id) {
|
|
|
|
case 1: /* D1 vblank/vline */
|
|
|
|
switch (src_data) {
|
|
|
|
case 0: /* D1 vblank */
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT) {
|
|
|
|
if (rdev->irq.crtc_vblank_int[0]) {
|
|
|
|
drm_handle_vblank(rdev->ddev, 0);
|
|
|
|
rdev->pm.vblank_sync = true;
|
|
|
|
wake_up(&rdev->irq.vblank_queue);
|
|
|
|
}
|
2012-05-18 01:52:00 +08:00
|
|
|
if (atomic_read(&rdev->irq.pflip[0]))
|
2014-05-27 22:49:21 +08:00
|
|
|
radeon_crtc_handle_vblank(rdev, 0);
|
2010-11-21 23:59:01 +08:00
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
|
2009-12-02 02:43:46 +08:00
|
|
|
DRM_DEBUG("IH: D1 vblank\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 1: /* D1 vline */
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT;
|
2009-12-02 02:43:46 +08:00
|
|
|
DRM_DEBUG("IH: D1 vline\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
2010-01-12 08:47:38 +08:00
|
|
|
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
|
2009-12-02 02:43:46 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 5: /* D2 vblank/vline */
|
|
|
|
switch (src_data) {
|
|
|
|
case 0: /* D2 vblank */
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT) {
|
|
|
|
if (rdev->irq.crtc_vblank_int[1]) {
|
|
|
|
drm_handle_vblank(rdev->ddev, 1);
|
|
|
|
rdev->pm.vblank_sync = true;
|
|
|
|
wake_up(&rdev->irq.vblank_queue);
|
|
|
|
}
|
2012-05-18 01:52:00 +08:00
|
|
|
if (atomic_read(&rdev->irq.pflip[1]))
|
2014-05-27 22:49:21 +08:00
|
|
|
radeon_crtc_handle_vblank(rdev, 1);
|
2010-11-21 23:59:01 +08:00
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT;
|
2009-12-02 02:43:46 +08:00
|
|
|
DRM_DEBUG("IH: D2 vblank\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 1: /* D1 vline */
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT;
|
2009-12-02 02:43:46 +08:00
|
|
|
DRM_DEBUG("IH: D2 vline\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
2010-01-12 08:47:38 +08:00
|
|
|
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
|
2009-12-02 02:43:46 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2014-04-24 02:46:06 +08:00
|
|
|
case 9: /* D1 pflip */
|
|
|
|
DRM_DEBUG("IH: D1 flip\n");
|
|
|
|
radeon_crtc_handle_flip(rdev, 0);
|
|
|
|
break;
|
|
|
|
case 11: /* D2 pflip */
|
|
|
|
DRM_DEBUG("IH: D2 flip\n");
|
|
|
|
radeon_crtc_handle_flip(rdev, 1);
|
|
|
|
break;
|
2009-12-05 04:12:21 +08:00
|
|
|
case 19: /* HPD/DAC hotplug */
|
|
|
|
switch (src_data) {
|
|
|
|
case 0:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD1\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 1:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD2\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 4:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD3\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 5:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD4\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 10:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD5\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 12:
|
2010-11-21 23:59:01 +08:00
|
|
|
if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) {
|
|
|
|
rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT;
|
2009-12-05 05:56:37 +08:00
|
|
|
queue_hotplug = true;
|
|
|
|
DRM_DEBUG("IH: HPD6\n");
|
2009-12-05 04:12:21 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
2010-01-12 08:47:38 +08:00
|
|
|
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
|
2009-12-05 04:12:21 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2012-03-30 20:59:57 +08:00
|
|
|
case 21: /* hdmi */
|
|
|
|
switch (src_data) {
|
|
|
|
case 4:
|
|
|
|
if (rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG) {
|
|
|
|
rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG;
|
|
|
|
queue_hdmi = true;
|
|
|
|
DRM_DEBUG("IH: HDMI0\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 5:
|
|
|
|
if (rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG) {
|
|
|
|
rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG;
|
|
|
|
queue_hdmi = true;
|
|
|
|
DRM_DEBUG("IH: HDMI1\n");
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
|
|
|
|
break;
|
|
|
|
}
|
2010-04-10 09:13:16 +08:00
|
|
|
break;
|
2014-01-31 03:35:04 +08:00
|
|
|
case 124: /* UVD */
|
|
|
|
DRM_DEBUG("IH: UVD int: 0x%08x\n", src_data);
|
|
|
|
radeon_fence_process(rdev, R600_RING_TYPE_UVD_INDEX);
|
|
|
|
break;
|
2009-12-02 02:43:46 +08:00
|
|
|
case 176: /* CP_INT in ring buffer */
|
|
|
|
case 177: /* CP_INT in IB1 */
|
|
|
|
case 178: /* CP_INT in IB2 */
|
|
|
|
DRM_DEBUG("IH: CP int: 0x%08x\n", src_data);
|
2011-08-26 01:39:48 +08:00
|
|
|
radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
|
2009-12-02 02:43:46 +08:00
|
|
|
break;
|
|
|
|
case 181: /* CP EOP event */
|
|
|
|
DRM_DEBUG("IH: CP EOP\n");
|
2011-08-26 01:39:48 +08:00
|
|
|
radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
|
2009-12-02 02:43:46 +08:00
|
|
|
break;
|
2012-09-28 03:08:35 +08:00
|
|
|
case 224: /* DMA trap event */
|
|
|
|
DRM_DEBUG("IH: DMA trap\n");
|
|
|
|
radeon_fence_process(rdev, R600_RING_TYPE_DMA_INDEX);
|
|
|
|
break;
|
2013-04-13 02:04:10 +08:00
|
|
|
case 230: /* thermal low to high */
|
|
|
|
DRM_DEBUG("IH: thermal low to high\n");
|
|
|
|
rdev->pm.dpm.thermal.high_to_low = false;
|
|
|
|
queue_thermal = true;
|
|
|
|
break;
|
|
|
|
case 231: /* thermal high to low */
|
|
|
|
DRM_DEBUG("IH: thermal high to low\n");
|
|
|
|
rdev->pm.dpm.thermal.high_to_low = true;
|
|
|
|
queue_thermal = true;
|
|
|
|
break;
|
2010-04-23 00:52:11 +08:00
|
|
|
case 233: /* GUI IDLE */
|
2011-06-08 02:54:48 +08:00
|
|
|
DRM_DEBUG("IH: GUI idle\n");
|
2010-04-23 00:52:11 +08:00
|
|
|
break;
|
2009-12-02 02:43:46 +08:00
|
|
|
default:
|
2010-01-12 08:47:38 +08:00
|
|
|
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
|
2009-12-02 02:43:46 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
		/* wptr/rptr are in bytes! */
		rptr += 16;
		rptr &= rdev->ih.ptr_mask;
	}
	if (queue_hotplug)
		schedule_work(&rdev->hotplug_work);
	if (queue_hdmi)
		schedule_work(&rdev->audio_work);
	if (queue_thermal && rdev->pm.dpm_enabled)
		schedule_work(&rdev->pm.dpm.thermal.work);
	rdev->ih.rptr = rptr;
	WREG32(IH_RB_RPTR, rdev->ih.rptr);
	atomic_set(&rdev->ih.lock, 0);

	/* make sure wptr hasn't changed while processing */
	wptr = r600_get_ih_wptr(rdev);
	if (wptr != rptr)
		goto restart_ih;

	return IRQ_HANDLED;
}

/*
 * Debugfs info
 */
#if defined(CONFIG_DEBUG_FS)

static int r600_debugfs_mc_info(struct seq_file *m, void *data)
{
	struct drm_info_node *node = (struct drm_info_node *) m->private;
	struct drm_device *dev = node->minor->dev;
	struct radeon_device *rdev = dev->dev_private;

	DREG32_SYS(m, rdev, R_000E50_SRBM_STATUS);
	DREG32_SYS(m, rdev, VM_L2_STATUS);
	return 0;
}

static struct drm_info_list r600_mc_info_list[] = {
	{"r600_mc_info", r600_debugfs_mc_info, 0, NULL},
};
#endif

int r600_debugfs_mc_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
	return radeon_debugfs_add_files(rdev, r600_mc_info_list, ARRAY_SIZE(r600_mc_info_list));
#else
	return 0;
#endif
}

/**
 * r600_ioctl_wait_idle - flush host path cache on wait idle ioctl
 * @rdev: radeon device structure
 * @bo: buffer object struct which userspace is waiting for idle
 *
 * Some R6XX/R7XX ASICs don't seem to take into account an HDP flush
 * performed through the ring buffer, which leads to corruption in
 * rendering; see http://bugzilla.kernel.org/show_bug.cgi?id=15186.
 * To avoid this we directly perform the HDP flush by writing the
 * register through MMIO.
 */
void r600_ioctl_wait_idle(struct radeon_device *rdev, struct radeon_bo *bo)
{
	/* r7xx hw bug.  write to HDP_DEBUG1 followed by fb read
	 * rather than write to HDP_REG_COHERENCY_FLUSH_CNTL.
	 * This seems to cause problems on some AGP cards. Just use the old
	 * method for them.
	 */
	if ((rdev->family >= CHIP_RV770) && (rdev->family <= CHIP_RV740) &&
	    rdev->vram_scratch.ptr && !(rdev->flags & RADEON_IS_AGP)) {
		void __iomem *ptr = (void *)rdev->vram_scratch.ptr;
		u32 tmp;

		WREG32(HDP_DEBUG1, 0);
		tmp = readl((void __iomem *)ptr);
	} else
		WREG32(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0x1);
}

void r600_set_pcie_lanes(struct radeon_device *rdev, int lanes)
{
	u32 link_width_cntl, mask;

	if (rdev->flags & RADEON_IS_IGP)
		return;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return;

	/* x2 cards have a special sequence */
	if (ASIC_IS_X2(rdev))
		return;

	radeon_gui_idle(rdev);

	switch (lanes) {
	case 0:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X0;
		break;
	case 1:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X1;
		break;
	case 2:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X2;
		break;
	case 4:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X4;
		break;
	case 8:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X8;
		break;
	case 12:
		/* not actually supported */
		mask = RADEON_PCIE_LC_LINK_WIDTH_X12;
		break;
	case 16:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X16;
		break;
	default:
		DRM_ERROR("invalid pcie lane request: %d\n", lanes);
		return;
	}

	link_width_cntl = RREG32_PCIE_PORT(RADEON_PCIE_LC_LINK_WIDTH_CNTL);
	link_width_cntl &= ~RADEON_PCIE_LC_LINK_WIDTH_MASK;
	link_width_cntl |= mask << RADEON_PCIE_LC_LINK_WIDTH_SHIFT;
	link_width_cntl |= (RADEON_PCIE_LC_RECONFIG_NOW |
			    R600_PCIE_LC_RECONFIG_ARC_MISSING_ESCAPE);

	WREG32_PCIE_PORT(RADEON_PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
}

int r600_get_pcie_lanes(struct radeon_device *rdev)
{
	u32 link_width_cntl;

	if (rdev->flags & RADEON_IS_IGP)
		return 0;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return 0;

	/* x2 cards have a special sequence */
	if (ASIC_IS_X2(rdev))
		return 0;

	radeon_gui_idle(rdev);

	link_width_cntl = RREG32_PCIE_PORT(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

	switch ((link_width_cntl & RADEON_PCIE_LC_LINK_WIDTH_RD_MASK) >> RADEON_PCIE_LC_LINK_WIDTH_RD_SHIFT) {
	case RADEON_PCIE_LC_LINK_WIDTH_X1:
		return 1;
	case RADEON_PCIE_LC_LINK_WIDTH_X2:
		return 2;
	case RADEON_PCIE_LC_LINK_WIDTH_X4:
		return 4;
	case RADEON_PCIE_LC_LINK_WIDTH_X8:
		return 8;
	case RADEON_PCIE_LC_LINK_WIDTH_X12:
		/* not actually supported */
		return 12;
	case RADEON_PCIE_LC_LINK_WIDTH_X0:
	case RADEON_PCIE_LC_LINK_WIDTH_X16:
	default:
		return 16;
	}
}

static void r600_pcie_gen2_enable(struct radeon_device *rdev)
{
	u32 link_width_cntl, lanes, speed_cntl, training_cntl, tmp;
	u16 link_cntl2;

	if (radeon_pcie_gen2 == 0)
		return;

	if (rdev->flags & RADEON_IS_IGP)
		return;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return;

	/* x2 cards have a special sequence */
	if (ASIC_IS_X2(rdev))
		return;

	/* only RV6xx+ chips are supported */
	if (rdev->family <= CHIP_R600)
		return;

	if ((rdev->pdev->bus->max_bus_speed != PCIE_SPEED_5_0GT) &&
	    (rdev->pdev->bus->max_bus_speed != PCIE_SPEED_8_0GT))
		return;

	speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
	if (speed_cntl & LC_CURRENT_DATA_RATE) {
		DRM_INFO("PCIE gen 2 link speeds already enabled\n");
		return;
	}

	DRM_INFO("enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0\n");

	/* 55 nm r6xx asics */
	if ((rdev->family == CHIP_RV670) ||
	    (rdev->family == CHIP_RV620) ||
	    (rdev->family == CHIP_RV635)) {
		/* advertise upconfig capability */
		link_width_cntl = RREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL);
		link_width_cntl &= ~LC_UPCONFIGURE_DIS;
		WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
		link_width_cntl = RREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL);
		if (link_width_cntl & LC_RENEGOTIATION_SUPPORT) {
			lanes = (link_width_cntl & LC_LINK_WIDTH_RD_MASK) >> LC_LINK_WIDTH_RD_SHIFT;
			link_width_cntl &= ~(LC_LINK_WIDTH_MASK |
					     LC_RECONFIG_ARC_MISSING_ESCAPE);
			link_width_cntl |= lanes | LC_RECONFIG_NOW | LC_RENEGOTIATE_EN;
			WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
		} else {
			link_width_cntl |= LC_UPCONFIGURE_DIS;
			WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
		}
	}

	speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
	if ((speed_cntl & LC_OTHER_SIDE_EVER_SENT_GEN2) &&
	    (speed_cntl & LC_OTHER_SIDE_SUPPORTS_GEN2)) {

		/* 55 nm r6xx asics */
		if ((rdev->family == CHIP_RV670) ||
		    (rdev->family == CHIP_RV620) ||
		    (rdev->family == CHIP_RV635)) {
			WREG32(MM_CFGREGS_CNTL, 0x8);
			link_cntl2 = RREG32(0x4088);
			WREG32(MM_CFGREGS_CNTL, 0);
			/* not supported yet */
			if (link_cntl2 & SELECTABLE_DEEMPHASIS)
				return;
		}

		speed_cntl &= ~LC_SPEED_CHANGE_ATTEMPTS_ALLOWED_MASK;
		speed_cntl |= (0x3 << LC_SPEED_CHANGE_ATTEMPTS_ALLOWED_SHIFT);
		speed_cntl &= ~LC_VOLTAGE_TIMER_SEL_MASK;
		speed_cntl &= ~LC_FORCE_DIS_HW_SPEED_CHANGE;
		speed_cntl |= LC_FORCE_EN_HW_SPEED_CHANGE;
		WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl);

		tmp = RREG32(0x541c);
		WREG32(0x541c, tmp | 0x8);
		WREG32(MM_CFGREGS_CNTL, MM_WR_TO_CFG_EN);
		link_cntl2 = RREG16(0x4088);
		link_cntl2 &= ~TARGET_LINK_SPEED_MASK;
		link_cntl2 |= 0x2;
		WREG16(0x4088, link_cntl2);
		WREG32(MM_CFGREGS_CNTL, 0);

		if ((rdev->family == CHIP_RV670) ||
		    (rdev->family == CHIP_RV620) ||
		    (rdev->family == CHIP_RV635)) {
			training_cntl = RREG32_PCIE_PORT(PCIE_LC_TRAINING_CNTL);
			training_cntl &= ~LC_POINT_7_PLUS_EN;
			WREG32_PCIE_PORT(PCIE_LC_TRAINING_CNTL, training_cntl);
		} else {
			speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
			speed_cntl &= ~LC_TARGET_LINK_SPEED_OVERRIDE_EN;
			WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl);
		}

		speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
		speed_cntl |= LC_GEN2_EN_STRAP;
		WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl);

	} else {
		link_width_cntl = RREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL);
		/* XXX: only disable it if gen1 bridge vendor == 0x111d or 0x1106 */
		if (1)
			link_width_cntl |= LC_UPCONFIGURE_DIS;
		else
			link_width_cntl &= ~LC_UPCONFIGURE_DIS;
		WREG32_PCIE_PORT(PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
	}
}

/**
 * r600_get_gpu_clock_counter - return GPU clock counter snapshot
 *
 * @rdev: radeon_device pointer
 *
 * Fetches a GPU clock counter snapshot (R6xx-cayman).
 * Returns the 64 bit clock counter snapshot.
 */
uint64_t r600_get_gpu_clock_counter(struct radeon_device *rdev)
{
	uint64_t clock;

	mutex_lock(&rdev->gpu_clock_mutex);
	WREG32(RLC_CAPTURE_GPU_CLOCK_COUNT, 1);
	clock = (uint64_t)RREG32(RLC_GPU_CLOCK_COUNT_LSB) |
		((uint64_t)RREG32(RLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
	mutex_unlock(&rdev->gpu_clock_mutex);
	return clock;
}