Merge tag 'drm-intel-next-2024-06-19' of https://gitlab.freedesktop.org/drm/i915/kernel into drm-next

drm/i915 feature pull for v6.11:

Features and functionality:
- Battlemage (BMG) Xe2 HPD display enabling (Balasubramani, Clint, Gustavo,
  José, Matt, Anusha, Lucas, Ravi, Radhakrishna, Nirmoy, Ankit, Matthew)
- Panel Replay enabling (Jouni, Animesh)
- DP AUX-less ALPM (Advanced Link Power Management) and LOBF (Link off between
  frames) enabling (Animesh, Jouni)
- Enable link training failure fallback for DP MST links (Imre)
- CMRR (Content Match Refresh Rate) enabling (Mitul)
- Allow the first async flip to change modifier (Ville)
- Enable eDP AUX based HDR backlight (Suraj)
- Increase ADL-S/ADL-P/DG2+ max TMDS bitrate to 6 Gbps (Ville)

Refactoring and cleanups:
- Stop using implicit dev_priv local variable in macros (Jani)
- Expand and clean up VBT table definitions (Ville)
- PSR/ALPM refactoring (Jouni, Animesh)
- Plane fb refactoring (Ville)
- Rawclk, FSB, and mem frequency refactoring (Jani)
- GVT register macro usage cleanups (Jani, Ville)
- Plane, cursor, wm and ddb register macro and usage cleanups (Ville)
- Pipe CRC register macro cleanups (Ville)
- PCI ID macro cleanups and refactoring to match xe style (Jani)
- Move drm-intel repo to gitlab.freedesktop.org (Ryszard)
- Identify all platforms/subplatforms in display probe (Jani)
- Move Intel drm headers under include/drm/intel (Jani)
- Drop local redundant W=1 warnings in favour of drm subsystem warnings (Jani)
- Include cleanups; include what you use (Jani)
- Convert overlay and DMC error state printing to drm_printer (Jani)
- Joiner renames (Stan)
- DSB interface cleanups (Ville)
- Improve workaround for disabling FBC when VT-d is active (Vinod)
- State checker refactoring and cleanups for color, planes and cdclk (Ville)
- Cleanups around scanline arithmetic (Ville)
- Use drm_crtc_vblank_crtc() instead of open coding (Ville)
- DSC cleanups (Ville)

Fixes:
- Improve VBT array bounds check (Luca)
- LNL PSR fixes (Jouni)
- Audio workaround, disable min hblank fix (Uma)
- Stop selecting ACPI_BUTTON config (Jani)
- Add MTL Cx0 PHY config compare (Mika)
- Fix MTL C20 PHY port clock verification (Mika)
- Fix static analyzer warning for uapi.event access (Luca)
- HDCP fixes and workarounds (Suraj)
- Fix DP MST DSC input BPP computation (Imre)
- Fix assert on pending async-put power domain work (Imre)
- Fix documentation build for DMC wakelocks (Luca)
- Disable DSC on eDP when indicated by VBT (Ville)

DRM Core changes:
- Various DPCD register additions for panel replay and ALPM (Jouni)
- Add target_rr_divider to adaptive sync SDP (Mitul)

Xe driver changes:
- Remove unused xe->enabled_irq_mask and xe->sb_lock members (Jani)
- i915 display compat header cleanups (Jani)
- Remove redundant copy of intel_fbdev_fb.h (Ville)
- Add process name to devcoredump (José)
- Add xe_gt_err_once() (Matthew)
- Implement transient flush for BMG/Xe3 (Nirmoy)

Merges:
- Backmerges to sync with xe, drm-misc and upstream (Rodrigo, Jani)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87y170eu80.fsf@intel.com

View File

@ -150,7 +150,7 @@ High Definition Audio
.. kernel-doc:: drivers/gpu/drm/i915/display/intel_audio.c
:internal:
.. kernel-doc:: include/drm/i915_component.h
.. kernel-doc:: include/drm/intel/i915_component.h
:internal:
Intel HDMI LPE Audio Support
@ -210,9 +210,6 @@ DMC wakelock support
.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dmc_wl.c
:doc: DMC wakelock support
.. kernel-doc:: drivers/gpu/drm/i915/display/intel_dmc_wl.c
:internal:
Video BIOS Table (VBT)
----------------------

View File

@ -11012,6 +11012,7 @@ S: Supported
F: drivers/gpu/drm/i915/display/
F: drivers/gpu/drm/xe/display/
F: drivers/gpu/drm/xe/compat-i915-headers
F: include/drm/intel/
INTEL DRM I915 DRIVER (Meteor Lake, DG2 and older excluding Poulsbo, Moorestown and derivative)
M: Jani Nikula <jani.nikula@linux.intel.com>
@ -11024,12 +11025,12 @@ W: https://drm.pages.freedesktop.org/intel-docs/
Q: http://patchwork.freedesktop.org/project/intel-gfx/
B: https://drm.pages.freedesktop.org/intel-docs/how-to-file-i915-bugs.html
C: irc://irc.oftc.net/intel-gfx
T: git git://anongit.freedesktop.org/drm-intel
T: git https://gitlab.freedesktop.org/drm/i915/kernel.git
F: Documentation/ABI/testing/sysfs-driver-intel-i915-hwmon
F: Documentation/gpu/i915.rst
F: drivers/gpu/drm/ci/xfails/i915*
F: drivers/gpu/drm/i915/
F: include/drm/i915*
F: include/drm/intel/
F: include/uapi/drm/i915_drm.h
INTEL DRM XE DRIVER (Lunar Lake and newer)
@ -11045,7 +11046,7 @@ T: git https://gitlab.freedesktop.org/drm/xe/kernel.git
F: Documentation/ABI/testing/sysfs-driver-intel-xe-hwmon
F: Documentation/gpu/xe/
F: drivers/gpu/drm/xe/
F: include/drm/xe*
F: include/drm/intel/
F: include/uapi/drm/xe_drm.h
INTEL ETHERNET DRIVERS

View File

@ -17,8 +17,8 @@
#include <linux/bcma/bcma.h>
#include <linux/bcma/bcma_regs.h>
#include <linux/platform_data/x86/apple.h>
#include <drm/i915_drm.h>
#include <drm/i915_pciids.h>
#include <drm/intel/i915_drm.h>
#include <drm/intel/i915_pciids.h>
#include <asm/pci-direct.h>
#include <asm/dma.h>
#include <asm/io_apic.h>
@ -518,47 +518,46 @@ static const struct intel_early_ops gen11_early_ops __initconst = {
/* Intel integrated GPUs for which we need to reserve "stolen memory" */
static const struct pci_device_id intel_early_ids[] __initconst = {
INTEL_I830_IDS(&i830_early_ops),
INTEL_I845G_IDS(&i845_early_ops),
INTEL_I85X_IDS(&i85x_early_ops),
INTEL_I865G_IDS(&i865_early_ops),
INTEL_I915G_IDS(&gen3_early_ops),
INTEL_I915GM_IDS(&gen3_early_ops),
INTEL_I945G_IDS(&gen3_early_ops),
INTEL_I945GM_IDS(&gen3_early_ops),
INTEL_VLV_IDS(&gen6_early_ops),
INTEL_PINEVIEW_G_IDS(&gen3_early_ops),
INTEL_PINEVIEW_M_IDS(&gen3_early_ops),
INTEL_I965G_IDS(&gen3_early_ops),
INTEL_G33_IDS(&gen3_early_ops),
INTEL_I965GM_IDS(&gen3_early_ops),
INTEL_GM45_IDS(&gen3_early_ops),
INTEL_G45_IDS(&gen3_early_ops),
INTEL_IRONLAKE_D_IDS(&gen3_early_ops),
INTEL_IRONLAKE_M_IDS(&gen3_early_ops),
INTEL_SNB_D_IDS(&gen6_early_ops),
INTEL_SNB_M_IDS(&gen6_early_ops),
INTEL_IVB_M_IDS(&gen6_early_ops),
INTEL_IVB_D_IDS(&gen6_early_ops),
INTEL_HSW_IDS(&gen6_early_ops),
INTEL_BDW_IDS(&gen8_early_ops),
INTEL_CHV_IDS(&chv_early_ops),
INTEL_SKL_IDS(&gen9_early_ops),
INTEL_BXT_IDS(&gen9_early_ops),
INTEL_KBL_IDS(&gen9_early_ops),
INTEL_CFL_IDS(&gen9_early_ops),
INTEL_GLK_IDS(&gen9_early_ops),
INTEL_CNL_IDS(&gen9_early_ops),
INTEL_ICL_11_IDS(&gen11_early_ops),
INTEL_EHL_IDS(&gen11_early_ops),
INTEL_JSL_IDS(&gen11_early_ops),
INTEL_TGL_12_IDS(&gen11_early_ops),
INTEL_RKL_IDS(&gen11_early_ops),
INTEL_ADLS_IDS(&gen11_early_ops),
INTEL_ADLP_IDS(&gen11_early_ops),
INTEL_ADLN_IDS(&gen11_early_ops),
INTEL_RPLS_IDS(&gen11_early_ops),
INTEL_RPLP_IDS(&gen11_early_ops),
INTEL_I830_IDS(INTEL_VGA_DEVICE, &i830_early_ops),
INTEL_I845G_IDS(INTEL_VGA_DEVICE, &i845_early_ops),
INTEL_I85X_IDS(INTEL_VGA_DEVICE, &i85x_early_ops),
INTEL_I865G_IDS(INTEL_VGA_DEVICE, &i865_early_ops),
INTEL_I915G_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_I915GM_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_I945G_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_I945GM_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_VLV_IDS(INTEL_VGA_DEVICE, &gen6_early_ops),
INTEL_PNV_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_I965G_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_G33_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_I965GM_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_GM45_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_G45_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_ILK_IDS(INTEL_VGA_DEVICE, &gen3_early_ops),
INTEL_SNB_IDS(INTEL_VGA_DEVICE, &gen6_early_ops),
INTEL_IVB_IDS(INTEL_VGA_DEVICE, &gen6_early_ops),
INTEL_HSW_IDS(INTEL_VGA_DEVICE, &gen6_early_ops),
INTEL_BDW_IDS(INTEL_VGA_DEVICE, &gen8_early_ops),
INTEL_CHV_IDS(INTEL_VGA_DEVICE, &chv_early_ops),
INTEL_SKL_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_BXT_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_KBL_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_CFL_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_WHL_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_CML_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_GLK_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_CNL_IDS(INTEL_VGA_DEVICE, &gen9_early_ops),
INTEL_ICL_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_EHL_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_JSL_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_TGL_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_RKL_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_ADLS_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_ADLP_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_ADLN_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_RPLS_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_RPLU_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
INTEL_RPLP_IDS(INTEL_VGA_DEVICE, &gen11_early_ops),
};
struct resource intel_graphics_stolen_res __ro_after_init = DEFINE_RES_MEM(0, 0);
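The early-quirks table above is a good illustration of the "PCI ID macro cleanups and refactoring to match xe style" item: each INTEL_*_IDS() list now takes the per-entry wrapper macro as an explicit first argument instead of hard-coding INTEL_VGA_DEVICE internally. A minimal sketch of the pattern, with simplified definitions that are illustrative rather than copied from i915_pciids.h:

	/* Illustrative sketch only -- not the verbatim kernel headers. */
	#define INTEL_I830_IDS(MACRO__, ...) \
		MACRO__(0x3577, ## __VA_ARGS__)

	#define INTEL_VGA_DEVICE(id, info) {			\
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, (id)),		\
		.class = PCI_BASE_CLASS_DISPLAY << 16,		\
		.class_mask = 0xff << 16,			\
		.driver_data = (kernel_ulong_t)(info),		\
	}

	/*
	 * INTEL_I830_IDS(INTEL_VGA_DEVICE, &i830_early_ops) then expands to a
	 * struct pci_device_id initializer carrying &i830_early_ops in
	 * driver_data, and other callers can pass a different per-entry macro
	 * (or extra arguments) without i915_pciids.h having to know about it.
	 */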

View File

@ -12,7 +12,7 @@
#include <asm/smp.h>
#include "agp.h"
#include "intel-agp.h"
#include <drm/intel-gtt.h>
#include <drm/intel/intel-gtt.h>
static int intel_fetch_size(void)
{

View File

@ -25,7 +25,7 @@
#include <asm/smp.h>
#include "agp.h"
#include "intel-agp.h"
#include <drm/intel-gtt.h>
#include <drm/intel/intel-gtt.h>
#include <asm/set_memory.h>
/*

View File

@ -29,7 +29,6 @@ config DRM_I915
select X86_PLATFORM_DEVICES if ACPI
select ACPI_WMI if ACPI
select ACPI_VIDEO if ACPI
select ACPI_BUTTON if ACPI
select SYNC_FILE
select IOSF_MBI if X86
select CRC32

View File

@ -3,31 +3,8 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
# Unconditionally enable W=1 warnings locally
# --- begin copy-paste W=1 warnings from scripts/Makefile.extrawarn
subdir-ccflags-y += -Wextra -Wunused -Wno-unused-parameter
subdir-ccflags-y += -Wmissing-declarations
subdir-ccflags-y += $(call cc-option, -Wrestrict)
subdir-ccflags-y += -Wmissing-format-attribute
subdir-ccflags-y += -Wmissing-prototypes
subdir-ccflags-y += -Wold-style-definition
subdir-ccflags-y += -Wmissing-include-dirs
subdir-ccflags-y += $(call cc-option, -Wunused-but-set-variable)
subdir-ccflags-y += $(call cc-option, -Wunused-const-variable)
subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned)
subdir-ccflags-y += $(call cc-option, -Wformat-overflow)
# Enable W=1 warnings not enabled in drm subsystem Makefile
subdir-ccflags-y += $(call cc-option, -Wformat-truncation)
subdir-ccflags-y += $(call cc-option, -Wstringop-truncation)
# The following turn off the warnings enabled by -Wextra
ifeq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-missing-field-initializers
subdir-ccflags-y += -Wno-type-limits
subdir-ccflags-y += -Wno-shift-negative-value
endif
ifeq ($(findstring 3, $(KBUILD_EXTRA_WARN)),)
subdir-ccflags-y += -Wno-sign-compare
endif
# --- end copy-paste
# Enable -Werror in CI and development
subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror
@ -243,6 +220,7 @@ i915-y += \
display/hsw_ips.o \
display/i9xx_plane.o \
display/i9xx_wm.o \
display/intel_alpm.o \
display/intel_atomic.o \
display/intel_atomic_plane.o \
display/intel_audio.o \
@ -351,6 +329,7 @@ i915-y += \
display/intel_dsi_dcs_backlight.o \
display/intel_dsi_vbt.o \
display/intel_dvo.o \
display/intel_encoder.o \
display/intel_gmbus.o \
display/intel_hdmi.o \
display/intel_lspcon.o \

View File

@ -27,7 +27,6 @@
*/
#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_display_types.h"
#include "intel_dvo_dev.h"

View File

@ -20,6 +20,7 @@
#include "intel_dp_aux.h"
#include "intel_dp_link_training.h"
#include "intel_dpio_phy.h"
#include "intel_encoder.h"
#include "intel_fifo_underrun.h"
#include "intel_hdmi.h"
#include "intel_hotplug.h"
@ -706,7 +707,7 @@ static void intel_enable_dp(struct intel_atomic_state *state,
intel_dp_configure_protocol_converter(intel_dp, pipe_config);
intel_dp_check_frl_training(intel_dp);
intel_dp_pcon_dsc_configure(intel_dp, pipe_config);
intel_dp_start_link_train(intel_dp, pipe_config);
intel_dp_start_link_train(state, intel_dp, pipe_config);
intel_dp_stop_link_train(intel_dp, pipe_config);
}
@ -1159,9 +1160,7 @@ intel_dp_hotplug(struct intel_encoder *encoder,
struct intel_connector *connector)
{
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct drm_modeset_acquire_ctx ctx;
enum intel_hotplug_state state;
int ret;
if (intel_dp->compliance.test_active &&
intel_dp->compliance.test_type == DP_TEST_LINK_PHY_TEST_PATTERN) {
@ -1172,23 +1171,7 @@ intel_dp_hotplug(struct intel_encoder *encoder,
state = intel_encoder_hotplug(encoder, connector);
drm_modeset_acquire_init(&ctx, 0);
for (;;) {
ret = intel_dp_retrain_link(encoder, &ctx);
if (ret == -EDEADLK) {
drm_modeset_backoff(&ctx);
continue;
}
break;
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
drm_WARN(encoder->base.dev, ret,
"Acquiring modeset locks failed with %i\n", ret);
intel_dp_check_link_state(intel_dp);
/*
* Keeping it consistent with intel_ddi_hotplug() and
@ -1228,7 +1211,7 @@ static bool g4x_digital_port_connected(struct intel_encoder *encoder)
return false;
}
return intel_de_read(dev_priv, PORT_HOTPLUG_STAT) & bit;
return intel_de_read(dev_priv, PORT_HOTPLUG_STAT(dev_priv)) & bit;
}
static bool ilk_digital_port_connected(struct intel_encoder *encoder)
@ -1239,6 +1222,15 @@ static bool ilk_digital_port_connected(struct intel_encoder *encoder)
return intel_de_read(dev_priv, DEISR) & bit;
}
static void g4x_dp_suspend_complete(struct intel_encoder *encoder)
{
/*
* TODO: Move this to intel_dp_encoder_suspend(),
* once modeset locking around that is removed.
*/
intel_encoder_link_check_flush_work(encoder);
}
static void intel_dp_encoder_destroy(struct drm_encoder *encoder)
{
intel_dp_encoder_flush_work(encoder);
@ -1325,6 +1317,8 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
"DP %c", port_name(port)))
goto err_encoder_init;
intel_encoder_link_check_init(intel_encoder, intel_dp_link_check);
intel_encoder->hotplug = intel_dp_hotplug;
intel_encoder->compute_config = intel_dp_compute_config;
intel_encoder->get_hw_state = intel_dp_get_hw_state;
@ -1333,6 +1327,7 @@ bool g4x_dp_init(struct drm_i915_private *dev_priv,
intel_encoder->initial_fastset_check = intel_dp_initial_fastset_check;
intel_encoder->update_pipe = intel_backlight_update;
intel_encoder->suspend = intel_dp_encoder_suspend;
intel_encoder->suspend_complete = g4x_dp_suspend_complete;
intel_encoder->shutdown = intel_dp_encoder_shutdown;
if (IS_CHERRYVIEW(dev_priv)) {
intel_encoder->pre_pll_enable = chv_dp_pre_pll_enable;

View File

@ -10,6 +10,7 @@
#include "i915_reg.h"
#include "i9xx_plane.h"
#include "i9xx_plane_regs.h"
#include "intel_atomic.h"
#include "intel_atomic_plane.h"
#include "intel_de.h"
@ -266,7 +267,7 @@ int i9xx_check_plane_surface(struct intel_plane_state *plane_state)
* despite them not using the linear offset anymore.
*/
if (DISPLAY_VER(dev_priv) >= 4 && fb->modifier == I915_FORMAT_MOD_X_TILED) {
u32 alignment = intel_surf_alignment(fb, 0);
unsigned int alignment = intel_surf_alignment(fb, 0);
int cpp = fb->format->cpp[0];
while ((src_x + src_w) * cpp > plane_state->view.color_plane[0].mapping_stride) {
@ -422,7 +423,7 @@ static void i9xx_plane_update_noarm(struct intel_plane *plane,
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
intel_de_write_fw(dev_priv, DSPSTRIDE(i9xx_plane),
intel_de_write_fw(dev_priv, DSPSTRIDE(dev_priv, i9xx_plane),
plane_state->view.color_plane[0].mapping_stride);
if (DISPLAY_VER(dev_priv) < 4) {
@ -436,9 +437,9 @@ static void i9xx_plane_update_noarm(struct intel_plane *plane,
* generator but let's assume we still need to
* program whatever is there.
*/
intel_de_write_fw(dev_priv, DSPPOS(i9xx_plane),
intel_de_write_fw(dev_priv, DSPPOS(dev_priv, i9xx_plane),
DISP_POS_Y(crtc_y) | DISP_POS_X(crtc_x));
intel_de_write_fw(dev_priv, DSPSIZE(i9xx_plane),
intel_de_write_fw(dev_priv, DSPSIZE(dev_priv, i9xx_plane),
DISP_HEIGHT(crtc_h - 1) | DISP_WIDTH(crtc_w - 1));
}
}
@ -455,6 +456,11 @@ static void i9xx_plane_update_arm(struct intel_plane *plane,
dspcntr = plane_state->ctl | i9xx_plane_ctl_crtc(crtc_state);
/* see intel_plane_atomic_calc_changes() */
if (plane->need_async_flip_toggle_wa &&
crtc_state->async_flip_planes & BIT(plane->id))
dspcntr |= DISP_ASYNC_FLIP;
linear_offset = intel_fb_xy_to_linear(x, y, plane_state, 0);
if (DISPLAY_VER(dev_priv) >= 4)
@ -468,20 +474,21 @@ static void i9xx_plane_update_arm(struct intel_plane *plane,
int crtc_w = drm_rect_width(&plane_state->uapi.dst);
int crtc_h = drm_rect_height(&plane_state->uapi.dst);
intel_de_write_fw(dev_priv, PRIMPOS(i9xx_plane),
intel_de_write_fw(dev_priv, PRIMPOS(dev_priv, i9xx_plane),
PRIM_POS_Y(crtc_y) | PRIM_POS_X(crtc_x));
intel_de_write_fw(dev_priv, PRIMSIZE(i9xx_plane),
intel_de_write_fw(dev_priv, PRIMSIZE(dev_priv, i9xx_plane),
PRIM_HEIGHT(crtc_h - 1) | PRIM_WIDTH(crtc_w - 1));
intel_de_write_fw(dev_priv, PRIMCNSTALPHA(i9xx_plane), 0);
intel_de_write_fw(dev_priv,
PRIMCNSTALPHA(dev_priv, i9xx_plane), 0);
}
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
intel_de_write_fw(dev_priv, DSPOFFSET(i9xx_plane),
intel_de_write_fw(dev_priv, DSPOFFSET(dev_priv, i9xx_plane),
DISP_OFFSET_Y(y) | DISP_OFFSET_X(x));
} else if (DISPLAY_VER(dev_priv) >= 4) {
intel_de_write_fw(dev_priv, DSPLINOFF(i9xx_plane),
intel_de_write_fw(dev_priv, DSPLINOFF(dev_priv, i9xx_plane),
linear_offset);
intel_de_write_fw(dev_priv, DSPTILEOFF(i9xx_plane),
intel_de_write_fw(dev_priv, DSPTILEOFF(dev_priv, i9xx_plane),
DISP_OFFSET_Y(y) | DISP_OFFSET_X(x));
}
@ -490,13 +497,13 @@ static void i9xx_plane_update_arm(struct intel_plane *plane,
* disabled. Try to make the plane enable atomic by writing
* the control register just before the surface register.
*/
intel_de_write_fw(dev_priv, DSPCNTR(i9xx_plane), dspcntr);
intel_de_write_fw(dev_priv, DSPCNTR(dev_priv, i9xx_plane), dspcntr);
if (DISPLAY_VER(dev_priv) >= 4)
intel_de_write_fw(dev_priv, DSPSURF(i9xx_plane),
intel_de_write_fw(dev_priv, DSPSURF(dev_priv, i9xx_plane),
intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
else
intel_de_write_fw(dev_priv, DSPADDR(i9xx_plane),
intel_de_write_fw(dev_priv, DSPADDR(dev_priv, i9xx_plane),
intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
}
@ -533,12 +540,12 @@ static void i9xx_plane_disable_arm(struct intel_plane *plane,
*/
dspcntr = i9xx_plane_ctl_crtc(crtc_state);
intel_de_write_fw(dev_priv, DSPCNTR(i9xx_plane), dspcntr);
intel_de_write_fw(dev_priv, DSPCNTR(dev_priv, i9xx_plane), dspcntr);
if (DISPLAY_VER(dev_priv) >= 4)
intel_de_write_fw(dev_priv, DSPSURF(i9xx_plane), 0);
intel_de_write_fw(dev_priv, DSPSURF(dev_priv, i9xx_plane), 0);
else
intel_de_write_fw(dev_priv, DSPADDR(i9xx_plane), 0);
intel_de_write_fw(dev_priv, DSPADDR(dev_priv, i9xx_plane), 0);
}
static void
@ -555,9 +562,9 @@ g4x_primary_async_flip(struct intel_plane *plane,
if (async_flip)
dspcntr |= DISP_ASYNC_FLIP;
intel_de_write_fw(dev_priv, DSPCNTR(i9xx_plane), dspcntr);
intel_de_write_fw(dev_priv, DSPCNTR(dev_priv, i9xx_plane), dspcntr);
intel_de_write_fw(dev_priv, DSPSURF(i9xx_plane),
intel_de_write_fw(dev_priv, DSPSURF(dev_priv, i9xx_plane),
intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
}
@ -571,7 +578,7 @@ vlv_primary_async_flip(struct intel_plane *plane,
u32 dspaddr_offset = plane_state->view.color_plane[0].offset;
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
intel_de_write_fw(dev_priv, DSPADDR_VLV(i9xx_plane),
intel_de_write_fw(dev_priv, DSPADDR_VLV(dev_priv, i9xx_plane),
intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
}
@ -679,7 +686,7 @@ static bool i9xx_plane_get_hw_state(struct intel_plane *plane,
if (!wakeref)
return false;
val = intel_de_read(dev_priv, DSPCNTR(i9xx_plane));
val = intel_de_read(dev_priv, DSPCNTR(dev_priv, i9xx_plane));
ret = val & DISP_ENABLE;
@ -736,23 +743,25 @@ i965_plane_max_stride(struct intel_plane *plane,
}
static unsigned int
i9xx_plane_max_stride(struct intel_plane *plane,
i915_plane_max_stride(struct intel_plane *plane,
u32 pixel_format, u64 modifier,
unsigned int rotation)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
if (modifier == I915_FORMAT_MOD_X_TILED)
return 8 * 1024;
else
return 16 * 1024;
}
if (DISPLAY_VER(dev_priv) >= 3) {
if (modifier == I915_FORMAT_MOD_X_TILED)
return 8*1024;
else
return 16*1024;
} else {
if (plane->i9xx_plane == PLANE_C)
return 4*1024;
else
return 8*1024;
}
static unsigned int
i8xx_plane_max_stride(struct intel_plane *plane,
u32 pixel_format, u64 modifier,
unsigned int rotation)
{
if (plane->i9xx_plane == PLANE_C)
return 4 * 1024;
else
return 8 * 1024;
}
static const struct drm_plane_funcs i965_plane_funcs = {
@ -849,8 +858,10 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
if (HAS_GMCH(dev_priv)) {
if (DISPLAY_VER(dev_priv) >= 4)
plane->max_stride = i965_plane_max_stride;
else if (DISPLAY_VER(dev_priv) == 3)
plane->max_stride = i915_plane_max_stride;
else
plane->max_stride = i9xx_plane_max_stride;
plane->max_stride = i8xx_plane_max_stride;
} else {
if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv))
plane->max_stride = hsw_primary_max_stride;
@ -873,7 +884,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe)
plane->enable_flip_done = vlv_primary_enable_flip_done;
plane->disable_flip_done = vlv_primary_disable_flip_done;
} else if (IS_BROADWELL(dev_priv)) {
plane->need_async_flip_disable_wa = true;
plane->need_async_flip_toggle_wa = true;
plane->async_flip = g4x_primary_async_flip;
plane->enable_flip_done = bdw_primary_enable_flip_done;
plane->disable_flip_done = bdw_primary_disable_flip_done;
@ -1002,7 +1013,7 @@ i9xx_get_initial_plane_config(struct intel_crtc *crtc,
fb->dev = dev;
val = intel_de_read(dev_priv, DSPCNTR(i9xx_plane));
val = intel_de_read(dev_priv, DSPCNTR(dev_priv, i9xx_plane));
if (DISPLAY_VER(dev_priv) >= 4) {
if (val & DISP_TILED) {
@ -1023,29 +1034,30 @@ i9xx_get_initial_plane_config(struct intel_crtc *crtc,
fb->format = drm_format_info(fourcc);
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
offset = intel_de_read(dev_priv, DSPOFFSET(i9xx_plane));
base = intel_de_read(dev_priv, DSPSURF(i9xx_plane)) & DISP_ADDR_MASK;
offset = intel_de_read(dev_priv,
DSPOFFSET(dev_priv, i9xx_plane));
base = intel_de_read(dev_priv, DSPSURF(dev_priv, i9xx_plane)) & DISP_ADDR_MASK;
} else if (DISPLAY_VER(dev_priv) >= 4) {
if (plane_config->tiling)
offset = intel_de_read(dev_priv,
DSPTILEOFF(i9xx_plane));
DSPTILEOFF(dev_priv, i9xx_plane));
else
offset = intel_de_read(dev_priv,
DSPLINOFF(i9xx_plane));
base = intel_de_read(dev_priv, DSPSURF(i9xx_plane)) & DISP_ADDR_MASK;
DSPLINOFF(dev_priv, i9xx_plane));
base = intel_de_read(dev_priv, DSPSURF(dev_priv, i9xx_plane)) & DISP_ADDR_MASK;
} else {
offset = 0;
base = intel_de_read(dev_priv, DSPADDR(i9xx_plane));
base = intel_de_read(dev_priv, DSPADDR(dev_priv, i9xx_plane));
}
plane_config->base = base;
drm_WARN_ON(&dev_priv->drm, offset != 0);
val = intel_de_read(dev_priv, PIPESRC(pipe));
val = intel_de_read(dev_priv, PIPESRC(dev_priv, pipe));
fb->width = REG_FIELD_GET(PIPESRC_WIDTH_MASK, val) + 1;
fb->height = REG_FIELD_GET(PIPESRC_HEIGHT_MASK, val) + 1;
val = intel_de_read(dev_priv, DSPSTRIDE(i9xx_plane));
val = intel_de_read(dev_priv, DSPSTRIDE(dev_priv, i9xx_plane));
fb->pitches[0] = val & 0xffffffc0;
aligned_height = intel_fb_align_height(fb, 0, fb->height);
@ -1084,9 +1096,9 @@ bool i9xx_fixup_initial_plane_config(struct intel_crtc *crtc,
return false;
if (DISPLAY_VER(dev_priv) >= 4)
intel_de_write(dev_priv, DSPSURF(i9xx_plane), base);
intel_de_write(dev_priv, DSPSURF(dev_priv, i9xx_plane), base);
else
intel_de_write(dev_priv, DSPADDR(i9xx_plane), base);
intel_de_write(dev_priv, DSPADDR(dev_priv, i9xx_plane), base);
return true;
}

View File

@ -0,0 +1,112 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef __I9XX_PLANE_REGS_H__
#define __I9XX_PLANE_REGS_H__
#include "intel_display_reg_defs.h"
#define _DSPAADDR_VLV 0x7017C /* vlv/chv */
#define DSPADDR_VLV(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPAADDR_VLV)
#define _DSPACNTR 0x70180
#define DSPCNTR(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPACNTR)
#define DISP_ENABLE REG_BIT(31)
#define DISP_PIPE_GAMMA_ENABLE REG_BIT(30)
#define DISP_FORMAT_MASK REG_GENMASK(29, 26)
#define DISP_FORMAT_8BPP REG_FIELD_PREP(DISP_FORMAT_MASK, 2)
#define DISP_FORMAT_BGRA555 REG_FIELD_PREP(DISP_FORMAT_MASK, 3)
#define DISP_FORMAT_BGRX555 REG_FIELD_PREP(DISP_FORMAT_MASK, 4)
#define DISP_FORMAT_BGRX565 REG_FIELD_PREP(DISP_FORMAT_MASK, 5)
#define DISP_FORMAT_BGRX888 REG_FIELD_PREP(DISP_FORMAT_MASK, 6)
#define DISP_FORMAT_BGRA888 REG_FIELD_PREP(DISP_FORMAT_MASK, 7)
#define DISP_FORMAT_RGBX101010 REG_FIELD_PREP(DISP_FORMAT_MASK, 8)
#define DISP_FORMAT_RGBA101010 REG_FIELD_PREP(DISP_FORMAT_MASK, 9)
#define DISP_FORMAT_BGRX101010 REG_FIELD_PREP(DISP_FORMAT_MASK, 10)
#define DISP_FORMAT_BGRA101010 REG_FIELD_PREP(DISP_FORMAT_MASK, 11)
#define DISP_FORMAT_RGBX161616 REG_FIELD_PREP(DISP_FORMAT_MASK, 12)
#define DISP_FORMAT_RGBX888 REG_FIELD_PREP(DISP_FORMAT_MASK, 14)
#define DISP_FORMAT_RGBA888 REG_FIELD_PREP(DISP_FORMAT_MASK, 15)
#define DISP_STEREO_ENABLE REG_BIT(25)
#define DISP_PIPE_CSC_ENABLE REG_BIT(24) /* ilk+ */
#define DISP_PIPE_SEL_MASK REG_GENMASK(25, 24)
#define DISP_PIPE_SEL(pipe) REG_FIELD_PREP(DISP_PIPE_SEL_MASK, (pipe))
#define DISP_SRC_KEY_ENABLE REG_BIT(22)
#define DISP_LINE_DOUBLE REG_BIT(20)
#define DISP_STEREO_POLARITY_SECOND REG_BIT(18)
#define DISP_ALPHA_PREMULTIPLY REG_BIT(16) /* CHV pipe B */
#define DISP_ROTATE_180 REG_BIT(15) /* i965+ */
#define DISP_ALPHA_TRANS_ENABLE REG_BIT(15) /* pre-g4x plane B */
#define DISP_TRICKLE_FEED_DISABLE REG_BIT(14) /* g4x+ */
#define DISP_TILED REG_BIT(10) /* i965+ */
#define DISP_ASYNC_FLIP REG_BIT(9) /* g4x+ */
#define DISP_MIRROR REG_BIT(8) /* CHV pipe B */
#define DISP_SPRITE_ABOVE_OVERLAY REG_BIT(0) /* pre-g4x plane B/C */
#define _DSPAADDR 0x70184 /* pre-i965 */
#define DSPADDR(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPAADDR)
#define _DSPALINOFF 0x70184 /* i965+ */
#define DSPLINOFF(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPALINOFF)
#define _DSPASTRIDE 0x70188
#define DSPSTRIDE(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPASTRIDE)
#define _DSPAPOS 0x7018C /* pre-g4x */
#define DSPPOS(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPAPOS)
#define DISP_POS_Y_MASK REG_GENMASK(31, 16)
#define DISP_POS_Y(y) REG_FIELD_PREP(DISP_POS_Y_MASK, (y))
#define DISP_POS_X_MASK REG_GENMASK(15, 0)
#define DISP_POS_X(x) REG_FIELD_PREP(DISP_POS_X_MASK, (x))
#define _DSPASIZE 0x70190 /* pre-g4x */
#define DSPSIZE(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPASIZE)
#define DISP_HEIGHT_MASK REG_GENMASK(31, 16)
#define DISP_HEIGHT(h) REG_FIELD_PREP(DISP_HEIGHT_MASK, (h))
#define DISP_WIDTH_MASK REG_GENMASK(15, 0)
#define DISP_WIDTH(w) REG_FIELD_PREP(DISP_WIDTH_MASK, (w))
#define _DSPASURF 0x7019C /* i965+ */
#define DSPSURF(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPASURF)
#define DISP_ADDR_MASK REG_GENMASK(31, 12)
#define _DSPATILEOFF 0x701A4 /* i965+ */
#define DSPTILEOFF(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPATILEOFF)
#define DISP_OFFSET_Y_MASK REG_GENMASK(31, 16)
#define DISP_OFFSET_Y(y) REG_FIELD_PREP(DISP_OFFSET_Y_MASK, (y))
#define DISP_OFFSET_X_MASK REG_GENMASK(15, 0)
#define DISP_OFFSET_X(x) REG_FIELD_PREP(DISP_OFFSET_X_MASK, (x))
#define _DSPAOFFSET 0x701A4 /* hsw+ */
#define DSPOFFSET(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPAOFFSET)
#define _DSPASURFLIVE 0x701AC /* g4x+ */
#define DSPSURFLIVE(dev_priv, plane) _MMIO_PIPE2(dev_priv, plane, _DSPASURFLIVE)
#define _DSPAGAMC 0x701E0 /* pre-g4x */
#define DSPGAMC(dev_priv, plane, i) _MMIO_PIPE2(dev_priv, plane, _DSPAGAMC + (5 - (i)) * 4) /* plane C only, 6 x u0.8 */
/* CHV pipe B primary plane */
#define _PRIMPOS_A 0x60a08
#define PRIMPOS(dev_priv, plane) _MMIO_TRANS2(dev_priv, plane, _PRIMPOS_A)
#define PRIM_POS_Y_MASK REG_GENMASK(31, 16)
#define PRIM_POS_Y(y) REG_FIELD_PREP(PRIM_POS_Y_MASK, (y))
#define PRIM_POS_X_MASK REG_GENMASK(15, 0)
#define PRIM_POS_X(x) REG_FIELD_PREP(PRIM_POS_X_MASK, (x))
#define _PRIMSIZE_A 0x60a0c
#define PRIMSIZE(dev_priv, plane) _MMIO_TRANS2(dev_priv, plane, _PRIMSIZE_A)
#define PRIM_HEIGHT_MASK REG_GENMASK(31, 16)
#define PRIM_HEIGHT(h) REG_FIELD_PREP(PRIM_HEIGHT_MASK, (h))
#define PRIM_WIDTH_MASK REG_GENMASK(15, 0)
#define PRIM_WIDTH(w) REG_FIELD_PREP(PRIM_WIDTH_MASK, (w))
#define _PRIMCNSTALPHA_A 0x60a10
#define PRIMCNSTALPHA(dev_priv, plane) _MMIO_TRANS2(dev_priv, plane, _PRIMCNSTALPHA_A)
#define PRIM_CONST_ALPHA_ENABLE REG_BIT(31)
#define PRIM_CONST_ALPHA_MASK REG_GENMASK(7, 0)
#define PRIM_CONST_ALPHA(alpha) REG_FIELD_PREP(PRIM_CONST_ALPHA_MASK, (alpha))
#endif /* __I9XX_PLANE_REGS_H__ */
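All of these per-plane register macros now take dev_priv as an explicit first argument, matching the "Stop using implicit dev_priv local variable in macros" cleanup: _MMIO_PIPE2() needs the device to look up per-pipe MMIO offsets, and the old variant quietly referenced a local variable literally named dev_priv at every call site. A rough sketch of the idea (simplified; the real definition lives in intel_display_reg_defs.h):

	/* Illustrative sketch only -- not the verbatim macro. */
	#define _MMIO_PIPE2(dev_priv, pipe, reg)				\
		_MMIO(DISPLAY_INFO(dev_priv)->pipe_offsets[(pipe)] -		\
		      DISPLAY_INFO(dev_priv)->pipe_offsets[PIPE_A] +		\
		      DISPLAY_MMIO_BASE(dev_priv) + (reg))

	/*
	 * With the device passed in, DSPCNTR(dev_priv, i9xx_plane) and friends
	 * work from any context, and the dependency on the device pointer is
	 * visible (and greppable) at the call site instead of being hidden
	 * inside the macro.
	 */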

View File

@ -70,25 +70,24 @@ static const struct cxsr_latency cxsr_latency_table[] = {
{0, 1, 400, 800, 6042, 36042, 6584, 36584}, /* DDR3-800 SC */
};
static const struct cxsr_latency *intel_get_cxsr_latency(struct drm_i915_private *i915)
static const struct cxsr_latency *pnv_get_cxsr_latency(struct drm_i915_private *i915)
{
int i;
if (i915->fsb_freq == 0 || i915->mem_freq == 0)
return NULL;
for (i = 0; i < ARRAY_SIZE(cxsr_latency_table); i++) {
const struct cxsr_latency *latency = &cxsr_latency_table[i];
bool is_desktop = !IS_MOBILE(i915);
if (is_desktop == latency->is_desktop &&
i915->is_ddr3 == latency->is_ddr3 &&
i915->fsb_freq == latency->fsb_freq &&
i915->mem_freq == latency->mem_freq)
DIV_ROUND_CLOSEST(i915->fsb_freq, 1000) == latency->fsb_freq &&
DIV_ROUND_CLOSEST(i915->mem_freq, 1000) == latency->mem_freq)
return latency;
}
drm_dbg_kms(&i915->drm, "Unknown FSB/MEM found, disable CxSR\n");
drm_dbg_kms(&i915->drm,
"Could not find CxSR latency for DDR%s, FSB %u kHz, MEM %u kHz\n",
i915->is_ddr3 ? "3" : "2", i915->fsb_freq, i915->mem_freq);
return NULL;
}
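A worked example of the unit handling here (my arithmetic, assuming fsb_freq and mem_freq are now tracked in kHz as the debug message suggests): the DDR3-800 row at the top of this hunk stores fsb_freq = 400 and mem_freq = 800 in MHz, so a platform reporting fsb_freq = 400000 kHz and mem_freq = 800000 kHz matches it via DIV_ROUND_CLOSEST(400000, 1000) = 400 and DIV_ROUND_CLOSEST(800000, 1000) = 800.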
@ -149,14 +148,14 @@ static bool _intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enabl
intel_uncore_write(&dev_priv->uncore, FW_BLC_SELF, enable ? FW_BLC_SELF_EN : 0);
intel_uncore_posting_read(&dev_priv->uncore, FW_BLC_SELF);
} else if (IS_PINEVIEW(dev_priv)) {
val = intel_uncore_read(&dev_priv->uncore, DSPFW3);
val = intel_uncore_read(&dev_priv->uncore, DSPFW3(dev_priv));
was_enabled = val & PINEVIEW_SELF_REFRESH_EN;
if (enable)
val |= PINEVIEW_SELF_REFRESH_EN;
else
val &= ~PINEVIEW_SELF_REFRESH_EN;
intel_uncore_write(&dev_priv->uncore, DSPFW3, val);
intel_uncore_posting_read(&dev_priv->uncore, DSPFW3);
intel_uncore_write(&dev_priv->uncore, DSPFW3(dev_priv), val);
intel_uncore_posting_read(&dev_priv->uncore, DSPFW3(dev_priv));
} else if (IS_I945G(dev_priv) || IS_I945GM(dev_priv)) {
was_enabled = intel_uncore_read(&dev_priv->uncore, FW_BLC_SELF) & FW_BLC_SELF_EN;
val = enable ? _MASKED_BIT_ENABLE(FW_BLC_SELF_EN) :
@ -269,13 +268,15 @@ static void vlv_get_fifo_size(struct intel_crtc_state *crtc_state)
switch (pipe) {
case PIPE_A:
dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB);
dsparb = intel_uncore_read(&dev_priv->uncore,
DSPARB(dev_priv));
dsparb2 = intel_uncore_read(&dev_priv->uncore, DSPARB2);
sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 0, 0);
sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 8, 4);
break;
case PIPE_B:
dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB);
dsparb = intel_uncore_read(&dev_priv->uncore,
DSPARB(dev_priv));
dsparb2 = intel_uncore_read(&dev_priv->uncore, DSPARB2);
sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 16, 8);
sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 24, 12);
@ -300,7 +301,7 @@ static void vlv_get_fifo_size(struct intel_crtc_state *crtc_state)
static int i9xx_get_fifo_size(struct drm_i915_private *dev_priv,
enum i9xx_plane_id i9xx_plane)
{
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB);
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB(dev_priv));
int size;
size = dsparb & 0x7f;
@ -316,7 +317,7 @@ static int i9xx_get_fifo_size(struct drm_i915_private *dev_priv,
static int i830_get_fifo_size(struct drm_i915_private *dev_priv,
enum i9xx_plane_id i9xx_plane)
{
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB);
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB(dev_priv));
int size;
size = dsparb & 0x1ff;
@ -333,7 +334,7 @@ static int i830_get_fifo_size(struct drm_i915_private *dev_priv,
static int i845_get_fifo_size(struct drm_i915_private *dev_priv,
enum i9xx_plane_id i9xx_plane)
{
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB);
u32 dsparb = intel_uncore_read(&dev_priv->uncore, DSPARB(dev_priv));
int size;
size = dsparb & 0x7f;
@ -635,10 +636,9 @@ static void pnv_update_wm(struct drm_i915_private *dev_priv)
u32 reg;
unsigned int wm;
latency = intel_get_cxsr_latency(dev_priv);
latency = pnv_get_cxsr_latency(dev_priv);
if (!latency) {
drm_dbg_kms(&dev_priv->drm,
"Unknown FSB/MEM found, disable CxSR\n");
drm_dbg_kms(&dev_priv->drm, "Unknown FSB/MEM, disabling CxSR\n");
intel_set_memory_cxsr(dev_priv, false);
return;
}
@ -655,10 +655,10 @@ static void pnv_update_wm(struct drm_i915_private *dev_priv)
&pnv_display_wm,
pnv_display_wm.fifo_size,
cpp, latency->display_sr);
reg = intel_uncore_read(&dev_priv->uncore, DSPFW1);
reg = intel_uncore_read(&dev_priv->uncore, DSPFW1(dev_priv));
reg &= ~DSPFW_SR_MASK;
reg |= FW_WM(wm, SR);
intel_uncore_write(&dev_priv->uncore, DSPFW1, reg);
intel_uncore_write(&dev_priv->uncore, DSPFW1(dev_priv), reg);
drm_dbg_kms(&dev_priv->drm, "DSPFW1 register is %x\n", reg);
/* cursor SR */
@ -666,7 +666,8 @@ static void pnv_update_wm(struct drm_i915_private *dev_priv)
&pnv_cursor_wm,
pnv_display_wm.fifo_size,
4, latency->cursor_sr);
intel_uncore_rmw(&dev_priv->uncore, DSPFW3, DSPFW_CURSOR_SR_MASK,
intel_uncore_rmw(&dev_priv->uncore, DSPFW3(dev_priv),
DSPFW_CURSOR_SR_MASK,
FW_WM(wm, CURSOR_SR));
/* Display HPLL off SR */
@ -674,17 +675,18 @@ static void pnv_update_wm(struct drm_i915_private *dev_priv)
&pnv_display_hplloff_wm,
pnv_display_hplloff_wm.fifo_size,
cpp, latency->display_hpll_disable);
intel_uncore_rmw(&dev_priv->uncore, DSPFW3, DSPFW_HPLL_SR_MASK, FW_WM(wm, HPLL_SR));
intel_uncore_rmw(&dev_priv->uncore, DSPFW3(dev_priv),
DSPFW_HPLL_SR_MASK, FW_WM(wm, HPLL_SR));
/* cursor HPLL off SR */
wm = intel_calculate_wm(dev_priv, pixel_rate,
&pnv_cursor_hplloff_wm,
pnv_display_hplloff_wm.fifo_size,
4, latency->cursor_hpll_disable);
reg = intel_uncore_read(&dev_priv->uncore, DSPFW3);
reg = intel_uncore_read(&dev_priv->uncore, DSPFW3(dev_priv));
reg &= ~DSPFW_HPLL_CURSOR_MASK;
reg |= FW_WM(wm, HPLL_CURSOR);
intel_uncore_write(&dev_priv->uncore, DSPFW3, reg);
intel_uncore_write(&dev_priv->uncore, DSPFW3(dev_priv), reg);
drm_dbg_kms(&dev_priv->drm, "DSPFW3 register is %x\n", reg);
intel_set_memory_cxsr(dev_priv, true);
@ -718,25 +720,25 @@ static void g4x_write_wm_values(struct drm_i915_private *dev_priv,
for_each_pipe(dev_priv, pipe)
trace_g4x_wm(intel_crtc_for_pipe(dev_priv, pipe), wm);
intel_uncore_write(&dev_priv->uncore, DSPFW1,
intel_uncore_write(&dev_priv->uncore, DSPFW1(dev_priv),
FW_WM(wm->sr.plane, SR) |
FW_WM(wm->pipe[PIPE_B].plane[PLANE_CURSOR], CURSORB) |
FW_WM(wm->pipe[PIPE_B].plane[PLANE_PRIMARY], PLANEB) |
FW_WM(wm->pipe[PIPE_A].plane[PLANE_PRIMARY], PLANEA));
intel_uncore_write(&dev_priv->uncore, DSPFW2,
intel_uncore_write(&dev_priv->uncore, DSPFW2(dev_priv),
(wm->fbc_en ? DSPFW_FBC_SR_EN : 0) |
FW_WM(wm->sr.fbc, FBC_SR) |
FW_WM(wm->hpll.fbc, FBC_HPLL_SR) |
FW_WM(wm->pipe[PIPE_B].plane[PLANE_SPRITE0], SPRITEB) |
FW_WM(wm->pipe[PIPE_A].plane[PLANE_CURSOR], CURSORA) |
FW_WM(wm->pipe[PIPE_A].plane[PLANE_SPRITE0], SPRITEA));
intel_uncore_write(&dev_priv->uncore, DSPFW3,
intel_uncore_write(&dev_priv->uncore, DSPFW3(dev_priv),
(wm->hpll_en ? DSPFW_HPLL_SR_EN : 0) |
FW_WM(wm->sr.cursor, CURSOR_SR) |
FW_WM(wm->hpll.cursor, HPLL_CURSOR) |
FW_WM(wm->hpll.plane, HPLL_SR));
intel_uncore_posting_read(&dev_priv->uncore, DSPFW1);
intel_uncore_posting_read(&dev_priv->uncore, DSPFW1(dev_priv));
}
#define FW_WM_VLV(value, plane) \
@ -768,16 +770,16 @@ static void vlv_write_wm_values(struct drm_i915_private *dev_priv,
intel_uncore_write(&dev_priv->uncore, DSPFW5, 0);
intel_uncore_write(&dev_priv->uncore, DSPFW6, 0);
intel_uncore_write(&dev_priv->uncore, DSPFW1,
intel_uncore_write(&dev_priv->uncore, DSPFW1(dev_priv),
FW_WM(wm->sr.plane, SR) |
FW_WM(wm->pipe[PIPE_B].plane[PLANE_CURSOR], CURSORB) |
FW_WM_VLV(wm->pipe[PIPE_B].plane[PLANE_PRIMARY], PLANEB) |
FW_WM_VLV(wm->pipe[PIPE_A].plane[PLANE_PRIMARY], PLANEA));
intel_uncore_write(&dev_priv->uncore, DSPFW2,
intel_uncore_write(&dev_priv->uncore, DSPFW2(dev_priv),
FW_WM_VLV(wm->pipe[PIPE_A].plane[PLANE_SPRITE1], SPRITEB) |
FW_WM(wm->pipe[PIPE_A].plane[PLANE_CURSOR], CURSORA) |
FW_WM_VLV(wm->pipe[PIPE_A].plane[PLANE_SPRITE0], SPRITEA));
intel_uncore_write(&dev_priv->uncore, DSPFW3,
intel_uncore_write(&dev_priv->uncore, DSPFW3(dev_priv),
FW_WM(wm->sr.cursor, CURSOR_SR));
if (IS_CHERRYVIEW(dev_priv)) {
@ -815,7 +817,7 @@ static void vlv_write_wm_values(struct drm_i915_private *dev_priv,
FW_WM(wm->pipe[PIPE_A].plane[PLANE_PRIMARY] >> 8, PLANEA_HI));
}
intel_uncore_posting_read(&dev_priv->uncore, DSPFW1);
intel_uncore_posting_read(&dev_priv->uncore, DSPFW1(dev_priv));
}
#undef FW_WM_VLV
@ -1787,7 +1789,7 @@ static void vlv_atomic_update_fifo(struct intel_atomic_state *state,
switch (crtc->pipe) {
case PIPE_A:
dsparb = intel_uncore_read_fw(uncore, DSPARB);
dsparb = intel_uncore_read_fw(uncore, DSPARB(dev_priv));
dsparb2 = intel_uncore_read_fw(uncore, DSPARB2);
dsparb &= ~(VLV_FIFO(SPRITEA, 0xff) |
@ -1800,11 +1802,11 @@ static void vlv_atomic_update_fifo(struct intel_atomic_state *state,
dsparb2 |= (VLV_FIFO(SPRITEA_HI, sprite0_start >> 8) |
VLV_FIFO(SPRITEB_HI, sprite1_start >> 8));
intel_uncore_write_fw(uncore, DSPARB, dsparb);
intel_uncore_write_fw(uncore, DSPARB(dev_priv), dsparb);
intel_uncore_write_fw(uncore, DSPARB2, dsparb2);
break;
case PIPE_B:
dsparb = intel_uncore_read_fw(uncore, DSPARB);
dsparb = intel_uncore_read_fw(uncore, DSPARB(dev_priv));
dsparb2 = intel_uncore_read_fw(uncore, DSPARB2);
dsparb &= ~(VLV_FIFO(SPRITEC, 0xff) |
@ -1817,7 +1819,7 @@ static void vlv_atomic_update_fifo(struct intel_atomic_state *state,
dsparb2 |= (VLV_FIFO(SPRITEC_HI, sprite0_start >> 8) |
VLV_FIFO(SPRITED_HI, sprite1_start >> 8));
intel_uncore_write_fw(uncore, DSPARB, dsparb);
intel_uncore_write_fw(uncore, DSPARB(dev_priv), dsparb);
intel_uncore_write_fw(uncore, DSPARB2, dsparb2);
break;
case PIPE_C:
@ -1841,7 +1843,7 @@ static void vlv_atomic_update_fifo(struct intel_atomic_state *state,
break;
}
intel_uncore_posting_read_fw(uncore, DSPARB);
intel_uncore_posting_read_fw(uncore, DSPARB(dev_priv));
spin_unlock(&uncore->lock);
}
@ -2065,14 +2067,17 @@ static void i965_update_wm(struct drm_i915_private *dev_priv)
srwm);
/* 965 has limitations... */
intel_uncore_write(&dev_priv->uncore, DSPFW1, FW_WM(srwm, SR) |
FW_WM(8, CURSORB) |
FW_WM(8, PLANEB) |
FW_WM(8, PLANEA));
intel_uncore_write(&dev_priv->uncore, DSPFW2, FW_WM(8, CURSORA) |
FW_WM(8, PLANEC_OLD));
intel_uncore_write(&dev_priv->uncore, DSPFW1(dev_priv),
FW_WM(srwm, SR) |
FW_WM(8, CURSORB) |
FW_WM(8, PLANEB) |
FW_WM(8, PLANEA));
intel_uncore_write(&dev_priv->uncore, DSPFW2(dev_priv),
FW_WM(8, CURSORA) |
FW_WM(8, PLANEC_OLD));
/* update cursor SR watermark */
intel_uncore_write(&dev_priv->uncore, DSPFW3, FW_WM(cursor_sr, CURSOR_SR));
intel_uncore_write(&dev_priv->uncore, DSPFW3(dev_priv),
FW_WM(cursor_sr, CURSOR_SR));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
@ -3519,13 +3524,13 @@ static void g4x_read_wm_values(struct drm_i915_private *dev_priv,
{
u32 tmp;
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW1);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW1(dev_priv));
wm->sr.plane = _FW_WM(tmp, SR);
wm->pipe[PIPE_B].plane[PLANE_CURSOR] = _FW_WM(tmp, CURSORB);
wm->pipe[PIPE_B].plane[PLANE_PRIMARY] = _FW_WM(tmp, PLANEB);
wm->pipe[PIPE_A].plane[PLANE_PRIMARY] = _FW_WM(tmp, PLANEA);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW2);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW2(dev_priv));
wm->fbc_en = tmp & DSPFW_FBC_SR_EN;
wm->sr.fbc = _FW_WM(tmp, FBC_SR);
wm->hpll.fbc = _FW_WM(tmp, FBC_HPLL_SR);
@ -3533,7 +3538,7 @@ static void g4x_read_wm_values(struct drm_i915_private *dev_priv,
wm->pipe[PIPE_A].plane[PLANE_CURSOR] = _FW_WM(tmp, CURSORA);
wm->pipe[PIPE_A].plane[PLANE_SPRITE0] = _FW_WM(tmp, SPRITEA);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW3);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW3(dev_priv));
wm->hpll_en = tmp & DSPFW_HPLL_SR_EN;
wm->sr.cursor = _FW_WM(tmp, CURSOR_SR);
wm->hpll.cursor = _FW_WM(tmp, HPLL_CURSOR);
@ -3559,18 +3564,18 @@ static void vlv_read_wm_values(struct drm_i915_private *dev_priv,
(tmp >> DDL_SPRITE_SHIFT(1)) & (DDL_PRECISION_HIGH | DRAIN_LATENCY_MASK);
}
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW1);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW1(dev_priv));
wm->sr.plane = _FW_WM(tmp, SR);
wm->pipe[PIPE_B].plane[PLANE_CURSOR] = _FW_WM(tmp, CURSORB);
wm->pipe[PIPE_B].plane[PLANE_PRIMARY] = _FW_WM_VLV(tmp, PLANEB);
wm->pipe[PIPE_A].plane[PLANE_PRIMARY] = _FW_WM_VLV(tmp, PLANEA);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW2);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW2(dev_priv));
wm->pipe[PIPE_A].plane[PLANE_SPRITE1] = _FW_WM_VLV(tmp, SPRITEB);
wm->pipe[PIPE_A].plane[PLANE_CURSOR] = _FW_WM(tmp, CURSORA);
wm->pipe[PIPE_A].plane[PLANE_SPRITE0] = _FW_WM_VLV(tmp, SPRITEA);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW3);
tmp = intel_uncore_read(&dev_priv->uncore, DSPFW3(dev_priv));
wm->sr.cursor = _FW_WM(tmp, CURSOR_SR);
if (IS_CHERRYVIEW(dev_priv)) {
@ -4022,13 +4027,8 @@ void i9xx_wm_init(struct drm_i915_private *dev_priv)
g4x_setup_wm_latency(dev_priv);
dev_priv->display.funcs.wm = &g4x_wm_funcs;
} else if (IS_PINEVIEW(dev_priv)) {
if (!intel_get_cxsr_latency(dev_priv)) {
drm_info(&dev_priv->drm,
"failed to find known CxSR latency "
"(found ddr%s fsb freq %d, mem freq %d), "
"disabling CxSR\n",
(dev_priv->is_ddr3 == 1) ? "3" : "2",
dev_priv->fsb_freq, dev_priv->mem_freq);
if (!pnv_get_cxsr_latency(dev_priv)) {
drm_info(&dev_priv->drm, "Unknown FSB/MEM, disabling CxSR\n");
/* Disable CxSR and never update its watermark again */
intel_set_memory_cxsr(dev_priv, false);
dev_priv->display.funcs.wm = &nop_funcs;

View File

@ -784,7 +784,8 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
if (intel_dsi->dual_link) {
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL2(dsi_trans),
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans),
0, PORT_SYNC_MODE_ENABLE);
}
@ -796,7 +797,8 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
dsi_trans = dsi_port_to_transcoder(port);
/* select data lane width */
tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans));
tmp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans));
tmp &= ~DDI_PORT_WIDTH_MASK;
tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count);
@ -822,7 +824,8 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
/* enable DDI buffer */
tmp |= TRANS_DDI_FUNC_ENABLE;
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans), tmp);
intel_de_write(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans), tmp);
}
/* wait for link ready */
@ -915,7 +918,7 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
/* program TRANS_HTOTAL register */
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_write(dev_priv, TRANS_HTOTAL(dsi_trans),
intel_de_write(dev_priv, TRANS_HTOTAL(dev_priv, dsi_trans),
HACTIVE(hactive - 1) | HTOTAL(htotal - 1));
}
@ -938,7 +941,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_write(dev_priv, TRANS_HSYNC(dsi_trans),
intel_de_write(dev_priv,
TRANS_HSYNC(dev_priv, dsi_trans),
HSYNC_START(hsync_start - 1) | HSYNC_END(hsync_end - 1));
}
}
@ -952,7 +956,7 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
* struct drm_display_mode.
* For interlace mode: program required pixel minus 2
*/
intel_de_write(dev_priv, TRANS_VTOTAL(dsi_trans),
intel_de_write(dev_priv, TRANS_VTOTAL(dev_priv, dsi_trans),
VACTIVE(vactive - 1) | VTOTAL(vtotal - 1));
}
@ -966,7 +970,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
if (is_vid_mode(intel_dsi)) {
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_write(dev_priv, TRANS_VSYNC(dsi_trans),
intel_de_write(dev_priv,
TRANS_VSYNC(dev_priv, dsi_trans),
VSYNC_START(vsync_start - 1) | VSYNC_END(vsync_end - 1));
}
}
@ -980,7 +985,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
if (is_vid_mode(intel_dsi)) {
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_write(dev_priv, TRANS_VSYNCSHIFT(dsi_trans),
intel_de_write(dev_priv,
TRANS_VSYNCSHIFT(dev_priv, dsi_trans),
vsync_shift);
}
}
@ -994,7 +1000,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
if (DISPLAY_VER(dev_priv) >= 12) {
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_write(dev_priv, TRANS_VBLANK(dsi_trans),
intel_de_write(dev_priv,
TRANS_VBLANK(dev_priv, dsi_trans),
VBLANK_START(vactive - 1) | VBLANK_END(vtotal - 1));
}
}
@ -1009,10 +1016,11 @@ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder)
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_rmw(dev_priv, TRANSCONF(dsi_trans), 0, TRANSCONF_ENABLE);
intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans), 0,
TRANSCONF_ENABLE);
/* wait for transcoder to be enabled */
if (intel_de_wait_for_set(dev_priv, TRANSCONF(dsi_trans),
if (intel_de_wait_for_set(dev_priv, TRANSCONF(dev_priv, dsi_trans),
TRANSCONF_STATE_ENABLE, 10))
drm_err(&dev_priv->drm,
"DSI transcoder not enabled\n");
@ -1275,10 +1283,11 @@ static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder)
dsi_trans = dsi_port_to_transcoder(port);
/* disable transcoder */
intel_de_rmw(dev_priv, TRANSCONF(dsi_trans), TRANSCONF_ENABLE, 0);
intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans),
TRANSCONF_ENABLE, 0);
/* wait for transcoder to be disabled */
if (intel_de_wait_for_clear(dev_priv, TRANSCONF(dsi_trans),
if (intel_de_wait_for_clear(dev_priv, TRANSCONF(dev_priv, dsi_trans),
TRANSCONF_STATE_ENABLE, 50))
drm_err(&dev_priv->drm,
"DSI trancoder not disabled\n");
@ -1327,7 +1336,8 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
/* disable ddi function */
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans),
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans),
TRANS_DDI_FUNC_ENABLE, 0);
}
@ -1335,7 +1345,8 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
if (intel_dsi->dual_link) {
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL2(dsi_trans),
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans),
PORT_SYNC_MODE_ENABLE, 0);
}
}
@ -1691,7 +1702,8 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
for_each_dsi_port(port, intel_dsi->ports) {
dsi_trans = dsi_port_to_transcoder(port);
tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dsi_trans));
tmp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans));
switch (tmp & TRANS_DDI_EDP_INPUT_MASK) {
case TRANS_DDI_EDP_INPUT_A_ON:
*pipe = PIPE_A;
@ -1710,7 +1722,7 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
goto out;
}
tmp = intel_de_read(dev_priv, TRANSCONF(dsi_trans));
tmp = intel_de_read(dev_priv, TRANSCONF(dev_priv, dsi_trans));
ret = tmp & TRANSCONF_ENABLE;
}
out:

View File

@ -0,0 +1,414 @@
// SPDX-License-Identifier: MIT
/*
* Copyright 2024, Intel Corporation.
*/
#include "intel_alpm.h"
#include "intel_crtc.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_dp.h"
#include "intel_dp_aux.h"
#include "intel_psr_regs.h"
bool intel_alpm_aux_wake_supported(struct intel_dp *intel_dp)
{
return intel_dp->alpm_dpcd & DP_ALPM_CAP;
}
bool intel_alpm_aux_less_wake_supported(struct intel_dp *intel_dp)
{
return intel_dp->alpm_dpcd & DP_ALPM_AUX_LESS_CAP;
}
void intel_alpm_init_dpcd(struct intel_dp *intel_dp)
{
u8 dpcd;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_RECEIVER_ALPM_CAP, &dpcd) < 0)
return;
intel_dp->alpm_dpcd = dpcd;
}
/*
* See Bspec: 71632 for the table
*
* Silence_period = tSilence,Min + ((tSilence,Max - tSilence,Min) / 2)
*
* Half cycle duration:
*
* Link rates 1.62 - 4.32 and tLFPS_Cycle = 70 ns
* FLOOR( (Link Rate * tLFPS_Cycle) / (2 * 10) )
*
* Link rates 5.4 - 8.1
* PORT_ALPM_LFPS_CTL[ LFPS Cycle Count ] = 10
* LFPS Period chosen is the mid-point of the min:max values from the table
* FLOOR( LFPS Period in Symbol clocks /
* (2 * PORT_ALPM_LFPS_CTL[ LFPS Cycle Count ]) )
*/
static bool _lnl_get_silence_period_and_lfps_half_cycle(int link_rate,
int *silence_period,
int *lfps_half_cycle)
{
switch (link_rate) {
case 162000:
*silence_period = 20;
*lfps_half_cycle = 5;
break;
case 216000:
*silence_period = 27;
*lfps_half_cycle = 7;
break;
case 243000:
*silence_period = 31;
*lfps_half_cycle = 8;
break;
case 270000:
*silence_period = 34;
*lfps_half_cycle = 9;
break;
case 324000:
*silence_period = 41;
*lfps_half_cycle = 11;
break;
case 432000:
*silence_period = 56;
*lfps_half_cycle = 15;
break;
case 540000:
*silence_period = 69;
*lfps_half_cycle = 12;
break;
case 648000:
*silence_period = 84;
*lfps_half_cycle = 15;
break;
case 675000:
*silence_period = 87;
*lfps_half_cycle = 15;
break;
case 810000:
*silence_period = 104;
*lfps_half_cycle = 19;
break;
default:
*silence_period = *lfps_half_cycle = -1;
return false;
}
return true;
}
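As a sanity check of the half-cycle formula against the table (worked example, not part of the patch): for link_rate == 270000 (2.7 GHz, in the 1.62 - 4.32 range with tLFPS_Cycle = 70 ns), FLOOR((2.7 GHz * 70 ns) / (2 * 10)) = FLOOR(189 / 20) = 9, matching the lfps_half_cycle value returned by the 270000 case above; 162000 gives FLOOR(113.4 / 20) = 5, likewise matching.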
/*
* AUX-Less Wake Time = CEILING( ((PHY P2 to P0) + tLFPS_Period, Max+
* tSilence, Max+ tPHY Establishment + tCDS) / tline)
* For the "PHY P2 to P0" latency see the PHY Power Control page
* (PHY P2 to P0) : https://gfxspecs.intel.com/Predator/Home/Index/68965
* : 12 us
* The tLFPS_Period, Max term is 800ns
* The tSilence, Max term is 180ns
* The tPHY Establishment (a.k.a. t1) term is 50us
* The tCDS term is 1 or 2 times t2
* t2 = Number ML_PHY_LOCK * tML_PHY_LOCK
* Number ML_PHY_LOCK = ( 7 + CEILING( 6.5us / tML_PHY_LOCK ) + 1)
* Rounding up the 6.5us padding to the next ML_PHY_LOCK boundary and
* adding the "+ 1" term ensures all ML_PHY_LOCK sequences that start
* within the CDS period complete within the CDS period regardless of
* entry into the period
* tML_PHY_LOCK = TPS4 Length * ( 10 / (Link Rate in MHz) )
* TPS4 Length = 252 Symbols
*/
static int _lnl_compute_aux_less_wake_time(int port_clock)
{
int tphy2_p2_to_p0 = 12 * 1000;
int tlfps_period_max = 800;
int tsilence_max = 180;
int t1 = 50 * 1000;
int tps4 = 252;
/* port_clock is link rate in 10kbit/s units */
int tml_phy_lock = 1000 * 1000 * tps4 / port_clock;
int num_ml_phy_lock = 7 + DIV_ROUND_UP(6500, tml_phy_lock) + 1;
int t2 = num_ml_phy_lock * tml_phy_lock;
int tcds = 1 * t2;
return DIV_ROUND_UP(tphy2_p2_to_p0 + tlfps_period_max + tsilence_max +
t1 + tcds, 1000);
}
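Plugging numbers into this helper (worked example, my arithmetic): for port_clock = 270000 (2.7 Gbit/s), tml_phy_lock = 1000 * 1000 * 252 / 270000 = 933 ns, num_ml_phy_lock = 7 + DIV_ROUND_UP(6500, 933) + 1 = 15, so t2 = tcds = 15 * 933 = 13995 ns, and the result is DIV_ROUND_UP(12000 + 800 + 180 + 50000 + 13995, 1000) = 77 us of AUX-less wake time.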
static int _lnl_compute_aux_less_alpm_params(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
int aux_less_wake_time, aux_less_wake_lines, silence_period,
lfps_half_cycle;
aux_less_wake_time =
_lnl_compute_aux_less_wake_time(crtc_state->port_clock);
aux_less_wake_lines = intel_usecs_to_scanlines(&crtc_state->hw.adjusted_mode,
aux_less_wake_time);
if (!_lnl_get_silence_period_and_lfps_half_cycle(crtc_state->port_clock,
&silence_period,
&lfps_half_cycle))
return false;
if (aux_less_wake_lines > ALPM_CTL_AUX_LESS_WAKE_TIME_MASK ||
silence_period > PORT_ALPM_CTL_SILENCE_PERIOD_MASK ||
lfps_half_cycle > PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK)
return false;
if (i915->display.params.psr_safest_params)
aux_less_wake_lines = ALPM_CTL_AUX_LESS_WAKE_TIME_MASK;
intel_dp->alpm_parameters.aux_less_wake_lines = aux_less_wake_lines;
intel_dp->alpm_parameters.silence_period_sym_clocks = silence_period;
intel_dp->alpm_parameters.lfps_half_cycle_num_of_syms = lfps_half_cycle;
return true;
}
static bool _lnl_compute_alpm_params(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
int check_entry_lines;
if (DISPLAY_VER(i915) < 20)
return true;
/* ALPM Entry Check = 2 + CEILING( 5us /tline ) */
check_entry_lines = 2 +
intel_usecs_to_scanlines(&crtc_state->hw.adjusted_mode, 5);
if (check_entry_lines > 15)
return false;
if (!_lnl_compute_aux_less_alpm_params(intel_dp, crtc_state))
return false;
if (i915->display.params.psr_safest_params)
check_entry_lines = 15;
intel_dp->alpm_parameters.check_entry_lines = check_entry_lines;
return true;
}
/*
* IO wake time for DISPLAY_VER < 12 is not directly mentioned in Bspec. There
* are 50 us io wake time and 32 us fast wake time. Clearly precharge pulses are
* not (improperly) included in 32 us fast wake time. 50 us - 32 us = 18 us.
*/
static int skl_io_buffer_wake_time(void)
{
return 18;
}
static int tgl_io_buffer_wake_time(void)
{
return 10;
}
static int io_buffer_wake_time(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
if (DISPLAY_VER(i915) >= 12)
return tgl_io_buffer_wake_time();
else
return skl_io_buffer_wake_time();
}
bool intel_alpm_compute_params(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
int io_wake_lines, io_wake_time, fast_wake_lines, fast_wake_time;
int tfw_exit_latency = 20; /* eDP spec */
int phy_wake = 4; /* eDP spec */
int preamble = 8; /* eDP spec */
int precharge = intel_dp_aux_fw_sync_len() - preamble;
u8 max_wake_lines;
io_wake_time = max(precharge, io_buffer_wake_time(crtc_state)) +
preamble + phy_wake + tfw_exit_latency;
fast_wake_time = precharge + preamble + phy_wake +
tfw_exit_latency;
if (DISPLAY_VER(i915) >= 20)
max_wake_lines = 68;
else if (DISPLAY_VER(i915) >= 12)
max_wake_lines = 12;
else
max_wake_lines = 8;
io_wake_lines = intel_usecs_to_scanlines(
&crtc_state->hw.adjusted_mode, io_wake_time);
fast_wake_lines = intel_usecs_to_scanlines(
&crtc_state->hw.adjusted_mode, fast_wake_time);
if (io_wake_lines > max_wake_lines ||
fast_wake_lines > max_wake_lines)
return false;
if (!_lnl_compute_alpm_params(intel_dp, crtc_state))
return false;
if (i915->display.params.psr_safest_params)
io_wake_lines = fast_wake_lines = max_wake_lines;
/* According to Bspec lower limit should be set as 7 lines. */
intel_dp->alpm_parameters.io_wake_lines = max(io_wake_lines, 7);
intel_dp->alpm_parameters.fast_wake_lines = max(fast_wake_lines, 7);
return true;
}
void intel_alpm_lobf_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;
int waketime_in_lines, first_sdp_position;
int context_latency, guardband;
if (!intel_dp_is_edp(intel_dp))
return;
if (DISPLAY_VER(i915) < 20)
return;
if (!intel_dp_as_sdp_supported(intel_dp))
return;
if (crtc_state->has_psr)
return;
if (!(intel_alpm_aux_wake_supported(intel_dp) ||
intel_alpm_aux_less_wake_supported(intel_dp)))
return;
if (!intel_alpm_compute_params(intel_dp, crtc_state))
return;
context_latency = adjusted_mode->crtc_vblank_start - adjusted_mode->crtc_vdisplay;
guardband = adjusted_mode->crtc_vtotal -
adjusted_mode->crtc_vdisplay - context_latency;
first_sdp_position = adjusted_mode->crtc_vtotal - adjusted_mode->crtc_vsync_start;
if (intel_alpm_aux_less_wake_supported(intel_dp))
waketime_in_lines = intel_dp->alpm_parameters.io_wake_lines;
else
waketime_in_lines = intel_dp->alpm_parameters.aux_less_wake_lines;
crtc_state->has_lobf = (context_latency + guardband) >
(first_sdp_position + waketime_in_lines);
}
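In effect, LOBF is only enabled when the vertical blanking budget (context latency plus guardband, which works out to crtc_vtotal - crtc_vdisplay) exceeds the first SDP position plus the link wake time in lines, i.e. when the link can be taken down and brought back up before the next frame's first SDP is due.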
static void lnl_alpm_configure(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
enum port port = dp_to_dig_port(intel_dp)->base.port;
u32 alpm_ctl;
if (DISPLAY_VER(dev_priv) < 20 || (!intel_dp->psr.sel_update_enabled &&
!intel_dp_is_edp(intel_dp)))
return;
/*
* Panel Replay on eDP is always using ALPM aux less. I.e. no need to
* check panel support at this point.
*/
if ((intel_dp->psr.panel_replay_enabled && intel_dp_is_edp(intel_dp)) ||
(crtc_state->has_lobf && intel_alpm_aux_less_wake_supported(intel_dp))) {
alpm_ctl = ALPM_CTL_ALPM_ENABLE |
ALPM_CTL_ALPM_AUX_LESS_ENABLE |
ALPM_CTL_AUX_LESS_SLEEP_HOLD_TIME_50_SYMBOLS |
ALPM_CTL_AUX_LESS_WAKE_TIME(intel_dp->alpm_parameters.aux_less_wake_lines);
intel_de_write(dev_priv,
PORT_ALPM_CTL(dev_priv, port),
PORT_ALPM_CTL_ALPM_AUX_LESS_ENABLE |
PORT_ALPM_CTL_MAX_PHY_SWING_SETUP(15) |
PORT_ALPM_CTL_MAX_PHY_SWING_HOLD(0) |
PORT_ALPM_CTL_SILENCE_PERIOD(
intel_dp->alpm_parameters.silence_period_sym_clocks));
intel_de_write(dev_priv,
PORT_ALPM_LFPS_CTL(dev_priv, port),
PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT(10) |
PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION(
intel_dp->alpm_parameters.lfps_half_cycle_num_of_syms) |
PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(
intel_dp->alpm_parameters.lfps_half_cycle_num_of_syms) |
PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(
intel_dp->alpm_parameters.lfps_half_cycle_num_of_syms));
} else {
alpm_ctl = ALPM_CTL_EXTENDED_FAST_WAKE_ENABLE |
ALPM_CTL_EXTENDED_FAST_WAKE_TIME(intel_dp->alpm_parameters.fast_wake_lines);
}
if (crtc_state->has_lobf)
alpm_ctl |= ALPM_CTL_LOBF_ENABLE;
alpm_ctl |= ALPM_CTL_ALPM_ENTRY_CHECK(intel_dp->alpm_parameters.check_entry_lines);
intel_de_write(dev_priv, ALPM_CTL(dev_priv, cpu_transcoder), alpm_ctl);
}
void intel_alpm_configure(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
lnl_alpm_configure(intel_dp, crtc_state);
}
static int i915_edp_lobf_info_show(struct seq_file *m, void *data)
{
struct intel_connector *connector = m->private;
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct drm_crtc *crtc;
struct intel_crtc_state *crtc_state;
enum transcoder cpu_transcoder;
u32 alpm_ctl;
int ret;
ret = drm_modeset_lock_single_interruptible(&dev_priv->drm.mode_config.connection_mutex);
if (ret)
return ret;
crtc = connector->base.state->crtc;
if (connector->base.status != connector_status_connected || !crtc) {
ret = -ENODEV;
goto out;
}
crtc_state = to_intel_crtc_state(crtc->state);
cpu_transcoder = crtc_state->cpu_transcoder;
alpm_ctl = intel_de_read(dev_priv, ALPM_CTL(dev_priv, cpu_transcoder));
seq_printf(m, "LOBF status: %s\n", str_enabled_disabled(alpm_ctl & ALPM_CTL_LOBF_ENABLE));
seq_printf(m, "Aux-wake alpm status: %s\n",
str_enabled_disabled(!(alpm_ctl & ALPM_CTL_ALPM_AUX_LESS_ENABLE)));
seq_printf(m, "Aux-less alpm status: %s\n",
str_enabled_disabled(alpm_ctl & ALPM_CTL_ALPM_AUX_LESS_ENABLE));
out:
drm_modeset_unlock(&dev_priv->drm.mode_config.connection_mutex);
return ret;
}
DEFINE_SHOW_ATTRIBUTE(i915_edp_lobf_info);
void intel_alpm_lobf_debugfs_add(struct intel_connector *connector)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct dentry *root = connector->base.debugfs_entry;
if (DISPLAY_VER(i915) < 20 ||
connector->base.connector_type != DRM_MODE_CONNECTOR_eDP)
return;
debugfs_create_file("i915_edp_lobf_info", 0444, root,
connector, &i915_edp_lobf_info_fops);
}

View File

@ -0,0 +1,27 @@
/* SPDX-License-Identifier: MIT
*
* Copyright © 2024 Intel Corporation
*/
#ifndef _INTEL_ALPM_H
#define _INTEL_ALPM_H
#include <linux/types.h>
struct intel_dp;
struct intel_crtc_state;
struct drm_connector_state;
struct intel_connector;
void intel_alpm_init_dpcd(struct intel_dp *intel_dp);
bool intel_alpm_compute_params(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state);
void intel_alpm_lobf_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state,
struct drm_connector_state *conn_state);
void intel_alpm_configure(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_alpm_lobf_debugfs_add(struct intel_connector *connector);
bool intel_alpm_aux_wake_supported(struct intel_dp *intel_dp);
bool intel_alpm_aux_less_wake_supported(struct intel_dp *intel_dp);
#endif

View File

@ -35,7 +35,6 @@
#include <drm/drm_fourcc.h>
#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_atomic.h"
#include "intel_cdclk.h"
#include "intel_display_types.h"

View File

@ -32,6 +32,7 @@
*/
#include <linux/dma-fence-chain.h>
#include <linux/dma-resv.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_gem_atomic_helper.h>
@ -39,7 +40,7 @@
#include <drm/drm_fourcc.h>
#include "i915_config.h"
#include "i915_reg.h"
#include "i9xx_plane_regs.h"
#include "intel_atomic_plane.h"
#include "intel_cdclk.h"
#include "intel_display_rps.h"
@ -144,6 +145,14 @@ intel_plane_destroy_state(struct drm_plane *plane,
kfree(plane_state);
}
bool intel_plane_needs_physical(struct intel_plane *plane)
{
struct drm_i915_private *i915 = to_i915(plane->base.dev);
return plane->id == PLANE_CURSOR &&
DISPLAY_INFO(i915)->cursor_needs_physical;
}
unsigned int intel_adjusted_rate(const struct drm_rect *src,
const struct drm_rect *dst,
unsigned int rate)
@ -327,10 +336,10 @@ void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state,
intel_plane_clear_hw_state(plane_state);
/*
* For the bigjoiner slave uapi.crtc will point at
* the master crtc. So we explicitly assign the right
* slave crtc to hw.crtc. uapi.crtc!=NULL simply indicates
* the plane is logically enabled on the uapi level.
* For the joiner secondary uapi.crtc will point at
* the primary crtc. So we explicitly assign the right
* secondary crtc to hw.crtc. uapi.crtc!=NULL simply
* indicates the plane is logically enabled on the uapi level.
*/
plane_state->hw.crtc = from_plane_state->uapi.crtc ? &crtc->base : NULL;
@ -429,10 +438,16 @@ static bool intel_plane_do_async_flip(struct intel_plane *plane,
* On display version 13 and newer we might need to override the
* first async flip in order to change watermark levels as part of
* an optimization, so for those we check whether this is the first
* async flip. On platforms before display version 13 we always do
* the async flip directly.
*
* And let's do this for all skl+ so that we can eg. change the
* modifier as well.
*
* TODO: For older platforms there is less reason to do this as
* only X-tile is supported with async flips, though we could
* extend this so other scanout parameters (stride/etc) could
* be changed as well...
*/
return DISPLAY_VER(i915) < 13 || old_crtc_state->uapi.async_flip;
return DISPLAY_VER(i915) < 9 || old_crtc_state->uapi.async_flip;
}
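/*
 * Rough decision sketch for the check above:
 *
 *   display ver < 9                        -> do the async flip directly
 *   skl+ and the old state was async       -> do the async flip
 *   skl+ and this is the first async flip  -> demote to a sync flip so
 *                                             watermarks/modifier can change
 */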
static bool i9xx_must_disable_cxsr(const struct intel_crtc_state *new_crtc_state,
@ -594,6 +609,17 @@ static int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_cr
if (intel_plane_do_async_flip(plane, old_crtc_state, new_crtc_state)) {
new_crtc_state->do_async_flip = true;
new_crtc_state->async_flip_planes |= BIT(plane->id);
} else if (plane->need_async_flip_toggle_wa &&
new_crtc_state->uapi.async_flip) {
/*
* On platforms with double buffered async flip bit we
* set the bit already one frame early during the sync
* flip (see {i9xx,skl}_plane_update_arm()). The
* hardware will therefore be ready to perform a real
* async flip during the next commit, without having
* to wait yet another frame for the bit to latch.
*/
new_crtc_state->async_flip_planes |= BIT(plane->id);
}
return 0;
@ -688,27 +714,27 @@ int intel_plane_atomic_check(struct intel_atomic_state *state,
intel_atomic_get_new_plane_state(state, plane);
const struct intel_plane_state *old_plane_state =
intel_atomic_get_old_plane_state(state, plane);
const struct intel_plane_state *new_master_plane_state;
const struct intel_plane_state *new_primary_crtc_plane_state;
struct intel_crtc *crtc = intel_crtc_for_pipe(i915, plane->pipe);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
struct intel_crtc_state *new_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
if (new_crtc_state && intel_crtc_is_bigjoiner_slave(new_crtc_state)) {
struct intel_crtc *master_crtc =
intel_master_crtc(new_crtc_state);
struct intel_plane *master_plane =
intel_crtc_get_plane(master_crtc, plane->id);
if (new_crtc_state && intel_crtc_is_joiner_secondary(new_crtc_state)) {
struct intel_crtc *primary_crtc =
intel_primary_crtc(new_crtc_state);
struct intel_plane *primary_crtc_plane =
intel_crtc_get_plane(primary_crtc, plane->id);
new_master_plane_state =
intel_atomic_get_new_plane_state(state, master_plane);
new_primary_crtc_plane_state =
intel_atomic_get_new_plane_state(state, primary_crtc_plane);
} else {
new_master_plane_state = new_plane_state;
new_primary_crtc_plane_state = new_plane_state;
}
intel_plane_copy_uapi_to_hw_state(new_plane_state,
new_master_plane_state,
new_primary_crtc_plane_state,
crtc);
new_plane_state->uapi.visible = false;

View File

@ -66,5 +66,6 @@ int intel_plane_check_src_coordinates(struct intel_plane_state *plane_state);
void intel_plane_set_invisible(struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state);
void intel_plane_helper_add(struct intel_plane *plane);
bool intel_plane_needs_physical(struct intel_plane *plane);
#endif /* __INTEL_ATOMIC_PLANE_H__ */

View File

@ -26,7 +26,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_eld.h>
#include <drm/i915_component.h>
#include <drm/intel/i915_component.h>
#include "i915_drv.h"
#include "intel_atomic.h"
@ -183,6 +183,15 @@ static const struct hdmi_aud_ncts hdmi_aud_ncts_36bpp[] = {
{ 192000, TMDS_445_5M, 20480, 371250 },
};
/*
* WA_14020863754: implement the audio workaround.
* A corner case with the Min Hblank Fix can cause an audio hang.
*/
static bool needs_wa_14020863754(struct drm_i915_private *i915)
{
return (DISPLAY_VER(i915) == 20 || IS_BATTLEMAGE(i915));
}
/* get AUD_CONFIG_PIXEL_CLOCK_HDMI_* value for mode */
static u32 audio_config_hdmi_pixel_clock(const struct intel_crtc_state *crtc_state)
{
@ -415,6 +424,9 @@ static void hsw_audio_codec_disable(struct intel_encoder *encoder,
intel_de_rmw(i915, HSW_AUD_PIN_ELD_CP_VLD,
AUDIO_OUTPUT_ENABLE(cpu_transcoder), 0);
if (needs_wa_14020863754(i915))
intel_de_rmw(i915, AUD_CHICKENBIT_REG3, DACBE_DISABLE_MIN_HBLANK_FIX, 0);
mutex_unlock(&i915->display.audio.mutex);
}
@ -540,6 +552,9 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP))
enable_audio_dsc_wa(encoder, crtc_state);
if (needs_wa_14020863754(i915))
intel_de_rmw(i915, AUD_CHICKENBIT_REG3, 0, DACBE_DISABLE_MIN_HBLANK_FIX);
/* Enable audio presence detect */
intel_de_rmw(i915, HSW_AUD_PIN_ELD_CP_VLD,
0, AUDIO_OUTPUT_ENABLE(cpu_transcoder));

View File

@ -164,4 +164,7 @@
_VLV_AUD_PORT_EN_D_DBG)
#define VLV_AMP_MUTE (1 << 1)
#define AUD_CHICKENBIT_REG3 _MMIO(0x65F1C)
#define DACBE_DISABLE_MIN_HBLANK_FIX REG_BIT(18)
#endif /* __INTEL_AUDIO_REGS_H__ */

View File

@ -36,6 +36,7 @@
#include "intel_display.h"
#include "intel_display_types.h"
#include "intel_gmbus.h"
#include "intel_uncore.h"
#define _INTEL_BIOS_PRIVATE
#include "intel_vbt_defs.h"
@ -170,22 +171,22 @@ static const struct {
.min_size = sizeof(struct bdb_driver_features), },
{ .section_id = BDB_SDVO_LVDS_OPTIONS,
.min_size = sizeof(struct bdb_sdvo_lvds_options), },
{ .section_id = BDB_SDVO_PANEL_DTDS,
.min_size = sizeof(struct bdb_sdvo_panel_dtds), },
{ .section_id = BDB_SDVO_LVDS_DTD,
.min_size = sizeof(struct bdb_sdvo_lvds_dtd), },
{ .section_id = BDB_EDP,
.min_size = sizeof(struct bdb_edp), },
{ .section_id = BDB_LVDS_OPTIONS,
.min_size = sizeof(struct bdb_lvds_options), },
{ .section_id = BDB_LFP_OPTIONS,
.min_size = sizeof(struct bdb_lfp_options), },
/*
* BDB_LVDS_LFP_DATA depends on BDB_LVDS_LFP_DATA_PTRS,
* BDB_LFP_DATA depends on BDB_LFP_DATA_PTRS,
* so keep the two ordered.
*/
{ .section_id = BDB_LVDS_LFP_DATA_PTRS,
.min_size = sizeof(struct bdb_lvds_lfp_data_ptrs), },
{ .section_id = BDB_LVDS_LFP_DATA,
{ .section_id = BDB_LFP_DATA_PTRS,
.min_size = sizeof(struct bdb_lfp_data_ptrs), },
{ .section_id = BDB_LFP_DATA,
.min_size = 0, /* special case */ },
{ .section_id = BDB_LVDS_BACKLIGHT,
.min_size = sizeof(struct bdb_lfp_backlight_data), },
{ .section_id = BDB_LFP_BACKLIGHT,
.min_size = sizeof(struct bdb_lfp_backlight), },
{ .section_id = BDB_LFP_POWER,
.min_size = sizeof(struct bdb_lfp_power), },
{ .section_id = BDB_MIPI_CONFIG,
@ -200,30 +201,30 @@ static const struct {
static size_t lfp_data_min_size(struct drm_i915_private *i915)
{
const struct bdb_lvds_lfp_data_ptrs *ptrs;
const struct bdb_lfp_data_ptrs *ptrs;
size_t size;
ptrs = bdb_find_section(i915, BDB_LVDS_LFP_DATA_PTRS);
ptrs = bdb_find_section(i915, BDB_LFP_DATA_PTRS);
if (!ptrs)
return 0;
size = sizeof(struct bdb_lvds_lfp_data);
size = sizeof(struct bdb_lfp_data);
if (ptrs->panel_name.table_size)
size = max(size, ptrs->panel_name.offset +
sizeof(struct bdb_lvds_lfp_data_tail));
sizeof(struct bdb_lfp_data_tail));
return size;
}
static bool validate_lfp_data_ptrs(const void *bdb,
const struct bdb_lvds_lfp_data_ptrs *ptrs)
const struct bdb_lfp_data_ptrs *ptrs)
{
int fp_timing_size, dvo_timing_size, panel_pnp_id_size, panel_name_size;
int data_block_size, lfp_data_size;
const void *data_block;
int i;
data_block = find_raw_section(bdb, BDB_LVDS_LFP_DATA);
data_block = find_raw_section(bdb, BDB_LFP_DATA);
if (!data_block)
return false;
@ -232,7 +233,7 @@ static bool validate_lfp_data_ptrs(const void *bdb,
return false;
/* always 3 indicating the presence of fp_timing+dvo_timing+panel_pnp_id */
if (ptrs->lvds_entries != 3)
if (ptrs->num_entries != 3)
return false;
fp_timing_size = ptrs->ptr[0].fp_timing.table_size;
@ -242,13 +243,13 @@ static bool validate_lfp_data_ptrs(const void *bdb,
/* fp_timing has variable size */
if (fp_timing_size < 32 ||
dvo_timing_size != sizeof(struct lvds_dvo_timing) ||
panel_pnp_id_size != sizeof(struct lvds_pnp_id))
dvo_timing_size != sizeof(struct bdb_edid_dtd) ||
panel_pnp_id_size != sizeof(struct bdb_edid_pnp_id))
return false;
/* panel_name is not present in old VBTs */
if (panel_name_size != 0 &&
panel_name_size != sizeof(struct lvds_lfp_panel_name))
panel_name_size != sizeof(struct bdb_edid_product_name))
return false;
lfp_data_size = ptrs->ptr[1].fp_timing.offset - ptrs->ptr[0].fp_timing.offset;
@ -311,11 +312,11 @@ static bool validate_lfp_data_ptrs(const void *bdb,
/* make the data table offsets relative to the data block */
static bool fixup_lfp_data_ptrs(const void *bdb, void *ptrs_block)
{
struct bdb_lvds_lfp_data_ptrs *ptrs = ptrs_block;
struct bdb_lfp_data_ptrs *ptrs = ptrs_block;
u32 offset;
int i;
offset = raw_block_offset(bdb, BDB_LVDS_LFP_DATA);
offset = raw_block_offset(bdb, BDB_LFP_DATA);
for (i = 0; i < 16; i++) {
if (ptrs->ptr[i].fp_timing.offset < offset ||
@ -338,7 +339,7 @@ static bool fixup_lfp_data_ptrs(const void *bdb, void *ptrs_block)
return validate_lfp_data_ptrs(bdb, ptrs);
}
static int make_lfp_data_ptr(struct lvds_lfp_data_ptr_table *table,
static int make_lfp_data_ptr(struct lfp_data_ptr_table *table,
int table_size, int total_size)
{
if (total_size < table_size)
@ -350,8 +351,8 @@ static int make_lfp_data_ptr(struct lvds_lfp_data_ptr_table *table,
return total_size - table_size;
}
static void next_lfp_data_ptr(struct lvds_lfp_data_ptr_table *next,
const struct lvds_lfp_data_ptr_table *prev,
static void next_lfp_data_ptr(struct lfp_data_ptr_table *next,
const struct lfp_data_ptr_table *prev,
int size)
{
next->table_size = prev->table_size;
@ -362,7 +363,7 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
const void *bdb)
{
int i, size, table_size, block_size, offset, fp_timing_size;
struct bdb_lvds_lfp_data_ptrs *ptrs;
struct bdb_lfp_data_ptrs *ptrs;
const void *block;
void *ptrs_block;
@ -377,7 +378,7 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
fp_timing_size = 38;
block = find_raw_section(bdb, BDB_LVDS_LFP_DATA);
block = find_raw_section(bdb, BDB_LFP_DATA);
if (!block)
return NULL;
@ -385,8 +386,8 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
block_size = get_blocksize(block);
size = fp_timing_size + sizeof(struct lvds_dvo_timing) +
sizeof(struct lvds_pnp_id);
size = fp_timing_size + sizeof(struct bdb_edid_dtd) +
sizeof(struct bdb_edid_pnp_id);
if (size * 16 > block_size)
return NULL;
@ -394,40 +395,40 @@ static void *generate_lfp_data_ptrs(struct drm_i915_private *i915,
if (!ptrs_block)
return NULL;
*(u8 *)(ptrs_block + 0) = BDB_LVDS_LFP_DATA_PTRS;
*(u8 *)(ptrs_block + 0) = BDB_LFP_DATA_PTRS;
*(u16 *)(ptrs_block + 1) = sizeof(*ptrs);
ptrs = ptrs_block + 3;
table_size = sizeof(struct lvds_pnp_id);
table_size = sizeof(struct bdb_edid_pnp_id);
size = make_lfp_data_ptr(&ptrs->ptr[0].panel_pnp_id, table_size, size);
table_size = sizeof(struct lvds_dvo_timing);
table_size = sizeof(struct bdb_edid_dtd);
size = make_lfp_data_ptr(&ptrs->ptr[0].dvo_timing, table_size, size);
table_size = fp_timing_size;
size = make_lfp_data_ptr(&ptrs->ptr[0].fp_timing, table_size, size);
if (ptrs->ptr[0].fp_timing.table_size)
ptrs->lvds_entries++;
ptrs->num_entries++;
if (ptrs->ptr[0].dvo_timing.table_size)
ptrs->lvds_entries++;
ptrs->num_entries++;
if (ptrs->ptr[0].panel_pnp_id.table_size)
ptrs->lvds_entries++;
ptrs->num_entries++;
if (size != 0 || ptrs->lvds_entries != 3) {
if (size != 0 || ptrs->num_entries != 3) {
kfree(ptrs_block);
return NULL;
}
size = fp_timing_size + sizeof(struct lvds_dvo_timing) +
sizeof(struct lvds_pnp_id);
size = fp_timing_size + sizeof(struct bdb_edid_dtd) +
sizeof(struct bdb_edid_pnp_id);
for (i = 1; i < 16; i++) {
next_lfp_data_ptr(&ptrs->ptr[i].fp_timing, &ptrs->ptr[i-1].fp_timing, size);
next_lfp_data_ptr(&ptrs->ptr[i].dvo_timing, &ptrs->ptr[i-1].dvo_timing, size);
next_lfp_data_ptr(&ptrs->ptr[i].panel_pnp_id, &ptrs->ptr[i-1].panel_pnp_id, size);
}
table_size = sizeof(struct lvds_lfp_panel_name);
table_size = sizeof(struct bdb_edid_product_name);
if (16 * (size + table_size) <= block_size) {
ptrs->panel_name.table_size = table_size;
@ -461,7 +462,7 @@ init_bdb_block(struct drm_i915_private *i915,
block = find_raw_section(bdb, section_id);
/* Modern VBTs lack the LFP data table pointers block, make one up */
if (!block && section_id == BDB_LVDS_LFP_DATA_PTRS) {
if (!block && section_id == BDB_LFP_DATA_PTRS) {
temp_block = generate_lfp_data_ptrs(i915, bdb);
if (temp_block)
block = temp_block + 3;
@ -496,7 +497,7 @@ init_bdb_block(struct drm_i915_private *i915,
drm_dbg_kms(&i915->drm, "Found BDB block %d (size %zu, min size %zu)\n",
section_id, block_size, min_size);
if (section_id == BDB_LVDS_LFP_DATA_PTRS &&
if (section_id == BDB_LFP_DATA_PTRS &&
!fixup_lfp_data_ptrs(bdb, entry->data + 3)) {
drm_err(&i915->drm, "VBT has malformed LFP data table pointers\n");
kfree(entry);
@ -515,7 +516,7 @@ static void init_bdb_blocks(struct drm_i915_private *i915,
enum bdb_block_id section_id = bdb_blocks[i].section_id;
size_t min_size = bdb_blocks[i].min_size;
if (section_id == BDB_LVDS_LFP_DATA)
if (section_id == BDB_LFP_DATA)
min_size = lfp_data_min_size(i915);
init_bdb_block(i915, bdb, section_id, min_size);
@ -525,7 +526,7 @@ static void init_bdb_blocks(struct drm_i915_private *i915,
static void
fill_detail_timing_data(struct drm_i915_private *i915,
struct drm_display_mode *panel_fixed_mode,
const struct lvds_dvo_timing *dvo_timing)
const struct bdb_edid_dtd *dvo_timing)
{
panel_fixed_mode->hdisplay = (dvo_timing->hactive_hi << 8) |
dvo_timing->hactive_lo;
@ -579,36 +580,36 @@ fill_detail_timing_data(struct drm_i915_private *i915,
drm_mode_set_name(panel_fixed_mode);
}
static const struct lvds_dvo_timing *
get_lvds_dvo_timing(const struct bdb_lvds_lfp_data *data,
const struct bdb_lvds_lfp_data_ptrs *ptrs,
int index)
static const struct bdb_edid_dtd *
get_lfp_dvo_timing(const struct bdb_lfp_data *data,
const struct bdb_lfp_data_ptrs *ptrs,
int index)
{
return (const void *)data + ptrs->ptr[index].dvo_timing.offset;
}
static const struct lvds_fp_timing *
get_lvds_fp_timing(const struct bdb_lvds_lfp_data *data,
const struct bdb_lvds_lfp_data_ptrs *ptrs,
int index)
static const struct fp_timing *
get_lfp_fp_timing(const struct bdb_lfp_data *data,
const struct bdb_lfp_data_ptrs *ptrs,
int index)
{
return (const void *)data + ptrs->ptr[index].fp_timing.offset;
}
static const struct drm_edid_product_id *
get_lvds_pnp_id(const struct bdb_lvds_lfp_data *data,
const struct bdb_lvds_lfp_data_ptrs *ptrs,
int index)
get_lfp_pnp_id(const struct bdb_lfp_data *data,
const struct bdb_lfp_data_ptrs *ptrs,
int index)
{
/* These two are supposed to have the same layout in memory. */
BUILD_BUG_ON(sizeof(struct lvds_pnp_id) != sizeof(struct drm_edid_product_id));
BUILD_BUG_ON(sizeof(struct bdb_edid_pnp_id) != sizeof(struct drm_edid_product_id));
return (const void *)data + ptrs->ptr[index].panel_pnp_id.offset;
}
static const struct bdb_lvds_lfp_data_tail *
get_lfp_data_tail(const struct bdb_lvds_lfp_data *data,
const struct bdb_lvds_lfp_data_ptrs *ptrs)
static const struct bdb_lfp_data_tail *
get_lfp_data_tail(const struct bdb_lfp_data *data,
const struct bdb_lfp_data_ptrs *ptrs)
{
if (ptrs->panel_name.table_size)
return (const void *)data + ptrs->panel_name.offset;
@ -627,33 +628,33 @@ static int vbt_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct drm_edid *drm_edid, bool use_fallback)
{
const struct bdb_lvds_options *lvds_options;
const struct bdb_lfp_options *lfp_options;
lvds_options = bdb_find_section(i915, BDB_LVDS_OPTIONS);
if (!lvds_options)
lfp_options = bdb_find_section(i915, BDB_LFP_OPTIONS);
if (!lfp_options)
return -1;
if (lvds_options->panel_type > 0xf &&
lvds_options->panel_type != 0xff) {
if (lfp_options->panel_type > 0xf &&
lfp_options->panel_type != 0xff) {
drm_dbg_kms(&i915->drm, "Invalid VBT panel type 0x%x\n",
lvds_options->panel_type);
lfp_options->panel_type);
return -1;
}
if (devdata && devdata->child.handle == DEVICE_HANDLE_LFP2)
return lvds_options->panel_type2;
return lfp_options->panel_type2;
drm_WARN_ON(&i915->drm, devdata && devdata->child.handle != DEVICE_HANDLE_LFP1);
return lvds_options->panel_type;
return lfp_options->panel_type;
}
static int pnpid_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct drm_edid *drm_edid, bool use_fallback)
{
const struct bdb_lvds_lfp_data *data;
const struct bdb_lvds_lfp_data_ptrs *ptrs;
const struct bdb_lfp_data *data;
const struct bdb_lfp_data_ptrs *ptrs;
struct drm_edid_product_id product_id, product_id_nodate;
struct drm_printer p;
int i, best = -1;
@ -670,17 +671,17 @@ static int pnpid_get_panel_type(struct drm_i915_private *i915,
p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, "EDID");
drm_edid_print_product_id(&p, &product_id, true);
ptrs = bdb_find_section(i915, BDB_LVDS_LFP_DATA_PTRS);
ptrs = bdb_find_section(i915, BDB_LFP_DATA_PTRS);
if (!ptrs)
return -1;
data = bdb_find_section(i915, BDB_LVDS_LFP_DATA);
data = bdb_find_section(i915, BDB_LFP_DATA);
if (!data)
return -1;
for (i = 0; i < 16; i++) {
const struct drm_edid_product_id *vbt_id =
get_lvds_pnp_id(data, ptrs, i);
get_lfp_pnp_id(data, ptrs, i);
/* full match? */
if (!memcmp(vbt_id, &product_id, sizeof(*vbt_id)))
@ -786,25 +787,25 @@ static void
parse_panel_options(struct drm_i915_private *i915,
struct intel_panel *panel)
{
const struct bdb_lvds_options *lvds_options;
const struct bdb_lfp_options *lfp_options;
int panel_type = panel->vbt.panel_type;
int drrs_mode;
lvds_options = bdb_find_section(i915, BDB_LVDS_OPTIONS);
if (!lvds_options)
lfp_options = bdb_find_section(i915, BDB_LFP_OPTIONS);
if (!lfp_options)
return;
panel->vbt.lvds_dither = lvds_options->pixel_dither;
panel->vbt.lvds_dither = lfp_options->pixel_dither;
/*
* Empirical evidence indicates the block size can be
* either 4, 14, 16 or 24+ bytes. For older VBTs there is no clear
* relationship between the block size and the BDB version.
*/
if (get_blocksize(lvds_options) < 16)
if (get_blocksize(lfp_options) < 16)
return;
drrs_mode = panel_bits(lvds_options->dps_panel_type_bits,
drrs_mode = panel_bits(lfp_options->dps_panel_type_bits,
panel_type, 2);
/*
* VBT has static DRRS = 0 and seamless DRRS = 2.
@ -832,17 +833,17 @@ parse_panel_options(struct drm_i915_private *i915,
static void
parse_lfp_panel_dtd(struct drm_i915_private *i915,
struct intel_panel *panel,
const struct bdb_lvds_lfp_data *lvds_lfp_data,
const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs)
const struct bdb_lfp_data *lfp_data,
const struct bdb_lfp_data_ptrs *lfp_data_ptrs)
{
const struct lvds_dvo_timing *panel_dvo_timing;
const struct lvds_fp_timing *fp_timing;
const struct bdb_edid_dtd *panel_dvo_timing;
const struct fp_timing *fp_timing;
struct drm_display_mode *panel_fixed_mode;
int panel_type = panel->vbt.panel_type;
panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data,
lvds_lfp_data_ptrs,
panel_type);
panel_dvo_timing = get_lfp_dvo_timing(lfp_data,
lfp_data_ptrs,
panel_type);
panel_fixed_mode = kzalloc(sizeof(*panel_fixed_mode), GFP_KERNEL);
if (!panel_fixed_mode)
@ -850,15 +851,15 @@ parse_lfp_panel_dtd(struct drm_i915_private *i915,
fill_detail_timing_data(i915, panel_fixed_mode, panel_dvo_timing);
panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
panel->vbt.lfp_vbt_mode = panel_fixed_mode;
drm_dbg_kms(&i915->drm,
"Found panel mode in BIOS VBT legacy lfp table: " DRM_MODE_FMT "\n",
DRM_MODE_ARG(panel_fixed_mode));
fp_timing = get_lvds_fp_timing(lvds_lfp_data,
lvds_lfp_data_ptrs,
panel_type);
fp_timing = get_lfp_fp_timing(lfp_data,
lfp_data_ptrs,
panel_type);
/* check the resolution, just to be sure */
if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
@ -874,25 +875,25 @@ static void
parse_lfp_data(struct drm_i915_private *i915,
struct intel_panel *panel)
{
const struct bdb_lvds_lfp_data *data;
const struct bdb_lvds_lfp_data_tail *tail;
const struct bdb_lvds_lfp_data_ptrs *ptrs;
const struct bdb_lfp_data *data;
const struct bdb_lfp_data_tail *tail;
const struct bdb_lfp_data_ptrs *ptrs;
const struct drm_edid_product_id *pnp_id;
struct drm_printer p;
int panel_type = panel->vbt.panel_type;
ptrs = bdb_find_section(i915, BDB_LVDS_LFP_DATA_PTRS);
ptrs = bdb_find_section(i915, BDB_LFP_DATA_PTRS);
if (!ptrs)
return;
data = bdb_find_section(i915, BDB_LVDS_LFP_DATA);
data = bdb_find_section(i915, BDB_LFP_DATA);
if (!data)
return;
if (!panel->vbt.lfp_lvds_vbt_mode)
if (!panel->vbt.lfp_vbt_mode)
parse_lfp_panel_dtd(i915, panel, data, ptrs);
pnp_id = get_lvds_pnp_id(data, ptrs, panel_type);
pnp_id = get_lfp_pnp_id(data, ptrs, panel_type);
p = drm_dbg_printer(&i915->drm, DRM_UT_KMS, "Panel");
drm_edid_print_product_id(&p, pnp_id, false);
@ -1001,19 +1002,19 @@ parse_generic_dtd(struct drm_i915_private *i915,
"Found panel mode in BIOS VBT generic dtd table: " DRM_MODE_FMT "\n",
DRM_MODE_ARG(panel_fixed_mode));
panel->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
panel->vbt.lfp_vbt_mode = panel_fixed_mode;
}
static void
parse_lfp_backlight(struct drm_i915_private *i915,
struct intel_panel *panel)
{
const struct bdb_lfp_backlight_data *backlight_data;
const struct bdb_lfp_backlight *backlight_data;
const struct lfp_backlight_data_entry *entry;
int panel_type = panel->vbt.panel_type;
u16 level;
backlight_data = bdb_find_section(i915, BDB_LVDS_BACKLIGHT);
backlight_data = bdb_find_section(i915, BDB_LFP_BACKLIGHT);
if (!backlight_data)
return;
@ -1091,19 +1092,18 @@ parse_lfp_backlight(struct drm_i915_private *i915,
panel->vbt.backlight.controller);
}
/* Try to find sdvo panel data */
static void
parse_sdvo_panel_data(struct drm_i915_private *i915,
struct intel_panel *panel)
parse_sdvo_lvds_data(struct drm_i915_private *i915,
struct intel_panel *panel)
{
const struct bdb_sdvo_panel_dtds *dtds;
const struct bdb_sdvo_lvds_dtd *dtd;
struct drm_display_mode *panel_fixed_mode;
int index;
index = i915->display.params.vbt_sdvo_panel_type;
if (index == -2) {
drm_dbg_kms(&i915->drm,
"Ignore SDVO panel mode from BIOS VBT tables.\n");
"Ignore SDVO LVDS mode from BIOS VBT tables.\n");
return;
}
@ -1117,20 +1117,32 @@ parse_sdvo_panel_data(struct drm_i915_private *i915,
index = sdvo_lvds_options->panel_type;
}
dtds = bdb_find_section(i915, BDB_SDVO_PANEL_DTDS);
if (!dtds)
dtd = bdb_find_section(i915, BDB_SDVO_LVDS_DTD);
if (!dtd)
return;
/*
* This should not happen, as long as the panel_type
* enumeration doesn't grow over 4 items. But if it does, it
* could lead to hard-to-detect bugs, so better double-check
* it here to be sure.
*/
if (index >= ARRAY_SIZE(dtd->dtd)) {
drm_err(&i915->drm, "index %d is larger than dtd->dtd[4] array\n",
index);
return;
}
panel_fixed_mode = kzalloc(sizeof(*panel_fixed_mode), GFP_KERNEL);
if (!panel_fixed_mode)
return;
fill_detail_timing_data(i915, panel_fixed_mode, &dtds->dtds[index]);
fill_detail_timing_data(i915, panel_fixed_mode, &dtd->dtd[index]);
panel->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
drm_dbg_kms(&i915->drm,
"Found SDVO panel mode in BIOS VBT tables: " DRM_MODE_FMT "\n",
"Found SDVO LVDS mode in BIOS VBT tables: " DRM_MODE_FMT "\n",
DRM_MODE_ARG(panel_fixed_mode));
}
@ -1513,6 +1525,10 @@ parse_edp(struct drm_i915_private *i915,
if (i915->display.vbt.version >= 244)
panel->vbt.edp.max_link_rate =
edp->edp_max_port_link_rate[panel_type] * 20;
if (i915->display.vbt.version >= 251)
panel->vbt.edp.dsc_disable =
panel_bool(edp->edp_dsc_disable, panel_type);
}
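/*
 * Sketch of the per-panel bit lookup above, assuming panel_bool() simply
 * extracts bit 'panel_type' from the field, i.e. (value >> panel_type) & 1:
 * with edp_dsc_disable == 0x0004 and panel_type == 2, bit 2 is set, so
 * panel->vbt.edp.dsc_disable becomes true and DSC is kept off for that
 * eDP panel.
 */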
static void
@ -1677,7 +1693,7 @@ parse_mipi_config(struct drm_i915_private *i915,
panel->vbt.dsi.panel_id = MIPI_DSI_UNDEFINED_PANEL_ID;
/* Block #40 is already parsed and panel_fixed_mode is
* stored in i915->lfp_lvds_vbt_mode
* stored in i915->lfp_vbt_mode
* reuse this when needed
*/
@ -2220,15 +2236,14 @@ static u8 map_ddc_pin(struct drm_i915_private *i915, u8 vbt_pin)
const u8 *ddc_pin_map;
int i, n_entries;
if (IS_DGFX(i915))
return vbt_pin;
if (INTEL_PCH_TYPE(i915) >= PCH_MTL || IS_ALDERLAKE_P(i915)) {
ddc_pin_map = adlp_ddc_pin_map;
n_entries = ARRAY_SIZE(adlp_ddc_pin_map);
} else if (IS_ALDERLAKE_S(i915)) {
ddc_pin_map = adls_ddc_pin_map;
n_entries = ARRAY_SIZE(adls_ddc_pin_map);
} else if (INTEL_PCH_TYPE(i915) >= PCH_DG1) {
return vbt_pin;
} else if (IS_ROCKETLAKE(i915) && INTEL_PCH_TYPE(i915) == PCH_TGP) {
ddc_pin_map = rkl_pch_tgp_ddc_pin_map;
n_entries = ARRAY_SIZE(rkl_pch_tgp_ddc_pin_map);
@ -3258,7 +3273,7 @@ static void intel_bios_init_panel(struct drm_i915_private *i915,
parse_generic_dtd(i915, panel);
parse_lfp_data(i915, panel);
parse_lfp_backlight(i915, panel);
parse_sdvo_panel_data(i915, panel);
parse_sdvo_lvds_data(i915, panel);
parse_panel_driver_features(i915, panel);
parse_power_conservation_features(i915, panel);
parse_edp(i915, panel);
@ -3307,8 +3322,8 @@ void intel_bios_fini_panel(struct intel_panel *panel)
{
kfree(panel->vbt.sdvo_lvds_vbt_mode);
panel->vbt.sdvo_lvds_vbt_mode = NULL;
kfree(panel->vbt.lfp_lvds_vbt_mode);
panel->vbt.lfp_lvds_vbt_mode = NULL;
kfree(panel->vbt.lfp_vbt_mode);
panel->vbt.lfp_vbt_mode = NULL;
kfree(panel->vbt.dsi.data);
panel->vbt.dsi.data = NULL;
kfree(panel->vbt.dsi.pps);

View File

@ -22,6 +22,8 @@ struct intel_qgv_point {
u16 dclk, t_rp, t_rdpre, t_rc, t_ras, t_rcd;
};
#define DEPROGBWPCLIMIT 60
struct intel_psf_gv_point {
u8 clk; /* clock in multiples of 16.6666 MHz */
};
@ -241,6 +243,9 @@ static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
qi->channel_width = 16;
qi->deinterleave = 4;
break;
case INTEL_DRAM_GDDR:
qi->channel_width = 32;
break;
default:
MISSING_CASE(dram_info->type);
return -EINVAL;
@ -387,6 +392,12 @@ static const struct intel_sa_info mtl_sa_info = {
.derating = 10,
};
static const struct intel_sa_info xe2_hpd_sa_info = {
.derating = 30,
.deprogbwlimit = 53,
/* Other values not used by simplified algorithm */
};
static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel_sa_info *sa)
{
struct intel_qgv_info qi = {};
@ -493,7 +504,7 @@ static int tgl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
dclk_max = icl_sagv_max_dclk(&qi);
peakbw = num_channels * DIV_ROUND_UP(qi.channel_width, 8) * dclk_max;
maxdebw = min(sa->deprogbwlimit * 1000, peakbw * 6 / 10); /* 60% */
maxdebw = min(sa->deprogbwlimit * 1000, peakbw * DEPROGBWPCLIMIT / 100);
ipqdepth = min(ipqdepthpch, sa->displayrtids / num_channels);
/*
@ -598,6 +609,54 @@ static void dg2_get_bw_info(struct drm_i915_private *i915)
i915->display.sagv.status = I915_SAGV_NOT_CONTROLLED;
}
static int xe2_hpd_get_bw_info(struct drm_i915_private *i915,
const struct intel_sa_info *sa)
{
struct intel_qgv_info qi = {};
int num_channels = i915->dram_info.num_channels;
int peakbw, maxdebw;
int ret, i;
ret = icl_get_qgv_points(i915, &qi, true);
if (ret) {
drm_dbg_kms(&i915->drm,
"Failed to get memory subsystem information, ignoring bandwidth limits");
return ret;
}
peakbw = num_channels * qi.channel_width / 8 * icl_sagv_max_dclk(&qi);
maxdebw = min(sa->deprogbwlimit * 1000, peakbw * DEPROGBWPCLIMIT / 100);
for (i = 0; i < qi.num_points; i++) {
const struct intel_qgv_point *point = &qi.points[i];
int bw = num_channels * (qi.channel_width / 8) * point->dclk;
i915->display.bw.max[0].deratedbw[i] =
min(maxdebw, (100 - sa->derating) * bw / 100);
i915->display.bw.max[0].peakbw[i] = bw;
drm_dbg_kms(&i915->drm, "QGV %d: deratedbw=%u peakbw: %u\n",
i, i915->display.bw.max[0].deratedbw[i],
i915->display.bw.max[0].peakbw[i]);
}
/* Bandwidth does not depend on # of planes; set all groups the same */
i915->display.bw.max[0].num_planes = 1;
i915->display.bw.max[0].num_qgv_points = qi.num_points;
for (i = 1; i < ARRAY_SIZE(i915->display.bw.max); i++)
memcpy(&i915->display.bw.max[i], &i915->display.bw.max[0],
sizeof(i915->display.bw.max[0]));
/*
* Xe2_HPD should always have exactly two QGV points representing
* battery and plugged-in operation.
*/
drm_WARN_ON(&i915->drm, qi.num_points != 2);
i915->display.sagv.status = I915_SAGV_ENABLED;
return 0;
}
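/*
 * Illustrative sketch of the derating math above, with made-up numbers:
 * 4 channels of 32 bits, a single QGV point at dclk = 2000 (so peakbw
 * equals bw), deprogbwlimit = 53 and derating = 30, units elided:
 *
 *   bw        = 4 * (32 / 8) * 2000                  = 32000
 *   maxdebw   = min(53 * 1000, 32000 * 60 / 100)     = 19200
 *   deratedbw = min(19200, (100 - 30) * 32000 / 100) = 19200
 */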
static unsigned int icl_max_bw_index(struct drm_i915_private *dev_priv,
int num_planes, int qgv_point)
{
@ -684,7 +743,9 @@ void intel_bw_init_hw(struct drm_i915_private *dev_priv)
if (!HAS_DISPLAY(dev_priv))
return;
if (DISPLAY_VER(dev_priv) >= 14)
if (DISPLAY_VER_FULL(dev_priv) >= IP_VER(14, 1) && IS_DGFX(dev_priv))
xe2_hpd_get_bw_info(dev_priv, &xe2_hpd_sa_info);
else if (DISPLAY_VER(dev_priv) >= 14)
tgl_get_bw_info(dev_priv, &mtl_sa_info);
else if (IS_DG2(dev_priv))
dg2_get_bw_info(dev_priv);

View File

@ -23,6 +23,7 @@
#include <linux/time.h>
#include "soc/intel_dram.h"
#include "hsw_ips.h"
#include "i915_reg.h"
#include "intel_atomic.h"
@ -113,7 +114,7 @@ struct intel_cdclk_funcs {
void (*set_cdclk)(struct drm_i915_private *i915,
const struct intel_cdclk_config *cdclk_config,
enum pipe pipe);
int (*modeset_calc_cdclk)(struct intel_cdclk_state *state);
int (*modeset_calc_cdclk)(struct intel_atomic_state *state);
u8 (*calc_voltage_level)(int cdclk);
};
@ -130,10 +131,11 @@ static void intel_cdclk_set_cdclk(struct drm_i915_private *dev_priv,
dev_priv->display.funcs.cdclk->set_cdclk(dev_priv, cdclk_config, pipe);
}
static int intel_cdclk_modeset_calc_cdclk(struct drm_i915_private *dev_priv,
struct intel_cdclk_state *cdclk_config)
static int intel_cdclk_modeset_calc_cdclk(struct intel_atomic_state *state)
{
return dev_priv->display.funcs.cdclk->modeset_calc_cdclk(cdclk_config);
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
return dev_priv->display.funcs.cdclk->modeset_calc_cdclk(state);
}
static u8 intel_cdclk_calc_voltage_level(struct drm_i915_private *dev_priv,
@ -1443,6 +1445,14 @@ static const struct intel_cdclk_vals xe2lpd_cdclk_table[] = {
{}
};
/*
* Xe2_HPD always uses the minimal cdclk table from Wa_15015413771
*/
static const struct intel_cdclk_vals xe2hpd_cdclk_table[] = {
{ .refclk = 38400, .cdclk = 652800, .ratio = 34, .waveform = 0xffff },
{}
};
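/*
 * Sanity sketch of this single entry, assuming the usual bxt+ relationship
 * of VCO = refclk * ratio and a cd2x divider of 2 with a full squash
 * waveform: 38400 kHz * 34 = 1305600 kHz VCO, and 1305600 / 2 = 652800 kHz
 * cdclk, i.e. the one fixed frequency required by Wa_15015413771.
 */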
static const int cdclk_squash_len = 16;
static int cdclk_squash_divider(u16 waveform)
@ -2723,7 +2733,7 @@ static int intel_vdsc_min_cdclk(const struct intel_crtc_state *crtc_state)
min_cdclk = max_t(int, min_cdclk,
DIV_ROUND_UP(crtc_state->pixel_rate, num_vdsc_instances));
if (crtc_state->bigjoiner_pipes) {
if (crtc_state->joiner_pipes) {
int pixel_clock = intel_dp_mode_to_fec_clock(crtc_state->hw.adjusted_mode.clock);
/*
@ -2826,10 +2836,11 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
return min_cdclk;
}
static int intel_compute_min_cdclk(struct intel_cdclk_state *cdclk_state)
static int intel_compute_min_cdclk(struct intel_atomic_state *state)
{
struct intel_atomic_state *state = cdclk_state->base.state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
const struct intel_bw_state *bw_state;
struct intel_crtc *crtc;
struct intel_crtc_state *crtc_state;
@ -2908,10 +2919,11 @@ static int intel_compute_min_cdclk(struct intel_cdclk_state *cdclk_state)
* future platforms this code will need to be
* adjusted.
*/
static int bxt_compute_min_voltage_level(struct intel_cdclk_state *cdclk_state)
static int bxt_compute_min_voltage_level(struct intel_atomic_state *state)
{
struct intel_atomic_state *state = cdclk_state->base.state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
struct intel_crtc *crtc;
struct intel_crtc_state *crtc_state;
u8 min_voltage_level;
@ -2944,13 +2956,14 @@ static int bxt_compute_min_voltage_level(struct intel_cdclk_state *cdclk_state)
return min_voltage_level;
}
static int vlv_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
static int vlv_modeset_calc_cdclk(struct intel_atomic_state *state)
{
struct intel_atomic_state *state = cdclk_state->base.state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
int min_cdclk, cdclk;
min_cdclk = intel_compute_min_cdclk(cdclk_state);
min_cdclk = intel_compute_min_cdclk(state);
if (min_cdclk < 0)
return min_cdclk;
@ -2973,11 +2986,13 @@ static int vlv_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
return 0;
}
static int bdw_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
static int bdw_modeset_calc_cdclk(struct intel_atomic_state *state)
{
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
int min_cdclk, cdclk;
min_cdclk = intel_compute_min_cdclk(cdclk_state);
min_cdclk = intel_compute_min_cdclk(state);
if (min_cdclk < 0)
return min_cdclk;
@ -3000,10 +3015,11 @@ static int bdw_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
return 0;
}
static int skl_dpll0_vco(struct intel_cdclk_state *cdclk_state)
static int skl_dpll0_vco(struct intel_atomic_state *state)
{
struct intel_atomic_state *state = cdclk_state->base.state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
struct intel_crtc *crtc;
struct intel_crtc_state *crtc_state;
int vco, i;
@ -3037,15 +3053,17 @@ static int skl_dpll0_vco(struct intel_cdclk_state *cdclk_state)
return vco;
}
static int skl_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
static int skl_modeset_calc_cdclk(struct intel_atomic_state *state)
{
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
int min_cdclk, cdclk, vco;
min_cdclk = intel_compute_min_cdclk(cdclk_state);
min_cdclk = intel_compute_min_cdclk(state);
if (min_cdclk < 0)
return min_cdclk;
vco = skl_dpll0_vco(cdclk_state);
vco = skl_dpll0_vco(state);
cdclk = skl_calc_cdclk(min_cdclk, vco);
@ -3068,17 +3086,18 @@ static int skl_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
return 0;
}
static int bxt_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
static int bxt_modeset_calc_cdclk(struct intel_atomic_state *state)
{
struct intel_atomic_state *state = cdclk_state->base.state;
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_cdclk_state *cdclk_state =
intel_atomic_get_new_cdclk_state(state);
int min_cdclk, min_voltage_level, cdclk, vco;
min_cdclk = intel_compute_min_cdclk(cdclk_state);
min_cdclk = intel_compute_min_cdclk(state);
if (min_cdclk < 0)
return min_cdclk;
min_voltage_level = bxt_compute_min_voltage_level(cdclk_state);
min_voltage_level = bxt_compute_min_voltage_level(state);
if (min_voltage_level < 0)
return min_voltage_level;
@ -3106,7 +3125,7 @@ static int bxt_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
return 0;
}
static int fixed_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
static int fixed_modeset_calc_cdclk(struct intel_atomic_state *state)
{
int min_cdclk;
@ -3115,7 +3134,7 @@ static int fixed_modeset_calc_cdclk(struct intel_cdclk_state *cdclk_state)
* check that the required minimum frequency doesn't exceed
* the actual cdclk frequency.
*/
min_cdclk = intel_compute_min_cdclk(cdclk_state);
min_cdclk = intel_compute_min_cdclk(state);
if (min_cdclk < 0)
return min_cdclk;
@ -3255,7 +3274,7 @@ int intel_modeset_calc_cdclk(struct intel_atomic_state *state)
new_cdclk_state->active_pipes =
intel_calc_active_pipes(state, old_cdclk_state->active_pipes);
ret = intel_cdclk_modeset_calc_cdclk(dev_priv, new_cdclk_state);
ret = intel_cdclk_modeset_calc_cdclk(state);
if (ret)
return ret;
@ -3521,60 +3540,10 @@ static int vlv_hrawclk(struct drm_i915_private *dev_priv)
CCK_DISPLAY_REF_CLOCK_CONTROL);
}
static int i9xx_hrawclk(struct drm_i915_private *dev_priv)
static int i9xx_hrawclk(struct drm_i915_private *i915)
{
u32 clkcfg;
/*
* hrawclock is 1/4 the FSB frequency
*
* Note that this only reads the state of the FSB
* straps, not the actual FSB frequency. Some BIOSen
* let you configure each independently. Ideally we'd
* read out the actual FSB frequency but sadly we
* don't know which registers have that information,
* and all the relevant docs have gone to bit heaven :(
*/
clkcfg = intel_de_read(dev_priv, CLKCFG) & CLKCFG_FSB_MASK;
if (IS_MOBILE(dev_priv)) {
switch (clkcfg) {
case CLKCFG_FSB_400:
return 100000;
case CLKCFG_FSB_533:
return 133333;
case CLKCFG_FSB_667:
return 166667;
case CLKCFG_FSB_800:
return 200000;
case CLKCFG_FSB_1067:
return 266667;
case CLKCFG_FSB_1333:
return 333333;
default:
MISSING_CASE(clkcfg);
return 133333;
}
} else {
switch (clkcfg) {
case CLKCFG_FSB_400_ALT:
return 100000;
case CLKCFG_FSB_533:
return 133333;
case CLKCFG_FSB_667:
return 166667;
case CLKCFG_FSB_800:
return 200000;
case CLKCFG_FSB_1067_ALT:
return 266667;
case CLKCFG_FSB_1333_ALT:
return 333333;
case CLKCFG_FSB_1600_ALT:
return 400000;
default:
return 133333;
}
}
/* hrawclock is 1/4 the FSB frequency */
return DIV_ROUND_CLOSEST(i9xx_fsb_freq(i915), 4);
}
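/*
 * Sketch: if i9xx_fsb_freq() reports a 400 MT/s FSB strap as 400000 kHz,
 * then hrawclk = DIV_ROUND_CLOSEST(400000, 4) = 100000 kHz, matching the
 * CLKCFG_FSB_400 -> 100000 case in the removed hand-rolled switch.
 */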
/**
@ -3778,6 +3747,9 @@ void intel_init_cdclk_hooks(struct drm_i915_private *dev_priv)
if (DISPLAY_VER(dev_priv) >= 20) {
dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs;
dev_priv->display.cdclk.table = xe2lpd_cdclk_table;
} else if (DISPLAY_VER_FULL(dev_priv) >= IP_VER(14, 1)) {
dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs;
dev_priv->display.cdclk.table = xe2hpd_cdclk_table;
} else if (DISPLAY_VER(dev_priv) >= 14) {
dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs;
dev_priv->display.cdclk.table = mtl_cdclk_table;

View File

@ -22,7 +22,7 @@
*
*/
#include "i915_reg.h"
#include "i9xx_plane_regs.h"
#include "intel_color.h"
#include "intel_color_regs.h"
#include "intel_de.h"
@ -30,7 +30,8 @@
#include "intel_dsb.h"
struct intel_color_funcs {
int (*color_check)(struct intel_crtc_state *crtc_state);
int (*color_check)(struct intel_atomic_state *state,
struct intel_crtc *crtc);
/*
* Program non-arming double buffered color management registers
* before vblank evasion. The registers should then latch after
@ -1038,7 +1039,7 @@ static void i9xx_get_config(struct intel_crtc_state *crtc_state)
enum i9xx_plane_id i9xx_plane = plane->i9xx_plane;
u32 tmp;
tmp = intel_de_read(dev_priv, DSPCNTR(i9xx_plane));
tmp = intel_de_read(dev_priv, DSPCNTR(dev_priv, i9xx_plane));
if (tmp & DISP_PIPE_GAMMA_ENABLE)
crtc_state->gamma_enable = true;
@ -1284,9 +1285,9 @@ static void i965_load_lut_10p6(struct intel_crtc *crtc,
i965_lut_10p6_udw(&lut[i]));
}
intel_de_write_fw(dev_priv, PIPEGCMAX(pipe, 0), lut[i].red);
intel_de_write_fw(dev_priv, PIPEGCMAX(pipe, 1), lut[i].green);
intel_de_write_fw(dev_priv, PIPEGCMAX(pipe, 2), lut[i].blue);
intel_de_write_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 0), lut[i].red);
intel_de_write_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 1), lut[i].green);
intel_de_write_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 2), lut[i].blue);
}
static void i965_load_luts(const struct intel_crtc_state *crtc_state)
@ -1913,7 +1914,7 @@ void intel_color_prepare_commit(struct intel_crtc_state *crtc_state)
if (!crtc_state->pre_csc_lut && !crtc_state->post_csc_lut)
return;
crtc_state->dsb = intel_dsb_prepare(crtc_state, 1024);
crtc_state->dsb = intel_dsb_prepare(crtc_state, INTEL_DSB_0, 1024);
if (!crtc_state->dsb)
return;
@ -1942,11 +1943,9 @@ bool intel_color_uses_dsb(const struct intel_crtc_state *crtc_state)
return crtc_state->dsb;
}
static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
static bool intel_can_preload_luts(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
@ -1954,11 +1953,9 @@ static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state
!old_crtc_state->pre_csc_lut;
}
static bool vlv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
static bool vlv_can_preload_luts(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
@ -1966,13 +1963,13 @@ static bool vlv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
!old_crtc_state->post_csc_lut;
}
static bool chv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
static bool chv_can_preload_luts(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
const struct intel_crtc_state *new_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
/*
* CGM_PIPE_MODE is itself single buffered. We'd have to
@ -1982,14 +1979,29 @@ static bool chv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
if (old_crtc_state->cgm_mode || new_crtc_state->cgm_mode)
return false;
return vlv_can_preload_luts(new_crtc_state);
return vlv_can_preload_luts(state, crtc);
}
int intel_color_check(struct intel_crtc_state *crtc_state)
int intel_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct drm_i915_private *i915 = to_i915(state->base.dev);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
struct intel_crtc_state *new_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
return i915->display.funcs.color->color_check(crtc_state);
/*
* May need to update pipe gamma enable bits
* when C8 planes are getting enabled/disabled.
*/
if (!old_crtc_state->c8_planes != !new_crtc_state->c8_planes)
new_crtc_state->uapi.color_mgmt_changed = true;
if (!intel_crtc_needs_color_update(new_crtc_state))
return 0;
return i915->display.funcs.color->color_check(state, crtc);
}
void intel_color_get_config(struct intel_crtc_state *crtc_state)
@ -2039,14 +2051,14 @@ static bool need_plane_update(struct intel_plane *plane,
}
static int
intel_color_add_affected_planes(struct intel_crtc_state *new_crtc_state)
intel_color_add_affected_planes(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->uapi.state);
struct drm_i915_private *i915 = to_i915(state->base.dev);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
struct intel_crtc_state *new_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
struct intel_plane *plane;
if (!new_crtc_state->hw.active ||
@ -2240,9 +2252,12 @@ static void intel_assign_luts(struct intel_crtc_state *crtc_state)
crtc_state->hw.gamma_lut);
}
static int i9xx_color_check(struct intel_crtc_state *crtc_state)
static int i9xx_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct drm_i915_private *i915 = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2262,13 +2277,13 @@ static int i9xx_color_check(struct intel_crtc_state *crtc_state)
return ret;
}
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
intel_assign_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(state, crtc);
return 0;
}
@ -2277,8 +2292,11 @@ static int i9xx_color_check(struct intel_crtc_state *crtc_state)
* VLV color pipeline:
* u0.10 -> WGC csc -> u0.10 -> pipe gamma -> u0.10
*/
static int vlv_color_check(struct intel_crtc_state *crtc_state)
static int vlv_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2293,7 +2311,7 @@ static int vlv_color_check(struct intel_crtc_state *crtc_state)
crtc_state->wgc_enable = crtc_state->hw.ctm;
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
@ -2301,7 +2319,7 @@ static int vlv_color_check(struct intel_crtc_state *crtc_state)
vlv_assign_csc(crtc_state);
crtc_state->preload_luts = vlv_can_preload_luts(crtc_state);
crtc_state->preload_luts = vlv_can_preload_luts(state, crtc);
return 0;
}
@ -2336,8 +2354,11 @@ static u32 chv_cgm_mode(const struct intel_crtc_state *crtc_state)
* We always bypass the WGC csc and use the CGM csc
* instead since it has degamma and better precision.
*/
static int chv_color_check(struct intel_crtc_state *crtc_state)
static int chv_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2362,7 +2383,7 @@ static int chv_color_check(struct intel_crtc_state *crtc_state)
*/
crtc_state->wgc_enable = false;
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
@ -2370,7 +2391,7 @@ static int chv_color_check(struct intel_crtc_state *crtc_state)
chv_assign_csc(crtc_state);
crtc_state->preload_luts = chv_can_preload_luts(crtc_state);
crtc_state->preload_luts = chv_can_preload_luts(state, crtc);
return 0;
}
@ -2454,9 +2475,12 @@ static int ilk_assign_luts(struct intel_crtc_state *crtc_state)
return 0;
}
static int ilk_color_check(struct intel_crtc_state *crtc_state)
static int ilk_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct drm_i915_private *i915 = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2484,7 +2508,7 @@ static int ilk_color_check(struct intel_crtc_state *crtc_state)
crtc_state->csc_mode = ilk_csc_mode(crtc_state);
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
@ -2494,7 +2518,7 @@ static int ilk_color_check(struct intel_crtc_state *crtc_state)
ilk_assign_csc(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(state, crtc);
return 0;
}
@ -2555,9 +2579,12 @@ static int ivb_assign_luts(struct intel_crtc_state *crtc_state)
return 0;
}
static int ivb_color_check(struct intel_crtc_state *crtc_state)
static int ivb_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct drm_i915_private *i915 = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2592,7 +2619,7 @@ static int ivb_color_check(struct intel_crtc_state *crtc_state)
crtc_state->csc_mode = ivb_csc_mode(crtc_state);
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
@ -2602,7 +2629,7 @@ static int ivb_color_check(struct intel_crtc_state *crtc_state)
ilk_assign_csc(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(state, crtc);
return 0;
}
@ -2686,9 +2713,12 @@ static int glk_check_luts(const struct intel_crtc_state *crtc_state)
return _check_luts(crtc_state, degamma_tests, gamma_tests);
}
static int glk_color_check(struct intel_crtc_state *crtc_state)
static int glk_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct drm_i915_private *i915 = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = glk_check_luts(crtc_state);
@ -2725,7 +2755,7 @@ static int glk_color_check(struct intel_crtc_state *crtc_state)
crtc_state->csc_mode = 0;
ret = intel_color_add_affected_planes(crtc_state);
ret = intel_color_add_affected_planes(state, crtc);
if (ret)
return ret;
@ -2735,7 +2765,7 @@ static int glk_color_check(struct intel_crtc_state *crtc_state)
ilk_assign_csc(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(state, crtc);
return 0;
}
@ -2783,8 +2813,11 @@ static u32 icl_csc_mode(const struct intel_crtc_state *crtc_state)
return csc_mode;
}
static int icl_color_check(struct intel_crtc_state *crtc_state)
static int icl_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
int ret;
ret = check_luts(crtc_state);
@ -2799,7 +2832,7 @@ static int icl_color_check(struct intel_crtc_state *crtc_state)
icl_assign_csc(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(crtc_state);
crtc_state->preload_luts = intel_can_preload_luts(state, crtc);
return 0;
}
@ -3239,9 +3272,9 @@ static struct drm_property_blob *i965_read_lut_10p6(struct intel_crtc *crtc)
i965_lut_10p6_pack(&lut[i], ldw, udw);
}
lut[i].red = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(pipe, 0)));
lut[i].green = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(pipe, 1)));
lut[i].blue = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(pipe, 2)));
lut[i].red = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 0)));
lut[i].green = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 1)));
lut[i].blue = i965_lut_11p6_max_pack(intel_de_read_fw(dev_priv, PIPEGCMAX(dev_priv, pipe, 2)));
return blob;
}

View File

@ -8,6 +8,7 @@
#include <linux/types.h>
struct intel_atomic_state;
struct intel_crtc_state;
struct intel_crtc;
struct drm_i915_private;
@ -16,7 +17,8 @@ struct drm_property_blob;
void intel_color_init_hooks(struct drm_i915_private *i915);
int intel_color_init(struct drm_i915_private *i915);
void intel_color_crtc_init(struct intel_crtc *crtc);
int intel_color_check(struct intel_crtc_state *crtc_state);
int intel_color_check(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void intel_color_prepare_commit(struct intel_crtc_state *crtc_state);
void intel_color_cleanup_commit(struct intel_crtc_state *crtc_state);
bool intel_color_uses_dsb(const struct intel_crtc_state *crtc_state);

View File

@ -36,6 +36,11 @@
_CHV_PALETTE_C, _CHV_PALETTE_C) + \
(i) * 4)
/* i965/g4x/vlv/chv */
#define _PIPEAGCMAX 0x70010
#define _PIPEBGCMAX 0x71010
#define PIPEGCMAX(dev_priv, pipe, i) _MMIO_PIPE2(dev_priv, pipe, _PIPEAGCMAX + (i) * 4) /* u1.16 */
/* ilk+ palette */
#define _LGC_PALETTE_A 0x4a000
#define _LGC_PALETTE_B 0x4a800

View File

@ -193,7 +193,7 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
adpa |= ADPA_PIPE_SEL(crtc->pipe);
if (!HAS_PCH_SPLIT(dev_priv))
intel_de_write(dev_priv, BCLRPAT(crtc->pipe), 0);
intel_de_write(dev_priv, BCLRPAT(dev_priv, crtc->pipe), 0);
switch (mode) {
case DRM_MODE_DPMS_ON:
@ -603,18 +603,19 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
CRT_HOTPLUG_FORCE_DETECT,
CRT_HOTPLUG_FORCE_DETECT);
/* wait for FORCE_DETECT to go off */
if (intel_de_wait_for_clear(dev_priv, PORT_HOTPLUG_EN,
if (intel_de_wait_for_clear(dev_priv, PORT_HOTPLUG_EN(dev_priv),
CRT_HOTPLUG_FORCE_DETECT, 1000))
drm_dbg_kms(&dev_priv->drm,
"timed out waiting for FORCE_DETECT to go off");
}
stat = intel_de_read(dev_priv, PORT_HOTPLUG_STAT);
stat = intel_de_read(dev_priv, PORT_HOTPLUG_STAT(dev_priv));
if ((stat & CRT_HOTPLUG_MONITOR_MASK) != CRT_HOTPLUG_MONITOR_NONE)
ret = true;
/* clear the interrupt we just generated, if any */
intel_de_write(dev_priv, PORT_HOTPLUG_STAT, CRT_HOTPLUG_INT_STATUS);
intel_de_write(dev_priv, PORT_HOTPLUG_STAT(dev_priv),
CRT_HOTPLUG_INT_STATUS);
i915_hotplug_interrupt_update(dev_priv, CRT_HOTPLUG_FORCE_DETECT, 0);
@ -707,9 +708,12 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
drm_dbg_kms(&dev_priv->drm, "starting load-detect on CRT\n");
save_bclrpat = intel_de_read(dev_priv, BCLRPAT(cpu_transcoder));
save_vtotal = intel_de_read(dev_priv, TRANS_VTOTAL(cpu_transcoder));
vblank = intel_de_read(dev_priv, TRANS_VBLANK(cpu_transcoder));
save_bclrpat = intel_de_read(dev_priv,
BCLRPAT(dev_priv, cpu_transcoder));
save_vtotal = intel_de_read(dev_priv,
TRANS_VTOTAL(dev_priv, cpu_transcoder));
vblank = intel_de_read(dev_priv,
TRANS_VBLANK(dev_priv, cpu_transcoder));
vtotal = REG_FIELD_GET(VTOTAL_MASK, save_vtotal) + 1;
vactive = REG_FIELD_GET(VACTIVE_MASK, save_vtotal) + 1;
@ -718,14 +722,16 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
vblank_end = REG_FIELD_GET(VBLANK_END_MASK, vblank) + 1;
/* Set the border color to purple. */
intel_de_write(dev_priv, BCLRPAT(cpu_transcoder), 0x500050);
intel_de_write(dev_priv, BCLRPAT(dev_priv, cpu_transcoder), 0x500050);
if (DISPLAY_VER(dev_priv) != 2) {
u32 transconf = intel_de_read(dev_priv, TRANSCONF(cpu_transcoder));
u32 transconf = intel_de_read(dev_priv,
TRANSCONF(dev_priv, cpu_transcoder));
intel_de_write(dev_priv, TRANSCONF(cpu_transcoder),
intel_de_write(dev_priv, TRANSCONF(dev_priv, cpu_transcoder),
transconf | TRANSCONF_FORCE_BORDER);
intel_de_posting_read(dev_priv, TRANSCONF(cpu_transcoder));
intel_de_posting_read(dev_priv,
TRANSCONF(dev_priv, cpu_transcoder));
/* Wait for next Vblank to substitute
* border color for Color info */
intel_crtc_wait_for_next_vblank(intel_crtc_for_pipe(dev_priv, pipe));
@ -734,7 +740,8 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
connector_status_connected :
connector_status_disconnected;
intel_de_write(dev_priv, TRANSCONF(cpu_transcoder), transconf);
intel_de_write(dev_priv, TRANSCONF(dev_priv, cpu_transcoder),
transconf);
} else {
bool restore_vblank = false;
int count, detect;
@ -744,11 +751,13 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
* Yes, this will flicker
*/
if (vblank_start <= vactive && vblank_end >= vtotal) {
u32 vsync = intel_de_read(dev_priv, TRANS_VSYNC(cpu_transcoder));
u32 vsync = intel_de_read(dev_priv,
TRANS_VSYNC(dev_priv, cpu_transcoder));
u32 vsync_start = REG_FIELD_GET(VSYNC_START_MASK, vsync) + 1;
vblank_start = vsync_start;
intel_de_write(dev_priv, TRANS_VBLANK(cpu_transcoder),
intel_de_write(dev_priv,
TRANS_VBLANK(dev_priv, cpu_transcoder),
VBLANK_START(vblank_start - 1) |
VBLANK_END(vblank_end - 1));
restore_vblank = true;
@ -762,9 +771,9 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
/*
* Wait for the border to be displayed
*/
while (intel_de_read(dev_priv, PIPEDSL(pipe)) >= vactive)
while (intel_de_read(dev_priv, PIPEDSL(dev_priv, pipe)) >= vactive)
;
while ((dsl = intel_de_read(dev_priv, PIPEDSL(pipe))) <= vsample)
while ((dsl = intel_de_read(dev_priv, PIPEDSL(dev_priv, pipe))) <= vsample)
;
/*
* Watch ST00 for an entire scanline
@ -777,11 +786,13 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
st00 = intel_de_read8(dev_priv, _VGA_MSR_WRITE);
if (st00 & (1 << 4))
detect++;
} while ((intel_de_read(dev_priv, PIPEDSL(pipe)) == dsl));
} while ((intel_de_read(dev_priv, PIPEDSL(dev_priv, pipe)) == dsl));
/* restore vblank if necessary */
if (restore_vblank)
intel_de_write(dev_priv, TRANS_VBLANK(cpu_transcoder), vblank);
intel_de_write(dev_priv,
TRANS_VBLANK(dev_priv, cpu_transcoder),
vblank);
/*
* If more than 3/4 of the scanline detected a monitor,
* then it is assumed to be present. This works even on i830,
@ -794,7 +805,8 @@ intel_crt_load_detect(struct intel_crt *crt, enum pipe pipe)
}
/* Restore previous settings */
intel_de_write(dev_priv, BCLRPAT(cpu_transcoder), save_bclrpat);
intel_de_write(dev_priv, BCLRPAT(dev_priv, cpu_transcoder),
save_bclrpat);
return status;
}

View File

@ -78,8 +78,7 @@ void intel_wait_for_vblank_if_active(struct drm_i915_private *i915,
u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(&crtc->base)];
struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(&crtc->base);
if (!crtc->active)
return 0;
@ -311,8 +310,7 @@ int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe)
crtc->num_scalers = DISPLAY_RUNTIME_INFO(dev_priv)->num_scalers[pipe];
if (DISPLAY_VER(dev_priv) >= 9)
primary = skl_universal_plane_create(dev_priv, pipe,
PLANE_PRIMARY);
primary = skl_universal_plane_create(dev_priv, pipe, PLANE_1);
else
primary = intel_primary_plane_create(dev_priv, pipe);
if (IS_ERR(primary)) {
@ -327,8 +325,7 @@ int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe)
struct intel_plane *plane;
if (DISPLAY_VER(dev_priv) >= 9)
plane = skl_universal_plane_create(dev_priv, pipe,
PLANE_SPRITE0 + sprite);
plane = skl_universal_plane_create(dev_priv, pipe, PLANE_2 + sprite);
else
plane = intel_sprite_plane_create(dev_priv, pipe, sprite);
if (IS_ERR(plane)) {
@ -414,8 +411,8 @@ static void intel_crtc_vblank_work(struct kthread_work *base)
if (crtc_state->uapi.event) {
spin_lock_irq(&crtc->base.dev->event_lock);
drm_crtc_send_vblank_event(&crtc->base, crtc_state->uapi.event);
crtc_state->uapi.event = NULL;
spin_unlock_irq(&crtc->base.dev->event_lock);
crtc_state->uapi.event = NULL;
}
trace_intel_crtc_vblank_work_end(crtc);
@ -457,8 +454,8 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode,
if (!adjusted_mode->crtc_htotal)
return 1;
return DIV_ROUND_UP(usecs * adjusted_mode->crtc_clock,
1000 * adjusted_mode->crtc_htotal);
return DIV_ROUND_UP_ULL(mul_u32_u32(usecs, adjusted_mode->crtc_clock),
1000 * adjusted_mode->crtc_htotal);
}
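The switch to mul_u32_u32() + DIV_ROUND_UP_ULL() widens the intermediate product to 64 bits: with, say, usecs = 5000 and a hypothetical crtc_clock of 900000 kHz, usecs * adjusted_mode->crtc_clock is 4.5e9 and would wrap a 32-bit value, while the final scanline count after dividing by 1000 * crtc_htotal still fits easily.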
/**

View File

@ -222,10 +222,10 @@ void intel_crtc_state_dump(const struct intel_crtc_state *pipe_config,
transcoder_name(pipe_config->master_transcoder),
pipe_config->sync_mode_slaves_mask);
drm_printf(&p, "bigjoiner: %s, pipes: 0x%x\n",
intel_crtc_is_bigjoiner_slave(pipe_config) ? "slave" :
intel_crtc_is_bigjoiner_master(pipe_config) ? "master" : "no",
pipe_config->bigjoiner_pipes);
drm_printf(&p, "joiner: %s, pipes: 0x%x\n",
intel_crtc_is_joiner_secondary(pipe_config) ? "secondary" :
intel_crtc_is_joiner_primary(pipe_config) ? "primary" : "no",
pipe_config->joiner_pipes);
drm_printf(&p, "splitter: %s, link count %d, overlap %d\n",
str_enabled_disabled(pipe_config->splitter.enable),
@ -251,9 +251,10 @@ void intel_crtc_state_dump(const struct intel_crtc_state *pipe_config,
drm_printf(&p, "sdp split: %s\n",
str_enabled_disabled(pipe_config->sdp_split_enable));
drm_printf(&p, "psr: %s, psr2: %s, panel replay: %s, selective fetch: %s\n",
str_enabled_disabled(pipe_config->has_psr),
str_enabled_disabled(pipe_config->has_psr2),
drm_printf(&p, "psr: %s, selective update: %s, panel replay: %s, selective fetch: %s\n",
str_enabled_disabled(pipe_config->has_psr &&
!pipe_config->has_panel_replay),
str_enabled_disabled(pipe_config->has_sel_update),
str_enabled_disabled(pipe_config->has_panel_replay),
str_enabled_disabled(pipe_config->enable_psr2_sel_fetch));
}

View File

@ -14,6 +14,7 @@
#include "intel_atomic.h"
#include "intel_atomic_plane.h"
#include "intel_cursor.h"
#include "intel_cursor_regs.h"
#include "intel_de.h"
#include "intel_display.h"
#include "intel_display_types.h"
@ -293,17 +294,17 @@ static void i845_cursor_update_arm(struct intel_plane *plane,
if (plane->cursor.base != base ||
plane->cursor.size != size ||
plane->cursor.cntl != cntl) {
intel_de_write_fw(dev_priv, CURCNTR(PIPE_A), 0);
intel_de_write_fw(dev_priv, CURBASE(PIPE_A), base);
intel_de_write_fw(dev_priv, CURSIZE(PIPE_A), size);
intel_de_write_fw(dev_priv, CURPOS(PIPE_A), pos);
intel_de_write_fw(dev_priv, CURCNTR(PIPE_A), cntl);
intel_de_write_fw(dev_priv, CURCNTR(dev_priv, PIPE_A), 0);
intel_de_write_fw(dev_priv, CURBASE(dev_priv, PIPE_A), base);
intel_de_write_fw(dev_priv, CURSIZE(dev_priv, PIPE_A), size);
intel_de_write_fw(dev_priv, CURPOS(dev_priv, PIPE_A), pos);
intel_de_write_fw(dev_priv, CURCNTR(dev_priv, PIPE_A), cntl);
plane->cursor.base = base;
plane->cursor.size = size;
plane->cursor.cntl = cntl;
} else {
intel_de_write_fw(dev_priv, CURPOS(PIPE_A), pos);
intel_de_write_fw(dev_priv, CURPOS(dev_priv, PIPE_A), pos);
}
}
@ -326,7 +327,7 @@ static bool i845_cursor_get_hw_state(struct intel_plane *plane,
if (!wakeref)
return false;
ret = intel_de_read(dev_priv, CURCNTR(PIPE_A)) & CURSOR_ENABLE;
ret = intel_de_read(dev_priv, CURCNTR(dev_priv, PIPE_A)) & CURSOR_ENABLE;
*pipe = PIPE_A;
@ -506,7 +507,7 @@ static void i9xx_cursor_disable_sel_fetch_arm(struct intel_plane *plane,
if (!crtc_state->enable_psr2_sel_fetch)
return;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), 0);
intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), 0);
}
static void wa_16021440873(struct intel_plane *plane,
@ -521,10 +522,10 @@ static void wa_16021440873(struct intel_plane *plane,
ctl &= ~MCURSOR_MODE_MASK;
ctl |= MCURSOR_MODE_64_2B;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), ctl);
intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), ctl);
intel_de_write(dev_priv, PIPE_SRCSZ_ERLY_TPT(pipe),
PIPESRC_HEIGHT(et_y_position));
intel_de_write(dev_priv, CURPOS_ERLY_TPT(dev_priv, pipe),
CURSOR_POS_Y(et_y_position));
}
static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
@ -541,10 +542,12 @@ static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
if (crtc_state->enable_psr2_su_region_et) {
u32 val = intel_cursor_position(crtc_state, plane_state,
true);
intel_de_write_fw(dev_priv, CURPOS_ERLY_TPT(pipe), val);
intel_de_write_fw(dev_priv,
CURPOS_ERLY_TPT(dev_priv, pipe),
val);
}
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe),
plane_state->ctl);
} else {
/* Wa_16021440873 */
@ -555,6 +558,60 @@ static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
}
}
static u32 skl_cursor_ddb_reg_val(const struct skl_ddb_entry *entry)
{
if (!entry->end)
return 0;
return CUR_BUF_END(entry->end - 1) |
CUR_BUF_START(entry->start);
}
static u32 skl_cursor_wm_reg_val(const struct skl_wm_level *level)
{
u32 val = 0;
if (level->enable)
val |= CUR_WM_EN;
if (level->ignore_lines)
val |= CUR_WM_IGNORE_LINES;
val |= REG_FIELD_PREP(CUR_WM_BLOCKS_MASK, level->blocks);
val |= REG_FIELD_PREP(CUR_WM_LINES_MASK, level->lines);
return val;
}
static void skl_write_cursor_wm(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(plane->base.dev);
enum plane_id plane_id = plane->id;
enum pipe pipe = plane->pipe;
const struct skl_pipe_wm *pipe_wm = &crtc_state->wm.skl.optimal;
const struct skl_ddb_entry *ddb =
&crtc_state->wm.skl.plane_ddb[plane_id];
int level;
for (level = 0; level < i915->display.wm.num_levels; level++)
intel_de_write_fw(i915, CUR_WM(pipe, level),
skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
intel_de_write_fw(i915, CUR_WM_TRANS(pipe),
skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
if (HAS_HW_SAGV_WM(i915)) {
const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
intel_de_write_fw(i915, CUR_WM_SAGV(pipe),
skl_cursor_wm_reg_val(&wm->sagv.wm0));
intel_de_write_fw(i915, CUR_WM_SAGV_TRANS(pipe),
skl_cursor_wm_reg_val(&wm->sagv.trans_wm));
}
intel_de_write_fw(i915, CUR_BUF_CFG(pipe),
skl_cursor_ddb_reg_val(ddb));
}
/* TODO: split into noarm+arm pair */
static void i9xx_cursor_update_arm(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
@ -611,18 +668,19 @@ static void i9xx_cursor_update_arm(struct intel_plane *plane,
plane->cursor.size != fbc_ctl ||
plane->cursor.cntl != cntl) {
if (HAS_CUR_FBC(dev_priv))
intel_de_write_fw(dev_priv, CUR_FBC_CTL(pipe),
intel_de_write_fw(dev_priv,
CUR_FBC_CTL(dev_priv, pipe),
fbc_ctl);
intel_de_write_fw(dev_priv, CURCNTR(pipe), cntl);
intel_de_write_fw(dev_priv, CURPOS(pipe), pos);
intel_de_write_fw(dev_priv, CURBASE(pipe), base);
intel_de_write_fw(dev_priv, CURCNTR(dev_priv, pipe), cntl);
intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
plane->cursor.base = base;
plane->cursor.size = fbc_ctl;
plane->cursor.cntl = cntl;
} else {
intel_de_write_fw(dev_priv, CURPOS(pipe), pos);
intel_de_write_fw(dev_priv, CURBASE(pipe), base);
intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
}
}
@ -651,7 +709,7 @@ static bool i9xx_cursor_get_hw_state(struct intel_plane *plane,
if (!wakeref)
return false;
val = intel_de_read(dev_priv, CURCNTR(plane->pipe));
val = intel_de_read(dev_priv, CURCNTR(dev_priv, plane->pipe));
ret = val & MCURSOR_MODE_MASK;
@ -703,12 +761,12 @@ intel_legacy_cursor_update(struct drm_plane *_plane,
* PSR2 plane and transcoder registers can only be updated during
* vblank.
*
* FIXME bigjoiner fastpath would be good
* FIXME joiner fastpath would be good
*/
if (!crtc_state->hw.active ||
intel_crtc_needs_modeset(crtc_state) ||
intel_crtc_needs_fastset(crtc_state) ||
crtc_state->bigjoiner_pipes)
crtc_state->joiner_pipes)
goto slow;
/*

View File

@ -0,0 +1,112 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef __INTEL_CURSOR_REGS_H__
#define __INTEL_CURSOR_REGS_H__
#include "intel_display_reg_defs.h"
#define _CURACNTR 0x70080
#define CURCNTR(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURACNTR)
/* Old style CUR*CNTR flags (desktop 8xx) */
#define CURSOR_ENABLE REG_BIT(31)
#define CURSOR_PIPE_GAMMA_ENABLE REG_BIT(30)
#define CURSOR_STRIDE_MASK REG_GENMASK(29, 28)
#define CURSOR_STRIDE(stride) REG_FIELD_PREP(CURSOR_STRIDE_MASK, ffs(stride) - 9) /* 256,512,1k,2k */
#define CURSOR_FORMAT_MASK REG_GENMASK(26, 24)
#define CURSOR_FORMAT_2C REG_FIELD_PREP(CURSOR_FORMAT_MASK, 0)
#define CURSOR_FORMAT_3C REG_FIELD_PREP(CURSOR_FORMAT_MASK, 1)
#define CURSOR_FORMAT_4C REG_FIELD_PREP(CURSOR_FORMAT_MASK, 2)
#define CURSOR_FORMAT_ARGB REG_FIELD_PREP(CURSOR_FORMAT_MASK, 4)
#define CURSOR_FORMAT_XRGB REG_FIELD_PREP(CURSOR_FORMAT_MASK, 5)
/* New style CUR*CNTR flags */
#define MCURSOR_ARB_SLOTS_MASK REG_GENMASK(30, 28) /* icl+ */
#define MCURSOR_ARB_SLOTS(x) REG_FIELD_PREP(MCURSOR_ARB_SLOTS_MASK, (x)) /* icl+ */
#define MCURSOR_PIPE_SEL_MASK REG_GENMASK(29, 28)
#define MCURSOR_PIPE_SEL(pipe) REG_FIELD_PREP(MCURSOR_PIPE_SEL_MASK, (pipe))
#define MCURSOR_PIPE_GAMMA_ENABLE REG_BIT(26)
#define MCURSOR_PIPE_CSC_ENABLE REG_BIT(24) /* ilk+ */
#define MCURSOR_ROTATE_180 REG_BIT(15)
#define MCURSOR_TRICKLE_FEED_DISABLE REG_BIT(14)
#define MCURSOR_MODE_MASK 0x27
#define MCURSOR_MODE_DISABLE 0x00
#define MCURSOR_MODE_128_32B_AX 0x02
#define MCURSOR_MODE_256_32B_AX 0x03
#define MCURSOR_MODE_64_2B 0x04
#define MCURSOR_MODE_64_32B_AX 0x07
#define MCURSOR_MODE_128_ARGB_AX (0x20 | MCURSOR_MODE_128_32B_AX)
#define MCURSOR_MODE_256_ARGB_AX (0x20 | MCURSOR_MODE_256_32B_AX)
#define MCURSOR_MODE_64_ARGB_AX (0x20 | MCURSOR_MODE_64_32B_AX)
#define _CURABASE 0x70084
#define CURBASE(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURABASE)
#define _CURAPOS 0x70088
#define CURPOS(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURAPOS)
#define CURSOR_POS_Y_SIGN REG_BIT(31)
#define CURSOR_POS_Y_MASK REG_GENMASK(30, 16)
#define CURSOR_POS_Y(y) REG_FIELD_PREP(CURSOR_POS_Y_MASK, (y))
#define CURSOR_POS_X_SIGN REG_BIT(15)
#define CURSOR_POS_X_MASK REG_GENMASK(14, 0)
#define CURSOR_POS_X(x) REG_FIELD_PREP(CURSOR_POS_X_MASK, (x))
#define _CURAPOS_ERLY_TPT 0x7008c
#define CURPOS_ERLY_TPT(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURAPOS_ERLY_TPT)
#define _CURASIZE 0x700a0 /* 845/865 */
#define CURSIZE(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURASIZE)
#define CURSOR_HEIGHT_MASK REG_GENMASK(21, 12)
#define CURSOR_HEIGHT(h) REG_FIELD_PREP(CURSOR_HEIGHT_MASK, (h))
#define CURSOR_WIDTH_MASK REG_GENMASK(9, 0)
#define CURSOR_WIDTH(w) REG_FIELD_PREP(CURSOR_WIDTH_MASK, (w))
#define _CUR_FBC_CTL_A 0x700a0 /* ivb+ */
#define CUR_FBC_CTL(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CUR_FBC_CTL_A)
#define CUR_FBC_EN REG_BIT(31)
#define CUR_FBC_HEIGHT_MASK REG_GENMASK(7, 0)
#define CUR_FBC_HEIGHT(h) REG_FIELD_PREP(CUR_FBC_HEIGHT_MASK, (h))
#define _CUR_CHICKEN_A 0x700a4 /* mtl+ */
#define CUR_CHICKEN(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CUR_CHICKEN_A)
#define _CURASURFLIVE 0x700ac /* g4x+ */
#define CURSURFLIVE(dev_priv, pipe) _MMIO_CURSOR2((dev_priv), (pipe), _CURASURFLIVE)
/* skl+ */
#define _CUR_WM_A_0 0x70140
#define _CUR_WM_B_0 0x71140
#define CUR_WM(pipe, level) _MMIO(_PIPE((pipe), _CUR_WM_A_0, _CUR_WM_B_0) + (level) * 4)
#define CUR_WM_EN REG_BIT(31)
#define CUR_WM_IGNORE_LINES REG_BIT(30)
#define CUR_WM_LINES_MASK REG_GENMASK(26, 14)
#define CUR_WM_BLOCKS_MASK REG_GENMASK(11, 0)
#define _CUR_WM_SAGV_A 0x70158
#define _CUR_WM_SAGV_B 0x71158
#define CUR_WM_SAGV(pipe) _MMIO_PIPE((pipe), _CUR_WM_SAGV_A, _CUR_WM_SAGV_B)
#define _CUR_WM_SAGV_TRANS_A 0x7015C
#define _CUR_WM_SAGV_TRANS_B 0x7115C
#define CUR_WM_SAGV_TRANS(pipe) _MMIO_PIPE((pipe), _CUR_WM_SAGV_TRANS_A, _CUR_WM_SAGV_TRANS_B)
#define _CUR_WM_TRANS_A 0x70168
#define _CUR_WM_TRANS_B 0x71168
#define CUR_WM_TRANS(pipe) _MMIO_PIPE((pipe), _CUR_WM_TRANS_A, _CUR_WM_TRANS_B)
#define _CUR_BUF_CFG_A 0x7017c
#define _CUR_BUF_CFG_B 0x7117c
#define CUR_BUF_CFG(pipe) _MMIO_PIPE((pipe), _CUR_BUF_CFG_A, _CUR_BUF_CFG_B)
/* skl+: 10 bits, icl+ 11 bits, adlp+ 12 bits */
#define CUR_BUF_END_MASK REG_GENMASK(27, 16)
#define CUR_BUF_END(end) REG_FIELD_PREP(CUR_BUF_END_MASK, (end))
#define CUR_BUF_START_MASK REG_GENMASK(11, 0)
#define CUR_BUF_START(start) REG_FIELD_PREP(CUR_BUF_START_MASK, (start))
/* tgl+ */
#define _SEL_FETCH_CUR_CTL_A 0x70880
#define _SEL_FETCH_CUR_CTL_B 0x71880
#define SEL_FETCH_CUR_CTL(pipe) _MMIO_PIPE((pipe), _SEL_FETCH_CUR_CTL_A, _SEL_FETCH_CUR_CTL_B)
#endif /* __INTEL_CURSOR_REGS_H__ */
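The CURPOS layout above splits the cursor position into signed X/Y fields. A minimal sketch of packing a position from these macros (the cursor_pos_reg_val() helper name is hypothetical; it mirrors how the driver builds the value, with sign bits for negative coordinates and magnitudes in the corresponding fields):

static u32 cursor_pos_reg_val(int x, int y)
{
	u32 pos = 0;

	if (x < 0) {
		pos |= CURSOR_POS_X_SIGN;
		x = -x;
	}
	pos |= CURSOR_POS_X(x);

	if (y < 0) {
		pos |= CURSOR_POS_Y_SIGN;
		y = -y;
	}
	pos |= CURSOR_POS_Y(y);

	return pos;
}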

View File

@ -945,6 +945,183 @@ static const struct intel_c20pll_state * const mtl_c20_dp_tables[] = {
NULL,
};
/*
* eDP link rates with 38.4 MHz reference clock.
*/
static const struct intel_c20pll_state xe2hpd_c20_edp_r216 = {
.clock = 216000,
.tx = { 0xbe88,
0x4800,
0x0000,
},
.cmn = { 0x0500,
0x0005,
0x0000,
0x0000,
},
.mpllb = { 0x50e1,
0x2120,
0x8e18,
0xbfc1,
0x9000,
0x78f6,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
},
};
static const struct intel_c20pll_state xe2hpd_c20_edp_r243 = {
.clock = 243000,
.tx = { 0xbe88,
0x4800,
0x0000,
},
.cmn = { 0x0500,
0x0005,
0x0000,
0x0000,
},
.mpllb = { 0x50fd,
0x2120,
0x8f18,
0xbfc1,
0xa200,
0x8814,
0x2000,
0x0001,
0x1000,
0x0000,
0x0000,
},
};
static const struct intel_c20pll_state xe2hpd_c20_edp_r324 = {
.clock = 324000,
.tx = { 0xbe88,
0x4800,
0x0000,
},
.cmn = { 0x0500,
0x0005,
0x0000,
0x0000,
},
.mpllb = { 0x30a8,
0x2110,
0xcd9a,
0xbfc1,
0x6c00,
0x5ab8,
0x2000,
0x0001,
0x6000,
0x0000,
0x0000,
},
};
static const struct intel_c20pll_state xe2hpd_c20_edp_r432 = {
.clock = 432000,
.tx = { 0xbe88,
0x4800,
0x0000,
},
.cmn = { 0x0500,
0x0005,
0x0000,
0x0000,
},
.mpllb = { 0x30e1,
0x2110,
0x8e18,
0xbfc1,
0x9000,
0x78f6,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
},
};
static const struct intel_c20pll_state xe2hpd_c20_edp_r675 = {
.clock = 675000,
.tx = { 0xbe88,
0x4800,
0x0000,
},
.cmn = { 0x0500,
0x0005,
0x0000,
0x0000,
},
.mpllb = { 0x10af,
0x2108,
0xce1a,
0xbfc1,
0x7080,
0x5e80,
0x2000,
0x0001,
0x6400,
0x0000,
0x0000,
},
};
static const struct intel_c20pll_state * const xe2hpd_c20_edp_tables[] = {
&mtl_c20_dp_rbr,
&xe2hpd_c20_edp_r216,
&xe2hpd_c20_edp_r243,
&mtl_c20_dp_hbr1,
&xe2hpd_c20_edp_r324,
&xe2hpd_c20_edp_r432,
&mtl_c20_dp_hbr2,
&xe2hpd_c20_edp_r675,
&mtl_c20_dp_hbr3,
NULL,
};
static const struct intel_c20pll_state xe2hpd_c20_dp_uhbr13_5 = {
.clock = 1350000, /* 13.5 Gbps */
.tx = { 0xbea0, /* tx cfg0 */
0x4800, /* tx cfg1 */
0x0000, /* tx cfg2 */
},
.cmn = {0x0500, /* cmn cfg0*/
0x0005, /* cmn cfg1 */
0x0000, /* cmn cfg2 */
0x0000, /* cmn cfg3 */
},
.mpllb = { 0x015f, /* mpllb cfg0 */
0x2205, /* mpllb cfg1 */
0x1b17, /* mpllb cfg2 */
0xffc1, /* mpllb cfg3 */
0xbd00, /* mpllb cfg4 */
0x9ec3, /* mpllb cfg5 */
0x2000, /* mpllb cfg6 */
0x0001, /* mpllb cfg7 */
0x4800, /* mpllb cfg8 */
0x0000, /* mpllb cfg9 */
0x0000, /* mpllb cfg10 */
},
};
static const struct intel_c20pll_state * const xe2hpd_c20_dp_tables[] = {
&mtl_c20_dp_rbr,
&mtl_c20_dp_hbr1,
&mtl_c20_dp_hbr2,
&mtl_c20_dp_hbr3,
&mtl_c20_dp_uhbr10,
&xe2hpd_c20_dp_uhbr13_5,
NULL,
};
/*
* HDMI link rates with 38.4 MHz reference clock.
*/
@ -1861,6 +2038,7 @@ static int intel_c10pll_calc_state(struct intel_crtc_state *crtc_state,
if (crtc_state->port_clock == tables[i]->clock) {
crtc_state->dpll_hw_state.cx0pll.c10 = *tables[i];
intel_c10pll_update_pll(crtc_state, encoder);
crtc_state->dpll_hw_state.cx0pll.use_c10 = true;
return 0;
}
@ -1928,8 +2106,8 @@ static void intel_c10_pll_program(struct drm_i915_private *i915,
MB_WRITE_COMMITTED);
}
void intel_c10pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_c10pll_state *hw_state)
static void intel_c10pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_c10pll_state *hw_state)
{
bool fracen;
int i;
@ -2061,10 +2239,20 @@ static const struct intel_c20pll_state * const *
intel_c20_pll_tables_get(struct intel_crtc_state *crtc_state,
struct intel_encoder *encoder)
{
if (intel_crtc_has_dp_encoder(crtc_state))
return mtl_c20_dp_tables;
else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
if (intel_crtc_has_dp_encoder(crtc_state)) {
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP))
return xe2hpd_c20_edp_tables;
if (DISPLAY_VER_FULL(i915) == IP_VER(14, 1))
return xe2hpd_c20_dp_tables;
else
return mtl_c20_dp_tables;
} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
return mtl_c20_hdmi_tables;
}
MISSING_CASE(encoder->type);
return NULL;
@ -2090,6 +2278,7 @@ static int intel_c20pll_calc_state(struct intel_crtc_state *crtc_state,
for (i = 0; tables[i]; i++) {
if (crtc_state->port_clock == tables[i]->clock) {
crtc_state->dpll_hw_state.cx0pll.c20 = *tables[i];
crtc_state->dpll_hw_state.cx0pll.use_c10 = false;
return 0;
}
}
@ -2161,6 +2350,7 @@ static void intel_c20pll_readout_hw_state(struct intel_encoder *encoder,
bool cntx;
intel_wakeref_t wakeref;
int i;
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
wakeref = intel_cx0_phy_transaction_begin(encoder);
@ -2170,42 +2360,50 @@ static void intel_c20pll_readout_hw_state(struct intel_encoder *encoder,
/* Read Tx configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->tx); i++) {
if (cntx)
pll_state->tx[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_B_TX_CNTX_CFG(i));
pll_state->tx[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_B_TX_CNTX_CFG(i915, i));
else
pll_state->tx[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_A_TX_CNTX_CFG(i));
pll_state->tx[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_A_TX_CNTX_CFG(i915, i));
}
/* Read common configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->cmn); i++) {
if (cntx)
pll_state->cmn[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_B_CMN_CNTX_CFG(i));
pll_state->cmn[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_B_CMN_CNTX_CFG(i915, i));
else
pll_state->cmn[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_A_CMN_CNTX_CFG(i));
pll_state->cmn[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_A_CMN_CNTX_CFG(i915, i));
}
if (intel_c20phy_use_mpllb(pll_state)) {
/* MPLLB configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->mpllb); i++) {
if (cntx)
pll_state->mpllb[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_B_MPLLB_CNTX_CFG(i));
pll_state->mpllb[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_B_MPLLB_CNTX_CFG(i915, i));
else
pll_state->mpllb[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_A_MPLLB_CNTX_CFG(i));
pll_state->mpllb[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_A_MPLLB_CNTX_CFG(i915, i));
}
} else {
/* MPLLA configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->mplla); i++) {
if (cntx)
pll_state->mplla[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_B_MPLLA_CNTX_CFG(i));
pll_state->mplla[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_B_MPLLA_CNTX_CFG(i915, i));
else
pll_state->mplla[i] = intel_c20_sram_read(encoder, INTEL_CX0_LANE0,
PHY_C20_A_MPLLA_CNTX_CFG(i));
pll_state->mplla[i] = intel_c20_sram_read(encoder,
INTEL_CX0_LANE0,
PHY_C20_A_MPLLA_CNTX_CFG(i915, i));
}
}
@ -2214,8 +2412,8 @@ static void intel_c20pll_readout_hw_state(struct intel_encoder *encoder,
intel_cx0_phy_transaction_end(encoder, wakeref);
}
void intel_c20pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_c20pll_state *hw_state)
static void intel_c20pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_c20pll_state *hw_state)
{
int i;
@ -2234,6 +2432,15 @@ void intel_c20pll_dump_hw_state(struct drm_i915_private *i915,
}
}
void intel_cx0pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_cx0pll_state *hw_state)
{
if (hw_state->use_c10)
intel_c10pll_dump_hw_state(i915, &hw_state->c10);
else
intel_c20pll_dump_hw_state(i915, &hw_state->c20);
}
static u8 intel_c20_get_dp_rate(u32 clock)
{
switch (clock) {
@ -2337,7 +2544,7 @@ static void intel_c20_pll_program(struct drm_i915_private *i915,
{
const struct intel_c20pll_state *pll_state = &crtc_state->dpll_hw_state.cx0pll.c20;
bool dp = false;
int lane = crtc_state->lane_count > 2 ? INTEL_CX0_BOTH_LANES : INTEL_CX0_LANE0;
u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(encoder);
u32 clock = crtc_state->port_clock;
bool cntx;
int i;
@ -2363,17 +2570,25 @@ static void intel_c20_pll_program(struct drm_i915_private *i915,
/* 3.1 Tx configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->tx); i++) {
if (cntx)
intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_A_TX_CNTX_CFG(i), pll_state->tx[i]);
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_A_TX_CNTX_CFG(i915, i),
pll_state->tx[i]);
else
intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_B_TX_CNTX_CFG(i), pll_state->tx[i]);
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_B_TX_CNTX_CFG(i915, i),
pll_state->tx[i]);
}
/* 3.2 common configuration */
for (i = 0; i < ARRAY_SIZE(pll_state->cmn); i++) {
if (cntx)
intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_A_CMN_CNTX_CFG(i), pll_state->cmn[i]);
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_A_CMN_CNTX_CFG(i915, i),
pll_state->cmn[i]);
else
intel_c20_sram_write(encoder, INTEL_CX0_LANE0, PHY_C20_B_CMN_CNTX_CFG(i), pll_state->cmn[i]);
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_B_CMN_CNTX_CFG(i915, i),
pll_state->cmn[i]);
}
/* 3.3 mpllb or mplla configuration */
@ -2381,40 +2596,40 @@ static void intel_c20_pll_program(struct drm_i915_private *i915,
for (i = 0; i < ARRAY_SIZE(pll_state->mpllb); i++) {
if (cntx)
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_A_MPLLB_CNTX_CFG(i),
PHY_C20_A_MPLLB_CNTX_CFG(i915, i),
pll_state->mpllb[i]);
else
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_B_MPLLB_CNTX_CFG(i),
PHY_C20_B_MPLLB_CNTX_CFG(i915, i),
pll_state->mpllb[i]);
}
} else {
for (i = 0; i < ARRAY_SIZE(pll_state->mplla); i++) {
if (cntx)
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_A_MPLLA_CNTX_CFG(i),
PHY_C20_A_MPLLA_CNTX_CFG(i915, i),
pll_state->mplla[i]);
else
intel_c20_sram_write(encoder, INTEL_CX0_LANE0,
PHY_C20_B_MPLLA_CNTX_CFG(i),
PHY_C20_B_MPLLA_CNTX_CFG(i915, i),
pll_state->mplla[i]);
}
}
/* 4. Program custom width to match the link protocol */
intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_WIDTH,
intel_cx0_rmw(encoder, owned_lane_mask, PHY_C20_VDR_CUSTOM_WIDTH,
PHY_C20_CUSTOM_WIDTH_MASK,
PHY_C20_CUSTOM_WIDTH(intel_get_c20_custom_width(clock, dp)),
MB_WRITE_COMMITTED);
/* 5. For DP or 6. For HDMI */
if (dp) {
intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE,
intel_cx0_rmw(encoder, owned_lane_mask, PHY_C20_VDR_CUSTOM_SERDES_RATE,
BIT(6) | PHY_C20_CUSTOM_SERDES_MASK,
BIT(6) | PHY_C20_CUSTOM_SERDES(intel_c20_get_dp_rate(clock)),
MB_WRITE_COMMITTED);
} else {
intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE,
intel_cx0_rmw(encoder, owned_lane_mask, PHY_C20_VDR_CUSTOM_SERDES_RATE,
BIT(7) | PHY_C20_CUSTOM_SERDES_MASK,
is_hdmi_frl(clock) ? BIT(7) : 0,
MB_WRITE_COMMITTED);
@ -2428,7 +2643,7 @@ static void intel_c20_pll_program(struct drm_i915_private *i915,
* 7. Write Vendor specific registers to toggle context setting to load
* the updated programming toggle context bit
*/
intel_cx0_rmw(encoder, lane, PHY_C20_VDR_CUSTOM_SERDES_RATE,
intel_cx0_rmw(encoder, owned_lane_mask, PHY_C20_VDR_CUSTOM_SERDES_RATE,
BIT(0), cntx ? 0 : 1, MB_WRITE_COMMITTED);
}
@ -2900,17 +3115,28 @@ void intel_mtl_pll_enable(struct intel_encoder *encoder,
intel_cx0pll_enable(encoder, crtc_state);
}
static u8 cx0_power_control_disable_val(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
if (intel_encoder_is_c10phy(encoder))
return CX0_P2PG_STATE_DISABLE;
if (IS_BATTLEMAGE(i915) && encoder->port == PORT_A)
return CX0_P2PG_STATE_DISABLE;
return CX0_P4PG_STATE_DISABLE;
}
static void intel_cx0pll_disable(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum phy phy = intel_encoder_to_phy(encoder);
bool is_c10 = intel_encoder_is_c10phy(encoder);
intel_wakeref_t wakeref = intel_cx0_phy_transaction_begin(encoder);
/* 1. Change owned PHY lane power to Disable state. */
intel_cx0_powerdown_change_sequence(encoder, INTEL_CX0_BOTH_LANES,
is_c10 ? CX0_P2PG_STATE_DISABLE :
CX0_P4PG_STATE_DISABLE);
cx0_power_control_disable_val(encoder));
/*
* 2. Follow the Display Voltage Frequency Switching Sequence Before
@ -3028,9 +3254,6 @@ static void intel_c10pll_state_verify(const struct intel_crtc_state *state,
const struct intel_c10pll_state *mpllb_sw_state = &state->dpll_hw_state.cx0pll.c10;
int i;
if (intel_crtc_needs_fastset(state))
return;
for (i = 0; i < ARRAY_SIZE(mpllb_sw_state->pll); i++) {
u8 expected = mpllb_sw_state->pll[i];
@ -3054,10 +3277,64 @@ static void intel_c10pll_state_verify(const struct intel_crtc_state *state,
void intel_cx0pll_readout_hw_state(struct intel_encoder *encoder,
struct intel_cx0pll_state *pll_state)
{
if (intel_encoder_is_c10phy(encoder))
pll_state->use_c10 = false;
if (intel_encoder_is_c10phy(encoder)) {
intel_c10pll_readout_hw_state(encoder, &pll_state->c10);
else
pll_state->use_c10 = true;
} else {
intel_c20pll_readout_hw_state(encoder, &pll_state->c20);
}
}
static bool mtl_compare_hw_state_c10(const struct intel_c10pll_state *a,
const struct intel_c10pll_state *b)
{
if (a->tx != b->tx)
return false;
if (a->cmn != b->cmn)
return false;
if (memcmp(&a->pll, &b->pll, sizeof(a->pll)) != 0)
return false;
return true;
}
static bool mtl_compare_hw_state_c20(const struct intel_c20pll_state *a,
const struct intel_c20pll_state *b)
{
if (memcmp(&a->tx, &b->tx, sizeof(a->tx)) != 0)
return false;
if (memcmp(&a->cmn, &b->cmn, sizeof(a->cmn)) != 0)
return false;
if (a->tx[0] & C20_PHY_USE_MPLLB) {
if (memcmp(&a->mpllb, &b->mpllb, sizeof(a->mpllb)) != 0)
return false;
} else {
if (memcmp(&a->mplla, &b->mplla, sizeof(a->mplla)) != 0)
return false;
}
return true;
}
bool intel_cx0pll_compare_hw_state(const struct intel_cx0pll_state *a,
const struct intel_cx0pll_state *b)
{
if (a->use_c10 != b->use_c10)
return false;
if (a->use_c10)
return mtl_compare_hw_state_c10(&a->c10,
&b->c10);
else
return mtl_compare_hw_state_c20(&a->c20,
&b->c20);
}
int intel_cx0pll_calc_port_clock(struct intel_encoder *encoder,
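Note that the C20 comparison above keys on the C20_PHY_USE_MPLLB bit in tx[0], so only the PLL bank actually in use (MPLLA or MPLLB) has to match between the two states being compared.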
@ -3078,9 +3355,10 @@ static void intel_c20pll_state_verify(const struct intel_crtc_state *state,
const struct intel_c20pll_state *mpll_sw_state = &state->dpll_hw_state.cx0pll.c20;
bool sw_use_mpllb = intel_c20phy_use_mpllb(mpll_sw_state);
bool hw_use_mpllb = intel_c20phy_use_mpllb(mpll_hw_state);
int clock = intel_c20pll_calc_port_clock(encoder, mpll_sw_state);
int i;
I915_STATE_WARN(i915, mpll_hw_state->clock != mpll_sw_state->clock,
I915_STATE_WARN(i915, mpll_hw_state->clock != clock,
"[CRTC:%d:%s] mismatch in C20: Register CLOCK (expected %d, found %d)",
crtc->base.base.id, crtc->base.name,
mpll_sw_state->clock, mpll_hw_state->clock);

View File

@ -35,12 +35,12 @@ void intel_cx0pll_readout_hw_state(struct intel_encoder *encoder,
int intel_cx0pll_calc_port_clock(struct intel_encoder *encoder,
const struct intel_cx0pll_state *pll_state);
void intel_c10pll_dump_hw_state(struct drm_i915_private *dev_priv,
const struct intel_c10pll_state *hw_state);
void intel_cx0pll_dump_hw_state(struct drm_i915_private *dev_priv,
const struct intel_cx0pll_state *hw_state);
void intel_cx0pll_state_verify(struct intel_atomic_state *state,
struct intel_crtc *crtc);
void intel_c20pll_dump_hw_state(struct drm_i915_private *i915,
const struct intel_c20pll_state *hw_state);
bool intel_cx0pll_compare_hw_state(const struct intel_cx0pll_state *a,
const struct intel_cx0pll_state *b);
void intel_cx0_phy_set_signal_levels(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
int intel_cx0_phy_check_hdmi_link_rate(struct intel_hdmi *hdmi, int clock);

View File

@ -254,18 +254,50 @@
#define PHY_C20_VDR_CUSTOM_WIDTH 0xD02
#define PHY_C20_CUSTOM_WIDTH_MASK REG_GENMASK(1, 0)
#define PHY_C20_CUSTOM_WIDTH(val) REG_FIELD_PREP8(PHY_C20_CUSTOM_WIDTH_MASK, val)
#define PHY_C20_A_TX_CNTX_CFG(idx) (0xCF2E - (idx))
#define PHY_C20_B_TX_CNTX_CFG(idx) (0xCF2A - (idx))
#define _MTL_C20_A_TX_CNTX_CFG 0xCF2E
#define _MTL_C20_B_TX_CNTX_CFG 0xCF2A
#define _MTL_C20_A_CMN_CNTX_CFG 0xCDAA
#define _MTL_C20_B_CMN_CNTX_CFG 0xCDA5
#define _MTL_C20_A_MPLLA_CFG 0xCCF0
#define _MTL_C20_B_MPLLA_CFG 0xCCE5
#define _MTL_C20_A_MPLLB_CFG 0xCB5A
#define _MTL_C20_B_MPLLB_CFG 0xCB4E
#define _XE2HPD_C20_A_TX_CNTX_CFG 0xCF5E
#define _XE2HPD_C20_B_TX_CNTX_CFG 0xCF5A
#define _XE2HPD_C20_A_CMN_CNTX_CFG 0xCE8E
#define _XE2HPD_C20_B_CMN_CNTX_CFG 0xCE89
#define _XE2HPD_C20_A_MPLLA_CFG 0xCE58
#define _XE2HPD_C20_B_MPLLA_CFG 0xCE4D
#define _XE2HPD_C20_A_MPLLB_CFG 0xCCC2
#define _XE2HPD_C20_B_MPLLB_CFG 0xCCB6
#define _IS_XE2HPD_C20(i915) (DISPLAY_VER_FULL(i915) == IP_VER(14, 1))
#define PHY_C20_A_TX_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_A_TX_CNTX_CFG : _MTL_C20_A_TX_CNTX_CFG) - (idx))
#define PHY_C20_B_TX_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_B_TX_CNTX_CFG : _MTL_C20_B_TX_CNTX_CFG) - (idx))
#define C20_PHY_TX_RATE REG_GENMASK(2, 0)
#define PHY_C20_A_CMN_CNTX_CFG(idx) (0xCDAA - (idx))
#define PHY_C20_B_CMN_CNTX_CFG(idx) (0xCDA5 - (idx))
#define PHY_C20_A_MPLLA_CNTX_CFG(idx) (0xCCF0 - (idx))
#define PHY_C20_B_MPLLA_CNTX_CFG(idx) (0xCCE5 - (idx))
#define PHY_C20_A_CMN_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_A_CMN_CNTX_CFG : _MTL_C20_A_CMN_CNTX_CFG) - (idx))
#define PHY_C20_B_CMN_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_B_CMN_CNTX_CFG : _MTL_C20_B_CMN_CNTX_CFG) - (idx))
#define PHY_C20_A_MPLLA_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_A_MPLLA_CFG : _MTL_C20_A_MPLLA_CFG) - (idx))
#define PHY_C20_B_MPLLA_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_B_MPLLA_CFG : _MTL_C20_B_MPLLA_CFG) - (idx))
#define C20_MPLLA_FRACEN REG_BIT(14)
#define C20_FB_CLK_DIV4_EN REG_BIT(13)
#define C20_MPLLA_TX_CLK_DIV_MASK REG_GENMASK(10, 8)
#define PHY_C20_A_MPLLB_CNTX_CFG(idx) (0xCB5A - (idx))
#define PHY_C20_B_MPLLB_CNTX_CFG(idx) (0xCB4E - (idx))
#define PHY_C20_A_MPLLB_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_A_MPLLB_CFG : _MTL_C20_A_MPLLB_CFG) - (idx))
#define PHY_C20_B_MPLLB_CNTX_CFG(i915, idx) \
((_IS_XE2HPD_C20(i915) ? _XE2HPD_C20_B_MPLLB_CFG : _MTL_C20_B_MPLLB_CFG) - (idx))
#define C20_MPLLB_TX_CLK_DIV_MASK REG_GENMASK(15, 13)
#define C20_MPLLB_FRACEN REG_BIT(13)
#define C20_REF_CLK_MPLLB_DIV_MASK REG_GENMASK(12, 10)
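With the per-platform bases above, the context config addresses still count down from the base: for example, PHY_C20_A_CMN_CNTX_CFG(i915, 1) resolves to 0xCDA9 on a display IP 14.0 (MTL) part and to 0xCE8D on a 14.1 (Xe2_HPD) part.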

View File

@ -57,6 +57,7 @@
#include "intel_dp_tunnel.h"
#include "intel_dpio_phy.h"
#include "intel_dsi.h"
#include "intel_encoder.h"
#include "intel_fdi.h"
#include "intel_fifo_underrun.h"
#include "intel_gmbus.h"
@ -440,7 +441,8 @@ void intel_ddi_set_dp_msa(const struct intel_crtc_state *crtc_state,
if (intel_dp_needs_vsc_sdp(crtc_state, conn_state))
temp |= DP_MSA_MISC_COLOR_VSC_SDP;
intel_de_write(dev_priv, TRANS_MSA_MISC(cpu_transcoder), temp);
intel_de_write(dev_priv, TRANS_MSA_MISC(dev_priv, cpu_transcoder),
temp);
}
static u32 bdw_trans_port_sync_master_select(enum transcoder master_transcoder)
@ -603,10 +605,11 @@ void intel_ddi_enable_transcoder_func(struct intel_encoder *encoder,
}
intel_de_write(dev_priv,
TRANS_DDI_FUNC_CTL2(cpu_transcoder), ctl2);
TRANS_DDI_FUNC_CTL2(dev_priv, cpu_transcoder),
ctl2);
}
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder),
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder),
intel_ddi_transcoder_func_reg_val_get(encoder,
crtc_state));
}
@ -626,7 +629,8 @@ intel_ddi_config_transcoder_func(struct intel_encoder *encoder,
ctl = intel_ddi_transcoder_func_reg_val_get(encoder, crtc_state);
ctl &= ~TRANS_DDI_FUNC_ENABLE;
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), ctl);
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder),
ctl);
}
void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state)
@ -639,9 +643,11 @@ void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state
if (DISPLAY_VER(dev_priv) >= 11)
intel_de_write(dev_priv,
TRANS_DDI_FUNC_CTL2(cpu_transcoder), 0);
TRANS_DDI_FUNC_CTL2(dev_priv, cpu_transcoder),
0);
ctl = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder));
ctl = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder));
drm_WARN_ON(crtc->base.dev, ctl & TRANS_DDI_HDCP_SIGNALLING);
@ -660,7 +666,8 @@ void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state
ctl &= ~(TRANS_DDI_PORT_MASK | TRANS_DDI_MODE_SELECT_MASK);
}
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), ctl);
intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder),
ctl);
if (intel_has_quirk(display, QUIRK_INCREASE_DDI_DISABLED_TIME) &&
intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
@ -684,7 +691,7 @@ int intel_ddi_toggle_hdcp_bits(struct intel_encoder *intel_encoder,
if (drm_WARN_ON(dev, !wakeref))
return -ENXIO;
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder),
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder),
hdcp_mask, enable ? hdcp_mask : 0);
intel_display_power_put(dev_priv, intel_encoder->power_domain, wakeref);
return ret;
@ -718,7 +725,8 @@ bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector)
else
cpu_transcoder = (enum transcoder) pipe;
tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder));
tmp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder));
switch (tmp & TRANS_DDI_MODE_SELECT_MASK) {
case TRANS_DDI_MODE_SELECT_HDMI:
@ -782,7 +790,7 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
if (HAS_TRANSCODER(dev_priv, TRANSCODER_EDP) && port == PORT_A) {
tmp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(TRANSCODER_EDP));
TRANS_DDI_FUNC_CTL(dev_priv, TRANSCODER_EDP));
switch (tmp & TRANS_DDI_EDP_INPUT_MASK) {
default:
@ -823,7 +831,7 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
}
tmp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(cpu_transcoder));
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder));
intel_display_power_put(dev_priv, POWER_DOMAIN_TRANSCODER(cpu_transcoder),
trans_wakeref);
@ -2072,9 +2080,9 @@ void intel_ddi_sanitize_encoder_pll_mapping(struct intel_encoder *encoder)
!encoder->is_clock_enabled(encoder))
return;
drm_notice(&i915->drm,
"[ENCODER:%d:%s] is disabled/in DSI mode with an ungated DDI clock, gate it\n",
encoder->base.base.id, encoder->base.name);
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] is disabled/in DSI mode with an ungated DDI clock, gate it\n",
encoder->base.base.id, encoder->base.name);
encoder->disable_clock(encoder);
}
@ -2178,7 +2186,8 @@ i915_reg_t dp_tp_ctl_reg(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
if (DISPLAY_VER(dev_priv) >= 12)
return TGL_DP_TP_CTL(tgl_dp_tp_transcoder(crtc_state));
return TGL_DP_TP_CTL(dev_priv,
tgl_dp_tp_transcoder(crtc_state));
else
return DP_TP_CTL(encoder->port);
}
@ -2189,7 +2198,8 @@ i915_reg_t dp_tp_status_reg(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
if (DISPLAY_VER(dev_priv) >= 12)
return TGL_DP_TP_STATUS(tgl_dp_tp_transcoder(crtc_state));
return TGL_DP_TP_STATUS(dev_priv,
tgl_dp_tp_transcoder(crtc_state));
else
return DP_TP_STATUS(encoder->port);
}
@ -2586,7 +2596,7 @@ static void mtl_ddi_pre_enable_dp(struct intel_atomic_state *state,
* Pattern, wait for 5 idle patterns (DP_TP_STATUS Min_Idles_Sent)
* (timeout after 800 us)
*/
intel_dp_start_link_train(intel_dp, crtc_state);
intel_dp_start_link_train(state, intel_dp, crtc_state);
/* 6.n Set DP_TP_CTL link training to Normal */
if (!is_trans_port_sync_mode(crtc_state))
@ -2728,7 +2738,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
* Pattern, wait for 5 idle patterns (DP_TP_STATUS Min_Idles_Sent)
* (timeout after 800 us)
*/
intel_dp_start_link_train(intel_dp, crtc_state);
intel_dp_start_link_train(state, intel_dp, crtc_state);
/* 7.k Set DP_TP_CTL link training to Normal */
if (!is_trans_port_sync_mode(crtc_state))
@ -2795,7 +2805,7 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
to_intel_connector(conn_state->connector),
crtc_state);
intel_dp_sink_set_fec_ready(intel_dp, crtc_state, true);
intel_dp_start_link_train(intel_dp, crtc_state);
intel_dp_start_link_train(state, intel_dp, crtc_state);
if ((port != PORT_A || DISPLAY_VER(dev_priv) >= 9) &&
!is_trans_port_sync_mode(crtc_state))
intel_dp_stop_link_train(intel_dp, crtc_state);
@ -3025,7 +3035,8 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state,
if (is_mst) {
enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder;
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder),
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder),
TGL_TRANS_DDI_PORT_MASK | TRANS_DDI_MODE_SELECT_MASK,
0);
}
@ -3506,11 +3517,10 @@ intel_ddi_pre_pll_enable(struct intel_atomic_state *state,
bool is_tc_port = intel_encoder_is_tc(encoder);
if (is_tc_port) {
struct intel_crtc *master_crtc =
to_intel_crtc(crtc_state->uapi.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
intel_tc_port_get_link(dig_port, crtc_state->lane_count);
intel_ddi_update_active_dpll(state, encoder, master_crtc);
intel_ddi_update_active_dpll(state, encoder, crtc);
}
main_link_aux_power_domain_get(dig_port, crtc_state);
@ -3752,14 +3762,16 @@ static enum transcoder bdw_transcoder_master_readout(struct drm_i915_private *de
u32 master_select;
if (DISPLAY_VER(dev_priv) >= 11) {
u32 ctl2 = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL2(cpu_transcoder));
u32 ctl2 = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL2(dev_priv, cpu_transcoder));
if ((ctl2 & PORT_SYNC_MODE_ENABLE) == 0)
return INVALID_TRANSCODER;
master_select = REG_FIELD_GET(PORT_SYNC_MODE_MASTER_SELECT_MASK, ctl2);
} else {
u32 ctl = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder));
u32 ctl = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder));
if ((ctl & TRANS_DDI_PORT_SYNC_ENABLE) == 0)
return INVALID_TRANSCODER;
@ -3815,7 +3827,8 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
u32 temp, flags = 0;
temp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder));
temp = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder));
if (temp & TRANS_DDI_PHSYNC)
flags |= DRM_MODE_FLAG_PHSYNC;
else
@ -4264,10 +4277,10 @@ static bool crtcs_port_sync_compatible(const struct intel_crtc_state *crtc_state
{
/*
* FIXME the modeset sequence is currently wrong and
* can't deal with bigjoiner + port sync at the same time.
* can't deal with joiner + port sync at the same time.
*/
return crtc_state1->hw.active && crtc_state2->hw.active &&
!crtc_state1->bigjoiner_pipes && !crtc_state2->bigjoiner_pipes &&
!crtc_state1->joiner_pipes && !crtc_state2->joiner_pipes &&
crtc_state1->output_types == crtc_state2->output_types &&
crtc_state1->output_format == crtc_state2->output_format &&
crtc_state1->lane_count == crtc_state2->lane_count &&
@ -4441,35 +4454,6 @@ intel_ddi_init_dp_connector(struct intel_digital_port *dig_port)
return connector;
}
static int modeset_pipe(struct drm_crtc *crtc,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_atomic_state *state;
struct drm_crtc_state *crtc_state;
int ret;
state = drm_atomic_state_alloc(crtc->dev);
if (!state)
return -ENOMEM;
state->acquire_ctx = ctx;
to_intel_atomic_state(state)->internal = true;
crtc_state = drm_atomic_get_crtc_state(state, crtc);
if (IS_ERR(crtc_state)) {
ret = PTR_ERR(crtc_state);
goto out;
}
crtc_state->connectors_changed = true;
ret = drm_atomic_commit(state);
out:
drm_atomic_state_put(state);
return ret;
}
static int intel_hdmi_reset_link(struct intel_encoder *encoder,
struct drm_modeset_acquire_ctx *ctx)
{
@ -4539,7 +4523,18 @@ static int intel_hdmi_reset_link(struct intel_encoder *encoder,
* would be perfectly happy if were to just reconfigure
* the SCDC settings on the fly.
*/
return modeset_pipe(&crtc->base, ctx);
return intel_modeset_commit_pipes(dev_priv, BIT(crtc->pipe), ctx);
}
static void intel_ddi_link_check(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
/* TODO: Move checking the HDMI link state here as well. */
drm_WARN_ON(&i915->drm, !dig_port->dp.attached_connector);
intel_dp_link_check(encoder);
}
static enum intel_hotplug_state
@ -4563,14 +4558,13 @@ intel_ddi_hotplug(struct intel_encoder *encoder,
state = intel_encoder_hotplug(encoder, connector);
if (!intel_tc_port_link_reset(dig_port)) {
intel_modeset_lock_ctx_retry(&ctx, NULL, 0, ret) {
if (connector->base.connector_type == DRM_MODE_CONNECTOR_HDMIA)
if (connector->base.connector_type == DRM_MODE_CONNECTOR_HDMIA) {
intel_modeset_lock_ctx_retry(&ctx, NULL, 0, ret)
ret = intel_hdmi_reset_link(encoder, &ctx);
else
ret = intel_dp_retrain_link(encoder, &ctx);
drm_WARN_ON(encoder->base.dev, ret);
} else {
intel_dp_check_link_state(intel_dp);
}
drm_WARN_ON(encoder->base.dev, ret);
}
/*
@ -4785,6 +4779,11 @@ static void intel_ddi_tc_encoder_suspend_complete(struct intel_encoder *encoder)
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
/*
* TODO: Move this to intel_dp_encoder_suspend(),
* once modeset locking around that is removed.
*/
intel_encoder_link_check_flush_work(encoder);
intel_tc_port_suspend(dig_port);
}
@ -4975,6 +4974,8 @@ void intel_ddi_init(struct drm_i915_private *dev_priv,
"DDI %c/PHY %c", port_name(port), phy_name(phy));
}
intel_encoder_link_check_init(encoder, intel_ddi_link_check);
mutex_init(&dig_port->hdcp_mutex);
dig_port->num_hdcp_streams = 0;

File diff suppressed because it is too large

View File

@ -415,7 +415,7 @@ u32 intel_plane_fb_max_stride(struct drm_i915_private *dev_priv,
enum drm_mode_status
intel_mode_valid_max_plane_size(struct drm_i915_private *dev_priv,
const struct drm_display_mode *mode,
bool bigjoiner);
bool joiner);
enum drm_mode_status
intel_cpu_transcoder_mode_valid(struct drm_i915_private *i915,
const struct drm_display_mode *mode);
@ -423,10 +423,10 @@ enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);
bool is_trans_port_sync_mode(const struct intel_crtc_state *state);
bool is_trans_port_sync_master(const struct intel_crtc_state *state);
u8 intel_crtc_joined_pipe_mask(const struct intel_crtc_state *crtc_state);
bool intel_crtc_is_bigjoiner_slave(const struct intel_crtc_state *crtc_state);
bool intel_crtc_is_bigjoiner_master(const struct intel_crtc_state *crtc_state);
u8 intel_crtc_bigjoiner_slave_pipes(const struct intel_crtc_state *crtc_state);
struct intel_crtc *intel_master_crtc(const struct intel_crtc_state *crtc_state);
bool intel_crtc_is_joiner_secondary(const struct intel_crtc_state *crtc_state);
bool intel_crtc_is_joiner_primary(const struct intel_crtc_state *crtc_state);
u8 intel_crtc_joiner_secondary_pipes(const struct intel_crtc_state *crtc_state);
struct intel_crtc *intel_primary_crtc(const struct intel_crtc_state *crtc_state);
bool intel_crtc_get_pipe_config(struct intel_crtc_state *crtc_state);
bool intel_pipe_config_compare(const struct intel_crtc_state *current_config,
const struct intel_crtc_state *pipe_config,
@ -537,6 +537,9 @@ int intel_modeset_pipes_in_mask_early(struct intel_atomic_state *state,
const char *reason, u8 pipe_mask);
int intel_modeset_all_pipes_late(struct intel_atomic_state *state,
const char *reason);
int intel_modeset_commit_pipes(struct drm_i915_private *i915,
u8 pipe_mask,
struct drm_modeset_acquire_ctx *ctx);
void intel_modeset_get_crtc_power_domains(struct intel_crtc_state *crtc_state,
struct intel_power_domain_mask *old_domains);
void intel_modeset_put_crtc_power_domains(struct intel_crtc *crtc,

View File

@ -13,6 +13,7 @@
#include "i915_debugfs.h"
#include "i915_irq.h"
#include "i915_reg.h"
#include "intel_alpm.h"
#include "intel_crtc.h"
#include "intel_de.h"
#include "intel_crtc_state_dump.h"
@ -23,6 +24,7 @@
#include "intel_display_types.h"
#include "intel_dmc.h"
#include "intel_dp.h"
#include "intel_dp_link_training.h"
#include "intel_dp_mst.h"
#include "intel_drrs.h"
#include "intel_fbc.h"
@ -76,7 +78,7 @@ static int i915_sr_status(struct seq_file *m, void *unused)
else if (IS_I915GM(dev_priv))
sr_enabled = intel_de_read(dev_priv, INSTPM) & INSTPM_SELF_EN;
else if (IS_PINEVIEW(dev_priv))
sr_enabled = intel_de_read(dev_priv, DSPFW3) & PINEVIEW_SELF_REFRESH_EN;
sr_enabled = intel_de_read(dev_priv, DSPFW3(dev_priv)) & PINEVIEW_SELF_REFRESH_EN;
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
sr_enabled = intel_de_read(dev_priv, FW_BLC_SELF_VLV) & FW_CSPWRDWNEN;
@ -574,10 +576,10 @@ static void intel_crtc_info(struct seq_file *m, struct intel_crtc *crtc)
intel_scaler_info(m, crtc);
if (crtc_state->bigjoiner_pipes)
if (crtc_state->joiner_pipes)
seq_printf(m, "\tLinked to 0x%x pipes as a %s\n",
crtc_state->bigjoiner_pipes,
intel_crtc_is_bigjoiner_slave(crtc_state) ? "slave" : "master");
crtc_state->joiner_pipes,
intel_crtc_is_joiner_secondary(crtc_state) ? "slave" : "master");
for_each_intel_encoder_mask(&dev_priv->drm, encoder,
crtc_state->uapi.encoder_mask)
@ -1515,6 +1517,8 @@ void intel_connector_debugfs_add(struct intel_connector *connector)
intel_drrs_connector_debugfs_add(connector);
intel_pps_connector_debugfs_add(connector);
intel_psr_connector_debugfs_add(connector);
intel_alpm_lobf_debugfs_add(connector);
intel_dp_link_training_debugfs_add(connector);
if (connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
connector_type == DRM_MODE_CONNECTOR_HDMIA ||

File diff suppressed because it is too large

View File

@ -14,6 +14,89 @@
struct drm_i915_private;
struct drm_printer;
/* Keep in gen based order, and chronological order within a gen */
enum intel_display_platform {
INTEL_DISPLAY_PLATFORM_UNINITIALIZED = 0,
/* Display ver 2 */
INTEL_DISPLAY_I830,
INTEL_DISPLAY_I845G,
INTEL_DISPLAY_I85X,
INTEL_DISPLAY_I865G,
/* Display ver 3 */
INTEL_DISPLAY_I915G,
INTEL_DISPLAY_I915GM,
INTEL_DISPLAY_I945G,
INTEL_DISPLAY_I945GM,
INTEL_DISPLAY_G33,
INTEL_DISPLAY_PINEVIEW,
/* Display ver 4 */
INTEL_DISPLAY_I965G,
INTEL_DISPLAY_I965GM,
INTEL_DISPLAY_G45,
INTEL_DISPLAY_GM45,
/* Display ver 5 */
INTEL_DISPLAY_IRONLAKE,
/* Display ver 6 */
INTEL_DISPLAY_SANDYBRIDGE,
/* Display ver 7 */
INTEL_DISPLAY_IVYBRIDGE,
INTEL_DISPLAY_VALLEYVIEW,
INTEL_DISPLAY_HASWELL,
/* Display ver 8 */
INTEL_DISPLAY_BROADWELL,
INTEL_DISPLAY_CHERRYVIEW,
/* Display ver 9 */
INTEL_DISPLAY_SKYLAKE,
INTEL_DISPLAY_BROXTON,
INTEL_DISPLAY_KABYLAKE,
INTEL_DISPLAY_GEMINILAKE,
INTEL_DISPLAY_COFFEELAKE,
INTEL_DISPLAY_COMETLAKE,
/* Display ver 11 */
INTEL_DISPLAY_ICELAKE,
INTEL_DISPLAY_JASPERLAKE,
INTEL_DISPLAY_ELKHARTLAKE,
/* Display ver 12 */
INTEL_DISPLAY_TIGERLAKE,
INTEL_DISPLAY_ROCKETLAKE,
INTEL_DISPLAY_DG1,
INTEL_DISPLAY_ALDERLAKE_S,
/* Display ver 13 */
INTEL_DISPLAY_ALDERLAKE_P,
INTEL_DISPLAY_DG2,
/* Display ver 14 (based on GMD ID) */
INTEL_DISPLAY_METEORLAKE,
/* Display ver 20 (based on GMD ID) */
INTEL_DISPLAY_LUNARLAKE,
/* Display ver 14.1 (based on GMD ID) */
INTEL_DISPLAY_BATTLEMAGE,
};
enum intel_display_subplatform {
INTEL_DISPLAY_SUBPLATFORM_UNINITIALIZED = 0,
INTEL_DISPLAY_HASWELL_ULT,
INTEL_DISPLAY_HASWELL_ULX,
INTEL_DISPLAY_BROADWELL_ULT,
INTEL_DISPLAY_BROADWELL_ULX,
INTEL_DISPLAY_SKYLAKE_ULT,
INTEL_DISPLAY_SKYLAKE_ULX,
INTEL_DISPLAY_KABYLAKE_ULT,
INTEL_DISPLAY_KABYLAKE_ULX,
INTEL_DISPLAY_COFFEELAKE_ULT,
INTEL_DISPLAY_COFFEELAKE_ULX,
INTEL_DISPLAY_COMETLAKE_ULT,
INTEL_DISPLAY_COMETLAKE_ULX,
INTEL_DISPLAY_ICELAKE_PORT_F,
INTEL_DISPLAY_TIGERLAKE_UY,
INTEL_DISPLAY_ALDERLAKE_S_RAPTORLAKE_S,
INTEL_DISPLAY_ALDERLAKE_P_ALDERLAKE_N,
INTEL_DISPLAY_ALDERLAKE_P_RAPTORLAKE_P,
INTEL_DISPLAY_ALDERLAKE_P_RAPTORLAKE_U,
INTEL_DISPLAY_DG2_G10,
INTEL_DISPLAY_DG2_G11,
INTEL_DISPLAY_DG2_G12,
};
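As the comments note, the platform list is kept in display version order; INTEL_DISPLAY_BATTLEMAGE sits under display IP 14.1, the same DISPLAY_VER_FULL(i915) == IP_VER(14, 1) condition the CX0 PHY code above uses to select the Xe2_HPD PLL tables and SRAM offsets.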
#define DEV_INFO_DISPLAY_FOR_EACH_FLAG(func) \
/* Keep in alphabetical order */ \
func(cursor_needs_physical); \
@ -71,6 +154,7 @@ struct drm_printer;
BIT(trans)) != 0)
#define HAS_VRR(i915) (DISPLAY_VER(i915) >= 11)
#define HAS_AS_SDP(i915) (DISPLAY_VER(i915) >= 13)
#define HAS_CMRR(i915) (DISPLAY_VER(i915) >= 20)
#define INTEL_NUM_PIPES(i915) (hweight8(DISPLAY_RUNTIME_INFO(i915)->pipe_mask))
#define I915_HAS_HOTPLUG(i915) (DISPLAY_INFO(i915)->has_hotplug)
#define OVERLAY_NEEDS_PHYSICAL(i915) (DISPLAY_INFO(i915)->overlay_needs_physical)
@ -111,7 +195,10 @@ struct drm_printer;
(DISPLAY_VER(i915) >= (from) && DISPLAY_VER(i915) <= (until))
struct intel_display_runtime_info {
struct {
enum intel_display_platform platform;
enum intel_display_subplatform subplatform;
struct intel_display_ip_ver {
u16 ver;
u16 rel;
u16 step;

View File

@ -18,6 +18,7 @@
#include "intel_fifo_underrun.h"
#include "intel_gmbus.h"
#include "intel_hotplug_irq.h"
#include "intel_pipe_crc_regs.h"
#include "intel_pmdemand.h"
#include "intel_psr.h"
#include "intel_psr_regs.h"
@ -224,7 +225,7 @@ out:
void i915_enable_pipestat(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 status_mask)
{
i915_reg_t reg = PIPESTAT(pipe);
i915_reg_t reg = PIPESTAT(dev_priv, pipe);
u32 enable_mask;
drm_WARN_ONCE(&dev_priv->drm, status_mask & ~PIPESTAT_INT_STATUS_MASK,
@ -247,7 +248,7 @@ void i915_enable_pipestat(struct drm_i915_private *dev_priv,
void i915_disable_pipestat(struct drm_i915_private *dev_priv,
enum pipe pipe, u32 status_mask)
{
i915_reg_t reg = PIPESTAT(pipe);
i915_reg_t reg = PIPESTAT(dev_priv, pipe);
u32 enable_mask;
drm_WARN_ONCE(&dev_priv->drm, status_mask & ~PIPESTAT_INT_STATUS_MASK,
@ -356,7 +357,7 @@ static void hsw_pipe_crc_irq_handler(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
display_pipe_crc_irq_handler(dev_priv, pipe,
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_1_IVB(pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_HSW(pipe)),
0, 0, 0, 0);
}
@ -377,19 +378,21 @@ static void i9xx_pipe_crc_irq_handler(struct drm_i915_private *dev_priv,
u32 res1, res2;
if (DISPLAY_VER(dev_priv) >= 3)
res1 = intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_RES1_I915(pipe));
res1 = intel_uncore_read(&dev_priv->uncore,
PIPE_CRC_RES_RES1_I915(dev_priv, pipe));
else
res1 = 0;
if (DISPLAY_VER(dev_priv) >= 5 || IS_G4X(dev_priv))
res2 = intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_RES2_G4X(pipe));
res2 = intel_uncore_read(&dev_priv->uncore,
PIPE_CRC_RES_RES2_G4X(dev_priv, pipe));
else
res2 = 0;
display_pipe_crc_irq_handler(dev_priv, pipe,
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_RED(pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_GREEN(pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_BLUE(pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_RED(dev_priv, pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_GREEN(dev_priv, pipe)),
intel_uncore_read(&dev_priv->uncore, PIPE_CRC_RES_BLUE(dev_priv, pipe)),
res1, res2);
}
@ -398,7 +401,8 @@ void i9xx_pipestat_irq_reset(struct drm_i915_private *dev_priv)
enum pipe pipe;
for_each_pipe(dev_priv, pipe) {
intel_uncore_write(&dev_priv->uncore, PIPESTAT(pipe),
intel_uncore_write(&dev_priv->uncore,
PIPESTAT(dev_priv, pipe),
PIPESTAT_INT_STATUS_MASK |
PIPE_FIFO_UNDERRUN_STATUS);
@ -451,7 +455,7 @@ void i9xx_pipestat_irq_ack(struct drm_i915_private *dev_priv,
if (!status_mask)
continue;
reg = PIPESTAT(pipe);
reg = PIPESTAT(dev_priv, pipe);
pipe_stats[pipe] = intel_uncore_read(&dev_priv->uncore, reg) & status_mask;
enable_mask = i915_pipestat_enable_mask(dev_priv, pipe);
@ -876,7 +880,8 @@ gen8_de_misc_irq_handler(struct drm_i915_private *dev_priv, u32 iir)
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
if (DISPLAY_VER(dev_priv) >= 12)
iir_reg = TRANS_PSR_IIR(intel_dp->psr.transcoder);
iir_reg = TRANS_PSR_IIR(dev_priv,
intel_dp->psr.transcoder);
else
iir_reg = EDP_PSR_IIR;
@ -909,7 +914,8 @@ static void gen11_dsi_te_interrupt_handler(struct drm_i915_private *dev_priv,
* In case of dual link, TE comes from DSI_1
* this is to check if dual link is enabled
*/
val = intel_uncore_read(&dev_priv->uncore, TRANS_DDI_FUNC_CTL2(TRANSCODER_DSI_0));
val = intel_uncore_read(&dev_priv->uncore,
TRANS_DDI_FUNC_CTL2(dev_priv, TRANSCODER_DSI_0));
val &= PORT_SYNC_MODE_ENABLE;
/*
@ -930,7 +936,8 @@ static void gen11_dsi_te_interrupt_handler(struct drm_i915_private *dev_priv,
}
/* Get PIPE for handling VBLANK event */
val = intel_uncore_read(&dev_priv->uncore, TRANS_DDI_FUNC_CTL(dsi_trans));
val = intel_uncore_read(&dev_priv->uncore,
TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans));
switch (val & TRANS_DDI_EDP_INPUT_MASK) {
case TRANS_DDI_EDP_INPUT_A_ON:
pipe = PIPE_A;
@ -1374,7 +1381,7 @@ void vlv_display_irq_reset(struct drm_i915_private *dev_priv)
intel_uncore_write(uncore, DPINVGTT, DPINVGTT_STATUS_MASK_VLV);
i915_hotplug_interrupt_update_locked(dev_priv, 0xffffffff, 0);
intel_uncore_rmw(uncore, PORT_HOTPLUG_STAT, 0, 0);
intel_uncore_rmw(uncore, PORT_HOTPLUG_STAT(dev_priv), 0, 0);
i9xx_pipestat_irq_reset(dev_priv);
@ -1455,8 +1462,12 @@ void gen11_display_irq_reset(struct drm_i915_private *dev_priv)
if (!intel_display_power_is_enabled(dev_priv, domain))
continue;
intel_uncore_write(uncore, TRANS_PSR_IMR(trans), 0xffffffff);
intel_uncore_write(uncore, TRANS_PSR_IIR(trans), 0xffffffff);
intel_uncore_write(uncore,
TRANS_PSR_IMR(dev_priv, trans),
0xffffffff);
intel_uncore_write(uncore,
TRANS_PSR_IIR(dev_priv, trans),
0xffffffff);
}
} else {
intel_uncore_write(uncore, EDP_PSR_IMR, 0xffffffff);
@ -1688,7 +1699,8 @@ void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
if (!intel_display_power_is_enabled(dev_priv, domain))
continue;
gen3_assert_iir_is_zero(uncore, TRANS_PSR_IIR(trans));
gen3_assert_iir_is_zero(uncore,
TRANS_PSR_IIR(dev_priv, trans));
}
} else {
gen3_assert_iir_is_zero(uncore, EDP_PSR_IIR);

View File

@ -60,16 +60,23 @@ enum transcoder {
* (eg. PLANE_CTL(), PS_PLANE_SEL(), etc.) so adjust with care.
*/
enum plane_id {
PLANE_PRIMARY,
PLANE_SPRITE0,
PLANE_SPRITE1,
PLANE_SPRITE2,
PLANE_SPRITE3,
PLANE_SPRITE4,
PLANE_SPRITE5,
/* skl+ universal plane names */
PLANE_1,
PLANE_2,
PLANE_3,
PLANE_4,
PLANE_5,
PLANE_6,
PLANE_7,
PLANE_CURSOR,
I915_MAX_PLANES,
/* pre-skl plane names */
PLANE_PRIMARY = PLANE_1,
PLANE_SPRITE0,
PLANE_SPRITE1,
};
enum port {
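
The reworked enum above keeps the pre-skl names usable by aliasing them onto the skl+ universal plane names. A minimal standalone sketch of the same C enum pattern (a reduced copy for illustration, not the kernel header itself):

#include <assert.h>
#include <stdio.h>

/* Reduced copy of the pattern: skl+ names define the values, legacy names alias them. */
enum plane_id {
	PLANE_1,
	PLANE_2,
	PLANE_3,
	PLANE_CURSOR,
	I915_MAX_PLANES,

	/* pre-skl plane names */
	PLANE_PRIMARY = PLANE_1,
	PLANE_SPRITE0,		/* counting resumes: PLANE_1 + 1 == PLANE_2 */
	PLANE_SPRITE1,		/* PLANE_3 */
};

int main(void)
{
	/* Enumerators after an explicit assignment keep incrementing, so the
	 * legacy names land on the corresponding universal plane values. */
	assert(PLANE_PRIMARY == PLANE_1);
	assert(PLANE_SPRITE0 == PLANE_2);
	assert(PLANE_SPRITE1 == PLANE_3);
	printf("legacy plane names alias PLANE_1..PLANE_3\n");
	return 0;
}
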

View File

@ -116,7 +116,7 @@ intel_display_param_named(psr_safest_params, bool, 0400,
"Default: 0");
intel_display_param_named_unsafe(enable_psr2_sel_fetch, bool, 0400,
"Enable PSR2 selective fetch "
"Enable PSR2 and Panel Replay selective fetch "
"(0=disabled, 1=enabled) "
"Default: 1");

View File

@ -675,6 +675,12 @@ intel_display_power_put_async_work(struct work_struct *work)
release_async_put_domains(power_domains,
&power_domains->async_put_domains[0]);
/*
* Cancel the work that got queued after this one got dequeued,
* since here we released the corresponding async-put reference.
*/
cancel_async_put_work(power_domains, false);
/* Requeue the work if more domains were async put meanwhile. */
if (!bitmap_empty(power_domains->async_put_domains[1].bits, POWER_DOMAIN_NUM)) {
bitmap_copy(power_domains->async_put_domains[0].bits,
@ -686,12 +692,6 @@ intel_display_power_put_async_work(struct work_struct *work)
fetch_and_zero(&new_work_wakeref),
power_domains->async_put_next_delay);
power_domains->async_put_next_delay = 0;
} else {
/*
* Cancel the work that got queued after this one got dequeued,
* since here we released the corresponding async-put reference.
*/
cancel_async_put_work(power_domains, false);
}
out_verify:
@ -1207,7 +1207,7 @@ static void assert_can_disable_lcpll(struct drm_i915_private *dev_priv)
intel_de_read(dev_priv, WRPLL_CTL(1)) & WRPLL_PLL_ENABLE,
"WRPLL2 enabled\n");
I915_STATE_WARN(dev_priv,
intel_de_read(dev_priv, PP_STATUS(0)) & PP_ON,
intel_de_read(dev_priv, PP_STATUS(dev_priv, 0)) & PP_ON,
"Panel power on\n");
I915_STATE_WARN(dev_priv,
intel_de_read(dev_priv, BLC_PWM_CPU_CTL2) & BLM_PWM_ENABLE,
@ -1688,6 +1688,10 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
if (IS_DG2(dev_priv))
intel_snps_phy_wait_for_calibration(dev_priv);
/* 9. XE2_HPD: Program CHICKEN_MISC_2 before any cursor or planes are enabled */
if (DISPLAY_VER_FULL(dev_priv) == IP_VER(14, 1))
intel_de_rmw(dev_priv, CHICKEN_MISC_2, BMG_DARB_HALF_BLK_END_BURST, 1);
if (resume)
intel_dmc_load_program(dev_priv);
@ -1768,7 +1772,7 @@ static void chv_phy_control_init(struct drm_i915_private *dev_priv)
* current lane status.
*/
if (intel_power_well_is_enabled(dev_priv, cmn_bc)) {
u32 status = intel_de_read(dev_priv, DPLL(PIPE_A));
u32 status = intel_de_read(dev_priv, DPLL(dev_priv, PIPE_A));
unsigned int mask;
mask = status & DPLL_PORTB_READY_MASK;

View File

@ -1044,9 +1044,9 @@ static bool i9xx_always_on_power_well_enabled(struct drm_i915_private *dev_priv,
static void i830_pipes_power_well_enable(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
if ((intel_de_read(dev_priv, TRANSCONF(PIPE_A)) & TRANSCONF_ENABLE) == 0)
if ((intel_de_read(dev_priv, TRANSCONF(dev_priv, PIPE_A)) & TRANSCONF_ENABLE) == 0)
i830_enable_pipe(dev_priv, PIPE_A);
if ((intel_de_read(dev_priv, TRANSCONF(PIPE_B)) & TRANSCONF_ENABLE) == 0)
if ((intel_de_read(dev_priv, TRANSCONF(dev_priv, PIPE_B)) & TRANSCONF_ENABLE) == 0)
i830_enable_pipe(dev_priv, PIPE_B);
}
@ -1060,8 +1060,8 @@ static void i830_pipes_power_well_disable(struct drm_i915_private *dev_priv,
static bool i830_pipes_power_well_enabled(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
return intel_de_read(dev_priv, TRANSCONF(PIPE_A)) & TRANSCONF_ENABLE &&
intel_de_read(dev_priv, TRANSCONF(PIPE_B)) & TRANSCONF_ENABLE;
return intel_de_read(dev_priv, TRANSCONF(dev_priv, PIPE_A)) & TRANSCONF_ENABLE &&
intel_de_read(dev_priv, TRANSCONF(dev_priv, PIPE_B)) & TRANSCONF_ENABLE;
}
static void i830_pipes_power_well_sync_hw(struct drm_i915_private *dev_priv,
@ -1196,13 +1196,13 @@ static void vlv_display_power_well_init(struct drm_i915_private *dev_priv)
* CHV DPLL B/C have some issues if VGA mode is enabled.
*/
for_each_pipe(dev_priv, pipe) {
u32 val = intel_de_read(dev_priv, DPLL(pipe));
u32 val = intel_de_read(dev_priv, DPLL(dev_priv, pipe));
val |= DPLL_REF_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS;
if (pipe != PIPE_A)
val |= DPLL_INTEGRATED_CRI_CLK_VLV;
intel_de_write(dev_priv, DPLL(pipe), val);
intel_de_write(dev_priv, DPLL(dev_priv, pipe), val);
}
vlv_init_display_clock_gating(dev_priv);
@ -1355,7 +1355,7 @@ static void assert_chv_phy_status(struct drm_i915_private *dev_priv)
*/
if (BITS_SET(phy_control,
PHY_CH_POWER_DOWN_OVRD(0xf, DPIO_PHY0, DPIO_CH1)) &&
(intel_de_read(dev_priv, DPLL(PIPE_B)) & DPLL_VCO_ENABLE) == 0)
(intel_de_read(dev_priv, DPLL(dev_priv, PIPE_B)) & DPLL_VCO_ENABLE) == 0)
phy_status |= PHY_STATUS_CMN_LDO(DPIO_PHY0, DPIO_CH1);
if (BITS_SET(phy_control,

View File

@ -44,9 +44,10 @@
#include <drm/drm_rect.h>
#include <drm/drm_vblank.h>
#include <drm/drm_vblank_work.h>
#include <drm/i915_hdcp_interface.h>
#include <drm/intel/i915_hdcp_interface.h>
#include <media/cec-notifier.h>
#include "gem/i915_gem_object_types.h" /* for to_intel_bo() */
#include "i915_vma.h"
#include "i915_vma_types.h"
#include "intel_bios.h"
@ -160,6 +161,11 @@ struct intel_encoder {
enum port port;
u16 cloneable;
u8 pipe_mask;
/* Check and recover a bad link state. */
struct delayed_work link_check_work;
void (*link_check)(struct intel_encoder *encoder);
enum intel_hotplug_state (*hotplug)(struct intel_encoder *encoder,
struct intel_connector *connector);
enum intel_output_type (*compute_output_type)(struct intel_encoder *,
@ -305,7 +311,7 @@ enum drrs_type {
};
struct intel_vbt_panel_data {
struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
struct drm_display_mode *lfp_vbt_mode; /* if any */
struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
/* Feature bits */
@ -329,6 +335,7 @@ struct intel_vbt_panel_data {
u8 drrs_msa_timing_delay;
bool low_vswing;
bool hobl;
bool dsc_disable;
} edp;
struct {
@ -401,7 +408,12 @@ struct intel_panel {
} vesa;
struct {
bool sdr_uses_aux;
} intel;
bool supports_2084_decode;
bool supports_2020_gamut;
bool supports_segmented_backlight;
bool supports_sdp_colorimetry;
bool supports_tone_mapping;
} intel_cap;
} edp;
struct backlight_device *device;
@ -1042,7 +1054,7 @@ struct intel_crtc_state {
*
* During initial hw readout, they need to be copied to uapi.
*
* Bigjoiner will allow a transcoder mode that spans 2 pipes;
* Joiner will allow a transcoder mode that spans 2 pipes;
* Use the pipe_mode for calculations like watermarks, pipe
* scaler, and bandwidth.
*
@ -1189,7 +1201,7 @@ struct intel_crtc_state {
/* PSR is supported but might not be enabled due the lack of enabled planes */
bool has_psr;
bool has_psr2;
bool has_sel_update;
bool enable_psr2_sel_fetch;
bool enable_psr2_su_region_et;
bool req_psr2_sdp_prior_scanline;
@ -1338,8 +1350,8 @@ struct intel_crtc_state {
/* enable vlv/chv wgc csc? */
bool wgc_enable;
/* big joiner pipe bitmask */
u8 bigjoiner_pipes;
/* joiner pipe bitmask */
u8 joiner_pipes;
/* Display Stream compression state */
struct {
@ -1396,6 +1408,12 @@ struct intel_crtc_state {
u32 vsync_end, vsync_start;
} vrr;
/* Content Match Refresh Rate state */
struct {
bool enable;
u64 cmrr_n, cmrr_m;
} cmrr;
/* Stream Splitter for eDP MSO */
struct {
bool enable;
@ -1405,6 +1423,9 @@ struct intel_crtc_state {
/* for loading single buffered registers during vblank */
struct drm_vblank_work vblank_work;
/* LOBF flag */
bool has_lobf;
};
enum intel_pipe_crc_source {
@ -1521,7 +1542,7 @@ struct intel_plane {
enum i9xx_plane_id i9xx_plane;
enum plane_id id;
enum pipe pipe;
bool need_async_flip_disable_wa;
bool need_async_flip_toggle_wa;
u32 frontbuffer_bit;
struct {
@ -1682,6 +1703,7 @@ struct intel_psr {
#define I915_PSR_DEBUG_ENABLE_SEL_FETCH 0x4
#define I915_PSR_DEBUG_IRQ 0x10
#define I915_PSR_DEBUG_SU_REGION_ET_DISABLE 0x20
#define I915_PSR_DEBUG_PANEL_REPLAY_DISABLE 0x40
u32 debug;
bool sink_support;
@ -1695,22 +1717,12 @@ struct intel_psr {
unsigned int busy_frontbuffer_bits;
bool sink_psr2_support;
bool link_standby;
bool psr2_enabled;
bool sel_update_enabled;
bool psr2_sel_fetch_enabled;
bool psr2_sel_fetch_cff_enabled;
bool su_region_et_enabled;
bool req_psr2_sdp_prior_scanline;
u8 sink_sync_latency;
struct {
u8 io_wake_lines;
u8 fast_wake_lines;
/* LNL and beyond */
u8 check_entry_lines;
u8 silence_period_sym_clocks;
u8 lfps_half_cycle_num_of_syms;
} alpm_parameters;
ktime_t last_entry_attempt;
ktime_t last_exit;
bool sink_not_reliable;
@ -1719,6 +1731,7 @@ struct intel_psr {
u16 su_y_granularity;
bool source_panel_replay_support;
bool sink_panel_replay_support;
bool sink_panel_replay_su_support;
bool panel_replay_enabled;
u32 dc3co_exitline;
u32 dc3co_exit_delay;
@ -1733,10 +1746,10 @@ struct intel_dp {
u8 lane_count;
u8 sink_count;
bool link_trained;
bool reset_link_params;
bool use_max_params;
u8 dpcd[DP_RECEIVER_CAP_SIZE];
u8 psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
u8 pr_dpcd;
u8 downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
u8 edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE];
u8 lttpr_common_caps[DP_LTTPR_COMMON_CAP_SIZE];
@ -1754,10 +1767,21 @@ struct intel_dp {
/* intersection of source and sink rates */
int num_common_rates;
int common_rates[DP_MAX_SUPPORTED_RATES];
/* Max lane count for the current link */
int max_link_lane_count;
/* Max rate for the current link */
int max_link_rate;
struct {
/* TODO: move the rest of link specific fields to here */
/* Max lane count for the current link */
int max_lane_count;
/* Max rate for the current link */
int max_rate;
int force_lane_count;
int force_rate;
bool retrain_disabled;
/* Sequential link training failures after a passing LT */
int seq_train_failures;
int force_train_failure;
bool force_retrain;
} link;
bool reset_link_params;
int mso_link_count;
int mso_pixel_overlap;
/* sink or branch descriptor */
@ -1840,6 +1864,19 @@ struct intel_dp {
unsigned long last_oui_write;
bool colorimetry_support;
struct {
u8 io_wake_lines;
u8 fast_wake_lines;
/* LNL and beyond */
u8 check_entry_lines;
u8 aux_less_wake_lines;
u8 silence_period_sym_clocks;
u8 lfps_half_cycle_num_of_syms;
} alpm_parameters;
u8 alpm_dpcd;
};
enum lspcon_vendor {

View File

@ -4,7 +4,6 @@
*/
#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_de.h"
#include "intel_display.h"

View File

@ -30,6 +30,7 @@
#include "intel_de.h"
#include "intel_dmc.h"
#include "intel_dmc_regs.h"
#include "intel_step.h"
/**
* DOC: DMC Firmware Support
@ -115,6 +116,9 @@ static bool dmc_firmware_param_disabled(struct drm_i915_private *i915)
#define XE2LPD_DMC_PATH DMC_PATH(xe2lpd)
MODULE_FIRMWARE(XE2LPD_DMC_PATH);
#define BMG_DMC_PATH DMC_PATH(bmg)
MODULE_FIRMWARE(BMG_DMC_PATH);
#define MTL_DMC_PATH DMC_PATH(mtl)
MODULE_FIRMWARE(MTL_DMC_PATH);
@ -166,6 +170,9 @@ static const char *dmc_firmware_default(struct drm_i915_private *i915, u32 *size
if (DISPLAY_VER_FULL(i915) == IP_VER(20, 0)) {
fw_path = XE2LPD_DMC_PATH;
max_fw_size = XE2LPD_DMC_MAX_FW_SIZE;
} else if (DISPLAY_VER_FULL(i915) == IP_VER(14, 1)) {
fw_path = BMG_DMC_PATH;
max_fw_size = XELPDP_DMC_MAX_FW_SIZE;
} else if (DISPLAY_VER_FULL(i915) == IP_VER(14, 0)) {
fw_path = MTL_DMC_PATH;
max_fw_size = XELPDP_DMC_MAX_FW_SIZE;
@ -1177,7 +1184,7 @@ void intel_dmc_fini(struct drm_i915_private *i915)
}
}
void intel_dmc_print_error_state(struct drm_i915_error_state_buf *m,
void intel_dmc_print_error_state(struct drm_printer *p,
struct drm_i915_private *i915)
{
struct intel_dmc *dmc = i915_to_dmc(i915);
@ -1185,13 +1192,13 @@ void intel_dmc_print_error_state(struct drm_i915_error_state_buf *m,
if (!HAS_DMC(i915))
return;
i915_error_printf(m, "DMC initialized: %s\n", str_yes_no(dmc));
i915_error_printf(m, "DMC loaded: %s\n",
str_yes_no(intel_dmc_has_payload(i915)));
drm_printf(p, "DMC initialized: %s\n", str_yes_no(dmc));
drm_printf(p, "DMC loaded: %s\n",
str_yes_no(intel_dmc_has_payload(i915)));
if (dmc)
i915_error_printf(m, "DMC fw version: %d.%d\n",
DMC_VERSION_MAJOR(dmc->version),
DMC_VERSION_MINOR(dmc->version));
drm_printf(p, "DMC fw version: %d.%d\n",
DMC_VERSION_MAJOR(dmc->version),
DMC_VERSION_MINOR(dmc->version));
}
static int intel_dmc_debugfs_status_show(struct seq_file *m, void *unused)

View File

@ -8,9 +8,9 @@
#include <linux/types.h>
struct drm_i915_error_state_buf;
struct drm_i915_private;
enum pipe;
struct drm_i915_private;
struct drm_printer;
void intel_dmc_init(struct drm_i915_private *i915);
void intel_dmc_load_program(struct drm_i915_private *i915);
@ -22,7 +22,7 @@ void intel_dmc_suspend(struct drm_i915_private *i915);
void intel_dmc_resume(struct drm_i915_private *i915);
bool intel_dmc_has_payload(struct drm_i915_private *i915);
void intel_dmc_debugfs_register(struct drm_i915_private *i915);
void intel_dmc_print_error_state(struct drm_i915_error_state_buf *m,
void intel_dmc_print_error_state(struct drm_printer *p,
struct drm_i915_private *i915);
void assert_dmc_loaded(struct drm_i915_private *i915);

View File

@ -48,6 +48,7 @@
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_reg.h"
#include "intel_alpm.h"
#include "intel_atomic.h"
#include "intel_audio.h"
#include "intel_backlight.h"
@ -68,6 +69,7 @@
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
#include "intel_drrs.h"
#include "intel_encoder.h"
#include "intel_fifo_underrun.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
@ -75,6 +77,7 @@
#include "intel_hotplug_irq.h"
#include "intel_lspcon.h"
#include "intel_lvds.h"
#include "intel_modeset_lock.h"
#include "intel_panel.h"
#include "intel_pch_display.h"
#include "intel_pps.h"
@ -329,7 +332,7 @@ static int intel_dp_common_len_rate_limit(const struct intel_dp *intel_dp,
intel_dp->num_common_rates, max_rate);
}
static int intel_dp_common_rate(struct intel_dp *intel_dp, int index)
int intel_dp_common_rate(struct intel_dp *intel_dp, int index)
{
if (drm_WARN_ON(&dp_to_i915(intel_dp)->drm,
index < 0 || index >= intel_dp->num_common_rates))
@ -344,7 +347,7 @@ int intel_dp_max_common_rate(struct intel_dp *intel_dp)
return intel_dp_common_rate(intel_dp, intel_dp->num_common_rates - 1);
}
static int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port)
int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port)
{
int vbt_max_lanes = intel_bios_dp_max_lane_count(dig_port->base.devdata);
int max_lanes = dig_port->max_lanes;
@ -370,19 +373,39 @@ int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
return min3(source_max, sink_max, lane_max);
}
static int forced_lane_count(struct intel_dp *intel_dp)
{
return clamp(intel_dp->link.force_lane_count, 1, intel_dp_max_common_lane_count(intel_dp));
}
int intel_dp_max_lane_count(struct intel_dp *intel_dp)
{
switch (intel_dp->max_link_lane_count) {
int lane_count;
if (intel_dp->link.force_lane_count)
lane_count = forced_lane_count(intel_dp);
else
lane_count = intel_dp->link.max_lane_count;
switch (lane_count) {
case 1:
case 2:
case 4:
return intel_dp->max_link_lane_count;
return lane_count;
default:
MISSING_CASE(intel_dp->max_link_lane_count);
MISSING_CASE(lane_count);
return 1;
}
}
static int intel_dp_min_lane_count(struct intel_dp *intel_dp)
{
if (intel_dp->link.force_lane_count)
return forced_lane_count(intel_dp);
return 1;
}
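
A standalone sketch of the lane-count selection the helpers above implement, with plain ints standing in for the intel_dp fields (clamp() is open-coded here; the function name is illustrative, not driver API):

#include <assert.h>

static int clamp_int(int val, int lo, int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

/* Mirrors forced_lane_count()/intel_dp_max_lane_count(): a forced value is
 * clamped to the common maximum, and anything that is not a valid DP lane
 * count (1, 2 or 4) falls back to a single lane. */
static int effective_max_lane_count(int force_lane_count, int max_common_lanes)
{
	int lanes = force_lane_count ? clamp_int(force_lane_count, 1, max_common_lanes)
				     : max_common_lanes;

	switch (lanes) {
	case 1:
	case 2:
	case 4:
		return lanes;
	default:
		return 1;
	}
}

int main(void)
{
	assert(effective_max_lane_count(0, 4) == 4);	/* no forcing: use the common max */
	assert(effective_max_lane_count(8, 4) == 4);	/* forced value clamped to the max */
	assert(effective_max_lane_count(3, 4) == 1);	/* invalid lane count falls back to 1 */
	return 0;
}
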
/*
* The required data bandwidth for a mode with given pixel clock and bpp. This
* is the required net bandwidth independent of the data bandwidth efficiency.
@ -436,12 +459,16 @@ int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
return max_rate;
}
bool intel_dp_has_bigjoiner(struct intel_dp *intel_dp)
bool intel_dp_has_joiner(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *encoder = &intel_dig_port->base;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
/* eDP MSO is not compatible with joiner */
if (intel_dp->mso_link_count)
return false;
return DISPLAY_VER(dev_priv) >= 12 ||
(DISPLAY_VER(dev_priv) == 11 &&
encoder->port != PORT_A);
@ -477,6 +504,9 @@ static int mtl_max_source_rate(struct intel_dp *intel_dp)
if (intel_encoder_is_c10phy(encoder))
return 810000;
if (DISPLAY_VER_FULL(to_i915(encoder->base.dev)) == IP_VER(14, 1))
return 1350000;
return 2000000;
}
@ -601,7 +631,7 @@ static int intersect_rates(const int *source_rates, int source_len,
}
/* return index of rate in rates array, or -1 if not found */
static int intel_dp_rate_index(const int *rates, int len, int rate)
int intel_dp_rate_index(const int *rates, int len, int rate)
{
int i;
@ -641,7 +671,7 @@ static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
* boot-up.
*/
if (link_rate == 0 ||
link_rate > intel_dp->max_link_rate)
link_rate > intel_dp->link.max_rate)
return false;
if (lane_count == 0 ||
@ -651,78 +681,6 @@ static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
return true;
}
static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
int link_rate,
u8 lane_count)
{
/* FIXME figure out what we actually want here */
const struct drm_display_mode *fixed_mode =
intel_panel_preferred_fixed_mode(intel_dp->attached_connector);
int mode_rate, max_rate;
mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
if (mode_rate > max_rate)
return false;
return true;
}
int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
int link_rate, u8 lane_count)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
int index;
/*
* TODO: Enable fallback on MST links once MST link compute can handle
* the fallback params.
*/
if (intel_dp->is_mst) {
drm_err(&i915->drm, "Link Training Unsuccessful\n");
return -1;
}
if (intel_dp_is_edp(intel_dp) && !intel_dp->use_max_params) {
drm_dbg_kms(&i915->drm,
"Retrying Link training for eDP with max parameters\n");
intel_dp->use_max_params = true;
return 0;
}
index = intel_dp_rate_index(intel_dp->common_rates,
intel_dp->num_common_rates,
link_rate);
if (index > 0) {
if (intel_dp_is_edp(intel_dp) &&
!intel_dp_can_link_train_fallback_for_edp(intel_dp,
intel_dp_common_rate(intel_dp, index - 1),
lane_count)) {
drm_dbg_kms(&i915->drm,
"Retrying Link training for eDP with same parameters\n");
return 0;
}
intel_dp->max_link_rate = intel_dp_common_rate(intel_dp, index - 1);
intel_dp->max_link_lane_count = lane_count;
} else if (lane_count > 1) {
if (intel_dp_is_edp(intel_dp) &&
!intel_dp_can_link_train_fallback_for_edp(intel_dp,
intel_dp_max_common_rate(intel_dp),
lane_count >> 1)) {
drm_dbg_kms(&i915->drm,
"Retrying Link training for eDP with same parameters\n");
return 0;
}
intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
intel_dp->max_link_lane_count = lane_count >> 1;
} else {
drm_err(&i915->drm, "Link Training Unsuccessful\n");
return -1;
}
return 0;
}
u32 intel_dp_mode_to_fec_clock(u32 mode_clock)
{
return div_u64(mul_u32_u32(mode_clock, DP_DSC_FEC_OVERHEAD_FACTOR),
@ -1204,19 +1162,39 @@ intel_dp_mode_valid_downstream(struct intel_connector *connector,
return MODE_OK;
}
bool intel_dp_need_bigjoiner(struct intel_dp *intel_dp,
struct intel_connector *connector,
int hdisplay, int clock)
bool intel_dp_need_joiner(struct intel_dp *intel_dp,
struct intel_connector *connector,
int hdisplay, int clock)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
if (!intel_dp_has_bigjoiner(intel_dp))
if (!intel_dp_has_joiner(intel_dp))
return false;
return clock > i915->display.cdclk.max_dotclk_freq || hdisplay > 5120 ||
connector->force_bigjoiner_enable;
}
bool intel_dp_has_dsc(const struct intel_connector *connector)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
if (!HAS_DSC(i915))
return false;
if (connector->mst_port && !HAS_DSC_MST(i915))
return false;
if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP &&
connector->panel.vbt.edp.dsc_disable)
return false;
if (!drm_dp_sink_supports_dsc(connector->dp.dsc_dpcd))
return false;
return true;
}
static enum drm_mode_status
intel_dp_mode_valid(struct drm_connector *_connector,
struct drm_display_mode *mode)
@ -1231,7 +1209,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
u16 dsc_max_compressed_bpp = 0;
u8 dsc_slice_count = 0;
enum drm_mode_status status;
bool dsc = false, bigjoiner = false;
bool dsc = false, joiner = false;
status = intel_cpu_transcoder_mode_valid(dev_priv, mode);
if (status != MODE_OK)
@ -1252,9 +1230,9 @@ intel_dp_mode_valid(struct drm_connector *_connector,
target_clock = fixed_mode->clock;
}
if (intel_dp_need_bigjoiner(intel_dp, connector,
mode->hdisplay, target_clock)) {
bigjoiner = true;
if (intel_dp_need_joiner(intel_dp, connector,
mode->hdisplay, target_clock)) {
joiner = true;
max_dotclk *= 2;
}
if (target_clock > max_dotclk)
@ -1271,8 +1249,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
mode_rate = intel_dp_link_required(target_clock,
intel_dp_mode_min_output_bpp(connector, mode));
if (HAS_DSC(dev_priv) &&
drm_dp_sink_supports_dsc(connector->dp.dsc_dpcd)) {
if (intel_dp_has_dsc(connector)) {
enum intel_output_format sink_format, output_format;
int pipe_bpp;
@ -1301,20 +1278,20 @@ intel_dp_mode_valid(struct drm_connector *_connector,
max_lanes,
target_clock,
mode->hdisplay,
bigjoiner,
joiner,
output_format,
pipe_bpp, 64);
dsc_slice_count =
intel_dp_dsc_get_slice_count(connector,
target_clock,
mode->hdisplay,
bigjoiner);
joiner);
}
dsc = dsc_max_compressed_bpp && dsc_slice_count;
}
if (intel_dp_joiner_needs_dsc(dev_priv, bigjoiner) && !dsc)
if (intel_dp_joiner_needs_dsc(dev_priv, joiner) && !dsc)
return MODE_CLOCK_HIGH;
if (mode_rate > max_rate && !dsc)
@ -1324,7 +1301,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
if (status != MODE_OK)
return status;
return intel_mode_valid_max_plane_size(dev_priv, mode, bigjoiner);
return intel_mode_valid_max_plane_size(dev_priv, mode, joiner);
}
bool intel_dp_source_supports_tps3(struct drm_i915_private *i915)
@ -1374,16 +1351,38 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
drm_dbg_kms(&i915->drm, "common rates: %s\n", str);
}
static int forced_link_rate(struct intel_dp *intel_dp)
{
int len = intel_dp_common_len_rate_limit(intel_dp, intel_dp->link.force_rate);
if (len == 0)
return intel_dp_common_rate(intel_dp, 0);
return intel_dp_common_rate(intel_dp, len - 1);
}
int
intel_dp_max_link_rate(struct intel_dp *intel_dp)
{
int len;
len = intel_dp_common_len_rate_limit(intel_dp, intel_dp->max_link_rate);
if (intel_dp->link.force_rate)
return forced_link_rate(intel_dp);
len = intel_dp_common_len_rate_limit(intel_dp, intel_dp->link.max_rate);
return intel_dp_common_rate(intel_dp, len - 1);
}
static int
intel_dp_min_link_rate(struct intel_dp *intel_dp)
{
if (intel_dp->link.force_rate)
return forced_link_rate(intel_dp);
return intel_dp_common_rate(intel_dp, 0);
}
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
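
A standalone sketch of what forced_link_rate() above resolves to: given the sorted table of common link rates, a forced rate snaps to the highest common rate that does not exceed it, or to the lowest common rate when it is below them all (the rate array and names here are illustrative, not the driver's tables):

#include <assert.h>
#include <stddef.h>

/* Sorted low-to-high, as intel_dp->common_rates is kept. */
static const int common_rates[] = { 162000, 270000, 540000, 810000 };

static int snap_forced_rate(int force_rate)
{
	size_t i, len = 0;

	/* Count the common rates that do not exceed the forced rate,
	 * like intel_dp_common_len_rate_limit(). */
	for (i = 0; i < sizeof(common_rates) / sizeof(common_rates[0]); i++)
		if (common_rates[i] <= force_rate)
			len++;

	/* No usable rate at or below the forced one: fall back to the lowest. */
	if (len == 0)
		return common_rates[0];

	return common_rates[len - 1];
}

int main(void)
{
	assert(snap_forced_rate(540000) == 540000);	/* exact match */
	assert(snap_forced_rate(600000) == 540000);	/* snaps down */
	assert(snap_forced_rate(100000) == 162000);	/* below all: lowest rate */
	return 0;
}
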
@ -1441,15 +1440,16 @@ bool intel_dp_supports_fec(struct intel_dp *intel_dp,
drm_dp_sink_supports_fec(connector->dp.fec_capability);
}
static bool intel_dp_supports_dsc(const struct intel_connector *connector,
const struct intel_crtc_state *crtc_state)
bool intel_dp_supports_dsc(const struct intel_connector *connector,
const struct intel_crtc_state *crtc_state)
{
if (!intel_dp_has_dsc(connector))
return false;
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP) && !crtc_state->fec_enable)
return false;
return intel_dsc_source_support(crtc_state) &&
connector->dp.dsc_decompression_aux &&
drm_dp_sink_supports_dsc(connector->dp.dsc_dpcd);
return intel_dsc_source_support(crtc_state);
}
static int intel_dp_hdmi_compute_bpc(struct intel_dp *intel_dp,
@ -2015,7 +2015,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(i915, adjusted_mode->clock,
adjusted_mode->hdisplay,
pipe_config->bigjoiner_pipes);
pipe_config->joiner_pipes);
dsc_max_bpp = min(dsc_max_bpp, dsc_joiner_max_bpp);
dsc_max_bpp = min(dsc_max_bpp, to_bpp_int(limits->link.max_bpp_x16));
@ -2249,7 +2249,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
intel_dp_dsc_get_slice_count(connector,
adjusted_mode->crtc_clock,
adjusted_mode->crtc_hdisplay,
pipe_config->bigjoiner_pipes);
pipe_config->joiner_pipes);
if (!dsc_dp_slice_count) {
drm_dbg_kms(&dev_priv->drm,
"Compressed Slice Count not supported\n");
@ -2263,7 +2263,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
* is greater than the maximum Cdclock and if slice count is even
* then we need to use 2 VDSC instances.
*/
if (pipe_config->bigjoiner_pipes || pipe_config->dsc.slice_count > 1)
if (pipe_config->joiner_pipes || pipe_config->dsc.slice_count > 1)
pipe_config->dsc.dsc_split = true;
ret = intel_dp_dsc_compute_params(connector, pipe_config);
@ -2353,13 +2353,14 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
bool dsc,
struct link_config_limits *limits)
{
limits->min_rate = intel_dp_common_rate(intel_dp, 0);
limits->min_rate = intel_dp_min_link_rate(intel_dp);
limits->max_rate = intel_dp_max_link_rate(intel_dp);
/* FIXME 128b/132b SST support missing */
limits->max_rate = min(limits->max_rate, 810000);
limits->min_rate = min(limits->min_rate, limits->max_rate);
limits->min_lane_count = 1;
limits->min_lane_count = intel_dp_min_lane_count(intel_dp);
limits->max_lane_count = intel_dp_max_lane_count(intel_dp);
limits->pipe.min_bpp = intel_dp_min_bpp(crtc_state->output_format);
@ -2429,12 +2430,12 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
!intel_dp_supports_fec(intel_dp, connector, pipe_config))
return -EINVAL;
if (intel_dp_need_bigjoiner(intel_dp, connector,
adjusted_mode->crtc_hdisplay,
adjusted_mode->crtc_clock))
pipe_config->bigjoiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe);
if (intel_dp_need_joiner(intel_dp, connector,
adjusted_mode->crtc_hdisplay,
adjusted_mode->crtc_clock))
pipe_config->joiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe);
joiner_needs_dsc = intel_dp_joiner_needs_dsc(i915, pipe_config->bigjoiner_pipes);
joiner_needs_dsc = intel_dp_joiner_needs_dsc(i915, pipe_config->joiner_pipes);
dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
!intel_dp_compute_config_limits(intel_dp, pipe_config,
@ -2633,11 +2634,19 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
/* Currently only DP_AS_SDP_AVT_FIXED_VTOTAL mode supported */
as_sdp->sdp_type = DP_SDP_ADAPTIVE_SYNC;
as_sdp->length = 0x9;
as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL;
as_sdp->vtotal = adjusted_mode->vtotal;
as_sdp->target_rr = 0;
as_sdp->duration_incr_ms = 0;
as_sdp->duration_incr_ms = 0;
if (crtc_state->cmrr.enable) {
as_sdp->mode = DP_AS_SDP_FAVT_TRR_REACHED;
as_sdp->vtotal = adjusted_mode->vtotal;
as_sdp->target_rr = drm_mode_vrefresh(adjusted_mode);
as_sdp->target_rr_divider = true;
} else {
as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL;
as_sdp->vtotal = adjusted_mode->vtotal;
as_sdp->target_rr = 0;
}
}
static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
@ -2660,14 +2669,6 @@ static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
if (intel_dp_needs_vsc_sdp(crtc_state, conn_state)) {
intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
vsc);
} else if (crtc_state->has_psr2) {
/*
* [PSR2 without colorimetry]
* Prepare VSC Header for SU as per eDP 1.4 spec, Table 6-11
* 3D stereo + PSR/PSR2 + Y-coordinate.
*/
vsc->revision = 0x4;
vsc->length = 0xe;
} else if (crtc_state->has_panel_replay) {
/*
* [Panel Replay without colorimetry info]
@ -2676,6 +2677,14 @@ static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
*/
vsc->revision = 0x6;
vsc->length = 0x10;
} else if (crtc_state->has_sel_update) {
/*
* [PSR2 without colorimetry]
* Prepare VSC Header for SU as per eDP 1.4 spec, Table 6-11
* 3D stereo + PSR/PSR2 + Y-coordinate.
*/
vsc->revision = 0x4;
vsc->length = 0xe;
} else {
/*
* [PSR1]
@ -2754,7 +2763,7 @@ intel_dp_drrs_compute_config(struct intel_connector *connector,
* FIXME all joined pipes share the same transcoder.
* Need to account for that when updating M/N live.
*/
if (has_seamless_m_n(connector) && !pipe_config->bigjoiner_pipes)
if (has_seamless_m_n(connector) && !pipe_config->joiner_pipes)
pipe_config->update_m_n = true;
if (!can_enable_drrs(connector, pipe_config, downclock_mode)) {
@ -2783,7 +2792,6 @@ intel_dp_drrs_compute_config(struct intel_connector *connector,
}
static bool intel_dp_has_audio(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
@ -2792,8 +2800,7 @@ static bool intel_dp_has_audio(struct intel_encoder *encoder,
struct intel_connector *connector =
to_intel_connector(conn_state->connector);
if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
!intel_dp_port_has_audio(i915, encoder->port))
if (!intel_dp_port_has_audio(i915, encoder->port))
return false;
if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO)
@ -2852,14 +2859,14 @@ intel_dp_audio_compute_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state)
{
pipe_config->has_audio =
intel_dp_has_audio(encoder, pipe_config, conn_state) &&
intel_dp_has_audio(encoder, conn_state) &&
intel_audio_compute_config(encoder, pipe_config, conn_state);
pipe_config->sdp_split_enable = pipe_config->has_audio &&
intel_dp_is_uhbr(pipe_config);
}
void intel_dp_queue_modeset_retry_work(struct intel_connector *connector)
static void intel_dp_queue_modeset_retry_work(struct intel_connector *connector)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
@ -2868,6 +2875,7 @@ void intel_dp_queue_modeset_retry_work(struct intel_connector *connector)
drm_connector_put(&connector->base);
}
/* NOTE: @state is only valid for MST links and can be %NULL for SST. */
void
intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
struct intel_encoder *encoder,
@ -2876,6 +2884,7 @@ intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
struct intel_connector *connector;
struct intel_digital_connector_state *conn_state;
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
int i;
if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
@ -2884,6 +2893,9 @@ intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
return;
}
if (drm_WARN_ON(&i915->drm, !state))
return;
for_each_new_intel_connector_in_state(state, connector, conn_state, i) {
if (!conn_state->base.crtc)
continue;
@ -2997,6 +3009,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
intel_vrr_compute_config(pipe_config, conn_state);
intel_dp_compute_as_sdp(intel_dp, pipe_config);
intel_psr_compute_config(intel_dp, pipe_config, conn_state);
intel_alpm_lobf_compute_config(intel_dp, pipe_config, conn_state);
intel_dp_drrs_compute_config(connector, pipe_config, link_bpp_x16);
intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);
@ -3014,10 +3027,12 @@ void intel_dp_set_link_params(struct intel_dp *intel_dp,
intel_dp->lane_count = lane_count;
}
static void intel_dp_reset_max_link_params(struct intel_dp *intel_dp)
void intel_dp_reset_link_params(struct intel_dp *intel_dp)
{
intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp);
intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
intel_dp->link.max_lane_count = intel_dp_max_common_lane_count(intel_dp);
intel_dp->link.max_rate = intel_dp_max_common_rate(intel_dp);
intel_dp->link.retrain_disabled = false;
intel_dp->link.seq_train_failures = 0;
}
/* Enable backlight PWM and backlight PP control. */
@ -3352,7 +3367,7 @@ void intel_dp_sync_state(struct intel_encoder *encoder,
intel_dp_tunnel_resume(intel_dp, crtc_state, dpcd_updated);
if (crtc_state)
intel_dp_reset_max_link_params(intel_dp);
intel_dp_reset_link_params(intel_dp);
}
bool intel_dp_initial_fastset_check(struct intel_encoder *encoder,
@ -4227,6 +4242,9 @@ static ssize_t intel_dp_as_sdp_pack(const struct drm_dp_as_sdp *as_sdp,
sdp->db[3] = as_sdp->target_rr & 0xFF;
sdp->db[4] = (as_sdp->target_rr >> 8) & 0x3;
if (as_sdp->target_rr_divider)
sdp->db[4] |= 0x20;
return length;
}
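
The new target_rr_divider flag lands in DB4 alongside the two high bits of the 10-bit target refresh rate. A standalone round-trip sketch of that byte layout (the pack mirrors the lines above; the unpack is simply their inverse, not a copy of driver code):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* DB3 carries target_rr[7:0]; DB4 carries target_rr[9:8] in bits 1:0 and the
 * refresh-rate-divider flag in bit 5 (0x20). */
static void as_sdp_pack_rr(uint16_t target_rr, bool divider, uint8_t db[2])
{
	db[0] = target_rr & 0xFF;		/* DB3 */
	db[1] = (target_rr >> 8) & 0x3;		/* DB4, bits 1:0 */
	if (divider)
		db[1] |= 0x20;			/* DB4, bit 5 */
}

static uint16_t as_sdp_unpack_rr(const uint8_t db[2], bool *divider)
{
	*divider = db[1] & 0x20;
	return db[0] | ((db[1] & 0x3) << 8);
}

int main(void)
{
	uint8_t db[2];
	bool divider;

	as_sdp_pack_rr(360, true, db);		/* 360 Hz exercises the high bits */
	assert(as_sdp_unpack_rr(db, &divider) == 360);
	assert(divider);
	return 0;
}
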
@ -4350,7 +4368,8 @@ void intel_dp_set_infoframes(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
i915_reg_t reg = HSW_TVIDEO_DIP_CTL(dev_priv,
crtc_state->cpu_transcoder);
u32 dip_enable = VIDEO_DIP_ENABLE_AVI_HSW | VIDEO_DIP_ENABLE_GCP_HSW |
VIDEO_DIP_ENABLE_VS_HSW | VIDEO_DIP_ENABLE_GMP_HSW |
VIDEO_DIP_ENABLE_SPD_HSW | VIDEO_DIP_ENABLE_DRM_GLK;
@ -4407,6 +4426,7 @@ int intel_dp_as_sdp_unpack(struct drm_dp_as_sdp *as_sdp,
as_sdp->mode = sdp->db[0] & DP_ADAPTIVE_SYNC_SDP_OPERATION_MODE;
as_sdp->vtotal = (sdp->db[2] << 8) | sdp->db[1];
as_sdp->target_rr = (u64)sdp->db[3] | ((u64)sdp->db[4] & 0x3);
as_sdp->target_rr_divider = sdp->db[4] & 0x20 ? true : false;
return 0;
}
@ -4432,7 +4452,8 @@ static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
vsc->length = sdp->sdp_header.HB3;
if ((sdp->sdp_header.HB2 == 0x2 && sdp->sdp_header.HB3 == 0x8) ||
(sdp->sdp_header.HB2 == 0x4 && sdp->sdp_header.HB3 == 0xe)) {
(sdp->sdp_header.HB2 == 0x4 && sdp->sdp_header.HB3 == 0xe) ||
(sdp->sdp_header.HB2 == 0x6 && sdp->sdp_header.HB3 == 0x10)) {
/*
* - HB2 = 0x2, HB3 = 0x8
* VSC SDP supporting 3D stereo + PSR
@ -4440,6 +4461,8 @@ static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
* VSC SDP supporting 3D stereo + PSR2 with Y-coordinate of
* first scan line of the SU region (applies to eDP v1.4b
* and higher).
* - HB2 = 0x6, HB3 = 0x10
* VSC SDP supporting 3D stereo + Panel Replay.
*/
return 0;
} else if (sdp->sdp_header.HB2 == 0x5 && sdp->sdp_header.HB3 == 0x13) {
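
For reference, the (HB2, HB3) revision/length pairs the unpack above now treats as PSR-style VSC headers, as a standalone check; the pair meanings come from the comment in the hunk, the colorimetry variant (0x5/0x13) handled in the following else-if is left out, and the helper itself is illustrative:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static const struct {
	uint8_t revision;	/* HB2 */
	uint8_t length;		/* HB3 */
} psr_vsc_headers[] = {
	{ 0x2, 0x08 },	/* 3D stereo + PSR */
	{ 0x4, 0x0e },	/* 3D stereo + PSR2 with SU region Y-coordinate (eDP 1.4b+) */
	{ 0x6, 0x10 },	/* 3D stereo + Panel Replay (newly accepted here) */
};

static bool vsc_header_is_psr_variant(uint8_t hb2, uint8_t hb3)
{
	size_t i;

	for (i = 0; i < sizeof(psr_vsc_headers) / sizeof(psr_vsc_headers[0]); i++)
		if (psr_vsc_headers[i].revision == hb2 &&
		    psr_vsc_headers[i].length == hb3)
			return true;
	return false;
}

int main(void)
{
	assert(vsc_header_is_psr_variant(0x6, 0x10));	/* Panel Replay accepted */
	assert(!vsc_header_is_psr_variant(0x6, 0x0e));	/* mismatched length rejected */
	return 0;
}
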
@ -5017,6 +5040,8 @@ static bool
intel_dp_check_mst_status(struct intel_dp *intel_dp)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *encoder = &dig_port->base;
bool link_ok = true;
bool reprobe_needed = false;
@ -5062,7 +5087,10 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
drm_dp_mst_hpd_irq_send_new_request(&intel_dp->mst_mgr);
}
return link_ok && !reprobe_needed;
if (!link_ok || intel_dp->link.force_retrain)
intel_encoder_link_check_queue_work(encoder, 0);
return !reprobe_needed;
}
static void
@ -5108,6 +5136,9 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
if (intel_psr_enabled(intel_dp))
return false;
if (intel_dp->link.force_retrain)
return true;
if (drm_dp_dpcd_read_phy_link_status(&intel_dp->aux, DP_PHY_DPRX,
link_status) < 0)
return false;
@ -5124,6 +5155,12 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
intel_dp->lane_count))
return false;
if (intel_dp->link.retrain_disabled)
return false;
if (intel_dp->link.seq_train_failures)
return true;
/* Retrain if link not ok */
return !intel_dp_link_ok(intel_dp, link_status);
}
@ -5209,12 +5246,13 @@ static bool intel_dp_is_connected(struct intel_dp *intel_dp)
intel_dp->is_mst;
}
int intel_dp_retrain_link(struct intel_encoder *encoder,
struct drm_modeset_acquire_ctx *ctx)
static int intel_dp_retrain_link(struct intel_encoder *encoder,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_crtc *crtc;
bool mst_output = false;
u8 pipe_mask;
int ret;
@ -5239,13 +5277,19 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
if (!intel_dp_needs_link_retrain(intel_dp))
return 0;
drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] retraining link\n",
encoder->base.base.id, encoder->base.name);
drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] retraining link (forced %s)\n",
encoder->base.base.id, encoder->base.name,
str_yes_no(intel_dp->link.force_retrain));
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
mst_output = true;
break;
}
/* Suppress underruns caused by re-training */
intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
if (crtc_state->has_pch_encoder)
@ -5253,19 +5297,26 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
intel_crtc_pch_transcoder(crtc), false);
}
/* TODO: use a modeset for SST as well. */
if (mst_output) {
ret = intel_modeset_commit_pipes(dev_priv, pipe_mask, ctx);
if (ret && ret != -EDEADLK)
drm_dbg_kms(&dev_priv->drm,
"[ENCODER:%d:%s] link retraining failed: %pe\n",
encoder->base.base.id, encoder->base.name,
ERR_PTR(ret));
goto out;
}
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);
/* retrain on the MST master transcoder */
if (DISPLAY_VER(dev_priv) >= 12 &&
intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
!intel_dp_mst_is_master_trans(crtc_state))
continue;
intel_dp_check_frl_training(intel_dp);
intel_dp_pcon_dsc_configure(intel_dp, crtc_state);
intel_dp_start_link_train(intel_dp, crtc_state);
intel_dp_start_link_train(NULL, intel_dp, crtc_state);
intel_dp_stop_link_train(intel_dp, crtc_state);
break;
}
@ -5283,7 +5334,37 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
intel_crtc_pch_transcoder(crtc), true);
}
return 0;
out:
if (ret != -EDEADLK)
intel_dp->link.force_retrain = false;
return ret;
}
void intel_dp_link_check(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
struct drm_modeset_acquire_ctx ctx;
int ret;
intel_modeset_lock_ctx_retry(&ctx, NULL, 0, ret)
ret = intel_dp_retrain_link(encoder, &ctx);
drm_WARN_ON(&i915->drm, ret);
}
void intel_dp_check_link_state(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *encoder = &dig_port->base;
if (!intel_dp_is_connected(intel_dp))
return;
if (!intel_dp_needs_link_retrain(intel_dp))
return;
intel_encoder_link_check_queue_work(encoder, 0);
}
static int intel_dp_prep_phy_test(struct intel_dp *intel_dp,
@ -5496,9 +5577,7 @@ intel_dp_short_pulse(struct intel_dp *intel_dp)
/* Handle CEC interrupts, if any */
drm_dp_cec_irq(&intel_dp->aux);
/* defer to the hotplug work for link retraining if needed */
if (intel_dp_needs_link_retrain(intel_dp))
return false;
intel_dp_check_link_state(intel_dp);
intel_psr_short_pulse(intel_dp);
@ -5858,6 +5937,7 @@ intel_dp_detect(struct drm_connector *connector,
memset(&intel_dp->compliance, 0, sizeof(intel_dp->compliance));
memset(intel_connector->dp.dsc_dpcd, 0, sizeof(intel_connector->dp.dsc_dpcd));
intel_dp->psr.sink_panel_replay_support = false;
intel_dp->psr.sink_panel_replay_su_support = false;
intel_dp_mst_disconnect(intel_dp);
@ -5880,12 +5960,8 @@ intel_dp_detect(struct drm_connector *connector,
intel_dp_mst_configure(intel_dp);
/*
* TODO: Reset link params when switching to MST mode, until MST
* supports link training fallback params.
*/
if (intel_dp->reset_link_params || intel_dp->is_mst) {
intel_dp_reset_max_link_params(intel_dp);
if (intel_dp->reset_link_params) {
intel_dp_reset_link_params(intel_dp);
intel_dp->reset_link_params = false;
}
@ -5904,12 +5980,13 @@ intel_dp_detect(struct drm_connector *connector,
/*
* Some external monitors do not signal loss of link synchronization
* with an IRQ_HPD, so force a link status check.
*
* TODO: this probably became redundant, so remove it: the link state
* is rechecked/recovered now after modesets, where the loss of
* synchronization tends to occur.
*/
if (!intel_dp_is_edp(intel_dp)) {
ret = intel_dp_retrain_link(encoder, ctx);
if (ret)
return ret;
}
if (!intel_dp_is_edp(intel_dp))
intel_dp_check_link_state(intel_dp);
/*
* Clearing NACK and defer counts to get their exact values
@ -6051,11 +6128,14 @@ void intel_dp_connector_sync_state(struct intel_connector *connector,
}
}
void intel_dp_encoder_flush_work(struct drm_encoder *encoder)
void intel_dp_encoder_flush_work(struct drm_encoder *_encoder)
{
struct intel_digital_port *dig_port = enc_to_dig_port(to_intel_encoder(encoder));
struct intel_encoder *encoder = to_intel_encoder(_encoder);
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
struct intel_dp *intel_dp = &dig_port->dp;
intel_encoder_link_check_flush_work(encoder);
intel_dp_mst_encoder_cleanup(dig_port);
intel_dp_tunnel_destroy(intel_dp);
@ -6361,8 +6441,8 @@ bool intel_dp_is_port_edp(struct drm_i915_private *i915, enum port port)
return _intel_dp_is_port_edp(i915, devdata, port);
}
static bool
has_gamut_metadata_dip(struct intel_encoder *encoder)
bool
intel_dp_has_gamut_metadata_dip(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum port port = encoder->port;
@ -6409,7 +6489,7 @@ intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connect
intel_attach_dp_colorspace_property(connector);
}
if (has_gamut_metadata_dip(&dp_to_dig_port(intel_dp)->base))
if (intel_dp_has_gamut_metadata_dip(&dp_to_dig_port(intel_dp)->base))
drm_connector_attach_hdr_output_metadata_property(connector);
if (HAS_VRR(dev_priv))
@ -6508,6 +6588,8 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
*/
intel_hpd_enable_detection(encoder);
intel_alpm_init_dpcd(intel_dp);
/* Cache DPCD and EDID for edp. */
has_dpcd = intel_edp_init_dpcd(intel_dp, intel_connector);
@ -6737,7 +6819,7 @@ intel_dp_init_connector(struct intel_digital_port *dig_port,
intel_dp_set_source_rates(intel_dp);
intel_dp_set_common_rates(intel_dp);
intel_dp_reset_max_link_params(intel_dp);
intel_dp_reset_link_params(intel_dp);
/* init MST on ports that can support it */
intel_dp_mst_encoder_init(dig_port,

View File

@ -44,7 +44,6 @@ bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
int intel_dp_min_bpp(enum intel_output_format output_format);
void intel_dp_init_modeset_retry_work(struct intel_connector *connector);
void intel_dp_queue_modeset_retry_work(struct intel_connector *connector);
void
intel_dp_queue_modeset_retry_for_link(struct intel_atomic_state *state,
struct intel_encoder *encoder,
@ -55,13 +54,11 @@ void intel_dp_connector_sync_state(struct intel_connector *connector,
const struct intel_crtc_state *crtc_state);
void intel_dp_set_link_params(struct intel_dp *intel_dp,
int link_rate, int lane_count);
int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
int link_rate, u8 lane_count);
int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
struct drm_modeset_acquire_ctx *ctx,
u8 *pipe_mask);
int intel_dp_retrain_link(struct intel_encoder *encoder,
struct drm_modeset_acquire_ctx *ctx);
void intel_dp_link_check(struct intel_encoder *encoder);
void intel_dp_check_link_state(struct intel_dp *intel_dp);
void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode);
void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
@ -90,6 +87,7 @@ bool intel_dp_has_hdmi_sink(struct intel_dp *intel_dp);
bool intel_dp_is_edp(struct intel_dp *intel_dp);
bool intel_dp_as_sdp_supported(struct intel_dp *intel_dp);
bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state);
bool intel_dp_has_dsc(const struct intel_connector *connector);
int intel_dp_link_symbol_size(int rate);
int intel_dp_link_symbol_clock(int rate);
bool intel_dp_is_port_edp(struct drm_i915_private *dev_priv, enum port port);
@ -101,13 +99,17 @@ void intel_edp_backlight_off(const struct drm_connector_state *conn_state);
void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp);
void intel_dp_mst_suspend(struct drm_i915_private *dev_priv);
void intel_dp_mst_resume(struct drm_i915_private *dev_priv);
int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port);
int intel_dp_max_link_rate(struct intel_dp *intel_dp);
int intel_dp_max_lane_count(struct intel_dp *intel_dp);
int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state);
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
int intel_dp_max_common_rate(struct intel_dp *intel_dp);
int intel_dp_max_common_lane_count(struct intel_dp *intel_dp);
int intel_dp_common_rate(struct intel_dp *intel_dp, int index);
int intel_dp_rate_index(const int *rates, int len, int rate);
void intel_dp_update_sink_caps(struct intel_dp *intel_dp);
void intel_dp_reset_link_params(struct intel_dp *intel_dp);
void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
u8 *link_bw, u8 *rate_select);
@ -121,7 +123,7 @@ int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
int max_dprx_rate, int max_dprx_lanes);
bool intel_dp_joiner_needs_dsc(struct drm_i915_private *i915, bool use_joiner);
bool intel_dp_has_bigjoiner(struct intel_dp *intel_dp);
bool intel_dp_has_joiner(struct intel_dp *intel_dp);
bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
void intel_dp_set_infoframes(struct intel_encoder *encoder, bool enable,
@ -150,9 +152,9 @@ int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector
u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
int mode_clock, int mode_hdisplay,
bool bigjoiner);
bool intel_dp_need_bigjoiner(struct intel_dp *intel_dp,
struct intel_connector *connector,
int hdisplay, int clock);
bool intel_dp_need_joiner(struct intel_dp *intel_dp,
struct intel_connector *connector,
int hdisplay, int clock);
static inline unsigned int intel_dp_unused_lane_mask(int lane_count)
{
@ -169,6 +171,9 @@ bool intel_dp_supports_fec(struct intel_dp *intel_dp,
const struct intel_connector *connector,
const struct intel_crtc_state *pipe_config);
bool intel_dp_supports_dsc(const struct intel_connector *connector,
const struct intel_crtc_state *crtc_state);
u32 intel_dp_dsc_nearest_valid_bpp(struct drm_i915_private *i915, u32 bpp, u32 pipe_bpp);
void intel_ddi_update_pipe(struct intel_atomic_state *state,
@ -196,5 +201,6 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
struct link_config_limits *limits);
void intel_dp_get_dsc_sink_cap(u8 dpcd_rev, struct intel_connector *connector);
bool intel_dp_has_gamut_metadata_dip(struct intel_encoder *encoder);
#endif /* __INTEL_DP_H__ */

View File

@ -40,11 +40,6 @@
#include "intel_dp.h"
#include "intel_dp_aux_backlight.h"
/* TODO:
* Implement HDR, right now we just implement the bare minimum to bring us back into SDR mode so we
* can make people's backlights work in the mean time
*/
/*
* DP AUX registers for Intel's proprietary HDR backlight interface. We define
* them here since we'll likely be the only driver to ever use these.
@ -69,14 +64,14 @@
#define INTEL_EDP_HDR_GETSET_CTRL_PARAMS 0x344
# define INTEL_EDP_HDR_TCON_2084_DECODE_ENABLE BIT(0)
# define INTEL_EDP_HDR_TCON_2020_GAMUT_ENABLE BIT(1)
# define INTEL_EDP_HDR_TCON_TONE_MAPPING_ENABLE BIT(2) /* Pre-TGL+ */
# define INTEL_EDP_HDR_TCON_TONE_MAPPING_ENABLE BIT(2)
# define INTEL_EDP_HDR_TCON_SEGMENTED_BACKLIGHT_ENABLE BIT(3)
# define INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE BIT(4)
# define INTEL_EDP_HDR_TCON_SRGB_TO_PANEL_GAMUT_ENABLE BIT(5)
/* Bit 6 is reserved */
# define INTEL_EDP_HDR_TCON_SDP_COLORIMETRY_ENABLE BIT(7)
# define INTEL_EDP_HDR_TCON_SDP_OVERRIDE_AUX BIT(7)
#define INTEL_EDP_HDR_CONTENT_LUMINANCE 0x346 /* Pre-TGL+ */
#define INTEL_EDP_HDR_CONTENT_LUMINANCE 0x346
#define INTEL_EDP_HDR_PANEL_LUMINANCE_OVERRIDE 0x34A
#define INTEL_EDP_SDR_LUMINANCE_LEVEL 0x352
#define INTEL_EDP_BRIGHTNESS_NITS_LSB 0x354
@ -127,9 +122,6 @@ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
if (ret != sizeof(tcon_cap))
return false;
if (!(tcon_cap[1] & INTEL_EDP_HDR_TCON_BRIGHTNESS_NITS_CAP))
return false;
drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] Detected %s HDR backlight interface version %d\n",
connector->base.base.id, connector->base.name,
is_intel_tcon_cap(tcon_cap) ? "Intel" : "unsupported", tcon_cap[0]);
@ -137,6 +129,9 @@ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
if (!is_intel_tcon_cap(tcon_cap))
return false;
if (!(tcon_cap[1] & INTEL_EDP_HDR_TCON_BRIGHTNESS_NITS_CAP))
return false;
/*
* If we don't have HDR static metadata there is no way to
* runtime detect used range for nits based control. For now
@ -156,8 +151,18 @@ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
return false;
}
panel->backlight.edp.intel.sdr_uses_aux =
panel->backlight.edp.intel_cap.sdr_uses_aux =
tcon_cap[2] & INTEL_EDP_SDR_TCON_BRIGHTNESS_AUX_CAP;
panel->backlight.edp.intel_cap.supports_2084_decode =
tcon_cap[1] & INTEL_EDP_HDR_TCON_2084_DECODE_CAP;
panel->backlight.edp.intel_cap.supports_2020_gamut =
tcon_cap[1] & INTEL_EDP_HDR_TCON_2020_GAMUT_CAP;
panel->backlight.edp.intel_cap.supports_segmented_backlight =
tcon_cap[1] & INTEL_EDP_HDR_TCON_SEGMENTED_BACKLIGHT_CAP;
panel->backlight.edp.intel_cap.supports_sdp_colorimetry =
tcon_cap[1] & INTEL_EDP_HDR_TCON_SDP_COLORIMETRY_CAP;
panel->backlight.edp.intel_cap.supports_tone_mapping =
tcon_cap[1] & INTEL_EDP_HDR_TCON_TONE_MAPPING_CAP;
return true;
}
@ -178,7 +183,7 @@ intel_dp_aux_hdr_get_backlight(struct intel_connector *connector, enum pipe pipe
}
if (!(tmp & INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE)) {
if (!panel->backlight.edp.intel.sdr_uses_aux) {
if (!panel->backlight.edp.intel_cap.sdr_uses_aux) {
u32 pwm_level = panel->backlight.pwm_funcs->get(connector, pipe);
return intel_backlight_level_from_pwm(connector, pwm_level);
@ -215,13 +220,27 @@ intel_dp_aux_hdr_set_aux_backlight(const struct drm_connector_state *conn_state,
connector->base.base.id, connector->base.name);
}
static bool
intel_dp_in_hdr_mode(const struct drm_connector_state *conn_state)
{
struct hdr_output_metadata *hdr_metadata;
if (!conn_state->hdr_output_metadata)
return false;
hdr_metadata = conn_state->hdr_output_metadata->data;
return hdr_metadata->hdmi_metadata_type1.eotf == HDMI_EOTF_SMPTE_ST2084;
}
static void
intel_dp_aux_hdr_set_backlight(const struct drm_connector_state *conn_state, u32 level)
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_panel *panel = &connector->panel;
if (panel->backlight.edp.intel.sdr_uses_aux) {
if (intel_dp_in_hdr_mode(conn_state) ||
panel->backlight.edp.intel_cap.sdr_uses_aux) {
intel_dp_aux_hdr_set_aux_backlight(conn_state, level);
} else {
const u32 pwm_level = intel_backlight_level_to_pwm(connector, level);
@ -230,6 +249,64 @@ intel_dp_aux_hdr_set_backlight(const struct drm_connector_state *conn_state, u32
}
}
static void
intel_dp_aux_write_content_luminance(struct intel_connector *connector,
struct hdr_output_metadata *hdr_metadata)
{
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
int ret;
u8 buf[4];
if (!intel_dp_has_gamut_metadata_dip(connector->encoder))
return;
buf[0] = hdr_metadata->hdmi_metadata_type1.max_cll & 0xFF;
buf[1] = (hdr_metadata->hdmi_metadata_type1.max_cll & 0xFF00) >> 8;
buf[2] = hdr_metadata->hdmi_metadata_type1.max_fall & 0xFF;
buf[3] = (hdr_metadata->hdmi_metadata_type1.max_fall & 0xFF00) >> 8;
ret = drm_dp_dpcd_write(&intel_dp->aux,
INTEL_EDP_HDR_CONTENT_LUMINANCE,
buf, sizeof(buf));
if (ret < 0)
drm_dbg_kms(&i915->drm,
"Content Luminance DPCD reg write failed, err:-%d\n",
ret);
}
static void
intel_dp_aux_fill_hdr_tcon_params(const struct drm_connector_state *conn_state, u8 *ctrl)
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_panel *panel = &connector->panel;
struct drm_i915_private *i915 = to_i915(connector->base.dev);
/*
* According to spec segmented backlight needs to be set whenever panel is in
* HDR mode.
*/
if (intel_dp_in_hdr_mode(conn_state)) {
*ctrl |= INTEL_EDP_HDR_TCON_SEGMENTED_BACKLIGHT_ENABLE;
*ctrl |= INTEL_EDP_HDR_TCON_2084_DECODE_ENABLE;
}
if (DISPLAY_VER(i915) < 11)
*ctrl &= ~INTEL_EDP_HDR_TCON_TONE_MAPPING_ENABLE;
if (panel->backlight.edp.intel_cap.supports_2020_gamut &&
(conn_state->colorspace == DRM_MODE_COLORIMETRY_BT2020_RGB ||
conn_state->colorspace == DRM_MODE_COLORIMETRY_BT2020_YCC ||
conn_state->colorspace == DRM_MODE_COLORIMETRY_BT2020_CYCC))
*ctrl |= INTEL_EDP_HDR_TCON_2020_GAMUT_ENABLE;
if (panel->backlight.edp.intel_cap.supports_sdp_colorimetry &&
intel_dp_has_gamut_metadata_dip(connector->encoder))
*ctrl |= INTEL_EDP_HDR_TCON_SDP_OVERRIDE_AUX;
else
*ctrl &= ~INTEL_EDP_HDR_TCON_SDP_OVERRIDE_AUX;
}
static void
intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state, u32 level)
@ -238,6 +315,7 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
struct intel_panel *panel = &connector->panel;
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
struct hdr_output_metadata *hdr_metadata;
int ret;
u8 old_ctrl, ctrl;
@ -251,8 +329,10 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
}
ctrl = old_ctrl;
if (panel->backlight.edp.intel.sdr_uses_aux) {
if (intel_dp_in_hdr_mode(conn_state) ||
panel->backlight.edp.intel_cap.sdr_uses_aux) {
ctrl |= INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE;
intel_dp_aux_hdr_set_aux_backlight(conn_state, level);
} else {
u32 pwm_level = intel_backlight_level_to_pwm(connector, level);
@ -262,10 +342,17 @@ intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state,
ctrl &= ~INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE;
}
intel_dp_aux_fill_hdr_tcon_params(conn_state, &ctrl);
if (ctrl != old_ctrl &&
drm_dp_dpcd_writeb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, ctrl) != 1)
drm_err(&i915->drm, "[CONNECTOR:%d:%s] Failed to configure DPCD brightness controls\n",
connector->base.base.id, connector->base.name);
if (intel_dp_in_hdr_mode(conn_state)) {
hdr_metadata = conn_state->hdr_output_metadata->data;
intel_dp_aux_write_content_luminance(connector, hdr_metadata);
}
}
static void
@ -275,7 +362,7 @@ intel_dp_aux_hdr_disable_backlight(const struct drm_connector_state *conn_state,
struct intel_panel *panel = &connector->panel;
/* Nothing to do for AUX based backlight controls */
if (panel->backlight.edp.intel.sdr_uses_aux)
if (panel->backlight.edp.intel_cap.sdr_uses_aux)
return;
/* Note we want the actual pwm_level to be 0, regardless of pwm_min */
@ -287,6 +374,29 @@ static const char *dpcd_vs_pwm_str(bool aux)
return aux ? "DPCD" : "PWM";
}
static void
intel_dp_aux_write_panel_luminance_override(struct intel_connector *connector)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_panel *panel = &connector->panel;
struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
int ret;
u8 buf[4] = {};
buf[0] = panel->backlight.min & 0xFF;
buf[1] = (panel->backlight.min & 0xFF00) >> 8;
buf[2] = panel->backlight.max & 0xFF;
buf[3] = (panel->backlight.max & 0xFF00) >> 8;
ret = drm_dp_dpcd_write(&intel_dp->aux,
INTEL_EDP_HDR_PANEL_LUMINANCE_OVERRIDE,
buf, sizeof(buf));
if (ret < 0)
drm_dbg_kms(&i915->drm,
"Panel Luminance DPCD reg write failed, err:-%d\n",
ret);
}
static int
intel_dp_aux_hdr_setup_backlight(struct intel_connector *connector, enum pipe pipe)
{
@ -298,9 +408,9 @@ intel_dp_aux_hdr_setup_backlight(struct intel_connector *connector, enum pipe pi
drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] SDR backlight is controlled through %s\n",
connector->base.base.id, connector->base.name,
dpcd_vs_pwm_str(panel->backlight.edp.intel.sdr_uses_aux));
dpcd_vs_pwm_str(panel->backlight.edp.intel_cap.sdr_uses_aux));
if (!panel->backlight.edp.intel.sdr_uses_aux) {
if (!panel->backlight.edp.intel_cap.sdr_uses_aux) {
ret = panel->backlight.pwm_funcs->setup(connector, pipe);
if (ret < 0) {
drm_err(&i915->drm,
@ -318,11 +428,12 @@ intel_dp_aux_hdr_setup_backlight(struct intel_connector *connector, enum pipe pi
panel->backlight.min = 0;
}
intel_dp_aux_write_panel_luminance_override(connector);
drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] Using AUX HDR interface for backlight control (range %d..%d)\n",
connector->base.base.id, connector->base.name,
panel->backlight.min, panel->backlight.max);
panel->backlight.level = intel_dp_aux_hdr_get_backlight(connector, pipe);
panel->backlight.enabled = panel->backlight.level != 0;
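Both new DPCD writes above (intel_dp_aux_write_content_luminance() and intel_dp_aux_write_panel_luminance_override()) use the same 4-byte payload layout: two 16-bit values stored least significant byte first. A minimal standalone sketch of that packing, with the helper name and the nit values invented for illustration:

#include <stdint.h>
#include <stdio.h>

/* Two 16-bit values packed little-endian into a 4-byte DPCD payload. */
static void pack_luminance(uint8_t buf[4], uint16_t first, uint16_t second)
{
	buf[0] = first & 0xff;		/* e.g. max_cll or backlight.min, low byte */
	buf[1] = (first >> 8) & 0xff;	/* high byte */
	buf[2] = second & 0xff;		/* e.g. max_fall or backlight.max, low byte */
	buf[3] = (second >> 8) & 0xff;	/* high byte */
}

int main(void)
{
	uint8_t buf[4];

	/* 1000 nits max CLL, 400 nits max FALL -> prints "e8 03 90 01" */
	pack_luminance(buf, 1000, 400);
	printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
	return 0;
}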

View File

@ -23,12 +23,17 @@
#define _DPA_AUX_CH_CTL 0x64010
#define _DPB_AUX_CH_CTL 0x64110
#define _XELPDP_USBC1_AUX_CH_CTL 0x16f210
#define _XELPDP_USBC2_AUX_CH_CTL 0x16f410
#define DP_AUX_CH_CTL(aux_ch) _MMIO_PORT(aux_ch, _DPA_AUX_CH_CTL, \
_DPB_AUX_CH_CTL)
#define VLV_DP_AUX_CH_CTL(aux_ch) _MMIO(VLV_DISPLAY_BASE + \
_PORT(aux_ch, _DPA_AUX_CH_CTL, _DPB_AUX_CH_CTL))
#define _PCH_DPB_AUX_CH_CTL 0xe4110
#define _PCH_DPC_AUX_CH_CTL 0xe4210
#define PCH_DP_AUX_CH_CTL(aux_ch) _MMIO_PORT((aux_ch) - AUX_CH_B, _PCH_DPB_AUX_CH_CTL, _PCH_DPC_AUX_CH_CTL)
#define _XELPDP_USBC1_AUX_CH_CTL 0x16f210
#define _XELPDP_USBC2_AUX_CH_CTL 0x16f410
#define _XELPDP_DP_AUX_CH_CTL(aux_ch) \
_MMIO(_PICK_EVEN_2RANGES(aux_ch, AUX_CH_USBC1, \
_DPA_AUX_CH_CTL, _DPB_AUX_CH_CTL, \
@ -72,12 +77,17 @@
#define _DPA_AUX_CH_DATA1 0x64014
#define _DPB_AUX_CH_DATA1 0x64114
#define _XELPDP_USBC1_AUX_CH_DATA1 0x16f214
#define _XELPDP_USBC2_AUX_CH_DATA1 0x16f414
#define DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PORT(aux_ch, _DPA_AUX_CH_DATA1, \
_DPB_AUX_CH_DATA1) + (i) * 4) /* 5 registers */
#define VLV_DP_AUX_CH_DATA(aux_ch, i) _MMIO(VLV_DISPLAY_BASE + _PORT(aux_ch, _DPA_AUX_CH_DATA1, \
_DPB_AUX_CH_DATA1) + (i) * 4) /* 5 registers */
#define _PCH_DPB_AUX_CH_DATA1 0xe4114
#define _PCH_DPC_AUX_CH_DATA1 0xe4214
#define PCH_DP_AUX_CH_DATA(aux_ch, i) _MMIO(_PORT((aux_ch) - AUX_CH_B, _PCH_DPB_AUX_CH_DATA1, _PCH_DPC_AUX_CH_DATA1) + (i) * 4) /* 5 registers */
#define _XELPDP_USBC1_AUX_CH_DATA1 0x16f214
#define _XELPDP_USBC2_AUX_CH_DATA1 0x16f414
#define _XELPDP_DP_AUX_CH_DATA(aux_ch, i) \
_MMIO(_PICK_EVEN_2RANGES(aux_ch, AUX_CH_USBC1, \
_DPA_AUX_CH_DATA1, _DPB_AUX_CH_DATA1, \

View File

@ -687,15 +687,16 @@ int intel_dp_hdcp_get_remote_capability(struct intel_connector *connector,
bool *hdcp2_capable)
{
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct drm_dp_aux *aux = &connector->port->aux;
struct drm_dp_aux *aux;
u8 bcaps;
int ret;
*hdcp_capable = false;
*hdcp2_capable = false;
if (!intel_encoder_is_mst(connector->encoder))
if (!connector->mst_port)
return -EINVAL;
aux = &connector->port->aux;
ret = _intel_dp_hdcp2_get_capability(aux, hdcp2_capable);
if (ret)
drm_dbg_kms(&i915->drm,

View File

@ -25,6 +25,9 @@
#include "intel_display_types.h"
#include "intel_dp.h"
#include "intel_dp_link_training.h"
#include "intel_encoder.h"
#include "intel_hotplug.h"
#include "intel_panel.h"
#define LT_MSG_PREFIX "[CONNECTOR:%d:%s][ENCODER:%d:%s][%s] "
#define LT_MSG_ARGS(_intel_dp, _dp_phy) (_intel_dp)->attached_connector->base.base.id, \
@ -1091,28 +1094,129 @@ out:
return ret;
}
static void intel_dp_schedule_fallback_link_training(struct intel_dp *intel_dp,
static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
int link_rate,
u8 lane_count)
{
/* FIXME figure out what we actually want here */
const struct drm_display_mode *fixed_mode =
intel_panel_preferred_fixed_mode(intel_dp->attached_connector);
int mode_rate, max_rate;
mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
if (mode_rate > max_rate)
return false;
return true;
}
static int reduce_link_rate(struct intel_dp *intel_dp, int current_rate)
{
int rate_index;
int new_rate;
if (intel_dp->link.force_rate)
return -1;
rate_index = intel_dp_rate_index(intel_dp->common_rates,
intel_dp->num_common_rates,
current_rate);
if (rate_index <= 0)
return -1;
new_rate = intel_dp_common_rate(intel_dp, rate_index - 1);
/* TODO: Make switching from UHBR to non-UHBR rates work. */
if (drm_dp_is_uhbr_rate(current_rate) != drm_dp_is_uhbr_rate(new_rate))
return -1;
return new_rate;
}
static int reduce_lane_count(struct intel_dp *intel_dp, int current_lane_count)
{
if (intel_dp->link.force_lane_count)
return -1;
if (current_lane_count == 1)
return -1;
return current_lane_count >> 1;
}
static int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
int new_link_rate;
int new_lane_count;
if (intel_dp_is_edp(intel_dp) && !intel_dp->use_max_params) {
lt_dbg(intel_dp, DP_PHY_DPRX,
"Retrying Link training for eDP with max parameters\n");
intel_dp->use_max_params = true;
return 0;
}
new_lane_count = crtc_state->lane_count;
new_link_rate = reduce_link_rate(intel_dp, crtc_state->port_clock);
if (new_link_rate < 0) {
new_lane_count = reduce_lane_count(intel_dp, crtc_state->lane_count);
new_link_rate = intel_dp_max_common_rate(intel_dp);
}
if (new_lane_count < 0)
return -1;
if (intel_dp_is_edp(intel_dp) &&
!intel_dp_can_link_train_fallback_for_edp(intel_dp, new_link_rate, new_lane_count)) {
lt_dbg(intel_dp, DP_PHY_DPRX,
"Retrying Link training for eDP with same parameters\n");
return 0;
}
lt_dbg(intel_dp, DP_PHY_DPRX,
"Reducing link parameters from %dx%d to %dx%d\n",
crtc_state->lane_count, crtc_state->port_clock,
new_lane_count, new_link_rate);
intel_dp->link.max_rate = new_link_rate;
intel_dp->link.max_lane_count = new_lane_count;
return 0;
}
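Taken together, the helpers above implement a fixed fallback order: an eDP link that has not yet tried its maximum parameters retries with those first, then the link rate is stepped down one common rate at a time at the current lane count, and only when the rate cannot be reduced further is the lane count halved and the rate reset to the highest common rate. A standalone sketch of that order, assuming a plain table of standard DP link rates and leaving out the UHBR boundary check and the debugfs force overrides:

#include <stdio.h>

#define NUM_RATES 4

/* Standard DP link rates in kHz, standing in for the driver's common rates. */
static const int rates[NUM_RATES] = { 162000, 270000, 540000, 810000 };

static int lower_rate(int rate)
{
	for (int i = 1; i < NUM_RATES; i++)
		if (rates[i] == rate)
			return rates[i - 1];
	return -1;	/* already at the lowest (or an unknown) rate */
}

/* Returns 0 with reduced parameters, or -1 when nothing is left to reduce. */
static int fallback(int *rate, int *lanes)
{
	int new_rate = lower_rate(*rate);

	if (new_rate < 0) {
		if (*lanes == 1)
			return -1;
		/* Rate exhausted: halve the lane count, restart at the max rate. */
		*lanes >>= 1;
		*rate = rates[NUM_RATES - 1];
	} else {
		*rate = new_rate;
	}
	return 0;
}

int main(void)
{
	int rate = 540000, lanes = 4;

	while (fallback(&rate, &lanes) == 0)
		printf("retry at %d kHz x %d lanes\n", rate, lanes);
	return 0;
}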
/* NOTE: @state is only valid for MST links and can be %NULL for SST. */
static bool intel_dp_schedule_fallback_link_training(struct intel_atomic_state *state,
struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_connector *intel_connector = intel_dp->attached_connector;
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
if (!intel_digital_port_connected(&dp_to_dig_port(intel_dp)->base)) {
lt_dbg(intel_dp, DP_PHY_DPRX, "Link Training failed on disconnected sink.\n");
return;
return true;
}
if (intel_dp->hobl_active) {
lt_dbg(intel_dp, DP_PHY_DPRX,
"Link Training failed with HOBL active, not enabling it from now on\n");
intel_dp->hobl_failed = true;
} else if (intel_dp_get_link_train_fallback_values(intel_dp,
crtc_state->port_clock,
crtc_state->lane_count)) {
return;
} else if (intel_dp_get_link_train_fallback_values(intel_dp, crtc_state)) {
return false;
}
if (drm_WARN_ON(&i915->drm,
intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
!state))
return false;
/* Schedule a Hotplug Uevent to userspace to start modeset */
intel_dp_queue_modeset_retry_work(intel_connector);
intel_dp_queue_modeset_retry_for_link(state, encoder, crtc_state);
return true;
}
/* Perform the link training on all LTTPRs and the DPRX on a link. */
@ -1359,6 +1463,7 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
/**
* intel_dp_start_link_train - start link training
* @state: Atomic state
* @intel_dp: DP struct
* @crtc_state: state for CRTC attached to the encoder
*
@ -1366,11 +1471,16 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
* retraining with reduced link rate/lane parameters if the link training
* fails.
* After calling this function intel_dp_stop_link_train() must be called.
*
* NOTE: @state is only valid for MST links and can be %NULL for SST.
*/
void intel_dp_start_link_train(struct intel_dp *intel_dp,
void intel_dp_start_link_train(struct intel_atomic_state *state,
struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *encoder = &dig_port->base;
bool passed;
/*
@ -1379,6 +1489,11 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
*/
int lttpr_count = intel_dp_init_lttpr_and_dprx_caps(intel_dp);
if (drm_WARN_ON(&i915->drm,
intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
!state))
return;
if (lttpr_count < 0)
/* Still continue with enabling the port and link training. */
lttpr_count = 0;
@ -1390,6 +1505,17 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
else
passed = intel_dp_link_train_all_phys(intel_dp, crtc_state, lttpr_count);
if (intel_dp->link.force_train_failure) {
intel_dp->link.force_train_failure--;
lt_dbg(intel_dp, DP_PHY_DPRX, "Forcing link training failure\n");
} else if (passed) {
intel_dp->link.seq_train_failures = 0;
intel_encoder_link_check_queue_work(encoder, 2000);
return;
}
intel_dp->link.seq_train_failures++;
/*
* Ignore the link failure in CI
*
@ -1402,13 +1528,25 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
* For test cases which rely on the link training or processing of HPDs,
* the ignore_long_hpd flag can be unset from the testcase.
*/
if (!passed && i915->display.hotplug.ignore_long_hpd) {
if (i915->display.hotplug.ignore_long_hpd) {
lt_dbg(intel_dp, DP_PHY_DPRX, "Ignore the link failure\n");
return;
}
if (intel_dp->link.seq_train_failures < 2) {
intel_encoder_link_check_queue_work(encoder, 0);
return;
}
if (intel_dp_schedule_fallback_link_training(state, intel_dp, crtc_state))
return;
intel_dp->link.retrain_disabled = true;
if (!passed)
intel_dp_schedule_fallback_link_training(intel_dp, crtc_state);
lt_err(intel_dp, DP_PHY_DPRX, "Can't reduce link training parameters after failure\n");
else
lt_dbg(intel_dp, DP_PHY_DPRX, "Can't reduce link training parameters after forced failure\n");
}
void intel_dp_128b132b_sdp_crc16(struct intel_dp *intel_dp,
@ -1430,3 +1568,381 @@ void intel_dp_128b132b_sdp_crc16(struct intel_dp *intel_dp,
lt_dbg(intel_dp, DP_PHY_DPRX, "DP2.0 SDP CRC16 for 128b/132b enabled\n");
}
static struct intel_dp *intel_connector_to_intel_dp(struct intel_connector *connector)
{
if (connector->mst_port)
return connector->mst_port;
else
return enc_to_intel_dp(intel_attached_encoder(connector));
}
static int i915_dp_force_link_rate_show(struct seq_file *m, void *data)
{
struct intel_connector *connector = to_intel_connector(m->private);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int current_rate = -1;
int force_rate;
int err;
int i;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
if (intel_dp->link_trained)
current_rate = intel_dp->link_rate;
force_rate = intel_dp->link.force_rate;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
seq_printf(m, "%sauto%s",
force_rate == 0 ? "[" : "",
force_rate == 0 ? "]" : "");
for (i = 0; i < intel_dp->num_source_rates; i++)
seq_printf(m, " %s%d%s%s",
intel_dp->source_rates[i] == force_rate ? "[" : "",
intel_dp->source_rates[i],
intel_dp->source_rates[i] == current_rate ? "*" : "",
intel_dp->source_rates[i] == force_rate ? "]" : "");
seq_putc(m, '\n');
return 0;
}
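In the show callback above, square brackets mark the forced value ([auto] when nothing is forced) and a trailing * marks the rate the link is currently trained at. A standalone sketch of that formatting, with illustrative rates:

#include <stdio.h>

static void show_rates(const int *rates, int n, int force_rate, int current_rate)
{
	/* "[auto]" when no rate is forced, i.e. force_rate == 0 */
	printf("%sauto%s", force_rate == 0 ? "[" : "", force_rate == 0 ? "]" : "");

	for (int i = 0; i < n; i++)
		printf(" %s%d%s%s",
		       rates[i] == force_rate ? "[" : "",
		       rates[i],
		       rates[i] == current_rate ? "*" : "",
		       rates[i] == force_rate ? "]" : "");
	putchar('\n');
}

int main(void)
{
	static const int rates[] = { 162000, 270000, 540000 };

	/* nothing forced, link trained at 270000: "[auto] 162000 270000* 540000" */
	show_rates(rates, 3, 0, 270000);
	/* 540000 forced, link not yet retrained:  "auto 162000 270000* [540000]" */
	show_rates(rates, 3, 540000, 270000);
	return 0;
}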
static int parse_link_rate(struct intel_dp *intel_dp, const char __user *ubuf, size_t len)
{
char *kbuf;
const char *p;
int rate;
int ret = 0;
kbuf = memdup_user_nul(ubuf, len);
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
p = strim(kbuf);
if (!strcmp(p, "auto")) {
rate = 0;
} else {
ret = kstrtoint(p, 0, &rate);
if (ret < 0)
goto out_free;
if (intel_dp_rate_index(intel_dp->source_rates,
intel_dp->num_source_rates,
rate) < 0)
ret = -EINVAL;
}
out_free:
kfree(kbuf);
return ret < 0 ? ret : rate;
}
static ssize_t i915_dp_force_link_rate_write(struct file *file,
const char __user *ubuf,
size_t len, loff_t *offp)
{
struct seq_file *m = file->private_data;
struct intel_connector *connector = to_intel_connector(m->private);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int rate;
int err;
rate = parse_link_rate(intel_dp, ubuf, len);
if (rate < 0)
return rate;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
intel_dp_reset_link_params(intel_dp);
intel_dp->link.force_rate = rate;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
*offp += len;
return len;
}
DEFINE_SHOW_STORE_ATTRIBUTE(i915_dp_force_link_rate);
static int i915_dp_force_lane_count_show(struct seq_file *m, void *data)
{
struct intel_connector *connector = to_intel_connector(m->private);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int current_lane_count = -1;
int force_lane_count;
int err;
int i;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
if (intel_dp->link_trained)
current_lane_count = intel_dp->lane_count;
force_lane_count = intel_dp->link.force_lane_count;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
seq_printf(m, "%sauto%s",
force_lane_count == 0 ? "[" : "",
force_lane_count == 0 ? "]" : "");
for (i = 1; i <= 4; i <<= 1)
seq_printf(m, " %s%d%s%s",
i == force_lane_count ? "[" : "",
i,
i == current_lane_count ? "*" : "",
i == force_lane_count ? "]" : "");
seq_putc(m, '\n');
return 0;
}
static int parse_lane_count(const char __user *ubuf, size_t len)
{
char *kbuf;
const char *p;
int lane_count;
int ret = 0;
kbuf = memdup_user_nul(ubuf, len);
if (IS_ERR(kbuf))
return PTR_ERR(kbuf);
p = strim(kbuf);
if (!strcmp(p, "auto")) {
lane_count = 0;
} else {
ret = kstrtoint(p, 0, &lane_count);
if (ret < 0)
goto out_free;
switch (lane_count) {
case 1:
case 2:
case 4:
break;
default:
ret = -EINVAL;
}
}
out_free:
kfree(kbuf);
return ret < 0 ? ret : lane_count;
}
static ssize_t i915_dp_force_lane_count_write(struct file *file,
const char __user *ubuf,
size_t len, loff_t *offp)
{
struct seq_file *m = file->private_data;
struct intel_connector *connector = to_intel_connector(m->private);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int lane_count;
int err;
lane_count = parse_lane_count(ubuf, len);
if (lane_count < 0)
return lane_count;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
intel_dp_reset_link_params(intel_dp);
intel_dp->link.force_lane_count = lane_count;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
*offp += len;
return len;
}
DEFINE_SHOW_STORE_ATTRIBUTE(i915_dp_force_lane_count);
static int i915_dp_max_link_rate_show(void *data, u64 *val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
*val = intel_dp->link.max_rate;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(i915_dp_max_link_rate_fops, i915_dp_max_link_rate_show, NULL, "%llu\n");
static int i915_dp_max_lane_count_show(void *data, u64 *val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
*val = intel_dp->link.max_lane_count;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(i915_dp_max_lane_count_fops, i915_dp_max_lane_count_show, NULL, "%llu\n");
static int i915_dp_force_link_training_failure_show(void *data, u64 *val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
*val = intel_dp->link.force_train_failure;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
static int i915_dp_force_link_training_failure_write(void *data, u64 val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
if (val > 2)
return -EINVAL;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
intel_dp->link.force_train_failure = val;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(i915_dp_force_link_training_failure_fops,
i915_dp_force_link_training_failure_show,
i915_dp_force_link_training_failure_write, "%llu\n");
static int i915_dp_force_link_retrain_show(void *data, u64 *val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
*val = intel_dp->link.force_retrain;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
static int i915_dp_force_link_retrain_write(void *data, u64 val)
{
struct intel_connector *connector = to_intel_connector(data);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
intel_dp->link.force_retrain = val;
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
intel_hpd_trigger_irq(dp_to_dig_port(intel_dp));
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(i915_dp_force_link_retrain_fops,
i915_dp_force_link_retrain_show,
i915_dp_force_link_retrain_write, "%llu\n");
static int i915_dp_link_retrain_disabled_show(struct seq_file *m, void *data)
{
struct intel_connector *connector = to_intel_connector(m->private);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
struct intel_dp *intel_dp = intel_connector_to_intel_dp(connector);
int err;
err = drm_modeset_lock_single_interruptible(&i915->drm.mode_config.connection_mutex);
if (err)
return err;
seq_printf(m, "%s\n", str_yes_no(intel_dp->link.retrain_disabled));
drm_modeset_unlock(&i915->drm.mode_config.connection_mutex);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(i915_dp_link_retrain_disabled);
void intel_dp_link_training_debugfs_add(struct intel_connector *connector)
{
struct dentry *root = connector->base.debugfs_entry;
if (connector->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort &&
connector->base.connector_type != DRM_MODE_CONNECTOR_eDP)
return;
debugfs_create_file("i915_dp_force_link_rate", 0644, root,
connector, &i915_dp_force_link_rate_fops);
debugfs_create_file("i915_dp_force_lane_count", 0644, root,
connector, &i915_dp_force_lane_count_fops);
debugfs_create_file("i915_dp_max_link_rate", 0444, root,
connector, &i915_dp_max_link_rate_fops);
debugfs_create_file("i915_dp_max_lane_count", 0444, root,
connector, &i915_dp_max_lane_count_fops);
debugfs_create_file("i915_dp_force_link_training_failure", 0644, root,
connector, &i915_dp_force_link_training_failure_fops);
debugfs_create_file("i915_dp_force_link_retrain", 0644, root,
connector, &i915_dp_force_link_retrain_fops);
debugfs_create_file("i915_dp_link_retrain_disabled", 0444, root,
connector, &i915_dp_link_retrain_disabled_fops);
}

View File

@ -8,6 +8,8 @@
#include <drm/display/drm_dp_helper.h>
struct intel_atomic_state;
struct intel_connector;
struct intel_crtc_state;
struct intel_dp;
@ -25,7 +27,8 @@ void intel_dp_program_link_training_pattern(struct intel_dp *intel_dp,
void intel_dp_set_signal_levels(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state,
enum drm_dp_phy dp_phy);
void intel_dp_start_link_train(struct intel_dp *intel_dp,
void intel_dp_start_link_train(struct intel_atomic_state *state,
struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_dp_stop_link_train(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
@ -42,4 +45,7 @@ static inline u8 intel_dp_training_pattern_symbol(u8 pattern)
void intel_dp_128b132b_sdp_crc16(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);
void intel_dp_link_training_debugfs_add(struct intel_connector *connector);
#endif /* __INTEL_DP_LINK_TRAINING_H__ */

View File

@ -105,7 +105,7 @@ static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,
dsc_slice_count = intel_dp_dsc_get_slice_count(connector,
adjusted_mode->clock,
adjusted_mode->hdisplay,
crtc_state->bigjoiner_pipes);
crtc_state->joiner_pipes);
}
overhead = drm_dp_bw_overhead(crtc_state->lane_count,
@ -207,6 +207,7 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
int remote_bw_overhead;
int link_bpp_x16;
int remote_tu;
fixed20_12 pbn;
drm_dbg_kms(&i915->drm, "Trying bpp %d\n", bpp);
@ -237,11 +238,29 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
* crtc_state->dp_m_n.tu), provided that the driver doesn't
* enable SSC on the corresponding link.
*/
crtc_state->pbn = intel_dp_mst_calc_pbn(adjusted_mode->crtc_clock,
link_bpp_x16,
remote_bw_overhead);
pbn.full = dfixed_const(intel_dp_mst_calc_pbn(adjusted_mode->crtc_clock,
link_bpp_x16,
remote_bw_overhead));
remote_tu = DIV_ROUND_UP(pbn.full, mst_state->pbn_div.full);
remote_tu = DIV_ROUND_UP(dfixed_const(crtc_state->pbn), mst_state->pbn_div.full);
/*
* Aligning the TUs ensures that symbols consisting of multiple
* (4) symbol cycles don't get split between two consecutive
* MTPs, as required by Bspec.
* TODO: remove the alignment restriction for 128b/132b links
* on some platforms, where Bspec allows this.
*/
remote_tu = ALIGN(remote_tu, 4 / crtc_state->lane_count);
/*
* Also align PBNs accordingly, since MST core will derive its
* own copy of TU from the PBN in drm_dp_atomic_find_time_slots().
* The above comment about the difference between the PBN
* allocated for the whole path and the TUs allocated for the
* first branch device's link also applies here.
*/
pbn.full = remote_tu * mst_state->pbn_div.full;
crtc_state->pbn = dfixed_trunc(pbn);
drm_WARN_ON(&i915->drm, remote_tu < crtc_state->dp_m_n.tu);
crtc_state->dp_m_n.tu = remote_tu;
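The hunk above rounds the PBN up to whole time slots, aligns the slot count to a multiple of 4 / lane_count, and then recomputes the PBN from the aligned slot count so that the MST core derives the same TU value from it. A plain-integer sketch of that round trip (the driver keeps pbn and pbn_div in 20.12 fixed point, which is omitted here, and the numbers are made up):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define ALIGN_UP(x, a)		(DIV_ROUND_UP(x, a) * (a))

int main(void)
{
	int pbn = 1000;		/* PBN needed by the mode, illustrative */
	int pbn_div = 60;	/* PBN per time slot on this link, illustrative */
	int lane_count = 2;

	int tu = DIV_ROUND_UP(pbn, pbn_div);	/* 17 slots */
	tu = ALIGN_UP(tu, 4 / lane_count);	/* align to 2 -> 18 slots */
	pbn = tu * pbn_div;			/* 1080, so the MST core derives the same TU */

	printf("tu=%d pbn=%d\n", tu, pbn);
	return 0;
}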
@ -349,6 +368,8 @@ static int intel_dp_dsc_mst_compute_link_config(struct intel_encoder *encoder,
if (max_bpp > sink_max_bpp)
max_bpp = sink_max_bpp;
crtc_state->pipe_bpp = max_bpp;
max_compressed_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
crtc_state,
max_bpp / 3);
@ -400,18 +421,6 @@ static int intel_dp_mst_update_slots(struct intel_encoder *encoder,
return 0;
}
static bool
intel_dp_mst_dsc_source_support(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
/*
* FIXME: Enabling DSC on ICL results in blank screen and FIFO pipe /
* transcoder underruns, re-enable DSC after fixing this issue.
*/
return DISPLAY_VER(i915) >= 12 && intel_dsc_source_support(crtc_state);
}
static int mode_hblank_period_ns(const struct drm_display_mode *mode)
{
return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(mode->htotal - mode->hdisplay,
@ -456,7 +465,7 @@ adjust_limits_for_dsc_hblank_expansion_quirk(const struct intel_connector *conne
return true;
if (!dsc) {
if (intel_dp_mst_dsc_source_support(crtc_state)) {
if (intel_dp_supports_dsc(connector, crtc_state)) {
drm_dbg_kms(&i915->drm,
"[CRTC:%d:%s][CONNECTOR:%d:%s] DSC needed by hblank expansion quirk\n",
crtc->base.base.id, crtc->base.name,
@ -567,16 +576,16 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return -EINVAL;
if (intel_dp_need_bigjoiner(intel_dp, connector,
adjusted_mode->crtc_hdisplay,
adjusted_mode->crtc_clock))
pipe_config->bigjoiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe);
if (intel_dp_need_joiner(intel_dp, connector,
adjusted_mode->crtc_hdisplay,
adjusted_mode->crtc_clock))
pipe_config->joiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe);
pipe_config->sink_format = INTEL_OUTPUT_FORMAT_RGB;
pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
pipe_config->has_pch_encoder = false;
joiner_needs_dsc = intel_dp_joiner_needs_dsc(dev_priv, pipe_config->bigjoiner_pipes);
joiner_needs_dsc = intel_dp_joiner_needs_dsc(dev_priv, pipe_config->joiner_pipes);
dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
!intel_dp_mst_compute_config_limits(intel_dp,
@ -602,7 +611,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
str_yes_no(ret), str_yes_no(joiner_needs_dsc),
str_yes_no(intel_dp->force_dsc_en));
if (!intel_dp_mst_dsc_source_support(pipe_config))
if (!intel_dp_supports_dsc(connector, pipe_config))
return -EINVAL;
if (!intel_dp_mst_compute_config_limits(intel_dp,
@ -962,6 +971,9 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
drm_dbg_kms(&i915->drm, "active links %d\n",
intel_dp->active_mst_links);
if (intel_dp->active_mst_links == 1)
intel_dp->link_trained = false;
intel_hdcp_disable(intel_mst->connector);
intel_dp_sink_disable_decompression(state, connector, old_crtc_state);
@ -1009,7 +1021,8 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
clear_act_sent(encoder, old_crtc_state);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, old_crtc_state->cpu_transcoder),
TRANS_DDI_DP_VC_PAYLOAD_ALLOC, 0);
wait_for_act_sent(encoder, old_crtc_state);
@ -1230,7 +1243,7 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
clear_act_sent(encoder, pipe_config);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(trans), 0,
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, trans), 0,
TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
drm_dbg_kms(&dev_priv->drm, "active links %d\n",
@ -1375,7 +1388,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
int max_dotclk = to_i915(connector->dev)->display.cdclk.max_dotclk_freq;
int max_rate, mode_rate, max_lanes, max_link_clock;
int ret;
bool dsc = false, bigjoiner = false;
bool dsc = false, joiner = false;
u16 dsc_max_compressed_bpp = 0;
u8 dsc_slice_count = 0;
int target_clock = mode->clock;
@ -1418,9 +1431,9 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
* corresponding link capabilities of the sink) in case the
* stream is uncompressed for it by the last branch device.
*/
if (intel_dp_need_bigjoiner(intel_dp, intel_connector,
mode->hdisplay, target_clock)) {
bigjoiner = true;
if (intel_dp_need_joiner(intel_dp, intel_connector,
mode->hdisplay, target_clock)) {
joiner = true;
max_dotclk *= 2;
}
@ -1434,8 +1447,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
return 0;
}
if (HAS_DSC_MST(dev_priv) &&
drm_dp_sink_supports_dsc(intel_connector->dp.dsc_dpcd)) {
if (intel_dp_has_dsc(intel_connector)) {
/*
* TBD pass the connector BPC,
* for now U8_MAX so that max BPC on that platform would be picked
@ -1449,20 +1461,20 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
max_lanes,
target_clock,
mode->hdisplay,
bigjoiner,
joiner,
INTEL_OUTPUT_FORMAT_RGB,
pipe_bpp, 64);
dsc_slice_count =
intel_dp_dsc_get_slice_count(intel_connector,
target_clock,
mode->hdisplay,
bigjoiner);
joiner);
}
dsc = dsc_max_compressed_bpp && dsc_slice_count;
}
if (intel_dp_joiner_needs_dsc(dev_priv, bigjoiner) && !dsc) {
if (intel_dp_joiner_needs_dsc(dev_priv, joiner) && !dsc) {
*status = MODE_CLOCK_HIGH;
return 0;
}
@ -1472,7 +1484,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
return 0;
}
*status = intel_mode_valid_max_plane_size(dev_priv, mode, bigjoiner);
*status = intel_mode_valid_max_plane_size(dev_priv, mode, joiner);
return 0;
}

View File

@ -398,12 +398,13 @@ void i9xx_dpll_get_hw_state(struct intel_crtc *crtc,
if (IS_CHERRYVIEW(dev_priv) && crtc->pipe != PIPE_A)
tmp = dev_priv->display.state.chv_dpll_md[crtc->pipe];
else
tmp = intel_de_read(dev_priv, DPLL_MD(crtc->pipe));
tmp = intel_de_read(dev_priv,
DPLL_MD(dev_priv, crtc->pipe));
hw_state->dpll_md = tmp;
}
hw_state->dpll = intel_de_read(dev_priv, DPLL(crtc->pipe));
hw_state->dpll = intel_de_read(dev_priv, DPLL(dev_priv, crtc->pipe));
if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv)) {
hw_state->fp0 = intel_de_read(dev_priv, FP0(crtc->pipe));
@ -1842,28 +1843,30 @@ void i9xx_enable_pll(const struct intel_crtc_state *crtc_state)
* the P1/P2 dividers. Otherwise the DPLL will keep using the old
* dividers, even though the register value does change.
*/
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll & ~DPLL_VGA_MODE_DIS);
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll);
intel_de_write(dev_priv, DPLL(dev_priv, pipe),
hw_state->dpll & ~DPLL_VGA_MODE_DIS);
intel_de_write(dev_priv, DPLL(dev_priv, pipe), hw_state->dpll);
/* Wait for the clocks to stabilize. */
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
udelay(150);
if (DISPLAY_VER(dev_priv) >= 4) {
intel_de_write(dev_priv, DPLL_MD(pipe), hw_state->dpll_md);
intel_de_write(dev_priv, DPLL_MD(dev_priv, pipe),
hw_state->dpll_md);
} else {
/* The pixel multiplier can only be updated once the
* DPLL is enabled and the clocks are stable.
*
* So write it again.
*/
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll);
intel_de_write(dev_priv, DPLL(dev_priv, pipe), hw_state->dpll);
}
/* We do this three times for luck */
for (i = 0; i < 3; i++) {
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll);
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_write(dev_priv, DPLL(dev_priv, pipe), hw_state->dpll);
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
udelay(150); /* wait for warmup */
}
}
@ -1991,11 +1994,11 @@ static void _vlv_enable_pll(const struct intel_crtc_state *crtc_state)
const struct i9xx_dpll_hw_state *hw_state = &crtc_state->dpll_hw_state.i9xx;
enum pipe pipe = crtc->pipe;
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll);
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_write(dev_priv, DPLL(dev_priv, pipe), hw_state->dpll);
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
udelay(150);
if (intel_de_wait_for_set(dev_priv, DPLL(pipe), DPLL_LOCK_VLV, 1))
if (intel_de_wait_for_set(dev_priv, DPLL(dev_priv, pipe), DPLL_LOCK_VLV, 1))
drm_err(&dev_priv->drm, "DPLL %d failed to lock\n", pipe);
}
@ -2012,7 +2015,7 @@ void vlv_enable_pll(const struct intel_crtc_state *crtc_state)
assert_pps_unlocked(dev_priv, pipe);
/* Enable Refclk */
intel_de_write(dev_priv, DPLL(pipe),
intel_de_write(dev_priv, DPLL(dev_priv, pipe),
hw_state->dpll & ~(DPLL_VCO_ENABLE | DPLL_EXT_BUFFER_ENABLE_VLV));
if (hw_state->dpll & DPLL_VCO_ENABLE) {
@ -2020,8 +2023,8 @@ void vlv_enable_pll(const struct intel_crtc_state *crtc_state)
_vlv_enable_pll(crtc_state);
}
intel_de_write(dev_priv, DPLL_MD(pipe), hw_state->dpll_md);
intel_de_posting_read(dev_priv, DPLL_MD(pipe));
intel_de_write(dev_priv, DPLL_MD(dev_priv, pipe), hw_state->dpll_md);
intel_de_posting_read(dev_priv, DPLL_MD(dev_priv, pipe));
}
static void chv_prepare_pll(const struct intel_crtc_state *crtc_state)
@ -2138,10 +2141,10 @@ static void _chv_enable_pll(const struct intel_crtc_state *crtc_state)
udelay(1);
/* Enable PLL */
intel_de_write(dev_priv, DPLL(pipe), hw_state->dpll);
intel_de_write(dev_priv, DPLL(dev_priv, pipe), hw_state->dpll);
/* Check PLL is locked */
if (intel_de_wait_for_set(dev_priv, DPLL(pipe), DPLL_LOCK_VLV, 1))
if (intel_de_wait_for_set(dev_priv, DPLL(dev_priv, pipe), DPLL_LOCK_VLV, 1))
drm_err(&dev_priv->drm, "PLL %d failed to lock\n", pipe);
}
@ -2158,7 +2161,7 @@ void chv_enable_pll(const struct intel_crtc_state *crtc_state)
assert_pps_unlocked(dev_priv, pipe);
/* Enable Refclk and SSC */
intel_de_write(dev_priv, DPLL(pipe),
intel_de_write(dev_priv, DPLL(dev_priv, pipe),
hw_state->dpll & ~DPLL_VCO_ENABLE);
if (hw_state->dpll & DPLL_VCO_ENABLE) {
@ -2174,7 +2177,8 @@ void chv_enable_pll(const struct intel_crtc_state *crtc_state)
* the value from DPLLBMD to either pipe B or C.
*/
intel_de_write(dev_priv, CBR4_VLV, CBR_DPLLBMD_PIPE(pipe));
intel_de_write(dev_priv, DPLL_MD(PIPE_B), hw_state->dpll_md);
intel_de_write(dev_priv, DPLL_MD(dev_priv, PIPE_B),
hw_state->dpll_md);
intel_de_write(dev_priv, CBR4_VLV, 0);
dev_priv->display.state.chv_dpll_md[pipe] = hw_state->dpll_md;
@ -2183,11 +2187,12 @@ void chv_enable_pll(const struct intel_crtc_state *crtc_state)
* We should always have it disabled.
*/
drm_WARN_ON(&dev_priv->drm,
(intel_de_read(dev_priv, DPLL(PIPE_B)) &
(intel_de_read(dev_priv, DPLL(dev_priv, PIPE_B)) &
DPLL_VGA_MODE_DIS) == 0);
} else {
intel_de_write(dev_priv, DPLL_MD(pipe), hw_state->dpll_md);
intel_de_posting_read(dev_priv, DPLL_MD(pipe));
intel_de_write(dev_priv, DPLL_MD(dev_priv, pipe),
hw_state->dpll_md);
intel_de_posting_read(dev_priv, DPLL_MD(dev_priv, pipe));
}
}
@ -2241,8 +2246,8 @@ void vlv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
if (pipe != PIPE_A)
val |= DPLL_INTEGRATED_CRI_CLK_VLV;
intel_de_write(dev_priv, DPLL(pipe), val);
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_write(dev_priv, DPLL(dev_priv, pipe), val);
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
}
void chv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
@ -2259,8 +2264,8 @@ void chv_disable_pll(struct drm_i915_private *dev_priv, enum pipe pipe)
if (pipe != PIPE_A)
val |= DPLL_INTEGRATED_CRI_CLK_VLV;
intel_de_write(dev_priv, DPLL(pipe), val);
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_write(dev_priv, DPLL(dev_priv, pipe), val);
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
vlv_dpio_get(dev_priv);
@ -2285,8 +2290,8 @@ void i9xx_disable_pll(const struct intel_crtc_state *crtc_state)
/* Make sure the pipe isn't still relying on us */
assert_transcoder_disabled(dev_priv, crtc_state->cpu_transcoder);
intel_de_write(dev_priv, DPLL(pipe), DPLL_VGA_MODE_DIS);
intel_de_posting_read(dev_priv, DPLL(pipe));
intel_de_write(dev_priv, DPLL(dev_priv, pipe), DPLL_VGA_MODE_DIS);
intel_de_posting_read(dev_priv, DPLL(dev_priv, pipe));
}
@ -2312,7 +2317,7 @@ static void assert_pll(struct drm_i915_private *dev_priv,
{
bool cur_state;
cur_state = intel_de_read(dev_priv, DPLL(pipe)) & DPLL_VCO_ENABLE;
cur_state = intel_de_read(dev_priv, DPLL(dev_priv, pipe)) & DPLL_VCO_ENABLE;
I915_STATE_WARN(dev_priv, cur_state != state,
"PLL state assertion failure (expected %s, current %s)\n",
str_on_off(state), str_on_off(cur_state));

View File

@ -264,6 +264,7 @@ struct intel_cx0pll_state {
struct intel_c20pll_state c20;
};
bool ssc_enabled;
bool use_c10;
};
struct intel_dpll_hw_state {

View File

@ -121,7 +121,8 @@ static void dpt_cleanup(struct i915_address_space *vm)
i915_gem_object_put(dpt->obj);
}
struct i915_vma *intel_dpt_pin(struct i915_address_space *vm)
struct i915_vma *intel_dpt_pin_to_ggtt(struct i915_address_space *vm,
unsigned int alignment)
{
struct drm_i915_private *i915 = vm->i915;
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
@ -143,8 +144,8 @@ struct i915_vma *intel_dpt_pin(struct i915_address_space *vm)
if (err)
continue;
vma = i915_gem_object_ggtt_pin_ww(dpt->obj, &ww, NULL, 0, 4096,
pin_flags);
vma = i915_gem_object_ggtt_pin_ww(dpt->obj, &ww, NULL, 0,
alignment, pin_flags);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
continue;
@ -172,7 +173,7 @@ struct i915_vma *intel_dpt_pin(struct i915_address_space *vm)
return err ? ERR_PTR(err) : vma;
}
void intel_dpt_unpin(struct i915_address_space *vm)
void intel_dpt_unpin_from_ggtt(struct i915_address_space *vm)
{
struct i915_dpt *dpt = i915_vm_to_dpt(vm);

View File

@ -13,8 +13,9 @@ struct i915_vma;
struct intel_framebuffer;
void intel_dpt_destroy(struct i915_address_space *vm);
struct i915_vma *intel_dpt_pin(struct i915_address_space *vm);
void intel_dpt_unpin(struct i915_address_space *vm);
struct i915_vma *intel_dpt_pin_to_ggtt(struct i915_address_space *vm,
unsigned int alignment);
void intel_dpt_unpin_from_ggtt(struct i915_address_space *vm);
void intel_dpt_suspend(struct drm_i915_private *i915);
void intel_dpt_resume(struct drm_i915_private *i915);
struct i915_address_space *

View File

@ -7,6 +7,7 @@
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_dpt_common.h"
#include "skl_universal_plane_regs.h"
void intel_dpt_configure(struct intel_crtc *crtc)
{

View File

@ -85,7 +85,7 @@ intel_drrs_set_refresh_rate_pipeconf(struct intel_crtc *crtc,
else
bit = TRANSCONF_REFRESH_RATE_ALT_ILK;
intel_de_rmw(dev_priv, TRANSCONF(cpu_transcoder),
intel_de_rmw(dev_priv, TRANSCONF(dev_priv, cpu_transcoder),
bit, refresh_rate == DRRS_REFRESH_RATE_LOW ? bit : 0);
}
@ -135,7 +135,7 @@ static unsigned int intel_drrs_frontbuffer_bits(const struct intel_crtc_state *c
frontbuffer_bits = INTEL_FRONTBUFFER_ALL_MASK(crtc->pipe);
for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc,
crtc_state->bigjoiner_pipes)
crtc_state->joiner_pipes)
frontbuffer_bits |= INTEL_FRONTBUFFER_ALL_MASK(crtc->pipe);
return frontbuffer_bits;
@ -157,7 +157,7 @@ void intel_drrs_activate(const struct intel_crtc_state *crtc_state)
if (!crtc_state->hw.active)
return;
if (intel_crtc_is_bigjoiner_slave(crtc_state))
if (intel_crtc_is_joiner_secondary(crtc_state))
return;
mutex_lock(&crtc->drrs.mutex);
@ -189,7 +189,7 @@ void intel_drrs_deactivate(const struct intel_crtc_state *old_crtc_state)
if (!old_crtc_state->hw.active)
return;
if (intel_crtc_is_bigjoiner_slave(old_crtc_state))
if (intel_crtc_is_joiner_secondary(old_crtc_state))
return;
mutex_lock(&crtc->drrs.mutex);

View File

@ -6,7 +6,6 @@
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_reg.h"
#include "intel_crtc.h"
#include "intel_de.h"
#include "intel_display_types.h"
@ -19,16 +18,8 @@
#define CACHELINE_BYTES 64
enum dsb_id {
INVALID_DSB = -1,
DSB1,
DSB2,
DSB3,
MAX_DSB_PER_PIPE
};
struct intel_dsb {
enum dsb_id id;
enum intel_dsb_id id;
struct intel_dsb_buffer dsb_buf;
struct intel_crtc *crtc;
@ -121,9 +112,9 @@ static void intel_dsb_dump(struct intel_dsb *dsb)
}
static bool is_dsb_busy(struct drm_i915_private *i915, enum pipe pipe,
enum dsb_id id)
enum intel_dsb_id dsb_id)
{
return intel_de_read_fw(i915, DSB_CTRL(pipe, id)) & DSB_STATUS_BUSY;
return intel_de_read_fw(i915, DSB_CTRL(pipe, dsb_id)) & DSB_STATUS_BUSY;
}
static void intel_dsb_emit(struct intel_dsb *dsb, u32 ldw, u32 udw)
@ -328,14 +319,10 @@ static int intel_dsb_dewake_scanline(const struct intel_crtc_state *crtc_state)
unsigned int latency = skl_watermark_max_latency(i915, 0);
int vblank_start;
if (crtc_state->vrr.enable) {
if (crtc_state->vrr.enable)
vblank_start = intel_vrr_vmin_vblank_start(crtc_state);
} else {
vblank_start = adjusted_mode->crtc_vblank_start;
if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
vblank_start = DIV_ROUND_UP(vblank_start, 2);
}
else
vblank_start = intel_mode_vblank_start(adjusted_mode);
return max(0, vblank_start - intel_usecs_to_scanlines(adjusted_mode, latency));
}
@ -448,6 +435,7 @@ void intel_dsb_wait(struct intel_dsb *dsb)
/**
* intel_dsb_prepare() - Allocate, pin and map the DSB command buffer.
* @crtc_state: the CRTC state
* @dsb_id: the DSB engine to use
* @max_cmds: number of commands we need to fit into command buffer
*
* This function prepares the command buffer which is used to store dsb
@ -457,6 +445,7 @@ void intel_dsb_wait(struct intel_dsb *dsb)
* DSB context, NULL on failure
*/
struct intel_dsb *intel_dsb_prepare(const struct intel_crtc_state *crtc_state,
enum intel_dsb_id dsb_id,
unsigned int max_cmds)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
@ -486,7 +475,7 @@ struct intel_dsb *intel_dsb_prepare(const struct intel_crtc_state *crtc_state,
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
dsb->id = DSB1;
dsb->id = dsb_id;
dsb->crtc = crtc;
dsb->size = size / 4; /* in dwords */
dsb->free_pos = 0;
@ -501,7 +490,7 @@ out_put_rpm:
out:
drm_info_once(&i915->drm,
"[CRTC:%d:%s] DSB %d queue setup failed, will fallback to MMIO for display HW programming\n",
crtc->base.base.id, crtc->base.name, DSB1);
crtc->base.base.id, crtc->base.name, dsb_id);
return NULL;
}

View File

@ -14,7 +14,16 @@ struct intel_crtc;
struct intel_crtc_state;
struct intel_dsb;
enum intel_dsb_id {
INTEL_DSB_0,
INTEL_DSB_1,
INTEL_DSB_2,
I915_MAX_DSBS,
};
struct intel_dsb *intel_dsb_prepare(const struct intel_crtc_state *crtc_state,
enum intel_dsb_id dsb_id,
unsigned int max_cmds);
void intel_dsb_finish(struct intel_dsb *dsb);
void intel_dsb_cleanup(struct intel_dsb *dsb);

View File

@ -353,14 +353,14 @@ static void icl_native_gpio_set_value(struct drm_i915_private *dev_priv,
case MIPI_AVDD_EN_2:
index = gpio == MIPI_AVDD_EN_1 ? 0 : 1;
intel_de_rmw(dev_priv, PP_CONTROL(index), PANEL_POWER_ON,
intel_de_rmw(dev_priv, PP_CONTROL(dev_priv, index), PANEL_POWER_ON,
value ? PANEL_POWER_ON : 0);
break;
case MIPI_BKLT_EN_1:
case MIPI_BKLT_EN_2:
index = gpio == MIPI_BKLT_EN_1 ? 0 : 1;
intel_de_rmw(dev_priv, PP_CONTROL(index), EDP_BLC_ENABLE,
intel_de_rmw(dev_priv, PP_CONTROL(dev_priv, index), EDP_BLC_ENABLE,
value ? EDP_BLC_ENABLE : 0);
break;
case MIPI_AVEE_EN_1:
@ -751,7 +751,7 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
struct intel_connector *connector = intel_dsi->attached_connector;
struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
struct mipi_pps_data *pps = connector->panel.vbt.dsi.pps;
struct drm_display_mode *mode = connector->panel.vbt.lfp_lvds_vbt_mode;
struct drm_display_mode *mode = connector->panel.vbt.lfp_vbt_mode;
u16 burst_mode_ratio;
enum port port;

View File

@ -456,13 +456,14 @@ static bool intel_dvo_init_dev(struct drm_i915_private *dev_priv,
* the device.
*/
for_each_pipe(dev_priv, pipe)
dpll[pipe] = intel_de_rmw(dev_priv, DPLL(pipe), 0, DPLL_DVO_2X_MODE);
dpll[pipe] = intel_de_rmw(dev_priv, DPLL(dev_priv, pipe), 0,
DPLL_DVO_2X_MODE);
ret = dvo->dev_ops->init(&intel_dvo->dev, i2c);
/* restore the DVO 2x clock state to original */
for_each_pipe(dev_priv, pipe) {
intel_de_write(dev_priv, DPLL(pipe), dpll[pipe]);
intel_de_write(dev_priv, DPLL(dev_priv, pipe), dpll[pipe]);
}
intel_gmbus_force_bit(i2c, false);

View File

@ -0,0 +1,39 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2024 Intel Corporation
*/
#include <linux/workqueue.h>
#include "i915_drv.h"
#include "intel_display_types.h"
#include "intel_encoder.h"
static void intel_encoder_link_check_work_fn(struct work_struct *work)
{
struct intel_encoder *encoder =
container_of(work, typeof(*encoder), link_check_work.work);
encoder->link_check(encoder);
}
void intel_encoder_link_check_init(struct intel_encoder *encoder,
void (*callback)(struct intel_encoder *encoder))
{
INIT_DELAYED_WORK(&encoder->link_check_work, intel_encoder_link_check_work_fn);
encoder->link_check = callback;
}
void intel_encoder_link_check_flush_work(struct intel_encoder *encoder)
{
cancel_delayed_work_sync(&encoder->link_check_work);
}
void intel_encoder_link_check_queue_work(struct intel_encoder *encoder, int delay_ms)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
mod_delayed_work(i915->unordered_wq,
&encoder->link_check_work, msecs_to_jiffies(delay_ms));
}

View File

@ -0,0 +1,16 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef __INTEL_ENCODER_H__
#define __INTEL_ENCODER_H__
struct intel_encoder;
void intel_encoder_link_check_init(struct intel_encoder *encoder,
void (*callback)(struct intel_encoder *encoder));
void intel_encoder_link_check_queue_work(struct intel_encoder *encoder, int delay_ms);
void intel_encoder_link_check_flush_work(struct intel_encoder *encoder);
#endif /* __INTEL_ENCODER_H__ */
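The new header pairs a one-time registration (intel_encoder_link_check_init()) with deferred execution (intel_encoder_link_check_queue_work()), so link-training code can ask for a later re-check of a specific encoder instead of calling back into it directly. A rough standalone model of that pattern, with the delayed-work machinery replaced by an immediate call and all names invented for illustration:

#include <stdio.h>

struct encoder {
	const char *name;
	void (*link_check)(struct encoder *encoder);
};

/* Registration step, analogous to intel_encoder_link_check_init(). */
static void link_check_init(struct encoder *encoder,
			    void (*callback)(struct encoder *encoder))
{
	encoder->link_check = callback;
}

/*
 * Deferred execution step; the real code schedules a delayed work item on
 * the driver's workqueue, here it simply runs the callback right away.
 */
static void link_check_queue_work(struct encoder *encoder, int delay_ms)
{
	printf("(re)checking %s after %d ms\n", encoder->name, delay_ms);
	encoder->link_check(encoder);
}

static void dp_link_check(struct encoder *encoder)
{
	printf("%s: verify link status, retrain or schedule a modeset retry\n",
	       encoder->name);
}

int main(void)
{
	struct encoder enc = { .name = "DDI A" };

	link_check_init(&enc, dp_link_check);
	link_check_queue_work(&enc, 2000);	/* e.g. after a passing link training */
	return 0;
}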

View File

@ -9,6 +9,7 @@
#include <linux/dma-fence.h>
#include <linux/dma-resv.h>
#include "gem/i915_gem_object.h"
#include "i915_drv.h"
#include "intel_display.h"
#include "intel_display_types.h"
@ -805,8 +806,23 @@ unsigned int intel_surf_alignment(const struct drm_framebuffer *fb,
{
struct drm_i915_private *dev_priv = to_i915(fb->dev);
if (intel_fb_uses_dpt(fb))
if (intel_fb_uses_dpt(fb)) {
/* AUX_DIST needs only 4K alignment */
if (intel_fb_is_ccs_aux_plane(fb, color_plane))
return 512 * 4096;
/*
* FIXME ADL sees GGTT/DMAR faults with async
* flips unless we align to 16k at least.
* Figure out what's going on here...
*/
if (IS_ALDERLAKE_P(dev_priv) &&
!intel_fb_is_ccs_modifier(fb->modifier) &&
HAS_ASYNC_FLIPS(dev_priv))
return 512 * 16 * 1024;
return 512 * 4096;
}
/* AUX_DIST needs only 4K alignment */
if (intel_fb_is_ccs_aux_plane(fb, color_plane))
@ -1030,7 +1046,7 @@ static u32 intel_compute_aligned_offset(struct drm_i915_private *i915,
int color_plane,
unsigned int pitch,
unsigned int rotation,
u32 alignment)
unsigned int alignment)
{
unsigned int cpp = fb->format->cpp[color_plane];
u32 offset, offset_aligned;
@ -1087,8 +1103,8 @@ u32 intel_plane_compute_aligned_offset(int *x, int *y,
struct drm_i915_private *i915 = to_i915(intel_plane->base.dev);
const struct drm_framebuffer *fb = state->hw.fb;
unsigned int rotation = state->hw.rotation;
int pitch = state->view.color_plane[color_plane].mapping_stride;
u32 alignment;
unsigned int pitch = state->view.color_plane[color_plane].mapping_stride;
unsigned int alignment;
if (intel_plane->id == PLANE_CURSOR)
alignment = intel_cursor_alignment(i915);
@ -1105,8 +1121,7 @@ static int intel_fb_offset_to_xy(int *x, int *y,
int color_plane)
{
struct drm_i915_private *i915 = to_i915(fb->dev);
unsigned int height;
u32 alignment, unused;
unsigned int height, alignment, unused;
if (DISPLAY_VER(i915) >= 12 &&
!intel_fb_needs_pot_stride_remap(to_intel_framebuffer(fb)) &&
@ -1493,8 +1508,8 @@ static u32 calc_plane_remap_info(const struct intel_framebuffer *fb, int color_p
check_array_bounds(i915, view->gtt.remapped.plane, color_plane);
if (view->gtt.remapped.plane_alignment) {
unsigned int aligned_offset = ALIGN(gtt_offset,
view->gtt.remapped.plane_alignment);
u32 aligned_offset = ALIGN(gtt_offset,
view->gtt.remapped.plane_alignment);
size += aligned_offset - gtt_offset;
gtt_offset = aligned_offset;
@ -1780,16 +1795,16 @@ u32 intel_fb_max_stride(struct drm_i915_private *dev_priv,
return 128 * 1024;
}
static u32
static unsigned int
intel_fb_stride_alignment(const struct drm_framebuffer *fb, int color_plane)
{
struct drm_i915_private *dev_priv = to_i915(fb->dev);
u32 tile_width;
unsigned int tile_width;
if (is_surface_linear(fb, color_plane)) {
u32 max_stride = intel_plane_fb_max_stride(dev_priv,
fb->format->format,
fb->modifier);
unsigned int max_stride = intel_plane_fb_max_stride(dev_priv,
fb->format->format,
fb->modifier);
/*
* To make remapping with linear generally feasible
@ -2046,7 +2061,7 @@ int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
drm_helper_mode_fill_fb_struct(&dev_priv->drm, fb, mode_cmd);
for (i = 0; i < fb->format->num_planes; i++) {
u32 stride_alignment;
unsigned int stride_alignment;
if (mode_cmd->handles[i] != mode_cmd->handles[0]) {
drm_dbg_kms(&dev_priv->drm, "bad plane %d handle\n",
@ -2063,7 +2078,7 @@ int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
}
if (intel_fb_is_gen12_ccs_aux_plane(fb, i)) {
int ccs_aux_stride = gen12_ccs_aux_stride(intel_fb, i);
unsigned int ccs_aux_stride = gen12_ccs_aux_stride(intel_fb, i);
if (fb->pitches[i] != ccs_aux_stride) {
drm_dbg_kms(&dev_priv->drm,

View File

@ -11,24 +11,24 @@
#include "gem/i915_gem_object.h"
#include "i915_drv.h"
#include "intel_atomic_plane.h"
#include "intel_display_types.h"
#include "intel_dpt.h"
#include "intel_fb.h"
#include "intel_fb_pin.h"
static struct i915_vma *
intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags,
struct i915_address_space *vm)
intel_fb_pin_to_dpt(const struct drm_framebuffer *fb,
const struct i915_gtt_view *view,
unsigned int alignment,
unsigned long *out_flags,
struct i915_address_space *vm)
{
struct drm_device *dev = fb->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
struct i915_gem_ww_ctx ww;
struct i915_vma *vma;
u32 alignment;
int ret;
/*
@ -41,8 +41,6 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
if (WARN_ON(!i915_gem_object_is_framebuffer(obj)))
return ERR_PTR(-EINVAL);
alignment = 4096 * 512;
atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
for_i915_gem_ww(&ww, ret, true) {
@ -104,20 +102,20 @@ err:
}
struct i915_vma *
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags)
intel_fb_pin_to_ggtt(const struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags)
{
struct drm_device *dev = fb->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
intel_wakeref_t wakeref;
struct i915_gem_ww_ctx ww;
unsigned int alignment;
struct i915_vma *vma;
unsigned int pinctl;
u32 alignment;
int ret;
if (drm_WARN_ON(dev, !i915_gem_object_is_framebuffer(obj)))
@ -228,7 +226,7 @@ err:
return vma;
}
void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags)
void intel_fb_unpin_vma(struct i915_vma *vma, unsigned long flags)
{
if (flags & PLANE_HAS_FENCE)
i915_vma_unpin_fence(vma);
@ -239,18 +237,15 @@ void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags)
int intel_plane_pin_fb(struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
struct drm_framebuffer *fb = plane_state->hw.fb;
const struct intel_framebuffer *fb =
to_intel_framebuffer(plane_state->hw.fb);
struct i915_vma *vma;
bool phys_cursor =
plane->id == PLANE_CURSOR &&
DISPLAY_INFO(dev_priv)->cursor_needs_physical;
if (!intel_fb_uses_dpt(fb)) {
vma = intel_pin_and_fence_fb_obj(fb, phys_cursor,
&plane_state->view.gtt,
intel_plane_uses_fence(plane_state),
&plane_state->flags);
if (!intel_fb_uses_dpt(&fb->base)) {
vma = intel_fb_pin_to_ggtt(&fb->base, intel_plane_needs_physical(plane),
&plane_state->view.gtt,
intel_plane_uses_fence(plane_state),
&plane_state->flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@ -262,22 +257,23 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
* will trigger might_sleep() even if it won't actually sleep,
* which is the case when the fb has already been pinned.
*/
if (phys_cursor)
if (intel_plane_needs_physical(plane))
plane_state->phys_dma_addr =
i915_gem_object_get_dma_address(intel_fb_obj(fb), 0);
i915_gem_object_get_dma_address(intel_fb_obj(&fb->base), 0);
} else {
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
unsigned int alignment = intel_surf_alignment(&fb->base, 0);
vma = intel_dpt_pin(intel_fb->dpt_vm);
vma = intel_dpt_pin_to_ggtt(fb->dpt_vm, alignment / 512);
if (IS_ERR(vma))
return PTR_ERR(vma);
plane_state->ggtt_vma = vma;
vma = intel_pin_fb_obj_dpt(fb, &plane_state->view.gtt, false,
&plane_state->flags, intel_fb->dpt_vm);
vma = intel_fb_pin_to_dpt(&fb->base, &plane_state->view.gtt,
alignment, &plane_state->flags,
fb->dpt_vm);
if (IS_ERR(vma)) {
intel_dpt_unpin(intel_fb->dpt_vm);
intel_dpt_unpin_from_ggtt(fb->dpt_vm);
plane_state->ggtt_vma = NULL;
return PTR_ERR(vma);
}
@ -292,22 +288,21 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
{
struct drm_framebuffer *fb = old_plane_state->hw.fb;
const struct intel_framebuffer *fb =
to_intel_framebuffer(old_plane_state->hw.fb);
struct i915_vma *vma;
if (!intel_fb_uses_dpt(fb)) {
if (!intel_fb_uses_dpt(&fb->base)) {
vma = fetch_and_zero(&old_plane_state->ggtt_vma);
if (vma)
intel_unpin_fb_vma(vma, old_plane_state->flags);
intel_fb_unpin_vma(vma, old_plane_state->flags);
} else {
struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
vma = fetch_and_zero(&old_plane_state->dpt_vma);
if (vma)
intel_unpin_fb_vma(vma, old_plane_state->flags);
intel_fb_unpin_vma(vma, old_plane_state->flags);
vma = fetch_and_zero(&old_plane_state->ggtt_vma);
if (vma)
intel_dpt_unpin(intel_fb->dpt_vm);
intel_dpt_unpin_from_ggtt(fb->dpt_vm);
}
}
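The DPT is now pinned to GGTT with an alignment derived from the framebuffer's surface alignment, alignment / 512. The factor of 512 reflects that a 4 KiB DPT page holds 512 page table entries and so covers 512 framebuffer pages; the 8-byte PTE size behind that count is an assumption made here, not something the diff spells out. A small sketch of the arithmetic for the Alder Lake async-flip case above:

#include <stdio.h>

int main(void)
{
	unsigned int surf_alignment = 512 * 16 * 1024;	/* fb alignment in DPT space */
	unsigned int pte_size = 8;			/* assumed PTE size in bytes */
	unsigned int page_size = 4096;
	unsigned int pages_per_dpt_page = page_size / pte_size;	/* 512 */

	/* Alignment of the DPT object itself in GGTT: 16384 bytes here. */
	printf("DPT GGTT alignment: %u bytes\n",
	       surf_alignment / pages_per_dpt_page);
	return 0;
}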

View File

@ -14,13 +14,13 @@ struct intel_plane_state;
struct i915_gtt_view;
struct i915_vma *
intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags);
intel_fb_pin_to_ggtt(const struct drm_framebuffer *fb,
bool phys_cursor,
const struct i915_gtt_view *view,
bool uses_fence,
unsigned long *out_flags);
void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags);
void intel_fb_unpin_vma(struct i915_vma *vma, unsigned long flags);
int intel_plane_pin_fb(struct intel_plane_state *plane_state);
void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state);

View File

@ -43,11 +43,14 @@
#include <drm/drm_blend.h>
#include <drm/drm_fourcc.h>
#include "gem/i915_gem_stolen.h"
#include "gt/intel_gt_types.h"
#include "i915_drv.h"
#include "i915_reg.h"
#include "i915_utils.h"
#include "i915_vgpu.h"
#include "i915_vma.h"
#include "i9xx_plane_regs.h"
#include "intel_cdclk.h"
#include "intel_de.h"
#include "intel_display_device.h"
@ -326,8 +329,8 @@ static void i8xx_fbc_nuke(struct intel_fbc *fbc)
enum i9xx_plane_id i9xx_plane = fbc_state->plane->i9xx_plane;
struct drm_i915_private *dev_priv = fbc->i915;
intel_de_write_fw(dev_priv, DSPADDR(i9xx_plane),
intel_de_read_fw(dev_priv, DSPADDR(i9xx_plane)));
intel_de_write_fw(dev_priv, DSPADDR(dev_priv, i9xx_plane),
intel_de_read_fw(dev_priv, DSPADDR(dev_priv, i9xx_plane)));
}
static void i8xx_fbc_program_cfb(struct intel_fbc *fbc)
@ -363,8 +366,8 @@ static void i965_fbc_nuke(struct intel_fbc *fbc)
enum i9xx_plane_id i9xx_plane = fbc_state->plane->i9xx_plane;
struct drm_i915_private *dev_priv = fbc->i915;
intel_de_write_fw(dev_priv, DSPSURF(i9xx_plane),
intel_de_read_fw(dev_priv, DSPSURF(i9xx_plane)));
intel_de_write_fw(dev_priv, DSPSURF(dev_priv, i9xx_plane),
intel_de_read_fw(dev_priv, DSPSURF(dev_priv, i9xx_plane)));
}
static const struct intel_fbc_funcs i965_fbc_funcs = {
@ -1234,6 +1237,12 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,
return 0;
}
/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
if (i915_vtd_active(i915) && (IS_SKYLAKE(i915) || IS_BROXTON(i915))) {
plane_state->no_fbc_reason = "VT-d enabled";
return 0;
}
crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) {
@ -1251,7 +1260,8 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,
* Recommendation is to keep this combination disabled
* Bspec: 50422 HSD: 14010260002
*/
if (IS_DISPLAY_VER(i915, 12, 14) && crtc_state->has_psr2) {
if (IS_DISPLAY_VER(i915, 12, 14) && crtc_state->has_sel_update &&
!crtc_state->has_panel_replay) {
plane_state->no_fbc_reason = "PSR2 enabled";
return 0;
}
@ -1259,7 +1269,7 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,
/* Wa_14016291713 */
if ((IS_DISPLAY_VER(i915, 12, 13) ||
IS_DISPLAY_IP_STEP(i915, IP_VER(14, 0), STEP_A0, STEP_C0)) &&
crtc_state->has_psr) {
crtc_state->has_psr && !crtc_state->has_panel_replay) {
plane_state->no_fbc_reason = "PSR1 enabled (Wa_14016291713)";
return 0;
}
@ -1818,19 +1828,6 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *i915)
return 0;
}
static bool need_fbc_vtd_wa(struct drm_i915_private *i915)
{
/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
if (i915_vtd_active(i915) &&
(IS_SKYLAKE(i915) || IS_BROXTON(i915))) {
drm_info(&i915->drm,
"Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
return true;
}
return false;
}
void intel_fbc_add_plane(struct intel_fbc *fbc, struct intel_plane *plane)
{
plane->fbc = fbc;
@ -1876,9 +1873,6 @@ void intel_fbc_init(struct drm_i915_private *i915)
{
enum intel_fbc_id fbc_id;
if (need_fbc_vtd_wa(i915))
DISPLAY_RUNTIME_INFO(i915)->fbc_mask = 0;
i915->display.params.enable_fbc = intel_sanitize_fbc_option(i915);
drm_dbg_kms(&i915->drm, "Sanitized enable_fbc value: %d\n",
i915->display.params.enable_fbc);


@ -44,6 +44,7 @@
#include <drm/drm_gem_framebuffer_helper.h>
#include "gem/i915_gem_mman.h"
#include "gem/i915_gem_object.h"
#include "i915_drv.h"
#include "intel_display_types.h"
@ -146,7 +147,7 @@ static void intel_fbdev_fb_destroy(struct fb_info *info)
* the info->screen_base mmaping. Leaking the VMA is simpler than
* trying to rectify all the possible error paths leading here.
*/
intel_unpin_fb_vma(ifbdev->vma, ifbdev->vma_flags);
intel_fb_unpin_vma(ifbdev->vma, ifbdev->vma_flags);
drm_framebuffer_remove(&ifbdev->fb->base);
drm_client_release(&fb_helper->client);
@ -175,7 +176,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct intel_fbdev *ifbdev = to_intel_fbdev(helper);
struct intel_framebuffer *intel_fb = ifbdev->fb;
struct intel_framebuffer *fb = ifbdev->fb;
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
const struct i915_gtt_view view = {
@ -195,30 +196,30 @@ static int intelfb_create(struct drm_fb_helper *helper,
if (ret)
return ret;
if (intel_fb &&
(sizes->fb_width > intel_fb->base.width ||
sizes->fb_height > intel_fb->base.height)) {
ifbdev->fb = NULL;
if (fb &&
(sizes->fb_width > fb->base.width ||
sizes->fb_height > fb->base.height)) {
drm_dbg_kms(&dev_priv->drm,
"BIOS fb too small (%dx%d), we require (%dx%d),"
" releasing it\n",
intel_fb->base.width, intel_fb->base.height,
fb->base.width, fb->base.height,
sizes->fb_width, sizes->fb_height);
drm_framebuffer_put(&intel_fb->base);
intel_fb = ifbdev->fb = NULL;
drm_framebuffer_put(&fb->base);
fb = NULL;
}
if (!intel_fb || drm_WARN_ON(dev, !intel_fb_obj(&intel_fb->base))) {
struct drm_framebuffer *fb;
if (!fb || drm_WARN_ON(dev, !intel_fb_obj(&fb->base))) {
drm_dbg_kms(&dev_priv->drm,
"no BIOS fb, allocating a new one\n");
fb = intel_fbdev_fb_alloc(helper, sizes);
if (IS_ERR(fb))
return PTR_ERR(fb);
intel_fb = ifbdev->fb = to_intel_framebuffer(fb);
} else {
drm_dbg_kms(&dev_priv->drm, "re-using BIOS fb\n");
prealloc = true;
sizes->fb_width = intel_fb->base.width;
sizes->fb_height = intel_fb->base.height;
sizes->fb_width = fb->base.width;
sizes->fb_height = fb->base.height;
}
wakeref = intel_runtime_pm_get(&dev_priv->runtime_pm);
@ -227,8 +228,8 @@ static int intelfb_create(struct drm_fb_helper *helper,
* This also validates that any existing fb inherited from the
* BIOS is suitable for own access.
*/
vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, false,
&view, false, &flags);
vma = intel_fb_pin_to_ggtt(&fb->base, false,
&view, false, &flags);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto out_unlock;
@ -241,11 +242,11 @@ static int intelfb_create(struct drm_fb_helper *helper,
goto out_unpin;
}
ifbdev->helper.fb = &ifbdev->fb->base;
ifbdev->helper.fb = &fb->base;
info->fbops = &intelfb_ops;
obj = intel_fb_obj(&intel_fb->base);
obj = intel_fb_obj(&fb->base);
ret = intel_fbdev_fb_fill_info(dev_priv, info, obj, vma);
if (ret)
@ -263,8 +264,9 @@ static int intelfb_create(struct drm_fb_helper *helper,
/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
drm_dbg_kms(&dev_priv->drm, "allocated %dx%d fb: 0x%08x\n",
ifbdev->fb->base.width, ifbdev->fb->base.height,
fb->base.width, fb->base.height,
i915_ggtt_offset(vma));
ifbdev->fb = fb;
ifbdev->vma = vma;
ifbdev->vma_flags = flags;
@ -273,7 +275,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
return 0;
out_unpin:
intel_unpin_fb_vma(vma, flags);
intel_fb_unpin_vma(vma, flags);
out_unlock:
intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref);
return ret;
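The intelfb_create() rework above drops the intel_fb/ifbdev->fb juggling but keeps the same reuse-or-allocate policy for the firmware framebuffer: keep it only if it can cover the requested fbdev size, otherwise release it and allocate a fresh one, and only assign ifbdev->fb once setup has succeeded. A condensed, freestanding sketch of that decision (placeholder types and helper names, not the real fbdev structures):

#include <stdbool.h>
#include <stdio.h>

struct fb { int width, height; };

/* Stand-in for intel_fbdev_fb_alloc(). */
static struct fb *alloc_fb(int w, int h)
{
	static struct fb newfb;

	newfb.width = w;
	newfb.height = h;
	return &newfb;
}

/*
 * Reuse the firmware-provided framebuffer only if it is at least as large
 * as the requested fbdev surface; otherwise drop it and allocate a new one.
 * When reusing, the request is clamped to the existing fb's size, as
 * intelfb_create() does for the prealloc case.
 */
static struct fb *pick_fb(struct fb *bios_fb, int *req_w, int *req_h)
{
	if (bios_fb && (*req_w > bios_fb->width || *req_h > bios_fb->height)) {
		printf("BIOS fb too small (%dx%d), need (%dx%d), releasing it\n",
		       bios_fb->width, bios_fb->height, *req_w, *req_h);
		bios_fb = NULL;		/* drm_framebuffer_put() in the driver */
	}

	if (!bios_fb)
		return alloc_fb(*req_w, *req_h);

	*req_w = bios_fb->width;
	*req_h = bios_fb->height;
	return bios_fb;
}

int main(void)
{
	struct fb bios = { .width = 1024, .height = 768 };
	int w = 1920, h = 1080;
	struct fb *fb = pick_fb(&bios, &w, &h);

	printf("using %dx%d fb\n", fb->width, fb->height);
	return 0;
}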


@ -11,8 +11,8 @@
#include "intel_display_types.h"
#include "intel_fbdev_fb.h"
struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
struct intel_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct drm_framebuffer *fb;
struct drm_device *dev = helper->dev;
@ -63,7 +63,7 @@ struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
fb = intel_framebuffer_create(obj, &mode_cmd);
i915_gem_object_put(obj);
return fb;
return to_intel_framebuffer(fb);
}
int intel_fbdev_fb_fill_info(struct drm_i915_private *i915, struct fb_info *info,


@ -13,8 +13,8 @@ struct drm_i915_private;
struct fb_info;
struct i915_vma;
struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes);
struct intel_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes);
int intel_fbdev_fb_fill_info(struct drm_i915_private *i915, struct fb_info *info,
struct drm_i915_gem_object *obj, struct i915_vma *vma);


@ -34,7 +34,8 @@ static void assert_fdi_tx(struct drm_i915_private *dev_priv,
* so pipe->transcoder cast is fine here.
*/
enum transcoder cpu_transcoder = (enum transcoder)pipe;
cur_state = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE;
cur_state = intel_de_read(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE;
} else {
cur_state = intel_de_read(dev_priv, FDI_TX_CTL(pipe)) & FDI_TX_ENABLE;
}
@ -514,7 +515,7 @@ static void ilk_fdi_link_train(struct intel_crtc *crtc,
* detection works.
*/
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
intel_de_read(dev_priv, PIPE_DATA_M1(dev_priv, pipe)) & TU_SIZE_MASK);
/* FDI needs bits from pipe first */
assert_transcoder_enabled(dev_priv, crtc_state->cpu_transcoder);
@ -616,7 +617,7 @@ static void gen6_fdi_link_train(struct intel_crtc *crtc,
* detection works.
*/
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
intel_de_read(dev_priv, PIPE_DATA_M1(dev_priv, pipe)) & TU_SIZE_MASK);
/* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit
for train result */
@ -754,7 +755,7 @@ static void ivb_manual_fdi_link_train(struct intel_crtc *crtc,
* detection works.
*/
intel_de_write(dev_priv, FDI_RX_TUSIZE1(pipe),
intel_de_read(dev_priv, PIPE_DATA_M1(pipe)) & TU_SIZE_MASK);
intel_de_read(dev_priv, PIPE_DATA_M1(dev_priv, pipe)) & TU_SIZE_MASK);
/* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit
for train result */
@ -1034,7 +1035,7 @@ void ilk_fdi_pll_enable(const struct intel_crtc_state *crtc_state)
temp = intel_de_read(dev_priv, reg);
temp &= ~(FDI_DP_PORT_WIDTH_MASK | (0x7 << 16));
temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes);
temp |= (intel_de_read(dev_priv, TRANSCONF(pipe)) & TRANSCONF_BPC_MASK) << 11;
temp |= (intel_de_read(dev_priv, TRANSCONF(dev_priv, pipe)) & TRANSCONF_BPC_MASK) << 11;
intel_de_write(dev_priv, reg, temp | FDI_RX_PLL_ENABLE);
intel_de_posting_read(dev_priv, reg);
@ -1090,7 +1091,7 @@ void ilk_fdi_disable(struct intel_crtc *crtc)
reg = FDI_RX_CTL(pipe);
temp = intel_de_read(dev_priv, reg);
temp &= ~(0x7 << 16);
temp |= (intel_de_read(dev_priv, TRANSCONF(pipe)) & TRANSCONF_BPC_MASK) << 11;
temp |= (intel_de_read(dev_priv, TRANSCONF(dev_priv, pipe)) & TRANSCONF_BPC_MASK) << 11;
intel_de_write(dev_priv, reg, temp & ~FDI_RX_ENABLE);
intel_de_posting_read(dev_priv, reg);
@ -1116,7 +1117,7 @@ void ilk_fdi_disable(struct intel_crtc *crtc)
}
/* BPC in FDI rx is consistent with that in TRANSCONF */
temp &= ~(0x07 << 16);
temp |= (intel_de_read(dev_priv, TRANSCONF(pipe)) & TRANSCONF_BPC_MASK) << 11;
temp |= (intel_de_read(dev_priv, TRANSCONF(dev_priv, pipe)) & TRANSCONF_BPC_MASK) << 11;
intel_de_write(dev_priv, reg, temp);
intel_de_posting_read(dev_priv, reg);


@ -94,7 +94,7 @@ static bool cpt_can_enable_serr_int(struct drm_device *dev)
static void i9xx_check_fifo_underruns(struct intel_crtc *crtc)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
i915_reg_t reg = PIPESTAT(crtc->pipe);
i915_reg_t reg = PIPESTAT(dev_priv, crtc->pipe);
u32 enable_mask;
lockdep_assert_held(&dev_priv->irq_lock);
@ -115,7 +115,7 @@ static void i9xx_set_fifo_underrun_reporting(struct drm_device *dev,
bool enable, bool old)
{
struct drm_i915_private *dev_priv = to_i915(dev);
i915_reg_t reg = PIPESTAT(pipe);
i915_reg_t reg = PIPESTAT(dev_priv, pipe);
lockdep_assert_held(&dev_priv->irq_lock);
@ -209,7 +209,8 @@ static void bdw_set_fifo_underrun_reporting(struct drm_device *dev,
if (enable) {
if (DISPLAY_VER(dev_priv) >= 11)
intel_de_write(dev_priv, ICL_PIPESTATUS(pipe),
intel_de_write(dev_priv,
ICL_PIPESTATUS(dev_priv, pipe),
icl_pipe_status_underrun_mask(dev_priv));
bdw_enable_pipe_irq(dev_priv, pipe, mask);
@ -418,9 +419,11 @@ void intel_cpu_fifo_underrun_irq_handler(struct drm_i915_private *dev_priv,
* the underrun was caused by the downstream port.
*/
if (DISPLAY_VER(dev_priv) >= 11) {
underruns = intel_de_read(dev_priv, ICL_PIPESTATUS(pipe)) &
underruns = intel_de_read(dev_priv,
ICL_PIPESTATUS(dev_priv, pipe)) &
icl_pipe_status_underrun_mask(dev_priv);
intel_de_write(dev_priv, ICL_PIPESTATUS(pipe), underruns);
intel_de_write(dev_priv, ICL_PIPESTATUS(dev_priv, pipe),
underruns);
}
if (intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false)) {


@ -65,6 +65,7 @@
#include "intel_fbc.h"
#include "intel_frontbuffer.h"
#include "intel_psr.h"
#include "intel_tdf.h"
/**
* frontbuffer_flush - flush frontbuffer
@ -93,6 +94,7 @@ static void frontbuffer_flush(struct drm_i915_private *i915,
trace_intel_frontbuffer_flush(i915, frontbuffer_bits, origin);
might_sleep();
intel_td_flush(i915);
intel_drrs_flush(i915, frontbuffer_bits);
intel_psr_flush(i915, frontbuffer_bits, origin);
intel_fbc_flush(i915, frontbuffer_bits, origin);


@ -13,7 +13,7 @@
#include <linux/random.h>
#include <drm/display/drm_hdcp_helper.h>
#include <drm/i915_component.h>
#include <drm/intel/i915_component.h>
#include "i915_drv.h"
#include "i915_reg.h"
@ -30,6 +30,29 @@
#define KEY_LOAD_TRIES 5
#define HDCP2_LC_RETRY_CNT 3
/* WA: 16022217614 */
static void
intel_hdcp_disable_hdcp_line_rekeying(struct intel_encoder *encoder,
struct intel_hdcp *hdcp)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
/* Here we assume HDMI is in TMDS mode of operation */
if (encoder->type != INTEL_OUTPUT_HDMI)
return;
if (DISPLAY_VER(dev_priv) >= 14) {
if (IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 0), STEP_D0, STEP_FOREVER))
intel_de_rmw(dev_priv, MTL_CHICKEN_TRANS(hdcp->cpu_transcoder),
0, HDCP_LINE_REKEY_DISABLE);
else if (IS_DISPLAY_IP_STEP(dev_priv, IP_VER(14, 1), STEP_B0, STEP_FOREVER) ||
IS_DISPLAY_IP_STEP(dev_priv, IP_VER(20, 0), STEP_B0, STEP_FOREVER))
intel_de_rmw(dev_priv,
TRANS_DDI_FUNC_CTL(dev_priv, hdcp->cpu_transcoder),
0, TRANS_DDI_HDCP_LINE_REKEY_DISABLE);
}
}
static int intel_conn_to_vcpi(struct intel_atomic_state *state,
struct intel_connector *connector)
{
@ -2005,6 +2028,8 @@ static int _intel_hdcp2_enable(struct intel_atomic_state *state,
connector->base.base.id, connector->base.name,
hdcp->content_type);
intel_hdcp_disable_hdcp_line_rekeying(connector->encoder, hdcp);
ret = hdcp2_authenticate_and_encrypt(state, connector);
if (ret) {
drm_dbg_kms(&i915->drm, "HDCP2 Type%d Enabling Failed. (%d)\n",


@ -3,7 +3,7 @@
* Copyright 2023, Intel Corporation.
*/
#include <drm/i915_hdcp_interface.h>
#include <drm/intel/i915_hdcp_interface.h>
#include "gem/i915_gem_region.h"
#include "gt/intel_gt.h"


@ -4,7 +4,7 @@
*/
#include <linux/err.h>
#include <drm/i915_hdcp_interface.h>
#include <drm/intel/i915_hdcp_interface.h>
#include "i915_drv.h"
#include "intel_hdcp_gsc_message.h"


@ -38,7 +38,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/intel_lpe_audio.h>
#include <drm/intel/intel_lpe_audio.h>
#include "g4x_hdmi.h"
#include "i915_drv.h"
@ -83,7 +83,7 @@ assert_hdmi_transcoder_func_disabled(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder)
{
drm_WARN(&dev_priv->drm,
intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)) &
intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(dev_priv, cpu_transcoder)) &
TRANS_DDI_FUNC_ENABLE,
"HDMI transcoder function enabled, expecting disabled\n");
}
@ -165,21 +165,21 @@ hsw_dip_data_reg(struct drm_i915_private *dev_priv,
{
switch (type) {
case HDMI_PACKET_TYPE_GAMUT_METADATA:
return HSW_TVIDEO_DIP_GMP_DATA(cpu_transcoder, i);
return HSW_TVIDEO_DIP_GMP_DATA(dev_priv, cpu_transcoder, i);
case DP_SDP_VSC:
return HSW_TVIDEO_DIP_VSC_DATA(cpu_transcoder, i);
return HSW_TVIDEO_DIP_VSC_DATA(dev_priv, cpu_transcoder, i);
case DP_SDP_ADAPTIVE_SYNC:
return ADL_TVIDEO_DIP_AS_SDP_DATA(cpu_transcoder, i);
return ADL_TVIDEO_DIP_AS_SDP_DATA(dev_priv, cpu_transcoder, i);
case DP_SDP_PPS:
return ICL_VIDEO_DIP_PPS_DATA(cpu_transcoder, i);
return ICL_VIDEO_DIP_PPS_DATA(dev_priv, cpu_transcoder, i);
case HDMI_INFOFRAME_TYPE_AVI:
return HSW_TVIDEO_DIP_AVI_DATA(cpu_transcoder, i);
return HSW_TVIDEO_DIP_AVI_DATA(dev_priv, cpu_transcoder, i);
case HDMI_INFOFRAME_TYPE_SPD:
return HSW_TVIDEO_DIP_SPD_DATA(cpu_transcoder, i);
return HSW_TVIDEO_DIP_SPD_DATA(dev_priv, cpu_transcoder, i);
case HDMI_INFOFRAME_TYPE_VENDOR:
return HSW_TVIDEO_DIP_VS_DATA(cpu_transcoder, i);
return HSW_TVIDEO_DIP_VS_DATA(dev_priv, cpu_transcoder, i);
case HDMI_INFOFRAME_TYPE_DRM:
return GLK_TVIDEO_DIP_DRM_DATA(cpu_transcoder, i);
return GLK_TVIDEO_DIP_DRM_DATA(dev_priv, cpu_transcoder, i);
default:
MISSING_CASE(type);
return INVALID_MMIO_REG;
@ -507,7 +507,7 @@ void hsw_write_infoframe(struct intel_encoder *encoder,
const u32 *data = frame;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
i915_reg_t ctl_reg = HSW_TVIDEO_DIP_CTL(cpu_transcoder);
i915_reg_t ctl_reg = HSW_TVIDEO_DIP_CTL(dev_priv, cpu_transcoder);
int data_size;
int i;
u32 val = intel_de_read(dev_priv, ctl_reg);
@ -532,7 +532,8 @@ void hsw_write_infoframe(struct intel_encoder *encoder,
0);
/* Wa_14013475917 */
if (!(IS_DISPLAY_VER(dev_priv, 13, 14) && crtc_state->has_psr && type == DP_SDP_VSC))
if (!(IS_DISPLAY_VER(dev_priv, 13, 14) && crtc_state->has_psr &&
!crtc_state->has_panel_replay && type == DP_SDP_VSC))
val |= hsw_infoframe_enable(type);
if (type == DP_SDP_VSC)
@ -561,7 +562,7 @@ static u32 hsw_infoframes_enabled(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
u32 val = intel_de_read(dev_priv,
HSW_TVIDEO_DIP_CTL(pipe_config->cpu_transcoder));
HSW_TVIDEO_DIP_CTL(dev_priv, pipe_config->cpu_transcoder));
u32 mask;
mask = (VIDEO_DIP_ENABLE_VSC_HSW | VIDEO_DIP_ENABLE_AVI_HSW |
@ -985,7 +986,7 @@ static bool intel_hdmi_set_gcp_infoframe(struct intel_encoder *encoder,
return false;
if (HAS_DDI(dev_priv))
reg = HSW_TVIDEO_DIP_GCP(crtc_state->cpu_transcoder);
reg = HSW_TVIDEO_DIP_GCP(dev_priv, crtc_state->cpu_transcoder);
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
reg = VLV_TVIDEO_DIP_GCP(crtc->pipe);
else if (HAS_PCH_SPLIT(dev_priv))
@ -1010,7 +1011,7 @@ void intel_hdmi_read_gcp_infoframe(struct intel_encoder *encoder,
return;
if (HAS_DDI(dev_priv))
reg = HSW_TVIDEO_DIP_GCP(crtc_state->cpu_transcoder);
reg = HSW_TVIDEO_DIP_GCP(dev_priv, crtc_state->cpu_transcoder);
else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
reg = VLV_TVIDEO_DIP_GCP(crtc->pipe);
else if (HAS_PCH_SPLIT(dev_priv))
@ -1215,7 +1216,8 @@ static void hsw_set_infoframes(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
i915_reg_t reg = HSW_TVIDEO_DIP_CTL(dev_priv,
crtc_state->cpu_transcoder);
u32 val = intel_de_read(dev_priv, reg);
assert_hdmi_transcoder_func_disabled(dev_priv,
@ -1474,7 +1476,8 @@ static int kbl_repositioning_enc_en_signal(struct intel_connector *connector,
int ret;
for (;;) {
scanline = intel_de_read(dev_priv, PIPEDSL(crtc->pipe));
scanline = intel_de_read(dev_priv,
PIPEDSL(dev_priv, crtc->pipe));
if (scanline > 100 && scanline < 200)
break;
usleep_range(25, 50);
@ -1783,7 +1786,9 @@ static int intel_hdmi_source_max_tmds_clock(struct intel_encoder *encoder)
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
int max_tmds_clock, vbt_max_tmds_clock;
if (DISPLAY_VER(dev_priv) >= 10)
if (DISPLAY_VER(dev_priv) >= 13 || IS_ALDERLAKE_S(dev_priv))
max_tmds_clock = 600000;
else if (DISPLAY_VER(dev_priv) >= 10)
max_tmds_clock = 594000;
else if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL(dev_priv))
max_tmds_clock = 300000;


@ -186,7 +186,8 @@ void i915_hotplug_interrupt_update_locked(struct drm_i915_private *dev_priv,
lockdep_assert_held(&dev_priv->irq_lock);
drm_WARN_ON(&dev_priv->drm, bits & ~mask);
intel_uncore_rmw(&dev_priv->uncore, PORT_HOTPLUG_EN, mask, bits);
intel_uncore_rmw(&dev_priv->uncore, PORT_HOTPLUG_EN(dev_priv), mask,
bits);
}
/**
@ -434,18 +435,21 @@ u32 i9xx_hpd_irq_ack(struct drm_i915_private *dev_priv)
* bits can itself generate a new hotplug interrupt :(
*/
for (i = 0; i < 10; i++) {
u32 tmp = intel_uncore_read(&dev_priv->uncore, PORT_HOTPLUG_STAT) & hotplug_status_mask;
u32 tmp = intel_uncore_read(&dev_priv->uncore,
PORT_HOTPLUG_STAT(dev_priv)) & hotplug_status_mask;
if (tmp == 0)
return hotplug_status;
hotplug_status |= tmp;
intel_uncore_write(&dev_priv->uncore, PORT_HOTPLUG_STAT, hotplug_status);
intel_uncore_write(&dev_priv->uncore,
PORT_HOTPLUG_STAT(dev_priv),
hotplug_status);
}
drm_WARN_ONCE(&dev_priv->drm, 1,
"PORT_HOTPLUG_STAT did not clear (0x%08x)\n",
intel_uncore_read(&dev_priv->uncore, PORT_HOTPLUG_STAT));
intel_uncore_read(&dev_priv->uncore, PORT_HOTPLUG_STAT(dev_priv)));
return hotplug_status;
}
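The reworked i9xx_hpd_irq_ack() keeps its retry loop: PORT_HOTPLUG_STAT is read, the observed bits are written back to clear them, and the loop repeats (up to 10 times) because clearing the latched bits can itself raise a new hotplug interrupt. A generic, self-contained version of that read-accumulate-clear pattern, with the register access abstracted behind callbacks (the callbacks and fake register are illustrative only):

#include <stdio.h>

static unsigned int fake_reg = 0x5;	/* two latched hotplug bits */

static unsigned int fake_read(void) { return fake_reg; }
static void fake_write(unsigned int val) { fake_reg &= ~val; }

/*
 * Read-and-clear a latching status register that can re-latch while being
 * cleared, bounding the retries; shaped like the i9xx_hpd_irq_ack() loop.
 */
static unsigned int ack_latching_status(unsigned int (*read)(void),
					void (*write)(unsigned int),
					unsigned int mask)
{
	unsigned int status = 0;
	int i;

	for (i = 0; i < 10; i++) {
		unsigned int tmp = read() & mask;

		if (tmp == 0)
			return status;

		status |= tmp;
		write(status);	/* writing back the seen bits clears them */
	}

	/* Bits kept re-latching; report what was accumulated anyway. */
	return status;
}

int main(void)
{
	printf("acked 0x%x\n", ack_latching_status(fake_read, fake_write, 0xff));
	return 0;
}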


@ -68,7 +68,7 @@
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <drm/intel_lpe_audio.h>
#include <drm/intel/intel_lpe_audio.h>
#include "i915_drv.h"
#include "i915_irq.h"


@ -641,7 +641,7 @@ u32 lspcon_infoframes_enabled(struct intel_encoder *encoder,
if (lspcon->hdr_supported) {
tmp = intel_de_read(dev_priv,
HSW_TVIDEO_DIP_CTL(pipe_config->cpu_transcoder));
HSW_TVIDEO_DIP_CTL(dev_priv, pipe_config->cpu_transcoder));
mask = VIDEO_DIP_ENABLE_GMP_HSW;
if (tmp & mask)


@ -148,7 +148,7 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
/* gen2/3 store dither state in pfit control, needs to match */
if (DISPLAY_VER(dev_priv) < 4) {
tmp = intel_de_read(dev_priv, PFIT_CONTROL);
tmp = intel_de_read(dev_priv, PFIT_CONTROL(dev_priv));
crtc_state->gmch_pfit.control |= tmp & PFIT_PANEL_8TO6_DITHER_ENABLE;
}
@ -161,18 +161,19 @@ static void intel_lvds_pps_get_hw_state(struct drm_i915_private *dev_priv,
{
u32 val;
pps->powerdown_on_reset = intel_de_read(dev_priv, PP_CONTROL(0)) & PANEL_POWER_RESET;
pps->powerdown_on_reset = intel_de_read(dev_priv,
PP_CONTROL(dev_priv, 0)) & PANEL_POWER_RESET;
val = intel_de_read(dev_priv, PP_ON_DELAYS(0));
val = intel_de_read(dev_priv, PP_ON_DELAYS(dev_priv, 0));
pps->port = REG_FIELD_GET(PANEL_PORT_SELECT_MASK, val);
pps->t1_t2 = REG_FIELD_GET(PANEL_POWER_UP_DELAY_MASK, val);
pps->t5 = REG_FIELD_GET(PANEL_LIGHT_ON_DELAY_MASK, val);
val = intel_de_read(dev_priv, PP_OFF_DELAYS(0));
val = intel_de_read(dev_priv, PP_OFF_DELAYS(dev_priv, 0));
pps->t3 = REG_FIELD_GET(PANEL_POWER_DOWN_DELAY_MASK, val);
pps->tx = REG_FIELD_GET(PANEL_LIGHT_OFF_DELAY_MASK, val);
val = intel_de_read(dev_priv, PP_DIVISOR(0));
val = intel_de_read(dev_priv, PP_DIVISOR(dev_priv, 0));
pps->divider = REG_FIELD_GET(PP_REFERENCE_DIVIDER_MASK, val);
val = REG_FIELD_GET(PANEL_POWER_CYCLE_DELAY_MASK, val);
/*
@ -209,23 +210,23 @@ static void intel_lvds_pps_init_hw(struct drm_i915_private *dev_priv,
{
u32 val;
val = intel_de_read(dev_priv, PP_CONTROL(0));
val = intel_de_read(dev_priv, PP_CONTROL(dev_priv, 0));
drm_WARN_ON(&dev_priv->drm,
(val & PANEL_UNLOCK_MASK) != PANEL_UNLOCK_REGS);
if (pps->powerdown_on_reset)
val |= PANEL_POWER_RESET;
intel_de_write(dev_priv, PP_CONTROL(0), val);
intel_de_write(dev_priv, PP_CONTROL(dev_priv, 0), val);
intel_de_write(dev_priv, PP_ON_DELAYS(0),
intel_de_write(dev_priv, PP_ON_DELAYS(dev_priv, 0),
REG_FIELD_PREP(PANEL_PORT_SELECT_MASK, pps->port) |
REG_FIELD_PREP(PANEL_POWER_UP_DELAY_MASK, pps->t1_t2) |
REG_FIELD_PREP(PANEL_LIGHT_ON_DELAY_MASK, pps->t5));
intel_de_write(dev_priv, PP_OFF_DELAYS(0),
intel_de_write(dev_priv, PP_OFF_DELAYS(dev_priv, 0),
REG_FIELD_PREP(PANEL_POWER_DOWN_DELAY_MASK, pps->t3) |
REG_FIELD_PREP(PANEL_LIGHT_OFF_DELAY_MASK, pps->tx));
intel_de_write(dev_priv, PP_DIVISOR(0),
intel_de_write(dev_priv, PP_DIVISOR(dev_priv, 0),
REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, pps->divider) |
REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(pps->t4, 1000) + 1));
}
@ -321,10 +322,10 @@ static void intel_enable_lvds(struct intel_atomic_state *state,
intel_de_rmw(dev_priv, lvds_encoder->reg, 0, LVDS_PORT_EN);
intel_de_rmw(dev_priv, PP_CONTROL(0), 0, PANEL_POWER_ON);
intel_de_rmw(dev_priv, PP_CONTROL(dev_priv, 0), 0, PANEL_POWER_ON);
intel_de_posting_read(dev_priv, lvds_encoder->reg);
if (intel_de_wait_for_set(dev_priv, PP_STATUS(0), PP_ON, 5000))
if (intel_de_wait_for_set(dev_priv, PP_STATUS(dev_priv, 0), PP_ON, 5000))
drm_err(&dev_priv->drm,
"timed out waiting for panel to power on\n");
@ -339,8 +340,8 @@ static void intel_disable_lvds(struct intel_atomic_state *state,
struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(encoder);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
intel_de_rmw(dev_priv, PP_CONTROL(0), PANEL_POWER_ON, 0);
if (intel_de_wait_for_clear(dev_priv, PP_STATUS(0), PP_ON, 1000))
intel_de_rmw(dev_priv, PP_CONTROL(dev_priv, 0), PANEL_POWER_ON, 0);
if (intel_de_wait_for_clear(dev_priv, PP_STATUS(dev_priv, 0), PP_ON, 1000))
drm_err(&dev_priv->drm,
"timed out waiting for panel to power off\n");
@ -379,7 +380,7 @@ static void intel_lvds_shutdown(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
if (intel_de_wait_for_clear(dev_priv, PP_STATUS(0), PP_CYCLE_DELAY_ACTIVE, 5000))
if (intel_de_wait_for_clear(dev_priv, PP_STATUS(dev_priv, 0), PP_CYCLE_DELAY_ACTIVE, 5000))
drm_err(&dev_priv->drm,
"timed out waiting for panel power cycle delay\n");
}
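Both intel_lvds_pps_get_hw_state() and intel_lvds_pps_init_hw() above lean on REG_FIELD_GET()/REG_FIELD_PREP() to unpack and repack delay fields in the PP_* registers. Those helpers are shift-and-mask arithmetic; a standalone re-implementation of the idea (not the kernel's macros, which also add compile-time checks), using the two PP_ON_DELAYS fields whose layout appears in the intel_pps_regs.h hunk further down:

#include <stdint.h>
#include <stdio.h>

/* Contiguous bitmask covering bits high..low, like REG_GENMASK(). */
#define GENMASK32(high, low) \
	((~0u >> (31 - (high))) & ~((1u << (low)) - 1u))

/* Pack a value into a field / pull it back out, like REG_FIELD_PREP/GET. */
#define FIELD_PREP32(mask, val) (((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET32(mask, reg)  (((uint32_t)(reg) & (mask)) >> __builtin_ctz(mask))

/* PP_ON_DELAYS fields, per the header diff below. */
#define PANEL_PORT_SELECT_MASK    GENMASK32(31, 30)
#define PANEL_LIGHT_ON_DELAY_MASK GENMASK32(12, 0)

int main(void)
{
	/* Pack port=1 and t5=500 the way intel_lvds_pps_init_hw() would... */
	uint32_t reg = FIELD_PREP32(PANEL_PORT_SELECT_MASK, 1) |
		       FIELD_PREP32(PANEL_LIGHT_ON_DELAY_MASK, 500);

	/* ...and read them back the way intel_lvds_pps_get_hw_state() does. */
	printf("port=%u t5=%u\n",
	       (unsigned int)FIELD_GET32(PANEL_PORT_SELECT_MASK, reg),
	       (unsigned int)FIELD_GET32(PANEL_LIGHT_ON_DELAY_MASK, reg));
	return 0;
}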


@ -68,7 +68,7 @@ static void intel_crtc_disable_noatomic_begin(struct intel_crtc *crtc,
/* Everything's already locked, -EDEADLK can't happen. */
for_each_intel_crtc_in_pipe_mask(&i915->drm, temp_crtc,
BIT(pipe) |
intel_crtc_bigjoiner_slave_pipes(crtc_state)) {
intel_crtc_joiner_secondary_pipes(crtc_state)) {
struct intel_crtc_state *temp_crtc_state =
intel_atomic_get_crtc_state(state, temp_crtc);
int ret;
@ -189,7 +189,7 @@ static void intel_crtc_disable_noatomic_complete(struct intel_crtc *crtc)
/*
* Return all the pipes using a transcoder in @transcoder_mask.
* For bigjoiner configs return only the bigjoiner master.
* For joiner configs return only the joiner primary.
*/
static u8 get_transcoder_pipes(struct drm_i915_private *i915,
u8 transcoder_mask)
@ -204,7 +204,7 @@ static u8 get_transcoder_pipes(struct drm_i915_private *i915,
if (temp_crtc_state->cpu_transcoder == INVALID_TRANSCODER)
continue;
if (intel_crtc_is_bigjoiner_slave(temp_crtc_state))
if (intel_crtc_is_joiner_secondary(temp_crtc_state))
continue;
if (transcoder_mask & BIT(temp_crtc_state->cpu_transcoder))
@ -216,7 +216,7 @@ static u8 get_transcoder_pipes(struct drm_i915_private *i915,
/*
* Return the port sync master and slave pipes linked to @crtc.
* For bigjoiner configs return only the bigjoiner master pipes.
* For joiner configs return only the joiner primary pipes.
*/
static void get_portsync_pipes(struct intel_crtc *crtc,
u8 *master_pipe_mask, u8 *slave_pipes_mask)
@ -248,16 +248,16 @@ static void get_portsync_pipes(struct intel_crtc *crtc,
*slave_pipes_mask = get_transcoder_pipes(i915, master_crtc_state->sync_mode_slaves_mask);
}
static u8 get_bigjoiner_slave_pipes(struct drm_i915_private *i915, u8 master_pipes_mask)
static u8 get_joiner_secondary_pipes(struct drm_i915_private *i915, u8 primary_pipes_mask)
{
struct intel_crtc *master_crtc;
struct intel_crtc *primary_crtc;
u8 pipes = 0;
for_each_intel_crtc_in_pipe_mask(&i915->drm, master_crtc, master_pipes_mask) {
struct intel_crtc_state *master_crtc_state =
to_intel_crtc_state(master_crtc->base.state);
for_each_intel_crtc_in_pipe_mask(&i915->drm, primary_crtc, primary_pipes_mask) {
struct intel_crtc_state *primary_crtc_state =
to_intel_crtc_state(primary_crtc->base.state);
pipes |= intel_crtc_bigjoiner_slave_pipes(master_crtc_state);
pipes |= intel_crtc_joiner_secondary_pipes(primary_crtc_state);
}
return pipes;
@ -269,21 +269,21 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
u8 portsync_master_mask;
u8 portsync_slaves_mask;
u8 bigjoiner_slaves_mask;
u8 joiner_secondaries_mask;
struct intel_crtc *temp_crtc;
/* TODO: Add support for MST */
get_portsync_pipes(crtc, &portsync_master_mask, &portsync_slaves_mask);
bigjoiner_slaves_mask = get_bigjoiner_slave_pipes(i915,
portsync_master_mask |
portsync_slaves_mask);
joiner_secondaries_mask = get_joiner_secondary_pipes(i915,
portsync_master_mask |
portsync_slaves_mask);
drm_WARN_ON(&i915->drm,
portsync_master_mask & portsync_slaves_mask ||
portsync_master_mask & bigjoiner_slaves_mask ||
portsync_slaves_mask & bigjoiner_slaves_mask);
portsync_master_mask & joiner_secondaries_mask ||
portsync_slaves_mask & joiner_secondaries_mask);
for_each_intel_crtc_in_pipe_mask(&i915->drm, temp_crtc, bigjoiner_slaves_mask)
for_each_intel_crtc_in_pipe_mask(&i915->drm, temp_crtc, joiner_secondaries_mask)
intel_crtc_disable_noatomic_begin(temp_crtc, ctx);
for_each_intel_crtc_in_pipe_mask(&i915->drm, temp_crtc, portsync_slaves_mask)
@ -293,7 +293,7 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
intel_crtc_disable_noatomic_begin(temp_crtc, ctx);
for_each_intel_crtc_in_pipe_mask(&i915->drm, temp_crtc,
bigjoiner_slaves_mask |
joiner_secondaries_mask |
portsync_slaves_mask |
portsync_master_mask)
intel_crtc_disable_noatomic_complete(temp_crtc);
@ -326,7 +326,7 @@ static void intel_modeset_update_connector_atomic_state(struct drm_i915_private
static void intel_crtc_copy_hw_to_uapi_state(struct intel_crtc_state *crtc_state)
{
if (intel_crtc_is_bigjoiner_slave(crtc_state))
if (intel_crtc_is_joiner_secondary(crtc_state))
return;
crtc_state->uapi.enable = crtc_state->hw.enable;
@ -474,7 +474,7 @@ static bool intel_sanitize_crtc(struct intel_crtc *crtc,
}
if (!crtc_state->hw.active ||
intel_crtc_is_bigjoiner_slave(crtc_state))
intel_crtc_is_joiner_secondary(crtc_state))
return false;
needs_link_reset = intel_crtc_needs_link_reset(crtc);
@ -728,19 +728,19 @@ static void intel_modeset_readout_hw_state(struct drm_i915_private *i915)
encoder->base.crtc = &crtc->base;
intel_encoder_get_config(encoder, crtc_state);
/* read out to slave crtc as well for bigjoiner */
if (crtc_state->bigjoiner_pipes) {
struct intel_crtc *slave_crtc;
/* read out to secondary crtc as well for joiner */
if (crtc_state->joiner_pipes) {
struct intel_crtc *secondary_crtc;
/* encoder should read be linked to bigjoiner master */
WARN_ON(intel_crtc_is_bigjoiner_slave(crtc_state));
/* encoder should read be linked to joiner primary */
WARN_ON(intel_crtc_is_joiner_secondary(crtc_state));
for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc,
intel_crtc_bigjoiner_slave_pipes(crtc_state)) {
struct intel_crtc_state *slave_crtc_state;
for_each_intel_crtc_in_pipe_mask(&i915->drm, secondary_crtc,
intel_crtc_joiner_secondary_pipes(crtc_state)) {
struct intel_crtc_state *secondary_crtc_state;
slave_crtc_state = to_intel_crtc_state(slave_crtc->base.state);
intel_encoder_get_config(encoder, slave_crtc_state);
secondary_crtc_state = to_intel_crtc_state(secondary_crtc->base.state);
intel_encoder_get_config(encoder, secondary_crtc_state);
}
}
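With the bigjoiner-to-joiner rename, intel_crtc_disable_noatomic() still tears pipes down in dependency order: joiner secondaries first, then port-sync slaves, then port-sync masters, and only then the _complete step on the whole set, with a sanity check that the three masks never overlap. A compact sketch of that ordering using plain pipe bitmasks and placeholder callbacks:

#include <assert.h>
#include <stdio.h>

/* Stand-ins for intel_crtc_disable_noatomic_begin()/_complete(). */
static void disable_begin(int pipe)    { printf("begin pipe %d\n", pipe); }
static void disable_complete(int pipe) { printf("complete pipe %d\n", pipe); }

static void for_each_pipe_in_mask(unsigned int mask, void (*fn)(int))
{
	for (int pipe = 0; mask; pipe++, mask >>= 1)
		if (mask & 1)
			fn(pipe);
}

/* Tear-down order mirroring the hunks above. */
static void disable_noatomic(unsigned int joiner_secondaries,
			     unsigned int portsync_slaves,
			     unsigned int portsync_masters)
{
	/* The three roles must not share pipes. */
	assert(!(portsync_masters & portsync_slaves) &&
	       !(portsync_masters & joiner_secondaries) &&
	       !(portsync_slaves & joiner_secondaries));

	for_each_pipe_in_mask(joiner_secondaries, disable_begin);
	for_each_pipe_in_mask(portsync_slaves, disable_begin);
	for_each_pipe_in_mask(portsync_masters, disable_begin);

	for_each_pipe_in_mask(joiner_secondaries | portsync_slaves |
			      portsync_masters, disable_complete);
}

int main(void)
{
	/* e.g. pipe B as joiner secondary, pipe C as port-sync slave, pipe A as master */
	disable_noatomic(0x2, 0x4, 0x1);
	return 0;
}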


@ -166,7 +166,7 @@ verify_crtc_state(struct intel_atomic_state *state,
const struct intel_crtc_state *sw_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
struct intel_crtc_state *hw_crtc_state;
struct intel_crtc *master_crtc;
struct intel_crtc *primary_crtc;
struct intel_encoder *encoder;
hw_crtc_state = intel_crtc_state_alloc(crtc);
@ -193,9 +193,9 @@ verify_crtc_state(struct intel_atomic_state *state,
"transitional active state does not match atomic hw state (expected %i, found %i)\n",
sw_crtc_state->hw.active, crtc->active);
master_crtc = intel_master_crtc(sw_crtc_state);
primary_crtc = intel_primary_crtc(sw_crtc_state);
for_each_encoder_on_crtc(dev, &master_crtc->base, encoder) {
for_each_encoder_on_crtc(dev, &primary_crtc->base, encoder) {
enum pipe pipe;
bool active;
@ -205,7 +205,7 @@ verify_crtc_state(struct intel_atomic_state *state,
encoder->base.base.id, active,
sw_crtc_state->hw.active);
I915_STATE_WARN(i915, active && master_crtc->pipe != pipe,
I915_STATE_WARN(i915, active && primary_crtc->pipe != pipe,
"Encoder connected to wrong pipe %c\n",
pipe_name(pipe));


@ -943,17 +943,19 @@ static void update_pfit_vscale_ratio(struct intel_overlay *overlay)
* line with the intel documentation for the i965
*/
if (DISPLAY_VER(dev_priv) >= 4) {
u32 tmp = intel_de_read(dev_priv, PFIT_PGM_RATIOS);
u32 tmp = intel_de_read(dev_priv, PFIT_PGM_RATIOS(dev_priv));
/* on i965 use the PGM reg to read out the autoscaler values */
ratio = REG_FIELD_GET(PFIT_VERT_SCALE_MASK_965, tmp);
} else {
u32 tmp;
if (intel_de_read(dev_priv, PFIT_CONTROL) & PFIT_VERT_AUTO_SCALE)
tmp = intel_de_read(dev_priv, PFIT_AUTO_RATIOS);
if (intel_de_read(dev_priv, PFIT_CONTROL(dev_priv)) & PFIT_VERT_AUTO_SCALE)
tmp = intel_de_read(dev_priv,
PFIT_AUTO_RATIOS(dev_priv));
else
tmp = intel_de_read(dev_priv, PFIT_PGM_RATIOS);
tmp = intel_de_read(dev_priv,
PFIT_PGM_RATIOS(dev_priv));
ratio = REG_FIELD_GET(PFIT_VERT_SCALE_MASK, tmp);
}
@ -1485,15 +1487,14 @@ intel_overlay_capture_error_state(struct drm_i915_private *dev_priv)
}
void
intel_overlay_print_error_state(struct drm_i915_error_state_buf *m,
intel_overlay_print_error_state(struct drm_printer *p,
struct intel_overlay_error_state *error)
{
i915_error_printf(m, "Overlay, status: 0x%08x, interrupt: 0x%08x\n",
error->dovsta, error->isr);
i915_error_printf(m, " Register file at 0x%08lx:\n",
error->base);
drm_printf(p, "Overlay, status: 0x%08x, interrupt: 0x%08x\n",
error->dovsta, error->isr);
drm_printf(p, " Register file at 0x%08lx:\n", error->base);
#define P(x) i915_error_printf(m, " " #x ": 0x%08x\n", error->regs.x)
#define P(x) drm_printf(p, " " #x ": 0x%08x\n", error->regs.x)
P(OBUF_0Y);
P(OBUF_1Y);
P(OBUF_0U);


@ -8,8 +8,8 @@
struct drm_device;
struct drm_file;
struct drm_i915_error_state_buf;
struct drm_i915_private;
struct drm_printer;
struct intel_overlay;
struct intel_overlay_error_state;
@ -24,7 +24,7 @@ int intel_overlay_attrs_ioctl(struct drm_device *dev, void *data,
void intel_overlay_reset(struct drm_i915_private *dev_priv);
struct intel_overlay_error_state *
intel_overlay_capture_error_state(struct drm_i915_private *dev_priv);
void intel_overlay_print_error_state(struct drm_i915_error_state_buf *e,
void intel_overlay_print_error_state(struct drm_printer *p,
struct intel_overlay_error_state *error);
#else
static inline void intel_overlay_setup(struct drm_i915_private *dev_priv)
@ -55,7 +55,7 @@ intel_overlay_capture_error_state(struct drm_i915_private *dev_priv)
{
return NULL;
}
static inline void intel_overlay_print_error_state(struct drm_i915_error_state_buf *e,
static inline void intel_overlay_print_error_state(struct drm_printer *p,
struct intel_overlay_error_state *error)
{
}


@ -352,7 +352,7 @@ void intel_panel_add_vbt_lfp_fixed_mode(struct intel_connector *connector)
struct drm_i915_private *i915 = to_i915(connector->base.dev);
const struct drm_display_mode *mode;
mode = connector->panel.vbt.lfp_lvds_vbt_mode;
mode = connector->panel.vbt.lfp_vbt_mode;
if (!mode)
return;


@ -224,20 +224,20 @@ static void ilk_pch_transcoder_set_timings(const struct intel_crtc_state *crtc_s
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
intel_de_write(dev_priv, PCH_TRANS_HTOTAL(pch_transcoder),
intel_de_read(dev_priv, TRANS_HTOTAL(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_HTOTAL(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_HBLANK(pch_transcoder),
intel_de_read(dev_priv, TRANS_HBLANK(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_HBLANK(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_HSYNC(pch_transcoder),
intel_de_read(dev_priv, TRANS_HSYNC(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_HSYNC(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_VTOTAL(pch_transcoder),
intel_de_read(dev_priv, TRANS_VTOTAL(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_VTOTAL(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_VBLANK(pch_transcoder),
intel_de_read(dev_priv, TRANS_VBLANK(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_VBLANK(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_VSYNC(pch_transcoder),
intel_de_read(dev_priv, TRANS_VSYNC(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_VSYNC(dev_priv, cpu_transcoder)));
intel_de_write(dev_priv, PCH_TRANS_VSYNCSHIFT(pch_transcoder),
intel_de_read(dev_priv, TRANS_VSYNCSHIFT(cpu_transcoder)));
intel_de_read(dev_priv, TRANS_VSYNCSHIFT(dev_priv, cpu_transcoder)));
}
static void ilk_enable_pch_transcoder(const struct intel_crtc_state *crtc_state)
@ -271,7 +271,7 @@ static void ilk_enable_pch_transcoder(const struct intel_crtc_state *crtc_state)
reg = PCH_TRANSCONF(pipe);
val = intel_de_read(dev_priv, reg);
pipeconf_val = intel_de_read(dev_priv, TRANSCONF(pipe));
pipeconf_val = intel_de_read(dev_priv, TRANSCONF(dev_priv, pipe));
if (HAS_PCH_IBX(dev_priv)) {
/* Configure frame start delay to match the CPU */
@ -413,7 +413,7 @@ void ilk_pch_enable(struct intel_atomic_state *state,
intel_crtc_has_dp_encoder(crtc_state)) {
const struct drm_display_mode *adjusted_mode =
&crtc_state->hw.adjusted_mode;
u32 bpc = (intel_de_read(dev_priv, TRANSCONF(pipe)) & TRANSCONF_BPC_MASK) >> 5;
u32 bpc = (intel_de_read(dev_priv, TRANSCONF(dev_priv, pipe)) & TRANSCONF_BPC_MASK) >> 5;
i915_reg_t reg = TRANS_DP_CTL(pipe);
enum port port;
@ -557,7 +557,8 @@ static void lpt_enable_pch_transcoder(const struct intel_crtc_state *crtc_state)
intel_de_write(dev_priv, TRANS_CHICKEN2(PIPE_A), val);
val = TRANS_ENABLE;
pipeconf_val = intel_de_read(dev_priv, TRANSCONF(cpu_transcoder));
pipeconf_val = intel_de_read(dev_priv,
TRANSCONF(dev_priv, cpu_transcoder));
if ((pipeconf_val & TRANSCONF_INTERLACE_MASK_HSW) == TRANSCONF_INTERLACE_IF_ID_ILK)
val |= TRANS_INTERLACE_INTERLACED;


@ -34,6 +34,7 @@
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_pipe_crc.h"
#include "intel_pipe_crc_regs.h"
static const char * const pipe_crc_sources[] = {
[INTEL_PIPE_CRC_SOURCE_NONE] = "none",
@ -167,7 +168,7 @@ static int vlv_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
* - DisplayPort scrambling: used for EMI reduction
*/
if (need_stable_symbols) {
u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X);
u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X(dev_priv));
tmp |= DC_BALANCE_RESET_VLV;
switch (pipe) {
@ -183,7 +184,7 @@ static int vlv_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
default:
return -EINVAL;
}
intel_de_write(dev_priv, PORT_DFT2_G4X, tmp);
intel_de_write(dev_priv, PORT_DFT2_G4X(dev_priv), tmp);
}
return 0;
@ -229,7 +230,7 @@ static int i9xx_pipe_crc_ctl_reg(struct drm_i915_private *dev_priv,
static void vlv_undo_pipe_scramble_reset(struct drm_i915_private *dev_priv,
enum pipe pipe)
{
u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X);
u32 tmp = intel_de_read(dev_priv, PORT_DFT2_G4X(dev_priv));
switch (pipe) {
case PIPE_A:
@ -246,7 +247,7 @@ static void vlv_undo_pipe_scramble_reset(struct drm_i915_private *dev_priv,
}
if (!(tmp & PIPE_SCRAMBLE_RESET_MASK))
tmp &= ~DC_BALANCE_RESET_VLV;
intel_de_write(dev_priv, PORT_DFT2_G4X, tmp);
intel_de_write(dev_priv, PORT_DFT2_G4X(dev_priv), tmp);
}
static int ilk_pipe_crc_ctl_reg(enum intel_pipe_crc_source *source,
@ -608,8 +609,8 @@ int intel_crtc_set_crc_source(struct drm_crtc *_crtc, const char *source_name)
goto out;
pipe_crc->source = source;
intel_de_write(dev_priv, PIPE_CRC_CTL(pipe), val);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(pipe));
intel_de_write(dev_priv, PIPE_CRC_CTL(dev_priv, pipe), val);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(dev_priv, pipe));
if (!source) {
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
@ -643,8 +644,8 @@ void intel_crtc_enable_pipe_crc(struct intel_crtc *crtc)
/* Don't need pipe_crc->lock here, IRQs are not generated. */
pipe_crc->skipped = 0;
intel_de_write(dev_priv, PIPE_CRC_CTL(pipe), val);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(pipe));
intel_de_write(dev_priv, PIPE_CRC_CTL(dev_priv, pipe), val);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(dev_priv, pipe));
}
void intel_crtc_disable_pipe_crc(struct intel_crtc *crtc)
@ -658,7 +659,7 @@ void intel_crtc_disable_pipe_crc(struct intel_crtc *crtc)
pipe_crc->skipped = INT_MIN;
spin_unlock_irq(&pipe_crc->lock);
intel_de_write(dev_priv, PIPE_CRC_CTL(pipe), 0);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(pipe));
intel_de_write(dev_priv, PIPE_CRC_CTL(dev_priv, pipe), 0);
intel_de_posting_read(dev_priv, PIPE_CRC_CTL(dev_priv, pipe));
intel_synchronize_irq(dev_priv);
}


@ -0,0 +1,152 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2024 Intel Corporation
*/
#ifndef __INTEL_PIPE_CRC_REGS_H__
#define __INTEL_PIPE_CRC_REGS_H__
#include "intel_display_reg_defs.h"
#define _PIPE_CRC_CTL_A 0x60050
#define PIPE_CRC_CTL(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_CTL_A)
#define PIPE_CRC_ENABLE REG_BIT(31)
/* skl+ source selection */
#define PIPE_CRC_SOURCE_MASK_SKL REG_GENMASK(30, 28)
#define PIPE_CRC_SOURCE_PLANE_1_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 0)
#define PIPE_CRC_SOURCE_PLANE_2_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 2)
#define PIPE_CRC_SOURCE_DMUX_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 4)
#define PIPE_CRC_SOURCE_PLANE_3_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 6)
#define PIPE_CRC_SOURCE_PLANE_4_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 7)
#define PIPE_CRC_SOURCE_PLANE_5_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 5)
#define PIPE_CRC_SOURCE_PLANE_6_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 3)
#define PIPE_CRC_SOURCE_PLANE_7_SKL REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_SKL, 1)
/* ivb+ source selection */
#define PIPE_CRC_SOURCE_MASK_IVB REG_GENMASK(30, 29)
#define PIPE_CRC_SOURCE_PRIMARY_IVB REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_IVB, 0)
#define PIPE_CRC_SOURCE_SPRITE_IVB REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_IVB, 1)
#define PIPE_CRC_SOURCE_PF_IVB REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_IVB, 2)
/* ilk+ source selection */
#define PIPE_CRC_SOURCE_MASK_ILK REG_GENMASK(30, 28)
#define PIPE_CRC_SOURCE_PRIMARY_ILK REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_ILK, 0)
#define PIPE_CRC_SOURCE_SPRITE_ILK REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_ILK, 1)
#define PIPE_CRC_SOURCE_PIPE_ILK REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_ILK, 2)
/* embedded DP port on the north display block */
#define PIPE_CRC_SOURCE_PORT_A_ILK REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_ILK, 4)
#define PIPE_CRC_SOURCE_FDI_ILK REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_ILK, 5)
/* vlv source selection */
#define PIPE_CRC_SOURCE_MASK_VLV REG_GENMASK(30, 27)
#define PIPE_CRC_SOURCE_PIPE_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 0)
#define PIPE_CRC_SOURCE_HDMIB_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 1)
#define PIPE_CRC_SOURCE_HDMIC_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 2)
/* with DP port the pipe source is invalid */
#define PIPE_CRC_SOURCE_DP_D_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 3)
#define PIPE_CRC_SOURCE_DP_B_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 6)
#define PIPE_CRC_SOURCE_DP_C_VLV REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_VLV, 7)
/* gen3+ source selection */
#define PIPE_CRC_SOURCE_MASK_I9XX REG_GENMASK(30, 28)
#define PIPE_CRC_SOURCE_PIPE_I9XX REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 0)
#define PIPE_CRC_SOURCE_SDVOB_I9XX REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 1)
#define PIPE_CRC_SOURCE_SDVOC_I9XX REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 2)
/* with DP/TV port the pipe source is invalid */
#define PIPE_CRC_SOURCE_DP_D_G4X REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 3)
#define PIPE_CRC_SOURCE_TV_PRE REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 4)
#define PIPE_CRC_SOURCE_TV_POST REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 5)
#define PIPE_CRC_SOURCE_DP_B_G4X REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 6)
#define PIPE_CRC_SOURCE_DP_C_G4X REG_FIELD_PREP(PIPE_CRC_SOURCE_MASK_I9XX, 7)
/* gen2 doesn't have source selection bits */
#define PIPE_CRC_INCLUDE_BORDER_I8XX REG_BIT(30)
#define PIPE_CRC_EXP_RED_MASK REG_BIT(22, 0) /* pre-ivb */
#define PIPE_CRC_EXP_1_MASK_IVB REG_BIT(22, 0) /* ivb */
#define _PIPE_CRC_EXP_GREEN_A 0x60054
#define PIPE_CRC_EXP_GREEN(dev_priv, pipe) _MMIO_TRANS2(dev_priv, pipe, _PIPE_CRC_EXP_GREEN_A)
#define PIPE_CRC_EXP_GREEN_MASK REG_BIT(22, 0) /* pre-ivb */
#define _PIPE_CRC_EXP_BLUE_A 0x60058
#define PIPE_CRC_EXP_BLUE(dev_priv, pipe) _MMIO_TRANS2(dev_priv, pipe, _PIPE_CRC_EXP_BLUE_A)
#define PIPE_CRC_EXP_BLUE_MASK REG_BIT(22, 0) /* pre-ivb */
#define _PIPE_CRC_EXP_RES1_A_I915 0x6005c /* i915+ */
#define PIPE_CRC_EXP_RES1_I915(dev_priv, pipe) _MMIO_TRANS2(dev_priv, pipe, _PIPE_CRC_EXP_RES1_A_I915)
#define PIPE_CRC_EXP_RES1_MASK REG_BIT(22, 0) /* pre-ivb */
#define _PIPE_CRC_EXP_RES2_A_G4X 0x60080 /* g4x+ */
#define PIPE_CRC_EXP_RES2_G4X(dev_priv, pipe) _MMIO_TRANS2(dev_priv, pipe, _PIPE_CRC_EXP_RES2_A_G4X)
#define PIPE_CRC_EXP_RES2_MASK REG_BIT(22, 0) /* pre-ivb */
#define _PIPE_CRC_RES_RED_A 0x60060
#define PIPE_CRC_RES_RED(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_RES_RED_A)
#define _PIPE_CRC_RES_GREEN_A 0x60064
#define PIPE_CRC_RES_GREEN(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_RES_GREEN_A)
#define _PIPE_CRC_RES_BLUE_A 0x60068
#define PIPE_CRC_RES_BLUE(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_RES_BLUE_A)
#define _PIPE_CRC_RES_RES1_A_I915 0x6006c /* i915+ */
#define PIPE_CRC_RES_RES1_I915(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_RES_RES1_A_I915)
#define _PIPE_CRC_RES_RES2_A_G4X 0x60080 /* g4x+ */
#define PIPE_CRC_RES_RES2_G4X(dev_priv, pipe) _MMIO_TRANS2((dev_priv), (pipe), _PIPE_CRC_RES_RES2_A_G4X)
/* ivb */
#define _PIPE_CRC_EXP_2_A_IVB 0x60054
#define _PIPE_CRC_EXP_2_B_IVB 0x61054
#define PIPE_CRC_EXP_2_IVB(pipe) _MMIO_PIPE(pipe, _PIPE_CRC_EXP_2_A_IVB, _PIPE_CRC_EXP_2_B_IVB)
#define PIPE_CRC_EXP_2_MASK_IVB REG_BIT(22, 0) /* ivb */
/* ivb */
#define _PIPE_CRC_EXP_3_A_IVB 0x60058
#define _PIPE_CRC_EXP_3_B_IVB 0x61058
#define PIPE_CRC_EXP_3_IVB(pipe) _MMIO_PIPE(pipe, _PIPE_CRC_EXP_3_A_IVB, _PIPE_CRC_EXP_3_B_IVB)
#define PIPE_CRC_EXP_3_MASK_IVB REG_BIT(22, 0) /* ivb */
/* ivb */
#define _PIPE_CRC_EXP_4_A_IVB 0x6005c
#define _PIPE_CRC_EXP_4_B_IVB 0x6105c
#define PIPE_CRC_EXP_4_IVB(pipe) _MMIO_PIPE(pipe, _PIPE_CRC_EXP_2_A_IVB, _PIPE_CRC_EXP_2_B_IVB)
#define PIPE_CRC_EXP_4_MASK_IVB REG_BIT(22, 0) /* ivb */
/* ivb */
#define _PIPE_CRC_EXP_5_A_IVB 0x60060
#define _PIPE_CRC_EXP_5_B_IVB 0x61060
#define PIPE_CRC_EXP_5_IVB(pipe) _MMIO_PIPE(pipe, _PIPE_CRC_EXP_2_A_IVB, _PIPE_CRC_EXP_2_B_IVB)
#define PIPE_CRC_EXP_5_MASK_IVB REG_BIT(22, 0) /* ivb */
/* ivb */
#define _PIPE_CRC_RES_1_A_IVB 0x60064
#define _PIPE_CRC_RES_1_B_IVB 0x61064
#define PIPE_CRC_RES_1_IVB(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_1_A_IVB, _PIPE_CRC_RES_1_B_IVB)
/* ivb */
#define _PIPE_CRC_RES_2_A_IVB 0x60068
#define _PIPE_CRC_RES_2_B_IVB 0x61068
#define PIPE_CRC_RES_2_IVB(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_2_A_IVB, _PIPE_CRC_RES_2_B_IVB)
/* ivb */
#define _PIPE_CRC_RES_3_A_IVB 0x6006c
#define _PIPE_CRC_RES_3_B_IVB 0x6106c
#define PIPE_CRC_RES_3_IVB(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_3_A_IVB, _PIPE_CRC_RES_3_B_IVB)
/* ivb */
#define _PIPE_CRC_RES_4_A_IVB 0x60070
#define _PIPE_CRC_RES_4_B_IVB 0x61070
#define PIPE_CRC_RES_4_IVB(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_4_A_IVB, _PIPE_CRC_RES_4_B_IVB)
/* ivb */
#define _PIPE_CRC_RES_5_A_IVB 0x60074
#define _PIPE_CRC_RES_5_B_IVB 0x61074
#define PIPE_CRC_RES_5_IVB(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_5_A_IVB, _PIPE_CRC_RES_5_B_IVB)
/* hsw+ */
#define _PIPE_CRC_EXP_A_HSW 0x60054
#define _PIPE_CRC_EXP_B_HSW 0x61054
#define PIPE_CRC_EXP_HSW(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_EXP_A_HSW, _PIPE_CRC_EXP_B_HSW)
/* hsw+ */
#define _PIPE_CRC_RES_A_HSW 0x60064
#define _PIPE_CRC_RES_B_HSW 0x61064
#define PIPE_CRC_RES_HSW(pipe) _MMIO_PIPE((pipe), _PIPE_CRC_RES_A_HSW, _PIPE_CRC_RES_B_HSW)
#endif /* __INTEL_PIPE_CRC_REGS_H__ */
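The new header mixes two addressing styles: the ivb/hsw registers carry explicit pipe A and pipe B offsets for _MMIO_PIPE(), which linearly extrapolates from the distance between the two, while PIPE_CRC_CTL() and friends use the dev_priv-taking _MMIO_TRANS2() and resolve the offset from per-device display info at runtime, which is why they now need the device pointer. A standalone sketch of the simpler _MMIO_PIPE() arithmetic, assuming the usual a + pipe * (b - a) extrapolation and using the CRC result addresses from the header above:

#include <stdint.h>
#include <stdio.h>

/* Pipe A/B copies of the hsw+ CRC result register, from the header above. */
#define PIPE_CRC_RES_A_HSW 0x60064u
#define PIPE_CRC_RES_B_HSW 0x61064u

/* _MMIO_PIPE()-style addressing: any pipe's copy sits at A + pipe * (B - A). */
static uint32_t mmio_pipe(int pipe, uint32_t reg_a, uint32_t reg_b)
{
	return reg_a + (uint32_t)pipe * (reg_b - reg_a);
}

int main(void)
{
	for (int pipe = 0; pipe < 3; pipe++)
		printf("PIPE_CRC_RES_HSW(pipe %c) = 0x%05x\n", 'A' + pipe,
		       mmio_pipe(pipe, PIPE_CRC_RES_A_HSW, PIPE_CRC_RES_B_HSW));
	return 0;
}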


@ -119,7 +119,7 @@ vlv_power_sequencer_kick(struct intel_dp *intel_dp)
else
DP |= DP_PIPE_SEL(pipe);
pll_enabled = intel_de_read(dev_priv, DPLL(pipe)) & DPLL_VCO_ENABLE;
pll_enabled = intel_de_read(dev_priv, DPLL(dev_priv, pipe)) & DPLL_VCO_ENABLE;
/*
* The DPLL for the pipe must be enabled for this to work.
@ -272,12 +272,12 @@ typedef bool (*pps_check)(struct drm_i915_private *dev_priv, int pps_idx);
static bool pps_has_pp_on(struct drm_i915_private *dev_priv, int pps_idx)
{
return intel_de_read(dev_priv, PP_STATUS(pps_idx)) & PP_ON;
return intel_de_read(dev_priv, PP_STATUS(dev_priv, pps_idx)) & PP_ON;
}
static bool pps_has_vdd_on(struct drm_i915_private *dev_priv, int pps_idx)
{
return intel_de_read(dev_priv, PP_CONTROL(pps_idx)) & EDP_FORCE_VDD;
return intel_de_read(dev_priv, PP_CONTROL(dev_priv, pps_idx)) & EDP_FORCE_VDD;
}
static bool pps_any(struct drm_i915_private *dev_priv, int pps_idx)
@ -292,7 +292,7 @@ vlv_initial_pps_pipe(struct drm_i915_private *dev_priv,
enum pipe pipe;
for (pipe = PIPE_A; pipe <= PIPE_B; pipe++) {
u32 port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(pipe)) &
u32 port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(dev_priv, pipe)) &
PANEL_PORT_SELECT_MASK;
if (port_sel != PANEL_PORT_SELECT_VLV(port))
@ -491,17 +491,17 @@ static void intel_pps_get_registers(struct intel_dp *intel_dp,
else
pps_idx = intel_dp->pps.pps_idx;
regs->pp_ctrl = PP_CONTROL(pps_idx);
regs->pp_stat = PP_STATUS(pps_idx);
regs->pp_on = PP_ON_DELAYS(pps_idx);
regs->pp_off = PP_OFF_DELAYS(pps_idx);
regs->pp_ctrl = PP_CONTROL(dev_priv, pps_idx);
regs->pp_stat = PP_STATUS(dev_priv, pps_idx);
regs->pp_on = PP_ON_DELAYS(dev_priv, pps_idx);
regs->pp_off = PP_OFF_DELAYS(dev_priv, pps_idx);
/* Cycle delay moved from PP_DIVISOR to PP_CONTROL */
if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv) ||
INTEL_PCH_TYPE(dev_priv) >= PCH_CNP)
regs->pp_div = INVALID_MMIO_REG;
else
regs->pp_div = PP_DIVISOR(pps_idx);
regs->pp_div = PP_DIVISOR(dev_priv, pps_idx);
}
static i915_reg_t
@ -1111,7 +1111,7 @@ static void vlv_detach_power_sequencer(struct intel_dp *intel_dp)
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
enum pipe pipe = intel_dp->pps.pps_pipe;
i915_reg_t pp_on_reg = PP_ON_DELAYS(pipe);
i915_reg_t pp_on_reg = PP_ON_DELAYS(dev_priv, pipe);
drm_WARN_ON(&dev_priv->drm, intel_dp->pps.active_pipe != INVALID_PIPE);
@ -1656,7 +1656,7 @@ void intel_pps_unlock_regs_wa(struct drm_i915_private *dev_priv)
pps_num = intel_num_pps(dev_priv);
for (pps_idx = 0; pps_idx < pps_num; pps_idx++)
intel_de_rmw(dev_priv, PP_CONTROL(pps_idx),
intel_de_rmw(dev_priv, PP_CONTROL(dev_priv, pps_idx),
PANEL_UNLOCK_MASK, PANEL_UNLOCK_REGS);
}
@ -1714,8 +1714,8 @@ void assert_pps_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
if (HAS_PCH_SPLIT(dev_priv)) {
u32 port_sel;
pp_reg = PP_CONTROL(0);
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
pp_reg = PP_CONTROL(dev_priv, 0);
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(dev_priv, 0)) & PANEL_PORT_SELECT_MASK;
switch (port_sel) {
case PANEL_PORT_SELECT_LVDS:
@ -1736,13 +1736,13 @@ void assert_pps_unlocked(struct drm_i915_private *dev_priv, enum pipe pipe)
}
} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
/* presumably write lock depends on pipe, not port select */
pp_reg = PP_CONTROL(pipe);
pp_reg = PP_CONTROL(dev_priv, pipe);
panel_pipe = pipe;
} else {
u32 port_sel;
pp_reg = PP_CONTROL(0);
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(0)) & PANEL_PORT_SELECT_MASK;
pp_reg = PP_CONTROL(dev_priv, 0);
port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(dev_priv, 0)) & PANEL_PORT_SELECT_MASK;
drm_WARN_ON(&dev_priv->drm,
port_sel != PANEL_PORT_SELECT_LVDS);


@ -6,6 +6,7 @@
#ifndef __INTEL_PPS_REGS_H__
#define __INTEL_PPS_REGS_H__
#include "intel_display_conversion.h"
#include "intel_display_reg_defs.h"
/* Panel power sequencing */
@ -13,12 +14,11 @@
#define VLV_PPS_BASE (VLV_DISPLAY_BASE + PPS_BASE)
#define PCH_PPS_BASE 0xC7200
#define _MMIO_PPS(pps_idx, reg) _MMIO(dev_priv->display.pps.mmio_base - \
PPS_BASE + (reg) + \
(pps_idx) * 0x100)
#define _MMIO_PPS(dev_priv, pps_idx, reg) \
_MMIO(__to_intel_display(dev_priv)->pps.mmio_base - PPS_BASE + (reg) + (pps_idx) * 0x100)
#define _PP_STATUS 0x61200
#define PP_STATUS(pps_idx) _MMIO_PPS(pps_idx, _PP_STATUS)
#define PP_STATUS(dev_priv, pps_idx) _MMIO_PPS(dev_priv, pps_idx, _PP_STATUS)
#define PP_ON REG_BIT(31)
/*
* Indicates that all dependencies of the panel are on:
@ -45,7 +45,7 @@
#define PP_SEQUENCE_STATE_RESET REG_FIELD_PREP(PP_SEQUENCE_STATE_MASK, 0xf)
#define _PP_CONTROL 0x61204
#define PP_CONTROL(pps_idx) _MMIO_PPS(pps_idx, _PP_CONTROL)
#define PP_CONTROL(dev_priv, pps_idx) _MMIO_PPS(dev_priv, pps_idx, _PP_CONTROL)
#define PANEL_UNLOCK_MASK REG_GENMASK(31, 16)
#define PANEL_UNLOCK_REGS REG_FIELD_PREP(PANEL_UNLOCK_MASK, 0xabcd)
#define BXT_POWER_CYCLE_DELAY_MASK REG_GENMASK(8, 4)
@ -55,7 +55,7 @@
#define PANEL_POWER_ON REG_BIT(0)
#define _PP_ON_DELAYS 0x61208
#define PP_ON_DELAYS(pps_idx) _MMIO_PPS(pps_idx, _PP_ON_DELAYS)
#define PP_ON_DELAYS(dev_priv, pps_idx) _MMIO_PPS(dev_priv, pps_idx, _PP_ON_DELAYS)
#define PANEL_PORT_SELECT_MASK REG_GENMASK(31, 30)
#define PANEL_PORT_SELECT_LVDS REG_FIELD_PREP(PANEL_PORT_SELECT_MASK, 0)
#define PANEL_PORT_SELECT_DPA REG_FIELD_PREP(PANEL_PORT_SELECT_MASK, 1)
@ -66,12 +66,12 @@
#define PANEL_LIGHT_ON_DELAY_MASK REG_GENMASK(12, 0)
#define _PP_OFF_DELAYS 0x6120C
#define PP_OFF_DELAYS(pps_idx) _MMIO_PPS(pps_idx, _PP_OFF_DELAYS)
#define PP_OFF_DELAYS(dev_priv, pps_idx) _MMIO_PPS(dev_priv, pps_idx, _PP_OFF_DELAYS)
#define PANEL_POWER_DOWN_DELAY_MASK REG_GENMASK(28, 16)
#define PANEL_LIGHT_OFF_DELAY_MASK REG_GENMASK(12, 0)
#define _PP_DIVISOR 0x61210
#define PP_DIVISOR(pps_idx) _MMIO_PPS(pps_idx, _PP_DIVISOR)
#define PP_DIVISOR(dev_priv, pps_idx) _MMIO_PPS(dev_priv, pps_idx, _PP_DIVISOR)
#define PP_REFERENCE_DIVIDER_MASK REG_GENMASK(31, 8)
#define PANEL_POWER_CYCLE_DELAY_MASK REG_GENMASK(4, 0)
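The _MMIO_PPS() change is the clearest instance of the pattern running through these hunks, where register macros gain an explicit device argument: the macro now receives the device (via __to_intel_display()) and computes the register address from that device's pps.mmio_base, with each PPS instance 0x100 apart. A standalone sketch of the address arithmetic; PPS_BASE is assumed to be 0x61200 (matching the _PP_STATUS offset, its definition sits above the hunk), and the real macro additionally wraps the result in _MMIO():

#include <stdint.h>
#include <stdio.h>

#define PPS_BASE    0x61200u	/* assumed: the base the _PP_* offsets are relative to */
#define _PP_STATUS  0x61200u
#define _PP_CONTROL 0x61204u

/* Shape of the reworked _MMIO_PPS(dev_priv, pps_idx, reg). */
static uint32_t mmio_pps(uint32_t pps_mmio_base, int pps_idx, uint32_t reg)
{
	return pps_mmio_base - PPS_BASE + reg + (uint32_t)pps_idx * 0x100;
}

int main(void)
{
	uint32_t pch_pps_base = 0xC7200;	/* PCH_PPS_BASE from the header above */

	printf("PP_STATUS(pps 0)  = 0x%05x\n", mmio_pps(pch_pps_base, 0, _PP_STATUS));
	printf("PP_CONTROL(pps 1) = 0x%05x\n", mmio_pps(pch_pps_base, 1, _PP_CONTROL));
	return 0;
}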

File diff suppressed because it is too large.

@ -9,7 +9,7 @@
#include "intel_display_reg_defs.h"
#include "intel_dp_aux_regs.h"
#define TRANS_EXITLINE(trans) _MMIO_TRANS2(dev_priv, (trans), _TRANS_EXITLINE_A)
#define TRANS_EXITLINE(dev_priv, trans) _MMIO_TRANS2(dev_priv, (trans), _TRANS_EXITLINE_A)
#define EXITLINE_ENABLE REG_BIT(31)
#define EXITLINE_MASK REG_GENMASK(12, 0)
#define EXITLINE_SHIFT 0
@ -23,7 +23,7 @@
#define HSW_SRD_CTL _MMIO(0x64800)
#define _SRD_CTL_A 0x60800
#define _SRD_CTL_EDP 0x6f800
#define EDP_PSR_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _SRD_CTL_A)
#define EDP_PSR_CTL(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _SRD_CTL_A)
#define EDP_PSR_ENABLE REG_BIT(31)
#define BDW_PSR_SINGLE_FRAME REG_BIT(30)
#define EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK REG_BIT(29) /* SW can't modify */
@ -66,8 +66,8 @@
#define EDP_PSR_IIR _MMIO(0x64838)
#define _PSR_IMR_A 0x60814
#define _PSR_IIR_A 0x60818
#define TRANS_PSR_IMR(tran) _MMIO_TRANS2(dev_priv, tran, _PSR_IMR_A)
#define TRANS_PSR_IIR(tran) _MMIO_TRANS2(dev_priv, tran, _PSR_IIR_A)
#define TRANS_PSR_IMR(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR_IMR_A)
#define TRANS_PSR_IIR(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR_IIR_A)
#define _EDP_PSR_TRANS_SHIFT(trans) ((trans) == TRANSCODER_EDP ? \
0 : ((trans) - TRANSCODER_A + 1) * 8)
#define TGL_PSR_MASK REG_GENMASK(2, 0)
@ -86,7 +86,7 @@
#define HSW_SRD_AUX_CTL _MMIO(0x64810)
#define _SRD_AUX_CTL_A 0x60810
#define _SRD_AUX_CTL_EDP 0x6f810
#define EDP_PSR_AUX_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _SRD_AUX_CTL_A)
#define EDP_PSR_AUX_CTL(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _SRD_AUX_CTL_A)
#define EDP_PSR_AUX_CTL_TIME_OUT_MASK DP_AUX_CH_CTL_TIME_OUT_MASK
#define EDP_PSR_AUX_CTL_MESSAGE_SIZE_MASK DP_AUX_CH_CTL_MESSAGE_SIZE_MASK
#define EDP_PSR_AUX_CTL_PRECHARGE_2US_MASK DP_AUX_CH_CTL_PRECHARGE_2US_MASK
@ -96,12 +96,12 @@
#define HSW_SRD_AUX_DATA(i) _MMIO(0x64814 + (i) * 4) /* 5 registers */
#define _SRD_AUX_DATA_A 0x60814
#define _SRD_AUX_DATA_EDP 0x6f814
#define EDP_PSR_AUX_DATA(tran, i) _MMIO_TRANS2(dev_priv, tran, _SRD_AUX_DATA_A + (i) * 4) /* 5 registers */
#define EDP_PSR_AUX_DATA(dev_priv, tran, i) _MMIO_TRANS2(dev_priv, tran, _SRD_AUX_DATA_A + (i) * 4) /* 5 registers */
#define HSW_SRD_STATUS _MMIO(0x64840)
#define _SRD_STATUS_A 0x60840
#define _SRD_STATUS_EDP 0x6f840
#define EDP_PSR_STATUS(tran) _MMIO_TRANS2(dev_priv, tran, _SRD_STATUS_A)
#define EDP_PSR_STATUS(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _SRD_STATUS_A)
#define EDP_PSR_STATUS_STATE_MASK REG_GENMASK(31, 29)
#define EDP_PSR_STATUS_STATE_IDLE REG_FIELD_PREP(EDP_PSR_STATUS_STATE_MASK, 0)
#define EDP_PSR_STATUS_STATE_SRDONACK REG_FIELD_PREP(EDP_PSR_STATUS_STATE_MASK, 1)
@ -126,14 +126,14 @@
#define HSW_SRD_PERF_CNT _MMIO(0x64844)
#define _SRD_PERF_CNT_A 0x60844
#define _SRD_PERF_CNT_EDP 0x6f844
#define EDP_PSR_PERF_CNT(tran) _MMIO_TRANS2(dev_priv, tran, _SRD_PERF_CNT_A)
#define EDP_PSR_PERF_CNT(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _SRD_PERF_CNT_A)
#define EDP_PSR_PERF_CNT_MASK REG_GENMASK(23, 0)
/* PSR_MASK on SKL+ */
#define HSW_SRD_DEBUG _MMIO(0x64860)
#define _SRD_DEBUG_A 0x60860
#define _SRD_DEBUG_EDP 0x6f860
#define EDP_PSR_DEBUG(tran) _MMIO_TRANS2(dev_priv, tran, _SRD_DEBUG_A)
#define EDP_PSR_DEBUG(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _SRD_DEBUG_A)
#define EDP_PSR_DEBUG_MASK_MAX_SLEEP REG_BIT(28)
#define EDP_PSR_DEBUG_MASK_LPSP REG_BIT(27)
#define EDP_PSR_DEBUG_MASK_MEMUP REG_BIT(26)
@ -153,7 +153,7 @@
#define _PSR2_CTL_A 0x60900
#define _PSR2_CTL_EDP 0x6f900
#define EDP_PSR2_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_CTL_A)
#define EDP_PSR2_CTL(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_CTL_A)
#define EDP_PSR2_ENABLE REG_BIT(31)
#define EDP_SU_TRACK_ENABLE REG_BIT(30) /* up to adl-p */
#define TGL_EDP_PSR2_BLOCK_COUNT_MASK REG_BIT(28)
@ -172,6 +172,10 @@
#define TGL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES 5
#define TGL_EDP_PSR2_IO_BUFFER_WAKE(lines) REG_FIELD_PREP(TGL_EDP_PSR2_IO_BUFFER_WAKE_MASK, \
(lines) - TGL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES)
#define LNL_EDP_PSR2_IO_BUFFER_WAKE_MASK REG_GENMASK(18, 13)
#define LNL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES 5
#define LNL_EDP_PSR2_IO_BUFFER_WAKE(lines) REG_FIELD_PREP(LNL_EDP_PSR2_IO_BUFFER_WAKE_MASK, \
(lines) - LNL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES)
#define EDP_PSR2_FAST_WAKE_MASK REG_GENMASK(12, 11)
#define EDP_PSR2_FAST_WAKE_MAX_LINES 8
#define EDP_PSR2_FAST_WAKE(lines) REG_FIELD_PREP(EDP_PSR2_FAST_WAKE_MASK, \
@ -195,7 +199,7 @@
#define _PSR_EVENT_TRANS_C 0x62848
#define _PSR_EVENT_TRANS_D 0x63848
#define _PSR_EVENT_TRANS_EDP 0x6f848
#define PSR_EVENT(tran) _MMIO_TRANS2(dev_priv, tran, _PSR_EVENT_TRANS_A)
#define PSR_EVENT(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR_EVENT_TRANS_A)
#define PSR_EVENT_PSR2_WD_TIMER_EXPIRE REG_BIT(17)
#define PSR_EVENT_PSR2_DISABLED REG_BIT(16)
#define PSR_EVENT_SU_DIRTY_FIFO_UNDERRUN REG_BIT(15)
@ -215,21 +219,21 @@
#define _PSR2_STATUS_A 0x60940
#define _PSR2_STATUS_EDP 0x6f940
#define EDP_PSR2_STATUS(tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_STATUS_A)
#define EDP_PSR2_STATUS(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_STATUS_A)
#define EDP_PSR2_STATUS_STATE_MASK REG_GENMASK(31, 28)
#define EDP_PSR2_STATUS_STATE_DEEP_SLEEP REG_FIELD_PREP(EDP_PSR2_STATUS_STATE_MASK, 0x8)
#define _PSR2_SU_STATUS_A 0x60914
#define _PSR2_SU_STATUS_EDP 0x6f914
#define _PSR2_SU_STATUS(tran, index) _MMIO_TRANS2(dev_priv, tran, _PSR2_SU_STATUS_A + (index) * 4)
#define PSR2_SU_STATUS(tran, frame) (_PSR2_SU_STATUS(tran, (frame) / 3))
#define _PSR2_SU_STATUS(dev_priv, tran, index) _MMIO_TRANS2(dev_priv, tran, _PSR2_SU_STATUS_A + (index) * 4)
#define PSR2_SU_STATUS(dev_priv, tran, frame) (_PSR2_SU_STATUS(dev_priv, tran, (frame) / 3))
#define PSR2_SU_STATUS_SHIFT(frame) (((frame) % 3) * 10)
#define PSR2_SU_STATUS_MASK(frame) (0x3ff << PSR2_SU_STATUS_SHIFT(frame))
#define PSR2_SU_STATUS_FRAMES 8
#define _PSR2_MAN_TRK_CTL_A 0x60910
#define _PSR2_MAN_TRK_CTL_EDP 0x6f910
#define PSR2_MAN_TRK_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_MAN_TRK_CTL_A)
#define PSR2_MAN_TRK_CTL(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _PSR2_MAN_TRK_CTL_A)
#define PSR2_MAN_TRK_CTL_ENABLE REG_BIT(31)
#define PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR_MASK REG_GENMASK(30, 21)
#define PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(val) REG_FIELD_PREP(PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR_MASK, val)
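PSR2_SU_STATUS above packs three 10-bit per-frame counters into each 32-bit register: frame N lives in register N/3 at bit offset (N%3)*10, exactly as PSR2_SU_STATUS and PSR2_SU_STATUS_SHIFT spell out. A minimal decoder sketch for one already-read register value; the MMIO read itself is left to the driver's helpers.

/* Sketch only; mirrors PSR2_SU_STATUS_SHIFT/MASK arithmetic. */
#include <stdint.h>

static uint32_t psr2_su_frame_count(uint32_t reg_val, unsigned int frame)
{
	unsigned int shift = (frame % 3) * 10;  /* matches PSR2_SU_STATUS_SHIFT */

	return (reg_val >> shift) & 0x3ff;      /* 10-bit counter per frame */
}

int main(void)
{
	/* frame 4 -> register index 1, bits 19:10 of that register */
	return psr2_su_frame_count(0x000ffc00u, 4) == 0x3ff ? 0 : 1;
}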
@@ -248,56 +252,11 @@
/* PSR2 Early transport */
#define _PIPE_SRCSZ_ERLY_TPT_A 0x70074
#define PIPE_SRCSZ_ERLY_TPT(trans) _MMIO_TRANS2(dev_priv, trans, _PIPE_SRCSZ_ERLY_TPT_A)
#define _SEL_FETCH_PLANE_BASE_1_A 0x70890
#define _SEL_FETCH_PLANE_BASE_2_A 0x708B0
#define _SEL_FETCH_PLANE_BASE_3_A 0x708D0
#define _SEL_FETCH_PLANE_BASE_4_A 0x708F0
#define _SEL_FETCH_PLANE_BASE_5_A 0x70920
#define _SEL_FETCH_PLANE_BASE_6_A 0x70940
#define _SEL_FETCH_PLANE_BASE_7_A 0x70960
#define _SEL_FETCH_PLANE_BASE_CUR_A 0x70880
#define _SEL_FETCH_PLANE_BASE_1_B 0x71890
#define _SEL_FETCH_PLANE_BASE_A(plane) _PICK(plane, \
_SEL_FETCH_PLANE_BASE_1_A, \
_SEL_FETCH_PLANE_BASE_2_A, \
_SEL_FETCH_PLANE_BASE_3_A, \
_SEL_FETCH_PLANE_BASE_4_A, \
_SEL_FETCH_PLANE_BASE_5_A, \
_SEL_FETCH_PLANE_BASE_6_A, \
_SEL_FETCH_PLANE_BASE_7_A, \
_SEL_FETCH_PLANE_BASE_CUR_A)
#define _SEL_FETCH_PLANE_BASE_1(pipe) _PIPE(pipe, _SEL_FETCH_PLANE_BASE_1_A, _SEL_FETCH_PLANE_BASE_1_B)
#define _SEL_FETCH_PLANE_BASE(pipe, plane) (_SEL_FETCH_PLANE_BASE_1(pipe) - \
_SEL_FETCH_PLANE_BASE_1_A + \
_SEL_FETCH_PLANE_BASE_A(plane))
#define _SEL_FETCH_PLANE_CTL_1_A 0x70890
#define PLANE_SEL_FETCH_CTL(pipe, plane) _MMIO(_SEL_FETCH_PLANE_BASE(pipe, plane) + \
_SEL_FETCH_PLANE_CTL_1_A - \
_SEL_FETCH_PLANE_BASE_1_A)
#define PLANE_SEL_FETCH_CTL_ENABLE REG_BIT(31)
#define _SEL_FETCH_PLANE_POS_1_A 0x70894
#define PLANE_SEL_FETCH_POS(pipe, plane) _MMIO(_SEL_FETCH_PLANE_BASE(pipe, plane) + \
_SEL_FETCH_PLANE_POS_1_A - \
_SEL_FETCH_PLANE_BASE_1_A)
#define _SEL_FETCH_PLANE_SIZE_1_A 0x70898
#define PLANE_SEL_FETCH_SIZE(pipe, plane) _MMIO(_SEL_FETCH_PLANE_BASE(pipe, plane) + \
_SEL_FETCH_PLANE_SIZE_1_A - \
_SEL_FETCH_PLANE_BASE_1_A)
#define _SEL_FETCH_PLANE_OFFSET_1_A 0x7089C
#define PLANE_SEL_FETCH_OFFSET(pipe, plane) _MMIO(_SEL_FETCH_PLANE_BASE(pipe, plane) + \
_SEL_FETCH_PLANE_OFFSET_1_A - \
_SEL_FETCH_PLANE_BASE_1_A)
#define _PIPE_SRCSZ_ERLY_TPT_B 0x71074
#define PIPE_SRCSZ_ERLY_TPT(pipe) _MMIO_PIPE((pipe), _PIPE_SRCSZ_ERLY_TPT_A, _PIPE_SRCSZ_ERLY_TPT_B)
#define _ALPM_CTL_A 0x60950
#define ALPM_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _ALPM_CTL_A)
#define ALPM_CTL(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _ALPM_CTL_A)
#define ALPM_CTL_ALPM_ENABLE REG_BIT(31)
#define ALPM_CTL_ALPM_AUX_LESS_ENABLE REG_BIT(30)
#define ALPM_CTL_LOBF_ENABLE REG_BIT(29)
@@ -321,7 +280,7 @@
#define ALPM_CTL_AUX_LESS_WAKE_TIME(val) REG_FIELD_PREP(ALPM_CTL_AUX_LESS_WAKE_TIME_MASK, val)
#define _ALPM_CTL2_A 0x60954
#define ALPM_CTL2(tran) _MMIO_TRANS2(dev_priv, tran, _ALPM_CTL2_A)
#define ALPM_CTL2(dev_priv, tran) _MMIO_TRANS2(dev_priv, tran, _ALPM_CTL2_A)
#define ALPM_CTL2_SWITCH_TO_ACTIVE_LATENCY_MASK REG_GENMASK(28, 24)
#define ALPM_CTL2_SWITCH_TO_ACTIVE_LATENCY(val) REG_FIELD_PREP(ALPM_CTL2_SWITCH_TO_ACTIVE_LATENCY_MASK, val)
#define ALPM_CTL2_AUX_LESS_WAKE_TIME_EXTENSION_MASK REG_GENMASK(19, 16)
@@ -335,7 +294,8 @@
#define ALPM_CTL2_NUMBER_AUX_LESS_ML_PHY_SLEEP_SEQUENCES(val) REG_FIELD_PREP(ALPM_CTL2_NUMBER_AUX_LESS_ML_PHY_SLEEP_SEQUENCES_MASK, val)
#define _PORT_ALPM_CTL_A 0x16fa2c
#define PORT_ALPM_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _PORT_ALPM_CTL_A)
#define _PORT_ALPM_CTL_B 0x16fc2c
#define PORT_ALPM_CTL(dev_priv, port) _MMIO_PORT(port, _PORT_ALPM_CTL_A, _PORT_ALPM_CTL_B)
#define PORT_ALPM_CTL_ALPM_AUX_LESS_ENABLE REG_BIT(31)
#define PORT_ALPM_CTL_MAX_PHY_SWING_SETUP_MASK REG_GENMASK(23, 20)
#define PORT_ALPM_CTL_MAX_PHY_SWING_SETUP(val) REG_FIELD_PREP(PORT_ALPM_CTL_MAX_PHY_SWING_SETUP_MASK, val)
@@ -345,7 +305,8 @@
#define PORT_ALPM_CTL_SILENCE_PERIOD(val) REG_FIELD_PREP(PORT_ALPM_CTL_SILENCE_PERIOD_MASK, val)
#define _PORT_ALPM_LFPS_CTL_A 0x16fa30
#define PORT_ALPM_LFPS_CTL(tran) _MMIO_TRANS2(dev_priv, tran, _PORT_ALPM_LFPS_CTL_A)
#define _PORT_ALPM_LFPS_CTL_B 0x16fc30
#define PORT_ALPM_LFPS_CTL(dev_priv, port) _MMIO_PORT(port, _PORT_ALPM_LFPS_CTL_A, _PORT_ALPM_LFPS_CTL_B)
#define PORT_ALPM_LFPS_CTL_LFPS_START_POLARITY REG_BIT(31)
#define PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MASK REG_GENMASK(27, 24)
#define PORT_ALPM_LFPS_CTL_LFPS_CYCLE_COUNT_MIN 7
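The reworked PORT_ALPM_CTL and PORT_ALPM_LFPS_CTL above switch from transcoder-based to port-based addressing with explicit A and B instances (0x16fa2c/0x16fc2c and 0x16fa30/0x16fc30). Assuming the usual two-anchor pattern where further ports continue the A-to-B stride, the address computation looks like the sketch below; only the visible A/B constants are taken from the patch.

/* Assumption: ports beyond B extend the stride linearly, as _MMIO_PORT
 * style macros commonly do. */
#include <stdint.h>
#include <stdio.h>

static uint32_t port_reg(unsigned int port, uint32_t a, uint32_t b)
{
	return a + port * (b - a);
}

int main(void)
{
	/* PORT_ALPM_CTL: port 0 -> 0x16fa2c, port 1 -> 0x16fc2c (stride 0x200) */
	printf("%#x %#x\n", (unsigned)port_reg(0, 0x16fa2c, 0x16fc2c),
	       (unsigned)port_reg(1, 0x16fa2c, 0x16fc2c));
	return 0;
}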


@@ -39,7 +39,6 @@
#include <drm/drm_rect.h>
#include "i915_drv.h"
#include "i915_reg.h"
#include "i9xx_plane.h"
#include "intel_atomic_plane.h"
#include "intel_de.h"

Some files were not shown because too many files have changed in this diff.